Implementation of Robots Integration in Scaled Laboratory Environment for Factory Automation

Robotic systems for research and development of factory automation are complex and unavailable for broad deployment in robotic laboratory settings. The usual robotic factory automation setup consists of a series of sensors, robotic arms and mobile robots integrated and orchestrated by a central information system, and cloud-based integration has been gaining traction in recent years. To build such a system in a laboratory environment, several practical challenges have to be resolved before the system can become operational. In this paper, we present the development of one such system, composed of (i) a cloud-based system built on top of an open platform for innovation in logistics, (ii) a prototyped mobile robot with a forklift to manipulate pallets on a "factory" floor, and (iii) an ABB IRB 140 industrial robot with a customized gripper and various sensors. The mobile robot is designed as an autonomous four-Mecanum-wheel system with an on-board LiDAR and an RGB-D sensor for simultaneous localization and mapping. The paper presents a use case of the overall system and highlights the advantages of a laboratory setting with real robots for factory automation research. Moreover, the proposed solution could be scaled and replicated in real factory automation applications.

Introduction

The automation of warehouses and operations on factory floors is rapidly expanding. This process is enabled by the combination of industrial robots, mobile robotic systems, sensor networks and a central server system that manages and coordinates the work of all machines in the warehouse or across the factory floor.
There are many open problems in this domain, such as localization of mobile systems [1], accurate object manipulation for various tasks [2], coordination of the entire system [3], management of alarms and unwanted events, etc. In many cases, it is also not possible to fully automate the process, and interaction with humans arises as another aspect for research. Our work contributes to research in the area of factory and warehouse automation, as well as in other segments important for the successful integration of mobile and industrial robots together with cloud-based solutions for the integration and coordination of tasks. Taking advantage of our experience, laboratories can build a scaled version of the factory floor and deal with real-world problems not only in simulation or in a real factory, but also in the laboratory environment, where many realistic situations can be prepared and validated before moving the experiment to the site (Figure 1). The paper is organized as follows. In Section 2, we give an overview of mobile and industrial robots used for factory floor automation, as well as challenges and solutions for their effective coordination with cloud-based services. The robots are designed and programmed for the automation of pallet transportation and box palletizing, one of the most common use cases in many factories. Section 3 gives a detailed description of the hardware design and software architecture of our solution. The robot operating system (ROS) is the backbone of the robotic software development. Additionally, the description and evaluation of the simultaneous localization and mapping (SLAM) algorithms used are presented, as localization of the mobile robot is of great importance for successful integration. Section 4 explains the software architecture of the mobile and industrial robots relevant for integration with the cloud server. The focus is on interconnection and communication aspects.
At the end, in Section 5, we give the final discussion and conclusion, establishing directions for future work.

State of the Art

This section reports the techniques commonly used in the development and usage of AGVs and industrial robots. The key point is increasing their autonomy and flexibility, but there is a noticeable shortcoming in the availability of a verification and validation infrastructure for different approaches in robot-assisted manufacturing. In [4], the authors compared different strategies for robotic warehousing using multi-robot systems. That work focuses on the roles of AGVs only, providing a comparison of two collection methods and analyzing completion time and energy consumption. Similar to our work, the authors in [5] presented and demonstrated a solution for the automation of the order-picking task at an industrial shop floor. As in our case, the presented system includes an AGV and a collaborative robot, but there are two important points of difference: commercial robots are used and, instead of a cloud infrastructure, an Arduino board with a TCP/IP socket server is used to dispatch specific commands to the AGV and the cobot manipulator. This work, instead, focuses on the development of a laboratory-scale AGV, the development of control software for the AGV and the industrial robot, and their integration through the OPIL cloud framework.

Mobile Robots for Factory Automation

Industrial technology advancement and the diversity of manufacturing strategies have raised the need for flexible and robust systems that fulfill different tasks with minimal or no changes. Mobile robots have become a main tool for solving logistics problems and increasing productivity. The robots most commonly used in industrial manufacturing facilities or warehouses are automated guided vehicles, or AGVs. They are portable robots that use marked lines on the floor, radio waves, cameras for vision, or various types of sensors for navigation.
They are driverless, battery-powered vehicles suited for transferring products or equipment in an industrial environment. An AGV can have objects hooked up to it, such as a trolley or a trailer, which can attach itself automatically [6]. Forklifts and simple carrying beds can also be installed on an AGV in order to lift and place a product, or to simply carry and store many parts. In order to move freely and without many constraints, a set of omni-directional wheels can be installed on the AGV [7]. Omni-directional wheels can be designed as conventional or special wheels. Conventional omni-directional wheels were designed by the Swedish engineer Bengt Ilon, and are also called Ilon wheels or Mecanum wheels. Their rollers are mounted at a 45° angle to the wheel axis, allowing the platform to move and rotate in almost any given direction; Mecanum wheels thus give a mobile robot 3 degrees of freedom of mobility. Special omni-directional wheels have modified roller designs [8] or roller angle positions to ensure greater stability [9]. Special omni-directional wheels can also have more freedom in rotation and repositioning, so they can be utilized in more challenging tasks. AGVs with omni-directional wheels are widespread and commonly used in industry.

The navigation of autonomous mobile robots requires accurate localization. In order to provide a robot with its precise location, a combination of various sensing methods and SLAM algorithms is used. For successful navigation through an environment, this approach needs to satisfy the following:
• Online computation and decision making, required to avoid unsafe situations and to allow easy incorporation of the algorithms with other processes on the system without overloading the CPU.
• Ability to adapt to dynamic environments, illumination changes or repetitive environments.
• Low-drift odometry, which provides information about the robot position when it cannot localize itself on the map. Until the SLAM algorithm localizes again, odometry drift should be minimized so that position information stays accurate and navigation is still possible.

There is a wide variety of SLAM approaches integrated in ROS, such as reliable solutions for planar environments using Rao-Blackwellized particle filters [10]; a sufficient number of estimated particles is required to converge to a solution that represents the environment well. Hector SLAM [11] can quickly create 2D occupancy grid maps from a 2D LiDAR with low computational resources. One of the drawbacks of Hector SLAM is that it does not implement loop closing. (Loop closure algorithms determine whether or not a robot has returned to a previously visited area [12]. Loop closure reduces the uncertainty in the mapping and improves the precision of the localization of the robot.) This feature was left out to maintain low computational requirements. On the other hand, the Hector SLAM approach does not require an external odometry source, which is an advantage in environments with high geometric constraints. In addition to 2D LiDAR-based SLAM approaches, visual SLAM can also be used in mobile robot navigation problems. One of the available visual SLAM approaches is ORB-SLAM2 [13], a feature-based visual SLAM that can be used with a stereo camera system or an RGB-D camera. Its loop-closure detection and relocalization are based on DBoW2 [14]. As the map grows, the time required for loop closure detection and graph optimization increases. This can lead to a significant delay in the loop closure correction, making this approach not completely suitable for use on a real robot. RTAB-Map [15] is a graph-based SLAM approach built around an incremental appearance-based loop closure detector. RTAB-Map is capable of using an RGB-D camera, stereo camera and LiDAR to perform mapping and localization.
The detection of loop closures is again done using the bag-of-words approach to determine the likelihood of a new image coming from a previous location or a new location. After loop closure detection, a graph optimizer corrects the errors in the map. The memory management implemented in RTAB-Map enables its use in real time over larger environments. RTAB-Map can be used standalone with a handheld camera, a stereo camera, or a 3D LiDAR for 6DoF mapping. The integration of RTAB-Map in ROS enables easier implementation on a robot equipped with a camera and/or LiDAR. A map generated through SLAM is used to autonomously plan the path and navigate the mobile robot in the environment. The path planner generates the trajectory for a robot to follow in order to reach the desired position in either known or unknown space. Planners are divided into two types: global and local planners. Global planners generate paths based on a static map, from the start to the destination point. Global path calculations are, in most cases, slow, making this kind of planner unsuitable for dynamic environments. This problem is solved using a local planner, which takes into account the robot motion model together with sensor data to obtain the best possible velocity commands that accomplish the global plan. One very popular local planner is the dynamic window approach (DWA) [16]. The DWA algorithm aims to maximize an objective function that takes into account the distance to the target, obstacle proximity and the robot's velocity. The result of the algorithm is a velocity pair (v, w), where v is the desired linear velocity of the robot and w the desired angular velocity. The DWA algorithm consists of several steps. First, the algorithm discretely samples velocities in the velocity space. The second step simulates the behavior of the robot for each sampled velocity for a short period forward in time.
After simulating the velocities, the algorithm evaluates the cost functions to determine which sampled pair gives the best trajectory score. The robot has a predefined set of admissible velocities, and the velocities for which the objective function is maximal are selected. Elastic band (EBand) [17] is a real-time algorithm for collision-free motion control. Two forces are used to obtain a collision-free path: a 'contraction force', which removes slack in the path, and a 'repulsion force', which pushes the robot away from obstacles. Together they create an elastic band, enabling the robot to smoothly follow the path. Any encountered obstacle further deforms the generated path to avoid it while keeping a smooth trajectory. In [18], the authors presented the timed elastic band (TEB) as an extension of EBand which, during calculation, considers a short distance ahead along the global path, creating a local path consisting of multiple waypoints. Each waypoint represents a temporal goal for the robot to achieve; following the waypoints, the robot arrives at the desired location. During trajectory optimization, TEB takes into consideration the robot kinematics, dynamics, acceleration limits and geometric shape. By optimizing the global planner trajectory, TEB fulfills time-optimal objectives, decreasing the trajectory execution time. For the path planner in this work, we used the timed elastic band algorithm.

Industrial Robots and Tools

Next to mobile robots, industrial robots are inevitable for accomplishing flexible factory automation. Combined together, industrial and mobile robots are able to perform various tasks, including warehouse management, pick and place, transportation, machine tending, etc. For an industrial robot to successfully handle or manipulate an object, a gripper serves as the medium between the manipulator and the manipulated object.
If a gripper is to perform the task of grasping an object, it must be able to hold the object securely and safely. A suitable gripper is designed for the given object in the task. Industrial mechanical grippers should be made as robust, rigid devices with very few moving parts, easy to attach and detach. Depending on the task, a gripper could also be required to handle objects of various sizes, shapes, materials and masses [19]. Pneumatic grippers can likewise be made as rigid mechanical grippers, or as soft, muscle-like mechanisms [20]. For a soft pneumatic gripper, grasping is mainly conducted by a suction-like mechanism, which holds firmly to the attached item and conforms to the shape of the grasped object [21]. In addition, soft pneumatic grippers can be designed to resemble biologically inspired methods of grasping [22]. Pneumatic grippers, whether soft or rigid in their design, can adapt to the various shapes of the grasped item. Because of these abilities, pneumatic grippers are convenient for use in the food industry.

Integration of Cloud-Enabled Robot Systems

The idea of separating robot hardware from computational resources and high-level reasoning is not new. In [23], the author introduced remote-brained robots as a way to accomplish effective robotic architectures, multi-robot coordination, reconfigurable and distributed modular systems and so on. This approach enabled intelligent behaviors of multi-limbed robots and opened new fields of research such as networked robots and cloud-enabled robots [24,25]. In the first case, a stand-alone robot, environment sensors, and humans communicate and cooperate through a network. The second case brings a distributed structure of information and decision making, where cloud computing is used for various calculations to overcome limited onboard storage and computing resources.
In a general case, the premises for the efficient realization of a distributed multi-robot system are the same as for single-robot systems [26]:
• Environment perception, as the vital ability of a system to build knowledge about its surroundings. Collecting information about the environment structure and the location of obstacles gives robots the ability to predict their future states. The environment perception task usually involves infrared (IR) sensors, light detection and ranging (LiDAR) sensors, cameras, etc., and often fuses information from these devices.
• Localization, as the capability of robots to estimate their position and orientation with respect to the environment.
• Navigation, which includes the previous two tasks and combines them with an effective planning system. Usually, this task is solved by running the processes of map building and localization simultaneously, i.e., simultaneous localization and mapping (SLAM) [27].

In order to improve a robot's sensing, computation and memory resources, the utilization of cloud architecture arises as a promising approach in the development of robotic systems [28,29]. Based on such integration, cloud-enabled robotics has several advantages, such as the following:
• With increased computational power and storage space, computation-intensive tasks (computer vision, speech recognition, object recognition, etc.) can be performed in real time using the cloud infrastructure.
• The infrastructure can hold large datasets, such as global maps, so robot navigation can be accomplished with improved safety and efficiency.

Moreover, cloud-enabled robotics offers new control strategies for cooperative robots. By sharing information in the cloud, cooperative robots take advantage of the unified processing of information from multiple sources. As a result, the design and development of novel mobile robot systems benefit from the fusion of a global route map and local path planning, sensor fusion, time synchronization, etc.
In [30], the authors emphasized the benefits of sharing and reusing data independent of specific robot hardware. Leveraging existing standards in an open architecture framework and network protocols, the RoboEarth platform allows any robot with a network connection to generate, share, and reuse data. As a result, combining the cloud service and local knowledge, the platform aims at increasing the speed of the robot learning process and enabling adaptation to various tasks. The implementation is based on a three-layered architecture:
• A cloud (server) layer that holds the RoboEarth database, containing a global world model with information about objects, environments, actions, etc.
• A hardware-independent middle layer that serves as a bridge between global knowledge and robot-specific skills. This layer contains generic components as part of the local robot's software.
• A layer that represents the robot's specific skills.

Automation in factory production lines, warehousing and logistic operations is mostly based on AGV utilization with centralized, cloud-based management, usually referred to as a warehouse management system (WMS). The role of this system varies according to the type of action and robot capabilities. One attractive field for cloud-based applications is SLAM, which is too computationally expensive to run entirely on the robotic platform at every point of an unknown environment. In the case of multiple robots, SLAM can be shared between agents for faster and more accurate mapping. In [31], the authors reported a framework for a grid-based FastSLAM implementation. Within this approach, a Hadoop cluster (called the DAvinCi server) is engaged for map estimation, while a distributed ROS architecture provides sensory data and communication among the robot agents and the server. Similarly to our solution, this cloud service acts as the master node, which maintains the list of publishers (ROS nodes on the robots).
The visual SLAM system C2TAM [32] is in line with this approach. The cloud service is realized as a distributed framework serving expensive map optimization and storage. As a consequence, the robots' on-board computers have increased autonomy, since their role is reduced to that of lightweight camera-tracking clients.

Mobile Robot and Industrial Robot Design

As previously mentioned, our solution is a laboratory setup for the automation of the factory floor. The robots are designed to transport EURO pallets and to pick and place objects on a pallet. The mobile robot is designed as a forklift-type robot to carry EURO pallets with the boxes (crates) used in the meat industry. A common task in this industry is the sorting of crates based on clients' orders. The task of the industrial robot is to pick crates delivered by the mobile robot and to place them on EURO pallets for different orders. First, we will describe the mobile robot hardware and software for EURO pallet transportation, followed by a description of the software for controlling the industrial robot and the design of the tool for crate manipulation.

Hardware of Mobile Robot

The mobile robot platform designed in this work is based on a four-wheeled Mecanum drive (Figure 2). The advantage of using Mecanum wheels is the ability to move in any direction without changing orientation. The wheels are actuated by four 100 W brushed DC (BDC) motors, powered by Li-ion batteries that enable the robot to operate autonomously for around 16 h. The motors are controlled by motor controllers connected to the main on-board PC. One motor controller can control a pair of brushed DC motors using USB serial, TTL serial, RC or analog inputs. Integrated dual quadrature decoders enable easy implementation of a closed-loop speed control system. For this work, a USB serial interface is used to connect the motor controllers to the main computer. The forklift system consists of a linear actuator with a BDC motor and a motor controller.
The forklift system is controlled with a set of GPIO signals. To detect obstacles and navigate through the environment, the mobile robot is equipped with an RPLiDAR-A2 laser ranging scanner and an Intel RealSense D415 depth camera. The LiDAR and camera sensors are used for both the SLAM and path-planning algorithms, whose implementation is described later. The interface between the sensors and the main PC is established over USB 3.0 to ensure fast data transfer without consuming the full bandwidth, which prevents data loss. The RealSense depth camera has a standard field of view and high resolution [33], which is well suited for applications that require accuracy, such as 3D scanning and mapping. The standard field of view provides higher quality depth per degree, and a relatively long range, up to 10 m, provides good perception of the surroundings. The main PC integrates the LiDAR sensor, depth camera and motor controllers using the ROS framework, which will be explained later. To provide reasonable program execution time, enabling real-time mobile robot control and obstacle avoidance, the main PC has 16 GB of DDR4 RAM, a Ryzen 7 2700U CPU and an M.2 NVMe SSD.

Software of Mobile Robot

The mobile robot software consists of several interconnected ROS packages that provide navigation in both known and unknown environments. The main packages used for mapping and navigation are as follows:
• The RTAB-SLAM ROS package, which provides the mobile robot with both a map of the environment and localization.
• The TEB ROS package, which provides a path for the robot to follow, based on the map and odometry from the RTAB-SLAM package.
• The RoboClaw ROS package, which integrates the motor drivers with the rest of the ROS system.

Additionally, ROS drivers for the LiDAR sensor and the RealSense camera provide communication between them and the rest of the system. Figure 3 shows the connections between the ROS nodes used on the mobile robot.
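To give a concrete feel for the data flow behind these packages, consider how a single LiDAR scan contributes cells to an occupancy map. The following Python sketch is a didactic simplification only, not the RTAB-Map implementation; all names and parameters are hypothetical:

```python
import math

def update_grid(grid, pose, ranges, angle_min, angle_inc, resolution, max_range):
    """Mark laser-scan endpoints as occupied cells in a 2D occupancy grid.

    grid: list of rows of 0/1 cells; pose: (x, y, theta) in metres/radians;
    ranges: beam distances in metres; resolution: cell size in metres.
    A toy illustration only -- a real SLAM front end also ray-traces the
    free cells along each beam and estimates the pose itself.
    """
    x, y, theta = pose
    for i, r in enumerate(ranges):
        if r >= max_range:                  # no return for this beam
            continue
        a = theta + angle_min + i * angle_inc
        gx = int((x + r * math.cos(a)) / resolution)
        gy = int((y + r * math.sin(a)) / resolution)
        if 0 <= gy < len(grid) and 0 <= gx < len(grid[0]):
            grid[gy][gx] = 1                # beam endpoint hit an obstacle
    return grid
```

Jointly estimating the pose while accumulating such cells, and closing loops when revisiting known areas, is exactly the part that RTAB-Map supplies in our pipeline.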
Mobile robot motion and path planning are provided by the move_base ROS package. The global_planner node, upon receiving a goal described as the desired position and orientation of the robot, outputs a path based on the map created by the RTAB-SLAM node, considering only static obstacles. Obstacle avoidance is provided by the TEB_planner node, using the LaserScan topic acquired from the RPLiDAR node (ROS topics are named buses over which nodes exchange messages [34]). The final output of move_base gives the RoboClaw node the desired velocity of the robot base, which is then transformed into the corresponding motor speeds. The global_costmap and local_costmap topics, based on the configuration of the corresponding nodes, provide the distances from obstacles at which the robot is no longer considered safe, and the path planners adjust trajectories based on those cost maps. To control the mobile robot platform, the main PC communicates with the motor controllers using a dedicated motordriver_ros package, implemented in the Python programming language to enable easier and faster modifications. The ROS driver package for the motor driver is separated into two parts. The first part communicates with the motor controllers over USB and provides various measurements, such as encoder status, motor current, etc.; it also provides a way to set the motor speed to a desired value. The second part of motordriver_ros communicates with the rest of the system, providing information about the robot's speed and position, and waits for a new speed command to be delivered by the path planner. The four-wheeled Mecanum drive can be controlled in two ways. The first is as a standard four-wheeled drive, where the control and path planning algorithms assume that the mobile platform can only move forward/backward and rotate in place: in other words, with more movement constraints than the Mecanum drive actually has.
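To make the Mecanum drive's extra freedom concrete: a full body twist (vx, vy, ω) can be mapped to four wheel speeds with the common Mecanum inverse kinematics. The sketch below uses hypothetical geometry parameters, not the actual dimensions of the robot described here:

```python
def mecanum_wheel_speeds(vx, vy, wz, wheel_radius=0.05, lx=0.2, ly=0.15):
    """Convert a commanded body twist into four wheel angular velocities.

    vx, vy: linear velocities [m/s]; wz: angular velocity [rad/s];
    lx, ly: half-distances between the wheel axes [m] (placeholders).
    Returns (front_left, front_right, rear_left, rear_right) in rad/s.
    """
    l = lx + ly
    fl = (vx - vy - l * wz) / wheel_radius
    fr = (vx + vy + l * wz) / wheel_radius
    rl = (vx + vy - l * wz) / wheel_radius
    rr = (vx - vy + l * wz) / wheel_radius
    return fl, fr, rl, rr
```

Pure forward motion drives all four wheels identically, while a lateral command (vy alone) produces the diagonal wheel pattern that lets the platform translate sideways without rotating.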
In this constrained mode, the platform is controlled using the standard approach for a differential-drive mobile robot. To provide more flexibility in the mobile robot motion, the speed commands provided by the path planner are converted into four separate motor speeds, one per motor, using a motion model with constraints for mobile platforms with Mecanum wheels, as described in [35]. Since Mecanum wheels can cause drift that is not sensed by the wheel encoders, we had to rely on localization from SLAM. Based on our experience, we decided to use the RTAB-SLAM algorithm [15], as it performs well in indoor environments, improving accuracy through loop closure. In order to evaluate the localization from the RTAB-SLAM algorithm and demonstrate the improvement in localization after loop closure, we prepared a setup to measure the coordinates of the mobile robot with an external precise measurement system. We used a motion capture system from the company Vicon, which relies on reflective markers and eight fast infrared cameras. The motion capture system was set up to capture the location of markers placed on the mobile robot to track its location in the environment (cf. Figure 4). The mobile robot was programmed to move through the environment in such a way as to close the loop after some time. During the motion, the location of the robot was obtained from both RTAB-SLAM and the Vicon motion capture system. Figure 5 shows the obtained localization data. We can see that the robot accumulated error while moving through the environment until it closed the loop and significantly corrected its localization estimate. The error after the loop closure was under 1 cm, which was the targeted accuracy for our experiment.

Industrial Robot and Environment Setup

The industrial robot system consists of an ABB IRB 140 robotic arm [36], a dedicated end effector and the corresponding IRC5 controller.
An external PC running a ROS application is used to instruct the robot arm movements and to integrate the robot arm with the rest of the system. In order for the robot to know where each crate should be placed, an RFID sensor is used to identify each industrial crate. The setup of the industrial robotic arm and its environment was initially designed and simulated in the RobotStudio programming environment [37]. The software developed in simulation was afterwards used to control the real robot. Within the experimental setup, the pallets with crates that the industrial robotic arm needs to sort are placed around the robot (Figure 6). There are six pallets placed around the robot to perform the palletizing task. Four of the six pallets are "output" pallets that should be transferred out after sorting is complete. The "input" pallet with unsorted industrial crates, brought by the mobile robot, is located in front of the robot arm and can contain up to 24 crates. In addition to the "input" pallet, a transitory pallet is set up to temporarily accommodate crates that do not currently have a defined location, as well as crates that have not been successfully identified. After the spatial arrangement of the station elements was established and the algorithm was confirmed in the simulation environment, the system was created in the real world. The physical setup of the system in the laboratory is shown in Figure 7. For the task of manipulating industrial crates, a specific gripper was designed and prototyped: a two-finger electro-pneumatic gripper. The end effector consists of two claw-like fingers, designed to reach into the openings on the sides of the industrial crates. To compensate for crates that are not accurately positioned on the pallet, there is an elastic link between the fingers and the base of the gripper.
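Place positions for crates on a pallet can be generated from a simple grid-and-layer layout. The sketch below is purely illustrative: the crate dimensions, grid shape and coordinate frame are hypothetical assumptions, and the real station derives its positions from the RobotStudio setup:

```python
def crate_slot_position(index, origin=(0.0, 0.0, 0.0),
                        crate_size=(0.6, 0.4, 0.3), cols=2, rows=2):
    """Return the (x, y, z) place position of the index-th crate on a pallet.

    Crates fill a cols x rows grid per layer, then stack upwards in layers.
    All dimensions here are hypothetical placeholders.
    """
    per_layer = cols * rows
    layer, slot = divmod(index, per_layer)   # which layer, which slot in it
    row, col = divmod(slot, cols)            # grid cell within the layer
    ox, oy, oz = origin
    return (ox + col * crate_size[0],
            oy + row * crate_size[1],
            oz + layer * crate_size[2])
```

With a 2 x 2 grid and six layers, such a scheme covers the 24 crates that an input pallet can hold; in practice, the slot index would be chosen from the RFID identification of each crate.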
Software of Industrial Robot

The increasing use of ROS in industrial robotics, and its application to industrial robotic arms, has contributed to the development of the ROS-Industrial (ROS-I) platform [38,39]. In addition to the benefits of the standard ROS platform mentioned previously, ROS-I provides supplementary tools and capabilities specific to industrial robotic arms. One of the biggest advantages of using the ROS-I platform is programming the robotic arm in conventional programming languages, such as Python and C++, instead of the native language of the controller, which is vendor dependent. The ROS packages used in our work to communicate with and control the industrial robot are as follows:
• abb_driver, which enables the communication between a personal computer and the ABB IRC5 industrial robot controller for robot control. The exchanged messages contain information about the condition of the robot, such as its joint positions.
• paletizer, a package developed for sorting/palletizing industrial crates, as well as for communicating with the rest of the system, e.g., OPIL, from which it receives the commands for palletization and to which it reports the state of the task.
• abb_irb140_unal, a package that provides information about the physical representation of the robot, such as URDF and SRDF records.

For controlling the ABB industrial robotic arm, abb_driver is used [40] (the ROS driver for ABB industrial robots), as the driver is made for this specific hardware. The part of the driver located on the industrial controller is written in the programming language RAPID (the native language of the controller), while the part located on the personal computer is written in C++. The block diagram of communication between the two applications/two devices is shown in Figure 8.
Among the additional packages that ROS-I provides is an extended set of robotic arm models that the ROS driver can control (the abb_experimental package [41]). The experimental package expands the capabilities of the ROS driver, adding models and parameters of further robotic arms. The most important tool provided by the ROS-I platform is the MoveIt package [42]. MoveIt makes it possible to create a functional robot description from a CAD model, i.e., to generate the required universal robot description format (URDF) and semantic robot description format (SRDF) files that define the parameters of the robotic arm. Additionally, the RViz visualization package provides the visualization of 3D robot models, as well as their movement and coordinate systems.

System Integration and Experiment

The integration of the system is achieved through the cloud infrastructure with the open platform for innovations in logistics (OPIL). OPIL is an open IoT platform that enables digital virtualization, automated scenario setup and a communication interface for different factory floor resources. Mobile robots, AGVs, workers, sensors and factory IT infrastructure can be connected through this platform in order to develop and test factory logistics automation. Figure 9 shows the high-level organization of the integration of our robotic systems with the OPIL platform. The connection between the robots and the cloud is established with a messaging system based on the FIWARE open architecture. The deployment scheme for OPIL-based systems contains the following modules:
• The OPIL server, a cloud infrastructure responsible for hosting modules such as the task planner, context management, HMI (human-machine interface) and SP (sensing and perception).
• Different nodes in the field, including mobile robots, AGVs, forklifts and sensors.
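As an illustration of what FIWARE-style messaging can look like, the sketch below builds an NGSI v2 context entity describing a robot pose. The entity id, type and attribute names are hypothetical assumptions; the actual OPIL modules define their own entity models:

```python
import json

def robot_pose_entity(robot_id, x, y, theta):
    """Build an NGSI v2 context entity for a robot pose.

    The id/type scheme and the 'pose' attribute are illustrative
    placeholders, not the OPIL message schema.
    """
    return {
        "id": f"urn:ngsi-ld:Robot:{robot_id}",
        "type": "Robot",
        "pose": {
            "type": "StructuredValue",
            "value": {"x": x, "y": y, "theta": theta},
        },
    }

# Publishing the entity to a FIWARE Orion Context Broker is then a plain
# HTTP POST of the JSON body to its /v2/entities endpoint, e.g. via
# urllib.request with a Content-Type: application/json header.
```

Because the payload is plain JSON over HTTP, any field node with a network connection can publish its state to the broker, which is what makes the FIWARE layer a convenient integration point between the robots and the OPIL server.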
Generic components of the OPIL server are distributed in a multi-layer architecture, where the components of the bottom layer (IoT nodes layer) enable interaction with the physical world. This layer may include the following components:
• robotic agent nodes (RAN), i.e., nodes responsible for dealing with the physical actors. In the OPIL world, these can be manipulation agents (intended for loading and unloading goods and products) or moving agents (intended for moving goods or products from one place to another);
• human agent nodes (HAN), i.e., nodes in charge of interfacing with humans;
• sensor agent nodes (SAN), i.e., nodes that allow data transfer from various sensing sources to the cloud.

As an illustration of RAN node functionality, let us consider the robot's motion in dynamic environments. Robot goals can be acquired from servers on the local network or from the cloud. Setting the robot's target point starts with reading the currently desired position of the robot inside the facility and publishing that information on an appropriate ROS topic. Upon reaching the destination, a task command is issued to the robot, based on readings from the server. The message consists of the position and a task command, such as lifting the robot tool up or down. Additionally, the message contains the maximum allowed velocity of the robot, the desired acceleration rate, etc. Algorithm 1 describes the process executed inside the RAN module. Similarly, it is necessary to provide the main system with current sensor readings of the robot's environment. The ROS node tasked with this job is called the sensor agent node (SAN). SAN feeds sensor readings, such as LaserScan messages received from a LiDAR sensor, to the server. These readings provide the system with the robot's current environmental status, based on which the next goal is generated. The new goal is written into a dedicated part of the message body generated on the server.

Algorithm 1 Goal and task reading.
Initialize ROS node
Open connection to a server
loop
    read current assignment from the server
    publish goal to ROS topic
    if task command issued then
        call ROS service for task execution
    end if
    send current robot pose to the server (odometry readings)
end loop

As previously mentioned, abb_driver, written in the RAPID and C++ programming languages, was used to control the robotic arm via the ROS-I platform, while the sorting algorithm was written in Python. The palletizing application consists of two modules: the sorting/palletizing module (cf. Algorithm 2) and the module for communication with the OPIL platform (cf. Algorithm 3). The sorting algorithm relocates the industrial crates from the input pallet to the appropriate output pallet specified by the logistics system. The communication algorithm reports the current state of the robot arm to the logistics system and obtains data from it. The whole process can be summarized as follows: after receiving information from OPIL that the input pallet is present and how many crates are on it, the robot is positioned above the pallet. The robot picks an industrial crate and transfers it to the RFID sensor for identification. Depending on the identification, the robot arm places the crate at a determined position on the corresponding output pallet. When the input pallet is unloaded, the robotic arm returns to the starting position to wait for the next pallet. The communication algorithm (Algorithm 3) informs the OPIL system of the state in which the robotic arm is currently found and obtains the data for the palletizing task. The states in which the robot can be found are predefined, depending on the task it is currently performing. With the help of this module, the robotic arm receives information about its surroundings through the logistics system.

Conclusions

In this work, we developed a real-world laboratory setup for research in factory floor automation.
The setup consists of two robots: a forklift-type mobile robot with Mecanum wheels and an ABB IRB 140 industrial robot arm. The integration of the robots relies on the OPIL system, an open industrial IoT platform that enables the complete digital virtualization of intra-factory logistics automation. The software and hardware of the mobile robot are described in detail. An important aspect of the mobile robot is the SLAM algorithm used for navigation in the environment. For this purpose, we used RTAB-SLAM and presented the results of the evaluation of the algorithm on our platform. The precision of the positioning in the laboratory environment after loop closure was under 1 cm, which was satisfactory for our experiment. During the development of the laboratory setup (hardware and software), we applied a modular architecture that should meet the requirements of real factory floor automation. The four-wheeled Mecanum drive provides enough freedom for flexible path planning algorithms. The software is developed on top of the ROS architecture, divided into different functional blocks, so-called nodes. ROS and ROS-I are open-source frameworks tailored for robotics development, making this approach independent of a particular robotic platform. This enables easy integration with the OPIL cloud platform through dedicated modules. The use-case experiment showed successful integration and robotic service orchestration with the cloud service, which manages and distributes tasks on the factory floor. It receives information from sensors and robots on the current status of the factory floor and, based on the status and requirements, issues commands to the robots. The developed system is thus proven to be useful for the development of different solutions for factory floor automation and for the validation of research in this domain.
Entanglement and replica symmetry breaking in a driven-dissipative quantum spin glass

We describe simulations of the quantum dynamics of a confocal cavity QED system that realizes an intrinsically driven-dissipative spin glass. A close connection between open quantum dynamics and replica symmetry breaking is established, in which individual quantum trajectories are the replicas. We observe that entanglement plays an important role in the emergence of replica symmetry breaking in a fully connected, frustrated spin network of up to fifteen spin-1/2 particles. Quantum trajectories of entangled spins reach steady-state spin configurations of lower energy than that of semiclassical trajectories. Cavity emission allows monitoring of the continuous stochastic evolution of spin configurations, while backaction from this projects entangled states into states of broken Ising and replica symmetry. The emergence of spin glass order manifests itself through the simultaneous absence of magnetization and the presence of nontrivial spin overlap density distributions among replicas. Moreover, these overlaps reveal incipient ultrametric order, in line with the Parisi RSB solution ansatz for the Sherrington-Kirkpatrick model. A nonthermal Parisi order parameter distribution, however, highlights the driven-dissipative nature of this quantum optical spin glass. This practicable system could serve as a testbed for exploring how quantum effects enrich the physics of spin glasses.

I. INTRODUCTION

In a spin glass, quenched disorder and the resulting frustration of spin-spin interactions generate a rugged free energy landscape with many minima. This means that in some cases, below a critical temperature, the single paramagnetic thermodynamic state fractures into a multitude of distinct possible thermodynamic states [1]. The number of such states is exponential in the system size. A consequence of this is that exact copies (replicas) of such a system may cool into distinct thermodynamic states. This is replica symmetry breaking (RSB), which Parisi invoked [2,3] to solve the Sherrington-Kirkpatrick (SK) model [4]. The SK model describes a network of spins with all-to-all couplings with random signs. The Parisi solution showed how RSB arises by studying the distribution of spin overlaps between different replicas, as captured by the Parisi order parameter and the ultrametric, clustered, tree-like structure of the distances between replicas. Since these features depend on details of the different thermodynamic states, they cannot be identified purely by looking at averaged properties [5].
Replica symmetry breaking is one example of the idea of ergodicity breaking. In an ergodic system, the dynamics of the system explores all allowed states, such that time averages are equivalent to configuration-space averages; this equivalence between time- and configuration-averages fails in states with RSB. This has important consequences when considering the relation between individual quantum trajectories of the system and the trajectory-averaged density matrix. Theoretically, studying individual trajectories corresponds to stochastic unraveling of the density matrix equation of motion [6-8]. Physically, unraveling the dynamics into trajectories corresponds to treating the system-environment interaction as a generalized measurement of the system by the environment. Each measurement projects the system into a specific state, conditional on the measurement outcome. The sequence of measurement outcomes (and associated states) is called a quantum trajectory.

We show that quantum trajectories can act as replicas to directly probe RSB. To see this link, we first discuss a simpler case, that of symmetry breaking in a standard second-order phase transition. When a perfectly isolated quantum system undergoes spontaneous symmetry breaking, it enters a macroscopic superposition, or 'cat state,' of the symmetry-broken states. This cat state is extremely fragile: any interaction between the system and the environment allows the environment to learn which state the system chose. Backaction from measuring the environment stochastically collapses the system into one of the symmetry-broken states. Thus, each run of the experiment yields a symmetry-broken state, although the ensemble of states, averaged over experimental runs, remains symmetric.
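The collapse mechanism just described can be illustrated with a toy numpy simulation (a deliberately simplified stand-in, not the paper's cavity model): an N-spin cat state written in the sigma_x eigenbasis is projectively measured on one spin, which collapses the whole register, while the average over many simulated runs remains symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3  # three spin-1/2 particles

# Work in the sigma_x eigenbasis: bit 0 <-> |+x>, bit 1 <-> |-x>.
# Cat state (|+++> + |--->)/sqrt(2) over the 2^N basis states.
psi = np.zeros(2**N, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)

def measure_first_spin(psi, rng):
    """Projectively measure sigma_x of spin 0; return (outcome, collapsed state)."""
    dim = psi.size
    # Basis states with spin 0 in |+x> occupy the first half of the register.
    p_plus = np.sum(np.abs(psi[: dim // 2]) ** 2)
    outcome = +1 if rng.random() < p_plus else -1
    collapsed = psi.copy()
    if outcome == +1:
        collapsed[dim // 2 :] = 0.0
    else:
        collapsed[: dim // 2] = 0.0
    return outcome, collapsed / np.linalg.norm(collapsed)

# One run: backaction collapses the cat into one symmetry-broken branch ...
outcome, collapsed = measure_first_spin(psi, rng)

# ... but the ensemble of runs stays symmetric: the mean outcome is ~0.
outcomes = [measure_first_spin(psi, rng)[0] for _ in range(4000)]
print(outcome, np.mean(outcomes))
```

Measuring one spin suffices to collapse all of them because the cat state is maximally correlated; this is the same reason a single photon leaking from the cavity carries which-state information about the whole spin network.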
The above simple picture also extends to the case where, rather than a small number of symmetry-broken states, one has many complex ergodicity-breaking patterns, as in a spin glass. In the case of a cavity QED system, each thermodynamic state emits a characteristic pattern of photons into the environment. On each run of the experiment, the backaction from observing this field collapses the system into a distinct thermodynamic state. This corresponds to the notion of a 'weak' symmetry that is broken in individual experimental runs but not in the ensemble-averaged density matrix [9]. Thus, because there is a one-to-one correspondence between thermodynamic states and emission patterns, the overlap distribution is accessible through the correlations between the photon measurement records on distinct runs of the experiment.

We investigate the emergence of RSB in the open quantum system dynamics of confocal cavity QED, an experimentally practicable setting [10-13]. In this system, atoms represent individual spins, while the cavity provides an all-to-all but sign-changing random interaction, dependent on the positions of the atoms. This position dependence means it is possible to achieve random but repeatable interactions by controlling the placement of the atoms. By monitoring the spatiotemporal correlations of the light leaking out of the cavity, one can reconstruct the dynamics along individual trajectories. Because monitoring provides access to these correlations, the cavity QED setting gives us a powerful way to study RSB. In the RSB phase, the dynamics along each trajectory reaches a specific nonergodic state, so the spin configuration (and hence that of the emitted light) is stable over time on that trajectory. We quantify the distribution of overlaps among the patterns from different trajectories and the resulting Parisi order parameter. Together these show the distinctive features predicted by the Parisi ansatz for the SK spin glass.
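The replica overlap behind these distributions is straightforward to compute. Below is an illustrative numpy sketch with synthetic trajectory outcomes (not simulation data): each 'replica' is the steady-state sigma_x configuration from one trajectory, and the overlap between replicas a and b is q_ab = (1/N) sum_i s_i^a s_i^b.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 15           # spins per replica
n_replicas = 40  # independent "trajectories"

# Synthetic stand-in for RSB: each trajectory settles into one of two unrelated
# configurations (or their Z2-flipped partners), mimicking distinct
# thermodynamic states.
state_a = rng.choice([-1, 1], size=N)
state_b = rng.choice([-1, 1], size=N)
replicas = np.array([
    rng.choice([1, -1]) * (state_a if rng.random() < 0.5 else state_b)
    for _ in range(n_replicas)
])

# Overlap matrix q_ab = (1/N) sum_i s_i^a s_i^b; the histogram of its
# off-diagonal entries is the finite-sample analog of the Parisi P(q).
q = replicas @ replicas.T / N
off_diag = q[np.triu_indices(n_replicas, k=1)]
print(sorted(set(np.round(off_diag, 3))))
```

With replica symmetry, P(q) would concentrate on a single |q| value; the two-cluster toy ensemble instead produces several distinct overlap values, which is the qualitative signature the paper looks for in the trajectory data.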
We consider a realization where spins correspond to single atoms, giving spin-1/2 (and thus quantum) degrees of freedom, allowing entanglement to play a role in the emergent spin organization. We show that the quantum dynamics are distinct from the semiclassical limit, in which a semiclassical energy barrier severely inhibits passage to a low-energy manifold of states. By transitioning via entangled states, the trajectory avoids semiclassical energy barriers that would otherwise bar access to the low-energy spin manifold where RSB occurs.

Previously, we considered this same setup, but in the limit far above threshold where semiclassical approaches are valid, and considered the process of memory recall in such a device [14]. Here, we address a very different question, focusing on the (necessarily) quantum dynamics near threshold, and the resulting distribution of low-energy states found. A key point of this paper is that the final states at the end of the pumping sequence are classical, yet the ability to recover them relies on quantum dynamics.
The theoretical possibility of a spin glass phase in a multimode cavity QED system was suggested in Refs. [15-17]. We note that the driven-dissipative nature of the confocal cavity QED quantum spin glass discussed differs from previous theoretical investigations of transverse SK models in the closed-system context [18-22].

FIG. 1. (a) Sketch of the confocal cavity QED system. Transverse pump lasers (red) illuminate a network of atomic spin ensembles (orange) and scatter light into the cavity. The atomic spin ensembles at each node create either a collective spin or an effective spin-1/2 via Rydberg blockade. Either way, the atomic spin ensembles are held in place by optical dipole 'tweezer' lasers (not shown). The confocal cavity field is composed of a local field (blue) at each spin ensemble and a nonlocal field (green) that mediates interactions between spin ensembles. The spin states are read out by imaging the cavity emission on a camera via spatial heterodyne detection [11]. (b) Example simulated detection traces of integrated cavity emission for five spins; for clarity, the y-axis is normalized to the maximum signal magnitude after 4 ms. Each spin organizes into one of two orientations above the semiclassical superradiant threshold, indicated by a dashed line. (c) Plot of the ramp schedules versus time. 'Normal' (N) and 'superradiant' (SR) regimes are to either side of the semiclassical threshold (dashed vertical line). (d) Pumping scheme for a 87Rb atom. Balanced Raman transitions realize a pseudospin-1/2 degree of freedom. See text for details.
Experimental observations of RSB in physical settings have been reported in the spectra of semiclassical systems such as random lasers [23,24] and nonlinear wave propagation [25]. Recent experimental results indicate that RSB in a confocal cavity QED system has been realized using a network with many XY spins per node [13]. Recent theoretical work has noted that there can be phase transitions in the entanglement and correlations along individual quantum trajectories [26,27], even when such transitions are absent in the trajectory-averaged density matrix [28].

The paper is organized as follows. The next section describes the physical system we aim to simulate, its model Hamiltonian, and the Lindbladian dynamics to be unraveled by the trajectory simulations. Section III presents the quantum trajectory simulation method we employ, followed by results from individual trajectories in Sec. IV. Also discussed in Sec. IV are the evolution of entanglement entropy per spin and the difference in the lowest reachable configuration energy between entangled and semiclassical trajectories. The connection between quantum trajectories and replicas is presented in Sec. V.
Evidence for RSB using the spin overlap order parameter is shown in Sec. VI. Section VII discusses the nonequilibrium nature of the system's overlap distribution via comparisons to equilibrium distributions. Section VIII provides evidence for spin glass, ferromagnetic, and paramagnetic phases. The emergence of an ultrametric structure between replicas is presented in Sec. IX. A summary and discussion of broader implications are in Sec. X. Eight appendices provide information on: A, the form of the effective spin connectivity matrices in confocal cavity QED; B, the derivation of the semiclassical critical coupling strength; C, the derivation of the atom-only theory; D, the stochastic unraveling of the master equation; E, the semiclassical limit of the spin dynamics; F, the Parisi distribution in terms of quantum trajectories; G, error bar estimation via bootstrap analysis; and H, the full set of overlap distributions used in forming the Parisi overlap order parameter.

II. THE CONFOCAL CAVITY QED SPIN SYSTEM

We now provide a description of a practicable system upon which we base this numerical study. All parameters have been experimentally realized or are plausible using existing technology [29]. Photon-mediated spin interactions in the confocal cavity QED system were previously discussed in [14] and experimentally explored in [11,13,30]. The system is depicted in Fig. 1. Two transversely oriented lasers of pump strength Ω_± scatter photons off a network of N spin ensembles, each with M spins per ensemble, where M can vary between one (realizing the spin-1/2 quantum limit) and 10^5 (describing current experiments [10,11,13,29,30]); see Sec. X and Ref. [31] for a description of a practicable spin-1/2 scheme. These experiments employ 87Rb in a 1-cm-long confocal cavity. To realize the network, each spin may be trapped at a position r_i in the midplane of the cavity using an array of optical tweezers [32] (or optical dipole traps of larger waist [11,13]).
Similar to recent experimental work in which spins couple to a transversely pumped cavity [33-35], the (pseudo)spin states considered here correspond to 87Rb hyperfine states. The two pumps scatter light into the cavity via Raman transitions [36]. We consider an atomic detuning of ∆_± ≈ −2π×100 GHz from the 5²P_{3/2} atomic excited state and a controllable two-photon detuning ω_z ≈ 2π×10 kHz. The maximum single-atom, single-mode light-matter coupling strength can reach a magnitude of g_0 = 2π×1.5 MHz; see Sec. X for more details. We consider a confocal cavity of even symmetry under reflection about the cavity center axis [37]. This restricts the possible cavity modes to the set of Hermite-Gauss TEM_lm modes with indices l + m = 0 mod 2 [30]. An even confocal cavity retains modes of sine and cosine longitudinal character, and trapping the atoms in one of these two longitudinal quadratures with optical tweezers further restricts the set of participating modes to l + m = 0 mod 4. This results in the effective Ising coupling we consider here; see Ref. [14] and App. C. We denote the remaining mode functions by Ξ_µ(r), indexed by µ for brevity. A total of N_m modes participate in the near-degenerate family of confocal-cavity modes to which the atoms couple [38]. The modes are detuned from the mean pump frequency by ∆_µ = ω_p − ω_µ ≈ −2π×80 MHz.
The Hamiltonian in the rotating frame of the pump, after adiabatic elimination of the atomic excited state, is a multimode Hepp-Lieb-Dicke model [39-41]:

H = −∑_µ ∆_µ a_µ† a_µ + ω_z ∑_i S_i^z + g ∑_{i,µ} Ξ_µ(r_i) (a_µ + a_µ†) S_i^x.  (1)

The cavity modes are described by the bosonic operators a_µ, while the spin ensembles are described by the collective spin operators S_i^{x/y/z} to facilitate generalization to the M > 1 case. In the spin-1/2 limit, S_i^{x/y/z} = σ_i^{x/y/z}/2. The effective coupling strength g = √3 g_0 Ω_±/12∆_± is the same for each pump laser, which can be achieved by controlling their pump intensities [14,33,34]. Dissipation of the field of each cavity mode is incorporated using the Lindblad master equation

ρ̇ = −i[H, ρ] + (κ/2) ∑_µ D[a_µ]ρ,

where D[a]ρ = 2aρa† − {a†a, ρ}. We consider a uniform cavity loss rate κ = 2π×260 kHz, similar to recent experiments [29].

The cavity-mediated interaction J_ij between spin ensembles i and j may be derived using a polaron transformation of the Hamiltonian [14,30]. In the ideal confocal limit, it takes the form

J_ij ∝ (g²/|∆_C|) [ δ(r_i − r_j) + δ(r_i + r_j) + cos(2 r_i · r_j / w_0²) ],  (2)

where ∆_C is the detuning from the fundamental mode. The first two terms are local and mirror-image interactions. The local interactions are shown in blue in Fig. 1(a); for clarity, the mirror images are not shown. They arise from the constructive interference of cavity modes at the positions of the spins and their images across the cavity axis (due to the modes' even parity). The finite spatial extent of the spin ensembles and cavity imperfections regularize the delta functions into short- but finite-range interactions with a tunable length scale ≳ 2 µm in realistic cavities [10,29]. The interaction range is much smaller than the waist of the fundamental mode, w_0 = 35 µm. We provide in App. A formulas for the J matrix that incorporate finite-N_m and finite-size effects. The third, nonlocal term generates all-to-all, sign-changing interactions [10,11]. The nonlocal field is depicted in green in Fig.
1(a). By choosing positions r_i either close to or far from the cavity center, the nonlocal interaction can yield J matrices that interpolate between ferromagnetic (all J_ij > 0) and spin glass regimes [14]. The glass regime results from J matrices with randomly signed off-diagonal elements. That is, these have approximately independent and identically distributed off-diagonal elements, in roughly equal parts positive and negative. When atoms are distributed over a sufficiently large area, the confocal cavity-induced J connectivity matrices exhibit eigenvalue statistics that approximate those of an SK spin glass, i.e., the Gaussian orthogonal ensemble [14]. Glassiness might also be achievable in a confocal cavity without position disorder [42].

The transversely pumped system realizes a nonequilibrium Hepp-Lieb-Dicke phase transition [41]. At t = 0, the system is in the 'normal' state, with all cavity modes in the vacuum state and the spins pointed along S_i^z in |↓⟩. As discussed below, we consider a protocol where the coupling strength is ramped up as a function of time. At threshold, the system transitions into a 'superradiant' phase characterized by macroscopically populated cavity modes and spins that spontaneously break the global Z_2 symmetry to align along ±S_i^x. A sharp increase in cavity emission heralds the superradiant transition. Concomitantly, the spins order and the phase of the light emitted from each spin ensemble is locked to the spin orientation. Thus, the spins can be imaged in real time by spatially measuring the phase of the emitted cavity light. Holographic (spatial heterodyne) imaging has already been demonstrated [11].
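The sign-changing character of the nonlocal coupling is easy to see numerically. The sketch below keeps only the nonlocal cos(2 r_i·r_j/w_0²) term of the confocal interaction (an illustrative simplification; the local and mirror-image terms are dropped) for Gaussian-distributed positions, and checks that the off-diagonal couplings come out with mixed signs:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 15
w0 = 35.0  # fundamental-mode waist in micrometres

# Spins placed in the cavity midplane from a 2D Gaussian of width 2*w0.
positions = rng.normal(scale=2 * w0, size=(N, 2))

# Nonlocal part of the confocal interaction only (illustrative simplification).
J = np.cos(2 * positions @ positions.T / w0**2)
np.fill_diagonal(J, 0.0)

off = J[np.triu_indices(N, k=1)]
frac_negative = np.mean(off < 0)
eigenvalues = np.linalg.eigvalsh(J)  # J is symmetric, so the spectrum is real
print(f"negative couplings: {frac_negative:.2f}, spectral range: "
      f"[{eigenvalues.min():.2f}, {eigenvalues.max():.2f}]")
```

Because the positions are spread over scales comparable to w_0, the cosine argument varies by many radians between pairs, so roughly half the couplings are negative: the frustrated, sign-changing regime the text associates with glassy J matrices.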
In App. B, we provide a derivation of a general expression for the critical coupling strength g_c at which the superradiant threshold is reached in the semiclassical limit; it depends on the J matrix through its largest eigenvalue λ_max. The form of the J matrix determines both the threshold and the character of the ordered phase, e.g., ferromagnetic versus spin glass [14].

The pump coupling is ramped up in time as g(t)² ∝ f(t), with the smoothstep profile

f(s) = 3s² − 2s³, s = t/t_f,  (4)

the lowest-order polynomial that smoothly interpolates between two points with vanishing first derivative; the results that follow are insensitive to the precise functional form. The final pump intensity is 5× the critical value in the semiclassical limit. Given the experimentally relevant parameters employed, the timescales chosen are sufficient to allow the spins to reach an organized steady state before the onset of spontaneous emission, which occurs on approximately the 10-ms timescale. In addition, we choose to simultaneously ramp down the transverse field as ω_z(t) = 2π×10 [1 − f(t)] kHz. This could be accomplished by changing the two-photon detuning of the pumps [33]. Ramping ω_z to zero turns off spin flips between different S^x states because the Hamiltonian becomes diagonal in the S^x basis. The ramps for g(t)² and ω_z(t) are plotted in Fig. 1(c).

III. QUANTUM TRAJECTORY SIMULATIONS

Exact numerical simulations of (open) quantum many-body dynamics can be computationally expensive, especially in the confocal system due to the large number of modes in play. To explore the spin dynamics throughout the superradiant transition, we simplify the full dynamics to an atom-only Lindblad master equation whose derivation is a multimode generalization of the method of Jäger et al. [43]; see App. C for the derivation. The atom-only Hamiltonian has the form of a transverse-field Ising-type model in which J_ij is the same matrix as in Eq. (2); the remaining coefficients are obtained by restricting the treatment to the case of a completely degenerate cavity with uniform detuning ∆_µ = ∆_C for all modes. This is not an unreasonable approximation in the far-detuned regime |∆_C| ≫ κ, ω_z [29]. In this limit, the Hamiltonian resembles a transverse-field Ising model with an additional term S_i^x S_j^y that is sufficiently small to play little role in the present simulations. The full atom-only master equation has the Lindblad form, with N collapse operators C_k constructed from the eigenvectors of the J matrix: v_i^k is the i'th element of the k'th eigenvector, and all eigenvalues satisfy λ_k ≥ 0. Each collapse operator represents an orthogonal superposition of spin operators; Appendix C presents the derivation.

As noted above, the experimental protocol we consider involves ramping ω_z to zero at late times. In this limit, α_− goes to zero, so the final Hamiltonian has the simple Ising form H ∝ −∑_{ij} J_ij S_i^x S_j^x. Likewise, the collapse operators then contain only S_i^x operators. As such, any S_i^x eigenstate may become a steady state above threshold (though some are energetically preferred; see Sec. IV). Quantum trajectory simulations of the atom-only master equation provide a continuous record of the state of each spin. These trajectories arise from simulating a sequence of balanced homodyne measurements of the field emitted from each spin ensemble; see App. D for details. This mimics experimentally practicable heterodyne measurements: emitted cavity light is interfered on a camera with local oscillator (LO) light derived from the pumps to provide a phase reference [33]. This procedure enables holography of the spin states; see Fig.
1(a) for an illustration. Figure 1(b) shows what such data would look like as the homodyne signal for each spin is integrated in time. Each signal is dominated by noise below the superradiance threshold. Above threshold, the homodyne signals become phase-locked to the spins and undergo a bifurcation. The sign of each homodyne signal thus serves as a measurement of the corresponding superradiant spin state.

A single quantum trajectory evolves under the non-Hermitian Hamiltonian and is interrupted by quantum jumps with displaced collapse operators (C_k ± iβ)/√2; see App. D for details. These operators represent the two quadratures of the balanced homodyne scheme. The real number β is proportional to √κ multiplied by the coherent-state amplitude of the LO and the spatial overlap of the LO and the emitted cavity light. Each collapse operator can induce quantum jumps. To simulate the detection of cavity emission, these are stochastically generated to occur independently at rates ||(C_k ± iβ)|ψ⟩||²/2. The trajectories of each spin are derived from these simulated detections, as shown in App. D. While the quantum state diffusion method is simpler to define, the quantum jump method with high LO strength (large β) provides similar results with greater numerical stability at late times. While β influences the timescale over which the global Z_2 symmetry is broken, we find it does not affect the ensemble of steady-state spin configurations that are found.

IV. QUANTUM SPIN DYNAMICS, ENTANGLEMENT, AND ENERGY BARRIERS

We now explore the dynamics of the spin trajectories and elucidate the role of quantum entanglement therein. Figure 2(a,b) plots two independent quantum trajectories for a network of N = 15, M = 1 (i.e., spin-1/2) particles that share the same J matrix. A glassy J matrix is selected by assigning the spins to random positions in the cavity midplane according to a 2D Gaussian distribution with a 2w_0-wide standard deviation [14].
We observe that a spin-aligned σ_i^x steady-state configuration emerges within a few milliseconds of crossing the semiclassical transition threshold. Beforehand, a combination of unitary quantum dynamics and stochastic projections from the continuous measurement drives the spins away from their initial ⟨σ_i^z⟩ = −S configuration. Measurement acts to break the spins' Z_2 symmetry along σ_i^x. The rate at which this happens is proportional to κg²/∆_C² and is time-dependent through g. The timescale is approximately 5 ms at threshold and decreases to approximately 200 µs at the end of the ramp schedule. However, the organization timescale also depends on the structure of the eigenvectors v^k of the J matrix.

We also observe that collective spin-flip events between different low-energy states occur beyond this timescale. A diverse range of collective spin behavior occurs. For example, Fig. 2(a) exhibits a group of three spins approaching a ⟨σ_i^x⟩ = −1 steady state before collectively flipping toward ⟨σ_i^x⟩ = 1 at around 750 µs. By contrast, Fig. 2(b) shows another behavior, in which a group of four spins undergoes an extended period of unbroken Z_2 symmetry before rapidly organizing into a steady-state configuration.

The spin trajectory in Fig. 2(a) may be visualized using the Bloch sphere representation in Fig. 2(c). As the many-body quantum state remains pure within a single trajectory, paths through the interior of the Bloch sphere indicate entanglement between spins. We see that the quantum spins first take a non-classical trajectory of unbroken global Z_2 symmetry through the interior of the Bloch sphere. After the spins initially move upward toward the center of the Bloch sphere, the continuous measurement breaks spin-flip symmetry. The spins then emerge from the interior of the Bloch sphere to reach a steady-state spin configuration. Figure 2(d) shows the average of the paths the spins take.
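The 'interior of the Bloch sphere' criterion used here can be made concrete: for a globally pure state, a single spin's Bloch vector r_i = (⟨σ_i^x⟩, ⟨σ_i^y⟩, ⟨σ_i^z⟩) has |r_i| < 1 exactly when that spin is entangled with the rest. A minimal numpy check on a two-spin singlet (illustrative only, not the simulated 15-spin state):

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def bloch_vector_spin0(psi):
    """Bloch vector of spin 0 for a two-qubit pure state (basis |00>,|01>,|10>,|11>)."""
    return np.real([psi.conj() @ np.kron(p, I2) @ psi for p in (sx, sy, sz)])

# Maximally entangled singlet (|01> - |10>)/sqrt(2): Bloch vector at the origin.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
r = bloch_vector_spin0(singlet)

# Unentangled product state |00>: Bloch vector on the sphere's surface.
product = np.array([1, 0, 0, 0], dtype=complex)
r_prod = bloch_vector_spin0(product)

print(np.linalg.norm(r), np.linalg.norm(r_prod))  # 0.0 and 1.0
```

This is why the interior paths in Fig. 2(c) diagnose entanglement: a pure product state can never place any individual Bloch vector strictly inside the sphere.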
Entanglement is present during both the initial organization near threshold and the subsequent spin-flip events. We consider the entanglement entropy for each spin, given by − tr[ρ_i log(ρ_i)], where ρ_i is the reduced density matrix for spin i. In general, the entanglement entropy can be nonzero for either entangled states or mixed states. Thus, the entropy would not be a good measure of entanglement when applied to the density matrix of the system. However, the entropy does provide a good measure of entanglement when applied at the level of individual quantum trajectories, because each trajectory remains globally pure at all times. We choose the entropy over other measures of entanglement, such as the negativity [44], as it is more computationally tractable while faithfully capturing entanglement.

The entanglement entropy per spin is shown in Figs. 2(e,f) for the trajectories in panels (a) and (b), respectively. An initial increase near threshold can result from unbroken global Z_2 symmetry and transitions to other low-energy states. At first, the superposition of low-energy states largely preserves the global Z_2 symmetry. Measurement then begins to project out the superposition around 0.5 ms, resulting in decreasing entanglement and the breaking of Z_2 symmetry. Subsequent spikes in the entanglement accompany the spin-flip events. Last, the entanglement decays to zero as the spins reach a steady-state configuration that corresponds to a classical state. Figure 2(g) plots the entanglement entropy for each spin averaged over 200 trajectories. The initial peak slowly decays to zero, reflecting the occurrence of later spin-flip events.

Figure 3 contrasts these quantum spin trajectories with semiclassical trajectories for the same J matrix used in Fig. 2. The spin-1/2 degrees of freedom are now replaced with semiclassical, collective spins, each comprised of M = 10^5 spin-1/2 atoms. The semiclassical equations of motion are derived in App. E.
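The per-spin entropy − tr[ρ_i log ρ_i] reduces to a partial trace plus an eigenvalue sum. A minimal numpy sketch for one spin of a two-qubit pure state (illustrative; the paper's computation runs over 15 spins along each trajectory):

```python
import numpy as np

def entanglement_entropy_spin0(psi):
    """Von Neumann entropy of spin 0's reduced state, for a two-qubit pure state."""
    # Reshape amplitudes into a (spin 0, rest) matrix and trace out the rest.
    amplitudes = psi.reshape(2, 2)
    rho0 = amplitudes @ amplitudes.conj().T          # reduced density matrix
    eigenvalues = np.linalg.eigvalsh(rho0)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # avoid log(0)
    return float(-np.sum(eigenvalues * np.log(eigenvalues)))

# Maximally entangled Bell state: entropy log(2) per spin.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
# Unentangled product state: zero entropy.
product = np.array([1, 0, 0, 0], dtype=complex)

print(entanglement_entropy_spin0(bell), entanglement_entropy_spin0(product))
```

Because each trajectory stays globally pure, a nonzero value of this quantity for any spin is unambiguous evidence of entanglement, which is exactly the property exploited in Figs. 2(e-g).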
We see that, in contrast to the quantum dynamics, the semiclassical trajectories exhibit a rapid organization at the semiclassical transition threshold but are confined to the surface of the Bloch sphere, indicating the lack of entanglement. Unlike in the quantum limit, large oscillations are observed around the x-axis of the Bloch sphere.

To investigate the role entanglement might play in the evolution toward low-energy, steady-state spin configurations, we plot in Fig. 4 the energy of 20 quantum and semiclassical trajectories for the same J matrix and ramp schedule considered in Fig. 2. The shaded region is inaccessible to any unentangled spin state constrained to the surface of the Bloch sphere. We identified the boundaries of this semiclassically forbidden region through Monte-Carlo sampling of semiclassical spin states (i.e., those states constrained to the surface of the Bloch sphere) followed by gradient descent to the lowest possible energy state. We find that entanglement enables the quantum spins to follow trajectories (through the Bloch-sphere interior) that bypass this semiclassical energy barrier. This allows the quantum spin network to reach lower-energy steady-state configurations. Slower ramps could allow the semiclassical trajectories to follow a more adiabatic path back downward to similarly low-energy states. However, we find that such ramps must be at least an order-of-magnitude slower than those considered here. The addition of noise to the initial state can also yield a ∼25% decrease in the semiclassical steady-state energies, but this remains an order-of-magnitude higher compared to quantum trajectories.

The steady-state energy of the quantum trajectories seems to be primarily controlled by the ramp rate. Evolution through the superradiance transition has the form of a many-body Landau-Zener problem, with many-body gaps controlling the adiabatic timescale of the transition. The many-body gap near the transition is on the order of ω_z for a ferromagnetic J.
Thus, ω_z^{−1} sets the timescale for adiabatic evolution through the transition to either of the two ferromagnetic ground states. By contrast, spin glasses are characterized by nearly degenerate spin configurations that become exponentially numerous with N. This results in much smaller gaps near the transition, and nonadiabatic evolution is more likely to occur, as we see in Fig. 4(b). The chosen ramp rate is slow enough to prevent nonadiabatic transitions to highly excited states, but not slow enough to prevent transitions to the nearly degenerate local-minima states. Unitary evolution through the transition then produces an entangled superposition of low-energy states, as seen in Fig. 2, before projection into a single spin state occurs. The final energy of the trajectories is thus controlled by the nonadiabatic transitions experienced during the ramp as well as the measurement projection before ω_z is ramped to zero.

V. THE OVERLAP ORDER PARAMETER

We now establish the link between quantum trajectories and the spin-glass order parameter. Order in glassy systems can be identified through correlations between the many symmetry-broken thermodynamic states. This is captured by the replica overlap [3], defined classically as q_αβ = Σ_i ⟨s^α_i⟩⟨s^β_i⟩/N, where s^{α,β} are replica spin states and brackets denote a time average. The overlap distribution is given by

P_J(q) = [2/(n_R(n_R − 1))] Σ_{α<β} δ(q − q_αβ), (8)

where n_R is the number of replicas. Each P_J(q) can have structure that varies depending on the disorder realization J, even in the thermodynamic limit; this is the lack of self-averaging inherent in spin glasses [5]. Disorder-dependent fluctuations are averaged out in the Parisi distribution P(q) ≡ [P_J(q)]_J, where [•]_J denotes an average over disorder realizations. We discuss the central features of P(q) in Sec. VI. The overlap distribution for glassy quantum systems is defined similarly after performing a Suzuki-Trotter mapping to an equivalent classical system [45-47]; see App. F for details.
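The classical replica overlap and its distribution P_J(q) can be estimated numerically from an ensemble of replica spin configurations. A minimal sketch, in which the replicas are random stand-ins drawn from a few fixed configurations rather than actual trajectory outcomes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_R = 15, 200

# Hypothetical stand-ins for the steady-state replicas: n_R classical
# spin configurations s_i = ±1 drawn from four fixed "minima".
minima = rng.choice([-1, 1], size=(4, N))
replicas = minima[rng.integers(0, 4, size=n_R)]

# Replica overlap q_ab = sum_i s_i^a s_i^b / N for every pair a < b.
Q = replicas @ replicas.T / N            # full overlap matrix
iu = np.triu_indices(n_R, k=1)           # exclude self-overlaps q_aa
overlaps = Q[iu]

# P_J(q): normalized weights over the discrete overlap values, which
# for ±1 spins lie on the N+1 values linearly spaced in [-1, 1].
values, counts = np.unique(overlaps, return_counts=True)
P = counts / counts.sum()
```

For ±1 spins the overlap is quantized in steps of 2/N, so P_J(q) is a sum of delta peaks; the continuous δ in Eq. (8) is realized here as a discrete histogram.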
To connect the overlap distribution to quantum trajectories, we first cast the overlap into a particular form that applies directly at the quantum level. In App. F we show that the overlap distribution is closely related to the operator

O = (1/N) Σ_i σ^x_i ⊗ σ^x_i, (9)

where the σ^x_i are Pauli operators for each site. We refer to the above as the overlap operator given its close correspondence to the classical overlap q_αβ. It acts on the doubled Hilbert space of ρ_J ⊗ ρ_J, where ρ_J is the density matrix for a given J realization. The statistical moments of the Parisi distribution are shown to be given by q^(k) = [⟨O^k⟩]_J. This close relation allows for a simple expression of the overlap distribution in terms of O. To do so, we use the eigenstate representation O = Σ_q q P_q, where the sum over q includes all N + 1 overlap values linearly spaced in [−1, 1]. The operators P_q are projections onto the space of spin states with overlap q. The overlap is then given by

P_J(q) = tr[(ρ_J ⊗ ρ_J) P_q]. (10)

The connection to quantum trajectories is now established using the pure-state representation ρ_J = Σ_{α=1}^{n_T} |ψ^α_J⟩⟨ψ^α_J| / n_T, where n_T is the total number of pure states |ψ^α_J⟩. Each quantum trajectory is one of these pure states. Inserting this form into Eq. (10) yields

P_J(q) = (1/n_T²) Σ_{α,β} ⟨ψ^α_J| ⊗ ⟨ψ^β_J| P_q |ψ^α_J⟩ ⊗ |ψ^β_J⟩. (11)

To summarize, trajectories of the same disorder realization can find different symmetry-broken states of the glassy landscape due to stochastic evolution induced by the environment. Each pair of symmetry-broken states |ψ^α_J⟩ and |ψ^β_J⟩ then contributes to P_J(q) through their projection onto the subspace with overlap q.

The classical expression for P_J(q) is recovered when the trajectories |ψ^α_J⟩ are spin eigenstates. Each state then corresponds to a classical spin vector with elements ⟨σ^x_i⟩_α = ±1. Eq. (11) then reduces to Eq.
(8) with the replica overlap given by

q_αβ = (1/N) Σ_i ⟨σ^x_i⟩_α ⟨σ^x_i⟩_β. (12)

We refer to q_αβ above as the mean-field overlap, as it corresponds to the overlap operator O in the mean-field limit, i.e., when trajectories are spin eigenstates and thus the wavefunction factorizes between sites. The fundamental difference between the classical and quantum overlap is that entanglement between spins can allow for superpositions of spin states. This allows a pair of quantum states |ψ^α_J⟩ and |ψ^β_J⟩ to contribute to multiple values of the overlap distribution at once, which does not occur in the classical limit.

VI. REPLICA SYMMETRY BREAKING

We now explore the emergence of RSB as the system is pumped through the transverse Ising transition. To do so, we first analyze the correlations between independent quantum trajectories of a system with the same quenched disorder for all trajectories: that is, a frustrated spin system with the same J matrix for all trajectory simulations. Because the trajectories ultimately reach classical steady-state spin configurations, despite entangled quantum dynamics at intermediate times, we study the mean-field overlap q_αβ in Eq. (12). This q_αβ directly yields the overlap distribution predicted by replica theory once steady-state spin configurations are reached, which occurs after ∼2 ms; it provides a mean-field estimate at earlier times. This q_αβ depends on only first-order expectation values, and thus has the significant advantage of being directly observable from the trajectory measurement record.
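Because the mean-field overlap depends only on the first-order expectation values ⟨σ^x_i⟩, it can be assembled directly from trajectory data. A sketch under the assumption that such expectation values are available as an array per trajectory (here synthesized randomly; values need not be ±1 before steady state):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_T = 15, 200

# Hypothetical stand-in for <sigma^x_i> along each trajectory: values
# in (-1, 1) that have mostly collapsed toward ±1 in steady state.
x = np.tanh(3 * rng.standard_normal((n_T, N)))

# Mean-field overlap matrix q_ab = sum_i <s^x_i>_a <s^x_i>_b / N.
Q = x @ x.T / N

# Bin into an overlap distribution, excluding the self-overlaps q_aa,
# which would otherwise bias only the positive q = q_EA goalpost.
iu = np.triu_indices(n_T, k=1)
hist, edges = np.histogram(Q[iu], bins=N + 1, range=(-1, 1), density=True)
```

By the Cauchy-Schwarz inequality |q_αβ| ≤ 1 automatically, so the histogram range [−1, 1] always suffices.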
Once a steady-state configuration is found, the overlap takes on one of N + 1 possible values ∈ [−1, 1]. The overlap distribution is always symmetric about 0, due to the global Z_2 symmetry, in the absence of a longitudinal field. An ordered phase will exhibit an overlap distribution containing 'goalpost' peaks at q = ±q_EA, where q_EA = q_αα is the Edwards-Anderson order parameter, also known as the self-overlap. (Paramagnets do not have such peaks, but ferromagnets and spin glasses do.) Peaks may also arise associated with overlaps q_αβ between replicas that settle into different spin states. These additional nonvanishing peaks between the goalposts indicate RSB and arise from the smaller overlap between distinct, low-energy states [5].

We first consider the overlap distribution for the same J matrix considered in Figs. 2-4. Figure 5(a-e) shows the time evolution of q_αβ as it approaches steady state, around 4 ms. To construct the overlap distribution of a fixed J matrix, we consider 200 quantum trajectories with identical initial conditions and the same ramp schedule as shown in Fig.
1(c). We then compute the overlap q_αβ between every pair of the 200 replicas and bin the results as a function of time. We exclude the self-overlaps q_αα because, while they have vanishing weight in the limit of an infinite number of replicas, they provide an asymmetric bias to only the positive q = q_EA peak of the overlap for finite sample sizes. At t = 0, the system is in the normal (paramagnetic) state and the overlap between any two replicas is zero, because ⟨σ^x_i⟩_α = 0 for all spins in the initial σ^z_i-polarized state. A nonzero overlap emerges as the spins transition to the superradiant regime and align along ±σ^x_i. The final overlap distribution shows goalpost peaks at ±q_EA. We find q_EA ≈ 1 in the parameter regime of our simulation, but note that q_EA is commonly overestimated due to finite-size effects [48]. Interior peaks indicate RSB. These interior peaks arise from correlations between distinct spin configurations that are local minima of the Ising energy E = −Σ_{ij} J_ij s_i s_j, where all s_i = ±1. By local minimum, we refer to spin states for which flipping any single spin raises the total energy. We note that the steady-state overlap distribution is independent of the exact measurement scheme; for any LO strength β > 0 the same ensemble of spin states is found, leading to the same overlap distribution.
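The local minima referred to here, i.e., spin states for which flipping any single spin raises the energy, can be found for small N by brute-force enumeration. A sketch using a random symmetric matrix as a stand-in for the confocal J:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
N = 10  # small enough to enumerate all 2^N spin states
A = rng.standard_normal((N, N))
J = (A + A.T) / 2
np.fill_diagonal(J, 0)

def energy(s):
    """Ising energy E = -sum_ij J_ij s_i s_j."""
    return -s @ J @ s

def is_local_min(s):
    """A local minimum: flipping any single spin raises the energy."""
    e0 = energy(s)
    for i in range(N):
        t = s.copy(); t[i] *= -1
        if energy(t) <= e0:
            return False
    return True

minima = [np.array(s) for s in product([-1, 1], repeat=N)
          if is_local_min(np.array(s))]
# Minima come in Z2-related pairs: s and -s have identical energy.
assert len(minima) % 2 == 0
```

Because E is invariant under s → −s, minima always appear in Z_2-related pairs; counting pairs not related by this symmetry gives the number of distinct minima quoted in the text.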
Figure 5(f-j) shows the full overlap matrix q_αβ versus time, before we bin into the above histograms. In the thermodynamic limit, the Parisi ansatz for the solution to the SK model predicts an ultrametric overlap structure emerging from RSB. Specifically, the ansatz predicts a nested block-diagonal structure in which the overlap magnitudes are larger in the diagonal blocks than in the off-diagonal blocks. The diagonal blocks are then expected to further divide into smaller diagonal blocks with larger overlap and off-diagonal blocks with smaller overlap, and so on. The ansatz thus predicts a self-similar overlap matrix in the limit N → ∞, while for finite-size systems the self-similarity truncates at a finite depth. For the N = 15 case in Fig. 5(j), we find evidence for up to three levels of the RSB block structure. A primary 2×2 block structure emerges that approximately separates replicas 1-100 from 101-200. The primary block of replicas 1-100 is further subdivided into 2×2 blocks separated near replica 30, distinguishing regions of higher overlap from lower. Evidence for tertiary block structure may be found in the sub-block containing replicas 30-100; a final subdivision may be seen near replica 55. We leave to future work quantitative analyses of self-similarity, the depth of RSB, and how RSB scales with N. However, we later quantify the degree of ultrametricity in the overlap distribution in Sec. IX.

To provide further insight, we delve into the structure of the steady-state overlap distribution produced by the J matrix considered in Figs. 2-5. This J matrix induces a rugged Ising energy landscape that contains six local minima not related by the global Z_2 symmetry. These were found by numerically enumerating all spin states. Of the 200 quantum trajectories in Fig.
5, 66% reached one of these six local minima in steady state. An additional 20% were within one spin flip of a local minimum, while the remaining 14% were between two and four spin flips away. To show the relation of each minimum's occurrence probability to its energy, we plot these together in Fig. 6(a), binning each trajectory by its nearest local minimum. The result shows a clear anticorrelation: the lowest-energy steady-state spin configurations are observed most frequently. Though the system is not in thermal equilibrium, this tendency toward low-energy states is reminiscent of a low-temperature system; see Sec. VII for a discussion of system temperature.

The overlap matrix between local minima is plotted in Fig. 6(b). The numbers in the matrix entries are the absolute values of the overlaps between the indicated minima. The diagonal entries correspond to the self-overlap, which is always unity. The overlap matrix allows us to pinpoint the pairs of spin configurations that create each peak in the overlap distribution of Fig. 5(j). This plot is reproduced in Fig. 6(c). Every peak in the distribution can be understood by considering the overlaps between the first three local minima in Fig. 6(b). Each peak in the distribution is annotated with the pair of minima x:y that produce that value of the overlap. The remaining local minima were found too infrequently to produce any distinct peaks in the overlap distribution.

Each J matrix produces a different set of local minima, and thus a different overlap distribution. This is evident in an ensemble of 100 confocal J matrices produced by assigning spins to different random locations in the cavity midplane with standard deviation 2w_0, which lies in the spin glass regime [14]. The overlap distribution for each J matrix is constructed from 200 quantum trajectories, as in Fig. 5. Resulting steady-state overlap distributions for three representative J's are shown in Fig. 7.
Appendix H has plots of all 100 overlap distributions. The overlap distributions all exhibit ±q_EA peaks, with q_EA equal to unity for most disorder realizations. Variation within the interior demonstrates that the correlations between low-energy minima vary between J matrices. This is the non-self-averaging phenomenon inherent to SK spin glasses [5]. The three J matrices are chosen to display a representative diversity of structure found in the overlap distributions. The J in Fig. 7(a) produces an overlap distribution that is dominated by a single low-energy spin configuration. Other peaks (from different configurations) occur in only ∼10% of the trajectories. Figure 7(c) shows the other extreme, in which many different spin configurations are found with multiple levels of clustering between states. This overlap matrix is indicative of a far glassier system. The character of most overlap matrices falls between these two for our system size of N = 15, such as the J in Fig. 7(b).

In the large-size limit, high peaks should be sparse in the overlap distribution because only a small set of thermodynamic states have significant weight. The peak positions do not average out into a smooth distribution between the goalposts. This is indicative of the lack of self-averaging manifest in these order-parameter observables of the spin glass state. The order parameter that does average is the Parisi distribution [5], which we discuss in Sec. VIII below.

VII.
EFFECTIVE TEMPERATURE

The overlap distributions do not appear to be consistent with an effective thermal equilibrium model. This is not surprising in this driven-dissipative quantum optical setting. Nevertheless, it is instructive to compare these distributions to those expected at equilibrium. Equilibrium overlap distributions can be constructed by assigning probabilities to spin states according to a Boltzmann factor exp(−E/k_B T), where E is the Ising energy of the spin state and T is an effective equilibrium temperature that serves as a fit parameter. The overlap between all pairs of states is then binned and weighted by their Boltzmann factors. We perform a least-squares fit to each of the overlap distributions in Fig. 7 to extract T_fit. The corresponding distributions are shown in red. The extracted temperatures are provided in units of T_c, the largest eigenvalue of the J matrix averaged over all J matrices. This quantity corresponds to the critical temperature for the SK spin glass transition in the thermodynamic limit [4]. In our finite system, T_c corresponds to the average crossover temperature.

While the equilibrium model does capture the locations of peaks in the overlap distribution, it is not able to quantitatively match their heights; instead, they are often underestimated near the center of the distribution and overestimated in the wings. The average T_fit is 0.21(8) T_c. Despite the lack of quantitative correspondence, the fitted temperatures are far enough below T_c to infer the presence of a low-energy ordered phase in this system, even when using realistic parameters. Indeed, the authors have observed such states in a related experimental system [13].

VIII.
SPIN GLASS, FERROMAGNETIC, AND PARAMAGNETIC PHASES

Two order parameters are needed to distinguish between the different types of phases described by the Ising Hamiltonian. These are the J-averaged spin overlap distribution, also known as the Parisi order parameter, and the usual magnetization. The magnetization order parameter m = Σ_i ⟨σ^x_i⟩/N is used to discriminate ferromagnetic from glassy or paramagnetic ordering. It should approach ±1 at low temperature in a ferromagnetic state but remain close to zero in the spin glass and paramagnetic phases.

The spin glass is distinguished from the paramagnet via the Parisi order parameter, the J-averaged overlap distribution. The average should form a smooth distribution for the SK spin glass. The paramagnet has a Parisi order-parameter distribution that is peaked around zero, while the spin glass and ferromagnet are peaked around ±q_EA. Last, while the ferromagnet's Parisi distribution has no support between ±q_EA, the spin glass has a "net-with-goalposts" structure of smooth interior support.

We average the overlap and magnetization distributions for 100 confocal J matrices to yield the aggregate distributions in Fig. 8(a). Distributions characteristic of a spin glass arise, consisting of extremal peaks at ±q_EA bridged by a continuous interior for the overlap, and a magnetization peaked near zero. This is consistent with the result expected from the Parisi solution to the SK spin glass [5].
A fit to the thermal equilibrium model yields T_fit = 0.21(2) T_c, closely matching the average temperature found from the individual fits discussed above. The magnetization is well approximated by a centered binomial distribution, indicating that the local minima are uncorrelated with a ferromagnetic state. The standard deviation 0.31 of this distribution is close to that expected of the SK model at this system size, 1/√N ≈ 0.26. The difference from a Gaussian may indicate a small ferromagnetic remnant in the confocal J matrices that could be eliminated by placing spins further from the cavity midpoint [14].

[Fig. 8(c) caption: The same set of spin glass J matrices as in panel (a), but the system is rapidly quenched into the superradiant regime rather than ramped according to f(t). The overlap and magnetization both cluster around zero, indicative of a paramagnetic phase. The thermal fit is poorly constrained in this regime: shown is the distribution for T_fit = 5 T_c, which bears similarity to the data.]

By contrast, the ferromagnetic confocal J matrices reveal a very different behavior in Fig. 8(b). The ferromagnetic ensemble is constructed by using Gaussian-distributed spin positions with a standard deviation of only 0.5w_0 in the cavity midplane. This leads to ferromagnetic J matrices with predominantly positive matrix elements and two global ground states corresponding to the two fully aligned spin states [14]. Two hundred quantum trajectories with the same ramp schedule as in Fig. 1(c) are used to construct the overlap and magnetization distributions per J matrix. The distributions are then averaged over the 100 matrices in the ensemble to produce the aggregate distributions in Fig.
8(b). The lack of support in the interior of the overlap and magnetization distributions indicates that only these two Z_2-related spin states are found with high probability. This is consistent with a ferromagnetic phase. A least-squares fit to the thermal model yields a temperature T_fit = 0.011(2) T_c, where T_c is twice the maximum eigenvalue for J matrices in the ferromagnetic regime [4]. This T_fit is lower than that found for the spin glass ensemble. This may be due to the larger energy gap to the ground state in the unfrustrated ferromagnetic J matrices: it is easier to maintain adiabaticity during the ramp, and fewer Landau-Zener transitions mean a lower effective temperature.

A paramagnetic regime can be accessed by quenching the system into the superradiant regime rather than slowly ramping through the transition. In this case, adiabaticity is lost, and transitions into many excited states occur. Figure 8(c) shows the overlap and magnetization distributions that result from such a quench. The same spin glass J matrices as in Fig. 8(a) are considered, with 200 quantum trajectories per J, and all other parameters remain the same. Both the overlap and magnetization distributions are well approximated by centered binomial distributions of standard deviation 1/√N. This is indicative of a paramagnetic phase in which states are found at random.

Last, we present in Fig. 9 the dynamical evolution of the Parisi order-parameter distribution for the spin glass. The distribution converges, after around 2 ms, to that of Fig.
8(a) in steady state. We also note that a finite-size scaling analysis of the Binder ratio 1 − ⟨q⁴_αβ⟩/(3⟨q²_αβ⟩²) is often used to pinpoint the exact location of the spin glass transition [49]. However, the Binder ratio is ill-defined in this quantum system at early times because the overlap distribution begins as a delta function at t = 0, for which both the second- and fourth-order moments are zero. This happens because the spins begin aligned along σ^z rather than σ^x, a difficulty not encountered in typical equilibrium states of the classical Ising model. This makes the scaling analysis in this system more complicated, which we leave to future work.

IX. ULTRAMETRICITY

A prediction of Parisi's RSB ansatz for the SK spin glass solution is the formation of an ultrametric structure in the space of replicas [50]. An ultrametric space is one satisfying the strong triangle inequality: given any three points x, y, and z, the distances between those points should satisfy d(x, z) ≤ max[d(x, y), d(y, z)]. It can be shown from this inequality that any triplet of points must form an isosceles triangle, either acute or equilateral. In the SK spin glass, replicas cluster into groups corresponding to low-energy local minima. Any triplet of replicas obeys this inequality in the thermodynamic limit, where the distance between replicas is the normalized Hamming distance, or equivalently d(α, β) = 1 − |q_αβ|.

Numerical studies have verified that ultrametricity slowly emerges in system sizes up to 10³ [51-53]. The approach to ultrametricity is quantified by use of the metric K = (d_max − d_med)/σ(d), where d_max is the largest distance in a given triplet of states and d_med is the second largest (or the median). Their difference should be zero in an ultrametric space due to the isosceles condition. The difference is normalized by σ(d), the width of the distribution of distances between all states in the ultrametric space.
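The K metric over triplets of replicas can be computed as follows; the clustered replica states here are synthetic stand-ins for the steady-state trajectories:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N, n_R = 15, 60

# Hypothetical replica states clustered into three groups, standing in
# for trajectories that settled into three distinct local minima.
centers = rng.choice([-1.0, 1.0], size=(3, N))
states = centers[rng.integers(0, 3, size=n_R)]

# Distances d(a, b) = 1 - |q_ab| from the replica overlap matrix.
Q = states @ states.T / N
D = 1 - np.abs(Q)

# K = (d_max - d_med) / sigma(d) over every triplet of states; K = 0
# for an exactly ultrametric (isosceles) triangle of distances.
sigma = D[np.triu_indices(n_R, k=1)].std()
Ks = []
for a, b, c in combinations(range(n_R), 3):
    d = sorted([D[a, b], D[a, c], D[b, c]])
    Ks.append((d[2] - d[1]) / sigma)
Ks = np.array(Ks)
```

A distribution of K increasingly peaked near zero, as in Fig. 10(b), is the signature of an emerging ultrametric space.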
The overlap matrices and distributions in Figs. 5-7 already exhibit the expected clustering of replicas into groups associated with local minima. Figure 10(a) demonstrates this even more clearly by plotting the associated dendrogram above the overlap matrix. The clustering of replicas into four primary groups is visible.

[FIG. 10. (a) The replica overlap matrix of 200 quantum trajectories for a confocal J matrix in the spin glass regime. The states cluster into one of four primary groups, which also appear as four primary blocks on the matrix diagonal. The dendrogram above shows the hierarchical clustering associated with the fracturing of the overlap matrix into four sectors, with various degrees of correlation between sectors. (b) The distribution of the K metric for ultrametricity, averaged over 100 realizations of the J matrix for each system size. The distribution becomes increasingly peaked near zero, providing evidence of an ultrametric space emerging with increased system size. (c) The mean of the J-averaged K distribution as a function of system size, further showing the emergence of an ultrametric space.]

Figure 10(b) plots the J-averaged distribution of K as a function of system size. For each N, we generate 100 confocal J matrices in the spin glass regime and perform 200 quantum trajectories per J. For each J, the K distribution is computed between all triplets of trajectories and the resulting distribution is averaged over J matrices. We begin the analysis at N = 8 because we find that low-energy local minima are not reliably present in smaller systems, with only a single cluster of states typically found. But even over the restricted range of N available, we find that the K distribution becomes increasingly narrow with increasing N. Moreover, Fig.
10(c) plots the mean of the K distribution with N, further showing a decrease in K with N. These constitute evidence for the emergence of ultrametricity in the system. Oscillations in ⟨K⟩ may arise from finite-size effects but appear to dampen with increasing N. We conclude that there is evidence for an approach to ultrametricity that is consistent with the significant finite-size effects found in SK spin glasses [52].

X. DISCUSSION

In summary, the transient formation of entangled states in a confocal cavity QED system allows RSB, concomitant with Ising symmetry breaking, to arise via the interplay of unitary Hamiltonian evolution and dissipative dynamics. The resulting Parisi distribution does not appear to be in equilibrium, which is expected given the driven-dissipative nature of the open quantum system. We note that previous work identified an effective temperature [14,54] associated with the multimode cavity QED dissipative dynamics. Such a temperature was found by considering detailed balance between energy-raising and energy-lowering processes in a system driven far above threshold. This effective temperature is much larger than T_c in the spin-1/2 limit. (Although it is small in the semiclassical limit when M ≫ 1 [14].) Fortunately, however, this poses no obstacle to observing sufficiently low-energy states and RSB, because the ramp through threshold reaches a quasi-steady state much faster than the timescale ∝ Δ⁴_C/(g²ω²_z κ) at which thermalization at T_eff would occur [14].
Last, we note that the light-matter coupling strength required to reach the superradiant threshold for the spin-1/2 system is higher by a factor of √M compared to the semiclassical limit. We estimate that this interaction strength could be achieved by Rydberg-dressing large atomic ensembles to yield a Rydberg blockade within each [31]. This allows each spin ensemble to behave as if it were a single spin-1/2 degree of freedom while retaining the same collectively enhanced coupling strength ∝ √M g_0 [55]. Coupling to a Rydberg state could be realized through the addition of two pump lasers to the atomic level scheme in Fig. 1(d); Ref. [31] provides more details. Briefly, a laser at 780 nm would drive the |↑⟩ state to the atomic excited state 5²P_{3/2}, while a blue beam at 479 nm would drive a transition from this excited state to the 100²S_{1/2} Rydberg state. (Rydberg dressing inside optical cavities has been achieved [56].) The combined coupling terms produce a dark state that mixes the Rydberg and |↑⟩ states while avoiding the atomic excited state. Conservatively estimating an 8-µm average interatomic separation within a spin ensemble, a Rydberg-Rydberg interaction strength on the order of 200 MHz could be achieved. This should be sufficiently strong to push multiply excited Rydberg states far off resonance, resulting in an effective spin-1/2 degree of freedom for each spin ensemble. The spontaneous emission lifetime of the atoms is estimated to be greater than 10 ms, longer than the timescales shown above for RSB to emerge. This would allow RSB to be observed in a quantum optical context where implementations of, e.g., associative memory [14,17,57-61] might be realized. Doing so would provide experimental access to questions regarding how quantum effects might determine memory capacity and fidelity.

The research data supporting this publication can be accessed on the Harvard Dataverse [62].
, where α is any complex number with real part greater than zero and w_0 is the waist of the fundamental mode. In a confocal cavity, a given resonance supports only the set of even modes or the set of odd modes. As such, it is useful to define the Green's function corresponding to either even modes (symmetrized) or odd modes (antisymmetrized). Choosing the even case, the symmetrized Green's function is defined as follows. The confocal cavity-mediated interaction D(r, r′) is expressed in terms of Green's functions [29,30]. The ideal interaction for a perfectly degenerate cavity with infinite mode support and delta-function-wide atomic ensembles is found by setting α = 0. The α = 0 limit corresponds to the J matrix in Eq. (2), where the term on the first line gives rise to the delta-function local and mirror interactions, while the other terms give rise to the nonlocal interaction. Allowing for α > 0 provides a good approximation for cavities with both mirror aberrations and finite-sized atomic ensembles [29]. In this work, we use α = 0.02 to achieve a ratio of approximately ten between the local and nonlocal interactions, which roughly matches observations in recent confocal cavity experiments [10]. This α yields local and mirror interactions with a Gaussian waist much smaller than w_0 and a nonlocal interaction with a large Gaussian envelope of waist much larger than w_0. While finite α does limit the maximum distance over which the nonlocal interaction can occur, it does not significantly affect the J matrices for this work, which considers atomic positions out to only ∼4w_0. We note that the precise form of the confocal interaction depends on the details of both the atomic distribution and the nature of the cavity imperfections [29]. However, these precise details matter little for the present work, since they result in only small changes to the already disordered J matrices.
The degree of randomness in the J matrices produced by this cavity-mediated interaction was studied in depth in previous work [14]. The elements of the J matrix become increasingly uncorrelated as w, the standard deviation of the atomic positions in the cavity midplane, becomes large compared with w_0 = 35 µm, the waist of the fundamental mode of the cavity. The correlation between randomly chosen J_ij elements is less than one percent for w = 2w_0, which is the value of w considered in the main text for generating glassy J matrices. This lack of correlation comes about because of the incommensurate periodic dependence of J_ij on the positions r_i, r_j. The confocal J matrices produce eigenvalue spectra that approach a semicircle distribution, as expected for random matrices drawn from the Gaussian orthogonal ensemble (GOE), precisely like those of the SK model. At the system size N = 15 considered in the main text, the eigenvalue distribution is within 5% of the GOE semicircle distribution.

Appendix B: Derivation of semiclassical critical coupling strength

We now derive Eq. (3) for the critical coupling strength of the superradiant phase transition using linear stability analysis. The spin operators S^α_i are first mapped to bosonic operators b_i through the Holstein-Primakoff transformation, where M is the number of atoms per ensemble. This transformation accurately models fluctuations around the normal phase when M is large. The original Hamiltonian in Eq.
(1) is then transformed, up to a constant shift, to a quadratic bosonic form, where N_m is the total number of cavity modes and g is the effective spin-photon coupling strength. The operator equations of motion, including cavity dissipation, then follow for the cavity operators ȧ_µ and the atomic operators. The equations of motion are linear and thus directly solvable. To organize the set of operators, we introduce an operator-valued vector u whose first 2N elements are the atomic operators, followed by 2N_m cavity operators. Using this notation, the equations of motion can be written in the concise form u̇ = Au for a linear operator A.

The critical coupling strength g_c can be found from the retarded Green's function, which describes the response of the system to an external drive. It takes the form G^R_{ij}(t) = −iθ(t)⟨[v_i(t), v^†_j(0)]⟩, where θ(t) is the Heaviside step function. The Fourier transform is then defined by G^R_{ij}(ω) = ∫dt e^{iωt} G^R_{ij}(t). The Green's function is related to the linear operator A by [G^R(ω)]^{−1} = S^{−1}(ω − iA) in the case of linear Heisenberg equations [40], where S_ij = ⟨[v_i(0), v^†_j(0)]⟩ are the equal-time commutation relations. In this case, S = diag(+1, −1, +1, −1, ...) follows from canonical bosonic commutation relations. The full inverse Green's function, while large, has a simple 2×2 block form; the first few rows and columns of each block are shown below.

We analyze the Green's function by first assigning blocks of the matrix. The diagonal matrix D_spin is 2N×2N with alternating elements ω_z − ω, then ω_z + ω. The matrix D_cav is also diagonal, of size 2N_m×2N_m, with elements −∆_µ − ω − iκ followed by −∆_µ + ω + iκ, where µ increases from 1 to N_m. The matrix C describes the coupling between the cavity modes and the spin modes and is of size 2N×2N_m; it is most easily expressed in terms of 2×2 blocks. We now determine when an instability in the normal phase occurs by considering the poles of the inverse Green's function. The poles dictate the characteristic response frequencies of the system, and thus the determination of when ω
= 0 becomes a pole probes the global stability of the phase. This point can be found by considering when det[G^R(ω = 0)]^{−1} crosses zero. The procedure amounts to a normal-mode analysis of the linear equations of motion, in which the system is stable only when all eigenvalues of A are greater than zero. The determinant can be written using the Schur complement [63] of D_cav, whose prefactor det D_cav is always positive. Thus, the instability condition simplifies to det(D_spin − C D_cav^{−1} C^T) = 0. The matrix inside the determinant has a simple tensor-product structure: at ω = 0 it is given, up to an identity shift, by a tensor product of J with a 2×2 matrix of ones, where ⊗ denotes the tensor product, I is the identity operator, and J is the cavity-mediated interaction connectivity matrix introduced in Eq. (2).

We must now determine when one of the eigenvalues of this matrix crosses zero. The identity matrix simply shifts all eigenvalues by one. The eigenvectors of the total matrix have the form v_k ⊗ w, where v_k is an eigenvector of J with eigenvalue λ_k ≥ 0 and w is an eigenvector of the 2×2 matrix of ones, which has a zero eigenvalue and a nonzero eigenvalue 2 with eigenvector (1, 1)/√2. We thus find that half of the eigenvalues of the matrix in Eq. (B9) are degenerate with value one, while the other half depend on g; the smallest value of g at which one of them crosses zero sets the critical coupling strength g_c. The critical coupling thus depends on the largest eigenvalue λ_max of the J matrix. Inserting λ_max, setting the expression to zero, and solving for g yields the critical coupling strength of Eq. (3).

Appendix C: Derivation of the atom-only theory

We apply the method of Jäger et al. [43] to produce an atom-only theory for spins in a confocal cavity. The method accurately reproduces the low-energy spectrum of the single-mode driven-dissipative Dicke model, both below and above threshold. We extend it to the multimode, multiple-spin-ensemble case described by the Hamiltonian in Eq. (1).
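The tensor-product spectrum described in the stability analysis above — half the eigenvalues pinned at one, the other half shifted by 2cλ_k — is easy to verify numerically. The sketch below uses an arbitrary random symmetric matrix standing in for J and an arbitrary prefactor c; these are illustrative placeholders, not the paper's physical parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# Arbitrary symmetric matrix standing in for the connectivity matrix J.
J = rng.normal(size=(N, N))
J = (J + J.T) / 2
lam = np.linalg.eigvalsh(J)  # eigenvalues lambda_k of J

c = -0.3                 # illustrative prefactor (g-dependent in the model)
E = np.ones((2, 2))      # 2x2 matrix of ones: eigenvalues 0 and 2

# Matrix of the form I + c * (J ⊗ E), as in the stability analysis.
M = np.eye(2 * N) + c * np.kron(J, E)
spec = np.sort(np.linalg.eigvalsh(M))

# Predicted spectrum: N eigenvalues equal to 1, and N equal to 1 + 2*c*lambda_k.
predicted = np.sort(np.concatenate([np.ones(N), 1 + 2 * c * lam]))
print(np.allclose(spec, predicted))
```

Because only the branch 1 + 2cλ_k moves, the first eigenvalue to cross zero is controlled by λ_max alone, which is why only the largest eigenvalue of J enters the critical coupling.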
We must first find "effective fields," operators in the spin Hilbert space that best approximate the effect of the cavity modes. There is one effective field S̃_µ for each cavity mode that is eliminated, and the field may be time-dependent. The effective fields are chosen to satisfy the differential equations of Eq. (C1). Solving for them begins with an ansatz whose insertion into Eq. (C1) yields differential equations for the coefficients. These can be solved explicitly in the limit that the ramp function f(t) changes slowly over the cavity loss timescale 2π/κ ≈ 10 µs, a limit well satisfied given that f(t) ramps over a period of 600 µs.

We now restrict ourselves to the degenerate case Δ_µ = Δ_C for all µ. The effective fields can then be written in terms of α_± given by Eq. (6) and used to construct the atom-only master equation. Inserting the effective fields from Eq. (C5) into the Hamiltonian part and recognizing the J matrix leads to the atom-only Hamiltonian presented in Eq. (5). While correct, the full form of the master equation is complicated to use because it involves an infinite sum of dissipation terms. This can be avoided by noting that there are only N linearly independent jump operators that can be formed out of the N operators α_+ S^x_i + iα_− S^y_i. As such, one can rewrite the dissipation term by expanding the effective fields. The result involves the non-diagonal Lindblad superoperator D[X, Y] = 2XρY† − {Y†X, ρ}, and the Lindblad-Kossakowski matrix J is exactly the cavity-mediated interaction matrix defined in Eq.
(2). Diagonalization of J brings the master equation to diagonal form, with the corresponding collapse operators. The square of the coefficient multiplying each collapse operator gives the associated decoherence rate. A total decoherence rate per spin can be estimated by approximating the elements v^k_i of the normalized eigenvectors as uncorrelated random variables; their variance should be 1/N to enforce unit normalization. We also approximate |α_+| = 2 and α_− = 0, which is valid well above threshold. The summed decoherence rate per spin can then be approximated in closed form.

Appendix D: Stochastic unraveling and reconstructing the spin measurement record

The general formalism for homodyne unraveling has been well described by various authors [6][7][8]. We provide a brief discussion of the specific measurement approach, and thus the associated unraveling scheme used in this work, which is based on invariance properties of the Lindbladian.

Spatial heterodyne detection is derived from the interference pattern of the LO and cavity light on a charge-coupled device camera [11,33]. While the measurement is a spatial heterodyne detection, meaning that the LO and cavity light have different propagation directions, the LO and cavity light possess the same optical frequency, as in homodyne detection. We thus model the detection scheme as a balanced homodyne detection of the emitted cavity field.

To derive the unraveling corresponding to balanced homodyne measurements, we start from the atom-only master equation in Lindblad form, where the collapse operators C_k are given by Eq.
(7). We can then manipulate the master equation to cast it in terms of collapse operators corresponding to balanced homodyne detection. First, we write each Lindblad superoperator as a sum of two equal terms with rescaled collapse operators. We then use the shift-invariance property of the master equation, which admits shifts of a collapse operator C → C + aI provided the Hamiltonian is also modified as H → H − i(a*C − aC†). We perform shifts C_k → C_k ± iβI for each k and pair of collapse operators in Eq. (D2); here, β is a real number proportional to the LO amplitude. Performing this shift yields the master equation (D3). The collapse operators (C_k + iβ)/√2 and (C_k − iβ)/√2 correspond to measurements of the two field quadratures, respectively. The Hamiltonian contributions from these two collapse operators cancel out up to a global constant offset. The master equation is now unraveled into quantum trajectories using the standard quantum-jump formalism [6], but with the shifted collapse operators above. The extra field β means that the probability of a jump at each time step is higher, but the form of the collapse operator means that each jump produces a smaller change to the state of the system. As one interpolates between β = 0 and β = ∞, this approach thus interpolates between quantum jumps (in terms of the original jump operators) and continuous quantum state diffusion.

The value of β used in our simulations is 0.1√κ; recall that β corresponds to √κ multiplied by the coherent-state amplitude of the LO and the overlap between the LO and emitted cavity light. While this sounds small, the relevant quantity for determining how close the system is to the quantum-state-diffusion limit is the ratio of the number of detections per unit time to the rate associated with spin dynamics. The quantum-state-diffusion limit occurs when detections occur much more quickly than the system dynamics.
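The trade-off described above — larger β means more frequent but gentler jumps — can be made concrete for a single collapse operator. The sketch below uses one spin-1/2 with an assumed C = σ⁻ purely for illustration; it checks that the summed jump rate of the shifted pair (C ± iβ)/√2 is ⟨C†C⟩ + β², and that the post-jump state approaches the pre-jump state as β grows.

```python
import numpy as np

# Single spin-1/2 in basis (|e>, |g>); C = sigma^- is an illustrative choice.
C = np.array([[0, 0], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # pre-jump state

def total_jump_rate(beta):
    """Summed rate of the shifted pair (C + i*beta)/sqrt(2) and (C - i*beta)/sqrt(2)."""
    rate = 0.0
    for sign in (+1, -1):
        A = (C + sign * 1j * beta * I2) / np.sqrt(2)
        rate += np.real(psi.conj() @ (A.conj().T @ A) @ psi)
    return rate

def post_jump_fidelity(beta):
    """Overlap |<psi|psi'>| after a jump with (C + i*beta)/sqrt(2)."""
    A = (C + 1j * beta * I2) / np.sqrt(2)
    phi = A @ psi
    phi /= np.linalg.norm(phi)
    return abs(psi.conj() @ phi)

expect_CdC = np.real(psi.conj() @ (C.conj().T @ C) @ psi)  # <C†C>
for beta in (0.0, 1.0, 5.0, 20.0):
    # The rate grows as <C†C> + beta^2, while each jump disturbs the state less.
    print(beta, total_jump_rate(beta), post_jump_fidelity(beta))
```

At β = 0 a jump projects the spin sharply; by β = 20 the post-jump state is nearly identical to the pre-jump state, illustrating the crossover toward continuous quantum state diffusion.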
The detection rate can be approximated for a given value of β [6]. The bare detection rate corresponding to the k'th collapse operator involves λ_k, the k'th eigenvalue of J, and the term β² boosts the bare rate by the LO strength. Summing over all collapse operators approximates the total detection rate for each spin. This yields a detection rate of approximately 2 MHz at full ramp power when using typical J matrices in the spin-glass regime, which translates to a timescale of about 0.5 µs. The spin dynamics, on the other hand, typically occur no faster than ∼10 µs. Thus, we conclude that the dynamics are similar to those obtained in the diffusion limit.

The measurement records s_i(t) for each spin are constructed from linear combinations of the balanced homodyne signals weighted by v^k_i, the i'th element of the k'th eigenvector of the J matrix. The records s_i(t) change at a rate proportional to ⟨S^x_i⟩, and thus, after sufficient integration time, their signs indicate the steady-state spin configuration. An example of the measurement records for a typical quantum trajectory is shown in Fig. 1(b).

Appendix E: Semiclassical equations of motion

We now derive the semiclassical equations of motion that describe a homodyne unraveling of the master equation. For these semiclassical calculations, we take the quantum-state-diffusion limit appropriate for large spins [6,8], corresponding to the limit β → ∞ in the previous section.
The stochastic differential equation describing the expectation value of an observable A is written in Itô form, where each dW_k is the differential of an independent Wiener process with ⟨dW_k dW*_k⟩ = dt. The Wiener process can be either real, for homodyne detection, or complex, for heterodyne detection. We consider a real Wiener process from here on, for simplicity and without loss of generality. The semiclassical equations of motion are derived by first evaluating the exact equation for S^{x,y,z}_i and simplifying the result through commutation relations. Any remaining product of spin operators A and B is then decomposed into a commutator and an anticommutator, AB → [A, B]/2 + {A, B}/2. This step is necessary to obtain a semiclassical limit that retains the stochastic terms from the homodyne detection. We then perform a mean-field decoupling of the anticommutator terms, ⟨{A, B}⟩/2 → ⟨A⟩⟨B⟩, to arrive at the semiclassical equations of motion, with α_± as defined in Eq. (6).

Appendix F: The Parisi distribution in terms of quantum trajectories

In this section, we first summarize the form of the Parisi distribution in a quantum spin glass, following Refs. [64,65]. We then show how it can be understood in terms of an overlap operator in a doubled Hilbert space. This leads to the trajectory formulation of the overlap distribution presented in Eq. (11) of the main text.
Parisi order parameter for a quantum spin glass

For simplicity, we consider here the SK model in a transverse field; it is a close approximation of the more realistic model derived in Sec. II. The Hamiltonian takes the transverse-field SK form, where σ^{x,y,z}_i are Pauli operators acting on one of the N total spins. The couplings J_ij are all-to-all, with each element sampled independently from a Gaussian distribution with zero mean and variance σ²_J/N. We consider the equilibrium density matrix ρ = e^{−H/T}/Z with partition function Z = Tr e^{−H/T}. Application of the Suzuki-Trotter formula [45,46] allows for a reformulation of Z in terms of an equivalent classical model in a higher dimension. The classical energy is given in Eq. (F2) [47], where s denotes the set of classical spin variables s_{i,τ} = ±1. The term h_c = (T/2) ln[coth(h_q/LT)] describes a nearest-neighbor coupling in the Trotter dimension, indexed by τ, with periodic boundary conditions. The mapping becomes exact in the limit that the number of sites L in the Trotter dimension tends to infinity.
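The Trotter mapping can be checked on the smallest possible example: a single spin in a transverse field with no J couplings, for which both sides are available in closed form. The transfer-matrix sketch below uses the nearest-neighbor coupling h_c = (T/2) ln coth(h_q/LT) quoted above; the single-spin setting and parameter values are illustrative only.

```python
import numpy as np

# Single spin-1/2 in a transverse field h_q at temperature T: H = -h_q * sigma^x.
h_q, T, L = 0.7, 0.5, 8

# Exact quantum partition function: Z = Tr exp(h_q sigma^x / T) = 2 cosh(h_q / T).
Z_exact = 2 * np.cosh(h_q / T)

# Suzuki-Trotter classical chain: L sites, coupling h_c, periodic boundaries.
a = h_q / (L * T)
h_c = T * np.log(1 / np.tanh(a)) / 2    # h_c = (T/2) ln coth(h_q / LT)
K = h_c / T
norm = (np.sinh(2 * a) / 2) ** (L / 2)  # per-bond normalization from the Trotter split
# Transfer matrix over s = +/-1: Tr M^L = (e^K + e^-K)^L + (e^K - e^-K)^L.
Z_classical = norm * ((np.exp(K) + np.exp(-K)) ** L + (np.exp(K) - np.exp(-K)) ** L)

print(Z_exact, Z_classical)
```

For this single-spin case the two values agree for any L, since the Hamiltonian contains only one term; with couplings present, agreement is recovered only as L → ∞.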
Mapping the quantum partition function to an effective classical one unlocks the tools of replica theory. The free energy is evaluated via the "replica trick" log Z = lim_{n→0}(Z^n − 1)/n, reducing the problem to computing the replicated partition function Z^n for n replica spin systems s^α. The free energy is then averaged over the quenched disorder matrices. This corresponds to the disorder-averaged, replicated partition function of Eq. (F3), where p(J_ij) = exp(−N J²_ij/2σ²_J)/√(2π)σ_J and the sum is taken over all replica spin states. The disorder integrals can be computed exactly, which introduces a coupling between replicas. One finds that in the large-N limit the partition function can be expressed in terms of a local effective action with spins at different sites i decoupled, Z^n = ∏_i e^{−S_i}, giving a single-site action [64]. The matrix Q_αβ is the overlap order parameter that, after performing the disorder average, describes a coupling between replicas. The order parameter χ_Δτ is a replica-diagonal correlator describing a translation-invariant coupling in the Trotter dimension. Evaluation of Z^n by the method of steepest descent gives self-consistent equations for these order parameters, Q_αβ = ⟨s^α_{i,τ} s^β_{i,τ′}⟩_S and χ_Δτ = ⟨s^α_{i,τ} s^α_{i,τ′}⟩_S (F5), where ⟨A⟩_S = Σ_{{s^α}} A e^{−S}/(Σ_{{s^α}} e^{−S}) denotes an expectation value over the measure defined by S.

While the overlap matrix may seem an abstract or purely theoretical object, Parisi realized [3] that Q_αβ contains clear physical content: it describes the overlaps, or equivalently the distances, between the large number of distinct thermodynamic states in a single disorder realization. Finally, the Parisi order parameter P(q) is the distribution of the elements of the Q_αβ matrix, P(q) = lim_{n→0} [1/n(n−1)] Σ_{α≠β} δ(q − Q_αβ). (F6) The structure of P(q) [50,65] is modified in the quantum case by a transverse field [47,66,67].
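Operationally, a distribution like Eq. (F6) is estimated by histogramming pairwise overlaps between sampled spin configurations. The sketch below does this for randomly generated ±1 configurations standing in for trajectory steady states; the configurations, the number of clusters, and the flip probability are placeholders, not simulation output.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_traj = 15, 200  # spins per configuration, number of sampled "trajectories"

# Placeholder steady states: each sample is one of a few reference states plus
# random spin flips, mimicking a clustered (glassy) landscape.
refs = rng.choice([-1, 1], size=(4, N))
labels = rng.integers(0, 4, size=n_traj)
S = refs[labels] * np.where(rng.random((n_traj, N)) < 0.1, -1, 1)

# Overlap matrix q_ab = (1/N) sum_i s_i^a s_i^b; P(q) is the histogram of its
# off-diagonal elements.
Q = S @ S.T / N
q_vals = Q[np.triu_indices(n_traj, k=1)]
P, edges = np.histogram(q_vals, bins=np.linspace(-1, 1, 2 * N + 2), density=True)
print(q_vals.min(), q_vals.max())
```

With clustered samples the histogram develops peaks at the intra- and inter-cluster overlaps, the discrete analogue of the interior structure of P(q) discussed later.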
Parisi distribution from the overlap operator

To establish a connection between Q_αβ as predicted by replica theory and a physical observable of the quantum system, we consider the moments q^{(k)} ≡ ∫ dq P(q) q^k of the Parisi distribution, using Eq. (F6). We then insert the self-consistency equation (F5) for Q_αβ and make use of the fact that different sites decouple in the effective action, Eq. (F4), in the large-N limit. The expectation value over the effective action S, in which the disorder has been integrated out, is then related back to expectation values at the level of individual disorder realizations. Following Ref. [68], each distinct replica index appearing in the expectation value under S corresponds to a distinct thermal average under the classical energy (F2). The result is then averaged to yield q^{(k)} = [q^{(k)}_J]_J, where [•]_J denotes the disorder average over J realizations and q^{(k)}_J is the moment associated with an individual disorder realization, given in Eq. (F8). Here, ⟨•⟩_E denotes a thermal average with respect to the classical energy in Eq. (F2). This can be related back to quantum expectation values as ⟨∏_j s_{i_j,τ}⟩_E = Tr[ρ_J ∏_j σ^x_{i_j}], where ρ_J is the equilibrium density matrix and the expression holds for any choice of τ [47]. The key step is then to apply the simple relation (Tr X)² = Tr[X ⊗ X]. The resulting term raised to the power k is precisely the overlap operator O discussed in Eq. (9) of the main text, leading to the relation q^{(k)} = [⟨O^k⟩]_J. The characteristic function φ(t) for the overlap distribution of each J is now straightforward to compute given this simple form for the moments. We obtain the overlap distribution through a Fourier transform of the characteristic function; performing the disorder average then yields the Parisi distribution, where I is the identity matrix in the matrix exponential.
The exponential term can be recognized as the integral form of the Dirac delta function. To describe explicitly the action of the delta function, we expand the overlap operator in its eigenbasis as O = Σ_q q P_q, where the sum runs over all N + 1 allowed values of the spin overlap q = −1, −1 + 2/N, …, 1. The operators P_q are orthogonal projections onto the space of spin-state pairs with overlap q. They mutually commute, [P_q, P_q′] = 0, and satisfy the product relations P_q P_q′ = P_q δ_qq′. Furthermore, the projectors span the full space of spin states and thus form a resolution of the identity, I = Σ_q P_q. Inserting these forms into the integral expression of Eq. (F11) and evaluating the operator exponential, we find

Σ_q′ ∫ (dt/2π) exp[it(q′ − q)] P_q′ = Σ_q′ δ(q′ − q) P_q′. (F12)

Inserting this form back into Eq. (F11) produces the form of the overlap distribution presented in the main text,

P(q) = [Σ_q′ δ(q − q′) Tr[(ρ_J ⊗ ρ_J) P_q′]]_J. (F13)

A variety of shapes are observed. Many distributions show interior support, such as peaks or a smooth filling. Interior support occurs in the most glassy J matrices, where many nearly degenerate minima contribute to the low-energy manifold. Others show less pronounced interior structure; this occurs when the energy landscape is dominated by a single ground state with energy much lower than any other local minimum. Even in these cases, there are often signs of smaller structures in the interior region arising from infrequently found local minima. This distinguishes the system from any ferromagnetic system, where only two goalpost peaks emerge from the single paramagnetic peak as the system cools.

FIG. 2.
(a-b) Example of two independent quantum-trajectory simulations for the same system with a glassy J connectivity. The top panels show the dynamics of the N = 15 spin-1/2 system pumped through threshold. The spins begin to organize in the σ^x_i quadrature when the pump strength approaches the semiclassical normal-to-superradiant threshold at the time t_c indicated by a vertical dashed line. Note that the quantities plotted differ from the simulated measurement records in Fig. 1(b). (c) The same spin quantum trajectories from panel (a), shown on (and within) the Bloch sphere. Note that here the 15 traces are colored by radius rather than by spin index. The red arrows show the general flow of the spin trajectories. (d) The averaged paths of the 15 spins, taken over 200 quantum trajectories; for each of the 15 spins, only trajectories with the same steady state are averaged together. (e-f) The bottom panels show the entanglement entropy versus time of each spin for the trajectory simulations above; e.g., panel (e) pairs with panel (a). (g) The entanglement entropy per spin averaged over all 200 trajectories.

FIG. 3. Example semiclassical trajectory simulation for the same frustrated J matrix and the same pump ramp schedule as in Fig. 2(a). (a) A sharper transition occurs, with (b) dynamics restricted to the surface of the Bloch sphere. Panel (a) shows that the semiclassical transition, indicated by a vertical dashed line, is reached at the time expected from Eq. (3). The red arrows in (b) show the general flow of spin trajectories.

FIG. 4. (a) Plot of the energy of 20 independent semiclassical trajectories for the same J matrix as in Fig. 2.
The energy is normalized to the instantaneous quantum ground-state energy E_0. The N = 15 network has M = 10^5 spins per node. The shaded region is inaccessible to any unentangled state, and thus the semiclassical trajectories must climb over the barrier before reaching steady state. (b) By contrast, quantum trajectories can pass through the semiclassical energy barrier via entanglement between spins, providing access to lower-energy steady states. Plotted are the normalized energies of 20 independent quantum trajectories for the same J matrix as above. Note the change in y-axis scale. The vertical dashed line marks the semiclassical threshold. The red (blue) traces in panel (a) [(b)] are drawn in different shades solely to help distinguish one trace from another.

FIG. 5. (a-e) Time evolution of q_αβ for 200 quantum trajectories corresponding to the J matrix in Figs. 2-4. Each panel shows the probability for each spin-overlap value to occur at the given time during the ramp sequence. Error bars in this figure and in subsequent figures are estimated from bootstrap analysis; see App. G for details. (f-j) Time evolution of the full overlap matrix q_αβ; the time corresponding to each panel is the same as in the panel above. The histograms in (a-e) can be recovered by binning the off-diagonal values in the overlap matrix. Each overlap matrix is ordered via an independent hierarchical clustering of spin states at each time.

FIG. 6. (a) The energy of the six local minima of the Ising energy for the J matrix considered in Figs. 2-5, together with the occurrence probabilities of those local minima as steady states for the 200 quantum trajectories. (b) The spin-overlap matrix of these six local minima. (c) The same spin-overlap distribution as in Fig.
5(j), but with annotated peaks. The notation X:Y indicates that the peak arises from the overlap of local minima X and Y in the list of panel (a). Finite probability in the unlabeled bins is due to fluctuations of the overlap peaks around the labeled local minima.

FIG. 7. Steady-state overlap distribution and overlap matrix for 200 quantum trajectories of three different confocal J matrices. The best-fit thermal distributions are shown in red. (a) Example of a J matrix that produces a dominant low-energy state; most trajectories find the same steady-state spin configuration. The temperature of the thermal fit is T_fit = 0.17(1) T_c. (b) A J matrix with multiple peaks and levels of structure in the overlap; the thermal fit yields T_fit = 0.23(4) T_c. (c) A more complex J matrix, for which quantum trajectories find a large set of steady-state configurations; the thermal fit yields T_fit = 0.17(3) T_c.

FIG. 8. Aggregate overlap and magnetization distributions in the spin-glass, ferromagnetic, and paramagnetic phases. The best-fitting thermal approximation is shown in red in all panels. (a) Confocal J matrices in the spin-glass regime, showing a smooth Parisi-like distribution and a magnetization peaked near zero; the thermal fit yields T_fit = 0.21(2) T_c. (b) Ferromagnetic regime, showing the absence of interior support in the overlap and strong peaks in the magnetization; the thermal fit yields T_fit = 0.011(2) T_c. (c) The same set of spin-glass J matrices as in panel (a), but with the system rapidly quenched into the superradiant regime rather than ramped according to f(t). The overlap and magnetization both cluster around zero, indicative of a paramagnetic phase. The thermal fit is poorly constrained in this regime: shown is the distribution for T_fit = 5 T_c, which bears similarity to the data.

FIG. 9.
(a-e) Time evolution of the J-averaged Parisi distribution. The overlap distribution is averaged over all 100 J matrices in the confocal spin-glass regime. The times are chosen to match those in Fig. 5. The steady-state distribution emerges after approximately 2 ms, demonstrating the distinctive goalpost peaks with a continuously filled interior.

FIG. 11. All 100 overlap distributions used in Fig. 8(a). The y-axis is probability on a linear scale. The 100 J matrices used are all derived from those realizable in confocal cavities in the spin-glass regime, as described in the main text.

The diagonalized atom-only collapse operators are, in general, non-Hermitian. Each record is initialized as h_k(0) = 0. Every time a jump occurs in (C_k + iβ)/√2, h_k(t) is increased by one, and it is decreased by one every time a jump occurs in (C_k − iβ)/√2. The homodyne records do not yet reflect the spin measurements, because the collapse operators C_k are composed of linear combinations of spin operators from different sites. To construct the spin measurement records s_i(t), the relation between homodyne records and spin states must be inverted by taking the appropriate linear combination of the h_k(t).
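A minimal sketch of this inversion step: if each counting record h_k(t) drifts at a rate set by Σ_i v^k_i ⟨S^x_i⟩, then (for a symmetric J, whose eigenvector matrix is orthogonal) the spin records follow as s_i(t) = Σ_k v^k_i h_k(t). The drift-plus-noise model of the records below is an assumption for illustration, not the paper's detection model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, steps = 8, 20000

# Orthogonal eigenvector matrix of a symmetric J; rows of V are eigenvectors v^k.
J = rng.normal(size=(N, N))
J = (J + J.T) / 2
_, vecs = np.linalg.eigh(J)
V = vecs.T  # V[k, i] = v^k_i

x = rng.choice([-1, 1], size=N)   # hidden steady-state spin configuration
drift = 0.3 * (V @ x)             # assumed per-step drift of each record h_k

# Counting records: drift plus unit-variance noise, accumulated over time.
increments = drift + rng.normal(size=(steps, N))
h = np.cumsum(increments, axis=0)  # h[t, k]

# Invert: s_i(t) = sum_k v^k_i h_k(t), i.e. s = h @ V in this convention.
s = h @ V
recovered = np.sign(s[-1])
print(recovered)
print(x)
```

Because V is orthogonal, the inversion concentrates the drift of record k back onto spin i, and after enough integration time the sign of s_i(t) reveals the steady-state configuration, as described in Appendix D.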
Imogolite Nanotubes and Their Permanently Polarized Bifunctional Surfaces for Photocatalytic Hydrogen Production

Abstract: To date, imogolite nanotubes (INTs) have been used primarily for environmental applications such as dye and pollutant degradation. However, imogolite's well-defined porous structure and distinctive electro-optical properties have prompted interest in the system's potential for energy-relevant chemical reactions. The imogolite structure leads to a permanent intrawall polarization arising from the presence of bifunctional surfaces at the inner and outer tube walls. Density functional theory simulations suggest this bifunctionality also encompasses spatially separated band edges. Altogether, these elements make INTs appealing candidates for facilitating chemical conversion reactions. Despite this potential, the exploitation of imogolite's features for photocatalysis is in its infancy, thence relatively unexplored. This perspective overviews the basic physical-chemical and optoelectronic properties of imogolite nanotubes, emphasizing their role as a wide-bandgap insulator. Imogolite nanotubes have multifaceted properties that could lead to beneficial outcomes in energy-related applications. This work illustrates two case studies demonstrating a step forward in photocatalytic hydrogen production, achieved through atomic doping or a metal co-catalyst. INTs exhibit potential in energy conversion and storage, owing to their ability to accommodate functions such as enhancing charge separation and influencing the chemical potentials of interacting species. Yet, tapping into this potential for energy-relevant applications requires further experimental research as well as computational and theoretical analysis.

Introduction

The pressing need to attain carbon neutrality and zero emissions is driving strong demand for sustainable chemical conversion and the use of renewable green energy sources.
[1,2] The goal is to reach 32% renewable energy consumption by 2030, as established by the European Union (EU) Energy Directive Council. [3] Solar energy, holding terawatt potential, stands as a candidate for facilitating the energy transition. [4][14] Conventional metal oxides, chalcogenides, carbon-based materials, and one-dimensional (1D) nanoplatforms are typical functional and active catalysts. Addressing the issue of photocatalysis involves processes taking place on short time scales but also over restricted distances. [15][23][24] The rise of nanoscience and nanotechnology has offered an ideal playground for exploring these concepts in a variety of different nanoreactors, from nanotubes to more complex three-dimensional porous architectures. [25,26] The strong interest in nanotubes is due to their well-defined, oriented porous structure combined with unique electro-optical properties induced by the rolling up of the atomic lattices. [27][30][31] In brief, the ion/energy storage capacity is primarily determined by the surface terminal groups, [32] likely hydroxyls in the case of imogolite nanotubes (INTs). Furthermore, the hollow internal diameter enables ion hosting within the cavity, adding a secondary storage mechanism. As far as photocatalysis is concerned, the literature is dominated by studies on TiO2 nanotubes, [33][34][35] although other photoactive metal-oxide semiconductor nanotubes have been proposed. [36] TiO2 nanotubes stand out as a noteworthy example. [37,38] One prominent TiO2 nanotube platform exhibited competitive hydrogen production rates, ranging from 30 µmol h⁻¹ g⁻¹ to 80 mmol h⁻¹ g⁻¹.
[34] However, it is important to note that these rates were achieved in conjunction with the use of co-catalysts. As for bulk materials, coupling nanotubes with metallic nanoparticles is usually necessary to promote the generation and separation of photoexcited carriers in the structure. Although the loading of noble metals as co-catalysts is beyond the scope of this perspective, recent research findings highlight the potential of exploring this avenue as an alternative direction for improving hydrogen production rates. [39,40] Another strategy is to exploit the polarization effect that originates from the noncentrosymmetric arrangement of atoms in the structure. [41,42] The permanent polarization induces spatially separated positive and negative surfaces in the catalyst, which may allow efficient electron-hole separation. Although several photoactive 2D and 3D porous nanostructures with permanent polarization have started to appear in the literature, [43,44] the exploitation of 1D polarized nanotubes remains in its infancy. [47] Notably, INTs' cross-sectional separation of the band edges and permanent polarization may be exploited for more elaborate chemical conversion reactions, specifically in energy applications such as the hydrogen evolution reaction (HER). [48] In its initial section, this perspective emphasizes the primary features and physical-chemical properties of INTs, defining their potential significance as an innovative wide-bandgap insulator. The following section delves into two case studies that exemplify groundbreaking outcomes in H2 production achieved through cutting-edge material modifications: atomic doping and Schottky-junction interfacing. Lastly, opportunities, directions, and modifications are proposed to continue unleashing the INTs' potential.

Bifunctional Surface: Inner and Outer of the Tube

Imogolite nanotubes are unique 1D nanostructures, first recognized in weathered volcanic soils as true hollow tubes, [49] 30 years before the emergence of the "nanotube era."
[50] The chemical composition of INTs is defined as (OH)3Al2O3Si(OH) from the outside to the inside of the tube wall. It consists of an outer di-octahedral Al(OH)3 wall on which isolated O3Si(OH) tetrahedra are covalently bonded upright to the octahedral vacancy, forming the inner surface of the nanotube wall (Figure 1). The advantage of INTs over other nanotubes arises from a well-defined minimum of strain energy, [51] which leads to a defined number of tetrahedra along the nanotube circumference. [52] There are no imogolite deposits to speak of, and their extraction from soils requires several purification operations before a sufficient quantity can be obtained. On the other hand, synthetic INTs can be obtained by straightforward methods using low-temperature hydrothermal protocols. [53,54] The benefit is the ability to control or modify the nanotube crystallochemistry by changing the synthesis conditions and to obtain new imogolite-like (or geo-inspired) nanostructures. [55,56][60][61][62][63] These nanotubes all have the same chirality, with a zig-zag configuration. [52,64,65] The peculiar structure of INTs, with inner and outer hydroxyl groups, also makes them excellent candidates for surface modification. It has been widely documented that the outer wall can be functionalized with various coupling agents such as noble and/or transition metals [66][67][68] and polymers. [55,69] More interestingly, the inner cavity of hydrophilic INTs can be rendered hydrophobic by a postsynthesis chemical modification. Postfunctionalization can be achieved with different silane agents after a perfect dehydration of the hydrophilic INT samples, but the degree of inner-surface substitution is lower than 35%. [70][73] As for hydrophilic INTs, a dependence of the inner cavity diameter on the substitution rate is obtained from the silicon to the germanium analogue end-members, with a progressive increase of the nanotube diameter.
[74] Interestingly, methylated INTs roll up into an armchair structure, [75] unlike their zig-zag hydroxylated analogs, which may strongly impact their optical and electronic properties. Overall, synthetic INTs offer a whole family of 1D nanoporous structures with functionalized interfaces, namely single-walled aluminosilicate (SW-AlSi-OH), aluminogermanate (SW-AlGe-OH), methylated aluminosilicate (SW-AlSi-Me), and methylated aluminogermanate (SW-AlGe-Me) nanotubes, as well as double-walled aluminogermanate (DW-AlGe-OH) nanotubes (Figure 1). [76]

Permanent Polarization

Owing to their tubular geometry and compositional radial symmetry, INTs invariably present a permanent dipole density across the single- or double-wall interface. This feature, first inferred from electrophoresis measurements by Gustaffson in 2001, [77] has been directly [78][79][80][81] or indirectly [82] confirmed by density functional theory (DFT) as well as tight-binding DFT (TB-DFT) [83] simulations of several members of the imogolite family. Regardless of the zigzag or armchair rolling, INTs present, at the DFT level, an accumulation of negative and positive charge on the inner and outer surface, respectively. One notable exception is provided by the results for SW-AlGe-Me nanotubes, which present an inverted dipole density, with the negative and positive ends pointing toward the outer and inner surface, respectively. [75] As shown in Figure 2, the dipole density across the nanotube walls is accompanied by a marked separation of the valence band edge (VBE) and conduction band edge (CBE) on opposite sides of the nanotube cavity, which has been suggested to be conducive to the presence of long-range charge-transfer optical excitations.
[79,80,84] General, composition-agnostic electrostatic models describing the interplay between the permanent dipole density and the associated electrostatic potential step, and thence the polarizing electrostatic field (vide infra), across the nanotube wall(s) have been provided in a prior study. [80] Preliminary simulations indicate such a permanent polarization to be partially resilient to the presence of point defects, which introduce very local perturbations in the nanotube electrostatics. [78,84] Notably, recent simulations of highly idealized, and thence inevitably approximated, structures for the tube ends [81] suggest that the INTs can develop, in addition to a radial dipole density, also a longitudinal polarization and an ensuing band bending due to physical truncation and the ensuing structural relaxation. [85]

Figure 2. Schematic representation of the permanent dipole density (μ) at the wall of imogolite nanotubes, and the associated interface electrostatic potential step (ΔV), that can be used to polarize differently the reactants inside and outside the NT cavity, offering potential control of their electron-transfer kinetics. The panel also shows the μ-induced real-space separation between the valence band edge (VBE, green) and the conduction band edge (CBE, red), and the intra-wall charge-transfer excitations (wiggly arrow) separating electrons (e−) and holes (h+) on different sides of the NT cavity. Reproduced with permission. [80] Copyright 2023, Wiley.

Band-Bending of Adsorption and Desorption Species

DFT simulation of different hydroxylated and methylated INTs indicates that the dipole density across the wall, and the associated electrostatic potential steps (Figure 2), are effective in tuning not only the adsorption geometries and energies of interacting molecules such as H2O but also the energy alignment of the molecules' electron acceptor and donor levels.
[86] Combined with the demonstrated effects of the nanotube structure and electrostatic environment in altering molecular motions at the nanotube-H2O interfaces, [87,88] these findings prompt additional research into the potential of INTs and of the associated radial polarization for the design and control of electron-transfer kinetics at H2O interfaces, as needed for improved, up-scalable photocatalytic strategies for hydrogen generation. [91-93] The nanotube polarization will self-evidently polarize different molecular species (of different polarizability) differently. Suggestions have accordingly also been put forward on the potentially beneficial use of INTs as a co-photocatalyst, i.e., as a support for grafted molecular or nanoparticle-based photocatalysts. [79] As the polarizability, i.e., the response to the electric field from the nanotube dipole density, will invariably differ between H2O molecules and the grafted photocatalysts, novel and favorable band alignments might be engineered and realized, in principle even using systems of known unsuitability for water photoreduction (e.g., Fe2O3 and WO3) [94] due to their insufficiently high CBE with respect to the H2/H2O redox potential.
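For context on the band-alignment requirement invoked here, the textbook condition for a semiconductor to photoreduce water is that its conduction band edge lie at a more negative electrode potential than the proton reduction level, which shifts Nernstially with pH:

```latex
E_{\mathrm{CBE}} \;<\; E(\mathrm{H^+/H_2})
\;=\; 0\,\mathrm{V} \;-\; 0.059\,\mathrm{V}\times\mathrm{pH}
\qquad (\text{vs. NHE, } 25\,^{\circ}\mathrm{C})
```

Fe2O3 and WO3 fail this condition, which is what makes the engineered band alignments discussed above attractive.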
Benefit Summary of Imogolite Nanotubes as Photoelectrocatalysts

The distinctive combination of a bifunctional surface, permanent polarization, and band-bending modulation brings specific properties to alumino(germano)silicate nanotubes, enabling new chemical potentials for driving the typical energetic reactions, namely H2O splitting, CO2 reduction, and N2 fixation. The adaptability of nanostructuring single or double walls becomes particularly interesting for improved photon management. Unlike conventional semiconductors, INTs demonstrate the ability to transport photogenerated carriers across their walls. As a result, these multifaceted nanotubes hold the potential to manipulate the chemical potentials of model photocatalytic reactions with minimal material modifications. Together, the polarization, surface, and band bending of INTs may enable unusual but distinctive effects with promising performance benefits:
- Increase of the charge carrier concentration and of the continuous flux after photon activation.
- Enhancement of carrier migration to the exposed outer surface, since the distances across the single and double walls are relatively short.
- Improvement of the time scales of adsorption and desorption of species, by polarization and by surface charge affinity with the reactant molecules.
- Downward and upward shifts of the chemical potentials of model photocatalytic reactions upon modification.
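Several of the effects listed above trace back to the electrostatic potential step across the wall. As a quick quantitative anchor (a planar-sheet approximation from elementary electrostatics, not a reproduction of the cited cylindrical models [80]), a thin layer with surface dipole density μ_s, pointing from the inner to the outer surface, sustains a potential step

```latex
\Delta V \;=\; V_{\text{out}} - V_{\text{in}} \;=\; \frac{\mu_s}{\varepsilon_0},
```

so that a wall of thickness t carries an average polarizing field E ≈ ΔV / t across it; the composition-agnostic models of the prior study [80] refine this picture for the actual single- and double-wall geometries.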
Hydrogen Production upon Photocatalytic and Radiolytic Activation

Two case studies were selected from two French teams with over a decade of experience in INT modifications. These teams have started to explore the catalytic performance of INTs using different activation sources. The discussion in this section centers on appreciating the unique aspects of each study, given the substantial differences in experimental conditions, rather than directly comparing the materials' modifications or performance, particularly regarding the activation sources (nature: photon or radiation; lamps: monochromatic or full-spectrum; and power intensities). Nevertheless, each catalytic case study plays a role in positioning imogolite in energy applications (the hydrogen evolution reaction, HER), and both are carefully analyzed to reveal its potential as a nanoreactor.

Firstly, Jiménez et al. developed Ti-modified aluminogermanate double-wall imogolite nanotubes. [95] The authors reported an HER of 1500 μmol g−1 (Figure 3a) for the optimal 0.4 Ti/Ge-INT composite with a typical sacrificial agent concentration (methanol, 33% by volume) and full Xe lamp (300 W) irradiation. [95] Interestingly, this result was obtained without any co-catalyst, thus placing the material in the metal-free category. The high photoactivity was attributed to low recombination rates and to electrons accumulated at the available Ti sites, which presumably present longer lifetimes. Rather than relying on metal nanoparticles as co-catalysts and electron collectors, the hybrid active composite's inherent polarization facilitates enhanced photogenerated charge separation. Methanol decomposition functions to block the oxygen evolution reaction (OER) while simultaneously generating protons. These processes occur within the bifunctional-surface nanotube, leading to effective spatial separation of the useful charge carriers, with proton reduction taking place at the outer wall of the NT. Secondly, Patra et al.
reported a value of 1443 μmol g−1 HER (Figure 3b) for aluminosilicate INT-CH3 containing 20% Au NPs as co-catalyst, with propan-2-ol (20%) as sacrificial agent and monochromatic (254 nm, 15 W) irradiation. [96] Notably, such an INT-CH3/Au photoactive composite exhibits an efficient metal-semiconductor interface, leading to a substantial contribution from the gold catalytic surface, while the possibility of polarization and confinement effects from the INT-CH3 cannot be disregarded. The INT-CH3/Au composite has also shown an interesting size-dependent trend: Patra et al. compared Au NPs of 5 and 10 nm average size, which gave hydrogen production 10 and 90 times higher, respectively, than bare INT-CH3. This outcome indicates the role of metallic Au and highlights the active site on the outer surface of INT-CH3, which effectively converts nearby protons into H2. The second activation source used was gamma radiation from 137Cs in an argon atmosphere (Figure 3c). However, the H2 production was one order of magnitude lower than under photoactivation, suggesting that the reduction mechanism via radiation does not bring the same benefits as photons in creating a feasible metal-semiconductor Schottky barrier.

Figure 3. a) Photocatalytic, [95] b) photolytic, and c) radiolytic [96] hydrogen production of imogolite nanotubes (INTs) functionalized with −CH3 groups and containing Au NPs.

Conclusions and Outlook

The chosen case studies on H2 generation via photocatalytic, photolytic, and radiolytic activation provide experimental proof of concept and evidence the clear potential of INTs as a strong candidate to be added to the list of topical semiconductor photoelectrocatalysts. Extensive progress has been achieved in characterizing diverse aspects of this nanotube, such as its physico-chemical, optoelectronic, magnetic, bulk, and surface properties. However, it is crucial to include an examination of the charge carrier dynamics in order to complete a comprehensive understanding of its behavior. Considering the two presented case studies, it is evident that INTs hold great potential to be further developed into a widely in-demand metal oxide with remarkable properties, not just for energy conversion but also for energy storage. Although the latter aspect was not covered in this article, interested readers can find comprehensive details elsewhere. [30] On the other hand, the introduction of imogolite double-wall nanotubes provides a novel 1D nanostructure with controlled photon management for activation, opening up potentially exciting avenues for new research and advances in hydrogen/oxygen generation, carbon dioxide reduction, and nitrogen fixation, all of which warrant further investigation. On the horizon, there are additional opportunities for incorporating metal dopants, coupling with medium-band-gap semiconductors, and/or depositing bimetallic catalysts to continue harnessing and exceeding the current efficiencies of solar fuels technologies.
To further and better support future research efforts into the potential of INTs for photocatalytic applications, two lines of simulation development would be very beneficial. First, the definition and validation of scalable yet sufficiently accurate routes to modelling realistic electronic ground-state interfaces between the nanotubes' surfaces, interacting molecules, and the surrounding media has remained challenging over the last 10 to 15 years. Current advances in DFT-derived machine-learning interatomic potentials hold great promise to this end. Second, viable simulation methods for optical electronic excitations in the nanotubes beyond the single-particle picture used so far, as well as first-principles approaches to excited-state (excitonic) dynamics at model imogolite-water (solvent) nanotube interfaces, are currently missing or yet to be applied. While the continuous increase in academically accessible high-performance computing resources supports an outlook toward the use of many-body approaches to valence-electron optical spectroscopy, progress on the latter front will require intense method-development efforts: to the best of our knowledge, the needed computational machinery is yet to be developed or made available. As for electrocatalytic applications, it is recommended to measure and report the INTs' conductivity and/or resistivity. One can think of introducing vacancies or dopant atoms into their lattice to enhance their conductivity, and therefore electron collection, for pure electrocatalysis.
Figure 1. (Left) Atomic structure of the dioctahedral aluminum layer (blue) and of the location of an SiO4 tetrahedron (orange). Oxygen atoms are in red and hydrogen ones in white. (Right) Representation of the different members of the whole family of imogolite nanotubes with hydroxylated or methylated inner cavities. Orange tetrahedra correspond to aluminosilicate imogolite nanotubes (INTs), while green tetrahedra correspond to aluminogermanate INTs. The inner (outer) radius of each nanotube is provided. Adapted from ref. [76].
Modeling of human spinal column and simulation of spinal deformities
Introduction

Modeling and simulation are widely used in all domains as resourceful tools for designing, enhancing, improving, or forecasting the behavior of different systems. A wide range of software is available to aid in solving engineering problems. However, there are systems, such as biological ones, for which modeling and simulation is a very difficult task. The difficulty originates in two essential features:
• human body parts are very irregularly shaped;
• normal anthropometrical data is very scattered with regard to age, sex, race, profession, local environment traits, and so on.
Therefore, biological models are not yet developed on a large scale, even though they would be very useful in investigating and monitoring patients suffering from widespread diseases. However, there is an encouraging start in modeling different parts of the human body, such as the feet [1], arms [2], mobile bones of the head [3], etc. The purpose of modeling is either depicting abnormal anatomical shapes or designing devices such as prostheses. The present work focuses on the class of spinal deformities, which are very common nowadays. Many individuals suffer from mild or severe spinal column deformities, such as scoliosis, lordosis, kyphosis, or combinations of these. Deformities cause a diminution of personal comfort and of the physical or intellectual capacity for effort. When severe, deformities bring on large distortions of the thorax shape and alteration of the respiratory process. Such spinal diseases occur frequently in the school-age population, due to incorrect posture and/or improper desk design, and less frequently in the adult population, due to sedentary activities (teachers, librarians, IT specialists, etc.). The elderly population also suffers because of irreversible bone alteration. Plenty of statistics describe the prevalence of such diseases in different places of the world, taking into account aspects such as age, sex, profession, and living standard
[4-6]. The studies emphasize in detail the importance of identifying early stages of spinal deformities, because of the cautious prognosis and the very high costs of treatment [7]. Engineering sciences offer a large series of equipment for investigating the human bone system. Among the methods of investigation in use, the most common are X-ray, CT or MR imaging, Moiré topography, digital ultrasonic mapping, and optical scanning [8-17]. The order in which these methods are stated is chronological with respect to implementation and inverse with respect to invasiveness. Countries which develop long-term healthcare programs always include spinal deformities among the main issues to investigate and monitor, especially for school children and persons involved in specific professional activities. The main problems in tracing and monitoring spinal deformities consist of:
• finding a quick and minimally invasive method of investigation;
• establishing a set of numerical parameters to describe the column's shape completely;
• storing a large amount of data, considering the big number of subjects in the database;
• accessing data and evaluating the evolution of patients.
In order to model the spinal column and simulate its behavior, considering a long-term monitoring of an extended sample of the population, the following workflow was conceived (Fig. 1). The workflow spans from the target group of subjects (school children of different ages, computer-operating persons, elderly people, and other groups of persons potentially affected by spinal deformities) to the medical interpretation and decision on adequate treatment (medical gymnastics, physiotherapy, medical corset, surgical correction, etc.). Fig.
1 Workflow of the biometric investigation and monitoring

The text boxes in Figure 1 indicate the logical steps to follow, from establishing the target group of subjects to the physician's decision regarding the results of a complete, objective, and non-invasive investigation. The lower text contains the concrete goals to fulfill at each step.

Equipment and software to provide data for modeling

As Fig. 1 shows, the choice of investigation method went to a totally non-invasive one, based on optical scanning. A team of multidisciplinary specialists developed a diagnosis method based on the equipment offered by the Canadian company InSpeck, specialized in 3D optical measurement and digitizing using non-laser technology. The technical team chose to implement the InSpeck 3D Halfbody system, which needs three cameras (Fig. 2, a). The 3D cameras are of the Mega Capturor II digital type (Fig. 2, b). The optical image acquired by each camera is turned into a signal and sent to a PC, where it is processed by the accompanying software. The InSpeck equipment is an all-purpose imaging system, so special software, specific to the spinal column, was required in order to acquire data. A set of special markers was designed and manufactured to match the reference point of each vertebra: the spinous process. Figure 3 shows the position of the 29 markers, attached to 23 vertebrae (C1…C7, T1…T12, L1…L5, S1…S3), shoulders (U1 and U2), scapulas (O1 and O2), and iliac crests (P1 and P2). The determination of postural parameters, characteristic spinal distances and angles and, finally, global deformities needs the knowledge of vertebra coordinates along as extended a zone as possible. The practical measurements included a long segment of the spine, comprising the vertebrae from C7 to S3. Fig. 4, a renders an image of the vertebral spine. Different colors and symbols are assigned to the cervical (C1…C7), thoracic (T1…T12), lumbar (L1…L5), and sacral (S1…S3) segments.
For a better description of the posture and deformities, six supplemental markers picked the coordinates of the shoulders (U1 and U2), scapulas (O1 and O2), and iliac crests (P1 and P2). As the equipment is able to pick 3D coordinates, the characteristic parameters of the spine are defined within one of the three anatomic planes (Fig. 4, b): xy (frontal plane), zy (sagittal plane), and xz (transversal plane). Knowing the 29 triplets of coordinates (x, y, z), a large series of parameters can be defined. Tables 1, 2, and 3 present the significant parameters of the vertebral column, as proposed by the authors. Besides the parameters of the vertebral column within the projection planes, the effective lengths measured in 3D should also be considered: the total length (from C7 to S3), the thoracic length (from C7 to L1), and the lumbar length (from L1 to L5). The hardware configuration and the software facilities of FAPS 5.5 and EM 5.5, previously mentioned, were used to create an ASCII file in *.txt format containing the numerical data of the 29 measuring points. The file is meant to be exported to an advanced processing program. The newly developed program, written as a Visual Basic Application, was designed to meet the following requirements:
• development of a database containing a minimum set of information about the patients; the information should be accessed selectively, using different filters, and should allow introducing and saving new data;
• import of data (coordinates of vertebrae) from an *.xls file;
• automated processing of data in order to obtain 16 parameters of posture or deformity;
• numerical and graphical display of results;
• printing of an investigation report containing complete information about the patient (personal characteristics such as name, age, and profession; numerical and graphical results of the investigation; and notes of the physician if necessary).
The graphical interface of the program, named INBIRE, is presented in Fig. 5.
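To make the effective 3D lengths above concrete, the following sketch (not the actual INBIRE code; the marker ordering is taken from the text, while the sample coordinates are fabricated for illustration) computes the total, thoracic, and lumbar lengths as polyline lengths over the ordered chain of measured markers:

```python
# Illustrative sketch of the 3D length parameters (total C7-S3,
# thoracic C7-L1, lumbar L1-L5) from marker coordinates.
import math

def polyline_length(points):
    """Sum of Euclidean distances between consecutive 3D points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def segment(markers, order, start, end):
    """Slice the ordered marker chain from label `start` to `end`, inclusive."""
    i, j = order.index(start), order.index(end)
    return [markers[name] for name in order[i:j + 1]]

# Ordered chain of spinal markers actually measured (C7 down to S3).
order = (["C7"] + [f"T{k}" for k in range(1, 13)]
         + [f"L{k}" for k in range(1, 6)] + [f"S{k}" for k in range(1, 4)])

# Fabricated example coordinates (mm): a straight vertical spine with
# 20 mm between consecutive markers, just to exercise the functions.
markers = {name: (0.0, -20.0 * i, 0.0) for i, name in enumerate(order)}

total    = polyline_length(segment(markers, order, "C7", "S3"))
thoracic = polyline_length(segment(markers, order, "C7", "L1"))
lumbar   = polyline_length(segment(markers, order, "L1", "L5"))
print(total, thoracic, lumbar)  # 400.0 260.0 80.0
```

The in-plane parameters of Tables 1-3 would follow the same pattern, with each coordinate triplet first projected onto the xy, zy, or xz plane.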
Modeling of vertebrae and simulation of column deformations

Numerical parameters vary over large ranges, and plane projections are not intuitive enough for the physician to establish a quick and correct diagnosis. It takes a great deal of time to correlate over 20 numerical characteristics, even though graphical representations are partly helpful. The aim of a large-scale investigation is both efficiency and precision; thus, a flexible 3D model is welcome. Modeling the spinal column is a difficult task, so the literature mentions only a few attempts to accomplish it [18-20]. Taking into account the specific problems occurring in modeling biological parts, the workflow presented in Fig. 6 was conceived. The program chosen to work in was 3Dmax. The general initial data consider the following elements:
• the spinal column is a complex biological structure, containing 33-34 vertebrae, 344 joints, and 24 intervertebral discs;
• the column axis is a 3D-shaped curve.
Each type of vertebra was created starting from a regular shape, namely a cylinder. The functions of the program, such as Cut, Chamfer, Extrude, Smooth, Scale, etc., allowed modifying the primary shape until it attained a perfect match with the vertebrae described in Gray's Anatomy Atlas [21]. Fig. 7 contains a print screen from the process of modeling a thoracic vertebra, whereas Fig. 8 renders all types of vertebrae together with their adjacent discs. To make the models more realistic, colors and textures were assigned to all elements. Specific zones of the column were obtained by cloning the appropriate type of vertebra and disc in the required number. The Array function aligned the vertebrae along a spline drawn using the physiological curvatures. The result is a standard model of the spinal column (Fig. 9). Personalized models result from using the coordinates obtained from the INBIRE program to trace the spinal axis. The PathDeform and Modifiers functions allow relatively easy and fast editing of the standard model. Fig.
10 illustrates, through zoomed areas, a column deformed by scoliosis.

Conclusions

Modeling and simulating is difficult for human body parts, especially for mobile ones. The present paper described the implementation of a totally non-invasive method of investigation for the spinal column, which is frequently affected by deformities. A large number of numerical parameters were suggested for the description of the column's shape. A special software application, INBIRE, was developed to work with the all-purpose imaging system InSpeck. The program provides an interactive database and the facility to export data to the modeling program 3Dmax. Using anthropometrical data, the individual vertebrae and, finally, the entire column were modeled as a standard. The coordinates provided by INBIRE allow the modeling of personalized spinal columns, which can be stored and used by physicians to monitor the evolution of the deformities. The achievements of the research project contribute to the development of local or national healthcare programs, bringing numerical precision and efficiency into the screening and monitoring of spinal deformities, which are widespread, and hard and costly to treat in advanced stages.

Fig. 2 a) Imaging in the Halfbody configuration; b) InSpeck 3D Mega Capturor II digital camera
Fig. 3 Position of the 29 markers indicating the points where 3D coordinates are picked
Fig. 4 a) Image of the vertebral spine; b) the three anatomic planes: xy (frontal), zy (sagittal), xz (transversal)
Fig. 5 Image of the results' display
Fig. 6 General workflow of modeling the spinal column
Fig. 7 Modeling a thoracic vertebra in process
Fig. 8 Models of single cervical, thoracic, lumbar, and sacral vertebrae with adjacent discs
Fig. 10 Details of a column deformed by scoliosis
Table 3 Parameters in the transversal plane (xz)
A Data-Secured Intelligent IoT System for Agricultural Environment Monitoring

Collecting environmental information on crop growth and dynamically adjusting agricultural production has been proved an effective way to improve total agricultural yield. Agricultural IoT technology, which integrates information-sensing equipment, communication networks, and information-processing systems, can support such an intelligent manner of operation in the agricultural environment. Traditional agricultural IoT could meet the service demand of small-scale agricultural production scenarios to a certain extent. However, the emerging application scenarios of the agricultural environment are becoming more and more complicated: the number of data nodes in the underlying access layer of the IoT backend system is increasing greatly, while the upper-layer applications require a high quality of data service. Hence, IoT systems based on the traditional architecture (i.e., centralised cloud computing) suffer from problems such as small network coverage, data security issues, and limited power supply time while attempting to provide high-quality services at the edge of the network. Emerging edge computing offers the opportunity to solve these issues. This paper builds an intelligent IoT system for agricultural environment monitoring by integrating edge computing and artificial intelligence. We conducted an experiment to validate the proposed system with respect to reliability and usability. The experimental results prove the system's reliability (e.g., the data packet loss rate is less than 0.1%). The proposed system achieves a concurrency of 500 TPS and an average response time of 300 ms, which meet the practical requirements of agricultural environment monitoring.

Introduction

Recently, the Internet of Things (IoT) has been applied in several fields, such as agriculture, logistics, and transportation [1-3].
Using various types of miniature integrated sensors, wireless sensor networks (WSNs) can achieve real-time detection, acquisition, and sensing of various objects [4]. The transmission of various kinds of information from physical space (such as temperature and humidity, moisture, and pressure) lets users understand that information intuitively. In China, the establishment of agricultural IT infrastructure is still in its infancy. Therefore, it is impossible to obtain timely information on the crop growth environment parameters of all farms and different locations. To measure the temperature and related ambient conditions, the environmental instruments installed on farms are operated manually on-site, which is time-consuming and inefficient. This situation adversely affects the farmers' ability to monitor and control climatic conditions, which in turn affects the improvement of crop yield and quality [5]. Therefore, a method to improve the accuracy and effectiveness of acquiring the various environmental parameters of the crop growth environment is needed. On the other hand, operating the various types of environmental control equipment so as to achieve intelligent operation and remote control has become a growing concern. The use of the IoT, cloud computing, big data, and other information technologies promotes the transformation and upgrading of the entire agricultural industry chain and vigorously drives the development of intelligent agriculture. The IoT can effectively reduce human labor, accurately capture the crop environment and other information, and carry out actions in a timely manner. But traditional IoT platforms usually adopt a centralised architecture, in which the network bandwidth and processing capacity of the central node can become bottlenecks for the horizontal expansion of the system [6]. The large-scale, heterogeneous edges of the IoT network generate massive amounts of heterogeneous data.
The long network links for data access operations reduce the performance and efficiency of centralised data storage architectures. The resource constraints of IoT nodes make them dependent on the IoT platform to provide rich services externally, and under the centralised IoT architecture the long network links through which edge nodes access IoT services make the network latency high, which makes it difficult for edge IoT nodes to get real-time services [7,8]. By incorporating the computing model of edge computing (EC), an edge processing layer is employed near the device end, effectively reducing the network and computation workload. Edge clouds rely on shorter network links to service invokers, making it possible to provide low-latency network services and low-cost resource access compared to the traditional cloud computing model. However, the edge cloud itself is limited in resources and can easily become a bottleneck for the platform's services on the edge side of the network. For agricultural IoT, the introduction of EC means that many tasks that used to need processing in the cloud can be done locally, with artificial intelligence algorithms and data fusion algorithms, which can greatly accelerate the response speed of agricultural information, improve monitoring accuracy, and enable more targeted development of agricultural environment management strategies in the monitored coverage area. This paper combines the Internet of Things and artificial intelligence technology to build a wide-coverage, low-power IoT monitoring system suitable for monitoring the agricultural environment. It achieves unified management of IoT resources and cooperative computing for environmental monitoring tasks. Given the contextual specificity of environmental monitoring, coupled with the high requirement for real-time data processing, a new agricultural environmental monitoring IoT architecture model is presented in this design.
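One concrete way an edge gateway can realise the workload reduction described here is report-by-exception ("deadband") filtering: readings are processed locally and only significant changes are sent upstream. The sketch below is illustrative only; the threshold value and the temperature stream are assumptions, not details from the paper:

```python
# Deadband filtering at the edge gateway: forward a reading to the cloud
# only when it differs from the last forwarded value by at least `threshold`.
class DeadbandUplink:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = None  # last value actually forwarded

    def filter(self, reading):
        """Return the reading if it should go to the cloud, else None."""
        if self.last_sent is None or abs(reading - self.last_sent) >= self.threshold:
            self.last_sent = reading
            return reading
        return None

uplink = DeadbandUplink(threshold=0.5)          # e.g. 0.5 degC for air temperature
samples = [20.0, 20.1, 20.2, 20.8, 20.9, 22.0]  # raw sensor stream at the edge
sent = [s for s in samples if uplink.filter(s) is not None]
print(sent)  # [20.0, 20.8, 22.0] -> 3 uplink messages instead of 6
```

In practice this would sit alongside periodic heartbeat reports, so the cloud can still distinguish "no change" from "node offline".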
The main contributions of this paper are summarised as follows: (1) Techniques for building an edge computing platform for environmental monitoring were investigated, and the functional architecture and data-communication architecture of the edge computing gateway were studied. (2) A collaborative IoT cloud-edge architecture was proposed to realise unified management of heterogeneous resources. Its resource identification and mapping mechanism is compatible with various IoT identification standards and unifies the platform's resource description methods, reducing the complexity of resource description. (3) The application of an LSTM-based environmental-indicator prediction algorithm to environmental monitoring was explored. In addition, a visualization dashboard for agricultural environmental monitoring data was built: the data collected by all monitoring nodes are displayed dynamically, and the prediction module provides the corresponding decision-making support.

The rest of this paper is organised as follows: Section 2 introduces the related work, Section 3 presents the system architecture and functional details, Section 4 presents the experiments and a discussion of the proposed system, and Section 5 concludes the work.

2. Related Work

2.1. Internet of Things. The Internet of Things (IoT) was first proposed by Prof. Kevin Ashton of the MIT Auto-ID Lab in 1999; it was initially designed to solve problems in supply-chain management [9]. The traditional IoT platform architecture usually consists of a data sensing layer, a network layer, and a business logic layer [10]. Such a centralised architecture has the advantages of easy construction and efficient resource management when the system is small or medium in scale. However, as an IoT system expands, service demands increase (e.g., response time, intelligent analysis, data storage, and privacy protection).
With the popularity of 5G technology, these demands increase further [11, 12]. One response is to deploy a trusted computer or cluster of computers at the edge of the network, with rich service resources, to provide computing and data-storage services to nearby mobile devices. In 2011, Bonomi et al. proposed the concept of fog computing [13], which introduces a fog computing layer between the device and the cloud. It uses local fog devices (e.g., routers, IP video cameras, and switches) to process some task requests in close proximity, thereby reducing the number of tasks transmitted to remote cloud computing centres. In 2013, Ryan proposed the early concept of edge computing [14] to address the rapidly growing number of mobile edge devices. In recent years, distributed IoT architectures based on edge computing have attracted the attention of many researchers [15-17]. However, existing studies and edge computing platforms only consider a single edge cloud's vertical application in an IoT scenario, without considering multiple edge clouds in heterogeneous scenarios.

Guo et al. built the first virtual fencing system based on a wireless sensor network and implemented a research test of automatic grazing for cattle [18]. Vijayakumar and Ramya designed a low-cost, real-time water-quality monitoring system using a wireless sensor network, which requires no wiring and has the advantages of flexible deployment and low cost [19]. [20] explored ZigBee technology under demanding field conditions to ensure stable wireless transmission in an irrigation-area environment. LoRaWAN is designed for long-range communication and networking of devices using LoRa (Long Range Radio) technology and can be networked independently of wireless operators [21].

2.2. Agricultural IoT and Intelligent Computing. Agricultural IoT is the application of IoT technology across the whole industry chain of production, management, operation, and service in agriculture [24, 25].
The most common agricultural IoT application is production-environment monitoring [19]. IoT technology is used to collect information on different elements of the agricultural production environment, including temperature and humidity, light, carbon dioxide, soil water content, and soil fertility. Early IoT applications focused on agricultural information sensing. For example, Wang et al. built a mobile observation system for bovine animals using pulse oximeters, respiratory sensors, body-temperature sensors, environmental sensors, and GPS modules [26], providing a monitoring tool to prevent the spread of diseases in the herd. González et al. developed a method for unsupervised behavioural classification by installing GPS sensors and movement collars on cattle to observe and record foraging [27]. Gill et al. proposed a cloud-based information system that provides agriculture-as-a-service using cloud and big-data technologies [28]; it collects information from different users through preconfigured devices and IoT sensors and processes it in the cloud using big-data analytics. Zhu et al. designed a dedicated IoT platform for precision agriculture and ecological monitoring [29].

The massive amount of data generated by agricultural environmental monitoring exhibits complex and dynamic characteristics and usually involves multiple sectors, regions, and domains. With the advancement of artificial intelligence, advanced data-processing algorithms have been applied to tasks such as air-quality prediction and pollution-source localization. Environmental data are typically a series of observations of physical quantities recorded in temporal order, reflecting how entity attributes evolve over time, i.e., a multidimensional time series [30]. Time series usually follow a specific law of variation, determined by the intrinsic physical properties of the monitored indicators.
Time series prediction refers to mining the intrinsic law of change from a large amount of series data and predicting the next point in time based on this law. Popular time series algorithms include ARIMA [31]. The authors of [32] used wavelet decomposition and reconstruction to smooth the time series and demonstrated its feasibility for analysing atmospheric pollutant concentrations. Changes in environmental information involve multiple factors and have nonlinear characteristics, which significantly affect the accuracy of environmental information prediction. Neural-network-based methods, with self-learning and self-evolving neurons, are good at dealing with nonlinear models. Artificial neural networks have shown better performance in environmental prediction; the first one used by researchers was the BP neural network [33], which has a simple structure. However, it cannot record features from one or more previous moments for learning, which leads to poor prediction and poor generalisation. RNNs [34] and many other neural networks have since been applied to environmental information prediction, with measurable improvements. The various environmental monitoring parameters are affected by many factors, such as climatic and geographic conditions, and do not behave linearly.

3. Data-Secured Intelligent Edge Cloud Architecture for IoT

The workflow of the proposed system is shown in Figure 1. The system integrates various kinds of sensors, RFID, video, and other sensing and monitoring devices to collect farm-specific information. It introduces a cloud-edge collaboration mechanism to process, analyse, and store data, improving the efficiency of network-bandwidth utilisation and guaranteeing the high quality of the platform's external services. The system integrates wireless sensor networks and achieves stable and reliable data transmission over 5G networks.
The system fuses and processes the massive agricultural data it obtains and, in combination with intelligent terminals, realises fully automated monitoring and intelligent analysis of the agrarian environment. The system's main objectives are as follows: (1) To realise unified management of heterogeneous IoT resources. The proposed architecture uses mapping technology to achieve compatibility with various IoT identity standards and adopts a customised identity scheme within the system; a unified resource descriptor is realised by abstracting the behaviour and attributes of resources. (2) To realise cloud-edge collaborative computing services. Data resources are exchanged among the edge clouds through the cloud computing centre, and the services of each edge cloud are integrated to provide intelligent decision support for agricultural environment monitoring, using intelligent algorithms to hierarchically process and compute the massive data.

3.1. Cloud-Edge Collaborative Architecture. The platform provides comprehensive IoT perceptive ability. The terminal is equipped with ULG-series integrated collection-and-transmission equipment and a set of crop-growth-related sensors (e.g., soil moisture, soil pH, air temperature and humidity, soil temperature and humidity, light intensity, and carbon dioxide concentration sensors). The platform's architecture consists of an intelligent sensing layer, a cloud-edge collaboration layer, a heterogeneous network layer, a business logic layer, and a human-machine interface layer (as shown in Figure 2). The sensing layer contains the sensor monitoring nodes. The monitored data are transmitted to the edge computing gateway through wireless transmission protocols such as LPWAN and 802.11g. The transport layer adopts heterogeneous networking based on LoRaWAN and Wi-Fi and applies a star topology with low energy consumption, wide coverage, and high bandwidth.
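The sensing-to-gateway path described above can be sketched as follows. This is a toy illustration only: the class and field names are invented for the example, not the platform's actual API, and the aggregation (a per-parameter mean) stands in for whatever fusion the real gateway performs.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class SensorReading:
    sensor_id: str      # e.g. "sm-01" (hypothetical naming)
    kind: str           # e.g. "soil_moisture", "air_temperature"
    value: float

class EdgeGateway:
    """Toy stand-in for the edge computing gateway in the star topology:
    leaf sensor nodes push readings, and the gateway buffers and
    aggregates them before any upload to the cloud side."""

    def __init__(self) -> None:
        self._buffer: Dict[str, List[float]] = {}

    def receive(self, reading: SensorReading) -> None:
        self._buffer.setdefault(reading.kind, []).append(reading.value)

    def aggregate(self) -> Dict[str, float]:
        # One mean value per parameter kind, mimicking on-gateway fusion.
        return {kind: mean(vals) for kind, vals in self._buffer.items()}

gw = EdgeGateway()
gw.receive(SensorReading("sm-01", "soil_moisture", 0.31))
gw.receive(SensorReading("sm-02", "soil_moisture", 0.29))
gw.receive(SensorReading("at-01", "air_temperature", 24.5))
print(gw.aggregate())
```

In a real deployment the `receive` calls would arrive over LoRaWAN or Wi-Fi rather than in-process, but the buffering-then-aggregating shape is the same.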
The edge computing layer can perform online real-time processing of each measured parameter of the agricultural environment using artificial-intelligence and data-fusion algorithms at the information collection site. Each functional module is encapsulated with container technology, and information is exchanged between functional modules, and between the cloud and the edge, through the edge messaging middleware server. Compared with a traditional cloud-architecture monitoring system, this reduces transmission delay, improves system response speed, and reduces the load on the server side. The application layer implements an agricultural environment monitoring visualization platform to effectively manage and apply the data collected by multiple types of nodes and to make expert decisions and early warnings based on environmental factors.

The cloud-edge collaboration layer is designed for platform resource management and collaboration. Resource sharing and resource-control policies can be established among edge cloud systems to achieve controlled sharing of resources and service convergence. More precisely (as illustrated in Figure 3), the cloud computing centre connects to all edge clouds and provides unified services to the outside through resource conversion. The sensing devices sense information from the physical world (such as temperature, humidity, and pressure) and transmit the collected data to the corresponding servers through the network. The execution devices execute instructions received from the upper layer, or their own control logic, to act on the physical world. Cloud-edge collaboration means accessing the corresponding edge cloud system according to functional requirements and geographic location.
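A minimal sketch of the "access the corresponding edge cloud according to functional requirements and geographic location" rule just described. The registry contents, region names, and preference order are assumptions made for illustration; the paper does not specify the selection algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeCloud:
    cloud_id: str
    region: str
    services: frozenset  # functional capabilities this edge cloud offers

def select_edge_cloud(clouds: List[EdgeCloud], service: str,
                      region: str) -> Optional[EdgeCloud]:
    """Prefer an edge cloud in the caller's region that offers the
    requested service; fall back to any capable cloud (reached via the
    cloud computing centre); return None if no cloud offers it."""
    capable = [c for c in clouds if service in c.services]
    local = [c for c in capable if c.region == region]
    return (local or capable or [None])[0]

clouds = [
    EdgeCloud("edge-north", "north", frozenset({"irrigation", "monitoring"})),
    EdgeCloud("edge-south", "south", frozenset({"monitoring"})),
]
print(select_edge_cloud(clouds, "monitoring", "south").cloud_id)  # edge-south
```

A request for a service the local edge cloud lacks (e.g. "irrigation" from the south region) falls through to a remote edge cloud, matching the cross-edge collaboration idea.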
The edge system shields the differences among sensing devices to realise unified device access.

The data stored in the platform are mainly (1) collected text data (such as data from temperature and humidity sensors), (2) audio and video data, and (3) system data (such as users, service-call permissions, and system configuration). It is difficult to store such massive heterogeneous data in a structured database alone, so we use Redis (https://redis.io) as the persistent key-value repository. A cloud-based file storage system (https://os.iot.10086.cn) is employed to store IoT audio and video data, and a content distribution network distributes the resources according to the regional nature of the data. We employ SQL Server to store entity data; the structured database supports complex conditional queries to meet the platform's application requirements. The platform uses read-write separation when operating the database, so that reads and writes go to different instances, improving access performance.

3.2. Cloud-Edge Service and IoT Data Security. The IoT edge-cloud collaboration applies to large-scale heterogeneous scenarios: the cloud computing centre connects the edge cloud systems at the edge of the distributed network and manages and controls them to achieve cross-edge-cloud service collaboration. The relationship between the cloud computing centre and the edge cloud systems is shown in Figure 3. The cloud computing centre is the key component of the edge-cloud collaborative architecture: it connects each edge cloud system, links the data resources among them, and realises cross-edge-cloud collaborative services. The cloud computing centre provides management and control functions for each edge cloud, such as resource-access policy, resource identification, instantiated deployment of cloud-edge microservices, security monitoring, and user management.
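The read-write separation described above can be sketched as a thin router that sends writes to a primary instance and spreads reads over replicas. The in-memory dictionaries here stand in for real database connections, and the synchronous replication is a deliberate simplification; real deployments replicate asynchronously.

```python
import itertools
from typing import Dict, List

class ReadWriteRouter:
    """Writes go to the primary; reads are spread round-robin over the
    replicas, so read and write load land on different instances."""

    def __init__(self, n_replicas: int = 2) -> None:
        self.primary: Dict[str, str] = {}
        self.replicas: List[Dict[str, str]] = [{} for _ in range(n_replicas)]
        self._rr = itertools.cycle(range(n_replicas))

    def write(self, key: str, value: str) -> None:
        self.primary[key] = value
        for replica in self.replicas:  # naive synchronous replication
            replica[key] = value

    def read(self, key: str) -> str:
        return self.replicas[next(self._rr)][key]

db = ReadWriteRouter()
db.write("sensor:sm-01:latest", "0.31")
print(db.read("sensor:sm-01:latest"))  # 0.31
```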
Most data are stored in the edge clouds, while some frequently requested resources are stored in the cloud computing centre to reduce dependency on the edges. In addition, the cloud computing centre integrates the platform's resources and provides a unified service interface to developers, making the underlying heterogeneous systems transparent to them.

Each edge cloud system is customised according to the application and device requirements of its scenario and can adapt to the various network communication protocols and data formats of the underlying layer, while using a unified external interface in the upper layer to communicate with the business logic layer. Deploying an edge cloud system requires digital certificates issued by the cloud computing centre as legal proof of identity and as the key for resource sharing. The service interface uses the RESTful specification (https://restfulapi.net/) and HTTP for synchronous data exchange, and RabbitMQ (https://www.rabbitmq.com/) message queues for asynchronous data exchange. The data generated by an edge cloud system are processed, analysed, and stored locally. The edge cloud system administrator has the highest management control over the data in that edge cloud and can decide which data are open to the public and which can only be used within the current edge cloud system.

The edge cloud system is an integral part of the IoT edge-cloud collaborative architecture and is implemented according to the service requirements of its scenario. It mainly consists of device-access middleware, a data storage centre, a data analysis and processing module, a service provision centre, and a resource sharing and exchange module; the edge cloud architecture is shown in Figure 3. The heterogeneity of edge clouds' IoT resources makes resource sharing and service collaboration among edge clouds difficult.
The resource management module aims to manage and utilise heterogeneous resources efficiently. IoT identification is the basis of IoT resource management; because there is no unified identification standard, sharing resources among systems is challenging. We propose an IoT identity mapping method that supports various identity technology standards, adopts a customised identity method within the system, and decouples resource identification from resource location. It adopts Uniform Resource Names (URNs) to identify platform resources and Uniform Resource Locators (URLs) to locate them. It realises (1) compatibility with various IoT identification standards through resource-identification mapping, (2) unique identification of heterogeneous resources in each edge cloud, and (3) unification of the platform's resource description mode. This reduces the difficulty of converting resource expression formats during service collaboration and supports the sharing of resources across edge clouds.

The architecture of the identity mapping module is shown in Figure 4. The identity mapping service is deployed in the cloud computing centre and provides an identity-mapping information query service. The identity-mapping information includes the platform virtual identification number, the resource's physical identification number, and the number of the edge cloud system where the resource is located. Through this module, a platform virtual identification number can be converted into the resource's physical identification number and the number of its edge cloud system. To retrieve a platform resource, only its platform virtual identification number needs to be passed in; it is resolved to the edge cloud system hosting the resource and the physical identification number that completes the resource search.
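A toy sketch of the identity-mapping lookup just described: the platform virtual identifier resolves to the hosting edge cloud plus the resource's physical identifier, keeping identification decoupled from location. The URN scheme, domain name, and URL format below are invented for illustration; the paper does not publish its concrete formats.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class MappingEntry:
    physical_id: str   # device-native identifier (EPC, OID, MAC, ...)
    edge_cloud: str    # number/name of the edge cloud hosting the resource

class IdentityMapper:
    """Maps platform virtual URNs to (physical id, edge cloud) pairs."""

    def __init__(self) -> None:
        self._table: Dict[str, MappingEntry] = {}

    def register(self, urn: str, physical_id: str, edge_cloud: str) -> None:
        self._table[urn] = MappingEntry(physical_id, edge_cloud)

    def resolve(self, urn: str) -> MappingEntry:
        return self._table[urn]

    def locate(self, urn: str) -> str:
        # URL-style locator assembled from the mapping; format is invented.
        e = self._table[urn]
        return f"https://{e.edge_cloud}.example.farm/resources/{e.physical_id}"

mapper = IdentityMapper()
mapper.register("urn:farm:sensor:000123", "epc-96-ABCD", "edge-07")
print(mapper.locate("urn:farm:sensor:000123"))
# https://edge-07.example.farm/resources/epc-96-ABCD
```

Because applications hold only the URN, re-registering the same URN with a new physical identifier changes where `locate` points without touching any caller.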
The identity mapping management module is deployed in the cloud computing centre and provides services for creating, modifying, and deleting identity-mapping information. An edge cloud resource that is to be shared externally must be registered in this module to obtain a platform virtual identification number before it can be accessed from outside. When the physical identification number of a resource changes, only the physical identification number in the identity-mapping information needs to be modified; the platform virtual identification number stays the same, which avoids changes to applications developed against the resource and simplifies the work of platform application developers. An identity-mapping cache is deployed in each edge cloud system to cache the resource-mapping information and reduce redundant data requests.

3.3. Agricultural-Oriented Service Management. The platform provides an information collection function, with which the real-time status of each plot can be viewed on a map. The platform transmits the images collected by high-definition cameras to the data centre through the network and enables real-time preview and playback. The interface counter monitors the total number of interface calls and their success rate. The system manager can browse and manage the base station information of each block (including base station name, level, serial number, and map markers) and the sensor information (such as sensor name, category, health value, display type, and the upload period of sensor data). The system can control the equipment on the farm, such as turning the devices on and off, including the fan, the fill light, the shade screen, and the automatic sprinkler irrigation. The system also supports monitoring of the facilities' performance, such as the management of M2M cards and SIM cards.
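The threshold-driven on/off control of farm equipment described above can be sketched as a simple hysteresis (bang-bang) controller. The class name and the 30 °C / 25 °C fan thresholds are illustrative assumptions for this sketch, not fixed platform values.

```python
class FanController:
    """Bang-bang controller with hysteresis: turn the fan on above
    `on_above`, off below `off_below`, and hold the previous state in
    between so the relay does not chatter around a single set point."""

    def __init__(self, on_above: float = 30.0, off_below: float = 25.0):
        self.on_above = on_above
        self.off_below = off_below
        self.running = False

    def update(self, temperature_c: float) -> bool:
        if temperature_c > self.on_above:
            self.running = True
        elif temperature_c < self.off_below:
            self.running = False
        # Between the two thresholds the previous state is kept.
        return self.running

fan = FanController()
print([fan.update(t) for t in (24.0, 31.0, 27.0, 24.0)])
# [False, True, True, False]
```

The dead band between the two thresholds is the point of the design: a single 30 °C threshold would switch the fan on and off repeatedly as the temperature hovers near it.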
Some of the functional interfaces of the system are shown in Figure 5. Figure 5(a) shows the environmental-factor interface. The left area shows each sensor's name and its current value; below the sensor values are selectors for parameters such as soil moisture, air humidity, and light level. The middle of the interface shows the locations where the platform is deployed on a map, marked by red dots. The right side displays information such as device name, device information, status, and switches, giving an intuitive and convenient overview.

The video monitoring function is shown in Figure 5(b). The left side of the interface lists the deployed campus monitoring; you can select the location or camera you want to view, and the area's video feed is displayed in the middle of the page after double-clicking. You can also steer the surveillance cameras along multiple paths through the operation area below to view the surroundings without blind spots, and adjust the number of images displayed in the main interface through the window-split setting, up to 16 cameras simultaneously. Clicking the history-video button in the upper left corner opens the historical video query; after selecting the camera location, you enter the query period in the operation area below. This function also supports pause, resume, fast playback, and slow playback.

The real-time growth-report query, shown in Figure 5(c), retrieves the real-time growth monitoring data returned by each sensor. The menu has three levels: level 1 for the agricultural industry, level 2 for the company, and level 3 for the sensor.
The intelligent control engine is shown in Figure 5(d). A regional node is selected from the left menu; the middle of the interface then shows the existing equipment, which can be filtered by entering query conditions above. The primary device information displayed is the device name, device information, status, node name, and switch; the switch button starts and stops a device. The system can also set threshold values to achieve automatic control: for example, if the air temperature exceeds 30 degrees, the fan is automatically turned on to cool the facility, and it is automatically turned off when the temperature falls below 25 degrees.

Figure 6(a) shows the recurrent structure of an RNN, and Figure 6(b) shows the unrolled form of Figure 6(a). The output y_t at the current time depends not only on the current input but also on the state carried over from earlier time steps. The unrolled RNN structure can be regarded as a feedforward neural network with N intermediate layers, so it can be trained using the backpropagation algorithm. However, during backpropagation, the repeated multiplication involving the recurrent weight matrix and h_{t-1} tends to cause vanishing and exploding gradients, which makes it difficult for an RNN to learn long-distance dependencies. To solve this problem, long short-term memory networks (LSTM) were proposed. Like an RNN, an LSTM consists of a set of repeated neural-network modules, called memory blocks. As shown in Figure 7, each memory block contains three gates: the forgetting gate, the input gate, and the output gate. In contrast to the RNN, which recurrently transmits only one state (the output of the hidden layer), the LSTM has two: the hidden-layer state h_t and the cell state s_t, which runs through the entire memory block.
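Numerically, one step of such a memory block can be sketched with the standard LSTM gate formulation. This is a generic NumPy sketch with random weights, not the authors' implementation; it only illustrates how the three gates and the two states interact in a single time step.

```python
import numpy as np

def lstm_step(x_t, h_prev, s_prev, W, b):
    """One LSTM memory-block step. Each W[k] maps the concatenated
    [h_prev, x_t] to a gate pre-activation: forget, input, candidate,
    output (keys "f", "i", "c", "o")."""
    z = np.concatenate([h_prev, x_t])
    f, i, c, o = (W[k] @ z + b[k] for k in ("f", "i", "c", "o"))
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    f_t, i_t, o_t = sigmoid(f), sigmoid(i), sigmoid(o)  # the three gates
    c_t = np.tanh(c)                 # candidate values (x_t' in the text)
    s_t = f_t * s_prev + i_t * c_t   # cell state: forget old, add new
    h_t = o_t * np.tanh(s_t)         # hidden state filtered by output gate
    return h_t, s_t

rng = np.random.default_rng(0)
n_h, n_x = 4, 3
W = {k: rng.normal(size=(n_h, n_h + n_x)) for k in "fico"}
b = {k: np.zeros(n_h) for k in "fico"}
h, s = np.zeros(n_h), np.zeros(n_h)
for x in rng.normal(size=(5, n_x)):  # run five time steps
    h, s = lstm_step(x, h, s, W, b)
print(h.shape)  # (4,)
```

Note how the cell state `s_t` is only ever scaled and added to, which is what lets gradients flow through long sequences better than in a plain RNN.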
In Figure 7, s_{t-1} and h_{t-1} are the cell state and the hidden-layer state at the previous time point, and x_t and y_t are the input and output at the current time point. The roles of the three gates are described in detail below.

The forgetting gate determines how much information is discarded from the cell state. The hidden state h_{t-1} from the previous time point and the input x_t at the current time point are fed into the memory block; the Sigmoid activation outputs a proportional value between 0 and 1, representing the proportion of information retained from the cell state. This value is then multiplied by s_{t-1} to achieve the forgetting function. The forgetting gate can be represented as

\( f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f), \)

where W_f and b_f denote the weight and bias, respectively.

The input gate determines how much information from the current input is added to the cell state. First, as with the forgetting gate, h_{t-1} and x_t are input to the memory block, and the Sigmoid activation outputs a scale value between 0 and 1, representing the proportion of information retained from the current input. At the same time, h_{t-1} and x_t are passed through the tanh activation to produce the candidate x_t'. The proportional value is then multiplied with x_t' to realise the input function. The input gate can be represented as

\( i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i), \quad x_t' = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C), \quad s_t = f_t \odot s_{t-1} + i_t \odot x_t', \)

where W_i and W_C denote weights and b_i and b_C denote biases.

The output gate determines the output information for the current time point. Again, h_{t-1} and x_t are fed into the memory block and passed through the Sigmoid activation to obtain a proportion. The cell state is passed through the tanh activation, and its output is multiplied by this proportion to get the output at the current time point. The output gate can be expressed as

\( o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o), \quad h_t = o_t \odot \tanh(s_t), \)

where W_o and b_o denote the weight and bias, respectively.

4. Experiment

The experiment was designed to verify the overall availability of the proposed system and the accuracy of its environmental prediction function. Packet loss, throughput, and response time are commonly used metrics of system availability. We used a group of data to test the packet loss (between transmission layers) of the system's data transmission during environmental data collection. Each collection device gathers data over a given period (e.g., an hour) and sends it to the automatic monitoring base station for aggregation at regular intervals (e.g., every 2 hours). The environmental monitoring wireless nodes collect data once a day and send them to the monitoring base station for aggregation, and finally all data are transmitted to the IoT platform. Figure 8 shows the packet-loss statistics: as the sampling frequency increases, the overall packet loss rate declines sharply, and at a sampling period of 60 ms the packet loss rate drops to 0.1%. This result meets our expectation and indicates that the proposed system can fulfil the requirements in practice.

The system is designed for large-scale heterogeneous scenarios, and the platform must guarantee service availability under high concurrency. The throughput and response time of the platform were verified by analysing the platform's log data under different concurrency levels. We used the Pulsar tool (https://pulsar.apache.org/) to simulate service requests to the platform with different numbers of concurrent clients.
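The throughput and average-response-time analysis of such log data can be sketched as below. The log format (one start-time/duration pair per request) is an assumption for the example; real platform logs carry more fields.

```python
from statistics import mean
from typing import List, Tuple

def summarise(log: List[Tuple[float, float]]) -> Tuple[float, float]:
    """log holds one (start_time_s, duration_s) pair per request.
    Returns (throughput in requests/s over the busy window,
             mean response time in s)."""
    starts = [t for t, _ in log]
    ends = [t + d for t, d in log]
    window = max(ends) - min(starts)
    return len(log) / window, mean(d for _, d in log)

# Four simulated requests, standing in for the platform's log data.
log = [(0.0, 0.2), (0.5, 0.5), (1.0, 0.2), (1.5, 0.5)]
tp, rt = summarise(log)
print(f"throughput={tp:.2f} req/s, mean response={rt:.2f} s")
```

Computing throughput over the busy window (first start to last completion) rather than wall-clock time is one common convention; a load-test harness such as the one used in the paper may define it differently.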
The workload test results are shown in Table 1, in which the task queue column indicates the percentage of backlogged tasks in the task queue. For each group in the experiment, 10,000 transactions were executed with a different number of concurrent clients. In the 6th group, 11 transactions failed, which means the maximum concurrency lies between 600 and 700. The relationship between the average response time and the number of concurrent clients is shown in Figure 8(b): the average response time increases with the number of concurrent threads, and when that number surpasses the maximum supported by the edge cloud, the average response time rises significantly. As also shown in Figure 8(b), the throughput of an edge cloud system deployed with a single service reaches its maximum at a concurrency of 300, indicating that the system is not saturated with tasks below a concurrency of 300. Above 300 the task load saturates, but throughput does not decrease significantly until around 800. When the concurrency reaches 300, tasks start to pile up in the task queue, but the system can still process them in time and the task-processing error rate is 0. When the concurrency reaches 800, request-processing exceptions start to occur.

We designed an experiment based on the computing service (detailed in Section 3.5) to predict crop yield from meteorological and environmental factors. The data were collected from February 1, 2015, to August 31, 2016. They contain four environmental factors, i.e., the maximum, minimum, and average temperature of the day and the average humidity of the day, together with the crop yield. The dataset was gathered from 46 different data sources, and each source contains 578 records. Some of the data are shown in Table 2.
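Preparing such a multi-factor daily series for training typically involves min-max normalisation and a chronological (unshuffled) split. A sketch with synthetic data follows; the 400/100/78 split sizes match the split used for this dataset, but the value ranges and the random series itself are invented.

```python
import numpy as np

def minmax_normalise(a: np.ndarray) -> np.ndarray:
    """Scale each column (factor) to [0, 1] independently."""
    lo, hi = a.min(axis=0), a.max(axis=0)
    return (a - lo) / (hi - lo)

def chrono_split(a: np.ndarray, sizes=(400, 100, 78)):
    """Time-ordered split: no shuffling, so the test set is strictly
    later in time than the training set."""
    i, j = sizes[0], sizes[0] + sizes[1]
    return a[:i], a[i:j], a[j:j + sizes[2]]

rng = np.random.default_rng(42)
# 578 days x 5 columns: max/min/avg temperature, humidity, yield
data = rng.uniform(low=[-5, -15, -10, 20, 0],
                   high=[35, 20, 30, 95, 100],
                   size=(578, 5))
train, val, test = chrono_split(minmax_normalise(data))
print(train.shape, val.shape, test.shape)  # (400, 5) (100, 5) (78, 5)
```

Keeping the split chronological matters for time series: shuffling would leak future values into training and make the reported test loss look better than it is.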
The visualization of the dataset is shown in Figure 9. The five panels, from top to bottom, show the daily maximum temperature, minimum temperature, average temperature, average humidity, and yield in chronological order. The horizontal axis represents time, totalling 578 days, and the vertical axis represents temperature, humidity, or yield. As Figure 9 shows, the temperature varied with the seasons, the humidity showed no obvious pattern, and the yield reached its peak in the harvest period (late July).

We first normalised the temperature, humidity, and yield, divided the dataset into a training set, a validation set, and a test set containing 400, 100, and 78 records, respectively, and trained for 100 epochs with the mean square error as the loss function. At the end of training, the training loss reached 0.0016, the validation loss 0.0004, and the test loss 0.0176. Figure 10 shows the performance of the model. The horizontal axis represents time and the vertical axis the (normalised) yield. The blue curve is the actual yield; the green, orange, and red curves are the predicted yields on the training, validation, and test sets, respectively. As shown in Table 3, the LSTM predictions on the training, validation, and test sets are consistent with the true values, although the predicted values are slightly higher than the actual values outside the harvest period and slightly lower during it.

5. Conclusion

In this paper, an agricultural environment monitoring system is built by integrating edge computing and artificial intelligence.
This paper investigates the traditional architecture of agricultural IoT systems, proposes a cloud-edge collaboration framework for agricultural environment monitoring, and implements an LSTM-based agricultural environment prediction function. The system has been deployed in more than 10 large-scale farms.

There are still some shortcomings in this research. For the sensors, an improper installation location may lead to inaccurate data acquisition, and instability can distort data collection; moreover, the transmission range of some wireless sensors is limited. A reliable power supply is also hard to obtain: solar power may not provide sufficient energy, while adopting AC power requires laying power lines on site. Advances in sensor technology are expected in the future. As for the LSTM, the training cost of the model is relatively high. In future work, we will apply improved variants of LSTM (e.g., coupled LSTM) and other methods to the agricultural environment prediction model proposed in this paper, to enhance training performance and improve prediction accuracy.

Data Availability

The data that support the findings of this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
7,984.2
2022-06-28T00:00:00.000
[ "Computer Science" ]
Monte Carlo simulation for background study of geophysical inspection with cosmic-ray muons

Niita, K., Sato, T., Iwase, H., Nose, H., Nakashima, H. & Sihver, L., 2006. PHITS: a particle and heavy ion transport code system, Radiat. Meas., 41, 1080–1090.
Nishiyama, R., Miyamoto, S. & Naganawa, N., 2014. Experimental study of source of background noise in muon radiography using emulsion film detectors, Geosci. Instrum. Methods Data Syst., 3(1), 29–39.
Nishiyama, R., Miyamoto, S. & Naganawa, N., 2015. Application of Emulsion Cloud Chamber to cosmic-ray muon radiography, Radiat. Meas., 83,

Density imaging with cosmic-ray muons

Several geophysical exploration methods have been developed and employed to image the internal structure of volcanoes. An important objective for volcanologists is to put constraints on models of magma plumbing systems. From the viewpoint of volcano monitoring, the main ambition is to detect movements of magma in magma chambers or magma pathways (conduits), for which the spatial resolution of the imaging is important because of the highly heterogeneous internal structure of volcanoes. Recently, a novel technique named muon radiography (muography) has been developed for probing the internal density profiles of volcanoes (e.g. Tanaka et al. 2007; Ambrosi et al. 2011; Lesparre et al. 2012; Cârloganu et al. 2013). Muography is based on measurements of the absorption of the atmospheric muon flux inside a target material. The method is made possible by the fact that the energy spectrum of cosmic-ray muons and the energy dependence of the muon range have been intensively studied. Thus the attenuation of the muon flux can be used to derive the column density of target mountains along muon trajectories. The greatest advantage of muography is its high spatial resolution compared with other geophysical exploration methods.
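This resolution advantage can be quantified by the ideal estimate Δx ≈ L·Δθ (eq. 1 of the paper, discussed next); a quick check with the typical values quoted in the text:

```python
# Ideal spatial resolution of muography: delta_x ≈ L * delta_theta.
L = 1000.0       # detector-to-target distance in metres (L ~ 1 km)
dtheta = 30e-3   # detector angular resolution in radians (~30 mrad)

dx = L * dtheta  # spatial resolution in metres
```

With these values dx comes out at 30 m, the figure quoted in the paper.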
The resolution of muography can ideally be expressed as Δx = L·Δθ, where L is the distance between the muography detector and the target, and Δθ is the angular resolution of the detector. Considering typical configurations (L ∼ 1 km and Δθ ∼ 30 mrad), the spatial resolution becomes Δx ∼ 30 m, better than many other geophysical methods. It is expected that muography will allow visualization of highly heterogeneous structures near the surface of volcanoes. Several types of detectors have been utilized in muography, such as scintillation detectors (e.g. Tanaka et al. 2009; Ambrosi et al. 2011; Lesparre et al. 2010, 2012; Tanaka et al. 2014), gaseous detectors (e.g. Alvarez et al. 1970; Barnaföldi et al. 2012; Cârloganu et al. 2013) and nuclear emulsions (e.g. Tanaka et al. 2007; Dedenko et al. 2014; Nishiyama et al. 2014). The advantage of scintillation detectors and gas chamber detectors is that they can provide information on the arrival time of incident muons. This enables us to monitor the time variation of the muon flux and hence the sequential change of the density profile inside volcanoes. A disadvantage is that the experiment site is limited to places with well-established infrastructure, since scintillation detectors require electricity and gas chamber detectors require continuous gas circulation for operation. On the other hand, nuclear emulsion can be installed almost everywhere except for areas of high temperature (>30 °C) and it does not require electricity for operation. Another advantage of emulsion is its high resolution for determining the incident angle of muons (Δθ = 5-10 mrad), which promises a better spatial resolution (eq. 1).
1040 R. Nishiyama et al.
The use of muography has spread not only to volcanology but also to various fields of geoscience and civil engineering. For instance, muography has been applied to non-invasive inspection of historic architectures (Minato 1986), monitoring of furnaces (Nagamine et al.
2005), detection of seismic faults (Tanaka et al. 2011) and detection of caves (Barnaföldi et al. 2012).

Background particles

Muography observations require a precise measurement of the flux of cosmic-ray muons after passing through the target mountain (we call them 'penetrating muons' in this paper). The flux of the penetrating muons is generally very small. For instance, the flux of nearly horizontal muons is more than two orders of magnitude smaller than the flux of vertical muons, and it decreases by a further two orders of magnitude after passing through 1 km of standard rock (Groom et al. 2001). This weak flux of penetrating muons presents a difficulty for muography, since a small detector and a short exposure do not provide sufficient statistics for density estimations. To obtain sufficient statistics, one has to use a larger detector or increase the exposure time, which is often not easily achieved in practice. This trade-off has been discussed in the literature (e.g. Lesparre et al. 2010). Another difficulty in muography arises from background particles, namely particles arriving at the detector that are not penetrating muons. If the background noise exceeds the weak flux of penetrating muons from the target mountain, the muon flux may be overestimated, leading to a significant underestimation of the column density. Several research groups have reported that the flux of background particles is not negligible compared with the flux of penetrating muons (Carbone et al. 2014; Jourde et al. 2013; Nishiyama et al. 2014; Ambrosino et al. 2015). For instance, Carbone et al. (2014) observed a remarkable difference between the observed particle flux and the expected muon flux at Mt. Etna with a scintillation detector. Specifically, the flux of particles coming from the mountain exceeds the expectation by a factor of 2-10. Ambrosino et al. (2015) also reports a similar result.
Although the instrumental noise level of their detector is far below the expected muon rate, they still observed a non-negligible amount of signal caused by charged particles. This excess is attributed to contamination by background particles. Although the origin of the background particles is not fully understood, some prior works have considered possible candidates. For instance, Jourde et al. (2013) proposed that the background noise arises from particles entering the back of the detector with upward-going trajectories. These particles mimic the signals of penetrating muons, since upward-going particles have trajectories identical to those of downward-going muons emerging from target mountains. They detected these upward-going particles in several locations by dedicated time-of-flight (TOF) analysis with scintillation detectors. Although the upward-going particles are part of the source of the background, they do not explain all of the excess in the particle flux. Nishiyama et al. (2014) proposed that the background particles are low-energy charged particles (E < 1 GeV) which are scattered onto detectors from random directions. They installed two emulsion detectors with different energy thresholds (0.1 and 1.0 GeV) at the foot of a small lava dome (Mt. Showa-Shinzan) and demonstrated that only the detector with the higher energy threshold yields appropriate particle flux and density values for the lava. However, the origins of these low-energy particles have not been verified, nor has their connection with the upward-going particles. The present study aims for a better understanding of the origin of the background noise in muography via Monte Carlo simulations of particle transportation. For a quantitative discussion of the background, it is necessary to calculate the energy spectra of the major cosmic particles (not only muons) within a certain systematic uncertainty.
We have established a simulation framework to calculate the energy spectra of muons, electrons, gamma-rays, protons and neutrons arriving at a detector with the 3-D topography of a mountain taken into account (Section 2). The results of the simulation are shown in Section 3. The systematic uncertainty of the simulation is discussed in Section 3.2. The simulation results are compared with actual observations at Mt. Showa-Shinzan with nuclear emulsions in Section 4. In this paper, instrumental noise, such as accidental coincidence of dark noise in photo-sensors, is assumed to be sufficiently reduced and is not discussed.

SIMULATION

In this section, we describe the framework of the Monte Carlo simulation. The aim of the framework is to calculate the composition and the energy spectrum of cosmic particles arriving at a detector with the topography of a mountain taken into consideration. The calculation is performed in two steps. First, we calculate the energy spectra of the major cosmic particles using the air-shower simulation code COSMOS. Second, we simulate propagation near the detector and the mountain with the GEANT4 toolkit (Agostinelli et al. 2003). Fig. 1 shows a schematic illustration of our simulation framework.

Air-shower simulation with COSMOS

COSMOS follows the development and evolution of air shower particles and describes the angular, spatial, temporal and 3-D momentum distribution of secondary particles (Roh et al. 2013). Counting the number of secondary particles enables us to obtain the energy spectrum of any cosmic particle at any altitude in the atmosphere. In this study, version 7.641 was used (http://cosmos.n.kanagawa-u.ac.jp/). In this work, protons and helium nuclei are injected at an altitude of 400 km. The energy distributions of the incident protons and helium nuclei are taken from the BESS flight in 1998 (Sanuki et al. 2000).
The physical properties (density and composition) of the atmosphere are taken from the US Standard Atmosphere (U.S. Government Printing Office 1976). The spherical shape of the Earth and the atmosphere is implemented properly to calculate the flux of nearly horizontal particles. The magnetic field of the Earth is neglected in this study, although it must be implemented to discuss the longitude and latitude dependence. The absence of the geomagnetic field does not affect the general conclusion of this work, as we discuss in Section 3.2. The interaction models employed in this study are PHITS (Particles and Heavy Ion Transport code System, Niita et al. 2006) and JAM (Jet AA Microscopic Transport, Nara et al. 1999) for the low-energy region (E < 4 GeV), DPMJET-III (Roesler et al. 2001) for the middle-energy region (4 ≤ E < 1000 GeV) and QGSJET-II-03 (Ostapchenko 2006) for the high-energy region (1000 GeV ≤ E). [Footnote: PHITS is not an interaction model but a general-purpose MC simulation code; in COSMOS, some interaction models used in PHITS can be imported.] The development (secondary production, decay, etc.) of the air-showers is traced until the kinetic energy of the secondaries drops below 50 MeV to save computation time and obtain better statistics. While tracing, particles passing through virtual spheres at several altitudes are recorded. The particle type, position, direction and kinetic energy are stored in the output.

COSMOS results

From the COSMOS output, we derived the energy spectrum of particle i (i: muon, electron, gamma-ray, proton or neutron) by averaging the number of hits over the entire surface of the virtual sphere. The procedure is as follows. The number of hits is binned as a function of kinetic energy (E, index j) and zenith angle (θ, index k), giving the flux Φ_i(E_j, θ_k) = α n_i(E_j, θ_k) / (T A_k ΔE_j), where n_i(···) denotes the number of hits of the ith particle, T is the equivalent exposure time and A_k is the geometrical acceptance for particles entering the sphere with zenith angle within θ_k^min to θ_k^max.
The radius of the Earth (R = 6.4 × 10 6 m) gives an exact value of the acceptance: A = 2π 2 R 2 (cos 2θ min k − cos 2θ max k ). α is a scaling constant and is set to 0.65 so as to fit the energy spectrum data for vertical muons (Haino et al. 2004). The difference of the normalization constant to 1 hints to the level of uncertainty affecting the simulation (∼40 per cent). Fig. 2 represents the resultant energy spectra of muons (μ ± ), electrons (e ± ), protons (p), gamma-rays (γ ) and neutrons (n) along with energy spectra reported in other literature. Although the COS-MOS results agree with the literature values in general, there are discrepancies in low-energy regions for p and n and in high-energy regions for e ± . This difference should be regarded as systematic uncertainty in the COSMOS calculation and its effect on background estimation will be discussed in Section 3.2. For GEANT4 simulation, we produced an energy spectrum model for each particle by interpolating or extrapolating the results for 300 m above sea level (asl) (Fig. 3). The energy range of the model is {E: 1 ≤ E < 10 000 GeV} for muons and {E: 0.05 ≤ E < 500 GeV} for the other particles. The zenith dependence of the spectrum is considered by binning at intervals of cos θ = 0.05 for muons and cos θ = 0.10 for the other particles, ranging from cos θ = 0 (horizontal) to cos θ = 1 (vertical). A simple power law spectrum is used for extrapolation in the high-energy region where there are not enough statistics for fitting. Local simulation with GEANT4 Since COSMOS cannot deal with the topography of a mountain, we use the GEANT4 toolkit to simulate particle propagation near the mountain and the detector. We constructed a virtual mountain and a virtual detector in a computational region of GEANT4 and injected particles from a substantially large hemisphere enclosing the mountain and detector, following the energy spectrum model derived from COSMOS. 
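The acceptance expression read from the text, A = 2π²R²(cos 2θ_min − cos 2θ_max), can be sanity-checked numerically: contiguous zenith bands must add up, and integrating over the full zenith range 0 to π/2 must give the whole-sphere acceptance 4π²R².

```python
import math

R = 6.4e6  # radius of the Earth in metres, as in the text

def acceptance(theta_min, theta_max):
    """Geometrical acceptance (m^2 sr) of a sphere of radius R for
    particles with zenith angle between theta_min and theta_max."""
    return 2 * math.pi**2 * R**2 * (math.cos(2 * theta_min) - math.cos(2 * theta_max))

full = acceptance(0.0, math.pi / 2)             # 2*pi^2*R^2*(1 - (-1)) = 4*pi^2*R^2
split = acceptance(0.0, 0.3) + acceptance(0.3, math.pi / 2)  # bands must add up
```

The identity cos 2a − cos 2b = 2(cos²a − cos²b) connects this form to the more familiar π(cos²θ_min − cos²θ_max) per-unit-area acceptance integrated over the sphere's surface 4πR².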
The virtual mountain has a rotationally symmetric shape and is realized by a number of small prisms with horizontal dimensions Δx = Δy = 10 m (Fig. 4a). The elevation at each point (h) is given as a function of the distance to the axis (r). This shape is adjusted so that the rock thickness becomes comparable to the case of our emulsion observations at Mt. Showa-Shinzan. The virtual mountain consists of standard rock with a density of 2.00 g cm⁻³. The computational region outside the terrain is filled with air (1 atm). The virtual detector has a belt-like surface and is placed surrounding the mountain at a height of 65 m. The radius and height of the belt are 500 and 10 m, respectively. Thus the total area of the virtual detector is S_MC = 3.1 × 10^4 m². This virtual detector records the information of particles passing through it. The hemisphere, from which particles are injected, has an oblate spheroidal shape with a long axial radius of R_x = R_y = 600 m and a short axial radius of R_z = 300 m. The size of the spheroid is adjusted so that it encloses the mountain and the detector. The particle type, position, direction and kinetic energy of incident particles are sampled based on the energy spectrum model produced by the COSMOS simulation. The Fritiof string model (E > 10 GeV) and the Bertini cascade model (E < 10 GeV) are employed as hadronic interaction models (FTFP_BERT in the GEANT4 reference physics list). For electromagnetic processes and multiple Coulomb scattering, the standard electromagnetic interaction model of GEANT4 was employed. [Fig. 2 reference data: Allkofer et al. (1985) at sea level; e+ + e−: Golden et al. (1995) at 945 g cm⁻² (600 m asl); p: Brooke & Wolfendale (1964) at sea level; γ: Beuermann & Wibberenz (1968) at 760 g cm⁻² (2500 m asl); n: Gordon et al. (2004) at sea level (167 m asl). The solid circles denote the results of a COSMOS simulation taken at the same altitude as the experimental data for comparison.] To
save computation time, neutrons with kinetic energy below 30 MeV are discarded during tracing. The rotationally symmetric mountain and detector allowed us to enlarge the detector size without losing generality. The large detector acceptance significantly increased the statistics of background particles within a limited computation time. Specifically, we injected only 3.3 × 10^9 particles, which corresponds to the number of particles incident on the hemisphere in 10.8 s (= T_MC). However, we obtained sufficient statistics from the simulation, since the area of the detector (S_MC) was large, and hence so was the effective exposure of the simulation. The computation time was 1.6 × 10^4 hr for a single thread. With the aid of multithreading, the calculation was finished within one day on our computational resources. The rock thickness in R3 is 579-917 m (Fig. 7b). Fig. 4(c) shows the simulated number distribution as a function of the kinetic energy of the particles when they reach the detector. In this figure, the contributions from penetrating muons and background particles are drawn individually. In both regions, the distribution of the penetrating muons shows a maximum at around 100 GeV. The penetrating muons make almost no contribution below 1 GeV, whereas the background particles (proton, electron and muon backgrounds) dominate the population in this range. We can calculate the flux of penetrating muons and background particles by integrating the number distribution over energy. The results of this calculation show that the flux of these background particles above 50 MeV in R2 is 7.8 times that of the penetrating muons, and as much as 16.5 times in R3. This result indicates that the signals of penetrating muons would be overwhelmed by the massive flux of background particles if the energy threshold of the detector were less than 1 GeV. To reduce the contamination by background particles, the energy threshold should be set above 1 GeV.
This conclusion will be confirmed by our emulsion experiments in Section 4.

Uncertainty of simulation

In this subsection, the systematic uncertainty in the calculation of the background flux is discussed. First, we have to address the systematic uncertainty in the hadronic interaction models. Although it is very difficult to state how the model uncertainty propagates to our background estimation, it can be estimated pessimistically by focusing on the discrepancy between the energy spectra derived from COSMOS and the literature values (Fig. 2). Regarding this difference as the systematic uncertainty of the COSMOS simulation, the uncertainty of the background flux is estimated to be ∼40 per cent. Second, it has to be taken into account that the magnetic field of the Earth is neglected in this calculation. There are two effects which must be considered: overestimation of the background flux from neglecting the geomagnetic cut-off, and underestimation of the scattered flux from neglecting the propagation of low-energy particles in the magnetic field. In conclusion, these effects are less severe than the hadronic uncertainty stated above. The reasons are as follows. (i) The geomagnetic field prevents low-energy primaries from penetrating through the magnetosphere to the atmosphere of the Earth (geomagnetic rigidity cut-off). The absence of the geomagnetic field therefore overestimates the flux of protons entering the atmosphere and hence the background flux. However, the overestimation is no more than 17 per cent, considering the vertical rigidity cut-off of the Showa-Shinzan region (8 GV). (ii) The absence of the geomagnetic field does not influence the COSMOS simulation below the top of the atmosphere because of the short path length (15-30 km). The rigidity for a gyroradius of 30 km is merely ∼1 GV, assuming that the strength of the geomagnetic field is 4 × 10⁻⁵ T.
(iii) The absence of the geomagnetic field does not affect the GEANT4 simulation near the surface because of the small injection hemisphere (∼500 m). The rigidity for a gyroradius of 500 m is of the order of 10 MV; even low-energy background particles will not be bent inside the hemisphere.

Origin of background

From our simulation, the background particles can be classified into three categories according to their origins: (i) protons, (ii) electrons and muons produced by hadronic interactions of protons and neutrons in the atmosphere and the topographic material, and (iii) electrons and muons scattered in the atmosphere. In this paper, we refer to (i) and (ii) as hadronic backgrounds and to (iii) as scattered backgrounds. Fig. 5 shows each component of the background particles as a function of the energy threshold for the R2 and R3 regions. In both regions, the dominant contribution is from hadronic backgrounds. The proportion of hadronic background in the total background above 50 MeV is 89 per cent and 84 per cent for R2 and R3, respectively.

Upward-going particles

Although we inject only downward-going particles (cos θ > 0) in our simulation, we find upward-going particles arriving at the detector from the rear side. The flux of the upward-going particles above 50 MeV is 5.1 × 10⁻² m⁻² s⁻¹ sr⁻¹ (23 per cent) and 8.4 × 10⁻² m⁻² s⁻¹ sr⁻¹ (44 per cent) for R2 and R3, respectively (Fig. 5). The zenith angle distribution of the simulated flux of the background particles is presented in Fig. 6.

Muography at Mt. Showa-Shinzan

In this section, we compare the results of our simulation with the particle flux observed with nuclear emulsions placed at Mt. Showa-Shinzan. Mt. Showa-Shinzan is a lava dome which was extruded as a parasitic cone of the Usu volcano in 1944-1945. The relative height of the peak is ∼200 m and the diameter is ∼1 km (Fig. 7a).
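The order-of-magnitude rigidity figures used in the uncertainty discussion above follow from the relation rigidity [V] = B·r·c for a gyroradius r in a field B; a quick check with the values quoted in the text:

```python
# Magnetic rigidity (in volts) for a gyroradius r in a field B: R = B*r*c.
B = 4e-5   # geomagnetic field strength in tesla, as quoted in the text
c = 3.0e8  # speed of light in m/s

rigidity_atmos = B * 30e3 * c   # 30 km path in the atmosphere: ~0.4 GV, of order 1 GV
rigidity_local = B * 500.0 * c  # 500 m injection hemisphere: ~6 MV, of order 10 MV
```

Both values sit far below the 8 GV vertical rigidity cut-off of the region, supporting the argument that neglecting the geomagnetic field inside the simulation volumes is harmless.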
For a comparative study, two types of emulsion detectors were installed at the foot of the mountain, 500 m west of the summit. One is a stack of four emulsion films (Quartet detector). The other is a stack of 20 emulsion films and nine 1-mm-thick lead plates, the so-called Emulsion Cloud Chamber (ECC detector; De Serio et al. 2003). The emulsion film used in this study is the OPERA type (Nakamura et al. 2006). The effective area of the two detectors is S_OBS = 104 cm². The exposure duration is T_OBS = 1.45 × 10^7 s (168 d). The thickness of the rock along the muon trajectories is given in Fig. 7(b). After chemical development of the films, the tracks in the films were scanned using the European Scanning System (Arrabito et al. 2006). The track reconstruction was conducted using the FEDRA system (Tioukov et al. 2006). We then applied our original track selection on the basis of (i) straightness and (ii) grain density (GD) of the tracks. The track selection is briefly explained in the next two subsections. Details are also given in our previous reports (Nishiyama et al. 2014, 2015).

Straightness cut

Stacking several emulsion films enables us to impose an energy threshold on the incident particles by analysing the straightness of the tracks (De Serio et al. 2003). The straightness is evaluated using the deflection angle of the tracks in adjacent films. Owing to the lead plates inserted between the films, the straightness cut gives the ECC detector a higher energy threshold (∼1 GeV) than the Quartet detector (∼50 MeV). The energy dependence of the detection efficiency, calculated by a GEANT4 simulation, is presented in Fig. 8.

Grain density cut

The GD refers to the number density of silver grains produced along the tracks. It correlates positively with the ionizing power of the incident particles and hence can be used for particle identification (Toshito et al. 2004).
For instance, the energy loss by ionization for particles with a kinetic energy of 100 MeV is 1.7 MeV g⁻¹ cm² for electrons, 1.4 MeV g⁻¹ cm² for muons and 15 MeV g⁻¹ cm² for protons. Thus we can roughly distinguish the thick tracks of protons from the thin tracks of the others. Fig. 9(a) shows the GD distribution for selected tracks in the R2 and R3 regions of the Quartet detector. The distribution clearly has two groups: a low GD and a high GD group. Considering the ionization power of the major charged particles near the energy threshold (∼100 MeV), the low GD group is composed of relativistic particles (protons above 1 GeV, muons and electrons) and the high GD group of non-relativistic particles (protons below 1 GeV). In other words, the Quartet detector is capable of discriminating low-energy protons from the others. In contrast, the GD distribution of the ECC detector yields a single peak, because low-energy protons (<1 GeV) were already rejected by the straightness cut. Fig. 10(a) shows the angular distribution of the tracks surviving the cuts for the low GD group of the Quartet detector, the high GD group of the Quartet detector, and the ECC detector. The particle flux was derived from the number of tracks after efficiency correction and normalization with respect to detection area and exposure time. Fig. 10(b) shows the detection efficiency for each group (see Nishiyama et al. 2014 for the Quartet detector and Nishiyama et al. 2015 for the ECC detector).

[Figure 6. Zenith angle dependence of the calculated flux of the background particles for kinetic energy above 50, 100, 200 and 500 MeV (protons, electrons and muons added). cos θ < 0 (>0) indicates downward- (upward-) going particles.]

[Figure 8. Probability that particles pass through the emulsion detectors and are recognized as a straight track, calculated using GEANT4. This survival probability is given for each pair of particle type and detector type.]
The detection efficiency is maximum for tracks perpendicular to the film and decreases gradually as the slope of the track (θ) increases. For the ECC detector, the efficiency is 71 per cent at θ = 0 (centre of the view), decreasing to 40 per cent at θ = 0.6. The trend is the same for the low GD tracks in the Quartet detector. The detection efficiency for the high GD tracks (protons) is higher than 85 per cent over the entire view. Fig. 10(c) shows the particle flux for the three groups of tracks after efficiency correction. The values of the flux in each region (R1, R2 and R3) are tabulated in Table 1. For comparison, the simulated spectra convoluted with the detector response (Fig. 8) are also shown.

Observed flux

In this paragraph, we compare the observed and calculated flux for each of the three groups. The flux derived from the ECC detector agrees well with the MC-expected flux (first row in Table 1). Although there is a slight deviation beyond the observational error in R3, this difference can be attributed to the fact that the mountain realized in the MC computational region is quite simplified and does not reproduce the real topography of Mt. Showa-Shinzan. The flux derived from the high GD group of the Quartet detector should be compared to the MC expectation of the proton flux (second row in Table 1). The observed and expected flux values agree well within the systematic uncertainty of the simulation (∼40 per cent, see Section 3.2). It is noticeable that a substantial number of protons come from the direction of the mountain. The particle flux derived from the low GD group of the Quartet detector should be compared with the sum of the MC expectations for muons and electrons (third row in Table 1). In the free sky (R1), both the observed and calculated fluxes are higher than the flux from the ECC detector. This is because low-energy cosmic electrons (>50 MeV) contribute approximately 10 per cent of the observed flux.
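As an aside on statistics, the effective exposures (area × time) of the simulation and of the emulsion observation can be compared. This back-of-envelope comparison is ours, built only from the S_MC, T_MC, S_OBS and T_OBS values quoted in the text:

```python
import math

# Simulation: belt detector of radius 500 m and height 10 m, run for 10.8 s.
S_MC = 2 * math.pi * 500.0 * 10.0  # ≈ 3.1e4 m^2, matching the quoted S_MC
T_MC = 10.8                         # s

# Observation: 104 cm^2 of emulsion exposed for 1.45e7 s (168 days).
S_OBS = 104e-4   # m^2
T_OBS = 1.45e7   # s

exposure_mc = S_MC * T_MC     # ≈ 3.4e5 m^2 s
exposure_obs = S_OBS * T_OBS  # ≈ 1.5e5 m^2 s
# The huge virtual detector makes a 10.8 s simulated run statistically
# comparable to (roughly twice) the 168-day field exposure.
```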
It is clear from the results for the R2 and R3 regions that the background particles have a significantly higher flux than the penetrating muons. The particle flux derived from the Quartet low GD group significantly exceeds the flux derived from the ECC detector. The ratio Quartet low GD/ECC is 5.2 and 7.3 for R2 and R3, respectively. These values agree well with the MC expectations of 4.0 and 7.4 for R2 and R3. If we assume that all the observed tracks in the Quartet detector are due to penetrating muons, the muon flux is significantly overestimated and hence the target density is significantly underestimated. Fig. 11 shows the estimated density of Mt. Showa-Shinzan from the Quartet and ECC detectors. The density values derived from the ECC detector (1.98-2.39 g cm⁻³) are consistent with the bulk density of typical volcanic rocks. On the other hand, the density values derived from the Quartet detector are significantly lower, lower even than the density of water (1 g cm⁻³) in the lower elevation regions. This strongly suggests that the ECC detector collects only penetrating muons, whereas the Quartet detector is affected by contamination from low-energy (<1 GeV) charged particles other than penetrating muons.

DISCUSSION AND CONCLUSION

From the numerical and experimental studies, it is concluded that the origin of the background noise is low-energy charged particles such as electrons, muons and protons, in the case of muographic measurements performed with muon trackers without particle identification capabilities and with energy thresholds below 1 GeV. In this section, this result is compared with other prior works. Carbone et al. (2014) observed an excess of the particle flux by a factor of 2-10 at Mt. Etna. Our simulation and observation agree with this work on this point. However, it should be kept in mind that the Etna observation was performed at ∼3000 m asl, whereas our simulation assumes observation at nearly sea level.
The flux of electrons at 3000 m asl would be one order of magnitude higher than that at sea level. In addition, the flux of protons and neutrons depends even more strongly on altitude. Further consideration is required to discuss the altitude dependence of the background flux. Moreover, the background flux depends not only on altitude but also on the longitude and latitude of the detector site, through the rigidity cut on primary cosmic rays; this effect was not considered in our simulation. The geomagnetic effect must be properly implemented in the simulation in future studies for a more quantitative discussion. Jourde et al. (2013) experimentally demonstrated the existence of upward-going particles which enter the detector from the rear side. Our simulation also verified the existence of upward-going particles. However, it must be mentioned that the backgrounds consist not only of upward-going particles but also of downward-going particles with significant abundance. As seen in Section 3.4, the upward-going particles account for only 44 per cent of the total flux of background particles in the cos θ range [0: 0.15], and 23 per cent in [0.15: 0.25]. The flux of upward-going particles decreases with increasing elevation angle. This result is quite consistent with Jourde et al. (2013), which reports that the upward-going particles are observed mainly at low elevation angles. This work measures the bulk density of the Showa-Shinzan lava to be in the range 1.98-2.39 g cm⁻³, with an associated error (1σ) of 0.07-0.23 g cm⁻³. By contrast, Tanaka et al. (2007) performed a muography observation of the same target with emulsion detectors and obtained 2.71-2.91 g cm⁻³ with a 1σ error of 0.17 g cm⁻³ at each data point. The Showa-Shinzan lava is classified as dacite from the SiO2 content of rock samples. The bulk density of the rock samples is 2.32 g cm⁻³ (Nemoto et al. 1957). The density from the prior work significantly exceeds the sampled density.
On the other hand, the density values estimated from the ECC detector in the present work agree well with the sampled density. This shows that the accuracy of muography has been improved with the aid of the background reduction provided by the ECC detector. The detector used in the prior work was a stack of four emulsion films, so that observation would have been affected by background noise. To avoid underestimation of the density, as seen in the case of the Quartet detector, the prior work performed a background subtraction (Okubo & Tanaka 2012): the flux of background particles was estimated from the flux in nearly horizontal directions and subtracted from the observed flux in the other directions. This subtraction is based on the assumption that the flux of background particles is isotropic. However, this assumption is not correct: the background flux calculated by our simulation and measured by the emulsions depends strongly on the incident angle. In conclusion, the origin of the background noise in muography has been confirmed to be (i) protons, (ii) muons and electrons produced by hadronic interactions of protons and neutrons in the atmosphere and the topographic material, and (iii) muons and electrons scattered in the atmosphere. The flux of these particles exceeds the weak flux of penetrating muons at low energy (<1 GeV). Thus the only way to achieve accurate muography is to impose an energy threshold on the particles entering the detectors. It is not sufficient to reject upward-going particles by TOF analysis, since the background contains downward-going components as well. Nor is particle identification alone sufficient, since the background contains low-energy muons produced by hadronic interactions of protons and neutrons. The present work confirms that a desirable energy threshold is 1 GeV when the observation is performed at nearly sea level and the target thickness is approximately 1 km.
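The level of agreement between the observed Quartet-low-GD/ECC flux ratios and the Monte Carlo expectation, quoted in Section 4, can be made explicit:

```python
# Observed Quartet(low GD)/ECC flux ratios versus the MC expectation.
observed = {"R2": 5.2, "R3": 7.3}
expected = {"R2": 4.0, "R3": 7.4}

rel_dev = {k: abs(observed[k] - expected[k]) / expected[k] for k in observed}
# R2 deviates by ~30 per cent, within the ~40 per cent systematic
# uncertainty of the simulation; R3 agrees to within ~2 per cent.
```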
The result will be of importance for detector design in future works, not only for emulsion detectors but also for other types of detector. At the same time, previous muography works should be reviewed from the viewpoint of whether or not they have been affected by background particles. ACKNOWLEDGEMENTS For this study, we used the computer systems of the Earthquake and Volcano Information Center of the Earthquake Research Institute, University of Tokyo. This work greatly benefited from discussions and encouragement from H. K. M. Tanaka and S. Okubo (The University of Tokyo). RN has been supported by a fellowship and grant from JSPS (DC2-25-9317) during the essential part of this study. We would like to thank N. Mitsuhiro (F-lab, Nagoya University), C. Bozza (Salerno University) and V. Tioukov (INFN, Napoli) for their support on the emulsion experiments. We would like to thank H. Oshima and T. Maekawa (Hokkaido University) for their support with the field work at Mt. Showa-Shinzan. We thank two anonymous reviewers for valuable comments to improve the manuscript.
7,868
2016-08-01T00:00:00.000
[ "Physics" ]
Data Offloading Security Framework in MCLOUD With the popularity of smartphones and the upsurge of mobile applications, mobile devices have become a prevalent computing platform. The MCLOUD paradigm addresses the limited resources of mobile systems and the unavailability of the Internet through several offloading techniques; nevertheless, mobile users are still reluctant to adopt this paradigm due to security concerns about their data. This research provides security for users' tasks while still minimizing the total computational time of the MCLOUD. Tasks were programmatically broken down into smaller subtasks, encrypted using a homomorphic encryption system, and assigned to slave devices that were admitted into the MCLOUD during the resort-point process over a Wi-Fi Direct wireless network. Using a test bed of three (3) smartphone devices, several task sizes ranging from 2 KB to 20 KB were used to test the implemented security framework, and the time taken to complete the computation of each task size was recorded for both the MCLOUD and standalone architectures. Comparing the total execution times shows that computation involving security on the MCLOUD takes less time than computation on standalone devices. For the 4 KB task size, the MCLOUD spent 48,500 microseconds while the standalone device spent 241,000 microseconds; for the 6 KB task size, 99,500 versus 453,000 microseconds; and for the 8 KB task size, 109,500 versus 553,000 microseconds, which is approximately five (5) times faster than standalone execution. The MCLOUD framework was observed to have a lower computation time, a decreasing computational-time ratio, and a higher throughput per second. It was discovered that computation on distributed encrypted data (MCLOUD) using homomorphic encryption is both safer and faster than standalone single-device computation. 
Background MCLOUD is an offspring of mobile cloud computing: a cloud of mobile devices connected together, with or without the availability of the Internet, for the purpose of sharing resources for various reasons such as computation and storage. As a descendant of mobile cloud computing, MCLOUD is also a hybrid of cloud computing technology, mobile computing technology [1], and wireless networking, aiming to deliver good computational resources to mobile users. The ultimate goal of MCLOUD is to enable the offloading and execution of powerful mobile applications across a large number of mobile devices with a good and satisfying user experience, thus saving the limited resources of mobile phone users. According to [2], the term "cloud" refers to a hosted service of a configurable distributed resource pool of networks, servers or storage over the Internet, where a user can access an application in a "pay as you go" manner. The "cloud" can be viewed as an unlimited resource pool. Mobile cloud computing can be delivered, first, when a mobile device uses cloud-based services by means of mobile apps installed locally on the device and, second, when cloud-based applications run inside the user's mobile devices. Mobile cloud technology basically focuses on the user's mobile devices accessing cloud-based services through wireless network communications [3]. This is achieved when mobile applications are deployed (i.e., mobile offloading) to cloud servers or when cloud-based applications run on other users' mobile devices. According to [4], computation offloading is an effective way to improve the performance of a smartphone application, as well as to reduce its battery power consumption, by executing some parts of the application on a remote server through code offloading. There are two basic approaches to data offloading: first, offloading in a static environment, and second, offloading in a dynamic environment [5,6]. 
In static offloading, the programmers pre-determine the application components, whereas in dynamic offloading (also called context-aware offloading), the execution location of the components is not pre-determined. According to [7], there are three major architectures under mobile cloud computing: • Device-to-cloud architecture • Cloud-to-device architecture • Device-to-device architecture. This research work focuses mainly on the third tier of the mobile computing architecture and its security. Journal of Computer Sciences and Applications Cryptography in data security is the process of transforming data using an algorithm to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key, and vice versa, to ensure privacy, confidentiality and integrity [8]. Literature Review Traditional cloud computing can be categorized into three (3) tiers, namely infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS), according to the NIST standard. Research has been carried out in the field of mobile cloud computing over the decades, and according to [7] mobile cloud computing is categorized into three specific classes of architecture according to their mode of use and the intent they deliver. They are: • Mobile cloud "Device to Cloud" (D2C) architecture, • Mobile cloud "Cloud to Device" (C2D) architecture and • Mobile cloud "Device to Device" (D2D) architecture. [9] in their research also classified mobile cloud computing into two (2) architectures, namely • Non-cloudlet architecture and • Cloudlet architecture. Based on the existing work reviewed and the mode of processing, we categorize the mobile cloud (MCLOUD) into two (2) main architectures, namely • Mobile cloud "Device to Cloud" (D2C) architecture and • Mobile cloud "Device to Device" (D2D) architecture. 
Device to Cloud (D2C) Architecture Under this category, users' mobile applications run in the cloud, and mobile devices become part of a larger cloud environment, connecting to remote infrastructure clouds through an Internet connection. Unlike traditional cloud-based applications (e.g., via a desktop or server), the mode of using the devices is different in this case: users connect their mobile devices to the remote cloud servers through mobile networks (e.g., wireless access points). Device to Device (D2D) Architecture In this category of architecture, mobile devices instantiate and share their own locally created network connections, thus forming an environment for sharing information with other users within proximity [10]. In this approach, mobile devices are able to convey information directly to nearby mobile devices without any help from an overlay network, as seen in Figure 2. [10] worked on content distribution in the mobile cloud, focusing mainly on the smartphone-to-smartphone platform of communication, but unlike our work they did not address the security of the content data. Methodology Traditional mobile cloud computing architectures utilize a remote cloud environment for storage and computation; in contrast, the MCLOUD paradigm utilizes neighboring mobile devices for temporary storage and computation. We designed this framework by using Wi-Fi Direct to discover and create a cluster of smartphone devices D_1 … D_n. We programmatically break a task T down into n subtasks t_1 … t_n using the list of smartphones admitted into the MCLOUD, and then apply homomorphic encryption to the subtasks before assigning them to the devices in the MCLOUD, using the list generated at the resort point. 
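The text does not pin the framework to a specific scheme beyond "homomorphic encryption". As an illustrative sketch only, the toy (insecure, small-key) textbook Paillier cryptosystem below shows the split-encrypt-offload-combine flow: the master splits a task into chunks, slaves combine ciphertexts of their chunks without seeing any plaintext, and only the master can decrypt.

```python
import math, random

# Minimal textbook Paillier cryptosystem (tiny key -- for illustration only,
# not secure). Paillier is additively homomorphic: multiplying ciphertexts
# yields a ciphertext of the sum of the plaintexts.
p, q = 293, 433                      # small demo primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Master device: break task T (here, summing a data vector) into chunks and
# encrypt each chunk before offloading to three hypothetical slave devices.
data = [12, 7, 30, 5, 9, 21]
chunks = [data[i::3] for i in range(3)]
enc_chunks = [[encrypt(x) for x in chunk] for chunk in chunks]

# Each slave multiplies the ciphertexts it received: by the homomorphic
# property this is an encrypted partial sum, computed without decryption.
partials = []
for ec in enc_chunks:
    acc = 1
    for c in ec:
        acc = (acc * c) % n2
    partials.append(acc)

# Master decrypts the encrypted partial results and combines them.
total = sum(decrypt(pc) for pc in partials) % n
print(total)  # equals sum(data) = 84
```

The slave-side step never sees a plaintext value, which is the security property the framework relies on when distributing subtasks to admitted devices.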
The cooperating devices receive their apportioned chunks, which execute automatically thanks to the pre-installed MCLOUD application on each cooperating device, and return the results to the master device in an encrypted state that can only be decrypted by the master device. Result Using a test bed of three (3) smartphone devices, several task sizes ranging from 2 KB to 20 KB were used to test the implemented security framework, and the time taken to complete the computation of each task size was recorded for both the MCLOUD and standalone architectures. Comparing the total execution times shows that computation involving security on the MCLOUD takes less time than computation on standalone devices. For the 4 KB task size, the MCLOUD spent 48,500 microseconds while the standalone device spent 241,000 microseconds; for the 6 KB task size, 99,500 versus 453,000 microseconds; and for the 8 KB task size, 109,500 versus 553,000 microseconds, which is approximately five (5) times faster than standalone execution. The MCLOUD framework was observed to have a lower computation time, a decreasing computational-time ratio, and a higher throughput per second. Conclusion This research presented a concept of security in mobile device computing (MCLOUD), considering the device-to-device architecture of mobile cloud computing. This secure offloading framework for smartphone devices should positively affect subscribers' attitudes towards MCLOUD mobile device computing and encourage its adoption by smartphone users, given the security of their work and the considerably shorter time taken by distributed computation compared with standalone computation, since all smartphones are resource-limited.
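As a quick sanity check on the reported figures, the implied speedups can be computed directly from the timings quoted above; the "approximately five times" claim corresponds most closely to the 8 KB case.

```python
# Reported execution times in microseconds (MCLOUD, standalone) per task
# size, as quoted in the text, and the implied speedup of MCLOUD over
# standalone execution.
timings = {"4KB": (48_500, 241_000),
           "6KB": (99_500, 453_000),
           "8KB": (109_500, 553_000)}

speedups = {size: standalone / mcloud
            for size, (mcloud, standalone) in timings.items()}

for size, s in speedups.items():
    print(f"{size}: {s:.2f}x faster on MCLOUD")  # all close to 5x
```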
1,965.6
2017-04-18T00:00:00.000
[ "Computer Science" ]
Coffee and cashew nut dataset: A dataset for detection, classification, and yield estimation for machine learning applications Conventional methods of crop yield estimation are costly, inefficient, and prone to error, resulting in poor yield estimates. This affects the ability of farmers to appropriately plan and manage their crop production pipelines and market processes. There is therefore a need to develop automated methods of crop yield estimation. However, the development of accurate machine-learning methods for crop yield estimation depends on the availability of appropriate datasets. There is a lack of such datasets, especially in sub-Saharan Africa. We present curated image datasets of coffee and cashew nuts acquired in Uganda during two crop harvest seasons. The datasets were collected over nine months, from September 2022 to May 2023. The data was collected using a high-resolution camera mounted on an Unmanned Aerial Vehicle (UAV). The datasets contain 3000 coffee and 3086 cashew nut images, constituting 6086 images in total. Annotated objects of interest in the coffee dataset consist of five classes, namely: unripe, ripening, ripe, spoilt, and coffee_tree. Annotated objects of interest in the cashew nut dataset consist of six classes, namely: tree, flower, premature, unripe, ripe, and spoilt. The datasets may be used for various machine-learning tasks including flowering intensity estimation, fruit maturity stage analysis, disease diagnosis, crop variety identification, and yield estimation. 
Value of the Data • Flowering intensity estimation. Flowering represents an important stage in coffee and cashew farming since it affects crop yield. It has a significant impact on yield in that flowering intensity is positively correlated with the amount of crop yield. Therefore, flowering intensity could be an important predictor of crop yield [1]. Our dataset contains a flowering class, which can be used to train machine learning models to estimate the flowering intensity of cashew crops. • Fruit detection. In crop yield estimation using computer vision techniques, accurately detecting objects of interest, e.g., fruit pods, is critical. Our dataset can be used to train machine learning models for the automated detection of coffee cherries and cashew 
apples. The dataset may also facilitate the development of new algorithms for small object detection, currently an open research problem in computer vision, since coffee cherries and cashew apples are relatively small. • Fruit maturity stage analysis. Our datasets contain coffee cherries and cashew apples at various stages of growth or maturity. It is these maturity stages that constitute the object classes in the datasets. The dataset can be used to build machine learning models for automated analysis of fruit maturity stages for various purposes, including harvest scheduling. • Coffee crop variety identification. Our coffee dataset features the two main varieties grown in Uganda, namely Robusta (Coffea canephora) and Arabica (Coffea arabica) [2]. Within the Robusta variety, there are at least ten Coffee Wilt Disease-resistant (CWD-r) clones, also known as KR lines. These clones are also resistant to leaf rust, tolerant to Red Blister Disease (RBD), have larger coffee bean sizes, are higher yielding, and have better cup quality. This means that our dataset can potentially be used for building models for the automated identification of Robusta coffee varieties. • Crop yield estimation. Our coffee and cashew datasets may also be used for yield estimation using machine learning methods, similar to the work in [3][4][5]. Various machine-learning approaches may be used for yield estimation with this dataset, including object detection, image-based regression, and vegetation index-based methods. • Fruit disease diagnosis. Coffee cherries and cashew apples belong to three and five classes, namely unripe, ripe, and spoiled, and flower, immature, unripe, ripe, and spoiled, respectively. Images belonging to the spoiled class in each dataset may be used for coffee and cashew fruit disease diagnosis. 
Data Description The datasets presented in this work consist of high-resolution images of coffee and cashew plants acquired using Unmanned Aerial Vehicle (UAV) equipment from small- and large-scale farms across Uganda. Images range approximately between 10 MB and 12 MB in size, are approximately 4000 by 3200 pixels in dimension, and have a resolution of 72 dots per inch (DPI). Each image is annotated with multiple bounding boxes, each enclosing an object of interest. Each image is accompanied by metadata, including the date (timestamp) and the geographic location (latitude and longitude) where it was captured. The majority of the coffee images capture the full height and breadth of the tree (or plant) from two opposite lateral sides. A few of the images involved imaging the same tree from an overhead position covering the entire canopy. For cashew trees, images were captured from different lateral sides (no top-view images). The image data for coffee and cashew nuts have been meticulously annotated. These annotated datasets, stored in the YOLO (You Only Look Once) format [6], are readily accessible on the Hugging Face platform. Tables 1 and 2 provide the number of annotated object instances per class in the coffee and cashew datasets, while Figs. 1 and 2 show sample images for the two crops. The five classes in the coffee image dataset, as shown in Fig. 1, have the following class IDs and labels: 0: Unripe, 1: Ripening, 2: Ripe, 3: Spoilt, and 4: Coffee_tree. Our coffee and cashew nut datasets for machine-learning yield estimation are the first of their kind, and we did not come across any similar publicly available dataset. Existing datasets such as [7][8][9] consist of coffee leaf images designed for nutritional deficiency and/or plant disease detection and classification. Our dataset was collected using UAV equipment, while the studies cited above used smartphone cameras for data collection. 
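Since the annotations are distributed in YOLO format, a minimal reading sketch might look as follows. The label text and its layout here are hypothetical example values; the class-ID mapping follows the coffee class list given in the dataset description.

```python
# YOLO-format labels: one object per line, "class_id x_center y_center
# width height", with all coordinates normalised to [0, 1] relative to the
# image size. Class IDs follow the coffee mapping from the description.
COFFEE_CLASSES = {0: "Unripe", 1: "Ripening", 2: "Ripe",
                  3: "Spoilt", 4: "Coffee_tree"}

# Hypothetical contents of one label file (values invented for illustration).
label_text = """\
0 0.512 0.430 0.031 0.028
2 0.268 0.655 0.027 0.025
4 0.500 0.500 0.930 0.980
"""

def parse_yolo(text, class_names):
    """Parse YOLO label lines into a list of named bounding boxes."""
    boxes = []
    for line in text.splitlines():
        cid, xc, yc, w, h = line.split()
        boxes.append({"label": class_names[int(cid)],
                      "x_center": float(xc), "y_center": float(yc),
                      "width": float(w), "height": float(h)})
    return boxes

parsed = parse_yolo(label_text, COFFEE_CLASSES)
for box in parsed:
    print(box["label"], box["x_center"], box["y_center"])
```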
Field data collection The datasets were collected from small- and medium-scale farms in significant coffee- and cashew-growing regions of Uganda, including the southern, central, eastern, and northern parts of the country. However, most of the coffee images were collected from two demonstration farms operated by the Uganda National Coffee Research Institute (NaCORI) in Kituza village, Mukono district, central Uganda. NaCORI is a governmental agency responsible for researching and developing new coffee varieties. Fig. 3 shows one of the demonstration farms (i.e., Block 13) where most of the coffee images were collected. The choice of farms, and of the purposively sampled plants from which data was collected, was advised by agricultural experts who were part of the field data collection team. The coffee and cashew nut image data was collected in three phases. In the first phase, coffee image data was collected from the Bukomansimbi, Kyotera, Mukono, Buikwe, and Masaka districts in Central Uganda between November and December 2022. In the second phase, coffee image data collection took place in June 2023 in the Eastern Uganda districts of Luuka, Jinja, Mbale, and Sironko. In the third phase, cashew nut image data was collected from the Lira, Abim, and Nakasongola districts in March 2023. Data collection was carried out during the peak of the harvest season(s) for each crop. The details of images collected per region are shown in Table 3. Materials and Methods Preparatory activities were carried out before field data collection. These included obtaining authorization letters, designing the data collection guidelines, training data collectors, and pilot fieldwork. This was done to prepare the field data collection team, to test equipment and data collection instruments, and to evaluate sample images for quality assurance. 
The imaging equipment consisted of a UAV, commonly referred to as a drone. Specifically, we used a DJI Mini 3 Pro drone equipped with a high-quality camera that had a 48 MP 1/1.3-inch CMOS sensor, a lens with an aperture of f/1.7 and a focus range of 1 m to ∞, a shutter speed of 2 to 1/8000 s, and an ISO range of 100-6400 (Auto and Manual). A custom drone flight strategy was developed and used. This included using manual flight plans, flying at low altitudes and at close distances of about 1 m from coffee and cashew trees, adjusting camera orientation for optimal exposure and visibility of objects of interest, optimal spatial resolution, and flight speed. Images were primarily collected under optimal weather conditions for flying a UAV for farm-based data acquisition, considering natural illumination (sunshine), precipitation, temperature, cloud cover, wind speed, and humidity. This was done to ensure that the resulting images were of high quality. Multiple images (at least three) were acquired for each purposively sampled coffee and cashew tree, taken from different viewpoints including from the top and opposite lateral sides. Full tree height and breadth images and close-range (approx. 1 m) images focused on coffee cherries and cashew apples were acquired (Fig. 4). Data preprocessing and annotation We conducted thorough data cleaning, eliminating blurry and overexposed images while resolving any inconsistencies. The cashew data was labelled using an online annotation tool called Makesense AI. The annotated data was saved in YOLO format [7] with six class IDs representing the cashew labels: 0: Tree, 1: Flower, 2: Premature, 3: Unripe, 4: Ripe, and 5: Spoilt, based on the categorization in [10]. Fig. 5 shows an example of cashew nut image annotation. 
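The text does not state how blurry images were identified during cleaning; one common heuristic is the variance of the Laplacian, sketched below in pure Python on tiny synthetic "images". The metric and the example data are illustrative assumptions, not the authors' actual procedure.

```python
# Blur check via variance of the Laplacian: sharp images contain strong
# edges and therefore a high-variance Laplacian response; blurry images
# score low. Images are plain lists of grayscale rows here; in practice
# this is usually done with an image library such as OpenCV.
def laplacian_variance(img):
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at interior pixels.
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0, 0, 255, 255]] * 4     # hard vertical edge -> high variance
blurry = [[0, 85, 170, 255]] * 4   # smooth gradient -> near-zero variance

print(laplacian_variance(sharp) > laplacian_variance(blurry))
```

Images scoring below some empirically chosen threshold would be flagged for removal during cleaning.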
For the coffee image data, the coffee specialist from NaCORI expertly handled the annotation process, using an offline tool called VGG Image Annotator (VIA) [8,11] to annotate the images. The annotated coffee data was saved in YOLO format with 5 class IDs representing the coffee labels: 0: Unripe, 1: Ripening, 2: Ripe, 3: Spoilt, and 4: Coffee_tree, based on the categorisation in [12]. Fig. 6 shows an example of coffee image annotation. Fig. 1. Sample coffee images showing the different coffee class labels. Fig. 3. Aerial view of the Block 13 demonstration farm at the National Coffee Research Institute (Latitude 0°15′30.696″ N, Longitude 32°47′25.266″ E) where the coffee images were collected. Fig. 4. A coffee tree in Block 13 with a label for image data collection. © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Table 1. Number of annotated object instances per class in the coffee dataset. Table 2. Number of annotated object instances per class in the cashew dataset. Table 3. Regions of Uganda where the current datasets were collected.
2,584.4
2023-12-01T00:00:00.000
[ "Computer Science", "Agricultural and Food Sciences" ]
A Novel Interval-Valued q-Rung Dual Hesitant Linguistic Multi-Attribute Decision-Making Method Based on Linguistic Scale Functions and Power Hamy Mean Interval-valued q-rung dual hesitant linguistic (IVq-RDHL) sets are widely used to express the evaluation information of decision makers (DMs) in the process of multi-attribute decision-making (MADM). However, the existing MADM method based on IVq-RDHL sets has obvious shortcomings: the operational rules of IVq-RDHL values have some weaknesses, and the existing IVq-RDHL aggregation operators are incapable of dealing with some special decision-making situations. In this paper, after analyzing these drawbacks, we propose operations for IVq-RDHL values based on a linguistic scale function. We then present novel aggregation operators for IVq-RDHL values based on the power Hamy mean, introducing the IVq-RDHL power Hamy mean operator and the IVq-RDHL power weighted Hamy mean operator. Properties of these new aggregation operators are also studied. On these foundations, we further put forward a MADM method which is more reasonable and rational than the existing one. Our proposed method not only provides a series of more reasonable operational laws but also offers a more powerful manner of fusing attribute values. Finally, we apply the new MADM method to the practical problem of patient admission evaluation. The performance and advantages of our method are illustrated in a comparative analysis with other methods. Introduction Multi-attribute decision-making (MADM) is a frequently used method for determining choices in daily life. This is because most real-life decision-making problems are very complicated, and to make a wise decision, decision makers (DMs) have to evaluate all the feasible alternatives from multiple aspects before determining the ranking order of alternatives [1][2][3][4][5][6][7][8][9][10]. 
There are many methods to obtain the rankings of alternatives in the MADM framework, and the aggregation operator (AO) is a powerful technique that helps DMs acquire the optimal or best alternative. An AO is a special function that integrates individual attribute values into a collective one. By ranking the comprehensive evaluation values of candidates, the corresponding ranking of alternatives is determined. Increasingly complex realistic decision-making problems have prompted many scholars and scientists to realize the importance of studying and exploring the interaction between attributes. This is because there exist complex interrelationships between attributes, which should be considered along with the attribute values when obtaining the comprehensive evaluation ranks of alternatives. Considering this, the Bonferroni mean (BM) [11] and Heronian mean (HEM) [12] were proposed to capture the interrelationship among interacting attributes, and both have been widely used by scholars. However, the existing MADM method based on IVq-RDHL sets has two main drawbacks. First, the existing operational rules of IVq-RDHL values do not take the semantics of linguistic terms into account. (In Section 3, we analyze the shortcomings of these operational rules in detail.) Second, the method based on IVq-RDHL Maclaurin symmetric mean operators fails to consider the impact of DMs' unreasonable evaluation values on the final decision result, although those operators do have the capacity to capture the interrelationship among attributes. More and more evidence shows that, while considering the impact of DMs' evaluation values, the relationship among attributes can also be considered [40][41][42][43][44][45]. Based on the above analysis, a MADM method under IVq-RDHL sets is proposed in this paper. The main motivations of this study are as follows: (1) Firstly, we introduce operational laws for IVq-RDHL values based on linguistic scale functions to overcome the shortcomings of the existing operational rules. (We analyze in detail why the novel operational rules are more effective and rational in Section 3.) 
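To make the linguistic-scale-function idea behind motivation (1) concrete, the sketch below compares a uniform LSF with a non-uniform one whose semantic gaps vary with the linguistic subscript. The functional forms are illustrative assumptions, not the paper's own equations.

```python
# Two linguistic scale functions (LSFs) over an LTS {s_0, ..., s_t}. f1 is
# the uniform LSF (equal semantic gaps between adjacent terms); f2 is a
# non-uniform LSF whose gaps are larger near the ends of the scale. Both
# map subscripts into [0, 1]; the exact form of f2 is an illustrative
# choice, not the paper's Equation (9) or (10) verbatim.
t = 6  # subscripts 0..6

def f1(i):
    return i / t

def f2(i, a=1.4):
    if i <= t / 2:
        return (a ** (t / 2) - a ** (t / 2 - i)) / (2 * a ** (t / 2) - 2)
    return (a ** (t / 2) + a ** (i - t / 2) - 2) / (2 * a ** (t / 2) - 2)

for f in (f1, f2):
    gaps = [round(f(i + 1) - f(i), 4) for i in range(t)]
    print(gaps)  # f1: all gaps equal; f2: gaps vary with the subscript
```

Operational laws built on such an f, rather than on raw subscripts, let the model reflect a DM's feeling that, say, "extremely poor" to "very poor" is a larger step than "good" to "very good".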
(2) Secondly, considering the good performance of the PHM in fusing fuzzy information, we generalize the PHM to IVq-RDHL sets and present AOs for IVq-RDHL values that can overcome the shortcomings of the existing AOs. (3) Thirdly, we introduce a new MADM method under the IVq-RDHL environment based on the novel AOs. In addition, a practical example of patient admission evaluation is employed to show the validity and advantages of our new method. The paper is structured as follows. We review basic notions connected with IVq-RDHL sets in Section 2. Section 3 proposes novel operations for IVq-RDHL values based on a linguistic scale function and studies their properties. Section 4 puts forward new IVq-RDHL AOs based on the PHM. Section 5 introduces a new IVq-RDHL MADM method. Section 6 presents a series of illustrative examples. Section 7 summarizes the manuscript and outlines future research. Preliminaries In this section, we briefly review the basic concepts, including IVq-RDHLSs and the Hamy mean, power average, and power Hamy mean operators. The IVq-RDHLSs Definition 1 ([35]). Let X be a fixed set and S = {s_i | i = 1, 2, …, t} be a continuous linguistic term set (LTS). An interval-valued q-rung dual hesitant linguistic set (IVq-RDHLS) D defined on X is expressed as D = {⟨x, s_θ(x), (h_D(x), g_D(x))⟩ | x ∈ X}, where h_D(x) and g_D(x) are sets of interval values in [0, 1] denoting the possible interval-valued membership and non-membership degrees of the element x ∈ X to the set D, respectively, such that r^l_D, r^u_D, η^l_D, η^u_D ∈ [0, 1] and 0 ≤ (r^u_D)^q + (η^u_D)^q ≤ 1 for [r^l_D, r^u_D] ∈ h_D(x) and [η^l_D, η^u_D] ∈ g_D(x). The triplet ⟨s_θ(x), (h_D(x), g_D(x))⟩ is called an interval-valued q-rung dual hesitant linguistic variable (IVq-RDHLV), denoted d = ⟨s_θ, (h, g)⟩ for simplicity. In addition, an IVq-RDHLV can be transformed into other fuzzy sets. For example, if q = 1, then D is an IVDHFLS; if q = 2, then D reduces to a dual hesitant interval-valued Pythagorean linguistic set; and if r^l = r^u and η^l = η^u, then D reduces to a q-rung dual hesitant linguistic set. The operational laws of IVq-RDHLVs are then as follows. Definition 3 ([35]). 
Assume that d = ⟨s_θ, (h, g)⟩ is an IVq-RDHLV; its score function S(d) and accuracy function H(d) are defined as in [35], where #h and #g represent the number of interval values in h and g, respectively. Suppose that d_1 = ⟨s_θ1, (h_1, g_1)⟩ and d_2 = ⟨s_θ2, (h_2, g_2)⟩ are any two IVq-RDHLVs; then (1) if S(d_1) > S(d_2), then d_1 is superior to d_2, denoted by d_1 > d_2; (2) if S(d_1) = S(d_2), then the accuracy values, calculated by Equation (3), are compared instead. HM, PA and PHM Operators Definition 4 ([18]). Suppose that a_i (i = 1, 2, …, n) is a collection of non-negative crisp numbers; then the PA operator is defined as PA(a_1, a_2, …, a_n) = Σ_{i=1}^{n} (1 + T(a_i)) a_i / Σ_{j=1}^{n} (1 + T(a_j)), where T(a_i) = Σ_{j=1, j≠i}^{n} Sup(a_i, a_j). Here Sup(a_i, a_j) represents the support for a_i from a_j, satisfying the conditions: (1) Sup(a_i, a_j) ∈ [0, 1]; (2) Sup(a_i, a_j) = Sup(a_j, a_i); (3) Sup(a_i, a_j) ≥ Sup(a_s, a_t) if |a_i − a_j| ≤ |a_s − a_t|. Definition 5 ([32]). Assume that a_i (i = 1, 2, …, n) is a collection of non-negative real numbers. If HM^(k)(a_1, a_2, …, a_n) = (1 / C_n^k) Σ_{1 ≤ i_1 < … < i_k ≤ n} (Π_{j=1}^{k} a_{i_j})^{1/k}, then HM^(k) is the HM operator, where k = 1, 2, …, n, C_n^k is the binomial coefficient, and (i_1, i_2, …, i_k) traverses all the k-tuple combinations of (1, 2, …, n). Definition 6 ([31]). Suppose that a_i (i = 1, 2, …, n) is a collection of non-negative real numbers. Then the PHM operator is defined as PHM^(k)(a_1, a_2, …, a_n) = (1 / C_n^k) Σ_{1 ≤ i_1 < … < i_k ≤ n} (Π_{j=1}^{k} n(1 + T(a_{i_j})) a_{i_j} / Σ_{t=1}^{n} (1 + T(a_t)))^{1/k}, where k = 1, 2, …, n, C_n^k is the binomial coefficient, and (i_1, i_2, …, i_k) traverses all the k-tuple combinations of (1, 2, …, n). In addition, T(a_i) = Σ_{j=1, j≠i}^{n} Sup(a_i, a_j), and Sup(a_i, a_j) stands for the support for a_i from a_j, satisfying the properties presented in Definition 4. Necessity of Proposing New Operations on IVq-RDHLVs Feng et al. [35] originated some operations on IVq-RDHLVs which, however, have some shortcomings. 
(1) The existing operations on IVq-RDHLVs are not closed: the calculated subscripts of the linguistic term set (LTS) in [35] easily break the predefined upper limit, so these operations are not closed. (2) The rules proposed by Feng et al. [35] assume that the semantic gap between any two adjacent LTs is always equal. However, in practical MADM problems, DMs may feel that the semantic gap changes as the subscript of the LT increases or decreases. For example, DMs may believe that the semantic gap between "extremely poor" and "very poor" is greater or smaller than that between "good" and "very good". For Equations (9) and (10), it can be seen that the absolute deviations between adjacent linguistic subscripts are different: in Equation (9), the semantic gap between "extremely poor" and "very poor" is smaller than that between "good" and "very good", whereas in Equation (10) it is greater. For more information, please refer to the literature. In particular, if ε = β = 1, then Equation (11) reduces to Equation (8). Proof. According to the operational rules of IVq-RDHLVs in Definition 8, the derivation proceeds by taking unions over r_1 ∈ h_1, r_2 ∈ h_2, η_1 ∈ g_1, η_2 ∈ g_2; from this it is obvious that Equation (16) holds for λ ≥ 0, and similarly that Equation (17) holds. Based on Definitions 7 and 8, Theorem 1 has thus been proved. Comparison Method of IVq-RDHLVs Based on LSF Based on an LSF f, the comparison method for IVq-RDHLVs is put forward as follows. Definition 9. 
Assume that d = ⟨s_θ, (h, g)⟩ is an IVq-RDHLV; then the score function and the accuracy function of d can be calculated based on the LSF f. For any two IVq-RDHLVs d_1 = ⟨s_θ1, (h_1, g_1)⟩ and d_2 = ⟨s_θ2, (h_2, g_2)⟩, the specific comparison rules are as presented in Definition 3. Distance Measure of IVq-RDHLVs In this section, we propose a new concept of the distance between two IVq-RDHLVs based on the LSF. Definition 10. Suppose that d_1 = ⟨s_θ1, (h_1, g_1)⟩ and d_2 = ⟨s_θ2, (h_2, g_2)⟩ are any two IVq-RDHLVs; then the distance between d_1 and d_2 can be defined, where #h represents the number of interval values contained in h_1 and h_2, and #g represents the number of elements that make up g_1 and g_2. Remark 2 ([35]). Assume that d_1 = ⟨s_θ1, (h_1, g_1)⟩ and d_2 = ⟨s_θ2, (h_2, g_2)⟩ are any two IVq-RDHLVs. From Definition 10, it is obvious that #h_1 = #h_2 and #g_1 = #g_2 are required, which means that h_1 and h_2 should have the same number of values, and g_1 and g_2 should have the same number of values, when calculating the distance. However, this condition cannot always be satisfied. In order to make the numbers of MDs and NMDs of two IVq-RDHFEs equal, Feng et al. [35] proposed two methods to adjust IVq-RDHFEs that fail to satisfy the condition. Inspired by this idea, we extend the shorter IVq-RDHLV to satisfy the condition by adding the largest interval values in the MD and NMD, respectively. Example 2. Assume that there are two evaluation values denoted by IVq-RDHLVs, defined on a given LTS S = {s_0, s_1, s_2, s_3, s_4, s_5}. For calculation, d_1 and d_2 can be transformed accordingly (q = 5). If we use LSF1, the distance between d_1 and d_2 can then be calculated. For two IVq-RDHLVs d_1 and d_2, d(d_1, d_2) represents the distance between d_1 and d_2 and satisfies the standard properties of a distance measure (non-negativity, symmetry, and d(d_1, d_2) = 0 if d_1 = d_2). Theorem 2. Let d_i = ⟨s_θi, (h_i, g_i)⟩ (i = 1, 2, . . . , n) be a collection of IVq-RDHLVs; then the result aggregated by the IVq-RDHLPHM operator is also an IVq-RDHLV. Proof.
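The padding rule of Remark 2 (extend the shorter IVq-RDHLV by repeating its largest interval value) can be sketched as follows. The helper names `pad` and `align` are hypothetical, and comparing intervals by midpoint is an assumed tie-break, since the text only says "the largest interval values".

```python
def pad(intervals, target_len):
    """Extend a list of [lo, hi] intervals to target_len by repeating the
    'largest' interval (largest midpoint -- an assumed interpretation of
    Remark 2, which only says 'the largest interval values')."""
    out = list(intervals)
    largest = max(out, key=lambda iv: (iv[0] + iv[1]) / 2)
    while len(out) < target_len:
        out.append(largest)
    return out

def align(h1, h2):
    """Make the MD (or NMD) sets of two IVq-RDHLVs the same length so a
    distance can be computed element-wise."""
    m = max(len(h1), len(h2))
    return pad(h1, m), pad(h2, m)
```

The same `align` step would be applied separately to the membership sets h and the non-membership sets g before evaluating the distance of Definition 10.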
From Definitions 8 and 9, the aggregated value can be obtained. Finally, when, for all i, there is only one MD and one NMD in d_i, the result reduces accordingly. Proof. When all the supports are equal, σ_ij = 1/n holds for all i. According to Theorem 2, we obtain the lower bound; similarly, we can also prove that IVq-RDHLPHM^(k)(d_1, d_2, . . . , d_n) ≤ y. Thus, the proof of Theorem 4 is completed. Next, we explore several special cases of the IVq-RDHLPHM operator when the parameter values change. Case 1. When k = 1, the proposed IVq-RDHLPHM operator is simplified to the IVq-RDHL power average (IVq-RDHLPA) operator. Moreover, if Sup(d_i, d_j) = t > 0 for all i ≠ j, then the IVq-RDHLPHM operator is simplified to the IVq-RDHL average (IVq-RDHLA) operator. Moreover, if Sup(d_i, d_j) = t > 0 for all i ≠ j, the IVq-RDHLPHM operator is simplified to the interval-valued dual hesitant linguistic HM (IVDHLHM) operator. In addition, if Sup(d_i, d_j) = t > 0 for all i ≠ j, the IVq-RDHLPHM operator is simplified to the interval-valued dual hesitant Pythagorean linguistic HM (IVDHPLHM) operator. Theorem 5. Let d_i = ⟨s_θi, (h_i, g_i)⟩ (i = 1, 2, . . . , n) be a collection of IVq-RDHLVs; then the result aggregated by the IVq-RDHLPWHM operator is also an IVq-RDHLV. The specific proof of Theorem 5 is omitted here because it is similar to the proof of Theorem 2. A MADM Method under IVq-RDHLSs For a MADM problem, DMs express their assessments with IVq-RDHLS information. Assume that there are m alternatives {A_1, A_2, . . . , A_m} to be evaluated and n attributes {C_1, C_2, . . . , C_n} to be considered in the decision-making process. The weight vector of the attributes is w = (w_1, w_2, . . . , w_n)^T, which satisfies 0 ≤ w_j ≤ 1 and ∑_{j=1}^{n} w_j = 1. When evaluating, DMs use the IVq-RDHLV d_ij = ⟨s_θij, (h_ij, g_ij)⟩ (i = 1, 2, . . . , m; j = 1, 2, . . . , n) to express their evaluation of the attribute C_j (j = 1, 2, . . . , n) of A_i (i = 1, 2, . . . , m).
Later, the overall evaluation information can be collected and a decision matrix obtained, which can be written as R = (d_ij)_{m×n}. In the following, we introduce the steps to determine the ranking of alternatives based on the IVq-RDHLPWHM operator. Step 1. Standardize the original decision values. Before aggregation, the original decision values should be standardized (cost-type attributes are converted to benefit type). Step 2. Calculate the support Sup(d_il, d_im) between the two IVq-RDHLVs d_il and d_im. Step 3. Compute T(d_ij). Step 4. Compute the power weight δ_ij associated with the IVq-RDHLVs. Step 5. Calculate the overall evaluation value d_i of each alternative A_i. Step 6. Calculate the score values of d_i (i = 1, 2, . . . , m) according to Equation (20). Step 7. Rank all the alternatives according to the score values, and choose the best alternative. A Case Study in the Assessment Indicator System of Patient Admission Evaluation With the development of society, the aging of the population, and the improvement of health awareness, people's medical needs are increasing rapidly. However, medical resources, such as beds, medical technology, and operating rooms, are limited. When scarce resources cannot accommodate a large number of hospitalized patients, a feasible solution is to prioritize the patient hospitalization list. In practical MADM problems, there are always qualitative criteria whose values can hardly be depicted by crisp values, such as the level of pain, the severity of illness, etc. IVq-RDHLs provide a new and powerful technique to represent the qualitative judgments of experts. Therefore, in order to assess the relative priorities of patients on a waiting list, we construct an evaluation index system that allows the patients with a high degree of disease severity to be hospitalized. Patient Admission Evaluation Criteria In this section, we build a patient admission evaluation index system for general patient prioritization.
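The power-weighting of Steps 2-4 above can be sketched for one alternative's row of the decision matrix. This is a minimal sketch: the expression δ_j = w_j(1 + T_j) / Σ_l w_l(1 + T_l) is the form commonly paired with power-mean operators and is assumed here, since the paper's numbered equations are not reproduced in this excerpt.

```python
def power_weights(sup_row, w):
    """Power weights for one alternative (Steps 2-4).
    sup_row[j][l] -- support Sup(d_ij, d_il) between the values the
                     alternative takes on attributes j and l
    w             -- attribute weight vector, summing to 1
    Returns delta_j = w_j(1 + T_j) / sum_l w_l(1 + T_l), where
    T_j is the total support attribute j receives from the others."""
    n = len(w)
    t = [sum(sup_row[j][l] for l in range(n) if l != j) for j in range(n)]
    num = [w[j] * (1.0 + t[j]) for j in range(n)]
    total = sum(num)
    return [x / total for x in num]
```

When all supports are equal, the power weights collapse back to the ordinary attribute weights, which matches the special cases discussed for the IVq-RDHLPHM operator.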
In practical problems, the evaluation of patient prioritization involves multiple factors and indicators, such as the level of pain, the severity of illness, the impact on the patient's life years, etc. To address this, Li et al. [47] conducted an investigation of the admission process and obtained four dimensions, namely clinical and functional disorders (C_1), expected outcomes (C_2), social factors (C_3), and the patient's basic information (C_4), as shown in Table 1. Table 1. Patient admission evaluation criteria. Clinical and functional disorders (C_1): an essential dimension of the indicator system, describing the severity of patients' diseases and the degree of treatment needed in terms of the disease characteristics. It includes disease severity, pain level, etc. Expected outcomes (C_2): the effectiveness of treatment after hospitalization from the hospital's point of view. To be precise, before admission, the hospital has the right to evaluate whether the patients may experience expected negative effects after receiving treatment, such as mortality. In this sense, it includes the difficulty of treatment, the complication probability, etc. Social factors (C_3): when considering the admission of patients, social welfare should be maximized from a moral point of view. In this regard, social factors include resource consumption during waiting periods, limitations in performing activities of daily living, and so on. Patient basic information (C_4): the basic information of the patient should be considered in the comprehensive assessment process. For example, when other conditions are the same, patients who have waited longer are given higher priority for treatment. A patient's basic information includes gender, age, waiting time under the same condition, etc. Example 3. Suppose that there are four patients A_i (i = 1, 2, 3, 4) to be considered for patient admission.
The parameters C_j (j = 1, 2, 3, 4) are employed in assessing the patients, in which C_1 represents clinical and functional disorders, C_2 represents expected outcomes, C_3 represents social factors, and C_4 represents patient basic information; their weight vector is w = (0.3, 0.3, 0.2, 0.2)^T. Doctors mainly express their assessment information on patients in 5 levels {s_1 = nothing, s_2 = low, s_3 = medium, s_4 = high, s_5 = very high}. To express the evaluation information of the patients' symptoms in detail, doctors were asked to evaluate the four patients A_i (i = 1, 2, 3, 4) from the perspective of the four parameters C_j (j = 1, 2, 3, 4) using IVq-RDHLs. Then, a decision matrix d_ij = ⟨s_θij, (h_ij, g_ij)⟩ (i = 1, 2, 3, 4; j = 1, 2, 3, 4) consisting of IVq-RDHLs can be obtained, as shown in Table 2. Step 1. After analysis, it is obvious that all attributes belong to the benefit type, so there is no need to standardize the original matrix according to Equation (42). Step 2. Calculate the supports Sup(d_il, d_im) according to Equation (43). Step 3. Calculate T(d_ij) according to Equation (44). Step 7. According to the score values S(d_i) (i = 1, 2, 3, 4), the ranking order of the patients can be determined, that is, A_2 > A_1 > A_3 > A_4, which indicates that A_2 is the optimal patient who should be admitted to the hospital. Sensitivity Analysis From Definition 12, we can find that the parameters k and q and the LSF f play an important role in calculating the final decision results. Therefore, it is necessary to explore their influence on the final score values and ranking orders of the alternatives. In this section, we explore how the parameters k and q and the LSF affect the outcome of the decision, respectively. The Impact of the Parameter q In this part, we explore the influence of the parameter q on the final results. To do so, we select different values of q to calculate the results, which are shown in Table 3.
For convenience, we assume k = 2 and choose LSF1 when aggregating. The data in Table 3 show that the score values increase as the value of q increases. Although the score values are different, the ranking order is the same, A_2 > A_1 > A_3 > A_4, and A_2 is the optimal patient who should be admitted. How to determine an appropriate value of q is then a meaningful issue. Considering this, the method proposed by Feng et al. [3] suggests that the value of q should be the smallest integer such that the sum of the q-th powers of the upper bounds of the MD and NMD does not exceed one. The Influence of the Parameter k In this part, we discuss the impact of the parameter k on the IVq-RDHLPWHM operator. We assume the parameter q and the LSF are fixed and change the value of k. Based on this idea, we calculate the final results using the IVq-RDHLPWHM operator, as shown in Table 4. From Table 4, the score values of the four patients are obviously different. Specifically, the score values decrease as the parameter k increases. However, the ranking order has not changed, i.e., A_2 > A_1 > A_3 > A_4. In practical problems, the appropriate value of k can be selected according to the preference of the DM. When the DM has a positive attitude towards the alternatives, a smaller value of k can be selected. The Influence of the LSF Obviously, the LSF f has an important impact on the final results. Therefore, we select different LSFs f to solve Example 3, and the score values are shown in Table 5. It is found from Table 5 that, with different f, the score values of the alternatives are different. However, A_2 > A_1 > A_3 > A_4 is always the ranking order of the alternatives, and A_2 is the optimal alternative. Different LSFs f represent the understanding of the semantic gap by different DMs. In reality, the appropriate LSF f can be selected according to actual needs. If the DM thinks that adjacent semantics are equal, we can choose LSF1.
If the DM has a positive attitude towards semantics, choose LSF2; otherwise, choose LSF3. All in all, the existence of the LSF demonstrates the practicality and flexibility of the new method. Table 5. Score functions and ranking orders by different LSFs (q = 4, k = 2). Validity Analysis In this section, we perform a comparative analysis between two existing methods and our method. The comparison is divided into two subsections as follows. Compared with the Method Based on the IVq-RDHLWMSM Operator In this subsection, the method based on the IVq-RDHLWMSM operator proposed by Feng et al. [35] and the method we propose are used to calculate the decision results, which are shown in Table 6. In addition, we choose different values of k when using the IVq-RDHLWMSM operator [35] and different LSFs when using the IVq-RDHLPWHM operator. Although the score values are obtained by different methods and different parameters, the ranking orders are the same. To be precise, compared with the method proposed by Feng et al. [35], our method can account for the understanding of different semantics through the adjustment of the LSF. Therefore, our method has stronger flexibility than the IVq-RDHLWMSM operator proposed by Feng et al. [35]. Table 6. Score functions and ranking orders by different methods.
Example 4 ([48]). A hospital intends to choose a supplier of medical equipment, and the choice of the supplier is affected by many factors. Suppose that there are four suppliers A_i (i = 1, 2, 3, 4) that should be evaluated based on four attributes C_j (j = 1, 2, 3, 4): the quality (C_1), the price (C_2), the service performance (C_3), and the user evaluation (C_4). The weight vector of the attributes is w = (0.29, 0.28, 0.18, 0.25)^T. When evaluating, DMs were advised to use interval-valued Pythagorean fuzzy linguistic variables (IVPFLVs) to express their opinions. The decision matrix composed of IVPFLVs is omitted here; it can be found in reference [48]. Later, two methods, based on the IVPFLWA operator [48] and the IVq-RDHLPWHM operator, were used to calculate the final ranking results. The results presented in Table 7 show that the score values of each alternative are different, but the ranking orders are the same. In other words, our method can not only deal with problems in the IVq-RDHLV environment but also solve problems in the IVPFLV fuzzy environment, which shows the usefulness and power of our proposed method. Advantages of the Proposed Method In this section, we analyze the advantages and strengths of our proposed method point by point. The Flexibility of Its Operations In most real MADM problems, DMs may have different semantic preferences. The LSF is well known for its ability to match different DMs' semantic translation requirements. Our method based on the IVq-RDHLPWHM operator also allows DMs to choose appropriate LSFs according to personal preference and the actual semantic environment for selecting the most suitable alternative. To illustrate this advantage, we use our method to solve the problem of Example 4 and obtain the final results by different LSFs (see Table 8).
It can be clearly seen from Table 8 that, although the score values obtained by different LSFs are different, the ranking orders calculated by different LSFs are exactly the same. Therefore, our method can consider more semantic gaps and has more flexibility in solving MADM problems. Its Capability of Effectively Dealing with DMs' Unreasonable Evaluation Values The IVq-RDHLPWHM operator we propose combines the PA operator and the Hamy mean operator. The PA operator is famous for its ability to weaken the influence of extreme evaluation values on the final result because it considers the power weighting of attributes. In the decision-making process, influenced by knowledge background and educational experience, DMs may feel hesitant or have a prejudice against some alternatives; as a result, they may give egregiously high or low arguments when providing assessment information. Our method can eliminate the influence of such extreme values and make the decision results more reasonable. It Powerfully Deals with the Complex Interrelationships among Multiple Attributes When Aggregating The IVq-RDHLPWHM operator combines the PA operator and the HM operator. The HM operator can handle the complex interrelationships among attributes. Further, we use the IVq-RDHLPWHM operator to solve Example 4 with different values of k and obtain the final results (presented in Table 9). Specifically, the value of the parameter k indicates how the relationships among attributes are considered in the calculation process. The final ranking of the alternatives shows that, no matter what the value of k, the ranking orders are the same. Therefore, our method is robust in dealing with MADM problems because it can flexibly handle the correlation among attributes. Table 9. Score values and ranking results of Example 4 with different values of k in the IVq-RDHLPWHM operator (q = 4). In addition, we use the method based on the IVPFLWA operator [48] and our method to solve Example 5 and obtain the final results (shown in Table 10).
Obviously, we can find that the IVPFLWA operator cannot deal with this problem, but our method can still obtain the results. Next, we analyze the reasons for this. The IVPFLWA operator can only solve problems in which the sum of the second power of the MD and the second power of the NMD is smaller than or equal to one. However, our method can break this constraint according to the definition of the IVq-RDHLPWHM operator. To be more precise, 0.9^2 + 0.5^2 = 1.06 > 1, but if we set q = 3, then 0.9^3 + 0.5^3 = 0.854 < 1. Therefore, the IVq-RDHLPWHM operator can still deal with Example 5 and obtain the ranking order A_3 > A_4 > A_2 > A_1, whereas the IVPFLWA operator cannot solve it. The parameter q lets DMs describe more information compared with IVPFLs, which provides a useful tool for DMs to express their evaluations of the alternatives more comprehensively. Using our method based on LSF1 (t = 3, k = 2, q = 4), the score values are S(d_1) = 0.0442, S(d_2) = 0.0627, S(d_3) = 0.0934, and S(d_4) = 0.0721. Summarization Based on the above analysis, we have summarized the characteristics of the existing MADM methods and displayed them in Table 11. From Table 11, it is obvious that our proposed method has more advantages in solving MADM problems than other methods. Next, we mainly analyze the following three aspects: (1) Compared with the method based on the IVq-RDHLWMSM operator proposed by Feng et al. [35], the IVq-RDHLPWHM operator can effectively handle DMs' extreme evaluation values; thereby, the decision results are more accurate and realistic. (2) In addition, the operational laws of our method are more flexible than those of the IVq-RDHLWMSM operator [35] because of the existence of the LSF. By choosing different LSFs, the IVq-RDHLPWHM operator can capture the subjective evaluations of DMs, which makes the final decision results more valuable.
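The rule of picking the smallest rung q that accommodates the evaluation values, illustrated above with 0.9 and 0.5, can be sketched in a few lines:

```python
def smallest_q(mu, nu):
    """Smallest integer q with mu**q + nu**q <= 1: the least rung that
    accommodates the largest MD upper bound mu and NMD upper bound nu.
    Assumes mu, nu are in [0, 1) and not both 1, so the loop terminates."""
    q = 1
    while mu ** q + nu ** q > 1.0:
        q += 1
    return q
```

For the pair (0.9, 0.5) this returns 3, matching the check 0.9^3 + 0.5^3 = 0.854 < 1 above.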
For example, a DM may think that the semantic gap between "extremely bad" and "very bad" is smaller than that between "very good" and "extremely good". When aggregating the evaluation values, we should then choose LSF2 to make the final decision results more reasonable. (3) Compared with the method based on the IVPFLWA operator [48], the IVq-RDHLPWHM operator can capture the complicated relationships among the attributes and use them in the calculation of the final results. In addition, the applications of the IVq-RDHLPWHM operator are wider, as it can deal with a larger information space, which shows that our method can effectively reduce the loss of evaluation information. When DMs believe that adjacent semantics are not equal, the IVq-RDHLPWHM operator based on the LSF proposed in this article can handle the issue well. Moreover, the IVq-RDHLPWHM operator can also handle MADM problems under conditions of complex interrelationships among attributes and extreme values provided by DMs. All in all, our method is more robust and effective than existing methods when dealing with MADM problems. Conclusions In this paper, we introduced a new MADM method for the IVq-RDHL fuzzy environment. Firstly, we proposed new operational rules of IVq-RDHLs that can satisfy different DMs' semantic translation requirements. Secondly, inspired by the idea of the PHM operator and IVq-RDHLs, the IVq-RDHLPHM operator and the IVq-RDHLPWHM operator were proposed, which can not only reduce the negative influence of extreme values but also consider the interrelationships among attributes. Thirdly, we put forward a method based on the IVq-RDHLPWHM operator and showed the main steps for dealing with MADM problems that involve IVq-RDHL fuzzy information. Finally, numerical examples were used to demonstrate the validity of our method, and comparative analyses were used to explain the power of our methods.
After the comparative analysis, it is obvious that the proposed IVq-RDHLPHM and IVq-RDHLPWHM operators provide new solutions for MADM problems. They can not only deal with DMs' different understandings of the semantics between LTs but also capture the relationships among attributes and eliminate the influence of extreme values on the final decision. In the future, we will extend the decision-making methods to more real-world MADM problems. In addition, combining IVq-RDHLs with traditional multi-criteria decision-making methods (such as TOPSIS, AHP, etc.) to propose novel and powerful MADM methods is a promising research direction. Moreover, we will explore more methods for IVq-RDHL information and apply them in modern, realistic decision-making situations. Author Contributions: Conceptualization, X.S.; formal analysis, X.F. and J.W.; funding acquisition, J.W.; methodology, X.S. and X.F.; supervision, J.W.; validation, X.S. and X.F.; writing original draft, X.S. and X.F. All authors have read and agreed to the published version of the manuscript.
Hardness Prediction in Quenched and Tempered Nodular Cast Iron Using the Hollomon-Jaffe Parameter : The Hollomon-Jaffe parameter is usually used to establish an equivalence between time and temperature in a tempering treatment, but not to predict the hardness of the alloy after the treatment. In this paper, this latter possibility has been studied. A group of cast iron samples was annealed and cooled at different rates in order to obtain samples with three different hardness values. These samples were tempered using different times and temperatures. The Hollomon-Jaffe parameter was calculated for each case, and a relationship based on a logistic function between that parameter and the final hardness was established. This relationship was found to depend on the initial hardness and the lowest hardness achievable. Introduction The heat treatment known as tempering, which consists of heating the hardened material to a temperature below the lower critical temperature, is a common procedure carried out after quenching in order to improve the toughness and ductility of an iron alloy with a martensitic microstructure, including welds [1][2][3] and the newest generations of steels [4,5]. Although the cost is a hardness reduction, the benefits clearly outweigh the inconveniences, as the material properties are adapted to the in-service demands. Furthermore, the combination of a quenching and tempering treatment is technically much easier to perform than a quenching where the cooling rate must be accurately controlled or a certain temperature must be maintained during a certain period of time (e.g., austempering). The two main parameters of the process are the temperature and the time the material is held at that temperature, these two parameters being complementary and interchangeable. This means a lower temperature can be compensated by a longer soaking time, and a higher temperature should be accompanied by a shorter soaking time if the same hardness reduction is to be obtained.
Several equations have been proposed as a means to study the combined effect of time, temperature and other variables [6][7][8], although the best known and most used worldwide is the Hollomon-Jaffe equation [9], which is expressed as: TP = T (C + log t) × 10⁻³ (1) where TP is the Hollomon-Jaffe parameter (also known as the Larson-Miller [10] or the tempering parameter), T is the temperature in kelvin, t is the soaking time in hours, and C is a constant. This equation was derived from the well-known Arrhenius equation [11], which in turn is the simplest form of the Van't Hoff equation [12,13]. The importance of Equation (1) lies in the fact that there is a relationship between the value of TP and the decrease in hardness, which, in turn, relates to the mechanical properties of the material. The selection of a correct value for the constant C in Equation (1) can be important for the use of the Hollomon-Jaffe parameter. Initially, a dependence of C on the carbon content of the steel was proposed, and a mean value of 20 (with time expressed in hours) was suggested [14] shortly after. Although this value has been widely used [15][16][17][18], even for new steels [19,20], other values have also been proposed for different alloys, basically through the fitting of experimental data [16,[21][22][23]. Even a negligible influence of the value of C on the applicability of the tempering parameter has been suggested [24], which could explain the aforementioned wide and appropriate use of a value of 20 for C. Nevertheless, some authors have stated that the use of the Hollomon-Jaffe parameter is not suitable for all iron alloys [25]. Cast irons are iron alloys with a high carbon content that solidify with a eutectic and are used to manufacture metallic elements using casting technology.
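The time-temperature equivalence implied by Equation (1) can be sketched directly. The 10⁻³ scaling is an assumption made so that the computed values land in the 12.3-22.6 range reported later for Table 2; some texts omit it and quote TP values a thousand times larger.

```python
from math import log10

def hollomon_jaffe(T_kelvin, t_hours, C=20.0):
    """Tempering parameter TP = T * (C + log10 t) * 1e-3 (Equation (1));
    the 1e-3 factor is assumed to match the paper's reported TP range."""
    return T_kelvin * (C + log10(t_hours)) * 1e-3

def equivalent_time(TP, T_kelvin, C=20.0):
    """Soaking time (hours) at T_kelvin that yields the same TP,
    i.e. the time-temperature trade-off described in the introduction."""
    return 10.0 ** (TP * 1e3 / T_kelvin - C)
```

For instance, one hour at 600 K gives TP = 12.0, and inverting that TP at the same temperature returns the original one-hour soak.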
The vast majority of these alloys are characterized by a microstructure of graphite in a ferrous matrix and can be subjected to heat treatments such as quenching, although their high carbon content imposes some limitations, and oil quenching is always used except for surface hardening, where water can also be used. As the combination of quenching and annealing is cheaper and more controllable than processes like austempering or martempering, it should be preferred if the mechanical characteristics provided by austempering are not needed or other problems, like cracking during quenching, are not a drawback. The main aim of this work has been to study the applicability of the Hollomon-Jaffe parameter TP to the tempering of nodular cast iron after quenching, to determine the best value for the constant C, and to use TP to predict the hardness of the samples after a hardening and tempering treatment. The results show this parameter is useful in the study of the tempering process over a wide range of C values and that the final hardness of the cast iron can be estimated from the initial hardness of the alloy and the Hollomon-Jaffe parameter. Materials and Methods The material used in this investigation was a trapezoidal bar with a section of 3.5 cm². This bar was made of a GJS-400-15 as-cast nodular iron alloy whose composition can be seen in Table 1. Its microstructure is shown in Figure 1. It contains both nodular and vermicular graphite (G), around 70% nodular, in a ferritic-pearlitic matrix where pearlite accounts for 26% of the matrix. Its carbon equivalent (CE), calculated using Equation (2) [26], ranges from 3.93 to 4.57: CE = %C + 0.28 · %Si + 0.007 · %Mn + 0.303 · %P + 0.033 · %Cr + 0.092 · %Cu + 0.011 · %Mo + 0.054 · %Ni (2) The hardness of this alloy was 87.6 HRB (181.7 HV), which implies a minimum ultimate tensile strength of 390 MPa and a minimum yield strength of 250 MPa according to the specifications of EN 1563.
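Equation (2) is a straightforward weighted sum and can be checked numerically; the composition used in the example below is a made-up illustration, not the alloy of Table 1.

```python
def carbon_equivalent(C, Si, Mn=0.0, P=0.0, Cr=0.0, Cu=0.0, Mo=0.0, Ni=0.0):
    """Carbon equivalent per Equation (2); all arguments are in wt.%."""
    return (C + 0.28 * Si + 0.007 * Mn + 0.303 * P
            + 0.033 * Cr + 0.092 * Cu + 0.011 * Mo + 0.054 * Ni)
```

With a hypothetical 3.6 wt.% C and 2.5 wt.% Si and all other elements at zero, the formula gives CE = 3.6 + 0.28 · 2.5 = 4.3, inside the 3.93-4.57 band only when the remaining elements contribute as well.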
The minimum elongation for this cast iron is 15%. The bar was cut into samples of 15 mm width. These samples were separated into three groups of 20 samples each in order to obtain three different hardness groups after a heat treatment: • Group 1: Samples were austenized at 880 °C for 15 min and quenched in water. Oil is usually preferred so as to prevent cracking, but the small size of the samples avoided that problem even when quenching in water. • Group 2: Samples were austenized at 880 °C for 15 min and cooled under an air flow (approximately 24 m/s) at room temperature. This airflow was obtained using two small air blowers placed 100 mm from the samples. • Group 3: Samples were austenized at 880 °C for 5 h and quenched in water. This long period of treatment led to a partial dissolution of the cementite in the pearlitic fraction of the matrix, which becomes graphitized. The resulting lower martensite content of the matrix after quenching also implies a lower hardness than for group 1. The applied austenizing treatments do not follow the usual recommendation of around 1 h per 25 mm of section [27], so a complete austenization was not guaranteed for the whole samples. Nevertheless, some exploratory quenching tests were done at 880 °C with water quenching and soaking times of 10, 15, 20, 25 and 30 min and, except for the 10-min samples, all of them showed the same mean hardness at the surface. As the objective was only to obtain three different groups of samples with different hardness at the surface, and not to obtain a homogenized or fully austenized microstructure, the lowest soaking time congruent with the objective was selected. After quenching, the samples were tempered using different combinations of time and temperature. In total, 20 combinations were tested, according to Table 2. In order to calculate the value of the Hollomon-Jaffe parameter, a value of 20 was assumed for C.
As can be seen in Table 2, the selected values of temperature and time cover a range of TP that goes from 12.3 to 22.6. In this range, the values for one temperature partly overlap with the values for the previous temperature. Once the tempering treatment was finished, the samples were cooled in water to avoid a further evolution of the microstructure. The hardness of the quenched and tempered samples was measured, after grinding with 220 and 500 grit sandpaper, using the Rockwell-C or Rockwell-B scales, depending on the hardness of the samples. These values were converted to the Vickers scale to facilitate better comparability. Finally, the samples were polished and etched using Nital-3 (3% HNO3 in ethanol), and their microstructure was studied using light microscopy. Microstructure and Hardness after Quenching The structure of group 1 samples after quenching in water can be seen in Figure 2. Quenching in water generates a mixed matrix microstructure, mainly martensitic (M), but also containing areas of lower bainite (B) and ferrite (F), the latter still present due to an incomplete austenization. Each of these areas has a different hardness: 801 ± 75 HV0.1 for the martensitic areas, 620 ± 52 HV0.1 for the bainitic areas and 180 ± 34 HV0.05 for the ferrite grains. The mean hardness of the quenched samples was 54 ± 3 HRC (592 HV). This last value averages the effect of the different components of the microstructure, as the indentations made on the samples cover all of them. When the sample is quenched under a forced airflow, the cooling rate is not high enough to produce a highly hardened microstructure, although an increase of 110 HV units is obtained when compared to the untreated cast iron. In this case, the microstructure of the matrix (Figure 3) consists mainly of a mixture of fine pearlite (P) and ferrite (F) with a mean hardness of 28.6 ± 1 HRC (291 HV), although some retained austenite (RA) can still be found.
The structure obtained after water quenching when the sample is held for 5 h at 880 °C in the oven consists of martensite islands in a ferritic matrix (Figure 4). The higher proportion of ferrite in these samples, compared to the original microstructure, indicates that the 5 h soak at 880 °C had a ferritizing effect, resulting in the partial conversion of the pearlitic cementite to ferrite and graphite. Furthermore, a partial decarburization process took place at the surface of the samples due to the high temperature of the oven [28], visible in the lower-left part of Figure 4 as round areas corresponding to the decarburized graphite nodules (DGN). In this case, the combination of ferrite and martensite gave the quenched samples a hardness of 42 ± 3 HRC (420 HV).
Figure 5 shows the hardness measured for each sample after the quenching and tempering heat treatments as a function of temperature and time. The largest part of the hardness reduction takes place during the first hour, with a higher reduction rate during the first 10 min. After 1 h, the hardness values decrease very slowly or stabilize. This behaviour is expected due to the logarithmic dependence on time in the definition of the Hollomon-Jaffe parameter: a 10-fold increase in time is needed to double the hardness decrease. The dependence on temperature is linear, and that linearity can be seen in Figure 5 despite the data dispersion.
Hardness and the Tempering Parameter
A better way of correlating time and temperature with hardness is the use of the Hollomon-Jaffe parameter. If hardness is plotted against the tempering parameter using C = 20, Figure 6 is obtained. In that figure, the initial hardness of the quenched samples is represented by a horizontal line at the Y-axis.
According to Figure 6, the hardness decrease follows approximately the same curve independently of the initial hardness, but only once the tempering parameter is high enough to produce a noticeable change in the microstructure and the hardness. That point seems to depend on the initial hardness: a tempering parameter of 16 is needed to see a reduction in hardness when the initial microstructure is a mixture of ferrite and fine pearlite (291 HV), whereas the hardness reduction is already clear for the more hardened samples (580 HV) at a tempering parameter of 12. The need for a minimum value of TP to obtain noticeable changes is also reported in the literature [29,30]. Regarding the value of C, the Hollomon-Jaffe parameter is mainly used on the linear part of the curve that represents the evolution of hardness. To obtain the best fit to the data points, a linear fit was done using all the points of the group 1 samples except the one corresponding to the highest TP, as that point no longer follows a linear evolution. The best fit was found with C = 24.57 (R² = 0.981). Nevertheless, other C values give very good results. Figure 7 shows the value of R² when different values of C are used. As that figure shows, the range of C values for which the coefficient of determination R² stays above 0.95 goes from 17 to 39, which explains the different values proposed by different authors and the aforementioned negligible influence of C on the applicability of the tempering parameter [24]. Evidently, the value of TP will change with C. In this case, as the use of C = 20 gives good results (R² = 0.971), it was decided to keep that value for better comparability with bibliographic results.
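The scan of C values behind Figure 7 can be reproduced on synthetic data; a sketch in which the temperatures, times and linear hardness law are invented for illustration (only the procedure mirrors the text):

```python
import numpy as np

# Synthetic hardness data generated with a "true" constant C* = 24.57.
# The temperature/time grid and the linear law H = a - b*TP are assumptions.
C_TRUE = 24.57
temps = np.array([550.0, 600.0, 650.0, 700.0])   # K
times = np.array([0.25, 0.5, 1.0, 2.0, 4.0])     # h
T, t = np.meshgrid(temps, times)
tp_true = T * (C_TRUE + np.log10(t)) / 1000.0
hardness = 600.0 - 15.0 * tp_true.ravel()        # exactly linear in TP(C*)

def r_squared(C):
    """Coefficient of determination of a linear fit of hardness vs TP(C)."""
    tp = (T * (C + np.log10(t)) / 1000.0).ravel()
    return np.corrcoef(tp, hardness)[0, 1] ** 2

# Scan C, as in Figure 7, and pick the best-fitting value.
grid = np.arange(15.0, 35.0, 0.01)
best_C = grid[np.argmax([r_squared(C) for C in grid])]
```

On this noise-free data the scan recovers the generating constant, and, as in the paper, R² stays high over a wide band of C values around it.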
It is important to note that the evolution of hardness, and also of the microstructure, depends on three diffusion processes: tempering of martensite, ferritization, and decarburization of the outer layer of the samples. Despite the combination of these processes, the tempering parameter remains suitable to study the changes due to tempering.
Microstructure Evolution
The evolution of the microstructure is shown in Figure 8 for the samples of group 1, the most hardened ones. The microstructure shows tempered martensite until the tempering parameter reaches a value near 18, when the decomposition of martensite has progressed and the microstructure is composed of globular carbides in a ferritic matrix, along with nodular and compacted graphite. Thenceforth, the ferritizing process dissolves those carbides to produce, for the highest values of the tempering parameter, a fully ferritic microstructure. Furthermore, decarburization also takes place, and nodular and compacted graphite disappear from the outer layers of the samples, leaving footprints [31] that can be seen clearly in Figure 8f. An exception is Figure 8d,d', which show lower hardness than expected; perhaps this behaviour is due to a slightly higher influence of temperature than of time that is not captured by the formula of the Hollomon-Jaffe parameter. This cannot be stated with certainty, but the different series of points in Figure 6 tend to show that, for similar TP values, samples tempered at higher temperatures for shorter times tend to have lower hardness than samples tempered at lower temperatures for longer times.
Prediction of Hardness Using the Tempering Parameter
Despite minor inconsistencies that could be due to small mistakes during the tempering and quenching procedure or, simply, to statistical dispersion of the results, the Hollomon-Jaffe parameter TP fulfils its role and combines the effect of tempering temperature and time for the tested nodular cast iron.
In this case, TP has also been used as a means to predict the hardness of a tempered sample. In order to do that, some points were taken into account:
• Low values of the tempering parameter will not produce any change in the microstructure of the samples.
• The curve that represents the evolution of hardness with TP is bounded by the initial hardness and the hardness of the softest achievable microstructure, a fully ferritic matrix in this case.
Considering these restrictions, the chosen function to fit the hardness data as a function of TP was based on the logistic function, which is a sigmoid, with A, B, D and E as the fitting variables. Usually, a linear relationship between TP and hardness would be used, but this function has been preferred as a means to extend its validity over all values of TP. Some considerations must be made regarding the function:
• Without any tempering (TP = 0), the hardness remains unchanged. Mathematically, for the chosen function, this implies that the limit for TP → −∞, A, equals the initial hardness, H0.
• It can be supposed that for an infinite value of TP, the hardness will be the lowest of the values obtained after tempering the samples, Hf. Taking into account the previous restriction, this implies that B = H0 − Hf.
So, the values of A and B can be directly calculated from the initial and the lowest attainable hardness (133 HV), and the only remaining variables are D and E, both needed to fit the width and position of the transition from H0 to Hf. The fitted functions for each of the groups are represented in Figure 9. Each group of samples follows a different curve, which is to be expected, but there is a linear region shared by the three groups that provides an estimate of the hardness of the tempered samples independently of the initial hardness. The range where this happens is marked in Figure 9.
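A minimal sketch of this fitting procedure, assuming one standard parameterization of the logistic, H(TP) = H0 − (H0 − Hf)/(1 + exp(−(TP − E)/D)); the exact form used by the authors is not reproduced in this excerpt, and the data below are synthetic:

```python
import numpy as np

H0, Hf = 592.0, 133.0            # initial and lowest attainable hardness (HV)

def hardness_model(tp, D, E):
    """Assumed logistic form: tends to H0 as TP -> -inf, to Hf as TP -> +inf."""
    return H0 - (H0 - Hf) / (1.0 + np.exp(-(tp - E) / D))

# Synthetic "measurements" generated from known parameters D = 2.0, E = 16.0.
tp_data = np.linspace(10.0, 24.0, 15)
h_data = hardness_model(tp_data, 2.0, 16.0)

# Coarse grid search over (D, E) minimizing the sum of squared errors;
# A and B = H0 - Hf are fixed, as in the text, so only D and E are fitted.
candidates = [(np.sum((hardness_model(tp_data, D, E) - h_data) ** 2), D, E)
              for D in np.arange(0.5, 5.0, 0.05)
              for E in np.arange(12.0, 20.0, 0.05)]
best_sse, best_D, best_E = min(candidates)

def tp_at_hardness(h_target, D, E):
    """Invert the logistic to find the TP at which a target hardness is reached."""
    return E - D * np.log((H0 - Hf) / (H0 - h_target) - 1.0)

tp_onset = tp_at_hardness(0.95 * H0, best_D, best_E)   # 5% loss of H0
tp_end = tp_at_hardness(1.05 * Hf, best_D, best_E)     # within 5% of Hf
```

The same inversion gives the onset (95% of H0) and end (105% of Hf) criteria used later in the text to bracket the tempering transition.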
The values of the coefficients D and E should also be dependent on H0 and Hf, and both decline as H0 − Hf increases (see Figure 10). The fitted functions provide other insights, as they show clearly how the onset of the decrease in hardness requires higher values of TP as the initial hardness lowers. That value of TP seems to coincide with the point where the curve reaches the linear region. This fact could be used to estimate the minimum TP needed to produce a change in a hardened sample using data obtained from samples with a different hardness. If the onset of change is taken as the loss of 5% of the initial hardness, the following values are obtained for TP: 8.3, 16.3 and 11.0. The end of the tempering can be estimated the same way. Supposing it ends when hardness reaches 105% of Hf, the fitted functions give the following values for TP: 25.05, 22.81 and 21.88.
Conclusions
The following conclusions can be extracted from the results:
• The tempering of three groups of hardened nodular cast iron, each with a different initial hardness, has shown that the Hollomon-Jaffe parameter is suitable for this alloy even when more than one diffusion process is involved (tempering of martensite, ferritization and decarburization).
• The constant C of the Hollomon-Jaffe parameter can be chosen from a wide range of values without noticeably affecting the applicability of TP. The common choice C = 20 gives, in this case, good results.
• The evolution of hardness correlates with that of the microstructure. These changes follow, roughly, the following sequence: tempering of martensite, ferritizing and decarburization.
• The onset of microstructure and hardness changes depends on the initial hardness of the samples. The changes begin at lower TP values as the initial hardness increases.
• Once hardness has begun decreasing, the evolution follows approximately the same linear trend regardless of the initial hardness.
• Hardness evolution can be predicted using a logistic function, which can also be used to predict the onset of hardness changes.
Dimerization of Protegrin-1 in Different Environments
The dimerization of the cationic β-hairpin antimicrobial peptide protegrin-1 (PG1) is investigated in three different environments: water, the surface of a lipid bilayer membrane, and the core of the membrane. PG1 is known to kill bacteria by forming oligomeric membrane pores, which permeabilize the cells. PG1 dimers are found in two distinct, parallel and antiparallel, conformations, known as important intermediate structural units of the active pore oligomers. What is not clear is the sequence of events from PG1 monomers in solution to pores inside membranes. The step we focus on in this work is the dimerization of PG1. In particular, we are interested in determining where PG1 dimerization is most favorable. We use extensive molecular dynamics simulations to determine the potential of mean force as a function of distance between two PG1 monomers in the aqueous subphase, the surface of model lipid bilayers and the interior of these bilayers. We investigate the two known distinct modes of dimerization that result in either a parallel or an antiparallel β-sheet orientation. The model bilayer membranes are composed of anionic palmitoyl-oleoyl-phosphatidylglycerol (POPG) and palmitoyl-oleoyl-phosphatidylethanolamine (POPE) in a 1:3 ratio (POPG:POPE). We find the parallel PG1 dimer association to be more favorable than the antiparallel one in water and inside the membrane. However, we observe that the antiparallel PG1 β-sheet dimer conformation is somewhat more stable than the parallel dimer association at the surface of the membrane. We explore the role of hydrogen bonds and ionic bridges in peptide dimerization in the three environments. Detailed knowledge of how networks of ionic bridges and hydrogen bonds contribute to peptide stability is essential for the purpose of understanding the mechanism of action for membrane-active peptides as well as for designing peptides which can modulate membrane properties.
The findings are suggestive of the dominant pathways leading from individual PG1 molecules in solution to functional pores in bacterial membranes.
Introduction
Protegrin-1 (PG1) is a potent antimicrobial, β-hairpin, cationic peptide [1,2]. A simple model that explains how PG1 kills bacteria involves oligomeric (typically octameric or decameric) peptide pores in anionic lipid bilayer membranes [3,4], which mimic the inner membrane of Gram-negative bacteria. Cytosolic potassium is released through these pores, and sodium enters the cell, causing a significant transmembrane potential decay, a subsequent cell volume expansion and lethal membrane rupture [5,6]. NMR experiments indicate that PG1 dimers are structural prerequisites of PG1 pores inside anionic lipid membranes and of PG1 β-sheets on the surface of cholesterol-containing zwitterionic lipid bilayers, which, in turn, mimic mammalian cell membranes [3,7]. Two distinct dimer packing modes are prevalently observed, depending on the environment: parallel and antiparallel. The parallel structure is in an NCCN packing mode, where N and C stand for the peptide's N-terminus and C-terminus, respectively. In particular, parallel dimers have been observed on the surface of cholesterol-containing zwitterionic lipid bilayers, whereas antiparallel dimer structures have been observed on the surface of dodecylphosphocholine micelles. Importantly, parallel structures have been observed inside anionic lipid bilayers. Indeed, the model of the octameric pore determined by NMR [3] and simulated by molecular dynamics [4] comprises four distinct, structurally stable, parallel dimers. In order to develop mechanistic explanations of the antimicrobial activity of PG1, substantial efforts have been expended on determining how protegrin monomers and dimers interact with model membranes [4,[8][9][10][11][12][13][14][15][16]].
For example, in [15] the energetics of protegrin binding to and insertion into model membranes were determined. The preferred orientations and conformations of PG1 in model membranes were also established [16]. Less attention has been paid to the phenomenon of dimerization, which is notable in its own right, since PG1 carries a (+7) charge. Indeed, little is known regarding the thermodynamics and kinetics of peptide-peptide aggregation at the molecular level and how the environment dictates PG1 dimerization. In this work, we attempt to address this gap by investigating the dimerization of PG1 in various environments. Understanding on a molecular basis how peptide oligomer structures maintain stability remains an important challenge: detailed knowledge of dimerization, through calculation of free energies and monitoring of ionic bridge and hydrogen bond networks, increases our basic understanding not only of peptide oligomer structure and function, but also of the origin and progression of peptide selectivity towards different membranes. Furthermore, it may suggest rules for the rational design and engineering of antibiotic peptides. We employ atomistic simulations to calculate the potential of mean force of dimer formation. In particular, we determine the energetically preferred structures in water, on the surface of lipid bilayers and inside the hydrophobic core of lipid membranes. We investigate the effects of hydrogen bonds between the peptides and of ionic bonds between peptides and ions, or peptides and lipids. Illustrating the mechanism of dimerization contributes to our efforts to further explain the molecular mechanism of antimicrobial activity. In what follows, we describe the details of the computer simulations and the calculation of the potential of mean force.
We apply a variant of constrained molecular dynamics (MD) simulations and the thermodynamic integration method to determine the potential of mean force, and calculate the equilibrium binding constant and the related adsorption free energy. We then present and discuss the results in the context of earlier work. Based on our results, we also speculate on the dominant kinetic pathways that PG1 follows from solution to pores.
Microscopic Models for PG1 Dimers in the Water Phase, on the Surface of a POPG:POPE Membrane and inside a POPG:POPE Membrane
PG1 is an 18-residue cationic β-hairpin antimicrobial peptide (RGGRL CYCRR RFCVC VGR-NH2) [1]. PG1 dimerizes either in a parallel structure, hereafter denoted by PG1 d p , or an antiparallel one, hereafter denoted by PG1 d a . The parallel dimer structure is illustrated in Figure 1. The structure of PG1 d p has been determined by NMR [3]; the atomic coordinates were downloaded from the protein data bank (PDB code 1ZY6). Currently there is no atomistic-resolution structure of the antiparallel arrangement. We constructed the initial antiparallel NCCN β-sheet arrangement (PG1 d a ) from the parallel configuration by rotating one PG1 peptide 180° about the Cys15 α-carbon atom to satisfy the antiparallel NCCN packing model for the PG1 dimer, which was implied by rotational-echo double-resonance solid-state NMR [7]. The formation of PG1 d p and PG1 d a is investigated in three distinct environments.
Environment 1: Bulk Water
We simulated the formation of PG1 d p and PG1 d a in bulk water. We solvated the two peptides with nearly 7030 TIP3P water molecules, 34 chlorine ions, and 20 sodium ions. Chlorine and sodium ions were added to create a 0.15 M physiological salt solution and to neutralize the charge of the two identical PG1 peptides.
Environment 2: Lipid Bilayer Surface
The PG1 dimers in the parallel PG1 d p and antiparallel PG1 d a β-sheet arrangements were placed on the surface of a mixed lipid bilayer consisting of 224 lipids: 56 anionic palmitoyl-oleoyl-phosphatidylglycerol (POPG) and 168 palmitoyl-oleoyl-phosphatidylethanolamine (POPE), in a 1:3 ratio (POPG:POPE). This is a composition previously used to model the inner membrane of Gram-negative bacteria [4]. Both PG1 d p and PG1 d a dimers were oriented parallel to the membrane such that one of the PG1 peptide backbones was parallel to the membrane along the y direction, with residues Cys6, Cys8, and Cys15 lying in the xy-plane. The center of mass of the peptides was positioned 29Å from the center of mass of the membrane. This separation distance corresponds to the minimum of the free energy profile of the interaction between a protegrin-1 dimer in the parallel arrangement and a model lipid membrane [15]. Here we made the assumption that the free energy minimum for dimers in the parallel and antiparallel packing modes is attained at the same distance from the membrane. The system is solvated with nearly 10800 TIP3P water molecules, 44 chlorine ions, and 86 sodium ions. Chlorine and sodium ions are added to create a 0.15 M physiological salt solution and to neutralize the charge of the peptides and POPG head groups.
Environment 3: Membrane Core
To build a transmembrane complex for molecular dynamics (MD) simulations of dimers in the NCCN β-sheet PG1 d p and PG1 d a arrangements, we used the CHARMM-GUI membrane builder with the replacement method [17]. The peptide dimer was placed with its principal axis parallel to the bilayer normal and the dimer center of mass located at the bilayer center of mass (Figure 2). For both dimer configurations, we use a solvated lipid bilayer system of 152 lipids (i.e., 76 lipids in each leaflet), containing 114 POPE lipids and 38 POPG lipids.
The system is solvated with nearly 4300 TIP3P water molecules, 29 chlorine ions, and 53 sodium ions. Chlorine and sodium ions are added to create a 0.15 M physiological salt solution and to neutralize the charge of the peptides and POPG head groups. The surface area occupied by the two peptides and used in the membrane builder is about 320 Å² [12]. In Figure 2, water is shown as van der Waals spheres; the solution Na+ and Cl− counterions are shown as small yellow and large green spheres, respectively; the peptide Cys and Arg residues are shown as sticks; the sidechain atoms of the other residues and the bilayer lipid atoms are omitted for clarity.
Molecular Dynamics Protocol
Our goal is to use molecular dynamics simulations to calculate a potential of mean force (PMF) for the dimerization of PG1. In the next section, we present the details of the PMF calculation; in this section, we discuss the molecular dynamics simulation protocol. For each of the three environments, and for each of the two dimer conformations, a set of 9 different simulations is conducted with the centers of mass of the two peptides at different distances. All systems were constructed in a rectangular cell using the program CHARMM [18] and the CHARMM-GUI Modeler [17]. The CHARMM-27 force field [19] with CMAP corrections [20] was employed. All structures of the β-hairpin PG1 were generated with two disulfide bonds, amidated C-termini, and six positively charged arginines, reflecting the typical protonation state of arginine. An assumption that may be of limited accuracy is that the protonation state of PG1 does not change when PG1 is embedded inside the lipid bilayer. We should note, though, that the two ends of the PG1 structure, i.e., the N- and C-termini on one end and the β-hairpin turn on the other, are both outside the lipid hydrophobic core and in contact with lipid headgroups. We used the NAMD software package [21], employing the Nose-Hoover-Langevin pressure controller [22,23] for all simulations.
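As a quick sanity check, the ion counts reported for the three environments balance the peptide and POPG charges exactly; a minimal arithmetic sketch (charges assumed: +7 per PG1 peptide, −1 per POPG head group, ±1 per monovalent ion):

```python
# Verify charge neutrality of each simulation box described in the text.
def total_charge(n_peptides, n_popg, n_na, n_cl):
    return 7 * n_peptides - n_popg + n_na - n_cl

env1 = total_charge(2, 0, 20, 34)    # bulk water: 2 peptides, 20 Na+, 34 Cl-
env2 = total_charge(2, 56, 86, 44)   # membrane surface: 56 POPG lipids
env3 = total_charge(2, 38, 53, 29)   # transmembrane: 38 POPG lipids
```

All three totals come out to zero, consistent with the stated goal of neutralizing the peptide and head-group charges while approximating 0.15 M salt.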
The pressure was set to 1 atmosphere with a piston period of 200 fs and a piston decay of 100 fs. The system was heated to 310 K (above the gel-liquid crystal phase transition of the mixed membrane [24]) in increments of 30 K, running for 5000 steps at each temperature. After minimization and heating, all simulation boxes were equilibrated for 4 ns in the NPT ensemble. The water molecules were simulated using the TIP3P water model [25]. The van der Waals interactions were smoothly switched off over a distance of 4 Å, between 8 and 12 Å. The electrostatic interactions were computed using the particle mesh Ewald summation with approximately one grid point per Å in each direction [26]. During equilibration, in all simulation boxes for Environments 2 and 3, the area per lipid remained constant at the mixed (1:3) POPG:POPE system average: 63.7 ± 1.4 Å² and 63.2 ± 1.7 Å², respectively. The average dimensions of the equilibrated simulation box for Systems 1, 2 and 3 are 67. 3
We calculate the potential of mean force, W(D), along a single reaction coordinate corresponding to the separation distance, D, between the centers of mass of the two peptides. The separation intervals include the distance between the centers of mass of the two peptides for a stable dimer structure as determined by NMR experiments [3] in a POPC membrane. A simple geometry is implemented to represent the two peptides: we consider each PG1 peptide as a cylinder of radius a and length L. The two cylinders are always parallel to each other along their long axes. For all systems, the y-coordinate is defined by the vector connecting the centers of mass of the two peptides. The principal axis of both peptides is parallel to the x-axis. In Environment 2, the two cylinders lie parallel to the membrane surface, and in Environment 3 both cylinders lie with their long axes perpendicular to the membrane surface.
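For reference, the nonbonded and barostat settings above map onto a NAMD configuration fragment along these lines (a sketch: only the numerical values stated in the text are taken from the paper, and the exact option set used by the authors is an assumption):

```tcl
# Nonbonded treatment: van der Waals switched off between 8 and 12 Angstrom.
cutoff                12.0
switching             on
switchdist            8.0

# Particle mesh Ewald, roughly one grid point per Angstrom.
PME                   yes
PMEGridSpacing        1.0

# Langevin thermostat at 310 K and Nose-Hoover-Langevin piston barostat
# at 1 atm with 200 fs period and 100 fs decay, as stated in the text.
langevin              on
langevinTemp          310
langevinPiston        on
langevinPistonTarget  1.01325
langevinPistonPeriod  200.0
langevinPistonDecay   100.0
langevinPistonTemp    310
```

All keywords shown are standard NAMD options; restart, topology and output settings are omitted.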
We restrict ourselves to the two orientational modes, PG1 d p and PG1 d a , in which the peptide backbones remain parallel to each other in an NCCN packing mode, as observed in [3,7]. We simplify the analysis by representing PG1 as a cylinder with an "effective" radius a = 4.5Å. Thus, the minimum separation distance between two peptides is chosen as 2a = 9Å. The simulation procedure is broken down into several stages: (i) The two peptides are positioned in either the parallel or antiparallel orientation, at a distance D. The peptide separation, D, ranges from 9Å to 25Å in increments of 2Å. There are thus 54 systems constructed (two orientations, three environments, nine separation distances). We should note that for PG1 d p in Environment 3, we added another separation distance of D = 27Å, in order to ascertain that the PMF attains a plateau at long distances, as discussed in more detail in Section 3. Each of the 55 constructed systems is then equilibrated over 4 ns in the NPT ensemble. During this equilibration, the PG1 peptides are restrained using harmonic springs with a force constant of 20 (kcal/mol)/Å² applied to all peptide backbone atoms. (ii) A 4 ns production run is then conducted for each of the 55 equilibrated systems. In the production runs, in order to restrain the peptides and their orientations, we use harmonic springs coupled to the three Cβ backbone atoms of Arg1, Arg10 and Cys15. All spring constants were 20 (kcal/mol)/Å². In addition, in order to better ascertain convergence of the PMF calculation, we extended the simulation of the parallel configuration, PG1 d p , inside the membrane by an additional 4 ns, to 8 ns, for all examined distances. (iii) The instantaneous restraint forces are computed for each of the 55 system configurations with a sampling interval of 0.2 ps and averaged to obtain the mean force exerted on the harmonic restraint springs.
We concentrate our efforts on reducing the statistical errors. A difficulty is that, on short time scales, the results are highly correlated and thus unsuitable for statistical analysis. We find that the correlation time for estimating the error due to solvent force fluctuations is about 0.1 ns, while membrane fluctuations and the systematic error due to the harmonic restraints require no less than 0.5 ns of data to compute reliable average forces. Using the block-averaging method [27], we find the statistical errors in F res (D) to be within 0.4 (kcal/mol)/Å in all cases. The total sampling time must therefore be long enough to ensure a collection of uncorrelated configurations. (iv) The PMF is evaluated by applying the mean force integration method, which was developed for the PMF calculation of a peptide in the vicinity of a neutral POPC membrane [28]. This method is a variant of constrained MD and thermodynamic integration [29][30][31][32][33][34]. In particular, the PMF, W(D), is calculated using Equation (1), where the integration over the D coordinate is performed using the trapezoidal rule. An estimate of the relative equilibrium binding constant between the two PG1 peptides is given by Equation (2), where β = 1/k B T, with k B Boltzmann's constant; D min is the minimum separation distance, or collision radius, between the two peptides, and D max determines the radius of binding, i.e., the separation distance which divides the free and bound volumes [28]. The relative dimerization free energy ∆G 0 is then obtained via Equation (3). In Section 3, we present and analyze the MD results for our systems with the help of Equations (1) and (3).
Binding Affinity of PG1 Peptides in the Parallel and Antiparallel β-Sheet Arrangements
The potential of mean force, W(D), for protegrin dimerization is calculated for six systems: two distinct protegrin dimer structures, each examined in three separate environments. In Figure 3, the six PMFs are plotted as a function of the peptide-peptide distance, D.
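The computational steps behind Equations (1)–(3), whose display forms are not reproduced in this excerpt, can be sketched under standard mean-force-integration assumptions: W(D) = −∫ from D_max down to D of the mean restraint force (so W(D_max) = 0), a relative binding constant K = ∫ exp(−βW(D)) dD over [D_min, D_max], and ∆G0 = −k_B T ln K. The mean-force values below are illustrative, not simulation output:

```python
import numpy as np

kT = 0.616                                   # k_B T in kcal/mol at 310 K
D = np.arange(9.0, 26.0, 2.0)                # separations, Angstrom
F = np.array([9.0, -1.0, -1.5, -1.5, -1.5,
              -1.5, -1.2, -1.0, 0.0])        # illustrative <F_res>, (kcal/mol)/A

# Trapezoidal integration of the mean force inward from D_max (Eq. (1)):
# W(D) = integral from D to D_max of <F_res(s)> ds, with W(D_max) = 0.
W = np.zeros_like(F)
for i in range(len(D) - 2, -1, -1):
    W[i] = W[i + 1] + 0.5 * (F[i] + F[i + 1]) * (D[i + 1] - D[i])

# Relative binding constant (Eq. (2)) and dimerization free energy (Eq. (3)).
g = np.exp(-W / kT)
K = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(D)))
dG0 = -kT * np.log(K)
```

The illustrative forces were chosen to give a well of about −17 kcal/mol at D = 11 Å, similar in shape to the parallel-dimer-in-water PMF discussed below; the resulting ∆G0 is dominated, as expected, by the region near the PMF minimum.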
For the parallel and antiparallel orientations in Environments 1 and 2, as well as for the antiparallel configuration in Environment 3, each value of W(D) represents the mean of eight 0.5 ns simulations, and the error bar represents the standard deviation. For the parallel configuration PG1 d p inside the membrane (Figure 3(c)) we present two plots: the dashed line represents the mean of the first eight 0.5 ns simulations, and the solid line the mean of the first sixteen 0.5 ns simulations. As observed, the PMF values remain constant within the standard deviations. It is also observed that the PMF reaches a zero-energy plateau at large D. We find that in bulk water the PMF minimum is about -17.5 kcal/mol for the PG1 d p complex at a separation distance D = 11Å, while the PMF minimum is -4.8 kcal/mol for PG1 d a at D = 11Å. We can then rather confidently remark that the protegrins form a more stable dimer in water in the parallel than in the antiparallel configuration. The calculated PMF has a minimum at a position which corresponds to the equilibrium positions of the two PG1 peptides in the parallel PG1 dimer structure determined by NMR experiments [3]. On the bilayer surface, the PMF minimum is -20.4 kcal/mol for the PG1 d p complex at a separation distance of D = 11Å, and -23.2 kcal/mol for PG1 d a at the same distance. This result points to a more stable PG1 d a dimer on the POPE/POPG surface, although the difference between the PMFs of the two dimer configurations is not pronounced enough to draw definite conclusions. Finally, when the peptides are inserted in parallel inside the POPE/POPG membrane, the PMF exhibits a rather broad minimum plateau of about -8.0 kcal/mol, extending from approximately D = 17Å to D = 19Å. The minimum is -3.8 kcal/mol for PG1 d a at D = 15Å. Thus, the PG1 d p dimer in the transmembrane configuration forms a relatively stronger binding complex compared to PG1 d a .
From these results we can see that the peptide separation distances corresponding to the PMF minimum in Environments 1 and 2 are approximately equal to the distance of 11Å obtained from NMR experiments for the parallel dimer inside a POPC membrane. The results of the MD simulations for PG1 dimers inserted inside a POPE:POPG membrane apparently deviate from the NMR measurements, which were conducted for a zwitterionic POPC membrane. Clearly, the type of phospholipid significantly impacts the interaction between PG1 peptides. It would be interesting to use simulations to investigate PG1 dimerization in zwitterionic lipid membranes, but such calculations are beyond the scope of the present study. Notably, the parallel dimer configuration appears to be overall more favorable than the antiparallel one. Remarkably, dimerization appears to be more favored on the surface of a lipid bilayer than either in the solvent or inside the membrane. In earlier work [15], we determined that protegrin monomers are more likely to be found on the surface of POPE/POPG lipid bilayers than in the aqueous subphase. In light of these findings, we may summarize that under equilibrium conditions, a larger fraction of the total population of protegrin molecules will be found in dimeric, or perhaps oligomeric, structures on the surface of POPE/POPG lipid bilayers than in bulk water. Certainly, it has been found in [16] that protegrin monomers prefer a transmembrane orientation to one lying flat on the membrane surface. At equilibrium, then, it is likely that the majority of protegrin molecules are embedded inside the membrane, in monomeric, dimeric, or even oligomeric forms, all structural precursors of the biologically relevant pores. These findings notwithstanding, it is important to note that the concept of equilibrium is ill-defined in biological systems. A bacterial membrane may, for example, collapse under the influence of large numbers of antimicrobial peptides.
This may occur on time scales comparable to those of protegrin self-association and membrane binding, likely rendering kinetics as important as thermodynamics. Indeed, a complete mechanistic understanding of how protegrin molecules function may only be attainable by combining kinetic and thermodynamic studies. This is admittedly beyond the scope of this manuscript. In order to calculate the peptide-peptide binding affinity, we need to define the binding geometry parameters, D min and D max . The minimum separation distance between two peptides is chosen as D min = 2a = 9Å. When analyzing the restraint forces F res (D) for all three systems, we find that the forces decrease monotonically with the peptide-peptide distance. In particular, the PMF at D = 23Å calculated using Equation (1) is about 1 k B T, and for D = 25Å it is less than 1 k B T for all systems. This suggests that beyond 25Å the PMF has reached a sufficient plateau; therefore the peptides are considered to be bound within D max = 25Å. Accordingly, using Equation (3), we find a free energy of ∆G 0 = -16.2 ± 0.9 kcal/mol for the formation of a PG1 d p dimer in a 150 mM NaCl solution at 310 K. The binding free energy for PG1 d a in water is calculated to be ∆G 0 = -3.5 ± 0.9 kcal/mol. For peptides on the POPE:POPG membrane surface, we calculate a binding free energy of ∆G 0 = -19.1 ± 1.2 kcal/mol for PG1 d p and ∆G 0 = -21.8 ± 1.2 kcal/mol for PG1 d a . Lastly, for two peptides inserted inside the POPE/POPG membrane, the PG1 d p binding free energy is ∆G 0 = -7.4 ± 1.3 kcal/mol and the PG1 d a one is ∆G 0 = -2.9 ± 1.3 kcal/mol.
The Role of Ionic Bridge and Hydrogen Bond Networks in Dimerization Stability
It is worth stressing that although protegrin molecules carry a large positive charge, they still dimerize strongly. Certainly, the environment somehow mitigates the electrostatic repulsion.
Arginines aside, the sequence and structure of the peptides clearly promote strong association. In this section, we analyze molecular dynamics trajectories to elucidate the mechanism of interaction between two PG1 peptides. We focus on the thermodynamically most stable structure, which according to the calculated PMFs has the two peptides at a distance of 11Å apart. Ionic Bridges First, we investigate the formation of ionic bridges between peptides. These are formed when a negatively charged group is proximal to and interacts with arginine residues. We define two distinct types of ionic bridges: contact solute pairs (CSP) and solvent-separated solute-peptide pairs (SSSP). CSP bridges form when a negatively charged group, such as a chloride ion or a lipid phosphate group, is in direct and stable contact with arginine residues. SSSP bridges form when there are water molecules (typically one) in between an arginine and a negative atom or group. For POPG lipids, there are four types of relevant oxygens: (1) OH oxygens, (2) phosphate oxygens, (3) ester oxygens, and (4) carbonyl oxygens. We examine and average over all four types. In particular, we count a CSP ionic bridge when any guanidinium group (RNHC(NH2)2+) of an arginine residue is found within 4.3Å of a Cl − ion or of any of the anionic lipid head group oxygens. An SSSP ionic bridge is identified when the peptide guanidiniums are within 7.6Å of a Cl − ion or of any of the head group oxygens. More precisely, an ionic bridge is counted only when these interactions are stably present for at least 10 consecutive simulation picoseconds. Notably, the characteristic distance for an ionic bridge is significantly greater than that for a hydrogen bond, which suggests that PG1 dimers may adopt different geometrical structures in different environments. 
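The distance and persistence criteria above lend themselves to a simple counting routine. The sketch below is a hypothetical post-processing helper, assuming a precomputed array of guanidinium-anion distances per trajectory frame; the 2 ps frame spacing and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def count_bridges(dist, dt_ps=2.0, csp_cut=4.3, sssp_cut=7.6, min_ps=10.0):
    """Count CSP and SSSP ionic bridges from a trajectory.

    dist: array of shape (n_frames, n_pairs) with guanidinium-anion
    distances in Angstrom. A pair counts as a bridge only if it satisfies
    the distance criterion for at least min_ps consecutive picoseconds
    (10 ps by default, as in the text).
    """
    min_frames = int(np.ceil(min_ps / dt_ps))

    def stable_pairs(mask):
        run = np.zeros(mask.shape[1], dtype=int)   # current consecutive-frame run
        hit = np.zeros(mask.shape[1], dtype=bool)  # pair ever stable long enough
        for frame in mask:
            run = np.where(frame, run + 1, 0)
            hit |= run >= min_frames
        return int(hit.sum())

    csp = stable_pairs(dist <= csp_cut)                         # direct contact
    sssp = stable_pairs((dist > csp_cut) & (dist <= sssp_cut))  # water-separated
    return csp, sssp
```

A pair that oscillates in and out of the cutoff never accumulates the required consecutive frames and is therefore not counted, which is the point of the 10 ps persistence criterion.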
In Figure 4 we present the number of counterions N Cl bound to both peptides as a function of the distance in Environments 1 and 2. The number of these ionic bridges changes with the distance between the peptides: starting from large separations, more ionic bridges are formed as the peptides come closer to one another. A maximum number of guanidinium-chloride bridges is reached in water when the peptides are approximately 11Å apart, regardless of their orientation, parallel or antiparallel. This is the distance at which the PMF reaches its minimum in water for both PG1 d p and PG1 d a. Notably, the actual number of ionic bridges is higher for the parallel structure than the antiparallel one, consistent with the PMF well being deeper for the parallel than the antiparallel conformation. This result is suggestive of the importance of chloride ionic bridges in the dimerization of PG1 in water and may explain the preferential dimerization in the parallel conformation. On the membrane surface, the role of chloride bridges in PG1 dimerization ceases to be as important. In Figure 4 we again present the number of chloride ion bridges as a function of the distance between the two peptides, whereas in Figure 5 we present the number of ionic bridges with lipid oxygens, as a function of the distance between the two peptides. Expectedly, when positively charged peptides approach the membrane surface, there is a re-arrangement of ionic bonds, especially at distances close to the Debye-Hückel screening length [15]. In our case, the corresponding Debye-Hückel length is approximately 10Å. Negatively charged ions are expelled from this area, and we observe that ionic bridges are now formed between arginines and lipid headgroup oxygens. As a result, the number of chloride-guanidinium ionic bridges is not as high for peptides on the membrane surface as it is for peptides in water. Importantly, there is no discernible trend for the number of chloride ionic bridges as a function of the distance. 
On the other hand, the number of ionic bridges increases as the peptides get closer and reaches a maximum again at a distance of approximately D = 11Å, where the PMF for peptide dimerization on the membrane is at its minimum. Interestingly, the number of ionic bridges the peptides form with lipid oxygens on the membrane surface is approximately identical for the two conformations. Indeed, the PMFs of the parallel and antiparallel structures are not significantly different at their minimum. These results are suggestive of the importance of ionic bridges between peptide arginines and lipid oxygens in the dimerization of PG1 on the surface of lipid bilayers. In Figure 5, we also present the number of ionic bridges the peptides form with lipid oxygens when they are embedded in the hydrophobic core of the membrane. There is again an increase in the number of ionic bridges with decreasing peptide-peptide distance. There is, however, no strong correlation between the distances where the maximum number of ionic bridges forms and where the PMFs reach their minimum. The PMFs are of smaller depth and are flatter for peptides inside the membrane than in the other two environments. The absence of a trend suggests that ionic bridges are not the determining interaction for dimerization of peptides inside the membrane. Indeed, dimerization in the membrane is expected to be largely dictated by hydrophobic effects. We can analyze the importance of ionic bridges further by looking at their types more closely. In Figure 6 we present the average number of both chloride and oxygen ionic bridges when the peptides are at a distance D = 11Å apart. In water, only the chloride ionic bridges are important. Their average number is N Cl = 3.1 ± 0.3 for the parallel dimer structure, as opposed to N Cl = 2.0 ± 0.3 for the antiparallel structure. These numbers are averages over the last 4 ns of simulations. 
Again, according to the calculated PMFs (Figure 6), peptides preferentially dimerize in parallel structures in water. Simulation results suggest that ionic bridges may contribute to the parallel structure being more favorable. On the membrane surface, the number of dimer-bound counterions is equal to N Cl = 1.2 and N Cl = 0.9 for the parallel and antiparallel structures, respectively. On average, we find that 70% of bound Cl − counterions are in the SSSP states and only 30% in the CSP states for Environments 1 and 2. For peptides on the surface of the membrane, the average numbers, N O, of anionic lipid headgroup oxygens binding to the PG1 dimer peptides are approximately equal for the parallel and antiparallel arrangements: N O = 4.5 ± 0.5 and N O = 4.7 ± 0.5, respectively. We find that almost all ionic bridges (more than 80%) form with the phosphate oxygens. For peptides embedded in the hydrophobic core of the lipid bilayer, the average number of guanidinium-oxygen ionic bridges is N O = 5.9 ± 0.5 for the parallel dimer and N O = 5.1 ± 0.5 for the antiparallel one. Hydrogen Bonds Next, we analyze the hydrogen bonds between PG1 peptides and between the dimers and their environments. We distinguish these two categories as endogenic, or intramolecular, and exogenic, or external, hydrogen bonds. We identify a hydrogen bond when two candidate atoms are closer than 2.4Å [35]. We present the number of hydrogen bonds averaged over the last 4 ns of simulations. Notably, the PG1 dimer topology diagram [36] suggests six possible intramolecular hydrogen bonds for the parallel and antiparallel orientations (Figure 7). The calculated numbers vary, with the antiparallel orientation having a larger number than the parallel orientation, regardless of the environment. There are more endogenic hydrogen bonds for both orientations on the membrane surface than either in bulk water or inside the membrane. 
For the dimer on the surface, an average of 2.0 hydrogen bonds was observed for the PG1 d p structure and 4.6 hydrogen bonds for the PG1 d a structure. On the other hand, for the dimer in water, only 1.4 and 2.2 hydrogen bonds were found for PG1 d p and PG1 d a, respectively. A PG1 dimer inserted perpendicularly into the membrane has 1.7 hydrogen bonds in the PG1 d p structure and 2.8 in the PG1 d a configuration. Calculated PMFs indicate that the PG1 dimer is more stable near the membrane surface in the PG1 d a conformation, and the numbers of hydrogen bonds can explain this only to a small extent. On the other hand, taking into account that hydrogen bonds are much weaker than ionic bonds [37], we can assume that the parallel configuration in water is rendered more stable primarily by counterions. Figure 7. Average number of endogenic (a) and exogenic (b) hydrogen bonds between two PG1 peptides in all three studied environments. p: the parallel; a: the antiparallel β-sheet arrangement of the PG1 dimer. The number of exogenic hydrogen bonds is shown in Figure 7. The numbers are practically identical for PG1 d p and PG1 d a dimers in bulk water: N H = 63 ± 2 for the parallel structure and N H = 62 ± 2 for the antiparallel one. The average numbers of exogenic hydrogen bonds are also very similar for the peptides inserted into the membrane: N H = 45 for the parallel orientation and N H = 41 for the antiparallel one. Overall, no clearly discernible trends are observed that are in accord with the PMF calculations. In general, it appears that the number of hydrogen bonds is not a strong determinant of the most stable dimer structures, in contrast to ionic bridges, which appear to better explain the preferential formation of one type of dimer over the other. 
Conclusions We present free energy calculations of dimerization of the cationic β-sheet antimicrobial peptide PG1 in parallel and antiparallel structures in different environments. Our simulation results provide important evidence that the driving force for the dimerization of this peptide inside and outside the membrane is determined by the formation of ionic bridges, more so than the formation of hydrogen bonds. Ionic bridges between peptide arginines and chloride ions dictate the dimerization of PG1 in water, stabilizing the parallel dimer structure. On the surface, dimerization is influenced less by chloride ionic bridges and more by lipid oxygen ones. Inside the hydrophobic core, ionic bridges are no longer the determining interaction, with hydrophobic effects likely dominating. This work also provides supporting evidence for the existence of the antiparallel state of a PG-1 dimer on the surface of the membrane. Although the parallel orientation is dominant in water and in the transmembrane inserted state, the antiparallel competes with the parallel one on the surface of the lipid membrane. The results of PMF simulations are consistent with the results of NMR experimental observations which determined that the PG1 dimer adopts an antiparallel structure upon binding to DPC micelles [7] and a parallel NCCN packing structure in a transmembrane orientation inside POPE/POPG membranes [3]. Calculation of PMFs for PG1 dimers and their complexes, particularly inside the membrane, may be useful in elucidating the pathway of action and selectivity of this peptide. Importantly, these calculations may help explain biological functions of antimicrobial peptides in terms of biophysical interactions.
Predicting Adolescents’ Educational Track from Chat Messages on Dutch Social Media We aim to predict Flemish adolescents’ educational track based on their Dutch social media writing. We distinguish between the three main types of Belgian secondary education: General (theory-oriented), Vocational (practice-oriented), and Technical Secondary Education (hybrid). The best results are obtained with a Naive Bayes model, i.e. an F-score of 0.68 (std. dev. 0.05) in 10-fold cross-validation experiments on the training data and an F-score of 0.60 on unseen data. Many of the most informative features are character n-grams containing specific occurrences of chatspeak phenomena such as emoticons. While the detection of the most theory- and practice-oriented educational tracks seems to be a relatively easy task, the hybrid Technical level appears to be much harder to capture based on online writing style, as expected. Introduction While some social variables, such as gender and age, have often been studied in author profiling (see e.g. the overview paper by Reddy et al. (2016)), educational track remains largely unexplored in this respect. The goal of this paper is twofold: we aim to develop a model that accurately predicts adolescents' educational track based on their language use in social media writing, and to gain more insight into the linguistic characteristics of youngsters' educational background through inspection of the most informative features for this classification task. The paper is structured as follows: we start by discussing related research (Section 2). Next, we describe the corpus, as well as the three main types of Belgian secondary education, i.e. the three class labels in the classification experiments (Section 3). Finally, we discuss our methodology (Section 4) and present the results (Section 5). 
Related Research Related work on this topic is scarce; only some studies in education profiling can be found, and they examine the impact of tertiary (and not secondary) education on text genres other than social media writing. Furthermore, Dutch is never the language of interest. Estival et al. (2007), for instance, approached tertiary education profiling as a binary classification task (none versus some tertiary education) for a corpus of English emails. They obtained promising results with an ensemble learner (Bagging algorithm) using character-based, lexical and structural text features while explicitly excluding function words. Pennebaker et al. (2014), however, stressed the importance of function words in a related task: they linked students' writing in college admission essays to their later performance in college. Obtaining higher or lower grades appeared to be associated with the use of certain function words, belonging to either 'categorical' or 'dynamic' writing styles. In previous work on language and social status, Pennebaker (2011) had already pointed out the importance of pronouns: he described a more frequent use of you- and we-words as more typical of high status, as well as a less frequent use of I-words. When we expand the scope of previous research from profiling studies to other related linguistic fields, we again conclude that this specific topic is under-researched. There are many studies on the characteristics of (youngsters') computer-mediated communication (CMC) (see e.g. Varnhagen et al. 
(2010), Tagliamonte and Denis (2008) and many more) and even some on the interaction between CMC and education (see e.g. Vandekerckhove and Sandra (2016) for the impact of CMC on school writing). However, the impact of educational track on adolescents' online writing is not addressed. For this specific topic, we can -to our knowledge -only refer to our previous sociolinguistic work focusing on youngsters with distinct secondary education profiles, in which we have shown that teenagers in practice-oriented tracks tend to deviate more from formal standard writing on social media, by using more typographical chatspeak features (e.g. emoji), more non-standard lexemes (e.g. dialect words) and more non-standard abbreviations (Hilte et al., 2018a,b). While for all examined linguistic features these differences were very consistent between the two 'poles' of the continuum between theory and practice, i.e. General and Vocational students, the Technical students did not always hold an intermediate position, but their chat messages showed a rather unpredictable linguistic pattern (Hilte et al., 2018a,b). We investigate in this paper whether these sociolinguistic results are confirmed in machine learning experiments. Data Collection Our corpus consists of Flemish adolescents' private chat messages, written in Dutch on the social media platforms Facebook Messenger and WhatsApp. The data were collected through school visits during which the students were informed about the research and could voluntarily donate chat messages. We asked for the students' (and for minors, their parents') consent to store and analyze their anonymized texts. Methodology In this section, we describe the preprocessing of the data and the feature design (resp. Sections 4.1 and 4.2) as well as the experimental setup (Section 4.3). 
Preprocessing Since we will predict educational track on a participant level, we must ensure that we have sufficient data (and thus a fairly representative sample of online writing) for each participant. For this purpose, we deleted the participants who donated fewer than 50 chat messages. Next, we divided the remaining corpus into a training set (70% of the participants) and a test set (15%). A second test set (15%) was put aside for future experiments. This division was random but stratified, i.e. every subset contained the same proportion of participants per educational track. Feature Design The features used in the classification experiments consist of general textual features and features representing the frequency of typical chatspeak phenomena. The general features include frequencies for token n-grams (uni-, bi- and trigrams) and character n-grams (bi-, tri- and tetragrams). In addition, average token and post length and vocabulary richness (type/token ratio) are taken into account as well. Finally, we use the dictionary-based computational tool LIWC (Pennebaker et al., 2001), in an adaptation for Dutch by Zijlstra et al. (2004), to count word frequencies for semantic and grammatical categories. While counts for individual words are already captured by the token unigrams, these counts per category can allow for broader generalizations over words which are semantically or functionally related. However, we note that the accuracy of this feature might not be optimal, as the social media texts are very noisy (and contain many non-standard elements, e.g. in terms of orthography or lexicon), whereas LIWC is based on standard Dutch word lists. 
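The participant filtering and stratified 70/15/15 split described in the preprocessing step can be sketched as follows. This assumes a hypothetical per-participant table with `message_count` and `track` columns; the column names are assumptions, while the 50-message threshold and split proportions come from the text.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def split_participants(df, min_messages=50, seed=0):
    # keep only participants who donated at least 50 chat messages
    df = df[df["message_count"] >= min_messages]
    # 70% training data, then split the remaining 30% into two 15% test
    # sets, stratifying on educational track at every step
    train, rest = train_test_split(
        df, test_size=0.30, stratify=df["track"], random_state=seed)
    test1, test2 = train_test_split(
        rest, test_size=0.50, stratify=rest["track"], random_state=seed)
    return train, test1, test2
```

Stratifying on the track label at each split keeps the proportion of General, Technical and Vocational participants identical across the three subsets, as the text requires.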
The set of chatspeak features contains counts for occurrences of several typographic phenomena. It includes the number of character repetitions (e.g. 'suuuuuper nice!!!') and combinations of question and exclamation marks (e.g. 'what?!'). The number of unconventionally capitalized tokens is added as well (alternating, inverse or all caps, e.g. 'AWESOME'). The final typographic features are emoticons and emoji (e.g. :), <3), the rendition of kisses and hugs (e.g. 'xoxoxo'), hashtags for topic indication (e.g. '#addicted') and 'mentions' for addressing a specific person in a group conversation (e.g. '@sarah'). We also add an onomatopoeic variable, i.e. the number of renditions of laughter (e.g. 'hahahahah'). Another typical element of chatspeak is non-standard abbreviations and acronyms (e.g. 'brb' for 'be right back'). The final feature concerns language or register choice per token, in order to explicitly take into account the authors' use of words in a different language or linguistic variety than standard Dutch. We count the number of standard Dutch, English, and non-standard Dutch (e.g. dialect) lexemes. While the other chatspeak features are detected with regular expressions (typographic and onomatopoeic markers) or predefined lists (abbreviations), this lexical feature is extracted using a dictionary-based pipeline approach. For each token, we first checked whether it was an actual word (and not e.g. an emoticon). Next, we checked whether it occurred in a list of standard Dutch words and named entities. If not, we checked its presence in a standard English word list. Finally, if the token was still absent, it was placed in the 'non-standard Dutch' category. Figure 1 shows a sample of authentic chat messages from the corpus, illustrating the use of several chatspeak features. 
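Several of the typographic and onomatopoeic markers above can indeed be detected with regular expressions. The patterns below are illustrative approximations written for this sketch, not the authors' actual expressions, and cover only a subset of the listed features.

```python
import re

# Hypothetical approximations of the chatspeak markers described in the text
CHATSPEAK_PATTERNS = {
    # three or more repetitions of the same character, e.g. 'suuuuuper'
    "char_repetition": re.compile(r"(\w)\1{2,}"),
    # combined question and exclamation marks, e.g. 'what?!'
    "mixed_punct": re.compile(r"[?!]*\?![?!]*|[?!]*!\?[?!]*"),
    # simple ASCII emoticons, e.g. ':)', ';-(', '<3'
    "emoticon": re.compile(r"[:;=]-?[)(DPp]|<3"),
    # renditions of laughter, e.g. 'hahahah'
    "laughter": re.compile(r"\b(?:ha){2,}h?\b", re.IGNORECASE),
    # kisses and hugs, e.g. 'xxx', 'xoxo'
    "kisses": re.compile(r"\b(?:xo)+x?\b|\bx{2,}\b", re.IGNORECASE),
    # hashtags for topic indication and mentions for addressing a person
    "hashtag": re.compile(r"#\w+"),
    "mention": re.compile(r"@\w+"),
}

def chatspeak_counts(message):
    """Return the raw count of each chatspeak marker in one message."""
    return {name: len(p.findall(message)) for name, p in CHATSPEAK_PATTERNS.items()}
```

In the real pipeline such raw counts would then be normalized per author, as described in the next section.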
For each participant, an individual feature vector was created containing the counts for all of these features. We proceeded with relative counts (to normalize for submission size) by dividing the absolute counts by the author's total number of tokens (e.g. for token unigrams or emoji) or n-grams (for n-gram frequencies). For initial dimensionality reduction, we applied a frequency cutoff, only taking features into account that are used at least 10 times in the corpus, by at least 5 different participants. Experimental Setup We compared different models to predict Flemish adolescents' educational track based on their social media messages. The classification algorithms we tested were: Support Vector Machines, Naive Bayes (Multinomial, Gaussian and Bernoulli), Decision Trees, Random Forest, and Linear Regression. For all classifiers, we used the Scikit-learn implementation (Pedregosa et al., 2011). For each model, we searched for the optimal parameter settings through a randomized cross-validation search on the training data. We searched for optimal values for classifier-bound parameters (e.g. kernel for SVM), as well as an optimal feature scaler (no scaling, MinMax scaling or binarization) and an optimal percentile for univariate (chi-square based) feature selection, chosen from a continuous distribution. We compared the models' performance in 10-fold cross-validation experiments on the training data. Results In Section 5.1, we discuss the best model resulting from the 10-fold cross-validation experiments on the training data and compare it to different baseline models. In addition, we inspect the most informative features for the task. In Section 5.2, we discuss additional experiments which provide further insight into the classification problem. 
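The randomized search described above can be sketched with scikit-learn, which the paper itself uses. The pipeline below fixes binarization as the scaler and Multinomial Naive Bayes as the classifier for brevity; the parameter ranges, `n_iter` value, and the synthetic `X`/`y` data are illustrative assumptions standing in for the participant feature matrix and track labels.

```python
import numpy as np
from scipy.stats import uniform
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Binarizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import RandomizedSearchCV

pipe = Pipeline([
    ("scale", Binarizer()),              # one of: no scaling, MinMax, binarization
    ("select", SelectPercentile(chi2)),  # univariate chi-square feature selection
    ("clf", MultinomialNB()),
])

search = RandomizedSearchCV(
    pipe,
    param_distributions={
        "select__percentile": uniform(1, 99),  # percentile from a continuous distribution
        "clf__alpha": uniform(0.0, 1.0),       # NB smoothing parameter
    },
    n_iter=20, cv=10, scoring="f1_weighted", random_state=0,
)

# stand-in data: 60 "participants", 30 count features, 3 educational tracks
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(60, 30)).astype(float)
y = np.repeat([0, 1, 2], 20)
X[:, 0] += 3 * y  # make one feature weakly informative
search.fit(X, y)
```

`search.best_params_` then plays the role of the tuned settings reported in Section 5 (e.g. alpha = 0.98 with the top 12.5% of features for the best model).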
Model Performance and Feature Inspection The best performing model in the cross-validation setting on the training data is a Multinomial Naive Bayes classifier with optimized parameters: the value of the smoothing parameter alpha is 0.98, and the model uses the 12.50% best features (according to chi-square tests). The features were binarized. The classification report (Table 2) indicates that the performance is good, with a value of 0.68 for (prevalence-weighted macro-average) precision, recall and F-score (std. dev. 0.05). While precision is very similar for the three educational levels, recall is good for General Education, but slightly worse for the Vocational and much worse for the Technical level. Consequently, the model seems to miss many Technical profiles, confusing them with the other educational tracks. The confusion matrix (Table 3) shows that most (64%) misclassified Technical profiles were incorrectly labeled as the more theory-oriented General track, rather than as the more practice-oriented Vocational track (36%). As Table 5 shows, the first model reaches an average F-score of 0.60 (see Table 4 for the detailed classification report), while the BoW-model achieves a lower score of 0.55 and particularly underperforms in the detection of Technical profiles, with an F-score of 0.38 (vs 0.50 for the full model). In order to better understand the differences and similarities between both models, we compared their feature sets (after feature selection was applied) and inspected the 1000 most informative ones, using information gain as ranking criterion. 
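Information-gain rankings like the one used here can be approximated with scikit-learn's mutual information estimator (information gain and mutual information coincide for discrete features). The helper below is a sketch; the feature names and the `k` default of 1000 (matching the inspection in the text) are otherwise illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, names, k=1000):
    """Return the k most informative features by mutual information,
    as (name, score) pairs in descending order."""
    ig = mutual_info_classif(X, y, discrete_features=True, random_state=0)
    order = np.argsort(ig)[::-1][:k]
    return [(names[i], float(ig[i])) for i in order]
```

Applied to the binarized feature matrices of the two models, such a ranking is what reveals the chatspeak-laden n-grams discussed next.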
While we expected that the most informative features for the BoW-model would be lexical and the ones for the full model stylistic, this analysis suggests that in both models, many of the most informative selected features are specific occurrences of chatspeak markers. For the BoW-model, which uses only token unigrams as features, many of the most informative tokens contain one or more chatspeak features (e.g. colloquial register, a spelling manipulation, an emoticon, character repetition, etc.). Some other informative tokens seem to be more content- than style-related, revealing topics such as hobbies, specific locations, friends and school. Strikingly, although the full model contains abstractions of chatspeak phenomena (e.g. a total count for emoticons), specific occurrences of these genre markers are still most informative. Our participants with Arabic roots spell the word 'wallah' in many different ways; because of these alternative spellings, 'wallah' does not appear among the most informative tokens in the BoW-model. However, for the full model, several related character n-grams (e.g. 'wlh', 'wll') do. 
Next, we compared the full model to a stylistic model using only chatspeak features (both abstractions and specific occurrences), and no token or character n-grams. This stylistic model performs slightly worse on both the training set (F-score = 0.64, std. dev. 0.04) and unseen data (F-score = 0.59) (see Table 5). However, inspection of the most informative features in this feature set provides further insight into the education profiling task. Many of the most informative features are again specific occurrences of stylistic phenomena (e.g. specific emoticons, specific lexemes containing letter repetition). Some abstract representations of online writing style characteristics appear among the top-1000 features too (such as the total use of character repetition, of onomatopoeic laughter, acronyms, English words, mentions and hashtags, and emoticons), but much less prominently. These findings suggest that even in a purely stylistic model, abstract representation of certain style features is not informative enough for education profiling, and appears to be less important than the use of these features within specific tokens or contexts. For the younger adolescents, aged 13-16, the scores are an F-score of 0.69 in cross-validation (std. dev. 0.09) and 0.55 on unseen data. This might be due to the fact that the older teenagers have been together in the same peer networks and class groups for a longer time, and might write more similarly on social media. Furthermore, some of the younger students might actually still change educational track. 
Conclusion We conducted classification experiments to predict educational track for Flemish adolescents, based on their social media writing. These first results are promising and indicate that the task is doable. However, although the best model strongly outperforms a probabilistic baseline, its performance is similar to that of a simple BoW-model. This might give the impression that lexical features are still very important; however, inspection of the most informative features revealed that many of the most informative tokens contain stylistic features typical of the informal online genre. The most informative features for the full model suggest that abstraction of these stylistic chatspeak features (or at least, the current implementation) is still of lesser importance than specific occurrences. While the distinction between General and Vocational high school students appears to be relatively easy to make, the detection of students in the intermediate Technical track is much harder. This could indicate that these students are truly a hybrid class with subsets of students that are simply not that different from their peers in more theory- or more practice-oriented tracks, respectively. In addition, related research shows that these students' online writing is rather unpredictable and does not follow a clear pattern (Hilte et al., 2018a,b). In future work, we want to experiment with additional algorithms, such as ensemble methods, and with a post-level rather than a participant-level approach (in order to have more data samples at our disposal). We also want to improve the current feature design, and particularly the abstract representation of style features, because, as van der Goot et al. 
(2018) write, abstract features may increase generalizability to other corpora (and even genres and languages) in author profiling tasks, compared to lexical models. We also want to further investigate the creation of different classifiers for different subgroups of participants (e.g. boys versus girls). Finally, we stress that this profiling task is not only relevant in a Belgian context, since the educational tracks serving as class labels correspond to several countries' secondary education programs. Furthermore, the inclusion of stylistic features, i.e. chatspeak phenomena occurring in any language, adds to this generalizability. While specific lexemes or specific realizations of chatspeak markers may not always be relevant in other languages or corpora, the abstract stylistic features are more universal on social media. We argue that these models for education profiling, when further improved, could be used in different languages and applications. For instance, the addition of an educational component can increase existing profiling tools' performance, which can be important in different tasks (e.g. the detection of fake accounts on social media, and many more). Supplementary Materials Because of the decision of our university's ethical committee, in line with European regulations to ensure the adolescents' privacy, we cannot make the dataset publicly available. The code will be made available. Figure 1: Example messages from the corpus. Table 1: Distributions in the corpus. When a bag-of-words (BoW) baseline model is introduced, it obtains almost identical scores in cross-validation: an overall precision, recall and F-score of 0.67 (std. dev. 0.03). There is, however, a difference in how well both models generalize to unseen data. Table 5: Comparison of the different models and baselines. Character n-grams capture informative patterns on a sub-token level (e.g. 
the n-gram 'sss' captures repetition of the letter 's' in different words). This advantage can be illustrated with the Arabic word 'wallah' (meaning 'I swear on God's name'), which is often used by our participants with Arabic roots. Table 6: Classification report for binary task (in cross-validation). Table 8: Comparison of the models for separate groups.
Dynamics and Control of a Flexible Solar Sail A solar sail can use solar radiation pressure (SRP) force alone as the thrust for space missions. The attitude dynamics are obtained for a highly flexible solar sail with control vanes, sliding masses, and a gimbaled control boom. The vibration equations are derived considering the geometric nonlinearity of the sail structure subjected to the forces generated by the control vanes, SRP, and sliding masses. The dynamic models for attitude/vibration controller design and for dynamic simulation are then obtained, respectively. A linear quadratic regulator (LQR) based controller and an optimal proportional-integral (PI) based controller are designed for the coupled attitude/vibration models with constant disturbance torques caused by the center-of-mass (cm)/center-of-pressure (cp) offset. It can be concluded from the theoretical analysis and simulation results that the optimal PI based controller performs better than the LQR based controller in terms of eliminating steady-state errors. The responses with and without the geometric nonlinearity are computed, and the differences are observed and analyzed; some suggestions are also presented. Introduction The solar sail is a novel spacecraft with a unique propulsion style and has great application potential in the near future. Continuous thrust can be obtained by using the huge and highly flexible membrane to reflect solar photons; thus, much longer mission durations are possible with a solar sail [1]. It is reported that many basic scientific questions, involving the impact cosmic rays have on the long-term conditions of the earth environment and on earth itself, can be answered by in situ exploration of the heliopause and the heliospheric interface by solar sailing [2]. This mission can hardly be accomplished by spacecraft with chemical fuels, such as the two Voyager spacecraft launched in the 1970s. 
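The LQR controller mentioned in the abstract rests on the standard gain computation K = R⁻¹BᵀP, with P solving the continuous-time algebraic Riccati equation. The sketch below computes such a gain for a single-axis rigid-body (double-integrator) stand-in, not for the flexible sail model of the paper; the matrices A, B, Q, R are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """LQR gain for x' = Ax + Bu minimizing the integral of x'Qx + u'Ru."""
    P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)    # K = R^-1 B^T P

# single-axis attitude stand-in: state = [angle, rate], control = torque
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

The closed-loop matrix A - BK is then Hurwitz, so the regulated attitude error decays to zero; as the abstract notes, however, a plain LQR leaves a steady-state error under a constant cm/cp disturbance torque, which motivates the PI-based alternative.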
Space missions such as deorbiting [3], pole sitting [4], heliostorm warning [5], and many novel orbits [6][7][8] can be accomplished much more suitably by a solar sail than by a traditional spacecraft. The IKAROS [9] and NanoSail-D [10] solar sail spacecraft have been launched into space; several basic theories concerning solar sails have been demonstrated and several space missions have been accomplished. One of the critical problems deciding success or failure for an orbiting solar sail is the attitude dynamics and control problem. An attitude control system (ACS) consisting of a propellantless primary ACS and a microthruster-based secondary ACS is proposed by Wie and Murphy [11]. In the former ACS, sliding masses are used for yaw and pitch control and roll stabilizer bars for roll control. In the latter, lightweight pulsed plasma thrusters (PPTs) are used for attitude recovery from off-nominal conditions. In addition, a robust attitude controller is developed employing the attitude control actuators mentioned above [12,13]. The modal data can be obtained with high fidelity for a solar sail with attitude control actuators as in [14] by using the finite element method. The attitude controller considering the vibration of the solar sail structure can be designed based on the modal coordinate state-space system, and the effectiveness of the controller is verified. A passive attitude stabilization method (spinning the solar sail) is proposed in the presence of a cm/cp offset in [15,16]. Attitude control methods are proposed, and simulations are carried out, using the control boom, control vanes, and sail shifting and tilting [17,18] in the presence of a cm/cp offset. The attitude dynamics are derived by employing the sliding masses and roll stabilizer bars (RSB) as attitude control actuators, and the validity of the ACS is demonstrated by numerical simulations [19]. 
A high-performance solar sail attitude controller using sliding masses inside the supporting beams was presented, and its ability to perform time-efficient reorientation maneuvers was demonstrated [20]. That controller combines a feedforward controller, which provides fast response, with a feedback controller that responds to unpredicted disturbances [21]. The solar sail ACS presented in [22] moves a small control mass within the sail plane for pitch/yaw control and rotates a ballast with two masses at its extremities for roll control. Robust attitude controllers were designed using H-infinity, QFT, and input shaping methods, respectively, to allow for uncertainties in the inertia, natural frequencies, damping, and modal constants of the sail, and their performances were analyzed on the linear and nonlinear dynamics, respectively [23]. A reduced dynamic model of a flexible solar sail, with foreshortening deformation coupled to its attitude and vibration, was derived in [24-26]; a bang-bang control scheme combined with input shaping was used to eliminate vibration during a time-optimal attitude maneuver. An attitude control system using small reaction wheels and magnetic torquers was proposed for a solar sail in low Earth orbit, and its validity was demonstrated by numerical simulation [27]. The dynamic model of a multibody solar sail with a control boom and reaction wheel was derived in [28,29]; controllability and stability were analyzed and the proposed attitude control scheme was demonstrated. The effectiveness of a solar sail attitude control system employing four control vanes was demonstrated by simulating deep space exploration missions [30]. The dynamics was also derived, and controllability and stability were analyzed, for a solar sail with a control boom, reaction wheel, and control vanes.
The effectiveness of the proposed ACS was demonstrated for a solar sail with a cm/cp offset [31]. Trajectory tracking control was accomplished using a solar sail with four control vanes to command the sun angle of the sail; an additional degree of freedom of the control vanes also allows the magnitude of the resultant sailcraft thrust vector to be controlled [32]. A robust nonlinear attitude control algorithm was developed for a solar sail with four single-degree-of-freedom control vanes, rejecting disturbances due to the cm/cp offset, and the control allocation was studied using nonlinear programming [33]. Novel and practical ACS concepts for solar sails continue to be proposed. A bus-based ACS is presented in [34], with a highly reflective panel actuator located at the free end of a bus-based boom for roll control and a tethered control mass running along the boom for yaw/pitch control; this scalable ACS reduces the risk and complexity involved in the design of the sail deployment subsystem. The ACS designed for IKAROS is especially novel and practical: it uses reflectivity control devices (RCDs) for yaw/pitch control of the spinning solar power sail, and it is fuel-free and oscillation-free [35]. However, this ACS fails to generate the required torques when the sail is edge-on to the sun (the sun angle approaches 90 degrees). Recently, a novel attitude control method was proposed in [36]: the required torque is generated by adjusting the position of a wing tip along the boom. This is an effective attitude control approach for large sailcraft, but the exact shape of the films is difficult to determine. Dynamic modeling and control of flexible sailcraft are studied in [37,38]. In [37], the vibration equations for the axial and transverse deformations are established considering the geometric nonlinearity of the sailcraft.
However, vibration analysis alone is insufficient for an orbiting sailcraft undergoing attitude motion; a coupled attitude/vibration analysis is required. In [38], the coupled attitude/vibration dynamics is established and the solution process is presented; a controller is designed using bang-bang based PD theory, and its effectiveness is verified by numerical simulation, but large structural deformation is not considered. The coupled attitude-orbit dynamics of a solar sail is studied in [39]: an equilibrium point of the dynamics can be obtained by designing the inertia of the sail, and the stability of the equilibrium point is analyzed through linearization. Attitude dynamics and control are thus studied thoroughly in the above references, covering ACS designs based on actuators such as control booms and control vanes or on means such as spinning and translating/tilting sail panels. Little research, however, addresses attitude dynamic modeling of a highly flexible structure together with vibration modeling that includes geometric nonlinearity and coupled attitude/vibration control, although this is a problem well worth studying. This paper establishes the attitude dynamics of a large flexible solar sail with control vanes, a control boom, and sliding masses, and also presents the vibration equations. LQR based and optimal PI based controllers are designed for the coupled attitude/vibration dynamics. Through theoretical analysis and simulation, the controller with the better performance is selected and discussed, and the differences between the dynamic models with and without the geometric nonlinearity are identified and analyzed. Attitude Dynamics In this section, the configuration and structure of the sailcraft are presented, the related reference frames and coordinate transformations are given, and the kinetic and gravitational potential energies and the generalized loads are obtained.
The Lagrange equation method is adopted to derive the attitude dynamics with control vanes, control boom, and sliding masses; the attitude dynamics is an important part of the coupled attitude/vibration dynamics. This paper mainly focuses on the dynamics of a highly flexible solar sail, and the following assumptions are made to simplify the problem. (1) The detailed vibration of the membrane is neglected, since most of the vibration kinetic energy is carried by the supporting beams; the kinetic energy of the membrane vibrations and the elastic potential energy of the membrane structure can therefore be neglected. (2) The masses of the control vanes and control boom are neglected, and the control vanes and control boom are regarded as rigid bodies. (3) The deformation of the structure is not affected by thermal loads, and the wrinkle effect is also neglected. (4) The torque due to the cm/cp offset is treated as the disturbance torque; other disturbance torques (such as the gravity gradient torque) are neglected. The Reference Frames and Coordinate Transformations. The inertial reference frame, used for the solar sail attitude and orbital motion: its X axis points toward the vernal equinox; its Z axis lies along the Earth's rotation axis, perpendicular to the equatorial plane; its Y axis lies in the equatorial plane and completes the right-handed triad of unit vectors. The body reference frames of the solar sail and of the control vanes (i = 1, 2, 3, 4): O is the geometric center of the film; the body x axis is perpendicular to the film, pointing toward the payload side; the body y and z axes lie in the plane of the sail. C_i is the geometric center of the i-th control vane, and when there is no relative motion between the sail and a vane, the vane frame coincides with the body frame. The related frames are shown in Figure 2. The attitude motion is described in Euler angles, using the yaw-roll-pitch notation.
In this notation a three-element vector of yaw, roll, and pitch angles describes the attitude of the solar sail with respect to the inertial frame, with c and s short for cosine and sine. The transformation from the inertial frame to the body frame is realized by a 3-1-2 rotation (yaw about the 3-axis, then roll about the 1-axis, then pitch about the 2-axis), as shown in Figure 3 (right), with the following rotation matrix (reconstructed from the stated 3-1-2 sequence, with yaw, roll, and pitch angles denoted psi, phi, and theta):
[ c(theta)c(psi) - s(theta)s(phi)s(psi)   c(theta)s(psi) + s(theta)s(phi)c(psi)   -s(theta)c(phi) ]
[ -c(phi)s(psi)                            c(phi)c(psi)                             s(phi)          ]
[ s(theta)c(psi) + c(theta)s(phi)s(psi)   s(theta)s(psi) - c(theta)s(phi)c(psi)    c(theta)c(phi)  ]
The transformation from the body frame back to the inertial frame is the transpose of this matrix. Attitude control can be performed by varying the yaw, roll, and pitch angles. If the orbital motion and its influence on the attitude motion are neglected, the components of the position and velocity vectors of an arbitrary point on a supporting beam can be computed from the preceding relations. In Figure 4, the boom tilt and azimuth angles relative to the sailcraft body axes are denoted delta_1 and delta_2, and the components of the boom vector r can be expressed using these two angles as r = r(cos delta_1 i + sin delta_1 cos delta_2 j + sin delta_1 sin delta_2 k). The position and velocity vectors of the payload then follow, with the velocity given by the derivative of r in the body frame plus the cross product of the sail angular velocity with r. The position vectors of the two sliding masses are obtained analogously; if the orbital motion and its influence on the attitude motion are neglected, their position and velocity vectors follow directly. The Kinetic Energy. Neglecting the orbital motion and its influence on the attitude motion, the kinetic energy of the sail system can be written in terms of the angular velocity and the moment of inertia J of the sail system, where J is the sum of the moments of inertia of the four quadrants and is diagonal in the body frame. The kinetic energies of the payload and of the sliding masses are obtained similarly, again neglecting the orbital motion and its influence on the attitude motion. 2.5. The Gravitational Potential Energy.
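The 3-1-2 (yaw-roll-pitch) sequence described above can be built by composing elementary rotations. The following numpy sketch is illustrative: the function names and the frame-rotation sign convention are assumptions, not taken from the paper.

```python
import numpy as np

def R1(a):
    """Elementary frame rotation about the x (roll) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R2(a):
    """Elementary frame rotation about the y (pitch) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def R3(a):
    """Elementary frame rotation about the z (yaw) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def dcm_312(yaw, roll, pitch):
    """Inertial-to-body direction cosine matrix for a 3-1-2 sequence:
    yaw about the 3-axis, then roll about the 1-axis, then pitch about the 2-axis."""
    return R2(pitch) @ R1(roll) @ R3(yaw)

R = dcm_312(0.3, 0.2, 0.1)   # example attitude, angles in radians
```

Since each elementary rotation is orthogonal, the composed DCM is orthogonal as well, and the body-to-inertial transformation is simply its transpose.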
The gravitational potential energies of the sail system, the payload, and the two sliding masses are obtained in the standard way. 2.6. The Generalized Forces. The components of the sunlight unit vector S in the inertial frame are obtained by referring to Figure 3 (left), and from the attitude transformation its components in the body frame follow. The resultant SRP force on a perfectly reflective control vane can be modeled simply as F = eta * P * A_i * cos^2(alpha_i) * n_i, where eta is the overall sail-thrust coefficient (eta_max = 2), P = 4.563 x 10^-6 N/m^2 is the solar radiation pressure constant, A_i is the area of the i-th control vane, eta * P * A_i is the maximum control vane thrust, and n_i is the unit normal vector of the vane, whose components in the body frame follow from the vane rotation angles. The control torque produced by the i-th control vane is the cross product of its moment arm L_i and its force F_i. In fact, L_i and F_i depend on the deformations of the supporting beams and control vanes; in this paper it is assumed that L_i and F_i are not influenced by the structural deformations when computing the vane control torques. The sun angle alpha_i of the i-th control vane can be measured by a sun sensor, and the components of the control torques then follow. The attitude equations are derived from the Lagrange equations, where the Lagrange function is computed from the kinetic energies of the supporting beams, payload, and sliding masses and the corresponding gravitational potential energies of the sail system, payload, and sliding masses, with three generalized coordinates and the associated generalized forces. The Vibration of Solar Sail The mechanical and material models of the sailcraft are given, and the displacement field, stress, and strain are presented based on the Euler beam assumption.
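The vane force model F = eta * P * A * cos^2(alpha) given above can be evaluated directly. A minimal sketch follows; the function name and the 25 m^2 vane area are illustrative assumptions, only the pressure constant and eta_max = 2 come from the text.

```python
import math

P_SRP = 4.563e-6   # solar radiation pressure constant from the paper, N/m^2

def vane_srp_force(area_m2, sun_angle_rad, eta=2.0):
    """SRP force magnitude on a flat, perfectly reflective control vane:
    F = eta * P * A * cos^2(alpha); eta = 2 is the ideal-reflection maximum."""
    return eta * P_SRP * area_m2 * math.cos(sun_angle_rad) ** 2

f_max  = vane_srp_force(25.0, 0.0)           # vane face-on to the sun
f_edge = vane_srp_force(25.0, math.pi / 2)   # edge-on: thrust collapses
```

The edge-on case (sun angle approaching 90 degrees) is the same geometric limit in which the RCD-based ACS discussed earlier loses control authority.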
The Lagrange equation method is adopted to derive the vibration equations with large deformations, based on the kinetic energies and the work done by the external loads. The Mechanical Model. The body frame O xyz is established to describe the vibration (see Figure 5). F is the force vector generated by the control vane located at the tip of the supporting beam, with its components taken in O xyz, and the distributed load along the beam is caused by SRP. The Material Model. A carbon-fibre-reinforced composite is used to construct the inflatable deployable supporting beam. The viscoelastic material model is represented in Figure 6: the total stress is the sum of the elastic and viscous stresses, sigma = E * eps + c_d * deps/dt, where sigma is the total stress, E * eps the elastic part, and c_d * deps/dt the viscous part; E is Young's modulus, eps (deps/dt) is the strain (strain rate), and c_d = beta * E is the internal damping coefficient, with beta the proportionality constant of the internal damping. The dimensions of c_d and beta are N*s/m^2 and s, respectively. 3.3. The Displacement Field, Stress, and Strain. The deformation of a spatial solar sail supporting beam is presented in Figure 7, and the spatial displacement field of the beam follows the Euler beam assumption. Using the Green strain definition, von Karman's nonlinear strain-displacement relationships, based on the assumptions of large deflections, moderate rotations, and small strains for a 3D supporting beam, are given together with the strain rate and normal stress. The singly underlined terms in the resulting equations indicate the contribution of the geometric nonlinearity; the dynamic models and corresponding numerical simulations without geometric nonlinearity are obtained by neglecting these terms. The Energies and Works Done by External Forces.
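The viscoelastic stress law above (total stress = elastic part plus viscous part, with the damping coefficient proportional to Young's modulus) can be sketched in a few lines. The numeric values below are illustrative assumptions, not material data from the paper.

```python
def kelvin_voigt_stress(E, beta, strain, strain_rate):
    """Kelvin-Voigt type law: sigma = E*eps + c_d*deps/dt with c_d = beta*E.
    Units: E in N/m^2, beta in s, so c_d comes out in N*s/m^2 as in the text."""
    c_d = beta * E
    return E * strain + c_d * strain_rate

# illustrative composite-beam numbers (assumed)
sigma = kelvin_voigt_stress(E=70e9, beta=1e-4, strain=1e-4, strain_rate=1e-3)
```

Setting the strain rate to zero recovers the purely elastic stress E*eps, which is a quick sanity check on the model.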
The strain energy, the dissipation function, and the kinetic energy of the supporting beam are derived accordingly; for an isotropic support beam, the extensional stiffness and bending stiffness follow from the material and cross-section properties, with the corresponding integration constants of the beam. The work done by the external forces is obtained, where the displacements along the axial and transverse directions of the beam enter together with the corresponding distributed loads, which are known functions. The position coordinate of the sliding mass, the length of the support beam, and the constant velocity of the sliding mass also appear; the dynamic response is studied only during the period in which the sliding mass traverses the beam. The sliding mass and the acceleration components of the material point of the beam at which the sliding mass is currently located enter the work expression, together with the corresponding transverse deflections at that point. The Assumed Modes Method. The axial displacement and the two transverse deflections are represented by the assumed modes method as sums of products of time-dependent generalized coordinates and assumed mode shape functions. For the assumed mode method, the mode shape functions must satisfy the geometric boundary conditions of the beam; the first two assumed modes satisfying this principle are used in this paper. With the help of the assumed modes, the detailed expressions of the mass, damping, stiffness, and load terms are obtained from the Lagrange function, and the axial vibration equations follow for the two axial generalized coordinates. The following can be seen from these equations. (1) Since assumed modes rather than natural modes are used, the coupling between the axial and transverse vibrations is rather severe for the axial vibrations.
(2) The nonlinearity in these equations appears only in the terms involving the generalized coordinates and their derivatives. Taking the first axial equation as an example, the terms containing the second axial coordinate and its first and second derivatives, together with the transverse coordinates, can be regarded as applied loads for the first axial coordinate. (3) The coupling between the axial and transverse vibrations disappears as soon as the linear displacement-strain relationship is adopted. (4) According to these equations, there are no external loads acting on the axial vibrations. The transverse vibration equations for the transverse generalized coordinates are obtained analogously; their complexity, strong coupling, and nonlinearity are evident. The Coupled Rigid-Flexible Dynamics The dynamics for control and for dynamic simulation are obtained for the solar sail by simplifying the equations derived in the previous sections, under the following assumptions. (1) Only the attitude motion is affected by the sliding masses; the vibration of the sail structure is not affected by the sliding masses. (2) Although the attitude control is accomplished by rotating the control vanes and the gimbaled control boom and by moving the sliding masses, the specific time histories of the actuators are not considered here. (3) The four supporting beams are regarded as a single structure, so identical generalized coordinates are adopted in the dynamic analysis. The Simplified Attitude Dynamics. The kinetic energy of the solar sail supporting beams is obtained, with subscripts 2 and 3 denoting the second and third supporting beams; the displacements along the three body directions are expressed through the generalized coordinates q1(t) and q2(t), and the displacements themselves are obtained using the assumed modes.
The vibration velocity is obtained by differentiation. Without loss of generality, the mode-shape constants in these expressions can be normalized to plus or minus one. The kinetic energy of the sliding masses is obtained by neglecting the vibration-related terms, and the kinetic energy of the payload follows similarly. The simplified attitude dynamics is then obtained by neglecting second- and higher-order terms in the attitude angles and vibration modes when computing the kinetic energy, where the generalized nonpotential attitude control torques are those generated by the control vanes. The attitude control torques generated by the gimbaled control boom and the sliding masses appear on the left-hand side of the equations, coupled with the angular accelerations, which makes the attitude controller design problem rather difficult. The Simplified Vibration Equations. The coupling between the axial and transverse vibrations is neglected, and only the transverse vibration is considered. The attitude dynamics for dynamic simulation is obtained from the previous derivation by neglecting the action of the solar radiation pressure on the beams, and the equations are rearranged so that Q1(t) and Q2(t) represent all the generalized external forces, which can be used for vibration control. These complicated equations are the basis for the simplified equations used for controller design, which are obtained by neglecting second- and higher-order terms; the related coefficients can be found in (A.1) in the appendix. This form of the dynamics is not suitable for attitude controller design, because the control inputs appear in some coefficients of the angular accelerations, which would make the controller design rather difficult.
Thus the dynamic model is rewritten by moving the terms containing the control inputs to the right-hand side of the dynamic equations; the remaining coefficient functions depend on the positions of the sliding masses and payload and on the angular accelerations. Defining the state variables as the attitude angles and rates together with the vibration generalized coordinates and their rates, the dynamics can be written as the state-space model: dx1/dt = x2, dx2/dt = a1*x7 + a2*x8 + a3*x9 + a4*x10 + u1; dx3/dt = x4, dx4/dt = a5*x7 + a6*x8 + a7*x9 + a8*x10 + u2; dx5/dt = x6, dx6/dt = a9*x7 + a10*x8 + a11*x9 + a12*x10 + u3; where u1, u2, and u3 are the generalized control inputs for attitude control and two further generalized inputs act on the vibration coordinates for vibration control. Controllability can be verified from this model; the attitude/vibration control inputs and the coefficients are presented later in this section. The model can be written in matrix differential form dx/dt = A x + B u, where A, x, B, and u are the system matrix, state variable vector, control matrix, and control input vector, respectively. The complexity of the state-space model can be seen from the detailed expressions of the attitude/vibration control inputs and related parameters in (A.2) in the appendix. Attitude/Vibration Control and Simulation The controllers for the attitude and vibration are developed using LQR and optimal PI theory, and the dynamic simulation is performed based on the coupled dynamics derived in the previous sections. LQR and Optimal PI Based Controllers Design.
The dynamics for the solar sail can be written in the standard linear form dx/dt = A x + B u, with the quadratic cost function J = integral over [t0, infinity) of (x'Qx + u'Ru) dt. The optimal control input u*(t) that minimizes J is the state feedback u* = -R^(-1) B' P x, where P is the nonnegative-definite symmetric solution of the algebraic Riccati equation A'P + PA - P B R^(-1) B' P + Q = 0. The optimal performance index for an arbitrary initial state x0 is J* = x0' P x0, and the closed-loop system is asymptotically stable. For an orbiting solar sail, the primary disturbances are the torque due to the cm/cp offset and the impact of tiny dust particles in deep space. A state regulator can control the system under pulse disturbance torques with good steady-state behavior, but it can never achieve zero steady-state error when the dynamics is affected by a constant disturbance torque. Therefore an optimal state regulator augmented with an integrator is introduced to eliminate the constant disturbance torque: this optimal proportional-integral regulator both rejects the constant disturbance and retains the properties of the optimal regulator. For the corresponding linear system and performance index, with Q = Q' >= 0, R = R' > 0, and S = S' > 0, if [A, B] is completely controllable and [A, D11] is completely observable with D11'D11 = Q, the solution can be expressed in feedback form; if B'B is full rank, the gains K_i (i = 1, 2, 3, 4) are functions of A, B, Q, R, and S, and the closed-loop system is asymptotically stable. Both the LQR based and the optimal PI based controllers achieve a compromise between the state variables (the angles, angular velocities, vibration displacements, and velocities) and the control inputs (the torques from all the actuators) for the coupled attitude/vibration dynamic system. In fact, the attitude control authority is limited by the control torque available from the control actuators.
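The LQR gain described above can be computed from the algebraic Riccati equation. A minimal numpy sketch follows, using the Hamiltonian eigenvector method on a double-integrator toy model; the model, function name, and numbers are assumptions for illustration and are not the sail dynamics from the paper.

```python
import numpy as np

def lqr(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation via the
    Hamiltonian eigenvector method and return the optimal gain K and P."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]              # eigenvectors spanning the stable subspace
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))    # ARE solution
    return Rinv @ B.T @ P, P

# double-integrator toy model (a stand-in for one attitude axis)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = lqr(A, B, Q, R)
```

For this toy model the ARE can be solved by hand, giving K = [1, sqrt(3)], which makes the sketch easy to verify.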
The attitude angles and rates should not change too rapidly, to avoid exciting vibrations. The state weighting matrix Q and the control weighting matrix R are therefore selected accordingly for the LQR based controller, and corresponding weighting matrices are selected for the optimal PI based controller. The relevant parameters for the dynamic simulations are given in Table 1. The following simulation results are obtained with the LQR controller using the above Q and R, the related parameters, and the initial conditions. Simulations with (denoted "with GF") and without (denoted "without GF") the geometric nonlinearity are presented, followed by the corresponding discussion and analysis. The roll, pitch, and yaw errors are presented for the dynamic models with and without the geometric nonlinearity. Zero steady-state error is achieved for the roll axis, but steady-state errors remain for the pitch and yaw axes, as seen in the figures; a simple and practical controller should be designed to eliminate these errors, since they would seriously degrade the precision of very long duration space missions. The differences between the dynamics with and without the geometric nonlinearity can also be observed: the amplitudes and phases change slightly. The partially enlarged plots (the right-hand plots of Figure 9) show that there are no steady-state errors in the roll, pitch, and yaw rates, and relatively satisfactory dynamic performance is achieved. In fact, still better dynamic performance could be obtained, but it would require much larger control torques during the initial period, which would both excite structural vibration and increase the burden on the control actuators.
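The closed-loop dynamic simulations discussed above can be reproduced in miniature by integrating a single-axis toy model dx/dt = Ax + Bu under a fixed state-feedback gain with a fourth-order Runge-Kutta scheme. All matrices, the gain, and the initial error below are illustrative assumptions.

```python
import numpy as np

def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-axis double integrator
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.7]])               # stabilizing state-feedback gain (assumed)
f = lambda x: (A - B @ K) @ x            # closed-loop dynamics

x = np.array([0.1, 0.0])                 # 0.1 rad initial attitude error
for _ in range(2000):                    # 20 s at dt = 0.01 s
    x = rk4_step(f, x, 0.01)
```

After 20 s both the angle and the rate have decayed essentially to zero, mirroring the error plots described in the text.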
The differences between the dynamic models with and without the geometric nonlinearity appear only in the phase of the angular velocities. The desired attitude control torques are presented in Figure 10. The roll torque tends to zero as the state variable errors decrease, while the pitch and yaw torques tend to constant values (middle and bottom plots). Theoretical analysis and simulation show that the pitch and yaw steady-state errors can never be eliminated even though constant pitch and yaw torques (0.09 N*m and -0.09 N*m) are applied continuously; this is a major disadvantage of the LQR based controller in this paper. The initial control torques can be provided by gimbaling the control boom and moving the sliding masses, which can supply relatively large torques; the control vanes can then provide the control torques once the solar sail reaches the steady state. The expressions for the attitude control inputs can be found in the previous section. Moreover, the differences between the responses with and without the geometric nonlinearity are again visible: the required attitude control torques are slightly larger for the dynamics with geometric nonlinearity, and the difference in phase is also clear. The simulation results in Figure 11 show that the vibration of the supporting beam is suppressed effectively, with relatively satisfactory steady-state errors and dynamic performance. The responses of the models with and without the geometric nonlinearity differ in the magnitudes and phases of the vibration displacement and velocity (Figure 15). The following simulation results are obtained with the optimal PI based controller using the same parameters and initial conditions. The roll, pitch, and yaw errors under the optimal PI based controller are given next.
These errors tend to vanish, as seen in the partially enlarged plots (the corresponding right-hand plots in Figure 12), and relatively satisfactory dynamic performance is also obtained. From the viewpoint of eliminating steady-state errors, theoretical analysis and simulation show that the optimal PI based controller performs better than the LQR based controller: it achieves attitude control without pitch and yaw steady-state errors. From the viewpoint of dynamic performance, both controllers perform well. In addition, the difference between the models with and without geometric nonlinearity lies only in the phase. The roll, pitch, and yaw rate errors are presented in Figure 13; both the dynamic and steady-state performances are satisfactory, and again the only difference between the responses with and without the geometric nonlinearity is the phase. The desired roll, pitch, and yaw control torques under the optimal PI based controller are shown in Figure 14. The desired roll control torque decreases to zero as the roll angle and rate decrease to zero, while the desired pitch and yaw control torques tend to the constant values 0.09 N*m and -0.09 N*m, respectively. The desired initial control torques are large, so it is suggested that the gimbaled control boom combined with the sliding masses provide the initial torque, after which the control vanes take over control of the solar sail. The phase difference can also be observed in the responses during the 4800-5000 s period. The vibration is controlled effectively by the optimal PI controller, and relatively satisfactory dynamic and steady-state performances are achieved.
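The reason the PI structure removes the constant cm/cp disturbance torque can be seen in a toy single-axis model: augmenting the state with the integral of the attitude error forces that error to zero at equilibrium, while the integral term "absorbs" the disturbance. The gains, the double-integrator plant, and the disturbance value below are illustrative assumptions (0.09 merely echoes the constant torque level seen in the paper's simulations).

```python
# single-axis toy model: theta_ddot = u + d, with constant disturbance d
d = 0.09                      # constant disturbance torque (illustrative)
k1, k2, ki = 3.0, 3.0, 1.0    # PI-type state feedback, closed-loop poles at s = -1
x1, x2, z = 0.1, 0.0, 0.0     # angle error, rate, integral of angle error
dt = 0.001
for _ in range(40000):        # 40 s of explicit Euler integration
    u = -(k1 * x1 + k2 * x2 + ki * z)
    x1 += dt * x2
    x2 += dt * (u + d)
    z += dt * x1
# at equilibrium u = -d, held entirely by the integral term: z -> d / ki
```

A plain proportional-derivative law (ki = 0) would instead settle at the nonzero offset x1 = d / k1, which is exactly the steady-state error the LQR based controller exhibits on the pitch and yaw axes.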
In addition, tiny differences exist in the magnitudes of the required torques, along with phase differences. Detailed analysis and comparison of the simulation results based on the LQR and optimal PI theories lead to the following conclusions. (1) In eliminating the pitch and yaw steady-state errors, the optimal PI based controller performs better than the LQR based controller, as shown by theoretical analysis and simulation; the dynamic performances of the two controllers are both satisfactory, and the desired control torques are relatively small. (2) In the LQR and optimal PI theories, the relative importance of the state variables is reflected by the weighting matrices. The constant weighting matrices selected in this paper reflect the initial state variable errors effectively but may reflect the final state variable errors insufficiently; time-varying weighting matrices should be considered. (3) For an actual orbiting solar sail, the control actuators (control vanes, gimbaled control boom, and sliding masses) cannot provide excessively large control torques. The control torques should therefore be kept as small as possible while still achieving satisfactory dynamic performance and steady-state errors; this both reduces the burden on the control actuators and avoids exciting structural vibration. In engineering practice, the state and control weighting matrices should be selected according to the attitude control requirements (e.g., dynamic performance and steady-state errors). (4) The influence of the geometric nonlinearity on the dynamic responses can be observed. In short, only tiny differences in the magnitudes of the physical variables exist between the models with and without the geometric nonlinearity, while relatively clear differences exist in the phases of the attitude angles, rates, required torques, and so forth.
Conclusions The coupled attitude and structural dynamics of a highly flexible sailcraft is established using von Karman's nonlinear strain-displacement relationships under the assumptions of large deflections, moderate rotations, and small strains. The Lagrange equation method is utilized to derive the dynamics for controller design and for dynamic simulation. LQR based and optimal PI based controllers are designed for the dynamics subject to the constant disturbance torque caused by the cm/cp offset. Theoretical analysis and simulation show that both controllers achieve a compromise between the state variables and the control inputs; because the control actuators can hardly supply large control torques, the dynamic performance should be relaxed to ease the requirements on the attitude control actuators. Moreover, the optimal PI based controller also eliminates the ever-present constant disturbance torque caused by the cm/cp offset, whereas the LQR based controller fails to achieve satisfactory steady-state performance even though the attitude control torque is applied continuously to the coupled dynamic system. Dynamic simulations with and without the geometric nonlinearity effect are also carried out, and the differences between the models in magnitudes and phases are identified and analyzed. It can be concluded that the optimal PI based controller performs better than the LQR based controller from the viewpoint of eliminating the steady-state error caused by the cm/cp offset disturbance torque.
Comparative Analysis of Image Registration for Printed Circuit Boards - Image registration is the process of overlaying two or more images taken at different times, from different viewpoints, or by different sensors. This paper presents a comparative analysis of different image registration techniques for automated inspection of printed circuit boards (PCBs). In automated PCB inspection using image subtraction, it is essential to match the size and orientation of the two images (the reference image and the test image). The major goal of this paper is to provide a comparative analysis of the image registration techniques used for automated PCB inspection. The correlation coefficient is used as the similarity measure to compare the results on a quantitative basis, and the results provide both qualitative and quantitative analysis of the proposed work. Image registration is broadly classified into two categories: intensity-based image registration and feature-based image registration. This paper is organised as follows: the next section describes the related work; Section III presents the materials and methods; Section IV describes the results; and Section V gives the conclusions. II. RELATED WORK R. M. Ezzeldeen et al. presented a comparative study of image registration techniques for remote sensing images, comparing fast Fourier transform, contour-based, wavelet-based, Harris-pulse-coupled-neural-network-based, and Harris-moment-based techniques [8]. J. Jiang et al. developed shape registration for remote sensing images with background variation; in the proposed method, a level-line detector is used for shape recognition [9]. T. Sritarapipat et al. described fusion and registration of Thailand Earth Observation Satellite multispectral and panchromatic images, using the maximum a posteriori criterion to solve the joint fusion and registration problem.
To determine the optimum fine resolution multispectral image and mapping parameters, the Metropolis algorithm had been used [10]. H. Goncalves et al. developed CHAIR, an automatic image registration method based on correlation and the Hough transform; for determination of translational shifts and control points, the distance of the diagonal bright strip in the correlation image had been computed [11]. L. Chang presented an automatic registration of coastal remotely sensed imagery by affine invariant feature matching with a shoreline constraint; an automatic filtering technique had been used for removal of wrong matches [12]. Q. Xu et al. presented an improved Scale Invariant Feature Transform (SIFT) match for optical satellite image registration by size classification of blob-like structures; they classified the blob-like structures according to their physical sizes and used scale normalization and a size classification method [6]. W.C. Lin et al. proposed an approach to automatic blood vessel image registration of microcirculation for blood flow analysis of nude mice; they used a microscopic system to provide precise and continuous quantitative data on the blood flow rate in individual microvessels of nude mice and applied Powell's optimisation search method [13]. F.P.M. Oliveira et al. developed registration of pedobarographic image data in the frequency domain; the method comprised the Fourier transform, cross-correlation, and phase correlation [14]. T. Araki et al. presented a comparison of four image registration techniques for quantitative assessment of coronary artery calcium lesions using intravascular ultrasound; they used the mean lesion area, mean lesion arc, mean lesion span, mean lesion length, and mean lesion distance from the catheter as five calcium lesion quantification parameters [15]. P. Qiu et al. proposed feature based image registration using non-degenerate pixels. The proposed method requires two properties of a feature point [16]: 1.
It should provide enough information to approximate the geometric matching. 2. It should be easily identifiable by a computer algorithm. P.A. Legg et al. described feature neighbourhood mutual information for multi-modal image registration; the similarity measure efficiently incorporated spatial and structural image properties and provided better accuracy compared to existing methods [17]. Z.-H. Najad et al. proposed an adaptive image registration method based on the Scale Invariant Feature Transform (SIFT) and Random Sample Consensus (RANSAC). The proposed method consisted of two parts [18]: 1. In the first part, mean-based adaptive RANSAC was used; the threshold was chosen as the mean of the distances between each point and its model-transformed counterpart, and a point was discarded if its distance exceeded this mean. 2. In the second part, adaptive RANSAC was used to increase the capability of the method. The performance of the proposed method was compared with other existing methods using evaluation criteria such as the True Positive (TP) rate and the mismatch ratio. M.I. Patel et al. proposed image registration of satellite images with varying illumination levels using a Histogram of Oriented Gradients (HOG) descriptor based Speeded-Up Robust Features (SURF); incorrect matches that degrade SURF-based registration are reduced by using HOG. The proposed work consists of three steps [19]: 1. In the first step, the intensity difference between the two images had been removed using their mean values. 2. In the second step, keypoints had been extracted using SURF. 3. In the third step, feature descriptors had been matched using the Euclidean distance. S. Nagarajan et al. developed feature based registration of historical aerial images by area minimization. The main focus of the proposed method was on Time Invariant Line (TIL) features.
The exterior orientation of historical images had been determined by minimizing the area formed between corresponding TIL features in recent and historical images [20]. A.D. Savva et al. introduced a comparative study of geometry based and intensity based medical image registration on three-dimensional computed tomography data. For intensity based registration, the correlation coefficient, Mattes mutual information, and mean square error had been examined; for feature based registration, the fast point feature histogram, signatures of histograms of orientations, and surface normals had been used [21]. Y. Zhuang et al. proposed infrared and visual image registration based on mutual information with a combined particle swarm optimization (PSO) and Powell search algorithm; PSO had been used to obtain a registration parameter close to the global minimum, and the Powell search had been used to find a more precise registration parameter [22]. K. Aghajani et al. proposed a robust image registration method based on total variation regularization under complex illumination changes. The reference image was reconstructed by a linear synthetic model consisting of multiplicative and additive coefficients, and a weighted total variation was used to minimize the reconstruction error; as a result, the smoothing effect on the coefficients across edges had been reduced by the weighting function [23]. R.W.K. So et al. presented a novel learning based dissimilarity metric for rigid and non-rigid medical image registration using Bhattacharyya distances. The Bhattacharyya distance is a mathematical tool used to estimate the distance or difference between two probability distributions. An approximation of the proposed dissimilarity metric had been adopted with a Markov Random Field for non-rigid registration [24]. H. Han et al. introduced a variational problem arising in the registration of diffusion tensor images.
The major focus of the proposed method was to prove the existence of a global minimizer which ensures a regular spatial transformation for the registration of diffusion tensor images [25]. K. Thakur et al. introduced an implementation and analysis of template matching for image registration; intensity based image registration with reduced time delay was used, with Normalized Cross Correlation (NCC) for matching. Parameters such as the peak signal-to-noise ratio and mean square error were used to compare the resultant registered image with the original image [26]. R. Panda et al. developed a novel evolutionary rigid body docking algorithm for medical image registration; the ligand was taken as the target image and the protein as the reference image. For finding optimal configurations, an objective function had been introduced using a genetic algorithm; it recovered the rotational and translational parameters of the different kinds of images used in the experiment [27]. P. Dong et al. introduced a scalable joint segmentation and registration framework for infant brain images; a one-year-old image with ground truth tissue was taken as the reference domain, and for registration, tissue probability maps had been estimated with a sparse patch based multi-atlas label fusion technique [28]. From the above discussion, it has been observed that most of the work on image registration has been done for remote sensing and medical applications, but little for electronics applications. In the image subtraction method used for inspection of defects in printed circuit boards, the same size and orientation of the two images is a major requirement [4,5]. So, this paper presents image registration for automated inspection of printed circuit boards. The following techniques have been used for image registration: A.
Intensity based image registration
Intensity based methods operate directly on the intensity values of the images and can be performed automatically [29]. They depend on the relationship between the intensity values of the pixels of the two images [30]. The main goal of the intensity based registration approach is to find a set of transformation parameters that globally optimizes a similarity measure [31]. Mean squared differences and normalized cross-correlation are commonly used similarity measures in intensity based image registration and are used for intra-modal registration, whereas mutual information is another similarity measure used in intensity based registration for multi-modal registration [31]. The main advantage of this approach is its flexibility; its drawback is sensitivity to illumination [13]. Intensity based image registration requires the following:
- Two input images: a reference image, used as the defect-free image, and a test image, used as the defective image.
- A metric, which defines the similarity measure for evaluating the accuracy of registration.
- An optimizer, which defines the methodology for minimizing or maximizing the similarity metric.
- A transformation type, which defines the type of two-dimensional transformation that brings the misaligned image into alignment with the reference image.
The following steps have been used in intensity based image registration:
- In the first step, the transform type is specified and a transformation matrix is determined internally.
- In the second step, the transformation is applied to the test (unregistered) image with bilinear interpolation.
- In the third step, the metric compares the unregistered test image to the reference image and a metric value is computed.
- In the fourth step, the optimizer checks the stop condition, a condition that warrants the termination of the process.
If there is no stop condition, the optimizer automatically adjusts the transformation matrix to begin the next iteration.
C. Phase correlation method
The phase correlation method is based on the Fourier shift theorem. It computes the cross power spectrum, whose phase is the phase difference between the reference and test images. The following equations have been obtained from [32]. If f2(x, y) is a translated and rotated replica of f1(x, y) with translation (x0, y0) and rotation θ0, then

f2(x, y) = f1(x cos θ0 + y sin θ0 − x0, −x sin θ0 + y cos θ0 − y0)  (1)

By the Fourier translation and rotation properties, the transforms F1 and F2 of f1 and f2 are related as

F2(ξ, η) = e^(−j2π(ξx0 + ηy0)) F1(ξ cos θ0 + η sin θ0, −ξ sin θ0 + η cos θ0)  (2)

If M1 and M2 are the magnitudes of F1 and F2, then

M2(ξ, η) = M1(ξ cos θ0 + η sin θ0, −ξ sin θ0 + η cos θ0)  (3)

The rotation, free of any translation, can then be recovered with phase correlation by representing it as a translational displacement in polar coordinates:

M2(ρ, θ) = M1(ρ, θ − θ0)  (4)

Using the phase correlation method, the rotation angle can be found and the images can be registered.
D. Correlation coefficient
The correlation coefficient is a basic image quality similarity measure used for finding the correlation between two images. Its value varies between -1 and 1: a value of 0 means no relationship, a value of 1 means both images are the same, and a value of -1 means they are anti-correlated [33]. The correlation coefficient can be calculated as

r = Σi (xi − x̄)(yi − ȳ) / sqrt( Σi (xi − x̄)² · Σi (yi − ȳ)² )

where r is Pearson's correlation coefficient, xi are the pixels of the reference image, yi are the pixels of the test image, and x̄ and ȳ are the mean pixel values of the reference and test images, respectively.
IV. RESULTS
The following results have been generated using MATLAB 2015a on a personal computer.
A. Intensity based image registration
This method operates on the direct intensity values of the pixels of the two images. Fig. 3(a) to 3(c) show the results of intensity based image registration. Table I shows the comparison of the correlation coefficient for the different registration methods:

Table I. Correlation coefficient of the registered images
  Intensity based          0.4746
  Feature based            0.6653
  Phase correlation based  0.9955
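The translational case of Equation (2) can be illustrated with a short standalone sketch. The paper's experiments used MATLAB; the Python version below (NumPy assumed available) recovers a known pixel shift from the peak of the inverse normalized cross power spectrum:

```python
import numpy as np

def phase_correlation_shift(ref, test):
    """Estimate the (row, col) translation of `test` relative to `ref`
    from the peak of the inverse normalized cross power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(test)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross_power)
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Peaks beyond the midpoint correspond to negative displacements.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
test = np.roll(ref, shift=(5, -3), axis=(0, 1))  # known displacement
print(phase_correlation_shift(ref, test))        # -> (5, -3)
```

Rotation can be handled the same way by resampling the Fourier magnitudes on a polar grid, as in Equation (4), so that θ0 appears as a shift along the angular axis.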
The value of the correlation coefficient is 0.4746 for intensity based image registration, 0.6653 for feature based image registration, and 0.9955 for phase correlation based image registration. Hence, phase correlation shows better results than the other two methods.
V. CONCLUSIONS
Image registration is an important task for the image subtraction method, which compares two images of the same size and orientation. From the results, it is concluded that the phase correlation method is better than the other two methods for registering these images. As shown in Table I, the value of the correlation coefficient for phase correlation based image registration is close to 1, which shows the effectiveness of this method.
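The correlation coefficient used as the similarity measure throughout the comparison can be computed directly from its definition. A minimal Python sketch (NumPy assumed; the paper's own experiments were run in MATLAB):

```python
import numpy as np

def correlation_coefficient(ref, test):
    """Pearson correlation r between two equally sized grayscale images."""
    x = np.asarray(ref, dtype=float).ravel()
    y = np.asarray(test, dtype=float).ravel()
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

img = np.arange(16.0).reshape(4, 4)
print(correlation_coefficient(img, img))              # identical images: r = 1
print(correlation_coefficient(img, img.max() - img))  # inverted image: r = -1
```

In the registration pipeline, r would be evaluated between the reference image and each registered test image, as in Table I.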
Constructing Multivariate Survival Trees: The MST Package for R - Multivariate survival trees require few statistical assumptions, are easy to interpret, and provide meaningful diagnosis and prediction rules. Trees can handle a large number of predictors with mixed types and do not require predictor variable transformation or selection. These are useful features in many application fields and are often required in the current era of big data. The aim of this article is to introduce the R package MST. This package constructs multivariate survival trees using marginal model and frailty model based approaches. It allows the user to control and see how the trees are constructed. The package can also simulate high-dimensional, multivariate survival data from marginal and frailty models.
Introduction
Decision trees have gained popularity in many application fields because they can handle a variety of data structures, require few statistical assumptions, and yield classification and prediction rules that are easy to interpret. Generalizations of classification and regression trees (CARTs) applied to survival analysis can provide meaningful prognosis rules in medical research. Many authors have proposed tree-based methods to handle univariate (or uncorrelated) survival data (see, e.g., Gordon and Olshen 1985; Davis and Anderson 1989; Therneau, Grambsch, and Fleming 1990; LeBlanc and Crowley 1992). We have extended the research to handle multivariate survival data (Su and Fan 2004; Fan, Su, Levine, Nunn, and LeBlanc 2006; Fan, Nunn, and Su 2009). The goal of this paper is to discuss constructing multivariate survival trees in R (R Core Team 2017) using the MST package (Su, Calhoun, and Fan 2018) available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=MST. The paper is organized as follows. In Section 2 we introduce multivariate survival trees and their construction.
This will help users decide which model is most appropriate for their dataset. We discuss the MST package and its features in Section 3 and analyze a simulated and a real multivariate survival dataset in Section 4. We conclude with a discussion of possible future developments.
Background
Multivariate failure times arise when individuals or objects under study are naturally clustered or when individuals experience multiple events of the same type (namely, recurrent event times). Examples include disease incidence times of family members, component failure times in medical devices, and tooth loss times in dental studies. When not all subjects experience the event of interest and failure times are not all independent, multivariate survival methods must be used to account for the censoring and dependence. The details of analyzing correlated failure times and constructing multivariate survival trees are discussed in our papers Su and Fan (2004) and Fan et al. (2006, 2009); key aspects are summarized herein. The construction of multivariate survival trees adopts a modified CART procedure to model the correlated failure times. The procedure consists of three steps: (1) growing a large initial tree, (2) pruning it back to obtain a sequence of subtrees, and (3) selecting the best tree size. For the tree growing and pruning purposes, one needs a splitting statistic that handles the dependence of failure times and a split-complexity measure to evaluate the performance of the tree. There are two main approaches to analyzing correlated failure times and developing the splitting statistic. One is the marginal approach, where the correlation is modeled implicitly using generalized estimating equations on the marginal distribution formulated by a Cox (1972) proportional hazards model. The other is the frailty model approach, where the correlation is modeled explicitly by a multiplicative random effect called a frailty.
The frailty term corresponds to some common unobserved characteristic shared by all correlated failure times in a cluster. We have developed two frailty model based methods: one based on the semiparametric gamma frailty model (Su and Fan 2004) and the other based on the parametric exponential frailty model (Fan et al. 2009) to improve computation speed. The marginal method uses a robust log-rank statistic given in Appendix A in Equation S1. Su and Fan (2004) used an integrated log-likelihood, shown in Equation S2 in Appendix A, for the gamma frailty model; however, the MST package uses a Wald test statistic to improve stability and computation time. The exponential frailty model uses a score test statistic given in Equation S3 in Appendix A. The splitting statistic is calculated for each possible split, and the largest value corresponds to the best split. The MST package uses the survival package (Therneau 2017) to fit marginal and frailty models and the MASS package (Venables and Ripley 2002) to simulate from a multivariate normal distribution. In R, we can calculate the splitting statistic with the following logic, where x is a numeric predictor variable and 0.5 is a candidate split. The robust log-rank statistic is calculated by fitting the model

R> coxph(Surv(time, status)~I(x > 0.5) + cluster(id))$rscore

The Wald test statistic is calculated by fitting the model

R> coxph(Surv(time, status)~I(x > 0.5) + frailty.gamma(id, method = "em"),
+   control = coxph.control(eps = 1e-04, iter.max = 10,
+   toler.inf = sqrt(1e-09), outer.max = 5))$wald.test

The gamma frailty model can be computationally intensive, so the coxph.control function is utilized as shown above to speed up convergence. The Wald test statistic should yield splits similar to the integrated log-likelihood studied by Su and Fan (2004). The score test statistic for the exponential frailty model is calculated by utilizing matrix multiplication, simulated annealing with the optim(...
, method = "SANN") function, and a quasi-Newton method to estimate the parameters in the model. The default values are used when optimizing the parameters in the score test statistic, except that the maximum number of iterations is increased from 100 to 800 and the initial value estimates are set to 1. The best choice for the splitting statistic depends on many factors, including the true hazard function, the types of covariates in the model, the sample size, the cluster size, and the censoring rates. While the true hazard function is typically unknown, prior simulation studies and data analyses (Nunn, Fan, Su, Levine, Lee, and McGuire 2012) showed that the frailty model based approaches often performed better than the marginal model based approach in dental applications. However, the marginal and exponential frailty models are much less computationally intensive and often select the correct cutoff points even under model misspecification. All three splitting statistics outperformed the naive log-rank statistic that ignores the dependence. The MST package also allows users to grow a tree using a stratified or naive log-rank statistic, but we do not suggest using these splitting statistics except in rare circumstances. Interested readers are referred to our simulation studies (Su and Fan 2004; Fan et al. 2009). Growing the initial tree is done by splitting nodes iteratively until only a small number of observations or uncensored event times remain at a terminal node. The final tree model can be, theoretically, any subtree of the initial tree, and the number of subtrees can become massive. Following the CART approach, the initial tree is pruned to reduce the number of subtrees considered. Let T denote a binary tree, T^i the set of internal nodes of T, and |T| the number of nodes of a tree.
Defining G(h) to represent the maximum splitting statistic on a particular node h, the split-complexity measure is

G_α(T) = G(T) − α|T^i|,  (1)

where G(T) = Σ_{h ∈ T^i} G(h) measures the overall goodness of splits of T, |T^i| measures the complexity of the tree, and α ≥ 0 acts as a penalty for each additional split. The goodness of split measured at each split h is the robust log-rank statistic, the squared Wald test statistic, and the score test statistic for the marginal, gamma frailty, and exponential frailty models, respectively. Note that for a given split, each of these measures has an approximate χ²(1) distribution when the sample size at h is large. An efficient pruning algorithm is determined by comparing each branch T_h to a possible trim at node h and solving for α. Since the split-complexity measure is 0 when T_h is trimmed and h becomes a terminal node, this equates to

G(T_h) − α|T_h^i| = 0,

which is equivalent to

α = G(T_h) / |T_h^i|.

Thus, for any internal node h of the initial tree, the value G(T_h)/|T_h^i| is calculated, where T_h denotes the branch with h as its root. The weakest link of the tree is the node for which G(T_h)/|T_h^i| is minimal. The subtree that prunes off the branch with the weakest link is taken, and this process is repeated until we have pruned back to the root node. This procedure reduces the number of possible subtrees (which can be very large) to a manageable value. After pruning back the tree, we have to select the best-sized subtree. The best-sized subtree is taken as the tree with the largest split-complexity measure in Equation 1 for a prespecified α. LeBlanc and Crowley (1993) suggested α be fixed within the range 2 ≤ α ≤ 4. Our previous simulation study (Fan et al. 2009) showed that α = 4 typically yielded better performance compared with α = 2 and α = 3 for most models. Alternatively, α = log(n0), where n0 is the number of uncensored events, can also be used to select the best-sized subtree.
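The weakest-link calculation can be sketched in a few lines. The toy tree and G(h) values below are hypothetical, and Python is used purely for illustration (the package itself is written in R):

```python
# Toy binary tree: internal nodes carry a splitting statistic G(h);
# leaves have no children and no statistic.
tree = {
    "root": {"G": 12.0, "children": ["a", "b"]},
    "a":    {"G": 9.0,  "children": ["a1", "a2"]},
    "b":    {"G": 1.5,  "children": ["b1", "b2"]},
    "a1":   {"G": 6.0,  "children": ["leaf1", "leaf2"]},
    "a2":   {"children": []}, "b1": {"children": []}, "b2": {"children": []},
    "leaf1": {"children": []}, "leaf2": {"children": []},
}

def branch_stats(node):
    """Return (G(T_h), |T_h^i|): total goodness of splits and the number
    of internal nodes in the branch rooted at `node`."""
    info = tree[node]
    if not info["children"]:
        return 0.0, 0
    g, n = info["G"], 1
    for child in info["children"]:
        cg, cn = branch_stats(child)
        g, n = g + cg, n + cn
    return g, n

def weakest_link():
    """Internal node minimizing G(T_h)/|T_h^i| -- the next branch to prune."""
    internal = [h for h in tree if tree[h]["children"]]
    return min(internal, key=lambda h: branch_stats(h)[0] / branch_stats(h)[1])

print(weakest_link())  # -> b (ratio 1.5, versus 7.125, 7.5, and 6.0 elsewhere)
```

Pruning "b" replaces it with a terminal node; repeating the calculation on the reduced tree yields the next subtree in the sequence, until only the root remains.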
Since we have already used the sample to grow and prune the tree, we need to use a validation method for tree size selection. Two validation methods are implemented: the test sample approach and the bootstrap method. When the sample size is large, one may use a subset of the data to grow and prune the initial tree and the other subset to select the best-sized tree. When the sample size is small or moderate, one may use bootstrapping techniques as described previously in our papers. There are alternative approaches to constructing survival trees; the most similar to the one implemented in package MST is available in package rpart (Therneau, Atkinson, and Ripley 2017). The two main differences are that rpart implements the CART procedure assuming independence and prunes the tree using cross-validation. It should also be noted that the rpart package uses an exponential scaling statistic to grow the tree, which differs from the naive log-rank statistic provided by MST. The MST package is slightly more automated: the user simply inputs the data and candidate variables, and the MST function outputs the final tree; the rpart package, in contrast, allows a bit more control when pruning the tree. Both packages allow users to control how the tree is grown and automatically sort trees based on survival rate. Other methods for constructing survival trees include the ctree function in the partykit package, which constructs conditional inference trees with stopping rules (Hothorn, Hornik, and Zeileis 2006), and the DStree package, which builds a tree for discrete-time survival data (Mayer, Larocque, and Schmid 2016). A nice feature of these alternative approaches is their faster computation time compared with package MST; splitting statistics that handle the dependence are typically more computationally intensive. Simulation studies specifically comparing trees constructed using the rpart, ctree, and MST functions are warranted.
Simulating multivariate survival data
While the main purpose of the MST package is to construct multivariate survival trees, the package also includes the rmultime function to generate multivariate survival data. The details of generating survival data are described in Su, Fan, Wang, and Johnson (2006); the possible models that can be simulated are given in Table 1. The user specifies the coefficients (β0 and β1), the cutoff values (c), the censoring rate, and the model with its respective parameters. The arguments of the function are described in the R package documentation. (Table 1 lists the hazard functions for the gamma.frailty, log.normal.frailty, marginal.multivariate.exponential, and marginal.nonabsolutely.continuous models.) An example where multivariate failure times are simulated using this functionality and a tree is constructed is presented in Section 4. There are several packages that can simulate survival data; these include genSurv (includes one time-dependent covariate; Araújo, Meira-Machado, and Faria 2015), PermAlgo (generates independent survival data conditional on covariates; Sylvestre, Evans, MacKenzie, and Abrahamowicz 2015), simMSM (simulates multistate models with possibly nonlinear baseline hazards; Reulen 2015), survsim (simulates survival data with recurrent events or competing risks; Moriña and Navarro 2014), and many others. However, to our knowledge, the rmultime function is the only function available in R that can simulate high-dimensional, multivariate survival data from marginal or frailty models.
Constructing multivariate survival trees
The MST function in the R package constructs the multivariate survival tree. This wrapper function grows, prunes, and selects the best tree. The main arguments are described below:
• formula: A linear survival model with the survival response, the predictors, and the cluster (or id) variable (e.g., Surv(time, status)~x1 + x2 | id).
• data: Data to grow and prune the tree.
• selection.method: Indicates the method of selecting the best-sized tree. Options are "test.sample" (for large datasets) or "bootstrap" (for small/moderate datasets).
• test: Test sample to select the best-sized tree if selection.method = "test.sample".
• B: Number of bootstrap samples if selection.method = "bootstrap". A rough guideline for the number of bootstrap samples is 25 ≤ B ≤ 100, following LeBlanc and Crowley (1993).
The MST function has some additional features to improve computational efficiency or to let users see more details about the construction of the multivariate survival tree for a given dataset; these features are briefly described below:
• minsplit: Indicates the minimum number of observations required to continue splitting a branch.
• minevents: Indicates the minimum number of uncensored events required to continue splitting a branch.
• minbucket: Indicates the minimum number of observations in any terminal node.
• maxdepth: Indicates the maximum depth of the tree if the user wants to stop growing the tree at a certain depth.
• mtry: Indicates the number of variables randomly considered at each split.
• distinct: Indicates whether all distinct cutpoints are considered. If distinct = FALSE, then only nCutPoints cutpoints from delta to 1 - delta are considered.
• LeBlanc: Indicates whether the entire sample or the out-of-bag sample is used while bootstrapping. The interested reader can find more details in Fan et al. (2006).
• sortTrees: Indicates whether trees should be sorted such that splits to the left have lower risk of failure.
• plot.Ga: Indicates whether a plot of the goodness of split vs. tree size is produced.
• details: Indicates whether detailed information should be given about how the final tree was constructed.
The plot.Ga and details parameters provide additional information regarding the construction of the tree for the given dataset, while the other features above can improve computational efficiency.
The default sortTrees = TRUE will sort the trees such that splits to the left have higher rates of survival. However, it is possible that a terminal node on the left has a lower rate of survival than a terminal node on the right if they do not share the same parent node. The MST function returns the initial tree, the trees pruned and considered in the best tree selection, and the best-sized tree. The final multivariate survival tree depends on the penalty (α) used, so four possible final trees are returned, corresponding to α = 2, 3, 4, and log(n0). All trees are objects of class 'constparty' with methods defined in package partykit (Hothorn and Zeileis 2015). A 'constparty' tree requires observations that fall in the same terminal node to have the same prediction (or constant fit). While this requirement may not be met if two observations come from two different clusters, there are many benefits to treating the tree as a 'constparty' object. One is the added functionality of the print/plot/predict methods. The plot method for 'party' objects produces Kaplan-Meier curves in each terminal node, which gives the survival rate for an observation with an average cluster effect. The predict method for 'party' objects returns the median survival time, but can easily be modified to return any prediction and to handle the dependence (see examples in Section 4). Additionally, the 'constparty' object allows more compatibility with other R packages and provides additional functions to extract elements from the tree.
Simulation example
We illustrate the ability of the MST package to correctly construct multivariate survival trees with the following simulation. Suppose we have 200 clusters, each with 4 patients. Suppose the failure times (e.g., time until illness) follow the marginal multivariate exponential distribution described below, with a 0.65 correlation within each cluster, and that approximately half the failure times are censored.
Each patient has four covariates (x1, x2, x3, x4), with the first two covariates having cutpoints at 0.5 and 0.3, respectively, and the same β coefficients of 0.8, while the last two covariates have no effect on failure. The failure times can be simulated with the rmultime function, and the dataset can be analyzed with the following commands:

R> fit <- MST(formula = Surv(time, status)~x1 + x2 + x3 + x4 | id,
+   data, test, method = "marginal", minsplit = 100, minevents = 20,
+   selection.method = "test.sample")

The output of the final tree is a 'constparty' object:

R> (tree_final <- getTree(fit, "4"))

Figure 1 illustrates the construction of the multivariate survival tree for the simulated data. Panel A gives the large initial tree, Panel B gives the splitting statistic, G_α(T), as a function of the number of terminal nodes, and Panel C gives the final tree. In this case, the four split penalties considered yield the same final tree. The final tree correctly identifies that patients with x1 ≤ 0.5 and x2 ≤ 0.3 are at a higher risk of failure, and it does not include x3 or x4, which have no effect on failure. The marginal model is best in this situation because the simulated data were generated without a frailty term; however, this is typically unknown in practice. The gamma and exponential frailty models often yield similar final trees. Panels A and C were produced using the plot function, and Panel B was created with the plot.Ga = TRUE option in the MST function, with slight modifications to increase resolution and readability. Users can go further by fitting a model using the terminal nodes of the tree as the predictors. For example, the code below can fit a marginal model using the final tree. Each observation is sent down the tree and assigned to a terminal node; the marginal model is then fit with the terminal node as the predictor. Users can use a frailty term in coxph or use the frailtypack R package (Rondeau, Mazroui, and Gonzalez 2016) to fit a frailty model.
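For readers without R at hand, the clustered data-generating mechanism can be mimicked in a few lines of Python. This is a hedged standalone sketch that uses a shared gamma frailty to induce within-cluster dependence; the paper's example instead draws from a marginal multivariate exponential model via rmultime, and the baseline hazard, frailty variance, and censoring rate below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2024)
n_clusters, cluster_size = 200, 4
beta, baseline_hazard, censor_rate = 0.8, 0.1, 0.1   # illustrative values

rows = []  # (cluster id, observed time, status, x1, x2)
for cid in range(n_clusters):
    # Gamma frailty with mean 1 shared by the whole cluster
    # (variance 0.5 is an assumption, not from the paper).
    w = rng.gamma(shape=2.0, scale=0.5)
    for _ in range(cluster_size):
        x1, x2 = rng.random(), rng.random()
        # Proportional hazards with cutpoints 0.5 and 0.3, as in the text.
        lam = baseline_hazard * w * np.exp(beta * (x1 <= 0.5) + beta * (x2 <= 0.3))
        t = rng.exponential(1.0 / lam)          # latent failure time
        c = rng.exponential(1.0 / censor_rate)  # independent censoring time
        rows.append((cid, min(t, c), int(t <= c), x1, x2))

print(len(rows))  # 800 correlated, possibly censored observations
```

A tree grown on such data should recover the x1 ≤ 0.5 and x2 ≤ 0.3 splits while ignoring uninformative covariates, mirroring the R example above.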
Note that, in practice, it is highly recommended that a new dataset be used, since the data were already used to grow the tree and data reuse is known to result in overoptimistic inferences.
Computational speed
A major limitation in constructing multivariate survival trees is the computational intensity. Several factors affect computation speed: the model chosen to handle the dependence, the number of categorical and continuous predictors, the number of cutpoints considered, and the minimum node size required to continue splitting a branch. To assess computation time, we simulated survival data with a marginal dependence structure, setting ρ = 0.65 and varying the number of observations and predictors. Half of the predictors were cluster-specific factors (constant within a cluster), with either 75% or 25% of them continuous. Each continuous variable had around 100 distinct cutpoints and each categorical variable had 8 unordered levels. We set one continuous and one nominal failure-specific factor (which may vary within a cluster) to be informative (β = 1). We used the default settings to grow the multivariate survival tree. Table 2 gives the number of minutes it takes to grow and select the best-sized tree using an Intel Core i7-4510U processor. While the MST package does not utilize more efficient compiled languages such as C++ or Fortran, most of the computation time was devoted to calculating the splitting statistic. Fitting thousands of Cox proportional hazards models, while adjusting for dependence, can be very time consuming. The exponential frailty model was the fastest method, the marginal model was the second fastest, and the gamma frailty model was the slowest. Even for moderate sample sizes, the gamma frailty model can be very time consuming. The exponential frailty model was developed to improve computation time, at the cost of assuming a constant baseline hazard function.
The computational speed gained from using the marginal and exponential frailty models is essential for analyzing large datasets or using many bootstrap samples. Increasing the number of observations or predictors increases the computation time. Nominal categorical variables are converted to ordered variables by replacing each level with its estimated beta coefficient from a Cox proportional hazards model that adjusts for the dependence. Thus, growing trees from datasets with more categorical predictors was faster, because they have fewer distinct cutpoints. Using percentiles can greatly improve computation time. Users are advised to carefully consider the dependence model and input parameter settings to maximize accuracy and reduce computation time.

Analysis of tooth loss

The data consisted of 5,336 patients with periodontal disease, with 25,694 molar teeth and 40,196 non-molar teeth. The average age was 56 years; 49% were men, 9% had Diabetes Mellitus, and 23% were smokers. Tables 3 and 4 give the fifty-one factors assessed that could potentially affect tooth loss, including the severity score of each tooth (a tooth-level factor) and the average severity score over the entire mouth for each subject (a patient-level factor). Factors were analyzed separately for molar and non-molar teeth. For molars, 1,870 (7.3%) teeth were lost, with a median tooth loss time of 0.53 years and a maximum tooth loss time of 5.58 years. For non-molars, 2,723 (6.8%) teeth were lost, with a median tooth loss time of 0.54 years and a maximum tooth loss time of 5.59 years. Due to the large sample size, a test sample was used to select the best-sized tree, with one third of patients randomly assigned to the test sample.
For the molar tooth analysis, the data are partitioned with the following commands:

R> data("Teeth", package = "MST")
R> molar <- subset(Teeth, molar == 1)
R> set.seed(482046)
R> id.train <- sample(x = unique(molar$id), size = floor(nPatients * 2/3),
+   replace = FALSE)

To improve computational speed, only 50 equally spread out cutoff points from the 5th to the 95th percentile were used as candidates for continuous variables with over 50 distinct cutoff points; this was specified via the option distinct = FALSE, together with delta = 0.05 and nCutPoints = 50. A large number of observations was required in each terminal node (minbucket = 100) to ensure more accurate survival estimates. The exponential frailty model was used to handle the correlation, and α = 4 was used as the split penalty when selecting the best-sized tree. The multivariate survival trees are constructed with the following commands:

R> form <- as.formula(paste0("Surv(time, event)~",
+   paste0("x", 1:51, collapse = " + "), " | id"))
R> fit_molar <- MST(form, data = molar_training, test = molar_test,
+   method = "exp.frailty", selection.method = "test.sample",
+   distinct = FALSE, delta = 0.05, nCutPoints = 50, minevents = 10,
+   minbucket = 100)

Figures 2 and 3 give the multivariate survival trees for molar and non-molar teeth, respectively. Molar teeth are categorized into thirty possible terminal nodes; the risk of molar tooth loss depends on 20 factors (8 tooth-level and 12 subject-level factors). The tree for non-molar teeth is constructed with the same commands, except using molar == 0 and removing x22 from the formula. Non-molar teeth are categorized into forty-three possible terminal nodes; twenty-five factors (7 tooth-level and 18 subject-level) affected the risk of non-molar tooth loss. Factors listed in the tree do not necessarily represent variable importance: informative variables may be excluded from the final tree due to a masking effect.
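The percentile-based candidate-cutpoint scheme (distinct = FALSE, delta = 0.05, nCutPoints = 50) can be sketched in Python. This is an assumed reading of the option semantics, using a simple nearest-order-statistic quantile rather than the package's exact rule.

```python
import random

def percentile_cutpoints(values, n_cut=50, delta=0.05):
    """Candidate cutpoints as n_cut equally spaced empirical quantiles
    between the delta and 1 - delta quantiles (mirroring the options
    distinct = FALSE, delta = 0.05, nCutPoints = 50)."""
    xs = sorted(values)
    probs = [delta + i * (1 - 2 * delta) / (n_cut - 1) for i in range(n_cut)]
    # simple empirical quantile: the order statistic nearest to each prob
    return [xs[round(p * (len(xs) - 1))] for p in probs]

random.seed(0)
cuts = percentile_cutpoints([random.random() for _ in range(1000)])
```

For a variable with thousands of distinct values, this caps the per-node split search at 50 Cox fits instead of one per distinct value.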
A detailed analysis using multivariate survival random forests to assess variable importance will be presented in a subsequent paper. Current dental prognosis is often based on clinical opinion, with the clinician weighing several factors; however, an accurate prognosis is crucial to the development of an appropriate treatment plan (Mordohai, Reshad, Jivraj, and Chee 2007). Multivariate survival trees can help clinicians make better decisions and improve patient care in several ways. One approach is to estimate the survival curve for each terminal node by fitting a marginal model that handles the correlation within subjects. The multivariate survival trees on tooth loss give the estimated survival rate for each terminal node, with each split to the left having higher survival. The probability of survival at 3 years ranged from 0.999 to 0.211 for molar teeth and from 1 to 0.358 for non-molar teeth. A combination of the survival curves and clinical experience can help prescribe better treatment.

Conclusions

To our knowledge, MST is the only available R package for constructing multivariate survival trees. The MST package is easy to use and allows users to see and control how the survival trees are constructed. Tree-based analysis automatically groups observations with differing outcomes, which makes it an excellent research tool for medical prognosis or diagnosis. In addition, trees can handle data with a large number of predictors of mixed types (continuous, ordinal, and categorical with two or more levels) without the need for variable transformation, dummy variable creation, or variable selection. These off-the-shelf features of a data analysis tool are often required in the current era of big data. Over the years, we have received many requests from researchers in various fields to share our R code.
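As a simplified illustration of per-node survival curves, the sketch below estimates a Kaplan-Meier curve in Python for the observations falling into one terminal node. Note this is a stand-in for exposition only: the paper fits a marginal model that accounts for within-subject correlation, which plain Kaplan-Meier does not.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; returns (event time, S(t)) pairs."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, out = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = tied = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]   # events at this time
            tied += 1               # everyone (event or censored) leaving at t
            i += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk
            out.append((t, surv))
        n_at_risk -= tied
    return out

# toy node: times with a censored observation at t = 2 and t = 4
km = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

Applying this to each terminal node of the fitted tree reproduces the kind of per-node survival summary described above, up to the correlation adjustment.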
We hope that our newly developed R package, MST, will help facilitate the use of multivariate survival trees in many application fields. Many factors must be considered when determining the appropriate model. The gamma frailty model performs well even when the underlying model is misspecified; however, it takes much longer to fit than the marginal and exponential frailty models. The computational speed gained from using the marginal and exponential frailty models is essential for analyzing large datasets or using many bootstrap samples. Using percentiles as the candidate splits, instead of all distinct cutpoints, can improve computation speed at the expense of slightly lower accuracy. Future work will be devoted to improving computational efficiency and providing additional features and flexibility. Possible additions include positive stable frailty models, more control when pruning a tree, and handling of missing observations. The MST function also provides the framework to construct multivariate survival random forests. However, random forests are very computationally intensive, so methods and parameter settings that improve speed, as assessed in Section 4.2, should be considered. Other faster ensemble methods, such as extremely randomized trees (Geurts, Ernst, and Wehenkel 2006), could also be implemented. While the main components for constructing multivariate survival trees are available, we hope to continue improving the computational speed and functionality of the MST package in future versions.

A. Splitting statistics formulas

Suppose that there are n units with K_i correlated failure times in the i-th unit, i = 1, ..., n. Let F_ik and C_ik be the failure time and censoring time corresponding to the k-th failure in the i-th unit, respectively. The observed data consist of {(Y_ik, Δ_ik, X_ik) : i = 1, ..., n; k = 1, ...
, K_i}, where Y_ik = min(F_ik, C_ik) is the observed failure time; Δ_ik = I(F_ik ≤ C_ik) is the failure indicator, which is 1 if F_ik ≤ C_ik and 0 otherwise; and X_ik = (x_ik1, ..., x_ikp) denotes the p-dimensional covariate vector associated with the k-th observation of the i-th unit. We consider only time-independent covariates. To ensure identifiability, it is assumed that the failure time vector F_i = (F_i1, ..., F_iK_i) is independent of the censoring time vector C_i = (C_i1, ..., C_iK_i) conditional on the covariate vector X_i = (X_i1, ..., X_iK_i), i = 1, ..., n. We use the following notation to write the splitting statistic: λ_0(t) denotes an unspecified baseline hazard function, β an unknown regression parameter, and I(·) the indicator function. For a continuous covariate X_j and a constant cutpoint c, I(x_ikj ≤ c) induces a binary partition of the data. If X_j is discrete, then the form x_ikj ∈ A is considered, where A can be any subset of its categories. The marginal model develops the splitting statistic by formulating the hazard function of the k-th failure in the i-th unit as λ_ik(t) = λ_0(t) exp(β · I(x_ikj ≤ c)). Fan et al. (2006) showed that the resulting splitting statistic, called the robust log-rank statistic, is given in Equation S1. While Equation S1 looks formidable, the calculation of G_M(c) is computationally inexpensive because it is available in closed form and depends primarily on the evaluation of indicator functions. The gamma frailty model develops the splitting statistic by formulating the hazard function as λ_ik(t) = λ_0(t) exp(β · I(x_ikj ≤ c)) w_i, where w_i denotes the frailty term for the i-th unit. The frailty term w_i is assumed to follow a known positive distribution; the MST package uses the common choice w_i ~ Γ(1/ν, 1/ν), where ν represents an unknown variance. Su and Fan (2004) showed that the splitting statistic is the integrated log-likelihood statistic:
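To make the split search concrete, the Python sketch below evaluates one candidate cutpoint c with the ordinary two-sample log-rank statistic. This is a deliberately simplified stand-in for the robust log-rank statistic G_M(c): it ignores the within-unit correlation, but it has the same shape — a closed-form statistic built from indicator functions over the risk sets.

```python
def logrank_stat(times, events, group):
    """Ordinary two-sample log-rank chi-square statistic for one candidate
    split, with group[i] = 1 when I(x_ikj <= c) holds.  A simplified,
    correlation-ignoring stand-in for the robust log-rank statistic."""
    data = sorted(zip(times, events, group))
    n, n1 = len(data), sum(group)          # risk-set size and group-1 size
    obs_minus_exp, var = 0.0, 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = d1 = tied = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]                # events at time t
            d1 += data[i][1] * data[i][2]  # events at time t in group 1
            tied += 1
            i += 1
        if d and n > 1:
            obs_minus_exp += d1 - d * n1 / n
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        # remove everyone who left the risk set at time t
        n -= tied
        n1 -= sum(g for (_, _, g) in data[i - tied:i])
    return obs_minus_exp ** 2 / var if var > 0 else 0.0
```

A split search would maximise such a statistic over all candidate cutpoints c; the robust version replaces the variance term with a correlation-adjusted one.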
Hydrodynamic drag reduction of shear-thinning liquids in superhydrophobic textured microchannels

Super-hydrophobic textured surfaces reduce hydrodynamic drag in pressure-driven laminar flows in micro-channels. However, despite the wide usage of non-Newtonian liquids in microfluidic devices, the flow behaviour of such liquids has rarely been examined in the context of friction reduction in textured super-hydrophobic micro-channels. We have therefore investigated the influence of topologically different rough surfaces on the friction reduction of shear-thinning liquids in micro-channels. First, the friction factor ratio (the ratio of the friction factor on a textured surface to that on a plain surface) on generic surface textures, such as posts, holes, and longitudinal and transverse ribs, was estimated numerically over a range of Carreau numbers as a function of the microchannel constriction ratio, gas fraction and power-law exponent. Resembling the flow behaviour of Newtonian liquids, the longitudinal ribs and posts exhibited significantly lower flow friction than the transverse ribs and holes, while the friction factor ratios of all textures exhibited a non-monotonic variation with the Carreau number. While the minima of the friction factor ratio occurred at a constant Carreau number irrespective of the microchannel constriction ratio, the minima shifted to a higher Carreau number with an increase in the power-law index and gas fraction. Experiments were also conducted with aqueous Xanthan Gum liquids in micro-channels. The flow enhancement (the flow rate with super-hydrophobic textures relative to a smooth surface) exhibited a non-monotonic behaviour and attenuated with an increase in the power-law index, consistent with the simulations. The results will serve as a guide to designing low-friction micro-channels employing non-Newtonian liquids.
Introduction

Super-hydrophobic (SH) micro- and nano-textured surfaces have been a subject of significant interest in a plethora of applications (Geyer et al. 2020; Sharma et al. 2018; Hwang et al. 2018; Jiang et al. 2020; Gaddam et al. 2021; Chen et al. 2019), where the existence of a heterogeneous wetting state, namely the Cassie-Baxter state, helps in achieving the desired functionality. Similarly, in flow settings where SH-textured surfaces are involved, particularly in microscale laminar flows, the Cassie-Baxter state helps in hydrodynamic drag reduction (Li et al. 2017; Ko et al. 2020) and enhanced thermo-hydraulic performance (Dilip et al. 2018; Sharma et al. 2020). Since the velocity slip induced by the entrapped gas pockets on an SH-textured surface reduces the pressure drop, such surfaces alleviate the pumping power requirement in microfluidic devices (Davis and Lauga 2009). However, most microfluidic devices operate with non-Newtonian liquids, such as blood (Laxmi et al. 2020), mucus (Elberskirch et al. 2019), polymeric liquids (Raoufi et al. 2019) and even colloidal suspensions. Despite the importance of such non-Newtonian liquids in microfluidics, the hydrodynamic drag reduction of these liquids has rarely been investigated in the context of pressure-driven flow through SH-textured micro-channels. It is therefore essential to understand the flow behaviour and the concomitant influence of geometrically different SH surface textures on flow friction in micro-channels. The slippage properties of an SH surface are characterised by the apparent slip length, defined as the depth below the wall at which the no-slip condition is satisfied when the tangent of the linear near-wall velocity profile is extrapolated. However, since it is difficult to measure the apparent slip length experimentally, the effective slip length is defined to characterise the slippage properties of geometrically different SH surface textures (Lauga and Stone 2003).
Ever since the pioneering work of Philip (1972) characterising slippage on SH surfaces between two parallel plates, most research efforts have focused on finding analytical expressions for the effective slip length of Newtonian liquids on generic SH surface textures, such as longitudinal ribs, transverse ribs, posts, and holes (Lauga and Stone 2003; Ybert et al. 2007; Teo and Khoo 2009; Chen et al. 2020). Since the viscosity difference between water and air is about two orders of magnitude, most of these investigations assumed the liquid-gas interface to be a shear-free boundary. Several numerical studies were also undertaken to characterise the effective slip length, including inertial and microchannel constriction effects (Sharma et al. 2019, 2020; Cheng et al. 2009; Game et al. 2017). The effective slip length was expressed as a function of the gas fraction, microchannel constriction, and Reynolds number for Newtonian liquids. Experimental research involving pressure-driven flow through SH-textured micro-channels also estimated the effective slip length to confirm the theoretical predictions (Tsai et al. 2009; Ko et al. 2020). Some recent findings show that the slippery properties of SH-textured surfaces are, however, compromised by surfactants adsorbed at the air-water interface (Peaudecerf et al. 2017; Li et al. 2020). Despite the significant progress in understanding slippage properties on SH surfaces, these studies mostly concern Newtonian liquids, especially water. Only a few investigations address the slippage of non-Newtonian liquids on SH-textured surfaces. A recent review on the slippage of non-Newtonian liquids on randomly and periodically textured surfaces described the wall slip phenomena in detail (Malkin and Patlazhan 2018). Haase et al.
(2017) investigated the behaviour of shear-thinning liquids on SH ribs arranged normal to the flow direction using both numerical simulations and experiments. Compared with water, the shear-thinning liquids exhibited an apparent slip length more than three times larger; micro-particle image velocimetry measurements confirmed their numerical results. In another numerical study, Patlazhan and Vagner (2017) examined a shear flow of shear-thinning liquids over ribs arranged parallel and perpendicular to the flow direction. They showed that the apparent slip length associated with shear-thinning liquids is considerably larger than with Newtonian liquids; furthermore, the apparent slip length was found to be a non-monotonic function of the shear rate. Crowdy et al. (2017) corroborated these findings, especially the non-monotonic behaviour of the apparent slip length associated with shear flow over ribs. In another numerical study, Javaherchian and Moosavi (2019) employed the phase-field method to investigate the flow behaviour of power-law liquids in micro-channels with ribbed walls; they reported that the pressure drop reduction is larger for shear-thickening liquids than for shear-thinning liquids. In summary, the main takeaway of the reported investigations is that the apparent slip length is a non-monotonic function of the shear rate. At the same time, all of these efforts considered one-dimensional geometries, such as ribs arranged either parallel or normal to the flow direction, and predominantly shear flows. In addition, only one investigation experimentally demonstrated that the apparent slip length increases for shear-thinning liquids compared with Newtonian liquids.
Therefore, in this work, we have investigated the pressure-driven flow through micro-channels containing both one-dimensional (longitudinal and transverse ribs) and two-dimensional (posts and holes) SH textures and elucidated the flow behaviour in detail. In particular, polysaccharide-based shear-thinning liquids are used as model liquids in both the numerical simulations and the experiments. The aqueous Xanthan Gum liquids were characterised by the Carreau-Yasuda model. In the numerical simulations, we focus on understanding the flow behaviour and estimating the friction reduction of shear-thinning liquids on SH textures. We varied the non-dimensional shear rate, or Carreau number, to compare the influence of texture topography, power-law exponent, and microchannel constriction on friction reduction. In the experiments, we quantified and compared the flow enhancement of the shear-thinning liquids in micro-channels whose top and bottom walls were covered with either posts or ribs, relative to those with smooth walls.

Shear-thinning liquids

Three kinds of shear-thinning liquids were prepared by dissolving Xanthan Gum (Merck, Germany) in de-ionised water. The mass concentrations of these aqueous Xanthan Gum (XG) solutions are 1, 2 and 5 g/L, designated XG1, XG2 and XG5, respectively. The shear viscosity of these liquids was measured using an MCR 301 Anton-Paar rheometer. The variation of viscosity with shear rate for all the XG liquids is shown in Fig. 1a, indicating shear-thinning behaviour. The effective viscosity (η_eff) as a function of shear rate (γ̇) of the XG liquids is expressed by the Carreau model fit, given by Eq. 1:

η_eff(γ̇) = η_i + (η_o − η_i) [1 + (λγ̇)²]^((n−1)/2)    (1)

Here, n is the power-law exponent, λ is the relaxation time, and η_i and η_o are the infinite- and zero-shear viscosities, respectively. The fitting parameters for each of the XG liquids are listed in Table 1.
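The Carreau fit of Eq. 1 can be sketched in Python. The parameter values in the example are illustrative placeholders in the ranges quoted for the XG liquids, not the fitted Table 1 entries.

```python
def carreau_viscosity(gamma_dot, eta_o, eta_i, lam, n):
    """Carreau model: eta_eff = eta_i + (eta_o - eta_i)*(1 + (lam*g)^2)^((n-1)/2).
    eta_o: zero-shear viscosity, eta_i: infinite-shear viscosity,
    lam: relaxation time, n: power-law exponent (n < 1 -> shear thinning)."""
    return eta_i + (eta_o - eta_i) * (1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

# illustrative parameters (assumed, not the fitted Table 1 values)
eta = carreau_viscosity(gamma_dot=1.0, eta_o=5.07, eta_i=0.0037, lam=10.0, n=0.32)
```

At γ̇ → 0 the model recovers the zero-shear plateau η_o, and for n < 1 the effective viscosity decreases monotonically with shear rate, as in Fig. 1a.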
These shear-thinning liquids were employed in the flow experiments in SH-textured micro-channels. The flow behaviour and concomitant friction reduction of the shear-thinning liquid on SH-textured surfaces were investigated numerically employing only the Carreau parameters of the XG5 liquid, for the sake of simplicity. However, to understand the influence of the power-law exponent on the flow behaviour, different values of n (n = 0.32 and 0.48) were taken from the other XG liquids while keeping the rest of the XG5 parameters the same. The effective viscosity as a function of shear rate and power-law exponent for the XG5 liquid is shown in Fig. 1b.

Numerical details

To understand the flow behaviour in SH-textured micro-channels, we examine four types of SH textures, namely longitudinal ribs, transverse ribs, posts, and holes, as shown in Fig. 2a. We considered sufficiently wide micro-channels for both the numerical and experimental investigations; the computational domain therefore reduces to a unit cell containing a single SH texture, as shown in Fig. 2b. The liquid flows in an SH-textured microchannel of height 2H. The unit cell has a size of L, with the SH texture having a characteristic dimension of s. The pressure-driven flow is considered fully developed, with the liquid flowing in the x-direction. Owing to the periodically repeating flow field in the x-direction, the inlet and outlet of the computational domain are designated periodic boundaries. In addition, since the microchannel is sufficiently wide (z-direction), exhibiting a 2D flow condition, the sidewalls of the computational domain are treated as symmetry boundaries. To further reduce the computational time, half the channel height is considered (y-direction) by designating the top wall of the domain a symmetry boundary.
All the simulations, involving solution of the mass and momentum equations, were performed in the Ansys Fluent 2020R1 framework. When the liquid flows over the SH-textured surface, a liquid-gas interface (LGI) forms between the protrusions. The LGI is assumed to be flat and shear-free in this work, which implies that (1) the deformation of the liquid-gas interface is negligible and (2) the underlying gas layer imparts negligible friction to the liquid flow. Based on the average velocities (U) of the liquid attained in the micro-channels, the capillary number (Ca = μU/σ, where μ is the viscosity of the liquid and σ is the surface tension of the liquid-gas interface) was estimated. Since Ca is the ratio of viscous to surface-tension forces, its magnitude indicates the influence of surface tension on the deformation of the liquid-gas interface. The Xanthan Gum liquids have surface tensions and viscosities in the ranges of 50-72 mN/m (Brunchi et al. 2016) and 0.0037-5.07 Pa·s (see Fig. 1b), respectively. Based on these values, our estimates show that Ca ≪ 1, indicating that the surface tension forces are sufficiently high to resist deformation of the interface. At the same time, the shear stress at the liquid-gas interface scales as τ_gl ~ η_g/η_l, where η_g and η_l are the viscosities of the gas and liquid, respectively. Consequently, the magnitude of τ_gl varies from 10⁻³ to 10⁻⁶. Such low magnitudes of the interfacial shear stress allowed us to impose the shear-free condition at the liquid-gas interface. The shear-free LGI assumption has previously been used for the flow of water in SH-textured micro-channels (Sharma et al. 2020; Cowley et al. 2016). Although two-phase models, such as coupled liquid-gas models (Sharma et al. 2019; Maynes et al. 2007) and surface tension-based models (Gaddam et al. 2015; Liu et al.
2021), would improve the accuracy of predictive flow modelling, such models are computationally expensive. A computationally fast single-phase model with reasonably valid assumptions therefore aids in investigating the influence of a wide range of parameters on flow enhancement. The solid-liquid interface (SLI) on the top wall of the SH texture is assumed to obey the no-slip boundary condition. The pressure-driven flow was initiated by specifying the pressure gradient (ΔP/L) at the periodic boundaries of the computational module. Since abrupt changes in viscosity and velocity gradients are expected near the boundary where the LGI and SLI meet, the near-wall region is well refined. A grid independence test was carried out to find the optimal cell size for the computational domain; after refinement, the number of cells in the computational domain varies between 0.5 and 1.1 million across the simulations. The key non-dimensional parameters of the microchannel and SH texture geometry are (a) the gas fraction, defined as the ratio of the area of the LGI to the combined area of the SLI and LGI, and (b) the microchannel constriction ratio (HL = H/L), defined as the ratio of the half channel height (H) to the unit cell size (L). The applied shear rate (or pressure gradient) is represented by the non-dimensional Carreau number (Cu), the ratio of the characteristic shear rate to the transitional shear rate. Here, the characteristic shear rate is the ratio of the average velocity (U) across the microchannel to the half channel height, while the transitional shear rate is the shear rate at which the transition from the Newtonian viscosity plateau (see Fig. 1b) to the power-law region occurs. For the XG5 liquid, this transitional shear rate is about 0.1 s⁻¹, as can be seen from Fig. 1b. The Fanning friction factor, expressed as f = 2ΔP·D_h/(L·ρ·U²) (Lemos 2012), was calculated at different Carreau numbers.
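The two non-dimensional quantities defined above can be sketched in Python (the simulations themselves are run in Ansys Fluent; the operating values below are hypothetical):

```python
def carreau_number(U, H, gamma_transition):
    """Cu = characteristic shear rate (U / H) over the transitional shear
    rate at which viscosity leaves the zero-shear plateau (~0.1 1/s for XG5)."""
    return (U / H) / gamma_transition

def friction_factor(dP, L, D_h, rho, U):
    """f = 2 * dP * D_h / (L * rho * U**2), evaluated from the pressure
    drop dP over a streamwise length L."""
    return 2 * dP * D_h / (L * rho * U ** 2)

# hypothetical operating point: U = 1 mm/s, half channel height H = 100 um
Cu = carreau_number(U=1e-3, H=100e-6, gamma_transition=0.1)   # ~100
```

At this hypothetical point the channel operates deep in the shear-thinning regime (Cu ≫ 1), where the non-monotonic friction behaviour reported later arises.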
Here, ρ is the density of the XG liquid and D_h (= 4H) is the hydraulic diameter of the microchannel. Subsequently, the friction factor ratio (FFR), the ratio of the friction factor in an SH-textured microchannel to that in a smooth microchannel, was estimated. Note that the thickness of the gas layer is not considered in this work: it has been shown that the frictional characteristics are unaffected when the aspect ratio of the surface textures is greater than unity (Sharma et al. 2019). Therefore, the single-phase model considered here can accurately predict the friction factor when the thickness of the gas layer exceeds the size of the texture.

Fabrication of super-hydrophobic textured surfaces

To assess the flow enhancement of XG liquids in SH-textured micro-channels, surfaces containing arrays of posts and ribs were fabricated on stainless steel (SS) using a femtosecond laser micromachining workstation (LASEA LS5, Belgium). The femtosecond laser has a nominal wavelength of 1032 nm and a pulse duration of 310 fs. The beam was steered through a telecentric focusing lens and scanned at a speed of 1000 mm/s using a grid scanning strategy to create the posts. Scanning electron microscope (SEM, JEOL JCM-600) images of the posts and ribs are shown in Fig. 3a. They were further analysed with a focus variation microscope (Alicona G5) to obtain their dimensions. The 3D height map of the surface with posts, together with their profile, is shown in Fig. 3b. The spacing between the posts is 75 μm and the posts have a size of about 29 μm, corresponding to a gas fraction of ~85%; the spacing between the ribs is 100 μm with a rib width of about 15 μm, again providing a gas fraction of ~85%. The side walls of the posts and ribs are in turn covered by so-called laser-induced periodic surface structures (LIPSS).
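The friction factor ratio defined above can be sketched as follows; the velocities are hypothetical values for illustration, using the fact that at a fixed pressure gradient f scales as 1/U².

```python
def friction_factor_ratio(f_textured, f_smooth):
    """FFR < 1 means the SH texture reduces flow friction relative to a
    smooth channel under the same conditions."""
    return f_textured / f_smooth

# hypothetical: at a fixed pressure gradient the textured channel flows
# ~30% faster, so its friction factor (proportional to 1/U^2) is lower
U_smooth, U_textured = 1.0e-3, 1.3e-3
ffr = friction_factor_ratio(1.0 / U_textured ** 2, 1.0 / U_smooth ** 2)
```

In this illustration FFR = (U_smooth/U_textured)² ≈ 0.59, i.e., roughly 40% drag reduction.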
The LIPSS are nanoscale ripples with a periodicity of 800-900 nm and a depth of 100-200 nm, as confirmed by our previous studies (Gaddam et al. 2021; Siddiquie et al. 2020). Such nanoscale structures on top of microscale features provide an additional energy barrier to wetting and are expected to resist the Cassie-Wenzel transition on SH-textured surfaces (Gaddam et al. 2021; Wu et al. 2017), as shown in Fig. 3a. In addition, the trapezoidal shape of the posts also helps in maintaining the Cassie-Baxter state (Huang et al. 2021). The textured surfaces were further functionalised by applying trichloro(1H,1H,2H,2H-perfluorooctyl)silane (Merck, Germany) to impart super-hydrophobicity. The contact angles on the smooth and SH-textured SS surfaces were measured using a goniometer (OCA 15EC, Data Physics GmbH, Germany). While the smooth surface exhibited a contact angle (CA) of 71.2 ± 2.9° with water, the contact angle on the SH-textured surfaces was measured to be more than 150° with both water and the XG liquids (see Fig. 3c).

Fabrication of micro-channels

The smooth and SH-textured micro-channels were prepared on the same surface using the sticker technique (Kojić et al. 2020). Briefly, an SS sheet was cut into two pieces of 25 mm × 100 mm. On one half of each piece, the SH textures were fabricated by femtosecond laser machining and silane functionalisation. Next, a slot of 2 mm × 70 mm was machined in a double-sided adhesive plastic tape, which was bonded onto the SS sheets as shown in Fig. 4 to complete the micro-channels. The top and bottom walls of the SH-textured micro-channels were thus decorated with either posts or ribs. The ribs were machined normal to the flow direction, i.e., in the transverse-rib configuration. Micro-channels of two heights were investigated using tapes with thicknesses of 90 μm (Tesa 64621) and 200 μm (3M 9088).
The XG liquids (XG1, XG2 and XG5) were fed through the micro-channels from the same inlet using a syringe pump (Legato 110, KD Scientific), as shown in Fig. 4. The inlet pressure was monitored with a pressure gauge (0-4 bar, LEO-Record, Keller, Germany) to ensure that the applied pressure remained below the critical burst pressure causing the Cassie-Wenzel transition, so that the Cassie-Baxter state was maintained throughout the experimental domain. The critical burst pressures for the posts and ribs were estimated using the expressions provided elsewhere (Lobaton and Salamon 2007). The volume flow rate from the smooth and SH-textured micro-channels was estimated by measuring the mass of the liquid collected at the outlets using a precision weighing balance. It is well known that the properties of some polymers degrade with time of operation; however, aqueous XG solutions tend to retain their viscosities for up to 330 h (Zhong et al. 2013). Since the typical length of each experiment is only 0.5-2 h, depending on the flow rate, the XG solutions are expected to remain stable during the course of the flow testing.

Friction factor behaviour of Newtonian and shear-thinning liquids

After validation of the numerical set-up (see the supplementary material), a pressure-driven flow of the shear-thinning liquid (XG5) through smooth micro-channels and micro-channels containing ribs arranged normal to the flow direction was simulated to understand the behaviour of the friction factor. In particular, micro-channels with a constriction ratio of unity (or a height of 200 μm) containing transverse ribs at gas fractions of 50% and 90% are considered. A pressure gradient corresponding to 3 × 10⁻⁴ s⁻¹ < γ̇ < 3 × 10⁴ s⁻¹ was applied across the smooth and textured micro-channels to estimate the friction factor. Figure 5a shows the friction factor as a function of shear rate.
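The flow-enhancement measurement described above — converting the collected liquid mass to a volume flow rate and comparing textured to smooth channels — can be sketched as follows. The mass reading, density, and collection time are hypothetical example values.

```python
def flow_rate_from_mass(mass_g, rho_g_per_ml, duration_s):
    """Volume flow rate (ml/s) from the liquid mass collected at the outlet."""
    return (mass_g / rho_g_per_ml) / duration_s

def flow_enhancement(Q_textured, Q_smooth):
    """Flow enhancement: textured vs. smooth flow rate at the same pressure."""
    return Q_textured / Q_smooth

# hypothetical reading: 36 g collected in 1 h from the textured channel
# versus 30 g from the smooth channel, density ~1 g/ml
e = flow_enhancement(flow_rate_from_mass(36.0, 1.0, 3600.0),
                     flow_rate_from_mass(30.0, 1.0, 3600.0))
```

With these assumed numbers the flow enhancement is 1.2; an enhancement above unity indicates drag reduction by the SH texture.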
The friction factor for the flow of a Newtonian liquid in smooth and textured micro-channels is also shown for comparison. Here, the viscosity of the Newtonian liquid corresponds to the upper plateau of the shear-thinning liquid, which is 5.07 Pa·s (see Fig. 1b). It is apparent from Fig. 1b that the shear-thinning liquid exhibits Newtonian behaviour for γ̇ < 10⁻² s⁻¹ and γ̇ > 10² s⁻¹. The Reynolds number (Re = ρUD_h/η_eff), calculated from the average velocity and the resulting effective viscosity for a smooth channel over the above range of shear rates, is 3 × 10⁻⁹ < Re < 20. The product of the friction factor and Reynolds number (f·Re) was calculated to be 96 in the Newtonian regime, i.e., γ̇ < 10⁻² s⁻¹ (or Re < 1 × 10⁻⁹), in agreement with classical Hagen-Poiseuille flow. Therefore, as can be seen from Fig. 5a, the slope of the friction factor curve for the shear-thinning liquid in the smooth microchannel is linear for γ̇ < 10⁻² s⁻¹ (Newtonian regime), aligning well with the Newtonian curve. At the same time, the value of f·Re is constant when inertial effects are not predominant for Newtonian liquids in SH-textured micro-channels, as confirmed by previous studies (Sharma et al. 2020; Brunchi et al. 2016; Cowley et al. 2016; Samaha et al. 2011); consequently, the slope of the friction factor curve is also linear for the SH-textured micro-channels in the Newtonian regime. Finally, when the applied shear rates reach the lower Newtonian plateau, i.e., γ̇ > 10² s⁻¹, the slope of the friction factor curve again tends towards linearity for both the smooth and SH-textured micro-channels, as shown in Fig. 5a.

Fig. 4 a An exploded view of the microchannel configuration employed for the experiments. b An illustration of slip flow in smooth and SH-textured micro-channels
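The quoted value f·Re = 96 for the smooth channel can be checked analytically: for laminar flow between parallel plates with gap 2H, the pressure gradient is ΔP/L = 12μU/(2H)², and with the definitions f = 2ΔP·D_h/(LρU²) and D_h = 4H the product f·Re is constant at 96 regardless of the fluid or channel size. A short Python sketch of the check (the input values are arbitrary):

```python
def f_times_re(mu, rho, U, H):
    """Verify f*Re = 96 for laminar flow between parallel plates (gap 2H),
    using f = 2*dP*D_h/(L*rho*U^2), dP/L = 12*mu*U/(2H)^2, D_h = 4H."""
    dP_over_L = 12 * mu * U / (2 * H) ** 2   # plane Poiseuille pressure gradient
    D_h = 4 * H                              # hydraulic diameter of a wide channel
    f = 2 * dP_over_L * D_h / (rho * U ** 2)
    Re = rho * U * D_h / mu
    return f * Re
```

The product is independent of μ, ρ, U, and H; only the factor-of-two conventions in f and D_h set the constant 96.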
This behaviour of the friction factor in smooth channels as a function of shear rate over the entire regime for the XG liquids is similar to that observed in another investigation (Shende et al. 2021). On the other hand, the friction factor showed a distinctly non-linear behaviour in the shear-thinning regime (10⁻² s⁻¹ < γ < 10² s⁻¹) for flow past transverse ribs at different gas fractions. To assess the flow friction behaviour of the shear-thinning liquid in SH-textured microchannels, the friction factor ratio (FFR) is plotted as a function of shear rate in Fig. 5b and compared with the Newtonian liquid. As can be seen, the FFR is constant at all shear rates for Newtonian liquid flow past SH-textured microchannels with different gas fractions. For the shear-thinning liquid, the FFR also remained constant in the Newtonian regime (γ < 10⁻² s⁻¹), followed by a clear departure at γ > 10⁻² s⁻¹ for the SH-textured microchannels. In the shear-thinning regime (10⁻² s⁻¹ < γ < 10² s⁻¹), the minimum of the FFR was noticed at shear rates of ~0.2 s⁻¹ and ~0.6 s⁻¹ for microchannels with gas fractions of 90% and 50%, respectively. Once the Newtonian regime is re-established at high shear rates (γ > 10³ s⁻¹), the FFR is observed to plateau towards the constant FFR curves.

Viscous dissipation with Newtonian and shear-thinning liquids

To further examine how the minima of the friction factor ratio arise in the shear-thinning regime (10⁻² s⁻¹ < γ < 10² s⁻¹), the viscous dissipation rate is estimated in the smooth and textured microchannels for flows of water and the shear-thinning liquid. In particular, the viscous dissipation rate per unit volume (φ) for a 2D flow is calculated numerically as (Winter 1987)

φ = η [ 2(∂u/∂x)² + 2(∂v/∂y)² + (∂u/∂y + ∂v/∂x)² ].

Here, u and v are the velocity components in the x and y directions. The velocity and length quantities are normalized by the average velocity (U) and the half channel height (H) to make the viscous dissipation rate a dimensionless quantity.
In addition, we considered microchannels with a constriction ratio of unity containing transverse ribs at a gas fraction of 90%, and the shear rates were varied from 0.01 to 10 s⁻¹ to calculate the non-dimensional viscous dissipation rate (ϕ). Then, the viscous dissipation ratio (ϕ ratio), defined as the ratio of the viscous dissipation rate in the SH-textured microchannel to that in the smooth channel, is calculated for water and the shear-thinning liquids. First, the dimensionless viscous dissipation rate is estimated for a flow of water in a smooth channel. As shown in Fig. 6, irrespective of the shear rate, the viscous dissipation for water in a smooth channel with a constriction ratio of unity takes a value of 6, which agrees with another investigation (Haase et al. 2016). At the same time, the viscous dissipation rate reduced to a value of 3.21 at all shear rates for a flow of water in textured microchannels. Therefore, the viscous dissipation ratio for Newtonian liquids is constant (ϕ ratio = 0.53). In the case of the shear-thinning liquid, the viscous dissipation rate in the smooth channel reduced from 14.8 (γ = 0.03 s⁻¹) to 1.9 (γ = 9.7 s⁻¹). Similarly, the viscous dissipation rate in the SH-textured microchannels also decreased with an increase in the shear rate. While the reduction in the viscous dissipation rate in smooth microchannels is gradual up to a shear rate of about 0.2 s⁻¹, it decreased rapidly in the textured microchannels. The ratio of viscous dissipation rates for the shear-thinning liquid is minimized at a shear rate of 0.23 s⁻¹, as shown in the figure.

Fig. 5 (a) Friction factor as a function of shear rate for a microchannel with plain surfaces and textured microchannels with gas fractions of 50% and 90%. (b) Friction factor ratio as a function of shear rate for Newtonian and shear-thinning (XG) liquids fed through textured microchannels with gas fractions of 50% and 90%.
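The dimensionless viscous dissipation can be illustrated on the analytical plane Poiseuille profile. A minimal numerical sketch; note that normalisation conventions differ between studies, so the value obtained here need not match the channel value of 6 quoted above:

```python
import numpy as np

# Sketch: dimensionless viscous dissipation for a 2D flow,
#   phi = 2*(du/dx)^2 + 2*(dv/dy)^2 + (du/dy + dv/dx)^2,
# evaluated on the analytical plane Poiseuille profile
# u(y) = 1.5*U*(1 - (y/H)^2), v = 0, with velocities scaled by U and
# lengths by the half-height H (so eta drops out of the dimensionless form).
U, H = 1.0, 1.0
y = np.linspace(-H, H, 2001)
u = 1.5 * U * (1.0 - (y / H) ** 2)

du_dy = np.gradient(u, y, edge_order=2)  # only nonzero velocity gradient here
phi = du_dy ** 2                         # du/dx = dv/dy = dv/dx = 0 for this flow
phi_mean = phi.mean()                    # volume-averaged dissipation (~3 here)
```

For this parabolic profile the dissipation peaks at the walls (value 9 in these units) and averages to 3 over the channel.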
Influence of Carreau number

Since the shear-thinning behaviour is predominantly observed in the range 10⁻² s⁻¹ < γ < 10² s⁻¹, the numerical simulations were performed for flow past SH textures (posts, holes, longitudinal and transverse ribs) in microchannels by applying pressure gradients corresponding to the shear rates in that regime. In particular, the Carreau number (Cu) was varied between 0.03 and 97.5, while microchannel constriction ratios (HL) from 0.5 to 2.5 and power-law exponents from 0.16 to 0.48 were investigated. The microchannel constriction ratios 0.5, 1 and 2.5 correspond to channel heights of 100 μm, 200 μm and 500 μm, respectively. The gas fraction of the SH textures in the microchannels was varied from 0.5 to 0.9. Figure 7 shows the friction factor ratio as a function of the Carreau number for all SH textures in microchannels with a constriction ratio of unity and a gas fraction of 90%. The friction factor ratio was observed to decrease rapidly until Cu ~ 2.3 for all the SH textures, before a gradually increasing trend. Similar to the behaviour with Newtonian liquids (Cheng et al. 2009), the longitudinal ribs and posts outperformed the transverse ribs and holes for shear-thinning liquids, too. This is because liquids undergo periodic acceleration-deceleration cycles for flow past holes and transverse ribs, whose slip and no-slip interfaces are normal to the flow direction. This leads to friction losses in such discontinuous SH texture configurations, whereas such losses are not present in the case of continuous SH textures, such as longitudinal ribs and posts. Such non-intermittent SH textures not only reduce friction losses but also help in negating the detrimental effects of contamination at the liquid-gas interfaces. To understand the flow behaviour of the shear-thinning liquid at the liquid-gas interface, the velocity profiles are plotted in Fig. 8.
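The Carreau number used as the abscissa above is commonly defined as Cu = λγ (relaxation time times shear rate), with the viscosity following the Carreau model. A minimal sketch of the model; the parameter values are illustrative, not the fitted XG data:

```python
import numpy as np

# Sketch: Carreau viscosity model for shear-thinning liquids,
#   eta(g) = eta_inf + (eta_0 - eta_inf) * (1 + (lam*g)^2)^((n-1)/2),
# and the Carreau number Cu = lam * g. Parameter values below are
# illustrative placeholders, not the paper's fitted XG properties.
def carreau_viscosity(gamma_dot, eta_0, eta_inf, lam, n):
    return eta_inf + (eta_0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

eta_0, eta_inf, lam, n = 5.0, 1e-3, 2.44, 0.16
g = np.logspace(-4, 4, 9)                 # shear rates [s^-1]
eta = carreau_viscosity(g, eta_0, eta_inf, lam, n)
Cu = lam * g                              # Carreau number at each shear rate
```

The model reproduces the two Newtonian plateaus (eta_0 at low shear, eta_inf at high shear) with a power-law transition of exponent n in between.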
Here, the interfacial velocities of the SH-textured microchannels are normalized by the average velocity (U) across the corresponding smooth microchannel. Figure 8a shows the velocity profiles of the shear-thinning liquid at different Carreau numbers in smooth microchannels and in microchannels containing transverse ribs arranged at gas fractions of 50% and 90%. The velocity profiles for the SH-textured microchannels are extracted at the centre of the liquid-gas interface (x/L = 0.5). As can be seen, when the shear rate is in the Newtonian regime (Cu = 0.01), the maximum velocity (U_max) is 1.5U in a smooth microchannel. In the shear-thinning regime, where the friction factor minimum was observed at Cu ~ 2.3, U_max is reduced to ~1.3U. At the same time, when the gas fraction is 90%, the normalized slip velocity (U_slip) at the liquid-gas interface for Cu = 2.3 was estimated to be ~1.5 times that at Cu = 0.01. This indicates that slippage at the liquid-gas interface is greater in the shear-thinning regime than in the Newtonian regime for the same shear-thinning liquid. In addition, as can be seen from the figure, the shear-thinning regime exhibits pronounced slippage even at the lower gas fraction (50%) when compared to the slippage at the high gas fraction (90%) in the Newtonian regime. Figure 8b shows the normalized interfacial velocity profiles for transverse ribs at a gas fraction of 90% at different Carreau numbers.

Fig. 6 Viscous dissipation rate (open symbols) as a function of shear rate for water and shear-thinning liquid flowing in smooth and textured microchannels. The viscous dissipation ratio (filled symbols) in SH-textured and smooth microchannels for a shear-thinning liquid is also shown. Here, GF = 0.9 and HL = 1.

Fig. 7 Friction factor ratio as a function of Carreau number for all the SH surface textures. The textured microchannel constriction ratio here is HL = 1.
In the Newtonian regime (Cu = 0.01), the interfacial velocity profile shows a parabolic behaviour (note that the normalized interfacial velocity is on a log scale). Such behaviour is no longer present in the shear-thinning regime (Cu = 2.3-97.5), where the normalized interfacial velocity increases steeply near the boundary between the liquid-gas interface (LGI) and the solid-liquid interface (SLI) and reaches a plateau along the length of the interface. Also, the interfacial velocity profile has the highest magnitude at Cu = 2.3, where the friction factor minimum was observed.

Influence of gas fraction

Next, the influence of gas fraction on the friction factor ratio was investigated. Figure 9a shows the friction factor ratio as a function of gas fraction (50%-90%) for all the SH textures when the microchannel constriction ratio is unity. At lower gas fractions (50%), all the SH textures except the longitudinal ribs showed similar behaviour. However, with an increase in the gas fraction, the posts exhibited a significant decrease in the friction factor ratio. At high gas fractions (90%), while the posts and longitudinal ribs resulted in a low friction factor ratio, the holes and transverse ribs underperformed in comparison. It should be noted that these trends associated with the flow of shear-thinning liquids past the SH textures are similar to the behaviour of Newtonian liquids (Cheng et al. 2009). The effect of the gas fraction on the minima of the friction factor ratio in the shear-thinning regime is elucidated through Fig. 9b. As can be seen, the friction factor ratio minimum shifts from Cu ~ 2.3 at high gas fractions (90%) to Cu ~ 6.1 at low gas fractions (50%) for the flow past the posts. A similar behaviour is also observed for the other SH textures.

Influence of microchannel constriction ratio

Another important parameter that influences the flow behaviour in SH-textured microchannels is the constriction ratio.
At high constriction ratios, i.e., when the separation between the top and bottom walls is sufficiently large, the flow fields generated by the slip and no-slip regions on the walls do not interact with each other. However, when the separation between the walls decreases, an interaction of the flow fields from the two walls can influence the overall flow behaviour and thus the flow friction. To assess this, the constriction ratio of the microchannels was varied from 0.5 to 2.5 in the simulations. The friction factor ratio as a function of the Carreau number at different constriction ratios for all the SH textures is shown in Fig. 10. As can be seen, the friction factor ratio decreased by ~70% when the constriction ratio was decreased from 2.5 to 1, and again from 1 to 0.5. A further decrease in the constriction ratio to 0.25 and 0.1 also led to a similar magnitude of reduction in the friction factor ratio (not shown) for all the SH textures. This indicates that reducing the microchannel height for a given period of the SH texture yields a considerable benefit in terms of slippage for shear-thinning liquids, similar to Newtonian liquids (Sharma et al. 2019; Kant and Pitchumani 2021). Furthermore, the minima of the friction factor ratio are noticed at Cu ~ 2.3 for a gas fraction of 90%, irrespective of the microchannel constriction ratio, for all the SH textures. At the same time, the drag reduction decreases by an order of magnitude with a twofold increase in the microchannel height, as shown in Fig. 10. Therefore, the laminar drag reduction becomes negligible in channels with heights of tens of millimetres. At the same time, the presence of slip (LGI) and no-slip (SLI) regions on the walls disturbs the flow velocity and shear rate fields and thus influences the distribution of viscosity near the walls.
Figure 11 shows the shear rate and viscosity distributions with increasing Carreau number in low-constriction-ratio (HL = 0.5) microchannels decorated with posts. As can be seen, there is a stark difference in the shear rate distribution near the slip and no-slip regions when the liquid inertia is low. This also results in a tangible variation in the viscosity distribution at the walls. However, with increased liquid inertia, the sudden jump in the shear rate near the LGI/SLI boundaries vanishes. Consequently, the viscosity distribution near the wall follows the same pattern as the shear rate, as shown in the figure. The interaction of the flow fields arising from the separation between the top and bottom walls is represented by the viscosity variation in the microchannels, as shown in Fig. 12. Since the local viscosity is a function of the local shear rate inside the domain, the variation of viscosity along the microchannel height is plotted for microchannels with different constriction ratios. Here the viscosity is normalized by the zero-shear-rate viscosity. The viscosity variation normal to the flow direction for a flow past posts at microchannel constriction ratios of 0.5, 1 and 2.5 is shown in Fig. 12a-c, respectively. As can be seen, when the top and bottom walls are close to each other (HL = 0.5), the local viscosity shows a dramatic variation even near the half channel height (y/H = 0.9). In particular, the wall effect profoundly influences the viscosity up to 50% (y/H = 0.5) of the half channel height for HL = 0.5. Even when the separation between the top and bottom walls is increased to HL = 1, the wall effect remains significantly pronounced up to 30% (y/H = 0.3) of the half channel height, although the viscosity variation in this case is suppressed beyond y/H = 0.75.
That means, for the microchannel with a height of 200 μm, the wall effects are present up to a distance of 75 μm from each wall, given that the period of the SH texture is less than 100 μm. A further increase in the microchannel constriction ratio to 2.5 led to a suppression of the wall effects, i.e., the viscosity variation is noticed only up to 20% of the half channel height. The viscosity contours across a single post in the microchannels with HL = 0.5, 1 and 2.5, shown in Fig. 12d, elucidate this viscosity variation.

Influence of power-law exponent

Since the concentration of XG dictates the shear-thinning behaviour of the XG liquids, as can be seen in Table 1, here we investigated the influence of strongly and weakly shear-thinning liquids on the flow behaviour and the friction factor ratio. The strength of the shear-thinning nature can be described by the viscosity ratio (α, the ratio of the infinite-shear viscosity to the zero-shear viscosity) and the power-law exponent (n). For instance, α takes a value of 1 for water, whereas α < 1 for the XG liquids. Likewise, n = 1 for water, while n < 1 for the XG liquids. The lower the value of either α or n, the stronger the shear-thinning nature. Here, for the sake of simplicity, α was kept constant and n was varied at three levels, i.e., 0.16, 0.32 and 0.48, corresponding to the XG5, XG2 and XG1 liquids. Figure 13a, b shows the friction factor ratio as a function of the Carreau number for flow past longitudinal ribs and posts, respectively. Here, the microchannel constriction ratio is unity and the gas fraction is 90%. As can be seen, the weaker the shear-thinning liquid, the more the non-linearity of the friction factor ratio curve is attenuated. Furthermore, the minimum of the friction factor ratio shifts from Cu ~ 2.3 at n = 0.16 to Cu ~ 6.1 at n = 0.32 and to Cu ~ 13.5 at n = 0.48 for all the SH textures. It is apparent that the FFR curve tends towards linearity as n → 1. A similar behaviour was observed for microchannels with different constriction ratios. The normalized viscosity profiles at the wall along the microchannel height for all the power-law exponents are shown in Fig. 13c. As can be seen, when the strongly shear-thinning liquid (n = 0.16) is employed, the disturbance in the viscosity near the LGI/SLI boundary is apparent due to the sudden jump in the shear rate; this disturbance vanishes as the shear-thinning nature weakens. The viscosity contours along the microchannel height normal to the flow direction, shown in Fig. 13d, also confirm that the viscosity disturbance attenuates with an increase in n. It is apparent that as n → 1, the disturbance in the viscosity due to the LGI/SLI boundaries becomes confined to the wall. The interfacial velocities for a single post in a microchannel for Newtonian and shear-thinning liquids, shown in Fig. 14a, also confirm that the velocity near the LGI/SLI boundaries varies steeply for the strongly shear-thinning liquid (n = 0.16), whereas an increase in n attenuates the curvature of the velocity profiles near the boundaries. The corresponding shear stress profiles for the Newtonian and shear-thinning liquids, shown in Fig. 14b, confirm the same.

Fig. 10 Friction factor ratio as a function of Carreau number for microchannels with different constriction ratios when the microchannels are decorated with (a) transverse ribs, (b) longitudinal ribs, (c) posts and (d) holes at a gas fraction of 90%.

Influence of relaxation time

To investigate whether the liquid relaxation time (λ) has any influence on the friction factor ratio, numerical simulations were performed by varying λ at three levels, i.e., 0.24, 1.22 and 2.44, for the XG5 liquid while keeping the rest of the parameters the same.
As can be seen from Fig. 15a, decreasing the relaxation time from 2.44 to 1.22 shifts the transitional shear rate from 0.1 to 0.3 s⁻¹, and a further decrease to 0.24 results in a transitional shear rate of 1 s⁻¹. Subsequently, the Carreau number was redefined taking these transitional shear rates into account for the numerical simulations. Figure 15b shows the friction factor ratio for a flow past posts in a microchannel with HL = 1 and GF = 0.9. It is apparent that the friction factor ratio remains the same for all the relaxation times considered. That is, the relaxation time shows no influence on the flow behaviour of the shear-thinning liquids in the SH-textured microchannels. This also confirms that the Carreau number can be used as a non-dimensional parameter when comparing the flow behaviour on geometrically different rough surfaces and for shear-thinning liquids with different properties. Finally, empirical correlations were developed for the friction factor ratio as a function of the Carreau number at different gas fractions, power-law exponents, and microchannel constriction ratios (see the supplementary material).

Discussion on minima of the friction factor ratio

As seen in the previous sections, the friction factor ratio is minimized for the shear-thinning liquids at certain shear rates. Some previous studies have also reported that the hydrodynamic drag on SH textures is minimized when shear-thinning liquids are employed (Haase et al. 2017; Patlazhan and Vagner 2017; Crowdy 2017). It is well accepted that the reduction in hydrodynamic friction for a flow of shear-thinning liquid is attributed to a depletion layer near the wall having considerably lower viscosity than the bulk liquid, as shown in Fig. 16a. Consequently, Patlazhan and Vagner (2017) showed that the viscosity ratio between the depletion layer and the bulk (η_ratio = η_d/η_b, where η_d and η_b are the viscosities of the depletion and bulk layers, respectively) governs the non-monotonic behaviour of the hydrodynamic drag in shear-thinning liquids. However, determining the thickness of the depletion layer is essential to establish the non-monotonic variation of η_ratio, and this was not addressed earlier. In addition, it was also noted in previous work that such non-monotonic variation in η_ratio is not apparent for flow over high-gas-fraction SH-textured surfaces, due to the shear rate disturbances present in the microchannels (Patlazhan and Vagner 2017). Therefore, here we analysed the behaviour of the depletion layer to understand the non-monotonic variation in the hydrodynamic friction curves. Recently, the thickness of the depletion layer was shown to be minimized at critical applied shear rates for shear-thinning liquids (Koponen et al. 2019; Turpeinen et al. 2020). At these critical applied shear rates, the contribution of slip to the flow enhancement becomes dominant. Consequently, we estimated the thickness of the depletion layer (d) on the SH-textured surfaces from the numerical simulations as (Koponen et al. 2019)

d = u_s η_w η_b / [τ_w (η_b − η_w)].

Here, u_s is the slip velocity at the wall, η_w is the viscosity at the wall, η_b is the viscosity of the bulk liquid far away from the wall, and τ_w is the wall shear stress.

Fig. 12 The viscosity variation normal to the flow direction at different heights for a post in textured microchannels with constriction ratios of (a) HL = 0.5, (b) HL = 1 and (c) HL = 2.5, at Cu = 3. (d) The viscosity contour plots across a single post for textured microchannels with different constriction ratios at Cu = 3.
By taking τ_w = η_w γ_w, where γ_w is the shear rate at the wall, the thickness of the depletion layer is evaluated for flow past microchannels with transverse ribs at different constriction ratios (HL = 1 and 2.5), gas fractions (GF = 0.5 and 0.9) and power-law exponents (n = 0.16 and 0.32). Here, the microchannel heights corresponding to HL = 1 and 2.5 are 200 and 500 μm, respectively. The thickness of the depletion layer for flow past transverse ribs arranged at GF = 0.5 in a microchannel with a height of 100 μm is estimated to be ~2.5 μm. Figure 16b shows the non-monotonic variation of η_ratio with the Carreau number when the depletion layer thickness is 2.5 μm, whereas such behaviour no longer appears when the thickness of the depletion layer is increased to 5 μm. Therefore, it is apparent that locating the depletion layer correctly is essential to capture the viscosity disturbances inside the microchannel. To further understand the behaviour of the depletion layer, the variation of its thickness at different Carreau numbers is estimated, as shown in Fig. 16c. As can be seen, the thickness of the depletion layer indeed becomes minimal at Cu = 2.3 irrespective of the constriction ratio, and the minimum shifts towards higher Carreau numbers with increasing power-law exponent. Consequently, when the depletion layer thickness is minimized, the hydrodynamic drag is reduced. Therefore, it is evident that the depletion layer plays an important role in the slip flow of shear-thinning liquids on SH-textured surfaces. It should also be noted that the determination of the depletion layer thickness depends on fitting the velocity profile inside the channels (Turpeinen et al. 2020). However, such an exercise is outside the scope of this research and, therefore, further work is warranted in this direction, especially for 3D flows inside the microchannels.
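The depletion-layer estimate described above can be sketched with the two-layer slip model, d = u_s η_w η_b / (τ_w (η_b − η_w)); whether this is exactly the form used in the cited reference is an assumption, and the input values below are illustrative rather than simulation output:

```python
# Sketch: depletion-layer thickness from a two-layer slip model (one
# common form; treating it as the cited expression is an assumption),
#   d = u_s * eta_w * eta_b / (tau_w * (eta_b - eta_w)),
# with tau_w = eta_w * gamma_w. Input values are illustrative.
def depletion_thickness(u_s, eta_w, eta_b, gamma_w):
    """u_s [m/s], eta_w and eta_b [Pa s], gamma_w [1/s] -> d [m]."""
    tau_w = eta_w * gamma_w                       # wall shear stress
    return u_s * eta_w * eta_b / (tau_w * (eta_b - eta_w))

# Example: slip velocity 1e-4 m/s, wall viscosity 0.1 Pa s,
# bulk viscosity 1.0 Pa s, wall shear rate 10 s^-1.
d = depletion_thickness(u_s=1e-4, eta_w=0.1, eta_b=1.0, gamma_w=10.0)
```

The thickness scales linearly with the slip velocity and shrinks as the wall and bulk viscosities approach each other, consistent with the depletion-layer picture discussed above.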
Flow enhancement in super-hydrophobic textured microchannels

To further validate the numerical results, water and the shear-thinning liquids (XG1, XG2 and XG5) were fed through the smooth and SH-textured microchannels with different heights, and the flow enhancement (ε) was estimated using Eq. 4:

ε = ε_t/ε_s − 1. (4)

Here, ε_s and ε_t are the flow rates in the smooth and SH-textured microchannels, respectively. The microchannel heights investigated in the experiments are 90 μm and 200 μm, corresponding to microchannel constriction ratios of HL = 0.45 and HL = 1, respectively. Furthermore, the SH-textured surfaces were fabricated such that the gas fraction for both ribs and posts is 85%, to allow a fair comparison. To validate the microchannel configuration adopted in this research, smooth microchannels were first fabricated on both sides. Then, equal volume flow rates from the two outlets were verified for each flow rate at the inlet. Figure 17 shows exemplary images of the quantity of liquid obtained from the smooth (right) and SH-textured (left) microchannels with a height of 90 μm for the XG liquids. Here, the microchannels were fed at a flow rate of 8.33 mm³/s. The flow rates obtained with the XG2 liquid in the smooth and SH-textured microchannels were 2.5 mm³/s and 5.55 mm³/s, respectively, whereas the smooth and SH-textured microchannels resulted in flow rates of 1.94 mm³/s and 6.11 mm³/s, respectively, for the XG5 liquid. Figure 18a, b quantifies the flow enhancement for the smooth and SH-textured microchannels when fed with water and the XG liquids for posts and transverse ribs. As shown, the flow enhancement due to the SH textures is constant for the flow of water with respect to the smooth microchannels: the posts resulted in a flow enhancement of 0.4-0.56, whereas the transverse ribs showed an enhancement of 0.27-0.32 when water was fed through the microchannels.
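Using the flow rates reported above for the 90 μm channel at an inlet feed of 8.33 mm³/s, Eq. 4 can be evaluated directly (the flow rates are written as q_* here for clarity):

```python
# Sketch: flow enhancement from Eq. 4, eps = q_textured/q_smooth - 1,
# where q_smooth and q_textured are the volume flow rates measured at
# the smooth and SH-textured microchannel outlets. The flow rates below
# are the values reported in the text for the 90 um channel.
def flow_enhancement(q_textured, q_smooth):
    return q_textured / q_smooth - 1.0

eps_xg2 = flow_enhancement(5.55, 2.5)    # XG2 liquid -> 1.22
eps_xg5 = flow_enhancement(6.11, 1.94)   # XG5 liquid -> ~2.15
```

The stronger shear-thinning XG5 liquid yields roughly 1.8 times the enhancement of XG2 at this feed rate, consistent with the trend described above.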
On the other hand, the flow enhancement increases up to a flow rate of 0.83 mm³/s and decreases thereafter for the XG2 and XG5 liquids in the microchannel with HL = 0.45, for both the posts and transverse ribs. Since the XG5 liquid (n = 0.16) is more strongly shear-thinning than the XG2 liquid (n = 0.31), the flow enhancement is found to be greater in the former than in the latter. For both liquids, the flow enhancement curves plateau towards the limit set by water at low and high flow rates. It should also be noted that the maximum flow enhancement is at least 20 times that of water when the XG5 liquid is employed in the case of posts. At the same time, the transverse ribs exhibited less enhancement than the posts, which corroborates the numerical results. However, the flow enhancement is noticed to be insignificant in the case of the XG1 liquid for both posts and transverse ribs, although it was marginally higher than that of water for the posts. This could be attributed to the weakly shear-thinning (n = 0.47) nature of the XG1 liquid. When the height of the microchannels is increased, corresponding to HL = 1, the magnitude of the flow enhancement decreases by 40% for the XG5 liquid, which also qualitatively corroborates the numerical estimates (see Fig. 10). At the same time, the maximum of the flow enhancement shifts to a higher inlet flow rate. This is because a relatively high shear rate is needed to obtain the maximum flow enhancement (or a minimum friction factor ratio) in microchannels with a high constriction ratio compared with those with a low constriction ratio.

Fig. 17 Images showing flow rate enhancement in the SH-textured microchannels with posts (left) compared to the smooth microchannels (right) for (a) XG2 and (b) XG5 liquids. Here the microchannel height is 90 μm and the flow rate is 8.33 mm³/s.

Fig. 18 The flow enhancement in microchannels with 2H = 90 μm (HL = 0.45) and 200 μm (HL = 1), decorated with (a) posts and (b) transverse ribs, as a function of the flow rate (mm³/s) for water and XG liquids.

Conclusion

Super-hydrophobic textured surfaces are known to exhibit hydrodynamic drag reduction when Newtonian liquids are employed, in both external and internal flow settings. In this work, the flow behaviour of shear-thinning liquids in pressure-driven flow through super-hydrophobic textured microchannels is investigated in detail. The friction factor ratio, which signifies the hydrodynamic drag reduction, is estimated as a function of geometric, flow and fluid parameters through numerical simulations. The friction factor ratio exhibits a non-monotonic variation with the Carreau number on one- and two-dimensional surface textures, irrespective of the microchannel constriction ratio. The stronger the shear-thinning liquid, the lower the friction factor ratio. Super-hydrophobic dual-scale topographies fabricated by a femtosecond laser were used to prepare the textured microchannels. Aqueous Xanthan Gum liquids, when fed through such super-hydrophobic textured microchannels containing posts and ribs, led to a large increase in flow enhancement when compared to water. The experimental results were qualitatively in agreement with the numerical findings. In this research, we considered the liquid-gas interface to be a shear-free boundary. However, circulation in the gas cavities could affect the local slip velocity for shear-thinning liquids, as for Newtonian liquids, and thus the friction factor. Consequently, the geometry and size of the underlying gas cavities is another important factor to consider in the numerical modelling.
At the same time, given the importance of lubricant-impregnated textured surfaces as robust functional surfaces in a range of applications, when compared to gas-cushioned textured surfaces, our future studies will focus on understanding the interaction of shear-thinning liquids with such surfaces. The flow behaviour of shear-thinning liquids in super-hydrophobic textured microchannels elucidated in this work is not only relevant to microfluidic devices but also indirectly significant to the food processing and pharmaceutical industries, especially where automated cleaning-in-place processes are adopted to clear viscoelastic deposits on vessel surfaces and where viscoelastic liquids are pumped through pipelines. In such internal and external flow settings, the use of super-hydrophobic textures on walls could minimise the pumping power and water usage requirements, respectively. In addition, the use of ultrafast lasers to fabricate drag-reducing super-hydrophobic surfaces is also demonstrated in this work. The dual-scale structures were fabricated at a rate of 2 mm²/s on metallic sheets, but much higher processing speeds can be achieved with high-dynamics scan heads and laser sources. Therefore, ultrafast laser processing can be adopted to fabricate super-hydrophobic periodic textures rapidly over large areas and on 3D/freeform surfaces where drag reduction properties are needed, from microfluidics to marine vehicles.
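As a rough illustration of the 2 mm²/s texturing rate quoted above, the processing time scales linearly with area; the target areas below are hypothetical examples, not from the study:

```python
# Sketch: processing-time estimate for the femtosecond-laser texturing
# rate reported above (2 mm^2/s). Target areas are hypothetical examples.
def texturing_time_s(area_mm2, rate_mm2_per_s=2.0):
    return area_mm2 / rate_mm2_per_s

chip_time = texturing_time_s(25.0 * 25.0)      # a 25 mm x 25 mm coupon
panel_time = texturing_time_s(100.0 * 100.0)   # a 10 cm x 10 cm panel
```

At this rate a 10 cm x 10 cm panel takes well over an hour, which is why the higher-throughput scan heads mentioned above matter for large-area or freeform parts.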
Chronic Central Administration of Ghrelin Increases Bone Mass through a Mechanism Independent of Appetite Regulation

Leptin plays a critical role in the central regulation of bone mass, and ghrelin counteracts leptin. In this study, we investigated the effect of chronic intracerebroventricular administration of ghrelin on bone mass in Sprague-Dawley rats (1.5 μg/day for 21 days). Rats were divided into control, ghrelin ad libitum-fed (ghrelin ad lib-fed), and ghrelin pair-fed groups. Ghrelin intracerebroventricular infusion significantly increased body weight in ghrelin ad lib-fed rats but not in ghrelin pair-fed rats, as compared with control rats. Chronic intracerebroventricular ghrelin infusion significantly increased bone mass in the ghrelin pair-fed group compared with control, as indicated by increased bone volume percentage, trabecular thickness, trabecular number and volumetric bone mineral density in tibial trabecular bone. There was no significant difference in trabecular bone mass between the control group and the ghrelin ad lib-fed group. Chronic intracerebroventricular ghrelin infusion significantly increased the mineral apposition rate in the ghrelin pair-fed group as compared with control. In conclusion, chronic central administration of ghrelin increases bone mass through a mechanism that is independent of body weight, suggesting that ghrelin may have a bone anabolic effect through the central nervous system.

Introduction

The brain is regarded as the master regulator of homeostasis and metabolism. It has been suggested that bone and energy metabolism require tightly coordinated regulation so that longitudinal growth, or bone remodeling, is in accordance with energy supply and demand [1]. Numerous studies have investigated the mechanisms involved in the central regulation of bone metabolism and the possible connections between bone metabolism and energy metabolism [2][3][4][5][6][7]. However, the data regarding the regulation of bone metabolism are contradictory.
In addition, the factors that co-regulate bone metabolism and energy metabolism are not yet clear. Leptin, a major adipokine that regulates appetite, has been widely investigated as the major factor co-regulating bone metabolism and energy metabolism [6]. Ducy et al. demonstrated that intracerebroventricular (ICV) injection of leptin induced bone loss in the spine, suggesting a role for leptin in bone metabolism regulation through a central mechanism [6]. Data from subsequent reports supported the bone catabolic effect of leptin and showed that leptin inhibits bone formation via the sympathetic nervous system [4,5]. However, several recent studies have reported contradicting data [2,3,7]. Leptin administration via gene therapy into the hypothalamus did not have a significant effect on bone metabolism [3]. Leptin receptor-deficient mice exhibit decreased bone mass, demonstrating leptin's anabolic effect on bone [7]. Furthermore, ICV injection of leptin increases bone mineral density (BMD) and mineral apposition rates [4], a finding which is the exact opposite of the initial studies reporting a catabolic effect of central leptin [6]. Given these conflicting data, further studies are required to clarify the mechanisms underlying the central regulation of bone metabolism. Ghrelin is a stomach hormone that acts centrally to increase appetite [8]. Several studies have investigated the effect of chronic ICV ghrelin on energy metabolism. Chronic ICV ghrelin infusion increased food intake and weight gain [8], reversed the effect of leptin on food intake [9], increased the glucose utilization rate of adipose tissue [10], and increased lipogenic enzymatic activity in adipose tissue and liver [11]. A recent study investigated the central effects of ghrelin and leptin on body and bone marrow adiposity, as well as adipose tissue and bone marrow gene expression, and reported that central ghrelin had no effect on bone marrow adiposity [12].
However, to date, no study has investigated the effect of chronic ICV ghrelin on bone metabolism. Ghrelin and leptin have opposite effects on energy metabolism, but also on the sympathetic nervous system and other pathways [13][14][15]. Central leptin stimulates sympathetic outflow [13], whereas central ghrelin suppresses sympathetic activity [14]. Since the sympathetic nervous system has been suggested as the major pathway governing the effect of central leptin on bone metabolism [6], it is possible that sympathetic suppression through central ghrelin can affect bone metabolism. Therefore, the present study was designed to investigate the chronic effects of centrally administered ghrelin on bone metabolism. Animals and peptide Male Sprague-Dawley rats (6 weeks old) weighing 180-230 g were used. Body weight and food intake were monitored daily. All animal experiments were performed with approval from the Institutional Animal Care and Use Committee (IACUC) of the Clinical Research Institute at Seoul National University Hospital (an AAALAC-accredited facility). National Research Council (NRC) guidelines for the care and use of laboratory animals were also observed (1996 revision). Standard rodent chow (Purina Rodent Chow; Biopia, Korea) was used. Ghrelin (rat) was purchased from Bachem Inc. (Bubendorf, Switzerland). The ghrelin peptide was prepared at a concentration of 0.25 mg/mL, which corresponds to 1.5 μg/day (6 μL/day; 7.14 μg/kg body weight/day). Surgery Rats were anesthetized by intraperitoneal injection of 50 mg/kg zoletil and 10 mg/kg xylazine and surgically implanted with a 22-gauge stainless-steel cannula (Plastics One Inc., Roanoke, VA, USA) into the third cerebral ventricle. Osmotic mini-pumps (Model 2004, 0.25 μL/h; Alzet Corp., Cupertino, CA, USA) filled with saline or rat ghrelin peptide were implanted under the dorsal chest skin. The mini-pumps were connected to the ICV cannula via a catheter. 
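The infusion arithmetic stated above can be cross-checked from the pump parameters. This is only a consistency sketch: the 0.25 mg/mL concentration and the Alzet model 2004 nominal flow rate of 0.25 μL/h are taken from the text, and the ~210 g body weight used for normalization is an assumed mid-range value for the 180-230 g rats.

```python
# Cross-check of the ICV ghrelin dose: concentration x pump flow rate.
# Assumptions (labeled): 0.25 uL/h nominal Alzet 2004 rate, 210 g rat.
concentration_ug_per_ul = 0.25      # 0.25 mg/mL == 0.25 ug/uL
flow_ul_per_h = 0.25                # Alzet model 2004 nominal flow rate
daily_volume_ul = flow_ul_per_h * 24          # volume infused per day
daily_dose_ug = concentration_ug_per_ul * daily_volume_ul

body_weight_kg = 0.210              # assumed mid-range of 180-230 g rats
dose_per_kg = daily_dose_ug / body_weight_kg  # weight-normalized dose

print(f"{daily_dose_ug:.2f} ug/day, {dose_per_kg:.2f} ug/kg/day")
```

The product recovers the 1.5 μg/day dose reported in the abstract and a weight-normalized dose of about 7.14 μg/kg/day.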
Alzet Brain Infusion Kit 2, with an infusion cannula of ID = 0.18 mm and OD = 0.36 mm, was used. The cannula was stereotactically placed 0.72 mm posterior to the bregma on the midline and implanted 7 mm below the outer surface of the skull via a stereotaxic apparatus. For verification of correct cannula placement, cresyl violet staining and brain dissection were performed. Study design Fifteen rats were divided into 3 groups: the control group (n = 5) received ICV infusions of saline for 21 days; the ghrelin ad libitum-fed (ghrelin ad lib-fed) group (n = 4) received ICV infusions of ghrelin (1.5 μg/day) for 21 days; the ghrelin pair-fed group (n = 6) received ICV infusions of ghrelin (1.5 μg/day) for 21 days and were allowed to eat only as much chow as was consumed by the control group on the previous day. Radiological analyses and bone histomorphometry Nondestructive, three-dimensional evaluation of bone mass and architecture was performed using a microCT scanner (Skyscan 1076 for tibia and lumbar spine and Skyscan 1172 for femur; Skyscan, Aartselaar, Belgium). Lumbar spine and femur were dissected from soft tissue, fixed in 70% ethanol, and analyzed. Image acquisition of the tibia and lumbar spine L3 was performed with a source voltage of 100 kV, a current of 100 μA, a 0.5-mm aluminum filter, and an isotropic voxel size of 8.8 μm. Image acquisition of the femur was performed with a source voltage of 70 kV, a current of 141 μA, a 0.5-mm aluminum filter, and an isotropic voxel size of 11.55 μm. For the tibia metaphysis trabecular bone, a 251-slice-thick volume of interest starting 150 slices distal to the proximal growth plate was analyzed. For the tibia diaphysis cortex, the mid-diaphysis cortical bone volume of interest was analyzed. For the femoral metaphysis, 101-slice-thick volumes of interest starting 150 slices proximal to the distal growth plate were analyzed. 
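The tibia and femur volume-of-interest definitions above are given in slice counts; converting them to physical lengths makes the analyzed regions easier to picture. This sketch assumes the stated voxel sizes are isotropic and in micrometres (8.8 μm for tibia/spine, 11.55 μm for femur), the usual scale for microCT of rodent bone.

```python
# Convert microCT volume-of-interest slice counts into physical lengths,
# assuming one slice per voxel along the scan axis.
def slices_to_mm(n_slices, voxel_um):
    """Length in mm spanned by n_slices at the given isotropic voxel size."""
    return n_slices * voxel_um / 1000.0

tibia_offset_mm = slices_to_mm(150, 8.8)    # gap below the proximal growth plate
tibia_voi_mm = slices_to_mm(251, 8.8)       # analyzed trabecular region
femur_voi_mm = slices_to_mm(101, 11.55)     # femoral metaphysis region

print(f"tibia VOI: starts {tibia_offset_mm:.2f} mm from the growth plate, "
      f"spans {tibia_voi_mm:.2f} mm; femur VOI spans {femur_voi_mm:.2f} mm")
```

Under these assumptions the tibial trabecular region is roughly a 2.2 mm slab beginning about 1.3 mm distal to the growth plate, and the femoral region spans roughly 1.2 mm.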
For the lumbar spine, trabecular bone volumes of interest of whole trans-axial spine body images were analyzed. For trabecular volumetric bone mineral density (BMD) analyses using microCT, phantoms with predefined densities and CTA software were used for BMD measurements, as described in the manufacturer's manual (Skyscan, Aartselaar, Belgium). The trabecular and cortical bone volumes of interest were outlined by interpolation of operator-drawn regions exclusively representing the trabecular and cortical bone, respectively. BMD (g/cm²) of the ex vivo tibia, femur, and lumbar spine (L2 and L3) was measured using the dual-energy X-ray absorptiometry (DXA) instrument PIXIMUS (GE Lunar, Madison, WI, USA). A phantom was scanned daily for quality control. For histomorphometric analyses of dynamic parameters, rats were injected with calcein (20 mg/kg body weight) 2 and 6 days prior to sacrifice. Dynamic histomorphometry analyses were conducted on lumbar spine L5 using the Bioquant program (Bio-Quant Inc., San Diego, CA, USA) [16]. Statistical Analyses Data are presented as the mean ± SEM. Statistical analyses were performed using analysis of variance (ANOVA) with least significant difference (LSD) as a post hoc comparison. Significance was defined as P < 0.05. Statistical analyses were performed with SPSS for Windows (version 17.0, SPSS Inc., Chicago, IL, USA). Chronic ICV ghrelin infusion increases body weight and food intake Chronic ICV ghrelin infusion (1.5 μg/day for 21 days) significantly increased body weight in ghrelin ad lib-fed rats compared with ICV saline-infused control rats (331 ± 4 g vs. 312 ± 9 g, P < 0.05). However, there was no difference in body weight between the control rats and ICV ghrelin-infused rats that were pair-fed the food intake of the control rats (ghrelin pair-fed) (316 ± 11 g) (ANOVA F value = 4.21) (Fig. 1A). 
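The statistical pipeline described above (one-way ANOVA followed by Fisher's LSD post hoc comparisons) was run in SPSS; for readers without SPSS, it can be sketched with SciPy. The three samples below are synthetic stand-ins for the three groups, not the study's data, and `anova_lsd` is an illustrative helper, not part of the original analysis.

```python
# One-way ANOVA with Fisher's LSD post hoc tests: pairwise t-tests that
# reuse the pooled within-group mean square and its degrees of freedom.
import numpy as np
from scipy import stats

def anova_lsd(*groups):
    """Return (F, p, {pair: p}) for a one-way layout with LSD post hocs."""
    f, p = stats.f_oneway(*groups)
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    df_within = n_total - k
    ms_within = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / df_within
    pairs = {}
    for i in range(k):
        for j in range(i + 1, k):
            se = np.sqrt(ms_within * (1 / len(groups[i]) + 1 / len(groups[j])))
            t = (np.mean(groups[i]) - np.mean(groups[j])) / se
            pairs[(i, j)] = 2 * stats.t.sf(abs(t), df_within)  # two-sided p
    return f, p, pairs

rng = np.random.default_rng(0)
control = rng.normal(24.0, 2.0, 5)    # synthetic BV/TV-like values, n = 5
ad_lib = rng.normal(25.0, 2.0, 4)     # n = 4
pair_fed = rng.normal(32.0, 2.0, 6)   # n = 6, shifted upward
f, p, pairs = anova_lsd(control, ad_lib, pair_fed)
print(f"F = {f:.2f}, p = {p:.4f}; control vs pair-fed p = {pairs[(0, 2)]:.4f}")
```

LSD is the most liberal of the common post hoc procedures (each pairwise test is run at the nominal alpha), which is why the paper's limitations section also reports Scheffe and Tukey HSD checks.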
Although there was no significant difference in cumulative food intake between the ghrelin pair-fed and control groups, ghrelin ad lib-fed rats had significantly higher cumulative food intake compared with ghrelin pair-fed rats (P < 0.05) (ANOVA F value = 2.86) (Fig. 1B). Chronic ICV ghrelin infusion increases bone mass Chronic ICV ghrelin infusion significantly increased bone mass in the ghrelin pair-fed group compared with control rats. This was indicated by increased bone volume percentage (BV/TV, 32.4 ± 1.5% vs. 24 (Fig. 2). The trabecular pattern factor (Tb.Pf), a parameter inversely correlated with connectivity, was significantly decreased in the ghrelin pair-fed group, indicating that the trabecular structure was more connected in the ghrelin pair-fed group (P < 0.05, Fig. 2) (ANOVA F value = 6.46). This result is consistent with the decreased structure model index (SMI) observed in the ghrelin pair-fed group, which suggests that the trabecular structure was more plate-like in the ghrelin pair-fed group, as compared with the rod-like structures in the control group (P < 0.05, Fig. 2) (ANOVA F value = 4.45). There were no significant differences in bone mass observed between the control group and the ghrelin ad lib-fed group. Similar trends were observed in the femur and lumbar spine, with lower statistical significance (Figures S1 and S2). Cortical area (Ct.Ar) was significantly increased in the ghrelin ad lib-fed group compared with the control group (P < 0.05, Figure S3) (ANOVA F value = 3.57). There was a tendency toward increased cortical area fraction and cortical thickness in the ghrelin ad lib-fed group compared with the control group (P < 0.1, Figure S3) (ANOVA F values = 1.73 and 2.49, respectively). There were no significant differences in ex vivo BMD measurements between the three groups (Table 1). 
Chronic ICV ghrelin infusion increases the mineral apposition rate Chronic ICV ghrelin infusion significantly increased the mineral apposition rate (MAR) in the ghrelin pair-fed group compared with the control group (5.0 ± 0.2 μm/d vs. 4.0 ± 0.1 μm/d, P = 0.014, Fig. 3) (ANOVA F value = 5.36). The ghrelin ad lib-fed group tended to have a higher MAR compared with the control group (P = 0.073, Fig. 3). No significant differences in bone formation rate were observed among the three groups. Effect of chronic ICV ghrelin infusion on serum markers Chronic ICV ghrelin infusion significantly increased serum leptin in both the ghrelin ad lib-fed group and the ghrelin pair-fed group compared with the control group (Fig. 4). Chronic ICV ghrelin infusion significantly decreased serum ghrelin in the ghrelin ad lib-fed group compared with the control group (P < 0.05) (ANOVA F values = 3.55 and 6.31, respectively). However, there were no significant differences in IGF-1, CTX, TRAP-5b, and P1NP among the three groups. Discussion Using a rat ICV administration model, we demonstrate that intracerebral infusion of ghrelin results in increased bone mass, connectivity, and mineral apposition rates in the ghrelin pair-fed group. Chronic ICV administration of ghrelin increased weight gain in the ghrelin ad lib-fed group; however, this effect was abolished by pair feeding. These findings indicate that chronic central administration of ghrelin increased bone mass through mechanisms independent of changes in body weight. The duration of ICV ghrelin treatment (21 days) in this study could be considered relatively short to induce profound changes in bone mass. However, it should be noted that establishment of an animal model of chronic ICV treatment is technically challenging, and the present study is the longest ICV ghrelin treatment attempted (21 days), with previous studies infusing ghrelin for only 6-12 days [8][9][10][12]. 
Future studies using the Alzet osmotic pump model 2ML4 could extend chronic infusion up to 28 days. Although we clearly demonstrated anabolic effects of ghrelin on bone (independent from ghrelin-induced effects on feeding), the mechanism for this effect is not known. One possible mechanism may be suppression of the sympathetic nervous system. Sympathetic activity has been reported to be suppressed by ICV administration of ghrelin [14] and stimulated by ICV administration of leptin [13]. Hypothalamic administration of leptin was reported to decrease bone formation due to the ability of leptin to increase sympathetic tone [5,6,17]. Numerous animal [5,18] and human [19,20] studies demonstrate that beta-blockers have protective effects on bone metabolism. Therefore, ICV ghrelin may increase bone mass through sympathetic suppression. Future studies using pharmacologic blockade of the sympathetic nervous system or local sympathectomy surgery will clarify this mechanism. Other possible mechanisms include the secretagogue effect of ICV ghrelin on growth hormone [21]. However, the control group and ghrelin pair-fed group did not differ in serum IGF-1 levels, so this does not support a role for growth hormone. These data are supported by a previous study that also reported no change in plasma IGF-1 after 7 days of ICV ghrelin treatment [9]. It has also been reported that ICV administration of ghrelin increased the plasma concentration of growth hormone on day 6, but this was not sustained on day 12 [22]. Therefore, it is unlikely that the effect of ICV ghrelin is mediated through the growth hormone-IGF-1 axis. In contrast to the ghrelin pair-fed group, the ghrelin ad lib-fed group had a slight increase in cortical bone area and no significant difference in trabecular phenotypes compared with controls. However, the ghrelin ad lib-fed group did exhibit increased body weight gain. 
Therefore, the increased cortical bone mass could be the result of increased body weight and a consequent increase in mechanical loading on bone. Recently, we and others have reported that increased fat mass could have adverse effects on bone mass because of deleterious metabolic effects [23][24][25][26][27]. In addition, many researchers have suggested that weight gain and obesity could increase cortical bone mass through mechanical loading, with a resultant decrease in trabecular bone due to metabolic or systemic effects [27][28][29]. Fatty acid lipotoxicity, numerous adipokines, inflammatory cytokines, aromatase, and insulin resistance have been suggested to mediate the deleterious effects of fat on bone metabolism [27][28][29][30]. Therefore, the absence of an increase in trabecular bone mass observed in the ghrelin ad lib-fed group could in part result from the deleterious effects of increased fat mass. The ghrelin ad lib-fed group had a significantly decreased serum ghrelin concentration, likely due to a compensatory response to increased body weight. This decrease in serum ghrelin could result in decreased bone formation, which would then neutralize the anabolic effect of ICV ghrelin on bone. Central ghrelin is reported to have an effect on adipose tissue independent from its effects on food intake [10,11,48]. Therefore, the ghrelin pair-fed rats could have some changes in adipose tissue deposition, which could be independent of food intake and weight change. These changes in adipose tissue deposition could have some direct and indirect effects on the bone phenotype of these ghrelin pair-fed rats. Furthermore, the ghrelin effect on adipose tissue deposition may have contributed to the lack of weight gain in the ghrelin pair-fed rats. However, we did not investigate the adipose tissue phenotypes in the present study to determine the role of adipose tissue deposition in the central regulation of bone metabolism. 
In contrast to the increased bone mass determined by microCT measurement, there were no significant changes in ex vivo BMD measured by DXA in the tibia, femur, and lumbar spine among the three groups. Several possible explanations may account for this discrepancy. DXA measurement reflects both trabecular and cortical bone mass, whereas microCT gives separate estimates of BMD for trabecular and cortical bone and reports volumetric mineral density in g/cm³. Therefore, DXA measurements cannot identify the difference in trabecular bone observed by microCT. It has been reported that the measurement of excised femur BMD is generally not as precise as that of the intact femur in vivo, although the general efficacy of ex vivo DXA measurements was acceptable [31]. Although similar trends were observed, the effects of ICV ghrelin on the femur and spine were not statistically significant. There could be a skeletal site-specific effect of ICV ghrelin. Interestingly, a previous study reported a skeletal site-specific effect of leptin, with lower femur BMD and higher spine BMD observed in leptin-deficient mice [32]. However, no skeletal site-specific effect was observed in leptin receptor-deficient mice [7]. We measured serum ghrelin to investigate possible leakage of ghrelin into the systemic circulation from the ICV injection. However, we found a significant decrease in serum ghrelin in the ghrelin ad lib-fed group as compared with the control group. This result indicated that leakage into the systemic circulation was unlikely. This decrease in serum ghrelin and increase in serum leptin could be a compensatory response to the weight gain of the ghrelin ad lib-fed group. However, the increase in serum leptin in the ghrelin pair-fed group is not due to weight gain, since there was no weight gain in the ghrelin pair-fed group. 
Another possible explanation is that the increase in serum leptin observed in the ghrelin ad lib-fed group and the ghrelin pair-fed group is a compensatory response related to ICV ghrelin injection. Previous studies have reported a similar trend of increased blood leptin levels in ICV ghrelin-injected animals [9,10]. Circulating ghrelin and leptin may contribute to bone metabolism via a direct effect on bone cells. Generally, ghrelin increases both osteoblast and osteoclast function [33][34][35], and leptin increases osteoblast function but decreases osteoclast function [36,37]. However, these direct effects of ghrelin and leptin were inconsistent depending on the concentrations, assays, and cell types used [34,38,39]. Therefore, the role of serum ghrelin in bone metabolism is complex, and a future study specifically aimed at investigating the role of serum ghrelin in bone metabolism is required to resolve this issue. We observed no significant differences in CTX, TRAP-5b, or P1NP, which are serum biochemical markers of bone turnover. Since ghrelin was placed in a subcutaneously implanted osmotic pump, it is possible that after 21 days of incubation at body temperature, the effect of ghrelin was diminished due to peptide degradation. Therefore, the serum biochemical markers measured after 21 days of treatment may not have captured the dynamic bone metabolism state during the treatment period. However, other possibilities could be limitations of the ELISA assay or the serum preparation procedure. The findings of the current study have important clinical implications, since ghrelin or ghrelin mimetics could be utilized as potential therapeutic modalities for osteoporosis. A cross-sectional study showed that serum ghrelin positively correlated with BMD [40]. 
In addition, there are a number of clinical trials investigating the effects of ghrelin or ghrelin mimetics on various conditions, including sarcopenia, cancer-related cachexia, anorexia nervosa, cystic fibrosis, postoperative gastric ileus, and gastroparesis [41][42][43][44][45]. These studies have already validated the safety and efficacy of ghrelin or ghrelin mimetics. The first limitation of the present study is the dose of ICV ghrelin treatment (1.5 μg/day). This dose is slightly higher than in some studies (1.0 μg/day and 1.2 μg/day) [9,46]. However, other studies have used higher doses (5-20 μg/day) [11,47,48]. The dose used in the present study (1.5 μg/day) could be insufficient or subthreshold for inducing an adequate central ghrelin effect. The second limitation was the statistical power of the present study. Most of the findings are based on LSD post hoc comparisons, which is a liberal test. Additional post hoc comparisons with the Scheffe and Tukey HSD tests were performed, and some of the findings (trabecular number, cortical bone area, serum ghrelin, body weight, and food intake) did not reach statistical significance. It should also be noted that some of the findings with a tendency of P < 0.1 may be false positives. The third limitation was the conflict that a long duration of treatment was required to induce a sufficient change in bone metabolism, whereas a long incubation at body temperature carries the risk of ghrelin peptide degradation. In conclusion, chronic central administration of ghrelin increases bone mass through a mechanism that is independent of body weight, suggesting that ghrelin may have a bone anabolic effect through the central nervous system.
Effect of WiFi signal exposure in utero and early life on neurodevelopment and behaviors of rats The aim of this study is to examine the long-term effects of prenatal and early-life WiFi signal exposure on neurodevelopment and behaviors as well as biochemical alterations of Wistar rats. On the first day of pregnancy (E0), expectant rats were allocated into two groups: the control group (n = 12) and the WiFi-exposed group (WiFi group, n = 12). The WiFi group was exposed to a turned-on WiFi device for 24 h/day from E0 to postnatal day (PND) 42. The control group was exposed to a turned-off WiFi device over the same period. On PND 7-42, we evaluated the development and behavior of the offspring, including body weight, pain threshold, swimming ability, spatial learning, and memory, among others. Also, levels of proteins involved in apoptosis and the response to oxidative stress were analyzed, and the hippocampus was examined histologically. The results showed that WiFi signal exposure in utero and early life (1) increased the body weight of the WiFi + M (WiFi + male) group; (2) produced no change in neuro-behavioral development in the WiFi group; (3) increased learning and memory function in the WiFi + M group; (4) enhanced comparative levels of BDNF and p-CREB proteins in the hippocampus of the WiFi + M group; (5) caused no neuronal loss or degeneration, with no evident differences in neuronal numbers in hippocampal CA1 among the groups; (6) produced no change in the levels of apoptosis-related proteins (caspase-3 and Bax); and (7) produced no difference in GSH-PX and SOD activities in the hippocampus. Prenatal WiFi exposure has no effects on hippocampal CA1 neurons, oxidative equilibrium in the brain, or neurodevelopment of rats. Some effects of prenatal WiFi exposure are sex dependent. Prenatal WiFi exposure increased the body weight, improved the spatial memory and learning function, and induced behavioral hyperactivity of male rats. 
Responsible Editor: Mohamed M. Abdel-Daim. Hongmei Wu and Dongyu Min contributed equally to this paper.
Introduction As the economy and society develop, Wireless Fidelity (WiFi) communication services have been extensively used in household, industrial, military, medical, and scientific settings in recent years. The increase in exposure to the WiFi wireless communication signal has posed major concerns with regard to its effects on human health (Othman et al. 2017a; At-Assa et al. 2013). Potentially harmful effects of WiFi exposure have been studied in various tissues and body systems (Redmayne et al. 2013). Studies have suggested that continuous RF radiation exposure could have an effect on human health and lead to disorders like headaches, cancer, and anemia, among other health problems (Othman et al. 2017b; Redmayne et al. 2013; Pallarés et al. 2013; Aït-Aïssa et al. 2012). One study revealed that in rats, WiFi radiation (2.45 GHz) can cause a reduction in sperm parameters and an upsurge in apoptosis-positive cells in the seminiferous tubules (Mortazavi et al. 2013; Le Quément et al. 2012). Saili et al. demonstrated that acute WiFi signal exposure (2.45 GHz) can affect the cardiovascular system (heart rhythm and blood pressure) in mature male rabbits. 
Review studies raised a degree of scientific uncertainty about the risk of radiofrequency (RF) transmissions to human health and suggested taking precautions, especially in children (Castaño-Vinyals et al. 2022; Kheifets et al. 2005). Among several possible biological targets, the effect of the WiFi signal on the nervous system has received special focus because of its immense cellular diversity, electrical nature, and organizational complexity (Shokri et al. 2015). Memory performance over a year was inversely related to the cumulative duration of wireless phone use and, more significantly, to RF-EMF dose (Schoeni et al. 2015). The implication of this is that RF-EMF exposure influences memory function (Zhijian et al. 2013). Other studies have reported some beneficial effects of microwave irradiation (Bayat et al. 2023). They reported beneficial cognitive effects of RF radiation in Alzheimer's disease (Mortazavi et al. 2013; Banaceur et al. 2013), but the evidence for this remains unclear. Currently, the effects of 2.4 GHz WiFi signal exposure on the nervous system have mostly been studied in adult animals, and few in utero and postnatal exposure studies have been carried out. The embryonic period is considered a critical phase in the development of the nervous system, and the plasticity of the immature brain toward environmental factors makes it a possible target of RF fields. However, the neurological effects of WiFi exposure early in life, particularly during pregnancy, have not been extensively studied (Çelik et al. 2016; Altunkaynak et al. 2016). There is limited data with regard to the long-term effects of WiFi on the physiology of the brain. Whether WiFi exposure has positive, negative, or no effects on neurodevelopment and behaviors is not clear. Given the progressive increase in 2.45 GHz wireless networks, concerns should be raised concerning the effects of continuous and long-term whole-body exposure to these WiFi network frequencies (Shokri et al. 2015). 
Therefore, we explored the long-term effects of prenatal and early-life exposure to 2.45 GHz WiFi signals on neurodevelopment and behaviors as well as biochemical alterations of Wistar rats. Animals Adult Wistar rats (both sexes) were procured from Harbin Medical University (Harbin, China). Both female (weight 240-280 g) and male (weight 280-320 g) rats were used for the experiments. The feeding method and determination of the first day of pregnancy (E0) were conducted as described previously. Approval for animal procedures was provided by the Experimental Animal Ethics Committee, Daqing Campus of Harbin Medical University. Experimental design On E0, the gravid rats were separated into 2 groups: the control group (n = 12) and the WiFi-exposed group (WiFi group, n = 12). Separate housing was provided for every pregnant rat. The WiFi group was exposed to a turned-on WiFi device for 24 h/day for 9 weeks. The control group was exposed to a turned-off WiFi device for the same time. The WiFi device (802.16e-2005 WiMAX indoor CPE antenna, model number WIXFMM-130, China) operated at a frequency of 2450 MHz (2.45 GHz). The duration of radiation was 24 h/day at a 30-cm distance from the antenna to the cages. We measured an average electric field intensity of 2.1 V/m, an average power density of 82.32 mW/m², and an average magnetic field intensity of 14.31 mA/m, with no differences between the inside and outside of the plastic cage. The litters born to the control and WiFi group rats were included in the control and WiFi groups, respectively. On PND 21, the offspring were weaned and then separated into different cages based on sex. We carried out the following experiments on the offspring. Neuro-behavioral development The pups were tested for behavioral development as described by Schneider and Przewłocki (2005). This test was performed on eight animals per group. 
Swimming performance: The swimming test was used to evaluate motor development and the integration of a coordinated series of reflex responses on PNDs 8, 10, 12, and 14. Pain threshold: The pain threshold was determined by the hot plate method on PNDs 9, 11, 13, and 15. Self-grooming test The self-grooming test was conducted to assess the repetitive and stereotyped behavior of each rat. The tests were performed as previously described (Kim et al. 2017). Briefly, the rats were individually put into standardized cages under light (40 lx). The cages were empty to avoid digging in the bedding, which is considered a competing behavior. The rats were familiarized with the cages for 5 min, then timed for 10 min. An experienced investigator scored the total time spent grooming during the period. Open field test In the present study, the open field test was conducted on PND 30 to evaluate the rats' locomotor activity. The test was performed as previously described. Prior to the test, the rats were familiarized with the test box for 5 min. Subsequently, the cumulative distance traveled by each rat plus the movement duration (within a 10-min session) was recorded using an auto-tracking camera system (YH-OF, YiHong, China). Morris water maze This was carried out to determine the spatial learning and memory of rats, as detailed before (Wu et al. 2017). The rats were first trained for 4 consecutive days (i.e., from PND 36 to 39). During training, an 8-cm-diameter platform was placed at the center of the third quadrant of the water maze and hidden 1.5 cm beneath the surface of the water. Each rat was given four trials (60 s each) daily to locate the platform. The time taken to locate the platform (escape latency) was regarded as an index of performance. On day 5, the platform was removed, and the number of times that a rat passed through the circular area (diameter, 10 cm) that previously contained the platform within 60 s was recorded. This was taken as the index of spatial memory. 
Tissue preparation Histological tissue preparation was conducted according to the methods described by Wu et al. (2018). Briefly, on PND 43, rats from each group were perfused with ice-cold saline (0.9%) and then with 4% paraformaldehyde. Subsequently, the brains were removed, post-fixed in 4% paraformaldehyde, and then cryopreserved at 4 °C in a 30% w/v sucrose solution. Next, frozen coronal sections (50 μm thick) were mounted on slides for staining and immunohistochemistry. All the sections were preserved at − 80 °C for further studies. Nissl staining Hippocampal neuron apoptosis was evaluated based on the results of Nissl staining of the brain sections. In Nissl staining, normal neurons are round, and the nuclei appear pale blue (Guan et al. 2019). The sections were washed with distilled water and immersed for 15 min in crystal violet (0.1%) at room temperature (RT), then washed again with distilled water. Subsequently, the sections were dehydrated and then sealed with neutral balsam before taking photomicrographs using a light microscope. NeuN immunohistochemistry NeuN immunohistochemistry was carried out to assess neuron apoptosis as described in our previous reports (Wu et al. 2017; Wang et al. 2017). Briefly, the sections were incubated with the primary antibody mouse anti-NeuN (1:50, Chemicon International, USA) overnight at 4 °C. After rinsing, the sections were incubated with suitable biotinylated antibodies (1:200) for 15 min at 37 °C and then subjected to 3,3′-diaminobenzidine (DAB) exposure for about 2 min. Sections were subsequently put into tap water to stop the DAB reaction. Following several washing and dehydration steps, the sections were coverslipped and then visualized using a light microscope and photographed. The control sections were incubated with 10% normal goat plasma instead of the primary antibody. All the subsequent sections were incubated as illustrated above and were observed under a light microscope. 
No positive immunoreaction was observed. Measurement of superoxide dismutase (SOD) and malondialdehyde (MDA) The MDA content and SOD enzymatic activity in the brains of rats from every group were assessed using commercial assay kits as per the methods described by the manufacturer (Nanjing Jian Cheng Bioengineering Institute, Nanjing, China). The level of MDA was expressed in nmol/mg protein. SOD activity was expressed as U/mg protein. Western blot assay This assay was conducted as per the methods described by Wu et al. (2017). The experiment involved using the hippocampal tissues from the different groups. An aliquot of hippocampal tissue was homogenized in ice-cold RIPA lysis buffer. Then, 30-μg protein samples were resolved using SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) and transferred to nitrocellulose membranes. After blocking for 1 h using 5% skim milk, the membranes were incubated overnight at 4 °C with the following antibodies: anti-BDNF (1:1000; Cell Signaling, USA), anti-CREB (1:500; Cell Signaling, USA), anti-Bcl-2 (1:1000; Cell Signaling, USA), anti-Caspase3 (1:500; Cell Signaling, USA), and mouse anti-GAPDH (1:1000, Zhongshan Jinqiao Biotechnology, China). The next day, the membranes were incubated for 1 h at RT with an HRP-labeled goat anti-rabbit/anti-mouse secondary antibody (1:2000, Santa Cruz, USA). HRP signals were detected using a chemiluminescence imaging machine (ImageQuant LAS 4000 mini; Applygen Technologies Inc., China). A scanner was used to digitize the images before they were analyzed with Quantity One software (Bio-Rad Laboratories, USA). Statistical analysis All data were shown as the mean ± SEM. The analyses were completed in SPSS version 18.0 (Beijing Stats Data Mining Co., Ltd.). 
Statistical analysis was conducted using a two-way analysis of variance (ANOVA) to compare the two levels of prenatal status (WiFi versus control) and sex (male versus female), and the means were separated by Bonferroni's post-tests.

Effects of WiFi signals on body weight and neuro-behavioral development of offspring

The body weight of the offspring was monitored for 6 weeks after birth. As shown in Fig. 1A, the mean body weight of male rats in the WiFi group (WiFi + M) was significantly increased compared with male rats in the control group (Con + M) at P28, P35, and P42 (p < 0.01). However, no considerable difference in mean body weight was found during the entire exposure period between female rats in the WiFi group (WiFi + F) and female rats in the control group (Con + F). The results of the neuro-behavioral development tests indicated no significant differences between the WiFi group and the control group in the following aspects: swimming performance (Fig. 1B), pain threshold (Fig. 1C), and negative geotaxis (Fig. 1D).

Effects of WiFi signals on the stereotyped and repetitive behaviors of offspring

Self-grooming was used to observe the stereotyped and repetitive behaviors of each group within 10 min. No remarkable changes in the total number (Fig. 1E) or duration (Fig. 1F) of self-grooming bouts were found between the WiFi group and the control group. The results of the open field tests showed that the distance spontaneously traveled by the WiFi + M group was considerably longer than that of the Con + M group (p < 0.05), while the WiFi + F group did not differ significantly from the Con + F group (Fig. 1G). Furthermore, the velocity of movement showed no significant changes between the groups (Fig. 1H).

Effects of WiFi signals on spatial learning and memory function of offspring

The spatial learning and reference memory of each group were assessed using the Morris water maze test.
In the place navigation test, the escape latency was remarkably shortened in the offspring of the WiFi + M group relative to the other three groups, especially during sessions 3 (p < 0.01) and 4 (p < 0.01), as shown in Fig. 2A and B. In the probe test, the frequency of platform crossings (passing times) of rats in the WiFi + M group showed a marked elevation compared with the other three groups (p < 0.01) (Fig. 2C and D). The BDNF/CREB signaling pathway is considered to play a significant role in learning and memory processes. Thus, we evaluated the changes in BDNF protein levels and the p-CREB/CREB ratio in the hippocampal tissues of the different groups of rats by Western blot analysis. The expression of BDNF in the rats' hippocampal tissues was remarkably elevated in the WiFi + M group relative to the other three groups (p < 0.01) (Fig. 2E). The ratio of p-CREB to total CREB in the rats' hippocampal tissues was significantly increased in the WiFi + M group compared with the other three groups (p < 0.01) (Fig. 2F).

Effects of WiFi signals on neurons in the hippocampus of offspring

We examined whether WiFi signals could affect neurons in the hippocampus of WiFi-exposed rats. We conducted a histological study through Nissl staining to examine neuronal changes in the CA1 region of the hippocampus. In the Nissl staining of the control group, a clear neuronal cell outline with a compact structure, abundant cytoplasm, and a distinct cell body was observed (Fig. 3A). Neuronal degeneration and loss were not detected in the WiFi group. No remarkable differences were detected in the number of surviving cells in the CA1 region of the hippocampus between the WiFi group and the control group (Fig. 3B). In neurological research, NeuN has been considered a reliable marker of mature neurons; the expression level of NeuN has been used to directly evaluate neuronal death or loss (Huttner et al.
2014; Soylemezoglu et al. 2003). Similarly, NeuN immunohistochemistry revealed no differences in mature neurons in the pyramidal cells of the hippocampal CA1 region of the WiFi-treated rats (Fig. 3C). The number of positive cells (dark brown indicates NeuN-positive cells) in the hippocampal CA1 showed no obvious change in the WiFi group relative to the control group (Fig. 3D). Bax and caspase-3 are key proteins related to neuronal damage. To examine the effect of WiFi on hippocampal neurons at the molecular level, the expression of Bax and caspase-3 in the hippocampal tissues of the different groups of rats was tested by Western blot analysis. The results showed no significant changes in the expression of caspase-3 and Bax proteins between the groups (Fig. 3E and F).

Effects of WiFi signals on brain oxidative stress response of offspring

To further study the effect of WiFi on the brains of the offspring, the MDA content and SOD antioxidant enzyme activity in the rats' hippocampal tissues were measured in each group. The MDA level is an important indicator of lipid peroxidation. No statistically significant alteration was found in the MDA content in the hippocampal tissues of WiFi rats compared with the control group (Fig. 4B). In addition, we tested the activity of the antioxidant enzyme SOD. Figure 4A shows that, compared with the Con group, SOD activity in the hippocampus of the WiFi group was not significantly changed.

Discussion

Internet access is currently considered a must-have in daily routines and has therefore been built into almost all communication gadgets (Zhi et al. 2018; Nazıroğlu et al. 2013). Consequently, continuous exposure to WiFi has become a very common risk factor for poor health (Foster and Moulder 2015; At-Assa et al. 2013; Aït-Aïssa et al. 2012).
The present study investigated the effects of a 2.54 GHz WiFi signal exposure during prenatal and early life (24 h/day for 9 consecutive weeks) on rat neurodevelopment, behavior, and cognition, as well as on biochemical index alterations. Our study found that WiFi exposure did not affect the offspring's physical and functional development. These results agree with a study by Poulletier et al. Behaviorally, exposed offspring exhibited no alteration in motor and emotional behavior. Contrarily, some studies have revealed that exposure to WiFi radio frequencies during pregnancy could affect neurological functions of the offspring (Othman et al. 2017b, a). However, this could have been due to the high radiation dosage tested in those studies. Herein, we revealed that in most tests the effect of WiFi treatment depended on the sex of the offspring, consistent with the findings of Zhang et al.; however, the mechanism of action remains unclear. EMF exposure has been shown to have contradictory effects on the cognitive functions of animals, including humans. Dubreuil et al. revealed that RF exposure can reduce performance in rodents, particularly in tasks that require spatial memory (Dubreuil et al. 2003). In contrast, exposure to a 900 MHz frequency for 5 weeks (i.e., 2 h a day, 5 days per week, SAR 3 W/kg) can significantly improve the memory and learning abilities of young rats (Kumlin et al. 2007). Herein, we observed that prenatal WiFi exposure can enhance cognitive ability. Several studies have shown that protein synthesis occurs in neuronal dendrites and might be the cellular basis of memory and learning. Currently, it is not known whether microwave radiation affects protein synthesis, particularly in the brain.

(Fig. 3 caption, continued: E, F The expression of apoptosis-related proteins in the hippocampus of the offspring in each group: E, representative Western blots for Bcl-2; F, representative Western blots for caspase-3. No difference in the expression of apoptosis proteins in the hippocampus between the different groups. All data are expressed as mean ± SEM (n = 6).)

(Fig. 4 caption: Effects of WiFi signals on brain oxidative stress response of offspring in the different groups. A SOD activity in the hippocampus. B MDA contents in the hippocampus. All data are expressed as mean ± SEM (n = 6).)

In this study, we found that prenatal WiFi exposure can increase the expression of BDNF and phosphorylated CREB proteins. However, the detailed mechanism still needs further study. The brain has been shown to be more prone to oxidative injury during development in the early years of life (Çelik et al. 2016). Besides, oxidative stress can be activated via several mechanisms, such as electromagnetic radiation, resulting in molecular impairment. The oxidative stress response due to exposure to WiFi signals has been previously investigated in animal models. It is noteworthy that previous studies investigating the harmful effects of RF-EMR have reported inconsistent findings. Ozben and Kamali et al. implicated microwave radiation (Shokri et al. 2015; Foster and Moulder 2015; At-Assa et al. 2013; Aït-Aïssa et al. 2012) in apoptosis through its ability to trigger lipid peroxidation of cell membranes and thereby yield an apoptosis signal (Kamali et al. 2018; Ozben 2007; Dasdag et al. 2008). However, other studies have indicated that EMR has no considerable impact on the antioxidant defense system, given unaltered oxidative stress markers such as MDA (At-Assa et al. 2013). In the present study, WiFi exposure did not induce a brain oxidative stress response in the offspring, suggesting that the possible damaging impacts of such radiation could depend on the exposure duration, dose, age, and body posture (Peter and Richard 2010). The contradictory results of the mentioned research could be due to differences in study methods, especially in the duration of exposure and dose of the WiFi signal.
In conclusion, the findings of this study indicate that prenatal WiFi exposure does not affect the offspring's hippocampal neurons, oxidative equilibrium in the brain, neurodevelopment, or emotional responses. Notably, some effects of WiFi exposure are sex dependent: prenatal WiFi exposure increased the body weight, improved the spatial memory and learning function, and induced behavioral hyperactivity of male rats. However, further studies are needed, especially on the biochemical and neuromolecular mechanisms underlying these effects.
The trade-off between deep energy retrofit and improving building intelligence in a university building

In the last three decades, deep energy retrofit measures have been the standard option for improving the performance of the existing Danish building stock, with conventional techniques including insulating envelope constructions, changing windows and replacing lights. While such techniques have demonstrated large technical and economic benefits, they may not be the optimal solution for every building retrofit case. With the advancement in the field of smart buildings and building automation systems, new energy performance improvement measures have emerged aiming to enhance the building intelligence quotient. In this paper, a technical evaluation and assessment of the trade-off between implementing deep energy retrofit techniques and improving building intelligence measures is provided. The assessment is driven by energy simulations of a detailed dynamic energy performance model developed in EnergyPlus. A 2500 m2 university building in Denmark is considered as a case study, where a holistic energy model was developed and calibrated using actual data. Different performance improvement measures are implemented and assessed. Standard deep energy retrofit measures are considered, while the building intelligence improvement measures comply with the European Standard EN 15232 recommendations. The overall assessment and evaluation results will serve as recommendations aiding the decision to retrofit the building and improve its performance.

Introduction

When it comes to the building sector, Denmark is not an exception, as the building stock contributes a substantial 40% of the overall energy consumption, with an equivalent contribution in terms of greenhouse gas emissions [1]. Heating ranks at the top of the building services in terms of energy usage, mainly to fulfil space heating and domestic hot water demands.
In recent decades, Denmark has set a very ambitious yet realistic goal of becoming a fossil fuel-free country by 2050 [2], relying only on alternative and renewable energy resources in the energy and transport sectors. Thus, major efforts have been devoted to improving the country's energy generation and supply sectors, with fields including multi-generation technologies, energy storage, and innovative heat and electricity production technologies given full consideration. However, reducing the overall energy demand is as important as enhancing the energy generation and supply sector. Therefore, a balanced approach aiming to improve energy supply stability and capacity, in addition to reducing the energy usage of the different sectors, is key towards attaining the holistic 2050 goal. In this context, designing and constructing new highly energy-efficient buildings and improving the performance of the existing building stock is a major milestone towards reducing consumption and achieving the country's ambitious energy and environmental aims. In recent decades, the Danish building regulations, referred to as 'Danish BR', have been continuously improved and upgraded, mainly at the level of newly built buildings. Thus, the standards for buildings' design and operation have been drastically tightened, enforcing strict regulations in terms of the building thermal envelope components and characteristics as well as indoor thermal comfort and air quality. In addition, guidelines have been set regarding the implementation and performance of the various building services and energy supply systems.

Fig. 1. The evolution of the maximum allowed annual primary energy usage for a 150 m2 residential building [3].

Figure 1 presents the evolution of the maximum allowed primary energy use for a newly built 150 m2 residential building in Denmark, considering the transition between different building standards [3].
It is shown that the primary energy use falls by about 95%, from around 350 kWh/m2 per year for a building built in 1961 to only 20 kWh/m2 based on the future building regulation for 2020. Considering these numbers and the fact that around 80% of the Danish building stock was built before 1980, it is clear that there is large potential to improve the overall energy performance of the existing building stock to live up to the current standards and regulations. While there has been progressive tightening of the standards dealing with the design of new buildings in Denmark, less has been done in terms of organizing and promoting a holistic energy retrofit process for existing buildings. Overall, the main share of the energy retrofit projects in the last three decades was driven by the need to modify or change, either at the level of envelope materials, energy systems or building services [4]. Thus, there is an absence of a systematic methodology aiding the building retrofit decision-making. In this context, the Danish government has realized the potential of improving the overall energy performance of existing buildings and has recently launched a comprehensive strategy: "Strategy for the energy renovation of the existing building stock" [5]. The strategy comprises a large set of guidelines, regulations and initiatives aiming to establish more cost-effective, energy-efficient and systematic energy retrofit projects around the country. Considering the large energy savings potential in retrofitting existing Danish buildings, a large body of research investigations and studies has been presented in recent years, targeting various aspects of building energy retrofit from theoretical and experimental perspectives. Buildings built in the 1850-1930 period have been investigated by Odgaard et al. [6], who reported a 48% energy savings potential through adding 100 mm of insulation to the brick exterior walls.
A similar study considered a retrofit process for a multi-family building from the 1890s to comply with the 2015 Danish building regulation. The authors highlighted energy savings of up to 68% through interior wall insulation, window upgrades and the implementation of a mechanical ventilation system with heat recovery [7]. Mørck et al. [8] reported the energy retrofit process of a school building in Ballerup, Denmark, suggesting a package of insulating the exterior walls with 33 cm of mineral wool, adding 25 cm of insulation to the roof, upgrading the windows to triple glazing, and installing LED lights along with a 150 m2 photovoltaic solar system. Implementing the proposed retrofit package reduced the primary energy use of the school from 200 to 75 kWh/m2, considering both heating and electricity. Another recent investigation by Jradi et al. [9] proposed a systematic approach for building energy retrofitting using dynamic energy performance models. Four daycare centers in Aarhus, Denmark, were considered as case study buildings, where full-scale dynamic building models were developed and various energy retrofit measures were simulated and reported. Based on the simulation results, a package of upgrading electrical equipment, installing LED lights, replacing the heating circulation pump and upgrading the ventilation system with heat recovery was chosen. The implementation yielded energy savings of around 28%, accompanied by a saving of 5.1 tons of annual CO2 emissions and a 4-year payback period.

Deep Energy Retrofit vs Improving Building Intelligence

A major technique which has been implemented in the majority of energy renovation projects, especially those dealing with buildings 50 years old and more, is the deep energy retrofit (DER) approach. This technique is generally implemented at a holistic level, targeting the building thermal envelope components as well as upgrading or changing the energy systems and devices.
The MSER Building Guide [10], set by the Massachusetts gas and electric utilities, defines deep energy retrofit as the holistic building retrofit process targeting the building enclosure and energy systems, resulting in energy-efficient, high-performing buildings. In addition, the IEA-EBC Annex 61 [11] has investigated business and technical concepts for deep energy retrofits of public buildings. As part of the investigation, an overall definition of deep energy retrofits was established: 'Deep Energy Retrofit is a major building renovation project in which site energy use intensity, including plug loads, has been reduced by at least 50% from the pre-renovation baseline'. From the technical perspective, different studies have highlighted the positive impacts of implementing deep energy retrofits in existing buildings, allowing large energy savings, lower greenhouse gas emissions and better thermal comfort and indoor air quality. This is in addition to the economic benefits, mainly lowering the building running and operational costs as well as limiting maintenance costs. Figure 2 depicts the results reported by Mørck et al. [12], considering 26 deep energy retrofit projects completed in different countries. It can be noted that physical envelope measures are the priority in almost every case, in addition to measures targeting the ventilation, heating, cooling and lighting system upgrades. However, measures targeting the building energy management patterns were implemented in only 3 buildings out of 26. Regarding energy management and improving the building intelligence quotient, the intelligent building is an old notion dating back to the 1980s, with no common or universal definition. An early definition of intelligent buildings was provided by Leifer et al.
[13], who stated that an intelligent building is a building that harnesses communication technology to automatically control multiple energy systems, coupled with energy use predictions and evaluations. A widely used definition of intelligent buildings in the US is buildings that are capable of integrating different systems to effectively manage resources in an organised and coordinated mode [14]. In our work, we define building intelligence as the capability of the building to adopt automated strategies to control and manage the performance of multiple energy systems, responding to occupants' demands while ensuring thermal comfort and high energy performance. In this context, we employ the European standard EN 15232 on the impact of Building Automation, Controls and Building Management [15] as a guideline for the energy management measures to be implemented and assessed. While deep energy retrofit techniques such as envelope component upgrades, light changes and ventilation system upgrades have demonstrated large positive technical and economic impacts, they might not be the optimal solution for every retrofit project. With the advancement in the field of smart buildings and automation systems, new performance improvement measures have emerged aiming to enhance the building intelligence quotient. In this work, an overall technical assessment of the trade-off between implementing deep energy retrofit techniques and enhancing building intelligence is performed. The evaluation is supported by dynamic energy simulations of a detailed building performance model developed in EnergyPlus. A Danish university building is considered as a case study where multiple performance improvement measures are implemented and assessed. Along with standard deep energy retrofit measures, multiple building intelligence enhancement measures are chosen, in compliance with the European Standard EN 15232 recommendations.
Case Study Building

A 2500 m2 university building is considered as a case study in this work to evaluate and assess the trade-off between deep energy retrofit and improving building intelligence. The MMMI office building, shown in Fig. 3, was opened in 1995 and is used on a daily basis by researchers and students of the University of Southern Denmark in Odense [16]. The building has two long floors with a small basement, and consists mainly of 100 rooms, including offices, research rooms, laboratories, seminar and meeting rooms. The overall specifications and characteristics of the building are listed in Table 1. The building thermal envelope complies with the BR1995 Danish building regulation regarding the thermal transmittance values of the various components. Windows are double-glazed, and roofs and walls are mainly made of concrete. In terms of energy systems, an indirect district heating system with a heat exchanger on the main supply is employed to satisfy the building heating demands. In addition, domestic hot water is produced onsite in a large central tank. Natural ventilation is employed in the majority of the building, with individual mechanical ventilation units devoted to specific rooms and laboratories. Regarding energy labelling, the building is currently classified as an 'E' building, with an annual heating and electricity consumption of 269 MWh and 178 MWh in 2018, respectively. Considering the reported numbers, there is large potential for improving the building performance, both at the level of the thermal envelope components and in the energy systems' operation and control.

Building Energy Modelling and Simulation

The energy modelling and simulation framework described by Jradi et al. [16] is adopted in this work, employing a package of tools including (i) SketchUp Pro for 3D architectural drawing, (ii) OpenStudio for building energy model development and (iii) EnergyPlus for running energy simulations and reporting results.
Using various design information and specifications, a 3D architectural model of the MMMI building was developed in SketchUp Pro with a detailed depiction of the building geometry, orientation and room distribution. This model, shown in Fig. 4, was imported into OpenStudio to develop the detailed holistic energy model of the building, where various specifications and characteristics are defined. This includes building constructions and materials, loads, schedules, HVAC systems, lighting and equipment. In addition, onsite weather conditions and actual collected occupancy numbers and patterns were imported as part of the energy model to provide a more realistic characterization of the actual building performance. Regarding lighting and equipment schedules, default office building schedules were initially used; these schedules were then updated using field investigations and information collected on the building's use. After completing the whole-building energy model with all specifications defined, the well-established and validated EnergyPlus simulation engine was used to run building energy simulations and report the energy performance at various levels. Using actual data collected onsite on the building's monthly energy use for heating, ventilation, lighting, equipment and overall electricity, the developed baseline building model was calibrated with a reported maximum uncertainty of 3.8% and 5.2% for monthly heating and electricity consumption, respectively. The pie chart in Fig. 5 provides an overview of the annual energy usage breakdown of the MMMI building. Heating dominates the energy use scheme with around 60%, followed by electric equipment and building services with 20%, lighting with 12% and ventilation electricity consumption with around 8%. Moreover, Fig. 6 depicts the monthly profile of the building's total heating and electricity consumption as predicted by the holistic energy model.
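Calibration quality of the kind reported above is typically quantified by comparing simulated and metered monthly totals. A minimal sketch of one such metric, the largest relative monthly deviation; the monthly figures below are hypothetical placeholders, not the building's actual data:

```python
def max_monthly_error(simulated, measured):
    """Largest absolute monthly deviation, as a fraction of the measurement:
    one simple way to express a 'maximum uncertainty' for a calibrated model."""
    return max(abs(s - m) / m for s, m in zip(simulated, measured))

# Hypothetical monthly heating figures (MWh), for illustration only:
sim = [50.0, 44.0, 38.0, 25.0, 12.0, 5.0]
meas = [52.0, 43.0, 39.0, 26.0, 12.5, 5.1]
print(f"{max_monthly_error(sim, meas):.1%}")  # 4.0%
```

More formal calibration criteria (e.g. normalized mean bias error and CV(RMSE) thresholds) are commonly used in practice, but the source does not state which metric was applied here.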
It can be seen that electricity consumption is generally stable over the year, with an average use of around 14.7 MWh per month. On the other hand, heating energy usage has the expected profile, peaking in January with over 52 MWh of heating consumption. Table 2 depicts the Danish building energy labelling system with the annual primary energy usage corresponding to each class. The MMMI building's annual primary energy usage is calculated per indoor floor area and is also reported in Table 2. This primary energy use takes into account the overall building energy consumption for heating, ventilation and electricity, excluding equipment consumption. A weighting factor of 1 is used for heating and a factor of 2.5 for electricity, based on the Danish building standard recommendations. Considering the building energy usage profile for 2018, the annual primary energy use of the building in its current state is 194.2 kWh/m2. As shown in Table 2, for the MMMI building to comply with the acceptable energy class 'C', its primary energy use should be reduced to 136.2 kWh/m2, which seems viable with effective actions. For the building to comply with the stricter A2010 class, around a 62% reduction in primary energy use is needed to reach the 71.9 kWh/m2 maximum line. Nevertheless, if a new office building of the same size as the MMMI building were to be built today, it would be allowed to consume a maximum of 41.4 kWh/m2 of primary energy per year. Considering the numbers reported in Table 2, it is clear that the building needs an energy retrofit to improve the overall energy performance, reduce the energy usage, enhance its energy label and live up to the current building energy standards.

Energy Retrofit Measures

The results reported in the previous section show that the considered office building consumes around 194 kWh/m2 per year of primary energy to satisfy heating and electricity demands.
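The primary energy weighting described above can be sketched as a small calculation. The weighting factors (1 for heating, 2.5 for electricity) and the 2018 metered totals are from the text; the floor area used and the inclusion of all metered electricity are simplifying assumptions, so the result does not reproduce the reported 194.2 kWh/m2, which excludes equipment consumption:

```python
def primary_energy_intensity(heating_kwh, electricity_kwh, floor_area_m2,
                             f_heat=1.0, f_elec=2.5):
    """Annual primary energy use per floor area (kWh/m2), using the Danish
    weighting factors cited in the text: 1 for heating, 2.5 for electricity.
    Equipment electricity should be excluded by the caller, as it does not
    count towards the energy label."""
    primary = heating_kwh * f_heat + electricity_kwh * f_elec
    return primary / floor_area_m2

# 269 MWh heating and 178 MWh electricity are the metered 2018 totals from the
# text; applying the weighting to ALL metered electricity over 2500 m2 is an
# assumption for illustration:
pei = primary_energy_intensity(269_000, 178_000, 2500)
print(f"{pei:.1f} kWh/m2")  # 285.6 kWh/m2
```

The gap between this upper-bound figure and the reported 194.2 kWh/m2 illustrates how much of the metered electricity is label-exempt equipment consumption.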
Considering that the acceptable energy class for an existing building based on the Danish building regulation 'BR18' is 'C', corresponding to an annual primary energy use of 136 kWh/m2, it is clear that the building would be able to attain this level with effective and targeted modifications. However, to reach the stricter and more energy-efficient 'A' classes, a set of improvement measures is needed, with the requirement to cut primary energy use by around 62% to reach the 71 kWh/m2 line. Aiming to assess the impact of implementing various energy retrofit measures on the energy performance of the MMMI building, a set of different improvement measures is designed, implemented and simulated employing the calibrated holistic building model. The assessment is purely technical, meaning that the electricity and heating savings in each improvement case are reported in comparison to the current baseline state. The considered energy retrofit pool comprises a mix of deep energy retrofit measures targeting the building envelope and services, in addition to management and automation measures dealing with various building systems to improve the building intelligence quotient. The list of considered energy improvement measures is presented in Table 3. In total, a set of 19 building energy performance improvement measures is considered. As divided in Table 3, the improvement set includes deep energy retrofit, lighting system management, ventilation system management and heating system management measures. Twelve deep energy retrofit measures are included, considering conventional standard Danish building energy improvement techniques. This comprises exterior wall and roof insulation, window upgrades, LED implementation and PV system installation, in addition to various energy system component and device upgrades.
The deep energy retrofit measures comply with the Danish building regulation BR18 recommendations regarding the building envelope and energy systems. On the other hand, the remaining measures, aiming to control and manage the operation of various energy systems in the building, are considered based on the guidelines of the European Standard EN 15232, dealing with "Energy performance of buildings - Impact of Building Automation, Controls and Building Management" and aiming to enhance the building intelligence quotient. These energy management measures are divided into three categories: measures targeting the lighting system, the ventilation system and the heating system, these being the major energy systems and units in the considered MMMI case study building. The European Standard EN 15232 is the official regulation targeting building automation systems in the EU, developed in support of the upgraded EU Energy Performance of Buildings Directive (EPBD). The standard comprises methods, strategies and guidelines for building automation and control systems (BACS) auditing, evaluation and assessment. Specifically, a set of automation, control and technical management functions is included for each building service domain, including heating, cooling, lighting and ventilation systems, aiming to enhance the operation and improve the overall building intelligence and management level. Four energy efficiency classes are defined by the EN 15232 standard, where 'A' is a highly energy-efficient, fully automated and controlled building with a high-performance building automation system, and 'D' is a building with no building automation system or one equipped with an automation system of minimal features. In addition, generic suggested energy efficiency factors for various building types are provided, which highlight how much energy is to be saved when moving from a lower class to a higher one.
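The class-factor mechanism described above can be sketched as a simple lookup and ratio. The factor values below are illustrative placeholders, not the standard's exact figures for office buildings; class 'C' is the reference (factor 1.0) by definition:

```python
# Illustrative EN 15232-style BACS efficiency factors for thermal energy.
# Placeholder values for the sketch; consult the standard's tables for the
# actual per-building-type figures.
THERMAL_FACTORS = {"D": 1.51, "C": 1.00, "B": 0.80, "A": 0.70}

def estimated_demand(current_demand_kwh, current_class, target_class,
                     factors=THERMAL_FACTORS):
    """Scale a demand by the ratio of BACS efficiency factors:
    demand_target = demand_current * f(target) / f(current)."""
    return current_demand_kwh * factors[target_class] / factors[current_class]

# Moving the MMMI building's 269 MWh heating demand from its current class 'C'
# to class 'B' with these placeholder factors would suggest:
heating_b = estimated_demand(269_000, "C", "B")
print(f"{heating_b / 1000:.1f} MWh")  # 215.2 MWh, i.e. a 20% reduction
```

This is the generic screening-level estimate the standard enables; the dynamic EnergyPlus simulations in this paper replace it with building-specific results.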
Overall, BACS class 'C' is regarded as an acceptable and standard class for existing buildings in terms of building automation and control. Although the considered MMMI building has a building management system implemented, the features and roles of this system are exposed only at a centralized level, meaning that there is no control and management of the operation of the individual energy systems at a zone level. For this reason, the building is placed under BACS efficiency class 'C' in its current state.

Results and Discussion

The considered energy retrofit measures are designed, implemented and simulated using the developed dynamic building energy model. Considering the simulation results, the impact of each improvement measure on the overall MMMI building performance in terms of heating and electricity consumption is reported and compared to the base case scenario. Figures 7 and 8 present the percentage of heating and electricity consumption saved on an annual basis due to the implementation of each of the 19 different energy retrofit measures considered and simulated. Based on the reported results, it is clear that there is large variation in the impact of each improvement measure on both the heating and electricity profiles of the MMMI building. While all retrofit measures resulted in electricity savings, the case is different from the heating consumption perspective, as 7 out of the 19 measures lead to an increase in heating consumption. That said, this increase is not major, varying from 0.7% to 6.8% of the overall building heating consumption, with the addition of a pre-heating loop to the ventilation units resulting in the maximum addition to the heating demand.
Improvement measures such as installing LED lights and upgrading the building's electrical equipment and devices also lead to a slight increase in heating consumption, mainly because energy-efficient lights and equipment dissipate less heat inside the building than conventional devices. In terms of reducing the heating demand, two measures stand out: adding a heat recovery unit to the ventilation systems and implementing adaptive heating control for weather compensation, with heating savings of 13.3% and 16.8%, respectively. Deep energy retrofit measures targeting the building thermal envelope, including insulating constructions and upgrading windows, resulted in heating savings ranging from 4% to 9%. Notably, some energy management measures allow heating savings comparable to upgrading the building envelope, including heating setpoint management with 9.5% and CO2-based ventilation system control with around 7.3% reduction in heating demand. On the other hand, a different technical impact of the implemented measures is reported on the building electricity consumption profile. The improvement measures allowing the highest savings on building electricity consumption are mostly measures targeting building systems management and control: implementing CO2-based ventilation control with 15.2%, installing daylight sensors in common and open rooms with 14.8%, and implementing a mixed CO2- and temperature-based ventilation system control strategy with 13.9% reduction in electricity demand. Nevertheless, deep retrofit measures targeting building lights and electrical equipment allow an average of 11% electricity savings. In addition, measures such as replacing hot water circulation pumps, upgrading ventilation system fans and managing the heating system setpoint scheme allow a small reduction in electricity use (1.2% to 4.1%). 
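The single-measure figures quoted above can be tabulated and ranked directly; a small sketch with the reported percentages (a subset of the 19 measures, names paraphrased):

```python
# Annual savings (%) reported in the text for selected single measures.
heating_savings = {
    "heat recovery on ventilation": 13.3,
    "adaptive heating control (weather compensation)": 16.8,
    "heating setpoint management": 9.5,
    "CO2-based ventilation control": 7.3,
}
electricity_savings = {
    "CO2-based ventilation control": 15.2,
    "daylight sensors in common rooms": 14.8,
    "CO2- and temperature-based ventilation control": 13.9,
}

def best(savings):
    """Return the measure with the largest reported saving."""
    return max(savings, key=savings.get)

top_heating = best(heating_savings)          # adaptive heating control
top_electricity = best(electricity_savings)  # CO2-based ventilation control
```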
However, if the technical assessment presented here is coupled with an economic evaluation, such measures would emerge as favourable, being simple to implement with minimal investment costs compared to other measures and having a lower payback period. Considering the results of implementing the single improvement measures in the building, two holistic retrofit packages were designed and implemented in the dynamic building model. The aim of implementing the packages is to assess and compare the impact of choosing deep energy retrofit measures versus a package targeting building energy systems management to enhance the building intelligence quotient. The two investigated packages are a deep energy retrofit package (RPA) and an energy management package (RPB) comprising CO2-based ventilation control implementation, heating system setpoint management, adaptive heating control implementation and PV solar system installation. The two retrofit packages, RPA and RPB, are defined and implemented in the dynamic building energy model, and a yearly energy performance simulation of the MMMI building is performed to report the technical impact of the two packages on the building operation and on the holistic energy usage profiles. Figure 9 shows the impact of retrofit packages A and B on the MMMI energy usage profile as predicted by the dynamic building model. The deep energy retrofit package (RPA) yields larger savings on heating consumption and equipment electricity consumption, while the energy management package (RPB) allows more savings on electricity consumption for ventilation and lighting in the case study building. On a holistic level, RPA implementation reduces the energy usage for heating by 27.2% and the overall building electricity use by 29.1%. In comparison, RPB yields overall heating and electricity savings of around 23.3% and 33.8%, respectively. 
While RPB has relatively less impact on the building heating profile than RPA, implementing the energy management package RPB allows a substantial reduction in the building ventilation electricity consumption of 71% and in lighting consumption of around 60%. This is mainly due to the effective energy management and control strategies implemented as part of the retrofit package. Table 4 provides a summary of the impact of the two retrofit packages on the building's annual primary energy use. Overall, the deep energy retrofit package RPA reduces the annual primary energy usage of the building to 132.7 kWh/m², while the building energy management package RPB without the PV system allows a further reduction in primary energy consumption to 111.1 kWh/m². With these numbers, both packages raise the building energy class to comply with the 'C' level of the Danish building regulation. Furthermore, adding a PV solar system covering 10% of the building roof area as part of the RPB retrofit package drastically reduces the building's annual primary energy usage to 93 kWh/m², a substantial reduction of around 52.1% compared to the current MMMI building case. An additional added value of implementing the energy management package RPB is raising the building energy efficiency class according to the EN 15232 BACS building automation standard from class 'C' to class 'B', considering the implemented strategies targeting energy systems operation and building intelligence.

Conclusion

In this paper, a technical evaluation and assessment of the trade-off between implementing deep energy retrofit techniques and improving building intelligence measures is provided. The holistic assessment and evaluation is driven by dynamic energy simulations of a detailed building performance model developed in EnergyPlus. 
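The primary-energy figures can be cross-checked against the quoted 52.1% reduction; note that the baseline below is back-calculated from the reported numbers, not stated explicitly in the text:

```python
# Cross-check of the reported annual primary-energy figures (kWh/m2).
rpb_with_pv = 93.0           # RPB package including the PV system
reported_reduction = 0.521   # 52.1% vs. the current building state

# Implied current primary-energy use (back-calculated, not stated directly):
baseline = rpb_with_pv / (1.0 - reported_reduction)   # ~194 kWh/m2

rpa, rpb_no_pv = 132.7, 111.1
saving_rpa = 1.0 - rpa / baseline        # relative saving, deep retrofit package
saving_rpb = 1.0 - rpb_no_pv / baseline  # relative saving, management package
```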
As a case study, a 2500 m² university building situated in Odense, Denmark, is considered to evaluate energy improvement scenarios. A holistic dynamic energy model of the building is developed and calibrated using actual data collected onsite. Then, a list of energy improvement measures is implemented and simulated using the calibrated model to evaluate the impact on the heating and electricity consumption profiles. The set of measures comprises both deep energy retrofit measures and energy management measures. It was shown that adding a heat recovery unit to the ventilation system and implementing adaptive heating control for weather compensation provide the highest impact on heating savings, with 13.3% and 16.8%, respectively. On the other hand, implementing CO2-based ventilation control and installing daylight sensors in common rooms yield the largest savings on electricity consumption, with 15.2% and 14.8%, respectively. Overall, the results show that energy management measures deliver comparable and, in some cases, higher energy savings than deep energy retrofit measures targeting the building envelope or upgrading components. Subsequently, two packages were considered for further evaluation: RPA, a deep energy retrofit package, and RPB, an energy management and building automation package. While RPA yields higher heating savings, RPB allows saving 23% and 33% on the heating and electricity demand, respectively. Moreover, implementing the RPB package along with a PV system installation reduces the annual primary energy usage by 52%. The reported results show that conventional deep energy retrofit techniques, including envelope insulation, window replacement and lighting replacement, are not the optimal solution for every building retrofit case. It is also demonstrated that energy management techniques, which can have the edge from an economic perspective, yield comparable savings in terms of heating and electricity consumption. 
In summary, the trade-off between implementing deep energy retrofit techniques and improving building intelligence measures is governed by many factors, including building type and age, the energy systems installed, the building automation level and the thermal envelope status, in addition to technical and economic feasibility. While this study deals with a holistic technical assessment of the different scenarios, an overall economic assessment remains vital on a case-by-case basis to aid retrofit project decision-making.
Formation of Asymmetrical Structured Silica Controlled by a Phase Separation Process and Implication for Biosilicification

Biogenetic silica displays intricate patterns assembled from the nanometer to the micrometer scale and interesting non-spherical structures that differentiate in specific directions. Several model systems have been proposed to explain the formation of biosilica nanostructures. Among them, phase separation based on the physicochemical properties of organic amines was considered to be responsible for the pattern formation of biosilica. In this paper, using tetraethyl orthosilicate (TEOS, Si(OCH2CH3)4) as the silica precursor, phospholipid (PL) and dodecylamine (DA) were introduced to initiate phase separation of the organic components and influence silica precipitation. The morphology, structure and composition of the mineralized products were characterized using a range of techniques, including field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), thermogravimetric and differential thermal analysis (TG-DTA), infrared spectroscopy (IR), and nitrogen physisorption. The results demonstrate that the phase separation of the organic components leads to the formation of asymmetrical, non-spherical silica structures, and that the aspect ratios of the asymmetrical structures can be well controlled by varying the concentrations of PL and DA. On the basis of time-dependent experiments, a tentative mechanism is also proposed to illustrate the asymmetrical morphogenesis. Therefore, our results imply that in addition to explaining the hierarchical porous nanopatterning of biosilica, the phase separation process may also be responsible for the growth differentiation of siliceous structures in specific directions. 
Because organic amines (e.g., long-chain polyamines), phospholipids (e.g., the silicalemma) and the phase separation process are associated with the biosilicification of diatoms, our results may provide new insight into the mechanism of biosilicification.

Introduction

Biomineralization is the formation of hard tissues with complex structures and multifunctional properties, which occurs in almost all living organisms from prokaryotes to humans [1,2]. Some of the most morphologically gorgeous and structurally intricate biominerals are exemplified by the biosilica formed in aquatic organisms including diatoms and sponges [3,4]. These biogenic minerals are structured in the nanometer to micrometer scale range and composed of amorphous silica [5][6][7]. Diatoms are well known for the spectacular design of their silica-based cell walls (termed frustules) [2,8,9]. More than 40 years ago, Nakajima and Volcani noticed that diatom biosilica contained unusual amino acid derivatives such as N,N,N-trimethylhydroxylysine and dihydroxyproline [10,11]. This observation was the first to indicate that diatom silica is a composite material. In recent decades, a variety of organic and biological molecules have been successfully separated and identified from cell-wall extracts of diatoms [12,13]. An emerging consensus is that polysaccharides [14,15], proteins [16][17][18][19][20], and polyamines [21] are general organic components of diatom cell walls. In this context, many efforts have been made to explore how these components interact with silicic acid, silicate, or silicon-containing compounds, and influence silica morphogenesis [2,16,22,23]. In terms of polyamines, all genera of diatoms investigated so far incorporate polyamines into their silica-based cell walls [24]. Most surprisingly, cell-wall extracts from Coscinodiscus diatoms exhibit predominantly polyamines, whereas silaffin-related peptides appear to be absent [25]. 
These observations stimulated the proposal of a polyamine-based phase separation model for the pattern formation of diatom cell walls with hierarchically hexagonal porous structures [25]. In this model, polyamines undergo a phase-separation process within a specialized membrane-bound compartment termed the silica deposition vesicle to form an emulsion of microdroplets. These droplets form a hexagonally arranged monolayer within the silica deposition vesicle. Silica precipitation occurs at the interface between the solution and the organic microdroplets [26], which causes the formation of a honeycomb-like framework. A defined fraction of the polyamine population is consumed by its co-precipitation with silica. As a result, smaller droplets separate from the surface of the original microdroplet. Silicification continues at these newly created water/polyamine interfaces of the smaller droplets, and a smaller hexagonal package of silica thereupon develops. This mechanism allows the creation of additional hexagonal frameworks at smaller and smaller scales. Finally, hierarchically porous structures and spectacular patterns are exhibited in the silica-based frustules. Polyamines in diatoms appear to be species-specific, and play an important role in the formation of frustules with species-specific patterns [21]. In other words, biosilicification in diatoms might be modulated by the specific structure of the polyamines involved in the precipitation process [27]. Sponge spicules also possess highly hierarchical and organized siliceous nanostructures. The laminated spicule structure consists of alternating layers of silica and organic material [28]. Although the mechanism of biosilicification in sponges is distinct from that of silica formation in diatoms [29], organic amines have also been identified from the marine sponge Axinyssa aculeata [30]. 
These polyamines separated from sponges can deposit silica, and the polyamine-derived macromolecules are chemical factors involved in silica deposition in sponges [30]. Phospholipids also play an important role in biosilicification [31]. Diatom silicification takes place in the silica deposition vesicle [32], whose membrane, called the silicalemma, consists of a typical lipid bilayer [33]. The overall outline of a diatom's silica structure is determined by the shaping of this membrane-bound compartment [34]. Hildebrand et al. found that the silicalemma clings tightly to siliceous structures in areas where silica is deposited [35]. This indicates that membrane components of the silica deposition vesicle could become part of the silica structure [36,37]. Recently, X-ray photoelectron spectroscopy (XPS) [38] and solid-state NMR (SSNMR) [39] studies were performed on diatom cells to analyze the chemical composition of the diatom surface. The XPS analysis revealed a high concentration of lipids present as a structural part of the cell wall in the form of carboxylic esters. The SSNMR study also demonstrated that lipids are tightly associated with silica, even after harsh chemical treatment. All of this implies that phospholipids may be involved in the amine-mediated biosilica deposition in diatoms [40,41]. Although the phase separation model successfully explains important aspects of silica patterning in diatoms, biosilica in diatoms and sponges has other nanometer-scale details, and their nuanced structures and biological functions are well beyond the current ranges used in advanced materials [42]. Taking the centric diatom Thalassiosira eccentrica as an example, the ground plan of its areolae is a two-dimensional system of hexagonal meshes [43]. Moreover, starting from this ground plan, the vertical growth of the areolae walls and the horizontal extension on the distal side of the areolae walls occur in sequence. 
This indicates that asymmetrical development of silica deposition can be well achieved in diatom silicification [44]. However, it is still difficult to understand how the differentiation of solid siliceous structures occurs in different directions [34]. In this study, dodecylamine (DA) and phospholipid (PL) were selected as model organic additives to initiate phase separation and influence silica precipitation. Phospholipid, which has a hydrophilic head and two hydrophobic tails, is a major component of all plasma membranes, including the silicalemma in diatoms and sponges [45]. The goal of this study is to examine the effect of phase separation of biosilicification-associated model organic components on the development of silica morphology, and thus to reveal the contribution of organic phase separation to the growth differentiation of biogenic silica. As a consequence, asymmetrical discus-like silica particles with controlled aspect ratios were indeed obtained during the phase separation of PL and DA, and the morphological evolution of the deposited silica from spherical through sunflower-like to discus-like features was also exhibited under different conditions. Since organic amines, membrane lipids, and the phase separation process are important features of diatom silicification, our results may be useful for a deeper insight into biosilicification.

Materials

All starting chemicals were purchased from Sinopharm Chemical Reagent Co., Ltd, and used as received without further purification. Phospholipids (PL) are of biotech grade, while all other reagents, such as ethanol, dodecylamine and tetraethyl orthosilicate, are of analytical grade. Deionized water was used in all syntheses. For all experiments, glassware was cleaned with aqua regia (3:1 HCl/HNO3), rinsed thoroughly with ultrapure water, and oven-dried overnight before use. 
Preparation

In a typical biomimetic synthesis, 0.10 g of PL and 0.16 g of DA (0.863 mmol) were dissolved in 30 mL of ethanol by ultrasonication, then stirred for about 5 min in a closed 100 mL flask until the solution became clear (Fig. S1a in Supplementary Information). Afterwards, 30 µL of TEOS (0.134 mmol, 2.2 mM) was injected into the solution using a 50 µL syringe with stirring. Subsequently, 30 mL of H2O was added to the above solution to obtain a turbid suspension (Fig. S1b). This suspension was then heated in an 80 °C thermostated water bath, and became clear again as the temperature increased (Fig. S1c). After 24 h of thermostated reaction, the solution was removed from the water bath and cooled down to room temperature naturally. As the temperature of the solution decreased, a white turbidity gradually appeared. Notably, the turbidity could be clearly distinguished after the flask had cooled for an hour at room temperature (Fig. S1d). Nevertheless, the centrifuged precipitate dissolved in ethanol, and thus no silica could be obtained at this point, indicating that the isolated precipitate was organic, i.e., an insoluble organic phase formed first at room temperature. After the flask was left standing for another day (Fig. S1e), the resultant particles were isolated by centrifugation, cleaned by three cycles of centrifugation/washing/redispersion in ethanol, and dried at room temperature for 1 day in vacuum. The obtained sample was named sample L5. For other morphogeneses of silica structures, the same synthetic procedures were deployed except that some experimental parameters were varied. The detailed experimental conditions and the corresponding aspect ratios of the silica particles are listed in Table S1. 
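The quoted molar amount and concentration are mutually consistent with a dose of roughly 30 µL of neat TEOS; the density and molar mass used in this check are standard literature values, not given in the text:

```python
# Sanity check of the TEOS dose against the quoted 0.134 mmol and 2.2 mM.
TEOS_DENSITY = 0.933      # g/mL, literature value (assumption)
TEOS_MOLAR_MASS = 208.33  # g/mol, Si(OC2H5)4

volume_mL = 0.030                                          # 30 uL of neat TEOS
mmol = volume_mL * TEOS_DENSITY / TEOS_MOLAR_MASS * 1000   # ~0.134 mmol
# In the ~60 mL ethanol + water mixture (TEOS volume negligible):
conc_mM = mmol / 60.0 * 1000                               # ~2.2 mM
```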
Moreover, in order to understand the detailed microstructures, some samples were also calcined at 550 °C in air for 6 h to remove the occluded organic components, and XRD and nitrogen physisorption analyses were performed.

Characterization

Several analytical techniques were used to characterize the products. Field emission scanning electron microscopy (FESEM) (JEOL JSM-6700F) was applied to investigate size and morphology. Transmission electron microscopy (TEM) images were obtained on a JEM 2010 transmission electron microscope with an accelerating voltage of 200 kV. The samples for the TEM measurements were prepared by dropping a few drops of sample suspension, with ethanol as the solvent, onto a copper grid and allowing the solvent to evaporate to dryness before analysis. The powder X-ray diffraction (XRD) patterns of the samples were recorded with a Rigaku TTR-III X-ray diffractometer (Japan; Cu Kα radiation, λ = 0.154056 nm), employing a scanning rate of 0.02°/s in the 2θ range 0.8-10°. Infrared spectra were collected using a Nicolet 8700 FT-IR spectrometer on KBr pellets. Thermogravimetric and differential thermal analysis (TG-DTA) was carried out using an SDT Q600 TG/DTA thermal analyzer (TA, USA) with a heating rate of 10 °C/min from room temperature to 800 °C in a flowing air atmosphere. N2 sorption isotherms of the samples were measured using a Micromeritics Tristar II 3020M instrument at liquid-nitrogen temperature. From the adsorption isotherm, the Barrett-Joyner-Halenda (BJH) method was used to calculate the mesopore volume and its size distribution. Specific surface areas were calculated using the Brunauer-Emmett-Teller (BET) method in the relative pressure range P/P0 = 0.05-0.3. Pore volumes were obtained from the volumes of N2 adsorbed at P/P0 = 0.95 or in the vicinity. The dispersibility of suspensions was estimated by dynamic light scattering (DLS, DYNAPRO-99). Figure 1b and c present the side and front views of an individual particle, respectively. 
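The BET surface area follows from a linear fit of the transformed isotherm over P/P0 = 0.05-0.3. A sketch of that calculation on a synthetic BET isotherm, with the monolayer capacity chosen to land near the ~730 m²/g reported in the text for calcined sample L5 (the N2 cross-section and STP molar volume are standard constants):

```python
import numpy as np

# BET surface area from the linearized BET plot over P/P0 = 0.05-0.3.
SIGMA_N2 = 0.162e-18   # m^2, cross-sectional area of one adsorbed N2 molecule
N_AVOGADRO = 6.022e23  # 1/mol
V_STP = 22414.0        # cm^3/mol of gas at STP

def bet_surface_area(p_rel, v_ads):
    """p_rel: array of P/P0 values; v_ads: adsorbed volume (cm^3 STP per gram)."""
    y = p_rel / (v_ads * (1.0 - p_rel))        # linearized BET ordinate
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_monolayer = 1.0 / (slope + intercept)    # monolayer capacity, cm^3/g
    return v_monolayer / V_STP * N_AVOGADRO * SIGMA_N2   # m^2/g

# Synthetic isotherm generated from the BET equation itself (c and v_m are
# illustrative values, chosen so the area is close to 730 m^2/g).
p = np.linspace(0.05, 0.30, 10)
c, v_m = 100.0, 167.6
v = v_m * c * p / ((1.0 - p) * (1.0 + (c - 1.0) * p))
area = bet_surface_area(p, v)   # ~730 m^2/g
```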
The discus-like morphology is further confirmed, and a ridge between the two halves is visible (indicated by black arrowheads in Fig. 1b). The ratio (D/T, i.e., the aspect ratio) of particle diameter ("D" in Fig. 1c) to thickness ("T" in Fig. 1b) is 1.60±0.06. It should be pointed out that the two halves are not completely symmetric (e.g., Fig. 1b), which is also observed in the corresponding TEM image (Fig. 2a). The TEM analyses (e.g., Fig. 2b) also show that the discus-like particles are not hollow but solid. Fig. 2c and d depict locally magnified TEM images of the areas framed in Fig. 2a and b, respectively. The disordered pores are clearly discernible, and no resolved diffraction peaks can be observed in the XRD patterns, including that of calcined sample L5 (Fig. 3a), indicating that the arrangement of the pore channels may be random [46]. Fig. 3b presents the N2 adsorption-desorption isotherm, with the BJH pore size distribution of calcined sample L5 as an inset. One can see a typical type IV isotherm with an N2 hysteresis loop in the calcined sample, indicating its mesoporous character [47]. The adsorption isotherm shows a well-defined capillary condensation step at relative pressures (P/P0) of 0.40-0.50, corresponding to a pore size of 3.3 nm. The Brunauer-Emmett-Teller (BET) surface area is 730 m²/g and the pore volume is 0.62 cm³/g. Therefore, the silicified product is an asymmetrical discus-like structure possessing disordered mesoporous character.

Results and Discussion

The FT-IR spectrum (Fig. 4) of sample L5 displays three characteristic peaks of silica: Si-O-Si asymmetric stretching at 1081 cm⁻¹, symmetric stretching at 800 cm⁻¹, and Si-OH stretching at 965 cm⁻¹ [48][49][50][51]. H-bonded OH groups with various OH···H distances are responsible for the intense absorption at 3428 cm⁻¹, and the band at 1633 cm⁻¹ is due to the δ(HOH) of physisorbed water [52]. 
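The stated correspondence between the condensation step at P/P0 ≈ 0.40-0.50 and a ~3.3 nm pore size can be reproduced with the Kelvin equation plus a Halsey film-thickness correction, the standard ingredients of the BJH method; all constants below are literature values for N2 at 77 K:

```python
import math

# BJH-style pore diameter at a given relative pressure, N2 at 77 K.
GAMMA = 8.85e-3      # N/m, surface tension of liquid N2 (literature value)
V_MOLAR = 34.68e-6   # m^3/mol, molar volume of liquid N2
R, T = 8.314, 77.35  # J/(mol K), K

def pore_diameter_nm(p_rel):
    """Kelvin core radius plus Halsey statistical film thickness."""
    ln_inv = -math.log(p_rel)                                 # ln(P0/P)
    r_kelvin = 2 * GAMMA * V_MOLAR / (R * T * ln_inv) * 1e9   # nm
    t_halsey = 0.354 * (5.0 / ln_inv) ** (1.0 / 3.0)          # nm
    return 2 * (r_kelvin + t_halsey)

d = pore_diameter_nm(0.40)   # ~3.3 nm, matching the reported pore size
```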
Bands detected at 2926 and 2855 cm⁻¹ are assigned to the stretching vibrations of the CH groups, which indicate the existence of organic components [50]. The characteristic vibration of C-C bonds at 1468 cm⁻¹ is also observed. Moreover, the bands at 553 and 1722 cm⁻¹ can be assigned to the O-P-O bond and the carbonyl group, respectively, both of which should originate from phospholipid molecules [53]. Figure 5 presents the TG-DTA curves of the original silica particles (sample L5). The TG curve reveals ~25.4% total weight loss from room temperature to 800 °C. A weight loss of ~5.2% from room temperature to 120 °C and the corresponding endothermal peak at 50 °C in the DTA curve indicate the evaporation of surface-adsorbed water and ethanol. The small endothermic peak at 218 °C in the DTA curve is believed to originate from organic component decomposition and/or polycondensation of the silica network [54]. The exothermic peak at 328 °C can be ascribed to the combustion of the incorporated organic substances [55]. The weight loss at temperatures above 400 °C (6.3%) is generally due to the further condensation and dehydration of silanol groups [56]. FT-IR and TG-DTA analyses of the as-synthesized product confirm the co-existence of silica and organic components, indicating the formation of an organic-inorganic composite. Moreover, our results also show that asymmetrically structured silica, including semispherical and discus-like particles, can be obtained over a relatively broad range of ethanol/water volume ratios (E/W), as shown in Fig. 6. When the E/W is varied between 15/45 and 25/35, interconnected semispherical particles are always obtained (Figs. 6 and 7b). This is because the lower the E/W ratio, the more organic turbidity precipitates; as a result, more and more oil droplets are formed, and the silicified particles are therefore inclined to interconnect. 
When the E/W is below 15/45, however, both PL and DA cannot be well dissolved, and irregular aggregates are formed (Fig. 6 and Fig. 7a). Conversely, when the E/W exceeds 35/25, the morphologies of the products change from discus (Fig. 1) to microsphere (Fig. 6 and Fig. 7c). This is likely because higher ethanol concentrations facilitate the dissolution of the organic components and do not favor the formation of the oil-water interface [57,58]. Further increasing the E/W to 40/20 leads to the formation of ca. 60-nm-diameter spherical nanoparticles (Fig. 6 and Fig. 7d), probably because of the shrinking effect of ethanol at such a high E/W [59,60]. The concentration of the silica precursor (TEOS) is another important factor for the morphogenesis of the asymmetrical silica (Fig. 8). Monodisperse discus-like particles can be obtained at a TEOS concentration of 2.2 mM (sample L5, Fig. 1). However, the interconnection among the particles becomes more significant at both higher and lower concentrations of TEOS. With the decrease of the TEOS concentration to 1.5 mM, the development of the two halves is insufficient and the deposition region of silica is predominantly confined to the water/organics interfaces. Therefore, the obtained particles become thinner (as indicated in Fig. 8b) and the siliceous extension at the interfaces causes some particles to interconnect. Further interconnection among the silica wafers occurs on decreasing the TEOS concentration to 0.7 mM (Fig. 8a). Nevertheless, it is almost impossible to collect any precipitate when the concentration is lower than 0.7 mM. Conversely, when the concentration of TEOS exceeds 2.2 mM, the particles show better growth, with an increase in thickness and diameter (Fig. 8c-f): at a TEOS concentration of 3.0 mM, particles with obvious ridges exhibit asymmetrical discus-like shapes, and some asymmetrical particles fuse together along their ridges (as indicated by arrows in Fig. 8c). 
Further increasing the concentration to 3.7 mM results in the additional formation of spherical particles together with the asymmetrical aggregates of silica (as indicated by arrows in Fig. 8d). More spherical particles with larger diameters are observed when 4.5 mM or 5.2 mM of TEOS is used (Fig. 8e and f). The emergence of extra spherical silica at the higher concentrations of TEOS can be ascribed to the independent nucleation and growth of silica in the reaction solutions. We noted that the reaction solutions with 3.7, 4.5 or 5.2 mM of TEOS did not become clear under the same heating conditions. In other words, silica precipitation had occurred before the cooling-down, which may account for the formation of the extra silica spheres at the higher TEOS concentrations. In summary, the morphology of the silica is sensitive to the TEOS concentration over the range of 0.7 to 5.2 mM. Thicker and more robust siliceous structures are formed with increasing concentrations of the silica precursor. A similar phenomenon was found by Finkel et al. [61] when they quantified silicification in marine diatoms: the frustules became more heavily silicified with increasing silicate concentrations over the range of 0.02-1.1 mM. Therefore, changes in the frustule thickness of diatoms may provide a paleoproxy for surface silicate concentrations under the conditions where they lived [61]. For a better understanding of the morphogenesis details of the asymmetrical siliceous structures, a series of experiments with different concentrations of PL or DA was carried out. The experimental details are given in Table S1. Increasing the PL concentration from 0 to 1.70 g/L (samples L1-L5; Fig. 9a-e and Fig. 1) leads to an increase in the aspect ratio of the obtained particles (see the corresponding line in Fig. 10). Many connected particles appear with a further increase of the PL concentration to 2.0 g/L (sample L6; Fig. 9f), so their aspect ratios were not calculated. 
It should also be pointed out that the asymmetry between the two halves is much more significant in Fig. 9c-e than in Fig. 1. Nevertheless, cracked spheres are obtained without the addition of PL (Fig. 9a), whose shape may be determined by that of the initial DA micelles [62]. On the other hand, when the concentration of PL and the pH of the initial reaction mixture are fixed at 1.70 g/L and 11.6, respectively, the particles become thinner with decreasing DA concentration (see the line marked with '#' in Fig. 10). In the absence of DA, silica films are finally produced (data not shown). This is probably because the properties of the organic aggregates and the DA concentration in the system have an important influence on silica morphogenesis. DA can interact with PL in solution [63,64]. The incorporation of DA molecules introduces more amino groups into the organic aggregates. These amino groups further interact with the silanol groups of silicates and induce the preferential deposition of silica at the organic interface. Meanwhile, increasing the DA concentration inevitably leads to more DA molecules co-precipitating into the silica and/or anchoring to the surfaces of the siliceous structures. These effects also favor the growth and thickening of the siliceous structures. As a result, thicker silica structures are formed at higher concentrations of DA. In contrast, thinner silica particles with higher diameter/thickness (D/T) ratios are obtained at lower concentrations of DA. Meanwhile, the excessive extension of silica at the interface makes the connection of neighboring particles easier. In particular, in the absence of DA, silica deposition predominantly occurs at the PL interface owing to the electrostatic interaction between Si-O⁻ groups from silicates and the ammonium head groups of PL molecules [65]. Silica deposition is then confined to the extension of silica at the interface. 
The connection of neighboring particles occurs commonly, and film-like siliceous structures are finally obtained without DA. On the basis of the above results, it can be concluded that both PL and DA are indispensable factors in the formation of the asymmetrical silica particles. Moreover, it can be seen from Fig. 10 that these two additives display opposite effects on the aspect ratio of the resultant particles. The change in aspect ratio can be considered an indication of asymmetrical silica growth in different directions. Therefore, the asymmetrical growth of siliceous structures can be well controlled by changing the proportions of the organic components PL and DA in our experiments. In the past few years, fabrications of asymmetrically structured silica have been reported. Non-spherical silica Janus colloids, for instance, were produced by asymmetric wet-etching at the wax/water interface [66]; however, this was not achieved directly by asymmetrical deposition of silica. Wang et al. [67] used a single-step emulsion templating method to create budded mesoporous silica capsules with protruding stumps formed in particular orientations, and a radiolaria-like silica morphology with multicellular structured spines has also been obtained [68]. However, to the best of our knowledge, no report on the preparation of asymmetrical silica structures in the presence of phospholipid and organic amine can be found, and the aspect ratio (diameter-to-thickness ratio) of the obtained particles can be finely controlled by tuning the feeding amounts of the organic components (Fig. 9 and 10). In our biomimetic experiments, PL and DA are used as biosilicification-associated model organic components to form a PL-DA composite emulsion by a deliberate heating-cooling process (see the experimental details and Fig. S1) and to create an oil-water interface at room temperature for the deposition of silica. Specifically, PL can dissolve in the ethanol/water mixture at 80 °C [69]. 
Therefore, an 80 °C pretreatment temperature was selected to promote PL dissolution and reinforce the PL-DA interaction. In fact, the solution became clear during the continuous heating process, which suggests that neither organic turbidness nor silica precipitation formed in this step (Fig. S1c). After the flask is removed from the water bath and cooled down naturally, however, white organic turbidness appears with the gradual decrease in temperature, and phase separation can be directly observed at room temperature (Fig. S1d) [57]. Furthermore, our DLS results also reveal that larger micelles (1781.5 ± 712.4 nm in diameter) indeed occur in the suspension at room temperature, confirming that phase separation takes place. It is well known that the dodecyl chains of DA molecules can interact with the PL hydrophobic chains by van der Waals forces, while their NH2 or NH3+ heads interact with the P–O⁻ groups of PL by hydrogen bonding and electrostatic interaction [63]. Therefore, in such a physico-chemical environment, the organic emulsion is formed, and subsequently the hydrolysis of TEOS occurs near the oil/water interfacial region owing to the electrostatic interaction of Si–O⁻ groups from silicates with the ammonium groups from PL and DA molecules [65]. As previously reported, asymmetrical polystyrene particles with flattened shapes were produced at an oil-water interface [70]. Driven by surface tension [57,70], the particles spread at the fluid interface, which leads to the appearance of a ridge and the subsequent formation of discus-like particles. It should be pointed out that although the preheating process was carried out first, the formation of organic turbidness and silica precipitation did occur at room temperature. These results suggest that the precipitation of asymmetrical silica structures can be achieved by phase separation of the organic components (e.g., Fig. 1, Fig. 9c-f and Fig. 10).
It appears that the interaction between the different organic molecules and their phase separation can significantly affect the physico-chemical growth environment of the siliceous structures and ultimately control the silica morphologies [71]. To further understand the formation details of the discus-like silica particles, time-dependent silicification experiments were also carried out. It is found that the reaction solution gradually turns turbid during the cooling process at room temperature, and the precipitate obtained by centrifugation after 1 h of standing consists of organic components, because it dissolves completely in ethanol. However, the precipitate obtained after 1.25 h of standing does not dissolve completely in ethanol, indicating that silicified structures have formed. SEM analysis reveals that the silicified structures consist of small silica particles of ca. 250 nm in diameter (Fig. 11a), and the interconnection of the particles leads to the appearance of some larger aggregates with diameters above 400 nm, as arrowed in Fig. 11b. After 1.5 h of reaction, however, some particles with a thin margin can be found (arrowed in Fig. 11c), indicating that the morphological development of the particles may occur at the oil/water interface. When the silicification system continues standing for 1.75 h, the particles have developed into discus-like embryos with diameters up to 1 μm (Fig. 11d). Moreover, a few discus-like particles with an expanding margin can be clearly observed (typically arrowed in Fig. 11d), further supporting that the formation of the discus-like structures occurs at the oil/water interface. Further prolonging the standing time to 2 or 2.25 h leads to the appearance of well-developed asymmetric discuses of ca. 2 μm in diameter, many of which exhibit conjoined structures (Fig. 11e-h).
Combined with the results depicted in Fig. 1, it is not difficult to see that, at the oil/water interface, aggregation, fusion and margin expansion of the small siliceous particles, together with further growth, lead to the monodisperse, well-formed discus-like asymmetric structures. On the basis of our time-dependent experiments, a tentative mechanism for the formation of discus-like asymmetric silica is proposed and illustrated in Fig. 12. Namely, when the bulk solution is cooled down naturally, the hydrolysis of TEOS and the precipitation of silica occur slowly near the oil/water interfacial region along with the phase separation of the organic components. Silica formation begins with the appearance of small particles (Fig. 12a, Fig. 11a). With their growth and aggregation, larger aggregates of silica particles are formed (Fig. 12b, Fig. 11b). Further growth of these aggregates makes their surfaces smoother, and the growth environment (the oil/water interfacial region) facilitates their expansion at the oil/water interface. Therefore, flake-like silica structures appear (Fig. 12c, Fig. 11c) and further develop into discus-like particles with a diameter of ca. 1 μm, which is much smaller than the final product (2-3 μm) (Fig. 12d, Fig. 11d). As the margin expansion process continues, several neighboring particles (e.g., two particles) are joined together to form the "conjoined structures" (Fig. 12e, Fig. 11e-f). Further fusion and growth of the conjoined structures lead to discus-like particles with diameters above 2 μm (Fig. 12f, Fig. 11g-h). Finally, the full development of their two halves results in the formation of well-defined asymmetric discus-like structures 2-3 μm in diameter (Fig. 12g, Fig. 1). Implication for biosilicification Silicification in diatoms is a complicated process involving architecture design from the nano- to the microsize level [72].
The siliceous structures formed at different scales and stages can be unified in the mineralization system of diatoms, and finally assemble into hierarchical and multifunctional frustules. The valve development of Thalassiosira eccentrica can be divided into three stages. Formation of the base layer (areolae) defines the structure in the x, y plane (Stage 1), and subsequent deposition (Stage 2) involves expansion along the z axis, but only in one direction [34,43,44]. During the development of the outer layer (Stage 3), however, the differentiation of the plane occurs again, forming a right angle to the previous plane (Stage 2) and lying parallel to the base layer (Stage 1) [43]. The formation of the two-dimensional system of hexagonal meshes (areolae) in Stage 1 can be well explained by the phase separation model [25]. However, it is not clear whether this model is also suitable for the asymmetrical precipitation of silica, including the vertical expansion in Stage 2 and the horizontal growth in Stage 3. Space-limited by the membrane-bound compartment and promoted by organic amines, a siliceous base layer with pores in a hexagonal arrangement forms during the phase separation of organic droplets [40,41]. However, the role of organic amines and phospholipids in biosilicification may not be restricted to influencing the development of the base layer. Our experiments exhibit the controlled deposition of asymmetrical silica particles during phase separation. The asymmetrical particles emerge when the concentration of PL exceeds 0.70 g/L. The addition of PL favors the morphology transition from spherical to discus-like particles, and the aspect ratio increases regularly with increasing PL concentration (e.g., Fig. 9). These results show that phospholipids can provide distinct chemical influences in organic-amine-induced silica precipitation [34,43,64].
That is, the aspect ratios of the particles can be easily adjusted by varying the stoichiometric composition of the mineralization system (including DA and PL; Figs. 9 and 10), and the degree of fusion among neighboring siliceous structures is drastically affected by the concentration of the silica precursor (Fig. 8). Therefore, it can be presumed that the phase separation of organic droplets is still an important process for the oriented differentiation of silica. In other words, the phase separation model may be broadened to explain the formation of siliceous structures in the last two stages. Conclusions A series of experiments was carried out by introducing PL and DA into the reaction system to initiate phase separation of the organic components and influence the morphogenesis of silica. The results show that this phase separation process leads to the formation of asymmetrically non-spherical silica structures, and that the aspect ratios of the asymmetrical structures can be well controlled by varying the concentrations of PL and DA. A tentative mechanism is also proposed based on the time-dependent experiments. Moreover, the degree of fusion among neighboring siliceous structures can be controlled by modulating the concentration of the silica precursor (TEOS) in the silicified region. Based on the special importance of phospholipids (e.g., the silicalemma), organic amines and the phase separation process for biosilicification, our results suggest that, in addition to explaining biosilica nanopatterning, the phase separation process may also be involved in the growth differentiation of siliceous structures in specific directions. This provides a new insight into the mechanism of biosilicification.
Emergence of encounter networks due to human mobility There is a burst of work on human mobility and on encounter networks; however, the connection between these two important fields has begun to be explored only recently. It is clear that the two are closely related: mobility generates encounters, and these encounters might give rise to contagion phenomena or even friendship. We model a set of random walkers that visit locations in space following a strategy akin to Lévy flights. We measure the encounters in space and time and establish a link between walkers after they coincide several times. This generates a temporal network that is characterized by global quantities. We compare this dynamics with real data for two cities: New York City and Tokyo. We use data from the location-based social network Foursquare and obtain the emergent temporal encounter network for these two cities, which we compare with our model. We find long-range (Lévy-like) distributions for traveled distances and time intervals that characterize the emergent social network due to human mobility. Studying this connection is important for several fields such as epidemics, social influence, voting, contagion models, behavioral adoption and the diffusion of ideas. Introduction There is a burst of studies on human mobility nowadays due to the increasing availability of data that allow us to determine, using mobile phones and location-based social networks, the spatial location of people. On the other hand, it is clear that there must be an intimate connection between human mobility and encounter networks. People frequently visit popular places in a city, meeting other people there. If this occurs often, there is a chance that a contagion process takes place. In this way, there is feedback between human mobility in space and the structure of the encounter network. The goal of this research is to study the emergence of encounter networks due to human mobility in cities.
The science of networks has witnessed exponential growth due to the ubiquity of the network concept in many areas of human endeavor [1]. In particular, social networks are now studied not only by researchers in the social sciences, but by researchers in the exact sciences as well [2]. This emergent science acquires importance due to the vast range of applications in many different areas. On the other hand, human mobility has only recently started to be explored in detail, thanks to geolocalized data from mobile phones and location-based social networks. Some of these studies show that human mobility follows a long-range dynamics, akin to Lévy walks [3,4], which has been shown before to be a common strategy in many animal species and in humans [5][6][7][8][9][10][11][12][13][14]. It is clear that the spatial constraints imposed by cities affect the mobility patterns of humans by restricting the motion of individuals and providing efficient transportation networks that allow long-range displacements. Understanding human mobility in urban areas is an important and challenging problem due to the fact that millions of people live and interact in big cities [15]. The recent advent of diverse technologies that we use in our daily routines, like mobile phones and GPS, allows the study of urban human mobility in detail [10,11,[16][17][18][19][20][21][22], with many applications in different multidisciplinary fields like epidemic spreading and contagion processes [23][24][25][26], social influence [27] and urban traffic [28,29]. Recently, the connection between social networks and mobility has started to be explored as well [30][31][32][33][34][35][36][37][38]. In this paper, we explore the emergence of encounter networks due to human mobility in cities. There are many different motivations for why people move. Of course, we live in specific locations and we have to move to work on a daily basis during the week.
We also need to move to many other places like banks, shops, markets, bars and restaurants, to visit friends, and so on. Studies of human mobility started to flourish due to the digital traces left by mobile devices and the interaction of people through location-based social networks [18,19,21,22,29,[39][40][41][42][43][44][45]. Some studies have addressed this type of mobility to characterize displacements of people from one location to another [46,47], to identify patterns and routines in visited locations [18,42,48] and to establish statistical properties of the structure of spatial networks that emerge from the interplay between the locations in urban regions and human mobility [44,49]. In addition to spatial mobility and its structure, there are different types of networks associated with the interactions between humans, many of them coupled with spatial translations and inducing a collective dynamics. For example, many of our activities require us to coincide spatially and temporally with people at work, in restaurants, at a party, in a train station, among many other places. Now, whereas different types of networks, in particular social networks, have been studied extensively in the last two decades, how social networks influence human mobility, and vice versa, has been explored only in recent times. The interplay between social networks and mobility has been explored in the context of contact networks [24,33,[50][51][52][53][54], location-based social networks [31,55], face-to-face networks [56] and the spreading of diseases [23,[57][58][59][60]. Here, we analyze the dynamics of multiple agents or walkers visiting specific locations in space, and their co-coincidences (temporal and spatial) at different sites. From this information we obtain a temporal network of encounters, or a contact network.
We introduce a random walker that visits locations with a strategy that combines transitions to nearest sites with long-range displacements; this navigation strategy is inspired by Lévy flights in continuous spaces. We analyze the capacity of this navigation strategy to explore different locations. With this random-walk strategy, we study the collective dynamics of simultaneous non-interacting agents. In this case, previous encounters are considered as a criterion to establish a connection between agents, defining an encounter network that evolves in time. We analyze the temporal evolution of the topology of the network by different methods, and we establish connections between the resulting structure and the mobility of the walkers. We start our analysis by presenting real data for two cities: New York and Tokyo. We apply a similar approach to study the dynamics of two groups of people visiting places like restaurants, gyms and museums, among other specific sites in these two cities. We find that the dynamics of these groups is similar to the random Lévy strategy. Finally, we study the temporal evolution of the encounter networks of humans. We observe how the global dynamics of users gives additional information not captured when only spatial displacements are considered, for example, the emergence of routines. The methods introduced here are general and can be applied to the analysis of different types of dynamics, with applications in human mobility, the spreading of diseases and epidemiology. Encounters in cities: Exploring Tokyo and New York In this section we study the human mobility and collective behavior of people visiting specific locations in two big cities. We use data from Foursquare check-ins explored in [55] for the analysis of spatial-temporal patterns of user activity in location-based social networks.
The dataset is available in [61] and contains check-ins in New York City and Tokyo, collected over about 10 months (from 12 April 2012 to 16 February 2013), for anonymous users visiting locations like restaurants, gyms, bars and universities, among others. It contains 227428 check-ins in New York City and 573703 check-ins in Tokyo. Each user check-in is associated with its time stamp, the GPS latitude and longitude coordinates of the visited location and a brief description of the place [55]. For the case of New York the dataset contains the trajectories of N = 1083 users, and N = 2293 users in Tokyo. In Fig 1 we present the traces of two users in New York and two users in Tokyo; the color of the line connecting two locations refers to the respective check-in reported in the dataset. In order to determine the characteristics of the dynamics followed by people in New York and Tokyo, we study the time τ between two successive check-ins registering the visit of locations and the respective geographical distance r separating them. We show in Fig 2 the results for all the users. In Fig 2(a) we depict the probability distribution P(τ) of times τ. By using the methods described by Clauset et al. in [62] for fitting power laws to empirical data, we establish that the times τ are well described by the probability distribution P(τ) ∝ τ^−γ in the interval τ ≥ τ_min. In particular, for users in New York we have γ = 2.37 and τ_min = 85.5 h; on the other hand, in the city of Tokyo γ = 2.45 and τ_min = 107.7 h. All these values represent the best fit that minimizes the respective Kolmogorov-Smirnov distances [62,63]. It is remarkable that both urban areas have a similar exponent (around 5/2) and coincide without any parameter adjustment. To illustrate this scale invariance in time, we show in Fig 2(c) the time series of time differences between check-ins. Notice that here we are witnessing a clear instance of bursts and heavy tails in human dynamics [64].
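The power-law fitting step described above can be sketched with the standard continuous maximum-likelihood estimator from Clauset et al. This is a minimal illustration on synthetic data, not the authors' actual pipeline; the helper name `fit_power_law` is hypothetical, and the x_min search and Kolmogorov-Smirnov selection used in the paper are omitted for brevity.

```python
import numpy as np

def fit_power_law(samples, xmin):
    """Continuous power-law MLE (Clauset et al.):
    gamma_hat = 1 + n / sum(ln(x_i / xmin)) over the tail x_i >= xmin."""
    x = np.asarray(samples, dtype=float)
    tail = x[x >= xmin]
    n = tail.size
    gamma = 1.0 + n / np.sum(np.log(tail / xmin))
    return gamma, n

# Synthetic check: draw from P(x) ~ x^(-2.4) by inverse-transform sampling.
rng = np.random.default_rng(0)
gamma_true, xmin = 2.4, 1.0
u = rng.random(200_000)
samples = xmin * (1.0 - u) ** (-1.0 / (gamma_true - 1.0))
gamma_hat, n = fit_power_law(samples, xmin)
print(round(gamma_hat, 2))  # close to the true exponent 2.4
```

With a few hundred thousand samples the estimator recovers the exponent to two decimal places, which is why a single fitted value such as γ = 2.37 can be stated with confidence.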
In Fig 2(b), we show the probability distribution P(r) of the distance r between the locations reported in two successive user check-ins. We analyze the entries reported for all the users in order to observe globally the spatial dynamics in each city. In this case, by searching for the best fit of the form P(r) ∝ r^−δ for displacements of length r in the interval 0.001 km ≤ r ≤ 10 km, we obtain the value δ = 1.147 for the New York dataset and δ = 1.150 for displacements registered by users in Tokyo; we apply the same methods implemented for the analysis of P(τ). Again, we obtain the same behavior of the probability distribution P(r) for New York and Tokyo. The distribution follows an inverse power law close to P(r) ∝ r^−1, with an abrupt decay due to finite-size effects. As we will see in the following sections, this dynamics can be obtained with our model and is similar to the probabilities shown in Fig 7(b) for the Monte Carlo simulation with α = 2. This suggests that the displacements of people in big cities have a connection with the Lévy strategy in our model. In what follows, we study the collective dynamics and the encounter network that emerge in these two cities. Since many of the locations in the datasets are places where each user can stay many minutes, for example a restaurant, a library or a museum, we consider it a co-coincidence of users (temporal and geographical) if they register their locations at a distance d ≤ Δr within the same hour. We do not know whether the users are moving together, or even whether they are friends, family or have a relationship. We only know that they visit the same location within a certain window of time. This is the only condition to establish a link in the encounter network. There exist different options to define a co-coincidence depending on the length Δr and also on the time window explored. In Fig 3, we present the frequency f(n) of a number n of co-coincidences in the datasets explored, for different values of the distance Δr.
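The co-coincidence rule above (two check-ins within Δr meters during the same hour) can be sketched as follows. The function names and the toy coordinates are invented for illustration; the paper's actual processing of the Foursquare dataset may differ in detail.

```python
import math
from collections import defaultdict

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS points."""
    R = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def co_coincidences(checkins, delta_r=100.0):
    """checkins: list of (user, hour, lat, lon). Two users co-coincide when
    they check in within delta_r meters during the same hour bucket."""
    by_hour = defaultdict(list)
    for user, hour, lat, lon in checkins:
        by_hour[hour].append((user, lat, lon))
    counts = defaultdict(int)  # (user_i, user_j) -> number of co-coincidences
    for entries in by_hour.values():
        for i in range(len(entries)):
            for j in range(i + 1, len(entries)):
                u, la1, lo1 = entries[i]
                v, la2, lo2 = entries[j]
                if u != v and haversine_m(la1, lo1, la2, lo2) <= delta_r:
                    counts[tuple(sorted((u, v)))] += 1
    return counts

# Toy data: users A and B meet twice about 50 m apart; C is kilometers away.
data = [("A", 0, 40.7300, -73.9900), ("B", 0, 40.73045, -73.9900),
        ("C", 0, 40.8000, -73.9000),
        ("A", 5, 40.7400, -73.9800), ("B", 5, 40.74040, -73.9800)]
res = co_coincidences(data)
print(dict(res))  # {('A', 'B'): 2}
```

Counting pairs per hour bucket keeps the comparison local in time, so the quadratic inner loop only runs over check-ins that could actually coincide.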
We observe that in both cities f(n) maintains similar characteristics for values of Δr in the interval 1 m ≤ Δr ≤ 200 m; in addition, the results are well approximated by the relation f(n) ∝ n^−3. Now, we explore the temporal evolution of the resulting network with N nodes associated to N users, where each link between users is the consequence of previous encounters. The resulting network at time t is described by an adjacency matrix A(t) with entries A_ij(t) = 0 if there are no co-coincidences between users i and j at time t or before. On the other hand, A_ij(t) = 1 reveals at least c coincidences of these two users in the interval of time [0, t]. From the adjacency matrix we can describe the collective dynamics by means of different global quantities, for example the average degree ⟨k(t)⟩ and the average clustering coefficient ⟨C(t)⟩, given by Eqs (7) and (9), respectively (see the Methods section). In Fig 4 we depict the results for the temporal evolution of ⟨k(t)⟩ and ⟨C(t)⟩ for the encounter network when we consider at least c = 1, c = 2 or c = 3 previous coincidences of users visiting specific locations in New York and Tokyo. We use the length Δr = 100 m to define co-coincidence of users (a similar behavior of the resulting temporal networks is observed for the different values of Δr explored in Fig 3). The results reveal the differences between the collective dynamics in these two cities. One important feature that emerges from our approach is that, in the case of New York, the average clustering coefficient of the temporal network reveals a stable configuration after a couple of weeks, with an average clustering coefficient around ⟨C⟩ = 0.1; this behavior persists for months. A similar result is obtained in our model of independent random walkers depicted in Fig 10; however, in that case, the stationary state is a consequence of the finite memory of the walkers.
In the case of real cities, such stationary dynamics can be related to the fact that after some time we tend to encounter the same people at the same places and, therefore, the chance of incorporating new links decreases and the clustering remains almost constant. On the other hand, the dynamics in Tokyo is different, and the clustering coefficient does not reach a stationary value over the period of time registered in the datasets. In addition to the temporal evolution of the system, it is important to analyze the final structure of the encounter network. In Fig 5 we present the structure of the final configuration for the dataset studied in Fig 4. We depict the largest connected component of the resulting network when we consider one, two or three encounters (c = 1, c = 2 and c = 3) to define the network. It is observed how, for the users in New York, the case with c = 3 leads to a structure with a low clustering coefficient. On the other hand, in Tokyo there are more encounters between users and therefore the network acquires more links. In order to analyze the effect of the quantity Δr, in Table 1 we present a detailed analysis of the final encounter networks obtained by using different values of the length that determines the co-coincidence of users, namely Δr = 1 m, 10 m, 20 m, 50 m, 100 m and 200 m. We analyze the structure of the largest component of the final network with the following quantities: number of nodes, number of edges, average degree, average clustering coefficient, diameter and average distance. We observe that in both cities, for the minimum number of encounters c = 1, the largest component of the final configuration contains a high fraction of the users, and these structures have the small-world property for all the values of Δr considered. On the other hand, for Tokyo this property is preserved for c = 2 and c = 3. These results can be seen in Fig 5, where we depict the final configuration obtained for the particular case Δr = 100 m.
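The Table 1 quantities (nodes, edges, average degree, clustering, diameter of the largest connected component) can be computed with a plain BFS sketch like the one below; the function names are hypothetical, and for graphs of the size studied in the paper a dedicated library would be the practical choice.

```python
from collections import defaultdict, deque

def largest_component(edges):
    """Return (node set of the largest connected component, adjacency map)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in comp:
                    comp.add(nb)
                    queue.append(nb)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best, adj

def component_stats(edges):
    """Nodes, edges, average degree, average clustering and diameter of the
    largest connected component (BFS-based; fine for small graphs)."""
    comp, adj = largest_component(edges)
    m = sum(1 for u, v in edges if u in comp and v in comp)
    avg_k = 2.0 * m / len(comp)
    cc = []  # local clustering: fraction of connected neighbor pairs
    for u in comp:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        cc.append(2.0 * links / (k * (k - 1)))
    diam = 0  # diameter via BFS from every node of the component
    for src in comp:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb in comp and nb not in dist:
                    dist[nb] = dist[node] + 1
                    queue.append(nb)
        diam = max(diam, max(dist.values()))
    return len(comp), m, avg_k, sum(cc) / len(cc), diam

# Toy encounter network: a triangle with a pendant node, plus an isolated pair.
stats = component_stats([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"),
                         ("x", "y")])
print(stats)  # (4, 4, 2.0, 0.583..., 2)
```

The isolated pair ("x", "y") is discarded, which mirrors the paper's restriction of Table 1 to the largest component.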
Long-range random walk strategy In this part, we are interested in a navigation strategy, similar to Lévy flights, that allows a walker to randomly visit specific locations in a spatial region. We consider N points randomly located in a 2D plane. We introduce an integer number a = 1, 2, …, N that identifies these different locations. In addition, we know the coordinates of the locations, and we denote by l_ab the distance between the places a and b. In the following, the distance l_ab = l_ba ≥ 0 can be calculated with different metrics; for example, in some cases it may be appropriate to use a Euclidean metric, whereas in other contexts a Manhattan distance could be more useful. We define a discrete-time random walker that at each step visits one of the locations, with the transition probability given by Eq (1), where α and R are positive real parameters.

[Fig 4 caption: the left panels present the average degree ⟨k(t)⟩ and the right panels ⟨C(t)⟩ for the encounter networks obtained considering at least c = 1, c = 2 or c = 3 previous encounters (c is the number of encounters). The results are obtained from the analysis of the datasets in [55,61] and using Eqs (7) and (9). We define a co-coincidence of users if they register their locations at a distance d ≤ 100 m within the same hour. In (c), we show the probability distribution of the degree of the network that emerges in New York and Tokyo; notice that for Tokyo, with c = 1 and c = 2, the degree distribution develops a heavy tail. We use the value Δr = 100 m.]

[Fig 5 caption: see Table 1; the size and color of each node is related to its degree.]

The radius R determines a neighborhood within which the random walker can go from the initial site to any of the locations with equal probability; this transition is independent of the distance between the respective sites.
That is, if there are S sites inside a circle of radius R, the probability of going to any of these sites is simply 1/S. Additionally, for places beyond the local neighborhood, at distances greater than R, the transition probability decays as an inverse power law of the distance and is proportional to l_ab^−α. In this way, the parameter R defines a characteristic length of the local neighborhood, and α controls the capacity of the walker to hop with long-range displacements (see a complete discussion in the Methods section). In particular, in the limit α → ∞ the dynamics becomes local, whereas the case α → 0 gives the possibility to go from one location to any other with the same probability; in this limit, we have w^(0)_{a→b}(R) = N^−1. Our model is then a combination of a rank model [18,19,66] for shorter distances and a gravity-like model for larger ones [17]. In Fig 6(a) we illustrate the model for the random strategy introduced in Eq (1). In Fig 6(b), we present Monte Carlo simulations of the random walker described by Eqs (1) and (2). We generate N random locations (points) on the region [0, 1] × [0, 1] in R^2 and, for different values of the exponent α, we depict the trajectories described by the walkers. In the case α → ∞, it is observed how the dynamics is local and only allows transitions to sites in a neighborhood determined by a radius R = 0.17 around each location. In this case, all the possible trajectories in the limit t → ∞ form a random geometric graph [67,68]; we can identify features of this structure in our simulation. On the other hand, finite values of α model spatial long-range displacements, such as the dynamics illustrated in Fig 6(b) for the case α = 5. We observe how the introduction of the long-range strategy improves the capacity of the random walker to visit and explore more locations in comparison with the local dynamics defined by the limit α → ∞.
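The transition rule can be sketched as follows. Since Eq (1) itself is not reproduced in this extract, the exact normalization is an assumption: here sites within radius R get equal weight 1 and sites beyond it get weight (l/R)^−α, which matches the described limits (α → ∞ gives purely local dynamics, α → 0 gives near-uniform transitions). The function name `transition_probs` is hypothetical.

```python
import numpy as np

def transition_probs(points, a, R, alpha):
    """Transition probabilities from location a to all locations:
    equal weight inside radius R, power-law decay (l/R)^(-alpha) beyond
    (an assumed form; the paper's Eq (1) may normalize differently)."""
    d = np.linalg.norm(points - points[a], axis=1)
    w = np.ones(len(points))          # uniform weight inside the radius R
    far = d > R
    w[far] = (d[far] / R) ** -alpha   # long-range tail beyond R
    w[a] = 0.0                        # no self-transitions
    return w / w.sum()

rng = np.random.default_rng(1)
pts = rng.random((500, 2))            # N random locations on [0, 1] x [0, 1]
p = transition_probs(pts, a=0, R=0.05, alpha=3.0)
nxt = rng.choice(len(pts), p=p)       # one step of the walk
```

In the α → 0 limit every weight equals 1, so the walker jumps to any of the other N − 1 sites with the same probability, as stated in the text.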
In addition, when the number of locations N is large, the trajectories described by the random walker are similar to Lévy flights in a continuum. This connection is illustrated in Fig 7, where we study the navigation strategy given by Eq (1) to visit N = 2000 locations in the plane. In Fig 7(a) we depict one trajectory of the random walker with α = 3, whereas in Fig 7(b) we present the probability P(l) of making a displacement of length l. The results are obtained using Monte Carlo simulations of the random strategy with different values of α. We observe the behavior P(l) ∝ l^(−α+1), characteristic of Lévy flights; however, in the cases explored, this behavior is modified for large l due to the finite size of the domain and the finite number of points. To obtain the scaling observed in Fig 7(b), let us assume that we have an infinite plane and a high constant density of sites. In this case, the probability of finding a site between the circular regions with radii l and l + dl is proportional to 2πl dl. Then P(l)dl ∝ l^−α 2πl dl, and therefore P(l) ∝ l^(−α+1). In order to quantify the capacity of the random walker to visit the N locations in space, we use the time τ^(α)(R), which gives the average number of steps needed to reach any of the N sites, independently of the initial condition (see Eqs (5) and (6)). It is observed how, for α >> 1, different values of R define diverse ways to visit the N sites in the plane; in particular, R << 1 characterizes a local strategy that requires many steps to reach the locations. On the other hand, strategies with α ≤ 1 are optimal, and in this interval the results are independent of the parameter R. The results observed with the aid of the global time τ^(α)(R) suggest that long-range strategies always improve the capacity of the random walker to reach any of the N locations.
The random-walk model introduced by Eq (1) is motivated by the fact that many search strategies in a random environment follow this long-range power-law dependency. The reason is that this Lévy-like strategy is, in general, more efficient than other strategies. That could be the reason why, as mentioned before, this Lévy-flight mode of searching or mobility is used not only by many animal species, but by humans as well [5][6][7][8][9][10][11][12][13][14]. In the context of searching in a changing complex environment, like a city, it turns out that this strategy is also very useful. In our model, we define a local environment with S sites where the probability of choosing any of these sites is the same. Therefore, the probability of visiting one of these places is simply 1/S, and the more sites we have in our vicinity, the less likely it is to visit one particular site. In this sense, our long-range model is similar to the rank model studied by other authors, where the transition probability is inversely proportional to some power of the rank (defined as the number of sites in the neighborhood) [18,19,66]. Outside our local neighborhood, we choose a transition probability that depends on the spatial distance, decaying as a power law, similar to a gravity-like model of migrations [17]. It is worth mentioning that we have recently introduced a Lévy-flight strategy to navigate networks, generalizing previous work, and shown that this strategy is indeed more efficient [69][70][71][72][73][74]. In the following section, we will explore the collective effect of many random walkers and their coincidences in space and time, and the corresponding emergent temporal encounter network. Multiple random walkers In this section we study the simultaneous dynamics of N random walkers, each of them following the strategy described in Eqs (1) and (2) to visit, independently, N fixed locations in space, as described before.
We are interested in the coincidences or encounters of these walkers at different locations. We define a "social network" based on the encounters of the random walkers by using the following criteria. Each random walker moves independently, visiting locations with transition probabilities given by Eq (1). Each walker at time t can remember the coincidences (visits to the same location at the same time) that it has had with other random walkers at time t and at the previous M − 1 steps. In this way, the value M quantifies the memory of each walker to remember previous encounters. The emergent collective dynamics is described by a temporal simple undirected network [75,76] with an N × N adjacency matrix A(t) at time t, with entries A_ij(t) = 1 if at least c encounters between the walkers i and j occur in the temporal interval (t − M, t]. If this condition is not fulfilled, A_ij(t) = 0. In addition, we consider that the dynamics starts at t₀ = 0 and, for t < 0, there are no encounters. With this definition, the adjacency matrix A(t) is symmetric and has binary entries zero or one; A_ii(t) = 0, because we do not consider coincidences of a walker with itself. As an illustration of this process, in Fig 9 we present Monte Carlo simulations of N = 20 simultaneous random walkers visiting 𝒩 = 50 locations on the plane. We choose the memory value M = 20, and the minimum number of encounters is c = 2. In this case, each random walker follows a strategy defined by Eq (1) with parameters α → ∞ and R = 0.25. We depict the paths followed by the random walkers and the corresponding network associated with the encounters at different times. The emergence of new connections is apparent and, for times t > M, some links can vanish as a consequence of the finite memory of the walkers. In the Supporting Information section, we present two videos to illustrate the dynamics of the temporal network. In the S1 Video, we present the complete simulation for the times 0 ≤ t ≤ 100.
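The adjacency rule above translates directly into code. The snippet below is a minimal implementation under our reading of the criterion: walkers i and j are linked at time t if they coincided at the same location at least c times within the last M steps (all names are illustrative).

```python
import numpy as np
from collections import deque

def encounter_adjacency(positions, M=20, c=2):
    """Temporal adjacency matrices A(t) of the encounter network.
    positions[t, i] is the index of the location occupied by walker i
    at step t. A_ij(t) = 1 iff walkers i and j visited the same
    location at the same step at least c times in (t - M, t]."""
    T, N = positions.shape
    window = deque(maxlen=M)          # finite memory of M steps
    A = []
    for t in range(T):
        same = (positions[t][:, None] == positions[t][None, :]).astype(int)
        np.fill_diagonal(same, 0)     # no coincidences of a walker with itself
        window.append(same)
        A.append((sum(window) >= c).astype(int))
    return A
```

With M = 20 and c = 2 this matches the setting of Fig 9: links appear only after a pair has met at least twice within the memory window, and disappear once those encounters fall out of it.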
In the S2 Video, we include a simulation for the case of Lévy flights with α = 5; the locations to visit and the other parameters are the same as in Fig 9. In the following we describe the evolution of the average degree ⟨k(t)⟩ and the average clustering coefficient ⟨C(t)⟩ at time t for the temporal network associated with the encounters of random walkers (see the Methods section for precise definitions). In Fig 10 we plot our findings obtained from Monte Carlo simulations with N = 500 random walkers. We show the ensemble average of the results as a function of time for different values of the parameter α. In this case we observe how the two global quantities ⟨k(t)⟩ and ⟨C(t)⟩ grow, starting from the null value associated with an empty network and evolving towards a stationary state due to the equilibrium between the creation of new links and the removal of connections associated with the finite memory of each walker. It is observed how different types of random-walk strategies lead to different stationary limits. In addition, for all times, increasing the value of α increases the ensemble average of the global quantities ⟨k(t)⟩ and ⟨C(t)⟩. In other words, for the local dynamics (cases with α ≫ 1) the values ⟨k(t)⟩ and ⟨C(t)⟩ are greater than the results obtained for the long-range dynamics. However, this result depends on the quantities that define the system, i.e., the distribution of the 𝒩 locations in space, the parameters R and α, the memory M, and the minimum number of contacts c needed to establish a new link in the network. Once we have studied the temporal evolution of the quantities that describe the dynamics and found a stationary limit, we explore the structure of the network in the limit t → ∞. In Fig 10(c) we show the results for the probability distribution of degrees P(k) in the network of encounters for different values of the exponent (α = 2, α = 5, α = 10).
Discussion

It is important to mention that we are modeling an encounter network of agents without considering the social network that might exist between them. That is, we assume that, at the beginning, they do not know each other, and the social bond that might emerge is due to several coincidences in the same place at the same time. Of course, we are aware that in reality one can coincide in this way with many people without establishing a social bond. That is why it is important to distinguish between an encounter network and a social network, although they are intertwined. On the other hand, it is clear that an established social network of friendship influences mobility. First, we tend to move together with friends, and secondly, we move to meet friends at some location. Thus, encounter networks contain both a real social network of friendship and simply a network of strangers. In any case, for the propagation of diseases, epidemics, behavioral adoption or the diffusion of ideas, the encounter network can be as important as a social network. In short, there is a feedback: mobility generates friends, and friends move together or move to meet friends.

Conclusions

In this paper we explore the connection between human mobility and encounter networks in cities. We analyze real data for two big metropolitan areas: New York City and Tokyo. The data we used are from the location-based social network Foursquare. As a first result, we obtained a probability distribution for the travelled distances of users that decays as an inverse power law and is the same for New York City and Tokyo. Not only that, we obtained a probability distribution for the times between successive check-ins that again follows a power law and is the same in both cities. Secondly, using the data set, we constructed temporal encounter networks for New York City and Tokyo, which we characterize with the average degree and the average clustering coefficient.
One result obtained using these and other quantities is that the encounter network in Tokyo tends to be a small world, whereas the one for New York is more like a big world, at least under some circumstances. These empirical results inspired us to introduce a model that considers multiple random walkers that visit specific locations randomly distributed in space, following a long-range power-law strategy for the transition probability, akin to Lévy flights. We measure the encounters or coincidences in space and time and establish a link between walkers if they coincide several times, generating in this way a temporal encounter network. We characterize this temporal network with global quantities, like the average degree and the average clustering coefficient. There is a qualitative agreement between this model and the empirical data that we used. The encounter network that we analyzed here is related to the social network, since people tend to visit popular places in a city, meeting other people there. If this happens with some frequency, there is a chance that friendship or familiarity emerges between people due to these encounters. There is also the case where people go together to the same place precisely because they are friends; that is, there is a feedback between human mobility and social networks. However, we cannot distinguish this intertwined relationship in our analysis of the data set. Finally, we think that our results can be useful in several fields like epidemics, social influence, contagion models and the diffusion of ideas.

Master equation

In this part we present statistical properties of the random walk strategy defined by Eqs (1) and (2). The temporal evolution is modeled as a discrete-time Markovian process for which the probability p(a, t₀; b, t) of finding the random walker at position b at time t, starting from the site a at time t₀, satisfies the master equation [77] (Eq (3)). Here, the discrete time t = 0, 1, 2,
…, denotes the number of steps or transitions made by the random walker. The Markovian process modeled by Eq (3) can be explored by different methods in order to characterize the dynamics through quantities like the stationary probability distribution and the mean first-passage time, among others [77]. All these quantities can be obtained analytically from the spectral properties of the transition matrix with elements w^(α)_{a→b}(R), by applying the methods presented in [69] for Lévy flights on networks or by considering the process as a random walk on a weighted network [78,79]. For example, due to the symmetry Ω^(α)_{ab}(R) = Ω^(α)_{ba}(R), there is a detailed balance condition relating the probability p(a, t₀; b, t) to the reversed case p(b, t₀; a, t), which allows us to establish that, for finite α, the random walker can reach any of the 𝒩 locations. On the other hand, in the case α → ∞, the dynamics is constrained to transitions from one site to places in the local neighborhood. In this limit, the random walker can be trapped in some regions and never visit all the 𝒩 sites. However, for specific geometries and randomly distributed locations, a minimal value r_c of the radius can be calculated in order to define a strategy with local transitions that can reach any of the locations. In the case of 𝒩 ≫ 1 random locations on the region [0, 1) × [0, 1) in ℝ², with Euclidean distances, all the possible trajectories of the random walker generate a random geometric graph [67,68]; this allows us to find the critical value r_c. In this way, the local strategy α → ∞, with radius R > r_c, can reach any of the 𝒩 sites. Also, from the detailed balance condition we obtain the stationary distribution of the random walker, p^∞_b ≡ lim_{T→∞} (1/T) Σ_{t=0}^{T} p(a, t₀; b, t), which gives the probability of reaching the location b in the limit t → ∞. This quantity is given by Eq (4). The stationary distribution p^∞_b allows us to characterize the dynamics at time t →
∞ and to rank the locations based on the geographical distances. In addition, the average time ⟨T_a⟩ = 1/p^∞_a is an important quantity in the context of Markovian processes; it gives the average number of steps required for the random walker, starting at the location a, to return for the first time to this location [69]. In addition, we are interested in the capacity of each random walker to visit the 𝒩 different locations in space. In order to characterize the dynamics we use the eigenvectors and eigenvalues of the transition matrix W with elements w^(α)_{a→b}(R). The right eigenvectors of this stochastic matrix satisfy W|φ_a⟩ = λ_a|φ_a⟩ for a = 1, …, 𝒩. The corresponding set of eigenvalues is ordered in the form λ₁ = 1 and 1 > λ₂ ≥ … ≥ λ_𝒩 ≥ −1. On the other hand, using the right eigenvectors we define the matrix Z with elements Z_ab = ⟨a|φ_b⟩. This matrix is invertible, and a new set of vectors ⟨φ̄_a| is obtained by means of (Z⁻¹)_ab = ⟨φ̄_a|b⟩. In terms of these eigenvectors and eigenvalues, an approach similar to the methods introduced in [69] allows us to analyze the master equation, Eq (3), to obtain the mean first-passage time and, in particular, the time τ_a (Eq (5)), which gives the average number of steps needed to reach the site a from a randomly chosen initial location. Now, in order to quantify the capacity of the walker to reach the 𝒩 sites, we use the average of the quantity τ_a over all the locations, defined in Eq (6). This global time τ gives the average number of steps needed to reach any of the 𝒩 sites, independently of the initial condition. We denote this quantity as τ_α(R) to emphasize its dependence on the parameters α and R.

Temporal networks

An important quantity in the study of networks is the degree of each node, which gives the number of connections to that node. In the case of temporal networks, the degree k_i(t) of the node i at time t is k_i(t) = Σ_{l=1}^{N} A_il(t).
In terms of this quantity we define the average degree at time t as ⟨k(t)⟩ = (1/N) Σ_{i=1}^{N} k_i(t). Another important quantity used to characterize the topology of networks is the clustering coefficient [1]. This coefficient C_i(t) of the node i at time t quantifies the number of connected pairs of neighbors Δ_i(t) of the node i with respect to the maximum number of such connections, given by k_i(t)(k_i(t) − 1)/2. In terms of the adjacency matrix we have, for k_i(t) ≥ 2 [1], C_i(t) = 2Δ_i(t)/(k_i(t)(k_i(t) − 1)); otherwise C_i(t) = 0. Here Δ_i(t) = [A³(t)]_ii/2, with A³(t) = A(t)A(t)A(t). In addition, the average clustering coefficient at time t is given by ⟨C(t)⟩ = (1/N) Σ_{i=1}^{N} C_i(t). In this way, for each time t, we can calculate the adjacency matrix A(t) and obtain the global quantities ⟨k(t)⟩ and ⟨C(t)⟩ that describe the structure of the corresponding temporal network.

Supporting information

S1 Video. Monte Carlo simulation of N = 20 simultaneous random walkers visiting 𝒩 = 50 locations on the plane, represented by stars. Each random walker visits the locations independently, with a strategy determined by the transition probability w^(α)_{a→b}(R) in Eq (1) with parameters α → ∞ and R = 0.25. We choose the parameter M, characterizing the memory of the walker, as M = 20, and the minimum number of encounters to establish a link is two (c = 2). In the left panel we depict the trajectories followed by the walkers, and in the right panel we plot the respective encounter network for the discrete times 0 ≤ t ≤ 100. (AVI)

S2 Video. Monte Carlo simulation of N = 20 simultaneous random walkers visiting 𝒩 = 50 locations on the plane, represented by stars. In this case we depict the dynamics with α = 5, which defines a navigation strategy with long-range transitions. The rest of the parameters are the same as in the S1 Video. (AVI)
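The two global descriptors can be computed directly from a snapshot A(t). A minimal sketch following the definitions above (the function name is ours):

```python
import numpy as np

def degree_and_clustering(A):
    """Average degree <k(t)> and average clustering <C(t)> for one
    snapshot A = A(t) of the temporal encounter network."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                    # degrees k_i(t)
    tri = np.diag(A @ A @ A) / 2.0       # Delta_i(t) = [A^3(t)]_ii / 2
    C = np.zeros_like(k)
    mask = k >= 2                        # C_i(t) = 0 when k_i(t) < 2
    C[mask] = 2.0 * tri[mask] / (k[mask] * (k[mask] - 1.0))
    return k.mean(), C.mean()
```

For a triangle of three mutually linked walkers this returns ⟨k⟩ = 2 and ⟨C⟩ = 1, as expected for a fully clustered snapshot.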
Robustness-Aware Word Embedding Improves Certified Robustness to Adversarial Word Substitutions : Natural Language Processing (NLP) models have gained great success on clean texts, but they are known to be vulnerable to adversarial examples, typically crafted by synonym substitutions. In this paper, we target this problem and find that word embedding is important to the certified robustness of NLP models. Given the findings, we propose the Embedding Interval Bound Constraint (EIBC) triplet loss to train robustness-aware word embeddings for better certified robustness. We optimize the EIBC triplet loss to reduce distances between synonyms in the embedding space, which is theoretically proven to make the verification boundary tighter. Meanwhile, we enlarge distances among non-synonyms, maintaining the semantic representation of word embeddings. Our method is conceptually simple and componentized. It can be easily combined with IBP training and improves the certified robust accuracy from 76.73% to 84.78% on the IMDB dataset. Experiments demonstrate that our method outperforms various state-of-the-art certified defense baselines and generalizes well to unseen substitutions. The code is available at https://github.com/JHL-HUST/EIBC-IBP/.

Introduction

Deep neural networks have achieved impressive performance on many NLP tasks (Devlin et al., 2019; Kim, 2014). However, they are known to be brittle to adversarial examples: the model performance can drop dramatically when imperceptible crafted perturbations, especially synonym substitutions, are applied to the input text. These phenomena have been observed in a wide range of practical applications (Alzantot et al., 2018; Ren et al., 2019; Wallace et al., 2019; Zang et al., 2020; Maheshwary et al., 2021; Meng and Wattenhofer, 2020; Yu et al., 2022).
To mitigate the vulnerability of NLP models, many adversarial defense methods have been proposed to boost model robustness from various perspectives, such as adversarial training (Wang et al., 2021b; Dong et al., 2021; Li et al., 2021; Si et al., 2021), advanced training strategies (Liu et al., 2022), input transformation (Wang et al., 2021a), and robust word embedding (Yang et al., 2022). However, these methods can only provide empirical robustness, i.e., the robust accuracy of these models varies depending on the heuristic search used in the attacks. In contrast, certified robustness guarantees that a model is robust to all adversarial perturbations of a given input, regardless of the attacks used for evaluation. Certified robustness provides a lower bound on the robust accuracy of a model in the face of various adversarial attacks. In this work, we aim to design better training methods for certified robustness. In particular, our algorithm is mainly based on Interval Bound Propagation (IBP). IBP was initially designed for images (Gowal et al., 2019) and is also utilized to provide certified robustness for NLP models (Huang et al., 2019; Jia et al., 2019). In the first step, we compute the interval in embedding space covering all possible texts obtained from the current input by word substitutions, where the embedding layer is fixed using commonly used word embeddings such as GloVe (Pennington et al., 2014). Then, in the second step, given the pre-computed interval, IBP is used to estimate the upper and lower bounds of the output layer by layer and to minimize the worst-case performance, in order to achieve certified robustness. However, previous works on the IBP method (Huang et al., 2019; Jia et al., 2019) use fixed word embeddings, and we argue that this may not be good enough for certified robustness. As shown in the experiments of Huang et al.
(2019), the embedding space significantly impacts the IBP bounds and the effectiveness of IBP training. Though close neighbor words in the embedding space are selected for the synonym set, the volume of the convex hull constructed by them is still large for IBP training, which leads to loose bounds through propagation and a poor robustness guarantee. Inspired by the above observation, in this work we develop a new loss to train robustness-aware word embeddings for higher certified robustness. We first decompose certified robust accuracy into robustness and standard accuracy. We optimize for robustness from the perspective of an embedding constraint, and optimize for standard accuracy by training the model normally. It can be proved that the upper bound of certified robustness can be optimized by reducing the interval of the convex hull constructed by synonyms in the embedding space. Therefore, we propose a new loss called the Embedding Interval Bound Constraint (EIBC) triplet loss. Specifically, given a word, on each dimension of the embedding space we aim to reduce the maximum distance between the word and its synonyms, which in effect shrinks the interval of the convex hull formed by the synonyms. Then, we freeze the embedding layer after training the word embeddings with the EIBC triplet loss, and train the model by normal training or IBP training to achieve higher certified robust accuracy. Extensive experiments on several benchmark datasets demonstrate that EIBC boosts the certified robust accuracy of models. Especially when EIBC is combined with IBP training, we achieve state-of-the-art performance among advanced certified defense methods. For instance, on the IMDB dataset, EIBC combined with IBP training achieves 84.78% certified robust accuracy, surpassing IBP by about 8%, which indicates that constraining the embedding interval bound significantly boosts the performance of IBP. Our main contributions are as follows.
• We prove theoretically that the upper bound of certified robustness can be optimized through reducing the interval of the convex hull formed by synonyms in the embedding space.

(Alzantot et al., 2018; Ren et al., 2019; Ivgi and Berant, 2021; Si et al., 2021). A stream of work aims to improve the effectiveness and efficiency of textual adversarial training by adversary generation based on gradient optimization (Wang et al., 2021b; Dong et al., 2021; Li et al., 2021). (Ishida et al., 2020) to guide the model into a smooth parameter landscape that leads to better adversarial robustness. Besides, adversarial detection methods detect adversarial examples before feeding the input samples to models, by training a classifier (Zhou et al., 2019) or by randomized substitution (Wang et al., 2022). However, these methods can only provide empirical robustness, which is unstable across attacks based on different heuristic searches. Certified robustness is proposed to guarantee that a model is robust to all adversarial perturbations of any given input. Interval Bound Propagation (IBP) calculates the input interval involving all possible word substitutions and propagates the upper and lower bounds through the network, then minimizes the worst-case loss that any combination of the word substitutions may cause (Jia et al., 2019; Huang et al., 2019). Randomized smoothing methods, such as SAFER (Ye et al., 2020) and RanMASK (Zeng et al., 2021), mask a random portion of the words in the input text to construct an ensemble and utilize the statistical properties of the ensemble to predict the output. Zhao et al. (2022) propose Causal Intervention by Semantic Smoothing (CISS), which associates causal intervention with randomized smoothing in a latent semantic space to make provably robust predictions.
Most previous works do not attach importance to word embeddings with respect to certified robustness. Our work introduces the EIBC triplet loss to achieve certified robustness by constraining word embeddings and incorporates it into IBP to boost certified robustness. In the field of adversarial images, Shi et al. (2021) improve the IBP training method by mitigating the issues of exploded bounds at initialization and the imbalance in ReLU activation states. It is worth noting that our work differs from Shi et al. (2021): we particularly focus on reducing the difference between the upper and lower bounds of the initial inputs by fine-tuning the embeddings. The reduction of the bound interval provably tightens the bounds in the subsequent propagation.

Preliminaries

For the text classification task, a model f : X → Y predicts label y ∈ Y given a textual input x ∈ X, where x = ⟨x_1, x_2, …, x_N⟩ is a sequence consisting of N words, and the output space Y = {y_1, y_2, …, y_C} contains C classes. In this paper, we focus on an adversarial scenario in which any word in the textual input can be arbitrarily replaced by its synonyms so as to change the model's prediction. Formally, we use S(x_i) to denote the synonym set of the i-th word x_i of input x. Then, in Eq. (1) we formulate the set B_adv(x) consisting of all the adversarial examples with allowed perturbations of x. Our goal is to defend against the adversarial word substitutions and train models with certified robustness, i.e., such that Eq. (2) holds. If Eq. (2) holds and the model classifies the instance correctly, that is, y = y_true, then we call the model prediction on input x certified.
We can easily decompose certified robust accuracy into robustness and standard accuracy. Robustness concerns whether the model prediction is consistent under perturbations. Clearly, achieving robustness is a necessary condition for obtaining models with high certified robust accuracy. We then illustrate the conditions to be satisfied for robustness in terms of interval bounds. For a K-layer neural network, assuming we can calculate element-wise lower and upper bounds on the output logits z^K over all the perturbed inputs x′ ∈ B_adv(x), a model is robust if the lower bound of the model's largest logit exceeds the upper bounds of all the other logits, as stated in Eq. (3). To evaluate the model's certified robust accuracy, we just need to replace the model's largest logit with the logit of the true class z^K_ytrue in Eq. (3).

Interval Bound Propagation

IBP provides the solution to estimate the interval bound layer by layer. We can represent a K-layer neural network model as a series of transformations f_k (e.g., linear transformation, ReLU activation function), where z^k is the vector of activations in the k-th layer.
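The robustness condition just stated reduces to a simple comparison once the output bounds are available. A minimal sketch in the spirit of Eq. (3) (the function and argument names are ours):

```python
import numpy as np

def is_certified(z_lower, z_upper, y_true):
    """Prediction on x is certified robust iff the lower bound of the
    true-class logit exceeds the upper bound of every other logit,
    for bounds taken over all perturbed inputs in B_adv(x)."""
    others = np.delete(np.asarray(z_upper), y_true)
    return bool(z_lower[y_true] > others.max())
```

Certified robust accuracy then counts the test inputs for which this check passes with the true label, which is the quantity reported in the experiments.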
To calculate the interval bound of the output logits, we need to construct the interval bound of the input vector and propagate it through the network. Let φ(x_i) ∈ R^D denote the embedding vector of word x_i with D dimensions; the word vector input z^0 is formed from the vectors φ(x_i). We obtain the interval bounds of z^0 by constructing the convex hull of S(x_i) in the embedding space: on each dimension j, the lower and upper bounds are the element-wise minimum and maximum of φ(·)_j over the synonym set, where φ(x_i)_j is the j-th element of the word vector of word x_i. Similarly, for subsequent layers k > 0, the lower and upper bounds of the activations z^k can be obtained from the bounds of the previous layer z^{k−1}.

Methodology

In this section, we first theoretically demonstrate the influence of the word embedding on model robustness, then introduce the proposed EIBC triplet loss to optimize the word embedding, and finally describe how to incorporate EIBC into the training process.

Word Embedding Matters Robustness

Previous works on the IBP method (Huang et al., 2019; Jia et al., 2019) use fixed word embeddings. As illustrated in Figure 1, IBP constructs an axis-aligned box around the convex hull constructed by synonyms in the embedding space. As stated in Huang et al. (2019), since synonyms may be far away from each other, the interval of the axis-aligned box can be large. By propagating the interval bounds through the network, the bounds become too loose to satisfy the certified conditions. To be concrete, based on Eq. (3), training a model with certified robustness is an optimization problem, formulated in Eq. (7). We propose the following theorem to demonstrate that minimizing the objective in Eq. (7) can be converted to an optimization objective with respect to the word embeddings by backpropagating the interval bounds through the network. We provide the proof in Appendix A. Theorem 1. The upper bound on the solution of Eq.
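The input-interval construction described above can be sketched as follows, assuming word vectors stored in a dict; the axis-aligned box over each word's allowed substitution set gives the element-wise bounds (names are illustrative, not the paper's code).

```python
import numpy as np

def input_interval_bounds(sentence, emb, synonyms):
    """Element-wise lower/upper bounds of the word-vector input z0:
    for each word, the axis-aligned box enclosing the word vector and
    the vectors of its allowed synonym substitutions."""
    lower, upper = [], []
    for w in sentence:
        vecs = np.stack([emb[s] for s in [w] + synonyms.get(w, [])])
        lower.append(vecs.min(axis=0))   # per-dimension minimum
        upper.append(vecs.max(axis=0))   # per-dimension maximum
    return np.stack(lower), np.stack(upper)
```

These bounds are the starting point of IBP: the box is then propagated layer by layer, and distant synonyms directly enlarge it, which is the looseness the EIBC loss targets.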
(7) is given by minimizing an element-wise worst case over the synonym intervals in the embedding space, where max(•) and | • | are element-wise operators. Theorem 1 suggests that we can approach certified robustness by reducing the interval of the convex hull constructed by synonyms in the embedding space.

Robustness-Aware Word Embedding

Based on Theorem 1, we attach importance to word embeddings and propose the Embedding Interval Bound Constraint (EIBC) triplet loss to train robustness-aware word embeddings that achieve higher certified robustness while maintaining their representation capability for classification.

Figure 1: EIBC triplet loss reduces the area of the axis-aligned box formed by synonyms and meanwhile holds the distance between the word and its non-synonyms in the embedding space. x_i is a word with its synonyms, and x_j, x_k as its non-synonyms. The dashed line represents the bound of the convex hull constructed by synonyms.

We measure the interval of the convex hull constructed by the synonyms of word x_i in the embedding space by the distance d_bound(x_i, S(x_i)) defined in Eq. (9), where ∥ • ∥_p denotes the p-norm. According to Theorem 1, certified robustness can be optimized by minimizing d_bound(x_i, S(x_i)) for each word x_i in the input sequence x. Meanwhile, non-synonyms may be connected through chains of synonym pairs, and simply reducing the distance between synonyms will also reduce the distance between non-synonyms. To prevent all words from being drawn close to each other, which would hurt semantic representation, we also control the distances between words and their non-synonyms. Inspired by FTML (Yang et al., 2022), we adopt triplet metric learning to reduce the interval of the convex hull constructed by synonyms and, simultaneously, increase the distance between words and their non-synonyms. Consistent with Eq. (9), we also use the p-norm distance of word vectors in the embedding space as the distance metric between two words x_a and x_b. In this work, we adopt the Manhattan distance, i.e., p = 1, and provide an analysis of different p-norms in Section 5.7.
Finally, we design the EIBC triplet loss for each word x_i as in Eq. (11), where S(x_i) denotes the synonym set of word x_i, and N(M) denotes a set containing M words randomly sampled from the vocabulary. We set M to be the same as the maximum size of the synonym set of a word, to maintain the duality of the maximization and minimization problems. Note that the purpose of increasing the distance between words and their non-synonyms is to prevent them from getting too close and losing semantic representation, not to increase their distance indefinitely. Thus we set a scalar hyperparameter α to ensure that non-synonyms are no longer pushed away once their distance exceeds α. We minimize L_EIBC(x_i, S(x_i), N(M)) to reduce the interval of the convex hull shaped by word x_i and its synonyms (positive samples) and to maintain the distances between x_i and its non-synonyms (negative samples) in the embedding space. Figure 1 illustrates the effect of the EIBC triplet loss: in the embedding space, the interval of the convex hull constructed by the synonyms of word x_i is reduced, while the distances between x_i and its non-synonyms x_j, x_k are maintained.

Overall Training Process

As described in Section 3, we decompose certified robust accuracy into two parts: certified robustness and standard accuracy. We utilize the proposed EIBC triplet loss to achieve certified robustness from the perspective of word embeddings, and optimize for standard accuracy by training the model normally.
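Eq. (11) itself is not reproduced in this excerpt, so the following is only a plausible sketch of such a triplet objective under the description above: a positive term that shrinks the worst-case distance to the synonyms, and a hinge term with margin α that stops pushing non-synonyms once they are far enough. Every detail beyond that description is our assumption, not the paper's exact loss.

```python
import numpy as np

def eibc_like_loss(word, syns, negs, alpha=10.0, p=1):
    """Sketch of an EIBC-style triplet loss for one word (not the
    paper's exact Eq. (11)). 'word' is the word vector, 'syns' the
    synonym vectors (positives), 'negs' sampled non-synonym vectors."""
    # positive term: worst-case p-norm distance to a synonym,
    # a proxy for the interval of the synonym convex hull
    pos = max(np.linalg.norm(word - s, ord=p) for s in syns)
    # negative term: hinge that stops pushing once distance > alpha
    neg = sum(max(0.0, alpha - np.linalg.norm(word - n, ord=p))
              for n in negs)
    return pos + neg / len(negs)
```

With p = 1 this uses the Manhattan distance adopted in the paper; minimizing the positive term tightens the axis-aligned box used by IBP, while the hinge keeps non-synonyms at least α apart to preserve semantics.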
In the first part, we use the EIBC triplet loss to fine-tune pretrained word embeddings, e.g., GloVe word embeddings (Pennington et al., 2014), to obtain robust word embeddings. To apply L_EIBC to each word of input x in the embedding space, we sum up the L_EIBC of each word and take the mean value as our final loss L_emb to train the word embeddings. In the second part, since our EIBC method merely provides the word embedding with certified robustness, and is componentized, we can combine it with various training methods to boost certified robust accuracy. Specifically, we freeze the embedding layer trained by the EIBC triplet loss and train the model with the normal cross-entropy loss, or with the IBP training method (Jia et al., 2019), towards higher certified robust accuracy. The loss of IBP training combines L_CE, the normal cross-entropy loss, and L_IBP, the IBP loss (we give a brief description of the IBP loss in Appendix B). A scalar hyperparameter β governs the relative weight between robustness and standard accuracy. The IBP loss uses ϵ to control the size of the perturbation space, with ϵ = 1 corresponding to the original size. To maintain the balance between robustness and standard accuracy during training, the IBP training method gradually increases β and ϵ from 0 to 1. With the help of EIBC, we can reduce the number of training epochs to half of the original IBP training method.

Experiments

This section evaluates the proposed method against three advanced certified defense methods on three benchmark datasets. In addition, we further study EIBC with respect to generalization to unseen word substitutions, empirical robustness, the trade-off between clean and robust accuracy, the training procedure, and robustness with different distance metrics.
Experimental Setup

Tasks and Datasets. We focus on evaluating certified robustness against adversarial word substitutions. Aligned with previous works (Jia et al., 2019; Ye et al., 2020; Zhao et al., 2022), we evaluate the proposed method on three benchmark datasets for the text classification task: IMDB (Maas et al., 2011), YELP (Shen et al., 2017), and SST-2 (Wang et al., 2019).

Baselines. We compare our proposed method with IBP (Jia et al., 2019), SAFER (Ye et al., 2020) and CISS (Zhao et al., 2022). We use the models with the best results for the baselines. We also make our own implementation of the IBP method on the TextCNN model (Kim, 2014), tuning the schedule and hyperparameters based on certified robust accuracy; its performance is better than that reported in Jia et al. (2019).

Perturbation Setting. Following previous work, we use the same synonym substitutions as Jia et al. (2019) and Zhao et al. (2022), which were initially defined in Alzantot et al. (2018). The synonyms of each word are defined as the n = 8 nearest neighbors satisfying cosine similarity ≥ 0.8 in the GloVe embedding space (Pennington et al., 2014) processed by counter-fitting (Mrksic et al., 2016).

Model Setting. Jia et al. (2019) adopt a simple CNN model with a filter size of 3 and a hidden size of 100, termed CNN in the experiments. We adopt a TextCNN model (Kim, 2014) with three filter sizes (2, 3, 4) and a hidden size of 200, termed TextCNN. Following Jia et al. (2019), we place a linear layer before the CNN layers of the models to further control the shape of the convex hull constructed by synonyms. We study the impact of different architectures in Appendix C.3.

Implementation Details. We use the default train/test split for the IMDB and YELP datasets. For SST-2, we use the default training set and take the development set as the testing set. For the generalization of EIBC, we set the hyperparameter α = 10.0 in Eq.
(11) for all experiments. Analyses of the impact of α are discussed in Section 5.5. For the EIBC+Normal training method, we first use our EIBC triplet loss to train the word embeddings for 20 epochs, and then use the cross-entropy loss to train the model for only 1 epoch, because further unconstrained normal training leads to a decline in certified accuracy, as shown in Section 5.6. For the EIBC+IBP training method, we use the EIBC triplet loss to train the word embeddings and the IBP training method to train the model simultaneously, with half the epochs of the original IBP method. We provide more implementation details in Appendix C.

Main Results

We combine the proposed EIBC with normal training and IBP training, respectively, to boost the certified robustness. We then compare them with three state-of-the-art baselines, IBP, SAFER, and CISS, in terms of certified robust accuracy against word substitutions. As seen in Table 1, EIBC incorporated with normal training already achieves certified robustness to a certain extent without any other defense technique. In particular, on the YELP dataset it attains 89.51% certified robust accuracy, performing significantly better than SAFER and IBP. Moreover, EIBC combined with IBP training achieves dominant certified robustness on all datasets with clear margins. For instance, it achieves 84.78% certified robust accuracy on the IMDB dataset, surpassing the original IBP on the TextCNN model by about 8%. This indicates that the tight embedding bounds obtained from EIBC considerably boost the performance of IBP. It is worth noting that although EIBC combined with IBP training is implemented on simple CNN architectures, it achieves higher certified robust accuracy than SAFER and CISS, which are based on large-scale pre-trained BERT models (Devlin et al., 2019), suggesting the superiority and lightness of our approach.
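As a side note on the perturbation setting used in these experiments (each word's synonyms are its n = 8 nearest neighbours with cosine similarity ≥ 0.8), the synonym-set construction can be sketched as follows. This is our own illustration, operating on a toy embedding table rather than the counter-fitted GloVe vectors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def synonyms(word, embeddings, n=8, threshold=0.8):
    """Return up to the n nearest neighbours of `word` whose cosine
    similarity in the embedding space is at least `threshold`."""
    sims = [(cosine(embeddings[word], vec), w)
            for w, vec in embeddings.items() if w != word]
    sims.sort(reverse=True)  # most similar first
    return [w for s, w in sims[:n] if s >= threshold]
```

For example, with `emb = {"good": [1.0, 0.0], "great": [0.9, 0.1], "bad": [-1.0, 0.0]}`, `synonyms("good", emb)` keeps "great" and rejects "bad".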
Generalization to Unseen Substitutions

Defense methods generally assume that the synonym lists used by attackers are known, which is an ideal assumption. To study the generalization of our method to unseen word substitutions, we use only part of the word substitutions to train the model and all of the word substitutions for robust evaluation. Specifically, for each word with n synonyms, we randomly select ⌈γn⌉ of its synonyms (0 < γ ≤ 1) for training, where γ controls the proportion of word substitutions seen during training. We then observe the certified robust accuracy under word substitutions based on the entire synonym sets. Figure 2 shows the certified robust accuracy for different γ. The performance of IBP decreases rapidly as γ declines, but EIBC combined with normal training is relatively stable, indicating that EIBC generalizes remarkably well to unseen word substitutions. It also suggests that the improvement stemming from the word embeddings generalizes better under unseen word substitutions than that from other parts of the model. Furthermore, EIBC combined with IBP training achieves the best certified robust accuracy in most cases.

Empirical Robustness

We utilize the Genetic Attack (GA) (Alzantot et al., 2018) to investigate the empirical robustness of our method. GA generates a population of perturbed texts by random substitutions, then searches and updates the population with a genetic algorithm. Following Jia et al. (2019), we set the population size to 60 and run 40 search iterations on 1,000 testing samples randomly drawn from each dataset. As shown in Table 2, without any defense technique, the genetic attack can dramatically mislead the normally trained model, degrading its accuracy to 8.0% on the IMDB dataset and 40.5% on the YELP dataset. Among all the defense baselines, our proposed method exhibits better performance with a clear margin under GA.

Clean Accuracy versus Robust Accuracy

In Eq.
(11), our EIBC triplet loss uses the hyperparameter α to control the distance between words and their non-synonyms, preserving the semantic representation capability of the word embeddings. We use clean accuracy to denote the accuracy (%) on clean testing data without any perturbation, and robust accuracy to denote the certified robust accuracy (%) against word substitutions. We observe the trade-off between clean accuracy and robust accuracy controlled by α. As depicted in Figure 3, when α is low, the distances between any pair of words are small, which harms the semantic representation of the word vectors and leads to low clean accuracy. Meanwhile, the interval of the convex hull constructed by synonyms is also small; thus, the output bounds are tight, and the gap between robust accuracy and clean accuracy is reduced. Furthermore, as α approaches 0, the term pushing away non-synonyms in the EIBC triplet loss becomes ineffective. The sharp decline in clean accuracy in this case demonstrates the importance of pushing away non-synonyms. As α grows, the distance between words and their non-synonyms gradually increases, ensuring better semantic representation and higher clean accuracy. However, a further increase of α enlarges the interval of the convex hull formed by synonyms and hinders the robust accuracy.

Training Procedure

To investigate how the word embeddings pretrained with EIBC improve the training process, Figure 4 illustrates how the certified robust accuracy changes over the course of training for IBP, EIBC with normal training, and EIBC with IBP training.
With loose interval bounds, the certified robust accuracy of IBP increases slowly during training, finally reaching a relatively low certified guarantee. For EIBC combined with normal training, since the word embeddings trained by EIBC already provide the model with initial certified robustness, the model needs only one epoch of normal training to achieve a certified robust accuracy slightly lower than IBP. However, further normal training without constraints leads to a decline in certified robust accuracy. Combining EIBC with IBP training achieves the best certified robust accuracy with half the epochs of IBP. These results suggest that tightening the word embeddings with EIBC boosts the certified robustness and accelerates the training process of IBP.

Analysis on Distance Metric

We explore the effect of different l_p-norm distance metrics in Eqs. (9) and (10), namely Manhattan distance (p = 1), Euclidean distance (p = 2), and Chebyshev distance (p = ∞). Table 3 presents the results of models trained by EIBC combined with IBP training on the IMDB and YELP datasets. EIBC with Euclidean distance achieves robustness competitive with EIBC with Manhattan distance. The performance of the Euclidean and Manhattan distances is relatively close on the two datasets because both constrain the bound on each dimension of the embedding space. In contrast, the Chebyshev distance is the least effective, as it constrains only one dimension, which is inefficient.
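The three distance metrics compared above can be computed with a single helper; a minimal sketch (the function name is ours):

```python
def lp_distance(u, v, p):
    """l_p distance between two vectors: p = 1 (Manhattan),
    p = 2 (Euclidean), or p = float('inf') (Chebyshev)."""
    diffs = [abs(a - b) for a, b in zip(u, v)]
    if p == float("inf"):
        return max(diffs)  # Chebyshev: only the largest dimension matters
    return sum(d ** p for d in diffs) ** (1.0 / p)
```

The Chebyshev case makes the inefficiency noted above concrete: only one coordinate of the difference vector contributes to the value.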
Conclusion

In this work, we attach importance to word embeddings and prove that certified robustness can be improved by reducing the interval of the convex hull constructed by synonyms in the embedding space. We introduce a novel loss, termed the Embedding Interval Bound Constraint (EIBC) triplet loss, to constrain the convex hull. Since EIBC merely provides the word embeddings with certified robustness, which is componentized, we can incorporate EIBC into normal training or IBP training to boost the certified robust accuracy. Experiments on three benchmark datasets show that EIBC combined with IBP training achieves much higher certified robust accuracy than various state-of-the-art defense methods. EIBC also exhibits good generalization to unseen word substitutions. In future work, we will study how to incorporate EIBC with other certified defense methods. Moreover, we will apply the proposed method to transformer-based models and extend the research to defend against character-level or sentence-level perturbations. An essential difference between image and text data is that text data is discrete and must be transformed into continuous word vectors by word embeddings. The tightened bounds of the word embeddings obtained with EIBC boost the certified robustness of IBP, a typical example indicating that word embeddings are vital to the robustness of NLP models. We hope our work inspires more studies on the robustness of NLP models enhanced through word embeddings.

Limitations

As pointed out by Shi et al.
(2020), applying IBP techniques to large-scale pre-trained BERT models is challenging because the bound propagation through the attention layers is relatively loose. Since BERT is currently one of the most popular architectures in NLP, a limitation of the proposed method is that, combined with IBP training, it does not generalize to BERT architectures. However, it is worth noting that the proposed method based on TextCNN architectures achieves better certified robustness than the advanced baselines SAFER and CISS, which are based on BERT. Besides, this paper focuses on enhancing the model's robustness to word substitutions and does not investigate robustness to character-level or sentence-level perturbations.

A Proof of Theorem 1

In Theorem 1, minimizing the objective in Eq. (7) is converted into an optimization objective with respect to the word embeddings. We prove the theorem in two steps. First, we prove that the upper-bound solution of the optimization objective in Eq. (7) is to minimize the maximum gap between the model's logits and their bounds. Second, we convert the optimization of the gap into an optimization objective with respect to the word embeddings by back-propagating the interval bounds.

Lemma 1. The upper bound on the solution of Eq. (7) is given by Eq. (14), where max(·) and |·| are element-wise operators.

Proof of Lemma 1. For a fixed model, the term concerned is a constant. Therefore, the optimization objective in Eq. (7) is equivalent to Eq. (15). Besides, we have the upper-bound relationship in Eq. (16). Then, based on Eq. (15) and Eq. (16), we can derive that Eq. (14) is the upper bound on the solution of Eq. (7). □

Bound Backpropagation. We back-propagate the interval bounds from the output logits to the embedding space through the network, layer by layer.
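As an illustrative toy implementation (ours, not the authors' code) of the interval machinery used in Appendices A and B: bounds are pushed through an affine layer in centre/radius form and through a monotonic activation, and the final logit bounds yield the worst-case logits used for certification.

```python
def affine_bounds(lower, upper, W, b):
    """Propagate [lower, upper] through z' = W z + b using the
    centre/radius form: mu' = W mu + b, r' = |W| r."""
    mu = [(l + u) / 2.0 for l, u in zip(lower, upper)]
    r = [(u - l) / 2.0 for l, u in zip(lower, upper)]
    mu_out = [sum(wi[j] * mu[j] for j in range(len(mu))) + bi
              for wi, bi in zip(W, b)]
    r_out = [sum(abs(wi[j]) * r[j] for j in range(len(r))) for wi in W]
    return ([m - s for m, s in zip(mu_out, r_out)],
            [m + s for m, s in zip(mu_out, r_out)])

def monotonic_bounds(lower, upper, h):
    """Element-wise monotonically increasing activation: bounds pass through."""
    return [h(l) for l in lower], [h(u) for u in upper]

def worst_case_logits(lower, upper, y):
    """Adversarial worst case: upper bound for every wrong class,
    lower bound for the true class y."""
    return [lower[j] if j == y else upper[j] for j in range(len(lower))]
```

For instance, propagating the box [-1, 1] x [0, 2] through z' = z0 - z1 + 0.5 gives the interval [-2.5, 1.5]; applying ReLU then clips it to [0, 1.5].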
Assuming we have already obtained the interval bounds of layer k + 1, we need to calculate the bounds of the previous layer k. We mainly deal with two cases:

• For an affine transformation, denoted by z^{k+1} = W z^k + b, we have the corresponding backward bound relation, where |·| is the element-wise absolute value operator.

• For an element-wise monotonic activation function (e.g., ReLU, tanh, sigmoid), denoted by z^{k+1} = h(z^k), we have the corresponding relation, where C_a is the Lipschitz constant of the activation function.

For z^0 ∈ R^{N×D}, we use max*(·) to denote the max operator over each dimension of the embedding space, so that max*(z^0) ∈ R^D. With the bound backpropagation, we obtain bounds in terms of C_1 and C_2, which are calculated by interval bound backpropagation and are constant matrices for a fixed model. We can then derive the upper bound of the optimization objective in Eq. (14). According to Eq. (5), we can construct the upper bound on the solution of Eq. (20). Based on Lemma 1, we can derive that Eq. (22) is the upper bound on the solution of Eq. (7). □

B Interval Bound Propagation

Here we give a brief description of Interval Bound Propagation (IBP) (Gowal et al., 2018; Jia et al., 2019), covering its bound propagation and its training loss.

Bound Propagation. For Eq. (6), IBP provides corresponding calculation methods for affine layers and monotonic activation functions:

• For the affine transformation, denoted by z^{k+1} = W z^k + b, we have

$\underline{z}^{k+1} = W\mu^k + b - |W|r^k, \quad \overline{z}^{k+1} = W\mu^k + b + |W|r^k$,

with center $\mu^k = (\overline{z}^k + \underline{z}^k)/2$ and radius $r^k = (\overline{z}^k - \underline{z}^k)/2$, where |·| is the element-wise absolute value operator.

• For the element-wise monotonic activation function (e.g., ReLU, tanh, sigmoid), denoted by z^{k+1} = h(z^k), we have $\underline{z}^{k+1} = h(\underline{z}^k)$ and $\overline{z}^{k+1} = h(\overline{z}^k)$.

IBP Loss. For the interval bounds calculated by Eq. (5), the IBP method scales them with a scalar ϵ. Using bound propagation, we can obtain the lower and upper bounds of the logits under the scalar ϵ, $\underline{z}^K(\epsilon)$ and $\overline{z}^K(\epsilon)$, respectively. Similar to Eq.
(3), we can obtain the worst-case logits and use them to construct the IBP loss

$\mathcal{L}_{IBP} = \mathcal{L}_{CE}(z^K_{worst}(\epsilon), y)$,

where L_CE is the cross-entropy loss and $z^K_{worst}(\epsilon)$ denotes the worst-case logits, taking the upper bound $\overline{z}^K_j(\epsilon)$ for every class $j \neq y$ and the lower bound $\underline{z}^K_y(\epsilon)$ for the true class $y$. The IBP loss is then combined with the normal cross-entropy loss to train the model and boost the certified robust accuracy:

$\mathcal{L} = (1-\beta)\mathcal{L}_{CE} + \beta\mathcal{L}_{IBP}$.

C More Experimental Details

C.2 Detailed Setup

For the EIBC+Normal training method, we divide the overall training process into two steps. In the first step, we use the EIBC triplet loss to fine-tune the pretrained word embeddings, namely the GloVe word embeddings (Pennington et al., 2014). We use a constant learning rate for the first e^emb_1 epochs and a cosine-decay learning rate schedule for the last e^emb_2 epochs to decrease the learning rate to 0. In the second step, we freeze the embedding layer and use the normal cross-entropy loss to train the model for e^model epochs. For the EIBC+IBP training method, we use the EIBC triplet loss to train the word embeddings and the IBP training method to train the model simultaneously. We use a constant learning rate for the first e_1 epochs and a cosine-decay learning rate schedule for the last e_2 epochs to decrease the learning rate to 0. For implementing the IBP training method, following Jia et al. (2019), we use a linear warmup over ϵ and β in the first e_1 epochs, from ϵ_start to ϵ_end and from β_start to β_end, respectively. All experiments are run five times on a single NVIDIA RTX 3090 GPU, and the median of the results is reported. We provide the details of the EIBC+Normal training and EIBC+IBP training methods in Table 4 and Table 5, respectively. Our implementation of the IBP training method follows the original settings described in Jia et al. (2019), except for a few differences:

• We do not use early stopping, but instead use a cosine-decay learning rate schedule to stabilize the training process.

• Jia et al.
(2019) remove the words that are not in the vocabulary of the counter-fitted GloVe word-embedding space (Mrksic et al., 2016) from the input text data. However, some datasets, such as YELP, contain short text samples, and such a pre-processing approach can leave no words at all. We instead retain all the words that appear in the vocabulary of the original GloVe word embeddings, which is a much larger vocabulary. We also show the model performance on the IMDB dataset under the two pre-processing approaches; the results are in Table 6.

• We set β_end to 1.0 instead of 0.8, towards higher certified robust accuracy.

C.3 Robustness on Different Architectures
Figure 2: The certified robust accuracy (%) against unseen word substitutions on the IMDB and SST-2 datasets with different γ. The methods are implemented on TextCNN models.

Figure 3: The impact of the hyperparameter α on the trade-off between clean accuracy and robust accuracy of EIBC with normal training on the IMDB dataset.

Table 1: The certified robust accuracy (%) against word substitutions on the IMDB, YELP and SST-2 datasets. All models are trained and evaluated using the word substitutions from Jia et al. (2019) as the perturbations for a fair comparison. Ye et al. (2020) and Zhao et al. (2022) do not report their results on the SST-2 dataset. † Results are obtained from Zhao et al. (2022). * Our implementation.

Table 2: The empirical robust accuracy (%) against the genetic attack on the IMDB and YELP datasets. The methods are implemented on TextCNN models.
Table 3: The certified robust accuracy (%) of models trained by the EIBC+IBP training method using different distance metrics on the IMDB and YELP datasets.

Table 6: The certified robust accuracy (%) against word substitutions on the IMDB dataset with different vocabularies. The methods are implemented on TextCNN models. CF denotes the vocabulary of the counter-fitted word embeddings.

Table 7: The certified robust accuracy (%) of models with different architectures and defense methods on the IMDB dataset.

We implement IBP, EIBC with normal training, and EIBC with IBP training on two architectures, i.e., CNN and TextCNN. As shown in Table 7, with the same architecture, EIBC combined with IBP training performs better than IBP on both the CNN and TextCNN models. With the same training method, the models based on the TextCNN architecture perform better than those based on the CNN architecture, because TextCNN is more complex.
Satellite and ground-based sensors for the urban heat island analysis in the city of Rome (Remote Sensing). In this work, the trend of the Urban Heat Island (UHI) of Rome is analyzed using both ground-based weather stations and a satellite-based infrared sensor. First, we developed a suitable algorithm employing satellite brightness temperatures to estimate the air temperature of the layer of air closest to the surface. The spatial characteristics of the UHI have been assessed using air temperatures measured by weather stations and brightness-temperature maps from the Advanced Along-Track Scanning Radiometer (AATSR) on board the ENVISAT polar-orbiting satellite. In total, 634 daytime and nighttime scenes taken between 2003 and 2006 were processed. Analysis of the Canopy Layer Heat Island (CLHI) during the summer months reveals a mean growth in magnitude of 3-4 K during nighttime and a negative or almost zero CLHI intensity during daytime, confirmed by the weather stations.

Introduction

Since the early 1960s, numerous satellite sensors have been launched into orbit to observe and monitor the Earth and its environment. Over the years, technologies have improved considerably and the number of satellite missions has increased. Among the several remote sensing applications, a relatively new one is the study of urban areas, in which the surface temperature is of primary importance to the study of urban climatology. Over the last century, the world has witnessed a huge growth in its population; almost all of the predicted world population growth over the next 30 years will be concentrated in urban areas. This intense and relatively fast-paced increase in urban population changes the characteristics of the Earth's surface and atmosphere.
Anthropogenic activities induce changes in the physical characteristics of the surface (albedo, thermal capacity, heat conductivity, moisture) and have significant implications for the local energy budget [1]. The removal of natural land cover and the introduction of artificial materials, such as concrete and asphalt, modify the surface energy balance, resulting in an increase in surface temperature; this creates an increase in sensible heat flux and a resultant rise in air temperature. Heat retention by artificial surfaces impacts the natural energy balance and can worsen existing air pollution conditions. Thus, as cities add more roads, buildings, industries and people, temperatures in downtown areas become much higher than temperatures in the rural surroundings, creating the Urban Heat Island (UHI) phenomenon [2]. The UHI quantitatively describes the increased temperature of either the urban surface or the urban atmosphere compared to its rural surroundings. The temporal and spatial characteristics of the UHI vary with changes in local urban form and function. Local meteorological conditions and geography (topography, presence of water bodies such as lakes or rivers, soil types, etc.) also affect the magnitude of a UHI. Population, as a surrogate measure of the density of urban living, was also originally linked to UHI intensity [3]. The Canopy Layer Heat Island (CLHI) and the Boundary Layer Heat Island (BLHI) refer to a warming of the urban atmosphere, whereas the Surface Heat Island (SHI) refers to a warming of the surface. The urban canopy layer is the layer of air closest to the surface in cities, extending upwards to approximately the mean building height. Above the canopy layer lies the urban boundary layer, which may be 1 km or more in thickness at daytime, shrinking to hundreds of meters or less at night. These components are depicted in Figure 1.
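The UHI intensity discussed above is, at its simplest, the difference between urban air temperature and a rural reference. A minimal sketch (function and variable names are ours, not from the paper):

```python
def uhi_intensity(urban_temps, rural_temp):
    """Canopy-layer UHI intensity: mean urban air temperature
    minus the rural reference temperature (same units, e.g. deg C)."""
    return sum(urban_temps) / len(urban_temps) - rural_temp
```

For example, urban station readings of 24, 25 and 26 deg C against a rural reference of 22 deg C give an intensity of 3 K.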
The CLHI is typically detected by ground stations using thermometers that measure air temperature in the canopy layer, whereas thermal remote sensors observe the surface heat island or, more specifically, the spatial patterns of upwelling thermal radiance received by the detector, which is used to estimate the surface temperature [4]. UHIs have long been studied through ground-based observations taken from fixed thermometer networks. An alternative method uses infrared radiometry from aircraft or satellite platforms. An advantage of satellite data over ground-based observations is that they provide more spatially representative measurements of surface temperature over large areas of cities. The conceptual framework of employing satellite remote sensing data for surface UHI assessment was originally introduced by Rao [5]. Inspired by his work, research was pursued to understand the energetic basis of this phenomenon through field observations and through numerical and scale modeling of the energy exchanges of urban and rural environments [1]. To extend this body of work, UHIs were studied in several cities, such as Houston [6], Seoul [7], Los Angeles and Paris [8]. Streutker [6] studied the UHI using nighttime NOAA (US National Oceanic and Atmospheric Administration) AVHRR (Advanced Very High Resolution Radiometer) thermal data to produce surface temperature maps of the city of Houston for 21 selected dates covering a two-year period (1998-1999). He found the surface UHI intensity ranging from 1.06 to 4.25 °C, depending on season and weather conditions.
Lee [7] examined the potential of using NOAA-AVHRR thermal data to map the pattern of brightness temperature distribution in order to study surface urban heat islands in the Seoul metropolitan area. He located the warm areas during daytime, associated with business activities and with industrial and densely residential districts. Lee found that the AVHRR-derived brightness temperatures were highly correlated with the ground surface temperature and the surface air temperature; he therefore supported the potential of utilizing AVHRR data to retrieve the air and ground surface temperature fields in a city to evaluate the urban heat island. Dousset and Gourmelon [8] studied the summertime microclimate of the Los Angeles and Paris metropolitan areas by combining satellite multi-sensor data with in situ air quality measurements in a Geographic Information System (GIS) platform. They used NOAA-AVHRR thermal data corresponding to various times of day to produce images of average surface temperature from which statistics were extracted. In addition, they used SPOT (Satellite Pour l'Observation de la Terre) HRV (High Resolution Visible) visible and near-infrared measurements at a 20-m spatial resolution to estimate the land cover classification for the cities. The study revealed a strong UHI of 7 °C for Paris at night and a "negative" surface UHI during the day, where commercial/industrial and airport regions as well as dense suburbs displayed higher surface temperatures than downtown Paris. For the city of Los Angeles, the statistics of the diurnal cycle of Land Surface Temperature (LST) derived from the average images highlighted a strong heat island in the range of 7.5-9 °C during the day and a weaker heat island in the range of 2-5 °C at night, which was attributed to the influence of the Pacific Ocean. Hung et al.
[9] utilized TERRA-MODIS (Moderate Resolution Imaging Spectroradiometer) image data to assess the UHI of eight mega-cities in Asia. They used both daytime and nighttime MODIS data acquired over the 2001-2003 period to produce surface temperature maps for the eight cities at 1-km spatial resolution. The diurnal and seasonal patterns of the satellite-derived UHIs revealed that all cities exhibited significant heat islands. The correlation between heat islands and surface properties, as well as the relationship between UHI magnitude and city population, was also determined. In this paper, we study the spatial and temporal evolution of the canopy layer UHI in the city of Rome, combining air temperature observations from the meteorological urban sensor network with both daytime and nighttime data from the satellite-based sensor AATSR (Advanced Along-Track Scanning Radiometer) to produce temperature maps.

Study Area

The urban area of this study is Rome, Italy, a city of 3.7 million people. It is located in the central-western portion of the Italian Peninsula, in a valley enclosed by the Apennine Mountains: the Tolfa Mountains lie to the north, the Sabatini Mountains to the east and the Colli Albani to the south. Figure 2(a) shows the area investigated and the locations of the weather stations employed in this study. The city of Rome covers an overall area of about 1,285 km², enclosed by the motorway orbital road "Grande Raccordo Anulare" (orange ring). Its altitude ranges from 13 to 120 m above mean sea level. Rome enjoys a typical Mediterranean climate, characterized by relatively mild winters and hot summers. The climate is fairly comfortable from April through June and from mid-September to October. In August, the temperature during the heat of the day often exceeds 32 °C; the average high temperature in December is about 14 °C, while subzero lows are not uncommon [10].
In the years after the Second World War, Rome underwent the most significant urban growth in its history. The population grew from 1,500,000 to 2,800,000 in approximately 25 years, from 1945 to 1971, owing to a huge flux of immigration. Between 1951 and 1971, this rise consequently increased the number of houses from 320,000 to 870,000, as sketched in Figure 2(b). During these years the city grew chaotically with little town planning, which resulted in deep modifications to the landscape within the built areas and the nearby suburbs. In fact, Rome is composed of an elaborate urban lattice with buildings of varying heights (from 4 to 12 storeys); this implies multiple surfaces for the reflection and absorption of sunlight, increasing the efficiency with which urban areas can be heated. This is called the canyon effect. In addition, buildings tend to block the wind, inhibiting cooling by convection [4].

Satellite-Based Sensors: AATSR and MERIS

In this work, we used observations provided by the Advanced Along-Track Scanning Radiometer (AATSR) and the MEdium Resolution Imaging Spectrometer (MERIS) [11], on board the ENVISAT satellite. ENVISAT, launched in 2002, is the biggest Earth remote sensing project of the European Space Agency (ESA). It flies in a sun-synchronous polar orbit at roughly 800-km altitude and provides complete coverage of the globe every three days.
The Advanced Along-Track Scanning Radiometer was designed primarily to study Sea Surface Temperature (SST) with high accuracy, but it has also been successfully used for land, atmosphere, cloud and cryosphere applications [12]. The AATSR is a multispectral radiometer providing data products with a resolution of 1 km at nadir, derived from measurements at seven different wavelengths in visible and infrared channels (in the range 0.55 μm to 12 μm). A special technique allows observations of each surface location from two different angles: at 55 degrees from vertical (the forward view) and at an angle close to vertical (the nadir view). The swath width is constant at 500 km [11]. In this work, level 1b products ATS_TO_1P, comprising gridded brightness temperature and reflectance (GBTR) images, were acquired from ESA. These images provide calibrated and geolocated brightness temperatures from all three infrared channels, reflectances from the near-infrared and visible channels, and information on whether the surface is land or ocean [13]. MERIS is a passive, programmable imaging spectrometer operating in the solar reflective spectral range. Fifteen spectral bands can be selected by ground command in the range 390 nm to 1,040 nm, with variable bandwidth from 1.25 nm to 30 nm; observations are therefore nominally limited to the day side of the Earth. The instrument scans the Earth's surface by the 'push broom' method (68.5-degree field of view) with a 1,150-km swath width [11]. The main objective of MERIS is the study of ocean color and the role of the ocean in the climate system. In addition, it is used for atmospheric monitoring to detect cloud properties, water vapor and aerosols for land applications [14]. Two product resolutions are available: full resolution (FR, about 300 m) and reduced resolution (RR, about 1 km). In this work we used MERIS level 2b RR geophysical products (MER_RR_2P), which provide cloud mask information at a resolution of 1.2 km [15].
Both sensors (AATSR and MERIS) offer a synergistic potential that contributes to climate studies and global change observations, addressing environmental features in a multi-disciplinary way. Since they are co-located on the same platform, they observe almost the same scene at the same time.

Ground-Based Sensors: Meteorological Weather Stations

Ground-based meteorological weather stations, managed by ARPA Lazio (Regional Agency for Environmental Protection), were used to measure air temperature in the Rome area (http://www.arpalazio.net/main/). The stations are placed about 2 m above the ground and are equipped with an anemometer, thermometer, hygrometer, barometer, rain gauge and pyranometer; measurements from these sensors are collected every hour, archived and quality-checked by ARPA Lazio. The thermometer (Vaisala HMP 45C) has a measurement range of −35 °C to +60 °C with an accuracy of about ±0.2 °C. Details on station locations and data availability are provided in the next section.

Data from AATSR and MERIS

MERIS level 2b and AATSR level 1b data were provided by ESA, selecting pixels inside a circle with a radius of 20 km centered on the center of Rome at coordinates 41°53′N 12°29′E. MERIS was used for cloud detection, identifiable by a flag parameter in the MERIS data (if the cloud type is equal to zero, the pixel is cloudless), whereas the AATSR sensor was used to estimate the air temperature from the brightness temperatures of the AATSR 11 μm and 12 μm channels over Rome between 2003 and 2006, as reported in Table 1. Only data from the AATSR nadir view were considered, in order to reduce the impact of the atmosphere on the measurements. In fact, observations made in the forward view are more susceptible to atmospheric scattering and absorption than those in the nadir view, because the path length is approximately twice that of the nadir view [16].
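The MERIS-based cloud screening described above (keep a pixel only when its cloud-type flag equals zero) can be sketched as follows. The record layout here is our own illustration, not the actual MER_RR_2P format.

```python
def cloudless_pixels(pixels):
    """Keep only pixels flagged cloud-free (cloud_type == 0), as used to
    screen AATSR brightness temperatures before air-temperature estimation.
    Each pixel is a dict with illustrative keys 'cloud_type' and 'bt11'."""
    return [p for p in pixels if p["cloud_type"] == 0]
```

A usage example: filtering two co-registered pixels, one clear and one cloudy, retains only the clear 11-μm brightness temperature.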
Since both AATSR and MERIS share the same platform, the data are spatially and temporally co-registered. This allows products to be processed by combining the thermal channels of AATSR and the multi-spectral information of MERIS [17].

Data from Meteorological Weather Stations

Data from ground-based meteorological weather stations were collected in order to directly measure air temperature in the canopy layer using thermometers. Table 2 reports the eight stations employed in this study, provided by ARPA Lazio, and the time period of data availability. The temporal resolution of the temperature data is one hour. As displayed in Figure 2(a), the urban ground stations are all situated around the center of Rome and are all built upon an asphalt surface, except for the Villa Ada station, which is situated at the edge of a park (the Villa Ada park). For the rural area, data were collected at the Pratica di Mare station (the only one available as rural from ARPA Lazio). Nevertheless, this station can be considered an acceptable representative of the flat rural area surrounding Rome.

Literature Formulation

Surface temperature and canopy layer temperature are important factors controlling most physical, chemical and biological processes on Earth. Knowledge of these geophysical parameters is necessary for many environmental studies and for the management of Earth surface resources. Different approaches have been published in recent years to retrieve sea and land surface temperature (SST and LST) from satellite-derived radiances [18][19][20][21][22]. Among these methods, the two-channel or split-window algorithms have been the most commonly used [18,21,23]. The split-window technique exploits the different atmospheric attenuation suffered by the surface-emitted radiance within the atmospheric window 10-12.5 μm [18].
This implies that the split-window algorithms take advantage of the differential absorption in two close infrared channels to correct for the atmospheric effects, describing the surface temperature as a linear combination of the brightness temperatures measured in both thermal channels. The general formula for the split-window algorithms is:

T_s = a_0 + a_1 T_1 + a_2 T_2    (1)

where T_s is the LST and T_i (i = 1, 2) are the brightness temperatures of two close thermal infrared channels; the coefficients a_i depend on the atmospheric state and on the surface emissivity, and they are chosen in order to minimize the error in the LST determination. Numerous studies have been carried out to estimate these coefficients over sea and land surfaces; however, fixed values are sometimes used, imposing significant errors on the results [24]. In this framework, an objective of the present paper is to find the most appropriate split-window algorithm from the literature to study the UHI in Rome, or to propose a more suitable optimization of the split-window technique for the area under investigation. Three different split-window formulations for LST found in the literature have been analyzed and compared with ground data. These are hereafter named Price [21], Ulivieri [23] and Prata & Platt [25]. Although these algorithms were originally implemented with data from the AVHRR on board the NOAA polar orbiting satellites, in this work they have been tested using the brightness temperatures from the thermal infrared AATSR channels (hereafter T_11 and T_12), i.e., channels 11 and 12 centered at 10.8 μm and 12 μm with a bandwidth of 1 μm. It is important to emphasize that the original AVHRR channels employed in the algorithm formulations, i.e., channels 4 and 5, have the same central frequencies as the AATSR channels, as well as the same bandwidths.
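As a minimal sketch, the linear split-window form T_s = a_0 + a_1 T_11 + a_2 T_12 can be applied to brightness-temperature arrays as follows; the coefficient values in the test are placeholders, not those of the Price, Ulivieri, or Prata & Platt formulations.

```python
import numpy as np

def split_window_lst(t11, t12, a0, a1, a2):
    """Generic linear split-window estimate: T_s = a0 + a1*T11 + a2*T12.
    t11, t12: brightness temperatures (K) from two close thermal channels.
    a0, a1, a2: algorithm-specific coefficients (placeholders here)."""
    t11 = np.asarray(t11, dtype=float)
    t12 = np.asarray(t12, dtype=float)
    return a0 + a1 * t11 + a2 * t12
```

The three literature algorithms differ only in how the coefficients are derived (and in their emissivity dependence), so they can all be evaluated through this one function.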
With reference to the algorithms reported in Table 3, we assumed the surface emissivities as ε_11 = 0.95 and ε_12 = 0.96, typical mean values for urban areas [26][27][28][29], where again the subscripts refer to channels 11 and 12 of AATSR. The resultant land surface temperature has been compared with the air temperature from the Cinecittà ground weather station by selecting the AATSR pixel covering the weather station location. The ENVISAT satellite passes over Rome at around 9:00-10:00 and 20:00-21:00 UTC. To perform this comparison only cloud-free data were used, since the presence of clouds causes a decrease in the brightness temperature [27]. The results are presented in Table 3, employing AATSR brightness temperatures and weather station air temperatures for the period reported in Table 2 for the Cinecittà station. The three dissimilar root mean square (rms) differences between the estimated T_s and the measured air temperature at 2 m above the ground yield an uncertainty in the evaluation of the capability of these algorithms to estimate urban surface temperature, and therefore an uncertainty in their applicability for the Rome area. The last column of Table 3 is the rms difference between the estimated T_s and the measured air temperature in Rome.

New Formulation

In order to employ a method for the evaluation of the UHI in Rome with less uncertainty, we have developed a local and more suitable algorithm for the estimation of the CLHI. This method exploits the availability of air temperature data from ground-based weather stations in the study area.
In general, there are two approaches to the problem of determining surface temperature using the split-window channels. The first assumes that the effects due to land and atmosphere can be decoupled, and the method is then to separate the surface effects (emissivity) from the atmospheric effects (water vapor). The second approach accepts that the surface and atmosphere are coupled, and the aim is then to solve the problem without taking explicit account of either emissivity or water vapor, accounting for their effects simultaneously. The difficulty with the first approach is that an estimation of the emissivity must be provided and validated; this requires global surface spectral emissivity information that is, unfortunately, currently unavailable [29]. In this work, we have chosen to develop an algorithm for the estimation of the air temperature close to the surface (canopy layer) following the second approach. We perform a multiple linear regression with pixel-by-pixel cloud-free, calibrated day and night brightness temperatures T_11 and T_12 observed by channels 11 and 12 of AATSR in the nadir view. The linear regression formula implemented to retrieve the air temperature T_a very close to the surface from T_11 and T_12 is:

T_a = a_0 + a_1 T_11 + a_2 T_12    (2)

where a_0, a_1, a_2 are the regression coefficients, depending simultaneously on atmospheric water vapor and land surface emissivity. Equation (2) was solved as an Ordinary Least Squares regression in which the brightness temperatures and the air temperatures play the role of predictors and response variables, respectively.
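The Ordinary Least Squares solution of a model of the form T_a = a_0 + a_1 T_11 + a_2 T_12 can be sketched with a standard least-squares solver; the arrays here are hypothetical matched samples, not the actual 2003-2005 data set.

```python
import numpy as np

def fit_regression(t11, t12, t_air):
    """OLS fit of T_a = a0 + a1*T11 + a2*T12.
    t11, t12, t_air: 1-D arrays of matched samples (hypothetical here).
    Returns the regression coefficients (a0, a1, a2)."""
    t11 = np.asarray(t11, dtype=float)
    t12 = np.asarray(t12, dtype=float)
    X = np.column_stack([np.ones_like(t11), t11, t12])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(t_air, dtype=float), rcond=None)
    return coeffs
```

Separate coefficient sets would be fitted for day/night and urban/rural subsets of the matched samples.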
In order to study the tendency of UHIs, different coefficients for day and night, as well as for urban and rural areas, were computed exploiting the data set of Tables 1 and 2; these are summarized in Table 4. In particular, a data set of cloud-free brightness temperatures (T_11 and T_12) and the corresponding T_a, matched in time and space from the eight weather stations, was selected from 2003 to 2005 for the computation of the regression coefficients (1,809 data samples). In order to match satellite-based and ground-based data, we used the AATSR pixels covering the weather station locations, selecting the air temperatures from the ground stations corresponding to the time closest to that of the satellite passes, with a maximum time difference of less than 30 minutes.

Table 4. Regression coefficients for the estimation of air temperature T_a in the Rome area (canopy layer) using AATSR brightness temperatures (channels 11 and 12). The satellite passes are at around 9:00-10:00 and 20:00-21:00 UTC.

A validation test of the proposed approach was then performed using the set of independent data of T_11, T_12 and T_a belonging to 2006 (663 samples). For this independent test, the estimation of T_a using Equation (2) showed accuracies in terms of rms error of about 3 K during the day, with better accuracies (about 2 K) during the night, when differential surface heating is absent.

Results and Discussion

UHI studies are generally conducted in one of two ways: measuring the UHI in air temperature through the use of automobile transects and weather station networks, or measuring the UHI in surface (or skin) temperature through the use of airborne or satellite remote sensing. In situ data have the advantage of a high temporal resolution and a long data record, but have poor spatial resolution. Conversely, remotely-sensed data have a higher spatial distribution but low temporal resolution and a shorter data record.
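The independent validation step (held-out 2006 samples, rms error of about 3 K by day and 2 K by night) amounts to computing the rms difference between predicted and measured air temperatures; a minimal sketch:

```python
import numpy as np

def rms_error(predicted, observed):
    """Root-mean-square difference between predicted and observed T_a (K)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))
```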
In this paper, we have analyzed the UHI using both weather stations and satellite remote sensing, with the satellite maps targeted to the estimation of the air temperature in the canopy layer.

UHI Analysis from Weather Stations

A frequently used metric to describe the degree of development of the UHI is the heat island intensity, ΔT_u-r. This is the difference in temperature between urban and rural locations within a given time period. The temperature measurements in the urban area were recorded by seven weather stations, all situated in or near the center of Rome, whereas the temperature measurements for the rural site were retrieved from a weather station named Pratica di Mare (Table 2). In order to take advantage of the high temporal resolution of the in situ data, since each station records the air temperature every hour, the daily averaged trend of UHI intensity for each month was computed. The trend was obtained by subtracting the averaged monthly rural air temperature from the averaged monthly urban air temperature, for each year (2003-2006). Figure 3 reports this trend measured during 2003 by the Rome weather stations for the months corresponding to the higher UHI. A similar tendency was found for the other years. This figure shows the CLHI intensity progressively increasing from midday and reaching a maximum (around 5 K) a few hours later, remaining high through the night until the predawn hours, when levels begin to fall. During the day the CLHI is typically fairly weak or even negative (a cool island), probably due to areas that are shaded by tall buildings or other structures. The green line shows a negative CLHI intensity for the Villa Ada station, which is situated at the edge of a park. In fact, vegetation provides important shading effects as well as cooling through evaporation.
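The daily averaged CLHI trend described above (hourly urban-minus-rural means) can be sketched as follows; the input arrays are hypothetical hourly records, not the ARPA Lazio data.

```python
import numpy as np

def daily_uhi_trend(urban_t, rural_t, hours):
    """Hourly-averaged CLHI intensity dT_u-r = mean(urban) - mean(rural).
    urban_t, rural_t: air temperature samples (hypothetical hourly records).
    hours: hour of day (0-23) for each sample. Returns a 24-element array;
    hours with no samples are left as NaN."""
    urban_t = np.asarray(urban_t, dtype=float)
    rural_t = np.asarray(rural_t, dtype=float)
    hours = np.asarray(hours)
    trend = np.full(24, np.nan)
    for h in range(24):
        mask = hours == h
        if mask.any():
            trend[h] = urban_t[mask].mean() - rural_t[mask].mean()
    return trend
```

Positive values indicate a heat island at that hour; negative values a cool island, as observed for the park-side Villa Ada station.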
The CLHI analysis from ground station measurements points out how seasonal variations affect the heat island magnitude, with weak intensity in winter and a maximum during summer, when the amount of incoming solar radiation is greatest and artificial surfaces such as asphalt tend to warm faster than those of the surrounding rural areas, acting as a reservoir of heat energy. The weather also plays an important role, particularly wind and clouds. In fact, heat island magnitudes are largest under calm and clear weather conditions. Increasing winds mix the air and reduce the heat island, and increasing clouds reduce radiative cooling at night, which in turn reduces the heat island [2]. Comparing the CLHI behavior with the SHI, from the literature we can infer that the SHI is usually most distinct during the day, when strong solar heating can lead to larger temperature differences between dry surfaces and wet, shaded, or vegetated surfaces [2,22].

UHI Analysis from AATSR

The potential of analyzing the UHI with satellite-based sensors lies in the availability of maps with good spatial distribution and resolution (1 km or better). Detailed maps of the air temperatures observed in cities are currently not a routine part of the urban weather monitoring services provided by city authorities and by the meteorological offices. Furthermore, ground station networks are not homogeneously distributed and, in some locations, do not exist at all. Therefore, Earth Observation satellites, with their high spatial resolution, offer a means to characterize UHIs and, in general, to provide more efficient urban meteorological services for the benefit of citizens.
An advantage of studying the UHI is that the quantity of interest is not the absolute urban temperature, but the difference in temperature between the urban and rural areas. Some sources of systematic error in the temperature retrieval (radiometric calibration errors, variations in surface emissivity [21]) are thus partially removed in the differencing procedure [6]. In this section, the CLHI across Rome was analyzed by means of the AATSR data. In total, 634 daytime and nighttime scenes taken between 2003 and 2006 were processed, at around 9:00-10:00 and 20:00-21:00 UTC. Through Equation (2), the air temperature was retrieved in order to study the magnitude of the UHI pixel-by-pixel during day and night. The intensity of the CLHI, ΔT_u-r, was then obtained by subtracting the averaged monthly rural air temperature from the averaged monthly urban air temperature in clear-sky conditions; both were estimated by applying the regression coefficients of Table 4. As an example, the results for ΔT_u-r during day and night in June, July and August 2003 are presented in Figure 4, where the main roads of Rome are reported in black (the city is chiefly located inside the motorway orbital road). With reference to Figure 3, a ΔT_u-r of about 3-4 K is expected during the night (around 21:00 UTC) and a negative or almost zero CLHI intensity during the daytime (around 9:00 UTC). The monthly maps of CLHI intensity from AATSR in Figure 4 confirm this behavior. It should be noted that during the daytime the surface has not yet been warmed and the main contributions to heating come from anthropogenic sources such as traffic, which is more intense in the morning along the main roads, in particular along the South-East of the Rome ring road (Grande Raccordo Anulare). During the winter months, the heating contribution of the main roads is detectable but much less intense, as expected. During the night, as expected, the UHI is particularly significant. A greater nighttime ΔT_u-r was also found during July 2003, in addition to
the fact that July 2003 was extremely hot. Moreover, the nighttime maps reveal a greater ΔT_u-r in the center and the eastern side of Rome: this can be ascribed both to the presence of the more densely built-up areas of the city, as sketched in Figure 2, and to a typical evening breeze from the sea on the western side (named "Ponentino"), which plays an important role in mitigating the CLHI intensity in the West, since heat island magnitudes are largest under calm weather conditions. The UHI behavior noticed in the monthly maps is also confirmed by analyzing a map for a specific day: as an example, ΔT_u-r for 20 August 2003 is shown in Figure 5, where an intense heat island was detected. The processing of satellite data for UHI intensity detection is therefore able to monitor not only the additional, season-dependent heating effects of the main roads with intense traffic and of the particularly densely built-up areas, but also slight cooling effects due to particular weather phenomena. With cities becoming more densely populated and with a growing recourse to artificial surfaces, this remote sensing application proves extremely useful (e.g., for health-related issues).

Conclusions

In this work, the UHI of Rome was analyzed using both ground-based weather stations and the satellite-based AATSR sensor. First, UHI spatial characteristics were assessed using air temperatures measured by the weather stations; then, UHI maps were produced using brightness temperatures from AATSR. The satellite-derived maps confirmed, during the summer months, an average CLHI intensity of about 3-4 K during nighttime and a negative or almost zero CLHI intensity during daytime. The UHI intensity reaches a maximum during summer, when the amount of incoming solar radiation is greatest, because artificial surfaces such as asphalt tend to warm more intensively than those of the surrounding rural areas, acting as a reservoir of heat energy.
During the morning, the urban surface warming is mainly due to traffic, which is more intense along the main roads, while during the night the UHI is evident especially in the center and the eastern side of Rome, which are also the more densely built-up areas of the city.

Figure 1. Schematic of the main components of the urban atmosphere.
Figure 2. (a) Study area of Rome with the employed weather stations. Urban weather stations are marked A-G, and the rural station with a green star. (b) The periods of significant growth of Rome, starting from top: 1951, 1964, 1977, 1991.
Figure 3. Daily average trend of the UHI intensity ΔT_u-r measured by the seven meteorological stations across Rome over six months (2003). The red and blue circles are the positive and negative CLHI intensity, respectively. The green line is the Villa Ada trend.
Table 1. ENVISAT satellite sensors: data set available.
Table 2. Meteorological weather stations of ARPA Lazio and time period of data availability.
Cytocompatibility evaluation of hydroxyapatite/collagen composites doped with Zn2+

The cytocompatibility of synthetic hydroxyapatite/collagen composites, alone or doped with Zn2+, was tested using a primary culture of osteoblasts. The hydroxyapatite (HAP) was synthesized with calcium hydroxide and orthophosphoric acid as precursors. A new HAP composite was developed by adding 1.05 wt% of Zn(NO3)2·6H2O, forming HAPZn. Pure type I collagen (COL) was obtained from bovine pericardium by an enzymatic digestion method. The HAP/COL and HAPZn/COL composites were developed and characterized by SEM/EDS. Cell viability and alkaline phosphatase activity in the presence of the composites were evaluated by the MTT assay and the NBT-BCIP assay, respectively, and compared to control osteoblastic cells. Three individual experiments were performed in triplicate and submitted to analysis of variance and Bonferroni's post-test, with statistical significance set at p<0.05. The HAPZn/COL composite did not stimulate proliferation or an increase in the alkaline phosphatase activity of the osteoblastic cells. The tested composites did not alter cellular viability nor cause alterations in cellular morphology over 72 h, showing adequate properties for biological applications.

INTRODUCTION

Biodegradable polymers and bioactive ceramics are being combined in a variety of composite materials for tissue engineering scaffolds [1], with the objective of substituting and regenerating hard tissues. Calcium phosphate (CaP)/collagen composites have been developed due to their similar composition to bone tissue [2][3][4]. The main advantage of loading collagen with hydroxyapatite is the modulation of the adhesion process, allowing osteoprogenitor cells to migrate and differentiate on the substratum [5].
Research groups have sought biomaterials and techniques that impart appropriate biological properties to synthetic composites for replacement of parts of the human skeleton. Zinc is an essential trace element with stimulatory effects on osteoblastic cell proliferation and bone formation in vitro and in vivo. It also has an inhibitory effect on osteoclastic bone resorption [6,7]. Several researchers have attempted to dope materials with Zn2+ at low concentrations, increasing the bioactivity of bone cells [4,6-8] and decreasing and regulating the inflammatory reaction [9]. Evaluation of the cytocompatibility of a composite is usually performed via an in vitro cytotoxicity test. It is a sensitive and reproducible screening method to detect cell death or other effects on cellular functions. Primary osteoblast culture is a well-established model for investigating biocompatibility, evaluating cellular viability through the cells' proliferative capacity [8,10,11]. The present in vitro study developed hydroxyapatite/collagen composites doped with Zn2+, aiming at a combination of materials with adequate properties for biological applications in the recovery of bone tissue lost to trauma or pathologies. The materials were covered with a fine layer of gold (Au), and the morphology and semi-quantitative elementary analysis of the sample microareas were obtained by X-ray energy dispersive spectroscopy (EDS) and scanning electron microscopy (SEM) using a JSM 6360LV (JEOL) instrument at 10 to 15 keV. The results were determined from the analysis of three different areas of each sample.
Culture of Osteoblasts

Osteoblasts were isolated from the calvaria of 1-5 day old Wistar rats by an enzymatic digestion method [14]. Briefly, after being cut into small pieces, the calvaria bone was digested with 1% trypsin and four times with 2% collagenase. The supernatants of the three last washes were centrifuged at 1,000×g for 5 min and the pellet was resuspended in 5 mL of RPMI-1640 medium (Sigma, St Louis, USA) supplemented with 10% fetal bovine serum (FBS) (GibcoBRL, NY, USA) and 1% antibiotic-antimycotic solution (GibcoBRL, NY, USA). After confluence the cells were replicated and used at passage 2. Osteoblasts were plated at a density of 1 × 10^5 cells and incubated with granules of the different composites. The same culture medium containing osteoblasts without the presence of the composites was used as a negative control. The experiments were performed 72 h after incubation.

Cellular Viability and Alkaline Phosphatase Activity

Osteoblast viability in the presence of the composites was evaluated by the MTT assay. The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay is based on the capacity of viable cells to metabolize tetrazolium to formazan crystals, a purple dye that can be solubilized and measured by optical density. After incubation for 18 h, the plates were read at 595 nm. The alkaline phosphatase production of the osteoblasts was analyzed by the NBT-BCIP assay. The substrate is hydrolyzed by the alkaline phosphatase secreted by the osteoblasts, and the intensity of the enzyme activity was measured at 595 nm. The morphology of the osteoblasts in contact with the composites and in the control group was observed by transmitted light microscopy, and photographs were obtained.
Data were analyzed statistically using analysis of variance (ANOVA) and Bonferroni's post-test. Differences were considered significant at p<0.05.

RESULTS AND DISCUSSION

Human bone is mainly composed of hydroxyapatite crystals and collagen fibers. Calcium phosphate (CaP)/COL composites are a biodegradable artificial bone developed to engineer the organized bone structure, mimicking the biological conditions [5]. These composites are considered among the most promising biomaterials to replace autologous bone, due to their structural and biological similarity to this tissue and their excellent biocompatibility [1,3,5]. Composite development is one alternative being considered and studied to combine the typical bioactive behavior and mechanical properties (such as elastic modulus and toughness) of some materials and to produce composites with properties closer to those of bone tissue [3,5]. The morphology and semi-quantitative composition of the chemical elements present in the HAP/COL sample can be observed in Fig. 1. The scanning electron micrograph (SEM) of the HAP/COL surface shows a homogeneous aspect with HAP particles in the COL matrix (Fig. 1A), and the energy dispersive spectroscopy (EDS) shows high-intensity peaks of Ca, P and C, together with residual elements of the synthesis, Na and Cl (Fig. 1B). The qualitative chemical composition of the composites showed characteristics similar to those of their constituent materials. The composites tested by the MTT assay showed no significant differences in absorbance when compared with the control (p>0.05). Cell viability with the Zn2+-doped (HAPZn/COL) and undoped (HAP/COL) composites showed no difference (Fig. 2). The alkaline phosphatase production (Fig. 3) of the osteoblasts in the presence of the Zn2+-doped and undoped composites was comparable to that of the control cells.
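The statistical treatment (one-way ANOVA followed by Bonferroni's post-test at p<0.05) can be sketched as below; the F-statistic computation is the standard one, and the group values in the test are hypothetical absorbance readings, not the study's data.

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for one-way ANOVA across experimental groups
    (e.g., control, HAP/COL, HAPZn/COL absorbance triplicates)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

For Bonferroni's post-test with three groups, each of the three pairwise comparisons would be tested at an adjusted threshold of 0.05/3 ≈ 0.017.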
The presence of Zn2+ at low concentrations can induce greater metabolization and greater alkaline phosphatase production by osteoblasts. Literature studies report that zinc released from ZnCaP stimulates osteoblastic activities [6,7,9]. Nevertheless, zinc is also known as a potent inhibitor of apatite crystal growth, which remains a controversial issue. Sogo et al. (2004) reported that a cytocompatibility test using preosteoblastic cells from mouse calvaria showed no significant differences between pure αTCP and αZnTCP with a zinc content of 0.11 wt%. Webster et al. (2004) reported that the adhesion of human osteoblasts was greater on HA doped with Zn2+ than on undoped HA. Kawamura et al. (2003) reported that the optimum zinc content was 0.316 wt% in a composite ceramic of βZnTCP and hydroxyapatite (HA) for promoting bone formation in vivo, and that a zinc content of 0.63 wt% was excessive for both ceramics. In the present study, the results showed no significant difference between the experimental groups. A zinc amount of 1.05 wt% in the HAP material was not an efficient content of this divalent metal to stimulate effects on the proliferation and alkaline phosphatase activity of bone cells in the presence of the HAPZn/COL composite. Probably, the amount of the dopant Zn2+ (1.05 wt% in HAPZn) became insufficient once mixed with the collagen during the synthesis of the HAPZn/COL composite. Thus, the tested composites did not promote this effective stimulation. Osteoblasts in direct contact with the HAP/COL (Fig. 4A) and HAPZn/COL (Fig. 4B) composites showed typical morphology and produced alkaline phosphatase comparable to the control cells (Fig.
4C). Osteoblastic cells are the basic structural and functional units in bone growth and metabolism, and their behavior in the presence of materials is a suitable experimental model for biocompatibility evaluation [7]. The osteoblasts produced free alkaline phosphatase in the matrix, which was colored by BCIP-NBT. Only normal osteoblastic cells can metabolize and transform the tetrazolium salt into the blue crystals observed in the cells (Fig. 4). This demonstrates the presence of alkaline phosphatase, already metabolized, that will begin the mineralization. The doped and undoped composites with a zinc content of 1.05 wt% showed cytocompatibility. The osteoblasts grew well in contact with the composites and showed normal morphology over 72 h. These findings are in accordance with the literature describing the cytocompatibility of Zn2+ within a nontoxic level [16].

CONCLUSION

The tested HAP/COL composite doped with Zn2+ in an amount of 1.05 wt% did not stimulate effects on the proliferation or the alkaline phosphatase activity of the osteoblastic cells. The HAP/COL composites, doped and undoped with Zn2+, showed acceptable cytocompatibility, presenting adequate properties for biological applications. Further investigation is important to determine the effective amount of Zn2+ in these composites.

Figure 2: Cell viability after 72 h of incubation: osteoblasts showed no significant difference in proliferation in the presence of the composites when compared to the control. Results represent mean ± SD of triplicates from three separate experiments (p<0.05).
Figure 3: Alkaline phosphatase production of osteoblasts after 72 h of incubation: cells showed no significant difference in the presence of the composites when compared to control cells. Results represent mean ± SD of triplicates from three separate experiments (p<0.05).
Acquisition of a space representation by a naive agent from sensorimotor invariance and proprioceptive compensation

In this article, we present a simple agent which learns an internal representation of space without a priori knowledge of its environment, body, or sensors. The learned environment is seen as an internal space representation. This representation is isomorphic to the group of transformations applied to the environment. The model solves certain theoretical and practical issues encountered in previous work in sensorimotor contingency theory. Considering the mathematical description of the internal representation, the analysis of its properties and simulations, we prove that this internal representation is equivalent to knowledge of space.

Sensorimotor theory

Sensorimotor contingency theory argues that the acquisition of space knowledge in the brain is a result of the interaction between perception and body movement. "Passive perception" alone is not sufficient to create a representation of space; instead, many authors propose "active perception", in which action is a necessary component of perception [1-8]. The agent is able to use its body to compensate for sensory changes. In response to sensory changes, which are a result of changes in the environment or body movements, the agent will move to counter the effect of the initial changes. Poincaré [4,5] described the compensation algorithm as the capability of the body to compensate for a transformation of the environment. Nicod [6,9] applied the concept of compensation to auditory signals and stated that a space representation can emerge when body movements are used in interaction with the auditory system. More recently, O'Regan and Noë [2] used psychological arguments obtained from experiments on humans and animals to clearly define the sensorimotor contingency approach and outline its expectations. Philipona et al.
[7,8] performed physical simulations, modeling, and analyses of the tangent spaces of the manifold of sensorimotor interactions. These studies showed that it is possible to retrieve the dimension of the learned space for any sensory and motor dimensions. The algorithm of Philipona retrieved the dimension of the group of transformations in which their agent moved, without any prior knowledge of it and without knowledge of the sensory and motor dimensions. Laflaquière et al. [10,11] implemented this type of algorithm in order to retrieve the dimensionality of the environment in which a robot moved its arm. In a psychological study, Aytekin et al. [12] demonstrated that humans use a sensorimotor approach to learn space from auditory stimulation. The group properties and the metric of the auditory space were shown to be captured by the human brain thanks to sensorimotor interactions. Terekhov and O'Regan [13,14] showed that an agent using internal compensation movements could acquire external movements and the metric without knowledge of the environment.
The present article extends the field of sensorimotor research by demonstrating a naive agent that learns the group properties of space, and provides new insights into theoretical results. The naive agent creates a usable representation of space and retrieves its fundamental properties, such as its group operation. The term "naive" indicates that, before learning, the agent had no awareness of space or its properties. Using compensation, this agent is able to learn an internal representation of space. We extend the Laflaquière and Terekhov models, merging their algorithms. Our results remove all ambiguity in the mathematical representation of space and extend the usability of the representation. This representation is itself a group and is isomorphic to the group of compensable transformations. The agent measures sensory signals from the environment, which correspond to changes in its perception. The change in perception can be due to movements generated by the agent (internal movement, external movement, or both), to a change in the environment (a transformation applied to the sources of signal), or to a combination of these. The agent will act to compensate for these changes in order to retrieve the same signal it was experiencing before the change. In this article, the agent's proprioceptive capabilities alone are used to create a full representation of its environment without any a priori knowledge of it. This representation has the same properties as its embedding space (R^2). The learning algorithm is based on the notion of compensation defined by Poincaré (1898, 1902) for the visual perception of space.
In order to prove that our agent has learned a representation of space, we use the following points:

- The agent captures invariant proprioceptive domains in a stationary environment, which provide an internal calibration for its own movements.
- The agent learns the transformations of the environment by compensable movements (movements of the foot, the retina, or both), which link environment transformations to body transformations.
- The agent can distinguish between learned and non-learned transformations (compensable and non-compensable).
- The agent captures the properties of learned transformations. The group properties of combinations of external movements are reproduced by the agent's internal representation: when the agent uses its internal representation to predict or reproduce a combination of movements, the combinatory effect is preserved. The internal representation is isomorphic to the group of transformations.
- A handicapped agent that cannot apply the algorithm (no sensory matching) cannot learn the compensable movements and therefore cannot learn the space representation.

Using all of the above, we show that the internal representation is equivalent to a representation of space.
In the next section, we describe the agent in detail. We describe its sensory system and body. We explain the algorithm applied during learning and the theoretical requirements for proof that the agent learned a full representation of space. Next, we present the computational logic and calculations that have been applied to the agent. We present the solution used to validate our theoretical claims and discuss the effects studied. We then present our theoretical results and simulations. We compare the learned compensable transformations to noise and non-compensable transformations. We also show the proof of the group properties learned by the agent and its internal representation of space. Taking the example of an agent with a particular form of handicap, we show how this is reflected in terms of the representation of space. Finally, we discuss the results of the model and present future work.

The environment

The environment is composed of a number of light sources. Signal propagation obeys physical laws and is generated by a simple ray-tracing algorithm for each source. The environment and its state are described by ε. The different states of the environment are noted with the subscript q, giving ε_q to describe the environment in state q. All possible states belong to the set E: ε_q ∈ E. The environment can be subject to a transformation T, after which it is in a different state. The transformation of the environment can be a rigid transformation (translation and rotation), a geometrical transformation (scaling), or any other type of transformation (noise, intensity change, and random movement of the sources of light). When subject to a rigid transformation, the effect applies to all the sources of signal, displacing their locations in physical space.
Sensations of the change in the environment

The agent has a retina, which is sensitive to the light sources. The retina is a detector composed of visual cells, which are sensitive to the sources depending on the source position relative to the sensor. Let s ∈ S describe the sensorial activity of the agent. The dimension of S is the dimension of the retinal signal, given by the number of visual cells. An example of this type of sensory detector is shown in Figure 1. The signals are measured by the agent's detector. The detectors are sensitive to a signal emitted by the sources, and the signal variation on a detector is directly linked to the source's localization in the space surrounding the agent. The signals captured by the detector depend on location through a function ψ, where X_r is the relative position of the retina in the agent and ε is the environment's effect on the agent (which depends on the absolute position of the agent in the environment). The function ψ is a relation between the environment and R^m, where m is the dimension of the signal-measuring sensor:

s = ψ(X_r, ε) = Σ_{i=0..N} ψ_i(X_i − X_r, ε)

where the X_i are the lights' locations in the environment relative to the retina. It is important to note that the signal function must be continuous and invertible. Invertibility can be ensured by considering a sufficiently complex detector, that is, one composed of a sufficient number of retina cells. We do not assume a full domain on which the function is invertible; in our simulation, we only encountered limitations with very few cells in the retina, which produced interesting defects in the space representation. Defects in the space representation obtained using an insufficient number of cells will be discussed later in the article. (A change in the environment can generate sensory variation in the agent.)
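As a concrete illustration, a detector of this kind can be sketched as follows; the Gaussian response kernel, cell layout, and parameter values are our own assumptions, not the paper's implementation:

```python
import numpy as np

def retina_signal(cell_offsets, sources, retina_pos, width=1.0):
    """Illustrative signal function: each visual cell responds to every
    source with a Gaussian of the source-to-cell distance."""
    cells = np.asarray(cell_offsets) + np.asarray(retina_pos)
    s = np.zeros(len(cells))
    for src in np.asarray(sources):
        s += np.exp(-np.sum((cells - src) ** 2, axis=1) / (2 * width ** 2))
    return s

# A 9-cell retina on a small grid, observing two light sources.
cells = [(i, j) for i in (-1.0, 0.0, 1.0) for j in (-1.0, 0.0, 1.0)]
sources = [(2.0, 0.0), (-1.0, 1.5)]
s0 = retina_signal(cells, sources, retina_pos=(0.0, 0.0))
s1 = retina_signal(cells, sources, retina_pos=(0.3, 0.0))  # retina moved: signal changes
```

Moving the retina changes the measured vector signal, reflecting the continuity of ψ with respect to the retina position.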
Agent description

The agent is a simple body composed of two moving parts: a foot, which generates body displacements, and a retina, which can be moved inside the body (Figure 2). For simplicity, we present a one-dimensional environment in the figure; however, our agent was validated and tested in a two-dimensional environment. The proprioceptive state of the agent p (p ∈ R^n) is defined by combining the retina proprioceptive state p^r and the foot motor activity p^f as p = (p^r, p^f) ∈ P = P_R × P_F, where P_R and P_F are the sets of all proprioceptive values. The foot proprioceptive state p^f ∈ P_F allows the agent to move both forward and backward.

In the following part of the article, we use the subscripts q and q′ in order to distinguish the states of the studied objects: p_q and p_q′ are the proprioceptive states of the agent corresponding to two states q and q′. We also use this notation to distinguish the states of the environment ε_q and ε_q′. Using such a simple agent allows us to use precise mathematical tools to clarify the properties of space perception. Space and compensable transformations, as proposed by Poincaré,3 are central to our study and are discussed in the next two sections.

The movements of the agent are of two kinds: internal, when the agent moves its retina, and external, when the agent moves its foot. The effect of the movement of the retina is related to the proprioception of the retina p^r, and the relative position of the retina in the agent is given by a function Π. The proprioception of the foot p^f generates external movement: the body of the agent is fully displaced when the agent moves its foot. The movement of the foot is given by a function σ. Applying the effect of σ(p^f) to the agent is equivalent to a geometrical transformation of its body location (rotation and translation). When the foot proprioceptive value p^f is equal to 0, the agent is at rest.
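A minimal sketch of this two-part agent, with illustrative invertible choices for Π and σ (the names `Pi` and `sigma` and the affine maps are our assumptions):

```python
import numpy as np

class Agent:
    """Two-part agent: a foot that displaces the body, a retina that
    moves inside the body. Pi and sigma are simple invertible maps."""
    def __init__(self):
        self.p_r = np.zeros(2)   # retina proprioceptive state
        self.p_f = np.zeros(2)   # foot proprioceptive state (0 = at rest)
        self.body = np.zeros(2)  # absolute body position (unknown to the agent)

    def Pi(self, p_r):           # retina position relative to the body
        return 0.5 * np.asarray(p_r)

    def sigma(self, p_f):        # body displacement generated by the foot
        return 2.0 * np.asarray(p_f)

    def retina_absolute(self):
        return self.body + self.Pi(self.p_r)

    def move_foot(self, p_f):
        self.body = self.body + self.sigma(p_f)
        self.p_f = np.asarray(p_f)

    def move_retina(self, p_r):
        self.p_r = np.asarray(p_r)

a = Agent()
x0 = a.retina_absolute().copy()
a.move_foot([1.0, 0.0])      # external movement: body shifts by sigma(p_f) = (2, 0)
a.move_retina([-4.0, 0.0])   # internal movement: Pi(p_r) = (-2, 0) cancels it
x1 = a.retina_absolute()
```

The last lines illustrate auto-compensation: an internal retina movement restores the retina's absolute position after an external foot movement.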
Π and σ are bijective and thus invertible functions. This bijectivity is a particular case that can be generalized.15 The absolute position of the retina in the environment can be given by the position of the agent in the environment and the relative position of the retina in the agent, X_r. While the agent does not know its absolute position, it can act by changing its retina position and by moving its body. Only foot movements generate a displacement of the body. Foot movements are not bounded, while the retina moves solely within the body of the agent. While this is a strong hypothesis, it is necessary for purely mathematical reasons. However, the limits of this hypothesis are not tested, as we typically consider small movements centered on the agent's body, and only small foot movements are needed to compensate.

Compensable transformation

In 1895, H. Poincaré wrote his work on space and geometry.3 He defined geometry in relation to a totally naive brain, which has access to its sensorimotor flow only. Geometry can be inferred by considering certain types of sensory changes. Of all the possible sensory variations, some occur without motor commands and, therefore, must be related to external changes. Some changes that are related to external rigid displacements can be compensated by the agent's motor commands. In this case, the sensory variations due to the external changes and the motor commands are opposite, so that the initial and final sensory states are identical. This is what is meant by compensable. Because the function linking the positions of light sources and the agent's sensor signal is invertible, for a stationary agent, we can state that for every change of the environment, the sensory perception of the agent also changes. When considering different positions of the agent in the environment with identical values of the proprioceptive state vector p (identical retina position and foot activity), the signals measured by the sensor are necessarily
different. In the general case, the agent is unable to infer its sensorial variations from variations in its proprioceptive state. However, compensable transformations can be thought of as a type of sensorial variation due to external transformations that are compensated by a specific movement of the agent. These compensable transformations can be detected by the agent. According to Poincaré's proposition, the set of compensable transformations is a group and is equivalent to the group characterizing the external geometric space. By experiencing the compensable transformations, the agent can capture the most important properties of the (Euclidean) space in which it moves.

Capturing the set of compensable transformations

In this section, we introduce the formalism for compensable transformations by defining Φ as a set of binary relations φ_T that will allow us to build a representation of the group of compensable transformations T. We consider two cases: auto-compensable transformations in a stationary environment, and compensable transformations more generally. Making this distinction allows the set of φ_T to map specific transformations T unambiguously. Initially, the φ_T functions were proposed by Terekhov et al.13,14 They are built from a catalog of all compensable transformations the agent detects. By matching identical sensory inputs before a change and after compensation, the φ_T functions map proprioception observed before a change to proprioception observed after the compensation. For a given displacement T of the environment, the agent recovers the same function φ_T for different initial positions and different environments. As shown by Terekhov and O'Regan,14 the functions φ_T provide the agent with the notion of space. (In this article, the φ_T are not functions but binary relations, as they are not unique mappings from P to P. They are, however, true functions in the article of Terekhov.)
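The sensory matching that underlies the detection of compensable transformations can be sketched as follows; the toy five-cell sensor, source layout, and tolerance are illustrative assumptions of ours:

```python
import numpy as np

OFFSETS = np.array([(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, -1.0)])

def signal(retina_pos, sources):
    """Toy retina: one Gaussian cell per offset (a stand-in for psi)."""
    cells = np.asarray(retina_pos) + OFFSETS
    return np.array([np.sum(np.exp(-0.5 * np.sum((np.asarray(sources) - c) ** 2, axis=1)))
                     for c in cells])

def is_compensable(T, candidate_moves, sources, retina_pos, tol=1e-9):
    """T is compensable if some agent movement restores the exact
    pre-transformation signal (s'' == s)."""
    s_before = signal(retina_pos, sources)
    moved_sources = np.asarray(sources) + np.asarray(T)  # rigid shift of all sources
    return any(
        np.max(np.abs(signal(np.asarray(retina_pos) + m, moved_sources) - s_before)) < tol
        for m in candidate_moves)

sources = np.array([[3.0, 1.0], [-2.0, 2.0]])
moves = [np.array([i, j], float) for i in range(-3, 4) for j in range(-3, 4)]
ok = is_compensable(np.array([1.0, -2.0]), moves, sources, np.zeros(2))   # grid translation
bad = is_compensable(np.array([0.5, 0.25]), moves, sources, np.zeros(2))  # off-grid shift
```

A rigid translation matching one of the agent's own displacements is detected as compensable; a shift the agent cannot reproduce is not.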
Compensable transformations

Let (s, p_q) ∈ S × P describe the state of the agent in the environment. After a given rigid displacement of the environment (lights moving, in our case), the agent's new state is (s′, p_q). Such a transformation is compensable if there exists a new proprioceptive state p_q′, corresponding to a displacement of the agent in the environment, such that the resulting state is (s″, p_q′) with s″ = s. In other words, the agent has moved in order to recover the initial perceptual state. Given a compensable transformation of the environment T, the function φ_T is the mapping between P and itself such that for all proprioceptive states p_q ∈ P, one has p_q′ = φ_T(p_q), where p_q′ is the new proprioceptive state which compensates for T:

φ_T : P → P, p_q ↦ p_q′

This definition does not depend on the perceptual state s. This is a consequence of the definition of compensable transformations. Cases where s″ = s can be found as long as the agent can evaluate its sensory flow. Difficulties arise when discussing how an agent can detect such coincidences from the sensorimotor flow alone, without a priori knowledge. How can it infer, for a set of pairs (p_q, p_q′), that they correspond to the same φ_T? That is, how does it determine that they compensate for the same unknown external compensable transformation T? Terekhov and O'Regan proposed a solution14 whereby they collected all the agent compensations corresponding to the same external transformation T, and repeated this for all other values of T. However, as the agent itself does not have access to T, this solution is somewhat artificial. In this article, we propose a new solution to this problem by introducing the notion of auto-compensable transformations. In this case, the environment is stationary. Since this is not known by the agent, we must also introduce the identity transformation of the agent's body.
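The identity transformation introduced here (detailed later as a saccade-like round trip) can be illustrated with a minimal sketch; the single-cell sensor and the drift model are our assumptions:

```python
import numpy as np

def signal(body_pos, sources):
    # single Gaussian cell at the body position (minimal illustrative sensor)
    return float(np.sum(np.exp(-0.5 * np.sum((np.asarray(sources) - body_pos) ** 2, axis=1))))

def environment_stationary(sources_at, tol=1e-9):
    """Identity transformation: a foot movement followed by its exact
    inverse. A changed signal afterwards reveals a moving environment."""
    body = np.zeros(2)
    s0 = signal(body, sources_at(0))       # before the round trip
    body = body + np.array([1.0, 0.0])     # foot displacement
    body = body - np.array([1.0, 0.0])     # exact inverse displacement
    s1 = signal(body, sources_at(2))       # environment may have changed meanwhile
    return abs(s1 - s0) < tol

static = lambda t: [[2.0, 0.0], [0.0, -1.5]]
drifting = lambda t: [[2.0 + 0.4 * t, 0.0], [0.0, -1.5]]
a = environment_stationary(static)     # signal recovered: environment was stationary
b = environment_stationary(drifting)   # sources drifted: mismatch detected
```

Only sensorimotor pairs collected while such round trips succeed are retained, so the catalog of compensations is built in an effectively stationary environment.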
(As mentioned earlier, while the φ_T are functions in the case of Terekhov's article, in our case they are not. However, when we keep the foot proprioceptive state constant before and after the compensation (respectively, the retina proprioceptive value before and after the transformation of the environment), the compensation φ_T becomes a function, as there is a unique retina state compensating the applied transformation (respectively, a unique foot state). When the environment is stationary, φ_0 is a function (see next section). In this article, we will therefore use the term "function" even if it can be seen as inaccurate.)

Auto-compensable transformation

An auto-compensable transformation is the displacement of a part of the agent's body (e.g. the retina) which compensates for sensorial variations induced by the movement of another part of the agent's body (e.g. a foot displacement). The distinction between auto-compensation and compensation more generally is that the transformation T is fully determined by the agent in the case of auto-compensation. The agent can then build a set of functions φ_T which can be used as internal references to represent external compensable transformations. In order to avoid any confusion between the agent's transformations T and external transformations, the environment must remain stationary when determining the function φ_T. However, the agent cannot know whether the environment is stationary or not. This problem has been discussed by Roschin and Frolov,16 Laflaquière,17 and more recently by Marcel et al.
15 without finding a fully satisfactory solution. We will address this by introducing the identity transformation. The identity transformation can be considered a simple saccadic movement. After a displacement of the agent's foot, the sensorial state of the retina changes. The agent then recovers the initial proprioceptive state of the foot by performing the exact inverse displacement of the foot. The initial sensorial state of the retina will be recovered. If it is not, then the agent can state that the environment changed during the identity transformation. Therefore, by retaining only those transformations which do not involve movements of the external environment, the agent can build unambiguous φ_T functions.

Φ, the set of sensorimotor functions

As previously defined, Φ is the set of functions φ_T, which map the proprioceptive states onto themselves. We will show that our agent can capture the set of compensable transformations T and that it learns the group properties of this set. Using equation (5), we show (in Appendix 1) that if the agent's state modification from p_q to p_q′ compensates for the environment transformation T, we obtain an explicit relation between p_q and p_q′ expressed through Π and σ, where Π(p^r_q) is the function that describes the sensor position relative to the agent's body in physical space given the agent's proprioceptive state p^r_q, and σ(p^f_q) is the function that describes the movement of the agent's body given the foot proprioceptive state p^f_q. We distinguish between the inverse function of σ, σ^{-1}(X_q) = p^f_q, and the opposite of σ, σ(p^f_q)^{-1}, which is the opposite displacement to σ(p^f_q). The function Π^{-1} is the inverse function of Π. In this article, we present the φ_T functions and their properties and demonstrate that there is a calculable solution. It is important to note that the agent has no knowledge of Π and σ; only the mapping of p_q to p_q′ is known. We can describe this mapping by a mathematical function. In order to do so, we first consider transformations that are auto-compensable in a
stationary environment before moving to the general case of a non-stationary environment.

Auto-compensable transformation. When considering the auto-compensable transformation, the environment is stationary. We calculate φ_0, where 0 indicates no movement of the environment. When the agent moves its foot, it compensates for the movement with its retina. The absolute position of the retina does not change: it is the same before the foot movement and after the compensating movement of the retina. After the foot movement, the environment perceived by the agent is changed in accordance with the physical laws of signal propagation and perception. Compensation is the act of retrieving the initial signal by a movement different from that which generated the difference. For every foot movement and initial retina position, the final retina position corresponding to the compensated movement is unique in a stationary environment (bijectivity of Π and σ). The signal measured after the compensating retina movement is compared to that before the foot movement. The algorithm applied to the agent is as follows:

1. The agent is in an environment at a given location X_q, with the retina at a given position X_r,q = Π(p^r_q). The agent can move its foot and generate a body displacement σ(p^f_q), where p^f_q is the foot state.
2. The agent moves to a new location X_q′ in response to a foot movement induced by the foot motor activity p^f_q′. The transformation generating the displacement takes X_q to X_q′ through σ(p^f_q′).
3. The agent compensates for the foot movement by moving its retina in such a way that the retina's visual perception is the same before and after the initial movement. The practical implementation is given in Appendix 1. What the agent sees after compensation is exactly what it saw before the foot movement. The final relative retina position is measured only by its internal state p^r_q′.
4. The agent creates an internal representation of auto-compensation by mapping the initial internal state p_q to the final state p_q′: φ_0(p_q) = p_q′.
5. We show in Appendix 1 that this mapping can be expressed in terms of Π and σ.

Compensable transformation. In this case, the environment is not stationary. When the environment is moved, the agent compensates by a movement of the foot, the retina, or both. The signal on the sensor is the same when measured before the movement of the environment and after the compensating movement of the agent. The algorithm is as follows:

1. The agent is in an environment at a given location X_q, with the retina at a given position X_r,q = Π(p^r_q). The agent can move its foot and generate a body displacement given by σ(p^f_q), where p^f_q is the foot state.
2. The environment is moved to state q′, with ε_q →_T ε_q′.
3. The agent compensates for the external movement by either:
   a. A foot movement only. The body is moved in such a way that the retina's visual perception is the same before and after the initial movement. The retina proprioceptive value remains unchanged. The foot proprioceptive value is p^f_q′.
   b. A retina movement only. The body is not moved, but the retina is moved in such a way that the retina's visual perception is the same before and after the initial movement. The retina proprioceptive value is p^r_q′. The final foot proprioceptive value is 0.
   c. Both a foot and a retina movement. Both the body and the retina are moved in such a way that the retina's visual perception is the same before and after the initial movement. The foot proprioceptive value is p^f_q′ and the retina proprioceptive value is p^r_q′.
4. The agent creates an internal representation of the transformation T by mapping the initial internal state of the agent p_q = (p^r_q, p^f_q) to the final state p_q′.
5. We show in Appendix 1 that this mapping can be expressed in terms of Π and σ.

The set of φ_T(p) defines a manifold in internal state space and can be used to show that the agent captures the geometrical space as defined by Poincaré.3
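A toy version of this catalog-building loop, under strong simplifying assumptions of ours (retina-only compensation, Π equal to the identity so that a proprioceptive state is simply a retina position, and translations on a small grid), might look like:

```python
import numpy as np

OFFSETS = np.array([(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])

def signal(retina_pos, sources):
    # vector signal: one wide Gaussian cell per offset (illustrative sensor)
    cells = np.asarray(retina_pos) + OFFSETS
    return np.array([np.sum(np.exp(-np.sum((np.asarray(sources) - c) ** 2, axis=1) / 8.0))
                     for c in cells])

def learn_phi(transformations, positions, sources):
    """Build the catalog phi_T : p -> p' by sensory coincidence matching:
    p' is the state whose post-transformation signal best matches the
    pre-transformation signal (exact whenever p + T lies inside the grid)."""
    phi = {}
    for T in transformations:
        moved = np.asarray(sources) + np.asarray(T)
        mapping = {}
        for p in positions:
            s_before = signal(p, sources)
            mapping[p] = min(positions,
                             key=lambda p2: np.max(np.abs(signal(p2, moved) - s_before)))
        phi[T] = mapping
    return phi

sources = [[3.0, 1.0], [-2.0, 2.0]]
grid = [(float(i), float(j)) for i in range(-2, 3) for j in range(-2, 3)]
phi = learn_phi([(1.0, 0.0), (0.0, 1.0)], grid, sources)
```

For interior grid states the recovered mapping is exactly the shift by T, which is what makes the set of φ_T usable as an internal stand-in for the external translations.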
The acquired knowledge is a space representation, given that its properties mathematically correspond to those of space (including its group properties and the isomorphism with the set of compensable transformations). This is discussed in the section "Φ is a representation of the geometrical space." Figure 3 illustrates the steps of the coincidence algorithm outlined above.

Φ is a representation of the geometrical space

As outlined previously, we will show that the set Φ of functions φ_T, which maps proprioceptive states to compensable transformations, has all the properties of the external space in which the agent is situated. The agent is capable of capturing the relevant properties even though it is not aware that such a set of transformations is a group. Thus, the set Φ of the φ_T is a group and is also isomorphic to the set of T.

Combinatory property. The combinatory operation on the set of φ_T functions is defined such that for any T_q, T_q′ there exists T_q″ such that φ_{T_q′} ∘ φ_{T_q} = φ_{T_q″}. In other words, the combination of two compensable transformations is itself a compensable transformation. Demonstration: from the construction of the function φ_T, for any compensable transformation T there is an associated function φ_T. Using the definition of a compensable transformation, there exists an action which compensates for an external transformation. The compensation can be either an internal or an external movement. As the displacement caused by the movement of the foot, σ(p^f), can be of any length, all combinations of compensable transformations can be compensated by a foot movement. Thus, any combination of compensable transformations is compensable.
It is also the case that for all T_q, T_q′ and T_q″ with T_q′ ∘ T_q = T_q″, we have φ_{T_q′} ∘ φ_{T_q} = φ_{T_q″} = φ_{T_q′ ∘ T_q}. Demonstration: we begin with the construction of the individual φ_T; applying equation (9) twice and combining the resulting functions with equation (8) gives the stated equality. The combination of φ_T functions does not depend on the intermediary steps selected by the agent.

Group property. The axioms necessary to validate a group are closure, associativity, the existence of an identity, and the existence of an inverse for every element of the group.

Closure: for all p ∈ P and all φ_{T_q}, φ_{T_q′},

(φ_{T_q′} ∘ φ_{T_q})(p) ∈ P (10)

where P is the set of possible proprioceptive measures of the retina and foot. This first property is obvious. It comes from the learning algorithm, where the agent changes its proprioceptive state with a retina or foot movement to compensate for an external movement: the result of applying φ_{T_q} to any p = (p^r, p^f) belongs to P. Demonstration: from the algorithm, for all p_q ∈ P and all T_q, φ_{T_q}(p_q) ∈ P. As φ_{T_q}(p_q) = p_q′, we apply the same logic: for all p_q′ ∈ P and all T_q′, φ_{T_q′}(p_q′) = (φ_{T_q′} ∘ φ_{T_q})(p) ∈ P.

Associativity: for all φ_{T_q}, φ_{T_q′}, φ_{T_q″},

φ_{T_q} ∘ (φ_{T_q′} ∘ φ_{T_q″}) = (φ_{T_q} ∘ φ_{T_q′}) ∘ φ_{T_q″} (11)

Demonstration: φ_{T_q} ∘ (φ_{T_q′} ∘ φ_{T_q″}) = φ_{T_q} ∘ φ_{T_q′ ∘ T_q″} = φ_{T_q ∘ T_q′ ∘ T_q″} = φ_{T_q ∘ T_q′} ∘ φ_{T_q″} = (φ_{T_q} ∘ φ_{T_q′}) ∘ φ_{T_q″}.

Identity and inverse: there exists an identity element φ_0, and every φ_T has an inverse φ_T^{-1}. Demonstration for both: in the group of transformations, for all T we have T ∘ T^{-1} = Id, which implies φ_T ∘ φ_{T^{-1}} = φ_{T ∘ T^{-1}} = φ_Id = φ_0. This implies that φ_T^{-1} = φ_{T^{-1}}, which proves the existence of an identity element (φ_0) and the existence of an inverse function φ_{T^{-1}} for any φ_T.

These points demonstrate that the set Φ of the functions φ_T is a group.
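The four axioms can be checked numerically on sample points for the special case where the learned φ_T act as grid translations on proprioceptive states, which is what the matching algorithm recovers for pure translations (an assumption of this sketch):

```python
import numpy as np

def phi(T):
    """Learned compensation map for a translation T (assumed p -> p + T)."""
    return lambda p: tuple(np.add(p, T))

def compose(f, g):
    return lambda p: f(g(p))

p = (3, 4)
T1, T2 = (1, 0), (0, 2)
T12 = (1, 2)                                              # T2 ∘ T1 for translations

closure = compose(phi(T2), phi(T1))(p) == phi(T12)(p)     # phi_T2 ∘ phi_T1 = phi_{T2∘T1}
assoc = compose(phi(T1), compose(phi(T2), phi(T12)))(p) == \
        compose(compose(phi(T1), phi(T2)), phi(T12))(p)   # associativity
identity = phi((0, 0))(p) == p                            # phi_0 is the identity
inverse = compose(phi((-1, -2)), phi(T12))(p) == p        # phi_{T^-1} ∘ phi_T = phi_0
```
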
There is an isomorphism between the set of compensable transformations T and the set of functions φ_T. In order to prove that there is an isomorphism, we have to prove that the two sets have the same dimension and that the mapping T ↦ φ_T is linear and injective.

For the dimension of both sets, we use previous work by Philipona et al.7,8 and Laflaquière et al.,11 which shows that the internal representation of a compensatory agent has the same dimension as the geometrical space of transformations.

For linearity, we have shown that for all T_q, T_q′, φ_{T_q} ∘ φ_{T_q′} = φ_{T_q ∘ T_q′}. Within the limits of compensability, this extends to any combination of compensable transformations, which is the expected linearity.

To show that the mapping is injective, we must prove that for all T_q, T_q′, if φ_{T_q} = φ_{T_q′} then T_q = T_q′. Since the set of φ_T is a group, every element has an inverse. Thus, if φ_{T_q} = φ_{T_q′}, then φ_{T_q} ∘ φ_{T_q′}^{-1} = Id = φ_{T_q ∘ T_q′^{-1}}. Previously, we determined that if φ_T = Id, then T = Id. Therefore T_q ∘ T_q′^{-1} = Id and T_q = T_q′.

These points demonstrate that the set of functions φ_T is isomorphic to the set of compensable transformations and is thus a space representation.

Computation

Learning phase: Algorithm for computation of φ_T

In this section, we present the algorithm used to calculate φ_T, where T is an external transformation applied to the agent. The set of transformations is a set of translations in two dimensions. For the simulation, we considered a limited set of transformations of the environment T and a limited set of possible compensations (both foot and retina; see Appendix 1 for details). Applying this algorithm allowed us to calculate the complete set of functions φ_T(p_q) = p_q′.
Using φ_T functions to retrieve movement

By referencing the memorized tuples of three elements (T, p_q, p_q′), the combination of any two can be used to retrieve the third. There are multiple possible solutions for given p_q and T.

φ_T functions are only sensitive to compensable transformations

In order to test the algorithm, we applied transformations other than those the agent learned. We first applied a continuous transformation, where the length of translation is not a multiple of the basic step size used in the computation. We then applied a scaling transformation, where the source objects are deformed by a homothetic deformation. We also added random noise to the source light signals, with an amplitude ranging from 10% to 500% of the initial signal. Starting with a random initial retina proprioceptive state p^r_q and a random foot movement p^f_q, we applied a random transformation T and then compensated for it, giving the final retina proprioceptive state p^r_q′ and foot proprioceptive state p^f_q′. We then retrieved φ_{T_q} from the best proprioceptive pair (p_q, p_q′) and extracted the associated T_q. We compared the applied T_a and the extracted T_e (the T_q found by the algorithm is noted T_e). When applying unknown (or non-compensable) transformations, coincidence matching could not be done; that is, it was not possible to match the states before and after movement. Exact matching was not possible, and the error in the comparison of sensory measures increased with the difference between the initial and final images. The error between the applied T_a and the extracted T_e, measured as ‖T_a − T_e‖, also became more significant. This test was repeated for 1000 transformations, starting from random retina positions, and we calculated the error between the applied movement and the compensated transformation for the best tuple (p_q, p_q′).
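A sketch of this retrieval step under our grid-translation simplification: the extracted T_e is the catalog entry closest to the observed proprioceptive shift, so off-grid (non-compensable) transformations are snapped to the nearest entry, reproducing the step-size-dependent error described in the text:

```python
import numpy as np

# Catalog of learned translations (our illustrative choice: a 7x7 unit grid).
catalog = [(float(i), float(j)) for i in range(-3, 4) for j in range(-3, 4)]

def extract_T(p_before, p_after):
    """Retrieve the transformation from a memorized pair (p_q, p_q')."""
    observed = np.subtract(p_after, p_before)
    return min(catalog, key=lambda T: float(np.linalg.norm(observed - np.array(T))))

T_applied = (2.0, -1.0)                                   # compensable: on the grid
err_compensable = float(np.linalg.norm(
    np.subtract(T_applied, extract_T((0.0, 0.0), T_applied))))

T_cont = (1.4, -0.7)                                      # continuous: off the grid
err_continuous = float(np.linalg.norm(
    np.subtract(T_cont, extract_T((0.0, 0.0), T_cont))))  # about half a step
```
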
Internal space as a group of transformations

A calculation was performed for the full set of points in the combined φ_T in order to verify that if φ_{T_1} ∘ φ_{T_2} = φ_{T_3}, then T_3 = T_1 ∘ T_2. For any random transformations T_1 and T_2, we calculated T_3 = T_1 ∘ T_2. We then compared φ_{T_3}, acquired directly from the set of φ_T, with the calculated composition φ_{T_1} ∘ φ_{T_2}. As shown in the paragraph on the validation of the group axioms, this property proves the associativity, identity, and inverse axioms.

Testing parameters and mathematical functions. Our initial selection of functions and parameters did not affect our results. In order to demonstrate this, we repeated the simulations with multiple sets of proprioceptive functions Π(p^r) = X_r and varied the ratio of the applied movements to the retina or foot movements. Figure 4 illustrates the simple and complex proprioceptive models used in this article.

In all simulations, we used a grid for the environment and a grid for the body displacement. The relevant values for the simulation are as follows:

- The number of steps the agent moves on a grid within the environment, #Steps.
- The ratio of environment displacement step size to body movement step size, Δe/Δb.
- The proprioceptive function Π(p), which is one of:
  - Uncoupled affine proprioceptive function (simple model). Movement in any direction is associated with a single proprioceptive measure: Π(p^r_x) = x and Π(p^r_y) = y. This generates two independent φ_T functions, φ_x and φ_y, where T(x, y) is the applied transformation.
  - Coupled proprioceptive function (complex model). Movement in any direction affects all proprioceptive measures: Π(p) = Π(p^r_1, p^r_2, …, p^r_i, …, p^r_n) = (x, y). This generates a single multidimensional φ_T, where T is the applied transformation.

The retina of the agent is composed of randomly located cells. In the next section, we present the results of varying sensory parameters, such as the number of cells and the retina size, in the simulations.
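The two proprioceptive models can be sketched as follows; the mixing matrix and scaling are our own illustrative choices, the only requirement being that Π remains invertible:

```python
import numpy as np

def Pi_simple(p_r):
    """Uncoupled (simple) model: each component drives one axis,
    Pi(p_r_x) = x and Pi(p_r_y) = y."""
    return np.asarray(p_r, float)

MIX = np.array([[0.8, 0.3],      # invertible coupling matrix (det = 0.94)
                [-0.2, 1.1]])

def Pi_coupled(p_r):
    """Coupled (complex) model: every component affects both axes."""
    return MIX @ np.asarray(p_r, float)

def Pi_coupled_inv(x):
    """Bijectivity of Pi: the coupled map can be inverted exactly."""
    return np.linalg.solve(MIX, np.asarray(x, float))

p = np.array([0.5, -1.0])
x = Pi_coupled(p)
p_back = Pi_coupled_inv(x)       # recovers the proprioceptive state
```
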
Compensable movement versus non-compensable movement

The estimated displacement when applying either a compensable or a non-compensable movement is listed in Table 2. Since we know the applied transformation (even though the agent does not), we know the expected position after the transformation. We compare this to the agent's estimated position after the transformation and calculate the difference between them. When the difference is zero, the expected and estimated positions are identical. For this test, we selected 1000 random movements (for each type of transformation) and looked for the transformations T which could be retrieved from the known φ_T. The transformations were as follows:

- Compensable transformations: translations that are multiples of the agent step length and land on the agent grid positions.
- Continuous transformations: translations of continuous length which are not multiples of the agent step length.
- Scale transformations: include both a regular translation and a homothetic transformation applied to the source object.
- Noise transformations: random noise applied to the signal of the source object, with a noise factor of 100%; the source signal varied from 0 to 2.

Results are presented in Tables 1, 2, and 3.
Results in Tables 1 and 2 show that only compensable movements were learned by the agent. We measured the distance ‖T_a − T_e‖ between the applied transformation T_a and the extracted transformation T_e. The estimated movement is retrieved from the φ_T which maps the proprioceptive state before the transformation to the proprioceptive state after compensation. We retrieved the learned transformation using the tuple (p_b, p_a), where p_b is the proprioceptive measure of the agent before the movement (retina location, agent at rest) and p_a is the proprioceptive measure of the retina and the foot after compensation. The applied transformation T_a is directly linked to (p_b, p_a). Tables 1, 2, and 3 compare the distances ‖T_a − T_e‖.

For compensable transformations, the agent always retrieved the correct transformation; that is, the difference between T_a and T_e was always zero. This capacity to retrieve the applied movement T for any tuple p_a = φ_T(p_b) was expected from the model. Non-compensated transformations were not properly retrieved. The error in the estimated movements was typically on the order of the agent size, but was on the order of the size of the environment for scale and noise transformations. The continuous transformations gave better results than the scale and noise transformations. Contrary to the results for scale and noise transformations, the average distance for continuous transformations depended on the size of the agent step length. Continuous movements will always yield a better match than non-regular movements (scale or noise transformations). This can be interpreted by considering the grid of possible agent positions. As the φ_T were sampled on the grid of possible agent positions, any estimation of movement always falls on a node of this grid. Thus, the estimations for continuous movements (indeed, for any tested transformation) always fell on the grid. In
this case, the continuous movements were translations, and the error depended on the agent step size and on whether the continuous movement landed between two grid positions; the nearest grid position was used as the estimated position. Although the agent did not learn continuous movements, continuous movements are similar to compensated movements to within the precision of the agent step length.

For scale and noise transformations, the deformations of the source signal could not be matched by the agent's space representation. As the deformation of the image is not a rigid transformation, the perceived sensory signals before and after the transformation cannot correspond exactly. This gives rise to a significant error in the difference of sensory signals. The error was on the order of the agent size and did not depend on any of the variable parameters (agent step length, retina step length, and proprioceptive function). For an agent with one or two retina cells, the compensation algorithm did not work properly and the agent could not retrieve the movement. The results show that the compensable transformations were fully learned, while non-compensable transformations were not mapped well.

For linked proprioceptive sensors (Table 2), moving the retina in any direction affects all proprioceptive measures. We again measured the distance ‖T_a − T_e‖ between the applied transformation T_a and the extracted transformation T_e. The agent learned the compensable transformations and was able to distinguish between compensable and non-compensable transformations.

Group properties of the function φ_T

The results of combining the φ_T functions in order to validate the group properties are presented in Table 4. In order to observe any effect on the quality of the space representation, we varied the number of cells and sources used in the simulations.
Sensorimotor contingency theory

The agent is a sensorimotor contingency model. The knowledge acquired by the agent is not the result of direct sensory analysis but of the creation of an abstract representation built on the interaction between the sensory inputs and motor control via proprioceptive signals. This result is predicted by the sensorimotor contingency theory, in which abstract notions do not reflect regularities in the sensory inputs per se but rather robust laws describing the possible changes of sensory inputs following actions on the part of the agent. The set of φ_T, where T is a transformation of the environment, is the space of internal representation. By applying the compensation algorithm, we showed that the set of φ_T is indeed a space representation of the agent's environment.

Space knowledge without a priori knowledge of body or environment. It is important to note that the agent acquires its space representation without any a priori knowledge of either the structure of space or the group of transformations that describe it. There is no initial hypothesis that the agent is in space. The agent only has its sensory data, its motor actions, and the sensorimotor association. Furthermore, there is no need for a strong hypothesis on the sensory information the agent needs to use: only visual coincidence matching is used; no preprocessing of images or knowledge of the metric is necessary. Likewise, no model of the environment is given to the agent, and no assumptions are made about its body (proprioceptive organization or sensory capabilities).
Distinguishing external movement from internal action. Using this model, the agent is able to distinguish between movements of the environment and its own movements. During the learning phase, with both stationary and non-stationary environments, the agent acquires the set of φ_T and can compare changes in the external environment to its proprioceptive changes. The auto-compensable transformations act as a reference for any external transformation of the environment (see Figure 5).

The set Φ of φ_T is a representation of space

Poincaré makes the distinction between sensible space and geometrical space. Sensible space is explicitly related to raw measures from the different sensory systems. Poincaré argues that, despite the major differences between these spaces, an agent can retrieve the properties of geometrical space from the sensible spaces by considering the effects of actions on the sensible space. As the geometrical space can be defined by the group of rigid transformations, if an agent can capture these rigid transformations and their group property, the agent acquires a representation of the geometrical space. As in our study, the agent does not have any a priori knowledge of the rigid transformations or their properties. However, using its sensorimotor system alone, the agent can acquire a subset of these transformations (the compensable transformations). Poincaré argued that the agent will not only acquire the set of compensable transformations but also learn that they behave as a mathematical group.

In the present article, the agent fully learned the set of compensable transformations and the fact that this set forms a group. More importantly, because the internal representation is isomorphic to the group of compensable transformations, the agent can also learn other information about it, for example its metric or topology. Terekov has used a similar model to retrieve the metric.
While the agent did not have any a priori knowledge of its body or environment, we made some assumptions about the mathematical functions of the model, such as the bijectivity and invertibility of the functions mapping the retina and foot proprioceptive states. Another assumption was that the foot displacement is unbounded. Further generalizations could use Marcel's formalism of a surjective model for the proprioceptive states and include boundaries on the agent's displacement.

While our theoretical framework does not limit or specify the type of compensable transformation, all the simulations were done using translations. Other types of transformations may also be simulated, and we are currently working on rotations. Furthermore, this work used the visual sensory system for the coincidence matching, but other sensory systems, such as auditory, tactile, or vestibular systems, could be tested. In this study, we applied the sensorimotor compensation theory to a two-dimensional geometrical space. However, we believe that a more general compensation theory could be formalized on any type of physical space (not necessarily a geometrical space) with its own specific types of compensable transformations.

Defects in φ_T give defects in the space representation

Our algorithm requires the signal function to be continuous and invertible. The number of retina cells is an important parameter, as it affects the continuity and invertibility of the sensory signal. If the number of cells is too low, the sensory measure will not be unique across different absolute retina positions, and the calculated compensating position will be ambiguous. For translations where the compensation is exact (i.e.
the agent's retina can find exactly the same position the agent had before a transformation), there is no effect of retina size, number of cells, or proprioceptive signal as long as the conditions of continuity and reversibility are met. When we analyzed the structure of the φ_T function for extreme parameters (one and two visual cells), we found that the agent was not able to properly recognize compensable movements. In the case of a single retina cell, the recognition was extremely limited. The effects of these extreme parameters on both transformation retrieval and group properties are shown in Tables 3 and 4.

The curves plotted in Figure 6 illustrate this effect. When the space representation is fully learned, the φ_T are well separated, as in panel (a). However, this is not the case when the space representation is invalid or incomplete. For two retina cells (b), the φ_T curves occasionally overlap; for one retina cell (c), they are totally invalid. With such a handicap, an agent cannot properly develop a representation of space.

The problem of rotation

In this article we presented a general framework and exact mathematical proofs that apply to rigid transformations, both rotations and translations. However, in the simulations we have shown results only for translations. It is important to note that we are currently working on rotation simulations, but rotations have two noticeable effects.

First, the simulation grid is invariant under translation but not under rotation. Thus, the rotation transformation shows defects in the representation of space similar to those of the handicapped agent. We have found a solution to this issue, but a full explanation is beyond the scope of this article.

Second, rotation is periodic: rotating by 2π is equivalent to no rotation in terms of sensory perception. This interesting property creates complexity that will be presented in a future article.
For these reasons, we did not include rotations in the present article. We consider that this does not limit the results, as the mathematical results are general for both translations and rotations; only the simulations were restricted.

Conclusion

General statistical learning algorithms for perceptual capabilities require a prerequisite model of the environment and body in order to acquire the ability to behave and generate actions. Space knowledge is predefined in the model and is thus restricted by the model's assumptions. By contrast, our agent, whose learning is based on proprioceptive compensation (or, more generally, algebraic learning), learns the properties of the surrounding space without any prior assumptions. The learned representation is a group, as proven in this article. Furthermore, the agent's internal representation can be used to distinguish the agent's own movements from those of the environment. Our algorithms and implementation allow the use of φ_T for spatial computation. Future work includes further theoretical formalization and simulations using other transformations, such as rotations. We will also consider the acquisition of other types of knowledge, for example object knowledge and arithmetic knowledge.
Implementation of the algorithm

The environment movements are made on a grid with step length Δe. The agent movements (foot and retina) are made on another grid, with step length Δb.

Figure 1. The signal measured by each visual cell depends on the distance between the sources i, located at X_i, and the retina, located at X_r. The set of signals defines the visual signal vector, which depends on the number of cells in the retina. The signal is given by a function of the form S(X_r, Θ) = Σ_{i=0}^{N} s(X_i − X_r, Θ), where Θ captures the effect of the global position of the agent in the environment.

Figure 2.
The agent is in a physical space and can move in any direction (left or right along X in this figure) by controlling a foot. The foot displacement is associated with the external proprioception p^f. The agent has a retina which can move freely inside the body; the retina position depends on the internal proprioception p^r. The environment is composed of a number of lights which can move together (not individually) in any direction.

Figure 3. The coincidence-matching algorithm is based on the comparison of the signal measured by each of the retina's cells before an external movement is applied to the agent and after the compensating internal retina movement.

Figure 4. Comparison between simple and complex agents. (a) Simple agent: when the retina moves in the direction X, only the proprioceptive p_x is transformed into p′_x; p_y is not affected by the movement. (b) Complex agent: when the retina moves in any direction, all proprioceptive detectors are affected, (p_1, p_2, p_3) becoming (p′_1, p′_2, p′_3).

Figure 5. Set of curves φ_x, where x is the amplitude of the translation of the agent. The step size of the translation is 0.5 with an agent length of 4. For this simple agent (x and y movements are decoupled), there are eight proprioceptive steps. We denote the external transformation by the x and y lengths of the displacement. (a) φ_x is expressed in terms of the foot motor proprioceptive values (from −2 to 2) and the retina proprioceptive measure (from 0 to 1); the ordinate is the retina value after compensation. This surface corresponds to all possible couples (p^f, p^r) for a single movement (−1.2 along x in our case). (b) Projection of φ_x on the retina proprioceptive coordinates without foot movement. Each curve corresponds to a unique external displacement. We see that the curves are symmetric: φ_x ∘ φ_{−x} = φ_0, hence φ_x = φ_{−x}^{−1}. φ_0 is the invariant curve (no external transformation).

Figure 6.
Figure 6. The set of φ_T curves for an affine, uncoupled proprioceptive retina measure. Each curve corresponds to a learned movement T. (a) When the sensory system is large enough to compensate properly (continuity and reversibility conditions are met), the agent exactly compensates the movements and the curves are well separated and linear (for affine proprioception only). (b) When the sensory system is not sufficiently large (two retina cells), the compensatory algorithm yields ambiguous results and cannot retrieve a well-separated manifold of φ_T for different transformations T, resulting in an incorrect representation of space. (c) When the sensory system is totally invalid (one retina cell), the algorithm cannot find a match and the curves are invalid; the algorithm does not generate a representation of space in this case.

Figure 7. (a) The agent is in a physical space; the position of the retina in its body is determined by Ψ(p^r_q). (b) The agent moves its body by a foot movement (p^f_q). (c) The environment is moved (all points in the environment are moved) following a transformation T. (d) The agent compensates with a foot movement (p^f_{q′}) and a new retina position X_{r′} = Ψ(p^r_{q′}). (e) The local movement of the retina is associated with a local transformation. (f) The final position of the retina relative to the environment is identical to its initial position. (g) The applied transformations are equivalent: (p^f_q) ∘ T = (p^f_{q′}).

Table 3. Varying sensory parameters and number of signal sources for compensable and non-compensable movements. Since the system was able to compensate properly with only three retina cells for one source, and with any number of retina cells for 10 sources, further results are not included. The number of steps is 10 × 10 and the retina-to-agent step-length ratio is 1. The distance ‖T_a − T_e‖ between the applied transformation T_a and the extracted transformation T_e was measured.
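The single-cell degeneracy summarized in Figure 6(c) and Table 3 can be seen in a tiny numerical sketch. The symmetric source layout and Gaussian cell response below are illustrative assumptions: with one cell the signal is not injective, so two different retina positions are indistinguishable and the compensating position is ambiguous, while three cells at asymmetric offsets break the degeneracy.

```python
import numpy as np

# Illustrative sketch of the one-cell degeneracy; numbers are
# assumptions for demonstration only.

def cell_signal(sources, pos):
    """One cell's response: summed Gaussian of distances to sources."""
    return np.exp(-((sources - pos) ** 2)).sum()

sources = np.array([-1.0, 1.0])            # symmetric pair of sources

# One cell: positions -0.7 and +0.7 give identical signals by symmetry,
# so the signal function is not invertible.
s_left = cell_signal(sources, -0.7)
s_right = cell_signal(sources, +0.7)
one_cell_ambiguous = np.isclose(s_left, s_right)

# Three cells at asymmetric offsets break the symmetry: the signal
# vectors at the two positions differ, so matching is unambiguous.
offsets = np.array([0.0, 0.3, 0.8])
v_left = np.array([cell_signal(sources, -0.7 + o) for o in offsets])
v_right = np.array([cell_signal(sources, +0.7 + o) for o in offsets])
three_cells_distinct = not np.allclose(v_left, v_right)
```

This mirrors the finding above that compensation requires a sensory signal rich enough to be continuous and invertible.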
(Table column headings: Parameters; Error on ‖T_a − T_e‖.)

Table 1. Simple model (uncoupled proprioceptive measure). Proprioceptive fields are decoupled, with p = (p_x, p_y): movements in the X direction affect only p_x, and movements in the Y direction affect only p_y. The φ_x and φ_y functions were calculated for different step lengths and different retina-to-agent step-length ratios. We measured the distance ‖T_a − T_e‖ between the applied transformation T_a and the extracted transformation T_e.

Table 4. Testing the group property of the function φ_T by calculating the combination of φ_{T1} and φ_{T2} in order to generate a φ_{T3}, where T_3 = T_1 ∘ T_2. Since the system combined properly with only three retina cells for one source, and with any number of retina cells for 10 sources, further results are not included. The number of steps is 10 × 10 and the retina-to-agent step-length ratio is 1.
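The combination test of Table 4 can be sketched in a few lines, assuming (purely for illustration) translations with an affine proprioceptive model, so that each learned φ_T reduces to a table p → p + T on the sampled grid:

```python
import numpy as np

# Minimal sketch of the group-property test: learned mappings phi_T are
# tabulated on the proprioceptive grid, composed, and checked against
# the directly learned mapping for T1 composed with T2. The affine
# proprioceptive model is an illustrative assumption.

step = 0.5
p_grid = step * np.arange(-8, 9)        # sampled proprioceptive states

def learn_phi(T):
    """Learned mapping phi_T: proprioceptive state before a compensated
    translation T -> state after compensation (p -> p + T here)."""
    return {round(p, 3): round(p + T, 3) for p in p_grid}

def compose(phi_a, phi_b):
    """phi_a after phi_b, restricted to points where the intermediate
    state stays on the sampled grid."""
    return {p: phi_a[q] for p, q in phi_b.items() if q in phi_a}

T1, T2 = 1.0, -0.5
phi_12 = compose(learn_phi(T1), learn_phi(T2))
phi_direct = learn_phi(T1 + T2)          # T1 after T2 is translation T1+T2
# On the shared domain, the composed mapping equals the directly
# learned one: the set of phi_T behaves as a group.
agree = all(phi_direct[p] == v for p, v in phi_12.items())
```

As in Table 4, composing two learned mappings reproduces the mapping learned directly for the composed transformation, up to the boundary of the sampled grid.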
New ideas in neutrino detection

What is new in the field of neutrino detection? In addition to new projects probing both the low and high ends of the neutrino energy scale, an inexpensive, effective technique is being developed to allow tagging of antineutrinos in water Cherenkov (WC) detectors via the addition to the water of a solute with a large neutron cross-section and energetic γ daughters. Gadolinium is an excellent candidate, since in recent years it has become very inexpensive, now less than $8 per kilogram in the form of commercially available gadolinium trichloride. This non-toxic, non-reactive substance is highly soluble in water. Neutron capture on gadolinium yields an 8.0 MeV gamma cascade easily seen in detectors like Super-Kamiokande. The uses of GdCl3 as a possible upgrade for the Super-Kamiokande detector, with a view toward improving its performance as an antineutrino detector for supernova neutrinos and reactor neutrinos, are discussed, as are the ongoing R&D efforts which aim to make this dream a reality within the next two years.

New projects

There are a number of interesting new projects under construction around the world which will extend our understanding of neutrinos. Some of these will be probing very low neutrino energies, while others will look at the very highest energy neutrinos. Table 1 contains a list of some of these new neutrino experiments. It is not a complete list of every new or proposed project, but rather indicates some of the interesting new developments in neutrino detection which we can expect to see in the next few years. Note that these detectors are sensitive to some fifteen orders of magnitude in neutrino energies!

I need patience, and I need it now!

The new projects mentioned in the last section will collect neutrinos from our Sun, our galaxy, and possibly from extragalactic sources as well.
However, none of them will be very good at observing some of the most interesting neutrinos, those produced in supernova explosions. But who has the patience to wait for the next nearby supernova? Theorists and experimentalists alike wonder how we can get more neutrino data like SN1987A provided, as nearby supernovas are fairly rare events. On the other hand, supernovas themselves are not rare at all: on average, there is one supernova explosion somewhere in our Universe every second. Consequently, all the neutrinos which have ever been emitted by every supernova since the onset of stellar formation suffuse the Universe. These constitute the diffuse supernova neutrino background (DSNB), also known as the 'relic' supernova neutrinos. If observable, the DSNB could provide a steady stream of information about not only stellar collapse and nucleosynthesis but also the evolving size, speed, and nature of the Universe itself. What is more, these relic supernova neutrinos travel, on average, six billion light-years before reaching the Earth, certainly the ultimate long baseline for studies of neutrino decay and the like.

In 2003, the Super-Kamiokande Collaboration published the results of a search for these supernova relic neutrinos [1]. Unfortunately, this study was strongly background limited, especially by the many low-energy events below 19 MeV which swamped any possible DSNB signal in that most likely energy range. Consequently, the study saw no statistically significant excess of events and was therefore only able to set upper limits on the DSNB flux. If it were possible to look for coincident signals from these inverse beta events, i.e., for a positron's Cherenkov light followed shortly, and in the same spot, by the gamma cascade of a captured neutron, then these troublesome backgrounds could be greatly reduced. DSNB models vary, but in principle Super-K should then clearly see a few of these events every year.
A much larger, future detector like the proposed Hyper-Kamiokande [2] would, with coincident neutron detection, collect a sample of relic supernova neutrino events equal to what was seen seventeen years ago from SN1987A every month or so. But how can neutron detection be made to work in very large water Cherenkov detectors such as these?

A modest proposal

John Beacom and I are proposing to introduce non-toxic, water-soluble gadolinium (tri)chloride, GdCl3, into the rebuilt Super-Kamiokande-III detector. As neutron capture on gadolinium produces an 8.0 MeV gamma cascade, the inverse beta decay reaction,

ν̄_e + p → e⁺ + n, (1)

in such a modified Super-K will yield coincident positron and neutron capture signals. This will allow a large reduction in backgrounds and greatly enhance the detector's response to both supernova neutrinos (galactic and relic) and reactor antineutrinos. The gadolinium must compete with the hydrogen in the water for the neutrons, as neutron capture on hydrogen yields a 2.2 MeV gamma, which is essentially invisible in Super-K. The neutron stopping power of Gd in solution can be seen in figure 1. So, by using 100 t of GdCl3 we would have 0.1% Gd by mass in the SK tank, and just over 90% of the inverse beta neutrons would be visibly caught by gadolinium. Due to the recent decline in the price of gadolinium as a result of new large-scale production facilities opening up in Inner Mongolia, adding this much GdCl3 to Super-K would cost no more than $500,000 today, though it would have cost $400,000,000 back when SK was first designed. We propose calling this new project 'GADZOOKS!' In addition to being an expression of surprise, here's what it stands for: Gadolinium Antineutrino Detector Zealously Outperforming Old Kamiokande, Super! This proposal is detailed in our recent article [3].
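The 90% figure can be checked with a back-of-the-envelope capture-rate estimate. The sketch below compares number density times thermal capture cross-section for gadolinium and hydrogen in water; the cross-section values are standard thermal-capture numbers (about 48,890 b for natural Gd, 0.33 b for H), and this is a rough illustration rather than the neutron-transport calculation behind figure 1.

```python
# Rough estimate of the fraction of thermal neutrons captured on Gd
# versus H in water doped with gadolinium. Cross-sections are standard
# thermal-capture values (barns); the single-ratio model is an
# illustrative assumption, not a detector simulation.

M_H2O, M_GD = 18.015, 157.25        # molar masses, g/mol
SIGMA_GD = 48_890.0                 # natural Gd thermal capture, barns
SIGMA_H = 0.3326                    # hydrogen thermal capture, barns

def gd_capture_fraction(gd_mass_fraction):
    """Fraction of neutron captures on Gd for a given Gd mass fraction
    in water: each competitor weighted by (moles) x (cross-section)."""
    grams_water = 1000.0 * (1.0 - gd_mass_fraction)   # per kg of mix
    n_h = 2.0 * grams_water / M_H2O                   # mol of H atoms
    n_gd = 1000.0 * gd_mass_fraction / M_GD           # mol of Gd atoms
    gd_rate = n_gd * SIGMA_GD
    return gd_rate / (gd_rate + n_h * SIGMA_H)

frac = gd_capture_fraction(0.001)   # 0.1% Gd by mass
```

Even this simple ratio comes out close to 90% at 0.1% Gd by mass, consistent with the behaviour shown in figure 1.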
Note that this is the only method of detecting neutrons which can be extended to the tens-of-kilotons scale and beyond, and at reasonable expense, adding no more than 1% to the capital cost of detector construction.

Galactic supernova neutrinos

Naturally, if we can do relics, we can do a great job with galactic supernovas, too. With 0.1% gadolinium in the Super-K tank, the copious inverse betas get individually tagged, allowing us to study their spectrum and subtract them away from the directional elastic scatters, which will double our pointing accuracy. The ¹⁶O NC events no longer sit on a large background and are hence individually identified, and the ¹⁶O(ν_e, e⁻)¹⁶F events' backward scatter can be clearly seen, providing a measure of burst temperature and oscillation angle.

In addition, based on event timing alone, Super-K with GdCl3 will be able to immediately identify a neutrino burst as a genuine supernova. This is because the average timing separation between subsequent neutrino interactions would be much longer than the timing separation between coincident events (except for a very close supernova, but in that case see the 'SN early warning' section). Even a modest number of these coincident inverse beta events would be a clear signature of a burst and could not be faked by mine blasting, spallation, or dropped wrenches. These same distinctive inverse beta signatures will allow SK to look for black hole formation (and other interesting things) out to extremely long times after the burst. Above 6 MeV, coincident inverse beta background events, primarily due to the many nuclear power reactors in Japan, will occur at the level of less than one a day. This is to be compared with about 150 single events a day in our final low-energy sample.
Therefore, the presence of Gd in the SK water will mean that signals from a supernova will take much longer to drop below the background level, making late neutrino observations of the cooling SN remnant possible for the first time.

SN early warning

Inspired in part by our GADZOOKS! preprint, another group of scientists has recently pointed out the possibility of being able to tell that a wave of SN neutrinos was about to pass through the Earth [4]. Let us suppose that a relatively large, rather close star, like Betelgeuse, is about to explode as a supernova. Carbon burning takes about 300 years; then neon and oxygen burning each power the star for half a year or so. Finally, silicon ignites, forming an inert iron core. After about two days of Si burning, the star explodes as a supernova. But during silicon burning the star is hot enough (T > 10⁹ K) that the pair annihilation process starts to produce large numbers of ν̄_e's with an average energy of 1.87 MeV. This is coincidentally just above the inverse beta threshold of 1.8 MeV. Therefore, if Super-K has GdCl3 in it when this happens, we would expect to see ∼1000 inverse beta neutron capture singles (the positron is not above Cherenkov threshold) a day. This is seven times the current low-energy singles rate in SK, and could not be missed. No other detector on Earth would know that the main burst was about to arrive; only SK with Gd could do this! Surely the astronomical and neutrino communities, not to mention our gravity-wave colleagues, would appreciate knowing that a nearby star was about to explode.

Now, it is granted that the supernova has to be pretty close: this trick will only work well out to about 1 kiloparsec in Super-K, or 5 kpc in Hyper-K. On the other hand, these are the most valuable bursts, and we would have the most to lose if we missed one due to calibration or scheduled detector downtime. Such downtime could be postponed a few days in the event of a sudden rise in the neutron capture rate.
So, I like to think of this as a supernova insurance policy.

Reactor antineutrinos

It does not have anything to do with the detection of supernova neutrinos, but if we were to introduce a 0.1% solution of gadolinium into Super-Kamiokande, we could collect enough reactor antineutrino data to reproduce KamLAND's first published results [5] in just three days of operation. Their entire planned six-year data-taking run could be reproduced by Super-K with GdCl3 in seven weeks, while Hyper-K with GdCl3 could collect six KamLAND-years of ν̄_e data in just one day. Super-K would collect enough reactor ν̄_e's every day to enable it to monitor, in real time, the total reactor ν̄_e flux. This means that, unlike KamLAND, it would not be dependent on the power companies which operate the reactors accurately reporting their day-to-day power output. Note that these plentiful reactor ν̄_e events would not be confused with the comparatively rare relic supernova ν̄_e's because of the widely differing antineutrino energy ranges and spectra of the two processes. Figure 2 shows the expected coincident signals in a gadolinium-enhanced Super-K.

Also inspired by our GADZOOKS! preprint, another set of scientists has calculated the effect such reactor antineutrino measurements would have on the precision of the solar neutrino mixing parameters [6]. They find that after just three years of data-taking, Super-K with gadolinium could reduce the error on Δm²₁₂ from the current value of ±10% to just over ±1% at the 99% confidence level. This would constitute the first precision determination of one of the fundamental neutrino parameters. The corresponding improvement in the precision of sin²θ₁₂, while not as dramatic as that for Δm²₁₂, would nevertheless be significant in its own right.

Gadolinium R&D

The goals of the ongoing R&D program are to:

(1) Explore the chemistry, stability, and optical properties of GdCl3 in detail.

(2) Understand any changes needed in the SK water system in order to recirculate clean water but not remove the GdCl3 solute.
(3) Soak samples of all materials which comprise the Super-K detector in water containing GdCl3 for a period of greater than one year and then look for any GdCl3-induced damage.

A scaled-down version of the Super-K water filtration system was built at the University of California, Irvine. We are currently using this system to test new water filtration technologies in order to maintain the desired GdCl3 concentration in the otherwise pure water. Gadolinium retention rates of over 99.9% per pass have been achieved. Meanwhile, at Louisiana State University, materials aging studies are underway. After a GdCl3 exposure equal to 30 years at the proposed concentration in Super-K, we see no significant damage to the aged detector components. Preliminary measurements of the optical properties of GdCl3 were conducted in Japan during the spring of 2004, with very promising results.

After two years of these bench tests, I was allowed to use the K2K experiment's one kiloton (KT) water Cherenkov tank, a 2% working scale model of Super-Kamiokande at KEK, for large-scale Gd studies. This was possible only after K2K's long-baseline neutrino beam turned off for good in early 2005 and final post-calibration runs were completed. In November 2005, I introduced 200 kg of GdCl3 into the KT. The good news is that adding gadolinium chloride itself did not hurt the water transparency in the KT tank, and the water filtering system developed at UCI worked perfectly. The bad news is that the chlorine attached to the gadolinium to make it dissolve in water attacked some old rust in the KT tank, which is made of painted iron, and lifted it into solution. This made the water transparency go down and the water change color. Finally, at the end of March 2006, we removed the GdCl3 and drained the KT so we could look inside and be sure of what was happening.
This inspection of the inside of the KT tank showed large areas (about 20% of the total inner surface area) which had not been properly painted back in 1998; these were very rusty. It is not believed that the GdCl3 itself caused the rust. This has been checked with tabletop tests involving clean and pre-rusted iron samples soaked in GdCl3 solutions. As Super-K is made of stainless steel, not (badly) painted iron, we still expect this idea will work in Super-K, though more studies are clearly needed. It has been decided that the next step in the gadolinium R&D will be to build a custom-made tank out of stainless steel and make it as similar to Super-K as possible. In April 2006, Lawrence Livermore National Lab agreed to fund the construction and operation of a stainless steel Gd-testing tank in the US. Construction will most likely begin sometime in July 2006.

We learned a number of important things in the kiloton detector:

(1) GdCl3 is easy to dissolve in water.
(2) GdCl3 itself (i.e., in the absence of old rust) does not significantly affect the light collection.
(3) The choice of detector materials is critical with GdCl3.
(4) The 20-inch Super-K PMTs operate well in conductive water.
(5) Our Gd filtration system works as designed at 3.6 t/h and can easily be scaled up to higher (Super-K level) flows.

All of these findings are of course applicable to putting GdCl3 into Super-Kamiokande someday. Since Super-K is made of good-quality stainless steel, not iron, we do not expect such rust trouble there. Even so, we should (and will) make things work with gadolinium in a stainless steel test tank first. After discussions at the most recent Super-Kamiokande Collaboration meeting in May 2006, it now appears quite likely that the decision will be made to put gadolinium into Super-K sometime in the next two years.
The University of Tokyo is beginning to assign some of their young people to focus on the project, and we now have a gadolinium working group within the Super-K Collaboration; this is extremely encouraging!
A divisive model of evidence accumulation explains uneven weighting of evidence over time

Divisive normalization has long been used to account for computations in various neural processes and behaviors. The model proposes that inputs into a neural system are divisively normalized by the system's total activity. More recently, dynamical versions of divisive normalization have been shown to account for how neural activity evolves over time in value-based decision making. Despite its ubiquity, divisive normalization has not been studied in decisions that require evidence to be integrated over time. Such decisions are important when the information is not all available at once. A key feature of such decisions is how evidence is weighted over time, known as the integration kernel. Here, we provide a formal expression for the integration kernel in divisive normalization, and show that divisive normalization quantitatively accounts for 133 human participants' perceptual decision making behavior, performing as well as the state-of-the-art Drift Diffusion Model, the predominant model for perceptual evidence accumulation.

Divisive normalization has been proposed as a canonical computation in the brain 1 . In these models, the firing rate of an individual neuron is computed as a ratio between its response to an input and the summed activity of a pool of neurons receiving similar inputs. For example, the activity of a visual cortex neuron f_i responding to an input u_i can be computed as the input divided by a constant S plus a normalization factor, the sum of inputs received by the total pool of neurons 1 :

f_i = u_i / (S + Σ_j u_j)    (1)

Divisive normalization models such as described in Eq. (1) have been used successfully to describe both neural firing and behavior across a wide range of tasks, from sensory processing in visual and olfactory systems 2-5 to context-dependent value encoding in premotor and parietal areas 6 .
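In code, the static normalization described above is just a ratio of each input to the pooled activity. A minimal numpy sketch (the input values and the constant S are illustrative, not values from the paper):

```python
import numpy as np

def divisive_normalization(u, S=1.0):
    """Normalize each input by S plus the summed input to the pool."""
    u = np.asarray(u, dtype=float)
    return u / (S + u.sum())

# A strong input alongside weak ones is compressed relative to its raw value,
# and the normalized responses always sum to less than one:
f = divisive_normalization([10.0, 1.0, 1.0], S=1.0)
```

Because the denominator grows with the total input, the same stimulus evokes a smaller normalized response when it appears in a crowded context, which is the essence of the contextual effects discussed next.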
For example, in the visual domain, divisive normalization explains surround suppression in primary visual cortex, where the response of a neuron to a stimulus in the receptive field is suppressed when there are additional stimuli in the surrounding region 7 . Analogously, in economic decision making, divisive normalization explains how activity in parietal cortex encodes the value of a choice option relative to the other available alternatives instead of its absolute value 6 . More recently, dynamic versions of divisive normalization models have been used to describe how neural activity in economic decision making tasks evolves over time 8,9 . Despite the success of divisive normalization models, they have never been studied in situations that require evidence to be integrated over time. Such evidence accumulation is important in many decisions when we do not have all the information available at once, such as when we integrate visual information from moment to moment as our eyes scan a scene. In the lab setting, evidence accumulation has typically been studied in perceptual decision making tasks over short periods of time. In one such task, called the Poisson Clicks Task 10 , participants make a judgment about a train of auditory clicks. Each click comes into either the left or right ear, and at the end of the train of clicks participants must decide which ear received more clicks. The optimal strategy in this task is to count, i.e., integrate, the clicks on each side and choose the side with the most clicks. A key feature of any evidence accumulation strategy is how evidence is weighted over time, which is also known as the kernel of integration. For example, in the optimal model of counting, each click contributes equally to the decision, i.e., all clicks are weighed equally over time. In this case, the integration kernel is flat: the weight of every click is the same.
While such flat integration kernels have been observed in rats and highly trained humans 10 , there is considerable variability across species and individuals. For example, Yates and colleagues 11 showed that monkeys exhibit a strong primacy kernel, in which early evidence is overweighted. The opposite, a recency kernel in which early evidence is underweighted, has been observed in humans 12,13 . Recently, in a large-scale study of over a hundred human participants, we found that different people use different kernels, with examples among the population of flat, primacy, and recency effects. Intriguingly, however, the most popular kernel in our experiment was a bump-shaped kernel in which evidence in the middle of the stimulus was weighed more than either the beginning or the end 14 . In this work, we show how dynamic divisive normalization 8 can act as a model for evidence accumulation in perceptual decision making. We provide theoretical results for how the model integrates evidence over time and show how dynamic divisive normalization can generate all four integration kernel shapes: primacy, recency, flat, and (most importantly) the bump kernel, which is the main behavioral phenotype in our task 14 . In addition, we provide experimental evidence that divisive normalization can quantitatively account for human behavior in an auditory perceptual decision making task. Finally, with formal model comparison, we show that divisive normalization fits the data quantitatively as well as the state-of-the-art Drift Diffusion Model (DDM), the predominant model for perceptual evidence accumulation, with the same number of parameters. Results A divisive model of evidence accumulation. Our model architecture was inspired by the dynamic version of divisive normalization developed by Louie and colleagues to model neural activity during value-based decision making 8 . We assume that the decision is made by comparing the activity in two pools of excitatory units, R_left and R_right (Fig.
1a). These pools receive time-varying inputs C_left and C_right. In the Clicks Task (below), these inputs correspond to the left and right clicks; more generally, they reflect the momentary evidence in favor of one choice over the other. An inhibitory gain control unit G, which is driven by the total activity in the excitatory network, divisively inhibits the R unit activity. The time-varying dynamics of the model can be described by the following system of differential equations:

τ_R dR_left/dt = −R_left + C_left / (1 + G),  τ_R dR_right/dt = −R_right + C_right / (1 + G)    (2)

τ_G dG/dt = −G + ω_I (R_left + R_right)    (3)

A decision is formed by comparing the difference in activity δ between the two R units:

δ = R_left − R_right    (4)

Example simulated dynamics of the R and G units for punctate inputs (of the form used in the Clicks Task) are shown in Fig. 1b, c. The model has three free parameters: τ_R, τ_G, and ω_I. As is clear from this plot, the R unit activity integrates the input, C, over time, with each input increasing the corresponding R unit activity. In addition, closer inspection of Fig. 1b reveals that the inputs have different effects on R over time; for example, compare the effect of the first input on the right, which increases R_right considerably, to that of the last input on the right, which increases R_right much less. This suggests that the model with these parameter settings integrates evidence over time, but with an uneven weighting for each input. Divisive normalization generates different kernel shapes. How can we quantify the integration kernel (how much each piece of evidence weighs) given by a circuit that generates divisively normalized coding? We integrate the set of differential equations to provide an explicit expression for the integration kernel. We first consider the evolution of the difference in activity, δ, over time. In particular, from Eqs. (2) and (4), we can write

τ_R dδ/dt = −δ + ΔC / (1 + G)    (5)

where ΔC is the difference in input,

ΔC = C_left − C_right    (6)

We can then integrate Eq.
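The circuit just described can be illustrated with a simple Euler integration. The sketch below follows the verbal description (leaky R units whose input is divided by 1 + G, and a G unit driven by total R activity); the exact parameter values, time step, and click-gain scaling are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def simulate_dn(clicks_left, clicks_right, tau_R=0.2, tau_G=0.1,
                omega_I=1.0, dt=0.001, T=1.0, click_gain=1.0):
    """Euler-integrate the assumed dynamics:
       tau_R dR_i/dt = -R_i + C_i/(1+G);  tau_G dG/dt = -G + omega_I*(R_l+R_r).
       clicks_* are lists of click times in seconds; clicks enter as
       punctate inputs (one-step impulses of area click_gain)."""
    n = int(T / dt)
    R_l = R_r = G = 0.0
    delta = np.zeros(n)
    C_l = np.zeros(n)
    C_r = np.zeros(n)
    for t in clicks_left:
        C_l[int(round(t / dt))] += click_gain / dt
    for t in clicks_right:
        C_r[int(round(t / dt))] += click_gain / dt
    for k in range(n):
        dR_l = (-R_l + C_l[k] / (1 + G)) / tau_R
        dR_r = (-R_r + C_r[k] / (1 + G)) / tau_R
        dG = (-G + omega_I * (R_l + R_r)) / tau_G
        R_l += dt * dR_l
        R_r += dt * dR_r
        G += dt * dG
        delta[k] = R_l - R_r          # decision variable, Eq. (4)
    return delta

# Two left clicks versus one right click: delta ends positive (leftward evidence).
d = simulate_dn(clicks_left=[0.1, 0.5], clicks_right=[0.3])
```

Plotting `d` over time reproduces the qualitative picture described for Fig. 1b: each click kicks the decision variable, but later kicks are smaller because G has grown.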
(5) to compute the following formal solution for δ as a function of time (for details of the derivation, see Methods section):

δ(t) = ∫_0^t K(t, t′) ΔC(t′) dt′, where K(t, t′) = e^{(t′ − t)/τ_R} / (τ_R (1 + G(t′)))    (7)

This expression shows explicitly that the activity of the network acts to integrate the inputs ΔC over time, weighing each input by the integration kernel function K(t, t′). Importantly, K(t, t′) represents the degree to which evidence ΔC at time t′ contributes to the decision. While clearly not a closed-form expression for the integration kernel (notably, K(t, t′) still depends on G(t)), Eq. (7) gives some intuition into how evidence is accumulated over time in this model. In particular, the kernel can be written as a product of two factors: an exponential function (Fig. 2a) and the inverse of the G activity (Fig. 2b). The exponential function is increasing over time, and since G is increasing with time (Fig. 1c), the inverse of G is decreasing over time. Under the right conditions, the product of these increasing and decreasing functions can produce a bump-shaped kernel (Fig. 2c). More intuitively, we can consider the integration kernel as being affected by two processes: the leaky integration in R and the increasing inhibition by G. If we consider the start of the train of clicks, when G is small, the model acts as a leaky integrator (Eq. (2)), which creates a recency bias since earlier evidence is "forgotten" through the leak. Over time, as G unit activity increases, G exerts an increasing inhibition on R, and when inhibition overcomes the leaky integration, later evidence is weighed less than the preceding evidence. These intuitions suggest that the shape of the integration kernel is determined by a balance between how fast the leaky integration in R happens (the rate of R) and how fast the inhibitory G activity grows (the rate of G). These two rates are determined by the inverses of the time constants τ_R and τ_G, respectively; i.e., when τ is large, the rate is slow.
The balance between the rate of R and the rate of G can then be described by the ratio τ_R/τ_G: when τ_R is larger than τ_G, R activity is slower than G activity; similarly, when τ_R is smaller than τ_G, R activity is faster than G activity. To investigate how integration kernels change depending on the ratio between the rate of R and the rate of G, we simulated the integration kernel using different τ_R/τ_G ratios, and show that the integration kernel shape changes from primacy, to bump, to flat, and then to recency as τ_R/τ_G decreases (Fig. 2d). When τ_R/τ_G is much larger than 1, the rate of integration is much slower than the rate of inhibition by G. This inhibition suppresses input from later evidence, thus producing a primacy kernel. As τ_R/τ_G decreases towards one (τ_R decreases and τ_G increases), inhibition slows down and allows for leaky integration to happen, thus producing a bump kernel. When τ_R/τ_G reaches one, i.e., the two rates balance out, a flat kernel is generated. Finally, when τ_R/τ_G decreases to below one, leaky integration overcomes inhibition, generating a recency kernel.

Humans exhibit uneven integration kernel shapes. To examine the model in the context of behavior, we looked at behavioral data from 133 human participants. Most of these data (108 subjects) were previously published 14 . We observed that a large cohort of human participants weighed evidence unevenly when performing an auditory decision making task adapted from the Poisson Clicks Task 10 . In this task, on every trial participants listened to a train of 20 clicks over 1 s at 20 Hz (Fig. 3a). Each click was on either the left or the right side. At the end of the train of clicks, participants decided which side had more clicks. Participants performed between 666 and 938 trials (mean 750.8) over the course of approximately 1 h. Basic behavior in this task was comparable to that in similar perceptual decision making tasks in previous studies 10,11 .
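The dependence of kernel shape on the τ_R/τ_G ratio can be probed numerically. The sketch below simulates the gain unit G(t) for a regular 20 Hz click train and then evaluates a kernel of the form exp((t′ − t)/τ_R)/(τ_R (1 + G(t′))) discussed in the text; all parameter values and the click-gain scaling are illustrative assumptions.

```python
import numpy as np

def kernel_shape(tau_R, tau_G, omega_I=1.0, dt=0.001, T=1.0):
    """Simulate G(t) for a regular 20 Hz train of unit clicks, then evaluate
       the kernel K(T, t') = exp((t'-T)/tau_R) / (tau_R * (1 + G(t')))."""
    n = int(T / dt)
    R = G = 0.0
    Ginv = np.zeros(n)
    C = np.zeros(n)
    for t in np.arange(0.0, T, 0.05):        # one click every 50 ms
        C[int(round(t / dt))] += 1.0 / dt    # punctate unit input
    for k in range(n):
        dR = (-R + C[k] / (1 + G)) / tau_R
        dG = (-G + omega_I * R) / tau_G
        R += dt * dR
        G += dt * dG
        Ginv[k] = 1.0 / (1.0 + G)
    tp = dt * np.arange(n)                   # grid of evidence times t'
    return np.exp((tp - T) / tau_R) * Ginv / tau_R

# Slow integration relative to inhibition (tau_R >> tau_G): early evidence
# dominates (primacy-like); the reverse ratio yields a recency-like kernel.
K_primacy = kernel_shape(tau_R=1.0, tau_G=0.05)
K_recency = kernel_shape(tau_R=0.05, tau_G=1.0)
```

Intermediate ratios of the two time constants produce the bump and flat shapes between these two extremes, matching the qualitative progression described for Fig. 2d.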
Choice exhibited a characteristic sigmoidal dependence on the net difference in clicks between left and right (Fig. 3b). We quantified the integration kernel, i.e., the impact of every click on choice, with logistic regression in which the probability of choosing left on trial k was given by

logit(P(choose left on trial k)) = β_base + Σ_{i=1}^{20} β_click_i ΔC_i    (8)

where ΔC_i is the difference between left and right for the ith click (i.e., ΔC_i = C_left,i − C_right,i; therefore, ΔC_i was +1 for a left click and −1 for a right click). The integration kernel was quantified by the regression weights β_click_i, and β_base characterized the overall bias. We found that participants weighed the clicks unevenly over time (repeated measures ANOVA on β_click_i : F(19, 2508) = 34.47, p < 0.00001). Importantly, post-hoc Tukey's test showed that the middle of the kernel was significantly higher than either the beginning or the end of the click train (the 3rd-9th clicks were higher than the 1st click, and the 10th-12th clicks were higher than the 16th-20th clicks, p < 0.00001), which indicated that on average participants tended to weigh the middle of the click train more than the beginning or the end, forming a bump-shaped kernel (Fig. 3c). This uneven kernel shape was the source of approximately 27% of the total errors in participants' choices (see Supplementary Note 1 and Supplementary Fig. 1).

[Fig. 2 caption: How divisive normalization generates different integration kernel shapes. a-c Example simulation demonstrates how the two components in the integration kernel K (Eq. (7)) combine to generate a bump-shaped kernel. K (c) is a product of an increasing exponential function (a) and the inverse of 1 + G (b), which is decreasing over time. d Simulations of primacy, bump, flat, and recency integration kernels using decreasing log ratios of τ_R and τ_G, demonstrating that the shape of the integration kernel is determined by a balance between the rate of the leaky integration in R and the rate of the G inhibition.]

Divisive normalization fits different kernels in humans.
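The kernel-estimation procedure, a logistic regression of choice on the per-click evidence, can be sketched as follows. The synthetic data, the ground-truth bump kernel, and the plain gradient-ascent fitting loop are illustrative stand-ins for the paper's actual data and solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_clicks = 5000, 20

# Synthetic click trains: +1 = left click, -1 = right click.
X = rng.choice([-1.0, 1.0], size=(n_trials, n_clicks))

# Ground-truth bump-shaped kernel (illustrative values, peaked mid-stimulus).
true_beta = np.exp(-((np.arange(n_clicks) - 8) ** 2) / 40.0)
p_left = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.random(n_trials) < p_left).astype(float)

# Fit logit P(left) = beta_base + sum_i beta_i * dC_i by gradient ascent
# on the Bernoulli log likelihood.
beta = np.zeros(n_clicks + 1)
Xb = np.hstack([np.ones((n_trials, 1)), X])   # prepend a bias column
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ beta)))
    beta += 0.1 * Xb.T @ (y - p) / n_trials   # gradient of mean log likelihood

kernel = beta[1:]   # estimated per-click weights = integration kernel
```

With enough trials the recovered `kernel` tracks the generating weights, which is the logic behind reading the regression coefficients as the integration kernel.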
To investigate whether our divisive model could account for the range of integration kernels observed in human behavior, we fit the model to participants' choices using a maximum likelihood approach. To fit the model to human behavior, we assumed that a choice is made by comparing the activity in the two R units (i.e., δ = R_left − R_right) with some noise, parameterized by σ, and an overall side bias (i.e., an overall bias to either left or right). We also added an additional offset parameter μ to the kernel. With Eq. (7), the probability of choosing left at trial k can then be computed (Eq. (9)). We computed the probability of a choice on a given trial at t = T, where T is the time at the end of the stimulus. The model has a total of six free parameters (τ_R, τ_G, inhibition weight ω_I, noise σ, offset μ, and an overall bias). Using the parameters that best fit each participant's choices, we first reconstructed the integration kernel from divisive normalization for each participant from the kernel function (Eqs. (7) and (9)). Divisive normalization can account for all four types of integration kernel in human participants (Fig. 4a-d and Supplementary Fig. 3). We also used divisive normalization to generate simulated choices for each participant for each trial using the best-fitting parameters, and showed that the resulting psychometric curve also matched that of the human participants well (Fig. 4e and Supplementary Fig. 4). The distributions of best-fitting parameters are plotted in Supplementary Figs. 5 and 6. Our simulations in the previous section suggested that by shifting the balance between the integration and inhibition time constants (the ratio τ_R/τ_G), divisive normalization can generate the four types of kernel. We therefore examined the fitted parameter values in terms of τ_R/τ_G.
We found that log(τ_R/τ_G) is significantly different across kernel shapes (one-way ANOVA F(3, 129) = 12.64, p < 0.001); post-hoc Tukey's test showed that this difference is driven by the difference in log(τ_R/τ_G) between participants with a primacy kernel and those with a bump kernel (Fig. 4f), which is in line with our prediction. However, log(τ_R/τ_G) in participants with flat and recency kernels is indistinguishable from that in participants with either bump or primacy kernels, suggesting that the ratio of time constants is not the only factor determining integration kernel shape.

Divisive normalization performs as well as the DDM does. Finally, to demonstrate that divisive normalization is comparable to an established model for such evidence accumulation tasks, we compared our model quantitatively to the Drift Diffusion Model (DDM), the predominant model used to account for this type of perceptual evidence accumulation behavior. In its simplest form, the DDM assumes that an accumulator a integrates incoming evidence over time (for example, in our task the evidence is +1 for a left click and −1 for a right click), with some amount of noise σ_a added at every time step. In addition, a bias term is added to describe an overall bias toward choosing either left or right. In an interrogation paradigm such as ours, a decision is made by comparing the accumulator activity with the bias when the stimulus ends; e.g., in our task, if the accumulator activity is larger than the bias, the model chooses left. Intuitively, this form of DDM,

da = C dt + σ_a dW    (10)

with only drift (input, i.e., clicks C) and diffusion (noise added by the Wiener process W), and without any bound, would predict that every piece of evidence over time is integrated with equal weight, i.e., a flat integration kernel. Thus, the most basic form of the DDM should not be able to generate a bump-shaped integration kernel.
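A sketch of the boundless drift-plus-diffusion choice rule described here: with no bound or leak, the accumulator at stimulus end is simply the sum of the click evidence plus accumulated Gaussian noise, which is exactly why every click is weighted equally. Parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ddm_choice(dC, sigma_a=0.5, bias=0.0, dt=0.05):
    """Pure drift-diffusion accumulator for one trial (drift = click evidence,
       diffusion = Wiener noise, no bound). Because there is no bound or leak,
       the final state is just the sum of the evidence plus summed noise.
       Returns 1 (left) if the accumulator beats the bias, else 0 (right)."""
    noise = sigma_a * np.sqrt(dt) * rng.standard_normal(len(dC)).sum()
    a = np.sum(dC) + noise
    return 1 if a + bias > 0 else 0

# 12 left clicks vs 8 right clicks: the model almost always chooses left,
# and shuffling click order never changes the choice distribution (flat kernel).
dC = np.array([+1.0] * 12 + [-1.0] * 8)
frac_left = np.mean([ddm_choice(rng.permutation(dC)) for _ in range(200)])
```

The permutation invariance in the usage line is the flat-kernel prediction in executable form: only the click count matters, not the click timing.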
An extension can be added to the standard DDM in the form of a "memory drift" to account for primacy or recency integration kernels as well. This memory drift λ adds a self-coupling term λa to the accumulator,

da = λa dt + C dt + σ_a dW    (11)

where λ acts to maintain the memory of the evidence estimate. When memory is subtractive (λ < 0), the DDM becomes leaky and earlier evidence is "forgotten" and thus weighed less, creating a recency bias. When memory is additive (λ > 0), accumulator activity drifts exponentially over time, and the direction of the drift is determined by the initial stimulus, thus creating a primacy effect. When λ = 0, the LCA (leaky competing accumulator; Eq. (11)) reduces to the basic DDM (Eq. (10)), and the integration kernel is flat. However, the LCA alone should not be able to generate a bump-shaped kernel. Brunton and colleagues extended the LCA to include additional processes 10 : First, a bound, B, that describes the threshold of evidence at which the model makes a decision. In the context of an interrogation paradigm, evidence coming after the bound has been crossed is ignored. Second, a sensory adaptation process which controls the impact of successive clicks on the same side. This process is controlled by two adaptation parameters: (1) the direction of adaptation ϕ, which dictates whether the impact of a click on one side either increases (ϕ > 1) or decreases (ϕ < 1) with the number of clicks that were previously on the same side; and (2) a time constant τ_ϕ, which determines how quickly the adapted impact recovers to 1. Overall, the Brunton model has six free parameters: neuronal noise, memory drift, bound, two parameters controlling sensory adaptation, and bias. We fit these parameters using the maximum likelihood procedure described in the work of Brunton and colleagues 10 , following code from Yartsev and colleagues 17 . We generated choices for each participant using the best-fitting parameters, and computed an integration kernel for each participant using these model-generated choices.
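The sensory adaptation rule can be sketched as follows. This is one interpretation of the verbal description above (multiplicative adaptation by ϕ after each same-side click, with exponential recovery toward 1 at time constant τ_ϕ), not the exact Brunton et al. implementation.

```python
import numpy as np

def adapted_impacts(click_times, phi=0.5, tau_phi=0.2):
    """Adapted impact of each click in one side's train (sketch): the
       adaptation state is multiplied by phi after every click and recovers
       exponentially toward 1 with time constant tau_phi between clicks."""
    impacts = []
    a = 1.0          # adaptation state; the first click always has impact 1
    t_prev = None
    for t in click_times:
        if t_prev is not None:
            # exponential recovery of the adaptation state toward 1
            a = 1.0 + (a - 1.0) * np.exp(-(t - t_prev) / tau_phi)
        impacts.append(a)
        a *= phi     # this click adapts the impact of the next one
        t_prev = t
    return impacts

# With phi < 1, closely spaced same-side clicks are progressively suppressed:
imp = adapted_impacts([0.0, 0.05, 0.10], phi=0.5, tau_phi=0.2)
```

Running the same train with ϕ > 1 instead yields facilitation (impacts above 1), matching the two regimes of the adaptation parameter described in the text.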
To establish the validity of divisive normalization, we compared it with two variations of the DDM: (1) the LCA and (2) the Brunton DDM. We first show that, confirming our intuition, the leak and competition in the LCA (Eq. (11)) can produce primacy, recency, and flat effects, but the LCA alone cannot fit the bump kernel (Supplementary Note 3, Supplementary Fig. 7, and Supplementary Table 1). We found that only after introducing both the bound and sensory adaptation (i.e., the Brunton model) can the DDM account for the behavioral data as well as divisive normalization can, both in formal model comparison using log likelihood, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC) (Table 1), and in the integration kernel and choice curve (Supplementary Note 4, Supplementary Fig. 8, and Supplementary Table 1; distributions of fitted parameters are plotted in Supplementary Fig. 11). Importantly, we show that the full Brunton DDM as reported in ref. 10, with nine parameters, accounts for the behavioral data equally well (Supplementary Note 5, Supplementary Fig. 9, and Supplementary Table 1), suggesting that increasing the number of parameters does not improve model performance significantly. We also show that the LCA with the addition of just a bound does not account for the bump-shaped integration kernel either (Supplementary Note 6, Supplementary Fig. 10, and Supplementary Table 1), suggesting that decreasing the number of parameters worsens model performance. This result, that divisive normalization can account for behavior as well as the DDM can, further supports divisive normalization as a model for evidence accumulation.

Discussion

In this work, we propose dynamic divisive normalization as a model for perceptual evidence accumulation.
Theoretically, we provide a formal expression for the integration kernel, i.e., how this model weighs information over time, and we show how the shape of the integration kernel falls naturally out of divisive normalization as the result of a competition between a leak term and a dynamic change in input gain. Experimentally, we show how dynamic divisive normalization can account for the integration kernels of human participants in an auditory perceptual decision making task. In addition, with quantitative model comparison, we show that dynamic divisive normalization explains participants' choices as well as the state-of-the-art Drift Diffusion Model (DDM), the predominant model for such perceptual evidence accumulation tasks. Together, these results suggest that evidence accumulation can arise from a divisive normalization computation achieved through the interactions within a local circuit.

[Fig. 4f caption: Post-hoc Tukey's test indicates the primacy kernel has a significantly higher τ_R to τ_G ratio than the bump kernel has. On each box, the round marker indicates the mean and the vertical line indicates the median. The left and right edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers. The size of the round markers is scaled by the number of participants in each kernel group.]

The result that the LCA alone does not account for the bump-shaped kernel is particularly interesting. Both divisive normalization and the LCA produce different integration kernels via a tradeoff between leak and competition. The main difference is that the competition in the LCA is subtractive whereas the competition in divisive normalization is divisive. Superficially, these two types of competition may seem similar in the sense that they both reduce accumulator activity, but they actually produce qualitatively different behavioral hypotheses.
Specifically, behaviorally, we have shown that the leak and competition tradeoff in the LCA alone cannot produce the bump kernel. The addition of both a bound and sensory adaptation to the pure LCA is necessary to account for our human behavioral data, whereas the leak and competition tradeoff alone in divisive normalization can account for all four integration kernels, including the bump kernel. Importantly, our results also indicate that the leak and inhibition in divisive normalization are not the only factors influencing integration kernel shape. An interesting line of future work would be to understand how the different parameters in divisive normalization trade off with each other to produce different kernel shapes. While our findings suggest that our model accounts well for human behavior in this one task, an obvious question is whether dynamic divisive normalization is at play in other types of evidence accumulation and in other decisions. For example, the Drift Diffusion Model has been used to model evidence accumulation in a number of paradigms, from auditory clicks 10,17,18 to visual discrimination 19-21 to random dot motion 16,22-24 . Likewise, the DDM can account for choice and reaction time data in quite different settings such as memory retrieval 25 , cognitive control 26 , and economic and value-based decision making 27-32 . Is divisive normalization also at play in these cases? If divisive normalization is a canonical neural computation, then the simple answer is "it must be", but whether its influence extends to behavior is largely unknown (although see the emerging literature on divisive normalization in economic and value-based decisions 6,8,33 ). If people are using divisive normalization in these decisions, then what computational purpose does it serve?
From a computational perspective, the DDM is grounded in the sequential probability ratio test, which is the optimal solution to evidence accumulation problems for two-alternative decisions under certain assumptions 16,34 . Is divisive normalization optimal under other decision making constraints? In this regard, an intriguing finding by Tajima and colleagues suggests that time-varying normalization may be almost optimal for multi-alternative decisions by implementing time-dependent, nonlinear decision boundaries in a free response paradigm 35 . While such a decision boundary may not be optimal in the current task, which uses an interrogation paradigm, it may be optimal in a free response paradigm. This idea can be tested in future experiments with a free response paradigm where having a decision boundary is necessary. Other advantages of divisive normalization may be its ability to encode the state of the accumulator over a wide dynamic range of evidence 1,36 , or its relation to optimal Bayesian inference in some cases 37 . Of course, an alternate account is that divisive normalization is necessary for other functions (e.g., balancing excitation and inhibition 1 ) and the behavior we observe is simply the exhaust fumes of this function leaking out into behavior. On the other hand, stochastic choice variability is fit with a single, time-invariant noise term in divisive normalization, whereas the DDM typically models noise as time (or stimulus) dependent. This does not pose a problem for the model in accounting for the current data, because only a single stimulus duration was used, but it might limit the generalizability of the model. At the neural level, an obvious question is whether our model can explain neural data. In this regard, it is notable that our model was adapted from Louie et al.'s model of lateral intraparietal (LIP) area neurons 8 .
LIP has long been thought to contain a neural representation of the state of the accumulator 38-40 , and it is likely that, just as Louie's model accounts for the firing of LIP neurons in his task, our model may well be consistent with many of these past results. However, the accumulator account of LIP has recently been challenged 41-44 , and other areas in prefrontal cortex 45-48 and striatum 49 have been implicated in evidence accumulation. Whether our divisive normalization explains neural firing in these areas is unknown. On the other hand, it is important to note that the timescales in our application of the divisive normalization model should probably be interpreted as something more high-level than synaptic timescales. Normalization has long been found to be a computation associated with multiple mechanisms and circuits 1 . One possible interpretation in the context of our work is that the model timescale reflects the timescale at the circuit level, and one possible reason for the variance across individuals could be modulation by neuromodulatory systems such as norepinephrine. Furthermore, we note that other neural network models of evidence integration have also been proposed, perhaps most importantly the model of Wang 50 . In its simplest form, the Wang model also considers two mutually inhibiting units that, superficially, look similar to the R units in Fig. 1a. However, the dynamics of the Wang model and the way it makes decisions are quite different. In particular, the mutual inhibition is calibrated in such a way that the Wang network has two stable attractor states corresponding to the outputs of the decision (e.g., left or right). The input, combined with the dynamics of the network, pushes the network into one of the two attractor states, which corresponds to the decision the network makes.
Because the attraction of an attractor gets stronger the closer the network gets to it, the initial input to the model has a strong effect on the ultimate decision, leading to a pronounced primacy effect in the Wang model. In contrast to the Wang attractor model, our dynamic divisive normalization is essentially a line attractor network, with a single fixed point in A-G space which is stable for all values of δ (Supplementary Note 7 and Supplementary Fig. 12). This structure allows divisive normalization to exhibit a number of different integration kernels, as shown in Fig. 2, depending on the parameters. However, it is important to note that the two models differ in nature and explanatory power: the attractor model directly implements the choice mechanism, whereas the divisive normalization model implements only the decision variable computation, and choice has to be computed separately by putting the decision variable through a softmax function. Finally, several important questions remain to be answered. The bump-shaped kernel is a novel behavior in our task and stands in contrast to previously published results in humans in a similar auditory clicks task 10 , which observed a flat integration kernel. So what is causing this difference? One possible explanation is that behavior in this kind of task is extremely varied across participants (as suggested by our data), and that the larger number of human participants in our study better samples the whole range of behavior. Importantly, our previous work 14 has shown that the fitted parameter values of the DDM in our participants are consistent with those reported by Brunton and colleagues 10 . In addition, Wyart and colleagues have shown that the decision weights of incoming pieces of evidence fluctuate with slow cortical oscillations 51 .
Even though they did not directly observe an uneven integration kernel in their data, their result that decision weights correlate with a slow rhythmic pattern is consistent with the bump kernel observed in our data, suggesting that the bump kernel may generalize to other tasks. Future work investigating the relationship between neural activity and behavior in our task would further test this idea of synchronization between cortical oscillations and the integration kernel. Neural data could also shed light on why we observe such large individual differences in integration kernel (and, by implication, processing time) across participants. There is also the question of how to interpret the result that the Brunton et al. model requires not only a decision bound but also sensory adaptation to account for the bump-shaped kernel. This would suggest that the individual differences in kernels are caused by a difference at the sensory processing level rather than at the decision making level. However, we note that the Brunton model may not be the only variant of the DDM that could account for this bump-shaped kernel; other potential candidates include a DDM with a collapsing bound, a DDM with a variable drift rate, etc. Future work to answer these questions would compare these models on different kinds of datasets. In sum, dynamic divisive normalization can account for human behavior in an auditory perceptual decision making task, but much evidence remains to be accumulated before we can be sure that this model is correct!

Methods

Participants. One hundred eighty-eight healthy participants (University of Arizona) took part in the experiment. We analyzed the data from 133 participants (55 participants were excluded due to poor performance, defined as accuracy lower than 60%). We have complied with all relevant ethical regulations for work with human participants. The experiment was approved by the Institutional Review Board at the University of Arizona.
All participants provided informed written consent prior to the experiment. Experimental procedures. Participants made a series of auditory perceptual decisions. On each trial they listened to a series of 20 auditory "clicks" presented over the course of 1 s. Clicks could be either "Left" or "Right" clicks, presented in the left or right ear. Participants decided which ear received the most clicks. In contrast to the Poisson Clicks Task 10 , in which the click timing was random, clicks in our task were presented every 50 ms with a fixed probability (p = 0.55) of occurring in the "correct" ear. The correct side was determined with a fixed 50% probability. Participants performed the task on a desktop computer, while wearing headphones, and were positioned in chin rests to facilitate eye-tracking and pupillometry. They were instructed to fixate on a symbol displayed in the center of the screen, where response and outcome feedback was also displayed during trials, and made responses using a standard keyboard. Participants played until they made 500 correct responses or until 50 min of total experiment time was reached. Psychometric curve. Psychometric curves show the probability of the participant responding leftward as a function of the difference between the number of left clicks and the number of right clicks, C_left − C_right. The identical procedure was used to produce model-predicted curves, where the model-predicted probability of choice on each trial was used instead of the participants' responses. Integration kernel. To measure the contribution of each click to the participant's choice on each trial, we used logistic regression, given by logit(Y) = βX, where Y ∈ {0, 1} is a vector of the choice on each trial and X is a matrix in which each row contains the 20 clicks (ΔC = C_left − C_right) on that trial, coded as +1 for left and −1 for right.
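To make the stimulus statistics and the kernel regression concrete, here is a minimal Python sketch (ours, not the authors' released MATLAB code; the bump-shaped generating kernel and all variable names are illustrative assumptions). It simulates the fixed-rate clicks task, generates choices from a known kernel, and then recovers that kernel by fitting the logistic regression with plain gradient ascent on the Bernoulli log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Simulate the fixed-rate clicks task --------------------------------
# 20 clicks per trial; each click lands in the "correct" ear with p = 0.55;
# the correct side itself is drawn with 50% probability.
n_trials, n_clicks = 5000, 20
correct_is_left = rng.random(n_trials) < 0.5
goes_correct = rng.random((n_trials, n_clicks)) < 0.55
sign = np.where(correct_is_left, 1, -1)[:, None]
X = np.where(goes_correct, sign, -sign)        # +1 = left click, -1 = right

# --- Simulate choices from a known bump-shaped kernel (illustrative) ----
true_kernel = np.exp(-0.5 * ((np.arange(n_clicks) - 8) / 4.0) ** 2)
p_left = 1.0 / (1.0 + np.exp(-X @ true_kernel))
y = (rng.random(n_trials) < p_left).astype(float)   # 1 = leftward choice

# --- Recover the integration kernel: logit(Y) = X @ beta ---------------
# Gradient ascent on the logistic log-likelihood; gradient is X.T @ (y - p).
beta = np.zeros(n_clicks)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - p) / n_trials

# beta now approximates the generating kernel: a mid-stream bump.
```

With enough trials the fitted weights peak mid-stream, mirroring the generating bump; in the paper the same regression is run on participants' actual choices, and again on model-predicted choices to obtain model kernels.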
The identical procedure was used to produce model-predicted integration kernels, where the model-predicted choice on each trial was used instead of the participants' responses. Derivation of the kernel function of divisive normalization. The model and the dynamical equations for R and G are described in the main text. These are reproduced here: From Eq. (1a) we can consider how the difference in activity δ(t) = R_left(t) − R_right(t) changes over time: where ΔC(t) = C_left(t) − C_right(t) describes the difference in input over time. To derive a formal expression for the kernel function, we integrate Eq. (3a) using the ansatz: Taking the derivative of (4a) and multiplying both sides by τ_R, we get: Combining Eqs. (3a), (4a), and (5a), we get: Integrating Eq. (7a), we get: Substituting Eq. (8a) back into Eq. (4a), we get Maximum likelihood estimate. We fit all the models to participants' choice data using a maximum likelihood approach. To evaluate how well a particular set of parameter values fits the behavioral data of a particular participant, we compute the probability of observing the data given the model. Assuming the trials are independent, we can compute the probability of observing the data, D, given the model, m, as the following: where D is the full set of the participant's choices across all trials, θ_m is the set of parameters for a particular model m (e.g., divisive normalization), and d_k is the participant's choice on trial k. The best-fit parameter values (i.e., maximum likelihood values) are the parameters θ_m that maximize the logarithm of Eq. (10a), i.e., the log likelihood LL: where p(d_k | θ_m, m) is the probability of the single choice made at trial k given the parameters θ_m. For our model, this probability is given in Eq. (9). Optimization of model parameters.
After computing the log likelihood LL per the description above, we pass the negative log likelihood (whose minimum is at the same parameter values as the maximum of the positive log likelihood) to the fmincon.m function from Matlab's optimization toolbox using its interior-point algorithm, which implements the parameter optimization. The output from fmincon.m is the set of parameter values that maximizes the likelihood of the data. We used on average 360 starting points (with random initial conditions) for each participant to avoid fmincon finding only a local minimum rather than the global minimum. The log likelihoods reported in Table 1 are the averages over all participants. Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All data associated with this study are available on https://osf.io/fekpn/. A reporting summary for this Article is available as a Supplementary Information file. Code availability Experiment code was created with Psychtoolbox-3 and custom MATLAB code. All analyses were created using custom MATLAB and R code. Code is available from the corresponding author upon request, and will be uploaded to https://github.com/janekeung129/DivisiveNormModel2020.
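The optimization recipe described above (minimize the negative log likelihood from many random starting points and keep the best fit) is given for MATLAB's fmincon, but the same pattern can be sketched in Python. In this hedged sketch, scipy.optimize.minimize stands in for fmincon, and the one-parameter logistic model, the synthetic data, and all variable names are purely illustrative assumptions, not the paper's actual model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic choice data from a one-parameter logistic observer (illustrative)
delta = rng.integers(-10, 11, size=2000).astype(float)   # evidence per trial
true_beta = 0.4
choice = (rng.random(2000) < 1 / (1 + np.exp(-true_beta * delta))).astype(float)

def neg_log_lik(theta):
    """Negative log likelihood of the choices; minimizing it is equivalent
    to maximizing the log likelihood, as described for fmincon."""
    z = np.clip(theta[0] * delta, -500, 500)   # guard against overflow
    p = 1.0 / (1.0 + np.exp(-z))
    eps = 1e-12                                # guard against log(0)
    return -np.sum(choice * np.log(p + eps) + (1 - choice) * np.log(1 - p + eps))

# Many random starting points, to avoid settling in a local minimum
fits = [minimize(neg_log_lik, x0=[rng.uniform(-2, 2)], method="Nelder-Mead")
        for _ in range(10)]
best = min(fits, key=lambda r: r.fun)          # keep the best of all starts
```

The key point is the multi-start loop: each call may stop in a different basin, and taking the fit with the lowest negative log likelihood approximates the global optimum, analogous to the 360 random starts per participant used in the paper.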
Uses of Phage Display in Agriculture: A Review of Food-Related Protein-Protein Interactions Discovered by Biopanning over Diverse Baits This review highlights discoveries made using phage display that impact the use of agricultural products. The contribution phage display has made to our fundamental understanding of how various protective molecules serve to safeguard plants and seeds from herbivores and microbes is discussed. The utility of phage display for directed evolution of enzymes with enhanced capacities to degrade the complex polymers of the cell wall into molecules useful for biofuel production is surveyed. Food allergies are often directed against components of seeds; this review emphasizes how phage display has been employed to determine the seed component(s) contributing most to the allergenic reaction and how it has played a central role in novel approaches to mitigate patient response. Finally, an overview of the use of phage display in identifying the protection and repair mechanisms of the mature seed proteome is provided. The identification of specific classes of proteins preferentially bound by such protection and repair proteins leads to hypotheses concerning the importance of safeguarding the translational apparatus from damage during seed quiescence and environmental perturbations during germination. These examples, it is hoped, will spur the use of phage display in future plant science examining protein-ligand interactions. Introduction Since its development by Smith [1], phage display has proven to be a powerful tool for protein interaction studies in immunology, cell biology, drug discovery, and pharmacology. Phage display is one of the preeminent means by which scientists identify proteins having affinity for other molecules, and it has a staggering throughput capacity for screening, with libraries whose titers approach 10^9 virions per microliter.
Its utility lies principally in generating molecular probes against specific targets and in the identification, analysis, and manipulation of protein-ligand (including protein-protein) interactions. Modern phage display libraries permit the sought attribute (namely, a protein with affinity for a ligand (bait)) to be directly coupled to the DNA sequence encoding the protein in a nondestructive manner. Random DNA libraries, or those formed from cDNA after randomly priming mRNA, provide a host of different amino acid contexts that can translate into a continuum of affinities for the bait. Recovery of overlapping clones of a particular protein permits examination of this region of the protein, directing the experimenter to the specific site capable of binding the ligand. With the protein-binding site effectively located, this information can be used to predict the target attributes that serve as the foundation of ligand-protein affinity, guiding future protein engineering efforts. This technique, due to its simplicity and efficacy, has been responsible for discoveries of synthetic antibodies and molecular interactions and has been utilized in directed evolution. The applications of phage display for the discovery of protein-ligand interactions have become increasingly complex as its utility has been recognized in a diversity of fields, including the identification of targets of bioactive molecules. For example, Huperzine A is a plant-produced, bioactive compound with multiple neuroprotective effects [2,3]. Magnetic biopanning approaches have been used to identify some of the target pathways influenced by Huperzine A's pharmacological effects, which are responsible for alleviating a host of dysfunctions, potentially including Alzheimer's disease [4].
Despite the utility of phage display, the technique has received less attention from plant scientists, with the exception of sustained programs developing antibodies to a host of different cell wall components [5], a topic discussed in other literature [6] and thus not examined here. However, phage display has much to offer other fields of plant research. This review surveys the applications of phage display in the discovery of protein-protein interactions in various fields of plant science concerned with maximizing crop plants' seed production and the utilization of the nutrients stored in seeds, from protecting crops from harmful pests to alleviating human allergenic reactions to seed storage proteins. Our objective in highlighting this literature is to heighten the awareness of plant biologists to the utility of the technique for more than antibody production alone. If successful, phage display should figure more prominently in the research of those plant scientists examining molecular interactions in the future. Applications of Phage Display in Agriculture: Seed Production Why focus on seed production? On a fundamental level, it is necessary to understand seed attributes as human reliance on seeds is so pervasive. Seeds are our major food source (70% of our diet [7,8]); they are fodder for our livestock, a method of bulk food transport, storage, germplasm preservation, and a vehicle for technology delivery. It is imprudent not to understand more about how a seed fulfills its function as a propagule, a process on which we depend so utterly, yet about which we still know so very little [9,10]. In addition to constituting the majority of humanity's food, recent additional uses for the energy stored in seeds (biofuels [11]) have periodically led to higher seed and commodity prices worldwide [12,13]. 
While governments attempt to mitigate the negative impact of increasing staple food prices on the poor [12], demand for seed as food and biofuel feedstock, and for the land on which to produce it, continues to increase [14]. The growing global population is projected to increase cereal consumption for food alone by a billion metric tons in the next 30 years (FAO, 2002, http://www.fao.org/docrep/004/y3557e/y3557e00.htm); yet yield losses due to unpredictable biotic and abiotic stresses are projected to increase [15]. These grim facts have added urgency to the requirement to improve understanding of all facets of seed production. It is imperative that we do this if we are to feed ourselves [16]. Protease inhibitors (PIs) [17] are a plant protection strategy that can attenuate nutrient assimilation in the insect gut, or by microbes, by inhibiting the activity of pest digestive proteases [18]. There are a large number of PIs used by plants as natural protection against pests [19]. PI production can be induced in the plant body by pest/pathogen attack through the jasmonic acid pathway [20], but it is also subject to developmental regulation, being stimulated in storage tissues [21]. In seeds, PI transcription is stimulated by abscisic acid (ABA) (inhibitory to germination) and inhibited by gibberellic acid (GA) (stimulatory for germination) [22]. Thus, endogenous seed protease activity (responsible for storage protein breakdown for use by the establishing seedling) is reduced during the anabolic period of seed development, permitting unhampered accumulation of the storage proteins, while this hindrance is alleviated during the period of seedling establishment, allowing access to the energy and components constituting the storage proteins (Figure 1(a)). Reduction, through the NADPH-dependent thioredoxin h system, of specific disulfide bonds necessary to impart the PI with its inhibitory conformation [23] also aids the removal of seed PI influence from establishing seedlings [24].
Typically, PIs are heat labile, permitting humans to acquire the full nutritional value of the seed storage proteins (some of which are protease inhibitors in their own right [25]) in cooked food that is denied to insects and microorganisms [26]. The plant usually encodes a considerable variety of PIs that are used to inhibit a wide range of pest proteases and isoforms within a protease class. Protease isoform prevalence in the insect can vary, exhibiting adaptability on the part of the pest in attempts to overcome this plant defensive mechanism [31][32][33][34]. Strategies using phage display to inform directed evolution [35,36] or specific site-directed mutation [37] efforts to produce PIs with greater specificity [38] or affinity [39] for the pest protease active site aim at enhancing this natural means of protecting crops. The PIs are usually quite specific for their protease target [40], and phage display has been at the center of efforts to construct PIs with a greater range of targets. This enhanced generality includes biopanning for PI variants that can inhibit the proteases of a diversity of insect pests [41]. Another facet of phage display-based protection enhancement takes the opposing strategy, endeavoring to identify PIs that are even more finely tuned to the target species (pest) protease class [42]. These various attempts to use phage display to acquire novel PIs are geared toward providing a greater range of PIs affording protection to plants than is available to the conventional plant breeder. The development and identification of PIs with unique capabilities of downregulating the activity of specific pest proteases, through phage display or other means, will permit these plant protection mechanisms to augment those existing naturally in the plant. Stacking PIs with different protease target sites may help to broaden pest susceptibility while delaying the acquisition of resistance to the PIs [43,44].

Figure 1: A graphic depiction of events occurring during the stages of late maturation, quiescence, and germination of orthodox seeds [27]. (a) Four stages during a plant's lifecycle, commencing with seed maturation desiccation and ending with postgermination seedling establishment (Postgerm). Seed water content is represented by the solid blue line in the graph and is depicted as well by shades of blue in the background, highlighting stages in the continuum encompassing late seed maturation, quiescence, and the three classical phases of water uptake during seed germination (imbibition, lag, and embryo elongation/seedling establishment (establish)). Phase III has been placed to span the completion of germination because turgor-driven embryo cell expansion, required to protrude from the seed, necessitates additional water uptake. The axis representing time has been broken during quiescence to emphasize that, although this period can last for centuries, seeds of certain species remain viable [28,29]. Events that are beneficial for the preparation for maturation desiccation or the resumption of growth are presented as green lines. Events that are detrimental to the cellular constituents are depicted as purple lines. The commencement and termination of these events are signified by short-dashed lines. A drying event, followed by rehydration during germination, has been inserted as a long-dashed blue line. This region is also highlighted by yellow shading that depicts a period of high-temperature stress. The abundance of the seed storage proteins is depicted as a yellow bar whose thickness is tapered at both ends to signify net accumulation during late embryogenesis and rapid hydrolysis during seedling establishment. (b) Late embryogenesis abundant protein (LEAP) synthesis and utilization during late seed maturation and quiescence. The overall progression of a non-dormant (quiescent) seed toward the completion of germination (100% progression) is depicted as a solid line commencing at the arrow (seed imbibes) on the time axis. To emphasize the capacity of the seed to preserve its physiology at a point above 0% progression (y-axis) during the dehydration/supraoptimal-temperature event (dash-dotted brown line), the trajectory of progression deviates partially from that which would have occurred had no drying/thermal stress been applied. The red line, and the dash-dotted red progression trajectory emanating from it, portrays a seed without the capacity to preserve its physiology. The difference (double-headed arrow) is the seed hydration memory [30]. The only manifestation of the stressful event interrupting the progression of germination is a slightly delayed point on the time axis at which the embryo protrudes. A seed unable to maintain its physiology may or may not be capable of completing germination, hence the question mark. The production of the LEAPs and their utilization to presumably preserve the seed's physiology, post-imbibition, are indicated. The time axis is broken during the stressful event to signify its unknown duration. Graph adapted from Nonogaki et al. [10].

Discovering Non-Protease Inhibitor Protective Peptides. Phage display can identify peptides or proteins that have affinity for a vast array of molecules. Peptides with high affinity for proteins key to a pest's lifecycle can be disruptive to the pest's permanence or pathology [45]. Once identified, such peptides can be engineered and introduced into most crop plants for endogenous production, providing a novel line of defense against plant pests.
Such specific, plant-contained protective mechanisms may prove to be less damaging to off-target organisms in the crop environment than conventional pesticides [46]. Chemoreception-disruptive peptides selected from peptide libraries have been shown to decrease parasitism by nematodes, albeit at doses 3 orders of magnitude greater than the aldicarb nematocide control [47]. Despite this much lower potency, the aldicarb mimetic with high affinity for acetylcholinesterase, when produced in planta, was effective in reducing the cyst nematode parasite load in potato [46]; these nematodes are otherwise difficult to control due to their sessile habit and their location, embedded in the plant roots. Thus, in situ production counteracted the mimetic's lower efficacy, achieving the nematode control that is also the goal of the generally applied nematocide, which possesses greater potency but of which only a portion arrives at the site of action. Similarly, phage display identified peptides binding to zoospores of the oomycete pathogen Phytophthora capsici. Many of the zoospore-binding peptides resulted in the premature encystment of the zoospore without any other inductive signal. In addition to aiding in the identification of zoospore-displayed receptors controlling encystment, the authors postulated that such peptides might represent a novel plant defensive mechanism [48]. Subsequently, decreased infection by this soil-borne pathogen resulted when a protective peptide was expressed in planta in a form allowing its secretion into the rhizosphere [49]. Uses in Plant Virology. Phage display has been used by various plant virologists in the identification of peptides that bind to a pathogenic virus's coat protein. The phage display-isolated peptides were very specific and highly sensitive. At the very least, these have diagnostic potential, as they can be produced as fusions with proteins that serve as an antigen for antibody-reporter molecule conjugates [50].
They may also constitute the basis for a novel, introduced disease resistance strategy. Peptides with high affinity and specificity for vital viral proteins could be identified, and subsequently, the capacity to synthesize these peptides could be introduced into plants. In planta peptide production might prevent viral proliferation in infected cells. Such a strategy has been used successfully with antibodies [51], but antibody folding usually requires an oxidizing environment conducive to forming the specific intracellular disulfide bonds necessary for function [52]. Phage display-selected peptides may not be so exacting in their requirements [53]. Indeed, phage display-selected peptides capable of binding to a coat protein of the rice black streaked dwarf virus (RBSDV), when produced recombinantly for diagnostic purposes, have been shown to also disrupt proper coat protein folding and reduce the pathogenicity of RBSDV [54]. Phage display has also assisted in the elucidation of various host systems seconded to the virus to permit successful infection and replication. Using the viral replication enhancer protein AC3 as bait, a phage library of random dodecapeptides fused to a coat protein was panned to identify interacting peptides that were then analyzed for homology to proteins from the model plant Arabidopsis thaliana. The revelation of the pathways to which these proteins are integral has allowed a more sophisticated understanding of the events required for a successful viral lifecycle and of the role of the multifunctional protein AC3 in events leading to virus-induced gene silencing [55]. Identification of Immune Targets in Plants. Plants are known to have a very complex and diverse immune system against microbes [56]. The first active line of defense occurs at the plant cell surface, when microorganism-associated molecular patterns (MAMPs) such as lipopolysaccharides, peptidoglycans, or bacterial flagellin are detected by pattern recognition receptors (PRRs).
These PRRs are responsible for pattern-triggered immunity (PTI) in plants [57][58][59]. To circumvent PTI, adapted pathogens can deliver effector molecules directly into the plant cell. As a countermeasure, plants have developed corresponding resistance (R) proteins to recognize these effectors and their modified targets, which results in effector-triggered immunity (ETI) [59]. Both PTI and ETI involve specific families of proteins, but the distinction between the two types is not yet clear. What is clear is that a large number of proteins participate in the immunity process. Rioja et al. used phage display to study these interactions and to identify Arabidopsis proteins able to bind bacterial pathogens [60]. For this, they constructed two phage-display libraries from the cDNA of microbe-challenged Arabidopsis. Recombinant phage displaying plant proteins capable of interacting with different species of Pseudomonas (the pathogen) were selected by biopanning using microbial cells as selection ligands. In this way, plant proteins involved in defense responses were identified and subsequently confirmed in vitro for the capacity to bind microbial cells. Using different strains of Pseudomonas as bait allowed discrimination between common bacterial receptors and specific targets of virulent or avirulent strains. Applications in Cell Wall Research. Interest in using cellulose and other plant cell wall components as feedstock for biofuel production continues to grow worldwide for a host of reasons. Current means of deconstructing cellulose polysaccharides to glucose for conversion to biofuels are less efficient and more expensive than is practical for an industrially relevant process. One avenue being explored for more efficient conversion of cellulose to glucose is enhanced enzymatic degradation. It has been demonstrated that some cellulases and hemicellulases retain their function when fused to a viral coat protein [61,62].
These clones can subsequently be reengineered to alter (randomize) specific regions of interest, imparting novel functionalities/affinities to the displayed enzyme combinatorially. The resultant library of phage-displayed variant enzymes can then be screened over substrates/inhibitors to study the individual amino acids imparting the observed/desired property. Programs have also used phage display libraries to discover or improve upon carbohydrate-binding modules, focusing on the use of these regions to enhance the binding affinity of the glycoside hydrolase/binding module construct to various crystalline morphologies, which may improve their productivity [63]. Additional uses include highly specific probes for cell wall constituents, which are critical to refining our understanding of plant cell wall construction [64][65][66]. Furthermore, a library of fungal endo-β-1,4-xylanase enzyme variants permitted the simultaneous assessment of the influence of many different individual residues on the affinity for xylanase inhibitor proteins [67]. Subsequent work has permitted the development of an endo-β-1,4-xylanase enzyme that retains its catalytic competence while being completely insensitive to the xylanase inhibitor proteins found in wheat flour [68]. The fungal xylanase is used in the food industry to enhance nutritional value and properties, but its inactivation by the endogenous inhibitors found in the foodstuffs on which it is used has been a problem for the industry. Moreover, through a computational approach, the pH stability of the enzyme has now been greatly improved, leading to an increase in its utility in the food preparation industry [69]. Phage Display Uses in Combating Allergies to Seed Storage Proteins. Almost 5% of humans have some form of food hypersensitivity [70]. Identified food allergens include the seed storage proteins, which can induce a variety of allergic syndromes [71,72].
Phage display has assisted in the rapid identification of antigens eliciting hypersensitive responses [73], including those previously uncataloged [74]. Once individuals suspect they are allergic to a particular food, a more sophisticated assessment of the component(s) in the food causing the allergic reaction is necessary if any alleviation is to be attained. Epitopes from a library of allergens from the food in question [75], panned over patient IgE, can rapidly and cheaply identify the specific allergen(s) causing the hypersensitive response [74]. For example, peanut allergies are quite common (∼1% of the population of the USA [76]), are perceived to be increasing [77], and can be severe [78]. Phage display has been used to identify precisely which proteins are causing the hypersensitive reaction in peanut-sensitive patients [79], implicating the seed storage proteins as significant and accounting for 6 of the 8 allergens identified in peanut to date [80]. Similarly, "baker's asthma," a common occupational affliction, was until recently only known to be caused by an allergic reaction to "flour" components. Phage display was used to identify a causal agent in wheat flour as native gliadin (33% of all cases) and, more specifically, - and -gliadin, which were causal in 12% of all baker's asthma cases [81]. The use of such epitope display accurately identifies the causal agent of food allergies, which, once identified, can be the subject of investigations aimed at rendering it less antigenic. Such an approach has been used in a program aimed at mitigating allergenic reactions in celiac disease. Celiac (also coeliac) disease affects approximately 1% of the human population [82]. It is induced by components of several cereal storage proteins in common use (in bread, pasta, and beer). It is a complex disease, with aspects of both autoimmune disease and food hypersensitivity [83].
In the autoimmune response, the tissue transglutaminase (tTG) enzyme is targeted by self-antibodies, but only after gluten ingestion, when tTG is complexed with gluten [84,85]. The enzyme deamidates the abundant glutamine residues, which can comprise up to ∼35-40% of the amino acids constituting the -gliadin component of gluten [86]. Antibodies are also specifically produced against tTG-deamidated gliadin fragments from gluten, a hallmark of food hypersensitivity [87]. Approaches to alleviate disease symptoms include attempts to block portions of gliadin using synthetic, high-affinity peptides, thus preventing tTG action/gliadin modification and the subsequent formation of immunostimulatory epitopes. Phage display has played a critical role in the identification of the peptides possessing a strong affinity for gliadin. These act first to depress tTG activity against the gliadin substrate in vitro by steric hindrance, the eventual goal being to attenuate the autoimmune response by decreasing the association of the enzyme with its substrate, minimizing inflammation in vivo [88]. The second prong of this program is to cover the epitopes on gliadin, masking the protein fragments from the antibodies binding to them [89]. This program has passed the first several hurdles in the long road to providing a modicum of relief for celiac disease sufferers, including proof that the synthetic peptides act to block tTG activity against gliadin as did the phage-tethered peptides on which they were based, which does not necessarily follow [90]. The program awaits trials of the identified gliadin-binding peptides in vivo. In addition to their potential therapeutic uses, the various peptides, binding to different sites on the gliadin protein [89], could provide valuable tools for researchers in the field of celiac disease. Phage Display Identifies Protein Isoaspartyl Methyltransferase Substrates in the Stored Seed Proteome.
The tTG-mediated alteration of gliadin glutamine residues, through deamidation, enhanced the antigenicity of gliadin fragments [91]. The proteins present in dry seeds are particularly susceptible to a host of nonenzymatic conversions, many of which are deleterious [92][93][94][95][96][97][98][99], and some of which may play a role in preparing the seed for the completion of germination upon rehydration [100]. Regardless, these conversions can also result in peptides that are recognized by the human immune system or are recalcitrant to hydrolysis. For example, spontaneous isoaspartyl formation is known to result in autoimmune responses [101] and to interfere with peptide degradation [102], decreasing the nutritional value of ingested seed products [103], and, if sufficiently widespread in the stored proteome, would be disastrous for germination and seedling establishment [104,105]. Orthodox seeds [27] are capable of extreme dehydration, allowing them to remain viable in extremes of temperature [106,107] and, in some instances, for centuries [28,29]. This remarkable feat means that the seed proteome is at risk of deleterious alteration for the whole of this time, as there is insufficient water present to effect repair. A prominent detrimental alteration is the conversion of L-Asn or L-Asp residues in proteins to succinimide which, upon water addition, usually converts to the unusual, uncoded amino acid L-isoAsp [108][109][110][111]. In the imbibed state, isoAsp in proteins is recognized, methylated, and repaired by protein L-isoaspartyl methyltransferase (PIMT) [112,113]. Which proteins are most at risk of isoAsp formation, or for which does PIMT have the highest affinity? Due to the labile nature of the labeled isoAsp and the susceptibility of proteins to form isoAsp during the rigorous extraction necessary to obtain samples, these identifications have not been facile [114][115][116].
Moreover, the abundance, and susceptibility to damage, of the seed storage proteins [93] have made the identification of additional PIMT target proteins using extracts from seeds difficult [116]. An alternative approach used phage display to mitigate the influence of protein extraction on the generation of isoAsp while largely removing the seed storage proteins from the analysis [117,118]. A group of proteins involved in aspects of translation were revealed as important substrates of PIMT in seeds. This led to the realization that the stored proteins essential for the translational apparatus must be especially important to protect from general dysfunction, because there is no means of replacing them (or any other protein) from either the stored or de novo produced transcriptomes if translation is compromised in the majority of cells comprising a tissue and/or in the organelles [119,120] present in cells (Figure 2). The recovery of an LEA protein by PIMT1 was intriguing, as it may indicate that this LEA protein needs protection from isoAsp formation by PIMT1 to retain its function, forming part of an interactive network of protein protective mechanisms extant in seeds. T-DNA insertional mutants of this LEA in two different Arabidopsis ecotypes were incapable of entering secondary dormancy when seeds were exposed to supraoptimal (40 °C) germination temperatures for several days prior to being placed at permissive temperatures (25 °C) [117]. Such a specific phenotypic manifestation of the loss of this LEA's function suggested that it safeguards a crucial subset of proteins involved in the proteomic memory of the environmental conditions the seed has experienced thus far following imbibition (supraoptimal temperatures).
High temperature and/or desiccation after a period of imbibition during which important environmental cues had been perceived and the transcriptome/proteome altered accordingly, but prior to radicle protrusion, would expose the proteome, and the integrated environmental information it represents, to deleterious conditions. This necessitates that protective mechanisms be invoked to ensure the heat-stressed/dehydrated proteins retain their function so that germination can resume at the appropriate point at which it left off once the seeds are rehydrated [140]. Dubrovsky [30] referred to the capacity of seeds to resume germination from the point to which they had progressed prior to dehydration as the "seed hydration memory" (Figure 1(b)). The concept of the LEA proteins safeguarding environmental cues, acquired during the imbibed period and embodied in a heat-sensitive proteome, can be subsumed into their role of aiding the survival of water loss during maturation desiccation, quiescence, or after imbibition [141,142]. The dysfunction of some heat-labile molecule(s), when not protected by SMP1, results in a seed that cannot "remember" the supraoptimal temperature it has experienced and thus behaves inappropriately, completing germination immediately when removed to 25 °C rather than entering thermal dormancy (Figure 1(b)). It was necessary to ascertain with what target proteins the SMP1 LEA protein associates, because these would be candidates for controlling the induction of secondary dormancy due to high heat [117], but this was not known. In fact, uncertainty exists regarding whether LEA proteins serve exclusively as general "spacer" molecules ("molecular shields" or crowders) that simply prevent deleterious aggregation upon water loss, or if they can act as specific protectors of individual target molecules, so-called "client molecules" [143][144][145]. 
Therefore, recombinant SMP1 and its soybean GmPM28 homolog were used as bait in screens at two different temperatures and with two independently produced Arabidopsis seed phage display libraries [146]. Biopanning over these recombinant LEA homologs demonstrated that the same protein clients, indeed the same region of the same protein clients, are consistently retrieved by both baits at two different temperatures [146]. The client proteins identified did not have a single target protein in common with the PIMT1 screens, yet those involved in translation were again prominent among the protected target proteins, further entrenching the contention that protection of the proteins involved in translation is paramount for safeguarding the longevity of orthodox seeds (Figure 2).

Figure 2: Those proteins essential to translation are the proteome's "Achilles' heel" for seed longevity. In the imbibed seed, there are three means by which functional proteins can be recruited into the newly reestablished, active metabolism. The proteins may be part of (1) the stored proteome that has survived maturation desiccation and subsequent rehydration with their function intact. New protein can be translated from either (2) the stored transcriptome, consisting of mRNA produced during seed maturation that survived maturation desiccation/rehydration, or (3) de novo transcribed mRNA. Only those proteins essential to translation must be present in the stored proteome, sufficiently numerous and in an active state following imbibition, to carry out translation (probably with an emphasis on self-replacement) if the embryo is to survive. Various classes of proteins are color coded according to their function (red: transcription/nuclear organization; light blue: housekeeping/metabolism; dark blue: organelles; purple: translation). The proteins essential to translation are depicted decorating the ribosome in the cytosol, or in those organelles with their own genomes. The dysfunction of the proteins essential for translation has been emphasized by their partial transparency and an "X" through the molecule representing this class in the stored proteome. A lack of translation results in the eventual demise of the entire proteome over time (partially transparent functional proteome). (In-figure labels: splicing; genome organization and packaging; transcription; transcription factor; stored transcriptome; … synthesized transcriptome; stored proteome; … apparatus, cytosolic.)

Conclusions Predictions of dire consequences for humanity if food (read seed) production is not drastically increased are a goad for researchers investigating seed production to endeavor to understand more of the complexities of this event. Frequently, the understanding sought lies at the level of protein-ligand or protein-protein interactions. In this regard, phage display has proved extremely useful for both the discovery of such interactions and their subsequent manipulation towards an end. This review has highlighted, for the first time, the impact phage display has had on agricultural research concerned with seed production. Efforts to safeguard the crop plant's capacity to produce seeds and to protect the seeds themselves for exclusive human use/consumption have successfully employed phage display. Phage display has aided in the production of enzymes specialized for use in food processing, making nutrients more readily available. It has also provided the means of specifically identifying the causal agent(s) of seed allergies, and indications are that it may be instrumental in providing the first means of mitigating the effects of a prominent seed-related ailment. 
The use of phage display has permitted insights into the seed's endogenous natural protective and repair mechanisms, allowing a more fundamental understanding of the events transpiring during late embryogenesis, quiescence, and germination; in short, what makes seeds so excellent in their role as propagules.
Ultralight scalars in leptonic observables Many new physics scenarios contain ultralight scalars, states which are either exactly massless or much lighter than any other massive particle in the model. Axions and majorons constitute well-motivated examples of this type of particle. In this work, we explore the phenomenology of these states in low-energy leptonic observables. After adopting a model-independent approach that includes both scalar and pseudoscalar interactions, we briefly discuss the current limits on the diagonal couplings to charged leptons and consider processes in which the ultralight scalar $\phi$ is directly produced, such as $\mu \to e \, \phi$, or acts as a mediator, as in $\tau \to \mu \mu \mu$. Contributions to the charged lepton anomalous magnetic moments are studied as well. Introduction Lepton flavor physics is about to enter a golden age. Several state-of-the-art experiments recently started taking data and a few more are about to begin [1]. These include new searches for lepton flavor violating (LFV) processes, forbidden in the Standard Model (SM), as well as more precise measurements of lepton flavor conserving observables, such as the charged lepton anomalous magnetic moments. The search for LFV in processes involving charged leptons is strongly motivated by the observation of LFV in the neutral sector (in the form of neutrino flavor oscillations). In what concerns muon observables, the search for the radiative LFV decay µ → eγ is going to be led by the second phase of the MEG experiment, MEG-II [2,3], while the long-awaited Mu3e experiment will aim at an impressive sensitivity to branching ratios for the 3-body decay µ → eee as low as 10⁻¹⁶ [3,4]. A plethora of promising experiments looking for neutrinoless µ−e conversion in nuclei is also planned. 
Flavor factories and experiments aiming at a broad spectrum of flavor observables, such as Belle II and LHCb, will also contribute to this era of lepton flavor, mainly due to their high sensitivities in the measurement of tau lepton observables [5,6]. On the flavor conserving side, improved measurements of the muon anomalous magnetic moment are expected at the Muon g-2 experiment [7], hopefully shedding light on a well-known, long-standing experimental anomaly. With such an exciting experimental perspective in the coming years, it is natural to ask what type of new physics can be probed. In this work we will concentrate on ultralight scalars that couple to charged leptons and study their impact on leptonic observables. In this context, we will use the term ultralight scalar to refer to a generic scalar φ that is much lighter than the electron, m_φ ≪ m_e, and can therefore be produced on-shell in charged lepton decays. In practice, this also means that φ can be assumed to be approximately massless in all considered physical processes. We will take a model-independent approach and neglect m_φ in our analytical calculations. Actually, this is not an approximation if φ is exactly massless, as is the case for a Goldstone boson whose mass is protected by a (spontaneously broken) global continuous symmetry. There are many well-known examples of such ultralight scalars. If the apparent absence of CP violation in the strong interactions is explained by means of the Peccei-Quinn mechanism [8], a new pseudoscalar state must exist: the axion [9,10]. Although its mass is not predicted and can vary over a wide range of scales [11], a large fraction of the parameter space (corresponding to large axion decay constants) leads to an ultralight axion. Interestingly, such a low-mass axion would be of interest as a possible component of the dark matter of the Universe [12][13][14]. 
Axion-like particles, or ALPs, generalize this type of scenario by making the mass and decay constant two independent parameters. This allows for a larger parameter space, again including a substantial portion with very low ALP masses. The solution to the strong CP problem could also be intimately related to the flavor problem of the SM [15]. This naturally leads to a flavored axion [16][17][18], although an axion with flavor-blind interactions is also possible [19]. Another popular ultralight scalar is the majoron, the Goldstone boson associated to the breaking of global lepton number [20][21][22][23]. While this state can gain a small mass by various mechanisms, and then be a possible dark matter candidate [24,25], it is expected to be exactly massless in the absence of explicit breaking of lepton number. Another possible ultralight scalar is the familon, the Goldstone boson of a spontaneously broken global family symmetry. Finally, the Universe could also be filled with ultralight scalars in the form of fuzzy cold dark matter [26]. While many of the previously discussed examples are pseudoscalar states, the ultralight scalar φ can also have pure scalar couplings. This would be the case for a massless Goldstone boson if the associated global symmetry is non-chiral. Therefore, restricting the phenomenological exploration to just pseudoscalars would miss a relatively large number of well-motivated scenarios. This has actually been the case in many recent works [27][28][29][30][31][32][33][34][35], which were mainly interested in the phenomenology of flavored axions (or ALPs) and majorons [36]. Motivated by the principle of generality, we will consider a generic scenario where the CP nature of φ is not determined and explore several leptonic observables of interest. These include processes in which φ is produced in the final state, such as α → β φ or α → β φ γ. 
In this case, we will generalize previous results in the literature, typically obtained for pure pseudoscalars or for the case of a massive φ. We will also study processes in which φ is not produced, but acts as a mediator. A prime example of this category is τ → µµµ. To the best of our knowledge, the mediation of this process by an ultralight axion has only been previously considered in [27]. We will extend the study to more general scalar states and provide detailed analytical expressions for the decay width of the process. The analogous decays involving a final-state lepton pair of a third flavor γ will also be studied, in this case for the first time here. Charged lepton anomalous magnetic moments constitute other interesting examples of observables induced by the ultralight φ. The rest of the manuscript is organized as follows. We introduce our general setup, as well as our notation and conventions, in Sec. 2. In Sec. 3 we discuss the current bounds on the lepton flavor conserving couplings of the scalar φ. These are often constrained by studying their impact on astrophysical processes, but also receive indirect bounds due to their contribution to the 1-loop coupling of φ to photons, as we will show. Sec. 4 contains the main results of this work. In this Section we discuss the impact of φ on several leptonic observables and consider the phenomenological implications. We summarize our findings and conclude in Sec. 5. Finally, a pedagogical discussion on an alternative parametrization of the φ Lagrangian in terms of derivative interactions is provided in Appendix A. Effective Lagrangian We are interested in charged lepton processes taking place at low energies in the presence of the ultralight real scalar φ. For practical purposes, we will consider φ to be exactly massless, but our results are equally valid for a massive φ, as long as m_φ ≪ m_e holds. 
The interaction of the scalar φ with a pair of charged leptons ℓ_α and ℓ_β, with α, β = e, µ, τ, can be generally parametrized by a Yukawa-type operator (Eq. (1)), where P_{L,R} = (1 ∓ γ_5)/2 are the usual chiral projectors. No sum over the α and β charged lepton flavor indices is performed. The dimensionless coefficients S_L and S_R are 3 × 3 matrices carrying flavor indices, omitted to simplify the notation. Eq. (1) describes the most general effective interaction between the ultralight scalar φ and a pair of charged leptons. In particular, we note that Eq. (1) includes both scalar and pseudoscalar interactions. An alternative parametrization for this Lagrangian based on the introduction of derivative interactions, applicable to the case of pseudoscalar interactions only, is discussed in Appendix A. Finally, Eq. (1) includes flavor violating (charged lepton fields with α ≠ β) as well as flavor conserving (charged lepton fields with α = β) interactions. Some of the LFV observables considered below receive contributions from the usual dipole and 4-fermion operators. Therefore, our full effective Lagrangian (Eq. (2)) adds to Eq. (1) the dipole operators of Eq. (3) and the 4-lepton operators of Eq. (4), where F_{µν} = ∂_µ A_ν − ∂_ν A_µ is the electromagnetic field strength tensor, with A_µ the photon field, and we have defined Γ_S = 1, Γ_V = γ^µ and Γ_T = σ^{µν}. No sum over the α, β, γ and δ charged lepton flavor indices is performed in Eqs. (3) and (4). The coefficients K_2^X and A_{XY}^I, with I = S, V, T and X, Y = L, R, carry flavor indices, again omitted to simplify the notation, and have dimensions of mass⁻². We assume m_α > m_β and therefore normalize the Lagrangian in Eq. (3) by including the mass of the heaviest charged lepton in the process of interest. Eq. (3) contains the usual photonic dipole operators, which contribute to α → β γ, while Eq. (4) contains 4-lepton operators. In summary, the effective Lagrangian in Eq. (2) corresponds to the one in [37], extended to include the new operators with the scalar φ introduced in Eq. (1). 
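For concreteness, the operator structures just described can be written out explicitly. The display below is a sketch assuming standard conventions for this type of effective Lagrangian; the normalizations are assumptions chosen to match the definitions in the text (dimensionless S_L, S_R; an overall m_α in the dipole term; A couplings of dimension mass⁻²), not expressions taken verbatim from the original equations:

```latex
\begin{align}
\mathcal{L}_{\phi\ell\ell} &= \phi\,\bar{\ell}_\beta \left( S_L P_L + S_R P_R \right) \ell_\alpha
  + \text{h.c.}, \\
\mathcal{L}_{\rm dipole} &= m_\alpha \left( K_2^L\, \bar{\ell}_\beta \sigma^{\mu\nu} P_L \ell_\alpha
  + K_2^R\, \bar{\ell}_\beta \sigma^{\mu\nu} P_R \ell_\alpha \right) F_{\mu\nu} + \text{h.c.}, \\
\mathcal{L}_{4\ell} &= \sum_{I = S,V,T}\ \sum_{X,Y = L,R} A_{XY}^{I}
  \left( \bar{\ell}_\beta \Gamma_I P_X \ell_\alpha \right)
  \left( \bar{\ell}_\delta \Gamma_I P_Y \ell_\gamma \right) + \text{h.c.}
\end{align}
```

Here $P_{L,R} = (1 \mp \gamma_5)/2$, $\Gamma_S = 1$, $\Gamma_V = \gamma^\mu$ and $\Gamma_T = \sigma^{\mu\nu}$, as defined in the text.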
In the following, we will disregard φ interactions with quarks and concentrate on purely leptonic observables, such as the LFV decays α → β φ or α → β β β, and the electron and muon anomalous magnetic dipole moments. Even though φ couplings to quarks are possible, and indeed present in specific realizations of our general scenario, the prime example being the QCD axion, they introduce a large model dependence. We also note that leptophilic ultralight scalars, such as the majoron, are also well-motivated possibilities that naturally appear in models with spontaneous violation of global lepton number. Bounds on lepton flavor conserving couplings Let us comment on the current experimental constraints on the lepton flavor conserving couplings of the scalar φ. We will start by discussing the stellar cooling mechanism. Since this subject has been extensively studied in the literature, and we do not want to delve further into the topic, only a brief outline will be presented. Then we will discuss another source of constraints, the 1-loop coupling between φ and a pair of photons. Stellar cooling The production of φ scalar particles inside stars, followed by their emission, may constitute a powerful stellar cooling mechanism. If this process takes place at a high rate, it may alter star evolution, eventually leading to conflict with astrophysical observations [38]. This allows one to place strong constraints on the φ scalar couplings. The dominant cooling mechanisms are scalar bremsstrahlung in lepton-nucleus scattering, ℓ⁻ + N → ℓ⁻ + N + φ, and the Compton process γ + ℓ⁻ → ℓ⁻ + φ. Their relative importance depends on the density and temperature of the medium, and therefore on the astrophysical scenario. In particular, the Compton process dominates only at low densities and high temperatures, conditions that can be found in red giants. Limits can also be derived from the production of ultralight scalars in supernovae. 
The scalar φ can be efficiently produced and, since it will typically escape without interacting with the medium, a net transport of energy out of the supernova will take place. Such a loss of energy may dramatically affect other processes taking place in the supernova, such as neutrino production. Plenty of works have recently studied the question of cooling by the emission of ultralight scalars in astrophysical scenarios [11,35,[39][40][41]]. However, to the best of our knowledge, all of them consider axions or ALPs. These are low-mass pseudoscalars and thus, their impact on stellar evolution can only be used to constrain pseudoscalar couplings. Even though we will not provide a detailed calculation to support this statement, we will argue that similar bounds can be set on the scalar couplings. To make explicit the pure scalar and pseudoscalar interactions, we can use a redefinition of our Lagrangian in Eq. (1) which, for the diagonal terms, can be written as φ ℓ̄_β (Re S − i γ_5 Im S) ℓ_β, with S = S_L + S_R^*. For a pure pseudoscalar, only Im S is present. The currently most stringent limit on the pseudoscalar coupling with electrons is obtained from white dwarfs. Specifically, the limit is obtained by considering the bremsstrahlung process, which can be very efficient in the dense core of a white dwarf. Using data from the Sloan Digital Sky Survey and the SuperCOSMOS Sky Survey, Ref. [42] found (at 90% C.L.) Im S_ee < 2.1 × 10⁻¹³. The coupling with muons has been recently studied in some works [35,39,40]. In this case the process ultimately used to set the constraint is neutrino production, clearly suppressed if energy is transported out of the supernova by scalars produced in µ + γ → µ + φ. Using the famous supernova SN1987A, Ref. [40] has found Im S_µµ < 2.1 × 10⁻¹⁰. Setting precise limits for the scalar parts of the couplings would imply the calculation of the cross sections and the energy-loss rates per unit mass, as required to perform a complete analysis. 
Instead, one can gauge the relevance of the bounds on the scalar couplings with the following arguments. First, we note that if the charged lepton mass is neglected, the scalar and pseudoscalar couplings contribute in exactly the same way to the relevant cross sections. This is, however, a bad approximation, due to the low energies involved in the astrophysical scenarios that set the limits. For this reason, one must keep the charged lepton mass. We have numerically integrated the cross sections for a wide range of low energies and found that, for the same numerical value of Re S and Im S, the scalar interaction always gives larger cross sections. Therefore, the constraints on the scalar couplings will be stronger, and we can conclude that |Re S_ββ| ≲ |Im S_ββ|_max, with β = e, µ. Nevertheless, we point out that a detailed analysis of the cooling mechanism with pure scalars is required to fully determine the corresponding bounds. Finally, one should note that these limits are based on the (reasonable) assumption that the scalar properties are not altered in the astrophysical medium. In particular, its mass and couplings are assumed to be the same as in vacuum. Some mechanisms have been recently proposed [43,44] (see also previous work in [45]) that would make this assumption invalid. These works are mainly motivated by the recent XENON1T results, which include a 3.5σ excess of low-energy electron recoil events [46]. An axion explaining this excess would violate the astrophysical constraints, since the required coupling to electrons would be larger than the limit in Eq. (7), see for instance [41]. This motivates the consideration of mechanisms that alter the effective couplings to electrons or the axion mass in high density scenarios. If any of these mechanisms are at work, larger diagonal couplings would be allowed. However, we note that additional bounds, not derived from astrophysical observations, can be set on the diagonal couplings. 
This is precisely what we proceed to discuss. 1-loop coupling to photons The interaction of the scalar φ with a pair of photons is described by an effective Lagrangian containing the operators φ F_{µν} F^{µν} and φ F_{µν} F̃^{µν}, where g_Sγγ and g_Aγγ are the couplings for a pure scalar and a pure pseudoscalar, respectively, and F̃^{µν} is the dual electromagnetic tensor, defined as F̃^{µν} = (1/2) ε^{µνρσ} F_{ρσ}. The g_Sγγ and g_Aγγ couplings can be induced at the 1-loop level from diagrams involving charged leptons, as shown in Fig. 1. Since g_Sγγ and g_Aγγ are constrained by a variety of experimental sources, this can be used to set indirect constraints on the φ couplings to charged leptons introduced in Eq. (1). In particular, we will take advantage of this relation to get additional limits on the lepton flavor conserving couplings of φ. The 1-loop analytical expressions for |g_Iγγ|, with I = S, A, can be written as in [47], in terms of the 1-loop fermionic functions A^S_{1/2}(τ) = 2 [τ + (τ − 1) f(τ)]/τ² for the scalar coupling and A^A_{1/2}(τ) = 2 f(τ)/τ for the pseudoscalar case, with τ_β = m_φ²/(4 m_β²). The function f(τ) can be found for instance in [48]. It is given by f(τ) = arcsin²(√τ) for τ ≤ 1. In this work we consider the case of an ultralight scalar. In the massless limit, the loop functions reduce simply to A^S_{1/2}(0) = 4/3 and A^A_{1/2}(0) = 2, and the couplings to photons can then be written directly in terms of the couplings to the charged leptons. We are now in position to compare to the current experimental limits on the coupling to photons. These are of two types. First, let us consider astrophysical limits. Magnetic fields around astrophysical sources of photons may transform these into scalars, an effect that can be used to set constraints on their coupling. Ref. [49] provides a comprehensive recollection of limits from astrophysical observations. Using results from [50], this reference finds that for scalar masses in the range m_φ ∼ 1 peV − 1 neV, astrophysical constraints imply bounds valid for both scalar and pseudoscalar couplings. 
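The quoted massless limits of the loop functions can be checked numerically. The sketch below assumes the standard spin-1/2 loop functions familiar from light (pseudo)scalar-photon coupling calculations, since the explicit expressions did not survive in the text above; the function names are illustrative, not taken from the original:

```python
import math

def f(tau):
    # Triangle function below threshold (tau <= 1), f(tau) = arcsin^2(sqrt(tau))
    assert 0.0 < tau <= 1.0
    return math.asin(math.sqrt(tau)) ** 2

def A_S(tau):
    # Spin-1/2 loop function for a pure scalar coupling:
    # A^S_{1/2}(tau) = 2 [tau + (tau - 1) f(tau)] / tau^2
    return 2.0 * (tau + (tau - 1.0) * f(tau)) / tau ** 2

def A_A(tau):
    # Spin-1/2 loop function for a pure pseudoscalar coupling:
    # A^A_{1/2}(tau) = 2 f(tau) / tau
    return 2.0 * f(tau) / tau

# tau_beta = m_phi^2 / (4 m_beta^2) -> 0 for an ultralight phi
tau = 1e-6
print(A_S(tau))  # -> approaches 4/3
print(A_A(tau))  # -> approaches 2
```

Evaluating at small τ reproduces the limits A^S_{1/2}(0) = 4/3 and A^A_{1/2}(0) = 2 quoted in the text.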
Taking this into account, we can translate these photon-coupling limits into very stringent bounds on the diagonal couplings to charged leptons, S_ee ≲ 10⁻¹¹ and S_µµ ≲ 10⁻⁹. The OSQAR experiment [51], a light-shining-through-a-wall experiment, has also derived limits for massless scalars. Again, these are valid for both scalar and pseudoscalar couplings. The resulting relations also imply strong constraints on the diagonal couplings to charged leptons, but milder than in the previous case, S_ee ≲ 10⁻⁷ and S_µµ ≲ 10⁻⁵. Finally, we point out that these indirect limits are strictly only valid if the diagrams in Fig. 1 are the only contribution to the φ coupling to photons. If more contributions exist, possible cancellations among them may reduce the total coupling so that the constraints are satisfied for larger couplings to charged leptons. We should also note that astrophysical constraints are subject to the same limitation discussed above. They rely on the assumption that the properties of φ in the astrophysical medium are the same as in vacuum. Leptonic observables The off-diagonal S_A^βα scalar couplings, with A = L, R, can be directly constrained by the LFV decays α → β φ. Using the effective Lagrangian in Eq. (1), it is straightforward to obtain Γ(α → β φ) = m_α (|S_L^βα|² + |S_R^βα|²)/(32π), where terms proportional to the small ratio m_β/m_α have been neglected. 1 Several searches for α → β φ have been performed and used to set experimental constraints on the off-diagonal S_A^βα effective couplings. Let us start with muon decays. The strongest limit on the branching ratio for the 2-body decay µ⁺ → e⁺ φ was obtained at TRIUMF, finding BR(µ → e φ) < 2.6 × 10⁻⁶ at 90% C.L. [52]. However, as explained in [53], this experimental limit must be applied with care to the general scenario considered here. The reason is that the experimental setup in [52] uses a muon beam that is highly polarized in the direction opposite to the muon momentum and concentrates the search in the forward region. 
This reduces the background from the SM process µ⁺ → e⁺ ν_e ν̄_µ, which is strongly suppressed in this region, but also reduces the µ⁺ → e⁺ φ signal unless the φ−e−µ coupling is purely right-handed. Therefore, we obtain a limit valid only when S_L^eµ = 0. A more general limit can also be derived from [52]. Using the spin processed data shown in Fig. 7 of [52], the authors of [53] obtained the conservative bound BR(µ → e φ) ≲ 10⁻⁵, valid for any chiral structure of the S_A^eµ couplings. This bound is similar to the more recent limit obtained by the TWIST collaboration [54], also in the ∼ 10⁻⁵ ballpark. With this value, one finds an upper limit on the e−µ flavor violating couplings, 2 where we have defined the convenient combination |S_eµ| ≡ (|S_L^eµ|² + |S_R^eµ|²)^{1/2}. Several strategies can be followed for newer µ → e φ searches. The authors of [35] advocate for a new phase of the MEG II experiment, reconfigured to search for µ → e φ by placing a LYSO calorimeter in the forward direction. Also, as pointed out in [55,56] and recently discussed in [35] as well, the limit in Eq. (24) can be substantially improved by the Mu3e experiment by looking for a bump in the continuous Michel spectrum. The detailed analysis in [56] shows that µ → e φ branching ratios above 7.3 × 10⁻⁸ can be ruled out at 90% C.L. This would imply a sensitivity to an |S_eµ| effective coupling as low as 4.5 × 10⁻¹², improving by an order of magnitude with respect to the limit in Eq. (24). Turning to τ decays, the currently best experimental limits were set by the ARGUS collaboration [57]. (Footnote 1: while neglecting terms proportional to m_β/m_α is a good approximation for µ → e φ and τ → e φ, it may lead to an error of the order of 20% in τ → µ φ. This deviation is acceptable, but can be accounted for by including additional terms proportional to m_µ/m_τ, hence leading to a much more complicated analytical expression. Completely analogous comments can be made for the rest of the observables discussed in this Section.) 
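The translation between a branching-ratio limit and the effective coupling |S_eµ| can be checked numerically. The sketch below assumes the two-body width Γ(µ → e φ) = m_µ |S_eµ|²/(32π) with the electron mass neglected; this normalization is an assumption (it is not given verbatim in the text), chosen because it reproduces the sensitivities quoted above:

```python
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV * s
M_MU_GEV = 0.10566       # muon mass in GeV
TAU_MU_S = 2.197e-6      # muon lifetime in s

def coupling_from_br(br_limit):
    """Effective |S_emu| corresponding to a BR(mu -> e phi) limit,
    assuming Gamma(mu -> e phi) = m_mu |S|^2 / (32 pi) for massless phi."""
    gamma_total = HBAR_GEV_S / TAU_MU_S      # total muon width in GeV
    gamma_phi = br_limit * gamma_total       # partial width at the BR limit
    return math.sqrt(32.0 * math.pi * gamma_phi / M_MU_GEV)

# Conservative bound BR < 1e-5 and projected Mu3e sensitivity BR < 7.3e-8,
# both quoted in the text:
print(coupling_from_br(1e-5))     # ~ 5e-11
print(coupling_from_br(7.3e-8))   # ~ 4.5e-12
```

With these inputs, the projected Mu3e bump search indeed corresponds to |S_eµ| ≈ 4.5 × 10⁻¹², matching the sensitivity quoted in the text, which supports the assumed normalization.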
2 See also the recent [35] for a comprehensive discussion of the experimental limit of [52] and how this gets altered for different chiral structures of the S_A^eµ couplings. The ARGUS limits, set at 95% C.L., are weaker than those for muon decays, but still lead to stringent constraints on the LFV τ couplings with the scalar φ, which are straightforward to find. These limits for the LFV couplings to τ leptons are expected to be improved at Belle II. In fact, new methods for τ → ℓ φ searches at this experiment have been recently proposed [58]. 4.2 α → β γ φ The decay width for the 3-body LFV process α → β γ φ can be written in terms of a phase space integral I(x_min, y_min) (Eq. (29)), where terms proportional to m_β/m_α have been neglected, and of the usual dimensionless parameters x = 2E_β/m_α and y = 2E_γ/m_α which, together with z = 2E_φ/m_α, must fulfill the kinematical condition x + y + z = 2. We point out that our analytical results match those in [53], except for redefinitions in the couplings. 3 The phase space integral in Eq. (29) depends on x_min and y_min, the minimal values that the x and y parameters may take. While one could naively think that these are just dictated by kinematics, they are actually determined by the minimal β lepton and photon energies measured in a given experiment. This not only properly adapts the calculation of the phase space integral to the physical region explored in a real experiment, but also cures the kinematical divergences that would otherwise appear. In fact, we note that the integral in Eq. (29) diverges when the photon energy vanishes (y → 0). This is the well-known infrared divergence that also appears, for instance, in the radiative SM decay µ → eνν̄γ. Another divergence is encountered when the photon and the β lepton in the final state are emitted in the same direction. 
The angle θ_βγ between their momenta is given by cos θ_βγ = 1 + 2(1 − x − y)/(x y) (Eq. (31)). Since we work in the limit m_β = 0, one finds a collinear divergence in configurations in which the photon and the β lepton have their momenta aligned (θ_βγ → 0). However, any real experimental setup has a finite experimental resolution, which implies a non-zero minimum measurable E_γ and a non-zero minimum θ_βγ angle. Therefore, by restricting the phase space integration to the kinematical region explored in a practical situation, all divergences disappear.

Figure 2: Illustration of the allowed phase space region for the process µ → e γ φ in a given experiment. The blue continuous lines correspond to cos θ_eγ = ±1 and therefore delimit the total phase space that would in principle be available due to kinematics. The red dashed line represents x_inf(y) and corresponds to the minimal θ_eγ angle measurable by the experiment, excluding the region below it. The green dotted straight lines at x_min and y_min are the minimal positron and photon energy, respectively, that the experiment can measure, while y_int is the value of y for which x_min and x_inf intersect. Finally, the yellow surface is the region where we must integrate.

Direct comparison with Eq. (22) shows that α → β γ φ is suppressed with respect to α → β φ by an additional power of the electromagnetic coupling α and a phase space factor. In fact, the latter turns out to be the main source of suppression. In order to illustrate the calculation of the phase space integral for a specific case, let us focus on the µ → e γ φ decay and consider the MEG experiment [59]. This experiment has been designed to search for µ → e γ and therefore concentrates on E_e ≈ m_µ/2 and cos θ_eγ ≈ −1 (positron and photon emitted back to back). However, due to the finite experimental resolution, these cuts cannot be imposed with full precision, which makes MEG also sensitive to µ → e γ φ. 
The final MEG results were obtained with the cuts [59] cos θ_eγ < −0.99963, 51.0 MeV < E_γ < 55.5 MeV, and 52.4 MeV < E_e < 55.0 MeV. This defines the MEG kinematical region for the calculation of the phase space integral in Eq. (29), since µ → e γ φ events that fall in this region can be detected by the experiment. For instance, events with cos θ_eγ < −0.99963, or equivalently θ_eγ > θ_eγ^min = 178.441°, were at the reach of MEG. The kinematical region can be divided into two subregions: y_min < y < y_int, with x_inf(y) < x < x_max = 1, and y_int < y < y_max = 1, with x_min < x < x_max = 1. Here x_inf = x_inf(y) is the value of x such that cos θ_eγ = cos θ_eγ^min for each value of y. This can be easily found by solving Eq. (31): x_inf(y) = 2(1 − y)/[2 + y (cos θ_eγ^min − 1)]. Finally y_int is the value of y for which x_min and x_inf coincide. These two subregions are illustrated in Fig. 2, where the experimental restrictions have been modified for the sake of clarity by enlarging the kinematical region of interest. A realistic representation obtained with the MEG cuts in Eq. (33) is shown in Fig. 3. This clearly illustrates the strong suppression due to the phase space integral. Having explained how to compute the phase space integral and illustrated the strong suppression it introduces, we can obtain results for the MEG experiment. Using the cuts in Eq. (33), the phase space integral in Eq. (29) can be numerically computed to find I(x_min, y_min)_MEG = 3.8 × 10⁻⁸. Combining this result with Eq. (28), we obtain the branching ratio of µ → e γ φ restricted to the MEG phase space. MEG results require BR(µ → e γ) < 4.2 × 10⁻¹³ [59], a bound that must also be satisfied by BR_MEG(µ → e γ φ). This leads to a bound that is notably worse than the one given in Eq. (24), as expected due to the strong phase space suppression at MEG, an experiment that is clearly not designed to search for µ → e γ φ. More stringent bounds were obtained at the Crystal Box experiment at LAMPF [60][61][62]. Several searches were performed, with different experimental cuts and branching ratio bounds. 
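The boundary curve x_inf(y) and the intersection point y_int for the MEG cuts can be evaluated numerically. The sketch below assumes the standard massless three-body opening-angle relation cos θ_eγ = 1 + 2(1 − x − y)/(xy); the only numerical inputs are the MEG cuts quoted above, and the variable names are illustrative:

```python
import math

M_MU = 105.66        # muon mass in MeV
COS_MIN = -0.99963   # MEG cut on cos(theta_e_gamma)

# MEG energy cuts translated into the dimensionless variables
# x = 2 E_e / m_mu and y = 2 E_gamma / m_mu:
x_min = 2.0 * 52.4 / M_MU
y_min = 2.0 * 51.0 / M_MU

def cos_theta(x, y):
    # Opening angle between e and gamma for massless final states
    return 1.0 + 2.0 * (1.0 - x - y) / (x * y)

def x_inf(y):
    # Value of x at which the angular cut is saturated, obtained by
    # solving cos_theta(x, y) = COS_MIN for x
    return 2.0 * (1.0 - y) / (2.0 + y * (COS_MIN - 1.0))

# y_int: where the x_inf boundary meets the x_min energy cut,
# from solving x_inf(y) = x_min analytically
y_int = 2.0 * (1.0 - x_min) / (2.0 + x_min * (COS_MIN - 1.0))

# For the MEG cuts, y_int lies just below y_max = 1
print(x_min, y_min, y_int)
```

The tight cuts push x_min, y_min and y_int all close to 1, which makes the allowed region (the yellow surface in Fig. 2) tiny and illustrates why the phase space integral comes out so small.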
These result in different limits on the |S eµ | effective coupling, as shown in Table 1. Adapting the limit from the µ → eγ search in [60] along the lines followed in the previous discussion for MEG, we find |S eµ | < 9.5 × 10 −11 . This bound is still not better than the one given in Eq. (24), but it is in the same ballpark. A very similar bound is obtained with the results of a later analysis, in this case more specific to µ → e γ φ [61,62]. Finally, the Mu3e experiment is not well equipped to detect the photon in µ → e γ φ and therefore cannot improve on these limits. As explained in [56], a future Mu3e-Gamma experiment including a photon conversion layer could increase the sensitivity to µ → e γ φ.

4.3 ℓ−α → ℓ−β ℓ+β ℓ−β

Complete expressions for the ℓ−α → ℓ−β ℓ+β ℓ−β decay width in the absence of φ can be found in [63]. Here we are interested in the new contributions mediated by the scalar φ, which are given by the Feynman diagrams shown in Fig. 4. It is straightforward to derive the associated amplitude. Here u and v are spinors, q = p 1 − p 2 and k = p 1 − p 3 are the φ virtual momenta, and we have explicitly indicated the flavor indices of the S L,R coefficients. Also, we define the diagonal couplings S ββ ≡ S ββ L + S ββ * R . The total decay width can then be written as a sum in which Γφ is the decay width in the absence of φ, given in [63]. In writing Eq. (43) we have only kept the lowest order terms in powers of m β for each possible combination of couplings. This is equivalent to zeroth order for all terms, with the exception of the ones in the first line, where the factor log(m α /m β ) avoids the appearance of an infrared divergence. An expression including terms up to first order in m β is given in Sec. A. In order to evaluate the relevance of the new contributions mediated by the scalar φ we drop the 4-fermion operators in Eq.
(4) and consider a simplified effective Lagrangian containing only the left-handed photonic dipole and scalar-mediated operators. Then, inspired by [64], we parametrize the K L and S L coefficients in terms of two parameters, Λ and κ. Λ is a dimensionful parameter that represents the energy scale at which these coefficients are induced, while κ is a dimensionless parameter that accounts for the relative intensity of these two interactions. Depending on the value of κ, either the dipole operator or the scalar-mediated contribution dominates. We point out that m α in Eqs. (44) and (45) is a global factor given by the mass of the heaviest charged lepton in the process, and that Eq. (45) assumes S βα L = S ββ L . Fig. 5 shows BR(µ → eγ) and BR(µ → eee) as a function of Λ and κ. Our results are compared to the current bounds and the future sensitivities of the MEG-II and Mu3e experiments. We observe that, in the scalar-dominated regime, BR(µ → eee) > 10 −16 requires Λ to be below ∼ 3000 TeV. A slightly lower upper limit for Λ is found in the dipole-dominated regime when BR(µ → eγ) > 10 −14 . These are precisely the final expected sensitivities of MEG-II and Mu3e. Furthermore, we note that the search for the scalar-mediated contribution in Mu3e will actually be very constraining in all of the parameter space.

4.4 ℓ−α → ℓ−β ℓ−γ ℓ+γ

Complete expressions for the ℓ−α → ℓ−β ℓ−γ ℓ+γ decay width in the absence of φ can be found in [63]. The new contributions mediated by the scalar φ are obtained from the Feynman diagrams shown in Fig. 6. While the diagram on the left involves a flavor conserving (γγ) and a flavor violating (βα) vertex, both vertices in the diagram on the right violate flavor (γα and γβ). The associated amplitude is slightly different from that of the previous process.

Figure 6: Tree-level Feynman diagrams contributing to the process ℓ−α → ℓ−β ℓ−γ ℓ+γ described by the effective Lagrangian in Eq. (1).

Finally, the total decay width can be written as a sum in which Γφ is the decay width in the absence of φ, given in [63]. Here m max f = max (m β , m γ ), and the expression therefore depends on the process in question. Once again, we have only kept the lowest order terms in powers of m β and m γ for each possible combination of couplings.

4.5 ℓ−α → ℓ+β ℓ−γ ℓ−γ

Also for this process, complete expressions for the ℓ−α → ℓ+β ℓ−γ ℓ−γ decay width in the absence of φ can be found in [63]. The new contributions mediated by the scalar φ are given by the Feynman diagrams shown in Fig. 7. We note that both vertices are necessarily flavor violating. Here q = p 1 − p 3 and k = p 1 − p 4 are different from their definitions in the processes above. Writing once more the decay width as the sum of two contributions, where Γφ is the decay width in the absence of φ, given in [63], we find an expression in which m max f = max (m β , m γ ).

Lepton anomalous magnetic moments

In the case of the muon anomalous magnetic moment, the deviation is at the level of ∼ 4 σ, whereas for the electron anomalous magnetic moment the significance is a little lower, slightly below ∼ 3 σ. While further measurements (and possibly improved theoretical calculations) are required to fully confirm these anomalies, these intriguing deviations can be interpreted as a possible hint of new physics [72]. The charged lepton anomalous magnetic moments also receive contributions mediated by the scalar φ. We show in Fig. 8 the relevant Feynman diagram. We will assume that the dominant contribution is induced by lepton flavor conserving (diagonal) couplings, and therefore take the internal and external lepton flavors to be equal. We find a simple expression which agrees with previous results in the literature. In particular, it matches exactly the expression given in [73] in the limit of a massless scalar. Now we are able to compare with the experimental measurements.
Figure 9 shows the favored regions for the diagonal coupling S ββ , with β = e, µ, due to the electron and muon anomalous magnetic moments. Results for S ee derived from (g −2) e measurements are shown in the left panel, whereas the right panel shows results for S µµ as obtained from (g −2) µ measurements. Given the low significance of the (g − 2) e anomaly, one stays within the 3 σ region even if S ee = 0, but a value of about |S ee | ∼ 10 −5 is required in order to achieve agreement at the 1 σ level. The deviation in (g − 2) µ is more significant, which implies that one must introduce larger S µµ values in order to reconcile the theoretical prediction with the experimental measurement. In this case, S µµ couplings of the order of 10 −4 are necessary. In both cases, the required values are in conflict with the bounds discussed in Sec. 3, see Eqs. (7) and (8), and therefore a mechanism to suppress the processes from which they are derived would be necessary for the ultralight scalar φ to be able to provide an explanation of the current g − 2 anomalies.

Conclusions

Ultralight scalars appear in a wide variety of SM extensions, either as very light states or as exactly massless Goldstone bosons. Examples include the axion and the majoron, two well-motivated hypothetical particles at the core of two fundamental problems: the conservation of CP in the strong interactions and the origin of neutrino masses. These states, as well as other ultralight scalars, can be produced in many leptonic processes or act as their mediators, leading to many exotic signatures. In this work we have explored the impact of ultralight scalars on many leptonic observables. We have adopted a model-independent general approach, taking into account both scalar and pseudoscalar interactions with charged leptons, therefore going beyond most existing studies.
First, we have briefly reviewed the current bounds from stellar cooling, which set important constraints on the diagonal couplings, and discussed indirect limits from the 1-loop generation of a coupling to photons. Then, we have revisited the decays α → β φ and α → β γ φ, in which the scalar φ is produced, and provided complete expressions for the three-body leptonic decays, in which φ contributes as a mediator. Finally, the effect of ultralight scalars on the charged lepton anomalous magnetic moments has also been discussed. The phenomenology of ultralight scalars is very rich, since they are kinematically accessible in most high- and low-energy processes. We have discussed many purely leptonic processes, but if φ couples to quarks as well, many hadronic and semi-leptonic channels open up. This could give rise to many signatures at kaon factories [74]. Furthermore, ultralight scalars may leave their footprints in other processes. For instance, they can be produced and emitted in tritium beta decay [75] or µ − e conversion in nuclei [76], have a strong impact on leptogenesis [77], and give rise to non-resonant phenomena at colliders [78]. In our opinion, this diversity of experimental signatures and their potential to unravel some of the most important problems in particle physics through their connection to ultralight scalars merits further investigation.

A Parametrization in terms of derivative interactions

Eq. (1) is completely general and includes both scalar and pseudoscalar interactions of the field φ with a pair of charged leptons. An alternative parametrization in terms of derivative interactions is also possible. The coefficients S̃ L,R of the derivative operators have dimensions of mass −1 and carry flavor indices, omitted for the sake of clarity. Notice that the diagonal β − β − φ vertex is proportional to ( S̃ L + S̃ * L ) ββ P L + ( S̃ R + S̃ * R ) ββ P R , and therefore the diagonal couplings can be taken to be real without loss of generality. As will be shown below, Eq.
(57) only includes pseudoscalar interactions for φ. Therefore, it can be thought of as a particularization of Eq. (1). 4

4 The parametrization in Eq. (57) is completely general if φ is a pure pseudoscalar, as is usually the case for the Goldstone bosons in many models. In such scenarios, the two parametrizations for the effective Lagrangian L φ introduced here are related to two possible ways to parametrize the Goldstone boson. Eq. (1) follows from a cartesian parametrization, which splits a complex scalar field in terms of its real and imaginary components. Alternatively, the parametrization in terms of derivative interactions in Eq. (57) would follow from a polar parametrization, which splits a complex scalar field in terms of its modulus and phase. As we will prove below, they lead to the same results for observables involving on-shell leptons.

Physical observables must be independent of the parametrization chosen. We now proceed to show that the two parametrizations considered here are completely equivalent for a pure pseudoscalar in processes involving on-shell leptons. First, we recall the equations of motion for the lepton field ℓ α and its conjugate ℓ̄ α , valid for on-shell leptons. One can now rewrite Eq. (57) as the sum of a total derivative and a derivative acting on the lepton fields. The total derivative does not contribute to the action, whereas the derivative on the lepton fields can be replaced using the equations of motion in Eq. (58). This leads to a dictionary between the S̃ X and S X coefficients, which for the diagonal couplings reduces to a simpler relation. Since both S̃ ββ X are real parameters, Eq. (62) implies that the diagonal S ββ couplings must be purely imaginary. It is straightforward to show that, in this case, the flavor conserving interactions of φ in Eq. (1) are proportional to γ 5 (see Eq. (6)). This proves that Eq.
(57) is not general, but only includes pseudoscalar interactions, and there is no one-to-one correspondence between the two parametrizations. Given a set of S̃ X couplings, one can always find the corresponding S X couplings using Eqs. (60) and (61). However, certain sets of S X couplings, namely those with non-vanishing real parts, cannot be expressed in terms of S̃ X couplings. This stems from the fact that purely scalar interactions are not included in Eq. (57). The equivalence for the case of a pure pseudoscalar can be explicitly illustrated by comparing the analytical expressions obtained with Eqs. (1) and (57) for a given observable. We can start with a trivial example, the process α → β φ, discussed in Sec. 4.1. Using the parametrization in Eq. (57), one can easily derive the decay width of this two-body decay, where terms proportional to m β have been neglected. This result differs from Eq. (22) only by a factor of m 2 α , as one would obtain from the direct application of the dictionary in Eqs. (60) and (61). Let us now consider a less trivial example: ℓ−α → ℓ−β ℓ+β ℓ−β. The computation of its amplitude with the Lagrangian in Eq. (57) makes use of the same Feynman diagrams shown in Fig. 4. In this case one obtains

M̃ φ = ū(p 3 ) 2 (−q̸) ( S̃ ββ L P L + S̃ ββ R P R ) v(p 4 ) · i/(q 2 + iε) · ū(p 2 ) q̸ ( S̃ βα L P L + S̃ βα R P R ) u(p 1 ) − ū(p 2 ) 2 (−k̸) ( S̃ ββ L P L + S̃ ββ R P R ) v(p 4 ) · i/(k 2 + iε) · ū(p 3 ) k̸ ( S̃ βα L P L + S̃ βα R P R ) u(p 1 ) ,

where the factor of 2 preceding the diagonal coupling is due to the addition of the Hermitian conjugate, as explicitly shown in Eq. (57). Again, explicit flavor indices have been introduced. The decay width can then be computed. We note that infrared divergences also occur in interference terms at this order in m β /m α . This explains the appearance of several log factors. The decay width in Eq. (65) can be compared to a previous result in the literature. The authors of [27] drop all interference terms in their calculation, and therefore their result must be compared to the first line of Eq. (65).
One can easily relate the S̃ L,R coefficients to the ones in [27], both for the flavor violating terms and for the flavor conserving ones. With this translation, it is easy to check that both results agree up to a global factor of 1/2. In order to compare the ℓ−α → ℓ−β ℓ+β ℓ−β decay widths obtained with both parametrizations we need an expanded version of Eq. (43) that includes terms up to O(m β /m α ). This is given by
Artefacts Removal of EEG Signals with Wavelet Denoising

Recordings of EEG signals often still contain contaminating electrical signals of non-cerebral origin, such as ocular and muscle activity, called artefacts. The amplitude of artefacts can be quite large relative to the amplitude of the cortical signals of interest. In this paper, an application of the wavelet denoising method for artefact removal from EEG signals is proposed. The experimental results show that contaminant artefacts in EEG signals can be significantly removed.

Introduction

Biomedical signals in their various forms are sources of information from the human body that are useful for medical interpretation. The brain is an organ of the human body composed of neurons that can generate electrical potentials known as neuroelectric potentials [1]. Electroencephalography (EEG) is a non-invasive measurement of brain electrical activity obtained by placing electrodes on the scalp over areas of the brain [2]. EEG provides brain signal information, recorded non-invasively, for analyzing brain activity, which is important for medical uses (i.e. diagnosis, monitoring, and managing diseases or disorders of the nerves) and research uses (i.e. neuroscience, cognitive science, cognitive psychology, neurolinguistics and psychophysiological research). EEG signals are often contaminated by artefacts, frequently arising from muscle activity, which may reduce their usefulness for clinical work or research by disturbing interpretation of the signal [2-5]. The main sources of artefacts are ocular (EOG) artefacts (eye movement and eye blink), muscle noise (EMG), the heart signal, and various other kinds of noise that mix with the brain signals in EEG recordings [6-10]. Removal of artefacts is thus a prerequisite for quantitative analysis of EEG recordings.
Researchers eliminate artefacts from the recorded EEG signals, called the raw EEG data [11], to obtain a "clean EEG signal" that can be further analyzed. Quantitative methods for the analysis of EEG have been developed by many researchers. Methods widely used to treat and reduce the noise of brain signals include bandpass filters, autoregressive models [arjon], and Finite Impulse Response (FIR) filters. In this paper, a method using wavelet denoising with the Daubechies wavelet (db1) and a third-level decomposition is proposed. The wavelet is a mathematical model well suited to detecting and analyzing events that occur at different scales, providing information in both the time and frequency domains [13]. The wavelet method has the ability to transform a time-domain signal into the time-frequency domain, which helps to better understand the characteristics of the signal. Denoising plays an important role in signal analysis by removing noise while retaining the important information in the signal. In statistical practice it is usually associated with discretely sampled data rather than continuous functions. Nowadays, the stationary wavelet transform (SWT) of EEG signals mixed with artefacts is widely used for signal denoising.

Methodology

Sample data of the EEG signal were obtained by experiment. The experiments in this study were conducted with eight untrained subjects, males aged 20-22 years in good health, with thin hair and without any abnormalities. The tool used in this experiment is the Emotiv EPOC wireless EEG neuroheadset, which has 14 electrodes plus 2 reference electrodes, and the package comes with an application programming interface (API). It records the EEG signals from the experimental activities at 128 samples per second in one epoch. Before the electrodes are set on the device, they are preferably moistened with a liquid electrolyte to improve the conductivity of the electrodes.
The coated electrodes are attached to the subject's scalp at channels F7, T7, O1, O2, T8 and F8, respectively (see Figure 1).

Fig. 1. A top view of the brain that shows the locations for EEG recording according to the 10-20 system.

Recording of the sample data and the design of the experiment with its stimuli were conducted with software embedded in the OpenViBE system. The stimulus scenarios used in the experiment are given in Tables 1 and 2. In the first scenario with the first stimulus, the data are recorded under normal conditions for 30 seconds. In the 31-32 second period, the sign '+' appears on the screen to indicate preparation for the second stimulus. When the arrow stimulus appears (second stimulus), the subject must close their eyes for 30 seconds. The third stimulus starts with normal conditions for about 18 seconds. Then, in the 18-20 second period, the sign '+' appears again, indicating the preparation for the next stimulus. When the arrow stimulus appears, the subject starts to blink. The details of all scenarios are given in Tables 1 and 2.

EEG signal processing

Wavelet theory is a relatively recently developed concept. The properties of wavelets are as follows: the time complexity is linear, i.e. the wavelet transform can be computed in time that is linear in the signal length; the wavelet coefficients are sparse, as in practice most wavelet coefficients are small or zero, a condition that is very beneficial in compression applications; and wavelets can be adapted to different types of functions, such as discontinuous functions and functions defined on bounded domains [15]. In general, the wavelet basis function is defined by the following equation [18-21]:

ψ s,τ (t) = |s| −1/2 ψ((t − τ)/s),

where s and τ, s ≠ 0, denote the scale and translation parameters, and t indicates time. In the continuous wavelet transform, the signal is analyzed using a set of basis functions related by scaling and simple translation.
The continuous wavelet transform (CWT) is then given by the following equation:

W (s, τ) = ∫ x(t) ψ* s,τ (t) dt.

In wavelet decomposition, the input signal is filtered: the lowpass filter generates waveforms called approximations and the highpass filter generates waveforms called details. The combination of both filters with the wavelet functions, arranged in a hierarchical scheme, is called multiresolution decomposition, in which the decomposition separates the signal into "details" at different scales and an "approximation". The decomposition and reconstruction schemes and their procedures are given in Figures 2 and 3. Wavelet denoising aims to remove noise, in the form of artefacts, from the recorded EEG signals while preserving the signal characteristics, regardless of the frequency content. The process of denoising (noise reduction) is based on the elimination or reduction of the part of the signal considered to be noise. Within wavelet denoising there are many methods to reduce noise. Denoising is applied by thresholding the wavelet coefficients of the signal, with the input coefficients obtained from the discrete wavelet transform. With thresholding, the wavelet transform is able to remove noise or other undesirable signals in the wavelet domain. The desired signal is then obtained after performing the inverse wavelet transform. With this method, we need to understand the concept of wavelet coefficients, which represent a measurement, at a given frequency, of the similarity between the signal and the selected wavelet function. Wavelet coefficients are calculated as a convolution of the signal and the wavelet function, which acts as a bandpass filter. When we analyze the signal at a large scale, we obtain the global information of the signal, called the approximation, and at a smaller scale we obtain local information, called the details [26-28].

Results and discussion

In removing the artefacts with the wavelet denoising method, the recorded EEG signal is described in model form with embedded noise as follows.
X(n) = S(n) + G(n), where X denotes the recorded signal corrupted with noise, S the clean EEG signal, G the added noise, and n the sample index. The denoising procedure consists of three stages, namely decomposition, thresholding of the detail coefficients, and reconstruction. The extracted signals of three subjects (subjects 1, 4, and 6) are given below (Figures 4-12) to indicate the robustness of the developed method. The input signal, the raw EEG data, is decomposed on the wavelet basis, where the information about the signal is carried by the wavelet coefficients. Denoising is obtained by thresholding the wavelet coefficients, i.e. by separating the wavelet coefficients at the chosen threshold. Generally, the wavelet coefficients associated with low frequencies (hereinafter referred to as the approximation coefficients) are kept, while the high-frequency ones (hereinafter referred to as the detail coefficients) are the ones to be denoised. The wavelet denoising method uses a member of the Daubechies wavelet family (dbN); here Daubechies db1 is used with decomposition level 3. The signal is passed through wavelet transform blocks for the decomposition process and decomposed into wavelet coefficients, which are then thresholded; this is where the denoising occurs. The thresholding function eliminates noise while preserving as much of the important signal information as possible. The raw EEG data still contains high-frequency components, and the artefacts lie at these frequencies. The wavelet method is performed to obtain a smoother signal and to enhance the removal of artefact noise by eliminating the high frequencies, so that the final result is a clean EEG signal with amplitudes in the 2-6 μV range, smaller than those of the raw EEG signal. The amplitude of the wavelet denoising result is thus closer to the expected EEG signal amplitude range.
Conclusion

The data samples obtained (hereinafter referred to as the raw EEG signals), in which the artefacts tend to be dominant at high frequencies, were denoised with wavelet denoising. Clean EEG signals were obtained with amplitudes of 2-6 µV and a dominant frequency range of 8-13 Hz, which includes the alpha wave rhythm observed when subjects are fully awake and relaxed during the trial. The proposed method therefore produces a signal with amplitude and frequency ranges that approach those of the true EEG signal. Wavelet denoising in this study used the Daubechies db1 wavelet at decomposition level 3 with 1-D SWT denoising; further analysis with different Daubechies orders is needed to compare the denoising results.
Minimizing Aliasing in Multiple Frequency Harmonic Balance Computations

The harmonic balance method has emerged as an efficient and accurate approach for computing periodic, as well as almost periodic, solutions to nonlinear ordinary differential equations. The accuracy of the harmonic balance method can however be negatively impacted by aliasing. Aliasing occurs because Fourier coefficients of nonlinear terms in the governing equations are approximated by a discrete Fourier transform (DFT). Understanding how aliasing occurs when the DFT is applied is therefore essential to improving the accuracy of the harmonic balance method. In this work, a new operator that describes the fold-back, i.e. aliasing, of unresolved frequencies onto the resolved ones is developed. The norm of this operator is then used as a metric for investigating how the time sampling should be performed to minimize aliasing. It is found that a time sampling which minimizes the condition number of the DFT matrix is the best choice in this regard, both for single and multiple frequency problems. These findings are also verified for the Duffing oscillator. Finally, a strategy for oversampling multiple frequency harmonic balance computations is developed and tested.
Mathematics Subject Classification: 65L70

1 Introduction

Many processes found in engineering and scientific applications are time-periodic. Some examples are fluid flows inside turbomachines, vibrations of structures, or voltages within AC circuits. In order to understand these processes, they may be simulated by integrating their governing equations in time until a periodic solution is obtained. In cases when the initial condition is far from being periodic and/or transient phenomena decay slowly, it may however take a long time until the solution reaches a periodic state. In terms of a numerical simulation that integrates between discrete time steps, this translates into a large number of iterations, and consequently, a high computational cost [10,17]. An alternative approach that can be more computationally efficient is to directly seek solutions that belong to some finite dimensional vector space spanned by a set of periodic basis functions. This paper considers the special case when these basis functions are sinusoids with different frequencies. In this case, the solution thus takes the form of a truncated Fourier series, for which the unknowns become the Fourier coefficients. There exist several methods in the literature for determining these Fourier coefficients. One approach is to integrate the equations obtained from substituting the Fourier series into the governing equations against each of the basis functions that span the vector space. This represents a Galerkin method, and will yield one equation per basis function. In cases when the governing equations are linear, the equations resulting from the integration uncouple with respect to the Fourier coefficients. If the governing equations on the other hand are nonlinear, each Fourier coefficient will in general be present in each of the final equations. When the Galerkin method is applied to linear problems it is sometimes referred to as the Linear Frequency Domain method [7,15]. Some applications of the
aforementioned approach to nonlinear problems can be found in e.g. [31,39]. Unfortunately, the Galerkin integral can become very complicated to solve analytically in the nonlinear case [16,31]. This is especially true when the dimension of the vector space is large and/or when the governing equations are complex in nature. Other methods have therefore been developed for nonlinear problems. One common approach is to require that the equations resulting from the substitution are only satisfied at a discrete set of time instances, rather than in a variational sense as in the Galerkin method. This leads to a class of methods which here will be referred to as harmonic balance methods. Harmonic balance methods can be formulated in both the time and the frequency domain, with the main difference being that the solution variables are time samples in the former, and Fourier coefficients in the latter [10,26]. Within the electronics community, harmonic balance is most often formulated in the frequency domain [10], and the modern version of harmonic balance is credited to Nakhla and Vlach [34]. Within the fluid dynamics community, the time domain harmonic balance method was first introduced by Hall et al. [16] and the frequency domain formulation by McMullen [33]. A good review of the history and theory of harmonic balance can be found in a two part review paper by Gilmore and Steer [10,11].
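In the time domain formulation just described, the solution variables are time samples and the periodic time derivative is evaluated by a spectral operator acting on those samples. A minimal sketch of this construction (our own illustration, with a single base frequency ω = 1, K harmonics and M = 2K + 1 uniform samples; the variable names are ours):

```python
import numpy as np

K = 3                              # number of harmonics
M = 2 * K + 1                      # number of time samples
t = 2 * np.pi * np.arange(M) / M   # uniform sampling of one period (omega = 1)

# E maps Fourier coefficients [a0, a1..aK, b1..bK] to time samples,
# dE maps the same coefficients to the samples of the time derivative.
E = np.column_stack([np.ones(M)]
                    + [np.cos(k * t) for k in range(1, K + 1)]
                    + [np.sin(k * t) for k in range(1, K + 1)])
dE = np.column_stack([np.zeros(M)]
                     + [-k * np.sin(k * t) for k in range(1, K + 1)]
                     + [k * np.cos(k * t) for k in range(1, K + 1)])

# time-spectral derivative operator: interpolate, differentiate, resample
D = dE @ np.linalg.inv(E)

u = np.cos(t)                      # a resolved signal
print(np.allclose(D @ u, -np.sin(t)))  # True: exact for resolved harmonics
```

For linear problems this operator is all that is needed; the complications discussed next arise when nonlinear terms must also be transformed.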
The harmonic balance method is based on applying a discrete Fourier transform (DFT) to either calculate the time derivative term (time-domain formulation), or the Fourier transform of the nonlinear terms (frequency domain formulation) [10,26]. Applying a DFT to nonlinear problems can however introduce aliasing, which is known to reduce the accuracy of the harmonic balance method [24,29,32], and sometimes even lead to nonphysical solutions [28]. Several methods have therefore been developed to address these problems. Huang and Ekici propose to add a time-spectral viscosity operator to the harmonic balance equations [19]. This operator acts by damping the sinusoids which it is applied to, and has been shown by Huang and Ekici to improve convergence in cases when aliasing otherwise prevents it [19]. Another approach, investigated by LaBryer and Attar [28], is to filter the solution between each nonlinear iteration. The main idea of the filtering approach is to limit the amplitude of the sinusoids that have the highest frequencies, since these in general are the ones most contaminated by aliasing. In cases when the nonlinearity of the governing equations is known, exact frequency domain filters which keep the lower frequency sinusoids perfectly free from aliasing can in fact be constructed by using a sharp cut-off frequency based on Orszag's rule [28,36]. It should also be noted that a frequency domain filter with a sharp cut-off frequency is equivalent to the oversampling strategy employed by Frey et al. in their harmonic balance solver [8].
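How the DFT of a nonlinear term produces aliasing can be seen with the smallest possible example, which is our own illustration rather than a case from the paper: a cubic nonlinearity and one resolved harmonic (K = 1). Since cos³ t = (3/4) cos t + (1/4) cos 3t, the unresolved 3ω component folds back onto the mean when only M = 2K + 1 = 3 samples are used, while oversampling consistent with Orszag's rule removes it. The function name is ours.

```python
import numpy as np

def fourier_coeffs(u, t, K):
    """DFT by least squares: coefficients [a0, a1..aK, b1..bK] of samples u at times t."""
    cols = [np.ones_like(t)] + [np.cos(k * t) for k in range(1, K + 1)] \
         + [np.sin(k * t) for k in range(1, K + 1)]
    c, *_ = np.linalg.lstsq(np.column_stack(cols), u, rcond=None)
    return c

K = 1
for M in (3, 8):  # 3 = minimal sampling; 8 > 3K + 1 suffices for a cubic nonlinearity
    t = 2 * np.pi * np.arange(M) / M
    u = np.cos(t)
    c = fourier_coeffs(u**3, t, K)
    print(f"M = {M}: a0 = {c[0]:+.3f}, a1 = {c[1]:+.3f}")
# With M = 3 the unresolved (1/4) cos 3t aliases onto the mean, giving a0 = 0.25;
# with M = 8 the mean vanishes; a1 = 0.75 is recovered in both cases.
```

This is precisely the mechanism the filtering and oversampling strategies above are designed to suppress: the fold-back corrupts a resolved coefficient (here the mean) without any warning from the resolved part of the spectrum.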
In addition to filtering, a filtered inverse discrete Fourier transform has also been used to improve the convergence and stability of the harmonic balance method [3,18]. This approach is particularly useful in cases when discontinuities and/or strong gradients lead to slow harmonic convergence (Gibbs phenomenon). The use of a filtered reconstruction operator is thoroughly discussed by Gottlieb and Shu [12]. For its application to Fourier spectral methods alongside so-called reprojection approaches the reader is referred to [9]. Harmonic balance methods were originally developed for problems where the solution contains one fundamental frequency and its harmonics. Several problems in nature can however be expected to have solutions which contain a more general set of frequencies. In these cases, the solution belongs to a wider class of functions known as almost periodic functions [1,26]. A fundamental problem that arises when the harmonic balance method is applied to almost periodic functions is how to construct the DFT. In the original harmonic balance method, the standard discrete Fourier transform based on M uniformly distributed points between 0 and the reciprocal of the lowest resolved frequency is almost exclusively employed. Here, M ≥ 2K + 1, where K is the number of frequencies included in the Fourier series expansion. The reason for choosing this sampling is that it satisfies the assumptions of the Whittaker-Kotel'nikov-Shannon sampling theorem [23,38,40]. A uniform time sampling that satisfies the sampling theorem can also be constructed for almost periodic signals as long as the frequencies considered are commensurable. Such a sampling may however require a very large number of sampling points, which makes this approach unfavourable from a computational perspective [21,26]. Other approaches have therefore been developed for cases when the solution is an almost periodic function. Some of these are based on introducing a basis for the frequency set, such that each
frequency can be written as an integer linear combination of a finite set of fundamental frequencies [26]. The method developed by Frey et al. [8] takes into account only frequencies which are multiples of one of the fundamental frequencies and applies the so-called harmonic set approach. The advantages of the harmonic set approach are that its computational cost scales linearly with the number of frequencies considered and that the standard discrete Fourier transform can be used within each harmonic set. The harmonic set approach does however neglect nonlinear coupling between different harmonic sets, except through common frequencies such as the zeroth frequency. These coupling terms may be accounted for by introducing several time variables, one for each fundamental frequency in the basis [14,22]. This approach once again relies on the standard discrete Fourier transform with uniform time samples, but incurs a substantially larger computational cost than the harmonic set approach.

A third approach, which is the main topic of this paper, is to employ the almost periodic Fourier transform (APFT) introduced by Kundert et al. [25,26] in the harmonic balance computation. The APFT differs from the standard discrete Fourier transform in two ways. First, it considers an arbitrary set of frequencies and is therefore well suited for cases when the solution is an almost periodic function. Secondly, it does not require that the time samples are distributed uniformly, as in the standard discrete Fourier transform. Instead, Kundert et al.
suggest that the time sampling is chosen so that it minimizes the condition number of the DFT matrix used in the APFT [25,26]. Finding a time sampling that satisfies this criterion for an arbitrary set of frequencies is however a very complex problem, which, to the authors' knowledge, has not yet been solved analytically. In the original work on harmonic balance methods for arbitrary frequency sets, Chua and Ushida [2] employ a uniform discretization and use oversampling to avoid ill-conditioning. The same strategy has also been used in recent years by Ekici and Hall [5,6]. Kundert et al. [25,26] were the first to obtain a well-conditioned DFT matrix based on 2K + 1 time samples. This was done using their Near-Orthogonal Selection Algorithm. Since the work of Kundert et al., several other algorithms have been developed to compute a time sampling that minimizes the condition number of the DFT matrix, see e.g. [13,21,27,35,37].

The rationale for choosing a time sampling that minimizes the condition number of the DFT matrix is that it limits the effect that a perturbation of the sampled signal can have on the resulting DFT. Since all unresolved frequencies manifest themselves as perturbations of the sampling, this implies that a low condition number can limit the amount of aliasing produced by the DFT [26]. In [26], it is also noted that the condition number does not say anything about how much the amplitude of a particular unresolved sinusoid affects the resolved ones. This implies that the condition number in itself cannot be used to identify an alias-free DFT. In order to do this, the mapping of the unresolved sinusoids onto the resolved ones must be defined [26]. In this work, an operator that describes this mapping has been developed. The benefit of this operator, hereinafter referred to as the alias operator, is that it directly defines how a particular time sampling affects aliasing. As will be shown later, the norm of this operator also provides an a priori bound on the amount
of aliasing that a particular time sampling gives rise to.

In this paper, the relation between the alias operator and the condition number of the DFT matrix employed in the APFT will be investigated. After this, the relation between the norm of the alias operator, the condition number of the DFT matrix and the actual alias error obtained in several harmonic balance computations will be investigated. Finally, the benefits of employing oversampling in multiple frequency harmonic balance computations based on the APFT will be investigated.

Harmonic Balance Method

The purpose of this paper is to study how aliasing occurs in harmonic balance solutions of first order ordinary differential equations of the following form

    dq/dt = f(q, t). (1)

Here, q ∈ R^N is a vector that contains the unknown solution and f : R^N × R → R^N is a nonlinear function that describes the dynamics of the problem. The nonlinearity of f(q, t) implies that an almost periodic solution to Eq. (1) can contain an infinite number of sinusoids with different frequencies. Computing the amplitude of all these sinusoids is however not feasible from a numerical perspective. Therefore, approximate solutions that are spanned by a limited number of sinusoids are considered instead

    q(t) ≈ Σ_{k=−K}^{K} q̂_k e^{iω_k t}. (2)

Here, Λ represents the set of frequencies included in the series expansion

    Λ = {ω_k : k = −K, …, K}. (3)

Note that the only requirement put on these frequencies is that ω_{−k} = −ω_k. If a Galerkin projection of Eq. (1) onto the subspace spanned by the sinusoids in Eq. (2) is performed, the following is obtained

    𝛀_Λ q̂_Λ = f̂_Λ(q̂_Λ, Λ). (4)

Here, 𝛀_Λ = Ω_Λ ⊗ I, where ⊗ is the Kronecker product, I is an identity matrix of size N × N, and

    Ω_Λ = diag(iω_{−K}, …, iω_K). (5)

The vectors q̂_Λ and f̂_Λ(q̂_Λ, Λ) in Eq. (4) further consist of 2K + 1 sub-vectors, in which the kth sub-vector contains q̂_k and f̂_k respectively. Calculating the Galerkin projection of Eq. (1) can be very complicated in cases when the number of sinusoids considered in Eq.
(2) is large, and/or the function f(q, t) is very complex in nature. In order to overcome this difficulty, the harmonic balance method approximates f̂_Λ(q̂_Λ, Λ) by a discrete Fourier transform (DFT) of f(q, t) instead

    𝛀_Λ q̂_Λ = 𝐄_Λ(t) f(𝐄^{-1}_Λ(t) q̂_Λ, t). (6)

The DFT of f(q, t) in Eq. (6) is performed in two steps. In the first step, f(q, t) is evaluated at a set of time instances

    t = [t_0, t_1, …, t_{M−1}]^T. (7)

This will in turn require the realization of Eq. (2) at these time instances

    q(t) = 𝐄^{-1}_Λ(t) q̂_Λ. (8)

Here, 𝐄^{-1}_Λ(t) = E^{-1}_Λ(t) ⊗ I, with

    [E^{-1}_Λ(t)]_{mk} = e^{iω_k t_m}. (9)

In the second step, the sampling of f(q, t) is transformed back into the frequency domain using the inverse of 𝐄^{-1}_Λ(t), here denoted 𝐄_Λ(t). For this inverse to be well defined, the columns in 𝐄^{-1}_Λ(t) must be linearly independent. Due to the structure of 𝐄^{-1}_Λ(t), this holds true if M = 2K + 1 time instances can be found such that the columns in E^{-1}_Λ(t) are linearly independent. As it turns out, this is always possible. In order to prove this, note that for an equidistant sampling t_m = mΔt, the entries of E^{-1}_Λ(t) take the form

    [E^{-1}_Λ(t)]_{mk} = e^{iω_k mΔt} = z_k^m, with z_k = e^{iω_k Δt}. (10)

For an equidistant time sampling, E^{-1}_Λ(t) thus corresponds to a transposed Vandermonde matrix. Its determinant is therefore given by

    det E^{-1}_Λ(t) = ∏_{j<k} (z_k − z_j). (11)

It is easy to verify that this expression is non-zero for almost all choices of Δt. This shows that for any given set of frequencies one can find an equidistant sampling point distribution t such that E^{-1}_Λ(t), and thereby 𝐄^{-1}_Λ(t), is invertible. The time sampling is usually chosen to minimize the condition number of E^{-1}_Λ(t). Note that a small condition number automatically ensures that E^{-1}_Λ(t) is invertible. Some authors use a numerical optimization procedure to minimize the condition number. It is important to note that the cost of this optimization procedure is usually negligible compared to solving the harmonic balance system.
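The invertibility argument above can be checked numerically. The sketch below (our construction; the frequency values, including an incommensurable pair, are chosen arbitrarily) builds the DFT matrix for an equidistant sampling and verifies that its determinant matches the transposed-Vandermonde product and is nonzero:

```python
import numpy as np

# Arbitrary frequency set (including an incommensurable pair) and a generic dt
freqs = np.array([-np.sqrt(2), -1.0, 0.0, 1.0, np.sqrt(2)])
M = len(freqs)
dt = 0.37
t = dt * np.arange(M)

E_inv = np.exp(1j * np.outer(t, freqs))   # [E^-1]_{mk} = exp(i w_k m dt)

# With z_k = exp(i w_k dt), E_inv[m, k] = z_k**m is a transposed Vandermonde
# matrix, so its determinant is the product of pairwise differences z_k - z_j.
z = np.exp(1j * freqs * dt)
vander_det = np.prod([z[k] - z[j] for k in range(M) for j in range(k)])
num_det = np.linalg.det(E_inv)
```

A nonzero determinant for this generic Δt confirms that the equidistant sampling yields an invertible matrix, as claimed above.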
In the literature, the discrete Fourier transform defined by the matrices E^{-1}_Λ(t) and E_Λ(t) is often referred to as the almost periodic Fourier transform (APFT) [26]. The APFT represents a generalization of the standard DFT to arbitrary sets of frequencies and sampling points, and will in fact be equivalent to the standard DFT when the frequencies in Λ are integer multiples of a single base frequency and the time samples are distributed uniformly between zero and the reciprocal of this base frequency.

Equation (6) represents the frequency-domain formulation of the harmonic balance method. An equivalent formulation in the time domain may also be expressed as follows

    𝐄^{-1}_Λ(t) 𝛀_Λ 𝐄_Λ(t) q(t) = 𝐄^{-1}_Λ(t) 𝐄_Λ(t) f(q(t), t). (12)

This formulation is easily obtained by premultiplying Eq. (6) from the left by 𝐄^{-1}_Λ(t) and then using Eq. (8). The matrix 𝐄^{-1}_Λ(t)𝐄_Λ(t) in the above equation will further be equal to the identity matrix when M = 2K + 1 time samples are employed. If more than M = 2K + 1 time samples are used, on the other hand, this matrix will represent a projection that corresponds to the application of a modal filter.

The frequency-domain formulation of the harmonic balance method presented in Eq. (6) will be used for the remainder of this paper. Due to the equivalence of Eqs. (6) and (12), however, the results presented will also apply to the time-domain formulation of the harmonic balance method.

Error Sources

It is well known that the harmonic balance method can give rise to two types of errors: aliasing and harmonic truncation [10,11,26]. Aliasing occurs because f(q, t) can contain more frequencies than those accounted for by the DFT in Eq. (6). Aliasing will on the other hand not occur if Eq.
(4) is solved. This is because the Galerkin projection in this case corresponds (up to a constant) to the standard formula for calculating Fourier coefficients. Independently of whether the harmonic balance or Galerkin method is used, however, the harmonic truncation error will always be present. This error occurs as a result of the fact that the contribution of the unresolved sinusoids to the solution, including their nonlinear coupling with the resolved sinusoids, is neglected.

Both the alias error and the harmonic truncation error can naturally be reduced by including more frequencies in Λ. This is however not always possible in practical applications, since the computational cost of the harmonic balance method scales almost linearly with the size of Λ, here denoted #Λ. It is therefore often hard to avoid aliasing and harmonic truncation errors in the solution. This highlights the importance of understanding how these errors occur, and how they can be limited. In this paper, the focus is put on the aliasing error.

Alias Operator

As noted in the previous section, aliasing occurs when f(q, t) contains more frequencies than those accounted for by the DFT in Eq. (6). Let Λ′ denote the union of Λ and the set of all frequencies that are contained in f(q, t). In cases when f(q, t) is a polynomial in q, with coefficients that are finite Fourier sums in t, the set Λ′ will be finite. This in turn implies that the signal f(q, t) can be written as

    f(q(t), t) = Σ_{k: ω_k ∈ Λ′} f̂_k e^{iω_k t}. (13)

Note that Λ ⊆ Λ′. From this, it follows that Eq.
(13) can be rewritten as

    f(t) = E^{-1}_Λ(t) f̂_Λ + E^{-1}_{Λ′\Λ}(t) f̂_{Λ′\Λ}. (14)

Based on this expression, the DFT of f(q, t) may now be expressed as

    E_Λ(t) f(t) = f̂_Λ + E_Λ(t) E^{-1}_{Λ′\Λ}(t) f̂_{Λ′\Λ}. (15)

This shows that the Fourier coefficients obtained from the DFT of f(q, t) consist of two terms. The first term represents the correct (alias-free) Fourier coefficients corresponding to the resolved frequencies, and the second term the fold-back of the remaining Fourier coefficients onto the resolved ones. The operator E_Λ(t)E^{-1}_{Λ′\Λ}(t) that defines the fold-back of the unresolved Fourier coefficients in Eq. (15) will be referred to as the alias operator for the remainder of this paper. Based on Eq. (15), it is easy to show that the norm of this operator puts the following bound on the alias error

    ‖E_Λ(t) f(t) − f̂_Λ‖₂ ≤ ‖E_Λ(t) E^{-1}_{Λ′\Λ}(t)‖₂ ‖f̂_{Λ′\Λ}‖₂. (16)

Provided that Λ′ is known, this relation may be used to obtain an a priori bound on the alias error that a particular time sampling t can give rise to. As will be shown next, however, determining Λ′ for a particular problem is not a trivial task.

Selection of Unresolved Frequencies

The set Λ′ is completely defined by the function f(q, t) and the set Λ. In the present work, it is assumed that f(q, t) is a polynomial of degree p in q. This assumption allows the set Λ′ to be calculated by simple addition and subtraction of the frequencies in Λ. Clearly, this assumption is not valid for all problems. Despite this, it can still be a good approximation provided that f(q, t) itself approximates an analytic function, i.e., one that is locally given by a convergent power series (which then coincides with its Taylor series). The choice of Λ′ directly influences how well the norm of the alias operator will represent the actual alias error obtained in a harmonic balance simulation. To see this, note that Eq. (16) represents the largest possible alias error that any f̂_{Λ′\Λ} can give rise to. This means that the bound in Eq. (16) may not be representative if some elements in f̂_{Λ′\Λ} are negligible for a given problem. It is also important to keep in mind that Eq.
(16) gives a relative bound on the alias error. As such, if Λ has been selected to contain all relevant frequencies for a given problem, then ‖f̂_{Λ′\Λ}‖₂ ≈ 0 and, consequently, the actual alias error will be small independent of the norm of the alias operator.

Relation to Condition Number

The condition number of E^{-1}_Λ(t) has been successfully used by several authors in the past as a measure for selecting the time sampling in multiple frequency harmonic balance computations [13, 21, 25-27, 35, 37]. This raises the question of how the condition number relates to the alias operator defined in the present work. The condition number can be defined based on the l₂ norm as

    κ(E^{-1}_Λ(t)) = ‖E^{-1}_Λ(t)‖₂ ‖E_Λ(t)‖₂. (17)

This definition can be used to derive a bound on the norm of the alias operator as follows

    ‖E_Λ(t) E^{-1}_{Λ′\Λ}(t)‖₂ ≤ ‖E_Λ(t)‖₂ ‖E^{-1}_{Λ′\Λ}(t)‖₂ = κ(E^{-1}_Λ(t)) ‖E^{-1}_{Λ′\Λ}(t)‖₂ / ‖E^{-1}_Λ(t)‖₂. (18)

Both matrices in the last equation have entries with norm 1. From this, it follows that there exists an upper bound on ‖E^{-1}_{Λ′\Λ}(t)‖₂, and a lower bound on ‖E^{-1}_Λ(t)‖₂, both independent of t. Equation (18) may therefore be rewritten as

    ‖E_Λ(t) E^{-1}_{Λ′\Λ}(t)‖₂ ≤ α(Λ, Λ′, t) κ(E^{-1}_Λ(t)). (19)

This equation shows that the condition number in fact puts a bound on the alias operator. Note, however, that this does not imply that a time sampling which minimizes the condition number also minimizes the norm of the alias operator.

The complexity of the alias operator, the condition number and α(Λ, Λ′, t) in Eq. (19) makes it very hard to investigate how they relate analytically. Numerical optimizations were therefore used to search for time samplings which give a minimal condition number, a minimal norm of the alias operator or a maximal value of α(Λ, Λ′, t). These results could then be used to investigate whether a time sampling that gives, e.g., a low condition number also yields a low norm of the alias operator. For a detailed description of the optimization procedure used in the present work, see "Appendix B".
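The submultiplicativity step behind Eq. (19) can be verified numerically. In the sketch below (our code; the frequency sets and the jittered sampling are arbitrary choices, and alpha is taken as the norm ratio appearing in the derivation), the norm of the alias operator is compared against κ times this ratio:

```python
import numpy as np

def Einv(freqs, t):
    # [E^-1]_{mk} = exp(i w_k t_m): maps Fourier coefficients to time samples
    return np.exp(1j * np.outer(t, freqs))

rng = np.random.default_rng(3)
w = 1.0
res = np.array([-w, 0.0, w])              # resolved set Lambda
unres = np.array([-2*w, 2*w])             # Lambda' \ Lambda for p = 2
T = 2 * np.pi / w
t = (np.arange(3) + 0.3 * rng.uniform(-1, 1, 3)) * T / 3   # jittered sampling

A_res = Einv(res, t)
E = np.linalg.inv(A_res)
alias_norm = np.linalg.norm(E @ Einv(unres, t), 2)   # norm of the alias operator
kappa = np.linalg.cond(A_res)
alpha = np.linalg.norm(Einv(unres, t), 2) / np.linalg.norm(A_res, 2)
# Submultiplicativity of the spectral norm gives alias_norm <= kappa * alpha,
# cf. Eq. (19); the bound holds for any time sampling.
```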
Almost Periodic Fourier Transform with Oversampling

In this work, the impact of oversampling on the alias error obtained in a harmonic balance computation has also been investigated. The motivation for doing this is that aliasing can never be eliminated when M = 2K + 1 time samples are employed. This can be proven by first noting that the columns in E^{-1}_Λ(t) are required to be linearly independent, and therefore form a basis for C^M. From this, it follows that there exists a nonzero matrix W such that E^{-1}_{Λ′\Λ}(t) = E^{-1}_Λ(t)W. But then, the alias operator in Eq. (15) must be equal to E_Λ(t)E^{-1}_{Λ′\Λ}(t) = W ≠ 0, which proves the desired result. When oversampling is employed, E_Λ(t) in Eq. (6) will represent a left inverse of E^{-1}_Λ(t). Contrary to the standard inverse of a square matrix, however, the left inverse is not uniquely defined. In cases when Λ′ is finite, it can therefore be tailored such that aliasing is eliminated. In order to prove this, one can start by noting that such a tailored inverse, here denoted E_{Λ′,Λ}(t), must satisfy the following two conditions

    E_{Λ′,Λ}(t) E^{-1}_Λ(t) = I, (20)
    E_{Λ′,Λ}(t) E^{-1}_{Λ′\Λ}(t) = 0. (21)

The above conditions can be combined as follows

    E_{Λ′,Λ}(t) [E^{-1}_Λ(t)  E^{-1}_{Λ′\Λ}(t)] = [I  0]. (22)

Note that the matrix within the brackets on the left hand side of the above equation corresponds to E^{-1}_{Λ′}(t). From before, it is known that there exists a time sampling with M = #Λ′ time samples such that this matrix is invertible. From this, it follows that the above relation may be rewritten as

    E_{Λ′,Λ}(t) = π_{Λ′,Λ} E_{Λ′}(t). (23)

Here, π_{Λ′,Λ} is the projection onto the harmonics in Λ.
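The two defining conditions above can be checked directly. The sketch below (our construction, with Λ′ generated by a quadratic nonlinearity and a jittered uniform sampling) builds the tailored left inverse by inverting the square matrix over Λ′ and keeping the rows belonging to Λ, which is exactly the projection π_{Λ′,Λ}:

```python
import numpy as np

def Einv(freqs, t):
    return np.exp(1j * np.outer(t, freqs))

rng = np.random.default_rng(4)
w = 1.0
res = np.array([-w, 0.0, w])                       # Lambda
unres = np.array([-2*w, 2*w])                      # Lambda' \ Lambda (p = 2)
full = np.concatenate([unres[:1], res, unres[1:]]) # Lambda' = {-2w, ..., 2w}
M = len(full)
T = 2 * np.pi / w
t = (np.arange(M) + 0.3 * rng.uniform(-1, 1, M)) * T / M

# Invert the square matrix over Lambda' and keep the rows belonging to Lambda:
# this realizes the tailored left inverse pi_{Lambda',Lambda} E_{Lambda'}(t).
E_full = np.linalg.inv(Einv(full, t))
rows = [i for i, f in enumerate(full) if any(np.isclose(f, res))]
E_tailored = E_full[rows, :]

I_check = E_tailored @ Einv(res, t)      # first condition: should be identity
Z_check = E_tailored @ Einv(unres, t)    # second condition: should be zero
```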
The argument above shows that a left inverse that eliminates aliasing can always be defined if Λ′ is finite. Unfortunately, it is more computationally expensive to employ this left inverse, since it requires that f(q, t) is evaluated at M > 2K + 1 time instances. This motivates the use of a left inverse which requires fewer than #Λ′ time instances, but still has the potential to reduce aliasing compared to not using oversampling at all. In order to define such a left inverse, introduce a new set of frequencies Λ″ that satisfies

    Λ ⊆ Λ″ ⊆ Λ′. (24)

There exists a time sampling with M = #Λ″ time samples such that E^{-1}_{Λ″}(t) is invertible. Therefore, the following left inverse may be defined

    E_{Λ″,Λ}(t) = π_{Λ″,Λ} E_{Λ″}(t). (25)

The idea behind this left inverse is to eliminate aliasing with respect to the frequencies in Λ″\Λ. This left inverse would also completely eliminate aliasing if a time sampling can be found such that the unresolved Fourier coefficients only contribute to the amplitude of the Fourier coefficients whose frequencies are in Λ″\Λ.

An alternative choice for the left inverse which has been suggested in the literature is the Moore-Penrose inverse of E^{-1}_Λ(t) [5,6,21]. This left inverse represents a least squares projection onto the subspace spanned by the columns in E^{-1}_Λ(t). From this, it follows that the Moore-Penrose inverse can only eliminate aliasing if the columns in E^{-1}_{Λ′\Λ}(t) are orthogonal to those in E^{-1}_Λ(t) (with respect to the standard inner product on C^M). In cases when the frequencies in Λ are commensurable, and f(q, t) is a polynomial, it is possible to ensure this by selecting a uniform time sampling that satisfies Orszag's rule [21]

    M ≥ (p + 1) ω_max/ω_base + 1. (26)

Here, ω_max is the largest frequency in Λ and ω_base is the greatest common divisor of all frequencies in Λ.
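The difference between the tailored left inverse and the Moore-Penrose inverse can be seen in a small experiment. The sketch below (our code; p = 2, one fundamental frequency, and a jittered non-uniform sampling for which the Fourier columns are not orthogonal) applies both transforms to q² = 1/2 + (1/2)cos(2ωt):

```python
import numpy as np

def Einv(freqs, t):
    return np.exp(1j * np.outer(t, freqs))

rng = np.random.default_rng(1)
w = 1.0
res = np.array([-w, 0.0, w])                       # Lambda
full = np.array([-2*w, -w, 0.0, w, 2*w])           # Lambda' for p = 2
M = len(full)
T = 2 * np.pi / w
t = (np.arange(M) + 0.2 * rng.uniform(-1, 1, M)) * T / M   # non-uniform

# Tailored left inverse: rows of inv(E^-1 over Lambda') belonging to Lambda
E_full = np.linalg.inv(Einv(full, t))
rows = [i for i, f in enumerate(full) if any(np.isclose(f, res))]
E_tailored = E_full[rows, :]

E_pinv = np.linalg.pinv(Einv(res, t))              # Moore-Penrose alternative

q = np.cos(w * t)
c_tailored = E_tailored @ q**2                     # exact coefficients [0, 1/2, 0]
c_pinv = E_pinv @ q**2                             # generally still aliased
```

Because the sampling is non-uniform, the unresolved columns are not orthogonal to the resolved ones, so the least-squares projection leaves a residual alias while the tailored inverse removes it by construction.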
It might also be possible to construct a (possibly non-uniform) time sampling such that the Moore-Penrose inverse eliminates aliasing in the general case, but no such algorithm is known to the authors. As shown in "Appendix A", it is however possible to redefine the inner product on C^M such that the columns in E^{-1}_{Λ′\Λ}(t) become orthogonal to those in E^{-1}_Λ(t). There, it is also shown that the Moore-Penrose inverse defined with respect to this new inner product is equivalent to the left inverse defined by Eq. (23).

Aliasing for M = #Λ

This section considers the case when the number of sampling points corresponds to the theoretical minimum of M = #Λ. To begin with, it is demonstrated how κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ relate in this case. This is done both for the case when Λ contains a single fundamental frequency and for the case when it contains multiple fundamental frequencies. After this, it is investigated how κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ influence the alias error obtained when the Duffing oscillator is simulated with the harmonic balance method. This is done for both the single and multiple frequency case as well.

Relation Between κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂

In Sect. 2.2.2 it was shown that ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ is bounded by κ(E^{-1}_Λ) through Eq. (19). This fact can be illustrated by a diagonal line in a κ(E^{-1}_Λ)-‖E_Λ E^{-1}_{Λ′\Λ}‖₂ plane, if the slope of this line is taken to be the maximum value of α(Λ, Λ′, t). In the same plane, the minimal values κ(E^{-1}_Λ) ≥ 1 and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ ≥ 0 can also be represented by a vertical and a horizontal line respectively. The region between these three lines will then contain all possible values of κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ that a time sampling can give rise to. In Fig. 1a, this region is illustrated for the case when Λ contains a single fundamental frequency and Λ′ is generated by a second degree polynomial (p = 2). The case when Λ′ is generated by a third degree polynomial (p = 3) is further depicted in Fig.
1b. The limit values of κ and α(Λ, Λ′, t) shown in these figures have all been computed with the OPT algorithm [13], as described in "Appendix B". A large set of random time samplings was also generated to investigate how κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ populate the region between the three lines in Fig. 1. These results also indicate that the maximal norm of the alias operator is proportional to κ(E^{-1}_Λ), as would be expected from Eq. (19). For the single frequency case shown in Fig. 1, it did in fact turn out that both the minimum value of κ(E^{-1}_Λ) and the minimum value of ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ were obtained with M = #Λ uniformly distributed time samples between 0 and T. The results in Fig. 1 thus point to the fact that the standard DFT is optimal in terms of aliasing for the single frequency problem. Several other tests with different numbers of harmonics in Λ and different sets Λ′ have confirmed this statement.

The relation between κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ for the case when Λ contains two fundamental frequencies and Λ′ is generated by a second degree polynomial (p = 2) is depicted in Fig. 2a. The case when Λ′ is generated by a third degree polynomial (p = 3) is further depicted in Fig. 2b. The results shown in Fig. 2 very much resemble the single frequency case in Fig. 1. A couple of differences can however be noted. To begin with, the minimal value of ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ can be seen to increase when going from the single frequency to the multiple frequency case. For both cases, it can also be seen that the minimal value of ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ increases when the nonlinearity of the problem increases. Both these trends can be explained by the fact that the number of unresolved frequencies increases when the frequencies in Λ are no longer harmonically related and/or when the nonlinearity of the problem increases. In addition, it can be seen in Fig.
2 that the time sampling which minimizes κ(E^{-1}_Λ) (blue square marker) and the one that minimizes ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ (yellow circle marker) are not the same in the multiple frequency case. The difference between these two samplings can however be seen to be very small. One possible explanation of this difference could be that the OPT algorithm has not converged. Alternatively, there may be a trade-off between κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ in the multiple frequency case. The trend of the randomly generated samplings shown in Fig. 2 does however suggest that this trade-off is very small. Overall, Figs. 1 and 2 suggest that κ(E^{-1}_Λ) is a reasonable metric for selecting a time sampling when M = #Λ time samples are used, in both the single and the multiple frequency case. These figures also show that it is important that the optimization algorithm used to minimize κ(E^{-1}_Λ) is able to come close to the global optimum, since the time sampling will otherwise not minimize the norm of the alias operator.

Duffing Oscillator

The Duffing oscillator is a model for a damped and driven oscillator with a nonlinear stiffening spring. The equation governing the displacement of the weight in the presence of two driving forces may be written as

    ẍ + 2ζẋ + x + x³ = F₁ cos(ω₁t) + F₂ cos(ω₂t). (27)

Here, ζ and F_i respectively denote the damping coefficient and the amplitude of the ith driving force. Furthermore, a dot represents differentiation with respect to time. Equation (27) may be rewritten as a first order ordinary differential equation by introducing the velocity of the weight, y = ẋ, as an additional degree of freedom. This yields the following system of equations

    dq/dt = f(q, t), q = [x, y]^T, (28)

where

    f(q, t) = [y, −2ζy − x − x³ + F₁ cos(ω₁t) + F₂ cos(ω₂t)]^T. (29)

Equation (28) is discretized in time using the frequency-domain harmonic balance method presented in Eq.
(6). The resulting nonlinear system of equations has been implemented in the Python programming language and solved using the fsolve routine available in SciPy [20]. The exact Jacobian is also provided to fsolve to avoid problems associated with finite-difference approximations.

The implementation of the frequency-domain harmonic balance solver for the Duffing oscillator has been validated against the publicly available MATLAB tool NLvib [24]. To begin with, only a single driving force is considered (F₂ = 0). The presence of a single driving force admits the standard DFT to be used in the harmonic balance method. Figure 3a shows a comparison between the Python solver and NLvib for ζ = 0.1, F₁ = 1.25 and K = 5. Both solutions were obtained by starting with a driving frequency close to 0 and then gradually increasing the frequency. Each new frequency computation is then started from the previous solution. Figure 3a shows that the Python solver and NLvib yield identical results for this case. Note, however, that the hysteresis branch could only be computed with NLvib, since no arc length continuation method was implemented in the Python solver. It should also be noted that both solutions were computed with M = 21 time instances to ensure that Orszag's rule (Eq. (26)) was satisfied for the cubic nonlinearity present in the governing equations.

Figure 3b shows results obtained with the Python solver for a multiple frequency problem. In this case, two incommensurable driving frequencies with a ratio of ω₂/ω₁ = √2 are used. The damping factor and the amplitudes of the driving forces are further set to ζ = 0.1 and F₁ = F₂ = 0.1 respectively. No higher harmonics of the driving frequencies are considered in the computation. The presence of a cubic nonlinearity in Eq. (29) will however generate a large number of additional frequencies. To avoid aliasing, the DFT defined by Eq.
(23) is used. This requires the use of M = #Λ′ = 25 sampling points, which were selected to minimize the condition number of the DFT matrix. Unfortunately, NLvib does not support the APFT. A direct comparison between the two solvers was therefore not possible in the multiple frequency case. Instead, two single-frequency NLvib computations were run, one at each of the two driving frequencies, with M = 5 time samples to obtain alias-free solutions. These results are provided for reference in Fig. 3b. This figure clearly shows that the APFT solution and the reference single-frequency solution deviate. To ensure that this deviation is due only to the nonlinear coupling resolved by the APFT, the Python solver was rerun with F₂ = 0 and all other settings intact. The results from this computation can be seen to agree perfectly with the corresponding NLvib solution.

As shown previously, both ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ and κ(E^{-1}_Λ) put a bound on the amount of aliasing produced by a DFT. These metrics will now be compared to the actual amount of aliasing obtained with the aforementioned frequency-domain harmonic balance solver for the Duffing oscillator. The comparison has been performed for the two sets of frequencies that were previously used for the validation. For each of these sets, 90 random time samplings were generated by drawing M = #Λ samples from a uniform distribution and then checking the condition number of the corresponding DFT matrix. If the condition number was below 10, the sampling was saved. Otherwise, another sampling was generated, until 90 samplings had been obtained in total.
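A minimal single-frequency version of such a harmonic balance solver for the Duffing oscillator can be sketched as follows. This is our own reconstruction, not the authors' code: it uses a real trigonometric basis instead of complex exponentials, illustrative parameter values (ζ = 0.1, F = 1.25, ω = 0.8), and SciPy's default finite-difference Jacobian rather than the exact one mentioned above:

```python
import numpy as np
from scipy.optimize import fsolve

# Duffing oscillator: x'' + 2*zeta*x' + x + x**3 = F*cos(w*t)
zeta, F, w, K = 0.1, 1.25, 0.8, 5
M = 4 * K + 1                            # Orszag's rule for a cubic nonlinearity
T = 2 * np.pi / w
t = np.arange(M) * T / M
k = np.arange(1, K + 1)

# Real trigonometric basis and its time derivatives, sampled at t
C, S = np.cos(np.outer(t, k * w)), np.sin(np.outer(t, k * w))
B   = np.hstack([np.ones((M, 1)),  C,               S])
Bd  = np.hstack([np.zeros((M, 1)), -S * (k * w),    C * (k * w)])
Bdd = np.hstack([np.zeros((M, 1)), -C * (k * w)**2, -S * (k * w)**2])
P = np.linalg.pinv(B)                    # oversampled, alias-free projection

def residual(c):
    # Project the time-domain residual back onto the resolved trig basis
    x, v, a = B @ c, Bd @ c, Bdd @ c
    return P @ (a + 2 * zeta * v + x + x**3 - F * np.cos(w * t))

c = fsolve(residual, np.zeros(2 * K + 1))
x = B @ c                                # converged periodic displacement
```

fsolve returns the 2K + 1 trigonometric coefficients; because M = 4K + 1 satisfies Orszag's rule for a cubic nonlinearity, the projection P removes the aliased content of x³.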
For both the single and multiple frequency cases, the corresponding alias-free discrete Fourier transforms that were used for the validation were first used to compute a set of reference solutions, using the same frequency stepping procedure as described previously. These reference solutions were then used as initial conditions for the simulations that employed the random samplings. For each frequency, several solutions in which the random samplings were shifted in time were computed. These time shifts were employed to account for the fact that the phase of the solution can affect the amount of aliasing obtained. Once all frequency and time shift combinations had been computed for a given random time sampling, the following numerical alias error was calculated

    ε_alias = ‖x̂_Λ^{APFT} − x̂_Λ^{OS}‖₂ / ‖x̂_Λ^{OS}‖₂. (30)

Here, APFT and OS respectively denote the random sampling and the alias-free oversampled solution. The vector x̂_Λ in this equation further contains all Fourier coefficients of the displacement.

[Fig. 4: Numerical alias error for the Duffing oscillator plotted against (a) κ(E^{-1}_Λ) and (b) ‖E_Λ E^{-1}_{Λ′\Λ}‖₂.]

The numerical alias error is plotted against κ(E^{-1}_Λ) with circular markers for the single frequency case in Fig. 4a. This figure shows that κ(E^{-1}_Λ) puts a bound on the amount of aliasing obtained in a harmonic balance simulation, which is consistent with the results presented in Fig. 1. The red diamond marker and the blue square marker in Fig. 4a further denote the alias-free reference solution and a solution obtained with the standard DFT. It is interesting to note that the standard DFT gives the lowest numerical alias error among all samplings with M = #Λ, which is consistent with the results in Fig. 1. From Fig.
4a it can also be noted that κ(E^{-1}_Λ) = 1 for both the oversampled reference solution and the standard DFT based on M = #Λ time samples. This result points to the fact that κ(E^{-1}_Λ) cannot distinguish an alias-free solution from an aliased one. This is on the other hand possible when the numerical alias error is plotted against ‖E_Λ E^{-1}_{Λ′\Λ}‖₂, as shown in Fig. 4b. This figure also shows that ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ puts a bound on the numerical alias error, which is consistent with Eq. (16). In Fig. 5a and b, the numerical alias errors obtained for the multiple frequency case have been plotted against κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ respectively. These figures very much resemble their single frequency counterparts in the sense that both κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ can be seen to bound the numerical alias error. The square and triangle markers in Fig. 5 correspond to two samplings which give the lowest value of κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ respectively. These two results can be seen to be very close to each other. This once again indicates that the trade-off between κ(E^{-1}_Λ) and ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ is very small in the multiple frequency case, and thus that it is sufficient to optimize for κ(E^{-1}_Λ) in order to obtain a DFT that minimizes aliasing. Note however that this statement only holds true when M = #Λ sampling points are employed, since the minimal value of κ(E^{-1}_Λ) can be the same with or without oversampling.
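The observation that κ(E^{-1}_Λ) cannot distinguish an alias-free DFT from an aliased one is easy to reproduce. In the sketch below (our code; single fundamental frequency, quadratic nonlinearity), both the minimal uniform sampling and an oversampled one give κ = 1, while only the oversampled transform has a vanishing alias-operator norm:

```python
import numpy as np

def Einv(freqs, t):
    return np.exp(1j * np.outer(t, freqs))

w = 1.0
res = np.array([-w, 0.0, w])            # Lambda
unres = np.array([-2*w, 2*w])           # frequencies generated by p = 2
T = 2 * np.pi / w

def metrics(M):
    t = np.arange(M) * T / M            # uniform sampling with M points
    A = Einv(res, t)
    kappa = np.linalg.cond(A)           # condition number of the DFT matrix
    E = np.linalg.pinv(A)
    alias = np.linalg.norm(E @ Einv(unres, t), 2)   # alias-operator norm
    return kappa, alias

k_min, a_min = metrics(3)    # minimal sampling: kappa = 1, but aliased
k_over, a_over = metrics(7)  # oversampled: kappa = 1 and alias-free
```

Both samplings have unit condition number, yet only the alias-operator norm separates the aliased transform from the alias-free one, mirroring the behaviour of Figs. 4a and 4b.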
Aliasing for M > #Λ

Two strategies for employing oversampling with the APFT were presented in Sect. 2.3. In the first strategy, E_Λ(t) is constructed by explicitly accounting for some of the unresolved frequencies (Eq. (25)), and in the second strategy, E_Λ(t) is computed as the Moore-Penrose inverse of E^{-1}_Λ(t). In this section, it will be investigated whether the additional knowledge about Λ′ built into the first definition of E_Λ(t) is beneficial from an aliasing perspective. This will be done by investigating how the alias error obtained when the Duffing oscillator is simulated with the harmonic balance method varies with M, for both a single and a multiple frequency case. In relation to this, it is also shown how the numerical alias error relates to the norm of the alias operator and the condition number of the DFT matrix, by plotting their variation with M as well.

Duffing Oscillator

The first case that has been used to investigate the impact of oversampling with the APFT is the Duffing oscillator with a single driving force. The alias-free reference solution for this case was computed with a uniform sampling that satisfies Orszag's rule (Eq. (26)) for cubic nonlinearities. The two definitions of E_Λ(t) presented in Sect. 2.3 were evaluated using two different sets of time samples. The first set consists of M uniformly distributed points between 0 and the reciprocal of the lowest frequency. The second set consists of M non-uniformly distributed points, created by perturbing each point in the first sampling set by a random value drawn from the uniform distribution U(−0.15T/M, 0.15T/M). When E_Λ(t) was computed based on Eq. (25), Λ″ was selected to contain the M lowest frequencies in Λ′.

Once all samplings had been generated for all values of M, the numerical alias error was calculated according to the procedure outlined in Sect. 3.1.2. The results from this calculation are presented in Fig.
6a. This figure shows that the two definitions of E_Λ(t) give equivalent results for the uniform sampling. This is because both these discrete Fourier transforms are equivalent to the standard DFT in this case. This fact also explains why both definitions of E_Λ(t) eliminate aliasing once Orszag's rule is satisfied. From Fig. 6a it can also be seen that aliasing is not eliminated for any value of M when a non-uniform sampling is used and E_Λ(t) is selected to be the Moore-Penrose inverse of E^{-1}_Λ(t). This is on the other hand possible when all the information about Λ′ is built into the definition of E_Λ(t) for the non-uniform sampling.

The variation of ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ with M for the single frequency case is further presented in Fig. 6b. This figure shows that the norm of the alias operator follows the same trend as the numerical alias error when the number of sampling points increases. In particular, it can be seen that both ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ and the numerical alias error become zero for the same number of sampling points. This highlights the fact that ‖E_Λ E^{-1}_{Λ′\Λ}‖₂ is able to identify an alias-free sampling. As noted previously, this is on the other hand not possible if only the condition number is considered. To demonstrate this, κ(E^{-1}_Λ) is plotted against M in Fig. 6c. In general, Fig. 6 indicates that both the definition of E_Λ(t) and the choice of time sampling can have a substantial impact on the amount of aliasing obtained in a harmonic balance simulation. This raises the question of which combination is best. Based only on the results shown in Fig. 6, it appears that it is beneficial to define E_Λ(t) based on Eq. (25). It also appears that the uniform sampling, which gives the lowest value of κ(E^{-1}_Λ), is a better choice than a non-uniform sampling. This indicates that the best choice in terms of aliasing is to construct E_Λ(t) based on Eq.
(25) and then select a time sampling that minimizes κ(E_Λ^{-1}). Here, care must however be taken in the definition of the condition number. In this work, it is suggested to compute the condition number based on the square matrix E_Λ^{-1}(t), and not based on Eq. (17), in which some columns/rows of E_Λ^{-1}(t)/E_Λ(t) are neglected. The first reason for this is that, in practice, E_Λ(t) is computed from a numerical inverse of E_Λ^{-1}(t) (see Eq. (25)). For this preliminary step to be well posed, it is thus necessary that κ(E_Λ^{-1}) is small. The second reason is that a minimal value of the condition number has been seen to make the APFT robust against aliasing in the non-oversampled cases, independently of the choice of Λ. In the suggested approach, E_Λ(t) is thus computed by first constructing a square matrix E_{Λ′}(t) that gives as little aliasing as possible, and then explicitly eliminating aliasing with respect to some of the unresolved Fourier coefficients, i.e., those that correspond to frequencies in Λ′\Λ.

In order to investigate the usefulness of the suggested approach for computing E_Λ(t), the Duffing oscillator with two incommensurable driving frequencies was once again considered. The alias-free reference solution for this case was computed using the DFT defined in Eq. (23). The time sampling for this case was selected by minimizing κ(E_Λ^{-1}) using the OPT algorithm [13]. For each M, two different discrete Fourier transforms were then constructed. The first one was calculated as the Moore-Penrose inverse of E_Λ^{-1}(t), using a time sampling that minimized κ(E_Λ^{-1}), and the second one using Eq. (25) and a time sampling that minimized κ(E_Λ^{-1}). These minima were also obtained with the OPT algorithm. The variation of the numerical alias error with M for the multiple frequency case is presented in Fig.
7a. From this figure, it can be seen that for this case the suggested approach effectively eliminates aliasing once M ≥ 17. Calculating E_Λ(t) as the Moore-Penrose inverse of E_Λ^{-1}(t), on the other hand, does not eliminate aliasing for any M. These results once again indicate that the definition of E_Λ(t) can have a significant effect on aliasing. The fact that aliasing is effectively eliminated even though not all the unresolved frequencies are accounted for in the construction of E_Λ(t) is also interesting, since it suggests that there could exist a generalization of Orszag's rule for multiple frequency problems. That is, it is not necessary to construct E_Λ(t) in Eq. (25) such that all the unresolved sinusoids can be distinguished, only such that the unresolved ones alias onto those with frequencies in Λ′\Λ.

The variation of ‖E_Λ E^{-1}_{Λ∗\Λ}‖₂ with M for the multiple frequency case is shown in Fig. 7b. This figure shows that the trend of ‖E_Λ E^{-1}_{Λ∗\Λ}‖₂ once again follows that of the numerical alias error as the number of sampling points increases.

A comment should also be made on the point corresponding to M = 15 in Fig. 7. At this point, it can be seen that the numerical alias error is the largest for the case when E_Λ(t) is computed based on Eq. (25). At first, this might seem counterintuitive, since more information about the unresolved sinusoids is included in the DFT compared with, e.g., M = 5. A possible explanation for this can be found in Fig.
7c. This figure shows that the largest value of κ(E_Λ^{-1}) is obtained for M = 15. As such, it can be expected that E_Λ(t) is the least robust against aliasing for M = 15. Thus, if not all the unresolved frequencies alias onto the frequencies in Λ′\Λ, which cannot be guaranteed, this sampling may very well generate more aliasing than another one that uses fewer sampling points and has a lower condition number. It is, however, not known at this point whether the relatively large value of κ(E_Λ^{-1}) for M = 15 is due to the OPT algorithm failing to find the global optimum.

Conclusions

It is well known that aliasing may occur when the harmonic balance method [16, 33, 34] is applied to nonlinear problems. In the present paper, aliasing is studied in detail for the case when the harmonic balance method is used to solve first order ordinary differential equations that are nonlinear functions of the solution variables. It is shown that aliasing occurs because the harmonic balance method approximates the Fourier coefficients of a nonlinear function by a discrete Fourier transform (DFT). Although this is already well known, the aliasing of each unresolved frequency onto the resolved ones is here given a precise meaning by introducing a new operator that governs aliasing. This operator, referred to as the alias operator, is derived for the general case when the almost periodic Fourier transform (APFT) [26] is used in the harmonic balance method. As such, the alias operator can be used to study aliasing in both single frequency and multiple frequency harmonic balance computations. Using the norm of the newly introduced alias operator as a metric, the best sampling strategy for single and multiple frequency harmonic balance computations is investigated. It is found that for single frequency problems, the widely adopted uniform sampling approach is optimal. This sampling is also known to minimize the condition number of the DFT matrix
[26]. For the general case with multiple fundamental frequencies, the time sampling that minimizes the condition number of the DFT matrix seems to be close, but not identical, to the time sampling that minimizes the norm of the alias operator. A low condition number and a low norm of the alias operator are also shown to give a smaller alias error in numerical simulations of the Duffing oscillator. The results presented in this paper therefore demonstrate that the condition number of the DFT matrix is a reasonable metric for selecting a time sampling, as has previously been done by, e.g., [13, 21, 25-27, 35, 37].

Another important finding from the present work is that the alias error is only bounded by, but not strongly correlated with, the condition number. This in turn implies that the time sampling must give a condition number close to the global optimum if the alias error is to be minimized. Therefore, in cases when an optimization algorithm is used to find the time sampling, it is important that it converges well. In the present work, it is found that the OPT algorithm introduced in [13] performs very well.

Employing the APFT in combination with oversampling is also considered in this work. In this case, the DFT matrix is defined as a left inverse of the inverse DFT matrix. Previous studies have used the Moore-Penrose inverse for this purpose [5, 6, 21]. In the present work, however, it is shown that this left inverse in general cannot eliminate aliasing in the multiple frequency case. Based on this finding, a new left inverse that can eliminate aliasing completely is introduced. This new left inverse is also shown to perform better in general than the Moore-Penrose inverse on the Duffing oscillator. Based on these observations, it is suggested that the newly introduced left inverse be used in favor of the Moore-Penrose inverse when oversampling is employed in multiple frequency harmonic balance computations based on the APFT.
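To make the distinction between the two left inverses concrete, the construction can be sketched in a few lines of NumPy. This is a minimal sketch under illustrative assumptions (a complex-exponential DFT convention, small hand-picked frequency sets, and a fixed non-uniform sampling), not the paper's exact formulation:

```python
import numpy as np

def inv_dft(freqs, t):
    """Inverse APFT matrix with entries exp(i*w_j*t_m) (illustrative convention)."""
    return np.exp(1j * np.outer(t, freqs))

M, N = 7, 4                                   # M time samples, N resolved frequencies
lam = np.arange(N, dtype=float)               # resolved set Lambda (illustrative)
extra = np.arange(N, M, dtype=float)          # M - N unresolved frequencies added to Lambda
# A fixed, mildly non-uniform sampling over one period
t = np.arange(M) * 2 * np.pi / M + 0.1 * np.sin(np.arange(M))

E_inv = inv_dft(lam, t)                       # E_Lambda^{-1}(t), shape M x N

# Left inverse 1: Moore-Penrose pseudoinverse of E_Lambda^{-1}(t)
E_mp = np.linalg.pinv(E_inv)

# Left inverse 2 (Eq. (25)-style sketch): invert the square matrix built from
# Lambda plus the extra frequencies, then keep only the rows belonging to Lambda
E_sq = np.linalg.inv(inv_dft(np.concatenate([lam, extra]), t))
E_ex = E_sq[:N, :]

# Both are valid left inverses of E_Lambda^{-1}(t) ...
print(np.allclose(E_mp @ E_inv, np.eye(N)), np.allclose(E_ex @ E_inv, np.eye(N)))
# ... but only the second one annihilates the extra unresolved sinusoids
print(np.linalg.norm(E_mp @ inv_dft(extra, t)), np.linalg.norm(E_ex @ inv_dft(extra, t)))
```

For a non-uniform sampling, the Moore-Penrose inverse generally leaves a nonzero residual on the extra sinusoids, whereas the Eq. (25)-style inverse maps them to zero by construction.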
The above findings for the case of oversampling apply both to the abstract metric defined by the norm of the alias operator and to the measured alias error in the numerical experiments. In contrast, the condition number could not be used as a criterion to distinguish samplings that were optimal with regard to aliasing. It is therefore concluded that the norm of the alias operator contains valuable information about the quality of a given sampling strategy.

Finally, it should be noted that both the alias error and the harmonic truncation error encountered in the harmonic balance method can be reduced by increasing the number of frequencies in the computation. In many practical applications this is unfortunately not feasible, and good control over the alias error is thus important to achieve accurate results. In cases when more frequencies can be afforded, however, one must be mindful of the computational complexity of the APFT, which scales as O(M²). To overcome this limitation, the Non-Uniform Fast Fourier Transform (NUFFT) of type three could be used instead [4, 30]. The theory developed in this paper would, however, need to be adapted for the NUFFT.
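As a closing illustration, the two sampling-quality metrics discussed above, the condition number of the (square, non-oversampled) inverse DFT matrix and the norm of the alias operator, can be evaluated side by side for a uniform and a perturbed sampling. The harmonic set, period, DFT convention, and the deterministic perturbation (standing in for the random U(−0.15T/M, 0.15T/M) one used in the experiments) are all illustrative assumptions:

```python
import numpy as np

def inv_dft(freqs, t):
    """Inverse DFT matrix with entries exp(i*w_j*t_m) (illustrative convention)."""
    return np.exp(1j * np.outer(t, freqs))

M = 8
T = 2 * np.pi                                   # period of the fundamental (illustrative)
resolved = np.arange(M, dtype=float)            # Lambda: harmonics 0..M-1
unresolved = np.arange(M, 3 * M, dtype=float)   # some frequencies outside Lambda

t_uni = np.arange(M) * T / M                    # uniform sampling over one period
# Deterministic perturbation within +/- 0.15 T/M of each uniform point
t_non = t_uni + 0.15 * (T / M) * np.sin(1 + np.arange(M))

results = {}
for name, t in [("uniform", t_uni), ("non-uniform", t_non)]:
    E_inv = inv_dft(resolved, t)
    E = np.linalg.inv(E_inv)                    # square DFT matrix E_Lambda(t)
    alias_norm = np.linalg.norm(E @ inv_dft(unresolved, t), 2)  # alias operator norm
    results[name] = (np.linalg.cond(E_inv), alias_norm)
    print(name, results[name])
```

In this sketch the uniform sampling attains the minimal condition number, while the alias operator norm quantifies how strongly the unresolved sinusoids fold onto the resolved set for each sampling.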
11,877.6
2022-04-11T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
Computer-assisted learning as an alternative to didactic lectures: a study of teaching the physics of diagnostic imaging

A computer-assisted learning (CAL) package entitled Physics of Diagnostic Imaging was developed in 1995 to replace five hours of didactic lectures at the University of Glasgow Faculty of Veterinary Medicine, and has been available as an additional learning resource for students in the other five UK veterinary schools for over three years. The package was reviewed by peer experts, and the reaction of the students to its use was gauged by post-task questionnaire administration, informal discussions and observation. To assess the effect of integration into the curriculum, analyses of fourth-year degree examination results over a six-year period were carried out. Analyses of students' examination results for pre- and post-CAL delivery of the diagnostic imaging course showed that performance in the CAL-based course was significantly higher than in other subjects. This confirmed that the courseware can be used to replace didactic lectures as part of a rich learning environment supported by other resources. Initial student resistance to lecture replacement with CAL occurred, but has lessened as the use of the package has become established in the curriculum.
Introduction

In recent years there has been an expansion in the number of undergraduate students recruited to the veterinary courses in the UK veterinary schools. This growth in student numbers has not been matched by an increase in academic staff devoted to teaching. In addition, higher education in the 1990s suffered an approximately 20 per cent reduction in government funding once the effects of inflation and student numbers are taken into account. At the same time there has been a demand for the investigation of teaching quality, leading to the Teaching Quality Assurance process (Ellis, 1993). Thus a number of converging factors have stimulated the need to look for alternative and innovative methods of teaching in veterinary undergraduate education.

It is essential for veterinary students to understand the physical processes involved in diagnostic imaging; for example, the importance of radiation safety, and the recognition and elimination of artefacts which might mar clinical interpretation of radiographs. Unfortunately, the physics of diagnostic imaging was a relatively unpopular part of the fourth-year course because the comprehension and retention of facts delivered during presenter-paced, didactic lectures were hampered by the disparate pre-existing knowledge of physics among veterinary students.

The Computer-assisted Learning in Veterinary Education (CLIVE) Phase 2 Teaching and Learning Technology Programme (TLTP) project has enabled the authors to change the way in which diagnostic imaging is taught at the University of Glasgow (see http://www.clive.ed.ac.uk/ for more information). The diagnostic imaging course originally comprised five lectures on the physics of diagnostic imaging complemented by lectures on radiological interpretation and film-reading practicals. The lectures on the physics of diagnostic imaging have been replaced by the use of the Physics of Diagnostic Imaging CAL package (Sullivan, Dale and May, 1998).
The pedagogical reason for replacing a component of the course with CAL was to introduce a student-centred, self-paced, independent learning resource as a potentially more enjoyable alternative to didactic lectures. The availability of the finished package provided an opportunity to assess whether students would learn as effectively from CAL material as they apparently do from lectures, and enabled the authors to record changing student attitudes towards computers in teaching over a three-year period. Although much of the course is theory-laden, ultimately it is in the clinical situation that veterinary students will apply their knowledge of the physics of diagnostic imaging, so many real-life examples are used in the program.

The package was supported by image-based question and answer problems delivered using the CLIVE QA template developed at the University of Edinburgh (Figure 1: The QA module allowing students to test their knowledge of radiological physics). Since its release in 1993, self-assessment materials created using this template have been used frequently by students at all the UK veterinary schools, and its role as an effective revision tool throughout the five-year veterinary degree has been documented (Holmes and Nicholls, 1996).

The relevance of this study is highlighted by the anticipated need for veterinary students to develop the self-paced, independent learning skills they will need to avail themselves of the anticipated explosion in the provision of veterinary continuing professional development (Royal College of Veterinary Surgeons, 1996, 1997).
Materials and methods

The Physics of Diagnostic Imaging package consists of fifteen modules designed and written using Authorware (Macromedia), deliverable over a NetWare 3/NetWare 4 network (Novell) and able to run under MS Windows 3.1/95/98/NT. The package is a multimedia CAL program which incorporates problem-solving activities and questions (Figures 2 and 3) to test student understanding of the concepts introduced throughout the package. Individual units in modules were designed to take students 15-30 minutes, so that the entire package could take 10 hours, with students given 10 weeks to complete the package before the degree examination in week 14 of the semester. The components of the package are shown in Table 1. The package was intended for use by fourth-year undergraduates.

The revised CAL-based physics of diagnostic imaging course includes the following elements (five lectures having been replaced by courseware):
• timetabled use of courseware (10 hours allocated), plus free access;
• student notes;
• standard textbooks in library;
• tutorials;
• film-reading tutorial;
• clinical experience.

Figure 2: A screenshot of the 'Production of X-rays' module (generation of high-speed electrons), illustrating interaction in the Physics of Diagnostic Imaging package.
All the students in each year followed the revised course. The first cohort to use the package was the 1995-6 fourth-year intake, followed by fourth-year students in the 1996-7 and 1997-8 sessions. In comparable studies, control groups have been set up where one group of students continues to attend traditional classes and their performance is compared with that of the group using the new technology (e.g. Rogers, Regehr, Yeh and Howdieshell, 1998). However, the validity of such control groups has been questioned because of the number and complexity of factors involved in learning (Gunn, 1996), and control groups were not used in this study.

Evaluation

Before development, the storyboard for the Physics of Diagnostic Imaging was approved in all six UK veterinary schools by subject specialists and members of the CLIVE consortium, to ensure compatibility with courses in all the schools.

Designer evaluation

Evaluation of the navigation system and the interface design was undertaken by the Design Manager for the CLIVE consortium, and other designers, who tested the program during its development. The designers expressed a common view that the navigation system was simple and easy to follow, and that the interface was clear and aesthetically pleasing.

Peer evaluation

Five members of the Royal College of Veterinary Surgeons holding the Diploma in Veterinary Radiology, with experience of teaching diagnostic imaging physics, agreed to review the CAL package as delivered to the students. There were no major criticisms of the package. Minor points related to spelling mistakes or minor factual inaccuracies. The consensus was positive: the material was well presented and easy to follow.
External evaluation

An external evaluation group, commissioned by the CLIVE consortium, used the courseware and provided a report of their findings. The group consisted of two educational technologists with experience in CAL design and evaluation. (Table 1: Components of the Physics of Diagnostic Imaging courseware.) The evaluation team noted that the large corpus of information was presented in an engaging and clear way, using a variety of interactions to engage the student rather than allowing the user to be an entirely passive individual. Like the designers, they stated that the graphic design was extremely clear and easy to view. They did have queries about the appropriateness of some interactions, and suggested putting some pop-up information in separate screens. Other comments related to simplifying instructions and making some changes to animations. In response, a second version of the program incorporating these changes and several other minor improvements was released to students in the 1996-7 session.

Student evaluation

A post-task questionnaire was developed by the courseware designer and approved by the content author. This was distributed to the first cohort of students (1995-6) and again to students in the 1996-7 and 1997-8 sessions. The questionnaire was not exhaustive, to avoid discouraging the students from providing feedback, but was considered sufficient to provide an insight into students' expectations and experiences of the courseware.

Observation and informal discussions with students provided further information. For a more accurate and objective measure of learning gains, fourth-year examination results between 1993 and 1998 were statistically compared.

Questionnaire-based evaluation

Fifty-five of the 78 students in the first cohort returned evaluation forms (71 per cent). In the following year 48/69 forms were returned (70 per cent). The response of the third cohort was much greater, with 81/85 forms returned (95 per cent).
Reasons quoted for using the program included:
• 'As a backup to the notes';
• 'To help me understand diagnostic imaging';
• 'Images can be looked at repeatedly';
• 'It was examinable';
• 'More interesting and informative than the lecture notes';
• 'Because we were told to'.

The latter response appeared more frequently in the returns of the first cohort of students. Another noticeable difference between the first and second cohorts of students was the strong reluctance of the first cohort to accept the courseware as a replacement for lectures, a typical comment being: 'The program should be used as a companion to lectures, not as a replacement.' In the forms collected from the second cohort, this sentiment was not expressed. Neither was it stated by any students in the third cohort, although two students expressed a preference for textbooks over computers, stating that they could be used in the comfort of their own homes 'with a coffee'.

For the first cohort of students, only 20.1 per cent of the students responding claimed to have enjoyed using the program; however, 80 per cent of these students found some or all of the modules helpful. Some students even commented: 'I did not enjoy using the program but I found it helpful.' In the second cohort, 70.8 per cent of students claimed to have enjoyed using the program, and 87.5 per cent of the responding students found some or all of the modules useful. In the third cohort, 50 per cent of students enjoyed using the program and 84.8 per cent of students found some or all of the modules useful (Figure 4). This reduced level of enjoyment may be explained by a network server crash disrupting the availability of Version 2 of the package, resulting in students accessing two slightly different versions.
Overall, 47 per cent of the modules accessed by the first cohort were thought to have helped the students to understand the topics (315 'helpful' modules accessed / 667 modules accessed). For the following cohort of students this proportion was 62.3 per cent (364/584), and in the third cohort the proportion was even higher at 67.3 per cent (641/952) (Figure 4). Some students from the first cohort preferred short modules (the duration of longer modules contributing to their lack of enthusiasm/enjoyment), while others preferred a more comprehensive and thorough account:
• 'The ultrasound section was better than the section on X-rays (shorter, faster to work through)';
• 'The ultrasound section was not as useful as the section on X-rays - it was not as explanatory';
• 'Sometimes interactive screens are unnecessary - they slow the program down'.

Students in the second and third cohorts did not make comparable statements, although one student in the third cohort requested more information on CT and MRI (the smallest modules).

Analysis of the questionnaire responses also revealed that:
• most students spent between 15 and 20 minutes on each module;
• students were divided about the usefulness of a bibliography;
• some students thought that prior knowledge was assumed, but this response declined over the three years.

No significant or recurring problems were reported by the students relating to either screen design or navigation in the courseware program (Figure 5).
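The module-helpfulness percentages quoted above follow directly from the reported counts; a quick arithmetic check (counts taken from the text):

```python
# ('helpful' modules accessed, total modules accessed) per cohort, from the text
cohorts = {"1995-6": (315, 667), "1996-7": (364, 584), "1997-8": (641, 952)}
for year, (helpful, accessed) in cohorts.items():
    print(f"{year}: {100 * helpful / accessed:.1f}% of accessed modules rated helpful")
```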
Statistical analysis of examination results

Statistical analyses were performed to establish whether or not the change in course structure affected student performance in the 1996 degree examinations of the first cohort, given that the questionnaire responses indicated that students were reluctant to accept any modifications to the course structure. The data from the 1993, 1994 and 1998 professional degree examinations were also analysed. These examinations were chosen for analysis because they included diagnostic imaging questions.

A general one-way analysis of variance (ANOVA) test was used to analyse each data set to determine whether there was a significant difference between student performance in each category/subject (i.e. p < 0.05). Subjects included anaesthesia, diagnostic imaging, equine studies, ophthalmology, orthopaedics and reproduction. To establish which subjects the students performed better in, it was necessary to carry out multiple-range tests (pairwise comparisons).

Prior to the incorporation of CAL into the course, analysis of the 1993 examination results shows a significant difference (p < 0.001) between performance in different subjects, with diagnostic imaging appearing in the lower of two groups of subjects. In 1994, the significantly different results (p < 0.001) fall into four performance groups, with diagnostic imaging in the lowest group. After the introduction of CAL, performance between subjects in 1996 is significantly different (p < 0.001), with diagnostic imaging appearing in the higher of two groups of subjects. In 1998, performance is again significantly different (p < 0.001), and performance in diagnostic imaging is significantly higher than in any other subject; the remaining four subjects were grouped within a single lower performance group. This enables us to draw the conclusion that the examination results improved as a result of lecture replacement with CAL.
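The analysis pipeline described above can be sketched as follows. The marks are synthetic placeholders (the study's raw examination data are not reproduced here), and a hand-rolled F statistic plus a simple comparison of group means stand in for the general ANOVA and multiple-range tests the authors used:

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    allobs = np.concatenate(groups)
    grand = allobs.mean()
    k, n = len(groups), allobs.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(42)
# Hypothetical post-CAL marks: diagnostic imaging shifted upwards
subjects = {
    "diagnostic imaging": rng.normal(68, 8, 70),
    "anaesthesia": rng.normal(60, 8, 70),
    "ophthalmology": rng.normal(60, 8, 70),
    "orthopaedics": rng.normal(60, 8, 70),
}
F = one_way_anova_F(list(subjects.values()))
print(f"F = {F:.2f}")  # compare against the F(k-1, n-k) critical value for p < 0.05

# A crude stand-in for the multiple-range tests: rank subjects by mean mark
means = {s: m.mean() for s, m in subjects.items()}
print(max(means, key=means.get))
```

A significant F only says that some group differs; the pairwise (multiple-range) comparisons are what place diagnostic imaging in its own higher-performance group.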
Discussion

The results of the study are discussed in terms of module design, screen design, the role of CAL in the curriculum, attitudes to CAL, and examination performance.

Module design

In the diagnostic imaging course, core concepts must be understood to appreciate the clinical significance of radiographs and other images. The arrangement of the courseware modules was such that students were encouraged to work through the modules sequentially on a first outing, although students could elect to omit the foundation modules. Each module had embedded problems that referred to previous modules, which the student was expected to solve. Hooper, O'Conner and Cheesmar (1998) stress the importance of problem-solving exercises in medical education, emphasizing that this is the process by which clinical diagnoses are made. The reiteration of important concepts in different ways has been shown to help students integrate information, constructing knowledge by linking new information to information previously given (Grabinger and Dunlap, 1994). The use of the Edinburgh QA template to construct a self-assessment set of questions allowed the same knowledge to be tested by providing visual clues, helping to reinforce the key points illustrated in the CAL package.

Screen design

The questionnaire analysis indicated that few students had problems using the interface. The main navigation controls in the program included a paging model which enabled the user to go backwards or forwards one screen, or to jump to the start or end of the current section. It is also possible to jump to a list of modules, progress being indicated by 'ticks' beside the title of each visited module, giving the student freedom to enter or exit the package as they see fit.
The role of CAL in the curriculum

One of the most frequent responses in the student questionnaire from the first cohort was that 'the program should be used as a companion to lectures, not as a replacement'. This response did not figure in the responses of the second cohort of students. This initial reaction to veterinary CAL has been observed previously in the United States (Weeks, Smith and Martin, 1992).

Where use has been monitored in the CLIVE institutions, it has been observed that students are motivated to use courseware if its use is timetabled, or recommended to coincide with particular lectures, practicals or resource-based learning, i.e. integrated into the curriculum. Scheduled CAL time, in addition to free access, was considered beneficial to students, as it has been argued that access to courseware in timetabled classes facilitates collaboration between students (Doughty et al., 1995).

However, an increasing number of students are running CLIVE programs on home computers; this is shown by the increasing number of requests for overnight borrowing of CD-ROMs from the University of Glasgow veterinary library. The Dearing Report has recommended that all students should have access to their own portable computers by 2005/6 (National Committee of Inquiry into Higher Education, 1997). This will make CAL more accessible to the small number of students who commented that they did not like working in a 'noisy' computer cluster, and will satisfy more of those students who prefer to study at home.
Students generally prefer to use courseware for preparation and revision, and the importance of CAL as an additional learning resource at the University of Glasgow has been exemplified by repeated requests for additional courseware covering a range of veterinary topics. The initial preference for CAL as a supplementary aid to learning may be explained by the feeling of deprivation induced by the lack of classical (didactic) lectures, and by the novelty of CAL, where the onus is placed on the student rather than the teacher to drive the learning process.

Attitudes to CAL

Students were told at the start of the academic year that they would be examined on their knowledge of the subject. Assessment has long been recognized elsewhere as the sole motivation for using a resource which might be regarded as optional, but this may also engender negative feelings towards it (Scanlon, Jones and O'Shea, 1987). It is relevant to note that the initial negative opinions of the students in the first cohort diametrically opposed the responses from the peer review survey. Peers, compared with students, are likely to have a clearer idea of the educational imperatives and the desired outcome. This view is supported by Draper, Brown, Henderson and McAteer (1996), who noted that 'Students express quite strongly positive and negative views about a piece of courseware that often seem unrelated to their actual educational value'.

Other factors which may have influenced how CAL was received by the first cohort of students as a replacement for lectures are that (i) the students in the first cohort had no formal training in IT, (ii) the subject was examinable, and (iii) the previous group of fourth-year students had been given lectures. For the following two cohorts, CAL had already replaced lectures and the majority of students had more experience of using computers, although this was still variable.
A higher proportion of the second cohort of students enjoyed using the program, compared with the first cohort. This may be because integration had been achieved and CAL was now accepted as part of the course. A smaller proportion of students in the third year claimed to have enjoyed the program than in the second year, but more students found the courseware modules useful.

Examination analysis

The investigation of the variance of scores indicates that in the pre-CAL years, student performance in diagnostic imaging was either on a par with other subjects or significantly below them. In the post-CAL examinations, student performance in diagnostic imaging was significantly higher than in other topics, and exclusively the highest in the most recent examination. This implies that the replacement of introductory lectures with CAL improved student performance in this subject, despite initial reluctance by students to accept the change in learning methods. There are a number of possible reasons for the high diagnostic imaging scores in the post-CAL examinations. The most pessimistic is that, given the perceived adverse change in course structure, students compensated by working harder (Draper et al., 1996), using other resources in a remedial fashion. This would assume that all students considered the change adverse; however, this was not borne out by the questionnaire responses. The possibility that the questions were easier, or that the papers were marked more leniently, in the three years post-CAL is unlikely, since the same person set and marked the examination papers each year. The most favourable explanation is that the change in learning resources helped the students to learn at their own pace and fostered deep learning. This enabled them to apply their knowledge of facts derived from using the courseware to the rest of the course, e.g. radiopathology tutorials, and engage in collaborative dialogue with each other and the
teacher during film-reading sessions.

Conclusion

The key issues to emerge for the future of CAL in the veterinary curriculum are overcoming initial student resistance, clear and full integration into the curriculum, and making courseware available outside traditional scheduled timetables. The fact that design and navigation were not an issue indicates that a well designed package could be extended to CPD use, where users may not be particularly IT literate. The results of this study are very encouraging and indicate that CAL-based open-learning resources can provide an effective alternative to conventional lectures.

Figure 2: A screenshot illustrating interaction in the Physics of Diagnostic Imaging.
Figure 4: Analysis of the evaluation form responses.
Query-seeded iterative sequence similarity searching improves selectivity 5–20-fold

Abstract

Iterative similarity search programs, like psiblast, jackhmmer, and psisearch, are much more sensitive than pairwise similarity search methods like blast and ssearch because they build a position-specific scoring model (a PSSM or HMM) that captures the pattern of sequence conservation characteristic of a protein family. But models are subject to contamination; once an unrelated sequence has been added to the model, homologs of the unrelated sequence will also produce high scores, and the model can diverge from the original protein family. Examination of alignment errors during psiblast PSSM contamination suggested a simple strategy for dramatically reducing PSSM contamination. psiblast PSSMs are built from the query-based multiple sequence alignment (MSA) implied by the pairwise alignments between the query model (PSSM, HMM) and the subject sequences in the library. When the original query sequence residues are inserted into gapped positions in the aligned subject sequence, the resulting PSSM rarely produces alignment over-extensions or alignments to unrelated sequences. This simple step, which tends to anchor the PSSM to the original query sequence and slightly increase target percent identity, can reduce the frequency of false-positive alignments more than 20-fold compared with psiblast and jackhmmer, with little loss in search sensitivity.

INTRODUCTION

Protein similarity searching is central to interpreting genome sequence data. The widely used BLAST program (1) can routinely identify homologous proteins that diverged >2 billion years ago, and share as little as 20-25% sequence identity. For most large, well-characterized protein families, a single BLAST search against a comprehensive protein library will yield hundreds, if not thousands, of statistically significant similarity scores from homologous proteins that share common 3D structures, and often similar functions.
But, despite the enormous growth in protein sequence databases, and the expectation that most protein families that exist in nature have homologs in current protein sequence databases, there are still large numbers of proteins for which little or no structural and functional information is known. Likewise, as the number of proteins with known 3D structures has grown, there are still many examples of structurally similar proteins that do not share statistically significant similarity in a BLAST search. Iterative, model-based, similarity search methods like psiblast (1) and jackhmmer (2) are dramatically more sensitive than conventional pairwise similarity searching methods at identifying homologous, structurally similar, proteins. Iterative similarity searches with psiblast are usually two- or three-fold more sensitive than single sequence searches (3,4), and iterative methods can be 5-100-fold more sensitive with challenging queries. Unfortunately, iterative search methods can fail when unrelated sequences are included in the Position-Specific Scoring Matrix (PSSM) or Hidden Markov Model (HMM). In the worst cases, contaminating non-homologous sequences can shift the PSSM away from the original homolog family, causing it to detect more non-homologs (false-positives) than homologs (true-positives). In previous work, Gonzalez and Pearson (5) showed that PSSMs are often contaminated when a homologous alignment over-extends into a non-homologous region and brings additional non-homologous domains into the PSSM model. In that work, we also showed that reducing 'alignment creep' by fixing the alignment boundaries for a sequence included in the PSSM to the boundaries found at the first significant alignment of the sequence could dramatically reduce alignment over-extension and improve search selectivity (5). An implementation of this strategy--psisearch--was described by Li et al. (6).
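The boundary-fixing strategy of refs (5,6) can be sketched in a few lines; the data structures here are illustrative (the real implementations work on BLAST tabular output), not the published code.

```python
def update_boundaries(history, subject_id, start, end):
    """Clamp 'alignment creep' as described above: record a subject sequence's
    alignment boundaries the first time it produces a significant score, and
    reuse those boundaries on later iterations instead of the new, possibly
    over-extended ones. `history` maps subject id -> (start, end)."""
    if subject_id not in history:
        history[subject_id] = (start, end)
    return history[subject_id]
```

Later alignments of the same subject are thus prevented from growing, at the cost of excluding distantly related flanks that only align in later iterations, which is the limitation noted above.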
During the development of psisearch2, an improved version of psisearch, we found that the strategy used to construct the PSSM in psisearch occasionally produced PSSMs that aligned incorrectly to the homologous domain. To correct this problem, we explored methods to correctly 'anchor' the PSSM by replacing gapped positions in the subject sequence alignment with the residues from the query sequence that aligned to the gap in the subject sequence. We were surprised to find that PSSMs constructed using query-seeded subject sequences not only reduced PSSM misalignment, they also produced dramatically fewer false-positives. We compared the ability of conventional psiblast (1,7) and jackhmmer (2), and query-seeded versions of psisearch2 using either psiblast or ssearch as the search program, to identify homologs in RPD3, a set of full-length protein sequences selected from Pfam28. To simulate searches with multi-domain proteins, queries were constructed by embedding Pfam domain regions from real protein sequences into random flanking sequence. We also searched with intact full-length proteins containing the same query domains. In psisearch2 searches with either psiblast or ssearch, PSSMs derived from subject sequences with seeded query residues are much less likely to become contaminated by non-homologous domains. Query-seeding appears to reduce homologous over-extension by reducing the evolutionary 'depth,' or increasing the target sequence identity, of PSSM models.

Evaluation datasets

psiblast, jackhmmer and psisearch2 iterative search performance was evaluated using domains and sequences selected from an updated version of the RefProtDom dataset (8), RPD3, derived from Pfam release 28 (9), with modifications described below. The query sequences used for the searches contained a single Pfam28 domain, embedded in an equal length of random sequence. Query sequences were iteratively searched against the full-length protein sequences in RPD3.
RPD3 construction--selection of domain families and clans.

The domain families used to evaluate iterative similarity search strategies were selected from RefProtDom3, a set of diverse domain families that met the following criteria: (i) domain size: Pfam domain model lengths >200 match states; (ii) domain number: >200 members; (iii) diversity: domains were present in at least two of the three kingdoms of life (archaea, bacteria, eukaryota) with at least 20% of the sequences from the second most abundant kingdom (100 domains if the most abundant kingdom had >500 domains); (iv) clan length consistency: domain families from clans were included only if the maximum model length of the domains in the clan was <1.5-fold the minimum model length. In Pfam28, there are 1743 domain families and 155 clans that met the domain length, domain abundance, and clan length consistency criteria. Including the diversity requirement reduced the number of domain families that did not belong to a clan to 299, and the number of clans to 40, for a total of 339 non-homologous queries from 428 Pfam28 domain families.

RPD3 construction--selection of sequences.

With the dramatic increase in bacterial sequencing over the past 5 years, some of the 339 domain and clan RPD3 families contained many tens of thousands of sequences containing an RPD3 domain. To reduce the differences in abundance between the largest and smallest domain families, large domain families were randomly down-sampled to a maximum of 5000 entries using a strategy that sought to preserve or enhance phylogenetic diversity. Thus, if there were 2000 or fewer sequences in archaea, bacteria, and eukaryota, all 2000 were included, and if the two kingdoms with fewer domains had <25% of the domains in the largest kingdom, 2000 were taken from the most abundant, and at least 500 taken from the less abundant kingdoms.
If the less abundant kingdoms contained >25% of the sequences in the most abundant kingdom, all three kingdoms were sampled randomly. The same strategy was used for domains from the 40 clans, but for domains in clans, all the domain families from the clan were combined, and then the phylogenetic diversity rules were used for sampling. The resulting RPD3 protein set contains 597 753 proteins that contain at least one domain from the 299 Pfam28 families and 40 Pfam28 clans in the RPD3 domain set. The largest clan/domain family is found in 4719 sequences, the smallest domain family in 207 (median: 2271, Q1: 1011, Q3: 2482). Thus, the middle 50% of clan/domain families differed ∼2.5-fold in abundance. The full-length RPD3 sequences contain many domains in addition to the 339 domains/clans in the RPD3 set. Pfam28 reports 2904 domains in the full alignments (Pfam28 MySQL field in_full=1) among the RPD3 sequence set.

Query sequence selection.

To provide a challenging set of domains for evaluating psiblast, psisearch2, and jackhmmer, two sets of 100 query domains were selected from the 339 RPD3 clan/domain families. We sought sequences that were more distant from the HMM model that describes the domain family, using the Pfam28 sequence bits scores as a proxy for evolutionary distance. For one set of 100 queries (far50), we randomly selected domains from the bottom tenth-percentile of sequence bits scores from the pfamA_reg_full_significant table in the Pfam28 MySQL distribution that covered at least 50% of the domain model length, and did not overlap other domains. For the second set of 100 queries (far66), we randomly selected domains that covered at least 66% of the model length from the bottom tenth-percentile. For domain families that belonged to clans, we selected a domain family near the median in sequence abundance in RPD3. Domains were only selected from sequences that were not marked as is_fragment in Pfam28.
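The phylogeny-aware down-sampling rules described above can be read as the following sketch. The exact per-kingdom quotas and tie-breaking are assumptions based on the rule text, not the authors' actual sampling code.

```python
import random

def downsample_family(seqs_by_kingdom, cap=5000, major_quota=2000, minor_floor=500, seed=0):
    """Hedged sketch of the phylogeny-aware down-sampling described above.
    seqs_by_kingdom: kingdom name -> list of sequence identifiers.
    Families at or under `cap` sequences are kept whole."""
    rng = random.Random(seed)
    total = sum(len(v) for v in seqs_by_kingdom.values())
    if total <= cap:
        return {k: list(v) for k, v in seqs_by_kingdom.items()}
    ordered = sorted(seqs_by_kingdom.items(), key=lambda kv: len(kv[1]), reverse=True)
    biggest = len(ordered[0][1])
    minority = sum(len(v) for _, v in ordered[1:])
    sampled = {}
    if minority < 0.25 * biggest:
        # minority kingdoms are scarce: cap the largest kingdom, and keep at
        # least `minor_floor` (or everything) from each smaller kingdom
        k0, v0 = ordered[0]
        sampled[k0] = rng.sample(v0, min(major_quota, len(v0)))
        for k, v in ordered[1:]:
            sampled[k] = list(v) if len(v) <= minor_floor else rng.sample(v, minor_floor)
    else:
        # otherwise sample all kingdoms proportionally down to the cap
        for k, v in ordered:
            take = min(len(v), round(cap * len(v) / total))
            sampled[k] = rng.sample(v, take)
    return sampled
```

The two branches mirror the two cases in the text: scarce minority kingdoms are protected with a floor, while well-balanced families are sampled proportionally.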
The domain region sequences were then embedded into random sequence, i.e. a 200-residue domain produced a 400-residue query sequence with 100 residues of random sequence on either side of the genuine domain sequence, and used to search the RPD3 sequence set. The embedded domain queries were then ranked by their ability to produce statistically significant alignments in a Smith-Waterman search (ssearch) of the RPD3 library, and the 100 sequences between the 10th and 40th percentiles by family coverage were selected (far50). For the far66 set of queries, the 100 embedded domain queries were selected randomly from the bottom half of queries ranked by family coverage after an ssearch search. The two query sets sample 150 of the domain/clan families. Forty-nine families are shared (with different embedded domain queries) between the far50 and far66 query sets. We also evaluated the performance of query-seeded iterative searching using full-length protein sequence queries by retrieving the full-length sequences from Pfam28 that contained the 100 far50 embedded domains (far50-full) and the 100 full-length sequences with far66 domains (far66-full).

Figure 1. The psisearch2 iteration cycle. A query sequence is compared with a sequence database using either psiblast or ssearch, producing output in BLAST tabular format including a BTOP-encoded alignment. The tabular BTOP output, together with optional boundary information, is processed by the m89_btop_msa2.pl script to produce both a multiple sequence alignment (MSA) and a FASTA format library file, which is reformatted with makeblastdb. These two files are processed by psiblast to produce a PSSM, which can then be used to re-search the sequence database for the next iteration.

Challenging Pfam queries.
To focus on the domain queries that produced the largest false discovery rates (FP/(TP+FP), FDR), we identified 40 queries with the highest FDR for psiblast and 40 for jackhmmer, and then found 20 each from the far50 and far66 sets with the highest average FDR using psiblast and jackhmmer after 10 iterations.

Iterative searching and PSSM construction

We evaluated the performance of psisearch2, a new version of the psisearch program (6) that combines query-seeding and alignment boundary modification. The psisearch2 script (Figure 1) separates the two parts of an iterative search: (i) the identification of homologs and production of alignments; and (ii) the production of a PSSM from the alignments for the next iteration. psisearch2 uses a flexible strategy for modifying the boundaries of the multiple sequence alignment (MSA) and the sequence library used to construct the PSSM with psiblast. A block diagram of the search/alignment/PSSM construction process is shown in Figure 1. The iterative process begins with a similarity search, using either psiblast (7) or ssearch (10), which produces output in the blast-tabular format that includes the blast BTOP alignment encoding (psiblast -outfmt 7 or ssearch -m 8CB). The alignment results are passed to the m89_btop_msa2.pl script to produce an MSA and subject sequence library. The MSA and subject library files are then passed to psiblast to produce the PSSM. We control the PSSM construction process by using psiblast 2.3.0 with the -in_msa and -out_pssm options, together with the recently implemented -save_pssm_after_last_round option. We build the PSSM by aligning the MSA to a sequence database comprised of the sequences with statistically significant similarity scores from the previous iteration.
We can modify the properties of the PSSM in two ways: (i) by controlling the boundaries of the sequences specified in the MSA used to make the PSSM (boundaries can reflect the current alignment boundaries, the previous alignment boundaries, or domain boundaries); and (ii) by modifying the subject sequences in the sequence database used to calculate the PSSM (both internal- and end-gaps in the subject sequence can be ignored, or be substituted with the aligned query sequence residue, or with a random residue). The m89_btop_msa2.pl program takes the alignments produced by the psiblast or ssearch similarity search and produces an MSA and a custom subject sequence database that psiblast can convert into a PSSM. m89_btop_msa2.pl options control the MSA boundaries and the sequences in the custom subject sequence database.

Search evaluation

Characterization of true-positives and false-positives. The 200 embedded domain query sequences described above were used to iteratively search the RPD3 full-length protein sequence database using psiblast and jackhmmer (unmodified), and psisearch2 modified using the query-seeding and boundary control strategies available with m89_btop_msa2.pl (Figure 1). For the psiblast and ssearch searches with psisearch2, alignment output was captured using the commented tabular format, including the BTOP field. For jackhmmer, similar information was extracted from the --domtblout file. The blast-tabular and --domtblout formats provide both the identifier and expectation value for the subject library sequences found, and the beginnings and ends of the alignments in the query and subject sequence. For jackhmmer, we used HMM coordinates as a proxy for the query coordinates, and the alignment start and end, not the probabilistic envelope boundaries, for the subject boundaries.
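A BTOP string compactly encodes a pairwise alignment: runs of digits count identical residues, and two-character tokens give the query and subject characters, with '-' marking a gap on that side. A minimal Python sketch of the expansion (ignoring frameshift codes, and not the authors' Perl implementation) is:

```python
import re

def expand_btop(btop, query, subject):
    """Expand a BLAST BTOP string into a gapped pairwise alignment.
    `query`/`subject` are the aligned regions only (no flanking sequence)."""
    q_aln, s_aln = [], []
    qi = si = 0
    for token in re.findall(r"\d+|\D{2}", btop):
        if token.isdigit():
            n = int(token)                       # run of n identities
            q_aln.append(query[qi:qi + n]); s_aln.append(subject[si:si + n])
            qi += n; si += n
        else:
            qc, sc = token                       # mismatch or gap column
            q_aln.append(qc); s_aln.append(sc)
            if qc != "-": qi += 1
            if sc != "-": si += 1
    return "".join(q_aln), "".join(s_aln)
```

For example, "2AG1C-2" decodes to two matches, an A/G mismatch, one match, a query residue C opposite a subject gap, and two more matches.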
Because our query sequences contain a genuine protein domain sequence embedded in random sequence, we count alignments as true-positives only if the genuine domain in the query aligns with the same domain in the Pfam28-annotated RPD3 protein sequence. If the embedded domain query sequence domain aligned with a protein that did not contain the correct domain, the alignment was scored as a false-positive. If the query and subject sequences contained the same domain, but the alignment was outside the embedded domain coordinates, the alignment was scored as a false-positive (Figure 2H). For searches with full-length proteins (far50-full, far66-full), only alignments in the original query domain were scored. Thus, for full-length query sequence H6NQX3, which contains a PF02219/CL0086 domain from residues 326-607, only alignments within this range of query residues were scored as either true-positive (if the aligned region contained a CL0086 domain), or false-positive (if no true-positive domain was found). This contrasts with embedded queries, which could be scored as false-positives when the alignment occurred in the random flanking sequence (Figure 2H). Average overall sensitivity and FDR was calculated as weighted sensitivity (TP/(TP + FN)) or FDR (FP/(TP + FP)) for receiver operating characteristic (ROC) curves and Table 1 by treating each family independently, adding up the total sensitivity or FDR, and dividing by the number of queries. Thus, when 100 queries were summarized, each individual query contributed a maximum of 1% of the sensitivity or FDR total; for 20 queries, 5%.

Figure 2 (caption excerpt): psisearch2 with seeding (H) does not produce a significant alignment with L0HP41, so an alignment with Uniprot accession R2VQV7 at iteration 4 is shown (H). This alignment is to a homologous domain, but in a non-homologous region, so the alignment is scored as a false-positive.

Modifications to Pfam28 annotations.
Our evaluation of search effectiveness and search selectivity depends on accurate Pfam28 annotations. We used Pfam28 coordinate annotations on the proteins in RPD3 without modification. But when we saw significant false-positive alignments, sometimes with unembedded sequences, and sometimes in the first iteration, we investigated further. In four cases, we concluded that Pfam had missed a homology relationship. We added PF09511 to clan CL0078, PF16332 to CL0579, PF01010 to CL0425, and we formed a new clan (CL9001) from PF01156 and PF07362. Each of these relationships was confirmed by finding alignments where annotated members of the clan aligned with the candidate homologous domain, or where the relationship was supported by SCOOP (11). Adding Pfam domain families to clans allowed us to correct large numbers of false-positives (but also increased the number of homologs in the family, thus reducing the true-positive fraction). It is unlikely that we have corrected all the missing relationships, so some of the false-positives we record are probably artifacts of homologous domains that are not annotated by Pfam28.

Alignment over-extension and PSSM corruption

PSSM corruption often occurs when an aligned homologous region produces a strong similarity score that allows the alignment to be continued into an adjacent non-homologous region, a process we have termed homologous over-extension (5). Homologous over-extension typically occurs because the alignment score for an homologous region is not reduced rapidly enough in the non-homologous region to terminate the alignment. For example, if the homologous region is around 40% identical and the BLOSUM62 scoring matrix is being used, the alignment might be extended into non-homologous sequence until the overall alignment identity is 25% or lower (12).
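The 40% to 25% identity argument can be made concrete with a toy calculation; the ~5% background identity assumed for unrelated sequence (roughly 1/20 for a uniform amino acid alphabet) is an assumption, not a figure from the paper.

```python
def identity_after_extension(core_len, core_ident, ext_len, ext_ident=0.05):
    """Overall fractional identity of an alignment consisting of a homologous
    core extended by ext_len residues of unrelated (background) sequence."""
    matches = core_len * core_ident + ext_len * ext_ident
    return matches / (core_len + ext_len)
```

Under this model, a 100-residue core at 40% identity can absorb about 75 residues of unrelated extension before the overall identity falls to 25%, near the effective target identity of BLOSUM62, which is why the alignment is not terminated sooner.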
Homologous over-extension is a particular problem for iterative methods that build protein family-specific PSSMs or HMMs, because these scoring systems can detect very low identity homologs, and are thus less effective at terminating alignments as they extend into non-homologous regions. When we first identified homologous over-extension as a major cause of PSSM contamination, we found that we could reduce contamination using a simple strategy to prevent alignments between the query/PSSM and the subject sequences from extending with successive alignments (5,6). This strategy improved search specificity but seemed crude, since distantly related portions of an homologous region that failed to align in an early iteration might be excluded from the PSSM. Thus, we sought a more subtle strategy for reducing over-extension. Recently, the fasta programs have been extended to allow sub-alignment scoring (10), a process that partitions the overall similarity score based on sequence annotations, such as the start and stop of domains annotated by Pfam (9). Sub-alignment scoring makes it much easier to detect potential non-homologous alignment, because the part of the alignment that is homologous will have a much higher score than the over-extended non-homologous region. When non-homologous over-extension occurs, more than 80% of the similarity score can be found in the homologous region, but the non-homologous alignment with 20% of the score may be 10-100 residues long. Thus, the score density in the non-homologous region is far lower than the density across the homologous alignment. Sub-alignment scoring can detect domains that have been included in an alignment but do not contribute significantly to its score.
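The score-density idea can be illustrated with a toy computation. The data structures are illustrative only, not the fasta programs' actual sub-alignment implementation:

```python
def score_density_by_region(aligned_positions, pos_score, regions):
    """Partition an alignment's total score across annotated subject regions
    and report score density (score per aligned residue).
    aligned_positions: subject positions covered by the alignment.
    pos_score: position -> its contribution to the similarity score.
    regions: list of (start, end, label) annotations.
    An over-extended non-homologous region shows a much lower density."""
    out = {}
    for start, end, label in regions:
        covered = [p for p in aligned_positions if start <= p <= end]
        total = sum(pos_score[p] for p in covered)
        out[label] = total / len(covered) if covered else 0.0
    return out
```

A homologous domain with high per-position scores and a long low-scoring tail then stand out immediately when their densities are compared.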
Our earlier psisearch program (6) performed iterative searches by searching a database (upper left box in Figure 1) and then directly building a PSSM by running psiblast with the query sequence or query PSSM against a library of subject sequences produced from the significant alignments in the previous search (lower right box in Figure 1). While integrating sub-alignment scoring into the scripts that we used for iterative searching with ssearch, we were surprised to find that on rare occasions the psiblast run to produce the PSSM did not align the query/PSSM to the same region of the subject sequence. To provide psiblast more guidance and ensure that the appropriate sequences were aligned, we wrote the m89_btop_msa2.pl script. m89_btop_msa2.pl produces an MSA from the aligned output of the previous search. psiblast can use this MSA, together with the set of subject sequences (the two sets of arrows entering the "Build PSSM" box in Figure 1), to produce a PSSM. But despite the MSA input, psiblast sometimes failed to produce an alignment over the homologous domain. To 'force' psiblast to accurately reproduce the alignments over the homologous domains, we modified the m89_btop_msa2.pl script to insert query residues into the subject library sequences at positions corresponding to gaps in the subject sequence in the MSA alignment. This strategy abolished psiblast misalignment during PSSM generation, and we were surprised to find that it also dramatically reduced the number of false-positives found after five, or even ten, iterations. Including 'X'-residues, or random residues, did not consistently prevent misalignment. Figure 2 shows the process of alignment over-extension that occurs when a sequence with a Pfam28 domain (PF00346) embedded in random sequence aligns with full-length Uniprot proteins.
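The seeding step just described amounts to a column-wise substitution over the query-based alignment; a minimal sketch (the real script operates on the full MSA and library files):

```python
def seed_query_residues(query_aln, subject_aln):
    """Replace gap characters in the aligned subject with the query residue
    at that column -- the query-seeding step described above. Columns where
    the query is also gapped are left untouched."""
    seeded = []
    for qc, sc in zip(query_aln, subject_aln):
        if sc == "-" and qc != "-":
            seeded.append(qc)   # anchor the PSSM to the query at this column
        else:
            seeded.append(sc)
    return "".join(seeded)

# e.g. query "MKATCLL" vs subject "MKGT-LL" becomes "MKGTCLL": the subject
# gap opposite query 'C' is filled with 'C'.
```

Because the filled positions always carry query residues, the resulting PSSM stays anchored to the original query, which is the effect exploited throughout the paper.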
The PF00346 domain from A0A022SZ73 embedded in the query comes from the far50 query set. All the alignment beyond the embedded green PF00346 domain in Figure 2 is non-homologous over-extension; outside the green domain the subject sequence is aligning to random sequence in the query (indicated by striped domains in the subject sequences). As Figure 2, panels A-D illustrate, over-extension occurs with NCBI psiblast, jackhmmer, and psisearch2 (without query-seeding), and with psisearch2 with query-seeding to a more limited extent. Many proteins that contain a PF00346 domain also contain a PF00374 domain (e.g. L0HP41 in Figure 2A-D). When over-extension aligns random sequence in the query to a PF00374 domain in the subject sequences, the PSSM 'learns' to find PF00374 domains, and produces false-positive alignments with proteins that contain a PF00374 domain, but not the homologous PF00346 domain (Figure 2E-G, striped blue domains). If the PSSM is built by aligning the MSA against a library that contains query residues seeded in the gaps in the subject sequences, the PF00374 alignment does not occur, though it does align to a non-homologous part of a PF00346 domain (Figure 2H, striped green domain).

Query-seeding reduces false-positives

Query-seeding reduces the sensitivity of psisearch2 and psiblast slightly, but decreases the false discovery rate (FDR) for those iterative methods 5-20-fold (Table 1, Supplementary Table S1, and Supplementary Figure S1). The 100 far50 queries are quite challenging--all the methods find about 10% of homologs after the first iteration (for the far66 embedded queries, ∼15% of homologs are found after one iteration)--but after five or ten iterations, 82-93% of homologs (the average across the 100 queries) are found. jackhmmer detects the largest fraction of homologs, but also has the highest average FDR. The more remarkable difference in performance with query-seeding is reflected in FDR after five and ten iterations.
The direct effects of query-seeding can be seen by comparing the psisearch2/msa and psisearch2/msa+seed, or the psiblast/msa and psiblast/msa+seed maximum FDR columns. For these two pairs, the only difference in how the PSSM was constructed and used was the inclusion of query residues in subject sequence gaps. With the far50 embedded domains, query-seeding drops the maximum FDR for psisearch2 18-fold after five or ten iterations (Table 1). For psiblast, query-seeding improves the FDR 7-fold after five iterations and 5-fold after 10 iterations. For the far66 queries (Supplementary Table S1), query-seeding reduced the maximum FDR about 5-fold. At 80% average family coverage, query-seeding improved FDR 4-38-fold for the far50 queries. For the far66 queries, the 80% FDR improvement ranged from 38-fold (psisearch2) to 79-fold (psiblast) at iteration 10 (Supplementary Table S1). The far50 and far66 queries were selected because they share significant similarity with the smallest fraction of homologs in the RPD3 database. But half of the far50 queries produce no false-positives after five iterations with psiblast (Supplementary Figure S1), and more than one third of the far50 queries produced no false-positives after 10 iterations with psisearch2 (un-seeded). Thus, we focus on the 20 families from the far50 and far66 query sets that produced the largest FDR (Figure 3, Supplementary Figure S3). Figure 3 shows the sensitivity and selectivity (FDR) of PF00346 (panels A and B) and twenty of the most challenging query sequences from the far50 query set (the far66 dataset is shown in Supplementary Figure S4). At iteration 2, psiblast produces 15 false-positive alignments in addition to finding 98% of the 2761 true-positives, while psisearch2 (unseeded) produces six false-positives. jackhmmer reports its first 217 false-positives with PF00346 at iteration 3 (Figure 3). At iteration 4, jackhmmer produces 1541 false-positives.
psisearch2 with seeded query residues produces 1 false-positive at iteration 3, but only 24 after 10 iterations, where the non-seeded iterative strategies produce ∼1700 false-positives. The process of PSSM contamination depends strongly on the content of the sequence database and the topologies of the homologous and non-homologous domains. For a broader perspective on search performance we plotted the true-positive fraction and FDR for the 20 hardest far50 queries (Figure 3C and D). Here, the median FDR shows that psiblast begins producing false-positives for more than half the queries by iteration 2, and jackhmmer and psisearch2 (unseeded) at iteration 3. psisearch2 (seeded) does not produce any false-positives for half the queries in the far50 query set after 10 iterations. In the far66 query set, more than half the queries are producing false-positives with psisearch2 (seeded) at iteration 4, but the FDR is an order of magnitude lower than psisearch2 without query-seeding (Supplementary Figure S3). We also compared the performance of PSSMs produced with and without query-seeding by simply tabulating the number of query families where the seeding produced more false-positives, or fewer false-positives, using the 'R' binom.test function. This test confirms that query-seeding significantly reduces the number of false-positives. For psiblast and the far50 data set, 46 queries produce more false-positives after 10 iterations, while 16 queries produce fewer (P < 10^-4, 'R' binom.test, one-tailed). With the far66 dataset, the numbers are 55 more and 4 fewer false-positive queries without query-seeding (P < 10^-12). For psisearch2, 38 families have more false-positives and nine fewer on the far50 dataset (P < 10^-5), while 36 have more and five fewer on far66 (P < 10^-6). When the same test is done on the number of true-positives, query-seeding reduces sensitivity.
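The 'R' binom.test comparisons above correspond to a one-tailed binomial tail probability (binom.test(k, n, alternative="greater") under the null p = 0.5), which is simple to reproduce:

```python
from math import comb

def one_tailed_binom_p(k, n, p=0.5):
    """Probability of observing >= k successes in n trials -- the one-tailed
    sign test used above to compare seeded vs. unseeded query families."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 46 of 62 informative far50 queries favouring query-seeding:
p_value = one_tailed_binom_p(46, 62)
```

Ties (families with equal false-positive counts under both methods) are simply dropped before the test, which is why n is the number of families that changed in either direction.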
For psisearch2 and the far50 set, unseeded PSSMs perform better with 63 queries, while query-seeded PSSMs perform better with 32 (P < 0.001). But we believe that false-positives pose much more of a threat to iterative searches than false-negatives (see Discussion). Query-seeding significantly reduces the number of false-positives during iterative searches with only a small decrease in sensitivity.

Controlling alignment extension reduces over-extension

The improvement we see by seeding query residues into gaps in subject sequences is larger than the improvement we found by explicitly limiting alignment extension in psisearch (6). To see whether additional control of alignment extension could improve FDR beyond query-seeding, we examined two alignment strategies for reducing over-extension: (i) setting alignment boundaries to the values found the first time the subject sequence was found with a statistically significant score (alignment); and (ii) limiting alignment extension to domain boundaries based on Pfam28 domains (domains). Limiting extension based on alignment history was tested using both psiblast/MSA and psisearch2 (both with seeding). Domain-based extension limits were only tested with psisearch2, since the approach uses sub-alignment scores to focus on domains with significant similarity. In our tests, limiting alignment over-extension using the alignment history was generally more effective than the domain strategy (Figures 4, 5 and 6A). Looking at the ROC curve (Figure 4), the up- and down-triangle symbols both produce curves to the left (more selective) of the curves with seeding alone (+ symbols) after 10 iterations. But the effect is quite modest. Looking at the distribution of FDR fractions across 10 iterations (Figure 5B) suggests that the alignment history strategy does a better job of controlling the FDR for more of the 20 hardest queries.
Limiting alignment extension using alignment history has a small effect on psisearch2 with query-seeding and the far50 and far66 datasets; the number of false-positives goes down for seven families but up for four for the far50 queries, and down for eight but up for three with the far66 set. The effect for psiblast is more dramatic; 19 families have fewer false-positives when using alignment history to reduce over-extension, while eight families have more false-positives (P < 0.026 using a one-sided binom.test function in 'R'). For the far66 set, 17 families have fewer false-positives and six have more (P < 0.017). Because query-seeding is very effective, we only see a modest improvement in search selectivity (FDR) with our two over-extension control strategies. Figure 6A shows how the amount of over-extension increases with the number of iterations when query-seeding is not used. Remarkably, when query-seeding is used, the median level of over-extension without alignment boundary modification is <10 residues; query-seeding reduces median over-extension more than 50-fold after five and ten iterations. With so little over-extension without alignment boundary modification, it is difficult to do much better, and domain-based boundary modification looks very similar to query-seeding alone. However, boundary modification using alignment history reduces over-extension even more, particularly after iteration three. Seeding query residues into the sequences used to construct PSSMs reduces false-positives by increasing the information content at positions with gaps in many homologs, effectively making the resulting position-specific scoring matrix slightly 'less deep'.

Figure 6 (caption excerpt): As Figure 5, with the addition of psi2/msa/10, which shows the alignment progress with psisearch2, without query-seeding, but with a 20-fold more stringent inclusion threshold (--evalue=0.0001). The same data for the far66 queries are shown in Supplementary Figure S7.
But query-seeding also reduces failures of the PSSM construction process to re-align sequences properly. To see whether query-seeding improves searches where over-extension is less likely, we also compared the unseeded domains from the far50 query set to the RPD3 database. When the far50 domains are not embedded in random sequence, there were no significant differences between un-seeded and seeded PSSM searches (Supplementary Figures S4 and S5), and, as expected, PSSMs built with query-seeding are slightly less sensitive. Query-seeding and PSSM target identity Homologous over-extension, the major cause of false-positives in iterative searching, can be reduced by decreasing the evolutionary distance (equivalent to increasing the target percent identity) of the scoring matrix used to produce alignments (12,13). Query-seeding increases the median percent identity of the bottom quartile of alignments at iterations five and ten by about 5% identity (e.g. from 13.3% (far50) to 19.0%, Figure 6B). Over-extension does not cause reduced target identity; higher identity with seeding occurs with non-embedded searches, which cannot over-extend. To test whether the higher sensitivity of the matrices constructed from unseeded alignments contributes significantly to alignment over-extension, we reduced search sensitivity by specifying a 20-fold lower E()-value, 0.0001 rather than 0.002 (the default), for inclusion in the MSA and PSSM (Figure 6, psi2/msa/10). The more stringent inclusion threshold is less sensitive than query-seeding with E() < 0.002, but it does not substantially affect either the amount of over-extension or the target percent identity (Figure 6). Thus, we believe that the higher information content of the scoring matrix, rather than the reduced sensitivity of the search, limits over-extension. Query-seeding effectively makes the PSSMs less evolutionarily 'deep', which produces higher identity alignments (14) and less homologous over-extension (12,13). 
Iterative searches with full-length proteins Query-seeding dramatically improves search selectivity with embedded queries (Figures 3-5, Table 1, Supplementary Table S1) by reducing alignment over-extension (Figure 6). But because they are surrounded by random sequence, our embedded queries encourage alignment over-extension. Query-seeding also improves search selectivity with unembedded, full-length protein sequences (Supplementary Tables S2 and S3). For the far50 full-length proteins, query-seeding reduced maximum FDR 2-6-fold after five and ten iterations. For the far66 full-length proteins, query-seeding reduces FDR 1.4-3.7-fold at 80% coverage, and 3-4-fold at maximum coverage. Query-seeding reduced sensitivity about 5%, somewhat more than the 2-3% reduction seen in Table 1 and Supplementary Table S1. Full-length query sequences have a lower FDR than embedded domains, but query-seeding can provide an additional improvement. DISCUSSION Iterative sequence similarity searching with psiblast began with the observation by Henikoff and Henikoff (15) that position-based sequence weights (PSSMs) embedded into the conserved regions of a query sequence are dramatically more sensitive than searching with the sequences alone. This strategy, implemented in the COBBLER program, provided the basis for psiblast (1), which revolutionized sequence similarity searching by exploiting conservation information in sequence databases to dramatically increase the sensitivity of sequence searches. Subsequent improvements in psiblast have improved performance by more robustly dealing with composition bias and using more sophisticated methods to initialize the PSSM (16-20). These strategies have improved psiblast performance when identifying homologs, but the same strategies that allow the identification of more distant relationships--the construction of more sensitive PSSMs--also increase the likelihood of alignment over-extension (5,12). 
In this paper, we demonstrate that the rediscovery of the Henikoffs' original observation (15), that PSSMs can be embedded in a query sequence, can dramatically reduce false-positives in iterative search strategies. The simple strategy we developed and implemented in the m89 btop msa2.pl script can be combined with either psiblast or psisearch to construct PSSMs that are less likely to produce homologous over-extension. Using query-seeded PSSMs, the number of false-positives drops from over a thousand to fewer than a dozen with some queries. While the weighting of residues and pseudo-counts to construct better PSSMs has been examined very carefully, there is much less information available about how to treat gaps when constructing a PSSM. Gaps tend to be clustered in the MSA and may be indicative of less well-conserved regions. Inserting query sequence residues back into the gapped positions in subject sequences to build the PSSM is complementary to the original Henikoff seeding strategy, which embedded the PSSM in the query. Query sequence seeding reduces alignment over-extension by increasing the information content at gapped positions in the MSA that is used to construct the PSSM. This increased information content shifts the PSSM target identity to a shorter evolutionary distance (Figure 6B), which tends to reduce over-extension (12). Since the gaps in the MSA are often found near the ends of the homologous domain, the mismatch penalties near the ends of the domain boundaries are also increased, which reduces the likelihood of over-extension. Modifying the PSSM by seeding query residues dramatically reduces false-positives, but it also slightly reduces true-positives (Table 1). On the most challenging far50 query set, our most effective strategy for reducing false-positives reduces sensitivity after 10 iterations from 93.9% (jackhmmer) to 86.9% (psisearch2/seed+aln), while reducing the FDR from 13.9% to 0.2%. 
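The gap-seeding step itself is simple. The sketch below illustrates the core idea — replace gap characters in each aligned subject row with the query residue at the same column before building the PSSM — and is not the authors' m89 btop msa2.pl script; the function name and example sequences are illustrative:

```python
def seed_query_residues(query_row, subject_rows, gap="-"):
    """Replace each gap character in an aligned subject row with the query
    residue at that alignment column, so PSSM columns with many gaps gain
    information from the query sequence (query-seeding)."""
    seeded = []
    for row in subject_rows:
        seeded.append("".join(q if c == gap else c
                              for q, c in zip(query_row, row)))
    return seeded

# Toy MSA: two subject rows aligned to an 8-column query
msa = ["MKV--LQT", "MRVA-LQS"]
print(seed_query_residues("MKVAELQT", msa))  # ['MKVAELQT', 'MRVAELQS']
```

After seeding, formerly gapped columns carry the query residue instead of nothing, which raises the information content of those PSSM columns and makes the matrix slightly "less deep", as described above.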
On the far66 dataset, sensitivity drops from 93.0% to 88.3% while the FDR drops from 25.9% to 3.5%. Thus, 5-10% drops in sensitivity yield 8-20-fold, or more, reductions in FDR. We believe that the modest decrease in search sensitivity is more than balanced by 10-fold reductions in FDR. As Pfam clans illustrate, for many large and diverse protein families it is not possible to build a single model, either PSSM or HMM, that can reliably detect all the members of the family. In Pfam30, about one-third of Pfam domain families belong to clans, but even with multiple models in clans representing a single domain family, many Pfam domain homologs remain unannotated. Complete identification of homologous domains requires a mixture of PSSM or HMM models and a strategy for re-starting the iterative search to build a new homologous model. Such transitive strategies are far less likely to become contaminated if false-positives are avoided. Most of our false-positives with query-seeded PSSMs align to the query domain, not to the random sequence surrounding the embedded domain. We cannot be certain that the false-positives that we find with our most selective methods are genuine non-homologs; some are likely to be cryptic homologs. If homologous over-extension can be eliminated with some combination of PSSM adjustment and alignment boundary modification, then the problem of false-positive detection becomes statistical, and it should be possible to develop better methods with even fewer errors. CONCLUSION Improvements in sequence similarity searching require more sensitive scoring matrices--evolutionarily 'deep' matrices that can detect homologs with low sequence identity by giving low-identity alignments positive alignment scores. But matrices that give positive scores to low-identity homologs are also much more likely to allow alignments from homologous domains to extend into non-homologous regions. 
Iterative searching is effective because the discovery of one or two distant homologs can reveal hundreds of new homologs in the next iteration. But this same amplification process makes it critical to avoid false-positives; one or two false-positive relationships can quickly lead to hundreds or thousands of misleading results. Query sequence seeding dramatically reduces the incidence of false-positives. With psisearch2 query-seeding and either alignment or domain boundary limits, more than half of our most challenging queries do not produce any false-positives, even after 10 iterations. The higher selectivity of query-seeding comes at a cost of slightly lower sensitivity, which can be partially offset by increasing the number of iterations, but alternative strategies, such as re-initiating the search with a distant homolog, may be required to identify the most distant homologs. SUPPLEMENTARY DATA Supplementary Data are available at NAR Online. FUNDING Funding for open access charge: European Bioinformatics Institute. Conflict of interest statement. None declared. [e46 Nucleic Acids Research, 2017, Vol. 45, No. 7]
Experimental Study on the Aperture of Geomagnetic Location Arrays A method of locating a magnetic target based on the geomagnetic total field is proposed. In the method, a double gradient algorithm is introduced to eliminate the time-varying and spatially uneven distribution of the geomagnetic total field. Then a structure for the measuring array of the geomagnetic total field is designed. In the measuring array, the array aperture is a primary factor for the double gradient algorithm. To determine an optimal aperture, we analyze the relationship between the array aperture and the localization accuracy. According to the localization theory based on the geomagnetic total field, we simulate the process of determining an optimum array aperture. Based on the simulation, we propose the basis and principle for determining the optimum array aperture. To validate it, we use optically pumped magnetometers with different array apertures to carry out experiments locating a car in a suburb. Through the experiments, we obtain the experimental relationship between aperture and location accuracy, and this relationship agrees with the theory. The result shows that the method is feasible for determining the optimum aperture. 
Introduction It is important to locate targets by magnetic field in geological monitoring, energy and mineral exploration, rescue after plane crashes, antisubmarine detection, and medical diagnosis [1-3]. In measuring the geomagnetic total field, optically pumped magnetometers have the advantages of high resolution, long detection range, and no temperature drift [4-6]. Therefore, it is feasible to locate a magnetic target using an array of optically pumped magnetometers. In order to improve the location accuracy, the time-varying and spatially uneven distribution of the geomagnetic total field must be eliminated [7,8]. In this paper, a double gradient algorithm is introduced to eliminate this distribution. In the method, some magnetic field changes caused by targets are also filtered out; in particular, when two sensors are close to each other, more of the target's magnetic field is filtered out. On the one hand, all the information is lost when the array aperture is zero. On the other hand, the disturbance field cannot be filtered when the array aperture is infinite. Therefore, it is necessary to determine an optimal array aperture which filters the interference field to the maximum extent and filters the target field to the minimum extent. 
The double gradient algorithm preserves the structure of the target's magnetic field gradient, so the location algorithm is not invalidated by the filtering of part of the signal; the difference is only that the measured values become smaller (the decimal point is effectively shifted forward). For sensors with high precision and high resolution, the accuracy of locating a target will not be affected as long as the mantissa can be measured. However, if the sensor accuracy and resolution are low, the mantissa will not be detected, which means that the double gradient algorithm cannot detect remote or small targets. An optimum aperture is an equilibrium point of the double gradient algorithm. Based on the optimum aperture, we can filter the interference field to the maximum extent and filter the target field to the minimum extent. It is impossible to eliminate the magnetic field generated by the target completely; it is only required that the achieved position accuracy be within an expected range. The optimum array aperture is related to the sensor precision and resolution, the noise of the instrument and environment, the target magnetic moment, the detection position, and the detection range. These quantities should be estimated before designing the positioning system, so that the calculation method for the optimum aperture can be given according to these estimates. Method of Determining an Array Aperture 2.1. The Location Algorithm. When the distance between a target and a sensor is more than 2-3 times the scale of the target, the target can be regarded as a magnetic dipole [9-11]. To measure the magnetic field gradient of a target in 3D space and locate it, the array can be composed of several total field sensors as shown in Figure 1. The distance D between a sensor and the origin O is defined as the array aperture. 
T_0 is the geomagnetic field vector with no target, T_0 is its magnitude, and the local magnetic dip and magnetic declination are θ and φ, respectively, so the unit direction vector of T_0 is e = (cos θ · cos φ, cos θ · sin φ, sin θ). For i = 1, 2, 3, …, T_i is the measurement of magnetic sensor i at (x_i, y_i, z_i) with a magnetic target present. The target is at (x, y, z). Then, the displacement vector from the target to sensor i is r_i = (x_i − x, y_i − y, z_i − z), and r_i is its magnitude. P_m is the magnetic moment vector of the target, and P_m is its magnitude. α and β denote the offset angle and the tilt angle of P_m, respectively. The magnetic field vector B_i denotes the field generated by the target at the ith magnetic sensor. In the far-field condition, B_i is much smaller than T_0, so the total field sensor measurement is T_i ≈ T_0 + e · B_i [12]. According to the magnetic dipole model in the far field, the magnetic field B_i generated by the target can be expressed as B_i = (μ_0/4π) · [3(P_m · r_i) r_i / r_i^5 − P_m / r_i^3]. Substituting the expression of P_m in terms of α and β from equation (1), B_i can be written in terms of (x, y, z, P_m, α, β). Then, the scalar measurement of the ith magnetic sensor, T_i ≈ T_0 + e · B_i, gives equation (5): a scalar equation in the variables x, y, z, P_m, α, and β. (x, y, z) denotes the spatial position of the target, and P_m can be used to estimate the size of the target. Because the function is highly nonlinear, it is difficult to obtain an analytical solution for the variables. Generally, we obtain a numerical solution by solving an optimization problem; therefore, we solve equation (5) quickly and accurately using the software LINGO [13]. 
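As a numerical sketch of the far-field model above (the standard point-dipole formula plus the scalar approximation T_i ≈ T_0 + e·B_i); the function names and the example geometry are illustrative, not from the paper:

```python
import math

MU0_4PI = 1e-7  # μ0/4π in T·m/A (SI)

def dipole_field(m, r):
    """Far-field flux density of a point magnetic dipole with moment m (A·m^2)
    at displacement r (m): B = (μ0/4π)·(3(m·r̂)r̂ − m)/|r|^3."""
    rmag = math.sqrt(sum(c * c for c in r))
    rhat = [c / rmag for c in r]
    mdr = sum(mi * ri for mi, ri in zip(m, rhat))
    return [MU0_4PI * (3 * mdr * ri - mi) / rmag**3 for mi, ri in zip(m, rhat)]

def total_field_reading(T0, e, B):
    """Scalar total-field reading in the far-field approximation: T ≈ T0 + e·B."""
    return T0 + sum(ei * bi for ei, bi in zip(e, B))

# A 405 A·m^2 moment (the paper's car estimate) seen on-axis at 30 m:
B = dipole_field([0.0, 0.0, 405.0], [0.0, 0.0, 30.0])
print(B[2])  # 3e-9 T, i.e. a 3 nT anomaly -- well below T0 ≈ 55315 nT
```

The anomaly is orders of magnitude smaller than the background total field, which is why the scalar approximation (projecting B onto the background direction e) is valid and why background filtering is essential.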
ΔT_ij = [T_i(t, x_i, y_i, z_i) − T_j(t, x_j, y_j, z_j)] − [T_i(t_0, x_i, y_i, z_i) − T_j(t_0, x_j, y_j, z_j)]  (6)
Through (5) and (6), we can determine the target location from ΔT_ij. In this way, the time-varying and spatially uneven distribution can be filtered out [12]. Therefore, we use the double gradient ΔT_ij in the whole experiment. The magnetic field generated by the target is also filtered to some extent when using the double gradient. Therefore, an optimum array aperture is needed to filter the interference field to the maximum extent and the target field to the minimum extent. Principle of Determining the Optimum Aperture. In the linear motion of a target, its magnetic moment vector is fixed and can be regarded as a set of unknown constants. According to formula (6), ΔT_ij is a function of the variables x, y, z, and D. The uncertainty at (x, y, z) is expressed through the spatial sensitivity of ΔT_ij; writing Δ(ΔT_ij) = g(x, y, z, D) · Δr, where Δx, Δy, Δz denote the positioning accuracy of the target in the three directions and the positioning accuracy in all three directions is assumed equal to Δr, the quantity Δ(ΔT_ij) cannot be less than the instrument and ambient noise ΔT_min; that is, there is a lower limit ΔT_min. In order to improve the positioning accuracy, the function g(x, y, z, D) should be as large as possible. When the target is at a given point (x, y, z), g(x, y, z, D) is a function of D only. Therefore, there is an optimal value of the aperture D which makes g(x, y, z, D) reach a maximum; the theoretical optimum is obtained when ∂g(x, y, z, D)/∂D = 0. According to the above principle, the optimal aperture can be determined by the following procedures; Step 4 can be omitted when a single coordinate point or a target in a local area is concerned. 
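The cancellation property of the double gradient in formula (6) can be sketched numerically: a background field that is shared by both sensors (however it drifts between t_0 and t) cancels, while the target's differential contribution survives. The field values below are invented for illustration:

```python
def double_gradient(Ti_t, Tj_t, Ti_t0, Tj_t0):
    """ΔT_ij = [T_i(t) − T_j(t)] − [T_i(t0) − T_j(t0)], as in formula (6)."""
    return (Ti_t - Tj_t) - (Ti_t0 - Tj_t0)

# Illustrative numbers (nT): a common geomagnetic background G(t) that drifts
# between t0 and t is shared by both sensors, so it cancels in ΔT_ij; only
# the target's field at sensor i relative to sensor j remains.
G_t0, G_t = 55315.0, 55315.8   # drifting background
Bi_t0, Bj_t0 = 0.0, 0.0        # no target present at t0
Bi_t, Bj_t = 0.40, 0.05        # target field at sensors i and j at time t
dT = double_gradient(G_t + Bi_t, G_t + Bj_t, G_t0 + Bi_t0, G_t0 + Bj_t0)
print(dT)  # ~0.35 nT: the 0.8 nT drift is removed, the target signal is kept
```

Note the trade-off discussed above: if the two sensors were co-located, B_i ≈ B_j and the target signal would cancel along with the background, which is why the aperture D must be optimized.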
It is not possible in theory to find the corresponding D value for all (x, y, z) in the location space. Therefore, it can be calculated by dividing the measurement space into grid points, with the spacing of the grid points set according to the locating requirement, so that the computation workload is reduced. In practice, we can calculate the optimum aperture for several representative points in the region, or the three-dimensional space can be reduced to two dimensions or one dimension; then the computation can be further reduced. The curve of the magnetic field over time in the experimental area is shown in Figure 3. The horizontal axis represents time t, in s. The vertical axis is the double gradient value of the magnetic field, in nT. The array aperture is D = 6 m. The red line in Figure 3 is the measurement value T_1 of sensor 1 (for plotting convenience, 55315 nT was subtracted from each value). T_1 includes the geomagnetic field, which varies with time, and the magnetic field generated by the target car, which varies with the car's movement. Because the geomagnetic field itself varies greatly over time, the red line cannot be used to judge when a car passed. The blue line is the data of ΔT_14, processed with the double gradient of formula (6). The time-varying and spatially uneven distribution of the geomagnetic field is filtered out using the double gradient of the magnetic field. The sensitivity of the CS-L sensor is 0.6 pT/Hz^(1/2) @ 1 Hz (rms). The peak value of the instrument noise is 2 pT, and the bandwidth is 0.1 Hz. The measurement noise generated by the double gradient of the local environmental magnetic field is 10 pT. The measured values for the car are magnetic moment P_m = 405 A·m^2, α = 0.592 rad, and β = 3.74 rad. In Figure 7, the curves vary with D when the target y value is 40.00 m, 33.13 m, 26.27 m, 12.55 m, 5.69 m, -1.18 m, -14.9 m, -21.77 m, and -28.63 m. Therefore, the maximum D 
of theory changes with the target location y. When the environmental noise is 10 pT and the required location accuracy is 1 m, the range of D values which meets the precision requirement is calculated according to the data in Figure 7, as shown in Table 1. Simulation According to the accuracy requirements, the intersection of the D-value sets in Table 1 is D ∈ [4.5, 16.5] ∪ [21, 25] ∪ [28.5, 32.5] ∪ [36, 53] m. In addition to satisfying the location accuracy requirement, the array scale should be minimized to make the array more flexible, following the principle of Section 2.3. The minimum value D_min = 4.5 m is selected as the optimal aperture of the CS-L array. Results with Fluxgate Magnetometers. We also performed the simulation with an array of HS-MS-FG3S fluxgate magnetometers made in China. The frequency-domain noise of this magnetometer is 10 pT/Hz^(1/2) @ 1 Hz per channel. When we measure the total geomagnetic field with it, the total noise of the three channels is 17.32 pT/Hz^(1/2) @ 1 Hz, and the total noise is 20 pT/Hz^(1/2) @ 1 Hz after superimposing the white noise of the geomagnetic field. When the positioning accuracy is 1 m, the range of D values satisfying the accuracy requirement is calculated according to the data in Figure 7, as shown in Table 2. The intersection of the D-value sets in Table 2 which meets the accuracy requirements is D ∈ [9.5, 12.5] ∪ [38.5, 40.5] m. In addition to satisfying the location accuracy requirement, the array scale should be minimized to make the array more flexible, following the principle of Section 2.3. The minimum value D_min = 9.5 m is selected as the optimal aperture of the HS-MS-FG3S array. It can be seen that an increase in instrument noise affects the array aperture, making the optimal aperture of the array increase accordingly. The Location Experiment and Data Analysis We set four CS-L cesium optically pumped magnetometers in the suburb as shown in Figure 2. 
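The aperture-selection step above — intersect the admissible D intervals obtained for each target position and then take the smallest D in the intersection — can be sketched as follows. The interval values here are hypothetical, not the Table 1 or Table 2 data:

```python
def intersect_unions(a, b):
    """Intersect two unions of closed intervals, each a list of (lo, hi)
    pairs, returning the sorted union of the pairwise overlaps."""
    out = []
    for lo1, hi1 in a:
        for lo2, hi2 in b:
            lo, hi = max(lo1, lo2), min(hi1, hi2)
            if lo <= hi:
                out.append((lo, hi))
    return sorted(out)

def optimal_aperture(interval_sets):
    """Fold the per-point admissible D sets together and take the smallest
    admissible D, minimizing the scale of the array."""
    common = interval_sets[0]
    for s in interval_sets[1:]:
        common = intersect_unions(common, s)
    return common[0][0] if common else None

# Hypothetical admissible-D sets (in m) for three target positions:
sets = [[(4.0, 17.0), (20.0, 26.0)],
        [(4.5, 16.5), (21.0, 25.0)],
        [(5.0, 30.0)]]
print(optimal_aperture(sets))  # 5.0
```

Choosing the minimum of the intersection mirrors the selection of D_min = 4.5 m (CS-L) and D_min = 9.5 m (HS-MS-FG3S) from the intersected sets in the text.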
The target moved along a planned trajectory parallel to the y-axis in the horizontal plane, at x = 32.02 m. The horizontal axis in Figure 8 is the value of y, y ∈ [−41, 32.8] m. The vertical axis is the double gradient value of the total geomagnetic field, in nT, and t_0 is the moment when y = 32.8 m. The experiment is divided into four situations: D = 1 m, 2 m, 4 m, and 6 m. The red symbol ○ in Figure 8 represents the values of the double gradients ΔT_14, ΔT_24, ΔT_34 measured in the experiment versus y. The blue line in Figure 8 is the theoretical curve simulated with MathCAD versus the target location y. Within the range of the car's movement, the simulated curve is consistent with the experimental data, which proves that the far-field magnetic dipole model correctly describes the magnetic field of the car. In the experimental process, the time-varying and spatially uneven distribution of the geomagnetic total field is eliminated, which proves that formulae (5) and (6) are correct. According to formulae (5) and (6), the position (x, y) of the target at each time is calculated using the experimental data, as shown in Figure 8. Formula (6) is a high-order nonlinear equation, which is difficult to solve. The software LINGO is used for the numerical solution, and the location results based on the magnetic field are shown in Table 3. In Table 3, (x_0, y_0) is the actual location of the target car, and (x, y) is the target location calculated from the geomagnetic measurements, in m. Δr is the distance between the calculated point and the actual location of the target, and the mean of these distances is reported, in m. The smaller the Δr, the higher the positioning accuracy. 
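The numerical inversion of the forward model (done in the paper with LINGO) can be sketched with a brute-force grid search. Everything here is illustrative: the magnetic moment is assumed known to keep the sketch short, whereas the paper also solves for P_m, α, and β, and the array geometry and target position are made up:

```python
import math

MU0_4PI = 1e-7  # μ0/4π in T·m/A

def scalar_anomaly(target, sensor, m, e):
    """Far-field scalar contribution e·B at a sensor, for a point dipole
    with moment vector m (A·m^2) located at `target`."""
    r = [s - t for s, t in zip(sensor, target)]
    rmag = math.sqrt(sum(c * c for c in r))
    rhat = [c / rmag for c in r]
    mdr = sum(mi * ri for mi, ri in zip(m, rhat))
    B = [MU0_4PI * (3 * mdr * ri - mi) / rmag**3 for mi, ri in zip(m, rhat)]
    return sum(ei * bi for ei, bi in zip(e, B))

# Square array of aperture D = 4 m; field direction from dip 63.3°, decl 10.34°
D = 4.0
sensors = [(D, 0.0, 0.0), (0.0, D, 0.0), (-D, 0.0, 0.0), (0.0, -D, 0.0)]
th, ph = math.radians(63.3), math.radians(10.34)
e = (math.cos(th) * math.cos(ph), math.cos(th) * math.sin(ph), math.sin(th))
m = (0.0, 0.0, 405.0)        # hypothetical, vertically oriented car moment
truth = (32.0, -10.0, 0.0)   # "unknown" target position to recover

meas = [scalar_anomaly(truth, s, m, e) for s in sensors]

# Brute-force grid search over (x, y) in place of the paper's LINGO solver
best = min(((x * 0.5, y * 0.5) for x in range(20, 100) for y in range(-80, 80)),
           key=lambda p: sum((scalar_anomaly((p[0], p[1], 0.0), s, m, e) - t) ** 2
                             for s, t in zip(sensors, meas)))
print(best)  # recovers (32.0, -10.0)
```

With noise-free synthetic measurements the residual is exactly zero at the true grid point; real data would require a proper nonlinear least-squares solver and, as the paper notes, extra constraints to avoid the multi-valued solutions seen at D = 6 m.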
Based on the proposed method of determining the optimum aperture, the minimum array aperture D_min = 4.5 m satisfied the requirement in the simulation when the location accuracy is 1 m. In the table, it can be seen that Δr decreases from 15.25 m to 2.53 m when D increases from 1 m to 4 m. The location accuracy thus becomes higher, which agrees with the simulation results. However, when D increases to 6 m, the location accuracy is not higher, with Δr = 4.46 m. This is because of the multi-valuedness of equation (5), and LINGO does not converge to the real optimal solution. In order to solve this problem, additional conditions or criteria should be added to the algorithm. One way is to increase the number of linearly independent magnetic field sensors. Conclusion A method of constructing an array of total field magnetometers to locate a target is proposed. This method is based on a far-field magnetic dipole model. We filtered out the time-varying and spatially uneven distribution with the double gradient algorithm. Then the method of determining an array aperture is proposed. We simulate the location algorithm and the process of determining an array aperture. The location experiment is carried out in the suburb. We chose 1 m, 2 m, 4 m, and 6 m, respectively, as the aperture. The experimental data are consistent with the theoretical curve. In the location algorithm, the interactive linear and general optimization solver LINGO is used to solve the nonlinear equations. The location accuracy for each of the four apertures is calculated, and it basically agrees with the simulation. In the experiment, CS-L cesium optically pumped magnetometers are used. The CS-L cesium optically pumped magnetometers have high resolution, and the measurement is not affected by temperature. The attitude of the magnetometers is not rigidly calibrated. In the location experiment, the diagonal of the car is 4.64 m and the magnetic moment is about 405-512 A·m^2. The vertical distance between the array and 
the car is 32.02 m, and the parallel distance is -41 m to 32.8 m. The experimental location results are shown in Figure 9. When the array aperture is 4 m, the average deviation in locating the car is 2.53 m. In a word, the method of locating the target and determining the aperture is feasible.
Procedure for determining the optimal aperture:
(1) Determine the instrument and ambient noise ΔT_min.
(2) Set the location accuracy Δr of the experiment.
(3) For a specific (x, y, z), solve for the set of D satisfying g(x, y, z, D) ≥ ΔT_min/Δr.
(4) Find the intersection of the D sets corresponding to all (x, y, z) points in the location space.
(5) Choose the minimum value of D in the intersection to minimize the scale of the array.
Figure 1: Schematic diagram of the location array.
3.1. Simulation of the Experimental Environment. As shown in Figure 2, four optically pumped magnetometers T_1, T_2, T_3, T_4 form a square array. The aperture of the array is D. The target car moves at a constant velocity from y = 32.8 m to y = −41 m along a line parallel to the y-axis, 32.02 m from it. The local magnetic dip is 63.3°, and the local magnetic declination is 10.34°.
Figure 2: Schematic diagram and photo of the experimental area.
Figure 3: Curve of magnetic field variation over time in the experimental area.
Figure 6 is a surface graph of ∂(ΔT_34)/∂y versus y and D. The blue horizontal axis in the horizontal plane is the y value, in m, with a simulation range from 0 m to -40 m. The green horizontal axis in the horizontal plane is the D value, in m, with a simulation range from 0 m to 100 m. The vertical axis is the value of ∂(ΔT_34)/∂y, in nT/m, with a variation range from 0 nT/m to 0.1 nT/m. It can be seen that ∂(ΔT_34)/∂y varies with both y and D as a three-dimensional surface.
Figure 7: Curves of D with different y values. 
Figure 9: Results of the location experiment (panels for D = 1 m, D = 2 m, and D = 6 m).
The influence must be eliminated in magnetic field measurement. The measured value of the ith sensor located at (x_i, y_i, z_i) is T_i(t_0, x_i, y_i, z_i) at time t_0 and T_i(t, x_i, y_i, z_i) at time t. The measurements of the jth sensor are expressed in the same way. ΔT_ij is a double gradient over the time and space of the geomagnetic measurements, expressed in formula (6).
Table 1: Value of D which meets the precision requirement.
Natural Non-Mulberry Silk Nanoparticles for Potential-Controlled Drug Release Natural silk protein nanoparticles are a promising biomaterial for drug delivery due to their pleiotropic properties, including biocompatibility, high bioavailability, and biodegradability. Chinese oak tasar Antheraea pernyi silk fibroin (ApF) nanoparticles are easily obtained using cations as reagents under mild conditions. The mild conditions are potentially advantageous for the encapsulation of sensitive drugs and therapeutic molecules. In the present study, silk fibroin protein nanoparticles are loaded with differently-charged small-molecule drugs, such as doxorubicin hydrochloride, ibuprofen, and ibuprofen-Na, by simple absorption based on electrostatic interactions. The structure, morphology and biocompatibility of the silk nanoparticles in vitro are investigated. In vitro release of the drugs from the nanoparticles depends on charge-charge interactions between the drugs and the nanoparticles. The release behavior of the compounds from the nanoparticles demonstrates that positively-charged molecules are released in a more prolonged or sustained manner. Cell viability studies with L929 demonstrated that the ApF nanoparticles significantly promoted cell growth. The results suggest that Chinese oak tasar Antheraea pernyi silk fibroin nanoparticles can be used as an alternative matrix for drug carrying and controlled release in diverse biomedical applications. Introduction Various applications in pharmaceutical and biomedical technology are based on the dispersion of particulates, which include specialty coatings and sustained release and delivery systems [1-3]. Interest is growing in the development of different drug delivery systems to meet the requirements of different diseases. Various synthetic and bio-polymers have been investigated to produce particulate carriers for sustained release [4-8]. 
However, the production of particles remains challenging because choosing appropriate materials and modes of processing requires avoiding surfactants, initiators, and organic solvents as far as practicable [9]. Natural macromolecule materials such as collagen, gelatin, and albumin, which can be processed under mild conditions, are often preferred [10,11]. Natural silk proteins are now considered a suitable material for drug delivery applications because of several important properties, such as biodegradability [12-14], biocompatibility [15-17], aqueous-based ambient purification [18], and effective drug stabilization [19,20]. Silk protein fibroin derived from wild silkworm sources is termed non-mulberry silk. Non-mulberry silks in different forms or matrices provide a range of superior natural biomaterials [21]. Silk protein is composed of diverse amino acids, many of which contain functional groups that can bind to the surface receptors of specific cell types. This binding is an advantage for the delivery of drugs and compares favorably with many other synthetic polymeric systems. Compared with Bombyx mori silk protein fibroin, A. pernyi silk fibroin (ApF) is rich in Ala, Asp, and Arg, and has less Gly. In addition, ApF contains the RGD (Arg-Gly-Asp) tripeptide sequence [22,23], which serves as a ligand for cell surface receptors and increases binding affinity [24]. It is reported that A. pernyi silk fibroin provides much stronger cell adhesion than Bombyx mori silk fibroin and collagen [25]. For drug delivery, especially of protein drugs, silk materials exhibit high encapsulation efficiency and controllable drug release kinetics [26,27]. In addition, silk particles are already exploited as a delivery vehicle for growth factors and anti-cancer therapeutics [28-31]. Furthermore, the high surface area of ultrafine silk particles increases the loading of the target molecules [32]. 
Hence, the selection of natural silk particles as a platform for controlled drug delivery is justified. There are several techniques available for the preparation of drug-loaded silk particles, such as self-assembly [33], layer-by-layer (LBL) deposition [34], emulsion-solvent evaporation spray drying [35], and phase separation [36]. However, each method has both advantages and disadvantages, so it is important to choose an appropriate method for producing silk particles for drug delivery applications. Therefore, a more accessible preparation method is still needed for the formation of nanoparticles. Notably, the formulation of nanoparticles via ionic induction is gaining immense popularity. Silk particles have already been fabricated from an aqueous protein solution by the addition of ions [28,37]. However, the literature reporting non-mulberry A. pernyi silk microparticles and nanoparticles [38-40] as a suitable delivery vehicle is notably limited. Hence, the non-mulberry natural silk fibroin nanoparticle is a viable platform for a controlled drug delivery system. It is reported that particles of A. pernyi silk fibroin can also stabilize and deliver enzymes, such as lysozyme [41]. However, ApF is still underutilized as a biomaterial for regenerative medicine, even though it contains RGD. One of the major challenges in existing fabrication methods is the requirement of organic solvents, crosslinking agents, or initiators [9], which may cause damage to the human body. In the present study, the ApF nanoparticles are fabricated using cations (Ca^2+) as reagents under mild conditions. Doxorubicin hydrochloride, ibuprofen, and ibuprofen-Na are selected as the positively-charged, uncharged, and negatively-charged model drugs to evaluate the controlled drug delivery profile. 
In addition, the morphology, size, surface area, zeta potential, drug loading, loading efficiencies, and release kinetics of the drug-loaded ApF nanoparticles are investigated in relation to drug charge. The in vitro degradation and cell culture behavior of the non-mulberry silk fibroin nanoparticles are also discussed. The results indicate that small-molecule drugs with different charges are suitable for sustained release from natural silk nanoparticles.

Morphology of Drug-Loaded ApF Nanoparticles

Scanning electron microscopy (SEM) was used to evaluate the size, shape, and morphology of the nanoparticles. Figure 1 shows SEM images of pure ApF nanoparticles and drug-loaded ApF nanoparticles. The pure ApF nanoparticles were spherical, with a diameter of approximately 500 nm. When the ApF nanoparticles were loaded with small-molecule drugs such as doxorubicin hydrochloride (DOX), ibuprofen, and ibuprofen-Na, the size and morphology of the drug-loaded nanoparticles were similar to those of pure ApF nanoparticles.

The Average Size and Brunauer-Emmett-Teller (BET) Surface Area of Particles

Particle size analysis was used to evaluate the quality of the nanoparticles. The average particle sizes of ApF nanoparticles and drug-loaded ApF nanoparticles, obtained using a Nano-ZS90 particle size analyzer, are summarized in Table 1. The average size of pure ApF nanoparticles was 496 ± 53.45 nm. After loading with the small-molecule drugs (DOX, ibuprofen, and ibuprofen-Na), the average particle sizes of the drug-loaded nanoparticles were similar to that of pure ApF nanoparticles. These results are consistent with the SEM data. The BET surface area of pure ApF nanoparticles was 38.95 m²/g (Table 1).
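As a point of comparison for the reported size and BET values, the specific surface area of smooth, monodisperse spheres can be estimated geometrically as SSA = 6/(ρd). A minimal sketch, assuming a literature-typical regenerated-silk density of about 1.35 g/cm³ (an assumption; particle density is not reported in this study):

```python
# Geometric specific surface area of smooth, monodisperse spheres: SSA = 6 / (rho * d).
# The silk density (~1.35 g/cm^3) is an assumed, literature-typical value,
# not a quantity reported in this study.

def sphere_ssa_m2_per_g(diameter_nm: float, density_g_cm3: float = 1.35) -> float:
    """Specific surface area (m^2/g) of smooth spheres of a given diameter."""
    d_m = diameter_nm * 1e-9       # diameter in metres
    rho = density_g_cm3 * 1e6      # density in g/m^3
    return 6.0 / (rho * d_m)

print(f"geometric SSA for 496 nm spheres ~ {sphere_ssa_m2_per_g(496):.1f} m^2/g")
```

The smooth-sphere estimate (roughly 9 m²/g) is well below the measured 38.95 m²/g, which would be consistent with surface roughness or porosity contributing to the BET area, although the study does not analyze this point.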
When the ApF nanoparticles were loaded with the small-molecule drugs, the BET surface area decreased (DOX-ApF nanoparticles: 10.64 m²/g; ibuprofen-ApF nanoparticles: 19.32 m²/g; ibuprofen-Na-ApF nanoparticles: 32.96 m²/g), which indicates that the drugs had been successfully loaded onto the ApF nanoparticles.

Drug Loading

In order to investigate the applicability of ApF nanoparticles as a drug delivery system, three small-molecule model drugs with different charges, DOX (positive), ibuprofen (neutral), and ibuprofen-Na (negative), were loaded onto the net negatively-charged silk fibroin nanoparticles by charge-charge interactions. The loading was examined with respect to the molar ratio of the model drug to ApF (Figure 2A-C). As shown in Figure 2, the loading and encapsulation efficiency increased non-linearly as greater quantities of model drug were added.
Encapsulation efficiency above 93% was achieved at 10.5% loading for positively-charged DOX. Beyond 10.5% loading, the encapsulation efficiency decreased, indicating that the protein matrix was saturated. Interestingly, when the non-ionic ibuprofen and the negatively-charged ibuprofen-Na were loaded onto ApF nanoparticles, the loading was low compared with the DOX-ApF system. It may be that the positively-charged DOX binds to the nanoparticles via electrostatic attraction (the zeta potential of the ApF nanoparticles was measured as −23.8 mV), resulting in high loading efficiency, whereas the non-ionic ibuprofen and the negatively-charged ibuprofen-Na interact only weakly with the ApF nanoparticles, possibly leading to lower binding than that of DOX. Because of the weaker binding of ibuprofen-Na to the ApF material, most of the ibuprofen-Na was likely extracted during nanoparticle preparation. Similar results were reported by Lammel et al. and Wang et al.: loading and release of model drugs occur mostly through electrostatic interactions [28], and positively-charged drugs interact more strongly with silk than negatively-charged drugs [36].
Zeta Potential of Drug-Loaded ApF Nanoparticles

To better understand the differences in drug loading, the surface charges of the ApF nanoparticles were determined. Changes in the ζ-potential (mV) of ApF nanoparticles loaded with drugs were evaluated at different loading percentages (Figure 3A,B). The ζ-potential of ibuprofen-Na-loaded nanoparticles was dependent on loading: the ζ-potential values of the particles became more negative with increased loading (Figure 3A). The ζ-potential values of pure ApF nanoparticles and of ibuprofen-Na-ApF at a loading of 2.5% were −23.8 mV and −30.2 mV, respectively. Interestingly, the ζ-potential of DOX- and ibuprofen-loaded ApF nanoparticles gradually became less negative with increasing loading. The ζ-potential of DOX-loaded particles became positive when loading exceeded 11% (Figure 3B), approximately the same point at which the loading efficiency of DOX decreased (Figure 2A).
This result indicates that the negative surface charge of the ApF nanoparticles enables positively-charged small molecules to be loaded by simple charge-charge interaction between the drug molecules and the particle surface. The silk nanoparticles were also compared with nanoparticle systems loaded with the non-ionic (ibuprofen) and negatively-charged (ibuprofen-Na) drugs (Figure 3).

Drugs Release Rate

The release rate of drug-loaded ApF is pH dependent (Figure 4). The cumulative release of DOX increased as pH decreased (Figure 4A). The release of DOX reached a maximum of 34.15% after 11 days at pH 5.2, which was higher than the release observed at pH 7.4 (24.23%) and pH 8.0 (22.96%).
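Cumulative-release percentages such as those above are typically accumulated over sampled timepoints. A minimal bookkeeping sketch, assuming an aliquot-withdrawal protocol with fresh-buffer replacement (the study's exact sampling procedure is not given in this excerpt, and all numbers below are hypothetical):

```python
# Cumulative drug release with aliquot sampling: at each timepoint an aliquot is
# withdrawn for assay and replaced with fresh buffer, so drug removed in earlier
# aliquots must be added back when computing the cumulative total.
# All numbers below are hypothetical illustrations, not data from the study.

def cumulative_release_percent(concs_ug_ml, total_ml, sample_ml, loaded_ug):
    """Cumulative % released at each timepoint, correcting for sampled aliquots."""
    removed = 0.0                              # drug (ug) already withdrawn
    out = []
    for c in concs_ug_ml:
        released = c * total_ml + removed      # drug still in vessel + drug sampled out
        out.append(100.0 * released / loaded_ug)
        removed += c * sample_ml               # this aliquot leaves the vessel
    return out

# Hypothetical series: 10 mL release medium, 1 mL aliquots, 100 ug drug loaded
print(cumulative_release_percent([2.0, 3.5, 4.2], total_ml=10, sample_ml=1, loaded_ug=100))
```

Without the `removed` correction, drug taken out in earlier aliquots would be silently lost and the cumulative percentage would be underestimated at later timepoints.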
The approximate pKa of the doxorubicin amino group is 7.6 (37 °C, ionic strength 0.15) [42], so doxorubicin hydrochloride remains positively charged in neutral and alkaline aqueous solutions, while some of it converts to the uncharged neutral molecule when the release medium becomes acidic. This is why doxorubicin hydrochloride was released more quickly in the acidic solution than in the neutral and alkaline solutions. Such pH-dependent release is favorable for the therapeutic delivery of drugs such as anti-cancer agents, since a lower pH environment favors the growth of tumor cells, whereas the opposite is true for normal cells [43].

The release of ibuprofen, however, showed the opposite pattern (Figure 4B,C). There was a low-level, short initial burst release, possibly because residual ibuprofen was released from the surface of the ApF nanoparticles. DOX and ibuprofen were released slowly. In contrast, ibuprofen-Na was released much faster, with more than 60% of the total loading released within one hour at nearly a zero-order rate. It may be that the negatively-charged ibuprofen-Na interacts only weakly with the ApF nanoparticles, allowing the drug molecules to diffuse out of the particles. The pKa of ibuprofen is about 4.4 [44]; that is, ibuprofen is an uncharged neutral molecule in neutral or acidic aqueous solution, and only when the pH is higher than 10 does it become negatively charged. In our release experiments it therefore remains uncharged, whereas ibuprofen-Na remains negatively charged throughout.
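The charge-state reasoning above can be made quantitative with the Henderson-Hasselbalch relation. A minimal sketch for the doxorubicin amino group (pKa 7.6, as cited in the text [42]); note that this considers only that single ionizable group, so it is an illustration rather than a full speciation model:

```python
# Henderson-Hasselbalch: fraction of a basic amine in its protonated (cationic)
# form at a given pH. Only the doxorubicin amino group (pKa 7.6 from the text)
# is considered; other ionizable groups are ignored for simplicity.

def cationic_fraction(ph: float, pka: float = 7.6) -> float:
    """Protonated fraction of a weak base: 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (5.2, 7.4, 8.0):
    print(f"pH {ph}: {cationic_fraction(ph):.1%} protonated")
```

Because the three release pH values straddle the pKa, the ionized fraction changes steeply between them, which is one reason release from a charge-binding matrix is strongly pH-sensitive.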
In addition, the cumulative release at pH 7.4 within the first 6 h was 85% for negatively-charged ibuprofen-Na, 83% for ibuprofen, and 10.3% for positively-charged DOX. This suggests that positively-charged molecules exhibit a more prolonged, sustained in vitro release from the nanoparticles, an effect that may be attributable to charge-charge interactions between the drug and the silk. The control group showed a cumulative release of 100% of the drugs. These results indicate that ApF nanoparticles may be a suitable candidate for drug delivery.

Structure of Drug-ApF Nanoparticles

To confirm the changes in secondary structure between ApF nanoparticles and drug-loaded ApF nanoparticles, X-ray diffraction curves were obtained (Figure 5). The pure ApF nanoparticles (Figure 5A, curve d) showed an X-ray diffraction profile with one intense diffraction peak at 20.2°, two minor peaks at 16.8° and 23.8°, and one weak peak at 30.8°. These peaks are typical of the β-sheet conformation [45]. The DOX-ApF nanoparticles (Figure 5A, curve c) showed none of DOX's characteristic diffraction peaks (Figure 5A, curve a). In contrast, a mixture of DOX with ApF nanoparticles (Figure 5A, curve b) showed clear diffraction peaks characteristic of DOX (Figure 5A, curve a). This observation indicates that the DOX molecules are not crystallized within the nanoparticles. Similar changes were also observed in the ibuprofen-ApF and ibuprofen-Na-ApF nanoparticle systems (Figure 5B,C).
In Vitro Degradation of ApF Nanoparticles

It is reported that silk degradation is greatly affected by β-sheet formation [13]. To confirm the changes in secondary structure during the formation of ApF nanoparticles, FTIR spectra and X-ray diffraction curves were obtained. A freshly-prepared regenerated ApF solution exhibited absorption bands at 1655 cm⁻¹ (amide I), 1545 cm⁻¹ (amide II), 1270 cm⁻¹ (amide III), and 892 cm⁻¹ (amide IV), assigned to α-helix and random coil conformations [46]. In contrast, significant absorption bands at 1630 cm⁻¹ (amide I), 1520 cm⁻¹ (amide II), 1234 cm⁻¹ (amide III), and 963 cm⁻¹ (amide IV) appeared in the ApF nanoparticles (Figure 6A). These bands are characteristic of the β-sheet structure [45] and indicate that a transformation from random coil and α-helix to β-sheet occurs during ApF nanoparticle preparation. As shown in the X-ray diffraction curves (Figure 6B), the freshly-prepared regenerated ApF solution exhibited two major diffraction peaks at 11.52° and 21.53°, corresponding to the α-helix structure [45], whereas two intense diffraction peaks at 16.78° and 20.32° and two minor peaks at 16.78° and 30.75° occurred in the ApF nanoparticles. These peaks are characteristic of the β-sheet conformation [45] and confirm that the ApF sol-particle transition was accompanied by a conformational transformation of ApF, in accordance with the FTIR results.
Biodegradability is one of the ideal properties of biomaterials used in tissue engineering. For in vitro degradation experiments, protease XIV, derived from Streptomyces griseus, is the most widely used proteolytic enzyme [47,48]. Degradation of the ApF nanoparticles was determined from the weight remaining ratio at different time points (2-28 days). Over 28 days of degradation there was a distinct change in the molecular weight of the ApF nanoparticles in protease XIV solution: the molecular weight of the nanoparticles was reduced by 45.4% after 12 days, and the molecular weight loss after 20 and 28 days was approximately 65.9% and 86.8%, respectively. The remaining quantity of ApF nanoparticles was approximately 23.2% after 28 days of degradation (Figure 6C). Conversely, the change in the ApF nanoparticles in PBS solution was negligible in comparison: the remaining quantity of nanoparticles in the control group was 93.4% after 28 days. It is reported that the degradation of silk depends significantly on the molecular weight, the amount of crystalline structure, and structural characteristics such as surface roughness, porosity, and pore size [13]. To follow the degradation of the ApF nanoparticle structure by the proteolytic enzyme, the percentage of β-sheet structure in the ApF nanoparticles at various degradation times in protease XIV and in PBS was calculated from the experimentally-obtained curves, as shown in Figure 6D. In control experiments without proteolytic enzyme, the average β-sheet content decreased from 40% to 36% over 25 days. After 20 days of treatment with protease XIV, however, the β-sheet content was reduced to 23%. Beyond 20 days of enzymatic degradation, the β-sheet content of the ApF crystals did not change further. This result indicates that 23% of the β-sheet structure persisted in the ApF crystals, and that the crystalline structure then collapsed into small fragments during longer enzymatic degradation by protease XIV.
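The degradation time course is reported as a weight-remaining ratio. A minimal sketch of that calculation (the example masses are hypothetical; the study reports approximately 23.2% remaining after 28 days in protease XIV versus 93.4% in PBS):

```python
# Weight-remaining ratio used to follow enzymatic degradation over time.
# The example masses are hypothetical, chosen only to illustrate the arithmetic.

def weight_remaining_percent(initial_mg: float, remaining_mg: float) -> float:
    """Percentage of the starting nanoparticle mass left at a timepoint."""
    return 100.0 * remaining_mg / initial_mg

# e.g., 50 mg of particles at day 0, 11.6 mg recovered after degradation
print(f"{weight_remaining_percent(50.0, 11.6):.1f}% remaining")
```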
Cell Viability

The Alamar Blue assay was carried out with L929 cells to assess cell viability (Figure 7). With longer culture times, cell activity gradually increased in all samples. Nevertheless, there were no significant differences in relative cell activity between the different concentrations of ApF nanoparticles (200, 100, 50, 10, 5, 1, 0.5, and 0.1 µg/mL) and the blank plate at the same culture times (1, 3, 5, 7, and 9 days). These results imply that the concentration of ApF nanoparticles had no significant effect on cell activity; the ApF nanoparticles were therefore suitable for cell proliferation.
The L929 cells were cultured in media containing different concentrations of ApF nanoparticles; green fluorescence represents live cells and red fluorescence indicates dead cells. The live cells (green) attached well and grew normally both in media with different concentrations of ApF nanoparticles (10 and 200 µg/mL) and on the blank plate (Figure 8), and from day three onward the cells began to proliferate more quickly. With increasing incubation time, more live cells were detected in all groups, and very few or no dead cells were detected. These results coincide with those of the Alamar Blue assay and indicate that the ApF nanoparticles possess good biocompatibility and support cell growth.
Discussion

As a potential biomaterial, silk fibroin is used as a platform to enhance cell adhesion, proliferation, and differentiation [21,49]. Although much research has investigated the potential of mulberry silk as a biomaterial, work on non-mulberry silk is still limited. Non-mulberry silk is a promising biomaterial, since it has superior mechanical properties compared to mulberry silk and is also characterized by the presence of RGD sequences [21][22][23]. Exploiting the biomaterial properties of non-mulberry silk fibroin could therefore open the way to an alternative natural functional biomaterial to replace mulberry silk protein fibroin. Numerous studies have evaluated the potential of non-mulberry silk-based biomaterials in biomedical applications such as tissue engineering and as model matrices for studying cellular phenomena. Non-mulberry silk has also been exploited for fabricating drug delivery devices in the form of fibroin nanoparticles. Silk fibroin nanoparticles of the non-mulberry silkworm A. mylitta have been studied for the delivery of anti-cancer therapeutics [50], which indicates the potential of non-mulberry silk as a delivery vehicle. Antheraea pernyi is one of the most well-known wild non-mulberry silk sources and contains an integrin-binding RGD peptide [23], a recognition motif for several different integrin receptors [51][52][53][54]. Notably, the RGD peptide has been applied widely in drug delivery [55][56][57]. It is reported that RGD-containing materials may specifically target drugs to cancer cells or angiogenic endothelial cells through the binding of the RGD peptide to these cell surface receptors.
Moreover, these materials can be internalized by receptor-mediated endocytosis, an advantage over many non-RGD materials. Therefore, choosing non-mulberry Antheraea pernyi silk as a platform for drug delivery is justified. Silk fibroin is increasingly being considered a suitable protein-based material for drug delivery applications [58][59][60][61], and silk protein-based nanoparticles exhibit superior performance for the sustained release of drugs and genes [62,63]. The mechanism by which drugs bind to silk fibroin is an important consideration for drug delivery. The loading and release of model drugs of various molecular weights and surface charges have been widely studied [31]; loading and release occur mostly through electrostatic interactions [28]. Strong electrostatic binding between silk and bound molecules can avoid significant burst release [64].
Though such strong interactions may also prevent complete release of the carried molecules, release can be controlled by adjusting the surrounding charge through changes in pH. In addition, drug transfer can be controlled by adjusting the composition/structure of the silk coating [65,66]. Beyond release from the intact particle, drugs can also be released when the particles are enzymatically degraded. The microstructure of silk plays an important role in both drug release and particle degradation, and can be controlled by inducing β-sheet formation during particle regeneration from solution; an increase in β-sheet content slows the release rate [29]. Methods of silk particle preparation have been widely studied for drug delivery applications. A mild environment, such as aqueous solution at ambient temperature, is needed to load the model drugs during particle fabrication. The present study provides a unique method to fabricate ApF nanoparticles, which were used as drug carriers to load differently-charged small-molecule drugs via simple adsorption based on electrostatic interactions under mild conditions. This method avoids organic solvents and other noxious reagents during material processing; it is therefore suitable for biomedical applications, and ApF nanoparticles have potential as a sustained drug delivery vehicle.

Preparation of ApF Solution

Antheraea pernyi silk fibroin (ApF) was prepared following an earlier published procedure. Briefly, cocoons of Antheraea pernyi were boiled for 30 min in 0.2% sodium bicarbonate solution at 100 °C to remove the sericin. After being rinsed and dried, the degummed silk fibroin was dissolved in 9 M LiSCN solution and then dialyzed against distilled water for four days to obtain a pure fibroin solution [25].
As determined gravimetrically, the final silk fibroin concentration was approximately 2.2% (w/v); it was later diluted to the desired concentration.

Preparation of ApF Nanoparticles

ApF nanoparticles were fabricated by ion induction. Briefly, 10 mg/mL ApF solution was mixed with 1 mM Ca²⁺ at a 1:1 volume ratio using a pipette. The mixture was placed in a water bath for 60 min at 37 °C to induce self-assembly. To obtain pure ApF nanoparticles, the resulting suspension was centrifuged at 12,000 rpm for 10 min and then washed three times with ultrapure water.

Drug Loading in ApF Nanoparticles

Drug loading on the silk particles was conducted as follows: small-molecule model drugs with different charges (doxorubicin hydrochloride, ibuprofen (dissolved in acetone), and ibuprofen-Na) were dissolved in 10 mL of an aqueous 1 mM Ca²⁺ solution. Approximately 10 mg/mL ApF solution was added to the drug solution at a 1:1 volume ratio at different molar ratios to prepare the drug-encapsulated nanoparticles. The mixture was placed in a water bath for 60 min at 37 °C to induce self-assembly. To obtain pure drug-loaded ApF nanoparticles, the suspension was centrifuged at 12,000 rpm for 10 min and then washed three times with ultrapure water to remove excess drug molecules. To determine drug loading and encapsulation efficiency, the supernatants were subjected to absorbance measurements using a UV-VIS absorption spectrophotometer (Bio-Rad, Berkeley, CA, USA). Drug quantification was based on standard calibration curves. The model drugs were dispersed in 10 mL water as the control for each experiment. Finally, drug loading and encapsulation efficiency were calculated via Equations (1) and (2).

Zeta Potential and Surface Area Analysis

To investigate the influence of ApF nanoparticle loading, surface charges of the particles were measured using a zeta potential analyzer (Malvern Instruments, Malvern, UK).
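Equations (1) and (2) are not reproduced in the extracted text; a minimal sketch assuming the standard definitions (drug loading as encapsulated drug per particle mass, encapsulation efficiency as encapsulated drug per drug fed), with the encapsulated amount taken as the fed amount minus the drug measured in the supernatant:

```python
def drug_loading(total_drug_mg, free_drug_mg, particle_mass_mg):
    """Assumed Eq. (1): encapsulated drug as a percentage of particle mass."""
    return (total_drug_mg - free_drug_mg) * 100.0 / particle_mass_mg

def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """Assumed Eq. (2): percentage of the fed drug captured by the particles."""
    return (total_drug_mg - free_drug_mg) * 100.0 / total_drug_mg

# Hypothetical example: 10 mg drug fed, 2 mg found in the supernatant,
# 40 mg of particles recovered.
print(drug_loading(10, 2, 40))          # 20.0
print(encapsulation_efficiency(10, 2))  # 80.0
```

The supernatant drug amount would come from the UV-VIS absorbance via the standard calibration curve described above.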
The surface areas of the particles were measured by the Brunauer-Emmett-Teller (BET) method using nitrogen adsorption-desorption measurements in a V-Sorb 2800P (Micromeritics, Norcross, GA, USA) surface area analyzer.

Release of Drugs from ApF Nanoparticles

Ten milligrams of the drug-loaded ApF particles were re-dispersed in 10 mL of phosphate-buffered saline (PBS) with pH values of 5.2, 7.4, and 8.0 at 37 °C to monitor pH-dependent release; free drugs dispersed in PBS served as the control. At pre-determined time points, the samples were centrifuged at 12,000 rpm for 10 min to collect the supernatants, and the nanoparticles were then re-suspended in fresh PBS to continue the release study. The supernatants were subjected to absorbance measurement using a UV-VIS absorption spectrophotometer, and the drug content in the medium was calculated from standard calibration curves. All measurements were performed in triplicate. The percentage of cumulative model drug release (% w/w) was calculated using Equation (3):

Releasing content (w/w%) = (amount of drug in the release medium / amount of drug loaded into particles) × 100 (3)

Morphology and Particle Size of Drug-Loaded ApF Nanoparticles

The morphology of pure and drug-loaded ApF nanoparticles was examined via scanning electron microscopy (SEM, Hitachi S-4700, Tokyo, Japan) at an accelerating voltage of 15 kV. ApF nanoparticles were dispersed in water by ultrasonication, plated directly on a silicon plate, and dried under vacuum. The samples were gold sputter-coated to prevent charging during SEM imaging. To determine the sizes of the different drug-loaded ApF nanoparticles, measurements were carried out with a NanoZS90 particle size analyzer (Malvern Instruments, Malvern, UK).

Structure of ApF Nanoparticles

ApF solutions, pure ApF nanoparticles, and drug-loaded ApF nanoparticles were frozen at −80 °C and subsequently freeze-dried for X-ray diffraction (XRD) analysis.
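Because the buffer is replaced at every sampling point, the drug measured in each supernatant is the amount released during that interval, and the cumulative curve of Equation (3) is the running sum. A short sketch with hypothetical per-interval amounts:

```python
def cumulative_release_percent(interval_release_mg, loaded_mg):
    """Cumulative release (w/w %) per Eq. (3), summing per-interval
    supernatant amounts because the release medium is replaced each time."""
    total = 0.0
    curve = []
    for released in interval_release_mg:
        total += released
        curve.append(total * 100.0 / loaded_mg)
    return curve

# Hypothetical data: 1.0, 1.0, and 0.5 mg measured at three successive
# time points, from particles loaded with 10 mg of drug.
print(cumulative_release_percent([1.0, 1.0, 0.5], 10.0))  # [10.0, 20.0, 25.0]
```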
XRD analysis was conducted on an X'PERT-PRO MPD diffractometer (Panalytical Co., Almelo, The Netherlands) with a Cu-Kα radiation source. The scanning speed was 2°/min, and the X-ray source was operated at 30 kV and 20 mA. XRD patterns were recorded in the 2θ region from 5° to 40°. In addition, FTIR spectra of ApF solutions and ApF nanoparticles were obtained using a Nicolet 5700 Fourier transform infrared spectrometer (Nicolet Co., Madison, WI, USA) in the spectral region of 400-4000 cm⁻¹.

In Vitro Biodegradation of ApF Nanoparticles

One hundred milligrams of nanoparticles were placed into a tube containing 10 mL of PBS solution with protease XIV (pH 7.4, 37 °C, 5 U/mL); a similar sample kept in plain PBS served as a control. The protease XIV solution was replaced every two days with freshly prepared solution. The samples were centrifuged at 12,000 rpm for 5 min at pre-determined time points (2, 4, 6, 8, 10, 12, 14, 16, and 18 days), and the pellets were lyophilized to quantify any degradation of the ApF nanoparticles. The remaining weight ratio was calculated using Equation (4):

Remaining weight (%) = ((amount of nanoparticles − amount degraded) / amount of nanoparticles) × 100 (4)

The L929 cells were plated at a density of 7 × 10³ cells/well in 96-well plates at 37 °C in a 5% CO₂ atmosphere. After 24 h of culture, the medium in each well was replaced with 100 µL of fresh medium containing A. pernyi silk fibroin nanoparticles at varying concentrations (200, 100, 50, 10, 5, 1, 0.5, or 0.1 µg/mL) and incubated for pre-determined time intervals (1, 3, 5, 7, or 9 days). A blank plate was used as the control group. The medium was replaced every three days with fresh medium containing the nanoparticles at the corresponding concentrations. After incubation, cell proliferation was evaluated with the Alamar Blue (AB) assay; all experiments were performed in triplicate.
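Equation (4) can be sketched directly; the time points and degraded amounts below are hypothetical illustration values, not data from the study:

```python
def remaining_weight_percent(initial_mg, degraded_mg):
    """Eq. (4): weight remaining after enzymatic degradation, as a percentage
    of the initial nanoparticle mass."""
    return (initial_mg - degraded_mg) * 100.0 / initial_mg

# Hypothetical degradation time course for 100 mg of nanoparticles:
time_points_days = [2, 4, 6, 8]
degraded_mg = [5.0, 12.0, 20.0, 31.0]
curve = [remaining_weight_percent(100.0, d) for d in degraded_mg]
print(curve)  # [95.0, 88.0, 80.0, 69.0]
```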
After cell culture for 1, 4, and 7 days, the cytotoxicity toward the treated cells (200, 10, and 0 µg/mL) was examined with a live/dead assay (Calcein AM and PI) under an inverted fluorescence microscope (Olympus Corporation, Tokyo, Japan).

Statistical Analysis

Statistical analysis was performed using one-way ANOVA. The data are presented as the mean ± SD; p-values < 0.05 were considered statistically significant.

Conclusions

We show the applicability of nanoparticles from the non-mulberry silkworm Antheraea pernyi for controlled drug release. The negative surface charge of the fibroin nanoparticles enables loading with different types of charged small molecules by charge-charge interaction and diffusion into the particle matrix. In vitro release reveals that the release of small molecules depends on the charge interactions between the drugs and the silk fibroin. In addition, the release rate of loaded drugs from the fibroin nanoparticles is pH-sensitive. The biodegradation behavior, cell viability and growth, and the simplicity of the all-aqueous production and loading process suggest that fibroin nanoparticles of this underutilized silk can be exploited as an alternative matrix for drug delivery and controlled release in diverse biomedical applications.
Aspects of holography for theories with hyperscaling violation

We analyze various aspects of the recently proposed holographic theories with general dynamical critical exponent z and hyperscaling violation exponent $\theta$. We first find the basic constraints on $z, \theta$ from the gravity side, and compute the stress-energy tensor expectation values and scalar two-point functions. Massive correlators exhibit a nontrivial exponential behavior at long distances, controlled by $\theta$. At short distance, the two-point functions become power-law, with a universal form for $\theta>0$. Next, the calculation of the holographic entanglement entropy reveals the existence of novel phases which violate the area law. The entropy in these phases has a behavior that interpolates between that of a Fermi surface and that exhibited by systems with extensive entanglement entropy. Finally, we describe microscopic embeddings of some $\theta \neq 0$ metrics into full string theory models; these metrics characterize large regions of the parameter space of Dp-brane metrics for $p\neq 3$. For instance, the theory of N D2-branes in IIA supergravity has z=1 and $\theta = -1/3$ over a wide range of scales, at large $g_s N$.

Introduction

Holography [1] is a powerful tool to study strongly interacting large N quantum field theories [2,3,4,5]. In the holographic context, a d (spatial) dimensional quantum field theory is mapped to a higher-dimensional (usually (d + 2)-dimensional) gravitational theory, with the (d + 1) "field theory dimensions" arising as the boundary of the space-time. While the initial interest in concrete examples centered on applications to AdS gravity theories and their conformal field theory duals, the class of metrics of interest in gauge/gravity duality has been considerably enlarged in recent years.
One simple generalisation is to consider metrics dual to scale-invariant field theories which are, however, not conformally invariant, but instead enjoy a dynamical critical exponent z ≠ 1 (with z = 1 reducing to the case of the AdS metric). These metrics are invariant under the scaling (1.2). They arise as exact solutions of simple gravity theories coupled to appropriate matter [6,7], with the simplest such theory also including an abelian gauge field in the bulk. This simple generalisation of AdS is motivated by gravity toy models of condensed matter systems (where Lorentz invariance needn't emerge in the deep infrared, and e.g. doping with charge density can naturally lead to z ≠ 1). Such metrics have also been found as solutions in string theory, and in supergravities which arise simply from string constructions, in [9]. More recently, it has been realized that by studying systems including a scalar "dilaton" in the bulk, one can find even larger classes of scaling metrics. Such theories have been studied in e.g. [10,11,12,13,14,15,16,17,18,19,20,21] (with very similar metrics also characterizing the "electron stars" studied in e.g. [22]). By including both an abelian gauge field and the scalar dilaton, one can in particular engineer the full class of metrics (1.3) [13]. These exhibit both a dynamical critical exponent z and a "hyperscaling violation" exponent θ [24], as emphasized in [20]. The metric (1.3) is not scale invariant, but transforms as ds → λ^{θ/d} ds (1.4) under the scale transformation (1.2). Very roughly speaking, in a theory with hyperscaling violation the thermodynamic behaviour is as if the theory enjoyed dynamical exponent z but lived in d − θ dimensions; dimensional analysis is restored because such theories typically have a dimensionful scale that does not decouple in the infrared, and below which such behaviour emerges.
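The display equations (1.1)-(1.3) did not survive extraction. A reconstruction consistent with the scaling rule (1.4) and with the conventions used in Section 2 is the following (the overall normalization is an assumption):

```latex
% Lifshitz metric (1.1) with dynamical critical exponent z:
ds^2 = \frac{1}{r^2}\left(-\frac{dt^2}{r^{2(z-1)}} + dr^2 + dx_i^2\right),
% invariant under the scaling (1.2):
t \to \lambda^z t, \qquad x_i \to \lambda x_i, \qquad r \to \lambda r .
% Hyperscaling-violating generalization (1.3):
ds^2 = r^{-2(d-\theta)/d}\left(-\frac{dt^2}{r^{2(z-1)}} + dr^2 + dx_i^2\right),
```

One can check term by term that under (1.2) the second metric transforms as $ds \to \lambda^{\theta/d} ds$, reproducing (1.4), and that $\theta = 0$ recovers the scale-invariant Lifshitz case.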
One can then use appropriate powers of this scale, denoted by r_F, to restore naive dimensional analysis. As emphasized in [20], building on the stimulating explorations in [19], the case θ = d − 1 is a promising gravitational representation of a theory with a Fermi surface in terms of its leading large N thermodynamic behaviour. In this example, the relevant dimensionful scale is of course the Fermi momentum. In this paper we characterize strongly coupled quantum field theories with hyperscaling violation using holography. In general, the metric (1.3) may not be a good description above the dynamical scale r_F. For this reason, in this work we will not assume (1.3) all the way to the boundary; instead we follow an 'effective' holographic approach in which the dual theory lives on a finite-r slice. This is similar to an effective field theory analysis in the dual QFT, and has been put on a firmer footing for asymptotically AdS spacetimes in [23]. First, we discuss the most basic holographic features of this class of metrics: the constraints on (z, θ) that follow from energy conditions in the bulk, the behavior of propagators for bulk fields (and the consequent behavior of correlation functions of scalar operators in the dual field theories), and the behavior of the stress-energy tensor. Our analysis reveals intriguing properties of correlation functions in these theories. In a semiclassical approximation, a massive scalar has a correlation function of the form (1.5) at spacelike separations (where c_θ > 0 is a constant). We note the nontrivial |∆x|^{θ/d} dependence, as compared to a weakly coupled massive propagator, G(∆x) ∼ exp(−m|∆x|). On the other hand, away from the semiclassical limit (i.e. at small masses/short distances), there is a cross-over to a power-law behavior, and the propagator becomes G(∆x) ∼ 1/|∆x|^{2(d+1)−θ}. (1.6) That is, there is a universal θ-dependent power law, independent of m.
In another direction, we systematically study the entanglement entropy properties of the dual field theories. In recent years, studies of entanglement entropy have come to the fore as a new technique for understanding and perhaps classifying novel phases of quantum field theory [25]. In general, the entanglement between a region A and its complement, for a quantum field theory in its ground state in d spatial dimensions, is expected to scale as the area of ∂A, the boundary of the region (with a proportionality constant dependent on the UV cutoff of the field theory) [26]. For the UV-dependent contribution, this scaling simply follows from locality, and has come to be known as the "area law." However, several states which violate the area law have also been discussed. These include d = 1 conformal field theories [27], conventional Fermi liquids, which can exhibit logarithmic violation of the area law [28,29], and certain proposed non-Fermi liquid ground states [30]. In these systems, the area-law-violating terms are not cutoff dependent. More generally, sub-leading but cutoff-independent terms in the entanglement entropy have proved to be of significant interest; for instance, they can distinguish between states with different topological orders [31,32]. Ryu and Takayanagi [33] proposed that the entanglement entropy between a region A and its complement in the boundary field theory can be computed in gravity by finding the area of the minimal surface ending on ∂A and extending into the bulk geometry (measured in Planck units). While this proposal is as yet unproven, it has passed many non-trivial tests, and is supported by an impressive amount of circumstantial evidence. Here, we systematically study the entanglement properties of the class of metrics (1.3), over the full range of parameters z, θ where they seem to emerge as solutions of a reasonable matter + gravity theory (i.e., one where the required stress-energy sources satisfy reasonable energy conditions).
Entanglement properties of subsets of these theories were studied in [19,20,21], and also in [34], which emphasized the importance of the cross-over between the area law at T = 0 and the thermal entropy. One of the surprises we'll encounter is the existence of a class of theories which violate the area law and have universal terms in the ground-state holographic entanglement entropy that scale parametrically faster than the area of ∂A (while scaling more slowly than the extensive entanglement entropy expected in a theory with extensive ground-state entropy [34]). As a third focus, we also discuss the way some θ ≠ 0 metrics arise in a UV-complete theory: string theory. Existing embeddings have been in phenomenological theories of Einstein/Maxwell/dilaton gravity, which are clearly applicable only over some range of scales (as the dilaton is varying, leading one to suspect that the description breaks down both in the deep IR and the far UV). Here, we simply point out that over a wide range of scales, the dilatonic Dp-branes (those with p ≠ 3) give rise to metrics of the form (1.3) with z = 1 but θ ≠ 0. For instance, the large N D2-brane theory, in the IIA supergravity regime, has θ = −1/3. The string embedding, together with our knowledge of the properties of Dp-branes, provides a complete understanding of what happens in the far UV and deep IR regions of the phase diagram where "bottom up" descriptions break down [35].

Holographic theories with hyperscaling violation

We begin by analyzing basic properties of theories with hyperscaling violation using holographic techniques. In this first step, our goal will be to determine two-point functions and the expectation value of the energy-momentum tensor for these field theories. In the following sections we will construct other observables, such as the entanglement entropy, and study finite temperature effects.
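For reference, the Ryu-Takayanagi prescription invoked above can be stated explicitly in its standard form, for a (d + 2)-dimensional bulk:

```latex
S_A = \min_{\gamma_A} \frac{\mathrm{Area}(\gamma_A)}{4\,G_N^{(d+2)}} ,
```

where the minimization is over bulk surfaces $\gamma_A$ that end on $\partial A$ and are homologous to the boundary region $A$. All of the entanglement results quoted in this paper follow from evaluating this area functional in the hyperscaling-violating backgrounds.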
Metrics with scale covariance

As we reviewed before, the gravity side is characterized by a metric of the form (2.1). This is the most general metric that is spatially homogeneous and covariant under the scale transformations t → λ^z t, x_i → λ x_i, r → λ r. Thus, z plays the role of the dynamical exponent, and θ is the hyperscaling violation exponent. The dual (d + 1)-dimensional field theory lives on a background spacetime identified with a surface of constant r in (2.1). The radial coordinate is related to the energy scale of the dual theory; for example, an object of fixed proper energy E_pr and momentum p_pr redshifts accordingly as it moves in r. When θ ≤ dz and θ < d, r → 0 (the boundary of (2.1)) describes the UV of the dual QFT. Clearly, different probes give different energy-radius relations, as in AdS/CFT [37]. For instance, a string of fixed tension in the (d + 2)-dimensional theory has E ∝ 1/r^{z−2θ/d}. Probe scalar fields will be discussed in §3. Before proceeding, it is important to point out that the metric (2.1) will only give a good description of the dual theory in a certain range of r, and there could be important corrections for r → 0 or very large r. Outside the range with hyperscaling violation, but assuming spatial and time translation symmetries and spatial rotation invariance, the metric will be of the more general form (2.4). An important situation corresponds to a field theory that starts from a UV fixed point and then develops a scaling violation exponent θ at long distances. This means that on the gravity side the warp factor e^{2A} → R²/r² for r → 0 (with R the AdS radius), and that below a cross-over scale r_F it behaves as in (2.1). This scale then appears in the metric as an overall factor ds² ∝ R²/r_F^{2θ/d}, and is responsible for restoring the canonical dimensions in the presence of hyperscaling violation. Finally, at scales r ≫ r_F the theory may flow to some other fixed point, develop a mass gap, etc., and (2.1) would again no longer be valid.
String theory examples that exhibit these flows will be presented in §6. For now we will simply ignore these corrections and focus on the form (2.1), keeping in mind that it may be valid only in a certain window of energies. We follow an 'effective' approach where the dual theory is taken to live at finite r of order r_F. In order to understand the metric properties of this class of spacetimes, notice that (2.1) is conformally equivalent to a Lifshitz geometry, as can be seen after a Weyl rescaling g_µν → g̃_µν = Ω² g_µν, with Ω = r^{−θ/d}. (The scale-invariant limit is θ = 0, which reduces to a Lifshitz solution.) Since a Lifshitz metric has constant curvature, the scalar curvature associated to (2.1) acquires r-dependent terms controlled by the derivative of the Weyl factor Ω. The Appendix contains the Ricci and Einstein tensors for the general class of metrics (2.4). In particular, for the metrics (2.1) the scalar curvature is R ∝ r^{−2θ/d}, which becomes constant for θ = 0, as expected.

Constraints from the null energy condition

What types of constraints should we impose on (2.1) in order to get a physically sensible dual field theory? Quite generally, from the gravity side we should demand at least that the null energy condition (NEC) be satisfied. That is, we impose T_µν N^µ N^ν ≥ 0 on the Einstein equations, where N^µ N_µ = 0. Since G_µν = T_µν on-shell, from (2.5) the NEC reduces to the two inequalities (2.7). These constraints have important consequences for the allowed values of (z, θ) that admit a consistent gravity dual. First, in a Lorentz invariant theory, z = 1, and then the first inequality implies that θ ≤ 0 or θ ≥ d. Both ranges will be realized in the string theory constructions of §6. On the other hand, for a scale invariant theory (θ = 0), we recover the known result z ≥ 1. Theories with θ = d − 1 are of interest since they give holographic realizations of theories with several of the properties of Fermi surfaces [19,20].
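The two NEC inequalities (2.7) were lost in extraction. The following pair is consistent with every special case quoted in the surrounding text (z = 1 forcing θ ≤ 0 or θ ≥ d; θ = 0 forcing z ≥ 1; θ = d − 1 forcing z ≥ 2 − 1/d; and the exotic branches {z < 0, θ > d} and {0 < z < 1, θ ≥ d + z}):

```latex
(d-\theta)\bigl(d(z-1)-\theta\bigr) \;\ge\; 0 ,
\qquad
(z-1)\bigl(d+z-\theta\bigr) \;\ge\; 0 .
```

For example, setting z = 1 makes the second inequality vanish identically, while the first becomes $-\theta(d-\theta) \ge 0$, i.e. θ ≤ 0 or θ ≥ d, exactly as stated above.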
The NEC then requires that the dynamical exponent satisfy z ≥ 2 − 1/d (2.8) in order to have a consistent gravity description. (For the general metric (2.4), the two independent null vectors correspond to ϕ = 0 or π/2.) More generally, in §4 we will find that for systems with d − 1 ≤ θ ≤ d (2.9) the entanglement entropy exhibits new violations of the area law. These cases can be realized for a dynamical exponent that satisfies z ≥ 1 − θ/d. The limit θ = d will correspond to an extensive violation of the entanglement entropy, and requires z ≥ 1 or z ≤ 0. Notice that in theories with hyperscaling violation the NEC can be satisfied for z < 1, while this range of dynamical exponents is forbidden if θ = 0 [38]. In particular, {z < 0, θ > d} gives a consistent solution to (2.7), as does {0 < z < 1, θ ≥ d + z}. Notice that, just based on the NEC, the range θ > d is allowed. Clearly more information is needed to determine whether the above choices lead to physically consistent theories; in particular, we will argue below that θ > d leads to instabilities on the gravity side. In what follows we continue this analysis using holographic techniques to calculate correlation functions, entanglement entropy, and thermal effects. It would also be interesting to derive conditions for the existence of these theories directly on the field theory side.

Massive propagators

The next step is to calculate two-point functions ⟨O(x)O(x′)⟩, where O is some operator in the dual theory. We will consider an operator that can be described by a scalar field in the bulk. The simplest correlation functions correspond to massive propagators in the bulk, since in the semiclassical approximation the propagator is given in terms of the geodesic distance traveled by a particle of mass m. In AdS/CFT, massive bulk propagators give power-law CFT Green's functions because of the r-dependent warp factor.
The geodesic distance is minimized by moving into the bulk, and this turns an exponential into a power law; see e.g. [39]. Let us now calculate correlation functions in the semiclassical approximation for the class of metrics (2.1), in the range θ ≤ d. The full correlator away from the semiclassical limit will be studied in §3. The particle geodesic describing the semiclassical trajectory is obtained by extremizing the worldline action (2.10), where λ is the worldline coordinate and a 'dot' denotes a derivative with respect to λ. The propagator between x = (t, x_i) and x′ = (t′, x′_i) on a fixed r = ε cutoff surface is then given by the exponential of minus this action, with the boundary conditions (x(0) = x, r(0) = ε) and (x(1) = x′, r(1) = ε). Because of time and space translation invariance, the propagator only depends on ∆t ≡ t − t′ and ∆x_i ≡ x_i − x′_i. Here the cutoff ε ∼ r_F, but it is otherwise left unspecified. Consider first the case of spacelike propagation, with ∆t = 0, choosing λ = r as the worldline coordinate. Since the momentum conjugate to x is conserved, the equation of motion can be integrated to (2.13). Here r_t is the turning point for the geodesic, dr/dx|_{r=r_t} = 0; it is related to ∆x by integrating (2.13) to obtain (2.14). Plugging (2.13) into (2.10), we obtain the geodesic distance, where we neglect higher powers of ε. Thus, the propagator in the semiclassical approximation becomes (2.17). The approximation holds in the regime (2.18), in units of the cross-over scale r_F. As a check, in the scale-invariant limit θ = 0, the integrals for r_t and S give logarithms instead of powers, and we should replace ε^{θ/d}/(θ/d) → log ε (and similarly for r_t), reproducing the expected CFT power-law behavior. The correlator (2.17) reveals interesting properties of the dual field theory with hyperscaling violation in the WKB regime. First, it has an exponential (rather than power-law) dependence on |∆x|, showing that the dual theory is massive, with a nontrivial RG evolution, at least for operators dual to massive scalars.
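The worldline action (2.10) itself did not survive extraction. Assuming the hyperscaling-violating metric with overall warp factor $r^{-2(d-\theta)/d}$ multiplying the flat $(dt^2, dr^2, dx_i^2)$ part (a reconstruction consistent with the scaling (1.4)), the spacelike (∆t = 0) action and the semiclassical propagator take the form:

```latex
S = m \int_0^1 d\lambda \, \sqrt{g_{\mu\nu}\,\dot x^\mu \dot x^\nu}
  = m \int_0^1 d\lambda \; r^{\theta/d - 1}\,\sqrt{\dot r^2 + \dot x^2} ,
\qquad
G(x, x') \sim e^{-S_{\rm cl}(x, x')} ,
```

with $S_{\rm cl}$ the action evaluated on the extremal geodesic between the two cutoff-surface endpoints. The factor $r^{\theta/d - 1}$ is what produces the $|\Delta x|^{\theta/d}$ dependence quoted in the introduction.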
However, the usual weakly coupled decay ∼ exp(−m|∆x|) is now replaced by a nontrivial θ-dependent exponent. For θ > 0 the propagator decays exponentially at large distances with an exponent ∝ |∆x|^{θ/d}. For θ < 0 the propagator appears to approach a constant value at large distances, but this is outside the regime (2.18) where the semiclassical approximation is valid. Anticipating the results of §3, we point out that away from the semiclassical limit (or, more generally, for operators dual to massless scalars) the propagators do exhibit power-law behavior, with a power that includes a shift by θ. So far our calculations have been for a spacelike geodesic; similar computations lead to the correlator for a timelike path, now with nontrivial z dependence. Working in Euclidean time, τ = it, the value of the action becomes (2.21), valid in the range 0 < θ/d < z. We see that for z = 1 this reduces to the solution for the spacelike geodesic. The resulting propagator for a timelike path holds in the regime where m|∆τ|^{θ/dz} ≫ 1. (2.23) The propagator for an arbitrary geodesic is in general a function of both |∆x| and |∆τ|. Having discussed these two specific limits, we briefly comment on the general solution: the differential equations cannot be solved analytically for arbitrary d, θ, z, but can in principle be solved numerically for specific values of the critical exponents. We outline this procedure in Appendix B.

Holographic energy-momentum tensor

Another important object that characterizes the dual QFT is the expectation value of the energy-momentum tensor. It contains information about the number of degrees of freedom (e.g. the central charge in a 2d theory) and other conformal anomalies. In order to calculate the stress tensor, we need a method that can be applied locally to a radial slice, and which does not assume an asymptotically AdS structure; after all, the metric (2.1) may give a good description of the QFT only in an open range of radial scales.
An adequate method for this case is to compute the Brown-York stress tensor [40] on a radial slice, and identify it with the expectation value of the energy-momentum tensor in the dual theory [41,42]. The basic idea is as follows. Consider a hypersurface of constant r = r_c and let n^µ be the unit normal vector to this timelike surface. For us, r_c ∼ O(r_F). The induced metric is γ_µν = g_µν − n_µ n_ν (2.24), and the extrinsic curvature is given by (2.25). Since r_c will be taken to be finite (for instance, of order the cross-over scale), counterterms and regularization issues will be ignored. The quasilocal stress tensor [40] is then (2.26), ignoring a dimensionful constant. The AdS/CFT correspondence relates the expectation value of the stress tensor T_µν in the dual theory to the limit of the quasilocal stress tensor τ_µν as r_c → 0 (the boundary) (2.27), where h_µν is the background QFT metric, which is related to γ_µν by a conformal transformation. Our proposal is to extend this relation to metrics of the form (2.1) at finite r, and use τ_µν to determine ⟨T_µν⟩. A radial slice at r = r_c has an induced metric with extrinsic curvature K_µν = −h(w) ∂_w h(w) η_µν at constant w. Applying this to our case and using (2.27) yields the one-point function. In the more general case of z ≠ 1, Lorentz invariance is broken, and the nontrivial components of the energy-momentum tensor must be determined separately. In a (d + 1)-dimensional CFT, the energy-momentum tensor is an operator of conformal weight d + 1, so we expect ⟨T_µν⟩ ∼ h_µν / r_c^{d+1}. More precisely, a nonvanishing one-point function is obtained by placing the CFT on a curved background of constant curvature, and then ⟨T_µν⟩ ∼ h_µν / R^{d+1} with R the curvature radius. Obtaining this from the gravity side requires adding counterterms and taking the limit r_c → 0. Here we are working at finite r_c and we ignore these subtraction terms, since in general the metric with hyperscaling violation is not valid near the boundary.
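The elided expressions (2.25) and (2.26) take the standard Brown-York form; the overall 1/8πG normalization is the dimensionful constant the text drops:

```latex
K_{\mu\nu} = \gamma_\mu^{\ \alpha}\,\gamma_\nu^{\ \beta}\,\nabla_\alpha n_\beta ,
\qquad
\tau^{\mu\nu} = \frac{1}{8\pi G}\left(K^{\mu\nu} - K\,\gamma^{\mu\nu}\right),
\qquad
K \equiv \gamma^{\mu\nu} K_{\mu\nu} .
```

Evaluating $\tau_{\mu\nu}$ on the finite-r slice and stripping the conformal factor relating $\gamma_{\mu\nu}$ to $h_{\mu\nu}$ then gives the one-point function $\langle T_{\mu\nu} \rangle$ discussed above.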
Hyperscaling violation thus has the effect of shifting the energy-momentum tensor one-point function to h_µν / r_c^{d+1−θ}. A similar result will be obtained in correlators of marginal operators below. From this point of view, a possible interpretation is that θ reflects a nonzero scaling dimension for the vacuum. However, the effects of θ in the field theory are probably more complicated than just a universal shift in the vacuum. We will return to these points in §3.

Dynamics of scalar operators

Having understood the basic properties of holographic theories with hyperscaling violation, in this section we study in detail operators described by bulk scalar fields. In particular, we will analyze two-point functions valid for arbitrary (not necessarily large) mass m, where the WKB approximation of §2.3 is not applicable.

Scalar field solution

The equation of motion for a scalar field with mass m in the background (2.1) is (3.2). Let us first consider the behavior of φ at small r. Starting from an ansatz φ ∼ r^α, we find that the ∂_i, ∂_t, and m² terms are all subdominant at small r if z > 0 and θ > 0. In this case, we can solve the equation at leading order in r, which gives α = 0 or α = d − θ + z. This means that when we impose the incoming boundary condition at r = ∞ (or, in the Euclidean picture, the regularity condition), the full solution has an expansion (3.3) around r = 0, where we have Fourier transformed in the t and x directions, and · · · refers to higher-order terms in r. This should be contrasted with the case of θ = 0, in which the mass term becomes one of the leading contributions, and we are back to the standard Lifshitz or AdS behavior. The momentum-space two-point function on the boundary for the operator dual to φ is given by the coefficient function G(k, ω) in the expansion above [17]. We will analyze its behavior in the next few subsections, while solving it exactly in a few special cases.
Massless case For simplicity we will consider the z = 1 case, where we recover Lorentz invariance. The equation of motion (3.2) becomes exactly solvable for a massless scalar, with k = (ω, k). The solution that satisfies the correct boundary condition at r = ∞ is a modified Bessel function. Note that we have normalized the solution at the boundary according to (3.3), up to a numerical factor that does not depend on k. Expanding the modified Bessel function, we find (again up to a k-independent constant) G(k) ∼ k d−θ+1 . Fourier transforming back to position space, we find the two-point function to be ⟨O(x)O(x ′ )⟩ ∼ 1/|x − x ′ | 2(d+1)−θ . Here O is a marginal operator dual to the massless φ in the bulk. We find that the dimension of this marginal operator is shifted by θ. Massive case: a scaling argument In the more general case where the mass is nonzero, we cannot solve the scalar equation of motion in closed form (except for special values of θ, which we will discuss in the next subsection). However, we note a scaling symmetry in the equation, under which the coefficient function G(k) should transform as G(k; m) = λ d+1−θ G(k/λ; m/λ θ/d ). We immediately observe that in the massless case G(k) ∼ k d−θ+1 , by setting λ = k. This agrees with our results in the previous subsection. We also find that for positive θ, the mass term becomes unimportant at short distances, and the UV behavior of the massive two-point function is given by the massless results (3.6, 3.7). The long-distance behavior of the massive two-point function is given by the WKB approximation of §2.3. We will verify these statements in a few exactly solvable cases in the next subsection. When θ is negative, the mass term instead becomes unimportant at long distances, and the IR behavior of the massive two-point function is given by the massless results. We have restricted ourselves to the z = 1 case here, but our results apply more generally for z ≠ 1 as well. 
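The massless z = 1 solution referred to above can be checked numerically. The explicit form φ(r) = r^ν K ν (kr) with ν = (d − θ + 1)/2 is an assumption here (it is the modified Bessel solution consistent with the exponents α = 0 and d − θ + 1); the sketch verifies that it solves the radial equation φ'' − ((d − θ)/r)φ' − k²φ = 0.

```python
import numpy as np
from scipy.special import kv, kvp

# Assumed explicit solution for the massless z = 1 case:
#   phi(r) = r^nu K_nu(k r),  nu = (d - theta + 1)/2.
# Verify that it solves  phi'' - ((d - theta)/r) phi' - k^2 phi = 0.
d, theta, k = 3, 0.5, 1.3     # illustrative values
nu = (d - theta + 1)/2

r = np.linspace(0.2, 5.0, 200)
phi   = r**nu*kv(nu, k*r)
dphi  = nu*r**(nu - 1)*kv(nu, k*r) + r**nu*k*kvp(nu, k*r, 1)
d2phi = (nu*(nu - 1)*r**(nu - 2)*kv(nu, k*r)
         + 2*nu*r**(nu - 1)*k*kvp(nu, k*r, 1)
         + r**nu*k**2*kvp(nu, k*r, 2))

residual = d2phi - (d - theta)/r*dphi - k**2*phi
print(np.max(np.abs(residual)))  # consistent with zero to numerical precision
```

The small-kr expansion of K ν is the source of the analytic plus k^{2ν} = k^{d−θ+1} structure from which G(k) is read off.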
In that case the scaling symmetry is r → λr, x → λ x, t → λ z t, m → m/λ θ/d , (3.11) and the momentum-space two-point function transforms as G( k, ω; m) = λ d+z−θ G( k/λ, ω/λ z ; m/λ θ/d ). (3.12) Fourier transforming back to position space, we have G(∆ x, ∆t; m) = λ 2(d+z)−θ G(λ∆ x, λ z ∆t; m/λ θ/d ) . (3.13) The equal-time two-point function in the massless case is therefore given by G(∆ x) ∼ 1/|∆ x| 2(d+z)−θ . (3.14) Some special cases The equation of motion (3.8) simplifies and becomes solvable in some special cases, which we now discuss. The case θ = d The solution that satisfies the correct normalization and boundary condition can be written in closed form, and from it we read off the two-point function in momentum space. At short distance, the two-point function is dominated by the large-k behavior G(k) ∼ k, and agrees with (3.7) for θ = d. At long distance, the two-point function in position space can be shown to decay as e −m|x−x ′ | by the saddle point approximation. This agrees with (2.17) for θ = d. The case θ = d/2 The equation of motion (3.8) is exactly solvable for θ = d/2 in terms of the confluent hypergeometric function. A special case of this kind is d = 2 and θ = 1, which is a candidate holographic realization of a Fermi surface in 2 + 1 dimensions [20]. For θ = d/2 with general d, the solution with the correct normalization and boundary condition is again known in closed form, from which we read off the two-point function in momentum space (3.21). The short-distance behavior of the two-point function is given by G(k) ∼ k 1+d/2 at large k, and agrees with (3.7) for θ = d/2. When d is even, the first gamma function in the numerator of (3.21) diverges and indicates the appearance of logarithmic terms in k. The two-point function in momentum space then acquires logarithmic corrections at large k, which gives us G(∆x) ∼ 1/|∆x| 3 . Entanglement entropy In this section we evaluate the entanglement entropy for systems with hyperscaling violation, according to the holographic proposal of Ryu and Takayanagi [33]. Our main result is the entropy formula (4.24) for theories with arbitrary (z, θ). 
We will use this to probe various properties of these theories, including ground state degeneracies and the appearance of Fermi surfaces. Our study will reveal novel phases for d − 1 ≤ θ ≤ d, which feature violations of the area law that interpolate between logarithmic and linear behaviors. A natural question in a holographic study of entanglement entropy is which system extremizes the entanglement entropy over a given class of metrics. This question is motivated in the following sense: one measure of strong correlation is ground-state entanglement, and it is well known that some of the most interesting systems (Fermi liquids, non-Fermi liquids) have entanglement which scales more quickly with system size than 'typical' systems. It is therefore worthwhile to ask: does holography indicate new phases (dual to new bulk metrics) with equally large or larger anomalous ground-state entanglement? This was one of our original motivations in this analysis. In §4.4 we address this question for metrics with hyperscaling violation, finding that θ = d − 1 is the only local extremum. This implies that systems with a Fermi surface minimize the entanglement. General analysis Before computing the entanglement entropy in systems with hyperscaling violation, let us discuss the more general class of metrics (4.1). We will first calculate the entanglement entropy across a strip. This is the simplest case to analyze. Then it will be argued that the same behavior is found for general entanglement regions when their diameter is large. Let us then begin by computing the entanglement entropy for a strip in the limit l ≪ L. We focus on the case of θ ≤ d, so the strip is located on a UV slice at r = ε. The profile of the surface in the bulk is r = r(x 1 ), and its area functional follows from the induced metric. We have inverted x 1 = x 1 (r) to make the conserved momentum manifest, and the turning point r t corresponds to dr/dx 1 | r t = 0. To obtain the entanglement entropy we need to extremize A and evaluate it on the dominant trajectory. 
The calculation follows the same steps as those of §2.3 for the particle geodesic. Extremizing A and using the conserved momentum yields the profile of the minimal surface. The entanglement entropy for a strip in the general metric (2.4) is thus given by (4.6), with M P l the (d + 2)-dimensional Planck mass. General entanglement regions While most of our analysis will be carried out explicitly for the simplest case of a strip, our conclusions will also apply to general entanglement surfaces. We will now establish this, by showing that the entanglement entropy for a general surface is given approximately by (4.6) both near the boundary and at long distances. Consider a general entanglement region at r = 0, bounded by a surface parametrized by the coordinates x i . The bulk surface that extremizes the area will then be of the form Σ(x i , r) = const. The pullback of the bulk metric onto Σ determines the induced metric, from which the area functional is read off. The equation of motion implies the existence of a conserved current J M with components given in (4.12). Integrating ∂ M J M = 0 over x i , we read off the conserved charge, which generalizes the result for a strip (4.4) to an arbitrary shape. We will now show that (4.11) reduces to the case of a strip (4.6) both near the boundary r = ε and near a 'turning point' ∂ r Σ → ∞. First, for r → 0, Σ(x i , r) may be expanded as in (4.14). The equation of motion for σ 1 then requires λ = 1. Then the UV part of the area takes a form which indeed agrees with the result for the strip (4.6). Now we want to show that for regions of large area (or diameter), the long-distance part of the entanglement entropy also coincides with (4.6). Intuitively, when the size of the system is large, most of the surface is deep inside the bulk, and the scaling of the entropy can be approximated by the behavior in the vicinity of a turning point r = r t with ∂ r Σ → ∞. In more detail, we require that locally around r t , ∂ i Σ is smooth and that the combination e dA(r) (r − r t ) 1/2 → 0 as r → r t . 
This guarantees that J i → 0 as r → r t (see (4.12) for a definition of J M ) and hence the current conservation equation implies that J r ≈ const near r = r t . In this approximation, the surface near the turning point agrees with the behavior (4.4), so the entropy from the IR region also agrees with that for a strip. Given these results, in what follows our calculations will be done explicitly for a strip, keeping in mind that our conclusions are valid for more general entanglement regions that satisfy the above criterion. Using trial surfaces Let us also briefly mention that the basic properties of the entanglement entropy can be understood by considering a simple trial surface in the bulk (e.g. a cylinder) and requiring that it is a stationary point. This method also applies to a general entanglement region. Consider a general entanglement region Σ defined on a slice r = ε; denote its volume by |Σ| and its surface area by |∂Σ|. We now approximate the bulk surface used in the holographic calculation of the entanglement entropy by a cylinder with boundary ∂Σ, extending from r = ε to r = r t . The value r t is chosen such that it extremizes the entanglement entropy. Starting from the general metric (4.1), the total area of this bulk cylinder is then A = |∂Σ| ∫ ε r t dr e dA(r) + |Σ| e dA(r t ) . (4.18) Requiring that r t is a stationary point gives the condition (4.19), where we introduced the length scale l ≡ |Σ|/|∂Σ|. Given a concrete warp factor, (4.19) determines the value of r t , and then (4.18) gives the approximation of the entanglement entropy by trial cylinders. The stationary point is a minimum or a maximum depending on the sign of A ′′ (r) evaluated at the critical point, cf. (4.20). In examples below we will find that this procedure gives a good qualitative understanding of the entanglement entropy. Entanglement entropy with hyperscaling violation Now we are ready to evaluate the entanglement entropy for metrics with hyperscaling violation. It is useful to start by recalling the scale invariant θ = 0 case. 
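The trial-cylinder stationarity condition can be checked symbolically for the hyperscaling warp factor e^{dA} = r^{θ−d} used below; the sketch recovers the turning point r t = (d − θ)l quoted in the text. Overall constants (and M P l factors) are dropped.

```python
import sympy as sp

r, rt, l = sp.symbols('r r_t l', positive=True)
d, theta = sp.symbols('d theta', positive=True)

# warp factor e^{dA(r)} = r^(theta - d) for hyperscaling violation
warp = r**(theta - d)

# d/dr_t of the trial-cylinder area (4.18):
#   A = |dSigma| * Int_eps^{r_t} warp dr + |Sigma| * warp(r_t),
# divided by |dSigma| and using l = |Sigma|/|dSigma|
stationarity = warp.subs(r, rt) + l*sp.diff(warp, r).subs(r, rt)

# clear the overall power of r_t and solve
rt_star = sp.solve(sp.simplify(stationarity*rt**(d + 1 - theta)), rt)
print(rt_star)  # the quoted turning point r_t = (d - theta) l
```

The single stationary point at r t = (d − θ)l is the value used in (4.25) below.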
The bulk metric corresponds to e 2A = R 2 /r 2 in the ansatz (4.1), with R the AdS radius. Plugging this into (4.5) and (4.6) yields the scale invariant answer (4.21), up to higher powers of ε. For d = 3, this reproduces the entanglement entropy for N = 4 SYM, after relating the 5d Einstein frame quantities to their 10d counterparts. The hyperscaling violation exponent modifies the warp factor to e dA(r) = r θ−d (ignoring for now the cross-over scale). Now (4.6) gives (4.22) and, from (4.5), the turning point is related to the length of the strip by (4.23). Restoring R and the cross-over scale r F , we thus find that the entanglement entropy across a strip is given by (4.24), again neglecting higher powers of ε. Comparing with the scale invariant answer (4.21), we see that the effect of the hyperscaling violation exponent is to modify the entropy by an additional power of (length) θ . This can be understood directly in terms of scaling weights: since the metric has dimension θ/d, the entanglement entropy across a d-dimensional region acquires a scaling weight θ. It is also useful to obtain the entanglement entropy using the method of trial surfaces described above. Choosing a cylinder in the bulk, (4.18) and (4.19) evaluate to (4.25), where the trial value of the turning point is r t = (d − θ)l; recall that l = |Σ|/|∂Σ| here. Eq. (4.25) correctly reproduces all the physical features of (4.24); moreover, when Σ is a strip, both expressions exactly agree for θ = d − 1. Eq. (4.25) of course applies to general surfaces, indicating that our conclusions are valid beyond the case of strip-like regions. Novel phases with d − 1 ≤ θ ≤ d The entanglement entropy result (4.24) reveals interesting properties of the dual field theory. First, when θ = d − 1 the integral in (4.22) gives a logarithmic (instead of power-law) dependence, so that the entropy scales logarithmically with the strip width. The cutoff can be chosen to be of order the cross-over scale, ε ∼ r F , where we expect the metric (2.1) to give a good description of the dynamics. Eq. 
(4.26) shows a logarithmic violation of the area law, signaling the appearance of a Fermi surface in the dual theory. This case was studied in detail by [19,20], who identified various properties of this (strongly coupled) Fermi surface. In particular, r F ∼ k F −1 , the inverse scale of the Fermi surface. Another special value corresponds to θ = d, where the metric becomes (4.27) and the geometry develops an R d factor. In this limit, the surface that bounds the entanglement region does not move into the bulk, and the entropy is simply proportional to the volume of the entanglement region. This is an extensive contribution to the entanglement entropy and suggests that the dual theory has an extensive ground state entropy. Note that the metric (4.27) is not that of AdS 2 × R d , which shares this feature. 12 Having understood these two limits, it is clear that in the range of parameters d − 1 < θ < d, (4.24) predicts new violations of the area law that interpolate between the logarithmic and linear behaviors. These novel phases present various intriguing properties. To begin with, the entanglement entropy is finite: (4.24) does not diverge if we take the cutoff ε → 0. Also, in §2.2 we learned that in general these systems have a nontrivial dynamical exponent z ≥ 1 + θ/d. The correlation functions computed in § §2 and 3 may also provide information to further characterize these phases. For instance, for massless scalars the two-point function is given by (3.7). In the following sections we will derive further properties of these systems by placing them at finite temperature and will comment on the possible ground states that can lead to these new phases. Extremizing the entanglement entropy Finally, based on the result (4.24), let us determine the value of θ that extremizes the entropy. We focus on the finite contribution to S in the limit where the diameter is much larger than r F , which is necessary in order to obtain a universal answer independent of the entanglement surface. 
12 Similar volume laws were discussed in [36] (and [44] for flat space holography) and related to nonlocality; we see no reason the metrics we are studying here are dual to non-local theories, however. 13 The same result is obtained if the metric (2.1) is valid all the way to r → 0. In this case it is consistent to take ε ∼ r F → 0, and extremizing the first term of (4.24) with respect to θ also yields a universal answer. Extremizing the second term of (4.24) with respect to θ shows that there is a local minimum at θ = d − 1. Therefore, θ = d − 1 minimizes S in the limit of large diameter. This establishes that Fermi surfaces are local minima of the entanglement entropy. We can also ask which value of θ gives the strongest l-dependence. Assuming θ ≤ d, the strongest rate is a linear dependence on l, when θ = d (see also [19]). The entropy then scales like the volume of the entanglement region. Intuitively, this is associated to the logarithm of the number of degrees of freedom, which scales like the volume. However, just from the NEC analysis in §2.2, we found that θ > d is allowed. In this range, the entanglement entropy scales faster than the volume, which is not expected to correspond to a QFT behavior. It is interesting to ask whether the entanglement entropy can reveal additional properties of this regime. For this, consider again the calculation in terms of trial cylinders in the bulk. At the stationary point, the entanglement entropy (4.25) is a good approximation to the exact answer. The "mass" for fluctuations around the critical point r t = (d − θ)l can be calculated from (4.20), which now implies (4.31). For θ < d, the stationary surface is a minimum; at θ = d the bulk surface collapses into the r = ε slice, explaining the extensive scaling. And for θ > d the stationary point becomes a maximum. This suggests that gravitational backgrounds with θ > d cannot appear as stable theories (at least in the regime of validity of our current analysis). 
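The statement that hyperscaling violation shifts the strip entropy by an extra power of (length) θ can be checked numerically: with the warp factor e^{dA} = r^{θ−d}, the finite part of the strip area should scale as l^{θ−d+1}. The sketch below (Python/mpmath; illustrative values d = 3, θ = 1/2, normalizations and M P l factors dropped) extracts that exponent from two turning points.

```python
from mpmath import mp, mpf, quad, sqrt, log

mp.dps = 25
d, theta = 3, mpf('0.5')      # illustrative choice in the range theta < d - 1
n = d - theta
eps = mpf('1e-6')             # UV cutoff epsilon

def strip(rt):
    # strip width l and area for turning point rt, warp e^{dA} = r^(theta-d);
    # the divergent piece 2 eps^(1-n)/(n-1) is subtracted from the area
    l = 2*rt*quad(lambda u: u**n/sqrt(1 - u**(2*n)), [0, 1])
    area = 2*quad(lambda r: r**(-n)/sqrt(1 - (r/rt)**(2*n)), [eps, rt/2, rt])
    return l, area - 2*eps**(1 - n)/(n - 1)

(l1, f1), (l2, f2) = strip(mpf(1)), strip(mpf(2))
slope = log(abs(f2/f1))/log(l2/l1)
print(slope)  # close to theta - d + 1 = -1.5
```

At θ = 0 the exponent reduces to the familiar −(d − 1) of the scale invariant case (4.21); the shift by θ is exactly the (length) θ modification discussed above.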
§5 exhibits similar thermodynamic instabilities, and in the string theory realizations one finds that there is no well-defined decoupling limit. All these results suggest that theories with θ > d may not be consistent. Thermodynamics So far we studied properties of QFTs with hyperscaling violation at zero temperature, such as correlators, the energy-momentum tensor, and entanglement entropy. We will now analyze finite temperature effects in these holographic theories. After obtaining the basic thermodynamic quantities for this class of theories, we will study the entanglement entropy at finite temperature and the cross-over to the thermal result. In particular, this will reveal how the degrees of freedom responsible for entanglement are related to those that are excited by thermal effects. The reader is also referred to [13,17,20,34] for related discussions. Gravitational background at finite temperature Finite temperature effects are encoded in the general metric (2.4) by introducing an emblackening factor f (r) in the time and radial components. The basic property of f (r) is that it vanishes at some r = r h ; the temperature is then proportional to a power of r h , as we explain in more detail below. In order to study finite temperature effects on a regime with hyperscaling violation, on the gravity side we need to take r F < r h . Starting now from the metric (2.1) with hyperscaling violation, the black hole solution becomes that of (5.3) [17,34]. Starting from a solution with f (r) = 1 and matter content general enough to allow for arbitrary (z, θ), one can show, using the results in the Appendix, that (5.3) still gives a solution. Concrete examples will be presented in §6. As usual, the relation between the temperature and r h follows by expanding r h − r = u 2 , and demanding that near the horizon the Euclidean metric is ds 2 ≈ du 2 + u 2 dτ 2 , where τ = (2πT )it. 
The result is T ∼ r h −z (5.4). These expressions imply that the thermal entropy, which is proportional to the area of the black hole horizon, becomes S ∼ T (d−θ)/z (5.5). Thus, a positive specific heat imposes the condition (d − θ)/z ≥ 0 (5.6). We see that the branch {0 < z < 1, θ ≥ d + z} that was consistent with the NEC is thermodynamically unstable. On the other hand, {z ≤ 0, θ ≥ d} is still allowed by (5.6). It would be interesting to study this case in more detail to decide whether it is consistent; the entanglement entropy analysis of §4 suggested an instability for all θ > d. Eq. (5.5) suggests that d − θ plays the role of an effective space dimensionality for the dual theory. From this point of view, θ = d − 1 yields a system living in one effective space dimension, i.e. a (1 + 1)-dimensional theory. Recall also that for this value of θ there is a logarithmic violation of the area law for the entanglement entropy. These points support the interpretation of θ = d − 1 as systems with a Fermi surface [19,20]. The case θ = d would then correspond to a system in (0 + 1) dimensions. In §4 we found novel phases with d − 1 < θ < d that violate the area law. According to this interpretation, these would be systems of defects living in a fractional space dimension (between 0 and 1). Notice that the behavior of the thermal entropy for θ = d can also be obtained in systems with θ < d by taking z → ∞. This is the familiar AdS 2 × R d limit of a Lifshitz metric. We see from the metrics that these systems are not equivalent, and they are distinguished in terms of field theory observables by their correlation functions. In particular, the two-point function for a marginal operator implies that positive θ increases the correlation between spatially separated points. Therefore, despite the extensive ground-state entropy, θ = d does not in any sense correspond to spatially uncorrelated quantum mechanical degrees of freedom. Entanglement entropy and cross-over to thermal entropy We now study the entanglement entropy at finite temperature. 
This quantity is of physical interest since it illustrates how the degrees of freedom responsible for the entanglement entropy contribute to the thermal excitation. As [34] argued recently, we expect a universal crossover function that interpolates between the entanglement and thermal entropies. Finite temperature effects modify the entanglement entropy for a strip as in (5.8), where r t is given in terms of the length and temperature by the corresponding turning-point condition. For the purpose of comparing with the thermal entropy we focus on the universal finite contribution to (5.8). Evaluating these expressions for the metrics (2.1) with hyperscaling violation yields a result expressed through two integrals I ± (α). Also, recall from (5.4) that r h ∼ T −1/z . We first need to express the turning point r t in terms of l and T and then plug this into S finite . While I ± (α) do not have a simple analytic form for general θ, we can analyze the limits of small and large temperatures explicitly. In the small temperature regime we have T 1/z l ≪ 1 or, equivalently, r h ≫ r t . The lowest order thermal correction to the entanglement entropy (4.24) scales as c 1 T (d−θ+z)/z , with c 1 a positive constant that depends on d − θ and z. So the finite contribution to the entanglement entropy is increased by thermal effects, with a nontrivial power T (d−θ+z)/z . Interestingly, this dependence is in general non-analytic; for instance, the leading thermal correction is ∼ T (z+1)/z in systems with a Fermi surface (θ = d − 1). It would be interesting to understand the physical implications of these corrections. On the other hand, in the large temperature regime, r h → r t , and the integrals I ± are dominated by the pole near u = 1. Then I + ≈ I − ≈ l/r h , and expressing r h in terms of T yields a result which agrees with the thermal entropy (5.5). This verifies the existence of a crossover function that interpolates between the entanglement and thermal entropies in theories with θ ≠ 0. 
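The thermodynamic relations used above, T ∼ r h^{−z} (5.4) and S ∼ T^{(d−θ)/z} (5.5), can be reproduced symbolically. The sketch below assumes the emblackening factor takes the power-law form f = 1 − (r/r h)^{d+z−θ} (our reading of the black hole solution, which is not written out explicitly in the text) and computes the surface gravity of the metric (2.1).

```python
import sympy as sp

r, rh = sp.symbols('r r_h', positive=True)
d, theta, z = sp.symbols('d theta z', positive=True)

# assumed emblackening factor of the black-brane solution
a = d + z - theta
f = 1 - (r/rh)**a

# -g_tt and g_rr of the metric (2.1) with f inserted (boundary at r -> 0)
gtt = r**(-2*(d - theta)/d - 2*(z - 1))*f
grr = r**(-2*(d - theta)/d)/f

# sqrt(-g_tt g_rr) is regular at the horizon (the factors of f cancel)
reg = r**(-2*(d - theta)/d - (z - 1))

# surface gravity kappa = (1/2) d(-g_tt)/dr / sqrt(-g_tt g_rr) at r = r_h;
# f decreases toward the horizon in these coordinates, hence the sign flip
kappa = sp.simplify((sp.diff(gtt, r)/(2*reg)).subs(r, rh))
T = sp.simplify(-kappa/(2*sp.pi))
const = sp.simplify(T*rh**z)
print(const)  # r_h-independent constant, so T ~ r_h^(-z), eq. (5.4)

# horizon area density (g_xx)^(d/2) = r_h^(theta - d); eliminating r_h with
# r_h ~ T^(-1/z) then gives the thermal entropy scaling S ~ T^((d-theta)/z)
```

The computation also shows where the effective dimensionality d − θ enters: it is the power of r h in the horizon area density.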
String theory realizations In this last section we will show how some θ ≠ 0 metrics arise from string theory. Theories with nontrivial hyperscaling violation have been realized so far in effective actions for Einstein, Maxwell and dilaton fields. However, this description usually breaks down in the far UV or IR, so it is important to have UV completions that explain what happens at very short or very long distances. We will accomplish this by noting that, over a wide range of scales, Dp-branes with p ≠ 3 give rise to metrics of the form (2.1) with z = 1 but θ ≠ 0. This discussion has overlap with similar remarks in [14]. Black-brane solutions Let us first review the necessary results on black branes in ten-dimensional supergravity. For more details and references to the literature see e.g. [5,45]. In 10d string frame, the Dp-black brane solution takes the standard form. The supergravity solution includes a dilaton and an RR field strength, with N the number of D-branes, and the ADM mass follows accordingly. It is convenient to introduce a new radial coordinate u, in terms of which the metric and dilaton acquire the more familiar form. In order to compute the Bekenstein-Hawking entropy, we need to change to 10d Einstein frame, which is accomplished by rescaling the metric by a power of the dilaton. The area of the horizon at u = u h then gives an entropy (6.11) in 10d Planck units. This defines a thermal entropy for the dual theory on the Dp-brane, where the temperature is determined by cosh β and u h . These results simplify in the limit of small temperature, in which case the black branes are nearly extremal. When u h → 0, (6.7) gives cosh 2 β ∼ sinh 2 β ∼ 1/u h 7−p . Note also that sinh 2 β u h 7−p ∼ g s N in string units; then T is determined by a power of u h . The (extremal) supergravity description is valid when the curvature and dilaton are small; in terms of the effective 't Hooft coupling on the branes, this translates into the range (6.15). At large N and for p < 7, this gives a large range of u where the supergravity description can be trusted. 
Notice that the dilaton grows large for p ≤ 2 in the deep IR, and goes outside the range (6.15). For example, at p = 2 the theory flows into the M-theory regime. We emphasize that the entropy scaling (6.13) is valid when the horizon is located within the regime of validity of 10d supergravity/string theory (6.15), and at the corresponding range of temperatures the theory exhibits hyperscaling violation. (p + 2)-dimensional effective theory and hyperscaling violation We will now compactify this theory on S 8−p and show that it leads to hyperscaling violation. Dimensionally reducing on the sphere and changing to Einstein frame in p + 2 dimensions yields the effective metric (6.16); note also that p = d in the notation of the previous sections. Taking the near horizon limit of (6.16), we arrive at metrics of the form (2.1) and (5.2) with hyperscaling violation exponent (6.18), where r ∝ u (p−5)/2 . The emblackening factor f (u) also reproduces the black hole solution (5.2). As a further check, we can compute thermal effects in this effective theory and compare with the 10d answer. For instance, plugging (6.18) into the formula for the thermal entropy (5.5) indeed agrees with (6.13) for z = 1. We conclude that black branes with p ≠ 3 give rise to metrics with hyperscaling violation. This description is valid in the range of radial variables (6.15), and provides an explicit dual field theory realization of systems with hyperscaling violation. The field theory is given by SU (N ) super Yang-Mills in (p + 1) dimensions, with sixteen supersymmetries; for a large range of energy scales and strong 't Hooft coupling, it realizes the hyperscaling violation exponent (6.18). Notice that θ < 0 for p ≤ 4 (with p ≠ 3), and θ > p for p ≥ 6. Of course, these values satisfy the NEC constraints found in §2.2. It is important to remark that the p ≥ 6 cases do not, however, "decouple from gravity" and give rise to well-defined non-gravitational theories the way the p ≤ 5 cases do. 
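The exponent (6.18) is not written out explicitly above; assuming it takes the standard Dp-brane form θ = −(p − 3)²/(5 − p), the statements in the text can be checked: θ(2) = −1/3, θ < 0 for p ≤ 4 (p ≠ 3), θ > p for p = 6, and the thermal entropy exponent (d − θ)/z with d = p, z = 1 reduces to the known near-extremal Dp-brane scaling S ∼ T^{(9−p)/(5−p)}.

```python
import sympy as sp

p = sp.symbols('p')

# assumed explicit form of the hyperscaling exponent (6.18) for Dp-branes
theta = -(p - 3)**2/(5 - p)

# thermal entropy exponent (5.5) with d = p, z = 1 should reduce to the
# known near-extremal Dp-brane result S ~ T^((9-p)/(5-p))
check = sp.simplify((p - theta) - (9 - p)/(5 - p))
print(check)  # 0

# sample values for D1, D2, D4, D6
vals = [theta.subs(p, k) for k in (1, 2, 4, 6)]
print(vals)  # [-1, -1/3, -1, 9]
```

The divergence of this expression at p = 5 (and the trivial value at p = 3) matches the exclusion of those two cases from the hyperscaling-violating family.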
A relevant case for condensed matter applications is that of N D2-branes, which lead to θ = −1/3. The string theory realization also allows us to understand the deep UV and IR limits, where the hyperscaling violating regime is not valid. The UV theory is given by the maximally supersymmetric YM theory in 2 + 1 dimensions, which is asymptotically free. In the IR the theory flows to a strongly coupled conformal field theory dual to AdS 4 × S 7 , the gravity background for M2-branes. Besides providing an explicit string theory realization of systems with hyperscaling violation, our results from the previous sections reveal interesting properties of maximally supersymmetric Yang-Mills theories in the intermediate strongly coupled regime (6.15). For instance, the propagator for an operator dual to a massless scalar follows from (3.7) with the exponent (6.18). The entanglement entropy is also a probe of strong dynamics. The result (4.24) yields (6.20), with C p a numerical constant. The entanglement entropy for D-branes was also calculated in [36]. Their interpretation of (6.20) was in terms of an area law and a number of degrees of freedom with nontrivial dependence on the RG scale. Future directions In this paper, we have systematically analyzed the most basic holographic characteristics of the family of metrics (1.3), building on much earlier work. There are many directions in which one could imagine further developments. On the one hand, the classes of such metrics which are known to arise in string theory are still quite limited. We saw here that Dp-brane metrics in the supergravity regime, for p ≠ 3, 5, provide one class of examples. AdS 2 × R 2 and Lifshitz spacetimes provide another. But the cases of most physical interest in conventional systems, such as the θ = d − 1 "Fermi-surface"-like case [20], remain to be realized. On a related note, it would be interesting to interpret the θ ≠ 0 metrics more explicitly in terms of dual field theories. 
In some of the cases with θ < 0, it may be useful to think of these metrics as simply reflecting a growth of the effective number of degrees of freedom with temperature or energy scale; this has been suggested for the Dp-brane metrics in e.g. [36]. On the other hand, it has also been suggested that spin-glass phases of the random field Ising model can be governed by a nontrivial hyperscaling violation exponent [46]. Several groups have proposed different ways to model random or glassy phases with gravity or D-brane constructions [47,48,49,50]; it would be very interesting if coarse-graining appropriately in any of these approaches yielded metrics of the form (1.3), for reasons similar to those espoused in [46]. Also, while here we followed an effective approach, studying holography on slices at finite radius (associated to the cross-over scale r F ), one could imagine trying to take the limit r F → 0. It would then be interesting to extend the methods of holographic renormalization [51] to metrics with hyperscaling violation. Finally, we uncovered here some novel "bottom up" holographic ground states with entanglement entropy intermediate between area law and extensive scaling, for d − 1 < θ < d. It is well known that systems with extensive ground state entropy, like the duals of AdS 2 × R 2 gravity theories (or, even simpler, theories of free decoupled spins), can yield extensive entanglement entropy. It would be interesting to find candidate field-theoretic models which could yield the intermediate scalings we found here. In [52] and references therein, supersymmetric lattice models are described which are either "superfrustrated" (enjoying extensive ground-state entropy) or frustrated with large but sub-extensive ground state degeneracy. It is quite possible that one can construct analogous lattice models with intermediate ground state entropies, giving rise to entanglement scaling like that of our new phases [53]. 
In fact, soon after this work was submitted, similar intermediate scalings of the entanglement entropy were found in field-theoretic models with impurities [54]. A Null energy condition We can impose the null energy condition, with N µ N µ = 0, on the Einstein equations G µν = T µν to derive constraints on the metric functions A(r), B(r), f (r). The two independent constraints follow from choosing null vectors with ϕ = 0 and ϕ = π/2, respectively. B Massive propagators for general geodesics We consider the action S = −m ∫ dr r −(d−θ)/d √( r −2(z−1) τ̇ 2 + ṙ 2 + ẋ 2 ) (B.1) where we have set λ = r and τ = it in equation (2.10), this time for both |∆x| and |∆τ | nonzero. The integrated x, τ equations of motion define two conserved momenta, Π x and Π τ . We can use these to rewrite ẋ, τ̇ in terms of r and Π x , Π τ . Also, using the fact that at the turning point dr/dx, dr/dτ = 0, we can derive a relationship between r t , Π x , Π τ , r
Variability of the Deep Overflow through the Kerama Gap Revealed by Observational Data and Global Ocean Reanalysis : Herein, the temporal variability of the deep overflow through the Kerama Gap between the East China Sea and the Philippine Sea is investigated based on observational data combined with reanalysis data obtained during 2004–2011. The observations and model results show a strong bottom-intensified flow intruding into the deep Okinawa Trough. The observed deep overflow shows intraseasonal variations that are enhanced from August to November. The variability in the deep overflow via the Kerama Gap is well correlated with the density changes near its sill depth in the Philippine Sea. Additionally, some portion of the dense water originates from a region east of Miyakojima, which can be related to the northeastward-flowing Ryukyu Current at intermediate depths. In addition, three extreme deep overflow events indicate that mesoscale eddies arriving from the east increased the density near the Kerama Gap sill relative to that on the Okinawa Trough side. The density difference, associated with the baroclinic pressure gradient across the Kerama Gap, forced the deep overflow into the Okinawa Trough. The volume transports of the deep overflow computed by integrating the cross-sectional velocity and estimated from hydraulic theory are 0.14 and 0.11 Sv (1 Sv = 10 6 m 3 /s), respectively. Introduction The East China Sea (ECS) is separated from the western Philippine Sea by the Ryukyu Island chain (Figure 1). In the ECS, the Okinawa Trough is more than 2000 m deep. The Kuroshio enters and exits the ECS through the East Taiwan Channel (sill depth 775 m) and Tokara Strait (sill depth 690 m), carrying significant heat, salt, nutrients, and organic matter [1][2][3][4]. The northeastward Ryukyu Current flows along the continental slope, east of the Ryukyu Island chain [5][6][7]. 
The deepest passage connecting the ECS and western Philippine Sea is the Kerama Gap, located near the middle of the Ryukyu Island chain. The Kerama Gap sill is approximately 50 km wide and 1050 m deep. Numerical Model In this study, HYCOM reanalysis outputs were used to examine the variability in the deep overflow via the Kerama Gap. In the HYCOM reanalysis, observed sea surface temperature, sea surface height, and sea ice concentration were assimilated using the Navy Coupled Ocean Data Assimilation (NCODA) [15]. The HYCOM reanalysis can be applied widely, from coastal to planetary scales. The horizontal resolution of HYCOM is 1/12° (approximately 9 km), with 32 vertical hybrid layers configured. These hybrid layers are expressed in z-, sigma-, and isopycnal coordinates in unstratified water, shallow depths, and the stratified ocean, respectively. Compared to other reanalysis data, HYCOM is advantageous for deep overflow studies of the Kerama Gap: it can resolve the complex topography around the Ryukyu Island chain and simulate the surrounding circulation. Several studies have employed the HYCOM reanalysis for the Ryukyu Island chain region [10][11][12][16][17]. Recent studies have suggested that this HYCOM reanalysis efficiently represents deep-sea dynamics, as its isopycnal coordinates accurately capture the horizontal pressure gradient in an adiabatic fluid [18]. For example, Du et al. [19] employed HYCOM to successfully There has been one set of previous observations in the Kerama Gap, comprising Current- and Pressure-recording Inverted Echo Sounders (CPIES) and moored current meters [8]. Observations took place from June 2009 to June 2011. Na et al. [9] analyzed the above observational data and suggested that during the two-year observational period, the mean transport into the ECS from the Philippine Sea was 2.0 ± 0.7 Sv (1 Sv = 10 6 m 3 /s). 
They further revealed that the temporal variability in the Kerama Gap transport plays an important role in determining the Kuroshio transport in the ECS, which is associated with impinging mesoscale eddies propagating from the Pacific Ocean. Based on the global HYCOM (HYbrid Coordinate Ocean Model) reanalysis, Yu et al. [10] investigated the temporal variability in the transport through the Kerama Gap at much longer timescales. They showed that the transport had significant seasonal variability, with a maximum in October and a minimum in November, which is explained by annual variations, mesoscale eddies, and the Kuroshio meander. Furthermore, Yu et al. [11] studied the causes of the extreme cases of the Kerama Gap overflow based on the same HYCOM reanalysis data and found that the most important factor is mesoscale eddies. Zhou et al. [12], utilizing HYCOM reanalysis data, reported that the variation in the Kuroshio transport across the PN-line (a repeat hydrographic section crossing the Kuroshio in the ECS northwest of Okinawa) is associated with the water exchange between the Philippine Sea and the ECS through the Kerama Gap. Notably, the above-mentioned studies focused their attention solely on the upper and intermediate water exchange and overlooked the deep water overflow through the Kerama Gap. Previous studies suggested that the Kerama Gap deep water overflow can ventilate the Okinawa Trough deep water below 1000 m [13]. Additionally, the deep water overflow is energetic, with velocities at the sill of up to 50 cm/s. Nakamura et al. [13] suggested that the deep water overflow is probably topographically controlled and predicted the Kerama Gap deep water overflow to be approximately 0.18-0.35 Sv using a simple box model. The swift deep water overflow of the Kerama Gap is accompanied by strong turbulence and intense mixing in the Okinawa Trough [13,14]. Using comprehensive observational data, Nishina et al. 
[14] indicated that hydraulic jumps occurring over the Kerama Gap sills possibly caused strong vertical diffusivity (~10⁻¹ m²/s). Their results showed that the Kerama Gap deep overflow can ventilate the deep water below approximately 1100 m depth in the Okinawa Trough, and that the residence time in the southern Okinawa Trough is approximately 5-10 years. Therefore, the deep overflow through the Kerama Gap plays an important role in the deep ventilation of the Okinawa Trough. However, the observational record on which this mean state of the Kerama Gap deep water overflow is based is not sufficiently long, and the variability in the deep overflow through the Kerama Gap therefore requires further study. Similar to the Luzon Strait, which connects the deep Philippine Sea circulation with the South China Sea throughflow, the deep water circulation in the Kerama Gap also dynamically controls the water mass distribution and circulation in the deep layer of the southern Okinawa Trough. Thus far, many efforts have been made regarding the deep water overflow via the Luzon Strait to address the following questions: (1) How does the deep water overflow vary? (2) What factors control the variability in the deep water overflow? (3) How does the deep water circulation in the strait influence the circulation in the South China Sea? However, these questions remain unanswered for the Kerama Gap region, even though they are important for understanding the deep water circulation in the ECS. In this study, we investigate the temporal variation in the Kerama Gap deep overflow using eight years of HYCOM reanalysis output. The organization of the present paper is as follows. In Section 2, the observational data and assimilation data are briefly introduced. The variability and causes of the deep overflow through the Kerama Gap are examined in Section 3. The summary and discussion are presented in Section 4. 
Mooring Measurement In the period of 7 June 2009-6 June 2011, a tightly spaced mooring array straddled the Kerama Gap sill. This array was configured with five CPIESs and three current meter moorings. In this array, one CPIES instrument (ES5) was deployed approximately 7.4 km downstream of the Kerama Gap main sill during 7 June 2009-6 June 2010 (red dot shown in Figure 1). The CPIES was equipped with an Aanderaa current meter (Model 3820R) positioned 50 m above the seafloor. The typical accuracy of the raw velocity data is ±1.0 cm/s. Raw current data were recorded every 1 h and corrected for the local magnetic declination. A low-pass filter with a cutoff period of 72 h was applied to remove the major semidiurnal and diurnal tidal constituents and near-inertial signals (~27 h) near the Kerama Gap. Finally, all currents were subsampled to a frequency of once per day. The observational data return and processing details can be found in Liu et al. [8]. Numerical Model In this study, HYCOM reanalysis outputs were used to examine the variability in the deep overflow via the Kerama Gap. In the HYCOM reanalysis, observed sea surface temperature, sea surface height, and sea ice concentration were assimilated using the Navy Coupled Ocean Data Assimilation (NCODA) [15]. The HYCOM reanalysis can be applied widely, from coastal to planetary scales. The horizontal resolution of HYCOM is 1/12° (approximately 9 km), with 32 vertical hybrid layers configured. These hybrid layers are expressed in z-, sigma-, and isopycnal coordinates in unstratified water, shallow depths, and the stratified ocean, respectively. Compared to other reanalysis data, HYCOM is advantageous for deep overflow studies of the Kerama Gap: it can resolve the complex topography around the Ryukyu Island chain and simulate the surrounding circulation. Several studies have employed the HYCOM reanalysis for the Ryukyu Island chain region [10][11][12][16][17]. 
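The record processing described for the mooring data (72-h low-pass filtering to remove tidal and near-inertial signals, followed by daily subsampling) can be sketched as follows. The paper does not specify the filter type, so a zero-phase Butterworth filter is assumed here, and the input record is synthetic:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_daily(u, dt_hours=1.0, cutoff_hours=72.0, order=4):
    """Low-pass an hourly current record and subsample to daily values.

    A 72-h cutoff suppresses the semidiurnal/diurnal tides and the
    near-inertial (~27 h) signals; zero-phase filtering (filtfilt)
    avoids introducing a time lag into the record.
    """
    nyq = 0.5 / dt_hours                  # Nyquist frequency, cycles/hour
    wn = (1.0 / cutoff_hours) / nyq       # normalized cutoff frequency
    b, a = butter(order, wn, btype="low")
    u_filt = filtfilt(b, a, u)
    return u_filt[:: int(24 / dt_hours)]  # one sample per day

# Synthetic 120-day hourly record: mean flow + M2 tide + 30-day oscillation
t = np.arange(0, 24 * 120, dtype=float)  # hours
u = 20 + 5 * np.sin(2 * np.pi * t / 12.42) + 3 * np.sin(2 * np.pi * t / (24 * 30))
u_daily = lowpass_daily(u)               # tides removed, slow signal retained
```

The tidal component is strongly attenuated while the mean flow and the 30-day oscillation pass through essentially unchanged.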
Recent studies have suggested that this HYCOM reanalysis efficiently represents deep-sea dynamics, as it is isopycnic and can efficiently represent the horizontal pressure in an adiabatic fluid [18]. For example, Du et al. [19] employed HYCOM to successfully simulate the transport mechanism in the ocean interior. Zhao et al. [20] used HYCOM and suggested that deep circulation in the Luzon Strait is primarily driven by a baroclinic pressure gradient across the strait. These examples indicate the suitability of the model for simulating the deep water overflow through the Kerama Gap. Results In this section, we first describe the variability in the Kerama Gap deep water overflow based on the observational data in Section 3.1; then, we elucidate the mechanism of the variability using the HYCOM reanalysis data in Section 3.2. To investigate the temporal variability in the deep water overflow, we first analyze the wavelet power spectrum (using the Torrence and Compo wavelet toolbox) and the global wavelet spectrum (GWS) of the eastward and northward velocity (Figure 3). Both the eastward and northward velocities show statistically significant (95%) energy peaks with periods near 100 days and 15-60 days (Figure 3c,d). These period bands are also found in the volume transport variability in the upper and intermediate layers through the Kerama Gap, as reported by Yu et al. [10]. 
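The global wavelet spectrum used here can be sketched with a minimal NumPy implementation of the Morlet transform (the study used the Torrence and Compo toolbox; the normalization below is simplified, no significance test is included, and the input record is synthetic):

```python
import numpy as np

def morlet_gws(x, dt, periods, w0=6.0):
    """Global wavelet spectrum: time-averaged Morlet wavelet power,
    following the frequency-domain construction of Torrence & Compo."""
    x = np.asarray(x, float) - np.mean(x)
    n = x.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)  # angular frequencies
    xhat = np.fft.fft(x)
    # Convert desired Fourier periods to Morlet scales
    scales = np.asarray(periods) * (w0 + np.sqrt(2 + w0**2)) / (4 * np.pi)
    gws = np.empty(scales.size)
    for i, s in enumerate(scales):
        # Morlet wavelet in frequency space (analytic: positive freqs only)
        psi_hat = ((np.pi ** -0.25) * np.sqrt(2 * np.pi * s / dt)
                   * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0))
        w = np.fft.ifft(xhat * psi_hat)   # wavelet transform at scale s
        gws[i] = np.mean(np.abs(w) ** 2)  # time-averaged power
    return gws

# Synthetic daily current record with a 40-day oscillation plus noise
rng = np.random.default_rng(0)
t = np.arange(365)
u = np.sin(2 * np.pi * t / 40) + 0.3 * rng.standard_normal(365)
periods = np.arange(10, 101, 2.0)         # test periods of 10-100 days
gws = morlet_gws(u, dt=1.0, periods=periods)
peak_period = periods[np.argmax(gws)]     # should land near 40 days
```

For a record with a dominant 40-day oscillation, the GWS peaks near the 40-day period, which is how the 100-day and 15-60-day bands above are identified.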
Meanwhile, the length of the observational data is just one year; thus, the interannual and annual variations in the deep water overflow in this gap cannot be addressed with these data. Furthermore, wavelet analysis of the eastward and northward velocity of the deep overflow at ES5 indicates that intraseasonal variations are enhanced from August to November during the period of 2009-2010 (Figure 3a,b). Causes of Temporal Variability In this section, the detailed mechanism of the deep water overflow via the Kerama Gap is further investigated based on the HYCOM reanalysis data. To evaluate the capability of the HYCOM data, the current vector time series obtained from the HYCOM reanalysis and the measured data are compared, as shown in Figure 2. The time series for the same location as the CPIES at ES5 is used. Notably, the current velocity at 1000 m depth is used, since the water depth of the HYCOM reanalysis is 1000 m near the Kerama Gap sill. Generally, the modeled velocity time series is consistent with the observed time series. HYCOM also exhibited a strong and stable current near the bottom, with a mean velocity ± standard deviation of 19.0 ± 9.5 cm/s. 
The minimum-to-maximum range of the velocity was approximately 2.7 times larger than the mean value. The standard deviation of the HYCOM velocity (9.5 cm/s) was approximately 1.4 times larger than that of the observed velocity (6.7 cm/s), while its mean (19.0 cm/s) was 75% of the observed mean (25.4 cm/s). The relatively weaker deep overflow captured by the HYCOM reanalysis can be attributed to the HYCOM topography, which is significantly coarser than the true topography. As reported by Nakamura et al. [13] and Nishina et al. 
[14], the deep water overflow of the Kerama Gap is driven by a persistent baroclinic pressure gradient owing to the density difference between the Philippine Sea and the Okinawa Trough; in other words, significant density differences exist between the two sides of the Kerama Gap. As shown in Figure 5, the absolute velocity time series of the deep overflow (black) is well correlated with the time series of the density differences between the Philippine Sea and the southern Okinawa Trough at 1000 m depth (blue, r = 0.72, significant at the 95% confidence level). The correlation coefficients between the absolute velocity time series and the upstream (red) and downstream (green) densities at 1000 m depth are 0.73 and 0.35, respectively. This implies that the deep water overflow variation in the Kerama Gap is largely determined by the variability in the density in the Philippine Sea. 
Because the dynamic environment of the upstream region is relatively complex, i.e., comprising the Ryukyu Current and mesoscale eddies from the east, it is important to examine the cause of the variability in the upstream density at 1000 m. Therefore, we plot the correlation maps between the upstream density at 1000 m and the densities at 1000, 1100, 1200, and 1300 m in the surrounding region (Figure 6). While computing the correlation, time series of two variables were used. One variable was the density at 1000 m depth at the selected upstream point (location: black triangle shown in Figure 1b), and the other was the density at 1000 m (or 1100, 1200, or 1300 m) depth at each grid point around the Kerama Gap. Then, Pearson's linear correlation coefficient between each pair of time series was calculated. The statistical significance of the correlation coefficients was estimated using a Student's t test. It is clear that the upstream density at 1000 m depth is most strongly correlated with the densities in the region southeast of the Kerama Gap along the submarine slope of the Ryukyu Island chain. This is because the Kerama Gap overflow originates from a branch of the northeastward Ryukyu Current off the eastern slope of the Ryukyu Island chain at intermediate depths [6,[21][22][23][24][25]. 
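The point-wise correlation used for these maps can be sketched as follows. The time series below are synthetic stand-ins for the density records, and note that for serially correlated daily ocean data the effective degrees of freedom in the t test should in practice be reduced:

```python
import numpy as np
from scipy import stats

def corr_with_significance(a, b):
    """Pearson correlation between two time series with its two-sided
    p-value from a Student's t test (scipy forms the t statistic
    t = r * sqrt((n - 2) / (1 - r**2)) internally)."""
    r, p = stats.pearsonr(a, b)
    return r, p

# Synthetic stand-ins: a red-noise "upstream density" record and a noisy
# copy of it playing the role of the density at one surrounding grid point
rng = np.random.default_rng(1)
rho_up = np.cumsum(rng.standard_normal(500)) * 0.001   # kg/m^3 anomalies
rho_grid = rho_up + 0.002 * rng.standard_normal(500)
r, p = corr_with_significance(rho_up, rho_grid)        # strong correlation
```

In the maps of Figure 6, this computation is repeated for every grid point, and only coefficients passing the significance test are retained.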
According to Whitehead et al. [26], Pratt et al. [27], and Qu et al. 
[28], if the local gap width is larger than or comparable with the local internal Rossby radius, the volume transport of the deep water overflow through the Kerama Gap can be estimated as:

Q = g′h_u²/(2f),    (1)

where g′ = gΔρ/ρ is the reduced gravity, g is the gravity acceleration, Δρ is the density difference between the upstream and downstream sides at 1000 m depth, ρ is the average density near the Kerama Gap sill depth (1000 m), h_u is the interface height above the gap sill depth separating the two layers, f is the Coriolis parameter, and R = (g′h_u)^(1/2)/f is the internal Rossby radius. Figure 7a shows the estimated volume transport time series of the deep overflow based on Formula (1), and the corresponding time series of h_u is shown in Figure 7b. Notably, the bifurcation depth in the calculation is chosen as the depth at which Δρ < 0.05 kg/m³. The mean deep overflow transport in the period of January 2004-June 2011 is approximately 0.11 Sv, and the mean depth of the deep overflow ranges from 800 to 1000 m. To understand the contribution of the hydraulic control component to the total deep overflow, we also compute the volume transport of the deep overflow based on the mean integrated cross-sectional velocity during 2004-2011, for velocities larger than 5 cm/s, at depths ranging from 800 m to 1000 m (gray line in Figure 7a). 
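As a sanity check on the quoted magnitudes, the Whitehead-type estimate Q = g′h_u²/(2f) can be evaluated with representative values; the density difference, interface height, and latitude below are illustrative assumptions, not the diagnosed HYCOM values:

```python
import numpy as np

# Hydraulic-control estimate of the deep overflow transport,
# Q = g' * h_u**2 / (2 f), valid when the gap is wider than the
# internal Rossby radius. All input values are illustrative.
g = 9.81        # gravity acceleration, m/s^2
rho = 1027.5    # assumed mean density near the sill depth, kg/m^3
drho = 0.05     # assumed upstream-downstream density difference, kg/m^3
h_u = 170.0     # assumed interface height above the sill, m
lat = 26.0      # approximate latitude of the Kerama Gap
f = 2 * 7.292e-5 * np.sin(np.radians(lat))  # Coriolis parameter, 1/s

g_prime = g * drho / rho            # reduced gravity, m/s^2
Q = g_prime * h_u**2 / (2 * f)      # transport, m^3/s
R = np.sqrt(g_prime * h_u) / f      # internal Rossby radius, m

print(f"Q = {Q / 1e6:.3f} Sv, R = {R / 1e3:.1f} km")
```

With these values the formula gives roughly 0.11 Sv, consistent with the magnitude quoted in the text, and a Rossby radius of about 4.5 km, far smaller than the ~50 km gap width, so the wide-gap condition is satisfied.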
With this integration method, the mean and standard deviation of the integrated volume transport were 0.14 ± 0.17 Sv, whereas the estimated transport based on hydraulic theory was approximately 0.11 Sv, with a standard deviation of ±0.28 Sv. In other words, the hydraulic control component accounts for about 79% of the total deep overflow. Moreover, according to Na et al. [9] and Yu et al. [10], both the observed and modeled through-passage transports in the upper and intermediate layers, excluding the deep water overflow, total approximately 2.0 Sv. Hence, the volume transport of the deep water overflow is approximately 6% of the transport through the upper and intermediate layers, even though the deep water overflow is confined to a layer only 100-200 m above the bottom. In addition, a linear fit from January 2004 to June 2011 shows that the deep overflow transport increased by 0.03 Sv per year, suggesting an increasing deep overflow during this period. This may be related to decadal or longer time-scale variations in the dynamic environment around the Kerama Gap. However, a detailed examination of this mechanism was beyond the scope of this study and will be explored in future research. Significantly strong deep overflow (with a transport approximately 15 times larger than the mean value) occurred during three periods: 17 September 2006, 17 January 2009, and 24 April 2011. In the following, we examine these three cases in detail, especially their dynamical environments. Figure 8a shows the sea level anomaly on 24 April 2011, obtained from the HYCOM reanalysis data, which indicates that a cyclonic eddy was located southeast of the Kerama Gap. The upstream density is much larger than the downstream density, even at 600 m depth (Figure 9a). The density difference between the upstream and downstream locations is 0.18 kg/m³ at 1000 m depth (Figure 9d). The larger density difference between the two sides of the gap forces denser Philippine Sea water into the southern Okinawa Trough via the Kerama Gap. Just as in the case of 24 April 2011, the density difference between the two sides of the Kerama Gap is the largest on 17 January 2009 (0.28 kg/m³ at 1000 m depth, see Figure 9e), which also drives denser water into the Okinawa Trough from the North Pacific (Figure 8b). 
However, different from the case of 24 April 2011, an anticyclonic eddy lies on the east side of the Ryukyu Island chain and intrudes into the Okinawa Trough via the Kerama Gap (Figure 8b). In the final case, density differences also exist between the North Pacific and the Okinawa Trough, but they are much smaller than in the previous two cases (0.15 kg/m³ at 1000 m depth, see Figure 9c,f). 
Moreover, the Ryukyu Current bifurcates into the deep overflow through the Kerama Gap and the northeastward Ryukyu Current (Figure 8c). Summary and Discussion Observational data from a current meter and output from the HYCOM reanalysis from 2004 to 2011 were used to investigate the variability in the deep water overflow through the Kerama Gap, the deepest channel of the Ryukyu Island chain and a key pathway between the Philippine Sea and the ECS. Current records from June 2009 to June 2010 indicate that the deep water overflow in the Kerama Gap is fairly strong, with a mean velocity of approximately 25.4 cm/s and a standard deviation of 6.4 cm/s. Moreover, the one-year observations reveal that the deep overflow shows statistically significant (90%) spectral peaks, with periods of approximately 100 days, 37-60 days, and 21 days, and that the intraseasonal variations are enhanced from August to November. By analyzing the HYCOM reanalysis data, we find that the absolute velocity time series of the deep overflow at ES5 is well correlated with the time series of the density differences between the Philippine Sea and the Okinawa Trough at 1000 m depth (r = 0.72, significant at the 95% confidence level). Additionally, the variability in the deep overflow in the Kerama Gap is much better correlated with the density in the Philippine Sea than with that in the Okinawa Trough. This implies that the variations of the deep overflow may be related to variability occurring in the Philippine Sea. 
The density variability in the Philippine Sea results from at least two factors: (1) the northeastward-flowing Ryukyu Current off the eastern slope of the Ryukyu Island chain at intermediate depths, which can modify the upstream density near the Kerama Gap sill depth as it passes the gap, and (2) the impinging mesoscale eddies propagating from the North Pacific, which induce a density difference between the two sides of the Kerama Gap and thereby force the deep overflow through the Kerama Gap. Notably, the mean estimated volume transport based on hydraulic control theory using the HYCOM reanalysis data is smaller than that estimated using the observational data. According to Nakamura et al. [13] and Nishina et al. [14], the deep water near the Kerama Gap sill originates from the West Philippine Sea and intrudes into the deep layer of the Okinawa Trough. This process immediately induces strong turbulent mixing, accompanied by upwelling of lighter water. Meanwhile, the turbulent mixing in the Okinawa Trough plays a key role in driving the intrusion of denser water into the Okinawa Trough. Hence, we suggest that the relatively weaker deep overflow in HYCOM may be related to relatively weaker turbulent mixing in the Okinawa Trough, since the seawater stratification in HYCOM is stronger than that of the real ocean. Water mass formation in the deep layer of the Okinawa Trough is related to the deep water overflow through the Kerama Gap. In future studies, more intensive observations should be conducted in the Kerama Gap region. Furthermore, observational data should be used together with numerical experiments to address the question of how the deep water overflow drives the deep circulation in the southern Okinawa Trough.
Water leak detection based on convolutional neural network using actual leak sounds and the hold-out method The main purpose of this study was to investigate whether machine learning can be used to detect leak sounds in the field. A method for detecting water leaks was developed using a convolutional neural network (CNN), after taking recurrence plots and visualising the time series as input data. In collaboration with a pipeline restoration company, 20 acoustic datasets of leak sounds were recorded by sensors at 10 leak sites. The detection ability of the constructed CNN model was tested using the hold-out method for the 20 cases: 19 showed more than 70% accuracy, of which 15 showed more than 80%. INTRODUCTION Globally, capital expenditure on supplying drinking water was approximately US$90 billion in 2011. Almost half of this was spent on water distribution networks, including constructing new water networks and rehabilitating existing ones. According to the municipal water capex by category, capital expenditure on rehabilitation (34.5%) already exceeds that on new networks (13.0%) (Global Water Intelligence ). The situation is similar in Japan: of the total assets of the water supply system, water pipes account for about 65% of the economic value. Furthermore, it is estimated that 14.8% of water pipes have surpassed their durable life (40 years). The ratio of ageing pipes that need to be replaced, which was 6% in 2006, is increasing every year, while the rate of pipeline renewal has been falling steadily (Ministry of Health Labour & Welfare ). Furthermore, this figure is expected to exceed 20% in 10 years and 40% in 20 years. The deterioration and ageing of water pipes are the main causes of water leaks. If such leaks are left unrepaired for a long time, sinkholes could occur due to ground loss and cause major incidents. Therefore, leaks should be discovered and fixed at an early stage. 
However, since water pipes are buried underground, it is difficult to locate leaks until direct damage appears on the ground. The sound of leaking water varies with the material of the pipe, water pressure, diameter, distance and so forth. Inspectors need to become familiar with the sound propagation characteristics based on the material of the pipes and also any external noise, such as the sound of electric motors in vending machines. They also require experience to be able to estimate the location of a leak and to differentiate water leak sounds. Therefore, they often attend courses on leak detection prior to actual investigation (Ministry of Health Labour & Welfare ). However, this approach, which may involve listening at night when there is less noise, is burdensome for inspectors. For these reasons, it is difficult for skilled inspectors to pass down technical expertise to younger inspectors. This paper starts with a review of previous studies on leak detection techniques used in water distribution systems (WDSs) that have been published at international congresses and by various researchers. Khulief et al. () studied the detection of leaks by measuring the leakage sound inside a water pipe. However, the technique was applicable to a single pipeline, and although the probability of detection is higher, only a limited area can be investigated at one time. Seyoum et al. () described a leak detection technique using the same principle as an app that identifies the name of a song or artist. The leak detection model in their study was divided into Feed mode and Detection mode, and the software makes the final decision in Detection mode. This is a sophisticated technique that uses software to detect leaks. However, it is designed for households, and is not suitable for detecting leaks in a place with many obstacles. Finally, Fuad et al. 
() reviewed multiple leak detection methods, including a sensor that measures sound intensity (amplitude), and estimating the leak location using the delay time between two sensors. However, leak detection that depends on sound intensity cannot easily distinguish between a leak sound and other similar sounds. As indicated above, many studies use sound data to detect water leakage, but sound is not the only clue for detecting leakage. Leaks can also be inferred from changes in pressure and flow. However, pressure and flow can change greatly throughout a pipe network, and the more complex the network, the harder it is to locate the leak. Although sound is not completely free from such influences, the sound becomes clearer closer to the leak point, which is a useful characteristic. Therefore, the present study uses sound data acquired by sensors for leak detection. When such data are acquired, it is not immediately known whether the sound is from a water leak or is background noise. Just as people listening to a saxophone or flute can distinguish the two instruments, experts must listen for any differences between water leak sounds and background noise. In a previous study, Nam et al. () investigated whether the differences between water leak sounds and background noise could be quantified and verified prior to the full-scale detection of water leaks, and considered what the differences were. It was found that the pattern of a specific sound was expressed as having deterministic properties (Fujimoto & Iokibe), and it was shown that leak sounds had stronger deterministic properties than normal or background noise. In the present study, time series data (which have deterministic properties) are obtained in the form of recurrence plots (RPs), a two-dimensional form, and the characteristics of actual water leakage are visualised.
In addition, the visualised RPs are trained using the convolutional neural network (CNN) model, which is a deep learning technology. On the other hand, it is beyond the scope of this paper to discuss the effectiveness of cross-validation (CV) in machine learning. CV is a data resampling method to assess the generalisation ability of predictive models and to prevent overfitting (Berrar). Before performing CV, which requires a lot of time and effort, it is important to conduct a basic review of whether models train normally on these data. This study used actual leakage sound data, but there has been little research on how to process and utilise such data. Therefore, this study mainly addresses some fundamental questions when applying the hold-out method in machine learning. How to collect actual leak sounds This section explains the collection of experimental data, including actual leak sounds. The leak sounds recorded by sensors in the field were obtained in collaboration with a pipeline restoration company. The procedure for measuring leak sounds and background noise was as follows: (1) the company's engineer visited the place where a leak was suspected at the request of the waterworks bureau; (2) the engineer then installed leak sensors on a fire hydrant or gate valve as close as possible to the leak site; (3) the sound (leak sound) was recorded for 3 min; (4) the leak was fixed; and then (5) the sound (background noise) was re-recorded on the same day at the same location (Figure 1). The measuring device (AQUASCAN 620 L, Gutermann) used a sampling frequency of 10,000 Hz, 16-bit resolution and WAV format. The water pipelines analysed in this study were made of five materials. Figure 2 shows the 10 leak sites in this study and each pipe material.
The amount of water that leaked was measured as follows: (1) the spilled water was collected in a plastic bag for 10-60 s; (2) the amount (mass) of water collected was measured; and (3) the mass was converted to a flow rate (L/min). (The temperature of the water was assumed to be 4 °C, at which the density of water is at its maximum, 1 kg/L.) Visualisation of measurement sounds using a recurrence plot The RP and the effect of RP size on the training process were as follows. An unthresholded recurrence plot (UTRP) displays the distance matrix D_i,j = ||X_i − X_j||, where i is plotted on the horizontal axis and j on the vertical axis, and where X is an embedding vector. UTRPs are able to reflect delicate boundaries between data in the results and are also useful for reverse-inferring appropriate threshold values. Accordingly, the time series data were visualised on a plane using the UTRP method in order to express in detail the characteristics of frequencies (input data) that change sensitively in units of hertz. Figure 3 shows how to convert time series data into an RP. When comparing X_i and X_j obtained from two systems of the same dimension, the vector embedded from each time series can be constructed as X_i = (y_i, y_{i+τ}, ..., y_{i+(m−1)τ}) (y: time series data, m: embedding dimension, τ: delay time). The matrix on the right in Figure 3 is the RP reconstructed from the time series data on the left. If i = 2 and j = 2, the distance matrix element becomes 0 from the calculation D_2,2 = ||X_2 − X_2||. This corresponds to the smallest value among the calculated values in one RP, and is displayed on the plane in black. In this study, the size of the horizontal and vertical axes of one RP was 64. The 10,000 data points in each 1-s interval (with a sampling frequency of 10,000 Hz) were divided into 100 segments, and each segment was adjusted (using bicubic interpolation) so that its size became 64 × 64, to reduce the computation time of machine learning.
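As an illustration, the time-delay embedding and the unthresholded-RP construction described above can be sketched in Python with NumPy. This is a minimal sketch: the default m = 3 and τ = 1 follow the parameters quoted in the text, but the synthetic 1 kHz tone is invented for illustration and is not leak data.

```python
import numpy as np

def embed(y, m=3, tau=1):
    """Time-delay embedding: X_i = (y_i, y_{i+tau}, ..., y_{i+(m-1)tau})."""
    n = len(y) - (m - 1) * tau
    return np.array([y[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def unthresholded_rp(y, m=3, tau=1):
    """Unthresholded RP: D[i, j] = ||X_i - X_j|| (smaller values plot darker)."""
    X = embed(y, m, tau)
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=2)

# 0.01 s of a synthetic periodic signal at the paper's 10 kHz sampling rate
t = np.arange(100) / 10_000
y = np.sin(2 * np.pi * 1_000 * t)
D = unthresholded_rp(y)
print(D.shape)  # (98, 98): 100 samples minus (m-1)*tau = 2
print(D[2, 2])  # 0.0 -- ||X_2 - X_2|| = 0, plotted black as in the text
```

The paper's final step, resizing each RP to 64 × 64 with bicubic interpolation, is omitted here; it would typically be done with an image library.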
Consequently, the time series data of 3 min of background noise and water leak sounds became 36,000 (180 s × 100 × 2) RPs, and the number of RPs for Areas 1 to 10 in Figure 2 totalled 720,000 (36,000 × 10 areas × 2 sensors). As an example, Figure 4 shows RPs that visualise background noise (left) and water leak sounds (right) using sensor A in Area 4 (m: 3, τ: 1). The RP size used in the training process was determined as follows. To restrict the frequency components contained in the sounds, RPs were created after applying a low-pass filter. Given that the frequency band effective for detecting water leaks using sensing data on water pipes was shown to be 1,500 Hz or less in a previous study (Kawamura et al.), a low-pass filter that cuts frequencies over 1,500 Hz was adopted. Model for detecting the presence of water leaks using the CNN model The structure of the CNN model was chosen as follows. Various tools such as CNNs, AutoEncoders and recurrent neural networks have been proposed. This study constructed learning models using a CNN, which has demonstrated strong performance in many fields, with RPs as training data (Hatami et al.). A CNN is a model (Figure 5) that uses a neural network with layers called the convolution layer and the pooling layer. With a CNN, the input data of the convolutional layer is called the input feature map; the output data is called the output feature map; and the input and output data together are defined as the feature map. There were four convolutional layers in this study, with the configuration specified in shape notation. Meanwhile, the initial kernel values of the convolutional layers in this study were randomly generated. Then, the weights are updated so as to converge on the correct answer as the training progresses. This means that even if the data used for training are the same, the results may differ each time.
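The convolution and pooling operations that produce the feature maps can be illustrated with a tiny NumPy forward pass over one 64 × 64 RP. This is a sketch only: the single 3 × 3 kernel, the ReLU activation and the 2 × 2 pooling are illustrative assumptions, not the paper's actual four-layer configuration.

```python
import numpy as np

def conv2d(fmap, kernel):
    """'Valid' 2-D convolution (cross-correlation): input feature map -> output feature map."""
    kh, kw = kernel.shape
    h, w = fmap.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(fmap[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
rp = rng.random((64, 64))             # one 64x64 RP, as in the paper
kernel = rng.standard_normal((3, 3))  # randomly initialised kernel, updated during training
out = max_pool(np.maximum(conv2d(rp, kernel), 0.0))  # conv -> ReLU -> pool
print(out.shape)  # (31, 31): (64-3+1)=62 after convolution, halved by pooling
```

Repeating this stage with learned kernels, then flattening into fully connected layers, gives the structure sketched in Figure 5.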
The difference in balanced accuracy at the end of learning is not large, but in order to increase the reliability of the results, the same process was performed five times and expressed as an average value. In addition, in order to evaluate whether training was being performed properly, cross entropy was used as the loss function: H(p, q) = −Σ_x p(x) log q(x), where p denotes the classification target, and q the output of the model. The smaller the H value, the higher the probability of the correct answer; and the larger the value, the farther away from the correct answer. In other words, the fact that H decreases as the iteration progresses means that training is being performed normally. The fully connected layer refers to the layer connected to all outputs through the pooling layer (Duchi et al.). With a threshold of 0.5 as the standard, the fully connected layer was designed so that inputs no larger than 0.5 are output as 0, and inputs larger than 0.5 are output as 1. The value throughout the fully connected layers adopts the sigmoid function as the activation function before it goes to distinction by the threshold, and is the actual output of the CNN as a value between 0 and 1. On the other hand, to avoid 'learning with future data and evaluating with past data' in the data division, the later 90% of the data was used as training data and the earliest 10% as test data. Moreover, the hold-out method was used; this is one of the simplest of the various data division methods. The confusion matrix is given in Table 1: positive in Table 1 corresponds to 'when there is a water leak,' while negative corresponds to 'no water leaks.' Balanced accuracy was used to assess the detection ability of the model. RESULTS AND DISCUSSION The RPs of background noise (no water leaks) in Figure 4 are close to white noise and their shapes tend not to have regular features (weak deterministic properties).
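A minimal sketch of the evaluation pipeline described above, covering the chronological hold-out split, thresholding at 0.5, balanced accuracy and cross entropy, might look as follows in NumPy. The toy labels and probabilities are invented for illustration and are not the paper's data.

```python
import numpy as np

def time_ordered_holdout(X, y, test_frac=0.1):
    """Hold-out split keeping chronological order: the later 90% is used for
    training and the earliest 10% for testing, following the text."""
    n_test = int(len(X) * test_frac)
    return X[n_test:], y[n_test:], X[:n_test], y[:n_test]

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (leak recall) and specificity (no-leak recall)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tpr = np.mean(y_pred[y_true == 1] == 1)
    tnr = np.mean(y_pred[y_true == 0] == 0)
    return (tpr + tnr) / 2

def binary_cross_entropy(p, q, eps=1e-12):
    """Mean H(p, q) = -sum p log q for binary labels p and sigmoid outputs q."""
    p = np.asarray(p, float)
    q = np.clip(np.asarray(q, float), eps, 1 - eps)
    return -np.mean(p * np.log(q) + (1 - p) * np.log(1 - q))

y_true = np.array([1, 1, 0, 0, 1, 0])          # 1 = leak, 0 = no leak
probs  = np.array([0.9, 0.4, 0.2, 0.1, 0.8, 0.6])  # sigmoid outputs
y_pred = (probs > 0.5).astype(int)             # threshold at 0.5, as in the paper
print(balanced_accuracy(y_true, y_pred))       # 2/3 for this toy example
```

A decreasing cross entropy across iterations would indicate that training is proceeding normally, as the text notes.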
In contrast, it can be qualitatively determined that the RPs of water leak sounds exhibit shapes with regular features, such as a mesh or honeycomb (strong deterministic properties). The difference between the RPs in Figure 4 is thus readily apparent, because the sensor was installed close to the leak site. However, in practice, the exact leak point is not known, and so the sensor is likely to be placed further away. Furthermore, it is also likely to be difficult for a person to detect the characteristics of an RP by eye. Therefore, this study used the CNN model to develop a leak detection technique. Of course, since the amount of leakage in Area 9 is 0.001 L/min, which is the lowest value among all areas, the low recognition accuracy is not caused only by the material of the pipeline. It was also found that the balanced accuracy of the non-metallic group, Areas 5, 6, 7 and 8, which experienced some leakage, did not exceed 90% throughout all training processes. This trend was confirmed when looking at the average balanced accuracy in the verification process in Table 2, which lists the test results when using the trained model. In summary, a model for detecting the presence of water leaks using a CNN was constructed by using the visualised images as training data. The major findings were as follows. (1) By expressing background noise (no water leaks) and water leak sounds using RPs, the difference in the deterministic properties of both sounds was clarified. (2) It was found that the constructed CNN model could distinguish background noise from water leak sounds, showing more than 70% accuracy in 19 of the 20 cases. Regarding the direction of future research, the leak detection technique proposed here is not yet complete and ready for application, for the following reasons. The same data were used for training and verification. The hold-out method was used for verifying the proposed model using data that were different from the data used for training. However, it is true that all the data were generated in one location: during training, the sounds recorded in one area were also used to verify the performance of the model for the same area.
Accordingly, the test method may have suffered from overfitting, where training was concentrated on only one sample. As a result, a model trained in one area may not attain the same high recognition accuracy when used in another area. This problem can be resolved by performing CV. The detection model will be improved through CV in a future study. DATA AVAILABILITY STATEMENT Data cannot be made publicly available; readers should contact the corresponding author for details.
Removal of an Azo Textile Dye from Wastewater by Cyclodextrin-Epichlorohydrin Polymers Native cyclodextrins (CDs), α-, β- and γ-CDs, were employed to synthetise three different cyclodextrin-based polymers using epichlorohydrin (EPI) as a cross-linker. These polymers were applied as adsorbent materials to remove an azo textile dye, Direct Blue 78 (DB78), from water. The formation of inclusion complexes between the CDs alone and DB78 molecules was first studied in aqueous solutions. Then, adsorption experiments of the dye were performed by means of cyclodextrin/epichlorohydrin (CD/EPI) polymers. The effects of various parameters, such as contact time, adsorbent dosage, initial dye concentration, pH and temperature, were examined to determine the best adsorption conditions. The equilibrium isotherms and the adsorption kinetics were also analysed using appropriate mathematical models. The chemical-physical characteristics and the morphology of the adsorbent polymers were observed by differential scanning calorimetry and field emission scanning electron microscopy, respectively. The CD/EPI polymers showed a very good ability in the removal of DB78 from aqueous solution; indeed, the maximum efficiencies in the dye removal were found to be about 99% for the β-CD/EPI polymer and about 97% for the γ-CD/EPI polymer, at pH 6 and 25°C. It is possible to assume that the good adsorbent aptitude of the CD/EPI polymers is due to their dual ability to include the dye in the inner cavity of the CDs and to adsorb the dye on their porous surfaces by physical interaction. Textile dyes Textile and clothing industries generate remarkable pollution in natural water due to the discharge of large amounts of dye chemicals in their effluents [1]. These dyes give an undesirable colour to the water body, reducing sunlight penetration and influencing the photochemical and biological activities of aquatic life [2].
Dye molecules present several chemical structures and, depending on the functional groups of their chromophore, are classified as azo, anthraquinone, styryl, acridine, nitro, nitroso, benzodifuranone, diphenylmethane, triphenylmethane, azine, xanthene, cyanine, phthalocyanine, hemicyanine, diazahemicyanine, triarylmethane, stilbene, or oxazine dyes [2]. However, the azo compound class accounts for about 65-70% of all classes of dyes [3], and the azo dyes are the most common synthetic molecules released into the environment. It is now recognised that some azo dyes, under certain conditions, produce aromatic amines which are toxic, allergenic, carcinogenic, and mutagenic [4,5]. Due to the hazard of reduction products arising from the use of azo dyes, the European Union (EU) AZO Colourants Directive 2002/61/EC, which came into force in September 2003 and was later replaced by the REACH regulation, regulated the restrictions on the marketing and use of certain dangerous azo dyes. In addition, these contaminants are highly soluble in water and are very difficult to degrade, being stable to light irradiation, heat, and oxidation agents. Therefore, conventional wastewater treatment systems are not able to remove them [6], and it is necessary to treat the industrial effluents before releasing them into the environment. Hence, in recent years, numerous treatments for the removal of azo dyes from effluents have been studied. There are different methods for the treatment of wastewaters, including chemical, physical, and biological technologies [7]. All these methods present both advantages and disadvantages, which are shown in detail in Table 1 [1, 8-10]. As can be seen from Table 1, adsorption is one of the most efficient and popular methods for the removal of textile dyes from industrial effluents [11], and activated carbon is the most common material used for dye removal by adsorption.
This is due to its ability to adsorb cationic, acid, and mordant dyes and, to a slightly lesser extent, dispersed, direct, and reactive dyes [9]. However, commercially available activated carbons are very expensive, and so it is opportune to use low-cost carbons that are able to adsorb pollutants from wastewater. In recent years, research has been pointing towards the use of more efficient and inexpensive adsorbent materials for the treatment of coloured effluents. A wide variety of low-cost materials, such as biosorbents and by-products of industry and agriculture [12-15], are being evaluated as viable substitutes for activated carbon to remove dyes. Industrial and agricultural wastes are indeed very interesting adsorbent materials with good adsorption capacity, high selectivity, low cost, easy regeneration, and free availability. A recent paper [15] reported that oil mill solid waste, previously treated, is able to significantly reduce the amount of an azo direct dye in industrial textile wastewater. In particular experimental conditions, this material can adsorb 100% of the dye in solution with the possibility to recycle both the dye and the adsorbent [15]. Also, natural and biodegradable polymers have shown good biocompatibility and high efficiency in dye adsorption. Indeed, it was demonstrated that chitosan films [16], chitosan/polyamide nanofibres [17], and alginate-chitosan beads [18] are used as efficient and economic adsorbents for the removal of direct and anionic textile dyes. Numerous experiments have, moreover, been conducted to evaluate the possibility of using some polysaccharides, in particular starch and starch derivatives, as adsorbents for wastewater treatment [19,20]. Since the good adsorption properties towards dyes of polymers derived from starch have been established, in this study, cyclodextrin-based polymers were used to remove an azo textile dye, Direct Blue 78 (DB78), from wastewater.
Figure 1 shows the chemical structure of DB78, a tri-azo compound characterised by the presence of three azo bonds (─N═N─) and four sulphonate groups. Cyclodextrins Cyclodextrins (CDs) are natural cyclic oligosaccharides, derived from starch, that present a truncated cone structure with a relatively apolar inner cavity and an external hydrophilic face [21]. Due to this characteristic conformation, CDs are host molecules able to include in their cavity a wide range of guest molecules with appropriate dimensions, through the formation of host-guest inclusion complexes [22]. The native CDs, namely α-, β-, and γ-CDs, are constituted by 6, 7 and 8 glucopyranose units, respectively, connected by α(1,4)-linkages. CDs can be employed both in their native form and in functionalised form, after opportune chemical modifications. Owing to their numerous and specific properties, CDs are widely employed in several areas such as the pharmaceutical, biomedical, biotechnological, and industrial sectors [22,23]. Several studies have also reported that CDs and CD-based materials are used in the removal of dyes [18,19], organic pollutants, and heavy metals from water, soil, and the atmosphere [23,24]. Moreover, in a previous study [25], the interaction between some azo textile dyes and some commercial cyclodextrins was already demonstrated. Therefore, in this chapter, the study on the removal of DB78 dye from wastewater by using cyclodextrins is described in detail. However, since most CDs are highly soluble in water, insoluble CD-based materials were employed as dye adsorbents. Indeed, after the adsorption process, these materials can be easily removed from the treated solutions, obtaining clean water. Cyclodextrin-based polymers Among the numerous preparation methods of water-insoluble CD-based materials, cross-linked polymers, obtained by copolymerisation of CDs and coupling agents, have received great attention.
The most employed cross-linking agent is epichlorohydrin (1-chloro-2,3-epoxypropane) [26]. This cross-linker, shortly named EPI, is a bi-functional coupling agent which contains two reactive functional groups, an epoxide group and a chloroalkyl moiety. EPI forms bonds with polysaccharide molecules in the cross-linking step and/or with itself in the polymerisation step; the hydroxyl groups of native CDs, at the 2-, 3- and 6-positions of the glucose units, are available and reactive to form linkages. The -OH groups in the 6-positions are more reactive than those in the 3-positions; however, their reactivity depends on the reaction conditions, such as temperature and alkalinity, which must allow complete alkoxide formation [26]. Indeed, the secondary hydroxyl groups, which have pKa values of around 12.2 (at 298 K), can be deprotonated with hydroxide or hydride to form alcoholate sites. Consequently, typical methods used to synthesise CD-based polymers require the addition of NaOH, NaH or NaBH4 [27]. Despite its toxicity for humans, animals, algae, and bacteria and its potential pollutant characteristics for the environment, EPI is widely used to synthetise CD/EPI polymers [28] due to the simplicity and low cost of the synthesis. On the other hand, a careful purification of these polymers allows the elimination of free EPI and other residual solvents, making them good and non-toxic drug delivery systems for pharmacological formulations [29]. Furthermore, the CD/EPI polymers present high adsorption properties and high efficiency in pollutant removal, and are recyclable and easily recoverable [26-30]. Although β-CD is the most common cyclodextrin used to produce CD-based polymers, in this study α-, β- and γ-CDs were employed, and their respective polymers were synthetised. Preparation of DB78/CD solutions To verify the formation of inclusion complexes between the dye and the CDs, aqueous solutions of α-, β- and γ-CD were respectively added to DB78 solutions, at different molar ratios.
Stock solutions of DB78 and of α-, β- and γ-CD were first prepared in distilled water, and the desired volumes of these solutions were mixed and diluted to the chosen final volume to obtain the DB78/CD solutions. They were maintained under stirring for 10 min, at room temperature, to ensure the inclusion complex formation, and then studied by electrochemical measurements. Preparation of CD/EPI polymers The α-CD/EPI, β-CD/EPI, and γ-CD/EPI polymers were prepared by dissolving appropriate amounts of the respective CDs in water, in the presence of sodium borohydride. The mixtures were vigorously stirred at 50°C until the reactants were dissolved. Then, NaOH (40% w/w) solution was added and an excess of epichlorohydrin was slowly added dropwise. The mixtures were vigorously stirred and heated gently at 50°C. After about 5 hours, the solutions started to become viscous, and gelatinous solids were obtained. Then acetone was added, and the systems were maintained under stirring and heating for 10 min. After cooling, the insoluble polymers obtained were poured into water and filtered, and the resulting solids were purified by several Soxhlet extractions. Next, the CD/EPI polymers were dried in an oven at 50°C for 12 h, crushed, and utilised as adsorbent materials to remove DB78 from aqueous solution. Figure 2 shows the scheme of the CD/EPI polymer synthesis. Instruments Electrochemical measurements were performed in a standard three-electrode cell using a hanging mercury drop electrode (HMDE) as the working electrode. An Ag/AgCl, KCl(sat) electrode and a Pt rod were used as reference and counter electrodes, respectively. A 0.1 M LiClO4 solution was used as the supporting electrolyte. Voltammograms were recorded by means of an AUTOLAB PGSTAT10 potentiostat interfaced with a personal computer. Absorption spectra were recorded from 200 to 600 nm using a Shimadzu UV-1601 spectrophotometer.
Calorimetric measurements were performed using an LKB 2277 Thermal Activity Monitor Isothermal Microcalorimeter equipped with an LKB 2277-204 flow mixing cell. The photographs of the samples were collected by using a field emission scanning electron microscope (Merlin Compact/VP, Carl Zeiss Microscopy, Germany) with a secondary electron detector, using an acceleration voltage of 2 kV and an aperture size of 30 μm. Batch adsorption experiments Batch mode experiments were carried out to study the dye adsorption processes by the CD/EPI polymers. The required amounts of adsorbent were added to a fixed volume of dye solution, at the opportune concentration, under constant conditions of agitation rate (170 rpm), pH and temperature. At predetermined time intervals, the dye concentration in solution was evaluated by UV-Vis absorption measurements. Different variables, such as contact time, adsorbent dosage, initial dye concentration, pH and temperature, were analysed to identify the optimum adsorption conditions. These experiments were performed by varying the parameter under evaluation and maintaining the other parameters constant. Values of dye removal (%) and of the amount of dye adsorbed onto the adsorbent, q_t (mg/g), at time t were respectively calculated using the following expressions: Removal (%) = (C_i − C_t)/C_i × 100 (1) and q_t = (C_i − C_t)·V/m (2), where C_i and C_t (mg/L) are the dye concentrations in solution at the initial time and at adsorption time t, respectively, V (L) is the initial volume of dye solution and m (g) is the mass of adsorbent. All tests were performed in triplicate and the mean values were reported. Adsorption equilibrium isotherms Adsorption isotherms, by means of accurate mathematical models, make it possible to evaluate the adsorption behaviour and to describe how the adsorbate interacts with the adsorbent [31]. Among all the isotherm models developed, the most common ones, the Langmuir and Freundlich models, were used in this study.
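The removal and uptake expressions above can be checked with a few lines of Python. The residual concentration C_t = 0.12 mg/L used below is a hypothetical value chosen to roughly reproduce the 98.9% removal reported later in the text; it is not a measured datum.

```python
def dye_removal_percent(c_i, c_t):
    """Removal (%) = (C_i - C_t) / C_i * 100."""
    return (c_i - c_t) / c_i * 100.0

def amount_adsorbed(c_i, c_t, volume_l, mass_g):
    """q_t (mg/g) = (C_i - C_t) * V / m."""
    return (c_i - c_t) * volume_l / mass_g

# 10 mL of an 11.00 mg/L DB78 solution treated with 1.00 g of polymer
# (volume, concentration and mass are the values quoted in the text;
# the residual 0.12 mg/L is assumed for illustration)
print(dye_removal_percent(11.00, 0.12))          # ~98.9 %
print(amount_adsorbed(11.00, 0.12, 0.010, 1.00)) # ~0.109 mg/g
```

Note that V enters in litres and m in grams, so q_t comes out directly in mg/g.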
The Langmuir adsorption isotherm model presumes that the adsorption occurs on homogeneous sites of the adsorbent surface, forming a saturated monolayer of adsorbate on the outer surface of the adsorbent, and that the adsorption of each molecule onto the surface has equal adsorption activation energy [32,33]. The Freundlich adsorption isotherm is an empirical equation which describes heterogeneous systems having unequal available sites on the adsorbent surface with different adsorption energies [31,32]. The adsorption isotherms were evaluated by adding different amounts of CD/EPI polymers to dye solutions and maintaining the systems at a constant temperature of 25°C under continuous stirring until the equilibrium was achieved. Values of dye concentration were measured before and after the adsorption processes, and the obtained experimental data were fitted with the Langmuir and Freundlich models. The values of the linear regression correlation coefficient R^2 give information about the best-fit model. The linearised form of the Langmuir model is represented by Eq. (3): 1/q_e = 1/q_m + 1/(b·q_m·C_e) (3), where q_e (mg/g) is the amount of the dye adsorbed on the polymer at equilibrium, q_m (mg/g) is the maximum monolayer amount of DB78 adsorbed per unit mass of adsorbent, C_e (mg/L) is the concentration of dye in solution at equilibrium, and b is the constant related to the affinity of the binding sites (L/mg). From the intercept and slope of the plot of 1/q_e versus 1/C_e, it is possible to obtain the values of q_m and b, respectively. Moreover, the Langmuir isotherm can be expressed in terms of a dimensionless constant separation factor, R_L, defined by the following equation: R_L = 1/(1 + b·C_0) (4), where C_0 is the initial concentration of adsorbate (mg/L) and b (L/mg) is the Langmuir constant. The value of R_L indicates the trend of the adsorption process; indeed, the isotherm can be either favourable (0 < R_L < 1), unfavourable (R_L > 1), linear (R_L = 1) or irreversible (R_L = 0). The Langmuir values, q_m, b, and R_L, are presented in Table 2.
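A sketch of the Langmuir fitting procedure in NumPy, using synthetic data generated from known parameters so the fit can be verified. The values q_m = 2.0 mg/g and b = 0.5 L/mg are arbitrary illustrative choices, not the study's results.

```python
import numpy as np

def langmuir_fit(c_e, q_e):
    """Linearised Langmuir: 1/q_e = 1/q_m + (1/(b*q_m)) * (1/C_e).
    Returns (q_m, b) from the intercept and slope of 1/q_e vs 1/C_e."""
    slope, intercept = np.polyfit(1.0 / np.asarray(c_e), 1.0 / np.asarray(q_e), 1)
    q_m = 1.0 / intercept        # intercept = 1/q_m
    b = intercept / slope        # slope = 1/(b*q_m)
    return q_m, b

def separation_factor(b, c_0):
    """R_L = 1 / (1 + b*C_0); 0 < R_L < 1 indicates favourable adsorption."""
    return 1.0 / (1.0 + b * c_0)

# Synthetic equilibrium data generated from q_m = 2.0 mg/g, b = 0.5 L/mg
c_e = np.array([0.5, 1.0, 2.0, 5.0, 10.0])            # mg/L
q_e = 2.0 * 0.5 * c_e / (1.0 + 0.5 * c_e)             # Langmuir form q = q_m*b*C/(1+b*C)
q_m, b = langmuir_fit(c_e, q_e)
print(round(q_m, 3), round(b, 3))                     # recovers 2.0 and 0.5
print(separation_factor(b, 11.0))                     # R_L for C_0 = 11 mg/L, within (0, 1)
```

Since the synthetic points lie exactly on the Langmuir curve, the linear fit recovers the generating parameters; with real data, the R^2 of this line indicates how well the model applies.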
The linear form of the Freundlich equation is: ln q_e = ln K_F + (1/n) ln C_e (5), where q_e (mg/g) is the amount of DB78 adsorbed at equilibrium, C_e (mg/L) is the concentration of the dye in solution at equilibrium, K_F (L/g) is the Freundlich constant related to the maximum adsorption capacity of the adsorbent, and n (dimensionless) is the heterogeneity factor. The values of K_F and n, reported in Table 2, were calculated respectively from the intercept and slope of the linear plot of ln q_e versus ln C_e. The magnitude of n gives an indication of the favourability of the adsorption process: when n = 1, the adsorption is linear; when n > 1, the adsorption is favourable [31,33]. Thermodynamic analysis Thermodynamic parameters, such as the Gibbs free energy change ΔG° (J mol−1), enthalpy change ΔH° (J mol−1) and entropy change ΔS° (J mol−1 K−1), make it possible to comprehend the nature of the adsorption process and the effect of temperature on adsorption. These parameters can be calculated using the following relations [34]: ΔG° = −RT ln K_c (6), where R is the universal gas constant (8.314 J mol−1 K−1), T is the solution temperature (K) and K_c is defined as the ratio between the equilibrium concentration of the dye on the adsorbent and that remaining in solution (7). Together with ΔG° = ΔH° − TΔS° (8), Eqs. (6) and (8) can be rewritten as the van't Hoff equation: ln K_c = ΔS°/R − ΔH°/(RT) (9). ΔH° and ΔS° were obtained from the plot of Eq. (9), while the ΔG° values were determined from Eq. (8). Electrochemical measurements Before testing the ability of the CD/EPI polymers to remove the azo dye, the interactions between DB78 and α-, β- and γ-CD were investigated in solution by electrochemical measurements. Generally, electrochemical studies of different azo dyes show that the electroreduction of the N═N double bond occurs at the hydrazo stage (HN−NH), via the consumption of 2e−/2H+, or at the amine stage (─NH2), via the consumption of 4e−/4H+, in one or two steps depending on the chemical structure of the investigated azo compound, the nature of the adjacent substituents and the pH of the medium [35].
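Returning to the thermodynamic analysis above, fitting the van't Hoff form to ln K_c versus 1/T recovers ΔH° and ΔS°, from which ΔG° follows. The ΔH° = −20 kJ/mol and ΔS° = −40 J/(mol K) used to generate the synthetic K_c values below are illustrative assumptions, not measured quantities.

```python
import numpy as np

R = 8.314  # universal gas constant, J mol^-1 K^-1

def vant_hoff(temps_K, k_c):
    """Fit ln K_c = dS/R - dH/(R*T); returns (dH, dS) in J/mol and J/(mol K)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(k_c), 1)
    return -slope * R, intercept * R

def gibbs_free_energy(T, k_c):
    """dG = -R*T*ln K_c (Eq. (6) form)."""
    return -R * T * np.log(k_c)

# Synthetic equilibrium constants generated from dH = -20 kJ/mol, dS = -40 J/(mol K)
T = np.array([288.0, 298.0, 308.0])
k_c = np.exp(-40.0 / R - (-20_000.0) / (R * T))
dH, dS = vant_hoff(T, k_c)
print(round(dH), round(dS))                      # -20000 and -40: parameters recovered
print(round(gibbs_free_energy(298.0, k_c[1])))   # -8080 J/mol (= dH - T*dS at 298 K)
```

Negative ΔG° and ΔH° of this kind would indicate a spontaneous, exothermic adsorption; the actual signs for DB78 depend on the measured K_c values.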
Furthermore, the electrochemical reduction of azo compounds is an irreversible process complicated by preceding and following chemical reactions leading to the cleavage of the azo bond and resulting in various degradation products [36]. Figure 3 reports the cyclic voltammetry measurements of DB78, which presented three cathodic peaks located in the range from −0.2 to −1.0 V. The first two weak waves (I and II) were positioned at −0.15 and −0.70 V, respectively, while the more intense wave (III) was located at about −0.80 V. E_p,I and E_p,II are both attributable to the electroreduction of the azo moieties [37,38]. The different potentials for the azo group reduction are due to the different substituents present in the ortho position with respect to it. The first peak can be attributed to the electroreduction of the azo group bearing the ortho -OH group, which facilitates the electroreduction due to its electron-donating effect. The electrochemical behaviour of DB78 in the presence of increasing CD concentrations was then analysed. Although the addition of α-CD did not greatly influence the cyclic voltammograms of DB78 (data not shown), it is not possible to affirm that there is no interaction between the dye and α-CD, but only that this technique did not allow detailed information to be obtained. On the contrary, the addition of β-CD and γ-CD, at increasing molar ratios, produced regular changes in the cyclic voltammograms of DB78. Indeed, in Figure 4a and b, it is possible to observe a marked decrease of the current intensity values with increasing CD amount, particularly in the case of γ-CD, while no shifts of the potential peaks were detected. These regular variations indicate that the dye was reduced with more difficulty because of its involvement in the inclusion complex. The inclusion of the azo groups of the dye inside the cavity of the CDs prevents the interaction with the electrode and reduces the diffusion coefficient of the molecule, determining the reduction of the peak current intensity.
Consequently, the electrochemical measurements confirmed the formation of inclusion complexes between DB78 and β-CD and between DB78 and γ-CD. Dye adsorption efficiency by different CD/EPI polymers To evaluate the most appropriate material for adsorbing DB78, three different types of adsorbents, the α-CD/EPI, β-CD/EPI, and γ-CD/EPI polymers, were used. Ten millilitres of dye solution (11.00 mg/L) at pH 6 and 25°C were analysed using 1.00 g of polymer as adsorbent. Figure 5a shows that the β-CD/EPI polymer presented a better ability to remove DB78 from solution than the other polymers. The dye removal efficiency was 98.90% with the β-CD/EPI polymer, in contrast to 97.25% and only 92.70% when the γ-CD/EPI and α-CD/EPI polymers were respectively used. Consequently, all further adsorption experiments were carried out on the β-CD/EPI and γ-CD/EPI polymers. It is possible to suppose that the adsorption is based not only on a physical adsorption process in the polymer networks but also on inclusion complex formation [39]. Therefore, β- and γ-CD, which are characterised by a wider cavity, can form more host-guest supramolecular interactions with the dye than α-CD. However, β-CD/EPI, despite the intermediate size of β-CD between α- and γ-CD, showed the best efficiency in the removal of the dye. This behaviour is due to the higher complexing ability and stability of β-CD with cross-linking agents [40]. Effect of contact time To determine the effect of contact time on the adsorption processes, 10 mL of DB78 (11.00 mg/L) was maintained for 24 h under continuous stirring with 1.00 g of the β- and γ-CD/EPI polymers at pH 6 and 25°C. The concentrations of dye in solution were measured at several times. Figure 5b shows that both polymers presented the maximum dye removal after 2 h of the adsorption process, and no further changes were observed after 24 h. Therefore, it is possible to affirm that the time required to achieve the equilibrium was about 2 h. During this time, the complete saturation of the active sites of the polymers was reached.
Effect of adsorbent dosage
The amount of adsorbent used is another important parameter that affects the uptake of dye. Indeed, quantitative removal cannot be achieved when the polymer dosage is below the optimum amount. To determine the smallest quantity of polymer able to adsorb the greatest amount of DB78, increasing dosages of adsorbent, from 0.05 to 1.25 g, were added to 10 mL of dye solution (11.00 mg/L). The systems, maintained at pH 6 and 25°C, were stirred until equilibrium was achieved, and the remaining amount of dye in solution was measured. Figure 6a and b present the effect of adsorbent dosage for the β-CD/EPI and γ-CD/EPI polymers, respectively. Both polymers show the same behaviour: the percentage of dye removal increased with increasing polymer dosage, due to the greater availability of adsorbent surface sites [18]. In the presence of the β-CD/EPI polymer (Figure 6a), the removal of dye from the initial solution increased from 41.20 to 98.90% as the adsorbent dosage increased from 0.05 to 1.00 g. When the γ-CD/EPI polymer (Figure 6b) was used as adsorbent, the removal of DB78 increased from 52.01 to 97.25% over the same dosage range. A further increase in polymer dosage (1.25 g) did not improve the removal with either polymer, since the systems had already achieved the maximum adsorption efficiency. Therefore, 1.00 g of polymer was used for further measurements.

Effect of initial dye concentration
To study the effect of the initial dye concentration on the adsorption mechanism onto the CD-based polymers, DB78 solutions of increasing concentration were used. The experiments were performed at pH 6 and 25°C, using a constant volume of dye solution (10 mL) and a constant dosage (1.00 g) of β-CD/EPI or γ-CD/EPI polymer. The experimental results show that the amount of dye adsorbed onto the adsorbent, q_t (mg/g), increased with the initial concentration of dye.
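The uptake q_t just introduced follows the standard mass balance (C0 − Ct)·V/m. A minimal sketch with assumed example values (the residual concentration below is illustrative, not the chapter's raw data):

```python
# Standard adsorption mass balance: uptake in mg of dye per g of adsorbent.
def q_uptake(c0_mg_per_l: float, ct_mg_per_l: float,
             volume_l: float, mass_g: float) -> float:
    """Adsorbed amount q_t (mg/g) = (C0 - Ct) * V / m."""
    return (c0_mg_per_l - ct_mg_per_l) * volume_l / mass_g

# Assumed example: 10 mL of a 70.00 mg/L solution, 1.00 g of polymer,
# residual concentration 5.0 mg/L (illustrative value).
print(round(q_uptake(70.00, 5.0, 0.010, 1.00), 2))  # 0.65 mg/g
```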
This behaviour was more evident in the case of the β-CD/EPI polymer (Figure 7a), where the amount of dye adsorbed onto the polymer at equilibrium, q_e, improved from 0.32 to 1.99 mg/g as the initial concentration of DB78 increased from 11.00 to 70.00 mg/L. In the case of the γ-CD/EPI polymer (Figure 7b), q_e increased from 0.24 to 1.28 mg/g when the initial concentration of dye was increased from 11.00 to 70.00 mg/L. This occurs because the increase in the initial concentration of dye enhances the favourable interactions, raising the driving force able to overcome the resistance to mass transfer of the dye between the aqueous and the solid phase [41]. Furthermore, these measurements again demonstrate the better adsorption ability of the β-CD/EPI polymer compared to the γ-CD/EPI polymer.

Effect of initial pH
To study the influence of pH on the adsorption of the azo dye onto the two polymers, experiments were carried out at pH 2, 6 and 11 with a contact time of 2 h. Figure 8a and b report the results obtained with the β-CD/EPI and γ-CD/EPI polymers, respectively. Generally, the initial pH of the solution plays a significant role in the chemistry of both adsorbent and dye; in this case, however, no significant changes in the adsorption process were observed at the different pH conditions. Indeed, for both polymers, no important variations in the adsorption efficiency were observed under acid or basic conditions, although the highest percentage of dye removal was obtained at pH 6. It is possible to suppose that at alkaline pH, excess ─OH ions compete with the anionic dye for the adsorption sites. Indeed, as the pH of the system increases, the number of negatively charged sites increases as well, and the number of positively charged sites decreases. A negatively charged surface site on the adsorbent does not favour the adsorption of the anionic dye, due to electrostatic repulsion [42].
On the other hand, at low pH values, the sulphonate groups of the dye are protonated and the number of positively charged sites increases, again inducing electrostatic repulsion. Therefore, all experiments were performed at pH 6, which is the natural pH of the DB78 aqueous solution.

Adsorption equilibrium isotherms
The adsorption isotherms of DB78 onto the CD/EPI polymers were determined at pH 6, maintaining the systems at constant temperatures of 25, 50 and 80°C. Various quantities of adsorbent, from 0.05 to 1.00 g, were added to 10 mL of dye (80.00 mg/L), and the adsorption process was maintained until the equilibrium state was reached. The Langmuir and Freundlich parameters are listed in Table 2, and the value of the linear regression correlation coefficient R² was used to determine the best-fit model. Based on R², the results show that the adsorption process with both polymers is better represented by the Langmuir isotherm model than by the Freundlich equation. The applicability of the Langmuir isotherm indicates a monolayer and homogeneous adsorption of the dye onto the surface of the polymers, where the adsorption of each molecule onto the surface has equal adsorption activation energy [31]. These results agree with a study reported in the literature [42], where some azo dyes were removed by β-cyclodextrin-based polymers. Furthermore, these measurements show that increasing the temperature from 25 to 80°C yielded a higher maximum adsorption capacity. Since the R_L values were between 0 and 1, it is possible to underline that the β-CD/EPI and γ-CD/EPI polymers are good and favourable adsorbents for DB78 removal.

Thermodynamic analysis
The thermodynamic parameters for the adsorption of DB78 dye on the β-CD/EPI and γ-CD/EPI polymers are summarised in Table 3. The negative values of ΔG° indicate that the dye adsorption by these polymers is a spontaneous and favourable process.
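The relation ΔG° = ΔH° − TΔS° behind Table 3 can be checked with a short sketch. The numbers below use the chapter's β-CD/EPI values (ΔH° = 4.94 kJ/mol and ΔG° = −9.85 kJ/mol, assumed here to correspond to 25°C), together with the −20 to 0 kJ/mol physisorption window used in this section:

```python
# Back-of-the-envelope check of dG = dH - T*dS, solved for dS.
def entropy_change(dH_kj_mol: float, dG_kj_mol: float, T_k: float) -> float:
    """Entropy change dS in J/(mol K) from dG = dH - T*dS."""
    return 1000.0 * (dH_kj_mol - dG_kj_mol) / T_k

# Chapter's beta-CD/EPI values; the 298.15 K pairing is an assumption.
dS = entropy_change(4.94, -9.85, 298.15)
print(round(dS, 1))  # ~49.6 J/(mol K): positive, i.e. increased disorder

def is_physisorption(dG_kj_mol: float) -> bool:
    """Physical-adsorption criterion cited in the text: -20 < dG < 0 kJ/mol."""
    return -20.0 < dG_kj_mol < 0.0

print(is_physisorption(-9.85), is_physisorption(-120.0))  # True False
```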
Since the obtained values of the free energy change were in the range of −9.85 to −12.01 kJ mol⁻¹ for the β-CD/EPI polymer, and in the range of −11.26 to −14.25 kJ mol⁻¹ for the γ-CD/EPI polymer, it is possible to affirm that the adsorption was principally physical. Indeed, some studies report that adsorption is classified as physical when the ΔG° values range between −20 and 0 kJ mol⁻¹, and as chemical when the ΔG° values range from −80 to −400 kJ mol⁻¹ [34]. The positive values of ΔS° for both polymers show that the disorder of the system increased at the solid-solution interface during the adsorption of DB78 on the polymers. The ΔH° values for the β-CD/EPI and γ-CD/EPI polymers were 4.94 and 1.86 kJ mol⁻¹, respectively. These positive values indicate that the adsorption followed an endothermic process, in agreement with the results derived from the isotherm measurements.

Thermal analysis
The thermal analysis of the β-CD/EPI polymer, the γ-CD/EPI polymer, and the respective polymers loaded with DB78 was performed by differential scanning calorimetry (DSC) under an N₂ atmosphere with a heating rate of 20°C/min. As shown in Figure 9a, the DSC thermogram of the β-CD/EPI polymer exhibited an endothermic peak at about 280°C [43]. After the interaction of this polymer with DB78, the thermogram presented a double endothermic peak at about 250 and 280°C. Since the first signal corresponds to the decomposition temperature of the dye alone, it is possible to affirm that DB78 retains its thermal instability even after adsorption. This result allows us to hypothesise that the interaction between DB78 and the polymer did not occur only in the internal cavities of the cyclodextrins but also in the pores present on the external surface of the polymer. In Figure 9b, the DSC thermogram of the γ-CD/EPI polymer loaded with DB78 no longer exhibits the typical thermal decomposition phenomena shown in the thermograms of DB78 and of the γ-CD/EPI polymer.
This confirms the interaction between DB78 and γ-CD/EPI: the inclusion of the dye into the γ-CD by weak forces stabilises both the adsorbent and the adsorbate.

Morphologic study
The CD-based polymers were observed by field emission scanning electron microscopy (FESEM) to examine their morphology. FESEM images of the unloaded β-CD/EPI and γ-CD/EPI polymers are shown in Figure 10a and c, respectively. It is possible to observe that these materials present a very porous, rough and irregular structure, whose cavities are able to adsorb the DB78 molecules. Moreover, the presence of the loaded dye molecules on the polymers did not significantly affect the morphology of the samples, as reported in Figure 10b and d, confirming the weak and physical character of the adsorption process.

Removal of an Azo Textile Dye from Wastewater by Cyclodextrin-Epichlorohydrin Polymers. http://dx.doi.org/10.5772/intechopen.72502
Figure 11. Adsorption of DB78 (11.00 mg/L) onto β-CD/EPI polymer at pH 6 and at a constant temperature of 25°C before and after 2 h of treatment. (a) Image of β-CD/EPI polymer before and after the adsorption process and (b) image of DB78 solution in the presence of β-CD/EPI polymer before and after the adsorption process.

Conclusion
The adsorption results show that the β- and γ-CD/EPI polymers exhibit good adsorption properties towards the azo dye Direct Blue 78 (Figure 11a). The maximum efficiencies in dye removal, obtained at pH 6 and 25°C with an initial dye concentration of 11.00 mg/L and 1.00 g of adsorbent, were found to be about 99% for the β-CD/EPI polymer and about 97% for the γ-CD/EPI polymer. The proposed adsorption mechanism involves several kinds of interactions, such as physical adsorption in the polymer network, hydrogen bonding, and the formation of inclusion complexes through host-guest interactions due to the presence of CD molecules.
As illustrated in Figure 11b, this adsorption method allows clean water to be obtained after 2 h of treatment with the polymers, water that could be reused in further industrial fabric-dyeing processes. Furthermore, these polymers could be promising adsorbents for industrial applications due to their low production cost and the possibility of recycling them over several adsorption cycles.
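The isotherm quantities reported above (the Langmuir q_max and K_L, and the separation factor R_L) are typically recovered from concentration data by a linear fit of the Langmuir equation. A sketch of that procedure on synthetic data (the q_max and K_L values below are illustrative, not the chapter's fitted parameters):

```python
import numpy as np

# Synthetic equilibrium data generated from a known Langmuir isotherm
# (illustrative parameters, not the chapter's values).
q_max_true, K_L_true = 2.0, 0.5                        # mg/g, L/mg
ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])       # residual conc., mg/L
qe = q_max_true * K_L_true * ce / (1 + K_L_true * ce)  # Langmuir isotherm

# Linearized Langmuir form: Ce/qe = Ce/q_max + 1/(K_L * q_max),
# so a straight-line fit of Ce/qe against Ce recovers both parameters.
slope, intercept = np.polyfit(ce, ce / qe, 1)
q_max_fit = 1 / slope
K_L_fit = 1 / (intercept * q_max_fit)

# Separation factor R_L = 1 / (1 + K_L * C0); 0 < R_L < 1 means favourable.
c0 = 80.0  # initial dye concentration used for the isotherms, mg/L
R_L = 1 / (1 + K_L_fit * c0)
print(round(q_max_fit, 3), round(K_L_fit, 3), 0 < R_L < 1)  # 2.0 0.5 True
```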
Doppler Modeling and Simulation of Train-to-Train Communication in Metro Tunnel Environment
The communication system of urban rail transit is gradually changing from train-to-ground (T2G) to train-to-train (T2T) communication. The subway can travel at speeds of up to 200 km/h in the tunnel environment, and communication between trains can be conducted via millimeter waves with minimum latency. A precise channel model is required to test the reliability of T2T communication over a non-line-of-sight (NLoS) Doppler channel in a tunnel scenario. In this paper, the description of the ray angle for a T2T communication terminal is established, and the mapping relationship of the multipath signals from the transmitter to the receiver is established. The channel parameters, including the angle, amplitude, and mapping matrix from the transmitter to the receiver, are obtained by the ray-tracing method. In addition, the channel model for the T2T communication system with multipath propagation is constructed. The Doppler spread simulation results in this paper are consistent with the RT simulation results. A channel physics modelling approach using an IQ vector phase shifter to achieve Doppler spread in the RF domain is proposed when paired with the Doppler spread model.
Sensors 2022, 22, 4289

Speeds of more than 200 km/h along a given track in a tunnel environment cause higher Doppler shifts. Thus far, channel modeling work in subway tunnels has mainly studied the influence of the LoS path and of single-bounced and double-bounced signals on the channel in the same coordinate space in the V2G communication scene [38][39][40]. In the T2T communication system, the transceiver ends are located in different locations, so it is difficult to apply the same coordinate-space analysis. In addition, the movement of the antennas causes rapid changes in the channel environment, complicating the signal propagation process.
The influence of multi-bounced signals and of the movement of the transceiver antennas on the Doppler shift must be considered, so a Doppler spread model suited to the movement of both terminals in the tunnel must be established. In this paper, the receiver (Rx) and transmitter (Tx) coordinate systems are established, and the Doppler spread model of the multipath signals from the transmitter to the receiver is built in their respective coordinate systems. The mapping matrix approach is proposed as an innovative solution to the problem of signal matching at the receiver and transmitter, together with a method for obtaining the mapping matrix. In order to verify the proposed Doppler model for T2T communication in the tunnel, the ray-tracing (RT) [41] approach is utilized to obtain the angle and amplitude of the signals at the transmitting and receiving ends. By comparing the simulation results of the RT approach with the simulation results of the Doppler spread model, the validity of the Doppler spread model is proved. A channel physical simulation method using IQ vector phase shifters [42,43] is proposed to execute the T2T communication channel simulation in the tunnel environment, which can be used for future tunnel scenarios when combined with the T2T channel model and test-data analysis in the tunnel [44][45][46]. It can provide a reference for 5G millimeter wave physical channel simulation in future tunnel environments.

Article Structure
The rest of this paper is organized as follows: In Section 2, the Doppler shift models for the transmitted and received signals are established, followed by the solution method for the multipath signal mapping relationship at the transmit and receive ends and the multipath signal's Doppler spread model. In Section 3, the RT simulation method is used to obtain the angle, amplitude, and mapping relationship of the transmitted and received signals, and the Doppler spread simulation of the theoretical model is discussed.
In Section 4, combined with the analysis of the communication signal in the tunnel, a method to realize physical channel simulation using an IQ vector phase shifter is proposed. Finally, Section 5 provides the conclusion of this paper.

Figure 1 depicts a T2T communication scenario in which the front and rear trains run through a tunnel with a width W and a height H. The complicated propagation process of the multipath signals through straight or curved tunnels, such as reflection and scattering, is represented by the multipath link between the two trains; the space in which this link lies represents the complex channel environment. A three-dimensional coordinate system is established at each transceiver antenna, with the tunnel depth as the x-axis, the height as the z-axis, and the horizontal direction as the y-axis. The downlink of the T2T communication consists of the transmitting antenna Tx located in the rear train and the receiving antenna Rx located in the front train. In addition, v_t and v_r represent the moving speeds of the transmitting antenna and the receiving antenna, and the moving direction follows the x-axis. The received signal consists of the NLoS signals that reach the receiving antenna after multiple reflections and scattering; in the LoS scenario, it also includes the LoS signal from the transmitting antenna to the receiving antenna. In order to study the Doppler spread of T2T communication in the tunnel environment, it is assumed that the receiving and transmitting antennas are omnidirectional uniform antennas.
In order to study the doppler spread of T2T comm cation in the tunnel environment, it is assumed that the receiving and the transmi antennas are omnidirectional uniform antennas. where , represents the angle between the signal ( , , , , , ) and the mo direction of Tx. ̂, and ̂ represent the unit vector in the direction of the transm signal and the unit vector in the direction of Tx movement. 0 is the signal carrier w length, and (•) represents the transpose of the matrix. Due to the movement of th ceiving antenna, the Doppler shift , of the signal ( , , , , , ) at the recei end can be expressed as Wireless Channel Model of T2T Communication and I = M, I and M represent the number of multipaths of transmitted and receiving signals. Let P t,i denote the transmit power of the i-th path signal, and P r,m denote the receive power of the m-th path signal. Then the spherical coordinate form of the transmitter signal and the receiver signal can be expressed as (P t,i , θ ZOD,i , φ AOD,i ) and (P r,m , θ ZOA,m , φ AOA,m ). Due to the movement of the transmitting antenna, the Doppler shift f i d,t of the signal (P t,i , θ ZOD,i , φ AOD,i ) can be expressed as where ψ t,i represents the angle between the signal (P t,i , θ ZOD,i , φ AOD,i ) and the moving direction of Tx.r t,i andv t represent the unit vector in the direction of the transmitted signal and the unit vector in the direction of Tx movement. λ 0 is the signal carrier wavelength, and (·) T represents the transpose of the matrix. Due to the movement of the receiving antenna, the Doppler shift f m d,r of the signal (P r,m , θ ZOA,m , φ AOA,m ) at the receiving end can be expressed as cos(ψ r,m ) =r T r,m ·v r (6) where ψ r,m represents the angle between the signal (P r,m , θ ZOA,m , φ AOA,m ) and the moving direction of Rx.r r,m andv r represent the unit vector of the received signal direction and the unit vector of the Rx moving direction. 
Matching of Receiving and Transmitting Rays
In the proposed system, the transmitted and the received signals have a one-to-one mapping relationship, which means that for each received signal, a unique corresponding transmitted signal can always be located, completing a full signal chain from Tx to Rx. According to the roughness of the sidewall of the tunnel, the mapping relationship between the transmitted and received signals can be solved by the spatial mirror method and the random scatterer distribution method. The space mirror method is shown in Figure 3. For the NLoS propagation path in the tunnel, this method considers reflection only, and the mirror space is formed with the reflection surface as the axis. For a signal reflected k (k = 0, 1, 2, …, K) times, the mirror image point Tx′ of the transmitting antenna Tx is obtained through k mirror-imaging operations. The scatterers' locations on the tunnel's side wall follow a uniform random distribution, as shown in Figure 4. In this model, scatterers are randomly distributed along the inner wall of the tunnel.
When the NLoS signal passes through a scatterer, the rough scatterer surface leads to a certain randomness in the direction of the secondary radiation wave. When the signal is repeatedly scattered, the mapping between the transmitted signal and the received signal can be considered a random mapping.
The mapping relationship between the transmitter multipath signals and the receiver multipath signals can be expressed by a mapping matrix A with I rows and M columns. The element a_im in the mapping matrix represents the mapping relationship between the signal (P_r,m, θ_ZOA,m, φ_AOA,m) at the receiving end and the signal (P_t,i, θ_ZOD,i, φ_AOD,i) at the transmitting end. Let the transmitter signal (P_t,x, θ_ZOD,x, φ_AOD,x) and the receiver signal (P_r,y, θ_ZOA,y, φ_AOA,y) be the same signal, where 1 ≤ x ≤ I and 1 ≤ y ≤ M; then the element a_im equals 1 when i = x and m = y, and 0 otherwise. The matrix N is the subscript matrix of the multipath signals at the receiving end. After rearranging with the mapping matrix, the subscript matrix Γ corresponding to the signals at the transmitting end is obtained.
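As a toy illustration of the mapping matrix and the rearrangement step, the sketch below builds A for three invented ray pairings and uses it to reorder receiver-side Doppler shifts into transmitter order (all indices and values are hypothetical):

```python
import numpy as np

# Mapping matrix A: a_im = 1 when the i-th transmitted ray and the m-th
# received ray are the same physical path, 0 otherwise. The 3-ray
# permutation below is invented for the demonstration.
I = M = 3
pairs = [(0, 2), (1, 0), (2, 1)]       # (tx index, rx index) of matched rays

A = np.zeros((I, M), dtype=int)
for i, m in pairs:
    A[i, m] = 1

# With exactly one 1 per row and column, A acts as a permutation, so it
# rearranges Rx-indexed Doppler shifts into transmitter order.
f_d_rx = np.array([100.0, 200.0, 300.0])   # Hz, indexed by rx ray m
f_d_rx_in_tx_order = A @ f_d_rx
print(f_d_rx_in_tx_order)                  # [300. 100. 200.]
```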
Doppler Effect at the Transmitter and Receiver
The Doppler shifts of the rays at the receiver are rearranged so that they are in the same order as the corresponding rays at the transmitter. When both the receiving and transmitting ends move, the Doppler shift f_d^i of the communication signal can be expressed as the sum of the transmit-side shift and the matched receive-side shift. Assuming that the transmitted signal is x_p(t) = cos(2π f_c t), the bandpass form R_p^i(t) of the i-th path received signal can be expressed as

R_p^i(t) = c_i cos(2π(f_c + f_d^i)(t − τ_i))

where c_i and τ_i represent the power-normalized amplitude and the time delay of the i-th path signal, so the baseband form R_b^i(t) of the i-th path received signal is expressed as

R_b^i(t) = c_i e^{j(2π f_d^i t − θ_i)}, θ_i = 2π(f_c + f_d^i)τ_i ≈ 2π f_c τ_i for f_c ≫ f_d^i;

thus, the baseband form of the multipath received signal can be expressed as

R_b(t) = Σ_i c_i e^{j(2π f_d^i t − θ_i)}.

When the number of rays I → ∞, the received signal can be expressed as an integral over all frequency components from the minimum Doppler frequency f_d,min to the maximum Doppler frequency f_d,max:

R_b(t) = ∫ from f_d,min to f_d,max of P(f_d) e^{j2π f_d t} df_d

where P(f_d) represents the continuous Doppler spectral function.

Simulation Model and Parameter Settings of RT
In order to verify the Doppler model proposed in this paper, the RT simulation method is used in the Wireless InSite (WI) simulation software to obtain the angle and amplitude characteristics of the transmitted and received signals, and the mapping relationship of the transmitted and received signals is extracted. The 3D model of the tunnel is built using the 3D modeling software Inventor. The tunnel is a rectangular straight tunnel with a length of 300 m, a width of 5 m, and a height of 5 m. In the simulation, the signal carrier frequency is 28 GHz, and both the transmitting antenna and the receiving antenna are omnidirectional antennas, located at the center of the tunnel, 100 m and 200 m away from the tunnel entrance. The distance between the antennas and the tunnel ground is 2 m, as shown in Figure 5. The tunnel structure and the transmit and receive antenna parameters are listed in Table 1.
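The baseband multipath sum above, and the discrete Doppler spectrum it produces, can be sketched numerically; the amplitudes, shifts, and phases below are illustrative values, not the RT results:

```python
import numpy as np

# Baseband multipath signal R_b(t) = sum_i c_i * exp(j*(2*pi*f_d_i*t - theta_i)),
# sampled over 1 s so every illustrative Doppler shift falls on an exact
# 1 Hz FFT bin (no spectral leakage).
fs = 8192.0                                # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)              # 1 s observation window
c_i = np.array([1.0, 0.5, 0.25])           # normalized path amplitudes
f_d = np.array([2074.0, 1500.0, -800.0])   # per-path Doppler shifts (Hz)
theta = np.array([0.3, 1.1, 2.0])          # carrier-delay phases (rad)

r_b = sum(c * np.exp(1j * (2 * np.pi * f * t - th))
          for c, f, th in zip(c_i, f_d, theta))

# The magnitude spectrum exposes one line per path; the strongest line
# sits at the Doppler shift of the strongest path.
spectrum = np.abs(np.fft.fftshift(np.fft.fft(r_b))) / len(t)
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
peak = freqs[np.argmax(spectrum)]
print(peak)                                # 2074.0
```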
The tunnel material and the simulation ray parameter settings are shown in Table 2. The tunnel material is concrete, and parameters such as the permittivity and conductivity of the tunnel material are calibrated according to the measured data of Shanghai Metro Line 7 [10].
Moreover, the maximum number of reflections of a signal in the tunnel is set to 10, the maximum number of scattering events is set to 2, and the number of transmissions is set to 0.

Simulation Results and Analysis
The angles and powers of 250 multipath signals are obtained through the RT simulation, and the propagation paths of the multipath signals in the tunnel are shown in Figure 6. The polar-coordinate form of the multipath signal angles is shown in Figure 7, in which the pitch departure and pitch arrival angles are concentrated around 90°, while the horizontal departure and horizontal arrival angles are around 0° and 180°. The angular range of the arrival angles is greater than that of the departure angles. The angles and powers of the five largest-energy paths in the simulation results are shown in Table 3.
Table 3. Angle and power of the 5 maximum-energy paths.

The transceiver signal mapping matrix extracted from the RT simulation results can be represented as an identity matrix with 250 rows and 250 columns. In order to verify the spatial mirroring method proposed in this paper and the random matching method for scattered signals, after randomly arranging the transmitted signals, the mapping relationship of the reflected signals between the transceivers is first determined according to the spatial mirroring principle, and random matching is then performed on the unmatched signals. The complete transceiver signal mapping matrix is shown in Figure 8.
After obtaining the angle and power information and the mapping relationship of the transmitted and received signals, the normalized Doppler power spectrum of the transmitting and receiving antennas at different moving speeds is obtained through the Doppler model, as shown in Figure 9. The normalized amplitude in the graph is defined as the ratio of the power of each single path to the total power of the multipaths. When the moving speeds of the receiving and transmitting antennas are 160 km/h and 80 km/h, the Doppler shift of the LoS signal in the RT simulation results is 2.074 kHz.
Physical Simulation Model of T2T Communication Channel in Tunnel In a complex propagation environment, two kinds of fading channels will be generated due to the delay spreading effect of multipath channels, namely the frequency flat fading channel and the frequency selective fading channel. Multipath effects cause the amplitude of the received signal to shift over time when the signal bandwidth B_s is smaller than the coherence bandwidth B_c, but the signal spectrum does not. In this case, the duration of the symbol T_s is greater than the maximum time delay τ_max of the multipaths, and this channel is called the flat fading channel. In a flat fading channel, the influence of the time delay on the communication system can be ignored. Existing test results show that the multipath delay in tunnel scenarios is tens of nanoseconds [11][12][13][14], that is, τ_max < T_s, so the bandpass form of Equation (17) can be expressed as the product of the bandpass transmit signal x_p(t) and the multiplicative spreading factor H(t) = Σ_i c_i e^{j2π f_i^d t} (20), where x_p(t) = x_p,I(t) + j x_p,Q(t) and H(t) = |H(t)| e^{jϕ(t)} = H_I(t) + j H_Q(t); Equation (20) can then be written in I/Q form, where ϕ(t) = tan⁻¹(H_I(t)/H_Q(t)) is the phase shift caused by the Doppler shift of the multipaths. The circuit structure of the IQ vector phase shifter [15,16] is shown in Figure 10. The phase shifter circuit consists of a quadrature splitter (QS), a variable gain amplifier (VGA), and a quadrature combiner (QC). 
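The QS-VGA-QC chain can be modeled numerically. A minimal sketch, assuming the √2 splitter normalization (V_I = RF_in/√2, V_Q = jRF_in/√2), so the complex gain seen by the input is (A_I + jA_Q)/√2 and the output amplitude equals the input amplitude exactly when A_I² + A_Q² = 2:

```python
import numpy as np

def iq_phase_shifter(rf_in, a_i, a_q):
    """IQ vector phase shifter: split, scale each branch, recombine.

    V_I = rf_in / sqrt(2), V_Q = 1j * rf_in / sqrt(2);
    RF_out = a_i * V_I + a_q * V_Q = rf_in * (a_i + 1j * a_q) / sqrt(2).
    The phase shift applied to rf_in is angle(a_i + 1j * a_q).
    """
    return rf_in * (a_i + 1j * a_q) / np.sqrt(2)
```

For example, a_i = a_q = 1 preserves the amplitude and applies a 45° phase shift; sweeping the two VGA gains sweeps the phase of the complex gain, which is what lets a program-controlled shifter impose the time-varying ϕ(t) of the channel.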
The input RF signal RF_in generates the in-phase component V_I = RF_in/√2 and the quadrature component V_Q = jRF_in/√2 after passing through the quadrature splitter. After V_I and V_Q pass through independent VGAs, they are summed in the combiner, and the output signal RF_out is a function of the VGA gains A_I and A_Q, where the gain range of the VGA is {−1, 1}, the amplitude of the output signal RF_out and the input signal RF_in remain unchanged, and the phase difference is Δϕ = tan⁻¹(A_I/A_Q). Comparing (21) and (22), let x_p = RF_in; then x_p,I = RF_in/√2, x_p,Q = jRF_in/√2, A_I = H_I, and A_Q = −H_Q. Therefore, the physical simulation of the channel can be theoretically realized by using the program-controlled IQ vector phase shifter. The physical simulation model is shown in Figure 11. In Figure 11, the IQ vector phase shifter is a program-controlled phase shifter that can be controlled in real time. Conclusions In this paper, the Doppler shift and Doppler spread of T2T communication in a tunnel environment are studied. Independent coordinate systems are established at the receiving and transmitting antennas. According to the angle and amplitude characteristics of the receiving and transmitting signals, the Doppler spread caused by the movement of the receiving and transmitting antennas is analyzed. The use of the mapping matrix approach to solve the matching problem of the transmitting and receiving signals is presented as an innovative solution, and two methods for obtaining the mapping matrix are described and verified. 
In order to verify the T2T Doppler spread simulation model proposed in this paper, the RT method is used to simulate the T2T communication channel in the tunnel, and the angle and amplitude information of the transmitted and received signals are obtained. By comparing the Doppler results of the simulation model with those of WI simulation, the correctness of the T2T Doppler spread simulation model is proved. Based on the Doppler spread model, a physical channel simulation method using an IQ vector phase shifter to complete T2T communication in a tunnel environment is proposed, which can provide a reference for the physical channel simulation of 5G mmWave T2T communication in a tunnel environment in the future. Author Contributions: Conceptualization, P.Z. and G.Z.; data curation, P.Z. and X.W.; formal analysis, P.Z.; funding acquisition, G.Z. and Y.J.; investigation, P.Z. and X.W.; methodology, P.Z. and G.Z.; project administration, P.Z. and K.Z.; resources, P.Z. and G.Z.; software, P.Z.; validation, P.Z.; 
Funding: This work was supported by National Natural Science Foundation of China (61871261) and Natural Science Foundation of Shanghai (22ZR1422200).
7,522.2
2022-06-01T00:00:00.000
[ "Computer Science" ]
A New Approach of Knowledge Reduction in Knowledge Context Based on Boolean Matrix : Knowledge space theory (KST) is a mathematical framework for the assessment of knowledge and learning in education. An important task of KST is to obtain all of the atoms. With the development of KST, considering its relationship with formal concept analysis (FCA) has become a hot issue. The atoms of the knowledge space, with application in knowledge reduction based on FCA, are examined in this paper. The knowledge space and its properties based on FCA are first discussed. Knowledge reduction and its relationship with molecules in the knowledge context are then investigated. A Boolean matrix is employed to determine molecules and meet-irreducible elements in the knowledge context. The method of the knowledge-reduction-based Boolean matrix in the knowledge space is also explored. Furthermore, an algorithm for finding the atoms of the knowledge space in the knowledge context is developed using a Boolean matrix. Introduction Formal concept analysis (FCA), as a supplement of rough set theory, is a mathematical method of data analysis with applications in various areas [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. In particular, it provides a theoretical framework for the discovery and design of concept hierarchies from relational information systems. Concept hierarchies are built on a binary relation between the sets of objects and attributes in a given formal context. The appeal of FCA derives from the fact that it provides an easy-to-understand diagram rooted in data, the so-called concept lattice. Moreover, the concept lattice is the collection of all formal concepts, each consisting of an extent and an intent that determine each other. In general, the research on formal contexts has mainly focused on two aspects: one is the construction of the concept lattice; the other is knowledge reduction. 
For the first aspect, Berry [15] presented an approach to generate concepts by discussing the relationship between concept lattices and the underlying graphs. After that, Kuznetsov [16] thoroughly compared and summarized several well-known algorithms. The second aspect, knowledge reduction, includes two parts: concept reduction, which reduces the size of the concept lattices [17][18][19][20][21], and attribute reduction, as well as object reduction, which preserve the hierarchical structure of concept lattices [22][23][24][25][26][27][28][29]. Several important investigations arose at these points. Kumar [17] proposed a method based on fuzzy K-means clustering for reducing the size of the concept lattices. Reference [18] derived the mean value of the cardinality of the reduced hierarchical structure from a graph-theoretical point of view on FCA together with simple probabilistic arguments. To reduce redundant information, Wu et al. [22] illustrated a method of granular reduction based on a discernibility matrix in a formal context. Kumar [23] put forward a non-negative matrix factorization to address the knowledge reduction. Li et al. [24][25][26] formulated heuristic knowledge reduction approaches for finding a minimum granular reduct in decision formal contexts. Knowledge space theory (KST) [30][31][32][33], proposed by Doignon and Falmagne, provides a valuable mathematical framework for computerized web-based systems for the assessment of knowledge and learning in education. The knowledge state, the key notion of KST, is represented by the subset K of items (or problems) in the finite domain of knowledge Q that an individual is capable of solving correctly, barring careless errors and lucky guesses. 
A pair (Q, K) represents a knowledge structure by convention, where K is the collection of all the knowledge states, always containing at least the following special constituents: (1) the empty state, corresponding to a student knowing nothing about the subject; (2) Q, the state of a student knowing everything about the subject. A knowledge space is a knowledge structure closed under union: given any two states K and L in the space, the union K ∪ L is also in K. In essence, a student's knowledge arising as the union of initial knowledge states is plausible whenever students with different knowledge states engage in an extensive interaction. Therefore, it is unnecessary to record every union of states in a description of the knowledge space. With the growth of KST, the research on its relationship with other approaches has become a research topic. KST has the same mathematical background as FCA: both aim to order two sets of elements simultaneously. There is an intimate relationship between FCA and KST [34]. The relationship between attribute implications and entailed relations was considered by Falmagne et al. [33]. References [35][36][37] built a knowledge space by querying an expert to interpret an entailed relation. They showed that implication systems are in essence closure systems; this holds not just in FCA, but also in KST via taking set-theoretic complements. At present, our study focuses on the framework of FCA. In this paper, we use FCA for the knowledge reduction in the knowledge space. However, FCA can lead to potentially high combinatorial complexity, and the structure obtained, even from a small dataset, may become prohibitively large [38]. To overcome this limitation, we apply a Boolean matrix to construct the intents (extents) of molecules (meet-irreducible elements) in knowledge contexts. 
This model avoids building the concept lattice from the knowledge context and aims to maintain both the object relation and the attribute relation simultaneously. We first introduce the relationships between concepts in the knowledge context from the viewpoint of the molecule lattice. A novel method based on a Boolean matrix is further proposed for finding the knowledge reduction of a knowledge space. The remainder of this paper is organized as follows. In Section 1, we briefly review some basic notions of FCA and KST and their relationships. In Section 2, we investigate the relationships between the molecule lattice and the concept lattice from the knowledge context. Then, the judgment theorems for the molecules, as well as the meet-irreducible elements of concept lattices, are proposed. In Section 3, we conclude that each member of the concept lattice from the knowledge context is the union (intersection) of some molecules (meet-irreducible elements), present a simple way to compute the molecules and meet-irreducible elements in a knowledge context using a Boolean matrix, and discuss an algorithm for the knowledge reduction of a knowledge space in detail. The final summary and further research are given in Section 4. Formal Concept Analysis Theory A partial order ≤ is a relation on a set X satisfying reflexivity, anti-symmetry, and transitivity. A set X equipped with a partial order is called a partially ordered set (for short, a poset). For a subset Y of a poset X, we define the lower set ↓Y = {x ∈ X : x ≤ y for some y ∈ Y} and the upper set ↑Y = {x ∈ X : y ≤ x for some y ∈ Y}, respectively; when Y is a singleton set {x}, we denote the lower set ↓x and the upper set ↑x for short. One can see [39] for the details. A subset Z of a partially ordered set is called a chain if any two members of Z are comparable. Generally, alternative names for a chain are a linearly ordered set and a totally ordered set. Therefore, if Z is a chain and x, y ∈ Z, then either x ≤ y or y ≤ x (see [40]). 
An element a ∈ X is called maximal if whenever a ≤ x then x = a, that is, there is no element in X following a except a itself. Similarly, an element a ∈ X is called minimal if whenever x ≤ a then x = a, that is, there is no element in X preceding a except a itself. Lemma 1 (Kuratowski). Let X be a poset. Then, each chain in X is contained in a maximal chain. A formal context is a triple F = (U, A, R), where U = {g1, · · · , gn} and A = {a1, · · · , am} are two nonempty finite sets of objects and attributes, respectively. R is a binary relation between U and A, where (g, a) ∈ R means that the object g possesses attribute a. In fact, the representation of the binary relation contains the values 1 and 0, where 1 means the row object possesses the column attribute. In this paper, we suppose that the binary relation R is regular, which means the following for any (g, a) ∈ U × A: 1. There exist ai, aj ∈ A with (g, ai) ∈ R and (g, aj) ∉ R; 2. There are gi, gj ∈ U satisfying (gi, a) ∈ R and (gj, a) ∉ R. For G ⊆ U and B ⊆ A, one defines the following two operators [1,2]: G* is the maximal set of attributes possessed by all objects in G, and B* is the maximal set of objects sharing all attributes in B. If a pair (G, B) satisfies G* = B and B* = G, we say that the pair (G, B) is a formal concept, in which G is called the extent and B is called the intent. In addition, for any g ∈ U, {g}* is denoted as g* for short. Similarly, for any a ∈ A, we write a* instead of {a}*. It is easy to observe that G* = {a ∈ A : G ⊆ a*} = ∩g∈G g* and B* = {g ∈ U : B ⊆ g*} = ∩a∈B a*. With the hypothesis of regularity, we have (1). The collection of all formal concepts of F is denoted by L(U, A, R), with the partial order ≤ given as follows: for any (G1, B1), (G2, B2) ∈ L(U, A, R), (G1, B1) ≤ (G2, B2) iff G1 ⊆ G2 (equivalently, B2 ⊆ B1); then (G1, B1) is a sub-concept of (G2, B2), and the relation ≤ is called the hierarchical order of concepts. 
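The two derivation operators and the concept test can be sketched directly from their definitions. This is an illustrative sketch; the encoding of the context as Python sets of pairs is an assumption, not the paper's notation.

```python
def derive_attrs(G, R, A):
    """G*: the maximal set of attributes possessed by every object in G."""
    return {a for a in A if all((g, a) in R for g in G)}

def derive_objs(B, R, U):
    """B*: the maximal set of objects possessing every attribute in B."""
    return {g for g in U if all((g, a) in R for a in B)}

def is_concept(G, B, R, U, A):
    """(G, B) is a formal concept iff G* = B and B* = G."""
    return derive_attrs(G, R, A) == set(B) and derive_objs(B, R, U) == set(G)
```

For instance, in the context U = {1, 2}, A = {a, b} with R = {(1, a), (1, b), (2, a)}, the pair ({1, 2}, {a}) is a formal concept, since both objects share exactly the attribute a.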
Since L(U, A, R) is closed under meet and join [1], namely (G1, B1) ∧ (G2, B2) = (G1 ∩ G2, (B1 ∪ B2)**) and (G1, B1) ∨ (G2, B2) = ((G1 ∪ G2)**, B1 ∩ B2) for any (G1, B1), (G2, B2) ∈ L(U, A, R), L(U, A, R) forms a complete lattice, called the concept lattice. A closure system is a collection of subsets that is closed under intersection and contains ∅ and U (see Caspard and Davey [41,42]). Let L_U(U, A, R) denote the set of all extents and L_A(U, A, R) the set of all intents; then L_U(U, A, R) and L_A(U, A, R) in fact form closure systems on U and A with respect to L(U, A, R), respectively. Knowledge Space Theory In this subsection, we recall several notions of KST; for more details, refer to [31,32]. For a partial knowledge structure (Q, K), Q is a nonempty finite set of items or problems, called the domain of the knowledge structure, and K is a family of subsets of Q containing at least Q. The members of K are knowledge states. If ∅ ∈ K, then the partial knowledge structure is called a knowledge structure. When K is closed under union, (Q, K) is called a knowledge space; equivalently, K is a knowledge space on Q. The dual of K on Q is the knowledge structure K̄ containing all the complements of the states of K, i.e., K̄ = {K ∈ 2^Q : Q \ K ∈ K}. The minimal subfamily of a knowledge space spanning the original knowledge space is called a base. It should be pointed out that each finite knowledge space has a base. Furthermore, the states of the base have an important property. Assume that F is a nonempty family of sets. An atom at q is a minimal set in F containing q. A set X ∈ F is an atom if it is an atom at q for some q. We can conclude that the base of a knowledge space is formed by the collection of all atoms. There is an intimate relation between FCA and KST [34]. The core connection between FCA and KST is the property that both the collection of extents and the collection of intents of the concepts yield closure systems. Consider the following formal context: Definition 1 ([34]). Let (U, Q, R) be a formal context where U and Q are the individuals and the knowledge domain. 
For any x ∈ U and q ∈ Q, (x, q) ∈ R ⊆ U × Q if individual x is not capable of solving problem q; then (U, Q, R) is defined as a knowledge context. Example 1. Let U be a set of individuals and Q = {p, q, r, s} a knowledge domain. Then, the solution behavior is characterized by the relation R in the formal context defined in Table 1. The collection of intents derived from the knowledge context of Table 1 is L_Q(U, Q, R). It is easy to determine that L_Q(U, Q, R) is closed under intersection, forming a closure system on Q with respect to the knowledge context. Now, taking the set-theoretic complements of all the intents, we obtain a family of subsets of Q which contains ∅ and Q and is closed under union; it therefore forms a knowledge space. It is easily seen that a base of the knowledge space K is {{q}, {s}, {p, q}, {p, r, s}}, the collection of the subsets of solved items for all individuals. Molecular Lattice Lemma 2. Let L(U, A, R) be the concept lattice of a formal context F = (U, A, R). Proof. We only present the proof of (1). For any (G, M), (Y, C) ∈ L(U, A, R), the claim follows by direct computation. By the above lemma, the complete lattice L(U, A, R) satisfying the distributive laws is called a completely distributive lattice. Note that if, for B ⊆ L(U, A, R), every C ⊂ L(U, A, R) with inf C ≤ (G, M) satisfies C ⊈ B, where "inf" means the infimum, then B is referred to as a maximal set of (G, M). The minimal sets of (G, M) ∈ L(U, A, R) are defined dually: if every C ⊂ L(U, A, R) with (G, M) ≤ sup C satisfies B ≤ C, where "sup" means the supremum, then B is called a minimal set of (G, M). It is enough to show that (U, ∅) and (∅, A) are the minimal set and the maximal set of ∅ in L(U, A, R), respectively, since L(U, A, R) is a complete lattice, that is, sup ∅ = (∅, A) and inf ∅ = (U, ∅). Actually, the union of some maximal sets of (G, M) is also a maximal set of (G, M). Then, the biggest maximal set necessarily exists, and we denote it by α((G, M)). 
Similarly, the union of some minimal sets of (G, M) remains a minimal set of (G, M) since L(U, A, R) is complete. Thus, there exists a biggest minimal set of (G, M), denoted by β((G, M)). Next, we investigate the relationship between (G, M) and its biggest maximal set. By virtue of the completely distributive lattice L(U, A, R), α((G, M)) is in correspondence with (G, M) ∈ L(U, A, R) and exists uniquely, so it can be seen as an image of (G, M) under a mapping from L(U, A, R) to 2^L(U,A,R). For the maximal set α((G, M)): in addition, (G2, M2) ∈ α((G, M)) satisfies (G2, M2) ≤ (G1, M1). It is easy to see that C ⊈ B. Then, B is a maximal set of (G, M). On account of α((G, M)) being the biggest maximal set of (G, M), it holds that B = α((G, M)), for which (G1, M1) ∈ α((G, M)). Then, α((G, M)) = ↑α((G, M)). We only have to prove that B = ⋃i∈∆ α((Gi, Mi)) is the biggest maximal set of (G, M). On the other hand, for B1 ⊂ L(U, A, R) with inf B1 ≤ (G, M) and for any i ∈ ∆, we have B1 ⊈ B, which yields that B is a maximal set of (G, M). Furthermore, for any maximal set C of (G, M) with B ≠ C, by the definition of the maximal set, we conclude that C ⊈ B. Proof. This is similar to the proof of Theorem 1. Then, there are (G1, M1), · · · , (Gt, Mt) ∈ L(U, A, R) such that (Gk+1, Mk+1) ∈ α((Gk, Mk)), k = 1, 2, · · · , t − 1. Proof. This follows immediately by applying Lemma 3. Classification of Concepts In the case of knowledge spaces encountered in education, the cardinality of the base of a knowledge space K is typically much smaller than the cardinality of K. Furthermore, a knowledge space admits at most one base, which is formed by the collection of all the atoms. An atom is a minimal set in K containing an element of the knowledge domain Q. 
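The base-as-atoms characterization just restated, together with the earlier passage from intents to a knowledge space (complement each intent, then close under union), can be sketched as below. The small family of intents used here is illustrative, not Table 1.

```python
from itertools import combinations

def knowledge_space(intents, Q):
    """Complement each intent, then close the resulting family under union."""
    states = {frozenset(Q - I) for I in intents}
    changed = True
    while changed:  # repeat until no new pairwise union appears
        changed = False
        for K, L in list(combinations(states, 2)):
            if K | L not in states:
                states.add(K | L)
                changed = True
    return states

def base(states):
    """States that are not the union of strictly smaller states (the atoms)."""
    out = set()
    for K in states:
        smaller = [L for L in states if L < K]
        span = frozenset().union(*smaller) if smaller else frozenset()
        if span != K:  # K is union-irreducible; note the empty state is excluded
            out.add(K)
    return out
```

Because a finite knowledge space has exactly one base, `base` recovers the minimal spanning subfamily regardless of the order in which the unions were generated.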
In fact, an atom at q ∈ Q in the knowledge space is in correspondence with a maximal set at q in the collection of the intents of concepts in the knowledge context. If not, there exist (Y1, C1), (Y2, C2) ∈ L(U, A, R) such that (Y, C) = (Y1, C1) ∪ (Y2, C2), but (Y, C) ≠ (Y1, C1) and (Y, C) ≠ (Y2, C2), which yield (Y1, C1) < (Y, C) and (Y2, C2) < (Y, C). On the other hand, (Y1, C1), (Y2, C2) ∈ I and (Y, C) is a minimal set. This implies (Y, C) = (Y1, C1) ∪ (Y2, C2) ∈ I, a contradiction. Hence, (Y, C) is a join-irreducible element. Then, (Y, C) ∈ π((X, B)), that is, (Y, C) ≤ (Z, D). In accordance with (Z, D) ∈ I and I being a lower set, we have (Y, C) ∈ I. This contradicts (Y, C) ∈ L(U, Q, R) − I. Therefore, (G, M) ≤ sup π((G, M)). Theorem 8. Let F = (U, A, R) be a formal context; then every formal concept in L(U, A, R) is the intersection of some prime elements. Proof. This is similar to the above theorem. Definition 8. Let F = (U, A, R) be a formal context; L(U, A, R) is referred to as a molecular lattice. Because L(U, A, R) is a completely distributive lattice in which molecules, as well as meet-irreducible elements, can be regarded as the basic units generating L(U, A, R), a molecule is therefore an intersection of some meet-irreducible elements. In order to demonstrate that not all meet-irreducible elements are necessary to represent all molecules, we study the following example. Figure 1 is a Hasse diagram of a concept lattice generated from a formal context (U, A, R), in which a dot represents a formal concept. Dots 1 and 26 in the diagram correspond to (∅, A) and (U, ∅), respectively. It is easy to see that Concepts 2, 3, 4, 5, 6, 7, 10, 11, 12, 17 are molecules and 10, 13, 15, 17, 19, 20, 21, 22, 23, 24, 25 are meet-irreducible elements. This gives Example 2: in other words, all molecules can be represented by some meet-irreducible elements except for Concepts 24 and 25. 
Namely, not all meet-irreducible elements are absolutely necessary, just as not all molecules are. Therefore, we have the following definition. The corresponding formal context is given in Table 2, and the Hasse diagram of the corresponding concept lattice is shown in Figure 2. Representing a Concept Lattice Based on a Boolean Matrix A concept lattice is an ordering of the maximal rectangles defined by a binary relation. In this subsection, we establish the relationship between the concept lattice and the Boolean matrix. A one-to-one correspondence between the set of elements of the concept lattice and the Boolean vectors is then established. Furthermore, we explain how to use the properties of the Boolean matrix to study the molecules. Given a formal context F = (U, A, R), for each (xi, aj) ∈ U × A, (xi, aj) ∈ R iff object xi has a value of 1 in attribute aj, and (xi, aj) ∉ R iff object xi has a value of 0 in attribute aj, i.e., xi has no value in attribute aj. In other words, a formal context can be seen as a Boolean matrix M_R = (cij)n×m, defined by cij = 1 if (xi, aj) ∈ R and cij = 0 otherwise. We call M_R the relation matrix of F. This point of view enables us to establish a relationship between the concept lattice and the Boolean matrix, which may prove important in many applications. It is well known that, for a relation matrix M_R of a formal context, a row vector is the eigenvector of the corresponding x*, x ∈ U, and a column vector is the eigenvector of the corresponding a*, a ∈ A, denoted as λ(x*) and λ(a*), respectively. As a consequence, we can use Boolean matrices to characterize the formal context: a subset of the objects or a subset of the attributes is characterized by its eigenvector. Definition 10 ([43]). Let F = (U, A, R) be a formal context. M denotes the Boolean matrix composed of all λ(x**) (∀x ∈ U), and N denotes the Boolean matrix consisting of all λ(a**) (∀a ∈ A). Then, we call M and N the object relation matrix and the attribute relation matrix in F = (U, A, R), respectively.
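The relation matrix and the object relation matrix M of Definition 10 can be sketched as follows. Here row i of M marks the objects g with x_i* ⊆ g*, which is one concrete reading of λ(x**); the function names are illustrative assumptions.

```python
import numpy as np

def relation_matrix(U, A, R):
    """M_R with c_ij = 1 iff (x_i, a_j) in R, else 0."""
    return np.array([[1 if (x, a) in R else 0 for a in A] for x in U],
                    dtype=int)

def object_relation_matrix(MR):
    """Row i is lambda(x_i**): objects whose attribute rows dominate row i."""
    n = MR.shape[0]
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for k in range(n):
            # x_k in x_i** iff x_i* is a subset of x_k*, i.e. row k >= row i.
            if np.all(MR[k] >= MR[i]):
                M[i, k] = 1
    return M
```

The attribute relation matrix N is obtained the same way from the transpose of M_R, comparing columns instead of rows.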
5,152
2022-04-20T00:00:00.000
[ "Computer Science" ]
Human-Robot Collaboration Human-Robot Collaboration (HRC) is gaining more and more importance as a new development direction in robotics. The previously common separating devices are to be eliminated, so that man and machine work in a common process. At present, new fields of application are being opened up in automation, which require both an ergonomically and safety-related suitable design and a philosophical consideration of the resulting socio-technical system. Text After four decades of economically and technologically successful use of robots in industrial production, robotics has now reached a new peak: robots are developed and manufactured with outstanding precision, high positioning accuracy and reliability. Nevertheless, in most industrial applications robots cannot function reliably without humans, as early experiences in the development of robotics have shown. Humans are often needed in the process, for example to correct robot errors. For some years now, systems of human-robot collaboration (HRC) have been gaining increasing importance. In human-robot collaboration applications, humans and robots work in a common work system without being spatially separated by facilities such as safety fences. Thus a common working space is created, which differs from the previous concept of separated, fenced robot systems. Humans and robots are to carry out work on the same object in the common workspace at the same time. The aim is to retain the human being, with his abilities, as an active member of production and at the same time to increase productivity through automation. Collaboration is characterized above all by the proximity and form of cooperation between human and robot. On closer inspection, there are even more striking features. For example, robots often have physical characteristics that they share with humans; their parts are often referred to as a robot arm or hand. 
They also have similar characteristics in terms of interaction and communication, and in some areas of application even natural language skills. In view of the socio-technical system that is obviously emerging, the existence of humans and robots, their bodies, their presence in space, their relationships and similarities, and ethical questions of all kinds are of philosophical interest. HRC systems are supposed to support or relieve workers in monotonous or strenuous work, as for example in the assembly of threaded screws on geometrically determined components.
DOI: 10.35840/2631-5106/4121 | ISSN: 2631-5106 | Citation: Hans-Jürgen B (2020) Human-Robot Collaboration. Int J Robot Eng 5:021
Today many leading manufacturers of industrial robots develop special HRC robots. Due to collaboration and the elimination of fences, occupational safety and ergonomics are increasingly coming into focus. Intensive research is currently being carried out on workplace design, for example with regard to distraction using eye-tracking methods, safety perception and situation awareness, the predictability of robot movements, and task allocation. HRC test benches for full-scope simulation are also under development. Required for a successful implementation of HRC in industrial applications are lightweight robots with force detection or safety shutdown. In the sense of versatile production, they have to be scalable and flexible in the process. Important are the relief of the workers from monotonous or physically straining work as well as guaranteed safety through compliance with the machinery directives and standards. Programming and operation must be simplified; ideally, the machine operator should also be able to carry out programming. Economically reasonable use of HRC requires a relevant, calculable benefit. 
The first point is the space utilization of productive facilities: in the case of further economic growth, it sooner or later leads to space problems in industry. The omission of separating protective devices makes a space-saving realization possible. The more effective use of productive space leads to increased productivity per unit area. Overlapping work areas allow easier process planning, especially in flexible manufacturing with changing work contents. Increasing the flexibility of the productive facilities is the next point. The term "flexibility" or "flexible manufacturing" is not new. As early as the 1980s, flexible manufacturing systems were used in which the variants of a given product range could be called up via information interfaces. At that time, information became a production factor, and the idea of computer-integrated manufacturing (CIM) was born. The concept of flexibility was scientifically discussed, classifications according to product-related and production-related flexibility became commonplace, and product-related flexibility types (variant flexibility and change flexibility) were distinguished from production-related flexibility types (functional flexibility, volume flexibility, expansion flexibility and redundancy). The discussion about flexibility in the context of HRC is now being held again. HRC makes special demands and opens new possibilities of flexibility. Often mentioned is the production-related flexibility of location, which describes the mobility of the workplaces. Due to the omission of fences or enclosures, HRC workplaces are clearly more flexible in terms of location than usual robot cells. For the first time, automation systems are created which can be used at different workplaces and for different tasks, e.g. in case of capacity fluctuations. For the first time, an application of robotics becomes economically reasonable in small and medium-sized companies. 
Furthermore, ergonomics can be improved by taking into account the strengths and weaknesses of the interaction partners, human and machine. The advantages of the human in HRC lie in fast detection, evaluation and reaction. Humans have free mobility and the ability to compensate for tolerances and to detect errors at any time, drawing of their own accord on sensorimotor abilities such as seeing, hearing and feeling. Humans are capable of learning, handle objects of varying complexity without difficulty, and can intrinsically select and operate suitable tools. They are able to question, optimize and innovate processes, are empathic, and can be deployed flexibly to a large extent. The advantages of the robot in HRC are precision and repeatability; it consistently reaches a level that humans can hardly reach and, under production conditions, can rarely maintain continuously. A higher quality can thus be achieved by the use of robots. Robots can handle heavy objects and tools, and it is also possible to handle dangerous objects. Monotonous, repetitive tasks are ideally transferred to robots. In addition, a robot is available around the clock and can therefore also be used in shift models without any problems. In order to exploit the potential of HRC, an adaptation of operation through simple, efficient and reliable programming is indispensable. The concept of intuitiveness comes up: in the future it will be necessary that the actions of the robot are transparent and comprehensible for humans. Ad-hoc task sharing, which overcomes the limitations of sequences previously defined in advance in Maba-Maba lists, also becomes more and more relevant. In current research projects we are working on the optimization of the operation and programming of collaborative robots and thereby examine the predictability and expectation conformity of robot movements in HRC. The idea is to rely largely on methods of standard robotic path planning in order to guarantee easy programming and operation. At the same time, the attention focus and situation awareness of the workers are examined in test-person trials. Besides an ergonomically and safety-related suitable design of HRC, a philosophical consideration of the resulting socio-technical system is necessary.
Living on a Carbon Diet The objectives of this paper are to understand the features of simulated low-carbon lifestyles under strong greenhouse gas emissions reduction assumptions (20%, 50%), the nature of trade-offs and the hierarchy of choices operated by households within a limited carbon and financial budget, the acceptability of important changes in consumption patterns, and finally the values and representations, benefits and losses that households express in such changes. The research implemented a protocol combining experimental economics (simulation of carbon budget reductions under financial constraints) and anthropology (semi-structured interviews, to understand the rationale behind choices). Each household of the sample (n = 30) was investigated for 2-3 days. First, a very detailed carbon footprint of the household was calculated. Then households were offered a list of 65 pre-defined solutions covering most of the available mitigation options, with financial and carbon costs and savings calculated for their real situation. The sample reached an average reduction of −37% (range −12% to −64%), with a preference to act on habitat and food and a reluctance to change transport consumption. Given the level of reductions requested, low-carbon lifestyles ultimately affect comfort but allow saving money. Recommendations for policies are presented.
Introduction

1. Citizens, Consumers and Climate Change

Individuals and households, citizens and consumers, have long been neglected by policy-making and research on climate change. They can be considered as potential supporters [1] or barriers to the implementation of new policy measures, as beneficiaries or victims of climate change and climate change policies. Policies are, however, still based on emission inventories starting from a national and production basis, with modeling and reference scenarios dominated by economics and large-scale thinking, for instance introducing variables of technological change and carbon prices and observing effects on the distribution of production and revenues. This is valid for the current generation of IPCC scenarios [2]-[4] as well as for other global scenarios [5]-[7]. A review [8] shows that households, lifestyles and consumption are seldom taken into account in modeling and scenario exercises, and when they are, they are analyzed ex post rather than as potential drivers of change. The need to better balance a production and a consumption perspective drives some research on emission inventories and the attribution of imports and exports, thereby extending IPCC guidelines and drawing a consumption-based picture of emissions [9]-[16] or assessing the distribution of emissions within society, now and in the future [17]-[19]. Beyond this "macro" perspective, more understanding of GHG emissions and decision-making at the household level is required, so as to develop specific policy instruments (carbon taxes, carbon budgets, individual tradable permits [20]-[24]) and to assess the impact of large-scale policies on lifestyles, anticipating inequalities and barriers to change. This reveals knowledge gaps in people's representations, behaviors and choices facing greenhouse gas emission mitigation.
Qualitative Approaches

This obviously calls for more social science. While econometrics or quantitative sociology can yield statistical correlations and typologies useful for modeling, they generally fail to go deep into the interpretation and understanding of choices. A quantitative approach allows reaching statistically representative results, depending on the sample [25], but is of more limited use when the objective is an in-depth understanding and dense descriptions of social phenomena [26]. Quantitative methods offer robustness and rigor, while qualitative methods provide richness and texture. This echoes a general need for more humanities in the field of climate research, since "the analysis of anthropogenic climate change continues to be dominated by positivist disciplines at the expense of interpretative ones" [27]. That is why this research is grounded in a frequent qualitative tradition in the social sciences and in a less frequent perspective of experimentation, both applied to climate change mitigation. Qualitative methods have a long history in the social sciences [28]. In particular, the added value of psychology and anthropology [29] allows some understanding of the cultural background of choices [14] and of the values, justifications and formation of moral judgments [30] [31]. Several fields of research have developed concerning individuals and households: opinions [32]; factors influencing representations and behavior, such as peers, friends and relatives or social status and reputation [33]-[35]; willingness and barriers to act [36]-[42]; motivations of consumption [43]-[46]. Climate denial and skepticism, and more generally public understanding, have been well investigated following the recent controversies [47] [48], as have the factors influencing a better awareness of climate change, such as the experience of changes [49]-[52]. In the field of energy, processes like the rebound effect are well documented [53] [54].
Experimental Perspectives

Research relying on opinion surveys faces the objection that, for issues of public interest like climate change, individuals tend to overestimate their capacity to act. Research relying on what individuals declare in terms of actual behavior or willingness to change will always face this "implementation" gap. [55] described for instance the "green fakers" category, characterized by a gap between discourse and action. This paper does not focus on opinion surveys, but introduces a perspective of simulation and experience, rarer in the social sciences, even though quite common in psychology ([56] and other experiments on cognitive dissonance), marketing (focus groups, applied to climate change adaptation and mitigation [8] [57]-[59]), sociology [60] and experimental economics [61]. Experimental economics in particular tries to simulate individuals' behavior as close to reality as possible, rather than starting from a theoretical homo economicus. In social science, experimental methods also involve surveys, but in controlled environments, so as to understand precise processes (e.g. pricing or buying processes) under stable conditions. The basic question "Should we have to reduce our GHG emissions by 50%, what would be the priorities, the hierarchy of choices, the personal trade-offs, the impact on our ways of life?" does not inform on the reality of the future implementation of choices, but is very productive in informing the formation of decisions in a context of climate change mitigation.

Methods

The objectives of this paper are to understand the features of simulated low-carbon lifestyles under strong greenhouse gas emissions reduction assumptions (20%, 50%), the nature of trade-offs and the hierarchy of choices operated by households within a limited carbon and financial budget, the acceptability of important changes in consumption patterns, and finally the values and representations, benefits and losses that households express in such changes.
The research, conducted in 2011-2012 in France, implemented a protocol combining experimental economics (simulation of carbon budget reductions under financial constraints) and anthropology (semi-structured interviews, to understand the rationale behind choices). Each household of the sample (n = 30) was investigated for 2-3 days. The sample was recruited so as to combine contrasted situations of revenue (middle classes, excluding the highest and lowest percentiles), habitat (individual and collective), region of residence, gender, age, family structure, and the existence of strong characteristics affecting the initial carbon footprint (e.g. a large house in a cold climate, travel by plane…). We first calculated, using a tool developed for the project, a very detailed carbon footprint of the household, covering habitat, transport, food, holidays and other consumption features. We then proposed to households a list of 65 pre-defined solutions covering most of the available mitigation options, with financial and carbon costs and savings calculated for their real situation, entered into an ad-hoc simulator. Households were asked to reduce their carbon footprint by 20%, and then by 50%, choosing one option after the other, after a careful analysis of the solutions offered. A coding of solutions (replacing/reducing/renouncing; behavioral/financial choices; habitat/transport/consumption/holidays/food; order of choices; first-order and second-order choices…), associated with the data on financial and carbon consequences, allowed a quantitative processing of results and the use of indicators like eco-efficiency (euros per ton of CO2 avoided), which is usually not affordable in such qualitative research. Between these simulation stages of the research, in-depth interviews were conducted so as to understand the perceptions and representations of individuals, and the rationale behind their choices, in particular to discriminate financial and non-financial (values…) drivers of choice.
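The selection protocol described above (a fixed carbon reduction target, pre-defined options with household-specific carbon and financial consequences, chosen one after the other, plus an eco-efficiency indicator in euros per ton of CO2 avoided) can be sketched in a few lines of Python. All option names and numbers below are hypothetical illustrations, not the project's actual tool or data:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    co2_saved_t: float      # tonnes of CO2 avoided per year
    cost_eur_month: float   # net monthly cost in euros (negative = saving)

def simulate(footprint_t: float, options: list, target_pct: float):
    """Pick options one after the other until the carbon reduction target is met."""
    chosen, saved = [], 0.0
    target = footprint_t * target_pct
    for opt in options:
        if saved >= target:
            break
        chosen.append(opt)
        saved += opt.co2_saved_t
    return chosen, saved

def eco_efficiency(opt: Option) -> float:
    """Euros per tonne of CO2 avoided (annualized cost over annual saving)."""
    return 12 * opt.cost_eur_month / opt.co2_saved_t

# hypothetical household: 10 t CO2/year footprint, three illustrative options
opts = [
    Option("insulate attic", 1.2, 15.0),
    Option("eat less meat", 0.8, -20.0),
    Option("smaller car", 1.5, -10.0),
]
chosen, saved = simulate(10.0, opts, 0.20)   # first stage: -20% target
```

With these invented numbers, the household reaches the 20% target after two choices; a second pass with `target_pct=0.50` would force it further down the list.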
Results

The sample reached an average reduction of −37% (range −12% to −64%) of the initial carbon footprint, with several households, of different categories of revenue and carbon footprint, pushing the experience beyond 50%. Others stated various limitations to action (financial, cultural…). Beyond these figures (Figures 1-3), the ease with which individuals entered the simulation and projected themselves into a low-carbon future was a result in itself. The analysis by category of choice reveals (Figure 1) a priority given to habitat and food, and a strong reluctance to act on transport. This is explained, for transport, by constraints (daily commuting) and desires (reluctance to give up air travel); for food, by the resonance with other motivations for decisions (health); and for habitat, by the existence of incentives and technical solutions. Transport appears to be, given its contribution to the initial balance, a key to the adoption of low-carbon lifestyles, but also the main barrier. The research also yielded results on the 65 individual choices proposed. The analysis of costs shows that, within very contrasted strategies (see infra), households tend to adopt first the solutions that cost money but preserve comfort, and then, urged by the simulation to reach more severe reductions, tend to give up some elements of comfort (less meat, less driving of smaller cars), which saves money. Low-carbon lifestyles are not necessarily costly: 60% of the options have a net cost at the end of the simulation, 40% a net saving. Some households privilege operating costs (e.g. buying more organic products) while others opt for investment costs (e.g. buying a more energy-efficient car).
Read: the first choices adopted by households save on average 6 euros per month. At the end of the simulation, the average impact on a household's monthly financial budget is 29 euros per month. The "tracking" of choices and the interviews with households reveal some strong points:
• the inequality of situations: some households combine easy options for GHG reductions or large and sufficient revenues to take decisive actions (e.g. changing their heating system), while others are constrained and forced to accept large comfort losses (households who rent their apartments and do not decide on the insulation, and therefore must limit the temperature to save energy);
• contrasted strategies: a minority of households take their decisions "rationally", with a clear vision of the initial situation and the emission target, while most adopt a pragmatic and sequential strategy, adopting first the simplest and most desirable (but sometimes worthless) options, and then moving to the most engaging ones;
• a clear, and somewhat understandable, tendency for households to favor the options that change their daily life the least: even if behavioral change (e.g. heating less, using the car less) can be necessary to reach substantial emission reductions, as can renouncing some key elements of a lifestyle (stop flying), households seldom favor these categories, even when a single choice could reduce their footprint considerably. They generally opt for such options only when "against the wall";
• the limited use of financial data (even though this data was particularly highlighted in the simulation), either to choose or to justify a choice, which introduces the idea that as soon as lifestyles are deeply questioned, other arguments, like values and culture, are brought in to build the coherence of a new lifestyle [37].
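The cumulated cost curve described above is simply a running sum of the per-rank average costs. A minimal sketch follows; the per-rank values are invented for illustration, and only the first value (a saving of 6 euros per month) and the cumulated endpoint (a cost of 29 euros per month) are taken from the figures quoted in the text:

```python
from itertools import accumulate

# hypothetical average monthly cost of the options chosen at each rank
# (euros per month, negative = saving)
rank_avg_cost = [-6.0, -3.0, 2.0, 8.0, 10.0, 18.0]

cumulative = list(accumulate(rank_avg_cost))
# cumulative[0] is the impact of the first choices,
# cumulative[-1] the net monthly budget impact at the end of the simulation
```

The pattern matches the narrative: early choices save money, and the curve only turns into a net cost once the more engaging options are adopted.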
When provided with a comparison of their current and future lifestyles, households were asked to comment on the gains and the losses, but also to express themselves on the conditions that would make such low-carbon lifestyles acceptable. It appears that, rather than an appeal to large-scale, long-term values ("saving the planet"), low-carbon lifestyles are a mix of personal values (well-being, quality of life, pleasure, health, provided by some solutions), collective harmony (e.g. promotion of proximity for daily purchases), and environmental preservation perceived as a limitation of waste, a compliance with more sober consumption patterns referring to ancient times. To a certain extent, there seems to be some potential to compensate for the loss of material well-being with other elements of quality of life (proximity, sociability, pleasure). This is consistent with other research on frugality. At the level of reductions obtained (an average of −37%), the fundamentals of lifestyles are not questioned, but daily life has to be severely optimized so as to comply with the emission budget.
Discussion

This research developed a hybrid protocol, combining experimental economics and anthropology, which led to an in-depth understanding of households' perceptions and potential behavior in front of a limited carbon budget. In particular, it helped understand the complex interactions of factors (financial, non-financial) influencing decision-making. The limits are obviously linked to the size of the sample, which is not statistically representative of the French population, in spite of our efforts to cover a variety of situations. To overcome this limitation, a more quantitative approach might seem appealing: either assessing emissions at the household level using input/output matrices of national accounting systems, or developing large-scale simulations, for instance using the carbon emission simulators available online. Yet without the associated semi-structured interviews proposed in this research, and a face-to-face protocol avoiding bias, none of these options would manage to reveal the rationales behind choices.
Several recommendations for future policies are derived:
• special attention must be given to households that are fragile with regard to emission reductions: the retired, rural families depending on their cars, low revenues, individuals already with a low carbon footprint;
• public policies (discourses, incentives) need to be adapted to the key moments of a person's life, when households rearrange their lives (buying a new apartment, having children, retiring), which offer opportunities for change and should be targeted in priority;
• preparedness for solutions is quite diverse: while households seem ready to accept some changes in their lifestyles provided some barriers are offset (using their car less, eating less meat), for others the priority is first to raise awareness, in a context where reluctance to change is still strong (e.g. limiting the use of the plane);
• public campaigns could adapt their message, using the arguments of fear and guilt with caution, and favoring the promotion of exemplarity by public authorities, or maximizing the material or symbolic benefits provided by some solutions (well-being, pleasure…).
Figure 2. Cumulated average costs of solutions adopted (average of rank-1 choices, cumulated with average of rank-2 choices, etc., up to 33), in euros per month. Black line: trend curve.
Membrane-Modulating Drugs Can Affect the Size of Amyloid-β25–35 Aggregates in Anionic Membranes The formation of amyloid-β plaques is one of the hallmarks of Alzheimer's disease. The presence of an amphipathic cell membrane can accelerate the formation of amyloid-β aggregates, making it a potential druggable target to delay the progression of Alzheimer's disease. We prepared unsaturated anionic membranes made of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and 1,2-dimyristoyl-sn-glycero-3-phospho-L-serine (DMPS) and added the trans-membrane segment Aβ25–35. Peptide plaques spontaneously form in these membranes at a high peptide concentration of 20 mol% and show the characteristic cross-β motif (concentrations are relative to the number of membrane lipids and indicate the peptide-to-lipid ratio). We used atomic force microscopy, fluorescence microscopy, x-ray microscopy, x-ray diffraction, UV-vis spectroscopy and Molecular Dynamics (MD) simulations to study three membrane-active molecules which have been speculated to have an effect in Alzheimer's disease: melatonin, acetylsalicylic acid (ASA) and curcumin, at concentrations of 5 mol% (drug-to-peptide ratio). Melatonin did not change the structural parameters of the membranes and did not impact the size or extent of peptide clusters. While ASA led to a membrane thickening and stiffening, curcumin made membranes softer and thinner. As a result, ASA was found to lead to the formation of larger peptide aggregates, whereas curcumin reduced the volume fraction of cross-β sheets by ~70%. We speculate that the interface between membrane and peptide cluster becomes less favorable in thick and stiff membranes, which favors the formation of larger aggregates, while the corresponding energy mismatch is reduced in soft and thin membranes.
Our results present evidence that cross-β sheets of Aβ25–35 in anionic unsaturated lipid membranes can be re-dissolved by changing membrane properties to reduce domain mismatch. A primary feature in the pathogenesis of Alzheimer's disease is the deposition of insoluble fibrillar plaques in the extracellular space of brain tissue 1. The major component of these plaques is a naturally occurring peptide, the amyloid-β peptide (Aβ). Aβ undergoes conformational changes leading to aggregation and the development of neurodegenerative diseases, such as Alzheimer's disease; however, the exact relationship between the two is still unclear 2. While aggregation of proteins is to some extent an inherent part of aging 3, increasing evidence has shown a link between the formation of plaques and the composition of surrounding brain tissue. Amyloid-β is a polypeptide consisting of 42 amino acids, which has a 10 amino acid long transmembrane segment, Aβ25–35, that is common to both the amyloid precursor protein (APP) and the full-length Aβ peptide. While this short transmembrane segment is commonly used in the study of peptide interactions and partitioning in membranes, see for example 4-6, it has also been reported to have neurotoxic properties 7-12 and a high tendency for aggregation and fibrillation 13-15. The secondary structure of a peptide and its interactions with the plasma membrane are essential in maintaining the function and integrity of the cell. By significantly altering its formation, the resulting changes can lead to the pathology of many diseases 16. In Alzheimer's disease, the monomeric Aβ peptides transition into long peptide structures that form amyloid fibres. These fibres consist of arrays of β sheets running parallel to the long axis of the fibrils, the cross-β motif 17, which are connected through steric zippers 1. Formation of fibrils is believed to

Results

Atomic Force Microscopy.
The topology of the multi-lamellar, solid-supported membranes was investigated using an ezAFM+ from Nanomagnetics Instruments, as detailed in the Materials and Methods Section. Membranes were measured at a temperature of 24 °C. By operating the device in tapping mode, topology and phase pictures can be measured simultaneously. While the topology picture visualizes structures on the membrane surface, the phase picture highlights softer and stiffer regions of the bilayer. Figure 2(a) shows the topology and phase image of POPC/DMPS + 20 mol% Aβ25–35. An area of ~4 × 4 μm was scanned at 2 μm/s with a resolution of 512 × 512 pixels. The images show small, ~100 nm sized structures. The appearance of these structures in the phase image indicates changes in the membrane stiffness. In order to further clarify the origin of these structures, membranes were studied using fluorescence microscopy.

Fluorescence Microscopy. Various amino acids are known to show autofluorescence. Peptides can emit a fluorescent signal when excited by an external light source. This property was used to identify peptide-rich areas in the bilayer. Fluorescence microscopy was conducted using an Eclipse LV100 ND Microscope from Nikon, as detailed in the Materials and Methods Section. Samples were measured at a temperature of 24 °C. The corresponding image is shown in Fig. 2(b). Bright, ~45 μm sized regions are visible. Based on their strong autofluorescence, these regions were identified as peptide-rich, in agreement with Tang et al. 30, who have previously reported Aβ25–35 peptide plaques of tens of μm in POPC/DMPS bilayers using optical microscopy. The size of these regions also corresponds to the typical size of senile plaques observed in the brain tissue of Alzheimer's patients.
However, while some of these large plaques show a relatively uniform structure, the high-resolution fluorescence microscope shows evidence that many of these plaques are composed of much smaller peptide-rich clusters, about 100 nm in size, which correspond to the structures observed in AFM. The fluorescence images prove that these structures are indeed composed of peptides. The microscopy images also shed light on the formation of peptide aggregates: they suggest that small, ~100 nm peptide clusters form spontaneously in the unsaturated anionic membranes. These small clusters eventually fuse to form peptide plaques of several tens of μm.

Transmission X-ray Microscopy. Scanning transmission x-ray microscopy (STXM) was performed using the ambient STXM on beamline 10ID1 at the Canadian Light Source. Membranes were measured at a temperature of 24 °C. Results are shown in Fig. 2(c). Briefly, monochromated soft x-rays are focused to a 100 nm spot, the sample is placed at that focus, and images are formed by raster scanning the sample while synchronously recording the transmitted flux. Spectra are obtained by recording images at a set of photon energies, in this case across the C 1s edge (280-305 eV). Transmitted signals are converted to optical density using the Beer-Lambert law, with I0 measured through the silicon nitride support where there is no sample. Lipids and peptides are distinguished spectroscopically by a 0.3 eV shift of the C 1s → π*C=O transition 48, as shown in Fig. 2(d). [Figure 2 caption, parts (c-e): (c) Scanning transmission x-ray microscopy images show clear evidence for peptide-rich clusters (peptides shown in red). (d) Relative energy spectra of either lipid or peptide were used in order to discern the position and size of the peptide clusters on a membrane substrate. (e) Spatial analysis of the images reveals an average width of these plaques of 74 nm (standard deviation 18 nm).]
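The conversion from transmitted flux to optical density mentioned above follows the Beer-Lambert law, OD = −ln(I/I0). A minimal sketch with invented intensity values, where I0 plays the role of the flux measured through the bare silicon nitride support:

```python
import math

# hypothetical transmitted flux at three pixels and the open (no-sample) reference
I0 = 1000.0
transmitted = [900.0, 450.0, 300.0]

# optical density per pixel via the Beer-Lambert law: OD = -ln(I / I0)
optical_density = [-math.log(i / I0) for i in transmitted]
```

Pixels that absorb more (lower transmitted flux) map to a higher optical density, which is what the component-mapping step then fits against the lipid and peptide reference spectra.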
Image sequences measured on the peptide-membrane aggregate are fit to reference spectra to form maps of the peptide and lipid components, which can be combined into color-coded composites. Figure 2(c) reveals small peptide clusters associated with lipids. Spatial analysis (Fig. 2(e)) indicates that the Aβ25–35 peptides form aggregates with an average size of 74 (18) nm in the anionic membranes, in excellent agreement with the results from AFM and optical microscopy. The molecular structure and composition of these peptide clusters was then studied using high-resolution x-ray diffraction.

X-ray Diffraction Signature of Aβ in Membranes. Highly oriented lipid bilayers of POPC/DMPS (97:3 mol/mol) were prepared on silicon wafers with Aβ25–35 and the drugs curcumin, ASA, and melatonin. The membranes were hydrated and scanned at 97% RH and a temperature of 30 °C. Using high-resolution x-ray diffraction, the in-plane (q∥) and out-of-plane (qz) structural features can be decoupled. The experimental setup is sketched in Fig. 3(a). Figure 3(b) presents out-of-plane diffraction data for pure POPC/DMPS bilayers without peptide and with 20 mol% peptide added, and with the addition of 5 mol% curcumin, ASA and melatonin. A series of well-developed Bragg peaks along qz is the signature of well-organized lamellar membranes. All membrane samples without Aβ show well-developed Bragg peaks up to order 7. The addition of 20 mol% Aβ25–35 led to significant changes in the diffraction pattern: Bragg peaks were less pronounced and the number of higher orders was significantly reduced as a result of increased membrane bending, as will be discussed below. The lamellar spacing (membrane width + width of the hydration water layer), dz, is calculated from the spacings between the respective Bragg peaks using Bragg's law, dz = 2π/Δqz. Results are plotted in Fig. 3(c) and listed in Table 1.
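The lamellar-spacing calculation described above, dz = 2π/Δqz, can be sketched as follows. The Bragg peak positions below are hypothetical values chosen to reproduce a spacing of roughly 56 Å, the order of magnitude quoted for the peptide-containing membranes:

```python
import math

# hypothetical q_z positions (in 1/Å) of successive lamellar Bragg peaks
qz_peaks = [0.112, 0.224, 0.337, 0.449, 0.561]

# average peak-to-peak spacing Δq_z, then d_z = 2π/Δq_z (in Å)
gaps = [b - a for a, b in zip(qz_peaks, qz_peaks[1:])]
dq = sum(gaps) / len(gaps)
d_z = 2 * math.pi / dq
```

Averaging over all peak-to-peak gaps, rather than using a single pair of peaks, makes the extracted spacing less sensitive to the uncertainty of any one peak position.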
The addition of Aβ25–35 significantly reduced dz in pure membranes from 70 to 56 Å. Aβ25–35 was found to decrease the lamellar spacing also in the presence of 5 mol% curcumin and 5 mol% melatonin, while 5 mol% ASA was found to lead to an increase in thickness. The membrane width, dHH, defined by the head-head distance, and the thickness of the hydration water layer, dw, were determined from Fourier transformation of the reflectivity data in Fig. 3, as detailed in previous publications 6,29. The corresponding values are given in Table 1. When peptides are embedded in pure POPC/DMPS and with the addition of curcumin and melatonin, there is a decrease in the thickness of the hydration water layer, while the addition of ASA led to an increase in dw. Comparing the membrane widths, dHH remained unchanged with the addition of melatonin (within the statistics of our experiment). The addition of ASA, however, led to an increase in membrane width, while curcumin made the membranes thinner. In-plane diffraction results are shown in Fig. 4. Complete 2-dimensional data are shown for the POPC/DMPS assay containing 20 mol% Aβ25–35 in Fig. 4(a,b). The different signals can be assigned to different molecular components and membrane properties, as sketched in Fig. 4(c); part (d) shows a sketch of the experimental setup. Two-dimensional data were integrated and converted into line scans in Fig. 4(e-l). Membrane hydration water molecules organize at 3.4 Å with respect to each other, leading to a peak at q∥ = 1.85 Å−1. The broad peak of highest intensity, centered around qT ~ 1.4 Å−1 in fluid membranes, is due to the packing of the lipid tails in the hydrophobic membrane core. The area per acyl tail, AT, is obtained from the distance between acyl tails, aT, as calculated from aT = 4π/(√3 qT) 30,49; values for AT are given in Table 1.
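The tail-packing relation above converts the position of the broad lipid-tail peak into a chain-chain distance, aT = 4π/(√3 qT). The area per tail then follows from the packing geometry of the chains; the hexagonal-packing formula used below is our assumption for illustration (the paper's exact definition of AT is given in its Methods):

```python
import math

q_T = 1.4                                   # lipid tail peak position (1/Å)
a_T = 4 * math.pi / (math.sqrt(3) * q_T)    # chain-chain distance a_T = 4π/(√3 q_T)

# area per acyl tail assuming hexagonal chain packing (assumption, see above)
A_T = math.sqrt(3) / 2 * a_T ** 2
```

For qT ~ 1.4 Å−1 this gives a chain-chain distance of about 5.2 Å and an area per tail in the low-20s of Å², the typical range for fluid lipid chains.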
The addition of Aβ25–35 led to a slight increase in lipid area in all assays. Additional small lipid signals are observed in the pure POPC/DMPS membranes in Fig. 4(e) at q∥-values of 1.43 and 1.5 Å−1. These signals have been reported before 50-52 and assigned to the organization of the lipid head groups within the lipid matrix. The peak positions are well described by a rectangular unit cell with dimensions a = 8.4 Å and b = 8.8 Å. We note that these signals disappear with the addition of curcumin, ASA and melatonin, indicative of an increased disorder in membrane organization. The average orientation of the lipid bilayers and the tilt of the lipid molecules can be determined by studying the angular dependence of the corresponding diffraction signals in the 2-dimensional x-ray intensity maps. The intensity at the lipid tail position was integrated as a function of the azimuthal angle γ to determine the average tilt angle of the lipid acyl chains. The corresponding values are listed in Table 1. Figure 4(b) shows the small-angle region around the reflectivity Bragg peaks in magnification. The corresponding intensity shows a circular pattern and was integrated over the meridian, δ, and analyzed using Hermans orientation function, as detailed in the Materials and Methods Section. Hermans function describes the degree or extent of orientation of the molecular axis relative to the membrane normal: completely aligned would result in f = 1, randomly oriented in f = 0.25. Values for membrane curvature and lipid tail orientation are shown in Fig. 4(m,n) and also listed in Table 1. Bending of the bilayers and the average tilt angles of the lipid tails increase with peptide concentration, indicative of increasing bilayer distortions in the presence of peptides and peptide aggregates. The signals at ~10 Å (q∥ = 0.7 Å−1) and 4.7 Å (q∥ = 1.35 Å−1) are the pattern of amyloid peptides forming cross-β amyloid sheets.
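The orientation analysis above can be sketched with a standard form of the Hermans orientation function, f = (3⟨cos²δ⟩ − 1)/2, evaluated on an intensity profile I(δ) over the meridian. Both the profile below and the exact form of the function are illustrative assumptions (the quoted value of 0.25 for random orientation suggests the paper uses its own normalization, detailed in its Materials and Methods):

```python
import math

# hypothetical intensity profile I(δ) over the meridian angle δ (degrees):
# a sharply peaked profile corresponds to well-aligned membranes
sigma = 10.0
profile = [(d, math.exp(-(d / sigma) ** 2)) for d in range(0, 91)]

num = den = 0.0
for deg, inten in profile:
    d = math.radians(deg)
    w = inten * math.sin(d)        # solid-angle weighting of the meridian
    num += w * math.cos(d) ** 2
    den += w
cos2 = num / den                   # intensity-weighted <cos² δ>

f = (3 * cos2 - 1) / 2             # Hermans orientation function (assumed form)
```

A profile concentrated near δ = 0 drives f toward 1 (well-aligned bilayers), while a broad profile lowers f, which is the qualitative trend reported with increasing peptide concentration.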
The structure of a cross-β sheet is depicted in Fig. 4(d). The two reflections observed in the x-ray pattern correspond to inter-strand and inter-sheet distances of peptide chains 1,30 . The reflection at 1.35 Å −1 is indicative of extended protein chains running roughly perpendicular to the membrane plane and spaced 4.7 Å apart. The reflection at 0.7 Å −1 shows that the extended chains are organized into sheets spaced 10 Å apart. The signal at q || -values of ~0.4 Å −1 (marked in grey) stems from the Kapton windows of the humidity chamber and was, therefore, not included in the structural analysis. The intensities of the peptide signals in Fig. 4(i-l) are proportional to the volume fraction of the different phases. The volume fraction of Aβ 25-35 aggregates can be determined from the ratio of the integrated diffraction signals. Values are given in Table 1 and displayed in Fig. 4(n). While a volume fraction of aggregates of 14% was observed in pure POPC/DMPS, 15% were found with ASA and 12% with melatonin. A significantly lower percentage of 4% was found in the presence of curcumin. As 20 mol% of the bilayers are made of peptides, they should contribute 20% to the scattering signal. However, the scattering experiment is only sensitive to aggregated peptides. A peptide signal of 15% therefore means that 3/4 of the added peptides (out of the 20 mol% added) are organized in clusters while 1/4 still exist as monomers, either embedded in or outside of the bilayers. In the presence of curcumin, only 1/5 of all added peptides are found in clusters while 4/5 exist as monomers. The sizes of the corresponding peptide aggregates can be estimated from the width of the corresponding diffraction peaks using Scherrer's equation 53,54 . Cluster size and volume fraction are plotted in Fig. 5(a,b) (and also listed in Table 1). The size of peptide clusters was in the order of ~100 nm for POPC/DMPS in the microscope images in Fig.
2, while all cluster sizes determined by x-ray diffraction except for ASA were significantly smaller (pure membrane 21 nm, curcumin 14 nm, melatonin 12 nm; Table 1). Although this could be the result of a general discrepancy in size determination using different techniques, it likely points to a domain sub-structure of the peptide clusters, i.e., that the 100 nm aggregates have grown from smaller nuclei, which then form domains of different orientation within the peptide clusters. X-ray diffraction would then give the width of the smaller domains, while the microscopical techniques are sensitive to the total size of the cross-β clusters.
SCIentIfIC RepoRTs | (2018) 8:12367 | DOI:10.1038/s41598-018-30431-8
UV-vis Spectroscopy. The thioflavin T (ThT) fluorescence assay is commonly used for the detection of amyloid fibrils 55 . The ThT class of molecules has three distinct binding sites in Aβ peptides 56 . ThT binds to "cross-strand ladders" that are inherent in repeating side-chain interactions running across the β-strands within a β-sheet layer 57 , and is used here to quantify the presence of cross-β sheets in liposome solutions with UV-visible spectroscopy, corroborating our previous experimental findings. In the presence of cross-β sheets, thioflavin T absorbs light at 456 nm. We first prepared liposomes of POPC/DMPS (97:3 mol/mol%) and added 20 mol% Aβ 25-35 and thioflavin T to form peptide aggregates. The corresponding spectrum in Fig. 6(a) shows fluorescence at 456 nm, characteristic of cross-β sheets. After a stable fluorescence was reached, small volumes of melatonin, ASA or curcumin were added to the solution. When adding curcumin, the fluorescence was found to decrease. Figure 6(b) shows the difference in fluorescence for the three molecules normalized to that of the pure POPC/DMPS matrix. No reduction in fluorescence was observed after the addition of melatonin or ASA.
In assays with curcumin, however, the fluorescence decreased significantly, indicating a depletion of cross-β sheets. Molecular Dynamics The Aβ 25-35 peptide in Fig. 7(a) was initially centered at z = 0 but then localized toward the acyl tails of either leaflet. This slightly tilted position is in agreement with the position determined experimentally from x-ray diffraction 6 . Peptide partitioning caused negative membrane curvature on both sides of the membrane such that the C- and N-terminals of the peptide were positioned in the head-tail interface. All three drugs were found to position in the head-tail interface of the membrane, in agreement with experimental models of bilayers with curcumin (Fig. 7(b)) 41 , ASA (Fig. 7(c)) 52 , and melatonin (Fig. 7(d)) 46 . The positions of the phosphorous atoms in the lipid head groups are visualized as a 2-dimensional surface and show increased membrane bending around the position of the Aβ 25-35 peptides. The local membrane curvature, K, was calculated by fitting the positions of the lipid head groups with respect to the membrane normal and subsequent Monge parameterization, as discussed in the Materials and Methods Section. Values for K are plotted in Fig. 7(e). All drugs led to a decrease in membrane curvature, with curcumin causing a 60% decrease. A decrease in K, i.e., in this local curvature, is indicative of a softening of the membranes. From the lipid diffusion displayed in Fig. 7(f), all drugs led to a slight decrease in diffusivity. Discussion As all drugs investigated in this study do not directly interact with the amyloid peptides but instead impact membrane properties, it is likely that membrane-mediated interactions between the inserted proteins play a major role in the aggregation behaviour of Aβ 25-35 .
A hydrophobic mismatch is created between peptide domains and lipids when the hydrophobic thickness of the transmembrane peptides does not match the equilibrium bilayer thickness; each monolayer leaflet then distorts in order to ensure that the entire hydrophobic region of the peptide is contained within the hydrophobic core of the membrane. As reported previously [30][31][32][33][58][59][60][61] , this mismatch can result in long-range attractive forces between the peptides. Multi-lamellar, solid supported membranes were prepared for the AFM, fluorescence microscopy, transmission x-ray microscopy and x-ray diffraction experiments. In this type of sample preparation, lipids, peptides and small molecules are all added at the time of preparation. To rule out that the observed effects are the result of the sample preparation, small unilamellar vesicles in solution were prepared for the UV-vis measurements, and peptides and small molecules were added later. These assays were used to study the effect of curcumin, ASA and melatonin on the size and volume fraction of the cross-β peptide clusters. MD simulations were specifically used to investigate the effect of insertion of a single peptide on the curvature of the bilayers, to better understand the early stages of aggregate formation and derive a theoretical model. A simple free energy model based on the concept of hydrophobic mismatch can be used to understand the observed effects of curcumin, ASA and melatonin on the size of the Aβ 25-35 clusters. Regarding the clusters as a phase-separated domain, their free energy can be written as the sum of contributions from the bulk free energy gain and the interface energy (line tension). Assuming the aggregates are circular, the free energy of a cluster can be written in terms of the cluster radius R as F(R) = −π g 0 R 2 + 2π σ R (1), where g 0 and σ are the free energy density (free energy per unit area) and line tension of the Aβ 25-35 domain, respectively.
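The text describes the cluster free energy as a bulk gain plus a line-tension cost for a circular domain. A minimal numerical sketch of that balance follows; the quadratic-plus-linear form and the parameter values are illustrative assumptions, not fitted quantities:

```python
import math

def cluster_free_energy(R, g0, sigma):
    """F(R) = -pi*g0*R**2 + 2*pi*sigma*R: bulk free energy gain of a
    circular domain of radius R versus its line-tension cost."""
    return -math.pi * g0 * R ** 2 + 2 * math.pi * sigma * R

def critical_radius(g0, sigma):
    """Radius where dF/dR = 0; larger clusters grow, smaller ones shrink."""
    return sigma / g0

# illustrative dimensionless numbers (not fitted values)
R_star = cluster_free_energy(2.0, 1.0, 2.0), critical_radius(1.0, 2.0)
```

Setting dF/dR = 0 gives R* = σ/g 0 , so a smaller line tension (as argued for curcumin below) lowers the nucleation barrier and shifts the balance toward many small, or dissolved, domains.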
For a binary system composed of phase separated domains, the size of the minority domains is dictated by the interfacial tension or line tension of the domains. In particular, a smaller interfacial or line tension would lead to a smaller domain size. The line tension between a bilayer membrane and a hydrophobic insertion depends on the hydrophobic mismatch. Specifically, the line tension is mainly governed by the thickness deformation energy of the bilayer near the hydrophobic insertion, if we assume that the thickness change of the membranes is small. From this thickness dependence and using the values of the bilayer thickness obtained from the experiments, we can conclude that the line tension of the clusters follows the order σ Curcumin < σ Melatonin < σ ASA . This ordering of the line tension is consistent with the observed size distribution of the Aβ 25-35 aggregates. In the case of melatonin, the membrane properties did not change and, as a consequence, melatonin did not have a measurable effect on the amyloid peptide clusters. ASA led to a thickening and stiffening of the membrane, thereby increasing the hydrophobic mismatch and making this interface energetically less favorable. As a result, the addition of ASA led to the formation of larger domains; however, the total amount of cross-β sheets was unchanged. This argument can be illustrated as follows: two domains of radius r have a total area of 2 × πr 2 and a circumference of 2 × 2πr. A larger domain with the same area would have a radius of √2 r and a circumference of 2π(√2 r), smaller than that of the two smaller domains. At the same time, the lipid tilt has to increase in order to adapt to the larger hydrophobic mismatch, as observed in the structural data. Curcumin, on the other hand, was found to lead to a thinning and softening of the membranes, which significantly reduces the energy cost in Eq. (1) and favors dissolution of the peptide domains.
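The area/circumference argument above can be checked numerically; a small sketch:

```python
import math

def perimeters(r):
    """Boundary length of two radius-r domains versus one merged domain
    of the same total area (which has radius sqrt(2)*r)."""
    split = 2 * (2 * math.pi * r)                # two small domains
    merged = 2 * math.pi * (math.sqrt(2) * r)    # one domain, same area
    return split, merged

split, merged = perimeters(1.0)
```

For any r, the merged circumference is a factor √2/2 ≈ 0.71 shorter, so coalescence always reduces the total interface length at fixed area, which is why a larger line tension (ASA) favors fewer, larger domains.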
Lipid tilt was unchanged and the membrane order parameter, H, points to small membrane bending, consistent with the idea of relatively flat bilayers. Thus the results point at a membrane-mediated mechanism for the formation of Aβ 25-35 peptide clusters in unsaturated anionic lipid membranes. The findings are summarized in Fig. 8. Conclusion Model unsaturated anionic neuronal membranes were studied at high concentrations of Aβ 25-35 , the transmembrane segment of the amyloid-β peptide. Multi-lamellar, solid supported membranes were prepared for these experiments. The peptides form clusters at high peptide concentrations of 20 mol%, which show the cross-β pattern also observed in the plaques of Alzheimer's patients. Formation of peptide clusters was confirmed by atomic force microscopy, fluorescence microscopy and transmission x-ray microscopy. These techniques present evidence for small, ~100 nm sized clusters, which organize and fuse to form larger plaques of tens of μm. In order to test the effect of membrane properties on peptide cluster formation, melatonin, ASA and curcumin were added to the membranes. These molecules are membrane active and have been speculated to affect amyloid aggregation. From x-ray diffraction, we find that melatonin does not change membrane properties and did not have an observable effect on the peptide clusters. ASA led to a thickening and stiffening of the membranes, which resulted in the formation of larger peptide clusters. The addition of curcumin led to a thinning and softening of the membranes, resulting in a significant decrease (by 70%) of the cross-β sheet signal, indicative of a dissolution of the Aβ clusters. Results were confirmed using UV-vis spectroscopy and the thioflavin T assay, where curcumin led to a significant reduction in the β-sheet signal. Materials and Methods Preparation of Highly-Oriented Multi-Lamellar Membranes.
Highly oriented multi-lamellar membranes were prepared on single-side polished silicon wafers. 100 mm diameter, 300 μm thick silicon (100) wafers were pre-cut into 1 × 1 cm 2 chips. The wafers were first pre-treated by sonication in dichloromethane at 310 K for 30 minutes to remove all organic contamination and leave the substrates in a hydrophobic state. Each wafer was thoroughly rinsed three times by alternating with ~50 mL of ultrapure water and methanol. Solutions of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and 1,2-dimyristoyl-sn-glycero-3-phospho-L-serine (DMPS) at a concentration of 20 mg of lipid per mL of solvent were prepared in a 1:1 chloroform:2,2,2-trifluoroethanol (TFE) solution. The Aβ peptides were pre-treated with trifluoroacetic acid (TFA) to disaggregate the peptide, as described previously 64 . This pre-treatment included dissolving the peptide in a 1 mg/mL solution of TFA, sonicating with a tip sonicator for four three-second intervals, and removing the solvent by evaporation under nitrogen gas; the peptide was then placed in a vacuum for 30 minutes at 298 K to remove any traces of TFA. The peptide was then redissolved in a 20 mg/mL solution of 1:1 TFE:chloroform. Stock solutions of melatonin, ASA and curcumin were prepared at a concentration of 20 mg/mL as well. Each solution was vortexed until homogeneous. The POPC, DMPS, small-molecule, and peptide solutions were then mixed in appropriate ratios to produce the desired membrane samples for the experiment. Schematics of the POPC, DMPS and Aβ 25-35 molecules are shown in Fig. 1. The mol%-values given refer to the number of peptide and drug molecules per lipid molecule and indicate the peptide-to-lipid and drug-to-lipid ratios. The temperature of the main transition in pure POPC is −2 °C. Care was taken to prepare the membranes at elevated temperatures, in the fluid phase of the lipid bilayers.
The tilting incubator (VWR Incubating Rocker/3-D Rotator Waver) was heated to 313 K and the lipid solutions were placed inside to equilibrate. 65 μL of lipid solution was applied on each wafer, and the solvent was then allowed to slowly evaporate for 10 minutes (speed 15, tilt of 1), such that the lipid solution spread evenly on the wafers. After drying, the samples were placed in vacuum at 313 K for 12 hours to remove all traces of the solvent. The bilayers were annealed and rehydrated before use in a saturated K 2 SO 4 solution, which provides ~97% relative humidity (RH). The hydration container was allowed to equilibrate at 293 K in an incubator. The temperature of the incubator was then increased gradually from 293 K to 303 K over a period of ~5 hours to slowly anneal the multi-lamellar structure. This procedure results in highly oriented multi-lamellar membrane stacks and a uniform coverage of the silicon substrates. About 3,000 highly oriented stacked membranes with a total thickness of ~10 μm are produced using this protocol. The high sample quality and high degree of order is a prerequisite to determine in-plane and out-of-plane structure of the membranes separately, but simultaneously. Table 1 lists all samples prepared for this study. Scanning Transmission X-ray Microscopy. STXM measurements were performed using the ambient STXM on beamline 10ID1 at the Canadian Light Source (CLS, Saskatoon, SK, Canada). Measured transmission images were converted to optical density (OD) images using the Beer-Lambert law, OD = ln(I 0 /I), where I 0 is the incident photon flux measured through a blank area of the silicon nitride window and I is the photon flux through an area where the sample is present. Since there was only partial coverage of the sample by the lipid-peptide aggregates, lipid bilayer areas were identified using the difference in OD images measured at 288.8 eV (peak C 1s signal from lipids) and at 280 eV (below the C 1s onset).
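The transmission-to-OD conversion described above is a one-line calculation; a sketch in Python:

```python
import math

def optical_density(I, I0):
    """Beer-Lambert conversion of transmitted photon flux to optical
    density: OD = ln(I0 / I)."""
    return math.log(I0 / I)

# a pixel transmitting half of the incident flux has OD = ln(2) ~ 0.69
od_half = optical_density(I=0.5, I0=1.0)
```

In practice this is applied per pixel, and the difference of the OD maps at 288.8 eV and 280 eV isolates the carbon (lipid) contribution, as described in the text.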
A full C 1s image sequence from 280 eV to 340 eV was then measured on areas of typically 10 μm × 10 μm using 100 nm steps and with the STXM beam defocused to 100 nm to match the step size. Radiation damage to lipids is a concern 65 and the reason why we chose to use defocused beam sizes, which reduced the dose by 10-fold relative to the use of a fully focused ~30 nm spot. The membrane sample was measured by x-ray diffraction before and after the STXM measurements and no structural differences were observed within the experimental resolution. The photon energy step size was 0.10 eV from 284 eV to 290 eV and 0.25 eV or larger outside this region. Axis2000 (available at http://unicorn.mcmaster.ca/aXis2000.html) was used for stack alignment, conversion to OD, and singular value decomposition mapping using lipid and peptide spectra taken from the data measured in this study. Atomic Force Microscopy. Atomic force microscopy was conducted in the Origins of Life Laboratory at McMaster University using a Nanomagnetics ezAFM+. The instrument uses an FPGA-based digital feedback control and is able to scan an area of 40 × 40 μm with a maximal height difference of 4 μm. Samples are taped on a steel plate, which is then magnetically mounted on a 38 mm motorized XY-stage. A digital microscope with a field of view of 390 × 230 μm can be focused either on the cantilever or the sample surface and allows aligning the specimen with respect to the cantilever tip. All presented images were recorded in non-contact (tapping) mode. The instrument was equipped with a Point Probe Plus Non-Contact Long-Cantilever Reflex Coating (PPP-NCLR) probe with a guaranteed tip radius of less than 10 nm and a resonance frequency of 190 kHz. Operating the instrument in tapping mode offers a simultaneous measurement of the topology and surface phase. First, an area on the sample surface was chosen using the digital microscope and the motorized stage.
Afterwards, a coarse scan was performed over an area of 20 × 20 μm with a scanning speed of 10 μm/s. Flat areas were identified on this scan and rescanned with a scanning speed of 2 μm/s at a resolution of 512 × 512 px. Topology data were processed using an auto-plane correction to correct the sample tilt, and a scar and a horizontal line correction to correct artifacts. All data were processed using the NanoMagnetics Image Analyzer Software (version 1.4). Phase pictures were corrected using the spot removal tool. Epi-Fluorescent Microscopy. Fluorescent microscopy was conducted using an Eclipse LV100 ND microscope from Nikon in the Origins of Life Laboratory at McMaster University. The instrument is equipped with a Tu Plan Fluor BD 50× objective with a numerical aperture of 0.8. Images were recorded using a Nikon DS-Ri2 camera with a resolution of 4908 × 3264 pixels and a pixel size of 7.3 × 7.3 μm. The camera is mounted to the microscope via a 2.5× telescope. All presented images were recorded in episcopic illumination mode using a halogen lamp. Due to the high numerical aperture, the objective has a small depth of focus between 0.7 μm and 0.9 μm. In order to record a uniformly sharp image, the Nikon control software (NIS Elements, version 4.60.0) was used to record an Extended Depth of Focus (EDF) image by combining multiple images with different focal planes. The presented data are a combination of two EDF pictures. First, a bright-field image was recorded. Second, a fluorescent picture was taken. A B-2A longpass emission filter cube was used with an excitation wavelength of 450-490 nm and a long-pass analyzing filter with a barrier wavelength of 520 nm. Due to autofluorescence, peptides, such as the analyzed amyloid-β peptide, light up in the fluorescent picture. Phospholipids, on the other hand, barely emit a fluorescent signal. Hence, one can easily identify amyloid-β enriched regions on the fluorescent image.
To locate the amyloid-β clusters on the sample surface, a bright-field image and a fluorescent image were combined. UV-vis Spectroscopy. 400 μL of small unilamellar vesicles (SUVs) were prepared in water by probe sonication, preserving the POPC/DMPS (97:3 mol/mol%) ratio at a concentration of 5 mg/mL of lipid. The samples were kept in an ice bath during sonication to prevent solvent evaporation, and the probe was pulsed over an hour at 20,000 Hz to prevent the sloughing off of the titanium tip 66 . 1.8 mg of Aβ 25-35 was then added to mimic a concentration of 20 mol% and replicate the experimental aggregation conditions. The solutions were then transferred to cuvettes and used as blanks. Complete wave scans were taken from 200 nm to 800 nm, and the fluorescence of thioflavin T at a wavelength of 456 nm was monitored 55,67 . Because ThT can accelerate deposition of Aβ peptides 68 and other amyloids 69,70 , experiments without the presence of drugs were conducted over a period of 24 hours to check for aggregation. Samples of Aβ 25-35 were mixed in a 1.5 mL flask and kept in a shaking incubator at 37 °C. Aliquots were taken and placed in a cuvette at each time point. ThT was then added to the aliquot and a measurement was taken. ThT was found to induce aggregation, which plateaued after 12 hours. All measurements in the manuscript were therefore conducted 12 hours after ThT deposition. When the plateau was reached, a small, concentrated 100 μL volume of a drug dissolved in water was added to the cuvette to minimize the decrease in measured emission from the increase in volume. The concentrations of these dissolved drugs were chosen to ensure the total solution had 5 mol% of drug, in agreement with the other experiments. Full wave scans were then taken every 30 seconds for the next 10 minutes.
As changes in the thioflavin T signal happened apparently instantaneously, we were not able to resolve a change in signal temporally within the capabilities of our spectrophotometer. X-ray Diffraction. X-ray diffraction data were obtained using the Biological Large Angle Diffraction Experiment (BLADE) in the Laboratory for Membrane and Protein Dynamics at McMaster University. BLADE uses a 9 kW (45 kV, 200 mA) CuKα Rigaku Smartlab rotating anode at a wavelength of 1.5418 Å. Both source and detector are mounted on movable arms such that the membranes stay horizontal during the measurements. Focussing multi-layer optics provide a high-intensity parallel beam with monochromatic x-ray intensities up to 10 10 counts/(s × mm 2 ). This beam geometry provides optimal illumination of the solid supported membrane samples to maximize the scattering signal. By using highly oriented membrane stacks, the in-plane (q || ) and out-of-plane (q z ) structure of the membranes can be determined separately, but simultaneously. The result of such an x-ray experiment is a 2-dimensional intensity map of a large area (0.03 Å −1 < q z < 1.1 Å −1 and 0 Å −1 < q || < 3.1 Å −1 ) of the reciprocal space. The corresponding real-space length scales are determined by d = 2π/|Q| and cover length scales from about 2.5 to 60 Å, incorporating typical molecular dimensions and distances. These 2-dimensional data are essential to detect and identify signals from bilayers and peptides and to determine the orientation of the molecules. All scans were carried out at 28 °C and 97% RH. The membrane samples were mounted in a humidity-controlled chamber during the measurements. The membranes were hydrated by water vapour and allowed to equilibrate for 10 hours before the measurements to ensure full re-hydration of the membrane stacks.
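The reciprocal-to-real-space conversion d = 2π/|Q| used above can be sketched as:

```python
import math

def d_spacing(Q):
    """Real-space repeat distance d = 2*pi/|Q| (in A for Q in 1/A)."""
    return 2 * math.pi / abs(Q)

# e.g. the hydration-water peak at q = 1.85 1/A corresponds to ~3.4 A
d_water = d_spacing(1.85)
```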
The degree of orientation of the bilayers and lipid tails within the membrane samples was determined from the 2-dimensional x-ray maps. The intensity as a function of Q and the angle γ from the q || axis was used to determine the orientation of the lipid tail signals. Pixels within a wedge of the reciprocal space map, defined by γ and γ step (where γ step = 2°), were integrated as a function of Q = (q || 2 + q z 2 ) 1/2 and normalized by the pixel count at each Q. γ varied from 30° to 90° for the sample with 20 mol% peptide to capture peptide signals. Data from γ < 30° were not included due to high absorption at low angles 71 . The integrated I(Q, γ) could be fit with Lorentzian functions. By calculating the area under the Lorentzian fits, I(γ) was determined and fit with a Gaussian distribution. To determine the degree of orientation of membranes in the stack, the intensity as a function of the meridional angle δ was determined. The intensity was integrated around the second Bragg peak, at Q ≈ 0.22 Å −1 , from 18° < δ < 40°. δ < 18° was not used in order to avoid contributions from diffuse scattering 72 . The second Bragg peak was chosen because its diffuse scattering was weaker than that of the first Bragg peak. Pixel density at low Q was too low to calculate I(Q, δ) as with the peptide samples, so I(δ) was calculated by direct summation of pixels within δ and δ step (where δ step = 2°), and within Q and Q step , where the Q-range was chosen to include only scattering from the second Bragg peak. I(δ) was fit with a Gaussian distribution centred at δ = 0, which was then used to calculate the degree of orientation using Hermans orientation function, f = (3⟨cos 2 δ⟩ − 1)/2. Molecular Dynamics Computer Simulations. All simulations were run in-house on MacSim, a GPU-accelerated workstation containing 20 physical Intel Xeon CPU cores and two GeForce GTX 1080 graphics cards, totalling 5120 CUDA cores.
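Returning to the orientation analysis above: the Hermans function can be evaluated directly from a discretized I(δ) instead of a Gaussian fit. This toy sketch reproduces the limits quoted in the Results (f = 1 for complete alignment, f = 0.25 for random in-plane orientation):

```python
import math

def hermans_f(deltas_deg, intensities):
    """Hermans orientation function f = (3*<cos^2(delta)> - 1) / 2,
    with the average weighted by the measured intensity I(delta)."""
    w = sum(intensities)
    mean_cos2 = sum(I * math.cos(math.radians(d)) ** 2
                    for d, I in zip(deltas_deg, intensities)) / w
    return (3 * mean_cos2 - 1) / 2

aligned = hermans_f([0.0], [1.0])                  # all intensity at delta = 0
random2d = hermans_f(list(range(91)), [1.0] * 91)  # uniform over 0..90 degrees
```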
The Aβ 25-35 peptide was taken from PDB 1QWP and equilibrated using typical parameters, in the presence of 1000 water molecules to reduce structural rigidity, for 300 ns. The peptide was then desolvated and the structure re-inserted into bilayer patches. A system containing 124 POPC and 4 DMPS lipids, evenly sectioned across the two leaflets, was constructed. The lipid topologies were taken from the CHARMM-GUI builder. The system was equilibrated at high hydration (25 waters per lipid) for 200 ns before Aβ 25-35 was added to the center of the bilayer by a modified InflateGRO algorithm for multicomponent bilayers. Topologies for all systems were generated with the CHARMM General Force Field (CGenFF) program. All simulations were performed using the GROMACS 5.1.2 software package 73,74 , utilizing the CHARMM36 force field. All simulations used a 2 fs time step, periodic boundary conditions applied in all directions, a short-range van der Waals cutoff of 1.2 nm, the particle-mesh Ewald solution for long-range electrostatics 75 , and the LINCS algorithm for determination of bond constraints 76 . A Nose-Hoover thermostat at 28 °C (with a time constant of τ t = 0.5 ps) was used for temperature coupling 77 , while a Parrinello-Rahman semi-isotropic weak pressure coupling scheme was used to maintain a pressure of 1.0 bar (with a time constant of τ p = 1 ps) 78 . The position of the Aβ 25-35 peptide was restrained during volume (NVT) and pressure (NPT) equilibration to avoid free-space bias as the systems were reduced. Restraints were removed during the subsequent 200 ns simulation. Calculating the membrane curvature from MD simulations is not well-defined, as the surface that is being "curved" relative to a reference plane is itself difficult to define. For this reason, we used the Monge parameterization, in which the reference plane is defined as the average distance along the membrane normal where the overall density is at a minimum, i.e. the bilayer center (d 0 ).
The surface is defined by the distance d from the surface to the center. If δ = Δd = d − d 0 at position r(x, y), we obtain the height field δ(r). The curvature K within the Monge gauge can then be calculated from the relation K = ∇ · (∇δ/(1 + (∇δ) 2 ) 1/2 ) ≈ Δδ, where ∇ and Δ are the nabla- and Laplace-operator on the reference plane, respectively; the approximation by Δδ relies on the small-gradient limit. From this, we are able to define the local curvature at the point of maximum deformation as a function, f(x), due to the presence of Aβ 25-35 on either leaflet, P. If we imagine an external circular force applied on the bilayer, then a line bisecting the center of the force and point P can be defined as p 1 (x), and a second line bisecting the center of the force at some point away from P can be defined as p 2 (x). The intersection
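In the small-gradient limit the curvature reduces to the Laplacian of the height field; a minimal finite-difference sketch (pure Python, on a toy paraboloid, not the simulation data):

```python
def laplacian(grid, h):
    """Discrete Laplacian of a height field delta(x, y) on a square grid
    of spacing h; in the small-gradient approximation the local curvature
    K is approximated by Laplace(delta)."""
    n, m = len(grid), len(grid[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = (grid[i + 1][j] + grid[i - 1][j] +
                         grid[i][j + 1] + grid[i][j - 1] -
                         4 * grid[i][j]) / h ** 2
    return out

# toy paraboloid delta = (x^2 + y^2)/2, whose Laplacian is exactly 2
h = 0.1
delta = [[((i * h) ** 2 + (j * h) ** 2) / 2 for j in range(10)]
         for i in range(10)]
K = laplacian(delta, h)
```

For a quadratic surface the central-difference stencil is exact, which makes the paraboloid a convenient sanity check before applying the same stencil to the fitted head-group surfaces.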
Investor Psychology, Mood Variations, and Sustainable Cross-Sectional Returns: A Chinese Case Study on Investing in Illiquid Stocks on a Specific Day of the Week This paper uncovers a new finding of sustainable cross-sectional variations in stock returns explained by mood fluctuations across the days of the week. Long- and short-leg returns of the illiquidity anomaly are strongly related to the day of the week, and the magnitude of the excess returns is also striking [the long leg refers to the portfolio deciles that earn higher excess returns; historical evidence suggests that more illiquid stocks earn higher excess returns (Amihud, 2002; Corwin and Schultz, 2012)]. The speculative leg of the illiquidity anomaly is the long leg (Birru, 2018) [the speculative leg falls into the long leg of the anomaly because more illiquid stocks are sensitive to investor sentiment (Birru, 2018)]. Therefore, the long (speculative) leg experiences sustainably higher returns on Friday than the short (non-speculative) leg. At the same time, considerably higher long (speculative) leg returns were witnessed on Friday than on Monday. These cross-sectional variations in illiquid stocks on specific days are consistent with the limits-to-arbitrage explanation. The observed variations in cross-sectional returns are sustained and consistent with ample evidence from psychology research regarding low mood on Monday and high mood on Friday. INTRODUCTION Behavioral finance researchers critique traditional finance theories by arguing in favor of the psychology of investors as a core determinant in asset-pricing research. It has therefore been a long-standing area of interest for economists to explore whether investor sentiment affects stock prices. There is no role for investor sentiment in classical finance theory.
Instead, classical finance theory argues that competition amongst rational investors, who form diversified portfolios, will lead to an equilibrium in which prices equal the rationally discounted values of expected future cash flows. Here, cross-sectional expected returns depend solely on cross-sectional systematic risks (Rasheed et al., 2016; Yang et al., 2019). According to classical finance theory, even if there are some irrational investors, the demands of these investors are offset by arbitrageurs and therefore have no considerable impact on prices. The sentiment hypothesis makes a clear prediction that anomalies reveal variation in returns across the days of the week, and earlier studies (Baker and Wurgler, 2006, 2007) have focused on anomalies that theory predicts are related to sentiment. In particular, this research has focused on anomalies related to illiquidity and on a theory that predicts that one leg should be clearly speculative and one clearly non-speculative. Importantly, individual action and behavior are determined by mood, which is one of their most powerful determinants. Variations in mood have been found to induce less than fully rational behavior in financial markets, not only from individual investors but from institutional investors as well (Goetzmann et al., 2015). The weekend effect has not existed since 1975 (Robins and Smith, 2016). The presence of a strong cross-sectional effect is still not surprising, since mood variations provide clear cross-sectional predictions even though they do not lead to comprehensive aggregate predictions. As Baker and Wurgler (2007) argue in relation to sentiment, theory does not provide obvious comprehensive predictions. For example, speculative stocks are more sensitive to sentiment, and, with a decrease in sentiment, the prices of these stocks will also decline.
This scenario can lead to a flight to quality, which will cause an increase in the prices of non-speculative or safe stocks (Kong et al., 2019). Therefore, sentiment provides an obvious cross-sectional prediction, as has previously been argued. This study focuses on specific types of cross-sectional investment strategies that clearly show day-of-the-week returns under the sentiment hypothesis. In earlier studies, many researchers documented that stock markets perform poorly on Mondays [early studies include Cross (1973), French (1980) and Gibbons and Hess (1981)]. Though many studies have explored the weekend effect, none of them has produced satisfactory results. Investor sentiment diverges across the days of the week, since mood is a varying factor that affects sentiment (Ma and Tanizaki, 2019). In capital markets, the existence of pessimism and optimism unrelated to fundamentals, generally called sentiment, provides clear predictions of cross-sectional returns. Variation in sentiment will have a contemporaneous effect on returns, and it will most strongly affect the prices of stocks that are hard to value, that are very subjective to value, or that are difficult to arbitrage (Baker and Wurgler, 2006). Therefore, the hypothesis predicts that, in comparison to non-speculative stocks, speculative stocks will earn high returns on Friday and low returns on Monday (Birru, 2018). The analysis of the variation in mood across the days of the week has remained a vigorous research dimension in the field of psychology ever since the first extensive study was conducted by Rossi and Rossi (1977). Though the exact pattern of the variation in mood over the course of the week has long been debated, one comparatively unquestioned finding has emerged in the literature: mood is higher on the weekend and Friday than from Monday to Thursday. Generally, mood increases from Thursday to Friday and decreases on Monday.
Mixed results exist in the literature regarding mood variation from Monday to Thursday. Some recent studies have used large heterogeneous samples of individuals and expanded our understanding. For example, Stone et al. (2012) and Helliwell and Wang (2014) used survey data from the Gallup Organization in the United States, gathered through telephone questionnaires from more than 340,000 individuals over the age of 18. Their findings are also consistent with the theory that mood is higher on Friday than from Monday to Thursday. As Friday and Monday are the only days of the week that provide an unambiguous psychological prediction, our analysis focuses only on these days. The strong psychological evidence that mood is higher on Friday and lower on Monday predicts higher returns for speculative stocks than for non-speculative stocks on Friday, with the inverse pattern on Monday. Our study contributes to the literature: it presents a specific explanation of which stocks are sensitive to sentiment, and it provides evidence of cross-sectional variation of returns on particular days by linking the speculative leg of illiquid stocks with mood theory from the psychology literature. This study also provides different investment strategies for earning excess returns across the days of the week by investing in illiquid stocks. Several hypotheses motivate the analysis in this study. One possibility is that the trading behavior of institutions changes with the days of the week, which in turn causes predictable cross-sectional variation in returns across the days of the week. Other explanations concern the content and timing of news releases: cross-sectional variation may arise from the content and timing of good- or bad-news announcements.
Another possible explanation is related to the timing of macroeconomic news announcements; good or bad macroeconomic news is sometimes systematically released on particular days of the week. These systematic patterns have cross-sectional return effects, and this study incorporates these explanations in order to check and verify the true relationship. On the basis of the published literature, we test four hypotheses.
H1: The speculative leg of anomalies earns a higher return on Friday than Monday due to mood variations across the days of the week.
H2: The speculative leg of anomalies earns higher stock returns on Friday than the non-speculative leg.
H3: The speculative leg of anomalies earns higher long minus short strategy returns on Friday than Monday.
H4: The observed cross-sectional variation in stock returns is inconsistent with the impact of firm-specific and macroeconomic news.
MATERIALS AND METHODS
In this section, our analysis focuses on the characteristics of illiquid stocks that theory predicts are affected by sentiment. According to Baker and Wurgler (2006, 2007), the stocks most affected by sentiment are those that are difficult to value, subjective, and hard to arbitrage. In practice, stocks with subjective valuations and stocks that are hard to arbitrage are likely to be the same (Birru, 2018). The psychology literature hypothesizes that mood affects decisions when situations are unclear or adequate information is unavailable (Clore et al., 1994; Forgas, 1995; Hegtvedt and Parris, 2014; Sarfraz et al., 2019). Accordingly, stocks that lack a precise valuation can give investors a distorted view of their value, and this distortion varies with the prevailing state of sentiment.
Baker and Wurgler (2006) assessed several dimensions that differentiate the speculative intensity of stocks; these include dividend-paying status, growth intensity, size, level of distress, profitability, and age. More specifically, Birru (2018) notes that illiquid stocks face greater impediments to arbitrage. Therefore, our analysis focuses on illiquid stocks: these stocks are sensitive to sentiment and should earn higher speculative returns on Friday than Monday. For this purpose, we use two measures of illiquidity: Amihud's illiquidity measure (Amihud, 2002) and the Bid-Ask spread (Corwin and Schultz, 2012). Portfolio performance is measured through the Jensen alpha, a risk-adjusted performance measure that represents the average return of the portfolio investment above or below that predicted by different asset pricing models, given the investment's beta and the average market return.
Portfolio Construction
Illiquidity is measured following the methodology of Amihud (2002) and Corwin and Schultz (2012). Portfolios are then generated by forming 10 deciles based on the calculated values for each stock. We took, however, only deciles 1 and 10 of both measures for the portfolio construction, as our analysis was based on the speculative and non-speculative characteristics of each portfolio, which fall only in the extreme deciles. Portfolios constructed for both anomalies are rebalanced every month. A penalized expected-risk criterion is one of the widely used portfolio construction approaches (Luo et al., 2019). Birru (2018) notes that illiquid stocks are sensitive to sentiment, and stocks that are more illiquid face higher impediments to arbitrage. Therefore, the highest decile of illiquid stocks should earn higher returns on Friday than Monday and higher long minus short returns on Friday than Monday.
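The construction described above can be sketched in a few lines. This is a generic illustration under assumed inputs (per-stock lists of daily returns and traded volumes); the function names and sample numbers are hypothetical, not the study's actual code or data:

```python
# Illustrative sketch of the Amihud (2002) illiquidity measure and decile assignment.
# Inputs are hypothetical; the study's dataset and filters are not reproduced here.

def amihud_illiq(returns, volumes):
    """Amihud illiquidity: average of |daily return| / daily traded volume."""
    pairs = [(abs(r), v) for r, v in zip(returns, volumes) if v > 0]
    if not pairs:
        return float("nan")
    return sum(r / v for r, v in pairs) / len(pairs)

def decile_ranks(values):
    """Assign each stock a decile 1..10 by sorted value (10 = most illiquid here)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order):
        ranks[i] = 1 + (pos * 10) // len(values)
    return ranks
```

Only deciles 1 and 10 would then feed the portfolio legs, rebalanced monthly as described in the text.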
Supplementary Table S1 provides more insight into the possible returns on particular days and the details of both anomalies. It describes the division of the sample for anomalies and speculative strategies: it indicates the division of each anomaly into a long leg and a short leg, identifies the expected speculative leg for each anomaly with a brief explanation of the speculative rationale, and reports the expected returns for the speculative leg on particular days.
Data
The data set used for our analysis was taken from Wind Information Incorporation. The analysis period was January 1996 to December 2018 for Amihud's illiquidity measure and from 2005 for the Bid-Ask spread measure, and the target of our analysis is the Chinese A-shares market on both the Shanghai and Shenzhen stock exchanges. The Chinese A-shares market started domestic trading in 1990 with the establishment of both stock exchanges. We focused on post-1996 data for two reasons. The first was to ensure uniformity in the data. Though principles of fair trade and reporting were introduced in 1993, companies had little practical guidance on how to implement them. The implementation of rules and regulations took time and produced many discrepancies in the early years. Many firms set their own standards for financial reporting, which created comparability issues (Ghulam et al., 2019). The second reason relates to the minimum number of observations required to create portfolios. To attain reasonable power and precision, the portfolio construction should be based on 10 equal deciles, and each decile should have a minimum of 50 values after applying all filters.
Illiquidity: Friday Long Minus Short
Focusing on the long minus short returns on Friday for Amihud's illiquidity measure, panel A of Table 1 shows that Friday accounts for more than 113 basis points per month in excess returns according to the CAPM, while the Fama and French three-factor alpha, the Carhart four-factor alpha, and the Fama and French five-factor alpha account for 95, 79, and 75 basis points of monthly excess returns, respectively. Panel B of Table 1, however, examines the long minus short returns on Monday for portfolios constructed with Amihud's illiquidity measure. According to the results of Table 2, Monday produces negative alpha values for all measurement models, consistent with the mood theory that Friday delivers higher long minus short strategy returns due to a higher mood than the lower mood on Monday. Panel A of Table 2 focuses on the long minus short strategy returns of the Bid-Ask spread anomaly on Friday and presents a similar prediction: Friday alone provides 46, 49, 48, and 44 basis points of monthly excess returns against the CAPM, the Fama and French three-factor model, the Carhart four-factor model, and the Fama and French five-factor model, respectively. Table 3 is also consistent with the mood prediction and shows comparatively lower long minus short strategy returns on Monday than Friday. The magnitude difference between Friday and Monday portfolio strategy returns is much higher for Amihud's illiquidity measure, whereas the Bid-Ask spread anomaly provides almost double the long minus short strategy returns on Friday relative to Monday.
Asymmetry in Long Leg
Panels A and B compare the Friday long leg with the Monday long leg. A sentiment-based explanation implies that the displayed return trend should be attributable to the speculative leg. Therefore, panels A and B show only the long leg for both anomalies, since the speculative leg is the long leg for both measures.
The return difference in the long-leg portfolios across the two days is even larger than the return of the long minus short portfolio. For example, focusing on the CAPM alpha of the Friday long leg minus the Monday long leg for the portfolio based on Amihud's illiquidity measure yields 118 basis points of excess strategy returns on Friday over Monday. Put more simply, investors can earn 118 basis points of excess returns by merely investing in the long leg of the portfolio based on Amihud's illiquidity measure for only two days (take the long position on Friday, short it on Monday, and invest in the risk-free asset on the remaining days).
Robustness Test
Macroeconomic news effect
It is unlikely that good or bad news has a systematic pattern of being announced on a particular day of the week, but it is possible that a cross-sectional effect is generated by macroeconomic news announcement effects; for instance, illiquid stocks are sometimes more sensitive to these announcements than others. Therefore, we gathered data on monthly macroeconomic announcement dates following Savor and Wilson (2013) and took the announcement dates of the CPI (Consumer Price Index) and PPI (Producer Price Index), focusing on the days when these figures are released. Panels A and B of Supplementary Table S2 provide the strategy returns for both anomalies when the returns on these dates are excluded from the sample. The results indicate that the earlier patterns of cross-sectional returns for Friday and Monday are robust to the exclusion of macroeconomic announcement dates. Thus, the results are not consistent with the explanation that the observed cross-sectional pattern is due to the impact of macroeconomic news announcements.
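The alpha figures quoted above come from factor regressions. As a minimal single-factor (CAPM) sketch, the intercept of an OLS regression of strategy excess returns on market excess returns gives the alpha; the function and the sample figures below are illustrative, not the paper's estimates:

```python
# Hedged sketch: estimating a CAPM alpha for a long-minus-short return series by
# simple OLS. The input numbers in any usage are made up, not the study's data.

def capm_alpha(strategy_excess, market_excess):
    """OLS of strategy excess returns on market excess returns -> (alpha, beta)."""
    n = len(strategy_excess)
    mr = sum(market_excess) / n
    sr = sum(strategy_excess) / n
    cov = sum((m - mr) * (s - sr)
              for m, s in zip(market_excess, strategy_excess)) / n
    var = sum((m - mr) ** 2 for m in market_excess) / n
    beta = cov / var
    alpha = sr - beta * mr     # intercept: return not explained by the market
    return alpha, beta
```

Multi-factor alphas (three-, four-, and five-factor) follow the same idea with additional regressors.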
Firm-specific news impact
A possible explanation for this cross-sectional effect on a particular day could be the non-random timing of firm-specific news announcements (Guo and Huang, 2019). For this argument to hold, speculative and non-speculative firms should differ systematically in the timing of good- and bad-news announcements. To verify this argument, we took firm-level data on the announcement dates of earnings and dividend declarations. The literature suggests that recorded earnings announcement dates can be off by a few days (Dellavigna and Pollet, 2009). Taking a conservative approach, we excluded the two days before and the two days after each announcement. This approach is advantageous because a week has five working days, so the exclusion of days t − 2 to t + 2 ensures equal elimination across each day of the week. Panels A and B of Supplementary Table S3 present the results with the t − 2 to t + 2 dates of earnings and dividend announcements excluded. The results indicate no significant change in the magnitude difference of strategy returns, and the findings for both anomalies are not consistent with the explanation that the cross-sectional variation of returns on Friday and Monday is driven by firm-specific news.
Impact of institutional ownership
Firms with high institutional ownership are expected to be less affected by sentiment, while firms with low institutional ownership and high individual ownership are more sensitive to it. Therefore, we further divided the existing portfolios by institutional ownership and examined firms that fall below the median, expecting the cross-sectional variation in stock returns to be stronger for stocks with less institutional ownership, because such stocks are more prone to investor sentiment.
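The t − 2 to t + 2 exclusion described above amounts to dropping a five-trading-day window around each announcement date. A minimal sketch, assuming a sorted list of trading days (the function name and dates are hypothetical):

```python
# Sketch of the conservative t-2 .. t+2 trading-day exclusion window around an
# announcement date, as described in the text. Dates below are hypothetical.
import datetime as dt

def exclusion_window(announcement, trading_days, k=2):
    """Return the set of trading days from t-k to t+k around an announcement."""
    days = sorted(trading_days)
    if announcement not in days:
        return set()          # announcement fell on a non-trading day
    i = days.index(announcement)
    return set(days[max(0, i - k): i + k + 1])
```

Return observations falling in the union of these windows are then dropped before re-estimating the strategy alphas.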
Supplementary Table S4 provides the results for both anomalies for firms that fall below the median value of institutional ownership. The results indicate that firms with low institutional investment and higher illiquidity are more sensitive to sentiment and provide higher returns on Friday than Monday.
Discussion
The psychology literature robustly documents mood elevation on Friday relative to Monday, which predicts higher returns for speculative stocks on Fridays. It likewise predicts that returns will be relatively low on Monday, in parallel with the declining mood on Monday. Hence, a strategy emerges from the prediction that anomaly returns will be higher on Fridays for anomalies whose speculative leg is the long leg. The results confirmed this prediction in the data: we examined the long minus short returns for Amihud's illiquidity measure and the Bid-Ask spread anomaly on Friday and the long minus short returns of the illiquidity anomaly on Monday. Striking results emerged when the Friday minus Monday estimation provided higher alpha values for the portfolio constructed on the basis of Amihud's illiquidity measure across all models. The results were striking not only in magnitude; they were also consistent with mood theory in that Friday sustained higher returns than Monday, and these findings held for both anomalies. The difference in the long leg for both anomalies was also consistent with sentiment based on investor mood, in that the day-of-the-week effect prevails in cross-sectional returns for the speculative leg of the portfolio investment. Therefore, it was again confirmed that the speculative leg earns higher returns on Friday than on Monday.
Our findings also accord with the robustness tests: the observed cross-sectional pattern on particular days is not explained by the non-random timing of firm-specific news announcements, nor is it generated by macroeconomic news announcement effects, as it is unlikely that good or bad news follows a systematic pattern of being announced on a particular day of the week. We also performed an additional robustness test by dividing the firms into two groups on the basis of institutional ownership, and our results were again consistent with the portfolio returns of speculative stocks. The findings indicate that Friday earns higher stock returns for the portfolio of firms below the median because firms with less institutional investment (and thus more individual ownership) are more prone to sentiment.
CONCLUSION
The study has found strong, predictable cross-sectional variation in illiquid stocks across the days of the week. Although the Chinese market has a different investment culture and political environment, our results are consistent with the findings of Birru (2018). The study found that the speculative leg of illiquid stocks earned higher returns on Friday than Monday in comparison to non-speculative stocks. Our results are also robust to the exclusion of firm-specific and macroeconomic news announcement dates and hold for monthly portfolio returns. The psychology literature has found a consistent variation in mood across the days of the week, with mood increasing on Friday and decreasing on Monday. Our results on the cross-sectional pattern in illiquid stocks were consistent with these psychology findings: returns are relatively higher on Friday, when mood is higher, and lower on Monday, when mood is lower. Moreover, the study provides different strategies for earning excess returns across the days of the week by investing in illiquid stocks.
The findings of this paper extend the evidence of Baker and Wurgler (2012) that during high-sentiment periods investors tend to have low demand for safe investments, whereas during low-sentiment periods investors tend to take flight toward quality. Our study gives a more specific explanation of which stocks are sensitive to sentiment, and we provide evidence of cross-sectional variation of returns on particular days by linking the speculative leg of illiquid stocks with mood theory from the psychology literature. Additionally, our research will help academics and practitioners in designing investment strategies and in future research. There are several limitations to the study. It is based on a Chinese dataset, and the structure of the Chinese stock market is quite different from that of the rest of the world; therefore, the results may not generalize to other markets. The Chinese market has strong government influence, and the study could be extended by segregating state-owned and non-state-owned enterprises. As a future direction, this research can also be expanded by incorporating the different dimensions of emotion that affect an individual's mood, e.g., valence, arousal, and state- and trait-related emotions.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
AUTHOR CONTRIBUTIONS
QY supervised this study, was involved with the methodology section and data collection, and made revisions to the manuscript. TY performed the formal analysis and methodology of the manuscript and applied techniques through software. QA wrote, reviewed, and edited the draft and helped with data collection. YA proofread the manuscript.
Photonic crystals for matter waves: Bose-Einstein condensates in optical lattices
We overview our recent theoretical studies on nonlinear atom optics of Bose-Einstein condensates (BECs) loaded into optical lattices. In particular, we describe the band-gap spectrum and nonlinear localization of BECs in one- and two-dimensional optical lattices. We discuss the structure and stability properties of spatially localized states (matter-wave solitons) in 1D lattices, as well as trivial and vortex-like bound states of 2D gap solitons. To highlight similarities between the behavior of coherent light and matter waves in periodic potentials, we draw useful parallels with the physics of coherent light waves in nonlinear photonic crystals and optically-induced photonic lattices. © 2004 Optical Society of America
OCIS codes: (020.0020) Atomic and molecular physics; (190.0190) Nonlinear optics
References and links
1. J.D. Joannopoulos, R.D. Meade, and J.N. Winn, Photonic Crystals: Molding the Flow of Light (Princeton University Press, Princeton, 1995).
2. S.F. Mingaleev and Yu.S. Kivshar, "Self-trapping and stable localized modes in nonlinear photonic crystals," Phys. Rev. Lett. 86, 5474 (2001).
3. R. Slusher and B. Eggleton, eds., Nonlinear Photonic Crystals (Springer-Verlag, Berlin, 2003).
4. Yu.S. Kivshar and G.P. Agrawal, Optical Solitons: From Fibers to Photonic Crystals (Academic Press, San Diego, 2003).
5. J.W. Fleischer, T. Carmon, and M. Segev, "Observation of discrete solitons in optically induced real time waveguide arrays," Phys. Rev. Lett. 90, 023902 (2003).
6. D. Neshev, E.A. Ostrovskaya, Yu.S. Kivshar, and W. Krolikowski, "Spatial solitons in optically induced gratings," Opt. Lett. 28, 710 (2003).
7. J.W. Fleischer, M. Segev, N.K. Efremidis, and D.N. Christodoulides, "Observation of two-dimensional discrete solitons in optically induced nonlinear photonic lattices," Nature 422, 147 (2003).
8. J.H. Denschlag, J.E. Simsarian, H. Haffner, C. McKenzie, A.
Browaeys, D. Cho, K. Helmerson, S.L. Rolston, and W.D. Phillips, "A Bose-Einstein condensate in an optical lattice," J. Phys. B 35, 3095 (2002).
9. S. Peil, J.V. Porto, B. Laburthe Tolra, J.M. Obrecht, B.E. King, M. Subbotin, S.L. Rolston, and W.D. Phillips, "Patterned loading of a Bose-Einstein condensate into an optical lattice," Phys. Rev. A 67, 051603 (2003).
10. M. Jona-Lasinio, O. Morsch, M. Cristiani, N. Malossi, J.H. Müller, E. Courtade, M. Anderlini, and E. Arimondo, "Asymmetric Landau-Zener tunneling in a periodic potential," Phys. Rev. Lett. 91, 230406 (2003).
11. E.A. Ostrovskaya and Yu.S. Kivshar, "Matter-wave gap solitons in atomic band-gap structures," Phys. Rev. Lett. 90, 160407 (2003).
12. O. Zobay, S. Pötting, P. Meystre, and E.M. Wright, "Creation of gap solitons in Bose-Einstein condensates," Phys. Rev. A 59, 643 (1999).
13. V.V. Konotop and M. Salerno, "Modulational instability in Bose-Einstein condensates in optical lattices," Phys. Rev. A 65, 021602 (2002).
14. P.J. Louis, E.A. Ostrovskaya, C.M. Savage, and Yu.S. Kivshar, "Bose-Einstein condensates in optical lattices: band-gap structure and solitons," Phys. Rev. A 67, 013602 (2003).
(C) 2004 OSA, 12 January 2004 / Vol. 12, No. 1 / Optics Express 19, #3388; received 19 November 2003; revised 31 December 2003; accepted 1 January 2004.
15. N.K. Efremidis and D.N. Christodoulides, "Lattice solitons in Bose-Einstein condensates," Phys. Rev. A 67, 063608 (2003).
16. E.A. Ostrovskaya and Yu.S. Kivshar, "Localization of two-component Bose-Einstein condensates in optical lattices," arXiv:http://xxx.arxiv.org/abs/cond-mat/0309127
17. B. Eiermann, P. Treutlein, Th. Anker, M. Albiez, M. Taglieber, K.-P. Marzlin, and M.K. Oberthaler, "Dispersion management for atomic matter waves," Phys. Rev. Lett. 91, 060402 (2003).
18. L. Fallani, F.S. Cataliotti, J. Catani, C. Fort, M. Modugno, M. Zawada, and M.
Inguscio, "Optically induced lensing effect on a Bose-Einstein condensate expanding in a moving lattice," Phys. Rev. Lett. 91, 240405 (2003).
19. B. Eiermann, Th. Anker, M. Albiez, M. Taglieber, and M.K. Oberthaler, "Bright atomic solitons for repulsive interaction," in Proceedings of the 16th International Conference on Laser Spectroscopy (ICOLS'03) (13-18 July 2003, Palm Cove, Australia).
20. C.M. de Sterke and J.E. Sipe, "Envelope-function approach for the electrodynamics of nonlinear periodic structures," Phys. Rev. A 38, 5149 (1988).
21. H. Pu, L.O. Baksmaty, W. Zhang, N.P. Bigelow, and P. Meystre, "Effective-mass analysis of Bose-Einstein condensates in optical lattices: Stabilization and levitation," Phys. Rev. A 67, 043605 (2003).
22. D.E. Pelinovsky, A.A. Sukhorukov, and Yu.S. Kivshar, "Bifurcations of gap solitons in periodic potentials," in preparation.
23. A.A. Sukhorukov, Yu.S. Kivshar, H.S. Eisenberg, and Y. Silberberg, "Spatial optical solitons in waveguide arrays," IEEE J. Quantum Electron. 39, 31 (2003).
24. A.A. Sukhorukov and Yu.S. Kivshar, "Spatial optical solitons in nonlinear photonic crystals," Phys. Rev. E 65, 036609 (2002).
25. N. Aközbek and S. John, "Optical solitary waves in two- and three-dimensional nonlinear photonic band-gap structures," Phys. Rev. E 57, 2287 (1998).
26. J.J. García-Ripoll and V.M. Pérez-García, "Optimizing Schrödinger functionals using Sobolev gradients: Applications to quantum mechanics and nonlinear optics," SIAM J. Sci. Comput. 23, 1316 (2001).
27. B.B. Baizakov, V.V. Konotop, and M. Salerno, "Regular spatial structures in arrays of Bose-Einstein condensates induced by modulational instability," J. Phys. B 35, 5105 (2002).
28. N.K. Efremidis, S. Sears, D.N. Christodoulides, J.W. Fleischer, and M. Segev, "Discrete solitons in photorefractive optically induced photonic lattices," Phys. Rev. E 66, 046602 (2002).
29. E.A. Ostrovskaya, T.J. Alexander, and Yu.S.
Kivshar, "Matter-wave gap vortices in two-dimensional optical lattices," in preparation.
Introduction
Photonic band-gap materials [1] - artificial periodic structures fabricated in a dielectric medium with a high refractive-index contrast - offer new possibilities for the control and manipulation of coherent light waves. Diffraction management, localization, and controlled steering of light in periodic band-gap structures, such as fiber Bragg gratings and photonic crystals, have revolutionized modern photonics and laid the foundation for the development of novel types of integrated photonic devices. The study of nonlinear photonic crystals made of a Kerr nonlinear material [2,3] has revealed that such structures can support self-trapped localized modes of the electromagnetic field in the form of optical gap solitons [4] with energies inside the photonic gaps of the periodic structure. The recent demonstration of light scattering in dynamically reconfigurable photonic structures - optically-induced refractive-index gratings in nonlinear materials - has opened up novel ways to control light propagation and localization [5,6,7]. On the other hand, many recent experimental studies of Bose-Einstein condensates (BECs) in periodic potentials of optical lattices [8,9,10] demonstrate an unprecedented level of control and manipulation of coherent matter waves in the reconfigurable crystal-like structures created by light. Due to the inherent nonlinearity of coherent matter waves, which is introduced by the interactions between atoms, a BEC in a lattice potential forms a periodic nonlinear system that is expected to display rich and complex dynamics.
The modelling of both nonlinear optical and nonlinear matter-wave dynamics is often based on the nonlinear Schrödinger equation, which is used to describe both the electromagnetic field envelope and the BEC macroscopic wavefunction (mean field). This model reflects similarities between the physics of coherent light and matter waves, which aid in understanding and predicting the nonlinear behavior of a BEC in an optical lattice. Similar to periodic photonic structures for coherent light waves, optical lattices form band-gap structures for coherent matter waves [11], which modify the diffraction properties of atomic wavepackets. Different diffraction regimes were predicted to lead to nonlinear localization of condensates with both attractive and repulsive interactions in the spectral gaps of the atomic band-gap structure [12,13,14,15,16]. Recent experiments on the nonlinear dynamics of a BEC in a lattice have demonstrated both diffraction management [17,18] and nonlinear localization of atomic wavepackets in the form of matter-wave gap solitons [19]. In this paper we describe the structure and stability properties of nonlinear localized states of a BEC in one- and two-dimensional optical lattices, and make links to parallel studies in the nonlinear optics of periodic photonic structures such as nonlinear photonic crystals and optically-induced photonic lattices.
Model
The dynamics of a Bose-Einstein condensate loaded into an optical lattice can be described, in the mean-field approximation, by the nonlinear Gross-Pitaevskii (GP), or nonlinear Schrödinger, equation for the macroscopic condensate wavefunction Ψ(r,t), in which V(r) is the time-independent trapping potential and g_3D = 4πħ²a_s/m characterizes the two-body interactions for a condensate with atoms of mass m and s-wave scattering length a_s. The scattering length a_s is positive for repulsive interactions and negative for attractive interactions.
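The displayed form of the GP equation appears to have been lost in extraction; its standard mean-field form, consistent with the symbols defined in the text, is:

```latex
i\hbar\,\frac{\partial \Psi(\mathbf{r},t)}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\,\nabla^{2} + V(\mathbf{r})
          + g_{3D}\,|\Psi(\mathbf{r},t)|^{2}\right]\Psi(\mathbf{r},t),
\qquad
g_{3D} = \frac{4\pi\hbar^{2} a_{s}}{m}.
```

In the text this is Eq. (1); the quasi-1D reduction referred to as Eq. (5) follows from it by the separation of variables described below.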
We consider a trapping potential V(r) of the form in which the first term describes an anisotropic parabolic potential due to a magnetic trap. The effective periodic potential, V_L, is formed by an optical lattice, which we consider to be either one-dimensional (1D) or two-dimensional (2D). The 1D (2D) lattice is created by a pair (two pairs) of counter-propagating laser beams with the wavelength λ = 2π/K, and the lattice depth, V_0, is proportional to the intensity of the standing light wave. Equation (1) can be made dimensionless using the characteristic length a_L = 1/K, energy E_rec = ħ²/(m a_L²), and time ω_L⁻¹ = ħ/E_rec scales of the lattice. In dimensionless units, the two-body interaction coefficient is given by g_3D = 4π(a_s/a_L), and the lattice depth is measured in units of the lattice recoil energy, E_rec.
One-dimensional atomic band-gap structures
We now assume a cigar-shaped condensate weakly trapped in the x-direction (ω_x ≪ ω_y,z ≡ ω_⊥). In the directions of tight confinement, the condensate wavefunction can be described by the ground state of a two-dimensional, radially symmetric quantum harmonic oscillator potential, with the normalization ∫ |Φ|² dr = 1. The 3D wavefunction then separates as Ψ(r,t) = Φ(y,z)ψ(x,t), and the transverse dimensions can be integrated out of the dimensionless Eq. (1), yielding the 1D GP equation, in which the wavefunction is rescaled accordingly and the external potential is approximated by the quasi-1D periodic potential of the optical lattice, V_L(x) = sin²(x), neglecting the contribution of the weak magnetic confinement. The stationary states of a condensate in a quasi-1D infinite periodic potential are described by solutions of Eq.
(5) of the form ψ(x,t) = φ(x) exp(−iµt), where µ is the corresponding chemical potential. The case of a noninteracting condensate formally corresponds to g_1D = 0, in which case the condensate wavefunction can be represented as a superposition of Bloch waves, ψ(x) = b_1 φ_1(x) e^{ikx} + b_2 φ_2(x) e^{−ikx}, where φ_1,2(x) have the periodicity of the lattice potential, b_1,2 are constants, and k is the Floquet exponent. The linear matter-wave spectrum consists of bands of eigenvalues µ_{n,k} in which k(µ) is a (real) wavenumber of amplitude-bounded Bloch waves [14]. The bands are separated by spectral gaps in which Im(k) ≠ 0. The solutions at the band edges are stationary Bloch states corresponding to a condensate density that is strongly and periodically modulated by the lattice. For an interacting condensate (g_1D ≠ 0), bright solitons can exist for chemical potentials corresponding to the gaps of the linear matter-wave spectrum. The mechanism for nonlinear localization in the gaps was first described for light waves in nonlinear periodic photonic structures by using the Bloch-wave envelope approximation near the band edge [20]. It has been pointed out that, near the band edges, the characteristic diffraction coefficient for a coherent wave is proportional to the band curvature at that edge, which can be either negative or positive. For a matter wave in an optical lattice, the group velocity, v_g = ∂µ/∂k, and the effective diffraction coefficient, D = ∂²µ/∂k² (analogous to the inverse effective mass of an electron in crystalline solids [21]), are shown in Fig. 1 for the Bloch waves in the lowest-order bands of a relatively shallow lattice. The balance of the repulsive (attractive) interactions and anomalous, D < 0 (normal, D > 0), diffraction near the top (bottom) of the bands (see Fig.
1), can lead to the formation of localized wavepackets - matter-wave solitons - with zero group velocity. The chemical potentials corresponding to such localized waves lie in the gaps of the Bloch-wave spectrum for the matter waves of the noninteracting condensate (open areas in Fig. 1). Employing a multi-scale perturbation series expansion for the chemical potential and the gap-soliton envelope, it is possible to show that, in general, two types of gap solitons bifurcate from each band edge [22]. These are bright solitons centered on a maximum (off-site) and a minimum (on-site) of the lattice potential, respectively. Several families of the lowest-order gap modes are presented in Fig. 2, for the cases of both repulsive (left) and attractive (right) condensates. The families are characterized by the norm of the condensate wavefunction, P = ∫ φ²(x) dx, which is proportional to the number of atoms in a localized state. Such localized states exist in all band gaps, including the semi-infinite gap of the spectrum below the first band [14], which is analogous to the total-internal-reflection gap of photonic structures, and where the existence of localized states is due to conventional self-focusing. Near the bottom (top) gap edge, gap solitons of the repulsive (attractive) BEC are well approximated by a sech-like envelope of the corresponding Bloch wave, centered either on- or off-site. Near the opposite edge of the gap, i.e.
approaching the spectral band, the solitons develop extended "tails" with the spatial structure of the Bloch wave at the corresponding band edge [14]. For a sufficiently wide gap (i.e., in the relatively tight-binding regime), a soliton deep inside the gap may be strongly localized around a single well of the lattice potential. The bifurcation of the two types of gap solitons from the band edges of opposite "polarity" (or sign of the effective diffraction) is a general feature of coherent waves in periodic systems, such as spatial optical solitons in discrete waveguide arrays [23].

Stability of extended and localized modes in nonlinear systems is an important issue, since only dynamically stable modes are likely to be generated and observed in experiments. To determine the stability properties of the localized modes, we consider small perturbations to a stationary gap soliton of the GP equation, φ(x), in the form ψ(x,t) = exp(−iµt){φ(x) + ε[u(x)exp(λt) + w*(x)exp(λ*t)]}, where ε ≪ 1 and u(x), w(x) are spatially dependent perturbation modes. We linearize the GP equation (5) around the localized solution and obtain, to first order in ε, a linear eigenvalue problem for the perturbation modes. The modes describing the development of instability have either purely real or complex eigenvalues λ; in the latter case, the instability is called an oscillatory instability.
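The linearization step can be assembled directly on a grid. Below is a minimal numerical sketch (not the paper's Fourier-interpolant code; grid sizes, function names and the test state are illustrative choices) using second-order finite differences, tested on the free-space bright soliton φ = sech(x) of the attractive condensate, whose linear stability is well known:

```python
import numpy as np

def stability_rates(phi, mu, g, V, dx):
    """Growth rates lambda of small perturbations a + i*b around a real
    stationary state phi of i psi_t = -psi_xx + V psi + g |psi|^2 psi.
    Linearization gives a_t = Lm b, b_t = -Lp a, hence
    lambda^2 a = -(Lm Lp) a."""
    n = phi.size
    # second-derivative matrix, Dirichlet boundaries (soliton decays fast)
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / dx**2
    Lp = -D2 + np.diag(V + 3.0 * g * phi**2 - mu)   # acts on the real part
    Lm = -D2 + np.diag(V + g * phi**2 - mu)         # acts on the imaginary part
    lam2 = np.linalg.eigvals(-Lm @ Lp)              # lambda^2 spectrum
    return np.sqrt(lam2.astype(complex))            # principal-branch lambda

# Test state: phi = sech(x) solves -phi'' - 2 phi^3 = -phi (g = -2, mu = -1, V = 0).
x = np.linspace(-12, 12, 481)
dx = x[1] - x[0]
phi = 1.0 / np.cosh(x)
rates = stability_rates(phi, mu=-1.0, g=-2.0, V=np.zeros_like(x), dx=dx)
max_growth = rates.real.max()  # near zero for a stable state (up to discretization)
```

For a linearly stable state the spectrum is purely imaginary (up to discretization error around the zero modes), while a real or complex λ with nonzero real part signals exponential or oscillatory instability, respectively.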
We solve the eigenvalue problem for the perturbation modes numerically, as a matrix eigenvalue problem with Fourier interpolants for the differential operators. The results reveal that, in agreement with the generalized Vakhitov-Kolokolov criterion (see, e.g., [4]), nodeless localized states of the attractive BEC (such as the fundamental solitons in the semi-infinite gap [Fig. 2(a), right]) are linearly stable as long as −∂P/∂µ > 0. Note that the criterion is "inverted" for our choice of µ. The fundamental gap solitons of the repulsive BEC [Fig. 2(a), left] are also linearly stable.

The linear stability analysis shows that gap solitons of both attractive and repulsive BEC are associated with a number of internal modes (Im λ ≠ 0 and Re λ = 0). The excitation of these modes usually leads to a persisting dynamics (e.g., amplitude oscillations) of the localized state that does not lead to its decay. However, higher-order gap solitons of both attractive and repulsive condensates can experience oscillatory instability (Im λ ≠ 0 and Re λ ≠ 0) initiated by resonances of internal modes with the bands of the inverted spectrum [22]. This mechanism for weak spectral instability was first identified in studies of spatial optical solitons in one-dimensional nonlinear photonic crystals [24]. In Fig. 3 we compare the temporal evolution of a weakly unstable gap soliton of the repulsive condensate in the first gap (a) with that of a linearly stable (attractive) BEC soliton in the semi-infinite spectral gap (b). The evolution of the soliton peak density is shown in Fig. 3 for the cases when the initial solitons are perturbed by 5% of the peak density. These results indicate that the oscillatory instability is weak and, on realistic experimental time scales, cannot provide a strong mechanism for the soliton decay into spatially extended lattice states (which would otherwise prevent observation of localized states in experiments).
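The linear band-gap spectrum underlying this discussion (and Figs. 1-3) can be reproduced with a standard plane-wave expansion. A minimal sketch for the dimensionless 1D lattice V(x) = V₀ sin²(x), with V₀ = 2 as in Fig. 1 (the function name and truncation parameters are illustrative, not from the paper):

```python
import numpy as np

def bloch_bands(V0=2.0, nk=201, nmax=12):
    """Matter-wave Bloch spectrum mu_n(k) of -phi'' + V0*sin^2(x)*phi = mu*phi.
    Plane-wave basis exp(i(k + 2n)x); since sin^2 x = 1/2 - cos(2x)/2,
    the potential couples neighboring plane waves with strength -V0/4."""
    ks = np.linspace(-1.0, 1.0, nk)            # first Brillouin zone (G = 2)
    ns = np.arange(-nmax, nmax + 1)
    bands = np.empty((nk, ns.size))
    for i, k in enumerate(ks):
        H = np.diag((k + 2.0 * ns) ** 2 + V0 / 2.0)
        H += np.diag(np.full(ns.size - 1, -V0 / 4.0), 1)
        H += np.diag(np.full(ns.size - 1, -V0 / 4.0), -1)
        bands[i] = np.linalg.eigvalsh(H)       # eigenvalues, ascending
    return ks, bands

ks, bands = bloch_bands(V0=2.0)
vg = np.gradient(bands[:, 0], ks)              # group velocity d mu / d k
D = np.gradient(vg, ks)                        # effective diffraction d^2 mu / d k^2
gap1 = bands[:, 1].min() - bands[:, 0].max()   # width of the first spectral gap
```

As the text describes, D changes sign across the first band: it is positive (normal) at the band bottom, k = 0, and negative (anomalous) at the band edge, consistent with Fig. 1.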
Two-dimensional band-gap structures

The theory of nonlinear localized matter waves in optical lattices can be extended to the case of two-dimensional lattices [11]. Nonlinear localization of a BEC in higher-dimensional lattices is qualitatively different because both the symmetry and the dimensionality of the lattice start to play an important role in the formation and properties of the band-gap structure and the corresponding nonlinear localized modes. In particular, the problem of the existence and stability of 2D matter-wave gap solitons of a BEC with repulsive interatomic interactions loaded into an optical lattice is analogous to the study of localized states of light waves in nonlinear photonic crystals [2].

To simplify our analysis of the model (1), we assume that the weak magnetic confinement characterized by the trap frequencies ω_{x,y} has little effect on the stationary states of the condensate in the 2D lattice formed in the (x, y) plane of the condensate cloud. Under this assumption, the trap component of the confining potential in the lattice plane can be neglected, and the model can be reduced to a two-dimensional GP equation by the dimensionality-reduction procedure described above for the 1D case. The condensate wavefunction in the 2D lattice potential is then described by a two-dimensional GP equation [Eq. (9)], in which V_L(x, y) is the periodic potential of the optical lattice and the wavefunction is rescaled accordingly. Equation (9) is made dimensionless by using the "natural" lattice units of energy, length, and frequency described above. In the simplest case of a square optical lattice, the potential can be written as V_L = V_0[sin²(x) + sin²(y)], where V_0 is the amplitude of the optical lattice.

Stationary (time-independent) states of the condensate in an infinite periodic potential of a 2D optical lattice are described by solutions of Eq.
(9) of the form ψ(r,t) = φ(r) exp(−iµt), where µ is the chemical potential. The case of a noninteracting condensate formally corresponds to Eq. (9) being linear in ψ. According to the Bloch theorem, the stationary wavefunction can then be sought in the form φ(r) = u_k(r) exp(ikr), where the wave vector k belongs to a Brillouin zone of the square optical lattice, and u_k(r) = u_k(r + d) is a periodic (Bloch) function with the periodicity of the lattice. For values of k within the n-th Brillouin zone, the dispersion relation for the 2D Bloch waves, µ_{n,k}, is found by solving a linear eigenvalue problem. The band-gap structure of the spectrum µ(k) of the atomic Bloch waves in the 2D optical lattice is shown in Fig. 4, in the reduced-zone representation usually assumed in the theory of crystalline solids and photonic crystals. Due to the separability of the lattice potential, the Bloch waves u(x, y) in the first band, at the high-symmetry points (Γ → X → M) of the first irreducible Brillouin zone (see Fig. 4), can be found as u(x, y; k) = u^(1d)(x; k_x) u^(1d)(y; k_y), where u^(1d) are the corresponding 1D Bloch states. The different structures of the Bloch states in the first band, at the values of k corresponding to the three symmetry points, are shown in Fig. 4 (right column). By applying an envelope theory near the band edges, it can be deduced that each of the Bloch states is associated with a (partially) localized state in the Γ, X, or M gap. Similarly to optical gap solitons in indirect gaps of higher-dimensional photonic crystals [25], matter-wave gap solitons can be spatially localized in all directions only in the complete gap located between the M and X edges of the first and second bands, respectively (see Fig. 4). Localization near the lower M edge is possible due to the negative components of the effective diffraction tensor. We find spatially localized stationary solutions of Eq.
(9) numerically. Our numerical procedure involves minimization of the norm N(φ) = ∫ f†f dr of the residual f of the stationary equation, following a descent technique with Sobolev preconditioning [26]. The minimization procedure yields a stationary state when N(φ) → 0. The family of fundamental bright matter-wave solitons in the first complete gap is presented in Fig. 5. For the repulsive condensate, such gap solitons exist in all band gaps excluding the semi-infinite gap below the first band. The spatial structure of the lowest-order gap solitons near the lower gap edge [Fig. 5(a)] has the characteristic form of a symmetric envelope superimposed onto the corresponding Bloch state (see Fig. 4). The threshold value of the number of atoms (and therefore P_c) needed for soliton localization is determined by the theory of lattice-free self-focusing in the 2D geometry. These weakly localized, low-density modes are similar to the near-band-edge modes described above for the case of one-dimensional gap solitons. Deeper inside the gap, the matter-wave soliton becomes strongly localized [Fig. 5(b)]. The possibility of the formation of periodic trains of such 2D localized structures, triggered by the modulational instability of the nonlinear Bloch states, has been suggested in Ref. [27].

We have also found different families of higher-order localized gap modes that can be identified as in-phase and out-of-phase bound states of the fundamental gap solitons. These states can be centered at the lattice potential minima or maxima, similarly to higher-order states in one-dimensional lattices [14], and they can exhibit symmetry-breaking instabilities. In particular, Figs. 6(a-c) show three different examples of higher-order gap solitons. The simplest structure of this kind is a pair of two fundamental gap solitons which are out of phase and form a dipole [see Fig.
6(a)]. A similar structure of four solitons can exist as a quadrupole state in which the neighboring solitons are π out of phase [see Fig. 6(b)]. However, the most interesting structure, shown in Fig. 6(c), possesses a vortex-like phase dislocation, with the phase winding by 2π around the low-density centre [as seen in Fig. 6(d)]. Such a bound state can be identified as a gap vortex [29].

Optically-induced photonic lattices

Although many similarities (including the structure of localized modes and their stability properties) exist between nonlinear atom optics in optical lattices and nonlinear optics in fabricated photonic structures, the closest analog of the matter-wave band-gap structures can be found in the new and rapidly developing field of optically-induced photonic lattices [5,6,7]. In the existing experimental realizations of optically-induced photonic lattices, a periodic modulation of the refractive index is induced by a plane-wave interference pattern illuminating a photorefractive crystal with a strong electro-optic anisotropy [28]. The spatially periodic interference pattern modulates the space-charge field in the crystal, which relates to the refractive index via the electro-optic coefficients. The latter are substantially different for different polarizations. As a result, the material nonlinearity experienced by waves polarized along the main direction of the crystal is up to two orders of magnitude larger than that experienced by the orthogonally polarized ones. When the lattice-forming waves are polarized orthogonally to the main axis, their nonlinear self-action, as well as any cross-action from the co-propagating probe beam (polarized along the main axis), can be neglected. Thus, the periodic interference pattern propagates in the diffraction-free linear regime and creates a stationary refractive-index grating [5,6]. The effective induced grating exhibited by the probe light is almost identical to the effective periodic potential experienced by matter
waves in an optical lattice. Optically induced lattices open up an exciting possibility for creating dynamically reconfigurable photonic structures in bulk nonlinear media, with a degree of control over the parameters of the periodic structure approaching that of BECs in optical lattices. Nonlinear localization of coherent light waves has been observed in both 1D and 2D optically-induced photonic lattices [5,6,7], in the self-focusing regime (which is analogous to the attractive matter-wave localization regime).

Conclusions

We have studied the spectrum of matter waves and the localization of Bose-Einstein condensates in one- and two-dimensional optical lattices and demonstrated that the interaction of the condensate with a periodic lattice potential can be compared to the propagation of coherent light waves in a nonlinear periodic photonic structure. Using the analogy with the physics of photonic band-gap structures, we have described some basic properties of matter-wave band-gap structures and demonstrated numerically the existence of one- and two-dimensional matter-wave gap solitons, spatially localized states of the condensate existing in the gaps of the matter-wave linear spectrum. We believe that the analogy between the physics of BECs in optical lattices and light waves in nonlinear photonic crystals, as well as other types of periodic photonic structures, is useful to reveal many novel features of the matter-wave dynamics in reconfigurable optically-induced structures.

Fig. 1. Group velocity and effective diffraction coefficient for Bloch matter waves in an optical lattice, shown in the context of the band-gap spectrum with the Bloch bands (shaded) and gaps (open); V_0 = 2.0.

Fig. 2. Band-gap spectrum of matter waves in an optical lattice shown as the Bloch bands (shaded) and gaps (open), combined with the families of bright gap solitons in (left) repulsive and (right) attractive condensates (V_0 = 5).

Fig. 3.
Examples of weakly unstable and stable soliton dynamics. Shown is the peak density (a) of the repulsive-BEC off-site soliton [shown in Fig. 2(b), left] in the first gap (µ = 3.7), and (b) of the attractive-BEC on-site soliton [shown in Fig. 2(a), right] in the semi-infinite gap (µ = 1.0). In (a) the initial state, given by the exact (numerical) stationary solution of Eq. (5), is perturbed by a symmetric excitation at 5% of the soliton peak density. In (b) the antisymmetric internal mode is excited by an initial perturbation at 5% of the initial soliton peak density.

Fig. 4. Left: dispersion diagram for a 2D square lattice (V_0 = 1.5); dotted: the line µ = V_0; shaded: spectral bands; open: the lowest (semi-infinite) and the first complete gaps. Below: lattice potential in the Cartesian and reciprocal spaces. Right: spatial structure of the 2D Bloch waves at the extreme high-symmetry points of the first irreducible Brillouin zone.

Fig. 5. Top: family of bright atomic gap solitons of a repulsive BEC in a 2D optical lattice (V_0 = 1.5). Bottom: spatial structure of the BEC wavefunctions at the marked points of the existence curve inside the gap.
Recent advances in the application of isoindigo derivatives in materials chemistry

In this review, the data on the application of isoindigo derivatives in the chemistry of functional materials are analyzed and summarized. These bisheterocycles can be used in the creation of organic solar cells, sensors, and lithium-ion batteries, as well as in OFET and OLED technologies. The potential of isoindigo-based polymer structures as a photoactive component in the photoelectrochemical reduction of water, as a matrix for MALDI spectrometry, and in photothermal cancer therapy is also shown. Data published over the past 5 years, including works published at the beginning of 2021, are given.

Introduction

Among the three isomeric bisoxindoles, isoindigo has recently attracted the greatest interest (Scheme 1). The first studies on this class of compounds were related to the field of medicinal chemistry, since a number of isoindigo derivatives were found to be highly active against leukemia [1][2][3]. However, to date, the volume of publications on the biological activity of isoindigo derivatives has been steadily decreasing. At the same time, the unique properties of the isoindigo structure (planarity, stability, a high degree of conjugation, and electron deficiency) have attracted increasing attention from many research groups. In addition, the ease of modification, both at the endocyclic nitrogen atom and at the aromatic fragment of isoindigo, makes it possible to fine-tune the electronic properties. These factors prompted many studies of isoindigo as a platform for the construction of polymeric materials for various purposes.

Review

Organic solar cells (OSCs) based on isoindigo derivatives

Since the pioneering works on the use of isoindigo derivatives in the design of OSCs [3][4][5], specialists in this field have made significant progress in tuning and improving their properties [6][7][8][9][10][11][12].
The main photophysical characteristics that determine the effectiveness of OSCs are the open-circuit voltage (V_OC), the short-circuit current (J_SC), and the fill factor (FF). In addition, the solubility of isoindigo derivatives in organic solvents is very important, since it affects the morphology of thin films of the photovoltaic cells. To date, the maximum efficiency of 12.05% has been shown by an OSC based on a composite of a donor polythiophene and an acceptor polymeric dicyanoindanone derivative [13]. Among the derivatives of isoindigo, the leading compounds are polymers 1-3, which were used in the design of OSCs as donor components of the active layer. Their power conversion efficiency (PCE) reached more than 8%. The development of OSCs based on low-molecular-weight derivatives of type 4, containing only one isoindigo fragment, also seems promising (Scheme 2).

Scheme 2: Isoindigo-based OSCs with the best efficiency.

One of the areas of research is the design of low-molecular-weight structures containing one or two isoindigo fragments in a unified conjugated electronic system. Currently, to improve the key characteristics of OSCs, some studies focus on the design of substituents both on the heterocyclic platform (in position 1 and in the aromatic ring) and in the side chain. In the overwhelming majority of works, the photophysical properties of isoindigo derivatives containing a thiophene fragment in position 6 are described. Thus, the authors of references [14,15] obtained a small number of simple representatives of symmetric dithiophene derivatives of isoindigo, 5a-c (Scheme 3).

Scheme 3: Monoisoindigos with preferred 6,6'-substitution.

The constructed solar cells with an active layer based on a mixture of compounds 5 (donor) and PC61BM (acceptor) in a 1:1 ratio showed a dependence of the efficiency on the structure and position of substituents in the aromatic fragment of isoindigo.
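These three quantities combine into the power conversion efficiency in the standard way, PCE = V_OC·J_SC·FF/P_in, with P_in = 100 mW/cm² under standard AM1.5G illumination. A quick sketch (the example numbers are illustrative, not taken from a specific cell in this review):

```python
def pce(voc_v, jsc_ma_cm2, ff, pin_mw_cm2=100.0):
    """Power conversion efficiency in %, from the open-circuit voltage (V),
    short-circuit current density (mA/cm^2), and fill factor (0..1),
    for an input power density in mW/cm^2 (100 = AM1.5G standard)."""
    return 100.0 * voc_v * jsc_ma_cm2 * ff / pin_mw_cm2

# e.g. a hypothetical cell with V_OC = 0.74 V, J_SC = 13.92 mA/cm^2, FF = 0.60
eta = pce(0.74, 13.92, 0.60)   # ~6.2 %
```

This also makes clear why the review tracks V_OC, J_SC, and FF separately: a gain in any one of them, at fixed values of the other two, translates linearly into PCE.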
In the presented series, the system based on compound 5b, with an efficiency of 1.25%, turned out to be the best. It was found [16] that 6,6'-substitution of the isoindigo core is preferable due to the possibility of the formation of a quinoid structure after irradiation with sunlight, which facilitates the transport of electrons through the system (Scheme 4). Certain nitrogen heterocycles can be inserted into the substituent chain as an acceptor structural unit (Scheme 5). For example, an OSC based on the symmetrically substituted isoindigo derivative 4, containing a diketopyrrolopyrrole fragment, in a mixture with PC71BM showed a record efficiency of 5.86% among isoindigo oligomers [17]. At the same time, similarly constructed (D-A-D-A) oligomers 6 in composition with PC71BM showed an efficiency of 1.3-1.4% [18]. Another way to design isoindigoid OSCs is the introduction of aromatic substituents of variable nature into the oligomer structure. Using pyrene derivatives 7 and 8 as an example, the dependence on the mode of attachment of the aromatic fragment to the isoindigo core was revealed. The synthetic procedure for the preparation of these monoisoindigoid derivatives is based on the Suzuki reaction. Pyrene-1-ylboronic acid and 6,6'-dibromoisoindigo were used to introduce a pyrene fragment directly into the isoindigo nucleus, while, to obtain the thiophene analogue, pyrenyl-substituted 2-bromothiophene and 6,6'-bis(4,4,5,5-tetramethyl-1,3,2-dioxaborolan-2-yl)isoindigo were used (Scheme 6). Thus, a photovoltaic cell based on the thiophene derivative 7 mixed with PC71BM showed an efficiency of 1.88% [19], but if the pyrenyl substituent is bonded directly to the isoindigo core, the efficiency of such an OSC turns out to be significantly lower (0.10%) [20].
One of the highest efficiencies (4.7%) among low-molecular-weight isoindigo derivatives was shown by a two-component OSC based on isoindigo 9a, containing an alkoxylated p-phenylene fragment, and PC71BM in a 1:0.7 (w/w) ratio, a thin film of which was obtained from chloroform with 0.5 vol % N-methylpyrrolidone (Scheme 7). As the authors believe [21], the addition of this viscous solvent provided a better surface morphology of the thin-film layer, since without it the efficiency was almost two times lower (2.8%). The key role of the structure of the terminal acceptor substituent was also revealed, since a similar OSC based on the rhodamine derivative 9b showed an efficiency of only 0.66% (Table 1). It is important to note that the replacement of the thienylphenylene spacer in structure 9a by the acceptor indan-3-dicyanoethylidene-1-one-2-ylidene fragment in compound 9c led to a decrease in the efficiency to 2.82% [22]. The perylene diimide-derived isoindigo 10 was used as an acceptor in a nonfullerene OSC with the thiophene polymer 11 as the donor component (Scheme 8). The PCE value of such a device turned out to be 2.6%. Cho et al. constructed a three-component cell in which the active layer consisted of the donor 11 and a polymeric acceptor based on perylene diimide 12 [23]. One of the simplest thiophene derivatives of isoindigo, 13, was used here only as an additive (10 wt %), leading to an increase in efficiency from 5.9% to 6.8% (Scheme 9). Using a variety of physical methods, it has been proven that the presence of isoindigo 13 in the three-component mixture provides tighter packing of the thin layer and larger crystalline domains. This, in turn, leads to an increase in the exciton decay time and, as a consequence, to a high J_SC value.
Thus, in the series of thiophene-centered bisisoindigos 14a-d, the best efficiency values were shown by OSCs containing an odd number of thiophene units (14a: 2.16%; 14c: 2.40%) [24]. A cell based on compound 15b showed a close efficiency value (2.25%) [25]. At the same time, incorporation of a tetrafluorophenylene fragment into the center of molecule 14a and the presence of a branched alkyl substituent at the nitrogen atom made it possible to improve the characteristics of the cell [26]. After annealing such an OSC at 80 °C, the efficiency increased from 3 to 3.18%. It should be noted that in all cases mentioned here, PC71BM was used as the acceptor component of the OSC active layer. The photovoltaic characteristics of the OSCs based on compounds of this type are summarized in Table 2. It has been shown that oligomeric isoindigos that do not contain a thiophene fragment can also be used as donor components of OSCs. The synthesis of such compounds is also based on the Suzuki coupling reaction between alkylated 6,6'-dibromoisoindigos and the corresponding arylboronic derivatives, leading to the formation of the target molecules in moderate yield (Scheme 11). In these compounds, the triarylamine substituent is linked either directly to the isoindigo core or via a vinylphenylene bridge (see compounds 17a,b) [27,28]. The presence of the latter makes the cell efficiency an order of magnitude higher (3.57%) than when using 16a and 16b. In the chemistry of isoindigo-based materials, the most popular and most studied direction in the creation of OSCs is the use of polymer structures containing an isoindigo fragment in a monomer unit associated with a different number of thiophene substituents (Scheme 12). Using one of the simplest representatives of this type of polymer as an example, a significant effect of the structure of the substituent at the heterocycle nitrogen atom on the OSC efficiency was also shown [29,30].
Thus, the fluorine-substituted polymers 18a,b in composition with PC61BM showed a PCE of only 0.9-1.4%, while an OSC based on the ethoxylated derivative 19a was characterized by a high short-circuit current (J_SC = 13.92 mA/cm²) and a 5-fold better PCE value (Table 3). In the course of further studies, the key influence of the nature of the substituents both at the nitrogen atom of isoindigo and at the bithiophene moiety on the OSC efficiency was confirmed. Thus, the introduction of fluorine atoms into the thiophene rings of compound 22b leads to an increase in efficiency from 4.58 to 6.21%. Based on quantum-chemical calculations, Park et al. [34] showed that in the fluorinated derivative the dihedral angle between the thiophene rings is 0.88° (for comparison, in compound 22a it is 17.55°), which provides better planarity of the polymer and, as a consequence, faster electron transport under irradiation. Within the framework of this direction, the importance of the method for preparing a thin film of the OSC active layer was also shown. In reference [35], OSCs based on a mixture of polymers 23a,b with PC71BM and an additive of diphenyl ether (3 vol %) were obtained by spin coating from an o-xylene solution. Here, too, the fluorinated analogue turned out to be better in terms of the final efficiency. Dithienosilole polymers 24 can also be included in this structure type. Although the authors of reference [36] do not give exact values of the molecular weight of the obtained polymers, they draw a conclusion about the influence of the molecular weight on the efficiency of the cells (Table 4). Thus, when using polymer 24b, obtained by the Suzuki reaction with bis(pinacolato)diboron, the OSC efficiency was only 0.99%, while the Stille method gave polymer 24a, characterized by an efficiency of 1.66% (Scheme 14).

Scheme 12: The simplest examples of polymers with a monothienylisoindigo monomeric unit.
Strengthening the donor effect of the monomer unit can be achieved by lengthening the thiophene chain up to three fragments [37][38][39][40][41]. In this case, additional possibilities arise for fine-tuning the properties of the polymers through the introduction of substituents of different structures on each of the thiophene rings. Using the example of OSCs consisting of a mixture of polymer 25 and PC71BM (1:1.5, w/w), the effect of the length of the alkyl radical on the efficiency of such cells was shown [37]. Thus, the hexyl and octyl derivatives 25a,b showed the best PCE values of 5.1 and 5.2%, respectively, which is higher than for analogues bearing a longer or branched hydrocarbon chain (Scheme 15 and Table 5). If the OSC active layer is prepared with the addition of diiodooctane, the PCE of the octyl analogue 25b increases to 6.4% [38]. A slight change in the substituent at the nitrogen atom and the introduction of an electron-donating methyl or electron-accepting cyano group into the thiophene fragment can lead to a sharp deterioration of all characteristics (structure 26: efficiency 1.94%) [39] or to their significant improvement (structure 27: efficiency 8.36%), respectively [40]. The OSC based on polymer 27 also showed one of the best V_OC values (1.06 V) among the described structures.

Scheme 13: Monothienylisoindigos bearing π-extended electron-donor backbones.

Other types of polymer structures with good potential in organic photovoltaics are derivatives 28 and 29, obtained by introducing fluorine atoms into the thiophene ring [43]. The use of compound 29c (ratio of monomer units n/m = 2:1) as an acceptor component of the OSC made it possible to achieve one of the highest efficiency values, 7.3%. The problem of the low solubility of such polymers was partially solved by inserting an alkylene spacer between the two thiophene fragments in one of the monomer units [44].
The efficiency (3.0-3.7%) and viscosity characteristics provide good prerequisites for the use of this type of polymer in the design of flexible OSCs. Condensed thienothiophene substituents can also be used as the donor component of the monomeric isoindigo unit (Scheme 17). The first data on the use of these compounds as donor components of OSCs (mixed with PC61BM) showed that the technology of preparing a thin film of the active layer is important for achieving the best efficiency [45]. Thus, the best results using compound 30 (efficiency 2.24%) were shown by a cell with a 44 nm active layer obtained by shifting the solution along the substrate at a rate of 0.1 mm/s. Compared to compound 30, OSCs based on the more complex condensed analogs 31, containing a heterocyclic fragment, showed an approximately twofold better efficiency (5.6% for the difluorothiophene and 5.0% for the selenophene analog) [46]. The effect of the length and branching of the alkyl substituent at the endocyclic nitrogen atom in a series of this type of donor polymer was investigated. The 2-hexyldecyl derivative exhibited the best compatibility with PC71BM, which resulted in a high PCE value (6.83%) of the corresponding OSC [47]. This efficiency may be due to the good surface morphology of the composite thin film and, as a consequence, the high short-circuit current (J_SC = 13.55 mA/cm²). To increase the degree of conjugation in the structure of polymeric isoindigo, the introduction of additional aromatic fragments either into the main monomeric chain (compounds of type 33) [48][49][50] or as a side substituent on a thiophene unit (compounds of type 34) [51][52][53] was proposed. Among the compounds 33, the best efficiency (5.29%) was shown by an OSC based on a mixture of PC71BM and polymer 33c, containing a short n-butyl substituent at the isoindigo nitrogen atoms and the longest and most branched alkyl radical in the p-phenylene fragment (Scheme 18, Table 6).
Thus, an OSC based on a fluorine-containing polymer showed an efficiency of only 0.93%. This may be due to low hole conductivity, a decrease in the HOMO level of the polymer, and a narrower band gap of visible-light absorption [51]. Within the framework of studying the prospects of polymeric isoindigo derivatives in organic photovoltaics, and in order to avoid the use of fullerene components, the concept of one-component OSCs appeared [54]. Following this strategy, isoindigo was used as a platform for the synthesis of compound 35a, combining acceptor (perylene diimide) and donor (polythiophene) fragments in one structure. However, such a cell showed an efficiency of only 1%. Using polymeric isoindigo 35b as an acceptor component, a nonfullerene OSC was also obtained, which showed a record efficiency of 12.03% among the isoindigo-based composites described to date [55]. Polymeric derivatives of isoindigo containing no thiophene units were also used as acceptor components of OSCs. Moreover, in both of the studies described in recent years, the donor component of the photovoltaic cell was a variously substituted polythiophene, while the acceptor component (the isoindigo platform) was functionalized with aromatic nitrogen-containing substituents of various structures (Scheme 20). The design of the DPP copolymer of isoindigo 37 seems the most promising here [56], since OSCs based on it showed a higher efficiency than those with compound 36 (4.2 vs 0.26%) [57]. Considering the indigoid bisheterocycle as an additional photon trap, substituted isoindigo was introduced as an acceptor substituent into the structure of the polyconjugated thiophene polymer 38 [58]. For this purpose, two indacenothiophene polymers were obtained, containing one or two isoindigo fragments (the position of introduction of the second isoindigo fragment is indicated by an arrow, Scheme 21).
However, the efficiency of the cell based on the mono-isoindigo derivative turned out to be slightly higher (2.66 vs 2.50%).

Isoindigo as the basis for organic field-effect transistors (OFETs)

In recent years, the interest of researchers in semiconductor materials has extended to the field of organic electronics. The advantages of organic oligomeric and polymeric materials for applications in OFET and thin-film transistor (TFT) technologies are due to the ease of their directed chemical modification, mechanical flexibility, the possibility of varying their optoelectronic properties, and good solubility in a wide range of solvents [59][60][61]. In this regard, the most studied and promising are π-conjugated polymer structures based on sulfur, oxygen, nitrogen, and selenium heterocyclic compounds [62,63]. One of the systems for creating OFET devices is the isoindigo platform [64][65][66][67][68]. Scheme 22 shows the polymeric structures with the best mobility values among the isoindigo derivatives known to date. Compounds 39-44 are the simplest representatives of isoindigo derivatives used in the design of OFET devices (Scheme 23). In this series of compounds, the central isoindigo platform is substituted in positions 6,6' either by a phenyl or by a substituted thiophene fragment. Thus, Ashizawa et al. [69] established the ambipolar character of the conductivity of 6,6'-diphenylisoindigo, with μ_h/μ_e = 0.037/0.027 cm²·V⁻¹·s⁻¹. Although the obtained mobility values turned out to be low, this finding could have contributed to the development of this direction through the design of electron-donating aromatic substituents at the isoindigo core. However, as was shown in subsequent works, neither a change in the length and structure of the thiophene chain nor the presence of an embedded benzothiadiazole fragment led to an improvement in the characteristics of transistors based on compounds 39-43 [70][71][72][73].
These derivatives had only hole-type conductivity in the range μ_h = 1.5·10⁻⁴-6·10⁻⁶ cm²·V⁻¹·s⁻¹. Studies of the characteristics of OFETs based on compounds 45 showed an almost complete absence of a dependence of the semiconductor properties (μ_e = 0.08-0.01 cm²·V⁻¹·s⁻¹) on the structure of the alkyl substituent at the nitrogen atom. It was only established that distancing the branching position of the alkyl chain affects the ordering of molecules in a thin polymer film after annealing [83]. At the same time, for the example of polymer 45 containing a 4-decyltetradecyl radical, it was shown that the efficiency of an OFET device depends on the method of thin-film processing. In contrast to the traditional spin-coating technique, the authors succeeded in obtaining a thin film by immersing a substrate in a polymer solution, followed by slow extraction accompanied by slow evaporation of the solvent. The device thus obtained showed one of the highest values of hole conductivity, μ_h = 8.3 cm²·V⁻¹·s⁻¹ [78]. It was also found that a device based on compound 45 (R = 2-decyltetradecyl) with the addition of iron phthalocyanine showed slightly better mobility. Liu et al. explain this effect by an improvement in the hole-type conductivity and a tight and even packing of the composite in a thin film [80]. At the same time, selenophene analogs generally show lower values of mobility [90]. The introduction of fluorine atoms into the dithienyl fragment (see compound 46), while varying the symmetry of substitution of the 1,1'-positions in isoindigo, did not lead to an improvement in the OFET efficiency (maximum μ_h = 1.08 cm²·V⁻¹·s⁻¹). For comparison, a similar device based on a nonfluorinated analog showed a value of μ_h = 2.71 cm²·V⁻¹·s⁻¹ [74].

Scheme 20: Isoindigo-based nonthiophene aza aromatic polymers as acceptor components of OSCs.
Scheme 21: Polymers with isoindigo substituent as side-chain photon trap.

Continuing the 
study of the effect of the substituent nature on the OFET efficiency, a number of polymers 47 with varying degrees of fluorination of the monomer structural units was obtained [79]. The study showed that the transition from a less fluorinated analogue (two fluorine atoms on the dithiophene) to one containing a larger number of fluorine atoms (two fluorine atoms on the dithiophene and two in the 7,7'-positions) improves the planarity of the structural units of the polymer and increases the degree of crystallinity, which consequently increases μ_e. At the same time, the polymer containing two fluorine atoms on the dithiophene unit and one in position 7 exhibited balanced ambipolar properties, having the currently best ratio of μ_h/μ_e = 6.41/6.46 cm²·V⁻¹·s⁻¹ among isoindigoid polymers. Attempts to improve the efficiency of isoindigo-based OFETs by introducing an ethylene bridge between thiophene fragments in general did not lead to the desired result (Scheme 25). Thus, devices based on polymers 48 and 49 showed an analogous unipolar character of the conductivity, with close values of μ_h = 0.68-0.83 cm²·V⁻¹·s⁻¹ [87,91]. It should be noted here that more detailed studies on selenium analogs of polymers 48 are promising, since the latter showed moderate values of electron mobility of μ_e = 1.28 cm²·V⁻¹·s⁻¹ [89]. It was shown that a significant improvement of the semiconducting properties of an isoindigo polymer can be achieved by "multifluorination", the introduction of fluorine atoms both to the dithiophene fragment and to the 7,7'-positions of isoindigo (compound 50) [88]. As such, the presence of fluorine atoms led to ambipolarity of the polymer with an effective ratio μ_h/μ_e = 3.94/3.50 cm²·V⁻¹·s⁻¹.

Scheme 23: Monoisoindigos as low-molecular-weight semiconductors.
Scheme 25: Fluorination as a tool to improve isoindigo-based OFET devices.
Recently, diketopyrrolo[3,4-c]pyrrole (DPP) derivatives, which are highly conjugated electron-withdrawing heterocycles with high charge conductivity, a broad absorption spectrum, photostability, and thermal stability, have attracted considerable interest of researchers in the field of organic electronics [92][93][94]. Isoindigo derivatives have similar characteristics. Taking these data into account, copolymers 51 containing up to 25% DPP units were obtained [95]. Despite the good prerequisites, an OFET based on this copolymer showed only hole-type conductivity with μ_h = 1.2·10⁻³ cm²·V⁻¹·s⁻¹. Moreover, thermolysis of a thin film of the device at 220 °C, accompanied by the elimination of the Boc groups, led to a significant decrease of the OFET performance. For the polymer 52 series, the importance of the spatial arrangement of the isoindigo and DPP fragments relative to each other was demonstrated [96]. Thus, the dihedral angle of 179° in a furan polymer determines the conductivity μ_h/μ_e = 0.01/1.6·10⁻³ cm²·V⁻¹·s⁻¹, while polymers containing thiophene or p-phenylene spacers did not possess conductivity at all due to the lower planarity (dihedral angle 143°, Scheme 26, Table 7). Homopolymers based on isoindigo of various types have also been used in the design of transistors [97][98][99][100]. In order to reduce the influence of the conformational and energetic disorder inherent in all isoindigo polymers in which aromatic fragments are linked by a single bond, a number of homopolymers 53 was obtained by the aldol polycondensation reaction [97]. These compounds have rigid and almost planar structures with a wide absorption range, high electron affinity, good solubility, and ambient stability (Scheme 27). The study of the transistor characteristics showed that these homopolymers have an electronic type of conductivity, with a maximum value of μ_e = 0.03 cm²·V⁻¹·s⁻¹. 
The work [98] was also aimed at obtaining OFET devices based on homopolymeric isoindigo. The thieno-based condensed polymers 54 described therein, in which the monomeric isoindigo fragments are linked by a single bond, were obtained in two ways: by Suzuki (40% yield) and Stille (50% yield) coupling reactions. Despite the possibility of rotation of the monomeric fragments around the single bond, a transistor based on this polymer showed relatively high mobility values of μ_h/μ_e = 0.065/0.15 cm²·V⁻¹·s⁻¹. It is also worth presenting data on compounds 55 containing thiophene bridges of various structures between the isoindigo nuclei [99,101]. Here, the presence of a spacer, in comparison to compounds 54, led to a decrease of the OFET performance (maximum μ_h/μ_e = 0.037/0.029 cm²·V⁻¹·s⁻¹, Table 8). One of the directions in the design of polymeric isoindigo derivatives for improving the conductive properties is the lengthening of the conjugation chain of both the monomer unit itself and the monomer subunit [102][103][104][105]. Thus, two polymers 56 were obtained in which isoindigo fragments are condensed on the indacenedione scaffold [102]. Despite the presence of an extended π-conjugation system, which determines the ambipolar properties of a transistor based on these polymers, the charge mobility values turned out to be rather low (Scheme 28). The maximum value of μ_h/μ_e = 0.1/0.14 cm²·V⁻¹·s⁻¹ was shown by a polymer containing a bridging ethylene fragment between the two thiophene substituents. The lengthening of the conjugation chain within the monomeric unit can be achieved by introducing phenylenequinoxaline [103] or fluorinated phenylenethiophene [104] fragments. Based on compound 57, a flexible OFET with μ_e = 0.25 cm²·V⁻¹·s⁻¹ was fabricated using a 3D printer. 
In contrast to the above, an OFET based on the difluorobenzothiadiazole polymer 58 showed hole conductivity with a low charge mobility value of μ_h = 0.07 cm²·V⁻¹·s⁻¹.

Isoindigo-based sensor devices

Isoindigo derivatives have begun to find application in the design of sensor devices for the detection of simple and complex molecules in various aggregation states. Thus, it was shown that polymer structures based on isoindigo and thiophene 59a,b and 60a,b are able to effectively bind gaseous ammonia molecules (Scheme 29) [106]. Thin films of these polymers are characterized by high sensitivity, reproducibility, and fast response time. Among this series of compounds, polymer 60b, containing a terthiophene oligomeric unit, has the best characteristics. Further studies showed that a polymer composite based on the terthiophene analogue 59a, poly(methyl methacrylate), and polyaniline is capable of detecting vapors of some organic solvents (chlorobenzene, n-butanol, DMF, isopropanol, and toluene) [107]. The best sensitivity was found towards n-butanol: the lower limit of detection was 100 ppm with a response time of less than 10 s. The ability of polymeric isoindigo derivatives to bind strongly to carbon nanotubes via π-stacking was used to create sensors for the determination of NO₂ in the gaseous state (see polymer 61) and of glucose in solution (see polymer 62) [108,109]. The limit of sensitivity of the sensor was 60 ppm for nitrogen dioxide and 0.026 mM for glucose. Using the fairly simple arylamine series 63, the possibility of using isoindigo polymers for the detection of explosives (trinitrophenol and trinitrotoluene) in solution was demonstrated (Scheme 30) [110,111].

Miscellaneous applications

Taking into account the high thermal, atmospheric, mechanical, and redox stability of isoindigo polymers, various scientific groups have focused their studies on the development of new directions for the practical application of these materials. 
Thus, one such material exhibited high optical contrast (59% at 1500 nm) and redox stability (<8% change after 4000 cycles). Poly(isoindigothiophene) 65, containing sulfonate groups in the side chain, was used as an anionic photoactive polyelectrolyte in a composite with platinum nanoparticles stabilized with poly(acrylic acid) [113]. Such a catalytic system, obtained by layer-by-layer self-assembly with the addition of poly(diallyldimethylammonium chloride) on the indium tin oxide surface, provided hydrogen formation in photoelectrolytic cycles with a Faraday yield of about 45%. The search for new stable polyfunctional materials for the mass spectrometric determination of low-molecular-weight compounds led the authors of reference [114] to the discovery of new properties of polyisoindigo 66. It was found that this polymer can be used as a two-mode matrix (in positive and negative modes), which is a rarity for the MALDI method. The detection limits were below 164 pmol for reserpine and below 245 pmol for cholic acid. More recently, another new application of polyisoindigos was discovered: as a conductive binder inside electrodes containing silicon nanoparticles coated with a carbon shell (Si@C) for lithium-ion batteries [115]. The specific capacity of a battery designed using polyisoindigo 67 (up to 1400 mA·h/g) with high stability (up to 500 cycles) indicates a high potential of such structures in the search for alternatives to the existing polymer conductive binders mixed with carbon additives (Scheme 31). Isoindigo derivatives have also begun to find use in biomedical applications. Thus, nanoparticles of isoindigoid polymers have shown good potential as agents for photoacoustic and photothermal cancer therapy [116][117][118][119][120][121]. In this field, a number of condensed derivatives of oligoisoindigo 68-70, triphenylamine-containing monoisoindigo 71, and the selenophenevinylene polymer 72 were investigated (Scheme 32). 
In terms of photothermal conversion (62-71% yield), ribbon-like compounds turned out to be the most effective [120]. At the same time, in vivo experiments have shown the high efficiency of the low-molecular-weight isoindigo 71 in oxygen sensitization for cancer therapy [121]. Therein, a high value (84%) of the singlet oxygen quantum yield was obtained.

Conclusion

To summarize, it can be concluded that isoindigo is a promising platform for creating materials for various purposes: from organic solar cells and transistors to materials for biomedical applications. The easy modification and ready accessibility of the starting reagents for the synthesis of polysubstituted isoindigo derivatives allow fine-tuning of the properties and a broad structural design of this bisheterocycle. In particular, to improve the characteristics of organic solar cells and OFET devices based on polymer derivatives of isoindigo, some of the most important factors are the planarity of the monomer unit, the electron donor/acceptor nature of the heterocyclic substituents, and the branching of the alkyl radical at the endocyclic nitrogen atom. Research on methods to obtain polymeric isoindigo thin films and on the use of additives will, in our opinion, significantly improve the efficiency of these materials. In addition, the first work on combining polyaromatic acceptor and heterocyclic donor fragments in one macromolecule on the isoindigo platform showed the possibility of designing one-component nonfullerene solar cells. The high stability of polymeric isoindigo in air, at elevated temperature, under redox conditions, and under laser irradiation, together with its biocompatibility, makes it possible to conclude that the design of these compounds is promising for mass spectrometry, for catalysts for hydrogen production, and for photothermal cancer therapy.

Scheme 32: Mono-, rod-like, and polymeric isoindigos as agents for photoacoustic and photothermal cancer therapy.
Annihilation of electroweak dumbbells

We study the annihilation of electroweak dumbbells and the dependence of their dynamics on initial dumbbell length and twist. Untwisted dumbbells decay rapidly, while maximally twisted dumbbells collapse to form a compact sphaleron-like object before decaying into radiation. The decay products of a dumbbell include electromagnetic magnetic fields with energy that is a few percent of the initial energy. The magnetic field from the decay of twisted dumbbells carries magnetic helicity with magnitude that depends on the twist, and handedness that depends on the decay pathway.

Introduction

The "electroweak dumbbell" consists of a magnetic monopole and an antimonopole of the standard electroweak model connected by a string made of Z magnetic field [1,2]. The existence of such non-perturbative field configurations in the electroweak model is of great interest as they can provide the first evidence for (confined) magnetic monopoles. In a cosmological context, dumbbells can source large-scale magnetic fields which can seed galactic magnetic fields and play an important role in the propagation of cosmic rays [3]. Electroweak dumbbells are often regarded as magnetic dipoles, with the magnetic field strength falling off as 1/r³ with the distance r from the dipole. However, the situation is richer: there is a one-parameter set of electroweak dumbbell configurations [4], all describing a confined monopole-antimonopole pair, but with additional structure called the "twist". In our previous work [5], we have shown that the magnetic field strength of the maximally twisted dumbbell falls off asymptotically as cos θ/r² (in spherical coordinates), a gross departure from the usual dipolar magnetic field. The twisted dumbbell configuration was proposed in [4] and is closely related to the electroweak sphaleron [6][7][8][9]. 
The dynamics of electroweak sphalerons and dumbbells are of particular interest in the efforts towards detecting them in experiments. In view of ongoing experiments like the Monopole and Exotics Detector (MoEDAL) at the Large Hadron Collider (LHC) [10], Ref. [11] recently studied the production of the electroweak sphaleron in the presence of strong magnetic fields that arise during heavy ion collisions. In the cosmological context, simulations of the dynamical decay of electroweak sphalerons have been conducted to study baryogenesis and magnetogenesis [12,13]. Thus, there have been several efforts to numerically resolve the configuration and dynamics of electroweak sphalerons. The relevance of the dynamics of electroweak dumbbells was first alluded to by Nambu [1], who showed that electroweak dumbbells could be stabilized by rotation, potentially making them long-lived enough to have significant implications for experimental searches. This, along with the lack of a detailed study of the dynamics of electroweak dumbbells, has motivated our investigation into the dumbbell configurations and their dynamics.

In this article, we investigate the dynamical evolution of an initially stationary dumbbell configuration for a range of twists and initial dumbbell lengths. The initial condition is found by numerically relaxing a "guess" dumbbell configuration under the constraint that the locations of the monopole and antimonopole remain fixed, according to the method outlined in our previous work [5]. The main quantities of interest that we analyze are the dumbbell lifetimes and the magnetic field produced during the decay. The untwisted dumbbells are found to be unstable, with the monopoles immediately undergoing annihilation, as expected. Twisted dumbbells, on the other hand, lead to the creation of an intermediate sphaleron configuration after the initial collapse, and subsequently decay into helical magnetic fields with relatively stronger field strength. 
The article begins with a description of the model and our initial configuration in Section 2. The numerical simulation setup is described in Section 3. Our results for the lifetimes and the relic magnetic fields are given in Section 4, and we discuss our conclusions in Section 5.

Model

The Lagrangian for the bosonic sector of the electroweak theory is given by

L = |D_μΦ|² − (1/4) W^a_{μν} W^{aμν} − (1/4) Y_{μν} Y^{μν} − λ(Φ†Φ − η²)²,

with D_μΦ = (∂_μ − (i/2) g σ^a W^a_μ − (i/2) g′ Y_μ)Φ. Here, Φ is the Higgs doublet, W^a_μ are the SU(2)-valued gauge fields with a = 1, 2, 3, and Y_μ is the U(1) hypercharge gauge field. In addition, σ^a are the Pauli spin matrices with Tr(σ^a σ^b) = 2δ^{ab}, and the experimentally measured values of the parameters that we adopt from [14] are g = 0.65, sin²θ_w = 0.22, g′ = g tan θ_w = 0.35, λ = 0.129 and η = 174 GeV.

The classical Euler-Lagrange equations of motion for the model are given by

D_μ D^μ Φ = −2λ(Φ†Φ − η²)Φ,
D_ν W^{aμν} = −(ig/2)[Φ† σ^a D^μΦ − (D^μΦ)† σ^a Φ],
∂_ν Y^{μν} = −(ig′/2)[Φ† D^μΦ − (D^μΦ)† Φ],

where the gauge field strengths are given by

W^a_{μν} = ∂_μ W^a_ν − ∂_ν W^a_μ + g ε^{abc} W^b_μ W^c_ν,
Y_{μν} = ∂_μ Y_ν − ∂_ν Y_μ.

Electroweak symmetry breaking results in three massive gauge fields, namely the two charged W± bosons and Z_μ, and one massless gauge field, A_μ, that is the electromagnetic gauge field. We define

Z_μ = cos θ_w n^a W^a_μ − sin θ_w Y_μ,
A_μ = sin θ_w n^a W^a_μ + cos θ_w Y_μ,

where n^a = −Φ† σ^a Φ / (Φ†Φ) are components of a unit three vector n. The weak mixing angle θ_w is given by tan θ_w = g′/g, the Z coupling is defined as g_Z ≡ √(g² + g′²), and the electric charge is given by e = g_Z sin θ_w cos θ_w. The Higgs, Z and W boson masses are given by m_H ≡ 2√λ η = 125 GeV, m_Z ≡ g_Z η/√2 = 91 GeV and m_W ≡ gη/√2 = 80 GeV, respectively. 
Initial dumbbell configuration

We construct a suitable initial configuration for dumbbell dynamics by first choosing a "guess" field configuration that contains a monopole and antimonopole separated by a distance 2d and with relative twist γ [2], specified in the asymptotic region by the ansatz (2.11). The monopole and antimonopole are located along the z-axis at z = ±d, with θ_m and θ_m̄ being the spherical polar angles as measured from the monopole and antimonopole, respectively; ϕ is the azimuthal angle. The coordinate system is illustrated in Figure 1. In the limit θ_m̄ → 0, (2.11) reduces to the monopole configuration, and in the limit θ_m → π to an antimonopole. In between the monopole and antimonopole, θ_m → π and θ_m̄ → 0, the configuration is that of a Z-string.

The asymptotic gauge field configurations are obtained by setting the covariant derivative of the Higgs field to vanish, D_μΦ = 0. To correctly account for the spatial dependence of the Higgs field around the monopole-antimonopole pair, we attach spatial profiles. Including these in the ansatz, the initial monopole-antimonopole scalar field configuration is obtained, where r_m and r_m̄ are the radial coordinates centered on the monopole and antimonopole, respectively,

r_m = √(x² + y² + (z − d)²), r_m̄ = √(x² + y² + (z + d)²),

and the function k(⃗r) is the Z-string profile. Similar to the Higgs field, we include spatial profiles for the gauge fields. We have previously used numerical relaxation to solve for the Higgs and gauge field profiles of a static dumbbell, with the constraint that the topology of the monopole and antimonopole remains fixed during the relaxation process [5]. We use the same procedure to find the initial field configurations that we then evolve to study dumbbell dynamics. 
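The constrained relaxation itself is performed in 3D with the full Higgs and gauge field content (see [5]). As a hedged illustration of the idea only, the sketch below relaxes a 1D scalar profile by gradient flow on its energy, pinning a field zero at the origin much as the monopole zeros are pinned during relaxation; the equation, grid sizes, and step counts are our illustrative choices, not the paper's.

```python
import numpy as np

def relax_profile(L=20.0, N=201, dt=0.004, steps=20000):
    """Relax a 1D scalar profile phi(x) toward the static solution of
    phi'' = 2*phi*(phi**2 - 1) by gradient (heat-flow) descent -- a toy
    1D stand-in for the 3D constrained relaxation described in the text."""
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]
    phi = np.clip(x, 0.0, 1.0)          # initial guess: linear ramp to the vacuum
    for _ in range(steps):
        lap = np.zeros_like(phi)
        lap[1:-1] = (phi[2:] - 2*phi[1:-1] + phi[:-2]) / dx**2
        # descend the energy E = ∫ (phi'^2/2 + (phi^2-1)^2/2) dx
        phi[1:-1] += dt * (lap[1:-1] - 2*phi[1:-1]*(phi[1:-1]**2 - 1))
        phi[0], phi[-1] = 0.0, 1.0       # pin the field zero (defect core) and the vacuum
    return x, phi
```

The relaxed profile approaches the static tanh(x) solution of phi'' = 2 phi(phi² − 1); the explicit step obeys dt < dx²/2 for stability, mirroring how the 3D relaxation must respect its own lattice stability bound.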
Numerical simulation

We have used a numerical relativity technique adapted from [15], and previously used in [16] to study monopole-antimonopole scattering in the SO(3) model. Adopting the temporal gauge for convenience in numerical implementation, W^a_0 = Y_0 = 0, the evolution equations (2.3)-(2.5) can be written as a system of first-order equations in time. Straightforward discretization of the evolution equations leads to numerical instabilities. To control the instabilities, we introduce "Gauss constraint variables" Ξ = ∂_i Y_i and Γ^a = ∂_i W^a_i, with their respective evolution equations, into which we introduce the numerical stability parameter g_p². These equations reduce to the Gauss constraints in the continuum, regardless of the choice of g_p². However, once the system of equations is discretized for numerical evolution, the terms in the curly brackets do not always vanish, and a non-zero value of g_p² ensures numerical stability as outlined in [15]. The equations in (3.1)-(3.3) are then rewritten in terms of the Gauss constraint variables.

Our simulations are conducted on a regular cubic lattice with Dirichlet boundary conditions, and the fields are evolved in time using the explicit Crank-Nicolson method with two iterations [17]. We adopt the phenomenological values of the electroweak model parameters as given in Sec. 2. We choose to work in units of η and set its numerical value to η = 1 in our simulations. Then the Higgs mass of 125 GeV in these units is given by m_H = 2√λ η ≈ 0.72. For most of our runs, we use a lattice of size 400³ with lattice spacing δ = 0.25η⁻¹ = 0.25, and time step dt = δ/8. The Compton radius of the W boson is m_W⁻¹ = 2.17. This is also approximately the radius of the monopole and string in the dumbbell, implying that their profiles have a resolution of roughly 10 grid points in our setup. 
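The iterated Crank-Nicolson scheme cited above can be illustrated on a toy problem. The sketch below applies the two-iteration version (one predictor pass plus two corrector passes, each using the time-averaged right-hand side) to the 1D wave equation with fixed boundaries; the paper's code evolves the full 3D field equations instead, and all names here are ours.

```python
import numpy as np

def icn_step(phi, pi, dt, dx, iterations=2):
    """One iterated Crank-Nicolson step for the 1D wave equation
    phi_t = pi, pi_t = phi_xx (Dirichlet boundaries), using the same
    two-iteration scheme the text cites for the field equations."""
    def rhs(p, q):
        lap = np.zeros_like(p)
        lap[1:-1] = (p[2:] - 2*p[1:-1] + p[:-2]) / dx**2
        return q, lap
    phi_n, pi_n = phi, pi
    for _ in range(iterations + 1):      # 1 predictor + `iterations` correctors
        dphi, dpi = rhs(0.5*(phi + phi_n), 0.5*(pi + pi_n))
        phi_n = phi + dt * dphi
        pi_n = pi + dt * dpi
    return phi_n, pi_n
```

For a standing wave phi(x, 0) = sin(πx) on [0, 1], the scheme reproduces the exact evolution phi(x, t) = sin(πx) cos(πt) to second-order accuracy, with the tiny dissipation characteristic of iterated Crank-Nicolson keeping the evolution stable at Courant factors below one.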
The Higgs field vanishes at the centers of the monopole, antimonopole and Z-string. Since the initial dumbbell profile involves a delicate cancellation of zeros on the dumbbell, we offset the dumbbell location away from the z-axis by half a lattice spacing. That is, the monopole and antimonopole are at the coordinates x = y = δ/2, z = ±(d + δ/2), while the Z-string is located at x = y = δ/2 and parallel to the z-axis.

To ensure that the Dirichlet boundary conditions do not significantly affect the dynamics of the annihilating dumbbell, we only consider initial dumbbell lengths 2d that are sufficiently smaller than the lattice size. The maximum separation considered in our simulations was slightly less than half the lattice width, and we ensure that the Higgs field is close to the vacuum expectation value near the boundaries, |Φ| ≈ 1. We run our simulations for ∼ (5-10) × T_c, where T_c is the dumbbell lifetime, to study the relic energies in the different fields. We ensure that there are no significant effects on the dumbbell dynamics due to reflections from the Dirichlet boundary conditions in the time span considered here. We have tested this by varying the lattice spacing δ and the total simulation box size. Thus we ensure that our choice of numerical parameters and boundary conditions does not affect the dumbbell dynamics.

Results

We ran the simulation for a range of initial dumbbell lengths and twists. We show several snapshots of the energy density in the xz-plane for the untwisted case (γ = 0) in Figure 2, and for the twisted case (γ = π) in Figure 3. As can be inferred from these slices, the untwisted dumbbell promptly undergoes annihilation. However, the twisted dumbbell forms an intermediate object that appears as an over-density in the center, before eventually decaying away. We will discuss the relevance of this object (most likely the electroweak sphaleron, as discussed in Ref. 
[4]) in the context of the dumbbell lifetimes and the relic magnetic energies in the following sections.

Estimating lifetimes

The magnetic monopoles are zeros of the Higgs field, and we leverage this to estimate the lifetime of the dumbbell by tracking the zeros and finding when they disappear from the simulation box. Since the dumbbell is offset by δ/2 in the positive x and y directions in our setup, the zeros of the Higgs field are never located at lattice points. We instead borrow the approach from [16] used in studying the creation of monopoles via classical scattering. We track the evolution of the minimum value of the Higgs field over the entire lattice, Min[|Φ|]. Once Min[|Φ|] exceeds a threshold, we tag the timestep in our simulation as the dumbbell lifetime. The lifetime obtained by this criterion depends on the chosen value of the threshold. As we will see, the result is sensitive to the threshold in the untwisted case but is quite insensitive in the twisted case. Additionally, we have tested the dependence of the lifetimes on the spatial and time resolution of the simulation and demonstrated that there are no significant dependencies on the choice of our simulation parameters. 
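A minimal sketch of this tagging procedure, including a local fourth-order polynomial refinement of the crossing time at the threshold Min[|Φ|] = 0.25, might look like the following; the function and parameter names are our own.

```python
import numpy as np

def dumbbell_lifetime(t, min_phi, threshold=0.25, window=5, deg=4):
    """Estimate the lifetime as the time when Min|Phi| first exceeds
    `threshold`, refining the tagged timestep with a local polynomial fit.
    `t` and `min_phi` are the simulation times and the corresponding
    lattice minima of |Phi|. Assumes the threshold is actually crossed."""
    i = int(np.argmax(min_phi > threshold))          # first tagged timestep
    lo, hi = max(i - window, 0), min(i + window, len(t))
    coeffs = np.polyfit(t[lo:hi], min_phi[lo:hi], deg)
    p = np.poly1d(coeffs) - threshold                # solve p(t) = threshold
    roots = p.roots
    real = roots[np.abs(roots.imag) < 1e-9].real
    real = real[(real >= t[lo]) & (real <= t[hi - 1])]
    # pick the admissible real root nearest the tagged timestep
    return real[np.argmin(np.abs(real - t[i]))] if len(real) else t[i]
```

For a smooth, monotonically rising Min[|Φ|] curve this reproduces the crossing time to well below the output cadence, which is the point of the polynomial refinement described in the text.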
In Figure 4, we show the evolution of Min[|Φ|] for various values of the initial separation 2d. Comparing the untwisted (left panel) and twisted (right panel) cases, we see that the curves for the untwisted case rise slowly, implying greater sensitivity of the dumbbell lifetime to the chosen threshold. In the twisted case, the curves rise very sharply and the dumbbell lifetime is not sensitive to our choice of threshold. After tagging the timestep when the threshold condition on Min[|Φ|] is satisfied in our simulations, we interpolate the time evolution of Min[|Φ|] in a small range around the tagged timestep. The interpolation was performed via a fourth-order polynomial curve fit, and we evaluate the interpolated function at the threshold value (Min[|Φ|] = 0.25) to find the lifetime. The interpolated Min[|Φ|] are shown in Figure 4 as solid lines. We plot the dumbbell lifetime against the initial dumbbell length for untwisted and maximally twisted dumbbells in Figure 5. We find that the lifetime grows with the initial length but appears to saturate beyond a certain length. To check the saturation we have performed one run for very long dumbbells (2d m_W = 55.2). These runs are computationally very expensive. The saturation is clearer in the case of twisted dumbbells. We interpret these plots recalling the instability analysis of Z-strings [18,19]: the untwisted dumbbell decays on a timescale ∼ m_W⁻¹ due to the instability, while the twisted dumbbell survives about 10 times longer, first collapsing to an electroweak sphaleron that eventually decays due to its instability on a longer timescale. To verify that the instability is dynamical and not a result of numerical artifacts, we followed [2,20] and ran test simulations with large values of sin²θ_w and small values of m_H/m_Z, for which Z-strings are known to be stable [18,19]. The results of these simulations are given in Appendix A. 
In contrast to the simulations with physical parameters, the Z-string instability is absent and the dynamics of the monopoles is clearly visible in these "semilocal" simulations.

Relic Magnetic Fields

The electromagnetic field strength tensor in the symmetry broken phase (|Φ| = η) is defined so that it reduces to the usual Maxwell definition in unitary gauge [21,22]. This definition implies the presence of non-zero electromagnetic fields even for A_μ = 0 due to the Higgs gradient term. In our study of the decay of dumbbells, ∂_μΦ vanishes at late times, and then the expression in (4.1) agrees with the Maxwell definition. The total magnetic field energy is given by

E_M = (1/2) ∫ d³x B²,

where B is the magnetic field derived from (4.1). We show the time evolution of the magnetic energy E_M(t) for twisted and untwisted dumbbells of various initial lengths in Figure 6. After an initial phase of annihilation, the total magnetic field energy reaches a steady value that depends on the initial length of the dumbbell. An important observation is that the relic magnetic energy depends on the twist, in addition to the initial dumbbell length. In Figure 7 we plot the fractional magnetic field energy at a late time after annihilation, T_c + Δt, where we have chosen Δt = 25η⁻¹. Since we use Dirichlet boundary conditions, the simulation time has an upper bound, after which reflections occurring at the boundaries would affect the quantities of interest. The time at which we evaluate the relic magnetic field energy, T_c + Δt, is large enough that the relic magnetic energy has reached an asymptote, but still smaller than the time at which a significant fraction of the energy is reflected. From Figure 7, we infer that the fractional magnetic energy after annihilation is about twice as large for the twisted case when compared to the untwisted case, for the same initial (large) dumbbell length. In addition to the magnetic energy density, we are interested in the helicity of the magnetic field, which has been shown to have significant implications for cosmic 
baryogenesis and magnetogenesis [3,4,23]. The total magnetic helicity is defined as

H_M = ∫ d³x A · B,

where A is given by the spatial components of the vector potential (2.9) and B is derived from the EM tensor (4.1). The evolution of the total magnetic helicity for an initially twisted dumbbell is shown by the blue curve in the left panel of Figure 8. It is clear that the helicity does not approach a specific value within the span of our simulation. A possible explanation for this behavior is that the definition of H_M is gauge independent only if the magnetic field is orthogonal to the areal vectors everywhere on the boundaries of the simulation domain. (Alternately, if the magnetic field vanishes on the boundaries.) This is certainly not true in our simulations. Hence we cannot assign physical meaning to our evaluation of H_M at late times, when the magnetic field is not small at the boundaries. Nonetheless, we can infer that the relic magnetic field has significant net helicity when compared to the untwisted case, which had a helicity H_M ∼ 10⁻¹³, consistent with numerical roundoff. An alternative measure of the parity violation in the magnetic field is provided by the "physical helicity", defined as

H_B = ∫ d³x B · (∇ × B).

In contrast to the magnetic helicity, the physical helicity does not have issues with gauge invariance. It is also better behaved in a finite domain, as the magnetic field B falls off faster than the gauge field A. The evolution of H_B is shown in Figure 8 for γ = π, and it asymptotes to a constant value at late times. We also plot H_B at late times after annihilation for various initial dumbbell lengths and twists in the right panel of Figure 8. 
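On the lattice, both helicity diagnostics reduce to straightforward sums; a minimal sketch is below. The curl-based form for the physical helicity, ∫ d³x B·(∇×B), is our reading of the text (it is gauge invariant, involves only B, and carries the quoted η² scaling); the array layout and names are illustrative.

```python
import numpy as np

def helicities(A, B, delta):
    """Lattice estimates of the magnetic helicity H_M = ∫ d³x A·B and of a
    curl-based 'physical' helicity ∫ d³x B·(∇×B) (the latter form is our
    assumption). A, B: arrays of shape (3, Nx, Ny, Nz); delta: lattice spacing."""
    hm = np.sum(A * B) * delta**3
    # curl via second-order central differences (one-sided at the boundaries)
    dB = [np.gradient(B[i], delta) for i in range(3)]   # dB[i][j] = dB_i/dx_j
    curl = np.array([dB[2][1] - dB[1][2],
                     dB[0][2] - dB[2][0],
                     dB[1][0] - dB[0][1]])
    hb = np.sum(B * curl) * delta**3
    return hm, hb
```

As a sanity check, a Beltrami field B = (cos z, −sin z, 0) satisfies ∇×B = B, so both helicity densities equal one and each integral should return the box volume.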
From this, we can infer that H_B for the untwisted case γ = 0 is consistent with zero. In the twisted case with γ = π, it takes on finite positive values, ∼ 3η², as shown in Figure 8. One feature that is immediately clear is that the signs of the physical helicities H_B for γ = π and 0 < γ < π are opposite to each other. The opposite signs are related to the untwisting dynamics of the dumbbell configuration as it annihilates. Dumbbells with twist γ = π are maximally twisted and carry maximal energy as a function of twist [5]. Dumbbells with twist slightly less than π will tend to untwist by reducing the twist angle from π− → 0, while those with twist somewhat greater than π will untwist by increasing the twist angle from π+ → 2π. These decay modes lead to magnetic field helicity of opposite signs. We tested this argument by running the simulation for twists slightly larger and smaller than π, and confirmed that the signs of the physical helicity are indeed opposite to each other. The plots for the tests we conducted, for two different initial dumbbell lengths, are shown in Figure 9, where we see that the evolution of H_B for twist π+ mirrors the one for twist π−.

Discussion and Conclusion

We have studied the collapse, annihilation, and decay products of electroweak dumbbells as a function of their length and twist.

The untwisted case has the expected dynamics, where the dumbbell collapses in a time that is very short, ∼ m_W⁻¹, and comparable to the instability time scale of the Z-string [18,19]. The energy of the dumbbell that is converted into magnetic field energy depends on the length of the dumbbell. For short untwisted dumbbells, the magnetic field energy can be ∼ 0.1%; for longer dumbbells it is ∼ 1%. The magnetic fields produced in the untwisted case are not helical. 
Twisted dumbbells collapse on a time scale that saturates to ∼ 10 m_W^−1 for long dumbbells. They collapse to form a long-lived object that presumably is an electroweak sphaleron; see Figure 3. Eventually, the sphaleron decays as well. The decay products include magnetic field energy that is large compared to the untwisted case. The fractional energy converted to magnetic field energy is roughly independent of the dumbbell length and is ∼ 4%. The produced magnetic field is helical, though our calculation of the magnetic helicity is not reliable, especially at late times, due to the finiteness of the simulation box. As an alternative, we have calculated the integrated physical helicity, and this asymptotes to a constant (∼ 3η^2) at late times for the maximally twisted dumbbell.

In future work, we propose to study the dynamics of rotating dumbbells as this pertains to their production and detection in a laboratory setting, as first discussed by Nambu [1]. We expect the study to be technically challenging, as it would require significant improvements in implementing boundary conditions, especially if rotating dumbbells survive for a long time.

Figure 1: The location of the monopole (blue dot), antimonopole (red dot) and the Z-string (purple line) in a cartesian grid.

Figure 2: The snapshots of energy density in the xz-plane for γ = 0 are shown for times 0dt, 1000dt and 5000dt in the left, middle and right panels, respectively. The colors represent the energy density and the corresponding values are given by the scale, in units of η^4. Here, the simulation parameters are dt = δ/100, with δ = 0.25η^−1, and the initial dumbbell length is 2d = 160δ. The horizontal (z) and vertical (x) axes are in lattice units.

Figure 3: The energy density snapshots, similar to Figure 2, for the twisted case (γ = π) and times 0dt, 5000dt and 10000dt in the left, middle and right panels, respectively.
Figure 4: The evolution of Min[|Φ|] in the simulation box for twists γ = 0 (left panel) and π (right panel). The line colors correspond to different values of dumbbell length expressed as a multiple of the approximate monopole width m_W^−1. The hollow circles correspond to the data points from the simulation and the solid curves are 4th-order polynomial fits around the threshold Min[|Φ|] = 0.25. We show our threshold choice, Min[|Φ|] = 0.25, as a dotted horizontal line.

Figure 5: The lifetimes T_c of the dumbbells as a function of their initial separation are shown for twists γ = 0 and π in the left and right panels, respectively.

Figure 6: The total magnetic energy as a function of time for twists γ = 0 and π in the left and right panels, respectively. The time along the horizontal axis is shifted by the lifetime T_c, and t = T_c is shown by a vertical dotted line. Different colored curves correspond to different initial dumbbell lengths, as stated in the legends.

Figure 7: The magnetic energy at a late time, E_M(T_c + 25), as a function of initial dumbbell length. The different colors correspond to different values of the twist γ as stated in the legend.

Figure 8:

Figure 9: The evolution of physical helicity H_B for twists γ = 5.5π/6 (red curve) and 6.5π/6 (blue curve) are shown for initial dumbbell lengths 2d = 80δ and 160δ in the left and right panels, respectively. The annihilation time T_c is marked with a black dotted vertical line. The horizontal red and blue curves correspond to the physical helicities at time t = 20 m_W^−1.

Here, the time step size in the simulation is dt = δ/8, with δ = 0.25η^−1.

Figure 11: The energy density snapshots, similar to Figure 2, for the twisted case (γ = π) and times 0dt, 450dt and 1000dt in the left, middle and right panels, respectively.
Figure 12: The evolution of Min[|Φ|] in the simulation box for twists γ = 0 (red curve) and π (blue curve). The hollow circles correspond to the data points from the simulation and the solid lines are 4th-order polynomial fits around the threshold Min[|Φ|] = 0.25. We show our threshold choice, Min[|Φ|] = 0.25, as a dotted horizontal line.
Bi12Rh3Cu2I5: A 3D Weak Topological Insulator with Monolayer Spacers and Independent Transport Channels

Topological insulators (TIs) are semiconductors with protected electronic surface states that allow dissipation-free transport. TIs are envisioned as ideal materials for spintronics and quantum computing. In Bi14Rh3I9, the first weak 3D TI, topology presumably arises from stacking of the intermetallic [(Bi4Rh)3I]2+ layers, which are predicted to be 2D TIs and to possess protected edge states, separated by topologically trivial [Bi2I8]2− octahedra chains. In the new layered salt Bi12Rh3Cu2I5, the same intermetallic layers are separated by planar, i.e., only one atom thick, [Cu2I4]2− anions. Density functional theory (DFT)-based calculations show that the compound is a weak 3D TI, characterized by Z2 = (0; 001), and that the topological gap is generated by strong spin–orbit coupling (E_g,calc. ∼ 10 meV). According to a bonding analysis, the copper cations prevent strong coupling between the TI layers. The calculated surface spectral function for a finite-slab geometry shows distinct characteristics for the two terminations of the main crystal faces ⟨001⟩, viz., [(Bi4Rh)3I]2+ and [Cu2I4]2−. Photoelectron spectroscopy data confirm the calculated band structure. In situ four-point probe measurements indicate a highly anisotropic bulk semiconductor (E_g,exp. = 28 meV) with path-independent metallic conductivity restricted to the surface as well as temperature-independent conductivity below 60 K.

One type of topological protection in TIs, the earliest proposed theoretically [6] and verified experimentally, [7] is based on strong spin–orbit coupling (SOC). The reason is that SOC causes an inversion between states with opposite parities around the bandgap.
When such an inverted semiconductor comes into contact with a material with a trivial parity sequence (normal semiconductor, air, vacuum), the states with the same parity combine at the interface to form metallic and spin-polarized surface states that cross the bandgap. [8] As SOC increases with the atomic number and is particularly strong in p-orbitals, bismuth, as the heaviest nonradioactive element, is an excellent choice for the synthesis of TI materials. Intensively studied bismuth-based TIs include Bi2Te3, [9] Bi2Te2Se, [10] Bi2TeBr, [11] Bi2TeI, [12] and MnBi2nTe3n+1. [13,14] All these compounds are categorized as strong 3D TIs, which are bulk materials characterized by protected surface states on all facets. [15] Moreover, the mentioned Bi compounds are layered materials consisting of stacked 2D TIs. [6] In addition, there is the comparatively exclusive compound Bi14Rh3I9 (1), which is the first material with strong experimental and theoretical indications of a weak 3D TI state, implying a crystalline material with protected surface states on certain, but not all, facets. [16] This bismuth-rich subiodide has a layered structure, in which intermetallic [(Bi4Rh)3I]2+ TI layers alternate with topologically trivial [Bi2I8]2− spacer layers (Figure 1). The spacer layer effectively separates the electronic systems of adjacent intermetallic layers that are presumed [16a] to be 2D TIs. The weak 3D TI 1 is thus a stack of weakly coupled 2D TIs. The spatial density of TI channels in the stacking direction, a quantity that could be of interest for applications, is 0.8 nm^−1. The topological bandgap of compound 1 is large (about 240 meV), implying that the TI quantum state exists at room temperature (RT) and above.
Explicit understanding of the relationships between chemical composition and crystal structure on the one hand and topological properties of the electronic band structure on the other hand is important for the synthesis of potentially viable TIs. In this regard, compounds similar to 1 have recently been studied. The chemically and structurally closely related layered bismuth platinum subiodides (Figure 1), Bi13Pt3I7 (3), [17,18] Bi12Pt3I5 (4), [18] Bi38Pt9I14, [19] Bi8Pt5I3, [20] and Bi16Pt11I6, [20] unfortunately, are not TIs suitable for electronic applications because of bulk conductivity caused by a nonmatching electron count and/or electronic coupling of the predicted 2D TI layers. In particular, a monolayer of iodide ions, as present in 3 or 4, was found to be an insufficient spacer, as covalent Bi─I─Bi bonds strongly couple adjacent intermetallic layers. [18,21] This is different in the weak 3D TI Bi12Rh3Cu2I5 (2) presented here. The goal of this research was to replace the spacer layer in 1 while preserving the predicted 2D TI character of the intermetallic layer, thus obtaining further insights into tuning and design principles for new layered weak 3D TIs based on both structural (physical) and chemical criteria.

Synthesis and Thermal Stability

To explore possible phase formation in the Bi-Rh-Cu-I quaternary system and derive optimal reaction conditions, we performed differential scanning calorimetry (DSC) measurements starting from mixtures of Bi, Rh, Cu, and BiI3 in 0.1 mL silica ampoules and characterized the products associated with the detected DSC signals. In the course of these studies, we detected the new compound 2. The DSC experiment of a mixture with the composition of the target compound is shown in Figure 2. The reactions were assigned to their respective signals (Table 1) based on phase diagrams [22,23] and analysis of the products of ex situ annealing experiments (Figure S4-S8, Supporting Information).
The first endothermic effect at 265 °C (H1) was assigned to the melting of Bi. The liquid phase induces the formation of binary and ternary phases; initially, Bi18I4 [22] and γ-CuI are predominant. The signal H2 was attributed to the decomposition of Bi18I4, this being the only phase no longer observed in a reaction mixture annealed at 297 °C. The latter also contained the cluster salt (Bi8)2(Bi9)[RhBi6I12]3 [24] along with reflections associated with at least one unreported Bi-Cu-I compound, which is currently under investigation. The endothermic effect H3 is associated with the formation of 2. Trace amounts of this compound were already observed at lower temperatures, suggesting that its formation is kinetically hindered. Upon reaching 500 °C (H4), 2 decomposes following the reaction Bi12Rh3Cu2I5 → 3 Bi2Rh + 2 CuI + 5 Bi + BiI3, which was confirmed by further DSC experiments starting from manually selected crystals of 2 (Figure S1 and Table S1, Supporting Information). Combining this information with the optimized synthesis protocol for compound 1, [16b] we developed temperature programs for the targeted synthesis of single-phase microcrystalline powders as well as for growing larger crystals of 2. The new subiodide 2 forms black plate-shaped crystals with ⟨001⟩ as the largest faces. 2 is not perceptibly sensitive to air, but should be stored under inert gas.

Figure 1. From left to right: crystal structures of the bismuth-rich subiodides Bi14Rh3I9 (1), Bi13Pt3I7 (3), Bi12Pt3I5 (4), and Bi12Rh3Cu2I5 (2). In all compounds, the Rh- or Pt-centered Bi cubes form the same kagome nets, which host iodide ions in their hexagonal voids. These predicted 2D TI layers are separated by different spacers: iodidobismuthate(III) chains in 1 and 3, monolayers of isolated iodide ions in 3 and 4, or iodidocuprate(I) groups in 2.
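The decomposition reaction quoted above can be checked by simple element bookkeeping. The sketch below is an illustrative mass-balance check, not part of the original analysis:

```python
from collections import Counter

def element_totals(species):
    """species: list of (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, formula in species:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# Bi12Rh3Cu2I5 -> 3 Bi2Rh + 2 CuI + 5 Bi + BiI3
lhs = element_totals([(1, {'Bi': 12, 'Rh': 3, 'Cu': 2, 'I': 5})])
rhs = element_totals([(3, {'Bi': 2, 'Rh': 1}),
                      (2, {'Cu': 1, 'I': 1}),
                      (5, {'Bi': 1}),
                      (1, {'Bi': 1, 'I': 3})])

print(lhs == rhs)  # -> True: the decomposition reaction is balanced
```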
Oxygen impurities in the reaction mixture should be avoided because they become trapped in by-products. The composition of 2 was determined by X-ray diffraction on single crystals and confirmed by energy-dispersive X-ray spectroscopy (EDX) analysis (Figure S2, Supporting Information).

Crystal Structure

A single crystal of 2 was investigated by X-ray diffraction at RT (Table S2, Supporting Information). The cell metrics suggested an F-centered orthorhombic lattice; accordingly, the crystal structure of 2 was initially solved and refined in the orthorhombic space group Fmmm. The figures of merit of the refinement as well as the resulting structure model were quite reasonable, with the exception of a 50% occupancy of the single crystallographic copper position. Therefore, refinements were performed in space groups of lower symmetry that allow the position to be split. Among the four possible ordering models tested, the monoclinic model presented here was the only one that refined to a fully ordered distribution of the copper atoms (Table S3, Supporting Information). We used the monoclinic space group F 1 2/m 1, a nonstandard setting of C 1 2/m 1, with pseudoorthorhombic metrics (a = 9.1697(4) Å, b = 15.8050(6) Å, c = 18.2437(5) Å, β = 90.103(2)°). The proximity to the symmetry of the crystal class mmm causes pseudomerohedral twinning along [100]. Moreover, the unit cell has a pseudoorthohexagonal basis (|b/a − √3| = 0.008). In the crystal structure of 2 (Figures 1 and 3), positively charged intermetallic layers [(Bi4Rh)3I]2+ and layers of anionic spacers [Cu2I4]2− alternate along the [001] direction. The intermetallic layer is a kagome net of edge-sharing rhodium-centered bismuth cubes that hosts an iodide ion in each of its hexagonal-prismatic voids (Figure 3). It has the symmetry of the 2D subperiodic layer group p 6/m m m (no. 80). The spacer layer is a packing of discrete centrosymmetric tetraiodido-dicuprate(I) anions (Figure 3).
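The pseudosymmetry metrics quoted for the monoclinic cell can be verified numerically; an orthohexagonal cell would have b/a = √3 exactly. This is a small arithmetic check, not part of the original refinement:

```python
import math

# Cell parameters from the refinement quoted above
a = 9.1697       # Angstrom
b = 15.8050      # Angstrom
beta = 90.103    # degrees

# Deviation from an exact orthohexagonal metric (b/a = sqrt(3))
dev = abs(b / a - math.sqrt(3))
print(round(dev, 3))              # -> 0.008, as stated in the text

# Deviation of beta from the orthorhombic value of 90 degrees
print(round(abs(beta - 90.0), 3))  # -> 0.103
```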
The copper cations are in trigonal-planar coordination, with one terminal and two μ-bridging iodide ions each. The layer group symmetry is c 1 2/m 1 (no. 18). The projection of the stacking vector on the (001) plane, i.e., the lateral offset of neighboring layers of the same kind, is exactly a/2 (in 1, this is a/6). The hexagonal pseudosymmetry of the intermetallic layer allows stacking vectors rotated by 120° and 240°. In the investigated crystal of 2, such stacking faults occur with about 1% probability and are responsible for the highest maxima of the residual electron density. In view of this massive pseudosymmetry, the fairly well-ordered crystal requires a mechanism to pass information about order through the structure. This is apparently achieved by the [Cu2I4]2− groups, which not only imprint a preferred orientation in the (001) plane and thus cancel the hexagonal symmetry, but are also inclined to reach into the hexagonal voids of both neighboring intermetallic layers, thereby interlocking them. The bond lengths Rh-Bi (2.80-2.85 Å) and Bi-Bi (3.17-3.46 Å) in 2 (Table S4, Supporting Information) deviate by less than 0.02 Å from those in 1. Planar [Cu2I4]2− anions have been reported previously, e.g., in two modifications of [PPh4]2[Cu2I4]. [25] In 2, the Cu-I distances are 2.55 Å (terminal) and 2.61 Å (bridging), which differ by no more than 0.06 Å from those of the references. The Cu···Cu distances differ much more: 2.71 Å for 2, versus 2.85 and 2.96 Å for the phosphonium salts. However, this parameter is highly dependent on the chemical environment; e.g., for [Ba(tetraglyme)2][Cu2I4] [26] it is 2.64 Å.

Table 1. Assignation of DSC signals in Figure 2 (H = heating, C = cooling). The listed temperatures for the signals are the onset temperatures, except for the signal H3, which covers a broad range. The heating curve is shown in red and the subsequent cooling curve in blue.
It should also be noted that the Cu cation in 2 has two direct Bi neighbors at a distance of 3.12 Å. For comparison, 2.67 Å was reported as the average Cu-Bi distance in the (CuBi8)3+ cluster, [27] and 2.74 Å as the bond length in the [(Me3Si)2Bi-Cu(PMe3)3] molecule. [28] The essential difference between the structures of 1 and 2 lies in the nature and thickness of their spacer layers. The [Cu2I4]2− groups of 2 replace the [Bi2I8]2− chains of 1 without changing the layer charge. The substitution only slightly affects the lateral layer dimensions. The spatial density of the TI channels in the stacking direction was increased to 1.1 nm^−1 and seems to represent the maximum feasible. However, the question arises whether the spacer in 2, which is only one atom thick, is as efficient an electronic separator as the spacer in 1, which is three atoms thick. The subiodides Bi13Pt3I7 [17,18] (3) and Bi12Pt3I5 [18] (4) have basically the same intermetallic layers and contain monolayers of isolated iodide ions as spacers. 4 is even isostructural to 2, except that it does not have copper cations within the iodide spacer layer. The distance between intermetallic TI layers is about 0.6 Å wider in the copper-containing compound 2 than in 4, suggesting a significant difference in chemical bonding between TI and spacer layers.

Chemical Bonding

A density functional theory (DFT)-based real-space bonding analysis of 2 confirms strong covalent bonding within the two types of layers and mainly electrostatic interactions between them, similar to the parent compound 1. [16] In Figure 4a, the green isosurface of the electron density (at 0.035 a.u.) shows that the [Cu2I4]2− group (in the center) is clearly separated from the neighboring intermetallic layers. In the yellow isosurface, at the much lower value of 0.0223 a.u., the electron density of both layers merges at the bond critical point, linking Bi and Cu atoms.
However, there is no indication of a constructive interaction between Bi and Cu, as will be shown below. Figure 4b depicts the electron density contribution of all states lying between the Fermi level and 0.5 eV below it. The high-energy electron density contribution is formed from the orbitals of all atomic species. In the case of Cu, a clearly structured 3d orbital contribution is evident. Despite this fact, no direct Cu─Cu bond is found. The valence electron density around the Bi atom is mainly governed by its lone pair. The absence of Cu-Cu and Cu-Bi two-center bonding is evident in the analysis of atom-specific density contributions (Table S5, Supporting Information). Here, the electron density resulting from the square of given atomic states and their respective mixtures is integrated within the unit cell. For example, choosing the 3d and 4s states for one of the Cu atoms, the resulting electron density integrates to 10.1 electrons within the unit cell, indicating a filled 3d shell on Cu. In the case where the 3d and 4s states are chosen for both Cu atoms, the charge distribution integrates to 20.2 electrons. As this number is equal to the simple sum of both Cu populations, there is no additional attractive interaction between the two Cu atoms. Otherwise, the integrated populations would be higher than the sum of the Cu1 and Cu2 populations, because additional terms would arise from Cu1-Cu2 mixtures. The absence of a direct Cu─Cu bond is also shown by the chemical bonding analysis based on the ELI-D. [29] Figure 4c depicts an orthoslice of the ELI-D through the [Cu2I4]2− group. The penultimate shell of Cu is clearly visible and shows an almost spherical distribution. No distortion or bonding region is found between the two Cu atoms.
Moreover, the localization domain between the two Cu atoms emerges from the valence region.

In compounds 3 and 4, which contain monolayers of iodide ions, the Bi─I─Bi bonds connecting adjacent intermetallic layers are covalent to a significant extent. In 3, they cause pairwise coupling of the intermetallic layers, resulting in a semimetal with a pseudogap or, below 3 K, a superconductor (Table 2 and Figure S9, Supporting Information). [18,21] In the band structure of 4 (SOC included), a band crosses the Fermi level along the stacking direction. [18] In 2, the Cu cations have a special role, as they redirect the covalent bonding predominantly to Cu-I and render the Bi⋯I interactions more electrostatic. Moreover, as both Bi and Cu atoms are positively charged and close to each other in the stacking direction, the related repulsion, together with the reduced covalent interlayer bonding, leads to a large distance between TI layers, which is 0.6 Å wider in the copper compound 2 than in the almost isostructural copper-free compound 4 (see Table 2). Reduced covalency and large spacing result in low electronic interaction between adjacent TI layers. The synopsis in Table 2 suggests a fairly simple distance criterion for the occurrence of TI properties in this class of compounds. The intermetallic kagome nets should be at least 5.9 Å apart, as is the case in 2. A thicker spacer layer also generates a wider topological gap. This effect seems to saturate at distances above 9 Å. There, the topological gap is mainly determined by the band splitting due to SOC, while band dispersion by chemical bonding along the stacking direction plays only a minor role. Of course, a matching electron count is also required to prevent bulk metallic conductivity. Figure 5 shows the total and layer-resolved densities of states (DOS) as well as the electronic band structure of 2.
These data were obtained within the generalized gradient approximation (GGA) with spin-orbit effects included in a nonperturbative manner using a four-component Kohn-Sham-Dirac approach implemented in the FPLO code. [30] The ground state corresponds to a nonmagnetic semiconductor with an indirect gap of approximately 10 meV. In view of the fact that the bandgap of 2 is much smaller than the gap of 1 (240 meV), it is possible that structure optimization may lead to an enhancement of the bandgap. To this end, we computed the residual forces on each atom using the symmetry constraints of the space group. We found that the maximum force on any atom is only about 7 meV Å^−1. Even after relaxing the symmetry constraints (space group P1 with 22 inequivalent atoms in the unit cell), the maximum force on any atom was smaller than 10 meV Å^−1. This test confirms the experimental structure and leaves no room for gap enhancement by structure optimization.

Electronic Band Structure and Topology

Calculations without SOC result in a metallic state, suggesting that the gap is SOC-induced and may have an inverted parity sequence. To ascertain the nontrivial topological properties, we computed the Z2 invariants. [31] We find Z2 = (0; 001), implying that 2 is a weak 3D TI predicted to possess no protected surface states on the (001) surface, but on other crystal faces. [31] The space group symmetry contains the mirror symmetry m(y) and, therefore, allows a topological crystalline insulator phase. However, such a phase in 2 is ruled out by explicit computation of the mirror Chern number. [32] In analogy to 1, [16] the states around the gap in the electronic band structure are dominated by the atoms of the intermetallic TI layer, especially the 6p states of Bi (see Figure 5 and S10, Supporting Information).
This is, on one hand, the basis for the SOC-generated parity inversion and, on the other hand, the reason why an exchange of the spacer layer does not necessarily affect the 3D TI properties within this family of compounds. (Each Bi atom contributes its three 6p electrons, as the 6s electrons form lone pairs; Rh contributes nine electrons and Pt ten; the positive charge has to be subtracted.) Specifically, DFT calculations for a charge-compensated [(Bi4Rh)3I] layer suggest that the intermetallic layer is a 2D TI. [16a] A 2D TI is characterized by an odd number of odd parities of the filled electronic states at the four time-reversal invariant momenta (TRIM) Γ_i^2D (i = 1, …, 4). This holds for a system with inversion symmetry, where Γ_i^2D = (n1 b1 + n2 b2)/2 with n_j ∈ {0, 1}, and b_j denotes the reciprocal lattice vectors. [31] If a periodic stack of 2D TI layers with appropriate spacer layers is formed, each 2D TRIM point is split into a pair of TRIM points (Γ_{i,0}^3D, Γ_{i,1}^3D) in the 3D reciprocal space. A sufficient condition for the formation of a weak 3D TI with Z2 = (0; 001) is that the spacer layer preserves, up to a gauge transformation, [31] the parity at the 2D TRIM points Γ_i^2D in the related 3D pair (Γ_{i,0}^3D, Γ_{i,1}^3D). We suggest that this preservation of parity is guaranteed if the electronic states have essentially the same orbital character all along the line between the two points Γ_{i,n3}^3D. In other words, bonding between the 2D TI and spacer layers should have only minor covalent contributions. Indeed, this situation is confirmed for 2, as shown in Figure 5c: the orbital character of all bands is almost invariant in the vicinity of the Fermi level and no additional band inversions are present along the stacking direction [001].
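The TRIM-point bookkeeping behind a Z2 classification of this kind can be sketched with a toy example. The parity products below are invented placeholders chosen so that the indices come out as (0; 001); they are not the values computed for compound 2, and only the index arithmetic of the parity criterion for inversion-symmetric insulators is illustrated:

```python
from itertools import product

def z2_indices(delta):
    """Z2 indices (nu0; nu1 nu2 nu3) from delta[g], the product of parity
    eigenvalues of the occupied Kramers pairs at TRIM point g = (n1, n2, n3)."""
    trims = list(product((0, 1), repeat=3))
    # Strong index nu0: product over all eight 3D TRIM points.
    nu0 = 1
    for g in trims:
        nu0 *= delta[g]
    # Weak index nu_k: product over the four TRIM points in the n_k = 1 plane.
    weak = []
    for k in range(3):
        p = 1
        for g in trims:
            if g[k] == 1:
                p *= delta[g]
        weak.append(0 if p == 1 else 1)
    return (0 if nu0 == 1 else 1, tuple(weak))

# Toy parity assignment: one parity inversion in each of the two planes
# stacked along the third reciprocal direction (hypothetical values).
delta = {g: 1 for g in product((0, 1), repeat=3)}
delta[(0, 0, 0)] = -1
delta[(0, 0, 1)] = -1

print(z2_indices(delta))  # -> (0, (0, 0, 1)), i.e., a weak TI with Z2 = (0; 001)
```

The strong index vanishes because the two inversions cancel over the full set of TRIM points, while the n3 = 1 plane retains a single odd-parity product, giving the weak index along the stacking direction.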
This correlation between topological properties and chemical bonding is illustrated in the considered class of layered compounds by comparing Bi12Rh3Cu2I5 (2) with Bi13Pt3I7 (3) or Bi12Pt3I5 (4). Earlier DFT calculations had shown that a spacer consisting of a monolayer of iodide ions leads to covalent Bi─I─Bi bonds between neighboring intermetallic layers. In 3, covalent pairing of intermetallic layers creates a trivial topology, while in 4, the strong interlayer bonding leads to a metallic state even in a hypothetical strongly doped situation with an even electron number. [18,21]

Photoelectron Spectra

To gain experimental insights into the electronic properties of 2, we carried out high-resolution angle-resolved photoelectron spectroscopy (ARPES) on two samples. Both were cleaved under UHV conditions. The photoemission experiments were carried out at a pressure of p < 5 × 10^−11 mbar and at low temperature (T = 20 K). Figure 6a shows the normal emission spectrum of sample #1. Additional data for samples #1 and #2 are shown and discussed in the Supporting Information. While the overall agreement between the two data sets at large binding energies is very good, some differences are notable, especially near the Fermi energy. We ascribe these differences to unlike surface compositions probed in the two measurements (see Supporting Information). Most importantly, a faint dispersionless band is observed at the Fermi energy for sample #1. As the resolution of the measured bands was relatively low for all samples, we calculated the inverse second derivative of the energy distribution curves (EDCs), as shown in Figure 6a, right panel, which enhances the visibility of small features in a noisy measurement. The broadening could be due to superposition of signals from domains of the two possible surface terminations, as observed for 1. [16c]

Figure 5. a) Total and layer-resolved DOS of 2 as obtained in a fully relativistic GGA calculation. b) Self-consistent (blue) band structure obtained within a fully relativistic GGA calculation and the related Wannier TB-model-derived (red) band structure of Bi12Rh3Cu2I5 (2). c) Layer-resolved contributions from the Bi 6p orbitals and I 5p orbitals from the intermetallic layer as well as the spacer layer (labeled I_S) along Γ-Z and X-I. d) Brillouin zone. The Fermi energy was set to ε_F = 0 and is shown by a dashed line.

In order to model the surface electronic properties and to understand related subtleties, we carried out tight-binding (TB) calculations for both [(Bi4Rh)3I]2+ and [Cu2I4]2− surface terminations in a finite-slab geometry. To this end, an accurate (bulk) TB model was obtained by considering the maximally projected Wannier functions in the energy window between −14 and +7 eV (see Figure 5b and Experimental Section). This TB model was then used to investigate the surface electronic properties for slabs consisting of five unit cells along [001] (periodic boundary conditions along a and b). To model the ARPES spectra, layer- and k-resolved DOS were convoluted with an exponential function by considering an escape depth of 10 Å. Further, a Gaussian broadening of 20 meV was introduced to model the experimental resolution. In the simulated spectra, it was possible to distinguish the two different terminations of the (001) surface. The ARPES data set #1 was identified to be composed dominantly of [(Bi4Rh)3I]2+ surface terminations. Figure 6b shows the simulated EDC for this termination. To match ARPES data set #1, it was assumed that the Fermi energy is located about 0.10 eV above the bottom of the conduction band. This particular choice of the Fermi energy is consistent with the observed faint band at the Fermi energy.
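The two processing steps described for the spectrum simulation, exponential escape-depth weighting of layer-resolved spectra followed by Gaussian broadening, can be sketched as below. This is an illustration under stated assumptions, not the paper's Wannier-TB pipeline: the input DOS is a flat synthetic array, and the 9.1 Å layer spacing is an assumed value chosen only for the example.

```python
import numpy as np

def simulate_edc(dos, layer_depths, energies, escape_depth=10.0, sigma=0.020):
    """dos: array (n_layers, n_energies); layer_depths in Angstrom;
    energies on a uniform grid in eV. Returns a broadened surface spectrum."""
    # Surface sensitivity: deeper layers are exponentially damped.
    weights = np.exp(-np.asarray(layer_depths) / escape_depth)
    spectrum = weights @ dos
    # Gaussian broadening (20 meV here) to mimic experimental resolution.
    dE = energies[1] - energies[0]
    kernel_E = np.arange(-5.0 * sigma, 5.0 * sigma + dE, dE)
    kernel = np.exp(-0.5 * (kernel_E / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(spectrum, kernel, mode='same')

energies = np.linspace(-1.0, 0.2, 601)          # binding-energy grid in eV
layer_depths = np.arange(5) * 9.1               # 5 layers; assumed spacing in A
dos = np.ones((5, energies.size))               # flat synthetic layer DOS
edc = simulate_edc(dos, layer_depths, energies)
print(edc.shape)                                 # one broadened spectrum per energy grid
```

With a flat input DOS, the interior of the broadened spectrum equals the sum of the escape-depth weights, which makes the surface sensitivity of the weighting easy to verify.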
The applied shift of the Fermi level is necessary in order to model the polar surface, which results in an intrinsic doping (also called band bending) that was already observed in the parent compound 1. [16] While the overall qualitative agreement between the simulated and the measured ARPES spectra is evident, the quantitative agreement is also remarkably good (see Supporting Information for a detailed quantitative comparison). In particular, the high intensities at about 0.5, 1.0, and 1.5 eV at Γ found in both spectra and the band dispersion (dome-shaped profile) along the Γ-K path below 0.5 eV are noteworthy. The dome-shaped profile arises from the fact that the maxima of these bands do not lie at the K point (see Figure S15, Supporting Information). This good agreement indicates that the DFT calculations of the (bulk) electronic properties correctly describe the topological properties of the real system. Surfaces of cleaved samples were investigated at the NanoESCA beamline (Elettra, Trieste) using photoemission electron microscopy (PEEM) and micro-X-ray photoelectron spectroscopy (μ-XPS), which provide laterally resolved information about the surface. Secondary electron micrographs measured using a mercury lamp, with an example shown in Figure 7a, reveal macroscopic regions that exhibit different work functions, as shown in Figure 7b, where "bright" and "dark" regions are indicated. To establish the chemical composition of these regions, μ-XPS was performed, revealing that the photoelectron spectra associated with the two areas differ significantly (Figure 7b,c), implying different chemical terminations. In particular, the Bi 5d μ-XPS spectrum acquired from the bright region shows one doublet, while from the dark region two doublets with the same width are observed. It is tempting to assign these regions to different terminations of a clean surface (bright = [Cu2I4]2−; dark = [(Bi4Rh)3I]2+).
However, the chemical shift of 2 eV in the Bi 5d from the dark region is much larger than expected for the [(Bi4Rh)3I]2+ surface based on DFT calculations for few-layer slab models (about 0.2 eV). Moreover, the appearance of the 21 eV peak, which is routinely assigned to O 2s, points to (at least partial) surface contamination in the "dark" regions, probably by glue residues. Therefore, μ-XPS shows that the UHV cleaving procedure yields a clean surface for significant parts of the sample, consistent with the high-resolution ARPES investigations.

Electronic Transport

In order to investigate the electronic properties of this new TI compound further, we performed in situ four-point probe transport measurements using W tips. The resistivity of the sample with a thickness of t ≈ 250 μm measured at 300 K is about 7 Ω □^−1 and does not depend on the spacing p between the tips (Figure 8a,b), which indicates 2D transport parallel to (001). A detailed analysis gave an effective thickness of the surface-near 2D channel of t_eff ≤ 25 μm. The resistivity across the layered structure is higher by a factor of about 100 (t/t_eff = √(ρ_z/ρ_x(y))), [33] quantifying the anisotropy of the structure of compound 2. The sheet resistance for the effective 2D channel corresponds to a bulk resistivity of around 2 × 10^−2 Ω cm at 300 K and is comparable to carbon. Based on the ARPES and DFT results, we propose here a simple model for the surface-near transport channel, which is the incoherent sum of conductances stemming from the n-doped intermetallic termination of the (001) surface, the highly anisotropic semiconducting bulk, and states with topologically nontrivial character at the edges of all predicted TI layers. Temperature-dependent measurements between 30 and 300 K reveal three different regimes (Figure 8c,d).
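The anisotropy factor and the quoted bulk resistivity follow directly from the numbers above; the short arithmetic check below reproduces both (an illustration only, using the values stated in the text):

```python
# Anisotropy from the thickness ratio: t / t_eff = sqrt(rho_z / rho_xy),
# so rho_z / rho_xy = (t / t_eff)^2 with t ~ 250 um and t_eff <= 25 um.
t_um = 250.0          # sample thickness, micrometers
t_eff_um = 25.0       # effective 2D-channel thickness, micrometers
anisotropy = (t_um / t_eff_um) ** 2
print(anisotropy)     # -> 100.0, the factor quoted in the text

# Bulk resistivity of the channel: rho = R_sheet * t_eff,
# with R_sheet ~ 7 ohm per square and t_eff converted to cm.
rho_ohm_cm = 7.0 * (t_eff_um * 1e-4)
print(rho_ohm_cm)     # ~1.75e-2 ohm*cm, i.e., around 2e-2 as stated
```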
Between 300 and about 160 K (regime III), the resistivity gradually decreases with decreasing temperature, most likely because of reduced electron-phonon scattering. An Arrhenius analysis in the "semiconducting" regime II (160 to 60 K) yields an activation energy that correlates nicely with the bandgap of the crystal (E_A = E_g/2 = 14 meV). The dominant transport channel in regimes II and III is therefore most likely mediated by the bulk states. The metallic channel of the intermetallic layer at the (001) surface is based on strongly localized states, represents only about 1% of t_eff, and therefore does not dominate the transport (Figure 8b,c). Most interestingly, the resistivity remains constant below 60 K (regime I). This suggests that a third transport channel is present and dominates the conductivity in this regime. A likely interpretation is a percolated network of TI edge states, in which transport is largely unaffected by temperature. Very similar observations have been made for Bi2Te2Se and Bi1.1Sb0.9Te2S under high pressure. [34] There, the decoupling of topological surface states and bulk states at low temperatures was much more obvious because of the wider bandgaps and the topological classification as strong 3D TIs. Here, we reveal such behavior under normal pressure and for a weak 3D TI.

Conclusion

Bi12Rh3Cu2I5 (2) is a new weak 3D TI. It demonstrates that the iodidobismuthate(III) spacer layer of the known weak 3D TI Bi14Rh3I9 (1) can be replaced without changing the electron count of the intermetallic layer, in this case by iodidocuprate(I) groups. Despite the fact that the new spacer is only one atomic layer thick, representing the minimum achievable in such layered salts, the character of a weak 3D TI is preserved, however at the price of a much smaller topological gap.
The copper(I) cations between the iodide ions redirect the chemical bonding inside the spacer monolayer and thus act as a tool of (topological) bandgap engineering. The demonstrated chemical robustness of the intermetallic layers, combined with the finding that transition metal atoms can be incorporated into the spacer layer, opens up new options in the current quest for intrinsically magnetic TIs, which could form the material basis for a new type of quantum computer. Moreover, it is encouraging that even in a narrow-gap weak 3D TI the electronic transport at low temperatures appears to be dominated by the topologically protected edge states. Surface termination effects are responsible for a weak n-doping of the top intermetallic layer, but do not affect the bulk semiconductor properties or the topologically protected edge states. The weak interlayer bonding is expected to allow easy exfoliation of 2, ideally down to the predicted 2D TI limit of one structural layer.

Experimental Section

Materials: The starting materials were prepared as follows: Bismuth (ChemPur, ≥99.9999%) was treated with H2 at 220 °C. Rhodium (ABCR, ≥99.9%) was treated twice with H2 at 500 °C. Copper (pieces, Merck, p.a. grade) was treated with warm glacial acetic acid for 5 min and dried in an argon gas flow. Fresh shavings of the metal were obtained by filing them off the treated pieces. BiI3 (Merck-Schuchardt, ≥99%) was sublimated twice. All starting materials were stored in an argon-filled glove box (p(O2)/p0 < 1 ppm, p(H2O)/p0 < 1 ppm) directly after their purification. Synthesis of Bi12Rh3Cu2I5 (2): In an argon atmosphere, a stoichiometric mixture of the starting materials with the molar ratio Bi:Rh:Cu:I = 12:3:2:5 was ground in an agate mortar and sealed in a silica ampoule of about 3 mL under vacuum. To obtain a phase-pure powder (Figure S3, Supporting Information), the sample was heated from RT to 420 °C at 200 K h−1 and annealed at that temperature for at least 4 days.
The ampoule was then quenched in water at RT. Larger crystals were obtained by heating the ampoule from RT to 700 °C at a rate of 350 K h−1, holding the temperature for 1-2 h, then cooling quickly to 493 °C at −2 K min−1, followed by slow cooling to 365 °C at −1 K h−1 and annealing at this temperature for at least 3 days. Afterward, the ampoule was quenched in water at RT. This temperature program usually yielded by-products, from which the crystals of the target phase were manually selected based on their morphology. SEM and EDX: SEM was performed using a SU8020 electron microscope from Hitachi equipped with a triple detector system for secondary low-energy and backscattered electrons (U_acc = 2 kV). Figure S2, Supporting Information, shows the layered morphology of the crystals of Bi12Rh3Cu2I5 (2), which usually present numerous terraces perpendicular to the stacking direction of the layers. EDX data were collected using an Oxford Silicon Drift X-MaxN detector (U_acc = 20 kV). A polycrystalline sample was selected from the same batch as the single crystal studied by X-ray diffraction. The sample was fixed on a double-sided adhesive C-pad settled on an aluminum holder. The average of a five-point analysis revealed the chemical composition Bi11.9(4)Rh2.9(2)Cu2.3(2)I4.8(2) (Figure S4, Supporting Information, right). Thermal Analysis: DSC was performed to investigate formation and decomposition processes in the quaternary Bi-Rh-Cu-I system. A Setaram Labsys ATD-DSC device with a k-probe (Ni-Cr/Ni-Al) and Al2O3 as reference were used. The temperature program consisted of two cycles from 20 to 800 to 20 °C (temperature ramp: ±2 K min−1). The samples were either a mixture of the starting materials (Figure 2 […] the sample. This allows various probe geometries and defined probe spacing.
Thereafter, each of the W tips was brought into tunneling contact and subsequently lowered by piezo actuators in feedback-off mode in order to ensure stable ohmic contacts. The resistance values were corrected to calculate the resistivity of the sample. The resistivity was measured at various positions and probe currents (1-100 μA) in order to average out the effect of chemical inhomogeneity. More details about surface-sensitive four-point probe measurements using collinear and square contact assemblies can be found elsewhere. [33]

Supporting Information

Supporting Information is available from the Wiley Online Library or from the author.
Experimental evidence of symmetry breaking of transition-path times

While thermal rates of state transitions in classical systems have been studied for almost a century, the associated transition-path times have only recently received attention. Uphill and downhill transition paths between states at different free energies should be statistically indistinguishable. Here, we systematically investigate transition-path-time symmetry and report evidence of its breakdown on the molecular and meso-scale out of equilibrium. In automated Brownian dynamics experiments, we establish first-passage-time symmetries of colloids driven by femtoNewton forces in holographically-created optical landscapes confined within microchannels. Conversely, we show that transitions which couple in a path-dependent manner to fluctuating forces exhibit asymmetry. We reproduce this asymmetry in folding transitions of DNA-hairpins driven out of equilibrium and suggest a topological mechanism of symmetry breakdown. Our results are relevant to measurements that capture a single coordinate in a multidimensional free energy landscape, as encountered in electrophysiology and single-molecule fluorescence experiments.

Classical thermally activated reactions are ubiquitous in nature and technology, with a wide range of examples including folding transitions of proteins [1][2][3][4] and DNA 5,6, transitions of colloidal particles between optical traps 7, and the dynamics of molecules in membrane channel proteins [8][9][10], artificial nanopores 11,12, and channels 13,14. Depending on the system, these transitions may proceed along a single, multiple, or even a continuum of pathways in phase-space that connect the initial and final states. The question of pathway multiplicity is, for instance, currently debated in the context of protein folding [15][16][17]. The transition-path time τ is defined as the time it takes to travel from one thermodynamic state to another.
The transition-path time τ, and especially its distribution, contains valuable information about the pathway and the underlying system. In a two-state system, the transition-path time τI→II can be measured as follows: whenever the system leaves the area of state I, a stopwatch is triggered. It is stopped when the system either returns to state I or transitions to state II. In the latter case, this constitutes a single realization of the transition path. Importantly, transition-path times τ do not directly determine the rates kI→II and kI←II of the reaction 18,19. This is because rate coefficients account for all prior unsuccessful attempts at leaving the state in addition to the actual time of travel of the successful attempt. Transition rates can therefore be strongly asymmetric if one state is thermodynamically favoured over the other. By contrast, transition-path times are expected to be statistically symmetric in equilibrium, in accord with the principle of microscopic reversibility [20][21][22]. In many systems, it has been challenging to resolve individual transitions. However, technology has now advanced to a point where information about folding events of polymers can be gathered using optical techniques, such as Förster resonance energy transfer (FRET), which has sparked considerable interest in transition-path times [2][3][4]. Time-resolved force spectroscopy based on optical tweezers has been successfully applied in studies of the folding pathways of proteins and DNA 6,[23][24][25][26][27]. Measurements of ribosomal stepping times along RNA have led to insights into the molecular mechanics of gene translation 28. The same technique has also been used to show a transition-path-time symmetry in equilibrium folding and unfolding transitions of DNA-hairpins 6.
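The stopwatch protocol described above can be sketched as a simple scan over a recorded one-dimensional trajectory. The function name, the state boundaries, and the synthetic trajectory below are illustrative assumptions, not part of the paper:

```python
def transition_path_times(x, t, xI, xII):
    """Extract transition-path times tau_{I->II} from a 1D trajectory.

    The stopwatch restarts every time the trajectory is (back) in state I
    (position <= xI) and is read out when state II (position >= xII) is
    reached before returning; only such successful attempts are recorded.
    Illustrative sketch, not the authors' analysis code.
    """
    taus, t_start = [], None
    for xi, ti in zip(x, t):
        if xi <= xI:            # in (or returned to) state I: reset watch
            t_start = ti
        elif xi >= xII:         # reached state II: a successful transition
            if t_start is not None:
                taus.append(ti - t_start)
            t_start = None      # wait until state I is visited again
    return taus

# Synthetic trajectory: one failed attempt, then a successful transition
taus = transition_path_times([0.0, 0.5, 0.0, 0.5, 1.0],
                             [0, 1, 2, 3, 4], xI=0.0, xII=1.0)
print(taus)  # the single successful attempt took 2 time units
```

The returns to state I (the "prior unsuccessful attempts" mentioned above) enter the rate but not the transition-path time.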
Recently, local velocities along folding trajectories of DNA have been measured with high resolution of the folding coordinate, which shed light on the frequency of recrossing events relative to all barrier crossings 29. In electrophysiology and pulse-sensing experiments, molecules are interrogated by voltage-driven transport, often proceeding along a single pathway through membrane channels 30. This technique detects changes in ion flow due to blockage by solutes of interest. However, for small, uncharged molecules (such as some antibiotics), these measurements are often not sensitive to the direction of travel or the orientation of the channel 31. Direction, however, matters in biological membrane channels, which usually have an asymmetric structure. Channel asymmetry can, for instance, give rise to ratcheting effects that rectify diffusive currents under the influence of fluctuating forces 32. Despite their importance, most of the thermodynamic principles of transition-path times have not been studied systematically, especially in experiments outside of thermodynamic equilibrium. We begin by showing experimentally that a pathway symmetry in steady-state potentials is reflected in a robust and measurable symmetry in path times. Counterintuitively, the symmetry 〈τI→II〉 = 〈τI←II〉 implies that steeper potential gradients between states I and II expedite not only downhill but also uphill transitions. In the first part of the study, we demonstrate two flavours of this symmetry: (i) for exit paths from a one-dimensional spatial interval that are conditioned on a particular exit 33,34 (see Fig. 1a), and (ii) for direct transition paths across an interval (see Fig. 1b). Our fully-automated setup consists of holographic optical tweezers (HOT), which create time-constant potential landscapes for colloidal particles within microfluidic channels.
Conceptually, our results show that transition-path times are theoretically well-defined when measured from uninterrupted trajectories between any two points in phase-space, regardless of whether or not these points lie in thermodynamic states defined by deep potential minima (in contrast to the sketch in Fig. 1c-f). In the second part of the study, we show that a breakdown of detailed balance in a coarse-grained, multidimensional phase space can lead to a breakdown of the transition-time symmetry (see Fig. 1f). We demonstrate the breakdown of this symmetry in microscopic transitions in asymmetric, bistable potentials perturbed by fluctuating forces. Then, we show that this breakdown of symmetry extends to the molecular scale. Specifically, we explore the kinetics of folding and unfolding transitions of a DNA-hairpin grafted onto colloidal particles and driven out of equilibrium by telegraphic noise. To the best of our knowledge, properties of transition times under time-dependent forces have not been described experimentally so far.

Results

Uphill and downhill exit-path-time symmetry. We use our HOT setup in conjunction with confining microchannels to physically simulate the escape of a Brownian particle from a cavity, reminiscent of the escape of solutes from membrane channels 35. The movements of solutes such as ions in membrane channels or nanopores often follow thermodynamic gradients. Such a gradient is modelled here with a phase-gradient force 36 f in a laser line trap, as sketched in Fig. 2a. We plot a selection of trajectories of Brownian particles acquired from automated drag-and-drop experiments in Fig. 2b: at t = 0, the particle is positioned at the centre x0 = 0 of a predefined spatial interval within a microchannel. The particle is released, a stopwatch is triggered, and a laser line trap with a prescribed phase-gradient is turned on (see Fig. 1a and Methods). Once the particle leaves the interval, the stopwatch and the measurement are stopped.
We find that the probability density of positions ρ(x) recorded in an ensemble of repeats goes to zero at the interval boundaries x←, x→. These points can therefore be considered absorbing in the Fokker-Planck picture. For each value of the force f, we gathered around 1000 trajectories, of which 950-980 were free of incidents such as other particles entering the channel. We applied both positive and negative forces (see Fig. 2c) to check for a static bias caused, for example, by weak latent flows of water. Within our experimental resolution, we find that the diffusion profile D(x) along the channel is not affected by the applied phase-gradients (see Supplementary Note 2 and Supplementary Figure 2), such that any difference in dynamics must be attributed to the difference in force. The inference method used to estimate the potential U(x) and diffusion profile D(x) is discussed in the Methods section. The central result of this experiment is the equivalence of the mean left and right exit-path times 〈τ←〉, 〈τ→〉, shown in Fig. 2d. Moreover, shorter exit-path times for higher absolute forces |f| indicate a speed-up of both uphill and downhill trajectories. By contrast, exit probabilities behave intuitively: exits against the force (uphill direction) become increasingly unlikely with increasing force magnitude |f|, as shown in Fig. 2e. The theoretical mean exit-path time (black line in Fig. 2d) was obtained from a solution of the mean first-passage time equation D〈τ〉″(x) + (f/γ)〈τ〉′(x) = −1 with boundaries at x← = −L/2 and x→ = L/2, with L = 3.7 µm. We note that L stands for the length of the interval, not the channel (see Methods). Due to the observed exit-time symmetry, the equation for the mean exit time can be solved for either exit x←, x→. Interestingly, this symmetry extends beyond a simple equivalence of the mean. In fact, the distributions of exit-path times agree in the uphill and downhill directions, as shown in Fig. 2f.
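A minimal numerical sketch of this mean first-passage-time equation, assuming a constant diffusion coefficient D and constant drift v = f/γ (parameters illustrative except for the interval length L = 3.7 µm), recovers the textbook zero-force result 〈τ〉(0) = L²/(8D):

```python
import numpy as np

# Finite-difference sketch of the mean first-passage-time equation
#   D*tau''(x) + (f/gamma)*tau'(x) = -1,  tau(-L/2) = tau(L/2) = 0,
# with constant D and drift v = f/gamma (illustrative, not the paper's code).
def mean_exit_time(D, v, L, n=401):
    x = np.linspace(-L / 2, L / 2, n)
    h = x[1] - x[0]
    # Tridiagonal operator D*d2/dx2 + v*d/dx on the interior grid points;
    # the absorbing boundaries enter as tau = 0 at both ends.
    main = np.full(n - 2, -2 * D / h**2)
    upper = np.full(n - 3, D / h**2 + v / (2 * h))
    lower = np.full(n - 3, D / h**2 - v / (2 * h))
    A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
    tau = np.zeros(n)
    tau[1:-1] = np.linalg.solve(A, -np.ones(n - 2))
    return x, tau

# Zero force: the mean exit time from the centre is the textbook L**2/(8*D)
x, tau = mean_exit_time(D=0.1, v=0.0, L=3.7)
print(tau[len(x) // 2], 3.7**2 / (8 * 0.1))
```

For v ≠ 0 the same solver reproduces the shortening of the mean exit time with increasing |f| noted above.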
This holds even for forces of a different nature such as hydrodynamic drag (see Supplementary Note 3 and Supplementary Figure 3). Trajectories that manage to exit against the flow do so at precisely the same drift speed 〈ẋ〉 as the ones that follow the flow. The statistical significance of the similarity between two given cumulative distributions is assessed by the Kolmogorov-Smirnov test. Throughout this study, we require a significance of 0.5. Theoretical distributions of exit-path times shown in Fig. 2f and Supplementary Figure 3a, for example to the left side, ρτ←(t), were obtained from a numerical solution of the Fokker-Planck equation ∂tρ(x, t|x0) = −∂x j(x, t|x0) with j(x, t|x0) = fρ(x, t|x0) − D(x)∂xρ(x, t|x0) denoting the probability current. The initial density of colloid positions ρ(x, t0 = 0) = ρ0(x) was modelled as a sharp peak at the channel centre, x0 = 0. Once j(x, t|x0) is obtained, the exit-path-time distribution is given by ρτ←(t) = j(x←, t|x0)/P←(x0), with x← denoting the x-position at the left boundary 37. Both boundaries were treated as absorbing, i.e. ρ(x←, t) = ρ(x→, t) = 0. The two exit probabilities satisfy P←(f) = 1 − P→(f) 37. ρτ→(t) can be obtained by exchanging x← for x→ and P← for P→.

Uphill and downhill transition-path-time symmetry. The robust symmetry observed in exit-path times is also found in direct transitions between any two points xL, xR in a quasi-one-dimensional microchannel that is filled with an optical landscape. We deliberately choose left (L) and right (R) subscripts here to distinguish transition-path times from exit-path times (see also Fig. 1a, b). The landscape considered here consists of a mixture of a point trap and a line trap with a positive phase-gradient force created by our HOT (see inset in Fig. 3b). We used the same HOT automation routine as before to observe around 500 uninterrupted colloid trajectories.
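The conditional symmetry can also be illustrated with a small Monte-Carlo sketch (all parameters are arbitrary illustrations, not the experimental values): particles released at the centre of an interval with constant drift exit left and right with very different probabilities, yet the conditional mean exit times match.

```python
import numpy as np

# Monte-Carlo sketch of the exit-path-time symmetry: overdamped Brownian
# particles with constant drift v start at the centre of an interval with
# two absorbing boundaries at +-L/2. Conditioned on the exit side, uphill
# and downhill exit times are statistically identical (illustrative only).
rng = np.random.default_rng(2)
D, v, L, dt, n = 0.1, 0.2, 2.0, 1e-3, 2000

x = np.zeros(n)                  # all particles start at the centre
t_exit = np.zeros(n)
alive = np.ones(n, dtype=bool)
while alive.any():
    k = alive.nonzero()[0]       # Euler-Maruyama step for surviving walkers
    x[k] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(k.size)
    t_exit[k] += dt
    alive[k] = np.abs(x[k]) < L / 2

uphill = t_exit[x <= -L / 2]     # exits against the drift (rare)
downhill = t_exit[x >= L / 2]    # exits along the drift (frequent)
print(len(downhill) / n, uphill.mean(), downhill.mean())
```

The exit probabilities are strongly biased by the drift, while the two conditional means agree within sampling error, mirroring Fig. 2d,e.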
The energy potential inferred from this ensemble of particle trajectories is plotted in Fig. 3a. The transition-path times τtr←, τtr→ across the interval shown as a black box in Fig. 3a are identically distributed, as can be seen in Fig. 3b. Based on a spline interpolation of the inferred potential U(x) and a spatially dependent diffusion coefficient D(x), we calculated the theoretical distribution of transition-path times ρτtr←(t). Again, we treat both boundaries xL and xR as absorbing. Following Zhang et al. 38, we compute ρτtr→(t) for an initial density ρ0(x) which is sharply peaked close to the initial exit. For the sake of this example, we choose the direction left to right and thus set x0(ε) = xL + ε. The current density j(xR, t|x0) is obtained as before from the Fokker-Planck equation. We normalize the distribution by the overall probability P→(x0) to exit through xR, assuming x0 as the initial position. Finally, we obtain the distribution of transition times ρτtr→(t) from ρτtr→(t) = lim ε→0 j(xR, t|x0(ε))/P→(x0(ε)). A plot of the cumulative distribution of ρτtr→(t) is shown in Fig. 3b (black). In Fig. 3c we plot the probability of a direct transition across the same interval length, when this interval is continuously moved along the channel. For each position of this interval, the transition probabilities and times are recorded. As can be seen in Fig. 3d, the mean transition-path times calculated in this way in both directions are sensitive to the local force, especially when the transition interval touches the optical point trap. Despite this sensitivity, transition times in both directions are in excellent agreement. The theoretical prediction for the mean transition-path […]

Fig. 1 Conditional exit- and direct transition-path times and pathway multiplicity with and without detailed balance. a The exit-path time τ to a given exit from a spatial interval is defined as the time of first passage of that boundary (dotted line) before other boundaries are reached after initialization within the interval.
b Transition-path times τtr between the boundaries of an interval are first-passage times of trajectories that start from one boundary and directly reach the opposite boundary. Trajectories that return to the same boundary (see grey trajectory) are not counted. As we explain in the text, we deliberately named the transition-path boundaries xL and xR to distinguish them from the exit-path boundaries. c Illustration of (mean) transition pathways (arrows) in potential landscapes with two coordinates and a single pathway.

Breakdown of transition-path-time symmetry. The question that arises is whether and how this symmetry can be broken. In the following section, we describe the effect of external forces f_ext(t) that stochastically switch between two levels +f0 and −f0 with exponentially distributed switching times. This drives the system into a non-equilibrium steady state (NESS). Such two-state switching processes are generally referred to as telegraph noise. The time between two switches is exponentially distributed with a decorrelation rate α, such that 〈f_ext(t + Δt) f_ext(t)〉 ∝ e^(−αΔt) for Δt > 0. We create a NESS on the mesoscale by combining a bistable optical potential, consisting of two point traps with different trap strengths created with our HOT, and randomly sign-switching electrical fields (see Fig. 4a). We set the traps 0.7 µm apart and direct 50% more light to the left trap than to the trap on the right, while operating at an overall laser power of about 50 mW to avoid heat-induced convection. The minima of the two traps correspond to the two states I and II. The transition time is measured over the white area in Fig. 4c. The different trap strengths result in a difference in curvature between the two traps; the transition barrier loses its symmetry with respect to the centre line separating the two states. Indeed, as shown in Fig.
4b, we observe a statistically significant difference between the distributions of transition-path times τI→II and τI←II for α = 0.5 s−1. Interestingly, the difference in mean transition times stays fairly constant over a range of decorrelation rates α, as shown in Fig. 4d. Towards higher noise decorrelation rates α (note the logarithmic scale), the symmetry is restored. For high frequencies of sign switches, the telegraph force approaches a white-noise process and the system should approach the scenario sketched in Fig. 1d. To study further the underlying mechanism that leads to the breaking of this symmetry, we recreate our experiments in one-dimensional Brownian dynamics simulations. The motion of a colloidal particle under the influence of external telegraph forces is well described by the Langevin equation γẋ(t) = −∂xU(x) + f_ext(t) + √(2γkBT) ξ(t), where γ denotes the friction coefficient of a sphere and f_ext(t) = f0 T(t) denotes the force exerted by the electrical field. T(t) represents a random telegraph process that switches between 1 and −1. U(x) corresponds to the free energy and ξ(t) is a Gaussian white-noise process with zero mean and unit variance, 〈ξ(t)ξ(t′)〉 = δ(t − t′). We did not attempt to model every parameter of the experiment quantitatively, but rather to test the generality of the observed split of transition pathways. The distribution of the system state in the NESS is shown in Fig. 4e. The transition pathways indeed split up. The distributions Pi,j of the system state along these pathways in the force × position plane turn out to be visibly different on each leg. Colloids transitioning in one direction will therefore likely experience different force magnitudes along their pathway than colloids transitioning in the opposite direction. The red and blue arrows in the figure indicate the preferred sense of transition direction along the two pathways; detailed balance is indeed broken in this two-dimensional space 40 and transition-path times differ as a consequence (see Fig. 4f).
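A minimal Brownian-dynamics sketch of such a simulation (Euler-Maruyama discretization; the double-well potential and all parameters are illustrative choices, not fits to the experiment) reads:

```python
import numpy as np

# Overdamped particle in an asymmetric double well driven by telegraph
# noise f_ext(t) = f0*T(t). Illustrative sketch, not the paper's code.
rng = np.random.default_rng(0)

def grad_U(x):
    # U(x) = x**4 - 2*x**2 + 0.3*x : bistable with unequal well depths
    return 4 * x**3 - 4 * x + 0.3

gamma, kBT, f0, alpha, dt, nsteps = 1.0, 1.0, 0.5, 0.5, 1e-3, 200_000
noise = np.sqrt(2 * kBT * dt / gamma) * rng.standard_normal(nsteps)
flips = rng.random(nsteps) < alpha * dt   # telegraph switches at rate alpha

x, T_state = -1.0, 1.0                    # start in the left well
xs = np.empty(nsteps)
for i in range(nsteps):
    if flips[i]:
        T_state = -T_state                # random sign switch of the field
    # Euler step of gamma*dx/dt = -U'(x) + f0*T(t) + thermal noise
    x += (-grad_U(x) + f0 * T_state) / gamma * dt + noise[i]
    xs[i] = x

# Transitions between the wells show up as sign changes of x(t)
n_cross = np.count_nonzero(np.diff(np.sign(xs)))
print(n_cross)
```

Binning the trajectory in the (x, T) plane, as in Fig. 4e, reveals the occupation split along the two transition legs.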
Breakdown of transition-path-time symmetry in DNA-hairpins. Having established the broken symmetry on the mesoscale, we now demonstrate the generality of the effect with an experimental realisation on the molecular scale. We measure folding and unfolding transition times of short 20-bp DNA-hairpins under the influence of telegraph forces using an optical tweezers-based force spectroscopy setup 41 (see Fig. 5a). The hairpin is grafted onto a colloid at each end. While one colloid is firmly attached to a pipette, the other colloid is held in force-measuring optical tweezers and subject to a feedback-controlled force. If the force is kept constant in time, the hairpin thermally transitions between two main states, folded (F) and unfolded (U), which differ in molecular extension and thus in trap position λ. In this paper, we describe experiments performed using a non-equilibrium protocol. Here, the system is subject to a telegraph force and each of the two (U, F) states splits into a doublet: high (F+, U+) and low force (F−, U−). This is shown in Fig. 5b using the density Pi,j of states in a coarse-grained space spanned by the force f measured by the optical tweezers and the trap position λ. Importantly, as the arrows indicate, the telegraph force not only leads to a splitting of states, but also causes transition pathways to diversify. The system is more likely to unfold F→U during extended periods of high force (+) than during periods of low force (−). As a consequence, transitions from state F− to U+ through U− (red arrows in Fig. 5b, c) are more likely than transitions from state U+ to F− through F+ (blue arrows in Fig. 5b, c). A typical trajectory of the system is shown in Fig. 5c, highlighting transitions from U+ to F− (blue) and vice versa (red). This split in pathways results in the visible difference between the cumulative distributions of folding (red) and unfolding (blue) transition times in Fig. 5d.
However, this is not necessarily the case: the difference between back and forth transition-path times can become arbitrarily small under certain conditions that we describe in Supplementary Note 4. We conclude that a transition-path-time asymmetry points to a lack of information about the system, if not a breakdown of detailed balance. Overall, the transition-path times of our DNA-hairpin are significantly longer than previously reported values 6, because our system transitions via intermediate states (U− or F+). The time spent in the corresponding minima affects the overall transition-path time in a path-dependent way and thus amplifies the asymmetry. By contrast, on the mesoscale, transitions are slow enough that we could resolve the asymmetries shown in Fig. 4b, which directly originate from asymmetries in the barrier shape. We conclude that the overarching topological picture indeed applies to the molecular scale; the dimensionality of the space of folding is effectively increased by one due to the external coloured noise. In this enlarged phase-space, a breakdown of detailed balance results in a diversification of transition pathways, which causes a transition-path-time asymmetry. Importantly, all participating degrees of freedom, including internal variables of external forces, have to be considered in the analysis.

Discussion

In our study, we present experimental evidence of a fundamental transition-path-time symmetry in Brownian transitions and of its breakdown on the meso- and molecular scale under the influence of stochastic external forces. In accord with intuition, we find that uphill transitions become less likely as the potential gradient between the initial and final state becomes steeper. Uphill and downhill transition-path times, however, are identically distributed under steady-state conditions. Conceptually, we show that transition-path times connecting any two points in the space of the system are thermodynamically well-defined quantities.
Indeed, we find that in a time-constant force landscape, measured transition-path times agree with theoretical predictions that assume absorbing boundary conditions at both ends of the transition interval. It is important to note that the boundaries can be located anywhere in the potential landscape and do not need to coincide with minima of the potential. In contrast to transitions driven by thermal forces, we find that the transition-path-time symmetry can break down under the influence of coloured noise. The additional timescale of the external telegraph forces in our systems changes the topology of the transition dynamics. We uncover a diversification of transition pathways in the extended phase-space, which includes the external force. Back and forth reactions follow, on average, different paths, breaking detailed balance and the transition-path-time symmetry. Specifically, we show that transition-path times of a colloid in an asymmetric double-well potential become measurably asymmetric when perturbed by randomly switching electrical fields. The asymmetry is sensitive to the frequency of field reversals and disappears for frequencies that are much higher than the inverse barrier crossing time. Similarly, a DNA-hairpin that is driven out of equilibrium by a force that switches randomly between two levels exhibits asymmetric folding/unfolding path times. The observed asymmetry in transition-path times, however, is a result of an implicit projection of the system state onto a one-dimensional reaction coordinate. A breakdown of transition-path-time symmetry therefore does not imply a breakdown of microscopic reversibility. We note that all systems studied here are overdamped and effects related to inertia can be neglected. Our results have direct implications for the study of transitions in membrane channels or nanopores. Translocation times of solutes, like antibiotics, through membrane channels are of interest in electrophysiological measurements 30,31.
Due to the lack of any direct optical access in these experiments, the shape of the current signal during translocation is the only source of information about the channel-solute interaction. Our work shows that it should be possible to infer the direction of travel solely from first-passage times. A reversal of the electrical potential in such an experiment should result in a distinct translocation time distribution if the solute-channel interaction landscape is asymmetric. The combination of a (sign-flipped) field and interaction potential can be interpreted as the limit of infinite switching times in Fig. 4, which amounts to the simple case of back and forth translocating solutes experiencing different time-constant force landscapes. Furthermore, in studies of molecular motors, transition-path-time measurements could enable one to discriminate between power-stroke and ratchet mechanisms, beyond thermodynamic considerations 42. Arguments based on first-passage-time symmetries have already been used to question the thermodynamic consistency of interpretations of kinesin motility experiments 43,44. Moreover, in systems driven by ratcheting 14,[45][46][47], unbiased coloured noise rectifies Brownian dynamics around points of asymmetry of the energy landscape. The asymmetry of transition-path times demonstrated in this study could be used as experimental evidence of this effect. Transition-path-time asymmetries could therefore be helpful in identifying and quantifying non-equilibrium dynamics in biological and molecular systems and complement recently discussed techniques such as broken detailed balance in active matter 48 and filament fluctuations 40. We note that a breakdown of the transition-path-time symmetry can be diagnosed by tracking only one degree of freedom, whereas diagnosing a breakdown of detailed balance requires a minimal dimension of two in continuous coordinates 48.
This might be particularly helpful in FRET experiments, where usually only a single degree of freedom, the FRET efficiency, is accessible. The path-time symmetries we explore come in two flavours: an exit-path-time symmetry (see Fig. 2) and a transition-path-time symmetry (see Fig. 3). Recent theoretical advances 34,49 point to a common origin of both flavours, which lies in a symmetry of the first-passage time of the entropy produced during the transition. Since a breakdown of transition-path-time symmetry is a sufficient, but not necessary, condition for non-equilibrium dynamics, it is in general not possible to deduce entropy production from observed transition-path times, as we discuss in Supplementary Note 4 and Supplementary Figure 4. Finally, our results on uphill and downhill first-passage times of Brownian motion are in interesting contrast to a recently reported velocity asymmetry observed in the flow-dominated regime in asymmetrically-shaped microfluidic channels 50. Crucially, the reported asymmetry 50 is not an uphill/downhill asymmetry in the sense described here, since all observed transitions occurred only in the direction of the applied electrical field. However, a careful investigation of the dynamics in the crossover regime of weaker forces would shed light on how such a breakdown of time-reversal symmetry might arise in overdamped driven motion.

Methods

Microfluidic experiments. Optical tweezers allow for precise control of Brownian particles on the micro-scale and exert both conservative and non-conservative forces. Scattering of photons results in a transfer of photon momentum in the direction of beam propagation and thus gives rise to a mechanical, path-dependent force 51. By contrast, forces arising from the gradient of light intensity within the laser beam are path-independent and can therefore be interpreted as resulting from an energy potential 52.
In HOTs, computer generated holograms are displayed on spatial light modulators (SLM) to attain almost complete control over the shape of the focal intensity pattern 36,53 . The technique is highly flexible and permits the creation of several, independent focal shapes at once. Moreover, the phase of the wave front of each individual trap in the focal plane can be addressed such that phase-gradient forces can be applied to particles 54 . The resultant forces are non-conservative. In line-shaped traps, the phase-gradient force near the line centre is approximately uniform along the extended direction of the trap. Optical scattering forces lie in the femtoNewton range 55 and are thus of ideal magnitude to bias the Brownian motion of colloidal particles over a few micrometres as sketched in Fig. 2a. Horizontal phase-gradients can be realized in HOTs by laterally shifting the SLM pattern, which imparts a phase-offset onto the beam. The relation between the force exerted by the phase-gradient f and the degree of the shift p turns out to be close to linear for small shifts |p| ≤ 0.4, which makes the control of forces f significantly easier. The calibration process is described in greater detail in Supplementary Note 1 and Supplementary Figure 1. Our SLM-control software has a response time of a few microseconds 56 , which permits us to reliably automate drag-and-drop experiments in a feedback system. The position of all colloids is identified on-line by a peak detection algorithm. The centroid of a small box around the presumed location of the colloid is then calculated from the background subtracted image to refine the position estimate. The optical setup and the microfluidic chip have been described before 35,57 . In contrast to previous publications, we use a Mikrotron MC1362 with adjustable frame rate. The intervals in which measurements are conducted were chosen to be centred in the channel. 
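The two-stage position refinement described above (peak detection followed by a centroid of a small box around the presumed location, computed on the background-subtracted image) can be sketched as follows. This is a minimal illustration, not the authors' tracking code; the function name, box size, and clipping choice are our own:

```python
import numpy as np

def refine_position(image, guess, box=5, background=None):
    """Refine an integer (row, col) particle-position estimate by taking the
    intensity centroid of a (2*box+1)^2 patch around the presumed location."""
    r, c = guess
    sub = image[r - box:r + box + 1, c - box:c + box + 1].astype(float)
    if background is not None:
        sub = sub - background        # remove the camera background level
    sub = np.clip(sub, 0, None)       # negative residuals carry no weight
    total = sub.sum()
    if total == 0:
        return float(r), float(c)     # flat patch: keep the initial guess
    rows, cols = np.mgrid[-box:box + 1, -box:box + 1]
    dr = (rows * sub).sum() / total   # sub-pixel centroid offset, rows
    dc = (cols * sub).sum() / total   # sub-pixel centroid offset, cols
    return r + dr, c + dc
```

Applied to a bright, roughly symmetric spot, this recovers the particle centre to a small fraction of a pixel, which is what makes the on-line feedback loop feasible.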
The microchannels used to confine the colloidal particles all have a length of 4 µm and height of 1 µm. The drag-and-drop automation algorithm executes the following steps: (i) The colloid is initialized precisely in the centre of the channel using a HOT point trap. (ii) The point trap holding the colloid is then turned off, a line trap with a prescribed phase-gradient parameter p is turned on and a stopwatch is triggered. (iii) Once the particle leaves a predefined interval, the experiment is stopped and the cycle is repeated with a new phase-gradient parameter. A similar automation procedure was chosen for the two-state experiment with electrical fields that switch polarity randomly, where we varied the rate of switches α. The colloidal particles consist of polycarbonate with a COOH-functionalized surface, with a diameter of 0.5 µm, and these were purchased from Polysciences Inc. The particles were suspended in 0.5 × TRIS-EDTA buffer at pH 8 and additionally 3 mM KCl to screen potential charges on the walls of the channel. Prior to each experiment, we ascertained that no hydrodynamic flow was present by comparing left and right exit probabilities of colloidal particles initialized in the centre of the channel. Electrical fields were created by applying a LabView-controlled voltage signal to silver electrodes that we connected to the far ends of the inlets of our microfluidic chip. Inference of potentials and diffusion coefficients. The forces, f, that correspond to each phase-gradient parameter p in Fig. 2d-f and Supplementary Figure 3a were inferred from the number of left and right exits using the relations P← and P→ for exit-path times described in the results section. The excellent agreement of theoretical with experimental exit-path times shows the self-consistency of this approach. We furthermore inferred forces and diffusion coefficients in all microfluidic experiments from Gaussian fits of the form

P(Δx | x) = (4πD(x)Δt)^(−1/2) exp[−(Δx − f(x)Δt/γ)² / (4D(x)Δt)]. (4)

We fitted Eq. (4) to distributions of measured step lengths Δx, which we parcel into subintervals along the channel. Δt in Eq. (4) is the inverse of the frame rate, which was set to 80 Hz in all microfluidic experiments. We calculated the local friction coefficient γ using the Einstein-Stokes equation D = k_B T/γ. Experiments were performed in a section of the channel where the profile of the diffusion coefficient D(x) remains roughly constant 58, as shown in Supplementary Figure 2a. Due to increased hydrodynamic drag near obstacles, the diffusion coefficient is sensitive to the distance of the colloid to the channel walls 58. Conversely, hydrodynamic drag is reduced close to the entrances of the channel. All one-dimensional diffusion coefficients reported here can be interpreted as averages over y- and z-coordinates. The free energy potentials U(x) in Figs 2-4 were calculated from the force estimates described above by integrating from the left end x_L of the interval of interest, U(x) = −∫_{x_L}^{x} f(x′) dx′. Brownian dynamics simulations. The Brownian dynamics simulation was set up to qualitatively model the bistable-dynamics experiment. We used a bistable, asymmetric potential of the form U(x) = (a/4)x⁴ + (b/2)x² + cx, where c controls the asymmetry around x = 0. The coefficients were set to a = 64ΔU_1/L⁴, b = −aL²/4, and c = 2ΔU_2/L, with ΔU_1 = 5 k_B T, ΔU_2 = 2 k_B T and L = 1 µm. We set the diffusion constant to D = 0.15 µm²/s, which is close to the value a 500 nm colloid would have in a microchannel (see Supplementary Figure 2). The friction coefficient was obtained, again, using the Einstein-Stokes relation γ = k_B T/D. The decorrelation time of the telegraph force was set to 2 s, while the magnitude of the force change was f_0 = ±82 fN. DNA-hairpin folding and unfolding. DNA-hairpin experiments were performed with miniaturized, high-stability optical tweezers equipped with a force-feedback system.
The setup uses two counter-propagating focused laser beams (λ = 845 nm, P = 200 mW) to create a single optical trap. The design of the microfluidic chamber has been described before 41 . Force measurements are based on the conservation of linear momentum, and were carried out using Position Sensitive Detectors (PSD) to measure the deflection of the laser beam after interaction with the trapped object 59 . The position of the trapping beam is monitored by diverting 8% of each laser beam to a secondary PSD. Altogether the instrument has a resolution of 0.1pN and 1 nm at a 1 kHz acquisition rate. The telegraph forces in Fig. 5 are directly exerted using the optical trap force feedback. In our experiments, a single DNA hairpin is tethered between two polystyrene beads which are either optically manipulated or trapped by air suction onto the tip of a micropipette. The constructs used in the experiments reported in this paper have short (29 bp) dsDNA handles. The two different handles are differentially labelled with either biotins or digoxigenins. In this way, each handle can selectively bind to either streptavidin (1.87 µm, Spherotech) or anti-digoxigenin coated beads (3.0-3.4 µm Kisker Biotech). The synthesis protocols for short (20 bp) DNA hairpins have been previously described 60 . All experiments were performed at 25°C in a buffer containing: 10 mM Tris, 1 mM EDTA, 1 M NaCl, 0.01% NaN 3 (pH 7.5). Code availability. Custom code used in the current study is available from the corresponding author on reasonable request. Data availability The data used in the current study are available from the corresponding author on reasonable request.
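The Brownian dynamics simulation described in Methods can be sketched with the stated parameters (bistable tilted quartic potential, D = 0.15 µm²/s, telegraph force of ±82 fN with a 2 s decorrelation time). Below is a minimal Euler-Maruyama sketch, working in units of k_B T and µm; the conversion of 82 fN (assuming k_B T ≈ 4.11 pN nm) and the flip-rate convention 1/(2·TAU) are our own modeling assumptions:

```python
import numpy as np

KBT = 1.0              # energies measured in units of k_B T
L = 1.0                # length scale of the bistable potential, µm
DU1, DU2 = 5.0, 2.0    # barrier height and tilt, in k_B T
a = 64 * DU1 / L**4
b = -a * L**2 / 4
c = 2 * DU2 / L
D = 0.15               # diffusion constant, µm^2/s
GAMMA = KBT / D        # Einstein-Stokes friction coefficient
F0 = 19.9              # |f_0| = 82 fN in k_B T/µm, assuming k_B T = 4.11 pN nm
TAU = 2.0              # decorrelation time of the telegraph force, s

def force(x):
    """Deterministic force -dU/dx of the bistable, tilted quartic potential."""
    return -(a * x**3 + b * x + c)

def simulate(t_max, dt=1e-3, seed=0):
    """Euler-Maruyama integration of overdamped dynamics under a dichotomous
    (telegraph) force of magnitude F0; flipping at rate 1/(2*TAU) makes the
    force autocorrelation decay on the timescale TAU (an assumed convention)."""
    rng = np.random.default_rng(seed)
    n = int(round(t_max / dt))
    x = np.empty(n)
    x[0] = -L / 2                     # start in the left well
    f = F0                            # current state of the telegraph force
    flip_p = dt / (2 * TAU)           # per-step flip probability
    noise = rng.standard_normal(n) * np.sqrt(2 * D * dt)
    for i in range(1, n):
        if rng.random() < flip_p:
            f = -f
        x[i] = x[i - 1] + (force(x[i - 1]) + f) / GAMMA * dt + noise[i]
    return x
```

A long trajectory hops between the two wells near ±L/2, with the telegraph force modulating the relative stability of the wells, qualitatively reproducing the bistable-dynamics experiment.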
Influence of MRI-based bone outline definition errors on external radiotherapy dose calculation accuracy in heterogeneous pseudo-CT images of prostate cancer patients. Abstract Background. This work evaluates influences of susceptibility-induced bone outline shift and perturbations, and bone segmentation errors on external radiotherapy dose calculation accuracy in magnetic resonance imaging (MRI)-based pseudo-computed tomography (CT) images of the male pelvis. Material and methods. T1/T2*-weighted fast gradient echo, T1-weighted spin echo and T2-weighted fast spin echo images were used in bone detection investigation. Bone edge location and bone diameter in MRI were evaluated by comparing those in the images with actual physical measurements of fresh deer bones positioned in a gelatine phantom. Dose calculation accuracy in pseudo-CT images was investigated for 15 prostate cancer patients. Bone outlines in T1/T2*-weighted images were contoured and additional segmentation errors were simulated by expanding and contracting the bone contours with 1 mm spacing. Heterogeneous pseudo-CT images were constructed by adopting a technique transforming the MRI intensity values into Hounsfield units with separate conversion models within and outside of bone segment. Results. Bone edges and diameter in the phantom were illustrated correctly within a 1 mm-pixel size in MRI. Each 1 mm-sized systematic error in bone segment resulted in roughly 0.4% change to the prostate dose level in the pseudo-CT images. The prostate average (range) dose levels in pseudo-CT images with additional systematic bone segmentation errors of −2 mm, 0 mm and 2 mm were 0.5% (−0.5–1.4%), −0.2% (−1.0–0.7%), and −0.9% (−1.8–0.0%) compared to those in CT images, respectively, in volumetric modulated arc therapy treatment plans calculated by Monte Carlo algorithm. Conclusions. Susceptibility-induced bone outline shift and perturbations do not result in substantial uncertainty for MRI-based dose calculation. 
Dose consistency of 2% can be achieved reliably for the prostate if heterogeneous pseudo-CT images are constructed with ≤ ±2 mm systematic error in the bone segment. field inhomogeneities and gradient non-linearities by conducting examinations with synthetic phantom materials. The magnetic susceptibility variations between different body tissues can result in additional geometric errors by shifting and distorting the tissue boundaries in the patient MR images [13,[19][20][21][22]. Kapanen et al. quantified magnitudes for both types of distortions with the 1.5 T imager GE Optima MR450w (GE Medical Systems Inc., Waukesha, WI, USA), but the study did not evaluate the potential bone outline shift stemming from the substantially different susceptibility values of bone cortex and surrounding soft tissues [2,[19][20][21][22][23]. The susceptibility-induced bone outline shift and perturbations could potentially reduce the quality of so-called pseudo-CT images. These images are constructed from MR images to provide electron density information for MRI-based RTP [4][5][6][7][8][9][10][11][12][13]24]. For the pelvis, the pseudo-CT image construction techniques necessitate a bone segment in order to separately present the high-density bony tissues and the low-density soft tissues [4][5][6][7][8][9][10][11][12][13]. The bone outline contouring accuracy might be especially important with the recently developed dual model Hounsfield unit (HU) conversion technique, because the method relies on separate conversion models within and outside of the bone segment, transforming intensity values of a single T1/T2*-weighted MR image series into HUs [13]. Under- or over-segmentation of the bone outline could cause either substantial underestimation of the cortical bone HUs or major misrepresentation of the adjacent soft tissues, respectively [13]. Nevertheless, previous studies have not evaluated the influence of bone segmentation errors on dose distribution [12,13].
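The dual model HU conversion idea, applying separate intensity-to-HU models inside and outside the bone segment, can be illustrated schematically. The linear conversion functions below are hypothetical placeholders for demonstration only; the actual conversion curves are those published by Korhonen et al. [13]:

```python
import numpy as np

def pseudo_ct(mr_intensity, bone_mask, soft_model, bone_model):
    """Illustrative dual-model HU conversion: apply one intensity-to-HU model
    inside the bone segment and a different one outside it. `soft_model` and
    `bone_model` are callables; the real curves are published elsewhere [13]."""
    hu = np.where(bone_mask,
                  bone_model(mr_intensity),
                  soft_model(mr_intensity))
    return hu.astype(np.int16)

# Hypothetical linear conversions, chosen only to make the idea concrete:
soft = lambda i: -120 + 0.2 * i    # soft tissue mapped near water (~0 HU)
bone = lambda i: 1500 - 2.0 * i    # cortical bone: low MR signal -> high HU

img = np.array([[600., 100.], [650., 80.]])     # toy MR intensities
mask = np.array([[False, True], [False, True]]) # toy bone segment
print(pseudo_ct(img, mask, soft, bone))
```

The point of the construction is visible even in this toy: a systematic error in `mask` moves voxels between the two conversion branches, which is exactly how segmentation errors propagate into HU, and hence dose, errors.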
Quantification of the relation between bone contouring precision and dose calculation accuracy is essential in order to evaluate the feasibility of the dual model HU conversion technique for a routine clinical RTP workflow and to quantify acceptance limits for bone segmentation. The current study evaluates the influence of systematic bone outline definition errors in MR images on external radiotherapy dose calculation accuracy in the dual model HU conversion pseudo-CT images of prostate cancer patients. This includes evaluation of potential susceptibility-induced bone outline shift and perturbations with a dedicated phantom, and quantification of dose consistency in the patient pseudo-CT images constructed from MR images with variable-sized bone segments. Subsequently, the work aims to determine the sufficient bone outline geometric accuracy for the pseudo-CT image construction to reach the dose calculation accuracy recommendations of 1% and 2% [25,26]. Bone and gelatine phantom In order to evaluate the potential susceptibility-induced bone outline shift and perturbations in MR images with respect to actual physical bone outline locations, and to conduct the research with patient-like materials, a dedicated phantom was constructed by using fresh bones of a wild deer and gelatine made out of pig skin [12]. The phantom construction schematic is presented in the Supplementary Figure 1 (to be found online at http://informahealthcare.com/doi/abs/10.3109/0284186X.2014.929737). The investigated femur and tibia were composed of a medullary cavity and an exterior cortical bone layer of approximately 3 mm. The bone diameters were on average roughly 3 cm for the femur and 2 cm for the tibia. Before examinations, the soft tissues located exterior to the bone cortex were removed without damaging the bone surface, and the bones were attached into a 30 × 35 × 20 cm³ PMMA box. The bone positioning was parallel to the longitudinal imaging direction.
The femur was attached approximately 1.5 cm lateral to the box middle axis, roughly 2 cm above the box bottom wall. The tibia was positioned 12 cm lateral to the middle axis to simulate the distance of patient femoral bones from the magnet isocenter. The box was filled with a mix of 8 l of warm water and 400 g of solid pig skin-based gelatine. The box was placed in a refrigerator to solidify the gelatine. Predefined locations for the upcoming measurements were marked onto the box walls with laxative pills. The authors affirm that the reported research was conducted in accordance with the ethical policies of Helsinki University Central Hospital (HUCH, Helsinki, Finland). Patients Dose calculation accuracy in the pseudo-CT images was investigated for 15 prostate cancer patients. Five of the patients were randomly selected from the patient database of HUCH Cancer Center. Ten of the patients were adopted from a previous study to complement the analysis on dose calculation accuracy in the pseudo-CT images and because this test group included patients with varying anatomies [13]. The study did not have any effect on the imaging and RTP of these patients. The patients were imaged with both MRI (target delineation) and CT (dose planning and image guidance). Imaging MRI and CT were performed following the clinical protocols of HUCH Cancer Center using the same patient positioning and fixation as in radiotherapy [2]. The patient imaging workflow and parameters were adopted for phantom imaging. Table I presents the applied MRI sequence parameters. MRI was performed with the 1.5 T imager GE Optima MR450w. Five MR image series obtained by three different sequences were included in the study. Three of the image series, i.e. in-phase, out-of-phase, and water-only images, were constructed using a T1/T2*-weighted three-dimensional (3D) fast dual gradient echo (FGE) sequence [2].
T1-weighted 2D spin echo (SE) and T2-weighted 3D fast spin echo (FSE) sequences were applied to obtain the other two investigated image series [2]. The MR images were corrected for signal intensity inhomogeneity and geometrical distortion by using the imaging software tools PURE and GradWarp3, respectively [2,13]. The CT imaging was carried out with the four-slice CT scanner GE LightSpeed RT (GE Medical Systems Inc., Waukesha, WI, USA). The unit was operated at 120 kVp. The image slice thickness was 1.25 mm and the pixel size was 0.98 mm [2]. Figure 1 illustrates the bone contrast in the obtained images of the phantom. Measurements for the bone diameter and bone outline location in the phantom The accuracy of the bone edge location and the bone diameter in the MR images was evaluated by comparing the measured pixel-based distances with the actual physical measurements. The bone diameter and the distances from the bone edges to the phantom walls were measured in both the phase- (lateral) and frequency-encoding (vertical) directions. Additionally, the examinations were designed to take into account errors in the sums of the distances, which could have revealed whether the possible errors in the distances were compensated for, or whether they accumulated, thereby causing amplified errors. The measurements were performed at four predefined locations. Three of the locations included the femur within ±2 cm from the magnet isocenter and one included the tibia 12 cm lateral from the magnet isocenter. The physical measurements were taken by using a caliper and dipsticks. The corresponding distances in the images were measured with an RTP system (Eclipse 10.0, Varian Medical Systems Inc., Helsinki, Finland). Although the dual model HU conversion technique relies solely on the T1/T2*-weighted in-phase MR image, the additional images were included in the study because these images may prove useful for bone segmentation, if the bone outline is presented accurately.
Additionally, CT images were included in the examinations to evaluate whether the bone outline location accuracy in CT images is substantially superior to that in MR images. With each image series, the criterion for the bone edge was determined by measuring the pixel values representing the cortical bone and those representing the gelatine, and setting a threshold pixel value in between. By considering the image pixel size and the intended segmentation accuracy for RTP, both physical and image-based measurements were rounded to the nearest 0.5 mm. Bone segmentation in patient MR images, pseudo-CT construction and dose calculation Bone outlines in the patient T1/T2*-weighted in-phase MR images were contoured carefully to achieve bone segments for pseudo-CT construction. Furthermore, the bone contours were intentionally expanded or contracted with 1 mm spacing to obtain pseudo-CT images constructed from MR images with additional systematic errors in the bone segment. The pseudo-CT images were constructed individually for each patient with each of the bone segments by adopting the dual model HU conversion technique. Korhonen et al. have described the method in detail earlier [13]. The Supplementary Video-clip 1 (to be found online at http://informahealthcare.com/doi/abs/10.3109/0284186X.2014.929737) shows the image transformation from the MR image to the pseudo-CT image. Figure 2 illustrates examples of the constructed pseudo-CT images with variable-sized bone segments. The influence of bone segmentation accuracy on the dose level in the pseudo-CT images was evaluated by comparing the planning target volume (PTV, including prostate and seminal vesicles) and organs-at-risk (OAR, the rectum and the bladder) dose-volume histogram (DVH) parameters in the pseudo-CT images with those in the standard CT images. The dose distribution differences in the pelvis were evaluated locally with the Gamma index method [27].
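Expanding or contracting a binary bone segment in 1 mm steps, as done here to simulate systematic contouring errors, can be sketched with morphological operations. This is a minimal sketch assuming the 0.98 mm CT pixel size reported above; the function name and the use of scipy's default structuring element are our own choices:

```python
import numpy as np
from scipy import ndimage

def perturb_segment(bone_mask, error_mm, pixel_mm=0.98):
    """Expand (positive error) or contract (negative error) a binary bone
    mask to emulate a systematic contouring error of `error_mm`.
    The number of dilation/erosion passes approximates error_mm in pixels."""
    n = max(1, round(abs(error_mm) / pixel_mm))   # structuring-element passes
    if error_mm > 0:
        return ndimage.binary_dilation(bone_mask, iterations=n)
    if error_mm < 0:
        return ndimage.binary_erosion(bone_mask, iterations=n)
    return bone_mask.copy()
```

Each perturbed mask then replaces the carefully contoured segment in the pseudo-CT construction, moving a ring of voxels between the bone and soft-tissue conversion models.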
To minimize uncertainty in the dose comparisons, the same isocenter and PTV positioning in the CT and pseudo-CT images were obtained by MR-to-CT image co-registration (rigid registration relying on mutual information within a roughly 15-cm diameter rectangular-shaped volume-of-interest positioned at the image center; MIM Software Inc., version 5.4, Cleveland, OH, USA) before constructing the pseudo-CT images. Moreover, the volumes in which the patient body in the pseudo-CT image did not fill the body in the CT image were set as water-equivalent, and the body volumes in the pseudo-CT image located exterior to the body outline in the CT image were assigned as air-equivalent. Treatment planning was performed using 6 MV photons with 360° volumetric modulated arc therapy (VMAT) and with seven-field intensity-modulated radiation therapy (IMRT). The dose optimization process was conducted by minimizing high dose volumes in healthy tissues, especially at the organs at risk (such as the rectum and the bladder wall), without compromising the target dose. The treatment plans were calculated in standard CT images with the X-ray Voxel Monte Carlo algorithm (XVMC, version 1.6, with 0.2 cm grid size and 0.5% MC uncertainty, electron density scaled medium, Monaco 3.20.01, Elekta AB, Stockholm, Sweden) for the VMAT plans, and with the anisotropic analytical algorithm (AAA, version 11.0.31, with 0.1 cm grid size, electron density scaled water, Eclipse 11.0) for the IMRT plans. The treatment plans were copied and recalculated in the pseudo-CT images. Figure 3 illustrates the spatial accuracy of bone outline presentation in the images. The determined bone diameter in the MR images was always within 1.0 mm compared to the physically measured diameter.
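The Gamma index evaluation mentioned above combines a dose-difference criterion with a distance-to-agreement criterion [27]. The following is a simplified one-dimensional sketch, not the clinical implementation; the 2%/2 mm defaults and the local-dose normalisation are our own assumptions:

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, dx_mm, dd=0.02, dta_mm=2.0):
    """Simplified 1D local gamma index. For each reference point, search all
    evaluated points for the minimum combined dose-difference /
    distance-to-agreement metric; gamma <= 1 means the point passes."""
    x = np.arange(len(dose_ref)) * dx_mm
    gammas = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2           # distance criterion
        diff2 = ((dose_eval - di) / (dd * di)) ** 2  # local dose criterion
        gammas[i] = np.sqrt(np.min(dist2 + diff2))
    return gammas
```

For identical distributions the gamma values are zero everywhere; a uniform 1% dose offset under a 2%/2 mm criterion yields gamma = 0.5, i.e. a full pass, which matches the intuition behind the 2% dose-consistency goal discussed in this work.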
In the distances from the bone edges to the phantom walls, the maximum errors were 1.5 mm, which were obtained in two of the 80 measurements (with the T1 SE images). Table II presents the prostate PTV DVH parameter (PTV volumes 95%, 50% and 5%) differences between dose distributions in the standard CT images and in the pseudo-CT images. Without additional errors in the bone segment, the PTV mean dose level inconsistencies were on average (and in the worst cases) −0.2% (−1.0–0.7%) and 0.0% (−0.7–0.8%) in VMAT and in IMRT treatment plans, respectively. With additional systematic bone segmentation errors of −3 mm, −2 mm, …, +3 mm, the PTV mean dose level inconsistencies were up to 1.6%, 1.4%, 0.9%, −1.3%, −1.8%, and −2.2%, respectively, in VMAT plans, and 2.1%, 1.8%, 1.2%, −1.1%, −1.8% and −2.2%, respectively, in IMRT plans. The OAR DVH parameter differences were detectable mainly only in high dose volumes, in which the differences were similar to those quantified for the PTV. The dose distribution comparisons with the Gamma index method are presented in Supplementary Table I. Discussion The bone outline measurements indicated that the bone edges and the bone diameter in the tested MR images are presented precisely within the 1 mm pixel size in both of the encoding directions. The error ranges between MR image sets obtained by different sequence parameters were relatively similar, suggesting that any of the investigated images could be adopted for bone contouring. Moreover, the uncertainties in bone diameter and in bone edge location were roughly within similar ranges in MR images and in CT images, indicating that the bone edges can be presented approximately as accurately in the pseudo-CT images as in the conventional planning CT images, if the MRI-based bone contouring is performed precisely. Hence, the susceptibility-induced bone outline shift and perturbations are not restrictive issues for precise bone outline contouring and for accurate pseudo-CT image construction.
In this study it was essential to use real fresh bones instead of adopting tissue-representative phantom materials, and that the reference measures were obtained by actual physical measurements in a stable phantom. Image intensity values of the deer bones and the gelatine ascertained the similar presentation of these materials with the human bony and soft tissues, respectively. The susceptibility of gelatine can be expected to be similar to soft tissues by considering the material properties [28]. The obtained bone outline uncertainty was roughly within a similar level to the theoretical values of susceptibility-related geometrical distortion for the applied imaging parameters, such as the frequency encoding gradient strength (5 mT/m in T1/T2*-weighted imaging) [2,13,[20][21][22][23]. Although local susceptibility-induced distortions are unique to individual anatomies and may be different in complex patient anatomy, it is likely that the distortion magnitudes would not be substantially different to those quantified in the current research. The reported bone outline presentation accuracy includes potential sub-mm-sized uncertainties stemming from, for example, system-related distortion, imaging voxel size and software capability of presenting the pixel values at tissue interfaces. In these circumstances, quantification of bone outline position more precisely than with the applied 0.5 mm scaling would have been inappropriate. Moreover, specification of possible sub-mm differences is irrelevant considering the goal of the investigation. This research focused on determining the bone outline accuracy only with MR images obtained by the specific MR platform and sequence parameters. Thus, before adopting any sequences for MRI-based RTP, a verification procedure should be conducted to ensure that the geometric accuracy in the obtained MR images is sufficient, especially if the scanner and the sequence parameters are different to those applied in the present research. A major limiting factor for using some MR sequences for bone contouring can be the obscure appearance of the cortical bone in the images and the bony tissue definition from these images. This study complemented previous research by quantifying dose calculation accuracy in the dual model HU conversion pseudo-CT images of prostate cancer patients with variable-sized bone segments [13]. This was essential in order to determine the level of needed segmentation accuracy providing sufficient dose calculation accuracy in the pseudo-CT images for a routine clinical MRI-based RTP workflow. This work underlined that the ultimate 1% goal of dose consistency between the actual CT and the pseudo-CT images can be reached reliably if the pseudo-CT images of prostate cancer patients are constructed with precisely contoured bone segments and with the dual model HU conversion technique [13,26]. According to Table II, each 1 mm-sized systematic error in the bone outline contour results in approximately 0.4% change to the average prostate PTV dose level in the pseudo-CT images. The systematic bone outline contour error generally decreased the dose calculation accuracy, but with some patients the dose consistency compared to CT was better with 1 mm or 2 mm systematic errors in the bone contour than with the precise segment. In these cases the bone contour errors compensated for the uncertainty of the HU conversion method [13]. Nevertheless, the 2% goal of dose calculation accuracy was reached reliably for all studied prostate cancer patients when the additional systematic error of the bone segment was within ±2 mm [25]. It is important to recognize that the reported dose consistency in the pseudo-CT images was quantified by regarding as reference the dose distributions optimized and calculated in standard CT images with the XVMC and the AAA. Dose comparisons include potential uncertainties stemming from, for example, different body positions in the images, co-registration errors, image artefacts, and rectum gas in CT images. Moreover, the dose calculation accuracy was evaluated only for treatment plans with multiple radiation field directions. Local dose distribution errors over the reported uncertainty level may occur especially in the vicinity of bone outlines and with treatment plans relying only on a few static fields [12]. We are currently aiming to introduce automatic bone outline segmentation methods enabling a routine MRI-based RTP workflow in a clinic. According to the current study, we have defined a segmentation accuracy goal of 2 mm. Rapid automatic bone segmentation would be of particular value in order to introduce efficient and accurate on-line adaptive radiotherapy techniques with linear accelerators that are integrated into MR scanners [29]. With these MR-Linacs it would be particularly reasonable to rely solely on MR images throughout the RTP process. Further studies could also evaluate the feasibility of the dual model HU conversion technique for obtaining attenuation correction for positron emission tomography/MRI [30].
Novelty of Bioengineered Iron Nanoparticles in Nanocoated Surgical Cotton: A Green Chemistry The current focus of nanotechnology is to develop environmentally safe methodologies for the formulation of nanoparticles. The phytochemistry of Zingiber officinale inspired us to utilize it for the synthesis of iron nanoparticles. GC-MS analysis revealed the phytochemical profile of ginger. Out of 20 different chemicals, gingerol was found to be the most potent phytochemical, with a retention time of 40.48 min. The present study reports a rapid synthesis method for the formation of iron nanoparticles and their potential efficacy as an antibacterial agent and an antioxidant. Because of its antibacterial property, ginger extract was used to coat surgical cotton. Synthesized ginger root iron nanoparticles (GR-FeNPs) were characterized by UV-visible spectroscopy, Fourier-transform infrared spectroscopy (FT-IR), X-ray diffraction analysis, and particle size analysis. XRD confirmed the crystalline structure of the iron oxide nanoparticles, as it showed the crystal planes (2 2 0), (3 1 1), (2 2 2), and (4 0 0). The particle size analyzer (PSA) showed an average particle size of 56.2 nm. The antimicrobial activity of the FeNPs was tested against different Gram-positive and Gram-negative bacteria. E. coli showed the maximum inhibition compared with the other organisms. Antioxidant activity showed the maximum free-radical scavenging by the nanoparticles at 160 µg/mL. In addition, the antimicrobial activity of the nanocoated surgical cotton was evaluated on the first day and the 30th day after coating, which clearly showed excellent growth inhibition of the organisms, setting a new path in the field of medical microbiology. Hence, iron-nanocoated surgical cotton synthesized using green chemistry, which is antimicrobial and cost-effective, might be economically helpful and provide insights to the medical field, replacing conventional wound-healing treatments for better prognosis.
Introduction In the modern era, it is crucial to find a substitute for conventional antibiotics because of the emergence of new multidrug-resistant bacterial strains that are able to form biofilms, decreasing the action of antibiotics [1]. The recent advancements in the field of nanotechnology include the preparation of nanoparticles of specific size and shape that exhibit antimicrobial properties. The antimicrobial activity of the nanoparticles can be determined by the size, physicochemical properties, and surface area-to-volume ratio [2][3][4][5]. It is reported that nanoparticles of smaller size tend to exhibit excellent antimicrobial activity. Various polyphenols and antioxidants present in the Z. officinale root play an important role in the field of medicine, and the interaction of these with the metallic surface of the nanoparticles exhibits a possible antioxidant activity [4]. Because of the growing concerns of society regarding health issues, consumers are paying attention before they pick any product. That is the reason for the demand for antimicrobial agents in the market. The essential oil of Z. officinale has antimicrobial, antioxidant, antifungal, insecticidal, and anti-inflammatory properties [5]. This created a special interest in choosing ginger as a plant for the preparation of iron nanoparticles. Various chemical and biological methods can be used to prepare nanoparticles, but synthesis via a green approach using Z. officinale root extract is ecofriendly, cost effective, easy, and less hazardous [6][7][8]. In comparison with previous reports, few studies have reported on the synthesis of iron oxide nanoparticles from Z. officinale and its antimicrobial evaluation on surgical cotton. Iron is a cost-effective alternative compared with other expensive metals reported earlier as antimicrobial agents. This study's focus is to synthesize iron nanoparticles (FeNPs) from Z.
officinale; confirm the formation of FeNPs by different characterization methods such as UV-visible spectroscopy, Fourier transform infrared spectroscopy, and X-ray diffraction; check the bactericidal activity of FeNPs; and coat the FeNPs on surgical cotton. Because of antibiotic-resistant bacterial strains, wound dressing of patients is difficult. Thus, the study may provide insight into a new path which may be an alternative to antibiotic use in the near future. Materials and Reagents. All the chemicals, including ferric chloride [FeCl3], isopropyl alcohol, DPPH [2,2-diphenyl-1-picrylhydrazyl], ascorbic acid, methanol, and antibiotic disks, were of analytical reagent grade and used directly without any further purification. Ingredients for media preparation were from HiMedia. Ginger was collected from a local market in Gandhidham, Gujarat, India. Distilled water was used in all experiments. GC-MS Analysis of Zingiber officinale Root Extract. The root extract of Z. officinale was analyzed using an Agilent Technologies (Santa Clara, US) 7820A GC coupled to a 5977B MS, equipped with an HP-5 fused silica capillary column (30 m × 0.320 mm × 0.25 μm film thickness). Helium gas was used as the carrier gas. The GC-MS programme was set as per the method described by Dhalani et al. [9]. Preparation of the Plant Extract. Ginger was collected from the local market and washed thoroughly with distilled water to eliminate dust on the surface, chopped into small pieces, sun-dried, and powdered. The extract was prepared by mixing 12 grams of dried ginger powder in 200 mL of isopropyl alcohol, and the mixture was stirred on a magnetic stirrer at 80 °C for 1 h; thereafter, the extract was filtered carefully, and the supernatant was collected and stored at room temperature for further use [9].
Green Synthesis of Iron Nanoparticles. GR-FeNPs were synthesized by adding the equivalent extract to 0.01 M FeCl3 at room temperature and constantly stirring for 10 min. The immediate appearance of a black-brown color showed the reduction of Fe3+ ions, which is the first indication of the formation of iron nanoparticles. Afterwards, the liquid was poured into large Petri plates, dried at 100 °C for 24 h in a hot-air oven, and cooled down the next day. The upper layer of the plate was scraped out carefully using a spatula. The fine dried black powder of ginger iron nanoparticles was kept ready for further characterization. All nanoparticle preparations were performed according to our previous studies [10,11]. 2.5. Characterization. 2.5.1. UV-Visible Spectroscopy. UV-visible spectroscopic analysis of the synthesized FeNPs was done using 0.1 ml of sample diluted in 2 ml of deionized water. Absorbance was measured with an ABTRONICS Model No. LT-2900 instrument in the range of 200-700 nm [12,13]. FT-IR Analysis (Fourier Transform Infrared Spectrophotometer). FT-IR spectra of dried FeNPs and plant extract were determined using a Fourier transform infrared spectroscope. The synthesized nanoparticles were lyophilized, mixed with KBr pellets, and further processed. An average of 9 scans was collected for each measurement with a resolution of 4 cm−1 over the range of 4000-650 cm−1 [14]. Particle Size Analyzer (PSA). The synthesized nanoparticles were analyzed using a particle size analyzer, which measures particle size from the particles' flow through a beam of light, producing a size distribution from the smallest to the largest dimensions. When the particles are dissolved in water, they stay in colloidal form and flow with a velocity that depends on their size and zeta potential (Brownian movement) [15]. X-Ray Crystallography. The crystalline structure of the synthesized nanoparticles was analyzed by powder X-ray diffraction (XRD). Antimicrobial Activity.
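The 0.01 M FeCl3 concentration used in the synthesis implies a simple mass calculation when preparing the stock solution. The sketch below is a minimal illustration only, assuming anhydrous FeCl3 (molar mass about 162.2 g/mol) and a hypothetical 250 mL batch volume; neither assumption comes from the protocol above.

```python
# Sketch: mass of FeCl3 needed for the 0.01 M solution used in the synthesis.
# Assumptions (not stated in the paper): anhydrous FeCl3, 250 mL batch volume.
MOLAR_MASS_FECL3 = 162.2   # g/mol, anhydrous FeCl3
concentration_m = 0.01     # mol/L, as specified in the protocol
volume_l = 0.250           # L, hypothetical batch volume

mass_g = concentration_m * volume_l * MOLAR_MASS_FECL3
print(f"Dissolve {mass_g:.3f} g FeCl3 in {volume_l * 1000:.0f} mL of water")
```

The same arithmetic scales linearly with batch volume, so any working volume can be substituted.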
Gram-negative bacterial strains such as Escherichia coli MCC 2246 and Klebsiella pneumoniae MCC 2716 and Gram-positive strains such as Staphylococcus aureus MCC 2408 and Bacillus subtilis MCC 2244 were cultured overnight in nutrient broth media at 37 °C with continuous agitation on an orbital shaker platform at 180 rpm. Simultaneously, nutrient agar media was dispensed into sterile Petri plates and incubated for 24 hours at 37 °C as a sterility check. The overnight broth cultures of the test organisms (E. coli, K. pneumoniae, S. aureus, and B. subtilis) were used as inoculum. Antimicrobial activity was analyzed using the agar well diffusion method. The test culture of each organism (100 μl) was applied to each plate. Antimicrobial Activity of Iron Nanoparticle-Coated Surgical Cotton. FeNPs (30 µg/ml) were dissolved in methanol and sonicated for 10 minutes in an ultrasonicator. Dip coating is the precision-controlled immersion and withdrawal of a substrate into a reservoir of liquid for the deposition of a layer of material over it. A cotton piece of size 0.5 cm × 0.5 cm (substrate) was fixed to the head portion of the dip coater, which has a mechanical body that moves up and down. A small beaker with the diluted NP sample was placed below the body, and the dip coating process was performed three times at a particular pressure and speed. The coated cotton was carefully removed using sterile forceps, dried using an air dryer, and stored in a ziplock bag for further use. Antimicrobial activity was analyzed using the agar well diffusion method. 100 μl of the test culture for each microorganism was spread using a sterile glass spreader on the nutrient agar plate and left for 10 minutes for the inoculum to be absorbed into the agar medium. A small piece of FeNPs-coated surgical cotton was placed in the middle of the plate and incubated for 48 hours, and the results were noted [19,20]. GC-MS Analysis of Zingiber officinale Root Extract.
The GC-MS profile of Z. officinale root extract is shown in Figure 1. The retention time and area (%) of each compound are given in Table 1. Out of 20 different chemical compounds, gingerol was found to be the major component, with the highest retention time of 40.368 min. The presence of gingerol might be the reason for the reduction of metal ions and the bactericidal activity of the nanoparticles. Mechanism of FeNP Formation from Z. officinale Extract. The process of synthesizing nanoparticles from plant extracts has proven to be one of the most reliable, nontoxic, and eco-friendly approaches in green chemistry and plant biotechnology. Plant extract is mixed with 0.01 M FeCl3 at a ratio of 2:3. The color change is due to the polyphenols present in the plant extract, which act as reducing and capping agents, lowering the valency of Fe3+ to Fe0, as shown in Figure 2. Earlier studies reported that the color transformation from yellow to reddish black is the primary indication of the formation of iron oxide nanoparticles [24]. The aldehyde and polyphenol groups present in the leaf extract are responsible for the reduction of ferric chloride [12,25]. UV-Visible Spectroscopy (UV-Vis). Spectroscopy is an analytical technique concerned with the measurement of the absorption of electromagnetic radiation. UV-Vis spectroscopy is one of the oldest methods in molecular spectroscopy; it refers to absorption spectroscopy in the UV-visible spectral region, using light in the visible and adjacent ranges. The absorption varies with the difference in color of the chemicals in the given samples. The UV-visible spectra of iron nanoparticles in the aqueous ginger extract are shown in Figure 3. The absorption peak at a wavelength between 200 and 260 nm indicates the formation of iron nanoparticles (Figure 3). The sharp and intense peak is attributed to the uniform size of the particles [26,27]. FT-IR Analysis.
FT-IR analysis was carried out to evaluate the possible interaction between the biomolecules and Fe3+ during the biogenic reduction reaction. The FT-IR data for FeNPs containing Z. officinale root extract are given in Table 2. The band at 2927.8 cm−1 is attributed to C-H stretching, 1638.3 cm−1 to the C=O bond, 1517 cm−1 to the C-C bond, 861 cm−1 to the amide group, and 1075.3 cm−1 to the C-N bond; the band at 760 cm−1, found very close to 688 cm−1, is attributed to the presence of zero-valent FeNPs, as shown in Figure 4. Comparing the FT-IR data of the FeNPs with the plant sample [Figure 4] shows a shift in the stretching of the C-H bond from 2922 cm−1 to 2927 cm−1, the C-C bond from 1514 cm−1 to 1517 cm−1, and the C-N bond from 1037 cm−1 to 1075.5 cm−1. Particle Size Analysis. It is evident from the particle size analysis that smaller nanoparticles below 100 nm were synthesized, which might have agglomerated and resulted in larger nanoparticles. Furthermore, because of their magnetic property, the particles might have agglomerated, producing larger dimensions, as depicted in Figure 5. Antimicrobial Activity. Various Gram-positive and Gram-negative bacterial strains were used to check the bactericidal activity of the FeNPs synthesized via green chemistry. The excessive use of antibiotics has led to the emergence of new multidrug-resistant strains, so it is necessary to find an alternative to antibiotics. Earlier studies reported the use of expensive metal nanoparticles as antimicrobial agents [31][32][33]. To overcome this problem, the current study's focus was to design an eco-friendly and cost-effective method. The results are shown in Table 3. E. coli and K.
pneumoniae showed more sensitivity than the Gram-positive bacteria. Compared with Gram-positive bacteria, Gram-negative bacteria have a thin layer of peptidoglycan. Hence, FeNPs can easily penetrate the cell wall of Gram-negative bacteria (Figure 7). Our results support the findings of earlier studies that E. coli and K. pneumoniae showed higher zones of inhibition than B. subtilis and S. aureus [26,27,34]. Antimicrobial Activity of Iron Nanoparticle-Coated Surgical Cotton. The key outcome of the present study is the FeNPs-coated surgical cotton. The bactericidal activity of the FeNPs extended to the surgical cotton, which can be used further in wound healing, tissue therapy, and other medicinal applications. 10 μg/ml of FeNPs was used to coat the surgical cotton with the help of a dip coater. The antimicrobial activity was evaluated on Gram-positive B. subtilis and S. aureus and Gram-negative E. coli by the standard disc diffusion method. The results are given in Table 4, showing the radial diameter of the inhibition zones of B. subtilis, S. aureus, and E. coli after 24 hours. The clear inhibition zones made by the FeNPs-coated surgical cotton obtained in the present study are shown in Figure 8. Antimicrobial activity was evaluated on day zero and 30 days after coating. Initially, the particles showed higher antimicrobial activity, which diminished in terms of zone diameter due to the development of resistance in the microbial culture used for the study. The clear zone even after 30 days indicates bacterial growth restriction by the diffused FeNPs on the surgical cotton. Furthermore, the green approach for the synthesis of FeNPs can be applied to cotton fabric, which could have good bactericidal activity in wound dressing [30,[35][36][37][38]].
Thus, the function of an antioxidant system is not to remove oxidants entirely but rather to maintain an optimum level inside the body. Ascorbic acid has a high antioxidant capacity; thus, it was used as the standard (Figure 9). Using the DPPH assay at different concentrations, antioxidant activity was evaluated in triplicate. The total antioxidant capacity of Z. officinale was expressed as the number of equivalents of ascorbic acid. The color of the DPPH solution in the presence of the GR-FeNPs changes gradually from deep violet to pale yellow, which allowed visual monitoring of the antioxidant activity of the nanoparticles. The observed effect of the FeNPs is in the following order: ascorbic acid > FeNPs > FeCl3 > plant extract [Figure 8]. The study revealed that the antioxidant activity follows an increasing trend with increasing concentration of the GR-FeNPs. DPPH assay results showed the highest free radical scavenging potentials of 0.01 M FeCl3, GR extract, GR-FeNPs, and ascorbic acid to be 74%, 71%, 89%, and 92%, respectively, at a concentration of 160 μg/mL [39,40]. Conclusion The present work highlighted the green chemistry of synthesizing iron nanoparticles from the root extract of Z. officinale. The production proved to be easy, cost effective, and eco-friendly, with natural reagents and less harsh chemicals. The color change was also remarkable when the ferric chloride solution was mixed with the reducing agents of the plant extract. The biosynthesized FeNPs were characterized by UV-Vis spectroscopy, which showed surface plasmon resonance behaviour. The antimicrobial activity reported using green approach-synthesized nanoparticles may be further beneficial for various applications for a better prognosis of several diseases, and the antioxidant activity of Z.
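The scavenging percentages quoted above come from the standard DPPH calculation, in which inhibition is the fractional drop in absorbance of the DPPH radical relative to a control. A minimal sketch of that formula follows, using hypothetical absorbance values rather than data from this study.

```python
# Sketch of the standard DPPH radical-scavenging calculation.
# The absorbance values below are hypothetical examples, not study data.
def scavenging_percent(a_control: float, a_sample: float) -> float:
    """% inhibition = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.880     # absorbance of DPPH solution alone (hypothetical)
a_with_fenps = 0.097  # absorbance of DPPH + GR-FeNPs (hypothetical)
print(f"{scavenging_percent(a_control, a_with_fenps):.1f}% scavenging")
```

Each reported percentage is the mean of triplicate readings processed with this formula at a given concentration.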
officinale root extract has shown tremendous results. Iron nanocoated surgical cotton could play a great role in various health applications, for example in new devices such as biosensors, and may enhance the effects of conventional antimicrobials, which will probably decrease costs and improve treatment quality. Thus, nanocoated surgical cotton obtained using a green synthesis approach might be a promising path in the field of medical microbiology. Future studies are still needed to design nanochips having antimicrobial properties which could replace antibiotics in our lives. Figure 2: Primary indication of the formation of nanoparticles (color change from yellow to brown). Table 1: Phytochemical GC-MS analysis of Zingiber officinale root extract. Table 2: Comparison of bond stretching of synthesized FeNPs with Z. officinale root extract. Table 3: Diameter of the zone of inhibition of GR-FeNPs against Gram-positive and Gram-negative bacteria.
Management Dynamics Recently the issue of the social protection floor (SPF) has gained strong momentum at International Labour Organisation (ILO) and World Health Organisation (WHO) meetings. A series of discussions is going on in the ILO for a comprehensive instrument in the form of a non-binding recommendation on the social protection floor, providing basic social protection benefits for all. The SPF is considered a powerful instrument at the national level for addressing the permanent human crisis. In this regard, the present paper attempts to examine the concept of the social protection floor and its significance. The paper is based on secondary sources of data. For this purpose, various research papers and publications of the ILO, WHO and UNDP, and reports published by government and non-government agencies were considered. The paper attempts to contribute to the ongoing discussion on the relevance of a social protection floor in our country. INTRODUCTION According to the World Social Security Report of the International Labour Organisation (2010a), only about 20 percent of the world's working-age population (and their families) have effective access to comprehensive social protection. Approximately 80 percent of the global population lives in social insecurity. As a result, they are not able to benefit from a basic set of social guarantees that would help them deal with life's risks. As per a recent World Bank estimate, about 1.4 billion people live on less than $1.25 a day, and most of them are women and children, working in the informal economy, and/or belonging to socially unprotected groups such as people living with disabilities or HIV/AIDS or migrant workers (UNDP, 2011). Moreover, recurring financial and economic crises, anywhere in the world, add to their misery. These conditions impress upon all countries the imperative need to invest particularly in the area of social protection. Investment in social protection is preferred in
order to promote sustainable and equitable economic growth and socio-economic development through efficient productive systems, deeper cooperation and integration, good governance, and durable peace and security (Mat and LG, 2012). Among the various instruments of social protection, the social protection floor (SPF) has been gaining importance in discussions at the international level since 2009. The SPF initiative was one of the nine joint initiatives taken by the UN System Chief Executives Board (CEB) in April 2009 to confront the global economic and financial crisis, accelerate recovery and pave the way for a fairer and more sustainable globalization. Other initiatives include additional financing for the most vulnerable, food security, trade, a green economy initiative, a global jobs pact, humanitarian, security and social stability, technology and innovation, and lastly, monitoring and analysis. Among all these initiatives, a national-level social protection floor is considered a powerful instrument for addressing the permanent human crisis. In this regard, the present paper attempts to examine the significance of the social protection floor and its relevance for India. The organisation of the paper is as follows. First, the concept of the term "social protection floor" is discussed. Second, the rationale of the social protection floor is presented. After that, the main actors in the SPF initiative at the international and national levels are explained, followed by India's position on the social protection floor. Finally, the conclusions and implications are presented. The paper is based on secondary sources of data. For this purpose, various research papers and publications of the ILO, WHO and UNDP, and reports published by government and non-government agencies were considered. The paper attempts to contribute to the ongoing discussion on the relevance of a social protection floor in our country.
THE CONCEPT There are two main terms: one is social protection and the other is the social protection floor. The term "social protection" refers to the "full range of protective transfers, services, and institutional safeguards supposed to protect the population 'at risk' of being 'in need'" (Standing, 2007). In other words, social protection deals both with absolute capability deprivation (food insecurity, inadequate employment, low earnings, low health and educational status) and with contingency-type risks and vulnerabilities such as old age, ill health, accident and death. Thus, social protection refers to a wide package of measures to prevent and reduce poverty, vulnerability and inequality. Investment in social protection is essential as it is fundamental to social cohesion and economic development. The concept of the social protection floor was first proposed by the World Commission on the Social Dimension of Globalization in 2004, when it stated that "a certain minimum level of social protection needs to be an accepted and undisputed part of the socio-economic floor of the global economy." Presently, this concept as developed by the ILO denotes a global and coherent social policy concept that promotes nationally defined strategies protecting a minimum level of access to essential services and income security for all (ILO, 2009), especially in times of economic and financial crisis and beyond. In other words, the SPF represents a basic set of social rights, facilities and services which every individual should enjoy. The concept is described more clearly in Figure 1, which presents the SPF as part of the social security staircase.
From the above discussion it is clear that the SPF does not define new rights; rather, it contributes to the realization of the human right to social security and essential services as defined in Articles 22, 25 and 26 of the Universal Declaration of Human Rights (1948), as well as encouraging observance of ILO Convention 102 on Social Security (Minimum Standards). The human right to social security was formally stipulated more than 60 years ago and since then has remained almost untouched on the "to do list" of the global community of nations. According to the United Nations (2009) and Oechslin (2010), the SPF could consist of two main elements that help to realize the respective human rights: 1. Supply of an essential level of goods and social services: geographical and financial access to essential services such as water and sanitation, health, education, food, housing, and life- and asset-saving information, accessible to all. 2. Transfers: a basic set of essential social rights and transfers, in cash and in kind, as aid to the poor and vulnerable, to provide minimum income and livelihood security for all and to facilitate effective demand for and access to essential goods and services, including healthcare.
There is an organized relationship between the services (the "supply side" of the SPF) and the means to ensure effective access, including transfers (the "demand side" of the SPF). By working on both demand- and supply-side measures, the SPF takes a holistic approach to social protection. On the one hand, SPF activities work on means to ensure the availability of goods and services in the areas of health, water and sanitation, housing, education, food and related information. At the same time, the SPF secures rights and transfers that guarantee effective access to these goods and services for all throughout the life cycle (children, active age groups and older persons), paying particular attention to vulnerable groups by considering further key characteristics that cut across all age groups (gender, socio-economic status, ethnicity, disabilities, populations exposed and/or highly sensitive to adverse external effects such as natural hazards, intense climate phenomena, etc.). Strategies to ensure effective demand require identification of those who currently do not have access to essential services and the barriers they face. Thus, "the social protection floor approach combines all the social services and income transfer programmes in a coherent and consistent way, preventing people from falling into poverty and empowering those who are poor to escape the poverty trap and find decent jobs. In the absence of social protection, people are subjected to increased risks of sinking below the poverty line or remaining caught in poverty" (UNDP, 2011). RATIONALE OF THE SOCIAL PROTECTION FLOOR According to Cichon et al. (2011), "the concept of the SPF must be seen in a much wider and more ambitious development context. The adoption of the SPF concept reflects the emergence of a new socio-economic development paradigm, which the ILO normally describes as a virtuous cycle of development called "Growing with Equity". It is built on the following logic: 1.
Without basic social security systems, no country can unlock its full productive potential. Only a basic social protection system can ensure that people are well nourished, healthy and enjoy at least a basic education, and are thus able to realize their productive potential. Investments in basic social protection are necessary conditions for workers to be sufficiently healthy, well nourished and educated to be employable in the formal economy. 2. Only if people can move from the informal to the formal economy, and thus migrate from low-productivity subsistence-level activities to become tax and contribution payers, can an economy grow. 3. Incomes can then be taxed to finance a state and social security systems that help to achieve higher levels of welfare and growth. 4. Once people are in a position to enter the formal labour market, higher levels of social security, if properly designed, provide the necessary incentives to remain in formal employment, as well as the financial security that allows individuals to adapt to technological and economic change through training and retraining measures". Further, the SPF is also significant for the following reasons. Social justice point of view This view justifies the need for the SPF because of growing poverty and inequality. Moreover, social protection is a human right; as per Article 22 of the Universal Declaration of Human Rights, "Everyone, as a member of society, has the right to social security". But in spite of this declaration, 80% of the global population still remains without access. Economic view This view necessitates the provision of the SPF for enhancing human capital and the productivity of labour. In other words, the SPF is desired in order to have a better educated, healthy and well-nourished workforce.
Political view According to this view, poverty and gross inequities/income inequalities are liable to cause intense social tensions and violent conflict. Thus, the SPF is needed in order to prevent such divisions and create politically stable societies. Affordability Another argument in favour of the SPF is that it is easily affordable (as per the ILO's calculations). According to the ILO, less than 2% of global GDP would be necessary to provide this basic set of social guarantees to all of the world's poor (ILO, 2008). All the above points make it clear that the SPF is not only a human right but also a social, economic and political necessity and, moreover, that it is affordable. Thus, "investment in social security is investment in a nation's human infrastructure, which is as important as investment in that country's physical infrastructure. Investing in a social protection floor is investing in social justice and economic development. Social protection schemes are important tools to reduce poverty and inequality. They not only help to prevent individuals and their families from falling into or remaining in poverty, they also contribute to economic growth by raising labour productivity and enhancing social stability. The global financial and economic crisis proved how key a role social protection plays as an automatic economic stabilizer". The international community has also recognized the fundamental contribution of social protection floor policies in accelerating the achievement of the Millennium Development Goals (UNDP, 2011). The SPF is considered valuable for achieving the Millennium Development Goal (MDG) targets in the following ways: 1. MDG 1 (Nutrition): Regular and predictable income through cash transfers, such as child benefit and old-age pension, helps those struggling with chronic poverty to sustain adequate levels of nutrition; 2.
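The ILO affordability figure quoted above can be turned into a rough upper bound with simple arithmetic. The sketch below uses a hypothetical round number for global GDP, which is not a figure from this paper, purely to illustrate the scale of the claim.

```python
# Back-of-envelope upper bound on the ILO affordability claim:
# a basic SPF package costing under 2% of global GDP.
global_gdp_usd = 60e12   # hypothetical: roughly USD 60 trillion global GDP
spf_share = 0.02         # upper bound from the ILO claim (less than 2%)

spf_cost_usd = global_gdp_usd * spf_share
print(f"Upper-bound SPF cost: ${spf_cost_usd / 1e12:.1f} trillion per year")
```

Any more recent GDP estimate can be substituted; the point is only that the implied cost is a small fraction of world output.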
MDG 2 (Education): Social assistance enables poor families to prioritise education and health spending, and helps to reduce the burden on children to augment family income, allowing them to attend school; 3. MDG 3 (Empowerment): A predictable minimum income enables poor women and their families to plan for the future and make long-term investments, and empowers them to renegotiate more favourable terms of trade or employment; and 4. MDGs 4, 5 & 6 (Health): Universal and equitable access to healthcare enables the poorest people to access health services by increasing their ability to pay or reducing the cost of service delivery. THE KEY ACTORS IN THE SOCIAL PROTECTION FLOOR The global SPF initiative (i.e., at the international level) has been guided mainly by two actors. As shown in Figure 3, these are the ILO and WHO (appointed by the UN Chief Executives Board for Coordination). This global SPF initiative brings together an international coalition comprising: (a) the Global Advisory Network of UN agencies, the World Bank, the IMF and other organizations. A high-level SPF Advisory Group was set up with the aim of developing global support, elaborating further global policy aspects regarding the SPF and providing general guidance in this area. This Group is preparing a Global SPF Report as an advocacy tool and as general guidance to support the implementation of national SPFs. THE INDIAN SCENARIO The population of India was 1.22 billion in 2012. More than 50% of India's current population is below the age of 25 and over 65% below the age of 35. About 72.2% of the population lives in some 638,000 villages and the remaining 27.8% in about 5,480 towns and urban agglomerations. Agriculture still provides the principal means of livelihood for over 52% of India's population. The total labour force in India is estimated to be around 478.3 million (as per a 2010 estimate).
Sector-wise, a major chunk of the Indian labour force is employed in the unorganized sector (around 97%). This sector consists of casual and contributing family workers, self-employed people, private households, and other persons employed in organised and unorganised enterprises who are not eligible for paid, sick or annual leave or any form of social security provided by their employer. The remaining roughly 3% of the total workforce is employed in the organised or formal sector, which includes all public sector establishments and all non-agricultural establishments in the private sector with 10 or more employees. Thus the major chunk of the Indian labour force is employed in the unorganized sector, and for more than a decade the Indian government has been making various plans and programmes for providing social security cover to these workers. The various governmental efforts, in chronological order, are briefly mentioned below: 4. In September 2004, the then UPA government appointed the National Commission for Enterprises in the Unorganised Sector (NCEUS), led by Professor Arjun Sengupta, to look into how its NCMP promise of welfare legislation for unorganized workers could be brought to life. 5. In August 2005 and May 2006, the NCEUS submitted its two reports to the Prime Minister, containing draft legislation on securing the conditions of work and creating a social security scheme for unorganized sector workers. 6. In 2005, the National Advisory Council (NAC), formerly chaired by Sonia Gandhi, also submitted a draft Bill based on the recommendations of the Second National Labour Commission and the ongoing deliberations of the NCEUS. This draft Bill, titled "The Unorganized Sector Workers' Social Security Bill", was tabled in the Rajya Sabha in September 2007. 7.
In July 2007 the NCEUS finalized its various draft legislative proposals. In its debates, the NCEUS recommended separate laws to protect agricultural workers and non-agricultural workers within the unorganized sector differently. 8. In 2008, "The Unorganized Sector Workers' Social Security Bill" received the assent of the President of India on 23rd December 2008 and became an Act. This Act, titled "The Unorganized Workers' Social Security Act, 2008", has not yet been brought into force. Through this Act, the government has tried its best to serve the workers in the unorganized sector, who now constitute 97% of the total workforce of India. Moreover, "what is really good about this legislation, as recommended by the NCEUS and NAC, is an enforceable 'floor scheme' that creates in each unorganized sector worker beneficiary a legal entitlement of governmental protection within a specified time frame" (Ghosh, 2009). Regarding the SPF initiative at the global level, our country has expressed its full support for the ILO's proposal for an SPF for the vulnerable, but wants the SPF to be linked to each country's financial resources. In other words, the government of India is of the view that each country should decide the level of its own SPF and that there should not be a prescription of a uniform SPF for all countries. Social protection should be implemented depending on the national social and economic circumstances in member states. At the 100th session of the ILO at Geneva in June 2011, the Indian Union Labour & Employment Minister Shri Mallikarjun Kharge said, "Each country should decide the level of its social protection floor, which should be closely linked to the country's financial resources, employment strategy and other social policies. While it is desirable to have floor levels for social protection, there should not be any timelines".
Thus, India recognizes the fact that social protection is an investment that enhances the productivity of workers in the long run. The Indian government enacted the "Unorganised Workers' Social Security Act" in 2008 to safeguard the interests of unorganized workers, who account for 97% of the country's workforce. This Act provides for the constitution of a National Social Security Board, which shall recommend social security schemes, viz. life and disability cover, health and maternity benefits, old-age protection and other benefits as may be determined by the government, for unorganized workers. Further, the government has also set up a National Social Security Fund for unorganized workers. The central government is implementing various social security schemes for the benefit of unorganized workers, such as the Rashtriya Swasthya Bima Yojana (RSBY), Jana Shree Bima Yojana, Indira Gandhi National Old Age Pension Scheme, Aam Aadmi Bima Yojna, and the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA). India has also enacted the Right to Education Act, 2009, which provides for compulsory education for children aged 6-14 years.
CONCLUSIONS AND IMPLICATIONS At present the social protection floor is very much on the global development agenda. Its main feature is that it is a holistic approach, looking simultaneously at supply- and demand-side factors for a range of social protection goods and services across the life cycle and for all population groups. The holistic approach is intended to facilitate the prioritization and sequencing of the different elements of the floor. This does not mean that all countries should immediately start to establish schemes for all target groups and elements of the floor. Rather, a careful analysis of capacities, needs and existing schemes already in place will enable a rationalization of the policymaking process for a gradual building up of the social protection floor (UNDP, 2011). "The ultimate objective of the SPF approach is to build a solid basis that would allow higher levels of protection than simply the ground floor level. As economies grow and fiscal space is created, social protection systems can and should move up the social protection floor staircase, extending the scope, level and quality of benefits and services provided". The main arguments in favour of the SPF are that it is a human right, it contributes effectively to the MDGs and human development, it generates economic growth and political stability, and it reduces poverty fast (Ortiz, 2010). As far as India is concerned, the central government as well as the various state governments have demonstrated a wider awareness of, and stronger commitment to responding to, the social protection needs of presently excluded groups. As a direct result, numerous actors of civil society (community-based organizations, women's groups, informal economy organizations, trade unions, NGOs, microfinance institutions, etc.)
have already designed and set up in-house microinsurance schemes that were tailor-made to answer the priority needs and contributory capacity of their target groups. However, India is still striving to extend basic human rights, including social protection, to all its citizens. The importance of the informal economy, which today regroups some 94% of the total labour force, i.e. around 370 million workers, has been constantly growing over the last decades. Although contributing some 63% to the GDP, these workers still cannot benefit from a fair redistribution of the wealth generated by their effort and remain excluded from formal social security mechanisms. It is estimated today that 90% of the whole population (some 950 million) is still deprived of any kind of social protection services, thus remaining exposed to the multiple risks affecting their daily lives and inhibiting their development initiatives. Among them, the most disadvantaged groups remain caught in a continuing cycle of poverty and vulnerability. The main challenge is how to cover the entire population effectively, especially those who are at risk or who are already in a situation of deprivation, and in a sustainable manner. The approach towards the SPF should be such that it enables better coordination between the different ongoing social protection activities in the country and the actors involved in the design and implementation of social protection policies. Moreover, the concept can be adapted to very different economic and social contexts in order to fulfill the very different needs of societies (UNDP, 2011).
Very recently, in a joint statement, 30 NGOs wholeheartedly supported the ILO's intention to adopt the recommendation on national social protection floors at the 101st session of the International Labour Conference in June 2012. According to the joint statement, "the implementation of this recommendation will be a great step forward towards the reduction of poverty and inequality as well as to the empowerment of people worldwide. It is a timely response to the fact that millions of people living on our planet are excluded from the benefits of globalization, and are often penalized through the implementation of austerity measures." In a nutshell, in order to achieve sustainable development, our country should introduce the concept of the social protection floor. This would definitely not only reduce poverty but also lead to a healthier, better educated, empowered and more productive labour force. In addition, it will enhance peace, stability and social cohesion, and foster politically stable societies. But to remain sustainable, the social protection floor entitlements should (1) be built on existing social protection measures/schemes/systems, (2) avoid creating long-term dependencies and moral hazards, (3) be based on a clear definition of rights that govern the relationship between the citizens and the state, and (4) ensure continued and predictable funding (ILO, 2010). Figure 1: Social Protection Floor as a Part of the Social Security Staircase (Source: Adapted from www.socialsecurityextension.org). Figure 2 displays the above discussed points. The figure explains that child benefits facilitate children's access to education, which, in turn, helps break the intergenerational poverty cycle; access to health care helps families remain above the poverty line by relieving them of the financial burden of medical care; and income support avoids poverty and creates the security people need in order to take risks and invest in their own productive capacity. (Jaipuria Institute of Management, Management Dynamics, Volume 13, Number 1, 2013; Dr. Gurpreet Randhawa.)
Developing a Deep Neural Network for Driver Fatigue Detection Using EEG Signals Based on Compressed Sensing : In recent years, driver fatigue has become one of the main causes of road accidents. As a result, fatigue detection systems have been developed to warn drivers, and, among the available methods, EEG signal analysis is recognized as the most reliable method for detecting driver fatigue. This study presents an automated system for a two-stage classification of driver fatigue, using a combination of compressed sensing (CS) theory and deep neural networks (DNNs), that is based on EEG signals. First, CS theory is used to compress the recorded EEG data in order to reduce the computational load. Then, the compressed EEG data is fed into the proposed deep convolutional neural network for automatic feature extraction/selection and classification purposes. The proposed network architecture includes seven convolutional layers together with three long short-term memory (LSTM) layers. For compression rates of 40, 50, 60, 70, 80, and 90, the simulation results for a single-channel recording show accuracies of 95, 94.8, 94.6, 94.4, 94.4, and 92%, respectively. Furthermore, by comparing the results to previous methods, the accuracy of the proposed method for the two-stage classification of driver fatigue has been improved and can be used to effectively detect driver fatigue. Introduction With the advancement of industrial technology in recent years, car production has increased dramatically, resulting in an increase in traffic accidents. Every year, 1.25 million people die in road accidents, according to the World Health Organization (WHO) [1]. Driver fatigue can be considered the main cause of road fatalities among the factors affecting car accidents. According to the National Highway Traffic Safety Administration (NHTSA), 100,000 driver fatigue accidents caused 1550 deaths, 71,000 injuries, and USD 12.5 billion in monetary losses, annually, in the United States [2]. 
Therefore, it is important to develop a method that can detect the levels of mental fatigue accurately and automatically in order to prevent catastrophic driving events [3]. Fatigue may be caused by a lack of sleep, prolonged driving, driving overnight, driving on a monotonous route, and so on. Fatigue, as a general term, involves drowsiness. Drowsiness is defined as the need for sleep, while fatigue requires rest (not necessarily sleep) [3][4][5]. Yawning, impatience, daydreaming, heavy eyes, etc., are the initial signs of fatigue [5][6][7]. Driver fatigue slows down reaction times, it reduces the driver's alertness, and it affects the driver's decisions. Fatigue on the road can be suppressed by listening to music, drinking coffee or energy drinks, and so on [7][8][9]. However, it is necessary to design an automatic system to detect the driver's mental fatigue with a high reliability that can warn the driver before potential accidents. In recent years, many studies have been conducted on the automatic detection of driver fatigue on the basis of EEG signals. In one such study, the researchers performed their experiments on 28 subjects to classify two states of driver fatigue, and they used the EEGLAB toolbox to eliminate environmental and motion noise. The accuracy of their classification was reported to be approximately 98%. Luo et al. [26] used a combination of the adaptive scaling factor and entropy features to automatically detect driver fatigue. The researchers performed their experiments on 40 subjects using two channels of the EEG signals (Fp1 and Fp2) for a two-stage classification of driver fatigue. The EEG toolbox was also used to remove EOG noise. Moreover, the classification accuracy of their algorithm is reported to be about 98%. Gao et al. [27] used deep neural networks (DNNs) to detect driver fatigue on the basis of EEG signals. The researchers also performed their experiments on 10 subjects.
Their deep network architecture consisted of 11 convolutional layers. The accuracy of the classification reported by these researchers is about 95%. Karuppusamy et al. [28] used a combination of EEG signals, facial expressions, and gyroscopic data to diagnose driver fatigue. The researchers used DNNs in their research. The final accuracy obtained, according to the model proposed by these researchers, is approximately 93%. Jiao et al. [29] used EEG and EOG signals to automatically detect driver fatigue. The researchers used continuous wavelet transform (CWT) to extract the frequency band analysis, and then selected the discriminatory features in their proposed model. They also used generative adversarial networks (GANs) to balance the samples of their class. The final accuracy reported by these researchers, according to the long short-term memory (LSTM) classifier, is approximately 98%. Liu et al. [30] used deep transfer learning networks to automatically detect driver fatigue on the basis of EEG signals. The random forest (RF) algorithm was also used to identify active channels. The highest accuracy reported by these researchers is approximately 73%. A comprehensive review of previous studies for the automatic detection of driver fatigue indicates that, while several studies have been conducted in this regard, there are still some issues from various perspectives that should be considered: (1) The majority of these studies extract features manually, and few studies have used feature learning methods to extract features. Using manual methods necessitates complex processes, as well as specialized knowledge. 
Furthermore, manual feature extraction does not guarantee that the features chosen are optimal; (2) In most feature-learning-based driver-fatigue-detection methods, a large amount of raw time signals from multiple electrodes at high sampling rates are used directly as the input of the feature learning algorithms, imposing a significant burden on the acquisition hardware, data storage, and transmission bandwidth. As a result, it is critical to fundamentally alter the data processing mode of existing real-time monitoring systems by employing a novel data acquisition method and compression theory. The current study aims to overcome the abovementioned challenges, especially the problem of a large amount of raw time signals. To the best of the authors' knowledge, this research study presents, for the first time, a novel method for the automatic detection of driver fatigue using a combination of compressed sensing (CS) and DNNs. Recently, CS has attracted the attention of many researchers in this field of research, and has shown great success in various fields, such as magnetic resonance imaging (MRI) [31], radar imaging [32], and seismic imaging [33]. DNNs are also a group of machine learning methods that can learn features hierarchically, from lower levels to higher levels, by building a deep architecture. Deep learning has achieved state-of-the-art performance in several application domains, such as signal processing [34][35][36], network security [37], the Internet of Things (IoT) [38], and so on. In the proposed method, a deep convolutional long short-term memory (DCLSTM) neural network is designed to learn the optimal features from compressed data. In this study, compressed EEG signal data is used to detect driver fatigue for the first time. Indeed, the most significant contribution of this study can be attributed to the reduction in the amount of data collected in order to obtain the optimal features, while retaining useful information in the compressed data. 
In addition, by using compressed data as input, the computational burden of the feature learning process is significantly reduced. The simulation results of the proposed method for a single-channel recording show an accuracy of 92% for a compression rate of 90, which is a significant accuracy. For more clarity, the following outlines the contributions made by this article: a. It provides a new and fully automated method for selecting and extracting discriminative features from the EEG signal to identify two stages of driver fatigue, with a high performance based on DNNs, with a proposed deep architecture, that does not require prior knowledge of, or expertise on, each case/subject; b. For the first time, the paper presents the application of CS theory in combination with DL to reduce EEG signal samples without the loss of essential signal information, related to automatic driver-fatigue detection, for use in real-time systems; c. The article illustrates the use of a minimum number of EEG signal channels to detect driver fatigue automatically, with the precondition of high classification accuracy and low detection errors; d. The selection of the parameters of the proposed method, and the effect of the key parameters on the deep network architecture, were thoroughly investigated in order to automatically detect driver fatigue. Furthermore, comparisons with traditional methods show the superiority of our proposed method. Because of the use of CS in the proposed algorithm, this method is suitable for extensive data processing and real-time processing, and it also provides a new idea for smart driver-fatigue detection; e. In this study, the environmental noise while driving was considered for the first time among the previous research related to driver-fatigue detection. The results show that the proposed network is robust to noise up to 1 dB. 
The rest of the paper is organized as follows: In Section 2, the mathematical background of CS theory and DNNs is presented; in Section 3, the acquisition of the EEG data, the proposed method, the parameter selection, and the network architecture are examined. Section 4 discusses the simulation results and makes comparisons to previous research, and, finally, Section 5 consists of the conclusion. Background In this section, a brief mathematical introduction to CS theory and DNNs for the automatic detection of driver fatigue is described. Compressed Sensing Theory This section provides a brief description of CS theory [39]. Theoretically, CS adopts a quasi-optimal measurement scheme that gathers all the raw signal information and that realizes an efficient and dimensionally reduced representation of the signal. Using CS theory, a sparse signal can be reconstructed from far fewer samples than what the Shannon-Nyquist sampling theorem [40] requires. It has been shown that many nonsparse signals (such as EEG signals) can be converted to sparse signals through various signal transformations, such as discrete cosine transformation (DCT) and discrete wavelet transformation (DWT). In view of the above, the CS theory can also be applied to nonsparse signals, provided that the sparsity of the transformed signal is guaranteed. Considering X ∈ R^N and Φ ∈ R^(M×N) as the input signal and the measurement matrix, respectively, the compressed output, Y ∈ R^M, is obtained as: Y = ΦX. A measurement matrix that satisfies the restricted isometry property (RIP) allows M ≪ N, which means an output observation vector of much less dimensionality than that of the input signal. The RIP implies that, for any strictly sparse vector X, the measurement matrix must satisfy: (1 − δ)‖X‖₂² ≤ ‖ΦX‖₂² ≤ (1 + δ)‖X‖₂², where δ is the RIP constant, which has a value between 0 and 1. The compression ratio (CR) is an indicator of the extent to which the signals are compressed, and is defined as: CR = (1 − M/N) × 100. Thus, having M ≪ N will result in a high CR.
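The compressed acquisition step above can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function names, the Gaussian scaling, and the example signal length are our own choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_measurement_matrix(m, n, rng):
    # A random Gaussian matrix satisfies the RIP with high probability
    # when its entries are drawn i.i.d. and suitably scaled.
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

def compress(x, phi):
    # Y = Phi X: an M-dimensional observation of an N-dimensional signal.
    return phi @ x

N = 1000                       # raw signal length (illustrative)
CR = 90                        # compression ratio in percent
M = round(N * (1 - CR / 100))  # M << N; here M = 100

x = rng.normal(size=N)         # stand-in for a (sparsified) signal
phi = make_measurement_matrix(M, N, rng)
y = compress(x, phi)           # y.shape == (100,)
```

Note how the row count M follows directly from the CR definition above: a CR of 90 keeps only 10% of the original dimensionality.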
Thus, various levels of compressed acquisition are achieved by adjusting the size of the measurement matrix (provided that the RIP condition is met). Finally, an exact reconstruction of X can be performed by various algorithms, such as L1-norm minimization [41] and orthogonal matching pursuit (OMP) [42]. Deep Convolutional Neural Networks CNNs are considered powerful deep learning techniques to learn sophisticated phenomena. CNNs are very efficient in terms of learning features, and they are used in a variety of applications, such as computer vision. CNNs consist of three main layers: convolutional layers, pooling layers, and fully connected layers (FCs) [43,44]. In the convolutional layers, the CNN network uses various kernels to convolve the input signal, as well as intermediate feature maps. These layers reduce the number of parameters and provide stability and invariance with respect to shifts of the input. The pooling layer is typically used after the convolutional layer to reduce the size of the feature maps. The most common approach is to use max-pooling or average-pooling functions to implement a pooling layer. The fully connected layer comes after the last pooling layer, and it converts the two-dimensional feature map into a one-dimensional feature vector. To prevent overfitting, a dropout layer is used, whereby each neuron is dropped from the network with a given probability at each stage of training [45]. The batch normalization (BN) layer is typically used to normalize the data within the network and to accelerate the network training process [46]. Various types of activation functions are used in the layers, such as the Leaky Rectified Linear Unit (Leaky ReLU) and Softmax [47,48]. In the prediction stage, a loss function is evaluated to quantify the error. Then, an optimization algorithm is applied to reduce this error criterion; the optimization results are used for updating the network weights.
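The greedy OMP reconstruction cited above ([42]) can be sketched in a few lines of NumPy. The paper does not give an implementation, so the function name and the small example dimensions below are our own illustrative choices:

```python
import numpy as np

def omp(phi, y, k):
    """Greedily estimate a k-sparse x from the measurements y = phi @ x."""
    m, n = phi.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the columns selected so far.
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                      # illustrative sizes
phi = rng.normal(size=(m, n)) / np.sqrt(m)
x = np.zeros(n); x[[5, 20, 40]] = [1.0, -2.0, 0.5]
x_hat = omp(phi, phi @ x, k)             # k-sparse estimate of x
```

With enough measurements relative to the sparsity level, the recovered support typically matches the true nonzero positions.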
The "loss" function is a means of evaluating and representing the model efficiency in machine learning approaches [49,50]. Long Short-Term Memory (LSTM) Recurrent neural networks (RNNs) are powerful networks that are widely used in learning from sequential data, such as text, audio, and video. RNNs can reduce the computational load of the algorithm by reducing the input data dimension and by facilitating the training process. However, as the gap between previous input information and the point of need grows, these networks encounter a lag in the learning features and perform poorly [51]. As a result, to address the shortcomings of traditional RNNs, LSTM networks have been introduced. Since prior information can affect the model accuracy, the use of LSTM has become a popular option for researchers. Unlike traditional RNNs, where the content is rewritten at every step, in LSTM, the network is able to decide whether to retain the current memory through its memory gateways. Intuitively, if an LSTM detects an important feature in the input sequence in the initial steps, it can easily transmit this information over a long period of time, thereby receiving and maintaining such long-term dependencies. Moreover, because of their memory-based design, these networks avoid gradient vanishes and the instability that plagues traditional RNNs [52,53]. Proposed Method In this section, the proposed fatigue detection algorithm is provided, which is based on the CS theory and the DCLSTM network for the automatic classification of two stages of driver fatigue (a block diagram of the proposed method is shown in Figure 1). First, the acquisition of the EEG signals for the diagnosis of two-stage driver fatigue is described. Then, the preprocessing techniques performed on the recorded EEG signal are described, followed by the related details of the signal compression. Finally, the proposed DCLSTM network architecture is presented. 
Acquisition of EEG Signals Eleven graduate students (six men and five women), between 22 and 30 years of age, took part in a driving simulation experiment. It was ensured that all participants had a driver's license, and that, up until the experiment, none of them had experienced driving in a driving simulator. All of the participants in the experiment were also right-handed. This experiment was carried out with the ethics code license number, IR.TBZ-REC.1399.6, in the signal processing laboratory of the Biomedical Engineering Department of the Faculty of Electrical and Computer Engineering, at the University of Tabriz. All participants were asked to confirm and sign the voluntary attendance consent form and the test requirements (no psychiatric history, no history of epilepsy, pretesting hair washing, enough sleep overnight, and no pretesting caffeine) before the experiment. The experiment was conducted using a G-Tec 32-channel EEG recorder, an MSI laptop (Core i7, 16 GB RAM), a Logitech Driving Simulator G29, the City Car Driving simulator software, and a Samsung 40-inch LCD.
Figure 2 shows the recording of the EEG signal of a subject while driving in the simulator. The EEG signal recording was carried out following the international standard 10-20 electrode placement system, at the sampling frequency of 1000 Hz, and with the A1 and A2 channels as reference electrodes. Before the experiment, all participants practiced driving with the simulator to become acquainted with it and the purpose of the experiment. In order to induce mental fatigue in the drivers, the driving route in the simulator was considered a uniform highway without traffic. After the driving procedure had been underway for 20 min, the last 3-min EEG recording was labeled as the "normal" stage. The ongoing driving process lasted until the participant's questionnaire results (the multidimensional fatigue inventory scale [54]) showed that the subject was at the "fatigue" stage (varying from 40-100 min, depending on the participant), and the last 3-min EEG recordings were marked as the "fatigue" stage. The drivers were required to report their levels of fatigue using the Chalder Fatigue and Lee Fatigue questionnaires to confirm their fatigue [55,56]. The questionnaires included the following questions: "Do you feel tired?"; "Do you have a blurred vision problem?"; "Do you feel like you are constantly increasing your speed?"; "Do you feel out of focus?", etc. Each question in the questionnaire had four scores, from −1 to 2. A score of −1 means "better than usual", a score of 0 means "normal", a score of 1 means "tired", and a score of 2 means "very tired". A high fatigue score indicates a high level of driver fatigue, which has been used in many recent studies to confirm fatigue [23,24,27,30]. The driving task started at 9 a.m., and only one EEG signal per day was recorded in order to ensure the same recording conditions for all of the participants.
Preprocessing In order to remove unwanted artifacts from the recorded EEG signals, at first, a notch filter was applied to remove the 50-Hz frequency of the power supply. Second, a first-order Butterworth filter, with a passband of 0.5 to 60 Hz, was applied to the data to capture useful information to detect driver fatigue [57]. Third, to improve the detection efficiency, the features for each participant were normalized by scaling between 0 and 1, by applying min-max normalization.
Fourth, since one of the objectives of this study is to use minimal EEG signal channels, it is necessary to identify the active electrodes to reduce the computational complexity. In accordance with [24,25,58,59], 12 electrodes, out of the 30 electrodes used for signal recording, were identified in the form of six active regions, on the basis of the electrode weights, for this purpose. Accordingly, only data from the 12 selected channels were used for the compression and data processing, and the rest of the channels were excluded from the processing. The selected electrodes, in the form of six regions, A, B, C, D, E, and F, are shown in 2D and 3D in Figure 3.
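The filtering and normalization chain described in the preprocessing steps can be sketched with SciPy as follows. This is a sketch under stated assumptions: the paper specifies the 50-Hz notch, the first-order 0.5-60 Hz Butterworth band-pass, and min-max scaling, while the notch quality factor Q and the synthetic input are our own choices:

```python
import numpy as np
from scipy import signal

FS = 1000  # sampling frequency (Hz), as stated in the paper

def preprocess(eeg):
    # 1) Notch out the 50 Hz power-line interference (Q = 30 is assumed).
    b, a = signal.iirnotch(w0=50, Q=30, fs=FS)
    eeg = signal.filtfilt(b, a, eeg)
    # 2) First-order Butterworth band-pass, 0.5-60 Hz.
    b, a = signal.butter(1, [0.5, 60], btype="bandpass", fs=FS)
    eeg = signal.filtfilt(b, a, eeg)
    # 3) Min-max normalization to [0, 1].
    return (eeg - eeg.min()) / (eeg.max() - eeg.min())

rng = np.random.default_rng(2)
raw = rng.normal(size=5 * FS)   # 5 s of synthetic stand-in "EEG"
clean = preprocess(raw)
```

Zero-phase filtering (`filtfilt`) is used here so the filters do not shift the signal in time; the paper does not state which variant was applied.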
Signal Compression Based on CS Theory This section provides a detailed description of how to compress the signal on the basis of CS theory. After preprocessing the signal, the 12 selected electrodes enter the compression process. In the following, the process of the segmentation and compression of the signal, according to the block diagram of Figure 4, is described. As mentioned previously, 3 min of the recorded EEG channel is assigned to either the "normal" or the "fatigue" stages. In this case, we have two classes of data (normal and fatigue), with 180,000 samples for each channel. Then, every 3-min recording is divided into 5-s intervals of 150 segments, with an overlap of 1200 samples. Since we have 11 subjects, the dimension of the input matrix X, for each class (normal and fatigue), corresponding to each region (A to F), will be equal to N × S, where N = 11 × 150, and S is the raw signal dimension and is equal to n × w (n is the number of electrodes, and w is the length of each 5-s segment, and is equal to 5000). As stated in Section 3, in accordance with the CS theory, to guarantee the RIP condition, the input matrix, X, is multiplied by a random Gaussian matrix, Φ, with the dimension, M × N, in order to produce the compressed signal, Y. Considering the number of rows of the raw signal, X, as N = 11 × 150 = 1650, now the number of rows of the compressed signal, Y, is reduced to M = N × (1 − CR/100).
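The dimension bookkeeping above can be checked numerically with a short NumPy sketch. We assume n = 2 electrodes per region (12 electrodes over six regions) and shorten the segment length to keep the example light; the Gaussian scaling is also our own choice:

```python
import numpy as np

rng = np.random.default_rng(3)

n_subjects, n_segments = 11, 150
n_electrodes, seg_len = 2, 250   # the paper uses w = 5000; shortened here
N = n_subjects * n_segments      # 1650 rows per class
S = n_electrodes * seg_len       # flattened raw-signal dimension n * w

X = rng.normal(size=(N, S))      # one class, e.g., the "normal" stage
CR = 90
M = round(N * (1 - CR / 100))    # 165 rows after compression
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
Y = Phi @ X                      # compressed matrix fed to the network
```

For a CR of 90, the 1650 rows of X collapse to 165 rows in Y while the column (feature) dimension is untouched, which is exactly the data-reduction effect the method relies on.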
In the next step, Y enters the proposed DCLSTM network for the selection/extraction of the discriminative features to perform the automatic classification. The Proposed Deep Neural Network Architecture In this section, the architecture of the proposed DCLSTM network will be introduced, which is depicted in Figure 5. To recognize the "fatigue" and "normal" stages, the proposed network consists of seven convolutional layers, three LSTM layers, and one Softmax layer (without using the fully connected layer), as follows: (a) A dropout layer; (b) A convolution layer with a nonlinear Leaky ReLU function and a max-pooling layer, followed by a dropout layer and a batch normalization; (c) The architecture of the previous step is repeated six times without a dropout layer; (d) The previous architecture's output is connected to an LSTM layer with a nonlinear Leaky ReLU function, with a dropout layer and a batch normalization; (e) The architecture of the previous step is repeated two times; (f) The Softmax layer is used to access the outputs and calculate the scores. Table 1 displays the specifics of the proposed DCLSTM architecture with CS (DCLSTM-CS), such as the sizes of the filters, the layer types, and the number of filters. As shown in Table 1, the dimensions of the strides at different CRs are different.
To recognize the "fatigue" and "normal" stages, the proposed network consists of seven convolutional layers, three LSTM layers, and one Softmax layer (without using the fully connected layer), as follows: (a) A dropout layer; (b) A convolution layer with a nonlinear Leaky ReLU function and a max-pooling layer, followed by a dropout layer and a batch normalization; (c) The architecture of the previous step is repeated six times without a dropout layer; (d) The previous architecture's output is connected to an LSTM layer with a nonlinear Leaky ReLU function, with a dropout layer and a batch normalization; (e) The architecture of the previous step is repeated two times; (f) The Softmax layer is used to access the outputs and calculate the scores. Table 1 displays the specifics of the proposed DCLSTM architecture with CS (DCLSTM-CS), such as the sizes of the filters, the layer types, and the number of filters. As shown in Table 1 Training and Evaluation All of the hyperparameters for the proposed method were carefully adjusted to achieve the best convergence degree. A trial-and-error method was used to select these parameters. The numbers of the samples and the parameters for the training, evaluation, and test sets for all the active regions are also shown in Figure 6. According to Figure 6, 70% of the gathered data is randomly selected for training, 10% for validation, and the remaining 20% is selected as the test set. The 5-fold cross-validation was also performed for all of the active regions for a more detailed analysis. Figure 7 shows a graphical schematic for the 5-fold evaluation. In the proposed network, the weights are initially assumed to be random and small, and they are then updated using the optimal hyperparameters on the basis of the RMSProp optimizer and the cross-entropy cost function shown in Table 2. 
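The random 70/10/20 split described for Figure 6 can be sketched as follows. This is an illustrative sketch only: the helper name, the seed, and the total sample count in the usage line are assumptions, not values taken from the paper.

```python
import numpy as np

def split_indices(n_samples, seed=0):
    """Randomly split sample indices into 70% train, 10% validation,
    and 20% test, as described for Figure 6 (a sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(0.7 * n_samples))
    n_val = int(round(0.1 * n_samples))
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# e.g. 1650 segments per class, 2 classes (illustrative count)
train, val, test = split_indices(3300)
print(len(train), len(val), len(test))  # 2310 330 660
```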
In designing the proposed network architecture, we used different types of optimizers and different numbers and sizes of filters, and we achieved the optimal values for the parameters of the proposed architecture, which are shown in Table 2.
Results and Discussion
The simulation results, comparisons with previous studies, and an intuitive evaluation of the proposed method are presented in this section.
Simulation Results
The simulation results of the proposed DCLSTM-CS algorithm for the automatic detection of driver fatigue are discussed in this section. The computer specifications used to simulate the proposed method are as follows: 32 GB of RAM, a Core i9 CPU, and an RTX 2070 graphics card. The different parameters of the proposed DCLSTM network were fine-tuned using the trial-and-error method. As shown in Figure 8, we used different numbers of convolution layers (three to eleven) in the proposed network design, and seven convolution layers were found to be the optimum (in terms of accuracy and speed). As can be seen from Figure 8, by increasing the number of convolution layers above seven, the classification accuracy remains almost constant while the training time increases accordingly. As mentioned before, in our proposed algorithm, the compressed signal is used as the DCLSTM network input, for which Table 3 shows the accuracy obtained on the validation data.
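The optimizer and cost function named in Table 2 correspond to the standard RMSProp update and cross-entropy loss. A minimal sketch is given below; the decay constant, learning rate, and example values are illustrative defaults, not the paper's tuned hyperparameters.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-4, decay=0.9, eps=1e-8):
    """One RMSProp update: keep a running average of squared gradients
    and scale the step by its square root (lr and decay are illustrative)."""
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

def cross_entropy(probs, labels):
    """Cross-entropy cost for one-hot labels and softmax output probabilities."""
    return -np.mean(np.sum(labels * np.log(probs + 1e-12), axis=1))

probs = np.array([[0.9, 0.1], [0.2, 0.8]])   # softmax outputs for two samples
labels = np.array([[1.0, 0.0], [0.0, 1.0]])  # one-hot targets (normal, fatigue)
print(round(cross_entropy(probs, labels), 3))  # 0.164
w, cache = rmsprop_step(np.zeros(3), np.ones(3), np.zeros(3))
```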
As shown in Table 3, the performance of the proposed algorithm is given for the noncompression (NC) mode, where the raw signal is directly fed into the DCLSTM network, as well as for the compression mode, where the CR is set to 40, 50, 60, 70, 80, and 90. In the NC mode, the final accuracy for all regions is over 93%. Moreover, the accuracy of Region E, which includes a single channel, is approximately 98%. In addition, even at CR = 90, the accuracy of the proposed network for all regions, except for Region F, is still above 90%. Figure 9 shows the loss of the proposed network for the six active regions, where CR = 40. As can be seen from Figure 9, increasing the number of iterations leads to a decrease in the losses for all regions, reaching a steady-state value at approximately 300 iterations. Figure 10 shows the classification accuracy of the proposed network where CR = 40; the classification accuracies for Regions A, B, C, D, E, and F reach 96.3, 95.15, 95.9, 94.9, 95, and 91.3%, respectively, at about 310 iterations. The confusion matrix of the proposed DCLSTM-CS is provided in Figure 11 for all regions at CR = 40. As can be seen, only 21 samples were misdiagnosed as the "fatigue" stage in Region E, indicating that the proposed network performs well. For further analysis, Figure 12 shows the performance measures (sensitivity, precision, specificity, F-score, kappa, accuracy, and training time) of the proposed network for the single-channel region (E) at different CRs, as well as in the NC mode. As can be seen from Figure 12, the network training time decreases as the CR increases; however, the network performance at CR = 90 is still higher than 90%, which indicates a promising performance for the proposed network in single-channel driver-fatigue-detection scenarios.
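The measures reported in Figure 12 can all be computed from a 2 × 2 confusion matrix. The sketch below uses the standard definitions (the paper does not spell them out, so these formulas are an assumption); only the 21 false-positive count is taken from the Region E discussion above, and the remaining counts are invented for illustration.

```python
import numpy as np

def binary_metrics(tp, fp, fn, tn):
    """Performance measures for a 2x2 confusion matrix
    (positive class = "fatigue"); standard textbook definitions."""
    sensitivity = tp / (tp + fn)               # recall of the fatigue class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    total = tp + fp + fn + tn
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (accuracy - p_e) / (1 - p_e)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, f_score=f_score,
                kappa=kappa, accuracy=accuracy)

# fp = 21 as reported for Region E; the other counts are illustrative only
m = binary_metrics(tp=309, fp=21, fn=12, tn=318)
print(round(m["accuracy"], 3))  # 0.95
```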
Furthermore, Figure 13 shows the t-SNE graph for the raw signal and the LSTM2 layer for the single-channel region (E) at CR = 80, as well as in the NC mode. As can be seen from the last layer at the different CRs, almost all of the samples of the evaluation set are separated in each of the CRs, indicating the desirable performance of the proposed method for the two-stage classification of driver fatigue. Moreover, on the basis of the questionnaire, the model's results are consistent with the actual fatigue of the participants. Therefore, the proposed method can detect driver fatigue effectively. To further examine the efficacy of the proposed method, Figure 14 shows the classification accuracy obtained for each fold in all the selected regions at different CRs. As shown in Figure 14, the accuracy obtained for each fold at different compression rates is above approximately 90%, indicating that the overfitting phenomenon did not occur across folds.
Comparison with the State-of-the-Art Methods
The proposed method was evaluated in a variety of ways. First, it was compared with the common and popular methods used to diagnose driver fatigue, and second, it was compared with the most recent studies. In order to demonstrate the performance of the proposed algorithms, based on DCLSTM and DCLSTM-CS, the two-stage classification of driver fatigue based on Region E (single-channel) was simulated using four existing popular networks.
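The per-fold evaluation behind Figure 14 (and the 5-fold scheme of Figure 7) can be sketched as follows. The helper name, seed, and sample count are illustrative assumptions; the sketch only shows how the fold indices are generated.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation,
    as used for the per-fold accuracies in Figure 14 (a sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)          # k disjoint folds
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

splits = list(kfold_indices(3300))          # illustrative sample count
print(len(splits), len(splits[0][1]))       # 5 folds, 660 test samples each
```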
The networks compared included the MLP [60], DBM [61], SVM [62], and a fully convolutional neural network (FCNN) [63], based on feature learning from raw data as well as on manual feature extraction (for the manual features, the mean, minimum, crest factor, skewness, variance, maximum, and kurtosis were used); these have recently been widely used in driver-fatigue detection studies. For the FCNN, the proposed network architecture of Figure 5 is considered, without the use of the LSTM layers. As the kernel function of the SVM, the Gaussian radial basis function (RBF) was used, and the kernel parameters were optimized with the grid search method. In addition, the number of hidden layers and the learning rate for the MLP and DBM networks are 3 and 0.0001, respectively. The accuracies obtained, on the basis of feature learning from the raw data and of manual features, are presented in Table 4. As shown in this table, applying feature learning in deep networks (FCNN, DBM, DCLSTM, and DCLSTM-CS) leads to a significant improvement in accuracy compared with the manual feature extraction approach. As seen in Table 4, the proposed DCLSTM and DCLSTM-CS present the highest accuracies compared to the other deep and traditional approaches (i.e., SVM and MLP). The accuracies of the proposed DCLSTM and DCLSTM-CS (for CR = 40), together with those of the FCNN, DBM, SVM, and MLP based on feature learning, are shown in Figure 15. The accuracies of the proposed DCLSTM, DCLSTM-CS, and the FCNN, DBM, SVM, and MLP reach 98, 95, 88, 85, 72, and 70%, respectively, after 400 iterations. As is seen, the accuracies of the proposed networks are significantly higher than those of the existing methods, while having faster convergence rates.
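The seven hand-crafted features used for the classical baselines can be sketched as below. The definitions (e.g. crest factor as peak-to-RMS ratio, non-excess kurtosis) are standard conventions assumed here, since the paper does not give the formulas; the function name and test segment are illustrative.

```python
import numpy as np

def manual_features(seg):
    """Seven manual features per segment: mean, minimum, crest factor,
    skewness, variance, maximum, kurtosis (standard definitions assumed)."""
    mu = seg.mean()
    var = seg.var()
    std = np.sqrt(var)
    rms = np.sqrt(np.mean(seg ** 2))
    crest = np.max(np.abs(seg)) / rms           # peak-to-RMS ratio
    skew = np.mean(((seg - mu) / std) ** 3)
    kurt = np.mean(((seg - mu) / std) ** 4)     # non-excess kurtosis (~3 for Gaussian)
    return np.array([mu, seg.min(), crest, skew, var, seg.max(), kurt])

rng = np.random.default_rng(0)
f = manual_features(rng.normal(size=5000))      # stand-in for a 5-s segment
print(f.shape)  # (7,)
```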
As is shown in Figure 15, although the proposed DCLSTM network has higher accuracy than its compressed mode (DCLSTM-CS), the accuracy achieved by DCLSTM-CS is still remarkably higher than the existing methods (about 95%), and it is more computationally efficient than DCLSTM. In addition, for a thorough comparison of the proposed methods with the existing state-of-the-art methods, the performance of all the methods was examined in a noisy environment. For this purpose, white Gaussian noise with SNRs from −4 to 20 dB was added to the EEG signals as measurement noise, and the classification accuracies for all the methods are shown in Figure 16. As can be seen, the proposed DCLSTM and DCLSTM-CS algorithms are quite robust to the measurement noise over a wide range of SNRs, such that the classification accuracy remains above 90%. A number of studies have been conducted in recent years in the field of the automatic detection of driver fatigue. The best results presented in these studies are shown in Table 5 and are compared to the proposed algorithms. As is shown in Table 5, the proposed methods have the best performance when compared to previous studies.
Table 4. Accuracies of the proposed DCLSTM and DCLSTM-CS methods, as well as of the FCNN, DBM, SVM, and MLP methods for both feature learning and manual feature extraction scenarios.
Table 4 columns: Methods; Feature Learning from Raw (%); Manual Features (%).
Figure 16. Robustness of the proposed algorithms against existing methods in different SNRs.
Table 5. Accuracies of the proposed networks compared with existing state-of-the-art methods.
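The measurement-noise injection used for the robustness test of Figure 16 can be sketched as below: noise power is set from the mean signal power to hit a target SNR in dB. The helper name, seed, and sinusoidal stand-in signal are illustrative assumptions.

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, seed=0):
    """Add white Gaussian measurement noise at a target SNR (in dB),
    as done for the robustness test (a sketch)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))  # SNR = 10*log10(Ps/Pn)
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

x = np.sin(np.linspace(0, 8 * np.pi, 5000))     # stand-in for a 5-s EEG segment
y = add_noise_at_snr(x, snr_db=0)               # equal signal and noise power
snr_est = 10 * np.log10(np.mean(x ** 2) / np.mean((y - x) ** 2))
print(round(snr_est, 1))  # ≈ 0 dB
```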
Technology as a socio-historical phenomenon. The article reveals the authors' vision of the essence of technology as a socio-historical phenomenon. It is based on the idea that technology is not only a set of technical devices but a segment of the general system – a society – located between the social medium and its natural surroundings in the form of a peculiar social technosphere, which simultaneously separates and connects them. The main objective purpose of the technosphere is to promote the effective rendering of society-generated entropy outwards; this defines the features of the technosphere as a socio-historical phenomenon. Analogues of such material formations also occur in wildlife (from the spider-web to the beaver dam) but are very few and arise from the implementation of instinctive programs of the species. In a person's consciousness, such programmes are not given by "nature"; they are formed on the basis of the "desobjectivation" of technical objects previously created by humans. Technical devices function in conglomerates designed to perform certain functions, which, by analogy with the biological biocenosis, were called technocenoses; within the latter, through a peculiar "competition", the development of these components in particular, and of the technosphere as a whole, takes place. However, despite this systemic character, the technosphere is a subsystem of society; therefore, there is no prospect of formulating special laws of its development or a corresponding coherent periodization. For this reason, any scientific periodization of the development of technology is connected with, and defined by, the purpose of the given research.
Keywords: technology; technosphere; technical thinking; technical devices; technocenosis
Introduction: problem statement. At the present stage of human civilization, changes in social realities occur very quickly. One of the most important factors of this is the rapid development of technology. Studying it will make it possible to forecast scientifically the influence of technology on significant social processes in the near future and will improve the conditions for predicting its impact on future social processes. The implementation of this task requires defining the essence of technology as a social phenomenon. Researchers have studied this question for a while, and there are still many different perspectives. This problem is studied both by Ukrainian (Vynnyk, 2016; Melnyk, 2010; Mykhailovskyi, 2018; Muratova, 2019; Chornomordenko & Kachak, 2016; Chursinova, 2014) and foreign (McLain, Irving-Bell, Wooff, & Morrison-Love, 2019; Nishimura, Kanoshima, & Kono, 2019; Kahlau, Schneider, & Souza-Lima, 2019; Oliveira, 2020; Ushakov, 2017; Jaspers, 2012) researchers, who hold different points of view on the set of issues defined here. The purpose of the proposed article is to present an original vision of the specified subject. The accents we define seem to come closest to understanding the profound realities of technology as a social phenomenon.
The methods of research are chosen on the basis of the historically reliable record of technical progress and of special methods for studying the processes of technology development. The article is based on the scientific, theoretically critical use of previous research on the history of technology.
Results and discussions. Technology is one of the most important social phenomena and as such deserves close attention. The research of technology as a specific phenomenon presupposes starting from its definition, of which there are many nowadays. Basically, four interpretations of the essence of this phenomenon are defined: technology as a tool for work, as a system of artificial bodies of activity, as a public material system, and as substance and energy organized by man. The analysis of dozens of definitions of technology indicates the absence of an expanded definition of the term that would cover all the features of the phenomenon. They either do not include all its manifestations (being based mainly on the means of production) or do not sufficiently take its social nature into consideration. It is thought that the comprehension of this social phenomenon should begin with the analysis of society as a system (Kahlau, Schneider, & Souza-Lima, 2019; Nishimura, Kanoshima, & Kono, 2019). According to the scientific and theoretical ideas we have generalized, the development of complex systems (in particular, living ones) provides for their constant complication, with entropy carried out to the external environment. The passage of junction points is followed by the transition to a higher structural level of development, the highest of which is society as a specific biological superorganism. In the process of such development, the appearance of specific material formations, which the "living system" places between itself and the environment, plays a rather important role.
Thus, in the animal world, gradually complicated individual material objects, a kind of "proto-technology" (from the spider-web to the beaver dam), are observed (Frojde, 1986, p. 17; Rukovskij, 1991, p. 41), and in the public body they form technology as a necessary subsystem of the latter. The creation of the marked objects in wildlife takes place as an external implementation of instinctive programs, with the gradual (in the process of evolutionary development) inclusion of elements of their correction on the basis of individual experience. In the society whose constituents (separate individuals) do not possess such a "program" from birth, it is formed in the consciousness of the latter as a "desobjectivation" of external objects of a technology previously created by humans. Then these programs, in their turn, are "objectificated" in human activities, in particular by creating new technical objects that society places between itself and the environment. A unique algorithm is formed: "the technology manifests itself as an active capability and power inherent not only to a separate person but to a person as a social creature and, ultimately, to humanity as a whole" (Aliyeva, 2003). Consequently, technology constitutes a social subsystem that forms a technosphere between society and the environment. It is formed by society, but from the materials and by the rules of nature, being a natural anthropogenic formation. The technosphere is the realization of people's activities, in particular their thinking, throughout history; this world is the objectificated thinking of mankind, realized in products, abstracted thinking in general. And the individual needs to desobjectivate it, taking possession of the types of activity implemented there (Rozin, 2013). Remaining a "lifeless" object, the technology "comes to life" when brought into use by the society in accordance with its purpose.
As a result, based on its genesis and conditions of functioning, technology as a social subsystem has two significantly different, but inseparably linked and mutually attributable "images", ideal and material, and can be understood only in their "duality". Technical thinking is no less important and essential a constituent of technology than its material embodiment. It, in the same manner as thinking in general, is a specific feature of a person. As a physiological process it occurs in each individual brain, but in obligatory interaction with other individuals, the "connection" with whom is carried out through the re-encoding of "internal" nerve impulses into "external" generally significant signals through sign systems and vice versa, which makes individual cogitative processes public. Accordingly, the result of the objectification of mental processes is the creation of two different types of material formations: carriers of the indicated signals (signs), and technical objects. The creation of technical objects at any level of technological development requires appropriate knowledge, which can be obtained in different ways: directly in the process of life-activity (practice); on the basis of aloof observation (contemplation); or during purposeful influence on the subject, its study (experiment). Considering the systemic character of the world around us, on the one hand, and the "fragmentation" of knowledge across different "heads", on the other, knowledge should always be brought into a system whose character is determined by the level of knowledge. There have been three such systems in history: mythology, philosophy, and science. However, only in the latter are two interrelated forms of cognition clearly defined: experimental, with direct research of the object, and theoretical, with the study of a simplified model created for this purpose (Gryffen, 2012). Nowadays, scientific knowledge has taken the dominant position and covers almost all reality.
The system of sciences, the constitution of which is defined differently, was formed. In the most "objectivistic" system (according to "the forms of movement"), sciences are divided into natural, with appropriate subdivisions, and social. There is no place for technical sciences in such a system. However, the division of sciences is often made according to their purpose: the study of society itself and of its habitat. Then the existence of technical sciences, whose object is the technosphere, which divides (and connects) the natural environment and society as a specific phenomenon of the real world with its own features, is natural. While belonging to that system of sciences, the technical sciences still differ significantly from the others in the specifics of their object's existence (its "man-made" nature) and in their own purposes. A Nobel prize holder, the prominent American scientist Herbert Simon, emphasized that if the main purpose of the natural sciences is perception (analysis prevails), then for the technical sciences the ultimate purpose is to create a "second nature" (synthesis dominates) (Sajmon, 2004). In experimental research, the main thing is not to obtain knowledge, but to improve the object. As for theoretical research, since in each case the ultimate goal is to create a class of technical devices, a kind of simplified model turns out to be a specific real device of this class. The creation of technical devices is carried out by society to meet the needs of a person, and therefore their character and "nomenclature" are largely defined by these needs (Maslow, 1954). Among a significant number of systems emphasizing different human needs, the so-called "Maslow's hierarchy of needs" is the most widespread. https://www.hst-journal.com Історія науки і техніки, 2021, том 11, випуск 1 History of science and technology, 2021, vol.
11, issue 1
However, it does not take into account the fact that the carrier of needs, the individual, is simultaneously an element of a higher integrity (society). It is he who, by virtue of his complexity and his own prehistory, is an integral system. It follows that two interconnected systems of needs are peculiar to a person. Thus, the composition and functions of the technical system should take into account both the individual needs of a person (in objects of assimilation, creating comfortable conditions, providing constant physical and mental work) and social needs (in society as such, communication, self-esteem). In primitive society, a complex of things that provided the material interaction of the tribal community with the environment was concentrated in the dwelling of that time (including the latter) (Tolochko & Stanko, 1997). Given the unity of production and consumption, the division of labour (except by sex and age) did not exist, nor did the differentiation of types of technical objects. At the same time, the dwelling became an environment which confronted the external world; it was the place where each individual person was formed, both in the desobjectivation of man-made objects and in meeting social needs. Therefore, in this case, the technical system was of a syncretic character. The technical objects usually called consumption items serve, first of all, to meet the individual needs of people. Regarding assimilation problems, the consumption items mainly play a supporting role, whereas the needs for comfortable conditions are almost fully provided with their help. The appointed needs have a wide assortment, from clothing to sports devices. In fact, these very items are the things a person really needs.
However, the "man-made" nature of the necessary consumption items naturally causes the appearance of other technical devices, which are not directly aimed at meeting human needs but provide the possibility of their creation: the production tools (or, more broadly, the means of production). One of the first to deal with the problems of the aesthetics of technical objects and industrial design was Franz Reuleaux (1829–1905), an outstanding German mechanical engineer and lecturer of the Royal Technical School in Berlin (nowadays, Berlin Technical University). He was called the "Poet of Technology" and believed that with the help of technical devices in general, or tools in particular, we make "the internal processes of the material world to act and work for our purpose" (Ryolo, 1885, p. 1). They provide the creation of other types of technology, in particular the means of production, for further technological progress. The importance of this type of technology lies in its significant influence on industrial relations, namely the social system, through the creation of working conditions. In the process of the development of society and technology, this type of technology progresses at the fastest pace and becomes the most diverse. It consists of numerous classes and subclasses, significantly expanding social opportunities in interaction with the environment (McLain, Irving-Bell, Wooff, & Morrison-Love, 2019). Thus, it is logical to conclude that the means of work are not only the criterion of the development of human labour power but also an indicator of the social relations in which labour is carried out. However, society, in interaction with the environment, faces the need to act as an integrity, which, with its quantitative growth and spread, also involves the use of certain technical means (in order to provide integrity).
The means of communication, supposed to provide material (substantial and energetic) streams (transport) and informational (linking) streams, serve as unifying, integrating objects. They fulfill this role both in reference to the elements of society (individuals and social groups) and to their production activities; in the latter respect, communication means belong to the means of production. The increasing number of social entities leads to the intensification of various kinds of contacts between them. The growth and increasing complexity of each entity are also accompanied by structural changes. These are primarily connected with the division of labour and the distinguishing of separate production social groups with different interests. In both cases, appropriate (separate) types of technical devices are needed, which, above all, include weapons and military technology in general. Another type of technical device of this kind is articles of luxury. Such articles can be used both as special technical devices and as objects of other technology types which obtain a high-status character. Accordingly, technology is a system with a complex structure, which is due to its transformation into a dynamic, developing system. In this context, the technosphere is compared to the biosphere. It stands to reason that there is a significant difference between the elements of these dynamic systems: the biological personality and the technical device (product). First of all, it is the difference between the living and the lifeless. This difference includes both functioning (regarding entropy) and structure (heterogeneity of materials in the first case, and homogeneity in the second). However, there is something in common between them, as complex formations require a certain "previous plan" for their implementation, although there are significant differences in this case too (Libberta, 1982, p. 18).
The biological personality is self-created in accordance with the program laid down in its genotype, which is connected with each cell of the organism. Its "construction" is a process that in ontogenesis repeats phylogenetic development, since a biological individual, regardless of its level of development in ontogenesis, should successfully function in the environment as a separate organism at each stage. The technical device is created by society from ready-made components, and only in a programmed form does it start to function by itself. But it also needs some kind of genotype, external with regard to the device. Today, such a "genotype" is a document (sometimes called a passport), without changes to which no "phenotypic" transformation of the device is possible. Still, in both cases, the new peculiarities of "organisms" arise in the form of random mutations in the genotype, which are fixed by natural selection or social practice. Another important similarity between the technical and the biological is that in both cases a separate "organism" can only function within a particular conglomerate (Malaspina, 2019). This refers not only to the biosphere or the technosphere as a whole but also to local unities, which in relation to living organisms are called "biocenoses". Communities of technical devices, by analogy, received the name "technocenoses". A cenosis is a kind of system that differs from a system proper in that its whole structure is influenced much more by the properties of the separate components already included in it. These communities of any kind have a common (as distinct from Gaussian) nature of the distribution of their elements. It is within the composition of cenoses that specific technical devices perform their functions in the technosphere, and therefore largely determine the dynamics of the latter. It stands to reason that the development of technology in general, and in its structural and functional units, like that of any material objects, is directed by certain patterns.
Therefore, researchers seek to find the "general laws" of this development. In our opinion, posing the question this way is as scientifically incorrect as attempts to find the "general laws" of physics or biology. Regarding technology, the attempt to use the so-called "laws" of Hegelian dialectics cannot be recognized as legitimate. These "laws" are designed for an ideal single object that does not arise, does not disappear, and does not come into interaction with anything. This fundamentally distinguishes an ideal single object from real objects. Valid "laws of technology" should concern its real objects, even if they have different levels of generalization and idealization. Since the development of technology is a historical process, there is the problem of its periodization. The corresponding numerous periodizations are reduced to two basic principles. According to the first, the periodization of the development of technology as a social phenomenon should coincide with the periodization of the development of society; according to the second, technology in its development is self-sufficient, subordinate to its own laws, and not only independent of the development of society but dramatically affecting it. Both principles reflect significant features of technology and have the right to exist. However, in reality, the development of technology, which complies with two different types of consistent patterns, cannot be determined by either of them alone. Therefore, the periodization of its development can only be relative, depending on the purpose of the research. The main direction of the general development of technology is the gradual transference of partial technical functions of a person to technical devices.
Initially, this concerned the tool that interacted with the subject of work and changed it directly; subsequently, the types of energy spent by a human being on this interaction (muscle strength, then the energy of animals, and then the forces of nature); and finally, the control and management of these processes. Today, despite the importance of all these aspects, the latter has gained special attention. It is associated with a significant increase in productivity and, most importantly, with a gradual, ever more complete transference of the functions of "live" workers to technical systems.
https://www.hst-journal.com Історія науки і техніки, 2021, том 11, випуск 1 History of science and technology, 2021, vol. 11, issue 1
Eventually, the time will come when only purposefulness and general control are kept by a person, and everything else is performed by technical systems. Conclusions. Technology, one of the most important social phenomena, comprises a special sphere created by mankind in relation to the environment, which separates them and simultaneously connects them with each other. The materially specified technosphere consists of a system of technical devices designed to meet the needs both of separate individuals and of society as a whole. However, a set of technical devices inherently is not yet technology, because only the direct efforts of people "restore it to life" and make it effective. The development of technology is of a spontaneous character and is determined by the momentum of socio-historical evolution, not intrinsically but through interaction with the relevant structural and functional changes of human society. Funding.
Interleukin 11 (IL-11): Role(s) in Breast Cancer Bone Metastases
Bone metastases represent the main problem related to the progression of breast cancer, as they are the main cause of death for these patients. Unfortunately, to date, bone metastases are incurable and represent the main challenge for the researcher. Chemokines and cytokines affect different stages of the metastatic process, and in bone metastases, interleukin (IL)-6, IL-8, IL-1β, and IL-11 participate in the interaction between cancer cells and bone cells. This review focuses on IL-11, a pleiotropic cytokine that, in addition to its well-known effects on several tissues, also mediates certain signals in cancer cells. In particular, as IL-11 acts on bone remodelling, it plays a relevant role in the osteolytic vicious cycle of bone resorption and tumour growth that characterizes bone metastasis. IL-11 thus appears as a candidate for anti-metastatic therapy. Even if different therapeutic approaches have targeted IL-11 and the signalling pathways activated downstream of gp130, further studies are needed to decipher the contribution of the different cytokines and their mechanisms of action in breast cancer progression in order to define therapeutic strategies.
Introduction
Despite advances in cancer treatment, therapeutic options for bone metastases are still inadequate and, generally, palliative [1]. Furthermore, the prolongation of survival of cancer patients, due to the availability of effective therapies, is associated with the onset of bone metastases for neoplasms that rarely have bone as a secondary growth site [2,3]. Novel combination strategies that simultaneously target both the primary tumour and bone metastasis are desirable to improve patient outcomes. This is particularly relevant for breast carcinoma, as bone metastases are responsible for 90% of deaths from mammary carcinoma [4].
Chemokine and cytokine signalling intervenes in and regulates different steps of the metastatic process, from the detachment of tumour cells from the primary tumour mass to bone colonization, participating in the epithelial-to-mesenchymal transition (EMT), cell migration, seeding, and proliferation [5,6]. Considering bone metastases, interleukin (IL)-6, IL-8, IL-1β, and IL-11 mediate the crosstalk between bone cells and tumour cells by acting on bone homeostasis: they show a bone-trophic function and are regulators of bone remodelling. In this context, the contribution of IL-11 appears peculiar, since it participates in the establishment of a vicious cycle of bone resorption and tumour growth, thus promoting bone colonization. Moreover, IL-11 is present in larger amounts in cancer compared to IL-6 and, thus, may play a relevant role in neoplastic disease [7]. In this review, we focus on IL-11, a member of the IL-6 cytokine family, which exerts pleiotropic effects in homeostasis and in disease.
Biomedicines 2021, 9, 659
IL-11 mainly functions as an anti-inflammatory cytokine, though it can act as a pro-inflammatory mediator, a feature shared with IL-6 [8]. The pleiotropic nature of this cytokine emerged early after its discovery: first identified as a stimulating factor for murine plasmacytoma cells [9], it was next described as secreted by bone marrow cell lines and able to inhibit the differentiation of preadipocytes [10,11]. Since then, several biological roles have been attributed to IL-11. IL-11 is a powerful hematopoietic factor that, synergistically with other cytokines (e.g., IL-3, IL-4), induces megakaryocytopoiesis [12]. Further, IL-11 alone stimulates the recovery of platelets after radiation therapy in mice [13], and its recombinant version has been approved by the FDA to treat radiation-induced thrombocytopenia in tumour patients (e.g., breast cancer) [14]. It is also involved in myelopoiesis, lymphopoiesis, and erythropoiesis [15].
At the bone level, IL-11 has been shown to promote osteoblast differentiation in mice [16,17], while mutations in its sequence or in that of its receptor (IL-11Rα) have been associated with height growth deficit [18], osteoarthritis [19], and craniosynostosis [20] in humans. Overall, the studies clearly indicate that IL-11 signalling is involved in growth regulation [15]. Moreover, IL-11 is implicated in reproduction: mutations in IL-11Rα in female mice are associated with infertility [21] and, hence, it has been under investigation as a contraceptive [22]. IL-11, indeed, regulates the invasion of the extravillous trophoblast during placentation and seems also to be involved in the onset of preeclampsia [23]. Besides these physiological roles, the aberrant expression of IL-11 is associated with the evolution of several pathological conditions. For instance, its expression is increased in virus-induced asthma [24] and in Mycobacterium tuberculosis infection [25], in the airways and lung, where it is critical for the T-helper (Th2)-mediated inflammatory response [26]; indeed, the inhibition of IL-11 signalling improves the inflammatory status [27]. IL-11 has a role in the fibrotic degeneration [28] of different organs such as heart [29], liver [30], and lung [31,32] after chronic inflammation and failure. It is a downstream effector of transforming growth factor (TGF)-β and acts in an autocrine fashion [29]. The role of IL-11 in cancer has been the subject of extensive studies in view of the emerging evidence indicating IL-11 as a signalling mediator in cancer cells associated with worse outcomes. IL-11 is involved in different aspects of tumorigenesis, including proliferation, angiogenesis, survival under hypoxic conditions, radio- and chemo-resistance, and apoptosis suppression [33][34][35][36]. IL-11 promotes tumorigenesis mainly by triggering the JAK-STAT3 pathway.
Elevated IL-11 expression has been associated with various human cancers of both epithelial and hematopoietic origin. IL-11 is secreted not only by different types of cancer cells but also by cancer-associated cells, including cancer-associated fibroblasts and myeloid cells. In this way, the cytokine participates in the bidirectional tumour-microenvironment crosstalk. IL-11 is produced by breast cancer cells and has been implicated in breast cancer-induced osteolysis. Moreover, the release of high levels of IL-11 by mammary tumour cells correlates with an elevated probability of developing bone metastasis [37,38]. Due to its pro-tumorigenic activities, the inhibition of IL-11 signalling appears as a new strategy to be used in cancer therapy [39]. This review aims to take stock of the current knowledge about the role of IL-11 in bone metastasis development and to highlight the therapeutic opportunities that the modulation of its activity may offer.
IL-11 acts through the transmembrane glycoprotein β-receptor subunit (gp130), a co-receptor shared by all the family members [42]. Given this peculiarity, IL-11, while showing unique biological activities, partly exhibits functional overlaps with all the members. Within the family, however, only IL-11 and IL-6 utilize the gp130 subunit in a homodimeric complex. IL-11 signalling begins with the interaction of the cytokine with a specific membrane-bound α-receptor (IL-11Rα), whose only function is to bind the ligand. This event is followed by the engagement of gp130 (the signal-transducing β-receptor), the dimerization of gp130 subunits, and the formation of a hexameric complex: one gp130 homodimer plus two IL-11/IL-11Rα couples. This hexameric complex activates the downstream signalling pathways, which, as for all the IL-6 family members, results in the activation of three major pathways: the JAK/STAT pathway, the Ras/Raf/MAPK signalling cascade, and the PI3K/AKT pathway (Figure 1).
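The stoichiometry of the signalling complex described above can be tallied in a minimal sketch; the subunit counts come directly from the text (one gp130 homodimer plus two IL-11/IL-11Rα couples), and the names used here are illustrative labels only, not an established data model.

```python
from collections import Counter

# Illustrative tally of the hexameric IL-11 receptor complex described above:
# one gp130 homodimer (2 chains) plus two IL-11/IL-11Ralpha couples (4 chains).
complex_chains = Counter({"gp130": 2, "IL-11": 2, "IL-11Ralpha": 2})

total_chains = sum(complex_chains.values())
assert total_chains == 6  # hexamer: six protein chains in total
print(f"Hexameric complex: {dict(complex_chains)}, total chains = {total_chains}")
```

The check simply confirms that the complex is indeed a hexamer, i.e. six chains across three distinct subunit types.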
The formation of the hexameric complex activates the associated JAKs by trans-phosphorylation. Once activated, JAK kinases in turn phosphorylate various cytoplasmic tyrosine residues in gp130, generating docking sites for other signalling molecules and, thus, initiating distinct intracellular signalling cascades. The phosphorylation of STAT binding sites recruits STAT proteins, which are themselves phosphorylated. This event leads to the activation of STATs and their translocation to the nucleus, where they act as transcription factors. Upon activation, gp130 also offers a binding site (Y759) for the Src homology-2 domain-containing phosphatase (SHP2) [43]. Activated JAK phosphorylates SHP2, which becomes able to recruit other signalling molecules, leading to the activation of the Ras/Raf/ERK pathway. IL-11 also activates the PI3K/AKT/mTOR pathway independently of Tyr-phosphorylated gp130, an event that nevertheless requires JAK activation [39,44]. The duration of receptor signal activation is regulated by suppressors of cytokine signalling (SOCS), negative regulators of gp130 signalling. SOCS3, which is transcriptionally regulated by STAT, binds the kinase domain of JAKs, while the direct binding of SOCS3 to gp130 (Y759) blocks the cytokine receptor and mediates receptor complex ubiquitination and degradation [45,46]. The induction of the JAK-STAT3 pathway is of particular interest given its role in oncogenesis [47]. It has been reported that the activation of the JAK-STAT axis promotes cell migration and invasion. Indeed, EMT-related genes, such as matrix metalloproteinases (MMPs), are targets of STAT. MMPs play a relevant role in tumour growth and invasion by degrading the extracellular matrix (ECM) and by allowing the release of growth factors and cytokines stored in the ECM [48]. The JAK/STAT pathway also sustains tumour growth by promoting angiogenesis: vascular endothelial growth factor A (VEGFA) and hypoxia-inducible factor 1α (HIF-1α) are target genes of STAT [39].
Moreover, the PI3K-AKT-mTORC1 pathway, triggered by IL-11 signalling, contributes to tumour progression and metastasis. This pathway plays known anti-apoptotic and survival-promoting roles in cancer [49], but it is also important as an inducer of invasion and proliferation of tumour cells acting downstream of IL-11 [50].
Regulation of IL-11 Expression in Physiological Conditions
IL-11, physiologically expressed at low levels, is secreted by many cells of mesenchymal origin: chondrocytes, osteoblasts, leukocytes, fibroblasts, keratinocytes, and synoviocytes, but also by epithelial cells (the major source of IL-11). IL-11 is a non-glycosylated secretory protein of 178 amino acids, with a molecular mass of about 19 kDa; its gene is located on the long arm of human chromosome 19, at locus 19q13; the genomic sequence is of 7 kb and comprises 5 coding exons and 4 introns. The IL-11 promoter contains binding sites for activator protein-1 (AP-1) and Runt-related transcription factor 2 (Runx2), as well as several Smad binding elements [51]. The transcription and subsequent expression of IL-11 appear to be mainly regulated by Extracellular-signal Regulated Kinases (ERKs) and transcription factors of the AP-1 family. IL-1β and TGF-β are known to induce IL-11 expression, which involves signalling of ERKs and p38 mitogen-activated protein (MAP) kinases via AP-1 [52,53]. TGF-β-triggered IL-11 secretion appears to be critically involved in the initiation of metastasis of colorectal cancer cells [54]. Of note, a long non-coding RNA activated by TGF-β (lncRNA-ATB) promotes organ colonization of disseminated hepatocellular carcinoma cells by binding the IL-11 mRNA and the autocrine induction of IL-11 expression [55].
Regulation of IL-11 Expression in Cancer
In vitro experiments reveal that TGF-β-dependent IL-11 induction is critical to confer a bone-metastatic phenotype on breast cancer cells [56]. In the bone disease associated with multiple myeloma, the production of IL-11 is stimulated by hepatocyte growth factor (HGF). In this context, IL-11 stimulates osteoclast (OC) recruitment and inhibits osteoblastic bone formation [57]. Moreover, the same authors reported that TGF-β1 and IL-1 potentiate the effect of HGF on IL-11 secretion, whereas an additive effect with TNF-α is observed [57]. Even if these results derive from in vitro studies, they underline that a cytokine network works to make IL-11 available, with important effects on the aberrant bone resorption that characterizes multiple myeloma. Although IL-11 is virtually absent in the body fluids of healthy individuals, its level increases in the serum of patients in pathological conditions, such as arthritis [58], acute pancreatitis [59], pancreatic cancer [60], lipoedema [61], polycythaemia vera [62], lung disease in rheumatoid arthritis patients [63], and major cardiac events in chronic heart failure [64]. These results confirm a role for the cytokine in several pathological conditions, often cancers and inflammatory diseases.
IL-11 and Bone
IL-11 is essential for physiological bone turnover and the maintenance of bone structure. Once released by osteoblasts, IL-11 binds its receptor on both osteoblasts and osteoclasts, and in this complex microenvironment several signals regulate both IL-11 and IL-11R expression [65].
A role for IL-11 signalling in bone development is suggested by in vivo and in vitro studies: IL-11 is able to promote bone formation in vitro [66]; IL-11Rα knockout mice show craniofacial abnormalities [17]; the overexpression of IL-11 in transgenic mice promotes bone formation [16]; and IL-11 stimulates osteoblastic differentiation in ST2 bone marrow stromal cells through STAT3-induced bone morphogenetic protein 2 (BMP-2) [16]. IL-11 is also an osteoclastogenic cytokine [67]: the global deletion of IL-11R in mice determines a low osteoclast number compared to control animals [17,68]. An essential role in physiological osteoclastogenesis has also been observed for the other members of the IL-6 family, which are able to stimulate osteoclast formation by promoting their production in osteoblast-lineage cells [69]. IL-11 appears to stimulate osteoclastogenesis through a dual action: inhibition of osteoprotegerin (OPG) expression and induction of the production of receptor activator of nuclear factor κB ligand (RANKL). The RANKL/OPG ratio may regulate the delicate balance between bone resorption and synthesis; OPG acts as a decoy receptor, binding to RANKL and blocking its interaction with RANK, thus inhibiting osteoclast development [70]. McCoy et al. reported that IL-11-induced osteoclast differentiation requires the presence of RANKL, which is released by osteoblasts [71]. This is thus a controversial issue: in fact, even if a functional role of IL-11 in the osteoclastogenic process has been well established, the exact mechanisms by which IL-11 promotes both the differentiation and the function of osteoclasts warrant further analysis. Based on these studies, it emerges that IL-11 governs bone remodelling and has a substantial impact on bone homeostasis. Thus, it becomes important to consider the functions of IL-11 in bone metastasis.
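The dual action described above (suppressing OPG while inducing RANKL) can be made concrete with a small sketch of the RANKL/OPG ratio. The numbers below are hypothetical, chosen only to illustrate the direction of the shift; the text states the direction of change, not any specific concentrations or threshold.

```python
def rankl_opg_ratio(rankl: float, opg: float) -> float:
    """Ratio of RANKL to its decoy receptor OPG; a higher ratio leaves more
    RANKL free to engage RANK and favour osteoclast development."""
    if opg <= 0:
        raise ValueError("OPG concentration must be positive")
    return rankl / opg

# Hypothetical concentrations (arbitrary units). IL-11 is described above as
# inhibiting OPG expression and inducing RANKL production, so both changes
# push the ratio upward, towards bone resorption.
baseline = rankl_opg_ratio(rankl=1.0, opg=2.0)
with_il11 = rankl_opg_ratio(rankl=2.0, opg=1.0)
assert with_il11 > baseline
print(f"baseline ratio = {baseline:.2f}, with IL-11 = {with_il11:.2f}")
```

This is only a directional illustration of the balance the review describes, not a quantitative model of bone remodelling.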
IL-11 and Bone Metastasis
Studies employing animal models have provided significant insights into the importance of IL-11 for cancer progression, and it has been demonstrated that IL-11 drives metastasis in mouse models. As breast tumour cells are able to cause bone disruption when they grow in the bone metastatic site, several studies have considered the contribution of IL-11 to this condition. Of note, IL-11 is expressed by cells of the osteoblast lineage; therefore, cancer cells, other than producing IL-11, respond to this signal once they arrive in the bone marrow.
IL-11 and Breast Cancer Bone Metastasis: Data for Its Implications
Experimental evidence has shown a possible role of IL-11 in breast cancer bone metastasis. Breast cancer cells express IL-11R and secrete IL-11, which in turn stimulates osteoclasts [72,73]. In both human bone metastasis biopsies and experimental models, increased osteoclast activity has been demonstrated close to the advancing margin of bone metastases [74,75]. This led to the hypothesis that IL-11 may be associated with bone metastasis development in human breast cancer. Sotiriou et al. demonstrated, for the first time, a significant enhancement of IL-11 expression in a cohort of 99 patients bearing primary invasive breast tumours and suggested the use of this cytokine as a predictive marker for the development of bone metastases [34]. In 180 breast cancer patients, the IL-11 expression level was shown to be significantly increased in the serum of patients with bone metastases compared to patients without metastases, and this was associated with shorter overall survival [76]. These data parallel the augmented gp130 and STAT3 phosphorylation in tumour tissue, a pathway that leads to RANKL expression. The authors speculate that high levels of IL-11 promote bone degradation, which is responsible for the poor outcome. By acting on megakaryocytes, IL-11 stimulates the production of platelets and, indirectly, promotes metastases.
Platelets can protect circulating cancer cells from attack by the immune system (immune evasion) and facilitate their arrest at the endothelium, supporting the development/establishment of secondary lesions [77]. Indeed, the number of megakaryocytes increases in the bone marrow of bone metastases-bearing animals [78] and, hence, it is conceivable that IL-11 supports this effect. A positive correlation between the expression of IL-11Rα in tumour cells and bone metastasis incidence was reported in advanced breast cancer patients [79]. Lim et al. highlighted the functional role of hypoxia in the induction of IL-11 in breast cancer metastasis; autocrine IL-11 production plays an important role in cancer cell motility and invasiveness under hypoxic conditions [80]. Of note, since the bone metastatic microenvironment is hypoxic, these results indicate that IL-11 is continuously produced there. Hypoxia-induced IL-11 expression significantly alters EMT-related gene expression, such as that of E-cadherin, N-cadherin, and vimentin, suggesting that the IL-11-STAT3 pathway may be involved in hypoxic tumour EMT [80,81]. As regards other bone-related cancers and their progression to metastasis, IL-11Rα is a cell surface marker of tumour progression and correlates with poor prognosis in patients with osteosarcoma [82]. The same authors show that IL-11Rα and its ligand, IL-11, are specifically upregulated in human metastatic osteosarcoma cell lines and that the engagement of this autocrine loop leads to tumour cell proliferation, invasion, and anchorage-independent growth in vitro. Figure 2 summarises the data on the implication of IL-11 in bone metastasis.
IL-11 and Osteolysis in Breast Cancer Bone Metastasis
Kang Y et al., about twenty years ago, demonstrated that IL-11 is the most abundantly expressed osteolytic factor in breast cancer cells highly metastatic to bone, and that TGF-β further increases the already high level of IL-11 [85]. Breast cancer cells are known to produce numerous osteolytic factors, including parathyroid hormone-related protein (PTHrP), IL-1, IL-6, IL-8, IL-11, VEGF, connective tissue growth factor (CTGF), MMP1, HGF, etc. [86][87][88][89][90]. Some of these factors activate osteoclastogenesis by increasing RANKL expression in osteoblasts, while others activate osteoclastogenesis either synergistically with RANKL stimulation or in a RANKL-independent way. IL-11 released by breast cancer cells is able to stimulate osteoclast differentiation and increases osteoclast progenitor cells [71]. McCoy et al. characterized the role of IL-11 in osteoclast formation, function, and survival and indicated that the primary function of IL-11 is related to the promotion of osteoclastogenesis, by increasing the pool of osteoclast progenitor cells and by downregulating granulocyte-macrophage colony-stimulating factor (GM-CSF) expression. The authors specify that IL-11 produced by breast cancer cells induces osteoclast formation and bone resorption by two mechanisms, the first related to the production of RANKL by stromal cells/osteoblasts, and the second related to the rise of osteoclast progenitors through tumour-derived IL-11. This partially explains the role of IL-11 in the promotion of osteolysis [71]. Liang M et al. demonstrated, by in vitro and in vivo experiments, that IL-11 plays an essential role in the vicious osteolytic cycle by activating osteoclastogenesis regardless of RANKL, via c-Myc activated by the JAK1/STAT3 pathway in bone metastasis. Indeed, in vivo, the blockade of STAT3 phosphorylation results in the inhibition of osteolysis and of the tumour growth of metastatic breast cancer [83].
Figure 2. Sources, targets, and effects of IL-11 in bone metastasis [33,35,71,78,80,83,84]. IL-11 provides the cells a path to communicate with each other. It acts in paracrine as well as autocrine ways, contributing to the complexity of the bone metastatic microenvironment (Figure created using Servier Medical Art available at https://smart.servier.com, accessed on 7 May 2021).
Several studies have demonstrated that breast tumour cells can also target osteoblasts to stimulate the production of IL-11 [91,92], further increasing IL-11 concentrations in the bone microenvironment. Therefore, the production of IL-11 by cancer cells within bone can be direct or indirect and, in turn, IL-11 inhibits bone formation by suppressing osteoblast activity [93]. Lysophosphatidic acid (LPA), a bioactive phospholipid derived from platelets and also present in the tumour microenvironment, is known to play a critical role in breast cancer osteolytic bone metastasis [94,95]. In an in vitro experiment, it was reported that LPA enhances breast cancer cell-mediated osteoclastogenesis by inducing the secretion of osteolytic cytokines, such as IL-8 and IL-11. In particular, LPA induces IL-11 expression in MDA-MB231 breast cancer cells, and this process seems to be related to the involvement of the PKCδ signalling pathway [96]. Prostaglandins (PGs) are abundant in bone, where they are released by cells of the osteoblast lineage and regulate bone metabolism. PGE2, a potent stimulator of osteolysis, represents a mediator of IL-11-induced bone resorption.
Therefore, cyclooxygenase (COX)-2 inhibitors could be interesting drugs that potentially interfere with IL-11-mediated osteolytic metastasis. Several authors have, indeed, proposed COX-2 inhibitors (NS398, indomethacin, and dexamethasone) as drugs useful for suppressing IL-11-mediated osteolytic bone metastases of tumour cells [97]. Singh et al., focusing on the relationship between COX-2 and IL-11 in vitro in both poorly metastatic (MCF-7) and highly metastatic (MDA-MB231) breast cancer cell lines, demonstrated that COX-2 overexpression induces PGE2-mediated IL-11 expression in both cell types. In an in vivo mouse model of breast cancer bone metastasis obtained by injecting mice with COX-2-transfected MDA-435S cells, they isolated a bone-seeking clone from long-bone metastases that produces higher levels of PGE2 and correspondingly higher levels of IL-11 compared to the COX-2-transfected parental MDA-435S cells. The authors demonstrated the important role of the COX-2-mediated production of IL-11 in breast cancer cells and hypothesized the importance of this process in the development of osteolytic bone metastases in breast cancer patients. Therefore, they propose that COX-2 targeting may be useful in inhibiting the osteolytic process [98]. It has been reported that endothelial cells in bone play important roles in the promotion of bone resorption by secreting IL-11 in physiological and pathological conditions. In particular, bone-derived endothelial cells (BDECs) are specifically involved in osteolytic bone metastasis and also express IL-11Rα and gp130, being in turn affected by IL-11 [84]. Figure 3 illustrates the osteolytic effects of IL-11 in bone metastasis.
Therapeutic Strategies Targeting IL-11
The involvement of IL-11 in different stages of cancer development and progression has spurred many studies aimed at counteracting IL-11 itself or its signalling with different molecular approaches. Drug strategies aimed at targeting IL-11 fall mainly into two categories: monoclonal antibodies directed against either the cytokine or its receptor, and small molecules that interfere with the receptor-signalling complex, including gp130 or the downstream JAK-STAT pathway [15,40]. Targeting human IL-11, or its signalling, in different types of cancer has been reported in a few preclinical models [35,[99][100][101][102]. In vitro experiments utilizing an IL-11-neutralizing antibody have been reported in breast cancer cells [71,80] with promising results. Among the small molecules, Bazedoxifene (a selective oestrogen receptor modulator) represents a potential inhibitor of IL-11 signalling. It binds gp130 and inhibits downstream signal transduction, blocking STAT3 activation in human cancer cell lines.
This effect results in the suppression of gp130-dependent tumour growth of the gastrointestinal epithelium [103]. IL-11 Mutein is a recombinant protein bearing a number of mutations generated to disrupt IL-11 signalling [27,104]. IL-11 Mutein, which binds the IL-11R with higher affinity than IL-11 (20 times more efficiently), showed positive results in the treatment of gastrointestinal cancers in a xenograft model [35]. Given that IL-11 Mutein shows no adverse effects on platelet numbers and is well tolerated, its potential use in clinical therapy has been suggested for other cancer types as well. In xenograft models of human endometrial cancer, the use of a neutralizing anti-human IL-11R antibody has been reported: the treatment transiently decreased tumour growth in mice inoculated with Ishikawa cells, while it reduced tumour size in mice injected with HEC1A cells [102]. A role has recently been reported for Asperolide A, a diterpenoid derived from marine algae, in the prevention of breast cancer bone metastasis. This agent, acting on the PI3K/AKT/mTOR signalling cascade, efficiently inhibits osteoclastogenesis and prevents breast cancer-induced bone osteolysis [105]. Of note, Asperolide A intervenes in a pathway triggered by IL-11. Conversely, Oprelvekin, a recombinant human IL-11, is now routinely used to treat thrombocytopenia in breast cancer patients who underwent radiation therapy, as an alternative to platelet transfusion. Due to its haematopoietic and megakaryocytopoietic activities, Oprelvekin reduces severe thrombocytopenia and accelerates platelet recovery [14,106]. Naturally occurring compounds may also affect IL-11 signalling. In traditional Indian and South East Asian medicine, Curcumin, derived from Curcuma longa, has been used to treat a variety of diseases given its known anti-inflammatory and anticancer properties.
Indeed, Curcumin has shown the ability to modulate numerous signalling pathways, including STAT3 activation and the PI3K/AKT/mTOR and HIF-1 signalling pathways [107]. Moreover, in an in vivo experiment, a preparation containing the essential turmeric oils in addition to the standard Curcumin demonstrated a higher bioavailability, a prerequisite for absorption following ingestion, resulting in upregulation of the expression of IL-10 and IL-11 [108]. Taken together, this preclinical and clinical evidence allows Curcumin and its derivatives to be considered as tools to potentiate chemotherapeutic agents against cancer. miRNA and Inhibition of IL-11 Signalling Some miRNAs, a class of key posttranscriptional regulators [109], have been described to inhibit IL-11 signalling in different diseases as well as in breast cancer. Pollari and colleagues elucidated the role of miRNAs in the bone metastatic process of breast cancer and specifically analysed the miRNAs that regulate TGFβ-induced IL-11 expression [110]. These authors identified miR-204, -211, and -379 as the strongest modulators of IL-11 production; these miRNAs directly downregulate a key pathogenetic process in breast cancer metastasis, i.e., the TGFβ-induced expression of IL-11. Other authors reported that miR-124 inhibits breast cancer bone metastasis through the repression of IL-11 [111]. In this study, the authors demonstrated that miR-124 negatively regulates IL-11 expression, both in vitro and in vivo. A negative correlation has been demonstrated between miR-124 level and IL-11 expression, both in cell lines and in human metastatic bone tissues. Patients with lower miR-124 expression or higher IL-11 expression in metastatic bone tissues show a more rapid progression of the disease and therefore a shorter overall survival. The authors propose miR-124 and IL-11 as new therapeutic targets for breast cancer patients at an early stage and as prognostic markers in advanced-stage patients with bone metastasis.
Bockhorn et al. identified IL-11 as a relevant downstream target of twinfilin (TWF1), an actin-monomer-binding protein. TWF1 regulates the expression of IL-11 at both mRNA and protein levels. The authors demonstrated that miR-30c regulates breast cancer chemoresistance and EMT by directly targeting the cytoskeleton gene TWF1 and thus indirectly targeting the cytokine IL-11 [112]. Samaeekia et al. reported that miR-206 suppresses breast tumour stemness and metastasis by inhibiting both self-renewal and invasion. In triple-negative breast cancer cells, the authors identified the pathway mediated by miR-206: it targets TWF1, megakaryoblastic leukaemia (translocation) 1 (MKL1), and serum response factor (SRF), and subsequently leads to lower levels of IL-11 mRNA and protein expression [113]. Conclusions IL-11 appears as a multifunctional cytokine with known roles in cancer (clearly pro-tumorigenic and pro-metastatic). Considering that, to date, the treatment for bone metastases is largely palliative, the osteolytic action that the cytokine performs in these conditions places it at the centre of investigations aimed at finding solutions to improve the conditions of these patients. However, despite the promising results in animal models, IL-11-based therapies still have to overcome numerous challenges. The main challenge is the interference that blockade of IL-11, or of IL-11 signalling molecules, induces in physiological mechanisms. Of note, in vivo, a plethora of signals, emanating from cancer and microenvironmental cells and influencing each other, operates in the complex bone milieu. Many of these signals find their receptors on bone-resident cells and so act to remodel bone. In this regard, some results collected in vitro to explain the mechanism of action of a particular cytokine may not mimic the real situation that occurs in vivo.
Deciphering the roles of IL-11 and other cytokines, as well as of the intracellular signals involved in the colonization of bone by metastases, will lead to the identification of targets for coordinated therapies. A deeper understanding of the role played by IL-11 in the various stages of tumour development and progression, in parallel with improved knowledge of IL-11 signalling, will fully reveal its potential uses. Interesting and promising are some miRNAs, uncovered as key cellular mediators of the metastatic process in breast cancer in vitro and with potential clinical relevance to prevent or eventually treat breast cancer bone metastases. Therapeutic options for breast cancer patients at risk of progressing to bone metastasis are necessary, but IL-11-based therapy requires more extensive analyses to confirm and extend its use. Finally, conflicting results have been reported on the correlation between IL-11 expression and the receptor endowment of breast cancer cells [36,114]. This represents an important issue, and further studies should be conducted to highlight the possible impact that IL-11 could have on the molecular classification of breast cancer as a prognostic marker. Conflicts of Interest: The authors declare no conflict of interest.
7,613.2
2021-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Deciphering of Space Images of the Inder Salt-Dome Upland in the ENVI 4.7 Program Space images play an important role in the study of the Earth, as they carry the main information received from Space Flyer Units (SFU) to help researchers. Deciphering of space images gives the opportunity to study a territory and to plot different maps. On the basis of a space image obtained from Landsat 5TM (30 m resolution, acquired 1 September 2012), we obtained a picture of the modern relief of the northern part of Inder lake. By comparing the space image with topographic maps of 1985, we succeeded in identifying the dynamics of landform change in the studied area, which is shown on the drawn map of the relief of the Inder salt-dome uplift. Fourteen classes, each corresponding to a particular type of terrain or landscape complex, have been distinguished in the studied area. The Inder salt-dome uplift is a paradynamic conjugation, consisting of the highly karsted Inder Mountains, corresponding to a large diapir uplift, and the Inder Lake, which has a large ellipsoidal shape. Geomorphologically, the investigated territory is located on the left bank of the Zhaiyk River and presents a salt-dome uplift in the form of a plateau-like hill raised 12 to 40 m above the surrounding surface. The maximum height reaches 42.5 m (Mount Suatbaytau). The crest of the Inder salt dome is composed of Lower Permian sediments (rock salt with anhydrite, potassium-magnesium salts) and has an area of about 210 km². The Inder lake basin is represented by a tectonic depression, which is the local basis of erosion and a drainage place of the Inder uplift karstic water. The lake area is 150 km². Depending on the climatic conditions, the water level can vary.
Object Classes; Landscapes Complex; Mapping Introduction Scientific interest in the use of remote sensing methods for studying the Earth and the planet's natural resources has reached a higher level through multizone photography modes [1] [2], which allow obtaining information that can be used both for improving the content of universal maps and for drawing specific maps [3] [4]. The development of new methods of automatic image processing by means of GIS programs has contributed to the success of multizone photography [1]. Analyzing the materials of space surveys, it should be noted that these materials should be supplemented with a number of various ground and air methods for the remote study of the underlying surface using cartographic material; studying the brightness characteristics of the underlying surface will allow using them in thematic deciphering and in map drawing of the studied area [5] [6]. Relief deciphering is focused on achieving "geographic likelihood" on maps. It has the maximum indication value, as it can be very easily read on the images. There is a certain connection between the morphologic and morphometric peculiarities of modern relief, endogenous and exogenous processes, geologic structures, surface and subsurface waters, flora and subsoil [7].
Research Object The Inder lake area, which is characterized by peculiar karstic shapes, was chosen as the object of study. The Inder lake is located in the Atyrau region in West Kazakhstan. The region is the largest area of karst development in Kazakhstan. The Inder lake area was studied in summer 2013. For that purpose, a morphogenetic method of study was chosen. Various karstic forms, their lithological composition, their dynamics of development and their order on the topographic maps were studied. The collected information was then processed and analyzed. As the result, the morphographic and morphometric parameters of the karstic relief forms were identified. Geomorphologic Characteristics of the Research Object Geomorphologically, the Inder lake area is located on the left bank of the Zhaiyk River and represents a salt-dome uplift in the form of a plateau-like upland 12 - 40 m above the surrounding surface. The maximum height reaches 42.5 m at Suatbaitau Mountain. The crest of the Inder salt dome is composed of Lower Permian sediments (rock salt with anhydrite, potassium-magnesium salt). Its area is about 210 km² (measured from the space image). The Inder lake basin is represented by a tectonic depression, which is a local basis of erosion and the drainage area of the Inder uplift karst waters. The lake area is 150 km² (measured from the space image). The northern coast of the lake is a cliff, in places 15 - 20 m high and more; the south coast is flat. The lake water is bitter-salty; local people even compare it with the Dead Sea in the Middle East, as a bather can only float in it, in either a horizontal or a vertical position. The northern coast is fed from a source located at the bottom of the slope, which can be seen on the space image (by its colour).
The Inder salt dome lies in the northern part of the lake; it is composed of gypsum rocks with a thickness of 60 m. Most of the upland consists of cuesta-shaped ridges, whose height varies mostly from 20 to 40 m. Karst processes are actively developing on the surface of the Inder salt dome. The density of karst forms reaches 200 - 300 units per 1 km² [8] (Figure 1). Selection of the Space Image of the Investigated Territory We ordered the Landsat space image on the glovis.usgs.gov site, which was created especially for natural-resource monitoring of different territories. The peculiarity of the Landsat 5TM image is its electro-optical TM camera and an upgraded multispectral scanner (MS). The TM camera forms an image in seven bands of the electromagnetic spectrum (with 30 m spatial resolution), in the visible and infrared range, with a swath width of 185 kilometers [8]. Large-scale images, made in different periods, should be used for such studies. They should cover the whole studied territory and be of one type. The highest-quality images (without cloud cover), suitable for mapping coastal water surfaces, are selected. Such was an image taken at the end of the summer period of 2012, with a spatial resolution of 30 m. We obtained a picture of the modern relief of Inder Lake from this space image (Figure 2). The correction and geographical junction/bridging was done by means of the ArcGIS 9.3 program. Binding of the Space Image in the ArcGIS/ArcMap Program As is known, the geometry of the obtained images is in most cases accompanied by distortions. Consequently, it is difficult to perform accurate measurements. To restore the image geometry, photogrammetric processing of the image is carried out, during which a one-to-one correspondence between points on the image and the same points on the earth's surface is established, and the geometric distortions of the image are eliminated [9].
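The point-correspondence idea described here can be sketched as a least-squares affine fit from a few ground control points. This is a minimal illustration, not the actual ArcGIS procedure; the pixel and map coordinates below are hypothetical stand-ins for the three reference points the study took from the topographic map.

```python
import numpy as np

# Hypothetical ground control points (GCPs): pixel (col, row) -> map (lon, lat).
# Values are illustrative, not the actual coordinates used in the study.
pixel = np.array([[120.0, 80.0], [430.0, 520.0], [700.0, 150.0]])
geo = np.array([[51.44, 48.55], [51.63, 48.38], [51.95, 48.53]])

def fit_affine(px, gc):
    """Least-squares affine transform (3x2 matrix) mapping pixel -> map coords."""
    a = np.hstack([px, np.ones((len(px), 1))])        # rows [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(a, gc, rcond=None)
    return coeffs

def apply_affine(coeffs, px):
    """Apply the fitted transform to one or more pixel coordinates."""
    px = np.atleast_2d(px)
    a = np.hstack([px, np.ones((len(px), 1))])
    return a @ coeffs

coeffs = fit_affine(pixel, geo)
# With exactly three non-collinear GCPs the affine fit is exact (RMSE ~ 0).
rmse = np.sqrt(((apply_affine(coeffs, pixel) - geo) ** 2).mean())
```

With more than three GCPs the same `lstsq` call gives the best-fit transform, and the residual RMSE becomes a useful quality check for the georeferencing.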
The gridding in the ArcGIS/ArcMap software package is performed in several steps, the sequence of which depends on the type of the gridded material. We performed the cross-line screen bridging in ArcGIS using tools from the Spatial Reference (Georeferencing) panel. For this, it was enough to know the coordinates of several points on the cross-line screen, or to have vector data that can then be compared with the data on the cross-line screen. We took the coordinates of Inder lake (48˚33' north latitude, 51˚44' east longitude), of Bodene village (48˚23' north latitude, 51˚38' east longitude), and of mine #103 (48˚32' north latitude, 51˚57' east longitude) from a topographic map (1:100,000 scale). After bridging the space images, we marked out the borders of the Inder salt-dome uplift in the northern part of the lake, i.e., the development area of karst relief (Figure 3). Space Image Processing in the ENVI 4.7 Program During space image processing, the signs (markers) of spectral brightness are used. That is why, in the course of automated deciphering, the problem of determining the quantitative connections between spectral brightness and object characteristics is solved. The distribution of pixels by classes happens in the spectral space.
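Classification in spectral space can be illustrated with a plain k-means clustering of pixel spectra. This is a simplified sketch on synthetic data, assuming random band values; ENVI's unsupervised classifier (ISODATA) additionally splits and merges clusters and applies a convergence threshold, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a multispectral scene: 100x100 pixels, 7 bands
# (Landsat 5 TM records seven spectral bands; values here are random).
image = rng.random((100, 100, 7))
pixels = image.reshape(-1, 7)

def kmeans(data, k, iters=20, seed=0):
    """Plain k-means over pixel spectra: each pixel is a point in
    7-dimensional spectral space, assigned to its nearest cluster centre."""
    r = np.random.default_rng(seed)
    centers = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Distance of every pixel to every cluster centre in spectral space.
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centre as the mean spectrum of its cluster.
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(pixels, k=14)   # 14 classes, as in the study
classified = labels.reshape(100, 100)    # one class id per pixel
```

Each of the 14 resulting label values corresponds to one spectral cluster; assigning a colour per label reproduces the kind of classified raster shown in Figure 4.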
The automated processing we used is based on the fact that the studied object is characterized by a set of quantitative characteristics of its image, which form an image signature. The image is automatically partitioned into elements, and for each of them the numerical values of the signs/characteristics are determined, forming a multidimensional vector. The task of classification is to divide the signs/characteristics space into local regions corresponding to one class of objects. The program performs a reliable classification when the signs/characteristics correspond uniquely to the object. To increase reliability, textural signs are used in addition to the spectral ones. The shape and location of objects and information about the surrounding objects are recorded. These signs increase the reliability of classification [9]. Space image processing is performed by the classification method in the ENVI 4.7 program, where 255 related colours are automatically highlighted. The task of the classification was to break the group of objects into a number of classes (in this case 14 classes), given the number of iterations and the convergence threshold. Each class is then assigned a colour (Figure 4). After dividing into 14 classes, each class in ROI format is transferred to a vector layer for use in GIS and is loaded in another window (white contours, Figure 4). The vector file in ENVI format (evf*) is resaved in shapefile (shp) format. Objects Deciphering Further work continues in the ArcGIS program, where the recognition of objects and the grouping of colours are carried out as per the attribute table. Their combination or separation into classes is also carried out according to certain features (Figure 5). Then, on a topographic base (Figure 6), we find the appropriate objects and classify them by comparing the objects and using the colour configuration of the space image through three channels: 7, 4, 2 (Figure 7).
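The 7, 4, 2 channel combination used for visual comparison can be sketched as a false-colour composite with a simple percentile contrast stretch. The band arrays below are random stand-ins for a real Landsat 5 TM scene, and the stretch percentiles are an assumed, commonly used choice.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical Landsat 5 TM scene as a dict of band arrays (values illustrative).
bands = {b: rng.random((200, 200)) for b in (1, 2, 3, 4, 5, 6, 7)}

def stretch(band, lo=2, hi=98):
    """Percentile contrast stretch to the 0-1 range, a common display step."""
    p_lo, p_hi = np.percentile(band, [lo, hi])
    return np.clip((band - p_lo) / (p_hi - p_lo), 0, 1)

# False-colour composite with TM bands 7, 4, 2 mapped to R, G, B,
# the channel combination referred to in the text (Figure 7).
rgb = np.dstack([stretch(bands[7]), stretch(bands[4]), stretch(bands[2])])
```

The resulting `rgb` array can be displayed directly (e.g. with `matplotlib.pyplot.imshow`) to reproduce the kind of colour configuration used for comparing classes against the topographic base.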
Objects Classification Thus, 14 classes, each corresponding to a particular morphogenetic relief type or landscape complex, have been selected in the studied territory. For the comparison of the selected classes, we used topographic bases of 1:50,000, 1:100,000 and 1:200,000 scales (Figure 8). The process of class selection was the following: the largest area, 1,222,800 m², is occupied by extremely barren, anthropogenically disturbed terrain. Sor-affected (salt-flat) depressions, spread mainly in the eastern part of the uplift, also occupy a large area: 555,300 m². Within the uplift territory, the area of large and medium-sized sinkholes is 900 m² each (total area 1800 m²). Small sinkholes are widespread there; the area occupied by them is 55,530 m². Gypsum hills occupy a territory of 7200 m². The area of man-made quarries is 1800 m². Research Object Mapping After the space image classification, the map called "Relief of the Inder salt-dome upland" was drawn with a legend (shortened in the article). It shows the development of certain relief forms and types on the Inder salt-dome upland (Figure 9). The legend matches the 14 object classes selected above.
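The per-class area figures quoted here follow directly from pixel counts at the 30 m resolution of the image. A minimal sketch of that computation, using a random stand-in for the real classified raster:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical classified raster of 14 class ids (0-13); the real raster
# comes from the ENVI classification, not from random data.
classified = rng.integers(0, 14, size=(500, 500))
PIXEL_AREA = 30 * 30  # m^2 per pixel at the 30 m Landsat 5 TM resolution

# Count pixels per class and convert counts to areas in square metres.
ids, counts = np.unique(classified, return_counts=True)
areas_m2 = dict(zip(ids.tolist(), (counts * PIXEL_AREA).tolist()))

total_km2 = sum(areas_m2.values()) / 1e6  # whole raster area in km^2
```

Summing the class areas over the delineated uplift polygon is how a figure such as the ~210 km² total karst development area can be checked against the map.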
Thus, the Inder salt-dome upland is a paradynamic junction, consisting of the highly karsted Inder Mountains, which correspond to the large diapir uplift, and the large, ellipsoidal in shape, Inder lake, with an area of 115 km² and with the water's edge 23 m below sea level. The lake is fed by snowmelt and rain water, springs and groundwater coming from the Inder Mountains. The lake stretches from north-east to south-west. Its northern and western shores are steep and bold, reach over 20 m in height, and are cut by short slit-like and trough-shaped ravines and gullies. At the northern shore of the lake, there are springs with mineral water in the gullies, 80 in total. One of them, Aschebulak, is located on the north-east shore of the lake and is used for balneological purposes. The average annual output of the springs is 78.2 l/s, varying over a wide range (33 - 144 l/s) [10]. The northern coasts are composed of gypsum overlaid by Quaternary deposits. Two springs, Belaya Rostosh and Aksai, which expose the Jurassic and chalk deposits, run into the lake from the north-west. The eastern and southern shores are flat and cut by wide gullies. The karst field of the Inder Mountains is the largest in the Pricaspian lowland. The total number of karstic forms reaches 5000. The density of surficial karstic forms reaches 200 - 300 pcs/km². The total surface lowering caused by the karst processes is 1.87 mm/year. Four types of karst sinkholes are the most noticeable: saucer-shaped, cone-shaped, ponor-shaped and well-shaped. Saucer-shaped sinkholes are spread everywhere, but mostly around the periphery of the Inder mountains. They reach 10 - 15 m in diameter and 2 - 3 m in depth. Cone-shaped sinkholes reach 20 m in depth and 30 - 40 m in diameter. Ponor-shaped sinkholes have a cone shape with a narrow chink (ponor) in the bottom, which serves as a drainage channel. Karst wells are peculiar enough: being of small size (up to 5 m in diameter), their depth reaches 15 m. Some karst kettles and sinkholes are spread on the south and south-east of the Inder lake. S. S. Korobov and I. K. Polenov [11] identify a number of factors contributing to karst development on the Inder upland: 1) The composition of the cap rocks (gray medium-grained gypsum); 2) The fissuring of the cap rocks (deep open fissures 10 - 16 m deep, and even more); 3) The elevation of the karsted massif above the basis of erosion (up to 35 - 40 m above the Inder lake); 4) Climatic conditions (continentality and climate aridity, rain showers); karst is active in the period of snow and rain showers; 5) The low thickness of the covering (Khvalynian) formations and their sandy (fine sandy loam and light loam) composition. The morphological structure of the Inder salt-dome landscape is complemented by a two-level lake terrace that stretches along the southern and south-western coast of the lake. Fragmentarily, the terrace occurs along the northern and eastern shores. The lower level of the terrace is located at a height of 1 - 1.5 m above the water's edge of the Inder lake, the upper level at 7 - 8 m. It is obvious that the steepness of the coasts of the Inder lake, related to the compensating trough, has a tectonic cause. The surface of the Inder salt dome (directly beneath the north shore) is inclined at an angle of 85˚, and within the Inder mountains at 5˚ - 30˚ [12].
The Inder denudation karst hill is obviously a relic of an ancient peneplain, which was first raised and eroded under the influence of salt tectonics, and then underwent karst-denudation preparation with the formation of various micro- and meso-forms of relief. The formation of the largest Inder karst field in the Caspian lowland relates mainly to the secondary cap rock, which completely covers the salt table of the dome over an area of 230 km². The thickness of the karst-tectonic breccia, which builds up the cap rock, is about 50 - 60 m. The cap rock is built up of rock salt, potassium salt and potassium-magnesium salts (halite, sylvite, carnallite, potassium sulfate and magnesium), anhydrite and other rocks. Boric mineralization appears in the assises of potassium and potassium-magnesium salts (kaliborite, boracite, hydroboracite, etc.), with a B2O3 content in the rocks at the level of 1% - 5% [13]. The formation of the secondary (karst-tectogenic) cap rock is associated with the wet postglacial epochs and with leaching by sea water during the periods of Caspian transgressions. The duration of the cap rock mass formation process has led to the formation of various minerals, including unique ones: vitchit, hergheyit, hydroboracite, inderboryte, inyoite, colemanite, kurnakovite, sulphoborite [13]. The Inder dome cap rock is watered by fractured-karstic suprasalt waters, connected with fractured karsted gypsum, anhydrite, and sandstones. The Inder cap rocks have an extremely high filtration capacity (300 - 500 l/day) [14]. Aquifer recharge is accomplished by atmospheric precipitation and by transit streams directed towards the Inder lake. Thirty-two springs with various water discharges, from centiliters to a few tens of liters per second, come to the surface at the northern shore of the lake. The total discharge of all the springs, on average, is 35.25 l/s (or 1.1 mln m³/year). The most powerful is the Aschebulak spring (22.5 l/s).
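The two quoted discharge figures are mutually consistent, as a quick unit conversion shows (assuming a non-leap year of 365 days):

```python
# Cross-check of the spring discharge figures quoted above: converting the
# mean total discharge of 35.25 l/s into an annual volume in cubic metres.
SECONDS_PER_YEAR = 365 * 24 * 3600          # 31,536,000 s
discharge_l_per_s = 35.25
annual_m3 = discharge_l_per_s * SECONDS_PER_YEAR / 1000.0  # litres -> m^3
# about 1.11 million m^3/year, matching the "1.1 mln m^3/year" in the text
```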
The geomorphologic feature of the Inder denudational-karstic hills is the difference between the north-western and south-eastern parts (Figure 9). The north-western part of the hill is like a slightly rolling plain covered with many small sinkholes and kettles, while its south-eastern part is covered with ribbed ridges, which represent the slopes of large karst valleys and basins filled with terrigenous deposits [15]. The Inder Mountains are characterized by forms of internal meso- and micro-relief belonging to various height and genetic levels. The internal structure of the relief of the Inder mountains is defined mainly by the salt table irregularities of the salt-dome upland, which is complicated both by diagenetic forms of secondary salt tectonics (salt spines and stocks) and by forms of underground and surface karst formation. In particular, the so-called "kurgan-tau", peculiar to the central part of the mountains [16], are crescentic ridges with a ribbed top, built up of compacted reddish gypsum, with slopes of white gypsum. The feet of the ranges turn into flat inter-ridge valleys. Presumably, the formation of "kurgan-tau" was caused by the dissolution of salt folds on the surface of the salt uplift and the collapse of the gypsum roof. The formation of systems of tectonic faults over the rising salt spine has played a leading role in the formation of the karst ridges. This system represented radially and concentrically divergent fissures, which broke up the rising hill into separate sectors. Some of the sectors were later eroded and destroyed, which has led to the formation of ribbed ridges like "kurgan-tau" (Figure 9). Obviously, "kurgan-tau" are the slopes of large karstic basins cropping out to the surface, and the inter-range valleys are the flat-bottomed surfaces of the fragmentary material filling the basins.
The influence of desalination processes is typical for the soils and vegetation of the Inder salt-dome upland. The morphological structure of the Inder mountains landscape is formed by two components: 1) sloping ridges with broad ravines, with slight or no karst formation; 2) highly karsted ranges and plateaus. Steppe formation processes occur on the karsted landforms of the Inder mountains. There are peculiar dark-coloured soils with an abnormally thick humus horizon (70 - 80 cm) in the karst basins and hollows. In general, however, immature and highly fragmented soils with short profiles dominate on the Inder karst field, under high mosaicism and soil cover variability. A meadow-steppe vegetation with shrubs (Spiraea, Rhamnus cathartica, honeysuckle, wild apple, rose, astragalus), peculiar to this landscape region, also grows in the karst basins. For example, a Eurotia community (Ceratoides sp.-Artemisia lerchiana), together with anabasis (Anabasis salsa), feather grass (Stipa capillata L.), and bulbous bluegrass (Poa bulbosa L.), occupies large areas.
Conclusions Deciphering of space images of the Inder lake area shows that the use of GIS technologies (automated deciphering) gives a great opportunity to solve more complex problems in geomorphology according to the morphogenetic research method, as confirmed by the following results: 1) The Inder salt-dome upland was and still remains the largest region of karstic rock and salt-dome tectonics development in Kazakhstan; 2) On the basis of space image processing, the morphographic and morphometric parameters of the development of various karst relief types of the Inder salt-dome upland have been identified. The total karst development area has been determined from the space image; its area is about 210 km²; 3) The combined use of the «ENVI» and «ArcGIS» programs for the detailed processing of «Landsat» space images, applying the large-scale topographic base, has allowed selecting 14 classes of objects in the studied territory that correspond to certain morphogenetic complexes of landscapes in the arid climatic conditions of Kazakhstan; 4) The map named "Relief of the Inder salt-dome upland" is the result of the research. During the map plotting, a legend has been developed, which reflects the modern dynamics of karst relief development on the Inder salt-dome upland. Thus, the deciphering of space images of the Inder lake area confirms that the karst relief continues to develop on this territory at present, and reflects the process and dynamics of modern relief-forming processes in the conditions of the salt-dome tectonics of this arid territory of Kazakhstan.
Figure 2. Space image of Inder Lake. The lake area is 150 km².
Figure 3. Junction/bridging of the space image and Inder salt-dome upland boundary detection.
Figure 4. Space image processing in the ENVI 4.7 program.
Figure 5. Selection of limestone sink types (by green colour).
Figure 7. The process of isolating individual classes of objects.
Figure 8. Mapping of the space image over the topographic base.
4,685.8
2014-03-27T00:00:00.000
[ "Geology" ]
Studies on the temperature dependence of electric conductivity for metals in the Nineteenth Century: a neglected chapter in the history of superconductivity Two different lines of research made significant contributions to the discovery of superconductivity: the liquefaction of gases and the studies of the temperature dependence of the electrical conductivity, or resistance, of pure metals and alloys. Various publications have described and discussed the achievements in the first of these subjects. The second subject has not, however, received the same attention. This article tries to fill this gap by presenting an account showing details of the evolution of the ideas, the first essentially experimental contributions to the subject and the persons responsible for them. Introduction The traditional accounts of the discovery of superconductivity, whose first century is commemorated this year, show it as a consequence, in some way accidental, of the experimental research on the liquefaction of the then so-called 'permanent' gases. A previous article on this subject is mainly focused on the historical evolution of these events [1]. This conception is, however, historically incomplete, since parallel achievements developed in a different line of research made equally significant contributions to the identification of the new phenomenon. The need for appropriate thermometric instruments for the ever more extreme and narrow regions of low temperature, in order to replace the by then impractical gaseous ones, prompted research focused on the application of different physical principles and materials to fulfil this objective. The studies on the electrical conductivity, or resistance, of pure metals and alloys and their temperature dependence aroused the interest of scientists of different nationalities in the second half of the nineteenth century and the first half of the twentieth century, and
contributed not only to the understanding of the electrical properties of those materials but also to the proposal of several theoretical models of electric conduction. The purpose of this article is to give an account of the more relevant facts related to this second line of knowledge, including details of the first essentially experimental contributions to the subject and the persons responsible for them. Long before the German physicist Paul Drude put forth his famous free electron theory [2], electricity had been established, around one hundred and eighty years earlier, as an imponderable elastic fluid transferable from one body to another by the British scientist Stephen Gray (1666-1736), thus sweeping away the old idea of an electrical effluvium inseparably attached to the body in which it was excited [3,4]. Stimulated by reading the accounts of his compatriot, the physicist Francis Hawksbee (1666-1713) [5,6], and after some experience on related subjects as an assistant to the British natural philosopher Jean Theophilus Desaguliers (1683-1744), Gray, a dyer by profession and an amateur naturalist by inclination, carried out in 1720 a series of experiments using a hemp thread that allowed him to demonstrate that electricity could be conducted by some materials for distances as great as 233 m, while others did not conduct electricity at all [7]. Although initially almost ignored, Gray's early ideas on electrical communication came to the fore nine years later and became public with a new publication in 1731 [8].
No further significant progress was made on this subject until 1734, when the French scientist and Superintendant of the Jardins du Roi, Charles François de Cisternay du Fay (1698-1739), suggested the existence of two kinds of electric fluid, vitreous and resinous, which could be separated by friction and neutralized each other when they combined [9]. The so-called "two-fluid" theory of electricity proposed by du Fay, based on experiments with the attraction and repulsion of different electrified substances, was later contrasted with the "one-fluid" theory suggested by the American scientist Benjamin Franklin (1706-1790), which considered only one fluid, present in excess or in deficit and thus positively or negatively charged [10]. With the classification of substances as conductors or insulators and the establishment of the direction of electricity flow from positive to negative, the basis for the behavior of the new property was ready. The British natural philosopher Joseph Priestley (1733-1804) was the first to attempt, although with great imperfections, the estimation of electrical conductivity in metals by using static electricity. He determined the relative scale of conducting power between two metals by measuring the amount of fine wire melted after passing similar discharges through wires of identical length and diameter [11]. The first notice of the dependence of electrical conduction on temperature, however, is very probably due to the British scientist Henry Cavendish (1731-1810). Accurate measurements he made in 1776 of the relative resistances of an iron wire and of solutions of common salt allowed him to identify better conducting powers for warmer solutions than for colder ones [12]. The first more reliable measurements for metals with voltaic electricity are attributable to the British chemist Humphry Davy (later Sir) (1778-1829), who carried out researches in 1821 by which he not only clearly proved the differences existing between the conducting power of different materials and its 
dependence on temperature, but also found the relation of this power to other physical variables such as the weight, surface and length of the conducting body, as well as the conditions of electro-magnetic action [13]. Working as nearly as possible with similar wires (diameters of more than one-tenth of an inch and lengths of three inches) of platinum, silver, copper, tin, iron, lead, gold and palladium, Davy found much greater differences in the behavior of the different materials than he had initially expected and, above all, what he considered the most remarkable result: the significant variation of the corresponding conducting powers with temperature, which were lower in some inverse ratio as the temperature was higher. One year before the publication by the German physicist Georg Simon Ohm (1789-1854) of the well-known mathematical theory of the galvanic circuit, and based on Davy's work, the French scientist Antoine César Becquerel (1788-1878) compiled in 1826 a list of the conducting powers of nine metals (relative to that of copper), which was considered a reference work and used as a guide on the subject for several decades [14]. The first analytical relations of dependence The first mathematical relation for the temperature dependence of electrical conductivity was established by the Russian physicist and Professor at the University of Saint-Petersburg, Emil Khristianovich Lenz (Heinrich Friedrich Emil) (1804-1865) (Fig. 
1), widely known for the discovery of two fundamental physical laws on the phenomena of induction and the thermal action of a current (the latter now better known as Joule's law) [15]. After finishing some electromagnetic experiments on the influence of various factors in the induction of electrical currents by magnets, and before starting those that led him to the establishment of the law that bears his name, Lenz was involved in studies on electrical resistance and conductivity in metals. In an article published in 1832 [16] he revealed how the reading of a paper written by the Italian physicist Leopoldo Nobili (1784-1835) and his countryman, the science administrator Vincenzo Antinori (1792-1865), on the electrical phenomena produced by a magnet [17], suggested to him the idea for the new subject of research. The paper described the way they used for determining the order in which four different metals (copper, iron, antimony and bismuth) were adapted to produce the electric current by magnetism. Lenz's exact words were: "it is particularly striking that the order is the same as that which these metals occupy, also, in reference to their capacity of conducting electricity, and the idea suddenly occurred to me whether the electromotive power of the spirals did not remain the same in all metals; and whether the stronger current in the one metal did not arise from its being a better conductor of electricity than the others. With this in view, I examined four metals, copper, iron, platinum and brass" [16]. In a later paper, read before the Academy of St. 
Petersburg on June 7, 1833, Lenz revealed details of his research. A pair of identical spirals was built for each metal under investigation and connected in series into a single circuit whose free ends were joined by copper lead wires to a galvanometer very similar to that invented by Nobili. By using a circuit with an electromotive force which was free of the uncertainty implied by the internal resistance of the battery, Lenz was able to find the electric conductivity of each of the studied metals at different temperatures and to determine, with a considerable degree of accuracy, the magnitude of their respective changes with this variable. The ballistic method he employed for measuring this and other electric and magnetic quantities, based on the original conception of an instantaneous, impact-like effect of the induction current, allowed him to determine values at between six and twelve different temperatures for, initially, silver, copper, brass, iron and platinum [18], and some time later, gold, lead and tin too [19]. In accordance with the particular analytical style that characterized his research and distinguished it from that of most of his contemporaries, Lenz worked the whole set of experimental values with the assistance of mathematical methods, mainly that of least squares, seeking to establish a general quantitative relation between both variables. The relation he found for this specific case was (using his own notation) of the form γn = x + yn + zn², where γn represented the electrical conductivity at a temperature n, x the conductivity at 0° on the Réaumur scale, and y and z two particular coefficients for each specific substance (Table 1). Lenz determined as well the ranges in which the formulas could be used for each of the studied metals. Researches carried out more than a decade later by the German physicist and mathematician Johann Heinrich Müller (1809-1875) showed a favorable comparison between the results found by application of these formulas and the experimental 
values [20]. The next essay on the subject was carried out more than a decade later by the third son of the previously mentioned A.C. Becquerel, the French physicist Alexandre-Edmond Becquerel (1820-1891) (Fig. 2), better known for his studies on solar radiation and on phosphorescence. The author did not, however, make any reference to Lenz's work, and the research included not only the effect of heating on the electrical conductivity of a larger number of metals, but also of some liquids and solutions. The apparatuses he used for the study incorporated several changes intended to improve the accuracy relative to that used by Lenz, and are schematically shown in Fig. 3 [21]. A first step for the evaluation of the influence of temperature on the conducting power was its determination at a reference value (around 12.75 °C). The corresponding device included (Fig. 3a) a differential galvanometer having two separate wires in its coil (one of them of the metal to be studied) and placed on the board SS', a couple FF' formed by one cylinder of amalgamated zinc immersed in a saturated sodium chloride solution and another of copper in a solution of the sulphate of the same metal, an early version of a Wheatstone rheostat DEE'D' for the introduction of a conductor wire into the circuit, and a device RAA'R' in which the wire whose conductivity was to be determined was placed. 
If the wires were made of two different metallic conductors of equal conducting power, connected to the poles of the same galvanic pair or battery so that the two currents passed in opposite directions through the galvanometer, the needle of the latter would remain stationary, and this became the test for the equality of the conducting powers of the two metals. A quantity denominated the equivalent of resistance had to be determined in terms of a number of divisions of the rheostat for a definite length of wire of the metal to be studied. The rheostat was made to furnish a measure of the resistance to conduction of a given length of wire, and in this way the length of the other wire had to be adjusted according to its conducting power in order to maintain the equilibrium of the needle. With the nature of the connections remaining the same, the apparatus was arranged so that scarcely any change could be made in either circuit beyond that of the length of the wire, and the corresponding variations in the length of the wire under study could be read off on a graduated scale. The study of the effect of temperature included a simple arrangement (Fig. 
3b), in which the wire under investigation was wound around a metallic tube CD to form a helix whose convolutions did not touch, and introduced along with the bulb of a thermometer into a bath of oil. Two thick connecting pieces of copper, whose resistance to electrical conduction could be neglected relative to that of the wire, were connected to the extremities of the latter in the bath, and the whole arrangement was then immersed in a water bath. The increase of the equivalent of resistance in the wire due to heating it from the ambient temperature to the boiling point could then be accurately measured by allowing the heated bath to cool gradually. The results obtained allowed Becquerel to conclude that the increase of resistance per unit change of temperature (dr/dt) was a characteristic constant for each metal studied. Although without the primary purpose of explicitly establishing an analytical equation for estimating conductivities, Becquerel arranged the results as a function of both the so-called coefficient of the increment of resistance ((dr/dt)/R0, the resistance increase per degree divided by the resistance at 0 °C) and temperature, where R0 represents the electrical resistance of the metal at 0 °C. Table 2 presents the coefficients he estimated for a number of metals. A short time later, in the second half of 1848, the Irish astronomer and physicist, Rev. Dr. 
Thomas Romney Robinson (1792-1882), made a contribution to the subject which has been almost ignored by historians. Dissatisfied as he was, on the one hand, with the imperfect state of rheometric knowledge and the unsteady action of the batteries involved in the experiments carried out by Davy, and, on the other, with the range of temperature covered by the researches of both Lenz and Becquerel (little above that of boiling water), Robinson meant to find a more general relation between conductivity and temperature, based on his strong belief in a close relation between it and the molecular forces and structure of matter. The main difference in the equipment used was the inclusion of a pyrometer to measure the temperature by the expansion of a wire of the material to be investigated [22]. Although he apparently identified the analytical expression Rt = R0 + bt (where R0 was the resistance at 0 °C and b its change for one degree) as the most appropriate to represent the results of his experiments, Robinson also understood that corrections had to be introduced for the heating effect of the current on the rheostat and resistance coils by which the resistances were measured. The formula at which he finally arrived, whose results showed close agreement with the experimental values, involved the electric current I. The coefficients a, b and c, again specific to each substance, could be determined by least-squares techniques or by ordinary elimination on the basis of the experimental values of successive trials. The initially promising results obtained for platinum wires of different thicknesses quickly proved to be strongly limited. The possibility of extending these experiments to other metals was reduced by the difficulty of finding any determination of expansions similar to that obtained for platinum, with the only exceptions of iron and copper. The oxidation in the pyrometer of the first of these metals did not, however, permit consistent results. Copper also oxidized, but the film of 
the oxide acted as a coating and protected the interior, preventing changes in the diameter of the wire for temperatures below 480 °C. With results for only platinum and copper, the research did not achieve wide diffusion among the scientific community of the time. Arndtsen, Siemens and the improvements of accuracy and coverage The separate works on the subject by the Norwegian neurophysiologist, physicist, and professor at the University of Christiania, Adam Frederik Oluf Arndtsen (1829-1919) (Fig. 4) and the German Ernst Werner Siemens complete this first chapter of the history. Regarding the first, it was very probably his early interest in phenomena related to nerve conductivity and transmission, and in the uses of electricity in medicine in general, that motivated him to carry out researches on the electrical resistance of metals at different temperatures. Familiar as he was with Lenz's work, the primary objective Arndtsen had with the experiments was to eliminate some sources of error he identified in previous researches, in order to improve the accuracy of the then known results. These errors mainly concerned the significant influence that the contact between the wire and the warm liquid had, especially at high temperatures, on the conductivity measurements. With this purpose in mind, Arndtsen twice modified the arrangements used by his predecessors, working first in his native land, and later at Göttingen in the laboratory of one of his professors, the German physicist Wilhelm Eduard Weber (1804-1891). In order to make the above-mentioned corrections, the first arrangement included the covering of the metallic wire L with silk, its winding around a test tube of small diameter, and the soldering of its free ends to two thick and short copper wires ab and bc (with lengths appropriately conditioned, as explained above) completed with caps fully filled with mercury. The new disposition was then placed into another, wider glass test tube 
provided with a thermometer and closed with corks, and the whole device was immersed in a water or oil bath carefully kept at constant temperature. The whole assembly was coupled to a simple Wheatstone differential galvanometer AA and the rheostat R as the basic elements of the circuit. The essential differences of the second arrangement with respect to the first were the introduction not only of a previously calibrated copper wire ce, placed on a board R and coupled to the motor B to carry out the rheostat function, but also of a differential galvanometer A with multipliers a and b and the respective reel M in the assembly of the whole equipment. This latter modification had as its objective to amplify on the scale F the effect of the current, further deflecting the needle m, thus improving the sensitivity and, as a consequence, the accuracy of the measurements [23]. Both arrangements are shown in Fig. 5. To the only metals studied in the first series of experiments, copper and platinum, Arndtsen subsequently added silver, aluminum, brass, iron, lead, and the alloy argentan, or German silver, the latter composed of nickel, copper and zinc. The experiments showed a resistance increasing proportionally with temperature for most of the metals under investigation, namely silver, copper, aluminium and lead, thus confirming Becquerel's previous findings. For brass, argentan and iron, the changes in resistance with temperature were far from simply proportional, and Arndtsen decided to use a second-order parabolic equation to represent the corresponding experimental data. Some of the formulas he found, valid in the range studied, 0 to 200 °C, are shown in Table 3. 
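The least-squares procedure that Lenz introduced and that Arndtsen applied to his parabolic formulas is straightforward to reproduce. A minimal sketch in Python, using invented resistance readings (not Arndtsen's data) to fit a second-order relation R(t) = a + bt + ct²:

```python
import numpy as np

# Invented resistance readings (arbitrary units) at several temperatures (deg C),
# standing in for measurements of the kind Arndtsen made between 0 and 200 deg C.
t = np.array([0.0, 25.0, 50.0, 100.0, 150.0, 200.0])
r = np.array([100.0, 110.2, 120.8, 143.0, 166.8, 192.0])

# Least-squares fit of R(t) = a + b*t + c*t**2, the class of parabolic
# formula Arndtsen used for brass, argentan and iron.
A = np.column_stack([np.ones_like(t), t, t**2])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
a, b, c = coef

def resistance(temp):
    """Resistance predicted by the fitted parabola."""
    return a + b * temp + c * temp**2

print(f"R(t) = {a:.3f} + {b:.5f} t + {c:.7f} t^2")
```

For metals such as silver or copper, whose resistance grew simply in proportion to temperature, the same fit would return a negligibly small coefficient c.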
Arndtsen observed that, with the only exception of iron, the proportional increase in the resistance of the six different metals investigated varied very little, and that if the resistance at the freezing point was called 100, the numbers representing the increase for one centigrade degree in the metals he investigated lay, again with the exception of iron, between 0.327 and 0.413. This fact allowed him to speculate that if still more careful investigations were carried out and absolutely pure metals were employed, perfectly accordant numbers could be obtained. This small difference led the German physicist and mathematician Rudolf Julius Emanuel Clausius (1822-1888) to suggest a very simple and original new kind of relation between the two variables. After a glance at Arndtsen's results, and without an apparent connection in his mind with any theoretical consideration, Clausius observed that they very closely approached the coefficients of expansion [24]. By leaving out of consideration the quadratic member occurring in iron and taking the mean of the whole of the first coefficients, he obtained an expression of the form Rt = R0(1 + αt), with α this mean coefficient, for the resistance at the temperature t as a function of the resistance R0 at the freezing point. From this equation he concluded that, although the number of metals investigated was still too small and the agreement of the numbers too imperfect to enable him to arrive at a "safe" conclusion, the resistance of simple metals in the solid state was nearly in proportion to the absolute temperature. Almost a quarter of a century later, this speculation was taken up again by the Polish physicist and chemist Zygmunt Florenty Wróblewski (1845-1888), who inferred from it that the electric resistance of metals would be null at the temperature of absolute zero. This inference can be considered the first explicit, although unconscious, reference to the phenomenon that would later be called superconductivity. Although Wróblewski carried out researches on this 
subject working with copper, the technical limitations then existing allowed him to reach temperatures of only about -200 °C, and he was therefore unable to establish conclusive experimental support for this fact [25]. It seems that no further investigations of this relation were later proposed, and the speculation went no further. About 1860 the inventor and industrialist Ernst Werner Siemens (1816-1892) (Fig. 6) investigated the law of change of resistance in wires on heating, and proposed several equations and methods for testing resistances and using them for determining faults. His interest in the subject was not casual, but arose as a consequence of preceding researches, mainly related to electrical communication, which had led him to developments such as the invention of the pointer telegraph, the magneto-electric dial instrument giving alternate currents, and the instrument for translating on and automatically discharging submarine cables, among others. The subject of telegraphy was closely associated with the then system of electrical measurements and with the invention of many delicate measuring instruments. The requirement of standards of measurement, so that not only could quantities be gauged but consistent work could also be done, led W. Siemens and other contemporary scientists to the identification of a general, easily reproducible, and sufficiently accurate standard measure of resistance. W. 
Siemens adopted in this search the resistance of mercury as the unit, and embarked on the most accurate possible experimental determination of it at different temperatures. Among the criteria for this choice were the facts that it could easily be procured of sufficient, indeed almost perfect, purity; the nonexistence of different molecular states that could affect its conducting power; a smaller dependence of its resistance on temperature compared with other metals; and a very considerable specific resistance, which allowed numerical comparisons founded on it as a standard to be small and convenient. As every experimenter would then be able to provide himself with a standard measure as accurate as his instruments permitted or required, and to check the changes of resistance of the more convenient metal standard, W. Siemens understood the necessity of establishing a comparison between the conducting power of mercury and that of some solid metals. By using identical experimental procedures with all the materials under investigation, he was able to build a table giving the relative conducting power of nine metals at the temperature t (Table 4) [26]. The results showed very acceptable agreement with those presented by Arndtsen for the metals common to both studies. The researches of Augustus Matthiessen The perhaps most widely known and comprehensive researches in the nineteenth century on the electric conductivities of metals and their relation to different variables, including temperature, were carried out by the British chemist and physicist Augustus Matthiessen. The practice of not merely working out the solution of each new physical problem he faced after it had been formulated, but also of stating it in terms of mathematics, very probably influenced the character of Matthiessen's later investigations on conductivities and other areas. 
A first paper published by Kirchhoff in Matthiessen's name on the electric conductivity of the two above-mentioned metals, besides potassium, sodium, lithium and magnesium, embodied the experimental results obtained by Matthiessen in the physical laboratory [27]. The required wires were formed in a device he designed to press out small portions of the metals into thin samples by means of a steel press, while the determination of the resistances was made by using a slightly modified apparatus constructed by Kirchhoff on the basis of Wheatstone's method. Other publications of his, including experimental conductivity data for twenty-five metals, followed this paper [28]. Almost simultaneously Matthiessen also showed interest in alloys made of two metals, because of the multiple industrial applications he predicted for them [29], and proceeded to determine not only the electrical conductivities of upwards of 200 alloys of variable composition [30], but also their tenacities and specific gravities. He decided to classify the metals employed in the different alloys in order to try to state some general rules about the behavior of their conductivities compared with those of the metals in their individual condition. The classification included two great groups: those which, when alloyed with one another, conducted electricity in the ratio of their relative volumes, and others which, when alloyed with one of the metals belonging to the first class, or with one another, did not conduct electricity in the ratio of their relative volumes, but always to a lower degree than the mean of their volumes. To the first class belonged lead, tin, zinc, and cadmium, whereas to the second belonged bismuth, mercury, antimony, platinum, palladium, iron, aluminum, gold, copper, silver, and, as he thought, in all probability most of the other metals. The alloys were consequently divided into three groups: those made of the metals of the first class with each other; those made of the metals of the first class with 
those of the second class, and finally those made of the metals of the second class with each other. The comparison between the experimental values obtained and those calculated by assuming a participation of each metal in the whole value proportional to its relative volume in the alloy showed very acceptable agreement. Another comparison, this time between the magnitudes of the electric conductivities of the alloys and those of their constituents, also allowed him to work in the opposite direction, obtaining information about the real nature of the alloys and establishing whether or not some chemical combination really existed in them. In the same way, and because the preparation of copper of the greatest conductivity had great practical importance in connection with telegraphy, and his results showed significant discrepancies with previous similar observations by different researchers, Matthiessen also embarked on studies of the probable influence of minute quantities of other metals, metalloids and impurities on the quantitative magnitude of this property [31]. 
Matthiessen's first researches on the influence of temperature on the electric conducting power of metals were published in 1862. The paper describes the apparatus and the corresponding procedure in the minute detail that characterized all his scientific work [32]. Figure 7a shows the disposition of the whole apparatus. B is the trough in which the wires (soldered to two thick copper wires, bent as shown in the figure, and ending in the mercury-cups E, which were connected with the apparatus by two other copper wires, F′, of the same thickness) were heated by means of an oil-bath; C a piece of board placed so as to prevent the heat of the trough from radiating onto the apparatus; G a glass cylinder containing the normal wire soldered to the wires F′′; H a wire of German silver stretched on the board; I the galvanometer; K the battery; L and L′ two commutators fitting into four mercury-cups at o; and M the block at which to make the observations. In addition, a identifies the tubes for filling the space between the inner and outer troughs with oil, and d a glass tube allowing the thermometer c to pass freely. The way the wire to be studied was placed on a small glass tray in the trough is shown separately in Fig. 7b. Matthiessen determined the conducting power of wires or bars of silver, copper, gold, zinc, tin, arsenic, antimony, bismuth, mercury, and the metalloid tellurium at about 12°, 25°, 40°, 55°, 70°, 85°, and 100 °C, and from the mean of the eight observations made with each wire (four at each temperature on heating, and four on cooling) deduced the same general formula previously proposed by Lenz for representing its dependence on temperature, but determined new sets of coefficients for each one on the basis of the new, accurate experimental data. Table 5 shows the mean formulas found for some of the metals, with the conducting power of each taken equal to 100 at 0 °C. 
One conclusion at which he arrived, according to which the conducting power of all pure metals in the solid state would seem to vary to the same extent between 0 °C and 100 °C, namely by 29.307%, did not receive additional experimental support and consequently did not endure. Two years later he published a new article on the effect of temperature, this time on alloys [33]. The conclusions of the study showed great similarity between the behavior of the alloys and that of the metals which composed them. By using a very similar apparatus he was able to find that the conducting power of alloys decreased (with the exception of some bismuth alloys and a few others) with an increase of temperature, and to deduce specific equations of dependence for fifty-three alloys composed of two metals and three alloys composed of three metals. Table 6 shows the results for some alloys of definite chemical formula; Table 7, on the other hand, shows the variation of these formulas for alloys including the same metals but of different composition. All the values were reduced to 0 °C, as mentioned for the pure metals. Table 5 - Matthiessen's analytical expressions for the relative electrical conductivities of ten different metals. Credit: Ref. [32]. Table 6 - Matthiessen's analytical expressions for the temperature dependence of the electric conductivities of some alloys of definite chemical formula. Credit: Ref. [33]. A very significant conclusion Matthiessen derived from the study was that the absolute difference between the observed and calculated resistances of an alloy at any temperature equaled the absolute difference between the observed and calculated resistances at 0 °C. On this basis it followed that the formula for the temperature correction for a specific alloy could then easily be determined knowing only its composition and its resistance at any one temperature. 
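Matthiessen's closing deduction lends itself to a small computational restatement: if the difference between an alloy's observed resistance and the value calculated from its constituents is the same at every temperature, then the alloy's composition plus a single measured resistance fix its whole temperature-correction formula. A minimal sketch, with invented numbers, that treats the calculated value simply as a volume-weighted sum of the constituents' resistances:

```python
def calculated_resistance(t, components):
    """Resistance calculated from the constituents alone, each metal
    contributing in proportion to its relative volume.
    components: list of (volume_fraction, r0, alpha) tuples, where each
    constituent is taken to follow R_i(t) = r0 * (1 + alpha * t)."""
    return sum(f * r0 * (1.0 + alpha * t) for f, r0, alpha in components)

def alloy_formula(components, t_meas, r_meas):
    """Return the alloy's R(t), using the deduction that the observed
    minus calculated difference is independent of temperature."""
    delta = r_meas - calculated_resistance(t_meas, components)
    return lambda t: calculated_resistance(t, components) + delta

# Invented two-metal alloy: 60% of a metal A, 40% of a metal B, and a
# single observed resistance of 98.5 (arbitrary units) at 20 deg C.
components = [(0.6, 100.0, 0.0038), (0.4, 80.0, 0.0040)]
r_alloy = alloy_formula(components, t_meas=20.0, r_meas=98.5)

print(r_alloy(0.0), r_alloy(100.0))
```

The constant offset delta plays the role of Matthiessen's temperature-independent difference: once it is known from one measurement, R(t) is fixed at every other temperature.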
Table 7 - Matthiessen's analytical expressions for the variation of the temperature dependence of the electric conductivities of some alloys including the same metals but of different composition. Credit: Ref. [33]. Extension and consolidation of an analytical relation The investigations of Arndtsen, W. Siemens, and Matthiessen were limited to the range of temperatures between the freezing and boiling points of water, and did not comprise important metals such as platinum, which began to be considered the most valuable metal for constructing pyrometric instruments. The equation, with the coefficients determined by Matthiessen for example, gives a close agreement with observation between the narrow limits indicated, but proved wholly inapplicable for temperatures exceeding 200 °C, when the term t² began to predominate and to produce absurd values for R. This had become clear to the German-born engineer Carl Wilhelm (later Charles William) Siemens (1823-1883) (Fig. 8), who in 1860, in the course of testing the electrical condition of the Malta to Alexandria telegraph cable during its manufacture and submersion, identified the potential danger of the heat that could be spontaneously generated within a large amount of cable, either when coiled up at the works or on board ship, due to the influence of the moist hemp and iron wire composing its armature. In considering the ways in which such an increase of temperature might happen, his attention was directed to the importance of studying the relation between electric conductivity and temperature. This study additionally led, some time later, to the use of this relation in the design of a new thermometer. The instrument, which really worked as an electric thermometer, was further perfected by C.W. Siemens and applied as a pyrometer to the measurement of furnace fires. As a consequence of this design, C.W. 
Siemens undertook to carry out a series of detailed experiments in order to find a more widely applicable equation [34]. His work was initially focused on platinum, the metal he considered in many respects the most suitable one for extending the enquiries to high temperatures, and one that Matthiessen had left out of consideration in his researches. C.W. Siemens then carried out three series of experiments using an equal number of different assemblies. In the first one the wire was wound upon a cylinder of pipeclay in helical grooves to prevent contact between the convolutions of the wire. This wire was then placed, together with a delicate mercury thermometer, first in a copper vessel contained in a bath of linseed oil, and then in a larger vessel, with sand packed between the two in order to prevent too sudden radiation and the consequent change of temperature. The wire was then connected to a Wheatstone bridge and a sensitive galvanometer. The bath was then very gradually heated by a series of small burners, the resistances being read off at intervals of 4 to 5 °C whilst the oil was kept in continual motion. When the highest point had been reached, the bath was allowed to cool down gradually, and measurements were taken at the same points of temperature as before. This methodology was repeated several times until about six readings of the resistance at each point of temperature had been obtained. The second assembly (Fig. 
9) replaced the wire in the pipeclay by a spiral contained in a glass tube and hung by its leading wires in a rectangular air chamber. The space between the walls was filled with sand in order to ensure a very steady temperature inside. Three mercury thermometers, with their bulbs around the platinum coil in the same horizontal plane, were inserted through the cover of the double chamber. Five small burners heated the box externally, and the box, together with the flames, was always surrounded by a metallic screen in order to prevent irregular losses of heat by radiation or by atmospheric currents. Measurements of the resistance were then taken at the same regular intervals of temperature previously mentioned. The third assembly made use of the same platinum wire, with the only difference that the chamber containing the tube and wire was filled with linseed oil. As no general conclusion could be drawn from the behavior of only one metal, C.W. Siemens decided to carry out similar researches with coils of comparatively pure copper, fused iron (or mild steel), iron, silver and aluminium, which were in a similar way gradually heated and cooled in metallic chambers containing the bulbs of mercury thermometers, and for higher temperatures of air thermometers, and the electrical resistances were carefully determined. The progressive increase of electrical resistance was thus directly compared with the increasing volume of an incondensable gas between the limits of 0 and 470 °C. The experiments were described by C.W. Siemens in 1871 in a Bakerian lecture delivered at the Royal Society, in which he presented the possibility of temperature measurement by measuring the corresponding resistance variations of a metal conductor. Although a committee designated two years later determined the inconvenience of using the platinum wire proposed as a sensor by C.W.
Siemens for temperature measurement because of a thermal hysteresis it exhibited, the investigations led to the formulation of a new formula for the temperature dependence of conductivity. He was convinced that the new formula should be based upon a rational dynamic principle and the application of the mechanical laws of work and velocity to the vibratory motions of a body which represented, according to him, 'its free heat'. If this heat was considered to be directly proportional to the square of the velocity with which the atoms vibrate, and it was assumed that the resistance which a metallic body offered to the passage of an electric impulse from atom to atom was directly proportional to the velocity of the vibrations representing its heat, it followed that the resistance increased in the direct ratio of the square root of the free heat communicated to it. In mathematical terms, the equation should take the initial form R = α√T, where T symbolized temperature, for the first time on an absolute scale, and α a specific and experimentally determined coefficient for each metal in particular. C.W.
Siemens observed, however, that this exclusively parabolic expression would make no allowance for the possible increase of resistance due to the increasing distance between adjoining particles with increase of heat, nor for the ultimate constant resistance of the material itself at the absolute zero. He speculated that the first of these contributions should depend upon a thermal-expansion term βT and the second only upon the nature of the metal. Once these contributions were considered, the expression took the form R = α√T + βT + γ, where β and γ symbolized, as usual, specific coefficients for each metal. Table 8 shows the individual equations for each of the metals under consideration. Improvements in the range of coverage, in both directions of the scale of temperature, of the general forms of the relation between conductivity and temperature would become the last significant contributions to the subject in the nineteenth century. The first of them is due to the work of the French physicist and later Director of the Bureau International des Poids et Mesures (BIPM) Justin-Mirande René Benoît (1844-1922) (Fig. 10), who, after an early training in medicine, turned very quickly to studies in physics, and conducted in 1873 a series of experiments on the temperature dependence of electrical resistance in metals as a requirement to receive his Doctorat ès Sciences Physiques [35]. By employing the differential galvanometer for measuring resistances, Benoît was able to determine between five and eight measurements of the electric resistances of nineteen metals at an equal number of different temperatures, covering a range that varied between the melting point of ice and the boiling point of cadmium, that is, 0 and 860 °C.
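Siemens's three-term law lends itself to a short numerical sketch. The Python fragment below evaluates R(T) = α√T + βT + γ; the coefficient values are hypothetical placeholders chosen only for illustration, not Siemens's published values for any metal (those are in Table 8).

```python
import math

def siemens_resistance(T, alpha=0.04, beta=0.001, gamma=0.1):
    """C.W. Siemens's 1871 form R(T) = alpha*sqrt(T) + beta*T + gamma.

    T is the absolute temperature; alpha, beta and gamma are
    metal-specific coefficients. The defaults here are hypothetical,
    illustrative values, not historical ones.
    """
    return alpha * math.sqrt(T) + beta * T + gamma

# At T = 0 only gamma survives, matching Siemens's "ultimate constant
# resistance" of the material at the absolute zero.
print(siemens_resistance(0.0))

# The law rises monotonically with T; a quadratic fit of the Matthiessen
# type, by contrast, lets the t^2 term predominate when extrapolated.
for T in (100.0, 273.0, 500.0):
    print(T, round(siemens_resistance(T), 4))
```

The design point is that the √T and linear terms both increase without bound, so the extrapolated resistance never turns over and produces the "absurd values" that the purely quadratic fit gave above 200 °C.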
The experimental results were numerically treated in order to find the optimum coefficients for each metal by the method of least squares, on the basis of the above-mentioned form of the equation originally proposed by Lenz. Table 9 lists the analytical equations derived for all the metals included in the study. In the graphical results shown in Fig. 11 for some metals, the resistance of each at 0 °C is supposed equal to 1, and each ordinate indicates the experimental increase at the corresponding temperature [36]. Other extensions, this time at low temperatures, were later reported on the same line of investigation that in the second decade of the twentieth century would lead Kamerlingh Onnes to his important discovery, that is, within the whole framework of the researches on the liquefaction of gases and the use of the excessive cold so produced for the determination of several properties of matter. The French physicist Louis-Paul Cailletet (1832-1913), one of the two men responsible for the first liquefaction of oxygen, reported in 1885 the results of experimental measurements carried out jointly with his colleague Edmond Bouty (1846-1922) on the conductivities of mercury, silver, magnesium, tin, copper, iron and platinum in the new range between 0 and -123 °C [37]. While it is important to remark that their experiments proved again the regular decrease in the electrical resistance with the drop in temperature for most of the metals, that the coefficient of variation was appreciably the same for all, and that the results agreed acceptably with the general form of the analytical formulas previously proposed, the most relevant facts for the present history are the speculations they ventured to make on the basis of these results. In a short sentence concluding the paper the authors considered that, although their experiments did not enable them to form any precise idea of what would take place in those conditions, it was "very probably that the resistance would become extremely small and therefore
the conductivity very great at temperatures below -200 °C", and, in what could be considered one of the first, if not the first, although unwitting, predictions of the existence of the phenomenon of superconductivity, that "if the same law (that proposed according to Matthiessen's and Benoît's results) held at low temperatures, the resistance of a metal, varying like the pressure of a perfect gas under constant volume, would furnish a measure of the absolute temperature, and would cease to exist at absolute zero". This hypothesis complemented the one previously mentioned by Clausius, but on a slightly more supported basis. About a decade later, researches carried out by the British physicist James Dewar (later Sir) and John Ambrose Fleming (1849-1945) strengthened the information about the electrical behavior of metals at still lower temperatures, thanks to the increasing availability of ever more complete appliances for the production of large quantities of the liquid gases necessary as refrigerant agents. With great experience on the subject thanks to previous researches that had allowed him to liquefy hydrogen for the first time, Dewar embarked with his colleague on specific investigations on the measurement of the electrical conductivities of eight metals (nickel, tin, iron, platinum, aluminium, gold, silver and copper), seven alloys (platinoid, german silver, platinum-iridium, platinum-silver, palladium-silver, platinum-rhodium and phosphor-bronze), and carbon at the temperatures produced by the evaporation of liquid oxygen when boiling under normal or under reduced pressures, that is, of about -200 °C [38]. The initial research was complemented one year later by adding other metals (palladium, zinc, lead, magnesium, cadmium and thallium) and alloys (gold-silver, aluminium-silver, aluminium-copper, manganese-steel, nickel-steel, copper-manganese, titanium-aluminium, silverine and copper-nickel-aluminium) to the whole study [37]. Figure 12 shows some of the preliminary
results. The experimental results showed that the order of the conductivity of the metals at very low temperatures was different from the order at ordinary temperatures but, more importantly, after extrapolating their data they dared to speculate that "the electric specific resistance of all pure metals will probably vanish at the absolute zero of temperature" [39].

Concluding remarks

Several factors influenced the scientific studies on the temperature dependence of the electric conductivity of metals in the nineteenth century. The curiosity aroused by the still new science of electricity, which characterized many subjects related to this discipline, was very quickly complemented by interactions with other sciences, as for example medicine, and by the usefulness that the results provided for technological developments in emerging communication systems, the design of instruments for measuring temperature, and the establishment of a standard unit of resistance. But while these motivations were diverse, the experimental methodology used throughout the century was not so different. The introduction of the differential galvanometer by Becquerel and of the Wheatstone bridge in the first half of the century for the measuring of resistances became perhaps the only significant modifications in the methodology followed by the different scientists involved in the subject, and the fact that the various results corresponded to each other only in an approximate way was due not to the diversity of apparatuses employed but to circumstances such as the purity of the materials used, the procedure by which the corresponding wire was made and the accuracy of the measurements, among others. The general dependence equation between resistance and temperature established by Lenz in about the first third of the century was basically retained, and the extension in the number of metals included in the study, the accuracy and the coverage of temperatures would become the main modifications and the facts that
helped to consolidate the relation.

The fact that the different researchers involved mostly worked with the same metals was not a merely coincidental matter. Criteria for the selection of the materials to be studied included, besides the obvious availability, conditions of the highest possible purity, relatively good conducting power, and scientific and industrial usefulness in electrical and non-electrical areas. The weighted consideration of all these criteria provided good reasons for the joint study of their respective physical, thermal and electrical behaviors. Three metals, copper, aluminium and platinum, are good examples. Because of its high electrical conductivity and the ease with which copper could be drawn into rod and into flexible and easily soldered wire, this metal came into great demand. It was known, however, that this electrical conductivity varied with the presence of even small amounts of various impurities, and very few of its commercial grades reached the required standard of purity. Fire-refined copper was too poor an electrical conductor, but this did not matter for most of the usual non-electrical applications, such as the manufacture of domestic pots and pans, the sheathing of ships' bottoms and fireboxes for locomotives; for electrical applications, however, it mattered very much. The appearance of a patent for the electrolytic refining of copper in 1865, intended to replace the old fire-refining methods, not only allowed precious metals to be recovered efficiently and economically, but provided a means of producing the desirable very pure copper. The introduction of the dynamo in the second half of the nineteenth century gave impetus to the production of both copper and lead, the electrical industry being the greatest consumer of these two metals and, to a lesser extent, of practically all other non-ferrous metals. Aluminium is another case to mention. Neither Lenz nor Becquerel included it in the list of the metals to be studied, and the determination of its conducting power only aroused the interest of the scientific community in the second half of the century. The finding of a possibility by which it could become a challenger to copper in the field of electrical conductivity, and for a number of other uses previously traditional to the latter metal, propelled increased interest in the evaluation of its properties and a later phenomenal rise in its production. Platinum, to mention the third example, was important in the nineteenth century not only because of the property that would lead it to be used in the resistance thermometer and subsequently to bridge the gap between scales and to establish a new one over an ever greater range of temperatures. Other properties, such as its expansion with heat matching that of glass, which allowed wires to be sealed into glass tubes or vessels to carry an electric current, its high resistance to caustic substances even at high temperatures and its inertness towards most elements and compounds, made it a valuable material in chemical and physical laboratories from before the coming of stainless steel.
Kamerlingh Onnes discovered the phenomenon of superconductivity on April 8, 1911. In none of the related publications that preceded the discovery, nor in his Nobel Lecture in 1913, did he refer to the researches by Lenz, Arndtsen, Matthiessen, Benoît, the Siemens brothers, or to those by the other names mentioned throughout this article. The only references he occasionally made, to Cailletet and Dewar, were related to the apparatus or procedures that these scientists followed in their experiments on liquefaction. Although there is no historical evidence as to whether Kamerlingh Onnes had information about the researchers who preceded him in the study of electrical properties at low temperatures, and none of the experiments carried out by the Dutch physicist seems to have required the utilization of analytical relations for the calculation of electrical conductivities or resistances, it is evident that the full knowledge he had of the temperature dependence of the electrical conductivities of metals over the full range must have been based to a great extent on the reading of the respective publications.
The development of the relation between electrical conductivity and temperature for metals, like many other discoveries in electricity in the nineteenth century, favored significant improvements in other specific areas besides the later well-known discovery of superconductivity, as was the case, for example, with thermometry. The invention of resistance thermometers decisively eased the establishment of reliable standards of electrical units and allowed accurate measurements of what were by then especially high temperatures. Although not so quickly, the phenomenon of superconductivity also found commercial use. In the same way that, for example, the electromagnet invented by the British physicist William Sturgeon (1783-1850) and later developed by the American physicist Joseph Henry (1797-1878) quickly found application in the developing heavy industries; that the recognition by the Estonian-German experimental physicist Thomas Seebeck (1770-1831) of the fact that the flow of an electrical current could be regulated by heat marked not only the beginning of the study of thermoelectricity but also played an important role in the design of superconductors in the late twentieth century; and that the independent discoveries of the conversion of temperature differences directly into electricity, of the presence of heat at an electrified junction of two different metals, and of the heating or cooling of a current-carrying conductor with a temperature gradient, by the French physicist Jean Charles Athanase Peltier (1785-1845), Seebeck, and the mathematical physicist and engineer William Thomson, 1st Baron Kelvin (1824-1907), respectively, brought about the phenomena of the thermoelectric effect and found wide utilization in power generation and refrigeration, among others, so the progressive understanding of the properties of superconducting materials has allowed their practical application in areas such as power transmission, superconducting magnets in generators, energy storage devices, particle
accelerators, levitated vehicle transportation, rotating machinery, and magnetic separators. If the whole research program on the liquefaction of gases from the middle of the nineteenth century supplied the thermal frame that favored the discovery of superconductivity, the amazing amount of available experimental data on the temperature dependence of the electrical conductivities of metals, the continuously improved methodologies for the respective studies, and the analysis of the observed tendencies were responsible for the not so accidental discovery of this phenomenon.

... between 1857 and 1864. It was during the studies at Heidelberg that his interest in electrical subjects arose. From 1853 he spent nearly four years under the direction of the German chemist Robert Wilhelm Eberhard Bunsen (1811-1899), isolating for the first time the metals calcium and strontium in the pure state by means of electrolytic methods. The studies of the electrical conductivities of these metals, and later of many others, carried out in the laboratories of the German physicist Gustav Robert Kirchhoff (1824-1887), became the opening to researches in the area. Kirchhoff had made in 1849 what can very probably be considered the first absolute determination of resistance, and his skills

Table 4 - W. Siemens's conducting powers of nine metals at a temperature t compared with that of mercury at 0 °C. Credit: Ref. [26].
Defence Against Terrorism: What Kind of Co-operation between NATO and the EU?

The article has no abstract.

Although the Alliance's Member States had been confronted with various forms of terrorism during the 1990s, they were unable to find an agreement on whether NATO could be an appropriate organization to co-ordinate defence against terrorism. It appears clearly that, during the 1990s, NATO was not the main discussion forum for subjects such as the nature of terror, the drafting of an operational definition of terrorism or the root causes of terrorism. In the end, the Alliance was confronted by a total absence of any conceptual or operational tradition regarding defence against terrorism.

In view of this weakness, NATO, shortly after 9/11, attempted to impose itself as a key actor in the world-wide fight organized against transnational terrorist organizations. Yet, despite its willingness, NATO was soon faced by three difficulties. Firstly, defence against terrorism is above all a matter for the police and the judicial authorities, and therefore falls outside the Alliance's competences. Secondly, as indicated above, NATO did not, prior to 9/11, develop any kind of thinking or conceptualization in respect of defence against terrorism. Thirdly, in the case of a military response to a terrorist attack or threat, action takes place mostly at national level (consequence management and prevention) or within the framework of a coalition of the willing.

Faced with those difficulties, NATO had something to prove in order to play a role in future developments on the security front. It began with the rapid activation of Article 5 of the Washington Treaty. However, besides the political and symbolic significance of this historic decision, the NATO military response was to remain at the periphery of the major action undertaken by the coalition led by Washington.
Initially, the Alliance's involvement was limited to measures taken after the activation of Article 5: increased co-operation in the field of intelligence gathering; identification and offering of all the necessary means for defence against terrorism; ensuring free access to air space, airports and harbours; ensuring reinforced security for the Alliance's Member States; assistance to countries hit by terrorist attacks; and starting the operations Active Endeavour and Eagle Assist. Those various measures require little comment. In fact, most of them fall within the basic competences of the Alliance and are not therefore specifically linked to the fight against terrorism. Indeed, only the operations Eagle Assist and Active Endeavour could be considered as military, operational involvement on the part of the Alliance in respect of defence against terror.

Moreover, the Alliance's operational involvement was increased by its efficient involvement in Afghanistan and in Iraq, albeit much more marginally in the latter. In this respect, the management of those two operations reflects not only the potential added value of NATO, but also the organization's limitations. In Afghanistan, the Allies showed relatively strong political homogeneity, NATO thereby being used as a platform for the integration of capacities and assets. In the case of the training missions for Iraq, however, the Alliance was not able to overcome the differences between Member States and had to be satisfied with an effort that, although not symbolic, was still far below the organization's real potential for that kind of operation.
Beyond the purely operational aspects, NATO seeks to be considered as a preferred platform for consultations between allies, between allies and third countries, and also between organizations (EU and UN). For the Alliance, such consultations should contribute to the development of common points of view regarding the perception of terrorist threats and the way to counter them. However, certain NATO Member States, such as France, do not share this analysis; for them, the Alliance is trying to use the fight against terror as a way to legitimize a reform of the organization, with the reform of intelligence structures for example. In this respect, there was a great deal of reluctance to merge civil and military intelligence capacities, each country preferring to keep its own specificity. Moreover, as the exchange of operational intelligence still takes place outside the great multinational structures, NATO must content itself with working essentially with strategic intelligence. The same goes for the constant use of the 'terrorism prism' to guide modification of the Alliance's concepts and policies, particularly regarding the possibility of conducting military operations in terms of where and when they are necessary and what simplified decisional procedures are to be employed.
For the future, it is likely that the Alliance will continue to focus on the activities for which it can offer added value. In this respect, we may suppose that NATO will increase its efforts to be considered as the reference organization, together with the nation states, in respect of protecting the civilian population. This is the case with airspace surveillance, with the protection of critical infrastructures (such as nuclear power plants) and with protection during major events (the Olympic Games, major political meetings, etc.). The other great potential domain of NATO expertise concerns protection against WMD. Since 2002, in fact, NATO has been developing a number of key initiatives in this domain, which, amongst other things, translated into the establishment of an NRBC defence battalion in 2004. We can also note the Alliance's involvement during the coming years in the border security problematic, in consequence of the launching in 2003 of the Ohrid conference on border management and security in co-operation with the EU and the OSCE. Lastly, although the Alliance cannot be considered as the reference organization for defence against terrorism, NATO is still going to significant lengths to be considered as such. However, the intrinsic nature of this fight does not plead for a global and central role for NATO.
The EU global approach

The 9/11 terrorist attacks were a factor in the dramatic increase of judicial and police co-operation in Europe. That development, however, was not limited to the third pillar: during the past four years, all policies developed by the EU have been influenced in one way or another by the fight against terrorism. This is especially the case for the second pillar, as witnessed by the signature of the European Capabilities Action Plan (ECAP) soon after 9/11. The first plan and the follow-up process were deeply influenced by the risks and menaces that international terrorism could represent. From this perspective, at the EU level, a policy of willingness was developed and, within a few years, most of the objectives linked to the fight against terrorism were attained. However, a lot of work remains to be done over the next few years, mainly in the field of NBC defence, intelligence-gathering and, fundamentally, in the changing of mentalities.

In order to judge the value of the measures taken in the fight against terrorism, it is important to identify the EU's weaknesses in this matter. Thus, we must first consider the important debate in Europe on the perpetual search for equilibrium between the protection of citizens and respect for their individual liberties. This problem is very much at the forefront, in the light of the progress made in the third pillar. This is especially the case with regard to the definition of terrorism, which has made many observers fear a rise in the criminalization of social movements and trade unions. Moreover, other concerns have been raised vis-à-vis the exceptional measures legitimized in the fight against terrorism, such as the increased length of jail sentences and police custody. In any case, once the declaratory phase is over, the implementation of written engagements often amounts to a fastidious case of cherry-picking. In this respect, the EU institutional structure itself is an obstacle to the achievement of a harmonious,
coherent and complete strategy for the fight against terrorism. Therefore, before the Constitutional Treaty can be ratified, no less than three different texts are sometimes required, in order to take account of the specifics of each pillar. Another obstacle to be overcome is the fragmentation of the Council through its varying structures and different formations. At this level, transversal communication must be put into place, but without creating another new structure. Lastly, the current system of rotating presidencies does not allow for proper follow-up, especially when the number of policies to be managed is very high and when the fight against terrorism is simply considered as one amongst others.

But if, fundamentally, there is still ambivalence about the concrete application of the measures taken after 9/11, the main responsibilities rest not only with the Member States and their will to keep a maximum number of prerogatives for themselves, but also with the EU, which is incapable of offering a solid contribution in this domain. This situation was well described by the EU itself in a severe report made by the Council's Secretary-General in 2004, for whom the fight against terror was still overly limited to nation states. Moreover, the report also criticized co-operation between Member States themselves and with third countries. In this respect, one can only agree with the statement that most of the agencies created post-9/11, chiefly in the field of transport, are "not or nearly not endowed with means and objectives".
In the area of judicial harmonization, initiatives and proposals were too often downplayed in their concrete transposition. The difficulties encountered in the implementation of the European arrest warrant are significant in this respect. As for police co-operation, Europol is still inefficient in its fight against terrorism, even though contributions have been increased. From now on, in fact, all sensitive intelligence will be shared during bilateral or informal meetings. In this respect, the first challenge will be to overcome the resistance of national services to sharing intelligence in multilateral forums when necessary.

As regards defence and security policy, the 'Iraq diplomatic crisis' had negative effects on European political cohesion, but was also a dramatic driving element. Nevertheless, when it was urgent to take a decision on the opportunity of invading Iraq, the EU had four of its Member States in the UN Security Council. But, in disagreement over that opportunity, the EU was divided and the Security Council was by-passed. As it turned out, this diplomatic failure served to accelerate the finalization of Solana's Security Strategy Paper.

In 2005, the links between ESDP and terrorism were tackled during the informal meeting of Defence Ministers on 18th March 2005. It was decided to pursue efforts in order to reinforce civilian and military capacities and their interoperability, the exchange of intelligence in the military area, the possibility of EU national protection in third countries, assistance to third countries in their fight against terrorism, the creation of a protection capacity for the rapid deployment elements and co-operation with NATO in the field of civil protection.

Regarding Justice and Home Affairs (JHA), the Luxembourg Presidency was essentially focused on the application of «The Hague Programme» in order to optimize operational co-operation between the 25 within the EU legislative framework.
In this respect, Luxembourg pleaded for co-operation to be developed in a way that respects the four liberties: free movement of people, of goods, of capital and of services. Moreover, Luxembourg was in favour of deepened or total integration of JHA within the Community framework. In concrete terms, the Presidency has been working on a form of internal crisis management that could have trans-border incidence, on strengthening the functions of the Chiefs of Police Task Force and on Europol's operational competences.

Additionally, the Luxembourg Presidency worked efficiently to achieve a political agreement on two important decisions: one concerning data linked to communication traffic and the other concerning the European arrest warrant. The Luxembourg Presidency also followed the development of SIS II (Schengen Information System, second generation), in order to allow the ten new Member States to join the second phase of the Schengen Agreement.

The fight against terrorism was thus a priority for the Luxembourg Presidency even if it did not constitute a visible objective. We can say that the Luxembourg Presidency ensured a good follow-up of the December 2004 European Council Conclusions, and focused essentially on issues regarding the financing of terrorism. For Luxembourg it was important that a multidisciplinary approach be chosen in respect of defence against terrorism. This was concretized by the presentation of a strategic analysis of the threats by the SITCEN to the Council, on the basis of data from EU Member States. Lastly, the Luxembourg Presidency worked closely with the EU terrorism co-ordinator in order to facilitate co-operation between all the actors concerned.
The strongest potential contributions of the EU depend on factors that could influence the resort to terrorism. However, to make such contributions, the EU must first develop a strong and coherent foreign policy towards areas sensitive with regard to terrorism, as well as to all other forms of transnational criminality. What is right for the former Yugoslavia may not be right for the Middle East. We might ask ourselves whether the Euro-Mediterranean partnership policy, which represents one billion euros per year, is really pertinent in terms of financial redistribution. The EU must think in the long term. Although EU policy was catastrophic at the beginning of the Balkan crisis, it improved as time passed. Indeed, the same reasoning must be applied with regard to the fight against terrorism. A seismic shock is necessary, as was partially represented by the events of 11 March 2004 at the Atocha railway station in Madrid.

In fact, soon after the Madrid seismic shock, heads of state and government met in Brussels to adopt a 'new' action plan against terrorism. It is symptomatic to note that, within less than one week, the fight against terrorism, which had been downplayed for a year, ultimately became the challenge for the next decade. Once more, European streets had to be drenched in blood before this topic was put at the top of the political agenda.
The Union nevertheless recognized its weakness in this respect. It thus adopted a solidarity clause, created a co-ordinator and provided for integration of a cell for information exchange within the Council. Additionally, the heads of state and government declared that they were determined to use assets already in existence and, more fundamentally, effectively to deploy them. It was decided to draw up a precise calendar and to publish a report that clearly identified the Member States failing to implement the existing measures. Lastly, the EU political leaders expressed their will to increase co-operation between European police forces and intelligence agencies. Obviously, besides the declarations of intent, the EU urgently needs a change of mentality in those fields, although this is not something the EU can decree. Over the next few years, important improvements could emerge from the future Constitutional Treaty. Indeed, when ratified, it will bring huge changes in the field of internal and external security, such as the extension of qualified majority voting; the reinforcement of Europol and Eurojust; the appointment of an EU foreign affairs minister; a juridical value for the Charter of Fundamental Rights; the solidarity clause in the case of a terrorist attack. However, the perception of the terrorist threat is still weaker in Europe than in the United States. Immediately after 9/11 and the Madrid attacks, the perception was high, but tension decreased rapidly in each case. Nevertheless, the 'daily terrorism quota' in the press over the past four years has significantly raised the perception of threat in Europe, though that perception is clearly counter-balanced by other fears linked to the economic situation and to health considerations. 
Fundamentally, the critical matter is to be capable of going beyond the absence of communication. In fact, the political authorities in Europe are in duty bound to explain to their populations that total prevention is an illusion. Even the State of Israel, the most experienced in the world in the fight against terrorism, is still vulnerable. The core question is what levels of control we agree to accept at EU level. It is a question of striking a balance between an open society and a fortress society. The European choice has not been made yet. Ultimately, if we agree on the fact that absolute security is not possible, we must focus on the development of common and coherent tools in the management of consequences. It is in that particular area that the EU must demonstrate its solid ability to contribute, because, if it fails to, there will be no second chance. What are the links between NATO and the EU? In order to understand this question correctly, it is necessary to consider the foundation of the relationship established between NATO and the EU in the field of the so-called «Berlin plus» agreements. It was during the Santa Maria da Feira EU Summit in June 2000 that the principles of the EU/NATO relationship were precisely elaborated. The two principles of this relationship are the assurance of efficient consultation, co-operation and transparency of the military response in the case of a crisis, with the guarantee of efficient management. Beside a definition of those guiding principles, the report made by the Portuguese EU presidency submitted a proposal to the Council, aiming at the establishment of four ad hoc working-parties to deal with four specific aspects of this relationship:
• Security working group;
• Capacities goal working group;
• Working group for the establishment of the disposition allowing the EU to have access to NATO assets;
• Permanent agreement working group. 
Moreover, consultations between NATO and the EU will be based on five guiding principles. Those principles are crucial as they will orient the nature of the future permanent relationship between the two organizations:
• Respect for the autonomy of the decision-making process;
• Maintenance of consultation, co-operation and a real and complete transparency;
• Affirmation of the different natures of the two organizations;
• Equality between the two organizations;
• No discrimination between NATO and EU Member States.
The proposals for EU/NATO consultation in peace time and in times of crisis were formalized in the report by the French presidency approved by the heads of state and government during the Nice Summit. For consultation in peace time, several mechanisms were proposed: they concern the establishment of a regular contact mechanism between NAC and COPS, including at ministerial level, and involve meetings between NATO and EU military committees as well. Furthermore, in order to benefit from NATO experience on particular problems, meetings could be held between different subsidiary groups. Those meetings could take place as NATO/EU ad hoc working-parties or as meetings of expert committees. For consultation in times of crisis, an increased frequency of contacts and meetings in the emerging phase of the crisis was proposed. In addition, if the EU considers an option that could require the use of identified NATO assets and capabilities for a possible intervention, contacts would be established between NAC and COPS. Where the crisis is not avoidable and the EU decides to intervene, two scenarios could be considered: the EU asks NATO for assets and capabilities, or the EU takes independent action. 
Beyond this institutional framework, the question of the operational autonomy of the two organizations has not been resolved yet. In fact, soon after the EU declared its willingness to increase its role regarding security and defence, the question of its relationship with NATO became crucial. In this respect, NATO was, and still is, playing the role of catalyst for transatlantic tensions linked to burden-sharing between Europe and the United States. Moreover, NATO is the seat of numerous discussions, often polemical, on EU military autonomy, as well as on the way to manage current security and defence challenges. Defence against terrorism is clearly not an exception to this rule; in fact, it is within NATO forums that one can most often hear American warnings about the duplication, discrimination and division that the EU could introduce by developing ESDP. Once the Soviet enemy had collapsed, the political leaders of NATO Member States raised the question of the relevance of the Alliance. For the United States, and also for the NATO Secretary-General, the Alliance still remains the only international organization capable of managing the post-cold-war situation and preserving the existence of transatlantic links. On the other hand, the emergence of the EU as an international player could change the nature of these links. The accession of George W. Bush to the presidency, together with the emergence of new transatlantic tensions during the early part of his first term, influenced the role played by the Alliance and its co-operation with the EU in the fight against terrorism. Thus, although George W. 
Bush's two presidential teams have been composed of experienced people, they have also been characterized by a weak knowledge of the EU. Nevertheless, the two administrations developed positions that do not differ from those of the previous administration. Essentially, they have felt that the development of ESDP could be damaging to transatlantic relations if it occurs in competition with the Alliance. If we consider the mood before 9/11, we can see that the subjects of tension were numerous: reactivation of the anti-missile defence project; non-recognition of the International Court of Justice; non-ratification of the convention on landmines and of the convention on biological weapons; and rejection of the Comprehensive Test Ban Treaty (CTBT). At the economic level, the steel question added to the banana, beef and other «bioethics» crises. Lastly, we cannot avoid the huge transatlantic opposition regarding the non-ratification of the Kyoto protocol and the global policies on energy consumption. 
Although relationships between the two sides of the Atlantic were not at their best when the 9/11 attacks occurred, it was within the Alliance that NATO and EU Member States expressed their solidarity with the United States by rapidly activating Article 5 of the Washington Treaty. However, as we have seen above, NATO was quickly marginalized in the global fight led by the United States. In this respect, the big fear of political and military leaders was that there would be no benefit from the flexibility and rapidity of reaction needed to undertake operations far beyond the borders of the Alliance. However, the marginalization of NATO was not to be imputed only to the United States' requirement of efficiency. In fact, certain European countries were reluctant to see NATO becoming the champion of co-operation in the fight against terrorism. Those countries feared that co-ordination between NATO and the EU would contribute radically to promoting and influencing Washington's own views regarding the best strategy to adopt in defence against terrorism. Surprisingly enough, although terrorism seems to be the major preoccupation in respect of security, it is on other subjects that NATO-EU co-operation has been reinforced during the last four years. Thus, permanent co-operation between the two organizations was concretized in the field in March 2003, when the EU replaced NATO in FYROM within the framework of the CONCORDIA operation. For the first time in history, the EU intervened under its own political responsibility by using NATO assets and capabilities: the operation commander in chief was the Deputy SACEUR, and the general headquarters was located at SHAPE (Belgium). However, although operational co-operation between the two organizations in the field of crisis management has been working well for two years now, the story is not the same for defence against terrorism. 
Fundamentally, the problem of the lack of co-operation between NATO and the EU in defence against terrorism is linked to the very nature of the two organizations. In fact, where in the EU most of the fight takes place within the context of the co-operation between the police and the judicial authorities, there is no equivalent forum within NATO. In this respect, possible co-operation between the two organizations could be found only within the framework of security and defence policy, which, as we have seen, is not deeply involved in the fight against terrorism. We are facing a dual development here. NATO, soon after the end of the cold war, evolved towards crisis management operations and then tried to impose itself in respect of defence against terrorism. In those domains, the Alliance is faced with a certain deficit related to its lack of experience in managing operations that are not strictly military, whereas the EU started from a position of much greater experience in the field of non-military crisis management and moved towards the establishment of a military structure. Those two developments are moving in opposite directions, and it seems that the EU is now keeping step with NATO regarding the range of potential answers that can be used to counter transnational terrorist threats. This particular situation clearly reduces the opportunities of having permanent structured co-operation established between the two organizations. 
Soon after 9/11, it already became clear that the opportunities for deepened co-operation between NATO and the EU would not be exploited. In this respect, the NATO-EU meeting following 9/11 was not used as an opportunity to announce a common action plan for defence against terrorism. At that time, the main topics of discussion were the peace process in FYROM and the potential EU involvement in this area, as well as the elaboration of permanent NATO-EU co-operation agreements. Terrorism was, in other words, the «surprise item» on a rather busy relational agenda. Because of that, the joint NATO-EU meetings during the following few months would only be occasions to announce measures taken at both organizations and to restate the necessity of establishing real co-operation in this field, but without proposing any concrete actions. In fact, the core co-operation is to be found within the framework of their common interests in stability in the whole Balkan area, towards which both organizations are developing a common approach. By mid-2002 and within the context of the forthcoming Prague Summit, certain potential avenues for co-operation had been indicated, including the field of the proliferation of weapons of mass destruction. Besides, the rest of the agenda would be filled by the strategic partnership established between NATO and the EU and by the continuing operations in the Balkans. After this period, co-operation between NATO and the EU in defence against terror would be given a low profile. 2003 saw the first ever NATO-EU joint crisis management exercise. This simulation exercise, named CME/CMX03, took place in the Netherlands from 19 to 25 November. Its aim was to test the endurance of the so-called «Berlin plus» agreements, but it demonstrated that there were not enough lines of communication between NATO and the EU. 
Strangely enough, although co-operation between the two organizations seemed to work quite well in respect of crisis management, it clearly did not with regard to the fight against terror. This situation would be highlighted again at the end of the year 2003 by the political authorities of NATO and the EU, which were able to reach agreement only on condemning the escalation of terrorist attacks and pleading for better co-operation concerning the defence against them. In 2004, that co-operation took the modest form of a seminar on terrorism co-chaired by the two organizations. Furthermore, evaluation procedures were introduced regarding improving opportunities within the framework of joint efforts in respect of the proliferation of weapons of mass destruction. During 2004, the Secretary-General pleaded more than once for reinforcement of co-operation between his organization and the EU. A promising breakthrough came in a joint declaration during the NATO Istanbul Summit, whereby the political leaders of the Alliance undertook to pursue their consultations and to exchange information on terrorism, as well as on the proliferation of weapons of mass destruction, in particular regarding the consequence-management problem. 
Those discussions took place within the established framework of co-operation between NATO and the EU:
• At Foreign Affairs Minister level: twice per year;
• At ambassador level (NAC and COPS): at least three times per semester;
• At Military Committee level: twice per semester;
• At Committee level: regularly;
• At executive level: daily.
At all levels, thus, NATO and the EU exchange information on their respective activities in the fight against terror, especially with regard to the protection of civilians in the event of biological, chemical, nuclear or radiological terrorist attacks. Moreover, both organizations have promoted greater transparency by setting up the exchange of an inventory of their respective capacities. For the moment, the EU is exploring new means to intensify its co-operation with NATO regarding defence against terror. Apart from those facts, it is clear that there is no adequate dialogue on terrorism: on the one hand, NATO is trying to move towards a holistic vision of security and, on the other, the EU has not yet defined its own finality within the framework of ESDP. 
Furthermore, the specificities of NATO and the EU will not permit total co-operation in this field, at least not for the next few years. In this respect, the United States will not want to engage in a dialogue within NATO's structures about the defence against terrorism as long as the Europeans do not appear credible in their eyes. For instance, the NATO and United States military and political authorities have been urging the EU Member States for years now to improve their airlift capabilities. Regarding transport aircraft, in fact, the greatest lack of European capability lies in heavy carriers. At present, only the United Kingdom possesses C-17 aircraft, and then just four, even though, as we saw, the Helsinki requirement for such aircraft is 20. Concerning the medium carriers (C-130 and C-160), the European fleet, with nearly 300 aircraft, is meeting the Helsinki requirement, even if not all aircraft are fully operational. However, we must take into account that European capabilities do not have unlimited capacity, i.e. 15 tonnes for a C-130 and 17 tonnes for a C-160, as against 78 tonnes for a C-17. Moreover, the range of those aircraft is still weak. This situation requires a fleet of refuelling aircraft and the elimination of carriers that are not upgraded for airborne refuelling. It will also be possible to plan refuelling stops, though this will be at the expense of intervention speed, which is one of the most important criteria for the evaluation of strategic lift capacities. Lastly, we cannot hide the fact that the C-130 and C-160 are old aircraft and that the fleet is diminishing year by year. 
Those elements show the importance of establishing a European programme for the development and acquisition of an aircraft capable of replacing our current capacities; the A-400M goes some way to fitting the bill, but will not be operational before 2008-2010, a situation that requires the adoption of intermediary solutions. With regard to refuelling, the need to increase our current capabilities both quantitatively and qualitatively is more obvious. In this respect, the MRTT choice made by certain European nations is wise. At the least, the conclusion of technical agreements far from the cameras (an example is ATARES: Air Transport and Air-Refuelling and other Exchange Services), in combination with interoperable and multi-role aircraft, seems to prefigure the future of strategic airlift for the next few years, even if it does not solve the problem of the lack of heavy carriers. Unfortunately, this example could hold good for precision-guided munitions, sealift, UCAVs (Unmanned Combat Aerial Vehicles) or intelligence-gathering satellites as well. 
In fact, an augmentation of military spending in Europe is considered to be a prerequisite by US political leaders, an opinion that is also shared by most defence analysts in the United States and on the other side of the Atlantic as well. Nevertheless, tendencies in Europe do not seem to be in favour of expanding national military budgets. According to officials in the Pentagon, the Senate and Congress, and in the eyes of a large part of the US elite, European defence ambitions clearly lack credibility. The main reason for this situation is the so-called «technological gap» between the two sides of the Atlantic. In this respect, now that US defence spending, and more importantly research and development spending, has been raised to a historic level, it is obvious that Europe has not yet closed the gap. All those military tools are now essential to wage the war against terror, particularly in the light of US strategies. Thus, if EU Member States are unable to enter into this technological bargain, they will be left stranded, as the US Anaconda military operations in Afghanistan have shown. Fundamentally, however, US political leaders remain ambiguous on those issues. Therefore, now that the EU Member States are trying to equip themselves with strategic airlift capabilities and to elaborate autonomous intelligence-gathering capabilities, leaders in Washington are developing attitudes ranging from scepticism to looking simply to torpedo the project. 
Conclusion
By way of conclusion, the question can be asked as to what kind of situation we could expect in the near future. For the next few years, we can imagine a continuance of the present status quo, the two organizations developing their own competences in their preferred areas (co-operation on police and judicial matters for the EU, and consequence management or prevention for NATO), while punctually maintaining a minimum co-ordination on certain particular issues. In this respect, consultation will be focused mostly on the issue of the proliferation of weapons of mass destruction and on the question of managing the consequences of terrorist attacks. In the longer term, we can expect greater dependence on co-operation between the two organizations on the basis of these 'common points'. To reach this second stage, political leaders will have to implement a strategic agreement or a global joint action plan for defence against terrorism throughout the entire Euro-Atlantic area. Such a global plan will have to set the guidelines for the joint action to be taken during the next ten years. However, this optimistic scenario remains an uncertain one, as the political will to use NATO as a real platform for transatlantic co-operation remains weak on both sides of the Atlantic. In the years to come, both organizations will at least have to make the effort to pool their competences, in order to maximize the scant resources allotted to the defence against terrorism. 
Besides this question of the NATO-EU relationship, one of the greatest difficulties in the years to come regarding our comprehension of the phenomenon of transnational terrorism is still our incapacity to make a precise diagnosis and to determine the ill afflicting us. In most cases, the terrorist organizations do not sign their actions, nor do they have clear claims. In their communiqués, they simply express the fact that a given terrorist act has happened and that it is a good thing for them, but we cannot speak of claims in the classical sense. The main Western/European weakness, and, in a broader perspective, the weakness of all people who are targeted by international terrorism, is the growing ignorance concerning the very nature of the threat, mostly because there is no chronological logic in this terrorism, and no claims. When political authorities have to face a secessionist movement, for example, there are identifiable claims (territorial independence, for instance) and thus there is a possibility to negotiate. It is possible to calculate the costs and the benefits of confrontation or negotiation. This is not the case in respect of the international terrorism of recent years. At the present time, unfortunately, there is no European forum for conducting independent scientific research in those fields. Nor is there any industrial integrative mechanism for developing work within the framework of asymmetric warfare. This is also the case at the academic level. 
It probably falls within the competence of regional security organizations such as NATO and the EU to provide the impetus to go beyond those barriers. In fact, if scientists have found order when confronted with chaotic structures, order may be discovered in international terrorism. To discover it, NATO and the EU will have to work together to finance and conduct fundamental research into this phenomenon. The two organizations could, for example, work together to ensure the interoperability of the anti-terrorist equipment that will be developed over the next ten years. Finally, we must put an end to the «intellectual taboo», apparent mostly in Europe, against examining the unexpected or the impossible. Too often, in fact, the planners or the media tend to confound what is new with what has been forgotten. Bombs have been planted in trains since the railway was invented, and the hijacking of planes is a practice that is already a few decades old now. Those are not new phenomena. If it is important to invest within NATO and the EU to prevent such appalling acts taking place, it is equally important to allot resources to prevent other events ever occurring. In this respect, the exercise conducted by the EU and NATO regarding reaction to terrorist chemical, biological, radiological and nuclear attacks could be considered as a positive step. Nevertheless, many problems still remain. 
Firstly, there is still intellectual opposition, in the sense that a discontinuity can be observed: the phenomenon of terrorism does not fall within the norm; it is outside our traditional way of thinking. The particular nature of this leads to dramatic analytical difficulties. There is what might be called 'strategic jamming'. At the managerial level, work is conducted in terms of normality and within a hierarchical and compartmentalized perspective, at the very time when what is demanded is that the distortions and the instances of chaos be studied. Scientists examining crises explain that there is an instinctive reflex on the part of political authorities, as well as on the part of security authorities, to try to avoid what is not fully under control. It is a major governance issue: we cannot frighten people by explaining to them that the potential threats do not fall within the scope of contingency planning. In the post-9/11 United States, reaction formations and schemes were established and developed throughout the country. The government tried to implement programmes such as the Family Disaster Plan or the Disaster Plan for Kids or even the 'Disaster Plan Kit'. Details of those various plans are available on the Internet and explain the way to put together a survival kit in case of a terrorist biological, chemical or radiological attack or even a natural catastrophe. They have also developed the Federal Emergency Management Agency (FEMA) in co-operation with the American Red Cross. However, recent events in New Orleans have shown most graphically that, four years after 9/11, the authorities (whether local or federal) still face huge difficulties in managing a crisis that falls outside 'ordinary' contingency planning. In Europe, aside from certain exceptions, such contingency planning is non-existent. It would be valuable if NATO and the EU, together or independently, were able to establish a co-ordination mechanism for national warning campaigns targeting populations in 
potentially hazardous areas. However, it is not simple to justify to the authorities the necessity to simulate a large-scale terrorist attack, even if such exercises permit the development of a full range of new attitudes and reflexes. Furthermore, there is a dramatic paucity of information-gathering after a terrorist attack. Little work is done with people who have been caught up in this kind of situation to try to understand the difficulties they were faced with and how they managed them. Although it is not done regularly, interrogating people who have witnessed or were victims of a terrorist attack is instructive in the attempt to comprehend how the mind reacts, in order that the right approach can be developed for the future. It is thus essential to work not only within, but also outside, the normal framework of prediction, because the terrorist will always work outside of it. Lastly, competition among the services or among regional organizations has also had a negative impact in this matter. A solution to deal with that could be the creation of a neutral meeting point. In this respect, security professionals could ask for the creation of «trust zones» outside identified services or organizations. 
Since 2001, a number of large-scale terrorist attacks have been perpetrated throughout the world. If one cannot speak of waging a «war» in the classic meaning of the word, one can at least speak of a struggle against a determined opponent. Such a struggle has global implications: if the threat is worldwide, the answer cannot be limited by national or regional borders, but must be transnational. Moreover, it requires co-ordination of economic, social, political and security institutions. In fact, although military action should be used as a means of prevention or reaction, it is not sufficient to counter terrorism on its own. The answer should also, and often principally, be based on judicial and police action. In conclusion, if international co-operation is to constitute an essential tool in defence against terrorism, it will also be necessary to avoid creating inadequate and inefficient new international structures over and above what is already a complex international security system. The efficiency of this defence will in great part depend on the balance established between national responsibilities and international co-operation.
Validated Spectrofluorimetric Method for the Determination of Cefoxitin Sodium in Its Pure Form and Powder for Injection via Derivatization with 4-Chloro-7-nitrobenzo-2-oxa-1,3-diazole (NBD-Cl)
Objectives: An accurate and precise spectrofluorimetric method was developed and validated for the determination of the β-lactam antibiotic cefoxitin sodium in its pure form and in powder for injection. Methods: The method is based on the nucleophilic substitution reaction of the target drug with 4-chloro-7-nitrobenzo-2-oxa-1,3-diazole (NBD-Cl) to form a highly fluorescent fluorophore measured at 540 nm after excitation at 460 nm. Results: Under optimum conditions, the proposed method obeys Beer's law in the range 0.5-7 µg mL-1, and the reaction mechanism is presented. Conclusion: The method was validated according to ICH guidelines for accuracy and precision and was successfully applied for the determination of the drug in its pure form and in powder for injection. The obtained results were statistically compared with those of the reported method and found to be in good agreement.
INTRODUCTION
Cefoxitin sodium is a semisynthetic cephamycin antibiotic classified as a second-generation cephalosporin, chemically named sodium 3-carbamoyloxymethyl-7-methoxy-7-[2-(2-thienyl)acetamido]-3-cephem-4-carboxylate 1. The most novel chemical feature of cefoxitin sodium is the possession of an alpha-oriented methoxyl group in place of the normal H atom at C-7. 
Figure (1). This increased steric bulk conveys very significant stability against β-lactamases 2. It is produced by Streptomyces lactamdurans and used for the treatment of anaerobic and mixed aerobic-anaerobic infections, such as pelvic inflammatory disease and lung abscess 3,4. A literature survey reveals that HPLC methods were developed for the determination of cefoxitin sodium in pharmaceutical formulations 5 and in biological fluids 6-9, along with a TLC method 10, LC-MS/MS 11 and a flow-injection chemiluminescent method 12. Colorimetric methods were used for the determination of cefoxitin sodium in pharmaceutical formulations and in biological fluids 13-15, and first- and second-derivative UV spectroscopy 16,17 and a stability-indicating spectrofluorimetric method 18 were also described for its analysis. Khalid et al. recently developed different spectrophotometric methods for the determination of cefoxitin sodium in the presence of its alkali-induced degradation product 19,20.
Chemicals and reagents
Cefoxitin sodium 98.8% was kindly supplied by Pharco B International Co., Cairo, Egypt (Lot no. 12052036). Primafoxin® 1 g vials, labeled to contain 1 g of cefoxitin sodium per vial, Batch No. 109, the product of Pharco B International Co., Egypt, were purchased from local pharmacies. Water used throughout the procedures was freshly double distilled.
Standard solutions
A stock solution (1 mg mL-1) was prepared by dissolving 100 mg of cefoxitin sodium in 80 mL of water, and the volume was then completed to 100 mL with water. The solution was found to be stable for at least two weeks when stored at 5 °C in the dark 16. A working solution (0.1 mg mL-1) was obtained by dilution of the stock solution with water. 
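The preparation of the working solution from the stock is a straightforward C1V1 = C2V2 dilution. As a minimal sketch (the function name and the 10 mL final volume are our own illustration, not part of the paper's procedure):

```python
def dilution_volume(c_stock, c_target, v_final):
    """C1*V1 = C2*V2: volume of stock required to prepare
    v_final of a solution at concentration c_target."""
    return c_target * v_final / c_stock

# Working solution: 0.1 mg/mL from the 1 mg/mL stock, assuming a 10 mL final volume
v_stock = dilution_volume(c_stock=1.0, c_target=0.1, v_final=10.0)
print(v_stock)  # 1.0 (mL of stock, made up to 10 mL with water)
```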
Linearity and construction of calibration curves
Aliquots from the stock standard solution of cefoxitin sodium were accurately measured and transferred into a set of test tubes to prepare different concentrations covering the linearity range (0.5-7 µg mL-1); then 1 mL of 0.1% NBD-Cl was added, followed by 1.5 mL of 0.2 M NaHCO3. The reaction mixtures were allowed to proceed in a thermostatically controlled water bath at 60 °C for 30 minutes and then cooled to room temperature. After cooling, the reaction mixture was acidified by adding 1 mL of 1 M HCl and completed to volume with water. The relative fluorescence intensity was measured at λem = 540 nm after excitation at λex = 460 nm.
Application to pharmaceutical preparation
An accurately weighed quantity of well-mixed powder from three vials of Primafoxin® 1 g, equivalent to 100 mg of cefoxitin sodium, was transferred into a 100-mL volumetric flask. The powder was dissolved by shaking with 50 mL of water. The volume was then adjusted with water to obtain a stock solution labeled to contain 1 mg mL-1 cefoxitin sodium, which was further diluted to 0.1 mg mL-1. Cefoxitin sodium was then analyzed using the corresponding regression equation of the proposed method.
RESULTS AND DISCUSSION
Cefoxitin sodium does not have native fluorescence, so its derivatization with a fluorogenic reagent was necessary for spectrofluorimetric determination.
Effect of reagent volume
The influence of NBD-Cl concentration was studied using different volumes of 0.1% (w/v) NBD-Cl solution ranging from 0.25 to 2 mL; it was found that 1 mL of 0.1% (w/v) NBD-Cl produced the highest FI, beyond which the FI decreased. 
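The calibration described above under "Linearity and construction of calibration curves" reduces to an ordinary least-squares fit of FI against concentration, from which unknown samples are back-calculated. A sketch with synthetic data (the FI values below are invented for illustration, not the paper's measurements):

```python
import numpy as np

def calibration_curve(conc, fi):
    """Fit FI = slope*conc + intercept by least squares;
    return slope, intercept and correlation coefficient r."""
    slope, intercept = np.polyfit(conc, fi, 1)
    r = np.corrcoef(conc, fi)[0, 1]
    return slope, intercept, r

# Hypothetical calibration series over the 0.5-7 ug/mL linearity range
conc = np.array([0.5, 1, 2, 3, 4, 5, 6, 7])             # ug/mL
fi = np.array([52, 101, 203, 298, 404, 499, 601, 702])  # relative FI

slope, intercept, r = calibration_curve(conc, fi)

# Back-calculate an unknown sample concentration from its measured FI
unknown_fi = 350.0
unknown_conc = (unknown_fi - intercept) / slope
```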
Effect of NaHCO3 concentration
The reaction of cefoxitin sodium with NBD-Cl should be carried out in an alkaline medium (pH ~8.3) in order to generate the nucleophile from cefoxitin sodium. The influence of NaHCO3 was studied using different volumes (0.25-2.5 mL) of 0.2 M NaHCO3 solution; 1.5 mL of 0.2 M NaHCO3 produced the highest FI, beyond which the FI decreased.

Effect of temperature
The reaction was carried out at different temperatures (25-70 °C). The reaction was found to be temperature dependent: the FI increased with temperature, and the maximum FI was obtained at 60 °C (Figure 6). This result coincides with that previously reported by H. W. Darwish et al. [25].

Effect of reaction time
To determine the time required for completion of the reaction, the reaction was carried out at different time intervals (5-40 min). The results indicated that the optimum time was 30 min (Figure 7).

Effect of HCl concentration
Addition of HCl [21] to the reaction mixture before measurement of the FI was necessary to markedly decrease the background fluorescence due to the hydrolysis product of NBD-Cl, the corresponding hydroxy derivative 7-hydroxy-4-nitrobenzoxadiazole (NBD-OH) [32]. The fluorescence of NBD-OH was found to be quenched in strongly acidic medium (pH ≤ 1), whereas the reaction product was not affected. The reaction was carried out using different volumes (0.25-2 mL) of 1 M HCl; the optimum amount required for acidification was found to be 1 mL of 1 M HCl (Figure 8).

Effect of diluting solvent
To select the most appropriate solvent for diluting the reaction solution, different solvents were studied: water, methanol, ethanol, propanol, acetone, and acetonitrile. The highest FI was obtained with water or methanol; water was used as the diluting solvent because it is environmentally friendly (Figure 9).
Stability of the fluorescent fluorophore
The effect of time on the stability of the fluorescent cefoxitin-NBD fluorophore was studied by measuring the FI at different time intervals. The FI values remained constant for at least 24 hours at room temperature. The optimum variables affecting the reaction of cefoxitin sodium with NBD-Cl are summarized in Table 1.

Stoichiometry and mechanism of the reaction
The stoichiometry of the reaction between cefoxitin sodium and NBD-Cl was investigated by the limiting logarithmic method [34]. Two sets of solutions were prepared: one containing variable concentrations of NBD-Cl (6×10-3 to 3×10-2 M) with a constant drug concentration (3×10-3 M), and a second containing variable concentrations of the drug (1×10-4 to 1×10-2 M) with a constant NBD-Cl concentration (1×10-3 M) (Figure 10). Plotting log FI against the log concentrations of NBD-Cl and cefoxitin sodium gave two straight lines with slopes of 0.8245 and 0.9368, respectively; their ratio (0.8245/0.9368 ≈ 0.88) indicates a 1:1 reaction ratio, i.e., one molecule of the drug reacts with one molecule of NBD-Cl.

Method validation
The proposed method was validated according to the International Conference on Harmonisation (ICH) guidelines in terms of linearity, range, LOD, LOQ, accuracy, and precision.

Linearity and range
The method was linear over the studied range of 0.5-7 µg mL-1. Table 2 lists the regression parameters of the calibration curve and the correlation coefficient of the analyzed drug.

Limits of detection and quantitation
The LOD was found to be 0.048 µg mL-1 and the LOQ 0.160 µg mL-1, as shown in Table 2.
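The limiting logarithmic analysis described above can be sketched numerically: in each limiting regime, log FI is fitted against log concentration, and a slope ratio near 1 supports a 1:1 stoichiometry. The data below are synthetic, generated from slopes close to those reported, not the paper's measurements.

```python
import numpy as np

# Synthetic limiting logarithmic data (illustrative only): log FI is linear
# in log concentration within each limiting regime.
log_c_nbd = np.log10(np.array([6e-3, 1e-2, 2e-2, 3e-2]))
log_fi_nbd = 0.82 * log_c_nbd + 3.0     # varying NBD-Cl, drug held constant

log_c_drug = np.log10(np.array([1e-4, 1e-3, 5e-3, 1e-2]))
log_fi_drug = 0.94 * log_c_drug + 3.1   # varying drug, NBD-Cl held constant

slope_nbd, _ = np.polyfit(log_c_nbd, log_fi_nbd, 1)
slope_drug, _ = np.polyfit(log_c_drug, log_fi_drug, 1)

# A slope ratio near 1 supports a 1:1 drug:NBD-Cl stoichiometry
ratio = slope_nbd / slope_drug
```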
Accuracy and precision
The accuracy of the proposed procedure (%R) was found to be 99.84%. Intra-day precision (repeatability), expressed as %RSD, was 1.551, while inter-day precision (intermediate precision) was 1.036 (Table 2). The good %R confirms excellent accuracy.

Recovery study by the standard addition technique
The validity of the proposed method was confirmed by the standard addition technique, with a mean recovery (± SD) of added drug of 100.78 ± 0.610%. Results are presented in Table 3.

CONCLUSION
Because cefoxitin sodium has no native fluorescence, this work introduced an accurate spectrofluorimetric method for its determination in pure form and in powder for injection, based on a nucleophilic substitution reaction with 4-chloro-7-nitrobenzo-2-oxa-1,3-diazole (NBD-Cl) to form a highly fluorescent yellow fluorophore. The proposed method is suitable for the routine analysis of cefoxitin sodium in quality control and clinical laboratories.

Figure 3. Excitation and emission spectra of the reaction product of cefoxitin sodium (7 µg mL-1) with 0.1% NBD-Cl.

Optimization of experimental conditions: different experimental parameters affecting the fluorescence intensity were studied and optimized.

Figure 4. Effect of volume of 0.1% NBD-Cl on the fluorescence intensity of the cefoxitin-NBD fluorophore at λem 540 nm.

Figure 5. Effect of volume of 0.2 M NaHCO3 on the fluorescence intensity of the cefoxitin-NBD fluorophore at λem 540 nm.

Figure 6. Effect of heating temperature (°C) on the fluorescence intensity of the cefoxitin-NBD fluorophore at λem 540 nm.

Figure 7. Effect of heating time at 60 °C on the fluorescence intensity of the cefoxitin-NBD fluorophore at λem 540 nm.

Figure 8. Effect of volume of 1 M HCl on the fluorescence intensity of the cefoxitin-NBD fluorophore at λem 540 nm.
Figure 9. Effect of diluting solvent on the fluorescence intensity of the cefoxitin-NBD fluorophore at λem 540 nm.

Figure 10. Stoichiometry of the derivatization reaction between cefoxitin sodium and NBD-Cl using the limiting logarithmic method.

Figure 11. The proposed reaction pathway between cefoxitin sodium and NBD-Cl.

Table 2. Linearity studies and regression equation of the proposed spectrofluorimetric method.
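For reference, the ICH-style validation quantities reported in this work (%R, %RSD, and LOD = 3.3σ/slope, LOQ = 10σ/slope) follow standard formulas. The sketch below computes them from hypothetical replicate recoveries and illustrative σ and slope values, not from the paper's raw data.

```python
import statistics

# Hypothetical replicate recoveries (%R), chosen for illustration
recoveries = [99.2, 100.5, 99.9, 100.1, 99.5]
mean_r = statistics.mean(recoveries)
rsd = 100 * statistics.stdev(recoveries) / mean_r   # precision as % RSD

# ICH-based limits from the standard deviation of the response (sigma)
# and the calibration slope; both values are illustrative.
sigma, slope = 1.6, 110.0
lod = 3.3 * sigma / slope    # limit of detection
loq = 10.0 * sigma / slope   # limit of quantitation
```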
CoSimulating Communication Networks and Electrical System for Performance Evaluation in Smart Grid In the smart grid research domain, simulation studies are the first choice, since the analytic complexity is too high and constructing a testbed is very expensive. However, since the communication infrastructure and the power grid are tightly coupled in the smart grid, a well-defined combination of simulation tools for the two systems is required for such studies. Therefore, in this paper, we propose a cosimulation framework called OOCoSim, which consists of OPNET (a network simulation tool) and OpenDSS (a power system simulation tool). With OOCoSim, an organic and dynamic cosimulation can be realized, since both simulators operate on the same computing platform and provide external interfaces through which the simulation can be managed dynamically. In this paper, we provide the OOCoSim design principles, including a synchronization scheme, and a detailed description of its implementation. To demonstrate the effectiveness of OOCoSim, we define a smart grid application model and conduct a simulation study of the impact of the defined application and the underlying network system on the distribution system. The simulation results show that OOCoSim can successfully simulate an integrated scenario of the power and network systems and accurately capture the effects of networked control in the smart grid.
Introduction The smart grid is a new kind of electrical grid in which the power system and the IT system are tightly coupled. In the smart grid, various elements of the power system are managed by remote controls performed on the basis of situational awareness [1]. Moreover, the deployment of eco-friendly elements such as renewable energy generators (solar and wind generators) and PHEVs (Plug-in Hybrid Electric Vehicles) requires a great deal of information exchange and processing. This information exchange and processing contributes not only to providing numerous electricity services, but also to making the power grid more stable, efficient, and eco-friendly. Therefore, it is exceedingly important to investigate how advanced computing and networking technologies should be applied to the power system in the smart grid.

There are many studies aimed at enhancing the performance of the smart grid, especially its networking system. Bae et al. [2] addressed a data communication scheme for Advanced Metering Infrastructure (AMI) considering the reduction of network delay and the privacy of smart grid users. Son et al. [3] proposed a novel voucher scheme for secure trading between smart grid customers. Furthermore, Kim et al. [4] designed a wireless communication framework that considers electromagnetic signal interference and the delay constraints of the smart grid system. Since the smart grid represents the convergence of the network system and the power distribution system, which were previously distinct, studies must be elaborately designed and clearly evaluated.
The majority of smart grid studies rely on sophisticated simulation tools [5], since it is very difficult to find a well-defined theoretical model that can represent the complexity and dynamics of the smart grid. Representing the smart grid with a mathematical model requires unrealistic assumptions and/or omits subtle details of the system. The degree of analytic complexity becomes especially intractable in the distribution and customer domains [6], since many applications, such as demand response (DR), energy management, and smart metering, operate in these domains. These applications generally collect diverse data, perform information processing, and control power system elements through the network system. Therefore, the complexity of the analytic approach rapidly increases with the number of applications and information exchanges. Another approach to smart grid research is testbed-based verification, in which numerous applications can be implemented and studied elaborately. However, constructing a testbed is too expensive, since new kinds of equipment and systems must be employed. Moreover, since the smart grid is a very large system in which many elements are tightly coupled, it is very difficult to construct a well-specified and well-designed smart grid testbed.
Therefore, a simulation study is the first choice for investigating both the structure and the operation of the smart grid. However, since the network system and the power system are tightly coupled in the smart grid, an organic combination of simulation tools for both systems is required to carry out an elaborate simulation study. In this paper, the word organic describes a cosimulation in which the two necessary simulators fit together harmoniously as fundamental parts of a whole and interact with each other systematically, so that any complex application can be precisely modeled. However, there are several challenges to achieving this goal. One challenge is that the smart grid is an application-centric system [7]. Various applications will be introduced in the smart grid, and each application will have different operations. Therefore, we are strongly motivated to build a cosimulation technique through which diverse applications can be easily modeled and simulated. Another challenge is that the smart grid is a mixture of large and complex systems. We therefore cannot build a precise cosimulation tool by extending an individual simulation tool for a single system. For example, if we employ a single network simulator with an extension for the power system, the large scale of the power system might be simplified, introducing unrealistic assumptions. Therefore, integrating independent simulators of each system is a favorable choice for simulating the smart grid.
Figure 1 shows the conceptual representation of the proposed cosimulation. The application model defines the control policies, algorithms, and functions, which receive the current states of the power system and event signals and then decide the next states. The transmission of the event signals and control messages is simulated through the network simulation. As the network simulation is invoked, network-side information is retrieved by the application model in the context of the cosimulation. Similarly, control policies are applied to the power system simulation, and the new state information is reported to the application model. Moreover, meaningful results of the smart grid application are generated by the power system simulation, which can verify the impact of the application.

Based on this conceptual representation, we propose a novel cosimulation framework for smart grid applications, called OOCoSim, which stands for OPNET [8] and OpenDSS [9] cosimulation. Since both OPNET and OpenDSS are highly scalable, various applications of the future smart grid can be implemented and simulated in OOCoSim. Moreover, since both OPNET and OpenDSS operate simultaneously on the same platform (Windows), a dynamic and organic simulation can be achieved. The future smart grid will introduce complex applications requiring dedicated interaction, such as deciding the amount of adjustment of a power system state based on network performance. Such complex applications can be modeled and simulated in OOCoSim, since OOCoSim utilizes and manages live simulations of OPNET and OpenDSS. Managing these live simulations, however, requires a scheduling scheme that synchronizes the events of the power system and the network system. To solve this synchronization issue, we devised a global time management scheme with a rollback mechanism that reflects events from the network system to the global event
scheduler. To verify the proposed OOCoSim and provide a guideline for modeling smart grid applications, we designed a simulation study of a demand response application. The application model of the demand response is introduced, and the results generated by OOCoSim are discussed in a later section. Through the simulation study, the impact of the underlying network infrastructure on the demand response application is analyzed. The remainder of this paper is organized as follows. We first introduce related work in Section 2. A blueprint of OOCoSim and its implementation are then presented in Section 3. In Section 4, we describe a model of the smart grid application. We present a simulation model and a set of simulation results produced by OOCoSim in Section 5. Finally, we conclude the paper in Section 6.

Related Work There have been several research studies verifying the impact of the networking system in the smart grid. In this section, we introduce previous achievements in the smart grid cosimulation research domain and analyze the differences between OOCoSim and the existing studies.
The work in [10] is a novel attempt to integrate power and communication simulation. The authors propose the EPOCHS platform, which includes the PSCAD/EMTDC simulators for electromagnetic transient simulation, PSLF for power system simulation, and NS-2 for the communication infrastructure. The challenging issue in this work is synchronization, since PSLF performs typical continuous-time transient simulation of the power system while NS-2 is a discrete event simulator. To address the issue, the authors employed time-step-based synchronization, in which each simulator runs independently and is suspended at certain synchronization points. In this method, however, events between the synchronization points are ignored or buffered, which can cause accumulated simulation errors [11]. The errors can be reduced by using a small synchronization step, but this trades off against simulation speed. Moreover, using NS-2 as the network simulator is an inappropriate option, since its lower-layer models, including the physical layer, are simplified. In the smart grid, wireless channel conditions are harsher than in typical wireless environments [12,13]; therefore, using NS-2 for smart grid simulation can produce inaccurate results.
In [14], a multi-agent-system-based simulator was introduced to simulate dynamic control in a smart city. In the simulator, software agents create the dynamic behavior of devices that monitor, control, and produce electrical energy, and the created behavior is simulated to generate various results (for example, power demand over time). However, the simulator excludes the network system, which can critically influence the smart city system. Since the network system is a very important infrastructure of the smart city that can seriously impact the overall system, including it is required to generate more realistic simulation results.

In [15], an open-source-based simulation method was proposed. NS-2 and OpenDSS were employed to simulate a smart grid scenario. The authors designed an application scenario in which electrical energy in energy storage units is dispatched to prevent voltage drop when a solar ramping phenomenon occurs. The dispatch command is transmitted to each storage unit through an IEEE 802.11-based multi-hop network. The arrival time of the dispatch command is simulated with NS-2, and this information is then merged into the OpenDSS simulation script. The designed application scenario was successfully simulated through the proposed cosimulation, and the results show that the voltage drop during a solar ramping phenomenon can be compensated through the energy storage units. However, since the simulation runs on different platforms (Linux for NS-2 and Windows for OpenDSS), a dynamic and controllable cosimulation cannot be achieved. For example, simulating an application that adjusts the transmission power of the dispatch command based on the voltage drop level requires repeated re-simulation in NS-2, since the voltage drop level is a dynamic value and therefore cannot be considered in the
NS-2 simulation phase. To simulate such applications, which require a more complicated combination of the two systems, a dynamic and controllable cosimulation should be employed.

In [11,16], the authors proposed GECO, a cosimulation framework with the PSLF and NS-2 simulators for WAMS (Wide Area Measurement System). To integrate the two simulation tools, the authors implemented a bidirectional interface between NS-2 and PSLF, so dynamic simulation of smart grid applications can be achieved. GECO faces a synchronization challenge similar to that of EPOCHS. In GECO, the authors proposed a global event-driven synchronization method that prevents simulation errors which would introduce unwanted time delays not present in a real system. In this method, an instance iterates and repeatedly suspends/resumes the PSLF simulation to synchronize both simulations. However, there is a trade-off between the iteration time step and the accuracy of the simulation: a shorter iteration time makes the simulation more accurate but requires much more simulation time. This issue is analyzed in depth in the paper.

In [17], the authors proposed a cosimulation for wide-area smart grid monitoring systems, integrating OMNET++ and OpenDSS. They point out the shortcomings of the EPOCHS time management solution and present a novel solution that removes the accumulated errors. However, although their time-stepped solution does not incur accumulated errors, the time-stepped approach can generate control overhead and lead to lower performance. OOCoSim proposes an efficient time management solution by considering the characteristics of the simulator components.
The work in [18] proposed an integrated simulation framework for smart grid applications. In [18], the electrical distribution grid is modeled in MATLAB and linked to OMNET++ modules. Since both the power system and communication network models are simulated in a single domain, no synchronization problem arises. However, its applicability to diverse smart grid applications is limited. Some work has already used the OPNET simulator to simulate wide area networks in the smart grid. In [19], the proposed simulation framework employs OPNET and simplifies the power system as a virtual demander of the framework. Basically, the virtual demander manages the OPNET simulation: when it needs to transmit data, it suspends itself and creates a data packet in OPNET; the transmission of the packet is simulated by OPNET, and the results are returned to the virtual demander. Since a virtualized concept of the power system is introduced, the work can only generate limited simulation results in which the operations of the power system are unrealistically simplified. In [20], the OPNET simulator was employed to construct a testbed for SCADA (Supervisory Control and Data Acquisition) systems. The work tried to verify the vulnerability of the wide area network for the SCADA system from the perspective of cyber security. The power system is simulated on multiple computers with the PowerWorld simulation tool, and security attacks are generated and simulated in OPNET. Since the work concentrated on cyber security in the SCADA system, it did not consider the dynamic interaction between the PowerWorld simulation and the OPNET simulation. In [21], a cosimulation system employing OPNET for information simulation and OpenDSS for the power system was addressed. The proposed system highly resembles OOCoSim, since it uses the same simulators and shows a similar architecture. However, some components of each system are distinctive, especially the Coordinator and time
synchronization. In OOCoSim, the intermediate module abstracts the models of various smart grid applications, while the main program of [21] only manages the control state of the simulations. Furthermore, the time synchronization scheme in [21] granularly divides the timeline of the cosimulation and alternately runs OPNET and OpenDSS to avoid failures of time synchronism. However, this design assumes that information events invoked in a time slot do not affect the state of the power system simulated in the power simulator during the same time slot. Our time synchronization scheme, addressed in Section 3.2, holds the dependencies that should be kept in a live cosimulation without any such assumption.

OOCoSim Design and Implementation In this section, we introduce the design principles, describe the architecture of OOCoSim, and present its implementation.

OPNET and OpenDSS We first briefly introduce OPNET and OpenDSS. Basically, cosimulation in the smart grid strongly depends on legacy simulation tools, since extending an individual simulator to emulate the opposite part of the simulation model can cause unrealistic simplifications and assumptions, as mentioned in Section 1. Therefore, an important consideration in the OOCoSim design is how the interfaces of the constituent simulators should be provided to the application model. In this context, we focus on the functionality that gives external programs access to each simulator, and on the reasons we chose OPNET and OpenDSS as the network and power simulators.

OPNET is a widely used GUI-based network simulator that allows event-driven packet-level simulation. The most novel characteristic of OPNET is the hierarchical structure of a network element: a node model and process models. A node model represents a network device, and each node model consists of process models, each of which models a network protocol. Due to this hierarchical structure, new functionalities can easily be implemented and existing protocols modified in OPNET.
In addition, OPNET simulations can cooperate with any external program through message and data exchanges via the External System Module (ESM) and External Interfaces (EIs). By inserting a new state that employs EIs to exchange messages, OPNET can easily communicate with the external program.

OpenDSS is a comprehensive electrical system simulation tool for utility distribution systems that supports many new types of simulations designed to meet future needs. The most notable feature of OpenDSS is that it provides sequential-time simulation: OpenDSS provides this functionality through its quasi-static solution mode, and Monitor and EnergyMeter objects can generate time-series results from the simulation. Another notable feature of OpenDSS is script-based simulation. Engineers can design various simulation scenarios through scripts consisting of DSS commands (such as Compile and Solve) and DSS elements (such as Circuit, Transformer, Linecode, and Load) [22]. Finally, various analyses can be performed by creating scripts and invoking them from another program through the COM interface [23], which enables an external program to run a DSS script and generate analysis results.
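The COM-driven workflow just described (composing DSS commands and invoking them from an external program) can be sketched as follows. Running it requires a Windows machine with OpenDSS and pywin32 installed; the feeder path and monitor element names are hypothetical.

```python
# Sketch of driving OpenDSS through its COM interface; the circuit path and
# element names below are hypothetical placeholders.
DSS_SCRIPT = [
    "compile C:/feeders/ieee13.dss",                     # hypothetical feeder
    "new Monitor.m1 element=Transformer.sub terminal=1", # time-series monitor
    "set mode=daily stepsize=1h number=24",              # quasi-static run
    "solve",
]

def run_dss(commands):
    """Send each DSS command to the engine via the COM Text interface."""
    import win32com.client  # pywin32, Windows-only
    dss = win32com.client.Dispatch("OpenDSSEngine.DSS")
    dss.Start(0)
    for cmd in commands:
        dss.Text.Command = cmd
    return dss
```

The same command list could equally be written to a .dss file and compiled directly; driving it command-by-command over COM is what lets an external application model react to intermediate results.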
Choosing OPNET and OpenDSS for smart grid cosimulation does not offer particular advantages from the perspective of integration, since other network simulators, such as NS-2, NS-3, and OMNET++, and power system simulators like PSLF also provide external interfaces. However, each simulator has distinct merits for smart grid cosimulation. With OPNET, smart grid applications that consider diverse underlying network technologies can be precisely modeled and simulated, since it provides a far larger number of protocol models and system architectures than other simulators. In particular, OPNET offers high fidelity through its realistic physical layer models, including mobility, noise, and interference models [24]. In the smart grid environment, wireless channel conditions are more severe than in typical wireless environments [12,13] due to electromagnetic interference generated by electrical equipment, densely deployed obstacles, etc. The physical layer of OPNET is elaborately modeled with 14 pipeline stages, and each stage can easily be modified and implemented for the desired configuration. Therefore, OPNET can reflect the physical layer characteristics of the smart grid environment better than other typical network simulators, enabling much more realistic simulations. Although other simulators such as NS-2 and NS-3 also have well-defined transport layer and application protocol models, their models of physical layer characteristics are limited and simplified [25]. Therefore, OOCoSim employs OPNET as the network-side simulator. As for OpenDSS, it provides the distribution-domain-specific functionalities on which OOCoSim focuses. Since most smart grid applications are executed in the distribution and customer domains [6], and OpenDSS is designed to meet the future needs of these domains [22,26], it is an appropriate simulator for the cosimulation. If legacy simulation tools without simulation functionality for future systems were employed, many parts of
simulation modeling would have to be done in the application model, which can cause unrealistic simplification and assumptions. For these reasons, OOCoSim utilizes OpenDSS as the power system simulator.

OOCoSim Design In the smart grid, the operation of the power system can be represented by a set of state variables, where each variable represents the operation state of a power system element. For example, in a distribution automation system, the operation of circuit breakers can be abstracted with boolean state variables. The goal of smart grid applications can therefore be defined as managing the state variables of the power system to make the system stable and efficient. To achieve this goal, diverse functions and policies are defined in each application. These policies and functions receive the current states of the power system and control event signals, and generate output states. Therefore, we can describe the smart grid application through the following equation:

S'_power = f(S_power, E),  (1)

where S_power is a vector of state variables of the power system, E is a vector of event and/or control variables, and f(·) is an application policy or function. However, in the smart grid, applications change the state variables of the power system through the network system, so the output state variables of the applications cannot be applied to the power system immediately. In the worst case, the desired state variable is not delivered at all due to a transmission failure, so the application function can yield inaccurate results. It may seem meaningless to consider the network delay in the smart grid application, since the delay is on the order of milliseconds in a well-designed (typically wired) communication infrastructure while smart grid applications have dynamics on the order of seconds (for example, energy dispatching from the storage unit in [15] and fault detection in [11]). However, it is very difficult to design a communication infrastructure in which every entity has millisecond-level response time and
zero packet error rate. If we consider wireless-based infrastructure, this becomes even more unrealistic: the delivery of a control message can take longer than milliseconds, and frequent packet errors can occur. Moreover, for an application that requires multiple message exchanges to solve a problem in the power system, considering communication delay and packet error in the application model becomes even more important. Therefore, the smart grid application can be redefined by the following equation:

S'_power,i(t + d_i(t)) = p · f(S_power(t), E(t))_i,  (2)

where d_i(t) is the communication delay of a control message sent to the ith IED (Intelligent Electronic Device) at time t, and p represents a failure in the network system (p = 0 if a packet error occurs).

To simulate the behavior of the smart grid application defined in (2), the capabilities of power system simulation, network simulation, and the application model must be converged into one simulation model. Basically, the application model defines actions that receive state variables and event signals as input arguments, process the arguments, and generate output state variables and decisions for the events. The network simulation performs the message communication for reporting and controlling the power system state variables, from which d_i(t) and p of (2) are retrieved. Then, the resulting state variables of the application model are applied to the power system simulator to retrieve power-system-side simulation results relevant to the stability and efficiency of the smart grid. Through this procedure, cosimulation in the smart grid can be successfully performed.

To implement the cosimulation, the OOCoSim mechanism is described by an FSM with four states: idle, event received, message communication, and power system analysis, as shown in Figure 2.
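The role of d_i(t) and p in the delayed-control model above can be illustrated with a small sketch: each computed target state either arrives at its IED after a per-message delay or is dropped entirely. The delay distribution and loss probability below are hypothetical, chosen only for illustration.

```python
import random

random.seed(7)  # deterministic illustration

def apply_control(target_states, now, delay_fn, loss_prob=0.1):
    """Return (arrival_time, state) pairs for control messages that survive
    the network; a lost message (p = 0) leaves the old state in force."""
    applied = []
    for i, state in enumerate(target_states):
        if random.random() < loss_prob:          # p = 0: transmission failure
            continue
        applied.append((now + delay_fn(i), state))  # effective at t + d_i(t)
    return applied

# Hypothetical scenario: 5 IEDs, uniform 20-80 ms communication delay
controls = apply_control([1, 0, 1, 1, 0], now=10.0,
                         delay_fn=lambda i: random.uniform(0.02, 0.08))
```

This is exactly the gap the cosimulation must capture: the application computes target states instantly, but the power simulator must only see them at their network-determined arrival times, if at all.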
When an event signal or a timer signal is delivered to the cosimulation, it steps into the "event received" state. In each state, the application model processes the signal and decides the state transition. If further actions are required to handle the event, it calculates the state variables according to the predefined policy and enters the "message communication" state. In the "message communication" state, the network simulation is invoked to measure the communication delay and its accuracy. When a response message is received, the cosimulation steps into the "power system analysis" state. In this state, the resulting values of the state variables are applied to the power system simulator and the relevant impacts are simulated. It is worth noting that the power simulation employing OpenDSS produces time-driven signals, while the network simulation employing OPNET produces event-driven signals during the cosimulation. The gap between these types of simulation results must be tightly bridged to guarantee the correctness of the cosimulation. OOCoSim resolves this problem by casting the time-driven signals into event-signal form, exploiting the mechanism of the smart grid system and its application model. After the power system simulation in the "power system analysis" state, OOCoSim analyzes the changes of the state variables provided by OpenDSS and extracts the specific events that the application model should consider, such as the charging completion time of a certain electric vehicle. The events are registered as a set of timer signals, and the cosimulation is invoked by those signals in the "idle" state with the information the application model needs.
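The four-state cycle described above can be sketched as a small state machine; the transition table is our simplified reading of Figure 2, not an exact reproduction of it.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    EVENT_RECEIVED = auto()
    MESSAGE_COMMUNICATION = auto()
    POWER_SYSTEM_ANALYSIS = auto()

# Allowed transitions of the cosimulation FSM (simplified reading of Figure 2):
# an event may also be dismissed directly back to idle.
TRANSITIONS = {
    State.IDLE: {State.EVENT_RECEIVED},
    State.EVENT_RECEIVED: {State.MESSAGE_COMMUNICATION, State.IDLE},
    State.MESSAGE_COMMUNICATION: {State.POWER_SYSTEM_ANALYSIS},
    State.POWER_SYSTEM_ANALYSIS: {State.IDLE},
}

def step(current, nxt):
    """Advance the FSM, rejecting transitions not in the table."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

# One full handling cycle: signal -> decide -> network sim -> power sim -> idle
s = State.IDLE
for nxt in (State.EVENT_RECEIVED, State.MESSAGE_COMMUNICATION,
            State.POWER_SYSTEM_ANALYSIS, State.IDLE):
    s = step(s, nxt)
```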
However, this casting strategy is effective only under the assumption that the timelines of the OpenDSS and OPNET simulations are synchronized, since the FSM manages events driven by both simulators. There are three types of synchronization schemes widely used for cosimulation: master-slave, time-stepped, and global event-driven synchronization [27]. The master-slave method selects one simulator as the master, so the function of the application module can be limited. The time-stepped method can cause accumulated errors, so the fidelity of the simulation can be hampered [27]. To deal with these limitations, OOCoSim employs a global scheduler and a global/local time management scheme for synchronization. Before addressing our time synchronization design, it is worth considering the temporal relationship between the OOCoSim components. OPNET performs the network simulation, which directly affects the control of the smart grid simulation. Most network events are invoked at unpredictable (stochastically distributed) times during the simulation, and most of them should be reported to the application model or OpenDSS to determine their actions. On the other hand, OpenDSS outputs simulation results until it reaches a steady state. OpenDSS results drive the scheduling of the application model and, consequently, the scheduling of OPNET. However, it is notable that the OpenDSS simulation operates in a continuous manner, while its trigger states are defined discretely in smart grid environments. Thus, the time management module must satisfy the following requirements: 1. OPNET must advance its time only after all of the OpenDSS events that could occur in that time have been probed. 2. The OpenDSS results must be registered before OPNET advances its time.
OOCoSim achieves these objectives through its novel time management module. First, every event of the application model and the pre-defined simulation scenario is scheduled in the global scheduler. For instance, the output diversity of the renewable energy generators and the charging schedules of electric vehicles are scheduled in the global scheduler. Then, the application model transmits these schedules to OpenDSS and issues the Solve command to simulate the current schedule status. The simulation results from OpenDSS are reported to the application model and registered in the application model timeline. Then, the global scheduler grants OPNET a time advance by a specific amount of time, which is set in advance as a constant by the user. The global scheduler adds the events that occur within that time to OPNET. If no meaningful events occurred in the OPNET simulation, the global scheduler simply continues the time advance of OPNET. Otherwise, if an event that requires a simulation in OpenDSS occurs during the OPNET simulation, OPNET pauses the simulation and reports to the application model. Since an OPNET event refers to a request for an operation of the smart grid system, the OpenDSS simulation should be recalculated from the time at which the event occurs. For example, if an information message containing the price of electricity arrives at a certain node in OPNET, future events of the DR application and a load adjustment event of OpenDSS may be reassigned to the global scheduler. The application model invokes the Solve command of OpenDSS, reflects the renewed schedules received from OpenDSS, and continues the time advance of OPNET. Figure 3 shows an example of the time schedule in OOCoSim. Here s_0 and s_2 are the times of pre-defined schedules of a specific scenario, and s_1, s_3, s_4 are the times reported by the initial OpenDSS simulation. The events s_0, s_1, and s_2 are transmitted to OPNET, respectively, when the application model grants the time advance by T. OPNET simulates a network event that occurs at simulation time e_0 and pauses its simulation at e_0, since this event might cause a change of future plans. After re-simulating OpenDSS from e_0, its results are reported to the application model, which renews the adjusted schedules s_3, s_4 to s'_3, s'_4. OPNET receives the adjusted schedules and continues the simulation for T or until another event occurs. To verify that our time management scheme guarantees time synchronism between OPNET and OpenDSS, it should be proved that the renewal operation of OpenDSS does not violate the time-advancing status of OPNET (i.e., s'_3 > e_0 and s'_4 > e_0). Let E(t) be the vector of control variables at time t; then f(S_power(t), E(t)) results in a set of vectors S(t) = {S_power(t + n∆t)}, where ∆t is an intrinsic time step parameter used in OpenDSS. In our example, f(S_power(e_0), E(e_0)) results in events at e_0 + n∆t, which satisfies s'_3 > e_0 and s'_4 > e_0. This analysis proves that the renewed schedule does not violate the time synchronism of the OOCoSim timeline.

The proposed solution may seem similar to the existing time-stepped methods because of the existence of T. However, T does not indicate the granularity of the time management, since the event-handling mechanism and the adjustment of the OOCoSim schedules fully cover the time synchronism. Due to the event-driven feature of OPNET, cosimulation-related events can be handled instantly, independent of the variance of T.

OOCoSim Architecture and Implementation

In this subsection, we describe the architecture and implementation of OOCoSim. The most important issue of the OOCoSim implementation is how to interface both simulators with the user-defined application function. To implement this interfacing functionality, OOCoSim employs the ESD, EI, and ESM of OPNET as well as the COM interface of OpenDSS.
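Before turning to the interfacing details, the time-advance and schedule-renewal behavior of the time management module can be sketched as follows. All names and the grant value are illustrative assumptions, not the real C++ implementation:

```python
import heapq

class GlobalScheduler:
    """Illustrative sketch of the OOCoSim global scheduler: OPNET advances in
    grants of T, and an OPNET event at e0 pauses the advance and triggers an
    OpenDSS re-solve that may renew the scheduled events."""
    def __init__(self, grant=100.0):
        self.grant = grant               # the user-set constant T
        self.queue = []                  # (time, label) min-heap
        self.now = 0.0

    def register(self, t, label):
        heapq.heappush(self.queue, (t, label))

    def advance(self, opnet_event_at=None):
        """Advance OPNET by up to T; pause early if an OPNET event occurs."""
        horizon = self.now + self.grant
        if opnet_event_at is not None and opnet_event_at < horizon:
            self.now = opnet_event_at    # pause OPNET at e0
            return "paused"
        self.now = horizon
        return "advanced"

    def renew(self, e0, stale_labels, resolve):
        """Drop schedules invalidated by the event at e0 and register the
        renewed ones returned by an OpenDSS re-solve starting from e0."""
        self.queue = [(t, l) for (t, l) in self.queue if l not in stale_labels]
        heapq.heapify(self.queue)
        for t, label in resolve(e0):
            assert t > e0                # renewed events lie strictly after e0
            self.register(t, label)
```

The assertion in `renew` mirrors the synchronism argument above: an OpenDSS re-solve from e_0 can only produce events at e_0 + n∆t, so renewed schedules never fall behind the paused OPNET clock.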
Figure 4 represents the overall architecture of OOCoSim. As the figure shows, the CoSim module is the most fundamental component of OOCoSim. It manages the operation of both simulators using the global scheduler and exchanges data among OPNET, OpenDSS, and the application model. The pre-defined functions of the External System Access (ESA) library of OPNET are employed by the CoSim module to manage the operation of OPNET. Once the OPNET simulation is activated by the CoSim module, the CoSim module and the OPNET simulation communicate with each other via EIs. EIs are interfaces through which the CoSim module and the OPNET simulation can access the same information, and they are used to report network information to the CoSim module. ESM is a process model that performs the protocol operations of the application layer. The basic functionality of ESM is message communication. In OOCoSim, ESM is used to exchange the relevant information with the CoSim module through EIs; therefore, a supplementary state that employs EIs is added to the ESM. Specifically, OPNET collaborates with the CoSim module in the following manner.
• First, the OPNET simulation is invoked by the CoSim module through the functions defined in ESA, such as Esa_Main(), Esa_Init(), and Esa_Load().
• Second, the message communications of the application model are defined and performed by ESM. The protocol can be implemented with a finite state machine.
• Third, the required information, such as response time and packet error, is reported to the CoSim module through the ESM function op_esys_interface_value_set() with the defined data type.
• Fourth, the information is retrieved by the CoSim module through the callback handler defined for each EI and passed to the application module.
The CoSim module delivers information to OPNET and schedules an interrupt in the following manner.
• The CoSim module can execute the OPNET simulation until a specific simulation time through the function Esa_Execute_Until().
• When an event occurs in the OPNET simulation, the CoSim module can interrupt the OPNET simulation using the function Esa_Interrupt().
• When the CoSim module receives information targeting ESM from OpenDSS or the application model, it delivers the information, with or without an interrupt, through the functions defined in ESA such as Esa_Interface_Value_Set() and Esa_Interface_Array_Set().
The OpenDSS simulation is defined by a DSS script file, which configures the distribution system that we want to simulate and runs a simulation. Once the simulation is invoked, we can access and change the values of simulation elements through the COM interface. Therefore, we implemented the CoSim module to communicate with the OpenDSS simulation via the COM interface. To drive OpenDSS externally: (i) we create DSS files to define and configure a distribution system; (ii) we invoke the simulation with the DSS files via the COM interface; and (iii) we set/get the properties of the circuit elements of the OpenDSS simulation. In the CoSim module, we implemented functions that handle OpenDSS through the COM interface in C++, using the IDispatch class of VC++, so that OOCoSim can employ OpenDSS. Specifically, OOCoSim uses the IText and ICircuit interfaces and the circuit element interfaces (e.g., ILoad and IGenerator), all of which are subclasses of the IDispatch class. The IText interface executes OpenDSS commands such as compile and clear, the ICircuit interface lets the CoSim module retrieve circuit element interfaces, and the circuit element interfaces are used to set/get the properties of the circuit elements.
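The three-step external drive of OpenDSS can be illustrated with the following sketch. Since the real COM objects are only available on Windows with OpenDSS installed, stand-in stub classes are used here to show the calling pattern only; the stub behavior (including the 3% loss figure) is entirely made up:

```python
class StubDSSText:
    """Stand-in for OpenDSS's IText COM interface (illustrative only)."""
    def __init__(self, engine):
        self.engine = engine
    def Command(self, cmd):
        if cmd.startswith("compile"):
            self.engine["compiled"] = cmd.split(" ", 1)[1]
        elif cmd == "solve":
            # pretend solving produces a loss figure from the current loads
            self.engine["losses_kw"] = sum(self.engine["loads"].values()) * 0.03

class StubDSSLoad:
    """Stand-in for an ILoad circuit-element interface."""
    def __init__(self, engine, name):
        self.engine, self.name = engine, name
    def set_kw(self, kw):
        self.engine["loads"][self.name] = kw
    def get_kw(self):
        return self.engine["loads"][self.name]

# The three-step external drive described in the text:
engine = {"loads": {"675": 0.0}, "compiled": None, "losses_kw": None}
text = StubDSSText(engine)
text.Command("compile ieee13.dss")   # (i)+(ii) configure and invoke
load = StubDSSLoad(engine, "675")
load.set_kw(100.0)                   # (iii) set a circuit-element property
text.Command("solve")
```

In the real CoSim module these calls are dispatched through IDispatch to the actual OpenDSS engine rather than to Python stubs.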
OOCoSim utilizes handlers for both the OPNET and OpenDSS simulations; the application model can employ the user-defined interfaces of OPNET, dispatch all the circuit-element interfaces of OpenDSS, and collect all the event information of each simulator. Using these interfaces, most operations of a smart grid application can be precisely specified in the CoSim module for a simulation study. Note that the two simulators can be invoked concurrently in OOCoSim since they operate on the same platform, so the overall simulation for such applications can be conducted in an integrated way.

Modeling of Smart Grid Application

In this section, we explain how applications of the distribution and customer domains can be modeled, using the demand response (DR) application as an example. We consider a scenario in which the DR application is used together with a deployed renewable energy generator. The main problem of employing a renewable energy generator is its inherent characteristic that an environmental factor, such as clouds, can affect its output power. This can influence the efficiency and stability of the distribution system. Therefore, the DR application introduces real-time pricing to address the problem. By pricing electricity high when power availability is low, the DR application tries to reduce the energy consumption of each customer. To perform the DR application, we deploy an energy management system that reschedules the electricity demand of customers under the dynamics of renewable energy generation. However, from the perspective of peak load distribution, an entity is required to collect all the power generation and consumption information and to determine the proper electricity load of each customer, since peak load distribution cannot be performed by each individual energy management system. Therefore, we devise the concept of a gateway energy management system, which can be mapped onto a service provider in the smart grid. The gateway energy management system is placed in the distribution domain and supervises the energy management of the distribution domain. The most important point of the DR application is how to decide the electricity consumption of each customer according to the real-time price. The function needs to minimize the electricity consumption cost and distribute the peak load of the distribution domain. To achieve the goal of the DR application, we model it simply as follows [28]: where C(t) and C_forecasted(t + k) represent the real-time electricity price at time t and the forecasted electricity price at time t + k, P_L,i(t) and P_L,i(t + k) are decision variables representing the power demand of customer i at time t and t + k, B_total represents the total daily electricity budget of each customer, T is the size of the scheduling window, and N is the number of customers. The objective function indicates that the energy management system tries to minimize the electricity consumption cost by rescheduling the electricity demand of each customer. The first and second constraints state that enough daily and instantaneous energy consumption must be guaranteed for each customer to perform basic home tasks. The third constraint states that the energy management system tries to find the minimum electricity consumption cost that is lower than a predefined (or customer-defined) budget. In total, Equation (3) tries to minimize the total energy consumption cost while consuming a predefined amount of energy.
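The intent of Equation (3) for a single customer can be illustrated with a simple greedy allocation. This is only a sketch of the idea: it shifts energy toward cheap slots under per-slot bounds, rather than solving the full multi-customer optimization of Equation (3):

```python
def schedule_demand(prices, e_total, p_min, p_max):
    """Meet a required total energy e_total while shifting consumption toward
    cheap slots; the per-slot bounds [p_min, p_max] stand in for the 'basic
    home tasks' constraints. A greedy fill replaces a full LP solver."""
    n = len(prices)
    load = [p_min] * n                      # satisfy the instantaneous minimum
    remaining = e_total - p_min * n
    assert 0 <= remaining <= (p_max - p_min) * n, "infeasible demand"
    # fill the cheapest slots first, up to p_max each
    for idx in sorted(range(n), key=lambda i: prices[i]):
        add = min(p_max - p_min, remaining)
        load[idx] += add
        remaining -= add
        if remaining <= 0:
            break
    cost = sum(p * l for p, l in zip(prices, load))
    return load, cost
```

For prices [5, 1, 3, 2] and a required total of 8 units with per-slot bounds [1, 4], the greedy schedule concentrates demand in the cheap slots while a flat schedule of 2 units per slot would cost more, which is exactly the peak-shifting behavior the objective function encourages.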
In our application model, when the real-time and forecasted electricity prices are given, the gateway energy management system solves Equation (3) according to the given cost. Then, the optimal electricity load is distributed to each energy management system to control its electricity. In the next section, we present how this DR application model can be modeled in OOCoSim and simulated through it, and we investigate in detail whether OOCoSim produces the proper simulation results.

Simulation Study

In this section, we establish a simulation scenario for OOCoSim based on the model described in Section 4. Then, the simulation results are provided with detailed explanations.

Simulation Model

Power System: The distribution system can be defined in OpenDSS through DSS scripts. To perform the simulation study of the DR application with OOCoSim, we configured the IEEE 13 Node and 34 Node test feeder systems as the target distribution systems. On each distribution system, we placed a wind generator to simulate the impact of the dynamics of the Distributed Energy Resource (DER). However, determining the location and output power of a DER is an important issue, because it strongly affects the technical benefits of the distributed power system, such as the power loss. There has been a lot of research on this problem [29,30]. In particular, A. Anwar and H. R. Pota [29] proposed an algorithm to identify the appropriate location and size (output power) of a distributed generator. First, the algorithm calculates the sensitivity factor of each bus, dP_L/dP_i = (P_L1 − P_L2)/(P_DG1 − P_DG2). The sensitivity factor represents the change in power loss (P_L1 − P_L2) at a specific bus (P_i) when the size of the distributed generation is varied from P_DG1 to P_DG2. After listing the most sensitive buses, the algorithm calculates the power loss over a large variation of DER size at each bus. The power loss curves are almost quadratic functions [31], and the algorithm picks the bus and the size that give the minimum power loss. Following the analyses in [29], we installed the wind generator at bus 675 in the 13 Node system and at bus 844 in the 34 Node system, with output powers of 1.29 MW and 1.15 MW, respectively. Figure 5 is a graphical representation of the configured distribution system. Moreover, to consider the output diversity of the power generator, we made a probable profile for the generator over a period of 5 min according to the wind power data at Seoul, Korea, from 9 September, 09:56 to 10:00 [32], in which wind speeds of {4,1,0,8,2,0,4,5} m/s are reported at the time points {09:56,09:59,09:58,09:59,10:00}. We simply assumed that the output power of the wind generation is proportional to the wind power, G_wind(t) ∝ G_ref · wind_power(t), where G_wind(t) represents the actual output power of the wind generator and G_ref is the reference output power (1.29 MW for the 13 Node system and 1.15 MW for the 34 Node system). Figure 6 depicts the assumed output profile of the wind generator.
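The two formulas above translate directly into code. The sketch below assumes a reference wind speed for the proportionality constant in the wind model, which the text does not specify:

```python
def sensitivity_factor(pl1, pl2, pdg1, pdg2):
    """Change in system power loss per unit change in DG size at one bus,
    dP_L/dP_i = (P_L1 - P_L2) / (P_DG1 - P_DG2), as in [29]."""
    return (pl1 - pl2) / (pdg1 - pdg2)

def wind_output(wind_speed, ref_speed, g_ref):
    """Assumed linear model G_wind(t) ∝ G_ref · wind_power(t); the
    proportionality constant normalizes by a reference wind speed."""
    return g_ref * (wind_speed / ref_speed)
```

Candidate buses would be ranked by the magnitude of their sensitivity factor, and the loss curve then scanned over DG sizes at the most sensitive buses to pick the minimum-loss placement.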
We modified the 34 Node system by reducing each line length to 10% of its original value, since the original 34 Node system is too large to be used with traditional WLAN coverage.

Network System: The next step in simulating the smart grid application is to model the network infrastructure. We assumed that an energy management system exists at each bus and mapped the topology of the target distribution system onto the OPNET network topology. Figures 7 and 8 show the resulting network topologies. To examine the impact of the networking system, we employed two network systems for the distribution system: one with a WLAN Mesh topology (Figures 7a and 8a), and the other with a combined topology of WiMAX and an Ethernet backbone (Figures 7b and 8b). The network topology of each system was designed considering the practical features of the communication devices. For example, the distance between bus 650 and bus 632 is about 610 m, while the distance between bus 632 and bus 645 or bus 633 is about 152 m, according to [33]. Since increasing the communication range of only one node could lead to an inefficient wireless network design, we connected the energy management system of bus 650 to the backbone network (WiMAX) or to the mesh router (WLAN Mesh). Then, the APs (WLAN Mesh) or the BSs (WiMAX) are deployed to connect the wireless devices running the energy management system of each bus. We modeled the network system with a server/client model: the gateway energy management system acts as the server, and the other energy management systems act as clients. The specific configuration of each node is as follows:
• The gateway energy management system is a wlan_server or ethernet_server node model in OPNET.
• The energy management system model is a node model that includes the ESM process, and each ESM process reports the response time of application traffic to the CoSim module;
• The communications between the gateway system and each energy management system are configured with a server/client model.
As the operational scenario, each energy management system sends a 1 KB request message every second, and the gateway sends a 5 KB response message back to the energy management system. When the energy management system receives the response message, it calculates the response time and reports the result to the CoSim module.

Smart Grid Application: We also implemented the application model in the CoSim module. In Section 4, we formulated the operation of the energy management system. We simply assumed that electricity price forecasting is performed every five minutes and that the price of electricity is inversely proportional to the output power of the wind generator. Therefore, T in Equation (3) is set to 300 s, k is set to 1 s, and C(t) = α · 1/G_wind(t). The forecasted electricity price is delivered to the gateway energy management system, which decides the second-level electricity load of each energy management system by solving Equation (3). Every second, each energy management system requests its actual load from the gateway energy management system and adjusts its load accordingly. Since the gateway energy management system calculates the load of each customer according to the forecasted output power of the wind generator, it might recalculate the load value if the actual output power changes. However, we simply assume that the forecasted wind power is correct, so the load calculation is performed only once every five minutes.
Because the retrieval of the actual load of each customer is performed through the network infrastructure, network delay and packet error may occur. Therefore, the CoSim module should schedule the load adjustment event according to the following equation, where P^determined_L,i(t) is the determined load of customer i for second t, and d_i(t) is the communication delay of the load decision message for customer i at time t. Figure 9 summarizes the operation flow of the DR application. Note that, traditionally, DR is performed over a long period of time and as a prearranged plan, for instance, one day or one hour in advance. However, as advanced computing and networking technologies are applied to the power system, the time scale of DR is shrinking. Recently, many studies have proposed and verified the impact of load scheduling at the minute or second level [34,35]. Therefore, we regard the defined application model not as unrealistic but as a future application model of the smart grid.

Simulation Results

We conducted the simulation as follows:
• OOCoSim creates events for the wind generator according to the pre-defined wind generator profile.
• The CoSim module calculates the load based on the forecasted 5 min electricity cost information.
• Every second, OOCoSim performs message communications through the OPNET simulation and receives the response time information of each bus.
• Based on the calculated load and the response time information, the CoSim module creates new events for the OpenDSS simulation and assigns them to the global scheduler.
• When the scheduler encounters an OpenDSS event, it applies the event to OpenDSS.
• After each event is applied to OpenDSS, OOCoSim dispatches the distribution system information: the total power loss and voltage.
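The delay-aware event scheduling described above can be sketched as follows; the function name and its arguments are illustrative:

```python
def schedule_load_adjustments(t, loads, delays, packet_ok):
    """A load decision made for second t takes effect at customer i only at
    t + d_i(t), and is dropped entirely on a packet error (p = 0)."""
    events = []
    for i, (kw, d, ok) in enumerate(zip(loads, delays, packet_ok)):
        if not ok:
            continue                 # lost message: no adjustment this second
        events.append((t + d, i, kw))
    events.sort()                    # apply in order of effective time
    return events
```

Each returned tuple (effective time, customer index, load) would be handed to the global scheduler as an OpenDSS event, so that slow or lost messages show up as late or missing load adjustments in the power simulation.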
In our simulation study, we used energy consumption data from the Almanac of Minutely Power Dataset (AMPds) [36], which provides minute-level observed electricity metering data, and compared the impact of the DR application against it. The tag "w/o DR" in the following figures and tables denotes the results of the profile that follows the AMPds data. Table 1 lists the total energy consumption cost over 5 min, and Figure 10 shows the power consumption of the 13 and 34 Node systems. The table readily shows that the DR model achieved its goal of minimizing the electricity usage cost: in both systems, approximately 15% of the electricity usage cost was saved while consuming the same amount of electric energy. Moreover, Figure 10 shows that the application also achieved its other goal of distributing the peak load. Based on these results, we can conclude that the application used a large amount of electricity when the price was cheap and a small amount when the price was expensive, while successfully minimizing the total electricity budget and distributing the peak load. The next step is to consider the following questions: "How does the DR application affect the distribution system?" and "How does the underlying network system affect the performance of the DR application?" These questions are central in the smart grid, where the power system and IT infrastructure are tightly coupled. To answer them, we analyzed the impact of the DR application in terms of the power loss and voltage of the distribution system. To isolate the impact of the networking system, we also devised an ideal network scenario in which the network delay and the packet error rate are both 0.
Figure 11a shows the power loss of the 13 Node system. We can observe that enabling the energy management function brings a considerable benefit in terms of power loss, since the lines tagged "DR" show a much lower power loss than the "w/o DR" line. A notable point is that, with the DR application, a small power loss is achieved even when the output power of the renewable energy generation is very low (from 60 s to 180 s). Since deficient energy provision can lead to serious consequences such as a blackout, reducing the power loss is even more important when power availability is very low. Therefore, we can conclude that the real-time-price-based DR application contributed greatly to the stability and efficiency of the distribution system in our simulation study. On the other hand, Figure 11b shows the voltage profile of bus 652, where the voltage increases the most when the energy management functions are not used. The results show that the energy management function also contributes to the stability of the distribution grid, since the lines tagged "DR" have values closer to 1 pu. Note that "pu" denotes the fraction of the actual voltage over the defined base voltage; the closer the actual voltage is to the base voltage (i.e., 1 pu), the more stable the system can be regarded. Table 2 lists the total power loss and the voltage index of the 13 Node system over 5 min. The results show that the DR application reduces the power loss while maintaining the voltage stability of the distribution system. We can therefore conclude that the designed DR application successfully contributes to a stable and efficient distribution system. A notable point of Figure 11a,b and Table 2 is that the impact of the underlying network infrastructure is relatively small: since the 13 Node system is a very small system, the performance gap between the two network systems is small and causes little difference in the simulation results. Both the WLAN and WiMAX network scenarios tend to converge to the ideal network scenario.

However, when we conducted the simulation with the 34 Node system, we could see precisely to what extent the network performance affects the distribution system. Because the 34 Node system is much larger than the 13 Node system, there are much greater differences between the application performance over the WLAN Mesh and the WiMAX-Ethernet backbone networks. Figure 12a shows the total power loss simulated for the 34 Node system. The DR application also achieved its goal in the 34 Node system, but there is a large difference between the WiMAX-Ethernet backbone and the WLAN Mesh network. Since the 34 Node system contains many energy management systems, the probability of packet collision increased severely, which degraded the performance of the DR application. Moreover, a number of packets were dropped due to multi-hop transmissions, which led to the unexpected result. In contrast, the WiMAX-Ethernet backbone network shows performance almost similar to the ideal scenario. Figure 12b shows the voltage of bus 852 in the 34 Node system. We can see that the energy management function can successfully keep the distribution system stable. However, when the WLAN Mesh network is employed, grid stability cannot be achieved, since the network has poor connectivity and few energy management systems receive the control information in the desired time.
Table 3 lists the total power loss and voltage index of the 34 Node system. The application employing the WLAN Mesh network system shows a larger power loss and a higher voltage index value. Therefore, we can conclude that to employ a WLAN-based backbone for the DR application in the 34 Node system, a new protocol that resolves the low-connectivity issue should be designed. For example, modified MAC protocols or routing schemes that can guarantee a high delivery ratio in a multi-hop wireless network should be introduced to construct a WLAN-based backbone network. In addition, we analyzed the transmission delay of network packets in each scenario. The transmission delay of the smart grid network is a critical issue, since an excessive delay of the power state report can lead to delayed control of the power distribution, which results in unexpected power loss. We ran the OOCoSim simulation for each scenario (13 and 34 Node systems with WLAN Mesh and WiMAX network systems) for 600 s and collected the packet responses provided by OPNET in each simulation. Figure 13 shows the CDF of the transmission delay of the simulations. As shown in Figure 13, WiMAX-Ethernet shows an almost identical distribution of transmission delay in the 13 Node and 34 Node cases, with about 0.1 s at the 99th percentile, while WLAN Mesh shows over 1 s at the 70th percentile in the 34 Node system. However, the WLAN Mesh system shows a lower delay in the 13 Node case. The evaluation shows that the WLAN Mesh network performs lower-delay communication than the WiMAX-Ethernet network in a smaller network, whereas the WiMAX-Ethernet network could be a suitable choice for large-scale and delay-constrained smart grid applications.
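A transmission-delay CDF like the one in Figure 13 can be built from the collected response times as follows; the sample values in the test are made up, not the paper's data:

```python
def empirical_cdf(delays):
    """Build an empirical CDF from collected packet response times,
    as a list of (delay, cumulative probability) pairs."""
    xs = sorted(delays)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

def delay_at_probability(cdf, p):
    """Smallest delay whose cumulative probability reaches p
    (e.g., the delay at the 99th percentile for p = 0.99)."""
    for x, q in cdf:
        if q >= p:
            return x
    return cdf[-1][0]
```

Reading off `delay_at_probability(cdf, 0.99)` for each scenario corresponds to the "about 0.1 s at 99% probability" style of comparison made in the text.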
Based on all the simulation results, we can confirm that the proposed OOCoSim can successfully conduct simulations of a smart grid in which the power grid and the communication infrastructure are tightly coupled. Additionally, OOCoSim can be used to study how the underlying network affects the power grid.

Discussions on Simulation Scalability and Stability

Table 4 summarizes the execution time of the simulation study on a computing system with a Core 2 Duo 2.4 GHz CPU and 8 GB of physical memory. The simulated time was 600 s for each scenario and network topology. In the WLAN Mesh scenarios, many more events were generated and processed in OPNET due to CSMA/CA-based channel access and multi-hop transmission, and therefore longer execution times were reported. In addition, the number of OpenDSS events is smaller in the WLAN Mesh scenarios, because the poor network performance created fewer application events. We can observe that the execution time of both simulation tools is well proportional to the number of events. Although the number of events can differ according to the simulation configuration, this linearity can guarantee the scalability of the simulation [26]. Therefore, we can conclude that both simulation tools have enough scalability to perform simulations of the smart grid application. Figure 14a shows the memory usage for the scenarios described in the previous subsection. We can observe that OOCoSim has a linearly increasing pattern of memory usage in all scenarios. This linearity guarantees that simulations of the smart grid application can be performed within an expected amount of memory [37]. Therefore, we can conclude that OOCoSim is scalable and stable in terms of memory usage. The CPU usage of OOCoSim is shown in Figure 14b. In all four scenarios, a stable CPU usage pattern was reported, with most values between 44% and 52%. The results indicate that, regardless of the simulation configuration, OOCoSim requires a similar amount of CPU, so it can perform stable and sustainable simulations of applications in the distribution and customer domains. To further verify the scalability of the simulation system, we duplicated the 13 Node WiFi Mesh system two to eight times and ran each case. Since OpenDSS can run only a single circuit, we utilized the Direct Connection Shared Library for OpenDSS; with this engine, multiple OpenDSS simulations can be executed in parallel. Figure 15 shows the simulation time of OPNET, of OpenDSS, and the total simulation time including OOCoSim operations, with respect to the number of 13 Node WiFi Mesh systems. As shown in Figure 15, the OPNET simulation accounts for a high proportion of the total simulation time. In addition, since OpenDSS can simulate multiple circuits concurrently, the simulation time of OpenDSS grows much more slowly than that of OPNET. Regardless of these proportions, Figure 15 shows that the total simulation time of OOCoSim increases linearly as the number of systems increases. This observation shows that OOCoSim has enough scalability to perform simulations of the smart grid system and its applications.

Conclusions and Future Work

In this paper, we proposed a cosimulation system, OOCoSim, for smart grid applications in the distribution and customer domains. We described the basic concept, design, and implementation of OOCoSim. Moreover, we conducted a simulation study of the demand response application, which is a representative application of the distribution and customer domains of the smart grid. With the proposed OOCoSim, extensive simulation studies of the smart grid can be carried out, guiding the right direction for building a smart grid.

Figure 1. The conceptual representation of the proposed cosimulation.
Figure 2. The finite state machine of the cosimulation.
Figure 3. The example case of the scheduling mechanism.
Figure 9. The operational flow of the application model.
Figure 11. The power loss and the voltage of the 13 Node system. (a) Total power loss; (b) Voltage of the 652 bus.
Figure 12. The power loss and the voltage of the 34 Node system. (a) Total power loss; (b) Voltage of the 852 bus.
Figure 13. The CDF of the packet transmission delay.
Figure 14. The memory and CPU usage of the simulation scenarios. (a) Memory usage; (b) CPU usage.
Figure 15. The simulation time of multiple smart grid systems.
Table 1. Total cost of each scenario.
Table 2. The power loss and the voltage of the 13 Node system.
Table 3. The power loss and the voltage of the 34 Node system.
Table 4. Execution time of the simulation study.
12,719
2018-01-09T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Characteristics and economic burden of frequent attenders with medically unexplained symptoms in primary care in Israel Abstract Background Frequent Attenders with Medically Unexplained Symptoms (FA/MUS) are common in primary care, though challenging to identify and treat. Objectives This study sought to compare FA/MUS to FA with organic illnesses (FA/OI) and the general clinic population (Non-FA) to understand their demographic characteristics and healthcare utilisation patterns. Methods For this retrospective, observational study, Electronic Medical Records (EMR) were obtained from Clalit Health Services, regarding the population of a sizeable primary care clinic in Be'er-Sheva, Israel. Electronic medical records were screened to identify the top 5% of FA. FA were stratified based on whether they had OI. FA without OI were then corroborated as having MUS by their physicians. Demographics, healthcare utilisation and costs were analysed for FA/OI, FA/MUS and Non-FA. Results Out of 594 FA, 305 (53.6%) were FA/OI and 264 (46.4%) were FA/MUS. FA/OI were older (69.1 vs. 56.4 years, p<.001) and costlier (ILS27693 vs. ILS9075, p<.001) than FA/MUS. Average costs for FA/MUS were over four times higher than Non-FA (ILS9075 vs. ILS2035, p<.001). The largest disparities between FA/OI and FA/MUS were in hospitalisations (ILS6998 vs. ILS2033) and surgical procedures (ILS8143 vs. ILS3175). Regarding laboratory tests, differences were smaller between the groups of FA but significantly different between FA and Non-FA. Conclusion FA/MUS are more costly than Non-FA and exhibit unique healthcare utilisation and cost patterns. FA/OI had more severe illnesses necessitating hospitalisations and surgical interventions, while FA/MUS had more investigations and tests, attempting to find an explanation for their symptoms. 
Introduction Medically unexplained symptoms (MUS) without a clear, organic illness (OI) are caused by a complex interaction of bio-psycho-social mechanisms, comprising 33% of primary-care visits [1][2][3][4]. MUS are often transient; however, they can become chronic, necessitating extensive medical attention [5]. Patients with MUS who are frequent attenders (FA) of healthcare services present a significant challenge, incurring adverse treatment outcomes, high healthcare utilisation and costs [5][6]. Several studies have assessed the economic burden of MUS. A review of the economics of MUS found excess costs ranging from $432 to $5,353 [6]. A recent US study estimated annual MUS costs at $256 billion [7]. Identifying FA/MUS via electronic medical records (EMR) is advantageous, as information is easily obtainable, creating opportunities for primary care physicians (PCP) to initiate proactive care [13]. Proactive management of FA/MUS could improve care through longer consultation times and directing patients to evidence-based treatments like cognitive behaviour therapy (CBT) [12][13][14]. Although EMR-identification has been effective for other conditions [15,16], it is not well-established for MUS [13]. This study used EMR to characterise the demographics and unique healthcare utilisation patterns of FA/MUS and FA/OI. Characterising FA/MUS could be a first step in identifying them via EMR, creating opportunities for physicians to more effectively manage their care, decreasing healthcare utilisation and costs. Study subjects For this retrospective, observational study, EMR were obtained from Clalit Health Services (CHS), Israel's largest health maintenance organisation (HMO). Israeli citizens receive a National List of Health Services defined by law and provided by four not-for-profit HMO. Healthcare is universal, supported by progressive taxation, supplemented by governmental funding, with voluntary premiums for supplementary insurance [17]. 
The study was approved by the CHS and Soroka Medical Centre institutional review boards. Data was deidentified to maintain patient anonymity. Patients were at least 18 years of age, registered with the largest primary-care clinic in Be'er Sheva, Israel. The clinic employs 10 PCP, with 15-35 years' experience. Interactions between PCP and the research team were minimal, focussing solely on the study. Study design In accord with previous research [10], FA were defined as the top 5% of clinic attendees. Using EMR, FA were identified and stratified based on whether they had cancer, renal failure, congestive heart failure (CHF), ischaemic heart disease (IHD), cirrhosis, cerebrovascular accident (CVA), dementia, psychosis or bipolar disorder; reflecting previous diagnoses throughout primary and secondary care and differentiating between FA/OI and FA without OI. Next, to establish the existence of MUS, EMR of FA without OI were examined by their PCP, who focussed on the symptoms causing their frequent visits. Patients were considered FA/MUS only after the PCP verified their visits were due to MUS. Once FA/MUS were identified, all groups were compared (FA/MUS, FA/OI and the general clinic population, Non-FA). Outcomes Utilisation and cost data were collected based on CHS' administrative claims data. Actual CHS costs are presented. Services paid out of pocket or by supplementary insurance were not included. Utilisation rates for PCP visits are presented without costs, as PCP are salaried employees and their costs are not captured in CHS' administrative database. Utilisation/cost data for secondary and tertiary care were provided by CHS and reflect internal pricing estimates when services were provided directly by CHS, or price rates when care was covered by CHS but provided externally. Costs are presented in Israeli Shekels (ILS); exchange rate 1 USD = 3.5 ILS. 
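The screening logic, taking the top 5% of attendees and splitting them on the organic-illness list, can be sketched in a few lines of Python. The patient records and the binary OI flag below are invented for illustration; the MUS corroboration itself remains a physician's judgment and cannot be automated:

```python
import math

# Hypothetical EMR summary: (patient_id, annual PCP visits, has organic illness).
# The boolean flag stands in for the diagnosis list (cancer, CHF, IHD, ...).
records = [
    (1, 2, False), (2, 30, True), (3, 5, False), (4, 41, False),
    (5, 3, False), (6, 1, False), (7, 28, True), (8, 4, False),
    (9, 2, False), (10, 6, False), (11, 7, False), (12, 3, False),
    (13, 2, False), (14, 1, False), (15, 4, False), (16, 2, False),
    (17, 5, False), (18, 3, False), (19, 1, False), (20, 2, False),
]

# Frequent attenders = top 5% of attendees by visit count.
n_fa = max(1, math.ceil(0.05 * len(records)))
fa = sorted(records, key=lambda r: r[1], reverse=True)[:n_fa]

# Stratify: FA with a listed organic illness (FA/OI) vs. FA whose charts
# would go to their PCP for MUS corroboration.
fa_oi = [r for r in fa if r[2]]
fa_candidate_mus = [r for r in fa if not r[2]]
```

On this toy roster the single top-5% attender has no listed OI and would be referred for physician review.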
To present a more nuanced view, utilisation/cost data were analysed by type of service: (a) specialist consultations, (b) hospitalisations (visits priced by length of stay), (c) diagnostic tests (CT, MRI, etc.), (d) emergency department care (not resulting in hospitalisation), (e) laboratory tests, (f) surgeries (procedure-related group - PRG), (g) health professions' services (physical therapy, occupational therapy, etc.), (h) medical equipment and (i) total costs (including prescribed medications). Other health conditions (e.g. diabetes, hypertension, chronic obstructive pulmonary disease and smoking) were also analysed. Analysis Continuous data was presented as mean and standard deviation (SD) or median and interquartile range (IQR), as appropriate. Dichotomous data was presented as N and percentage. Groups were compared using Chi-square tests for categorical data and t-tests for continuous data. Healthcare utilisation and cost categories were presented as mean and standard deviation (SD) or median and interquartile range (IQR). Group comparisons used the t-test or Kruskal-Wallis test when variables were not normally distributed. Linear regression, adjusted for sex and age, compared total costs between treatment groups. Because of their non-normal distribution, total cost data were log-transformed before the multivariable analysis. The coefficient and 95% confidence interval were back-transformed to their original scale. p-Values <.05 were considered statistically significant. Statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). Results The top 5% of FA included 594 patients, 255 of whom were FA/OI. PCP reviewed the remaining 339 EMR, identifying 50 additional patients with OI undetected by electronic screening and establishing FA/OI = 305 altogether. An additional 25 CHS personnel were identified and excluded from further analysis, leaving a total of FA = 569. 
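The cost comparison described above, log-transforming skewed totals, regressing on group membership, and back-transforming the coefficient to a multiplicative cost ratio, can be sketched as follows. The lognormal costs are synthetic, not the CHS data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual costs (ILS) for two groups; cost data are right-skewed,
# so the comparison is done on the log scale as in the analysis above.
non_fa = rng.lognormal(mean=7.0, sigma=1.0, size=500)   # median ~ exp(7) ILS
fa_mus = rng.lognormal(mean=8.5, sigma=1.0, size=200)   # median ~ exp(8.5) ILS

y = np.log(np.concatenate([non_fa, fa_mus]))
group = np.concatenate([np.zeros(500), np.ones(200)])   # 1 = FA/MUS indicator
X = np.column_stack([np.ones_like(group), group])       # intercept + group

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ratio = np.exp(beta[1])   # back-transformed: multiplicative cost ratio
```

Because the outcome is log-cost, exponentiating the group coefficient yields a ratio of typical costs between groups, which is how a back-transformed coefficient is read.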
These were deemed non-representative of FA, as their utilisation rates were due to the convenience of obtaining prescriptions and regular healthcare visits where they work. The remaining 264 files were confirmed as FA/MUS. In secondary care, standard utilisation figures were generally close to 0, because in a given year most patients do not utilise every type of medical service. Therefore, utilisation rates were shown per 1000 patients to represent meaningful trends. The mean (SD) was calculated for FA/OI, FA/MUS and Non-FA and then extrapolated to 1000, for scale. Median data are not shown, as they would remain close to 0 and provide little added information. Utilisation rates were reflected in provider costs (Table 3, Figure 3). The total average cost for FA/MUS (9075.2 (27345.4)) was significantly less than that of FA/OI (27692.8 (50046.9)), p<.001, but more than four times higher than Non-FA (2035.3 (11549.9)), p<.001. Median (IQR) cost differences were even more striking. A linear regression was performed to adjust for age and gender. Main findings This study was unique in looking at utilisation and cost patterns of FA from among the general population, avoiding clinical bias. Data was collected over 12 months to exclude transient symptoms and identify persistent FA. Almost half of FA had MUS. FA/MUS were younger, less costly and had fewer health conditions than FA/OI but were nearly five times more expensive than Non-FA. Regarding specific utilisation/cost categories, FA/OI had higher hospitalisation and surgical rates, whereas FA/MUS had higher laboratory test rates [18,19]. Secondary care studies found similar prevalence rates, nearing 50% [20]. A UK study using a similar cut-off (top 5%) for FA found that MUS accounted for 20% of consultations in secondary care, with higher rates in cardiology (30%), gastroenterology (50%) and neurology (50%) [21]. 
FA/MUS were younger and less costly than FA/OI but older and almost five times as costly as Non-FA. Similarly, Reid et al. found that FA/MUS were younger than FA/OI [10]. There were more women among FA/MUS and FA/OI. Studies found that women make up a higher percentage of patients with MUS and FA independently [2,19,22], reflecting broader gender differences in healthcare utilisation. FA/MUS had more chronic health conditions (e.g. diabetes, hypertension) than Non-FA. Similarly, Smits et al. reported that FA/MUS had high rates of chronic illness [23], especially diabetes. Unique utilisation/cost patterns were found for FA/OI and FA/MUS. Whereas FA/OI had severe illnesses necessitating hospitalisations and surgical interventions, FA/MUS had lower rates of hospitalisations and surgeries. Concurrently, FA/OI had higher hospital and surgical costs, as well as higher costs for specialist consultations. The only cost category in which FA/MUS had significantly higher costs was laboratory testing, perhaps reflecting patients' continuous attempts to find explanations for their symptoms via medical investigation. This is consistent with studies showing that FA/MUS did more medical testing than FA/OI and that diagnostic procedures accounted for 40% of MUS patients' total costs [6,10]. These patterns demonstrate that FA/MUS are a distinct population with their own utilisation profile and healthcare needs, likely due to complex interactions between patient, physician and healthcare system. Ring et al. found that during medical consultations for patients with MUS, physicians proposed physical interventions more often than patients did [24]. While most patients indicated psychosocial needs, those needs were generally not picked up on. In addition to physician-patient communication, psychological factors play an important role in maintaining MUS [3]. Table 3. Healthcare costs (ILS). Emotional distress and catastrophic 
thinking can exacerbate MUS, and psychiatric illness can increase FA/MUS' healthcare utilisation [4], as anxious patients are more likely to seek consultation [12]. This corresponds to the definition of Somatic Symptom Disorder in the DSM-5 as 'excessive thoughts, feelings, or behaviours related to somatic symptoms' [25]. Ultimately, the consultation behaviour of FA/MUS is a complex interaction between patients' needs and the healthcare system's ability to address those needs [10]. In demonstrating that FA/MUS are a distinct group with their own utilisation profile, this study strengthens the understanding that PCP could better help patients with MUS by identifying which patients might benefit from focussed support tailored to their needs, so that the most severely affected patients receive the most additional support. Clinical guidelines for MUS recommend a stepped-care, multidisciplinary approach, wherein targeted medical investigations inform appropriate referrals to psychotherapy and additional medical professionals (e.g. physiotherapist, social worker) as necessary [26]. Early management and supportive medical treatment together with psychotherapeutic interventions (e.g. CBT, mindfulness) have been shown to improve depression, increase satisfaction and decrease addictive analgesic use for patients with MUS [27,28]. Strengths and limitations A strength of this study was that PCP corroborated FA/MUS. Physician evaluation has been posited as the most accurate way to identify MUS, particularly in comparison with methods like self-report surveys [29]. Ongoing physician-patient relationships have been shown to facilitate recognising MUS [30]. Physicians in this study had longstanding clinical relationships with their patients and decisions were based on thorough assessment. 
Although this study looked at a large, regional centre where both the PCP and patients were representative of the local population, it was limited in that it was a single-centre study. Future studies are needed to examine whether results are generalisable across communities and cultures. Additionally, future analyses should look at how socioeconomic factors, other health conditions and behaviours influenced healthcare utilisation for patients with MUS, as these factors were not investigated. Due to the limitations of CHS' administrative claims database, the cost of PCP visits could not be calculated. As this was the main inclusion criterion, FA/OI and FA/MUS had similar visit rates, almost six-fold higher than Non-FA at the same clinic. Figure 3. Annual total costs of study population. OI: organic illness; FA: frequent attenders; MUS: medically unexplained symptoms; Non-FA: non-frequent attenders (i.e. the general clinic population). This boxplot shows both the mean and median total costs per person for each group in the study population. Both groups of FA have significantly higher mean and median costs than Non-FA. For FA/OI, there is a greater difference between mean and median costs than for FA/MUS. This is likely due to outliers with severe organic illness who have much higher annual healthcare costs. This would raise mean costs without impacting the median; a situation which is less common for patients with medically unexplained symptoms. Thus, estimates regarding FA additional healthcare costs are conservative and the true difference is likely higher. Finally, a linear regression was performed to look at the change in cost, adjusted for important risk factors such as age and gender. Costs were found to be significantly related to group affiliation beyond the effects of age or gender. However, the R-squared of the model was low, meaning that not all variables influencing costs are accounted for. It is likely that patient-centred variables (e.g. 
specific diagnoses, emotional distress), as well as variables related to the physician-patient relationship, may need to be considered to create a more robust model. As we do not yet know all of the variables predicting costs, further prospective studies will be needed. Implications for clinical practice This study provides a retrospective characterisation that could be a first step in creating an EMR-based identification protocol for FA/MUS, helping PCP determine which patients might benefit from multidisciplinary care. Future studies could combine our exclusion criteria with the defining traits of FA/MUS to test potential identification algorithms. Initial criteria, such as a high rate of PCP visits (>20 per year) and a negative diagnosis of OI, could be used as a first stage. FA without OI could then be stratified based on age and utilisation profile to identify FA/MUS (FA/MUS were younger and had lower rates of hospitalisations, surgeries and specialist consultations, with higher rates of laboratory tests). To create an algorithm, specific cut-off points would have to be determined and their specificity and sensitivity confirmed. Conclusion Almost half of FA in primary care had no organic diagnosis. These patients are a significant and costly subpopulation whose needs are not being met. FA/MUS have a unique utilisation and cost profile, making it possible to identify them using EMR and helping physicians create new therapeutic alternatives to meet their needs and contain costs.
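As a sketch of the first-stage criteria proposed above (more than 20 PCP visits per year and no diagnosis on the organic-illness exclusion list), a rule-based filter might look like this. The field names and diagnosis labels are hypothetical, and real cut-offs would need the sensitivity/specificity validation the text calls for:

```python
# Exclusion list of organic illnesses from the study design (labels hypothetical).
OI_CODES = {"cancer", "renal_failure", "chf", "ihd", "cirrhosis",
            "cva", "dementia", "psychosis", "bipolar_disorder"}

def passes_first_stage(pcp_visits_per_year: int, diagnoses: set) -> bool:
    """First-stage EMR screen: frequent attender without a listed organic illness."""
    frequent = pcp_visits_per_year > 20
    no_oi = not (diagnoses & OI_CODES)
    return frequent and no_oi

flagged = passes_first_stage(25, {"migraine", "fatigue"})       # candidate FA/MUS
not_flagged = passes_first_stage(25, {"chf", "hypertension"})   # would be FA/OI
```

A second stage, per the text, would stratify the flagged patients by age and utilisation profile before any physician review.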
3,407.6
2021-01-01T00:00:00.000
[ "Medicine", "Economics" ]
Dental Resin Cements—The Influence of Water Sorption on Contraction Stress Changes and Hydroscopic Expansion Resin matrix dental materials undergo contraction and expansion changes due to polymerization and water absorption. Both phenomena deform resin-dentin bonding and influence the stress state in the restored tooth structure in two opposite directions. The study tested three composite resin cements (Cement-It, NX3, Variolink Esthetic DC), three adhesive resin cements (Estecem, Multilink Automix, Panavia 2.0), and seven self-adhesive resin cements (Breeze, Calibra Universal, MaxCem Elite Chroma, Panavia SA Cement Plus, RelyX U200, SmartCem 2, and SpeedCEM Plus). The stress generated at the restoration-tooth interface during water immersion was evaluated. The shrinkage stress was measured immediately after curing and after 0.5 h, 24 h, 72 h, 96 h, 168 h, 240 h, 336 h, 504 h, 672 h, and 1344 h by means of a photoelastic study. Water sorption and solubility were also studied. During polymerization, all tested materials generated shrinkage stress ranging from 4.8 up to 15.1 MPa. A decrease in shrinkage strain (not less than 57%) was observed after water storage (56 days). Self-adhesive cements, i.e., MaxCem Elite Chroma, SpeedCem Plus, Panavia SA Plus, and Breeze, exhibited high values of water expansion stress (from 0 up to almost 7 MPa). Among the other tested materials, only the composite resin cement Cement-It and the adhesive resin cement Panavia 2.0 showed water expansion stress (1.6 and 4.8 MPa, respectively). The changes in stress value (decrease in contraction stress or build-up of hygroscopic expansion) over time were material-dependent. Introduction Resin composite cements have been widely used with ceramic, resin, or metal alloy-based prosthodontic restorations [1]. The cementation technique used in adhesive dentistry is one of the major factors which exerts influence on the clinical success of indirect restorative procedures. 
Cement is used to bond tooth and restoration, simultaneously creating a barrier against microbial leakage [2]. A universal cement that can be applied in all indirect restorative procedures has not been introduced into the market yet. Therefore, clinicians should understand the influence of applied material properties and preparation design on the clinical performance of the restoration [3]. The composition of resin composite cements is almost the same as that of resin composites [4]. Resin cements mainly consist of various methacrylate resins and inorganic fillers, which are often coated with organic silanes to provide adhesion between the filler and the matrix. These materials often include bonding agents to promote adhesion between resin cement and tooth structure. Typical monomers include hydroxyethyl methacrylate (HEMA), 4-methacryloyloxyethyl trimellitate anhydride (4-MET), carboxylic acid, and the organophosphate 10-methacryloxydecyl dihydrogen phosphate (10-MDP) (Figure 1). The acidic group bonds calcium ions in the tooth structure [5,6]. Resin composite cements are used in combination with adhesive systems. This procedure aims at creating micro-mechanical retention to both enamel and dentin. The material may also form a strong adhesion to an adequately-treated surface of composite, ceramic, and metallic restorations [7]. Taking into account the surface preparation before the cementation process, resin cements can be divided into: (1) composite resin cements (used with total-etch adhesive systems); (2) adhesive resin cements (used with separate self-etching adhesive systems); and (3) self-adhesive resin cements (containing a self-adhesive system) [1]. The application of resin matrix-based cements is time-consuming and susceptible to manipulation errors [8]. Self-adhesive resin cements are proposed to simplify the restoration procedure. 
These materials bond dentin in one step without any surface conditioning or pre-treatment (priming) [9,10]. All currently available resin-based materials exhibit polymerization shrinkage. Moreover, resin cements are generally applied as a thin layer, particularly when used to lute posts, inlays, and crowns. In the aforementioned clinical cases, the cavity design has a high C-factor (a high number of bonded surfaces and a low number of un-bonded surfaces) [11]. Additionally, low-viscosity composites exhibit a relatively high shrinkage amounting up to 6% (comparable to resin cements) [12,13]. These factors may generate sufficient stress to result in debonding of the luting material, thereby increasing microleakage [14]. Nevertheless, there is little data on the stress generated by these materials. The sorption characteristics of resin-based dental cements have been extensively evaluated [15][16][17]. However, the analysis of the influence of water sorption on the change in contraction stress is inadequate. The purpose of this study was to evaluate the development of the stress state, i.e., the contraction stress generated during photopolymerization and the hygroscopic expansion, within different types of resin cements undergoing water ageing, by means of photoelastic analysis. Materials and Methods The composition of the investigated resin cements and bonding systems is presented in Tables 1 and 2. 
Absorbency Dynamic Study Absorbency dynamics were determined by means of the procedure described by Bociong et al. [18]. The samples were prepared according to ISO 4049 [19]. Curing time was consistent with the manufacturer's instructions (Table 2). In order to characterize absorbency dynamics, cylindrical samples 15 mm in diameter and 1 mm in width were prepared. The tested materials were applied in one layer and cured with an LED light lamp (Mini L.E.D., Acteon, Mérignac Cedex, France) in nine partially overlapping zones with direct contact of the optical fiber with the material surface. Exposure time was applied according to the manufacturer's instructions (Table 1). Five samples were prepared for each tested cement. The samples' weight was determined (RADWAG AS 160/C/2, Poland) immediately after preparation and then for 30 consecutive days, and after 1344 h (56 days) and 2016 h (84 days). The absorbency was calculated according to Equation (1) [20]:

A = (m_i − m_0)/m_0 × 100%, (1)

where A is the absorbency of water, m_0 is the mass of the sample in dry condition, and m_i is the mass of the sample after storage in water for a specified (i) period of time. Water Sorption and Solubility Water sorption and solubility were investigated according to ISO 4049 [19]. The detailed test procedure has been described extensively in the previously published literature [18,21]. Curing time was consistent with the manufacturer's instructions (Table 1). Water sorption and solubility were calculated according to the following equations:

W_sp = (m_2 − m_3)/V, (2)
W_sl = (m_1 − m_3)/V, (3)

where W_sp is the water sorption, W_sl is the water solubility, m_1 is the initial constant mass (µg), m_2 is the mass after seven days of water immersion (µg), m_3 is the final constant mass (µg), and V is the specimen volume (mm³). Photoelastic Study Photoelastic analysis allows for quantitative measurement and visualization of the stress concentration that develops during photopolymerization or water sorption of resin composites [22,23]. 
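The ISO 4049 sorption and solubility quantities defined above (masses m_1, m_2, m_3 and specimen volume V) reduce to two mass differences per unit volume. A minimal sketch with invented masses for a 15 mm × 1 mm disc:

```python
import math

def water_sorption_solubility(m1_ug, m2_ug, m3_ug, volume_mm3):
    """ISO 4049-style water sorption and solubility in ug/mm^3.
    m1: initial constant mass, m2: mass after 7 days in water,
    m3: final reconditioned constant mass (all in ug)."""
    w_sp = (m2_ug - m3_ug) / volume_mm3   # water taken up by the specimen
    w_sl = (m1_ug - m3_ug) / volume_mm3   # mass lost to dissolution
    return w_sp, w_sl

# Disc 15 mm in diameter and 1 mm thick, as in the sample preparation above.
volume = math.pi * (15.0 / 2) ** 2 * 1.0   # ~176.7 mm^3
# Invented masses (ug): the specimen gains water and loses a little material.
w_sp, w_sl = water_sorption_solubility(300_000, 307_000, 299_000, volume)
```

With these illustrative numbers the sorption comes out near 45 µg/mm³ and the solubility near 5.7 µg/mm³, i.e. on the order of the ISO 4049 acceptance limits discussed later in the paper.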
The modified method enables analysis of the relationship between water sorption and the change of stress state (contraction or expansion) of resin materials. This test was described extensively in our previous articles [18,24]. Photoelastically-sensitive plates of epoxy resin (Epidian 53, Organika-Sarzyna SA, Nowa Sarzyna, Poland) were used in this study. Calibrated orifices 3 mm in diameter and 4 mm in depth were prepared in the resin plates in order to mimic an average tooth cavity. The generated strains in the plates were visualized in a circular transmission polariscope FL200 (Gunt, Barsbüttel, Germany) and photoelastic strain calculations were based on the Timoshenko equation [25]. Absorbency Dynamic Study Water absorbency and contraction stress mean values are presented in Figures 2-14. Water immersion resulted in an increase in weight of all tested materials. The water sorption (wt%) increased for Breeze up to three times. The lowest value of absorbency after 2016 h (84 days) was observed for Variolink Esthetic DC. Water Sorption and Solubility Mean values of water sorption and solubility are presented in Table 3. Maxcem Elite Chroma and Breeze exhibited the highest, while Estecem showed the lowest values of water sorption. Photoelastic Study All materials exhibited shrinkage and the associated contraction stress during the hardening process. A significant reduction in contraction stress was observed due to hygroscopic expansion of the cements (Figures 2-16). 
Water ageing of six cements resulted in additional stress characterized by the opposite direction of forces. The investigated materials exhibited various contraction stress values. Panavia SA Plus exhibited the lowest contraction stress of the tested materials. Its contraction stress decreased from 4.8 MPa to 0.0 MPa after 504 h (21 days) of water conditioning (Figure 11). Further water ageing resulted in additional stress: after 2016 h (84 days) the stress level increased up to −1.6 MPa (Figure 11). The highest contraction stress was observed for SmartCem 2, amounting up to 15.1 MPa. The contraction stress of SmartCem 2 after 2016 h (84 days) of water storage was reduced to 1.6 MPa (Figure 13). Discussion A high configuration factor (C-factor) and the low viscosity of resin cements may generate relatively high contraction stress. This stress may cause debonding of the luting material, thereby increasing microleakage [14]. Our previous studies (using an epoxy resin plate) [18,24] demonstrated that the photoelastic method can be used to evaluate the effect of water sorption on stress reduction at the tooth-restoration interface. This method shows that the contraction stress of dental resins may be partially relieved by water uptake [18]. 
However, an in-depth analysis of the shrinkage stress values in various resin cement materials after water ageing is also highly needed. The overall results showed the development of the initial stress in the compressive direction during photopolymerization. The composition of the resin cements affected the sorption and solubility processes which, in turn, exerted influence on the hygroscopic expansion and plasticization. Thus, the compensatory effect was composition-dependent [26]. This study confirmed the lower water sorption of composite resin cements as compared with self-adhesive resin cements after water ageing. The presence of hydroxyl, carboxyl, and phosphate groups in monomers makes them more hydrophilic and, supposedly, more prone to water sorption [27]. Variolink Esthetic DC and NX3 do not contain adhesive monomers. In the present study, they exhibited sorption similar to that of composite materials [18]. Variolink Esthetic DC showed a high solubility value and the lowest decrease in contraction stress (of about 70%). The water immersion of resin materials might result in dissolving and leaching of some components (unreacted monomers or fillers) out of the material [28]. Variolink Esthetic DC contains a modified polymer matrix and a nanofiller. Its low contraction stress decrease might result from a small hydrolysis and plasticization effect of the modified resin matrix. Cement-It is a composite resin cement which, in comparison with NX3 and Variolink Esthetic DC, showed higher values of water sorption, and its total value of stress changes was 12.5 MPa. This could be explained by the composition of the polymer matrix containing bis-GMA and HDDMA. These monomers show comparable characteristics: high water sorption values [29,30], similar polymer networks, and susceptibility to hydrolysis [31]. 
The four tested materials did not meet the requirements of ISO 4049, as they showed sorption values above 40 µg/mm³ (Breeze and MaxCem Elite Chroma) or solubility values above 7.5 µg/mm³ (Variolink Esthetic DC and Panavia 2.0). Differences in water absorption of the polymer network depending on monomer type have been reported. The highest water sorption was observed for MaxCem Elite Chroma (Figure 15). This material consists of HEMA and GDM, which have some of the highest hydrophilicities among dental resins. HEMA was shown to induce water sorption, leading to the expansion of the polymer matrix [32]. Resin-modified glass-ionomer cements (RMGICs) absorbed more water due to hydroxyethyl methacrylate content, present in the hydrogel form in the polymerized matrices [33]. HEMA might be present either as a separate component or a component grafted into the structure of the polyacrylic acid backbone. The polymerized matrices of these materials were very hydrophilic and might include an interpenetrating network of poly(HEMA), copolymers of grafted HEMA, and polyacid salts that were more prone to water uptake [34]. Park et al. [35] showed that GDM exhibited the highest water sorption in comparison to bis-GMA, HEMA, EGDM, DEGDM, TEGDMA, and GTM. In the present study, the decrease in contraction stress after 2016 h (84 days) of water immersion varied significantly between tested materials. Figures 2-16 showed that the expansion dynamics also differed substantially. All studied materials exhibited contraction stress relief during water immersion. The value of stress decreased to 0 MPa at different times depending on the material and its composition. To sum up, the phenomenon of hygroscopic expansion after compensation of contraction stress should be emphasized more and evaluated. Such over-compensation could lead to internal expansion stress [36]. 
Hygroscopic stress could result in micro-cracks or even cusp fractures in the restored tooth [37], poorer mechanical properties [38,39], hydrolytic degradation of bonds particularly at the resin-filler interface [40], polymer plasticization leading to reduction in hardness and glass transition temperature [40], and impaired wear resistance [41]. Excessive water sorption is not desired as it causes an outward movement of residual monomers and ions due to material solubility. Furthermore, water sorption might generate peeling stress in bonded layers of polymers that may cause serious clinical consequences [42], which may occur especially when prosthetic restorations are adhesively cemented [42]. The present study demonstrated that self-adhesive cements, i.e., Maxcem Elite Chroma, Speed Cem Plus, Cement It, Panavia SA Plus, Breeze, and Panavia 2.0 exhibited high stress values due to water expansion (from 0 up to almost 7 MPa). The water expansion stress of Maxcem Elite Chroma and Breeze amounted to ~6-7 MPa, which could be associated with their composition, particularly with the acidic monomer 4-MET in Breeze and with HEMA, GDM, and tetramethylbutyl hydroperoxide in Maxcem Elite Chroma. The monomers mentioned above were responsible for water uptake and the stress build-up associated with hygroscopic expansion [35]. According to the literature, such high stress values are not desirable. Huang et al. [33] found that a giomer material exhibited extensive hygroscopic expansion (due to an osmotic effect), causing the enclosing glass cylinders to crack after two weeks of immersion in water. Cusp fracture in endodontically-treated teeth was attributed to hygroscopic expansion of a temporary filling material [33], while cracks in all-ceramic crowns were associated with hygroscopic expansion of compomer and resin-modified glass-ionomer materials used as core build-up and/or luting cements [33]. 
A three-year clinical performance study also suggested hygroscopic expansion as a possible cause of cusp fracture in 19% of teeth restored with an ion-releasing composite [43]. Thus, the positive influence of water sorption on contraction stress relief in the case of luting cements, particularly self-adhesive materials, should be considered carefully. Shrinkage occurs within seconds, but water sorption takes days to weeks. The rate of hygroscopic stress relief depends on the resin volume and its accessibility to water [11]. The contraction stress relief rates observed in luting resin cements could be much lower than in composite resin restorative materials. Composite restorations usually have a relatively large surface exposed to water in comparison to the overall surface of the luting material. As far as luting cements are concerned, the surface exposed to oral fluids is extremely small, while the diffusion pathway is extremely long. The consequences (slower compensation of contraction stress) might be less severe if water sorption is also possible from the dentin (dentin exposed to the oral environment) [26]. The precise effect of water absorption depends on many factors, including not only the material characteristics and the rate and amount of water absorbed, but also the mechanism of absorption [44]. Absorption leads to dimensional changes and has potentially important clinical implications. The positive effect of water absorption on composite restorative materials can be described as a mechanism for the compensation of polymerization shrinkage and the relaxation of stress. In clinical conditions, water absorption may help in the closure of contraction gaps around composite filling materials. It is worth emphasizing that absorption can, in some cases, result in significant hygroscopic expansion and, thus, be damaging to the resin material and bonded tooth structure. 
Conclusions Among all studied resin cements, self-adhesive cements exhibited the highest water sorption due to acid monomer content, which affected the formation of hygroscopic expansion stress. The presence of this type of stress might pose a threat to prosthetic restorations. Therefore, there is still a need for research that would precisely illustrate the generated stress in clinical conditions. The tested resin cements generated differentiated contraction stress during photopolymerization. The dynamics of hygroscopic compensation (resulting from water sorption) or over-compensation of the contraction stress are also dependent on material characteristics.
4,403
2018-06-01T00:00:00.000
[ "Materials Science", "Medicine" ]
Secure and Efficient Transmission in Mobile Ad hoc Network to Identify the Fake ID’s Using Fake ID Detection Protocol : A Mobile Ad hoc Network (MANET) is an infrastructure-less, self-organizing network in which every mobile node works as a source, a destination, and a wireless relay. Such a wireless network is formed by dynamic mobile nodes without any centralized, fixed, or pre-existing network infrastructure. In a MANET, there is no static network infrastructure, and mobile nodes do not have access to a centralized host from which to obtain IDs or IP addresses. Moreover, owing to node mobility, network partitioning and merging occur frequently. These occurrences create the possibility of fake addresses inside the network. Thus, a centralized scheme cannot be employed in such networks; a distributed and dynamic mechanism is required for mobile nodes to adopt and preserve a unique ID or IP address. In this study, we propose two protocols, namely the Fake ID Detection (FIDD) protocol and the Path Selection Routing (PSR) protocol. The FIDD protocol is used to detect fake IDs, i.e., duplicate packets, and the PSR protocol is used to select an alternate path when the current path becomes busy or fails; it promptly chooses an alternate path to continue the transmission without any interruption. Hence, the efficiency and lifetime of the network can be increased. Finally, the overall performance of the network can be increased. Introduction A Mobile Ad hoc Network (MANET) is a type of wireless ad hoc network in which all mobile nodes can directly communicate with each other without relying on a centralized or fixed network infrastructure. Here, forwarding and routing of packets are accomplished by intermediate or neighbor nodes. 
Under these routing protocols, a routing path is selected when a sender wants to transmit data packets to the destination (Camp et al., 2002; Basagni et al., 2004; Chlamtac et al., 2003). To transmit and receive packets, two nodes must each possess a unique IP address in that network. Because IP is also employed in mobile ad hoc networks, a unique IP address must be assigned to every node. Thus, IP address auto-configuration approaches have been formulated to eliminate the manual configuration overhead. Generally, IP assignment schemes can be classified as either reactive or proactive. Reactive protocols require a consensus among all nodes of the network on the new IP address to be allotted, whereas in a proactive scheme each node can independently allot a new IP address without requiring permission from any other mobile node in the network. Mobility is the major cause of partitioning of the ad hoc network. When a mobile node possessing a unique IP address in one partition moves into another partition, the probability of a fake node ID or IP address increases. Because an IP address must be unique, address conflicts need to be detected through a Fake ID Detection (FIDD) protocol. The Fake ID Detection (FIDD) protocol is a methodology introduced for supervising the delivery of IDs or IP addresses by the individual nodes themselves. In this study, we present a reactive ID or IP address assignment approach with FIDD for address conflict detection in mobile ad hoc networks. The network configuration parameter that needs to be unique for every node in the mobile network is its ID or IP address. Our distributed protocol with the FIDD mechanism ensures that no two nodes in the mobile network adopt the same ID or IP address. 
We describe improvements that address the problems that may develop owing to node failures, packet losses, node mobility, multiple simultaneous node configurations, and network partitioning and merging. The FIDD protocol has three phases, namely detection of fake IDs, node initialization, and node registration. In the first phase, fake IDs are detected; in the second phase, nodes are initialized; and in the third phase, nodes are registered in the network. Our aim is to make the network more efficient. If a path is busy or fails, the packets that need to be transmitted cannot be delivered. For that reason, we propose another protocol called the Path Selection Routing (PSR) protocol, which is used to choose an alternate path when the current transmission path is busy (packet transmission is taking place) or has failed. If any path is busy or has a high packet dropping ratio due to other factors such as channel noise, PSR quickly chooses another path as an alternate in order to continue the transmission without any interruption. Thus, the efficiency and lifetime of the network can be increased. Finally, the overall performance of the network can be increased. The rest of the chapter is organized as follows. Section 2 presents the related work. In Section 3, the proposed protocols are discussed in a detailed manner. Section 4 covers the algorithms for the FIDD protocol and the PSR protocol with their descriptions. Section 5 presents the comparison table for existing and proposed protocols with its description. Section 6 provides the experimental results, and Section 7 summarizes the chapter with future work. Related Work In a self-organizing mobile ad hoc network, mobile nodes can be unambiguously identified by an IP address, with the sole requirement that this address differ from that of every other node in the ad hoc network. 
The configuration process is the set of phases by which a single node receives its IP address within the network. Three mechanisms exist to set addresses (Villalba et al., 2011): stateful, stateless, and hybrid. Rather than assigning addresses through a central entity, stateless auto-configuration allows mobile nodes to build addresses by themselves, normally based on a hardware ID or some random number. Mobility is the cause of partitioning of the mobile network. When a mobile node possessing a unique IP address in one zone moves into another zone, a probability of IP address duplication arises. Since the IP address must be unique, address conflicts need to be detected by a Duplicate Address Detection (DAD) procedure. Duplicate Address Detection (DAD) is one of the approaches proposed for supervising the delivery of IP addresses by the individual mobile nodes themselves. Zahoor ul Huq et al. (2010) and Ganeshkumar and Thyagarajah (2010) discussed the importance of detecting IP address conflicts and presented various approaches for the detection process. Weak DAD, proposed in (Vaidya, 2002; Velayutham and Manimegalai, 2014), is a methodology to keep a packet from being routed to an incorrect destination, even if duplicate addresses exist. Mobile nodes in the ad hoc network are distinguished not only by their IP addresses, but also by a key that can be based on a hardware ID or some random number. If a mobile node receives a packet containing an IP address that is kept in its routing table but with a different key, an address conflict is detected. This only works with proactive protocols, which update the routes continuously; with a reactive protocol, there may be mobile nodes that cannot discover the duplicated IP addresses. 
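The Weak DAD rule described above reduces to comparing (IP address, key) pairs against the routing table. A minimal sketch under stated assumptions: the routing table is modeled as a plain dict from IP to key, and the function name is ours, not from the cited papers.

```python
def wdad_check(routing_table, ip, key):
    """Weak DAD conflict check: flag a conflict when an IP address is
    already in the routing table but paired with a different key
    (a hardware ID or random number); otherwise record the pair."""
    if ip in routing_table and routing_table[ip] != key:
        return True   # duplicate address detected
    routing_table[ip] = key
    return False

table = {}
print(wdad_check(table, "10.0.0.5", "key-A"))  # first sighting: no conflict
print(wdad_check(table, "10.0.0.5", "key-B"))  # same IP, new key: conflict
```

Note that, as the text observes, this check only works if routing information (and hence the keys) is kept current, which is why it pairs naturally with proactive routing.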
Weak DAD does not add extra overhead to the routing protocol itself, but it does contribute overhead by always transmitting the key along with each IP address. In the Strong Duplicate Address Detection (SDAD) approach (Perkins et al., 2001; Gopalakrishnan and Ganeshkumar, 2014), the mobile node selects two IP addresses, namely a temporary and a tentative address. It uses the temporary address only for the initialization process, while it checks whether the tentative one is unique. The detection approach consists of transmitting an Internet Control Message Protocol (ICMP) message directly to that address. If it obtains a reply, that IP address is already in use, so the operation is restarted. If the process does not deliver a reply, the message is transmitted a certain number of times to ensure that the address is unique. This does not work under temporary disconnections or partitioning of the mobile network. Furthermore, when the network is large and only a few free IP addresses remain, the overhead increases until a unique IP address is found. In address auto-configuration with Address Reservation and Optimistic Duplicated address detection (AROD) (Kim et al., 2007), address reservation is based on the existence of nodes that have an IP address reserved to deliver to newly joining nodes. Two types of nodes exist. Type 1: agents of type 1 hold a reserved IP address, apart from the IP address of their own network interface. When a node joins the network, this reserved IP is assigned to it immediately. Type 2: agents of type 2 do not have reserved IP addresses. If a newly joining node asks one of these for an IP address, this node borrows the reserved address of one of its type-1 neighbors and assigns it to the new node immediately. 
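The SDAD probing loop above can be sketched in a few lines. This is a simplified model, not the real protocol: `send_probe` is a hypothetical stand-in for the ICMP exchange and its reply timeout, and the retry count is illustrative.

```python
def sdad_probe(tentative_ip, send_probe, retries=3):
    """SDAD sketch: probe a tentative address a fixed number of times.
    If any probe is answered, the address is already in use; if all
    probes go unanswered, the address is treated as unique."""
    for _ in range(retries):
        if send_probe(tentative_ip):
            return False  # reply received: address already in use
    return True           # silent after all retries: assume unique

# Usage with a fake probe that never sees a reply:
print(sdad_probe("10.0.0.9", lambda ip: False))  # True (assumed unique)
```

The weakness the text points out is visible here: a node that is temporarily disconnected also never replies, so its address would wrongly be judged unique.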
If the number of type-1 nodes holding reserved IP addresses increases, the latency of IP address assignment decreases, but the overhead increases because more DAD processes must be performed, and vice versa. Hybrid Centralized Query-based Auto configuration (HCQA) (Sun and Belding-Royer, 2003) is a dynamic address configuration protocol for mobile ad hoc networks that provides address assignment to mobile nodes during the formation and maintenance of a network. The protocol assigns unique addresses and can be combined with a variety of routing schemes. Further, the address authority aids in the detection of duplicate addresses and handles address resolution after network partitions and merges. It has two main problems: firstly, the overhead produced by the SDAD process and the periodic messages of the Address Authority; secondly, the network depends on a central entity with which all nodes must communicate directly in order to register their IP addresses, so that much latency is added when nodes join the network. Mohsin and Prakash (2002) propose a stateful protocol which uses multiple disjoint allocation tables. In this approach, every node has a disjoint set of IP addresses that can be assigned to new nodes; since each node owns its pool of IP addresses, no quorum is required to make a decision. This approach uses a proactive scheme for dynamic allocation of IP addresses in MANETs, and it employs the approach described in MANET to solve network partitioning. The major drawback of this protocol is that its synchronization depends on the existence of a reliable broadcast, which does not exist in a distributed mobile environment; one can therefore question the robustness of this protocol. 
An improvement of (Mohsin and Prakash, 2002) can be found in (Thoppian and Prakash, 2006), where Thoppian and Prakash propose a dynamic address assignment based on a so-called buddy system that manages mobility of nodes during address assignment, message loss, and network partitioning and merging. However, the IP address allocation can generate a high overhead of control messages because it performs a global search, and address recovery (to avoid missing addresses) requires diffusion messages through a flooding process. In addition, merging and partitioning may incur high overhead because of the global nature of this protocol. The Extensible MANET Auto-configuration Protocol (EMAP) (Ros et al., 2010) is based on the idea of a protocol of REQUEST/REPLY messages. The main advantage of this protocol is the possibility of making it extensible, i.e., it can include new functionalities in the future, analyzed so far in a theoretical way, such as a Domain Name Server (DNS). The route discovery mechanism among nodes is similar to the Ad hoc On-Demand Distance Vector (AODV) (Perkins and Royer, 1999) protocol. Proposed Protocol In this study, we present two protocols: the Fake ID Detection (FIDD) protocol, which is used to detect fake IDs, i.e., duplicate packets, and the Path Selection Routing (PSR) protocol, which is used to select an alternate path when the current path becomes busy or fails. In this section, the proposed work is discussed in detail. Fake ID Detection (FIDD) Protocol The Fake ID Detection protocol has the following three phases, through which fake or duplicate packets are detected at the sender side. Step 1: Detection of Fake ID A main concern of address auto-configuration is Fake ID Detection (FIDD). Fake addresses may originate during the node initialization process because nodes may be multiple hops apart and cannot hear each other directly. 
Besides, during network merging, two nodes may assume the same address. Consider the merging networks containing nodes denoted A and B. Nodes A and B receive the same ID, A, independently from different networks. A source or sender node in the network starts a session and transmission with node A through some routing path. This path has a length of two. When the two networks move toward one another and subsequently merge, node B moves into the direct transmission range of the source node. Therefore, if proactive routing protocols are employed, after obtaining the periodic routing update, the sender node will update its routing entry for ID or IP address A to the direct path to node B, with path length one. If reactive routing protocols are used, new path discovery processes will redirect the path to address A to the new path. Hence, the erroneous routing prevents the sender from communicating with the correct destination, which makes FIDD essential. Both the WDAD (Vaidya, 2002) and SDAD (Perkins et al., 2001) protocols have their own limitations. In WDAD, if a new node joins the network with no knowledge of the routing data and identifiers, it cannot differentiate two such nodes merely by their ID or IP addresses, which leads to erroneous routing. In SDAD, if a mobile node is disconnected for a very long time, it cannot deliver the packet, a situation known as partitioning. Step 2: Node Initialization A mobile node entering a network broadcasts an Address Request Message (ARM) carrying its randomly selected candidate ID or IP address, in addition to its hardware address for the reply message. The node then starts its reply timer; if the timer runs out without receiving any ARM reply, it repeats the step a threshold number of times and then concludes that it can use the address for communication. 
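The initialization step above is essentially a retry loop over random candidate addresses. A minimal sketch under stated assumptions: `broadcast_arm` is a hypothetical stand-in for the ARM broadcast plus reply timer (returning True if any node answered), the address range is illustrative, and the function name is ours.

```python
import random

def choose_candidate(broadcast_arm, threshold=3):
    """FIDD node-initialization sketch: draw a random candidate address
    and broadcast an ARM for it; if any node replies, the candidate is
    in use and a new one is drawn. After `threshold` silent rounds the
    candidate is kept (pending registration with the OAIH)."""
    while True:
        candidate = "10.0.0.%d" % random.randint(2, 254)
        # Repeat the ARM broadcast `threshold` times; keep the address
        # only if every round stays silent.
        if all(not broadcast_arm(candidate) for _ in range(threshold)):
            return candidate

# Usage: a toy network where 10.0.0.7 is already taken.
used = {"10.0.0.7"}
addr = choose_candidate(lambda c: c in used)
print(addr not in used)  # True
```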
However, before the registration process, the node cannot use the address for communication; it therefore waits for an ask_reg unicast message from the Original Address Information Holder (OAIH) sent directly to its adopted ID or IP address. If the ARM of a node already using the candidate ID or IP address, as well as the OAIH ARM message, fails to reach the new node, the fake address will still be discovered, and the new node will repeat the above process. If the node does not receive any ask_reg message within ask_t, where ask_t > n*ask_reg_t, it will conclude that it is the only node in the network; it will then establish the mobile ad hoc network and announce itself as the OAIH. Step 3: Node Registration After completing all three levels of fake ID detection, the new node unicasts a Registration Request (RR) to the OAIH with its assumed ID or IP address, its MAC address, and the requested lifetime for the ID or IP address. The OAIH adds the information of this node to its address list and unicasts an RR reply to the node indicating whether the registration process was successful. On success, the RR reply indicates the granted lifetime for the IP address. The former OAIH then becomes the Backup Address Information Holder (BAIH) and transmits the address list with its state information to the newly registered node, which is now the new OAIH. After becoming the OAIH, the node has to broadcast the unique network identifier so that every node learns about the new OAIH and the network ID it belongs to. From this broadcast, the former BAIH learns that it is no longer the BAIH. Path Selection Routing (PSR) Protocol We then propose the Path Selection Routing (PSR) protocol, a novel protocol that selects an alternate path when the current path is busy or fails due to a high number of dropped packets; PSR selects paths promptly and efficiently for packet transmission without delay. 
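The registration phase can be sketched as an address list held by the OAIH plus a role handoff. This is a simplified model, not the protocol's wire format: the class and method names are ours, and message exchange is reduced to method calls.

```python
class AddressInfoHolder:
    """Sketch of the FIDD registration phase: the OAIH records each
    registered node, then roles rotate -- the old OAIH becomes the
    Backup (BAIH) and the newly registered node takes over as OAIH."""
    def __init__(self):
        self.address_list = {}

    def register(self, ip, mac, lifetime):
        """Handle a Registration Request (RR); the reply is modeled as
        the return value: the granted lifetime on success, None if the
        address is already registered (conflict)."""
        if ip in self.address_list:
            return None
        self.address_list[ip] = (mac, lifetime)
        return lifetime

    def hand_over(self):
        """The old OAIH becomes BAIH and ships its address list and
        state to the new OAIH (the newly registered node)."""
        new_oaih = AddressInfoHolder()
        new_oaih.address_list = dict(self.address_list)
        return new_oaih

oaih = AddressInfoHolder()
print(oaih.register("10.0.0.2", "aa:bb:cc:dd:ee:ff", 3600))  # 3600
print(oaih.register("10.0.0.2", "11:22:33:44:55:66", 3600))  # None
```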
Here, packets are transmitted to the intended destination without time delay, which increases overall network performance and provides efficient, scalable, and robust transmission. The lifetime of the network is increased by reducing traffic, which is required for reliable and scalable communication without any interruption along the routing paths; in this way, the packet delivery ratio (PDR) is increased. In the PSR protocol, there is no need to keep routing information in a routing table to update the link status, since maintaining and storing such a table requires considerable bandwidth and degrades network performance. Description In this study, we propose two protocols, namely the Fake ID Detection (FIDD) protocol, which is used to detect fake IDs, i.e., duplicate packets, and the Path Selection Routing (PSR) protocol, which is used to select an alternate path when the current path becomes busy or fails. Generally, the network is a collection of mobile nodes in which each node has its own packet ID that differs from all others, i.e., the packet ID (packet address) is unique for every packet. Sometimes, during packet transmission, fake IDs are created by the network itself due to network problems, and packets carrying such fake IDs are duplicate packets that differ from the original message. When the destination receives those packets, it may fail to detect which packet is the original, and it then informs the source by sending an acknowledgement message. For that reason, the packets are inspected and fake IDs are detected before packet transmission takes place, and this process takes place at the sender side. This fake ID detection protocol comprises three phases: fake ID detection, node initialization, and node registration. 
Each path of the network contains a queue buffer through which the packets are transmitted hop by hop, i.e., one by one. In the first phase, the packets that need to be transmitted are inspected by the network in order to detect fake IDs, and in the second phase, nodes are initialized for transmission. In phase three, the initialized packets are registered in the network. Thus, the fake IDs are detected at the sender side. Next, we discuss the Path Selection Routing (PSR) protocol, which is used to choose an alternate path when the current transmission path is busy (packet transmission is taking place) or has failed. Here, we use 1200 packets and 10 paths for transmission. If any path is busy or has a high packet dropping ratio due to other factors such as channel noise, PSR quickly chooses another path as an alternate in order to continue the transmission without any interruption. Thus, the efficiency and lifetime of the network can be increased. Finally, the overall performance of the network can be increased. Comparison of Existing and Proposed Protocols In Table 1 above, we compare the parameters of the existing DAD and proposed FIDD protocols. In the existing DAD, we can send only 1000 packets, but in the proposed FIDD we can transmit 1200 packets through an individual path. The DAD protocol uses 50 paths in total, but our proposed FIDD protocol uses 20 paths in total. Path selection in the DAD protocol is random, whereas the FIDD protocol chooses an alternate path by using another protocol called PSR in order to make the network efficient. The DAD protocol requires 10 dB bandwidth, but the FIDD protocol has 15 dB bandwidth. The transmission power of the DAD and FIDD protocols is 8.0 W and 5.0 W, respectively. The data rate of the DAD protocol is 16.5e and of the FIDD protocol is 18.5e. In Table 2 above, we compare the parameters of the existing WADD and proposed PSR protocols. 
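The PSR selection rule described above (stay on the current path while it is usable, otherwise switch to a usable alternate) can be sketched in a few lines. The field names, the drop-ratio threshold, and the example values are ours, not from the paper.

```python
def select_path(paths, current, max_drop_ratio=0.2):
    """PSR sketch: keep the current path while it is idle and its packet
    drop ratio is acceptable; otherwise switch to the first alternate
    path satisfying both conditions. Each path is a dict with `busy`
    and `drop_ratio` fields."""
    def usable(p):
        return not p["busy"] and p["drop_ratio"] <= max_drop_ratio
    if usable(paths[current]):
        return current
    for i, p in enumerate(paths):
        if i != current and usable(p):
            return i
    return current  # no usable alternate: stay on the current path

paths = [
    {"busy": True,  "drop_ratio": 0.05},   # current path is busy
    {"busy": False, "drop_ratio": 0.50},   # too lossy (channel noise)
    {"busy": False, "drop_ratio": 0.10},   # usable alternate
]
print(select_path(paths, current=0))  # 2
```

This matches the paper's claim that PSR needs no routing table: the decision uses only the current per-path status, so transmission continues without waiting on table updates.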
In the existing WADD, we can send only 1200 packets, but in the proposed PSR we can transmit 1500 packets through an individual path. Both the WADD protocol and the proposed PSR protocol use an omnidirectional antenna for 360-degree coverage. Path selection in the WADD protocol is random, whereas the PSR protocol chooses an alternate path in order to make the network efficient. The WADD protocol uses 10 paths in total, whereas the PSR protocol uses 25. The transmission power of the WADD and PSR protocols is 5.0 W and 12.0 W, respectively. The data rate of the WADD protocol is 10.5e and of the PSR protocol is 28.5e. Experimental Results In Fig. 1, we compare packet delivery ratio and scalability. The network that employs the FIDD protocol has high PDR and scalability. In Fig. 2, we compare the number of nodes and latency. The network that uses the FIDD protocol has more nodes and high latency. In Fig. 3, we compare scalability and time delay. The network that uses the FIDD protocol has high scalability and reduced time delay. In Fig. 4, we compare delay and average time delay. The network that uses the FIDD protocol has reduced delay and average time delay. In Fig. 5, we compare communication overhead and throughput. The network that uses the FIDD protocol has reduced communication overhead and high throughput. In Fig. 6, we compare latency and scalability. The network that uses the FIDD protocol has high scalability and high latency. Conclusion Thus, we proposed the Fake ID Detection (FIDD) protocol and the Path Selection Routing (PSR) protocol, in which the FIDD protocol is used to detect fake IDs, i.e., duplicate packets, and the PSR protocol is used to select an alternate path when the current path becomes busy or fails. 
Here, we have addressed the assignment of unique IDs or IP addresses to mobile nodes in mobile ad hoc networks in the absence of any fixed configuration or centralized hosts. This has been accomplished by a reactive scheme for handling fake addresses, together with conservative use of network resources, such as the amount of information stored in the mobile nodes. The proposed protocol allots unique addresses and can be combined with several routing approaches. Moreover, the Address Information Holder assists in the discovery of duplicate addresses and handles address conflicts after network partitioning and merging. The PSR protocol promptly chooses an alternate path when the current path is busy or fails. Thus, the packet transmission process continues uninterrupted in the network. Hence, the efficiency and lifetime of the ad hoc network can be increased. Finally, increased overall network performance is obtained. In the future, we will extend our research to provide security for the network by proposing a new authentication protocol. Funding Information The authors have no support or funding to report. Authors' Contributions All authors equally contributed to this work. Ethics This article is original and contains unpublished material. The corresponding author confirms that all of the other authors have read and approved the manuscript and that no ethical issues are involved.
5,594.4
2015-01-29T00:00:00.000
[ "Computer Science", "Engineering" ]
Veillonellae: Beyond Bridging Species in Oral Biofilm Ecology The genus Veillonella comprises 16 characterized species, among which eight are commonly found in the human oral cavity. The high abundance of Veillonella species in the microbiome of both supra- and sub-gingival biofilms, and their interdependent relationship with a multitude of other bacterial species, suggest veillonellae to play an important role in oral biofilm ecology. Development of oral biofilms relies on an incremental coaggregation process between early, bridging and later bacterial colonizers, ultimately forming multispecies communities. As early colonizer and bridging species, veillonellae are critical in guiding the development of multispecies communities in the human oral microenvironment. Their ability to establish mutualistic relationships with other members of the oral microbiome has emerged as a crucial factor that may contribute to health equilibrium. Here, we review the general characteristics, taxonomy, physiology, genomic and genetics of veillonellae, as well as their bridging role in the development of oral biofilms. We further discuss the role of Veillonella spp. as potential “accessory pathogens” in the human oral cavity, capable of supporting colonization by other, more pathogenic species. The relationship between Veillonella spp. and dental caries, periodontitis, and peri-implantitis is also recapitulated in this review. We finally highlight areas of future research required to better understand the intergeneric signaling employed by veillonellae during their bridging activities and interspecies mutualism. With the recent discoveries of large species and strain-specific variation within the genus in biological and virulence characteristics, the study of Veillonella as an example of highly adaptive microorganisms that indirectly participates in dysbiosis holds great promise for broadening our understanding of polymicrobial disease pathogenesis. 
INTRODUCTION The human oral cavity is home to one of the richest microbiotas of the human body, one primarily dominated by the domain Bacteria. Thus far, no fewer than 16 different bacterial Phyla comprising 106 Families, 231 Genera and 687 species-level taxa have been cataloged in the Human Oral Microbiome Database (HOMD) [1]. This microbiota is almost exclusively present as biofilm-growing polymicrobial communities [2]. Whereas these biofilms may evolve in homeostasis with the human host, they are also the cause of two of the most prevalent diseases of mankind: dental caries and periodontitis [3,4]. Human studies conducted in the past half century have strived to identify specific pathogens implicated in the pathogenesis of these diseases. As such, mutans streptococci (mainly Streptococcus mutans) were depicted as the main causal agents of dental caries [5], and species of the so-called "red complex", i.e., Porphyromonas gingivalis, Treponema denticola, and Tannerella forsythia, were described as major etiological drivers of periodontitis [6]. Although these organisms have successfully served as models to enhance our understanding of the bacterial processes involved in oral diseases, the more recent advent of high-throughput sequencing technologies has greatly expanded the catalog of taxa identified within oral communities and further broadened our understanding of their ecological function and contribution to these diseases [7]. Compelling evidence now demonstrates not only that these diseases are polymicrobial in nature but also that they require polymicrobial synergism both to be initiated and to progress [8]. Viewing oral diseases through the prism of single causative taxa was therefore rapidly deemed insufficient and gave way to the study of microbiota symbiosis and dysbiosis with host tissues. 
A cardinal feature of polymicrobial diseases, such as periodontitis, is that the composition and functional activity of the local microbiota are highly responsive to environmental dynamics. Consequently, oral taxa that are widely regarded as commensal and part of the healthy-state microbiota may, in fact, play critical roles in compositional shifts of the normal microbiota from symbiosis to dysbiosis when microenvironmental changes provide such opportunities. Understanding the early events that lead to dysbiosis is a major focus area of current research efforts toward using holistic means for disease prevention, and, precisely, representatives of the genus Veillonella are purported to play a major role in the ecology of oral biofilms [9][10][11]. Despite being among the most abundant genera in the oral cavity, after Streptococcus and Prevotella [12], Veillonella spp. remain poorly studied. In the words of Kolenbrander, veillonellae have emerged as "a critical genus that guides the development of multispecies communities" [13]. In the current review, we recapitulate information on the taxonomy, physiology, genomics and genetics of the genus Veillonella, further provide nascent evidence suggesting that Veillonella species may act as "accessory pathogens", i.e., are capable of promoting the growth of more pathogenic species within oral biofilms, and finally epitomize the relationship between these bacteria and the biofilm-induced oral diseases dental caries and periodontitis. MICROBIAL ECOLOGY IN THE ORAL CAVITY: THE PIVOTAL POSITION OF VEILLONELLA SPECIES The oral cavity represents a unique ecosystem, both spatially and ecologically diverse. This ecosystem offers epithelial surfaces alternating keratinized and non-keratinized profiles, as well as coarser and cryptic mucosal surfaces such as those of the tongue or tonsils. 
In addition to shedding epithelia, dental tissues expose non-shedding hard surfaces that display both accessible supragingival areas and protected subgingival surfaces. This interface between dental and gingival tissues creates a crevice, the gingival sulcus, which is physiologically bathed in a serum-derived, nutrient-rich transudate [14,15]. Such spatial diversity creates local gradients of temperature, redox potential, pH and nutrient sinks, all of which create ecologically different niches that differently support the growth of microbial communities [16]. On top of that, an additional ecological pressure stems from the constant salivary flow that washes out floating (planktonic) microorganisms and coats surfaces with a proteinaceous layer of proteins and glycoproteins: the salivary pellicle. To sustain their growth, oral microbial communities have consequently evolved to behave as "site-specialists" and to adhere onto tissues (mostly non-shedding dental tissues) at sites that best cover their physiological needs, i.e., their preferred ecological niche [17,18]. Bacterial adhesion to surfaces, and further intergeneric coaggregations, are regulated processes. Adsorbed components of the salivary pellicle, such as statherins, proline-rich proteins, mucins or α-amylase, provide binding motifs that enable the selective attachment of early colonizing species that express cognate surface receptors [19,20]. These principally include streptococcal species such as Streptococcus gordonii, Streptococcus sanguinis, and Streptococcus mitis [21]. In turn, early colonizers express at their surface adhesins recognized by bridging species, so named because they themselves expose surface receptors allowing further incremental coaggregation of later colonizers [22,23]. Veillonella species typically represent such bridging taxa and help regulate these intergeneric coaggregations via affinity-dependent interactions [24,25]. 
The contribution of bridging taxa such as Veillonella spp. to polymicrobial communities extends beyond the establishment of cell-cell juxtaposition. The metabolism of veillonellae may alter their immediate environment and thereby foster the establishment of more pathogenic species. For instance, though veillonellae are classified as strict anaerobes, their catalase activity was shown to detoxify ambient hydrogen peroxide (H2O2) stemming from the metabolism of initial colonizers (Streptococcus), thereby creating favorable low-redox-potential conditions for the growth of more oxygen-sensitive anaerobes (Fusobacterium nucleatum), as would occur for periodontal pathogens in the gingival sulcus [11]. Veillonella was also shown to produce heme, which is the preferred iron source of the periodontopathogen P. gingivalis [26]. In these considerations, veillonellae were recently proposed to behave as "accessory pathogens" [23,27]. GENERAL CHARACTERISTICS Veillonellae are strictly anaerobic Gram-negative cocci. The genus Veillonella consists of 16 characterized species [28][29][30][31][32][33][34], eight of which are routinely isolated from the oral cavity [33,35,36]. A unique characteristic of the genus is the lack of glucokinase and fructokinase, which renders veillonellae unable to metabolize glucose, fructose and disaccharides. Instead, veillonellae use short-chain organic acids as carbon sources, often produced as intermediary/end metabolites by other genera in the biofilm. With the exception of Veillonella seminalis [28], lactate is the preferred carbon source for most Veillonella species, and is metabolized to propionate via the methylmalonyl-CoA pathway and, through further pathways, to acetate [37]. The metabolic consumption of lactate establishes a nutritional interdependence between Veillonella spp. and early-colonizing lactic acid-producing bacteria, such as species of streptococci or lactobacilli, and thereby accounts for their co-localization in oral biofilms [38]. 
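The streptococcus-to-Veillonella lactate cross-feeding described above can be caricatured with a toy growth model. Everything below (the `simulate` helper, parameter values, arbitrary units) is invented for illustration; it is a sketch of the nutritional interdependence, not a fitted model from the source:

```python
# Toy cross-feeding model: streptococci (S) ferment sugar to lactate (L);
# veillonellae (V) grow on that lactate. Simple forward-Euler integration.
# All rate constants are hypothetical and chosen only to illustrate coupling.

def simulate(steps=10000, dt=0.01):
    S, V, L = 0.1, 0.1, 0.0          # initial biomasses and lactate (arbitrary units)
    mu_s, mu_v = 0.5, 0.4            # assumed maximum growth rates
    y_l, k_l = 0.8, 0.2              # lactate yield per unit S growth; Monod half-saturation
    for _ in range(steps):
        gs = mu_s * S * (1.0 - S)            # logistic growth of streptococci
        gv = mu_v * V * L / (k_l + L)        # Monod growth of veillonellae on lactate
        dL = y_l * gs - 2.0 * gv             # lactate produced by S, consumed by V
        S += dt * gs
        V += dt * gv
        L = max(0.0, L + dt * dL)            # lactate concentration cannot go negative
    return S, V, L
```

Running `simulate()` shows the Veillonella pool growing only after the streptococcal partner has released lactate, mirroring the co-localization and interdependence described in the text.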
Further worth noting here is that lactate is the strongest acid produced in significant amounts by oral taxa and is involved in the demineralization of dental hard tissues (see section "Veillonellae and dental caries" below). Its conversion to weaker volatile acids by Veillonella spp. (primarily propionate) may therefore act as a buffer against the carious damage induced by saccharolytic species [20]. Interestingly, despite their strict requirement for anaerobic environments, Veillonella spp. demonstrate an important ability to cope with aerobic media. Such observation is partially imputed to their ability to detoxify hydrogen peroxide via the expression of a catalase in most Veillonella species, with the sole exception of Veillonella atypica. Altogether, these abilities to establish mutually beneficial networks with saccharolytic initial colonizers, as well as to decrease the redox potential for themselves and for later colonizers, have made veillonellae among the most abundant taxa in both supra-gingival and sub-gingival biofilms [39][40][41]. TAXONOMY Veillonellae belong to the phylum Firmicutes, and although most members of this phylum are Gram-positive, representatives of the Class Negativicutes such as Veillonella spp. stain Gram-negative. Figure 1 shows typical photomicrographs of two Veillonella species as observed under differential interference contrast (DIC) and epifluorescence microscopy using fluorescent probes targeting the 16S rRNA. The genus stems from the Order Veillonellales, which also includes the genera Dialister, Megasphaera, and Anaeroglobus, recently proposed as putative periodontopathogens [42,43]. Recognized human oral Veillonella species include: Veillonella parvula, Veillonella dispar, V. 
atypica, Veillonella denticariosi (reportedly enriched in carious dentine), Veillonella rogosae (reportedly enriched in caries-free individuals), Veillonella tobetsuensis (often isolated from the tongue), Veillonella infantium and Veillonella nakazawae [32,33,38]. Due to high genomic similarities between species, discrimination among Veillonella spp. by the sole use of the 16S ribosomal RNA (16S rRNA) gene, as increasingly applied for taxonomic assignment, may prove highly challenging. Even more so since a single genome bears several copies of the 16S gene that may display significant heterogeneities between them (intragenomic variations) [44]. One potential solution may be to complement the 16S-based taxonomy with other housekeeping-gene sequences, such as rpoB (encoding the RNA polymerase β-subunit) and dnaK (encoding the chaperone protein DnaK) [33,[44][45][46]. CARBON SOURCE METABOLISM IN VEILLONELLA SPECIES Lactate and malate are the preferred carbon sources in veillonellae, and are metabolized to propionate, acetate, CO2 and H2 [37]. Pyruvate, fumarate, and oxaloacetate may also be metabolized, but citrate, iso-citrate and malonate are not. Catabolism of succinate has been reported, albeit yielding suboptimal growth [47]. Of particular importance in the overall balanced reaction of lactate catabolism is the unique manner in which energy is produced. The typical generation of an ATP through conversion of pyruvate to acetate occurs, but it is complemented by the action of an enzyme unique to veillonellae: the lactate-oxaloacetate transhydrogenase (or malate-lactate transhydrogenase) [49]. This NADH-binding enzyme allows the interconversion of lactate + oxaloacetate to pyruvate + malate in a single reaction without loss of electrons. The ensuing reducing equivalents are used to produce an ATP molecule during the transfer from malate to fumarate. 
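The overall lactate catabolism described above is commonly summarized by the classic propionic-type fermentation balance; the equation below is a textbook stoichiometry assumed here for illustration, not recovered from the source:

```latex
3\,\mathrm{CH_3CHOHCOO^-} \;\longrightarrow\; 2\,\mathrm{CH_3CH_2COO^-} \;+\; \mathrm{CH_3COO^-} \;+\; \mathrm{CO_2} \;+\; \mathrm{H_2O}
```

Note that this simplified overall balance does not capture the H2 reported among the end products, which is generally attributed to the oxidative decarboxylation of pyruvate.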
Fumarate is further converted to succinate, which is subsequently decarboxylated to generate propionate and carbon dioxide [48]. Thus, this unique transhydrogenase allows more ATP to be generated than would the simple conversion of lactate to pyruvate and then to acetate. Another distinctive feature of Veillonella metabolism is the regeneration of oxaloacetate from pyruvate by fixation of carbon (from carbon dioxide) via the pyruvate carboxylase, rather than through transcarboxylation from methylmalonyl-CoA as occurs in Propionibacteria [50]. The regenerated oxaloacetate can then be decarboxylated and phosphorylated to phosphoenolpyruvate by the phosphoenolpyruvate carboxykinase in order to enter the gluconeogenesis pathway. An additional source of energy is an atypical pathway of nitrate reduction: nitrate is reduced to nitrite using pyruvate as the electron donor to yield an ATP, and the nitrite is then converted to hydroxylamine and then to ammonia, which is assimilated [51]. This unique metabolism endows Veillonella spp. with a highly specialized ability to thrive on intermediary/end metabolites produced by other bacterial members in their vicinity. GENOMICS AND GENETICS Gronow and colleagues deposited the first complete genome sequence of a Veillonella species, more specifically V. parvula strain Te3T (DSM 2008), isolated from the human intestinal tract [52]. The type strain appeared to carry one main chromosome of 2,132,142 bp displaying 38.6% GC content. The 1,920 predicted genes consisted of 1,859 protein-coding genes, 61 RNAs and 15 pseudogenes. Among all genes, 73.6% were functionally annotated, and the rest were labeled as hypothetical proteins. More recently, as a result of the mounting interest in oral veillonellae, several other draft and complete genomes were deposited; these include V. atypica OK5, V. tobetsuensis ATCC BAA-2400T, V. parvula HSIVP1 and V. nakazawae JCM33966T [36,[53][54][55]. 
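As an aside, a genome-wide GC content such as the 38.6% quoted above for the V. parvula type strain is straightforward to compute from sequence data. A minimal, self-contained sketch (the FASTA fragment and helper names below are made up for illustration, not drawn from the actual genome):

```python
# Minimal sketch of GC-content computation from a FASTA-style record.
# The toy sequence is invented; a real analysis would read the genome file.

def gc_content(seq):
    """Percent G+C over A/C/G/T characters (case-insensitive)."""
    seq = seq.upper()
    acgt = sum(seq.count(b) for b in "ACGT")
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / acgt if acgt else 0.0

def parse_fasta(text):
    """Return {header: sequence} for a simple FASTA string."""
    records, header, chunks = {}, None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                records[header] = "".join(chunks)
            header, chunks = line[1:].strip(), []
        else:
            chunks.append(line.strip())
    if header is not None:
        records[header] = "".join(chunks)
    return records

fasta = ">toy_contig\nATGCGC\nATATAT\n"
seqs = parse_fasta(fasta)
print(round(gc_content(seqs["toy_contig"]), 1))  # 4 GC over 12 bases -> 33.3
```

Applied to the full chromosome sequence, the same calculation would reproduce the reported genome-wide figure.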
Despite the purported importance of Veillonella in human health and disease, little is known about their biology and pathogenic traits, largely due to our inability, until recently, to genetically manipulate this group of bacteria. Liu et al. developed the first genetic transformation system in veillonellae, comprising a genetically transformable strain of V. atypica, OK5, and a shuttle vector [56,57]. Using this system, Zhou et al. insertionally inactivated all 8 putative hemagglutinin genes in V. atypica to study putative surface proteins responsible for coaggregation with oral bacteria [41]. By testing the capacity of all 8 adhesin mutants to coaggregate with various oral bacteria and human oral epithelial cells, one multi-functional adhesin, Hag1, was identified in V. atypica. This protein is required for Veillonella coaggregation with streptococcal species, P. gingivalis and human buccal cells, and is thus believed to play a vital role in the development of the human oral biofilm. Based on the same system, Zhou and colleagues successfully established the first markerless mutagenesis system in V. atypica using a mutant pheS gene (encoding the highly conserved phenylalanyl-tRNA synthetase alpha subunit) as the counter-selectable marker. To study the function of extracytoplasmic function (ECF) sigma factors in Veillonella, they successfully deleted a gene encoding an alternative sigma factor (ecf3) from the Veillonella chromosomal DNA [58]. This provided a valuable tool for studying gene function and regulation in Veillonella species. Recently, natural competence in V. parvula has been reported, providing a rapid and simple method to study Veillonella genetics [45]. BRIDGING ROLE IN ORAL BIOFILM DEVELOPMENT Oral and dental biofilms are multispecies communities, which develop in an incremental process with initial colonizers attaching to surfaces, followed by early- and later-colonizers physically co-aggregating [20,59]. 
Evidence that veillonellae function as bridging species in biofilm development stems from both in vivo and in vitro studies. Human epidemiological studies have shown veillonellae to be highly abundant in both supra- and subgingival plaques as well as on the tongue and in saliva [39,40,[60][61][62]. Biofilms formed on intra-oral devices worn by volunteers showed veillonellae colonization as early as 2 h following insertion [63]. The presence of Veillonella species so early in biofilm development may appear somewhat surprising considering their requirement for anaerobic conditions and inability to catabolize sugars. Early biofilms are indeed not depauperate in oxygen, and sugars are likely the most abundant carbon source in immature oral biofilms [64]. Although initial colonizing saccharolytic streptococci could supply veillonellae with organic acids from their fermentation metabolism, most of these initially present Streptococcus species are not high producers of organic acids. A possible piece in this puzzle rests in cell-cell recognition and intergeneric interaction [20]. Hughes et al. observed that sub-gingival isolates of the genus display extensive coaggregation with streptococci and Actinomyces spp. found in initial plaque [65]. However, isolates from the tongue and other mucosal surfaces display few of these interactions; instead, they coaggregate with Streptococcus salivarius, an organism that likewise colonizes the tongue and is less frequently isolated from dental plaque. These results suggest that the phenotype of Veillonella isolates may differ based on the isolation site, and that coaggregation interactions may enhance recruitment of a highly adaptable strain to a given microbial community. 
Coaggregation characteristics and cell-surface antigenic properties (assessed with polyclonal antibodies raised against whole bacterial cells) have been used as phenotypic markers to show not only that veillonellae occur in immediate proximity to streptococci in the early plaque biofilm, but also that a change in veillonellae phenotypes within a single individual's plaque biofilm occurs over the course of 4 h [66]. Earlier studies have shown that Veillonella spp. physically coaggregate with streptococci [65,67]. Furthermore, in vitro studies using saliva as sole medium have demonstrated that when Veillonella is co-cultured with later colonizers, such as F. nucleatum or P. gingivalis, the latter always showed a growth advantage compared to their monocultures [10,13,59,68,69]. Veillonellae were also found to coaggregate with these two later colonizers, both in vivo and in vitro [24,70,71]. In an in vitro multi-species biofilm model, absence of streptococci from the inoculum decreased the total number of V. dispar cells [72]. Of note, in the same biofilm model, different Fusobacterium spp. differentially affected the growth of V. dispar, which may reveal previously unknown nutritional cues between the two taxa [73]. These observations collectively illustrate the bridging role of Veillonella spp. within oral biofilms, although the surface proteins and mediators involved in these intergeneric interactions remain largely understudied to date. Recently, eight putative genes encoding adhesin proteins have been identified in the V. atypica OK5 strain [41]. Among them is a YadA-like autotransporter protein, Hag1, encoded by the hag1 gene and required for coaggregation with S. gordonii, Streptococcus oralis, Streptococcus cristatus and P. gingivalis, and even for adhesion to human oral epithelial cells [41]. 
Given the remarkably large size (7,187 amino acids) and complex organization of the Hag1 protein, coaggregation with different partners likely involves distinct domains and mechanisms, which warrant further investigation. Indeed, the target for V. atypica OK5 binding to S. gordonii has been identified as the Hsa protein, a previously characterized sialic acid-binding adhesin in S. gordonii [74]. Phenotypic characterization of bacteria in general has become less prevalent since the advent of modern genomic approaches. However, the aforementioned studies demonstrate that functional differences between Veillonella spp. likely exist, but that we lack an understanding of what these differences may be and how they influence community physiology in situ. The question remains as to the carbon source for veillonellae in the early plaque biofilm. Experiments performed in an in vitro model show that V. parvula PK1910 produces a diffusible signal which upregulates streptococcal amylase production [75]. In turn, degradation of streptococcal intracellular glycogen stores and subsequent glycolysis could yield additional lactic acid to the growth advantage of veillonellae. This observation is all the more important because the model system incorporated flow: neither signaling molecules nor fermentation products are permitted to accumulate, and these components can therefore be present at a consistently high concentration only in the immediate proximity of the producing bacterial cell. Accordingly, only those streptococci in immediate proximity to veillonellae may be "signal-activated," and, reciprocally, only those veillonellae in immediate proximity to streptococci would benefit from any enhanced lactate production. 
While these interactions are limited to dual bacterial species assays, recent advances in omics hold great promise for determining how Veillonella establishes mutualistic relationships within bacterial communities in niches that are characterized by Veillonella overgrowth. Thus, a convergence of coaggregation, physiology, and solute concentration gradients may influence which veillonellae are found at the various sites in healthy and diseased individuals. VEILLONELLAE AND DENTAL CARIES Dental caries, i.e., the demineralization of tooth enamel and dentine, is mainly caused by the local decrease of pH induced by the production of organic acids ensuing from fermentation in saccharolytic bacteria [5]. The mouth encompasses approximately 700 microbial species, but only a few specific species have been consistently implicated in dental caries, such as Lactobacillus spp. and S. mutans [76]. Yet, more recent studies using next-generation sequencing identified additional taxa, most consistently Scardovia wiggsiae, and mixed taxonomic communities with common saccharolytic functions in the etiology of the disease [77][78][79]. Early studies by Mikx and colleagues on the role of oral bacteria in caries activity showed that Veillonella alcalescens may display a potentially anti-cariogenic effect [80]. Indeed, gnotobiotic rodents infected with S. mutans and/or V. alcalescens demonstrated less dental caries development when co-inoculated with both taxa. One later report came to challenge the purported role of veillonellae in dental health [81]. Using removable dental appliances worn by caries-free and caries-susceptible volunteers, Minah et al. showed that frequent exposure to sucrose stimulated the growth of Veillonella spp., Lactobacillus spp., S. salivarius, and to a lesser extent, S. mutans, and further decreased the microhardness of enamel. While this positive association appears to link Veillonella spp. 
with caries development, it may also be reasoned that Veillonella spp. were advantageously thriving on the increased lactic acid production generated by streptococci and lactobacilli following sucrose exposure. Moreover, the authors argued that the increased level of Veillonella in caries-free patients' plaque exposed to sucrose is consistent with its protective role against enamel decalcification. By contrast, Noorda et al. established an artificial mouth model using human enamel slabs and found co-cultures of S. mutans and V. alcalescens to generate an increased acid production compared to the respective monocultures. Co-cultures also resulted in higher enamel surface demineralization under anaerobic conditions [82]. Beyond the consistent observation that Veillonella spp. are associated with dental caries, it appears difficult to conclude whether such association is causal or consequential in consideration of these conflicting findings. Veillonella spp. (especially V. parvula) have been found to be associated with caries not only in children [83] but also in the elderly, in whom they are among the most abundant and prevalent taxa in samples from both healthy and carious sites, with an apparently higher abundance in caries samples [84]. Evidence exists that V. denticariosi occurs only in diseased sites whereas V. rogosae is found only in healthy plaque [85], i.e., that V. denticariosi has unique properties which either restrict it to a caries community or make it uncompetitive elsewhere. One may therefore ask whether V. denticariosi isolates from caries lesions display specific interactions with mutans streptococci. The physiological link between veillonellae (as lactate utilizers) and mutans streptococci (as lactate producers) has prompted much clinical research on the association of veillonellae with caries. For instance, Becker et al. 
[40] utilized a reverse capture checkerboard assay to analyze plaque samples from 30 subjects with caries together with 30 healthy controls and found Veillonella, together with 7 other species including S. mutans, to be associated with caries. A study by Aas et al. [86] also demonstrated the association of the genus Veillonella with caries progression. Belstrom et al. reported that Streptococcus spp. and Veillonella spp. were the most predominant genera among all saliva samples from 292 participants with mild to moderate dental caries [87]. It may be purported that the association observed between cariogenic bacteria and veillonellae stems from their metabolic requirement for organic acids, which are indeed found in higher concentrations in active caries. Hence, the presence of veillonellae may be indicative, and perhaps predictive, of a localized drop in pH. Indeed, using a chemostat and a 9-species microcosm biofilm, Bradshaw and Marsh reported that the numbers and proportions of S. mutans and Lactobacillus spp. increased as pH decreased, while V. dispar became the most abundant organism following glucose pulses, especially under low pH [88]. Similarly, in another clinical study relying on community 16S cloning and sequencing, Gross et al. found the proportion of veillonellae to increase commensurately with the proportion of acidogenic streptococci [89]. At the very least, these studies suggest that veillonellae are spatiotemporally associated with acidogenic bacteria in oral biofilms, and it is enticing to consider that their correlation with caries progression identifies this genus as an early indicator of dysbiotic communities, regardless of the accompanying acidogenic phylotypes. This does not necessarily implicate veillonellae as etiological agents of dental caries, but rather as ecological beneficiaries of an incipient dysbiotic community. 
In other words, veillonellae could comprise a risk factor for caries initiation, while mutans streptococci comprise a risk factor for caries progression. VEILLONELLAE AND PERIODONTITIS Periodontitis affects about 11% of the population around the world and is a biofilm-induced chronic inflammatory disease that causes resorption of the periodontium, i.e., the teeth-supporting tissues [90,91]. According to current and most accepted knowledge on periodontitis etiopathogenesis, the initial phase of the disease (gingivitis) is triggered by the accumulation of supra- and sub-gingival biofilm [92]. Rapidly, the accumulation of bacterial cells and metabolites induces a mild inflammation that in turn results in an increased exudation of gingival crevicular fluid [92]. As a serum transudate/exudate, excess gingival fluid creates a protein-rich environment that fosters colonization by proteolytic species. One common view to explain the subsequent progression to periodontitis relies on the keystone-pathogen hypothesis, which purports that low-abundant taxa such as the red complex pathogens (P. gingivalis, T. forsythia, and T. denticola) as well as F. nucleatum further alter local nutrient conditions by tissue breakdown, subvert host immunity and ultimately promote the establishment of more abundant pathogens [93][94][95]. These microbial alterations set off a self-feeding vicious cycle that enhances and maintains periodontal inflammation and leads to tissue resorption. All periodontal pathogens are considered "intermediate" (F. nucleatum) or later (P. gingivalis) colonizing bacteria, and regularly colonize the subgingival crevice, which is mostly an anaerobic environment. This is consistent with the fact that these periodontal pathogens are obligate anaerobes and therefore extremely vulnerable to oxidative stress. 
Reactive oxygen species (ROS), such as hydrogen peroxide (H2O2), are commonly generated by the metabolism of initial colonizers and may inhibit the growth of strict anaerobes, such as periodontopathogens [59,[96][97][98][99]. An interesting and seemingly contradictory observation is that F. nucleatum and P. gingivalis are frequently isolated even from early biofilm communities [100]. How do these strictly anaerobic periodontopathogens cope with the lethal H2O2 concentrations likely to occur around streptococci? As an early colonizer, V. parvula has been reported to be a catalase-positive bacterium able to eliminate H2O2 in the microniche and thereby rescue the growth of these anaerobic periodontopathogens [11]. Thus, the fact that Veillonella spp. are frequently identified in the healthy oral microbiome and are not considered periodontal pathogens does not rule out the possibility that they greatly contribute to the shift from health to gingivitis and finally to periodontitis. Whereas compelling evidence supports the keystone-pathogen hypothesis [101], the fact that low-abundant P. gingivalis communities exhibit difficulties in sustaining themselves and durably colonizing the oral cavity may indicate interdependence with a yet-missing piece of the puzzle [102,103]. It is well known that P. gingivalis requires hemin or heme for growth, and this nutrient is provided by the crevicular fluid during gum inflammation [104,105]. Yet P. gingivalis can be found in the early dental biofilm, where no inflammation occurs [100]. Recently, Zhou et al. reported that V. atypica, as an early colonizer, can generate hemin/heme to support P. gingivalis growth in vitro [26]. This study suggests that Veillonella could be a potential hemin/heme provider supporting periodontopathogen growth, and might play a crucial role in dental biofilm formation and periodontitis development. 
[Figure 2 caption: Veillonellae have a key role in the colonization of mineralized (e.g., tooth enamel or cementum) and metal (e.g., dental implant) surfaces within the oral environment. Demonstrating symbiotic mutualism with commensal streptococci, they are early and abundant colonizers that also provide immune stimulation through their mildly potent LPS, which may be beneficial to the host for heightened immune surveillance contributing to the establishment of a dynamic healthy equilibrium. Lower panel: in the inflammatory environment of the crevice, in the presence of biofilms with high plaque biomass, microenvironmental conditions vary within the biofilm. Differences in lactate concentrations, the prime carbon source for Veillonella, and differences in oxygen availability may trigger virulence mechanisms in certain Veillonella strains, such as hemagglutinin-1, which provides adhesion positions for P. gingivalis, or small molecules that support early P. gingivalis growth and colonization.] 
In addition, unlike early colonizers, P. gingivalis is indeed unable to grow at low cell densities and requires large initial inocula [106]. Most recently, it has been shown that the presence of V. parvula in co-cultures also supports the growth of P. gingivalis, even when the latter is inoculated at low cell densities [23]. A mechanism of direct physical interaction between Veillonella and P. gingivalis has been reported by Zhou et al. [41], but this growth-promoting signal appeared not to depend on cell-cell contact and was rather mediated via a soluble factor. More interestingly, this soluble factor was necessary to enable colonization by P. gingivalis of the oral cavity in a mouse model [23]. Although it remains unclear whether this growth-promoting factor acts as a quorum-sensing signal informing on cell density, a nutrient necessary to P. gingivalis, or simply a metabolic mediator, these findings highlight the bridging role of Veillonella spp. 
in biofilm microbial interactions and illustrate the concept of "accessory pathogen". Figure 2 provides a schematic representation of this dual role of Veillonella spp. within oral biofilms, at times commensal yet also potentially pathogenic. FUTURE PERSPECTIVE Considering the high abundance of the genus Veillonella in the oral cavity, the limited number of studies available until recently is surprising. Potential reasons that may account for this scarcity include the commensal nature of Veillonella spp., together with the lack of dedicated genetic tools that only lately allowed deeper investigation of the genus. More recently, an increasing number of studies have pointed toward the importance of veillonellae in the ecology of oral biofilms and their role in the homeostasis between oral health and disease. First, there is now compelling evidence showing that Veillonella spp., as bridging organisms, play a pivotal role in establishing multispecies biofilm communities via direct and indirect interactions with both initial and later colonizers [9,10,13,41,74]. With the recent development of genetic tools in Veillonella spp. [45,[56][57][58], the mechanisms of Veillonella binding to other oral bacteria and human epithelial cells have been studied [41,74]. However, due to the complexity of veillonellae's outer membrane and abundance of adhesins, further studies are warranted to better understand veillonellae's role in the development of multispecies biofilm communities. Second, it is crucial that future research focuses on veillonellae's role as "accessory pathogens" in incipient dysbiosis. Indeed, while their pathogenic potential has been shown to be limited, the role of V. atypica in producing nutrients and reducing the oxidative microenvironment to support and facilitate the growth of periodontal pathogens has been reported [11,26]. Most recently, Hoare and colleagues reported that P. 
gingivalis, even when inoculated at low cell densities, can survive in co-culture with V. parvula [23]. In addition, Veillonella spp. are spatiotemporally associated with acidogenic bacteria in oral biofilms and caries development, and have therefore been identified as an early indicator of dysbiotic communities in dental caries [89]. Considering that veillonellae are early colonizers frequently isolated and identified in early biofilm communities, studies investigating their role in the development of oral diseases, such as periodontitis and caries, and characterizing their putative involvement as "accessory pathogens" may have significant clinical relevance. Third, it is important to study Veillonella biology at species- and strain-level resolution rather than at the genus level. Most studies that investigated the ecological role of veillonellae in oral biofilms remained species-specific. For example, V. denticariosi has been identified only in caries sites, whereas V. rogosae is isolated from healthy plaque [85]. In another instance, a survey of Veillonella interspecies interactions with oral microbes showed that, among all tested Veillonella strains, all V. parvula strains, some V. atypica and V. rogosae strains, and no V. dispar strains physically interact with S. gordonii [74]. While these studies emphasize differences observed at the species level, different strains of the same species often display further genomic variation [107]. As such, various Veillonella strains may differ in their ability to behave as commensals or, conversely, as "accessory pathogens", differences that remain concealed at the strain level. However, a major challenge remains for future Veillonella research: so far, genetic systems have been established only in V. atypica and V. parvula, because most species in the Veillonella genus are non-transformable [45,57,58].
Isolating and establishing genetic tools in different Veillonella species and strains will be crucial in the coming years. Although Veillonella spp. are frequently identified in the healthy oral microbiome and are not considered periodontal pathogens, could they be pathobionts in other oral diseases? Most recently, Daubert et al. reported that peri-implantitis is associated with a significant increase in Veillonella spp. [108]. Peri-implantitis is an infectious disease that causes an inflammatory process in the soft and hard gum tissues around dental implants [109,110]. In peri-implantitis, titanium corrosion and dissolution have been implicated, and titanium particles are generated as the disease progresses [111,112]. The enrichment of the Veillonella genus in the diseased peri-implant microbiome has been found to correlate with the local concentration of titanium particles in the crevice, suggesting that titanium particles shift the peri-implant microbiota toward dysbiosis [108]. Because titanium particles in peri-implantitis strongly activate oxidative burst pathways in humans [113], Veillonella spp., with their capacity to detoxify reactive oxygen species (e.g., through catalase activity), may play an important role in microbiome dysbiosis during titanium-mediated inflammation in peri-implantitis. This connection warrants further investigation to better understand the differences between the periodontal and peri-implant microbiomes. In addition, periodontal disease has been reported to be associated with atherosclerosis, possibly because oral bacteria contribute to the progression of atherosclerosis and cardiovascular disease [114]. Koren et al. reported that the genera Veillonella and Streptococcus are identified in the majority of atherosclerotic plaque samples, and that the combined abundances of these two taxa in atherosclerotic plaques are consistent with their abundance in the oral cavity, implying that Veillonella spp.
might play a crucial, potentially pathogenic, role in the development of atherosclerosis and cardiovascular disease [115]. Thus, the role of Veillonella as a potential pathobiont in the development of other diseases remains to be investigated in the future. AUTHOR CONTRIBUTIONS PZ and DM drafted the manuscript. GB and GK provided critical feedback. All authors contributed to figure development and have reviewed and approved the revised manuscript.
Contrast-Induced Nephropathy: Current practice Contrast-induced nephropathy (CIN) is a common cause of hospital-acquired acute kidney injury (AKI) and is associated with adverse clinical outcomes. There is still debate regarding its exact definition, which has greatly influenced the incidence of CIN reported in the literature. Recent studies have challenged the universal concern regarding the risk of CIN in the general population. CIN occurs more commonly after intra-arterial (IA) administration of contrast, as in interventional cardiology and vascular procedures, especially in patients with multiple comorbidities and underlying renal impairment. Recent studies report negligible risk after intravenous (IV) contrast administration for modern diagnostic radiological examinations. Since it is a potentially preventable clinical condition, it is imperative for health care professionals to be well aware of this entity. All patients undergoing iodinated contrast exposure should be risk-stratified, and preventive measures should be employed in the high-risk population. This paper reviews the epidemiology, controversies regarding the definition, pathophysiology, risk stratification, iodinated contrast agents commonly used in practice, and preventive strategies. Introduction Contrast-induced nephropathy (CIN) is defined as acute renal impairment after exposure to iodinated contrast media (CM). In the modern era, with the advancement of diagnostic modalities, the increase in the number of percutaneous coronary and peripheral artery interventions, and the frequent use of contrast media, CIN has emerged as a common cause of acute renal failure. In most cases it results in transient renal impairment; however, in patients with multiple comorbidities it can be associated with high morbidity and mortality [1]. It is therefore imperative to identify patients at risk, to decrease its occurrence, and to diagnose it early in its course to avoid short- and long-term adverse clinical effects.
It presents a challenging situation that is often encountered by practitioners in varied specialties, including emergency medicine, nephrology, cardiology, radiology and the intensive care unit (ICU). Since CIN represents a form of acute kidney injury (AKI) that is potentially preventable, it is important that clinicians are well aware of this entity and understand the basic pathogenesis involved. The aim of this review article is to discuss the pathogenesis of CIN, associated risk factors, the different iodinated CM used in clinical practice and their effects on the kidney, alternative diagnostic modalities, and preventive measures. Epidemiology CIN remains an important cause of in-hospital AKI, accounting for approximately 11% of all AKI cases [1]. The literature on CIN mainly reports it after exposure to iodinated contrast during coronary or peripheral angiography [1,2] or diagnostic imaging; however, recent data show that contrast exposure associated with modern radiographic procedures is not associated with a significant increase in CIN incidence [3]. Exposure during cardiac or peripheral angiography differs from that during diagnostic imaging in that the injection is intra-arterial, requires a catheter, and the dose of CM is concentrated and abrupt [4,5]. In the general population without any risk factor, the incidence of CIN is very low; however, the risk increases as comorbidities increase [1,6,7]. Critically ill patients with baseline renal impairment are particularly susceptible, and contrast-enhanced computed tomography (CECT) might account for 18% of AKI cases in this population [8]. The reported incidence of CIN varies with the procedure: 11% after outpatient CECT [9], 4% following intravenous pyelography [10], 9% after peripheral arteriography [4], <3% following percutaneous transluminal coronary angioplasty (PTCA) in patients with normal renal function, and as high as 40% in patients with underlying renal impairment [2,11,12].
Studies have documented that at least 3.1% of patients with CIN required renal replacement therapy. In-hospital mortality is reported to be 7.1% in patients with CIN after coronary intervention, as high as 35.7% in patients requiring dialysis, and further increases to 81.2% at 2 years [1]. CIN is reported to be an independent predictor of poor outcome in patients with diabetes mellitus with or without renal impairment [6]. Definition Post-contrast acute kidney injury (PC-AKI) is often defined as acute renal impairment occurring within 48 hours of exposure to intravascular iodinated CM. It is a correlational diagnosis and can occur irrespective of whether the CM caused the renal deterioration. In contrast, CIN is a subgroup of PC-AKI and is a causative diagnosis. In the literature, CIN has been defined as a relative change from baseline serum creatinine (>25-50%) or an absolute elevation from baseline serum creatinine (>0.5-2 mg/dL) [13-15]. A temporal relationship between exposure to CM and the rise in serum creatinine, along with exclusion of other causes of AKI, is crucial for the diagnosis of CIN. Defining set criteria for diagnosis is important, as the incidence of CIN in clinical practice can vary greatly with small changes in serum creatinine. An absolute increase in serum creatinine of ≥0.5 mg/dL from baseline is still a commonly used definition of AKI [16]. Another well-known criterion for CIN, proposed by Barrett and Parfrey in the early 1990s, is an absolute increase in serum creatinine of ≥0.5 mg/dL or a relative increase of ≥25% from baseline within 72 hours after contrast exposure [17]. However, to standardize definitions, the American College of Radiology (ACR) has recently recommended using the AKIN (Acute Kidney Injury Network) criteria to define CIN.
The AKIN criteria define AKI as an absolute serum creatinine increase of ≥0.3 mg/dL, a percentage increase in serum creatinine of ≥50% (≥1.5 times baseline), or urine output <0.5 mL/kg/hour for at least 6 hours, within 48 hours of exposure [18]. In clinical practice, serum creatinine concentration is the most commonly used measure of renal function; however, it has its own limitations. There is often a delay between the renal insult and the rise in serum creatinine, which delays the diagnosis. It is also affected by other factors, including age, gender, nutritional status, volume status, hypercatabolic states, muscle mass and concomitant use of other medications; it is therefore important to interpret results in the appropriate clinical setting [19,20]. Changes in serum creatinine concentration are not linearly related to changes in effective glomerular filtration rate (eGFR), because of which small changes in eGFR can often go unnoticed. Serum creatinine actually remains normal until GFR or creatinine clearance is reduced by nearly 50% [20]. Diagnosis Diagnosing CIN is challenging. It is important to follow a systematic approach while evaluating a patient with AKI. CIN is a diagnosis of exclusion after other causes of AKI (prerenal, intrinsic or postrenal) have been ruled out. There is a temporal relationship with contrast exposure; however, this should not preclude a search for other reversible causes of AKI, which may coexist [13-15]. Periprocedural hypotension, bleeding, release of atheroembolic material and the use of catheter exchanges may further complicate renal tubular injury after angiography [21]. After exposure to intravascular iodinated contrast, serum creatinine typically rises within the first 24-48 hours, peaks at 3-5 days and returns close to baseline within 1-3 weeks. In most cases, CIN is asymptomatic and has no significant clinical consequences. However, in rare cases it can lead to oliguria or anuria, irreversible renal damage and the need for renal replacement therapy.
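The AKIN-style definition given in the Definition section reduces to three simple checks. As a minimal sketch (the function name and argument layout are illustrative, not from any guideline text):

```python
def meets_akin_cin_criteria(baseline_scr_mg_dl, current_scr_mg_dl,
                            urine_output_ml_kg_h=None, oliguria_hours=0,
                            hours_since_exposure=48):
    """Sketch of the AKIN-based post-contrast AKI check described above.

    Any one criterion, within 48 h of contrast exposure:
      * absolute serum creatinine rise >= 0.3 mg/dL, or
      * relative rise >= 50% (>= 1.5x baseline), or
      * urine output < 0.5 mL/kg/h sustained for at least 6 h.
    """
    if hours_since_exposure > 48:
        return False  # outside the 48-hour diagnostic window
    absolute_rise = current_scr_mg_dl - baseline_scr_mg_dl >= 0.3
    relative_rise = current_scr_mg_dl >= 1.5 * baseline_scr_mg_dl
    oliguria = (urine_output_ml_kg_h is not None
                and urine_output_ml_kg_h < 0.5
                and oliguria_hours >= 6)
    return absolute_rise or relative_rise or oliguria
```

A rise from 1.0 to 1.4 mg/dL meets the absolute criterion, while 1.0 to 1.2 mg/dL meets neither creatinine criterion and would only qualify via sustained oliguria.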
Basic work-up includes clinical assessment, urine output, urinalysis and renal imaging. A volume-depleted state can predispose a patient to CIN; however, CIN can result in intrinsic renal injury with tubular necrosis in extreme cases. Findings on urinalysis may include muddy-brown granular casts, tubular epithelial cells and minimal or no proteinuria. Urine examination is neither sensitive nor specific for diagnosing CIN; its usefulness lies mainly in excluding underlying glomerular disease or acute interstitial nephritis [22]. Urine studies may show an intrinsic renal injury picture with urine sodium <10 mEq/L, fractional excretion of sodium <1% and urine osmolality <350 mOsm/kg. A persistent nephrogram (Figure 1) may be seen incidentally on follow-up imaging because of delayed clearance of contrast [23]. The timing of the serum creatinine rise after contrast exposure has been reported in the literature to be 24-72 hours. However, it is speculated that renal damage actually begins soon after exposure to intravascular iodinated contrast. In a study by Ribichini et al. [24], measuring the change in serum creatinine from baseline within 12 hours of contrast exposure significantly predicted the occurrence of CIN and was associated with renal damage after 30 days [24]. However, relying on serum creatinine for the diagnosis of renal injury has its own limitations, as discussed above; therefore, specific biomarkers that could enable early diagnosis of CIN are desirable. Recently, it has been proposed to classify these biomarkers into two groups: (1) those reflecting a change in renal function (e.g., cystatin C) and (2) those representing renal damage (e.g., plasma or urine neutrophil gelatinase-associated lipocalin (NGAL), interleukin-18) [25]. Briguori et al. [26] reported that an increase in cystatin C levels of ≥10% at 24 h after PTCA had 100% sensitivity and 85.9% specificity for the prediction of CIN [26].
Another condition often encountered after exposure to CM, especially in catheter-related studies, is cholesterol embolism syndrome (CES). It can be challenging to distinguish the two conditions clinically; however, doing so is important, as they have different treatment approaches. CES is usually a rare entity, and renal impairment typically develops gradually over 3-8 weeks after the procedure. It is a multisystem disease caused by dislodgement of cholesterol crystals from atherosclerotic plaque and occlusion of peripheral arterioles. Other peripheral signs that can help clinch the diagnosis include livedo reticularis, petechiae, digital gangrene, splinter hemorrhages and Hollenhorst plaques on ophthalmological examination. Laboratory studies may show peripheral eosinophilia, hypocomplementemia and a high erythrocyte sedimentation rate. The spectrum of presentation is varied, ranging from mild and asymptomatic to life-threatening complications. CES is associated with significantly worse long-term renal effects than CIN and is suspected when multiorgan dysfunction occurs following coronary angiography [27]. Pathophysiology The kidneys are particularly susceptible to ischemic injury given their delicate microvascular circulation, which is particularly vulnerable to systemic and local hypoperfusion. The pathogenesis of CIN is multifactorial and still not completely understood. An interplay of factors, including medullary hypoxia, oxidative stress, imbalance of renal vasoconstrictive and vasodilatory mediators, changes in renal perfusion and direct tubular toxicity of CM, has been suggested [28-30]. The two main mechanisms responsible are direct tubular toxicity of CM and changes in renal microvascular hemodynamics. Recently, high end-diastolic pulmonary arterial blood pressure and left anterior descending artery lesions have been independently associated with the development of CIN, pointing toward a role of hemodynamics and cardiac dynamics in its pathogenesis [31].
Effect of CM on Renal Tubules CM is freely filtered at the renal glomerulus and increases tubular osmolality. It has direct cytopathological effects on tubular cells, affecting energy metabolism and impairing intracellular transport. Histopathologically, these changes consist of tubular cell vacuolization and necrosis, and are termed osmotic nephrosis. There has been conflicting evidence to support precipitation of intratubular proteins triggered by CM. It has in fact been proposed that CM-induced cellular injury, rather than hypoxia or hypoperfusion, might be the inciting event that triggers the cascade responsible for CIN. The osmolality, viscosity and ionic properties of CM contribute to its nephrotoxicity. High-osmolar CM have been shown to affect erythrocyte deformability, leading to stacking of blood cells and impaired blood flow in animal models. Another reported mechanism is mitochondrial dysfunction and release of catalytic iron, which serves as a catalyst in oxidative reactions leading to the production of free radicals [28-30]. Effects on Renal Microvascular Hemodynamics Animal models have suggested that intra-arterial contrast infusion results in biphasic renal perfusion, with initial transient vasodilation followed by sustained, prolonged vasoconstriction. Renal medullary perfusion and oxygen tension are lower than in the outer renal cortex under normal physiological conditions, and the thick ascending loop of Henle in the outer renal medulla has a high oxygen requirement, exacerbating relative medullary hypoxia. CM further aggravates this hypoxia by causing osmotic diuresis, leading to increased sodium transport to the thick ascending limb and subsequently increased oxygen extraction. Total medullary hypoxia is thus a combination of the increased oxygen demand of tubular cells and changes in regional renal microcirculation [32,33].
Subclinical CIN can occur in the majority of patients following contrast exposure; however, it goes undetected because healthy individuals have intact tubular repair mechanisms that prevent clinically significant renal damage. In contrast, in patients with comorbidities such as baseline renal impairment and diabetes mellitus, even an average dose of contrast can have clinical implications in the setting of poor reparative mechanisms and a reduced baseline of functional nephrons [34]. Risk Factors Various risk factors have been described that increase the risk of developing CIN after exposure to iodinated CM. These can be broadly divided into intrinsic (patient-related) and external (procedural/contrast-related) factors. Among the patient-related factors, baseline renal impairment (GFR <60 mL/min per 1.73 m2) and diabetes mellitus are the two important independent risk factors, and they commonly coexist in the vascular patient population [7,35]. However, recent analyses by McDonald [16] and Davenport et al. [36] found an increased incidence of CIN in diabetes only if renal function is compromised (GFR <30 mL/min/1.73 m2) [16,36]. Studies have reported a significant risk of CIN if the baseline serum creatinine concentration is ≥1.3 mg/dL in men and ≥1.0 mg/dL in women, mostly equivalent to an eGFR <60 mL/min/1.73 m2 [37]. In light of recent studies, the European Renal Best Practice (ERBP) and Kidney Disease: Improving Global Outcomes (KDIGO) guidelines suggest that the threshold at which the actual risk of CIN increases could be lowered to 45 mL/min/1.73 m2 [38]. Other established risk factors include advanced age (≥70 years), dehydration, anemia, vascular disease, hypertension, coronary artery disease, congestive heart failure, smoking and concomitant use of other medications, including metformin, non-steroidal anti-inflammatory drugs (NSAIDs), diuretics or calcium channel blockers.
It was long thought that patients with multiple myeloma have an increased risk of CIN secondary to precipitation of myeloma proteins in the renal tubules. However, recent studies propose that contrast studies can be safely performed in myeloma patients with normal renal function provided there is no dehydration. Also, studies have reported acute urate nephropathy in patients with hyperuricemia, particularly leukemic patients on chemotherapy, secondary to the uricosuric property of CM; however, recent studies have failed to show an independent association between the two [21,39]. Extrinsic or contrast-related factors include the type of CM, route of administration, dose of CM and number of contrast exposures. Procedure-related factors may include urgent or emergent procedures, use of an intra-aortic balloon pump, delayed reperfusion and the nature of the procedure [12,21,39]. Various risk scores have been proposed to stratify the risk of CIN. One, reported by Mehran et al. [6], includes eight variables: age >75 years, anemia, hypotension, diabetes mellitus, chronic congestive heart failure (CHF) or acute pulmonary edema, chronic kidney disease, use of an intra-aortic balloon pump and increased volume of CM. A total risk score ≥16 was associated with a 57% risk of CIN, with 13% of patients requiring dialysis [40]. Another simple risk score that has shown clinically significant predictive value for CIN is the ACEF score. It uses three variables, namely age, creatinine level and ejection fraction, and was developed for patients undergoing coronary angiography [41,42]. Iodinated CM Type of Iodinated CM: Commonly available CMs are benzoic acid derivatives with three iodine atoms. High-osmolar CM (HOCM), with osmolality up to eight times that of plasma, were frequently used in the past and had a higher incidence of CIN.
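The ACEF score mentioned above combines its three variables in a simple ratio. In its commonly cited form, the score is age divided by ejection fraction (in %), plus one point when serum creatinine exceeds 2.0 mg/dL; that exact cut-off is an assumption taken from the usual description of the score, not stated in this review:

```python
def acef_score(age_years, ejection_fraction_pct, serum_creatinine_mg_dl):
    """ACEF risk score sketch: age / ejection fraction (%),
    plus 1 point when serum creatinine is above 2.0 mg/dL (assumed cut-off)."""
    score = age_years / ejection_fraction_pct
    if serum_creatinine_mg_dl > 2.0:
        score += 1.0
    return round(score, 2)
```

For example, a 70-year-old with an ejection fraction of 50% and normal creatinine scores 1.4; elevated creatinine would add a further point.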
To decrease the osmolality, several approaches were considered, including the production of non-ionic agents, which resist dissociation in solution, or the production of double benzoic acid rings. Low-osmolar (LOCM) and iso-osmolar CM (IOCM) are commonly used nowadays and are associated with low nephrotoxicity [43]. Table 1 describes the properties of CM. The adverse effects of CM are mainly related to its osmolality, ionicity, viscosity and iodine content. Viscosity, rather than osmolality, is responsible for vascular resistance [44]. Although IOCM has lower osmolality, its dimeric structure gives it higher viscosity than LOCM. Studies comparing LOCM with IOCM have yielded conflicting results, and in most, no difference was found with respect to renal safety [45]. The KDIGO guideline recommends the use of LOCM or IOCM instead of HOCM; however, owing to a lack of data, there is no recommendation regarding a preference for IOCM over LOCM [46]. The CIN Consensus Working Panel recommends the use of non-ionic IOCM in patients at high risk of CIN undergoing coronary angiography, whereas IOCM or LOCM can be used for intravenous administration of contrast [47]. The current Canadian Association of Radiologists (CAR) consensus recommends choosing the CM based on the route of administration and renal function: LOCM or IOCM should be used if eGFR <45 mL/min for intravenous exposure and GFR <60 mL/min for intra-arterial CM studies [48]. Many radiology departments nowadays use IOCM in high-risk patient populations, especially those with severe renal impairment (eGFR <30 mL/min). Local hospital protocols based on guidelines can be used to decide between IOCM and LOCM. Dose of CM: A number of studies have shown contrast volume to be an independent risk factor for the subsequent development of CIN after coronary or peripheral angiography [39]. Both the volume of contrast and the iodine content need to be carefully considered, taking into account renal safety and optimum imaging.
Iodine content is an important determinant of image enhancement and the degree of attenuation achieved. The iodine content of most commonly used contrast agents ranges from 300 to 370 mg/mL. A low dose of CM, defined as <30-125 mL or <5 mL/kg, is less nephrotoxic and associated with a lower risk of CIN. However, AKI can occur even with small (30 mL) volumes of iodinated CM, ruling out a threshold effect. Some reports suggest performing staged angiography, particularly when a large volume of contrast is anticipated; however, this is solely dependent on the clinical situation [39,43]. Brown et al. [49] proposed a formula for the "maximal allowable contrast (MAC) dose": contrast volume limit (mL) = 5 × body weight (kg) / SCr (mg/dL), equivalently 5 × body weight (kg) × 88.4 / SCr (μmol/L), which correlated with the development of CIN [49]. Whenever possible, the lowest dose of contrast should be used, incorporating the MAC as part of a pre-procedure contrast 'Time-Out', especially in patients with high risk factors when alternative modalities cannot be used. Route of Administration: This is procedure-dependent. Intra-arterial (IA) injection of iodinated contrast is associated with a higher risk of CIN than intravenous (IV) administration. The proposed mechanism behind the greater risk after IA exposure is the amount of contrast directly reaching the kidneys when contrast is administered directly into the abdominal aorta or renal arteries. The risk is lower if contrast is given below the origin of the renal arteries, and minimal after IV administration [4,43]. Recent studies by McDonald et al. [16] and Davenport et al. [36], using propensity matching as a statistical tool to compare the incidence of CIN in patients who received IV contrast for CECT with a control group, found that CIN in this population is rare and, when it occurs, occurs in patients with severe baseline renal impairment (eGFR <30 mL/min/1.73 m2) [16,36].
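The MAC dose formula above is a one-line calculation; a minimal sketch, taking serum creatinine in μmol/L and converting to mg/dL (1 mg/dL = 88.4 μmol/L):

```python
def mac_dose_ml(body_weight_kg, scr_umol_per_l):
    """Maximal allowable contrast (MAC) dose sketch:
    5 mL x body weight (kg) / serum creatinine (mg/dL),
    with creatinine supplied in umol/L and converted first."""
    scr_mg_dl = scr_umol_per_l / 88.4  # unit conversion
    return 5.0 * body_weight_kg / scr_mg_dl
```

For a 70 kg patient with a serum creatinine of 88.4 μmol/L (1.0 mg/dL), the limit works out to 350 mL; a doubled creatinine halves the allowable volume.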
Repeat Contrast Exposure: Multiple contrast exposures within a short period of time is a well-documented risk factor for CIN [40,43,50]. It is believed that an interval of at least 24 hours should be allowed between two contrast exposures, as it takes approximately 20 hours to eliminate contrast from the body, provided the patient has normal renal function. Guitterez et al. [51] demonstrated that renal impairment may persist for 10 days after contrast exposure and that the duration of renal impairment depends on baseline renal function. Therefore, repeat exposure should be delayed in patients with chronic kidney disease to allow time for renal recovery [51]. Alternative Imaging Modalities to Prevent CIN: In patients at high risk of CIN, alternative imaging modalities should be chosen after a proper risk-benefit discussion with the patient. Though contrast-enhanced imaging has its own advantages, particularly for the diagnosis of inflammatory/infectious disease and neoplastic conditions as well as the diagnosis and treatment of vascular diseases, it is important for clinicians to be well aware of alternative modalities such as ultrasonography, carbon dioxide (CO2) angiography and magnetic resonance imaging (MRI) with or without gadolinium (Table 2). In patients at risk of CIN, every effort should be made to avoid iodinated contrast if possible. Non-contrast computed tomography (NCCT) is preferred for the diagnosis of intracranial hemorrhage, fracture or dislocation, and interstitial lung disease [52]. Non-contrast MRI [53], including steady-state free precession (SSFP) MRI, phase-contrast MRI and time-of-flight (TOF) MR angiography, should be employed in appropriate settings as described in Table 2. Gadolinium-based contrast agents can be used in patients at risk of CIN; however, they are best avoided in patients on dialysis and in those with severe renal impairment, especially chronic kidney disease (CKD) stage 4 and 5, because of concerns about nephrogenic systemic fibrosis (NSF) [54,55].
CO2 angiography is another useful modality, often used in various diagnostic and therapeutic interventions when exposure to CM must be avoided to prevent CIN or when the patient is allergic to iodinated contrast [56]. CO2 is a highly soluble, non-allergenic, inexpensive and readily available gas with low viscosity relative to blood. For some procedures, CO2 angiography is actually superior to conventional contrast angiography, offering better visualization of collateral circulation, enhanced vascular filling in central venography, detection of AV shunting in tumors and detection of occult GI bleeding. It can be used with caution in patients with chronic lung disease if sufficient time is allowed for elimination of the gas from the body. However, it has some limitations as well. It is preferably used for infradiaphragmatic arteriography because of concerns about neurotoxicity. Another complication limiting its use is vapor lock, or air trapping, which increases with simultaneous use of nitrous oxide. Further limitations are the cumbersome delivery system and the risk of error in measuring vessel size, which is dependent on injection technique [56]. Special Considerations With Use of Iodinated CM: a) Metformin Use: Metformin has not been recognized as an independent risk factor for developing CIN; however, serious complications such as lactic acidosis can rarely develop in patients on metformin if they subsequently develop AKI after contrast exposure. Whether it should be stopped in all patients anticipating contrast exposure or only in patients with underlying renal impairment, and how early it should be stopped, are still matters of controversy. The monograph for Glucophage (metformin) recommends that it be discontinued at the time of or before the procedure in all patients, withheld for at least 48 hours after the procedure, and restarted only after confirmation of normal renal function [57].
The European Society of Urogenital Radiology (ESUR) (Table 3) and the Canadian Association of Radiologists (CAR) consensus recommend holding metformin at the time of contrast exposure in patients with normal renal function, and 48 hours beforehand in patients with pre-existing chronic kidney disease [48,58]. However, the ACR recommends withholding it for 48 hours before exposure only in patients with marked renal impairment (eGFR ≤30 mL/min/1.73 m2) or those undergoing intra-arterial catheter studies, with no need to discontinue or withhold it if eGFR >30 mL/min/1.73 m2 [59]. [Table 2 excerpt, ultrasonography: indications include detection of gallstones, acute cholecystitis and biliary obstruction; appendicitis in children; hydronephrosis or ureteral stones; acute or chronic female pelvic pain (ovarian cysts, hemorrhagic cysts, ovarian torsion, ectopic pregnancy, pelvic inflammatory disease); differentiation of solid vs cystic adnexal masses with evaluation of internal vascularity; acute scrotal emergencies in males (testicular torsion, epididymo-orchitis); FAST scan for free fluid in unstable trauma patients; assessment of the extracranial carotid arteries for atherosclerotic disease; evaluation of lower-extremity DVT; assessment of thyroid nodules. Limitations/advantages: operator-dependent; non-ionizing, so useful in children and in women of child-bearing age or pregnancy; no intravascular contrast required; better than CT for gallstone detection, ovarian and testicular torsion, solid vs cystic adnexal mass differentiation and assessment of thyroid nodules.] c) Screening baseline renal function: It is general practice to screen baseline renal function prior to contrast exposure; however, this can lead to diagnostic or procedural delays, increased financial cost and patient discomfort.
The ACR recommends screening renal function (e.g., serum creatinine, eGFR) prior to exposure to iodinated CM only in selected patients with high-risk factors, including age >60 years, a history of renal disease, renal cancer or renal surgery, dialysis, a single kidney, hypertension on medical therapy and diabetes mellitus. Patients who do not have the above-mentioned risk factors and require routine intravascular contrast do not require routine screening [61,62]. Choyle et al. [62] listed a few risk factors (pre-existing renal dysfunction, hypertension, gout, proteinuria and prior renal surgery) and found that, if none of these factors was present, 99% of patients had serum creatinine levels below 1.7 mg/dL and 94% had normal renal function [62]. However, the ERBP guidelines recommend measuring baseline serum creatinine in all patients before an intervention and repeating levels 12 and 72 hours after contrast administration in high-risk patients [38]. The decision to repeat serum creatinine after the procedure is a matter of debate. According to the ESUR, renal function after contrast exposure should be assessed within 3 days; however, some studies have documented that in certain patient populations, particularly those with diabetes mellitus and chronic kidney disease, the peak rise in creatinine is actually delayed, which demands prolonged renal function surveillance [58,63]. Prevention of CIN A number of preventive measures have been proposed to reduce the incidence of CIN in the high-risk population; however, there is no clear consensus and the current evidence remains inconclusive [64,65]. It is agreed, though, that identification of the high-risk population is the foremost step.
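The selective, risk-factor-based screening approach described above can be expressed as a simple checklist function. A minimal sketch, in which the risk-factor labels are illustrative strings rather than official ACR terminology:

```python
# Risk factors for which baseline renal function screening is suggested
# (labels are illustrative, paraphrasing the list in the text).
SCREENING_RISK_FACTORS = {
    "renal_disease_history", "renal_cancer", "renal_surgery",
    "dialysis", "single_kidney", "treated_hypertension", "diabetes_mellitus",
}

def needs_baseline_creatinine(age_years, patient_risk_factors):
    """Return True if baseline renal function (serum creatinine/eGFR)
    should be checked before iodinated contrast, per the selective
    approach described above: age > 60 years or any listed risk factor."""
    if age_years > 60:
        return True
    return bool(SCREENING_RISK_FACTORS & set(patient_risk_factors))
```

A 45-year-old with no listed risk factors would not be screened; the same patient with treated hypertension or diabetes would be.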
Once this population is identified, every attempt should be made to minimize renal injury by stopping other nephrotoxic medications, particularly NSAIDs, loop diuretics and metformin, at least 24-48 hours prior to contrast exposure, especially if intra-arterial exposure is anticipated and in patients with severe baseline insufficiency [47,57,58]. Unfortunately, in emergent procedures it may not be possible to stop medications or delay the procedure; when possible, restarting nephrotoxic medication should be delayed for at least 48 hours afterwards, or as deemed clinically appropriate (Figure 1). Figure 1: 68-year-old female with a drop in hemoglobin status post cardiac catheterization one day earlier. Non-contrast-enhanced CT of the abdomen demonstrates a persistent nephrogram phase of enhancement from the prior cardiac catheterization due to contrast-induced nephropathy. Note the hyperdense collection suggestive of an acute hematoma (*) lateral to the right psoas muscle and continuing along the right iliac vessels, the likely cause of the drop in hemoglobin. The red arrow depicts the absence of intravascular contrast media in the current study. The cornerstone of practice to prevent CIN remains volume expansion. It is a low-risk practice that carries few complications. Randomized trials have documented the effectiveness of IV hydration with normal saline in the prevention of CIN. It should be given in all risk categories, particularly in patients with an estimated GFR <60 mL/min/1.73 m2. It is postulated to promote diuresis, increase intravascular volume, induce vasodilation and suppress the renin-aldosterone axis. However, there is no fixed regimen; in general, starting hydration well before exposure and continuing it after the procedure is best practice. Oral volume expansion has shown some benefit; however, the evidence does not support it as being as effective as IV volume expansion.
For isotonic saline administration, most studies suggest that 0.9% saline should be started at a rate of 1-1.5 mL/kg/h, 3-12 h before and continued 6-12 h after contrast media exposure [66,67]. IV administration of sodium bicarbonate has also been studied for prevention of CIN; however, recent data have shown conflicting and mixed results. The theoretical benefit of sodium bicarbonate is alkalinization of tubular fluid and reduced production of free oxygen radicals, along with volume expansion. Most studies have suggested that it should be started at a rate of 3 mL/kg/h for 1 h before and continued at 1 mL/kg/h for 6 h after contrast exposure. Though some studies have documented its beneficial effects, they have been critiqued for being single-center, non-blinded and small [67,68]. N-acetylcysteine (NAC) has also gained popularity for CIN prevention. It is believed to be a direct free-radical scavenger that improves blood flow through nitric oxide-mediated pathways; its antioxidant and vasodilator actions are thought to protect against CIN. Compared with IV saline alone, low-dose NAC combined with IV saline has shown a significant decrease in CIN in patients receiving either IA or IV contrast. It is a commonly employed agent in high-risk patients because of its high tolerability, low cost and potential cardioprotective properties, even in the absence of clear scientific evidence. The KDIGO 2012 guideline recommends using oral NAC with IV fluids in patients at high risk of CIN in light of its low risk profile [38,69]. An algorithm of the preventive measures to be taken is shown in Figure 2. Various other experimental pharmacologic agents have been studied for CIN prophylaxis, including statins, ascorbic acid, tocopherol, dopamine, fenoldopam, theophylline, nebivolol, atrial natriuretic peptide and prostaglandins; however, none of them has achieved clinically significant results [70].
Also, no beneficial effect of experimental procedures such as hemofiltration (HF) and prophylactic hemodialysis (HD) has been found against CIN [71]; indeed, prophylactic HD was found to be associated with increased CIN [71]. The investigational 'RenalGuard' system, a fluid management device, seems promising apart from concerns about electrolyte abnormalities, and additional studies are needed before it can be implemented clinically [72,73]. Conclusion Clinicians should be well aware of CIN, a potentially serious entity. Since there is no established treatment for CIN, every effort should be made to prevent it by recognizing the at-risk population, weighing the risk-benefit ratio in all patients, optimizing volume status prior to contrast exposure, avoiding simultaneous use of other nephrotoxic agents, using newer generations of CM and using the lowest possible dose.
Finite element level validation of an anisotropic hysteresis model for non-oriented electrical steel sheets This paper presents the finite element level validation of the anisotropic Jiles–Atherton hysteresis model. Numerical analysis of a round rotational single sheet tester is performed using the 2D finite element method. The anisotropic extension of the Jiles–Atherton hysteresis model is coupled with the 2D finite element method. The finite element simulations are performed for the cases when the magnetic field alternates and rotates in the lamination plane. The simulated results for alternating and rotational flux density excitations agree with the measured data. The measured data used in this paper correspond to the M400-50A non-oriented silicon steel. Introduction Non-oriented (NO) silicon steels are suitable for electromagnetic applications where the core experiences directionally varying magnetic fields, for example, the stator yoke of a medium-sized induction motor [1]. The magnetic field can alternate and rotate in the stator core, depending on the location. A phenomenological hysteresis model should predict the magnetic fields for different types of excitations: alternating, rotating, and elliptical. The model should also describe magnetic responses under external stress and thermal loadings [2][3][4]. However, a physics-based hysteresis model of such caliber is unavailable [5].
The NO silicon steel possesses a significant level of magnetic anisotropy [6][7][8][9]. Several models of magnetic anisotropy have been proposed for NO silicon steel. In [10], a method based on magnetic energy is proposed to account for the magnetic anisotropy observed in NO silicon steel. The model expresses energy with a Gumbel-type distribution, whose parameters are extracted utilizing the anhysteretic data. The model is suitable for the finite element method (FEM). However, the results it predicts disagree with the measurement data. Instead of using the magnetic energy, [11] employs bicubic splines to represent the anhysteretic characteristics identified from the rotational measurements. The method based on the surface bicubic spline is easier to integrate with the FEM and is more accurate than the energy-density method. Contrarily, [12] expresses magnetic energy using the Fourier series. The coefficients of the Fourier series-based model are identified using anhysteretic data in several measurement directions. [13] shows that the Fourier series-based anisotropic model is well adapted to the FEM by simulating the magnetic field in an interior permanent magnet machine composed of NO silicon steel.
Alternatively, [14] has modified Mayergoyz's approach to model the anisotropy observed in a NO silicon steel sheet. The modification adds the Everett data from the alternating measurements in several directions. The anisotropic generalization of Mayergoyz's model controls the amplitude of the input projections, introducing anisotropy in the predicted results. [15,16] extend the vector version of the Jiles-Atherton (JA) model to include magnetic anisotropy. Their extension considers measurement data in only two directions, the rolling and transverse. The anisotropic extension of the JA model estimates the field strength loci in the intermediate directions based on the parameters from the rolling direction (RD) and transverse direction (TD). Likewise, [17,18] implement anisotropy in the energy-based (EB) hysteresis model to predict variations in alternating and rotational field strength in a NO silicon steel sheet. The anisotropic EB model utilizes parameters identified from the alternating measurements in the RD and TD. Moreover, the pinning field probability density is represented by a univariate Gaussian-type distribution, whose parameters are estimated by fitting the model to the measurement data [17]. Likewise, [19] presents an anisotropic play-type model similar to [17]'s. The model considers parameters from the alternating measurements in all principal axes: the RD, TD, and the silicon steel lamination's thickness. The anisotropic play-type model shows a reasonably good fit for the magnetic field alternating in the RD, TD, and 45° directions. Moreover, [20] proposes an anisotropic play-type model. The model introduces anisotropy in the isotropic play-type model with the parameters identified from several unidirectional alternating measurements. The anisotropic play-type model yields a reasonably good fit with the measured data for NO silicon steel.
A physics-based multi-scale model (MSM) can be employed to predict the anisotropic behavior in a NO silicon steel lamination [21]. However, the MSM uses texture data, requiring complex measurements. Additionally, the MSM is a large minimization problem and computationally heavy. For the FEA, a simplified MSM is considered in [22]. The anhysteretic magnetization identified from the simplified MSM is fed to hysteresis models such as the JA, EB, and Hauser models [23,24]. In [22], the simplified MSM-JA model is coupled with the FEM to study the effect of stress-induced anisotropy in a switched reluctance motor. Moreover, [25] couples the MSM with the EB model to account for the anisotropy observed in grain-oriented silicon steel. Nevertheless, the physics-based MSM-EB or MSM-JA model could be a good choice for predicting magnetic behavior in NO silicon steel [22,25]. Characterization of NO silicon steel is an essential part of the electrical machine design process [26]. An Epstein frame is a standardized device for measuring alternating BH characteristics. However, the Epstein frame is limited to alternating fields. A rotational single sheet tester is employed to measure rotational fields because it allows measurements of vector components using field sensing methods [27]. A round rotational single sheet tester (RRSST) is a popular choice as it ensures better rotation of the flux density, utilizing a larger measuring region. Additionally, the RRSST allows measurement of alternating and rotational magnetic flux density with magnitudes as high as 2 T [28,29]. This paper presents the finite element analysis (FEA) of an RRSST sample, a NO silicon steel disk of grade M400-50A.
The FEA considers the anisotropic and isotropic Jiles-Atherton (JA) hysteresis models [30]. The following four cases are considered for the FEA: flux density alternating in the RD, alternating in the 45° direction (ALT45), alternating in the TD, and rotating in the counter-clockwise direction (ROT). The anisotropic JA model (AJAM) utilizes the anhysteretic magnetization from alternating BH measurements in seven directions: 0°, 15°, 30°, …, 90° [30]. Moreover, the JA model's parameters vary as a function of the magnitude and polar direction of the flux density [31]. The unidirectional alternating BH measurements are used to estimate the JA model's parameters. In contrast, rotational measurements are used for validation (see Table 1). Anisotropic JA model This paper considers the AJAM explained in [30], an improved extension of Bergqvist's vector JA model [32]. The model equations are re-summarized in the following. Differential susceptibility Bergqvist's vector JA model defines the differential susceptibility [15]:

χ = [I − α(c χ_an + (1 − c) χ_irr)]⁻¹ (c χ_an + (1 − c) χ_irr),  (1)

where I is the identity matrix, χ_an and χ_irr are the differential anhysteretic and irreversible susceptibilities, α ≥ 0 is a parameter that represents interdomain coupling, and c ∈ [0, 1] is another parameter that quantifies the reversible processes associated with the bowing of domain walls (c = 1 corresponds to a completely reversible process) [33,34]. The irreversible susceptibility in (1) is expressed using an auxiliary vector χ_f = (M_an − M)/k as follows:

χ_irr dH_eff = χ_f (ê_f ⋅ dH_eff) if ê_f ⋅ dH_eff > 0, and 0 otherwise,  (2)

where ê_f = χ_f/‖χ_f‖, k ≥ 0 is a parameter associated with the pinning of domain walls, M_an represents the anhysteretic magnetization, and H_eff = H + αM is the effective field strength. The condition in (2) reflects the assumption that irreversible changes occur only when the field H_eff is incremented in the direction of (M_an − M).
Differential reluctivity The FEM based on the magnetic vector potential formulation uses the differential reluctivity [35]:

ν_d = μ_d⁻¹ = (1/μ₀)(I + χ)⁻¹,  (3)

where μ₀ is the permeability of free space, μ_d represents the differential permeability, and χ is the differential susceptibility given by (1). Anhysteretic magnetization The anhysteretic magnetization for the isotropic case is usually expressed as [35]

M_an = M_an(H_eff) H_eff/‖H_eff‖,  (4)

where H_eff = ‖H_eff‖ is the norm of the effective field, and M_an = M_an(H_eff) describes the anhysteretic curve identified from the unidirectional alternating BH loop. The differential anhysteretic susceptibility is expressed as χ_an = ∂M_an/∂H_eff (5). In this paper, M_an defined in (4) is identified by averaging the unidirectional alternating BH characteristics measured in seven directions. Because the measurement is performed at an excitation frequency of 50 Hz, the hysteretic field strength H is extracted by subtracting the eddy-current loss field from the measured field strength H_meas:

H = H_meas − (σd²/12) dB/dt,  (6)

where σ and d represent the electrical conductivity and thickness of the NO silicon steel sample. The derivative on the right-hand side of (6) is evaluated using the central difference method. After that, the phase shift between H and B is neglected, meaning H and B are projected in the reference direction, which is the polar direction of the vector B. This process is performed for all the measurement directions. The field strength is averaged because the measurement is B-controlled [31]:

H_avg,j = (1/7) Σ_{k=1}^{7} H(B_j, θ_k),  (7)

where θ_k = (k − 1)·15° is the polar direction of B, j = 1, 2, 3, …, N represents the data point index, and N is the total number of data points in one excitation cycle. Thus, M_an is obtained by averaging the ascending and descending branches of the major B(H_avg) loop [36].
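The extraction of the hysteretic field in (6) can be sketched in a few lines. The waveforms, conductivity and thickness below are illustrative values standing in for the measured data, not the paper's.

```python
import numpy as np

def hysteretic_field(h_meas, b, dt, sigma, d):
    """Subtract the classical eddy-current field (sigma*d^2/12)*dB/dt
    from the measured field strength, cf. Eq. (6). dB/dt is evaluated
    with central differences on a periodic (one-cycle) signal."""
    dbdt = (np.roll(b, -1) - np.roll(b, 1)) / (2.0 * dt)
    return h_meas - (sigma * d**2 / 12.0) * dbdt

# Synthetic example: a sinusoidal B with a lagging measured H
f, n = 50.0, 400
t = np.arange(n) / (n * f)                    # one 50 Hz period, 400 steps
b = 1.5 * np.sin(2.0 * np.pi * f * t)         # T
h = 300.0 * np.sin(2.0 * np.pi * f * t - 0.3) # A/m
sigma, d = 2.2e6, 0.5e-3                      # S/m and m, illustrative values
h_hyst = hysteretic_field(h, b, t[1] - t[0], sigma, d)
```

The same routine would be applied per measurement direction before the seven-direction averaging of (7).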
The anhysteretic magnetization follows a more generic expression for the anisotropic case:

M_an = M_an(H_eff, φ),  (8)

where φ is the polar angle of H_eff. The components of the anhysteretic magnetization (8) could be modeled using the multi-scale approach [21]. However, the multi-scale approach is a large minimization problem and computationally heavy. Analytical representation of the anhysteretic characteristic is possible with the help of transcendental functions [25]. Nevertheless, to ease the computational burden, this paper applies bicubic splines [30]. Explicitly, in each rectangle k of the (H_eff, φ) grid,

M_an,x(H_eff, φ) = Σ_{i=0}^{3} Σ_{j=0}^{3} [K_x]_{ij} H_eff^i φ^j,  (9)

M_an,y(H_eff, φ) = Σ_{i=0}^{3} Σ_{j=0}^{3} [K_y]_{ij} H_eff^i φ^j,  (10)

where K_x and K_y are 4 × 4 coefficient matrices whose elements are identified from the anhysteretic M(H) characteristics (see [30, Fig. 2]). The source code available in [37] is used to obtain the elements of K_x and K_y. The components of M_an are evaluated from (9) and (10) using a polar-to-Cartesian transformation (11). Directional variation of JA parameters The following equation describes the directional variations of the JA model's parameters [31]:

p(B, θ_B) = p_x(B) cos² θ_B + p_y(B) sin² θ_B,  (12)

where p_JA = {α, c, k}, p_x = {α_x, c_x, k_x}, p_y = {α_y, c_y, k_y}, and B and θ_B represent the norm and polar direction of B. The parameters are identified from the unidirectional alternating BH measurements in only two directions, the RD and TD. The parameters on the right-hand side of (12) are represented with a piece-wise linear polynomial along the magnitude of the flux density (see [30, Fig. 5]). Finite element modeling This section considers a magnetostatic field problem in a domain Ω ∈ R². The domain is a collection of linear and nonlinear subdomains: Ω = Ω_L + Ω_N. The spatial variation of the magnetic field in Ω is described by the following partial differential equation:

∇ × (ν ∇ × A) = J_s,  (13)

where J_s = J_s(x, y) ẑ is the source current density, usually imposed on the subdomain Ω_L. A hysteretic material law H = ℋ(B) links H and B in Ω_N, whereas, in Ω_L, a linear relationship H = ν₀ B, where ν₀ = 1/μ₀, exists between them.
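The bicubic-spline representation of the anhysteretic surface M_an(H_eff, φ) can be illustrated with an off-the-shelf interpolator. The Langevin-type curve and its direction-dependent parameter below are stand-ins for the measured seven-direction data, not the identified model from the paper.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Tabulated anhysteretic magnetization M_an(H_eff, theta): a Langevin-type
# curve whose knee parameter varies with direction (harder along the TD),
# standing in for the measured seven-direction data.
h = np.linspace(1.0, 5000.0, 60)             # effective field, A/m
theta = np.deg2rad(np.arange(0, 91, 15))     # 0..90 deg in 15 deg steps
a = 150.0 * (1.0 + 0.4 * np.sin(theta)**2)   # direction-dependent parameter
H, A = np.meshgrid(h, a, indexing="ij")
M = 1.3e6 * (1.0 / np.tanh(H / A) - A / H)   # Langevin function, A/m

# Bicubic (kx = ky = 3) interpolating spline surface over the grid
spline = RectBivariateSpline(h, theta, M, kx=3, ky=3)
m_45 = float(spline(800.0, np.deg2rad(45.0)))  # M_an at H_eff = 800 A/m, 45 deg
```

Evaluating the spline is cheap inside the FE assembly loop, which is the motivation stated above for preferring splines over the multi-scale approach.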
For any continuous vector potential A = A(x, y) ẑ, the flux density B = ∇ × A, and A satisfies the Coulomb gauge, ∇ ⋅ A = 0. The Galerkin method is employed to solve (13). The equation is multiplied by a weight function w = w ẑ and integrated over the domain Ω, resulting in [38]

(ν ∇ × A, ∇ × w) = (J_s, w) + ⟨(ν ∇ × A) × n, w⟩,  (14)

where (⋅, ⋅) and ⟨⋅, ⋅⟩ denote the integrations ∫_Ω (⋅) dΩ and ∮_∂Ω (⋅) dΓ, and n is the outward-oriented unit vector normal to the boundary of Ω. For simplicity, the contour integral on the right-hand side of (14) is omitted. The boundary conditions will be discussed in a separate section. Solution of the nonlinear field equation by the fixed-point method The following constitutive relation is used to solve (14) by the fixed-point method [39,40]:

H = ν_FP B − M_FP,  (15)

where ν_FP is a reluctivity-like quantity and M_FP is a magnetization-like quantity. Thus, M_FP is obtained using the differential reluctivity (3) at each iteration (16). Moreover, using (15) in (14) results in the linearized system (17). In the FEM, the vector potential is approximated as

A ≈ Σ_{i=1}^{n} a_i N_i,  (18)

where a_i and N_i are the nodal value and shape function associated with node i of the finite element (FE) mesh, and n represents the total number of nodes. Commonly, the shape functions connected to the free nodes are used as the weight functions. This paper adopts triangular elements to discretize Ω, utilizing linear shape functions so that the weight functions are linear basis functions [38]. Eq. (17) is expressed in the following form:

S^i_{t+1} a^{i+1}_{t+1} = f^i_{t+1},  (20)

where S = S_N + S_L is the system matrix, which is usually sparse and symmetric, f^i_{t+1} is the load vector assembled from the source current density and the fixed-point magnetization M_FP, subscripts t+1 and t denote the current and previous time instants, and superscripts i+1 and i denote the current and previous iterates. This paper uses a sparse direct linear solver to solve the system of linear equations (20) [41]. The nonlinear iterations in (20) are stopped if the L1 norm of the change in the solution satisfies

‖a^{i+1} − a^i‖₁ ≤ ε_abs + ε_rel ‖a^{i+1}‖₁, with n_min ≤ i ≤ n_max,  (21)

where ε_abs and ε_rel represent the absolute and relative tolerances, and n_min and n_max denote the minimum and maximum numbers of nonlinear iterations.
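The fixed-point scheme around (15) can be illustrated in zero dimensions, where the "linear solve" collapses to a scalar update. The single-valued H(B) curve below is a hypothetical monotone material law, not the JA model, and nu_fp is a hand-picked constant.

```python
def solve_fixed_point(h_applied, nu_fp, tol_abs=1e-8, max_iter=200):
    """0-D illustration of the fixed-point method: H = nu_fp*B - M_fp,
    cf. Eq. (15). The nonlinearity is collected into M_fp, and the
    'linear problem' reduces to b = (h_applied + m_fp) / nu_fp."""
    def h_of_b(b):
        return 100.0 * b + 50.0 * b**3   # hypothetical single-valued H(B)

    b = 0.0
    for it in range(max_iter):
        m_fp = nu_fp * b - h_of_b(b)           # magnetization-like update
        b_new = (h_applied + m_fp) / nu_fp     # linearized solve
        if abs(b_new - b) < tol_abs:           # stopping criterion, cf. (21)
            return b_new, it + 1
        b = b_new
    return b, max_iter

b, iters = solve_fixed_point(h_applied=1000.0, nu_fp=700.0)
```

The constant nu_fp must be chosen large enough relative to the slope of H(B) for the iteration to contract; in the FE setting this choice trades robustness against iteration count.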
Boundary condition The following boundary condition is applied on the outer boundary Γ of the RRSST sample to set the magnetic flux density to alternate in an arbitrary direction α:

A_b(θ, t) = Â_b sin(ωt) sin(θ − α),  (22)

where A_b is the z-component of the vector potential on the outer boundary Γ, Â_b is the peak amplitude of the vector potential, θ is the angular coordinate of the particular point on the boundary, ω = 2πf is the angular frequency, and f is the frequency in cycles per second. Moreover, the flux density rotates on the sample if the vector potential satisfies the following condition [42]:

A_b(θ, t) = Â_b sin(θ − ωt).  (23)

The flux density's amplitude B̂ is controlled by controlling the value of Â_b. Thus, Â_b = rB̂, where r is the radius of the circular steel sheet, is used in (22) and (23), respectively. Furthermore, considering the Dirichlet boundary condition, (20) is expressed as follows:

S_hh a_h = f_h − S_hb a_b,  (24)

where the additional subscripts h and b denote the free and fixed boundary nodes. FE mesh of the RRSST The layout of the RRSST is detailed in [30, Fig. 4]. Fig. 1 is the 2D FE mesh of the circular steel sample. The mesh is generated using the freely available meshing software Gmsh [43]. The mesh consists of two material domains: air and silicon steel. The elements with yellow edges represent the air domain, whereas those with red and blue edges represent silicon steel. The elements with blue edges distinguish the sensor region from the remaining core. The mesh consists of 29,682 elements, of which 144 are air elements, and the remaining 29,538 elements represent the silicon steel core. There are 14,898 nodes, of which 112 belong to the outer boundary, and the remaining 14,786 are internal free nodes. Estimation of 𝑩 and 𝑯 The magnetic flux density in the sensor region (see Fig.
1) is obtained by using the following expression:

B_x = (Ā_1 − Ā_2)/w,  B_y = (Ā_3 − Ā_4)/w,  (25)

where Ā_i = (1/S_i) ∫_{S_i} A dS denotes the average value of the z-component of the vector potential over hole i, S_i is the cross-sectional area of the i-th hole, and w is the center-to-center distance between holes aligned in the RD or TD. Note that the holes with index i = 1 and 2 are aligned in the TD, and the holes with index i = 3 and 4 are aligned in the RD. Likewise, the components of the magnetic field strength sensed by the H-coils, which are placed on the surface of the sample, are considered to be the weighted average

H = (1/S) ∫_S H dS,  (26)

where S is the total surface area of the sensor region excluding the holes. Moreover, after estimating B and H from (25) and (26), the hysteresis loss density is evaluated using the following integral [44]:

p = ∮ H ⋅ dB.  (27)

The integral on the right-hand side of (27) is numerically evaluated using the trapezoidal rule. Evaluation of JA parameters at Gauss integration points The surface integral in the 2D FEM is obtained using the Gauss-Legendre quadrature rule [38]. The surface integration is converted to a weighted sum according to the quadrature rule. The parameters of the JA model at time step t are evaluated from the flux densities at the two previous steps, B_{t−1} and B_{t−2}, at the integration point (IP) (28), where p_JA = {α, c, k}. This paper considers only one IP, the centroid of the reference triangle. The R-squared measure of goodness of fit The R-squared (R²) is used as a quantitative measure of the goodness of fit between the predicted and measured magnetic data. The following equation defines R²:

R² = 1 − [Σ_{j=1}^{N} (Y_meas,j − Y_simu,j)²] / [Σ_{j=1}^{N} (Y_meas,j − Ȳ_meas)²],  (29)

where Y_meas and Y_simu denote the measured and predicted values, Ȳ_meas is the mean measured value, and N denotes the total number of data points. For the RRSST, the goodness of fit is evaluated by setting Y = {B_x, H_x, B_y, H_y} in (29). For an exact match between the predicted and measured results, R² = 1. Thus, a positive value of R² close to 1 could be considered a good fit. Results and discussion This section presents the results of the 2D FEA of the RRSST sample. Eqs.
(22) and (23) are considered to establish either unidirectionally alternating or rotating magnetic flux density in the circular steel sample. The magnitude of B is controlled by setting B̂ = {0.5, 1, 1.5} T and f = 1 Hz in the equations describing the essential boundary conditions. The FE simulations are performed for the first five periods of the supply frequency. A single period of the supply frequency is discretized into 400 steps. The minimum allowed number of nonlinear iterations is set to n_min = 2. The tolerances are fixed as ε_abs = 10⁻⁵ and ε_rel = 10⁻⁷. Equations (25) and (26) are used to estimate B and H in the sensor region of the RRSST sample. The last (fifth) period of the magnetic field solution is used to obtain the hysteresis loss density. The results of the FE simulations are shown in Figs. 2-10. The following subsections discuss the results. 𝑩 Alternating in the RD Fig. 2 shows the result when the magnetic flux density alternates parallel to the RD. The isotropic JA model (IJAM) considers BH characteristics averaged over seven directions, meaning the prediction would be closer to the magnetic characteristics in the 45° direction. Because the field strength is higher for the isotropic case, the losses would also be higher for the isotropic case than the anisotropic one. Fig. 3 compares the measured and simulated alternating BH characteristics in the RD. The results of the AJAM agree with the measured data. However, some differences are seen in the results of the IJAM. Clearly, the IJAM underestimates the BH characteristics when the flux density alternates in the RD. 𝑩 Alternating in the 45° direction Fig. 4 shows the result when the magnetic flux density alternates in the 45° direction. The differences between the results are difficult to see from the field distribution. The hysteresis loss distribution shows noticeable differences. The losses are higher for the AJAM than for the IJAM. Fig.
5 compares the measured and simulated BH loci. The BH characteristics predicted by the AJAM agree better with the measurement data than the IJAM's. However, notable differences are seen at high magnitudes of the flux density. 𝑩 Alternating in the TD Fig. 6 shows the result when the magnetic flux density alternates in the TD. Because the TD is a hard direction of magnetization, the magnetic field strength is higher for the AJAM than the IJAM. Consequently, the losses are higher for the anisotropic case. Fig. 7 compares the measured and simulated alternating BH characteristics. The AJAM's results are in good agreement with the measured data. In contrast, the IJAM's results differ significantly from the measured data. Thus, the IJAM underestimates the BH characteristics if the flux density alternates in the TD. 𝑩 Rotating in the counter-clockwise direction Fig. 8 shows the result when the magnetic flux density rotates in the counter-clockwise direction. The field quantities rotate at each step, so it may not be intuitive to compare the results from two different models at a single time instant. Nevertheless, the hysteresis losses are evaluated from a full cycle of the input excitation, so comparing the isotropic and anisotropic models is more intuitive. The distribution of the hysteresis loss density is more pronounced for the AJAM than the IJAM (see Figs. 8(a) and 8(b)). The presence of easy and hard magnetization directions leads to different amounts of losses in different directions around the perimeter of the holes. The rest of the sample experiences a uniform loss density distribution. Fig. 9 compares the B_x(B_y), H_x(H_y), B_x(H_x), and B_y(H_y) characteristics. The results predicted by the AJAM agree with the measured data. However, notable differences are seen between the measured and simulated data at high magnitude levels. Conversely, the IJAM's prediction differs significantly from the measured data. Figs. 9(e)-9(h) compare the measured B_x(H_x) and B_y(H_y) characteristics. The results of the AJAM agree with the measured data until 1 T. However, the differences are significant at 1.5 T. Contrarily, the IJAM's results differ considerably from the measured data.

Comparison of R-squared coefficient Tables 2 and 3 show the R² values for the FEA results of the IJAM and AJAM. Note that the R² values are tabulated for the components of H. For the components of B, R² = 0.9999 in all cases, so it is not tabulated. The R² values for the AJAM are higher than those for the IJAM, meaning the predictions are better for the AJAM. The R² values for the IJAM decrease with an increase in excitation magnitude, signifying that the predictions worsen at high flux density excitations. Comparison of hysteresis losses Fig. 10 compares the hysteresis losses for four different types of input excitations. The IJAM should yield identical magnetic characteristics for alternating input excitations, which means the losses should also be the same. Thus, according to the result presented in Fig. 10(a), input excitations along the RD, ALT45, and TD yield similar losses. In contrast, for the anisotropic model, alternating losses depend on the direction of the input excitation. As shown in Fig. 10(b), the losses produced by the field alternating in the RD are the lowest, followed by ALT45 and the TD. If we now compare rotational losses, we see that the differences are negligible until 1 T, and at 1.5 T, the AJAM yields higher losses in the RRSST sample. Simulation time and iterations The FE simulations are performed on an Intel® Core™ M-5Y51 @ 1.10 GHz with 8 GB RAM in a GNU/Linux environment. An in-house program written in the C programming language is used for the simulations. The information related to the simulation time and the average number of nonlinear iterations is given in Tables 4 and 5. The FE simulation that considers the IJAM spends about one second per step on average, whereas the simulation times are on average 10% higher for the AJAM. Nevertheless, the average numbers of nonlinear iterations are the same.

Fig. 1. 2D FE mesh of the RRSST sample. The sensor region includes four holes that are drilled to accommodate the B-coils. The radius of the circular steel sample r = 39 mm, and the hole radius r_hole = 0.4 mm (see [30, Fig. 4]). A zoomed section on the right-hand side depicts the triangular mesh inside the hole that is aligned in the RD.
Fig. 2. Distribution of the magnetic field strength and flux density at time instant t = 4.25 s, and time-averaged hysteresis losses in the RRSST sample. The magnetic flux density alternates in the RD. (a), (c) and (e) FEM coupled with the IJAM. (b), (d) and (f) FEM coupled with the AJAM.
Fig. 3. Measured and FE simulated BH loops captured in the sensor region of the RRSST sample. The magnetic flux density alternates in the RD. (a) FEM coupled with the IJAM. (b) FEM coupled with the AJAM.
Fig. 4. Distribution of the magnetic field strength and flux density at time instant t = 4.25 s, and time-averaged hysteresis losses in the RRSST sample. The magnetic flux density alternates in the 45° direction. (a), (c) and (e) FEM coupled with the IJAM. (b), (d) and (f) FEM coupled with the AJAM.
B. Upadhaya et al.
Fig. 5. Measured and FE simulated B_x(H_x) and B_y(H_y) loci obtained from the sensor region of the RRSST sample. The magnetic flux density alternates in the 45° direction. (a) and (c) FEM coupled with the IJAM. (b) and (d) FEM coupled with the AJAM.
Fig. 6. Distribution of the magnetic field strength and flux density at time instant t = 4.25 s, and time-averaged hysteresis losses in the RRSST sample. The magnetic flux density alternates in the TD. (a), (c) and (e) FEM coupled with the IJAM. (b), (d) and (f) FEM coupled with the AJAM.
Fig. 10. FE simulated hysteresis losses in the RRSST sample for the cases when the magnetic flux density alternates in the RD, 45°, TD, and rotates in the counter-clockwise direction. (a) FEM coupled with the IJAM. (b) FEM coupled with the AJAM.
Table 1 Magnetic measurements for the parameter identification and model validation.
Table 4 FE simulation time and nonlinear iterations for the IJAM.
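The loss integral (27) and the R² measure (29) used to post-process the simulations can be sketched as follows. The elliptical B-H loop is a synthetic test case with a known enclosed area, not measured data.

```python
import numpy as np

def loss_density(h, b):
    """Hysteresis loss density over one closed cycle, p = closed integral
    of H . dB, cf. Eq. (27), via the trapezoidal rule. h and b are (n, 2)
    arrays of field components over one period (first point not repeated)."""
    db = np.roll(b, -1, axis=0) - b              # forward increments, closing the loop
    h_mid = 0.5 * (h + np.roll(h, -1, axis=0))   # trapezoidal midpoint values
    return float(np.sum(h_mid * db))             # J/m^3 per cycle

def r_squared(y_meas, y_sim):
    """Goodness of fit, cf. Eq. (29): 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_meas - y_sim) ** 2)
    ss_tot = np.sum((y_meas - np.mean(y_meas)) ** 2)
    return 1.0 - ss_res / ss_tot

# Elliptical B-H loop with known enclosed area pi * H0 * B0 * sin(phi)
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
h = np.stack([300.0 * np.sin(t), np.zeros_like(t)], axis=1)       # H leads B
b = np.stack([1.5 * np.sin(t - 0.3), np.zeros_like(t)], axis=1)
p = loss_density(h, b)   # ≈ pi * 300 * 1.5 * sin(0.3)
```

With 400 points per cycle, as in the simulations above, the trapezoidal rule recovers the analytic loop area to well under 1 J/m³.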
Using unique ORFan genes as strain-specific identifiers for Escherichia coli Background Bacterial identification at the strain level is a much-needed but arduous and challenging task. This study aimed to develop a method for identifying and differentiating individual strains among multiple strains of the same bacterial species. The set used for testing the method consisted of 17 Escherichia coli strains picked from a collection of strains isolated in Germany, Spain, the United Kingdom and Vietnam from humans, cattle, swine, wild boars, and chickens. We targeted unique or rare ORFan genes to address the problem of selective and specific strain identification. These ORFan genes, exclusive to each strain, served as templates for developing strain-specific primers. Results Most of the experimental strains (14 out of 17) possessed unique ORFan genes that were used to develop strain-specific primers. The remaining three strains were identified by combining a PCR for a rare gene with a selection step for isolating the experimental strains. Multiplex PCR allowed the successful identification of the strains both in vitro in spiked faecal material and in vivo after experimental infection of pigs and recovery of bacteria from faecal material. In addition, primers for qPCR were developed, and quantitative readout from faecal samples after experimental infection was also possible. Conclusions The method described in this manuscript, using strain-specific unique genes to identify single strains in a mixture of strains, proved efficient and reliable in detecting and following individual strains both in vitro and in vivo, representing a fast and inexpensive alternative to more costly methods. Supplementary Information The online version contains supplementary material available at 10.1186/s12866-022-02508-y.
Background The tracing of microbes in complex biological systems is indispensable for answering many scientific questions in applied, clinical, and environmental microbiology. Bacterial identification at the strain level is highly challenging, since closely related bacteria may have similar morphologic and physiologic profiles, and strains belonging to the same bacterial species, or even to the same bacterial family, are indistinguishable solely by morphological methods [1]. Techniques to identify strains have evolved and advanced in the past decades, but all available methods have limitations and flaws, and none of them is 100% reliable or accurate [2]. Differentiation of bacterial strains based on their genome information has become the preferred approach due to its excellent resolution, high reliability, and easy availability [3]. Genotyping methods can be grouped into three major categories [3]. The first is based on DNA fragment patterns, in which amplification and/or enzymatic digestion of bacterial DNA is followed by electrophoretic resolution of differently sized fragments, the pattern of which serves as a specific species or strain identifier. For amplicon-based classification, the DNA sequence of a reference genome has to be at least partially known. Second, DNA-hybridisation systems deploy nucleic acid-based probes, labelled fragments of known sequence complementary to their corresponding targets, which are detected after probe binding. Third, DNA sequence-based genotyping, the most powerful tool currently being used to classify bacteria, utilizes strain-specific variations such as single nucleotide polymorphisms (SNPs) [4] as well as deletion or addition of genetic material [5,6].
These methods all rely on specific differences between individual strains, which have to be at least qualitatively, but preferably quantitatively, detectable within the context of a complex microbial environment, in which one or more strains of the target species may already be present. Escherichia coli is a commensal member of the vertebrate gut, but certain strains, grouped into E. coli pathovars, have acquired virulence genes, mainly located on mobile elements, on multiple occasions resulting in a high degree of genomic flux [7,8]. E. coli has an open pan-genome, i.e., the number of genes it contains (consisting of core and accessory genes) increases with the number of additional genomes sequenced. Many accessory genes are only present in one strain (also known as singletons) or in a few strains [9,10]. Such genes with no known relatives or homologues in species belonging to other lineages are known as orphan or ORFan genes [11]. Improved genotyping and sequencing techniques during the last two decades have led to the discovery of large numbers of ORFan genes present within bacterial genomes [11,12]. New studies analysing the genomes of pathogenic and non-pathogenic bacteria have shown that all genomes of a single species have their own share of specific and unique ORFan genes, which are common to some strains but are not found consistently in all members of the species: this part of the genome is also considered to belong to the "variable or accessory genome" [13][14][15]. Several ORFan genes are lineage-specific, while others can even be strain-specific and seem to contribute to a particular strain's characteristics, including potential pathogenicity [16,17]. These features qualify specific ORFan genes to serve as molecular tracers for bacteria at the strain level. In this study, we describe an ORFan gene-based identification method characterized by the generation and development of PCR primers specific at the bacterial strain level. An extensive E. 
coli library containing 1198 whole-genome-sequenced strains collected by the consortium served as database to identify suitable ORFan genes. Specific regions within these ORFan genes were tested against larger sequence databases to increase the marker's probability of being restricted to a single strain. Specific PCR and quantitative PCR (qPCR) primers derived from the search results were successfully applied in vitro and in vivo in an animal infection experiment. Strain selection One thousand one hundred and ninety-eight E. coli strains were provided by all HECTOR project partners and whole-genome-sequenced using short-read technology (van der Putten, B., Tiwari, S.K., HECTOR consortium, Semmler, T. and Schultsz, C., unpublished data, https://doi.org/10.1101/2022.02.08.479532). The sequencing was executed using Illumina MiSeq (2 × 150 bp, 2 × 250 bp, 2 × 300 bp paired reads) and the Illumina HiSeq 4000 system (2 × 150 bp paired reads). As quality control, adapter sequences and low-quality bases within raw reads were trimmed. For genome assembly and annotation, adapter-trimmed reads were assembled with SPAdes v3.13.1 using read correction [18]. Scaffolds smaller than 500 bp were discarded. QUAST v5.0.0 was used to assess assembly quality using default parameters [19]. Data on the assemblies are given in Supplementary Table S1. Seventeen strains, carrying extended-spectrum beta-lactamase (ESBL) genes, were selected from this collection for an animal study to assess their colonization properties in livestock animals in parallel with an infection approach using a mixture of strains (cocktail) (Table 1). To facilitate experimental strain detection in complex, non-sterile matrices by adding a selective culturing step, all 17 strains were artificially selected for rifampicin resistance. 
To this end, a 50 ml overnight culture of each strain was centrifuged at 4000 x g for 10 min, the pellet resuspended in 1 ml of LB media, and then plated on LB-agar plates containing 100 μg/ml rifampicin (Carl Roth, Karlsruhe, Germany). After overnight incubation at 37 °C, the plates were inspected for colony growth. The rpoB genes of the rifampicin-resistant strains were sequenced and one mutant was selected per experimental strain. ORFan gene identification The DNA isolated from each strain was sequenced using Illumina short read technology (see Supplementary Table S1). The draft genomes were annotated by Prokka v1.13 [20] using a genus-specific blast for E. coli. The pan-genome was constructed at 95% amino acid identity by using Roary v3.12.0 [21]. Genes found in 99% of the strains within our collection were considered to represent core genes and the remaining genes classified as accessory genes. Paralogs were split into different orthologous groups. The strain-specific genes were identified by in-house scripts based on the binary matrix of gene presence or absence obtained from Roary. The nucleotide sequences of these strain-specific genes were extracted for each strain, and their specificity was further confirmed by using BLAST [22]. Each strain-specific gene was scanned against the entire gene pool of the HECTOR strain collection. Genes found in other strains or in more than one copy in the same strain at 90% identity and 90% coverage were discarded. A gene with a single copy present only in one strain was considered as strain-specific in this study. Its corresponding sequence was extracted based on the respective locus-tag. These genes were further examined individually via online BLAST analysis [23] against the GenBank [22] database specific for E. coli in order to assess exclusivity. As databases, the "Standard database" and the "Nucleotide collection (nr/nt)" were selected, and the organism was specified as "Escherichia coli (taxid: 562)". 
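The presence/absence filter described above (a gene is kept as strain-specific only if it occurs as a single copy in exactly one strain) can be sketched in a few lines of Python. The function name and the toy matrix below are illustrative stand-ins, not the authors' in-house scripts or real Roary output:

```python
# Sketch of the strain-specific gene filter, assuming a Roary-style
# presence/absence table loaded as {gene: {strain: copy_count}}.

def find_strain_specific(presence):
    """Return {strain: [genes]} for genes present as a single copy in exactly one strain."""
    specific = {}
    for gene, counts in presence.items():
        carriers = [s for s, n in counts.items() if n > 0]
        if len(carriers) == 1 and counts[carriers[0]] == 1:
            specific.setdefault(carriers[0], []).append(gene)
    return specific

# Toy matrix: a core gene, a shared accessory gene, a multi-copy gene
# (discarded, as in the workflow above), and one strain-specific candidate.
matrix = {
    "uidA":   {"A": 1, "B": 1, "C": 1},   # core gene, present everywhere
    "acc1":   {"A": 1, "B": 1, "C": 0},   # shared accessory gene
    "dup1":   {"A": 2, "B": 0, "C": 0},   # more than one copy: discarded
    "orfan1": {"A": 0, "B": 1, "C": 0},   # strain-specific candidate
}
print(find_strain_specific(matrix))  # {'B': ['orfan1']}
```

In the actual workflow each surviving candidate would then be checked by BLAST against the full strain collection and GenBank, as described above.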
As program selection, the search was optimized for "highly similar sequences (megablast)" and when no results were given, a new search was performed with the "somewhat similar sequences (blastn)" option selected. At the moment of the analysis (between February and June 2019), the GenBank database was composed of 4 representative E. coli genomes, more than 8000 complete E. coli genomes and more than 31,000,000 draft E. coli genomes. If the top results from the BLAST [22] analysis of the GenBank [23] database belonged to known plasmids and fully matched the sequences being compared, they were considered plasmid-derived, and excluded from further analysis. If no hits were found from the search, a more exhaustive search against the non-redundant nucleotide database from GenBank was carried out. Those genes with few (< 10) or no hits were considered ORFans in this analysis. PCR and qPCR primer design The ORFan genes detected in silico were used to design primers. If no "low-hit" ORFans were available for a specific strain, those regions within a selected ORFan gene that, although also identified in other strains, showed a higher sequence variation when blasted against the GenBank database were used to generate the primers. All PCR and qPCR primers were manually designed for each strain according to the following criteria: primer length and melting temperature, avoidance of dimers, hairpins, and self-complementarity. Specificity verification was performed using the tool Primer-BLAST [23]. The PCR primers were designed so that four multiplex PCR reactions could be performed in order to qualitatively (i.e., presence/absence from a given sample) trace all 17 experimental strains with a minimum number of reactions. For this, melting temperatures of the primers and sizes of the products within a multiplex were selected so that melting temperatures did not differ by more than 1 °C between primer pairs and product sizes differed by at least 100 bp from each other, if possible. 
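The multiplex design constraints just stated (melting temperatures within 1 °C across all pairs in a multiplex, product sizes at least 100 bp apart) can be expressed as a simple pairwise compatibility check. The primer-pair names, melting temperatures and product sizes below are invented for illustration and do not correspond to the primers in Table 2:

```python
# Hypothetical check of the multiplex rules described above.
# Each primer pair is a (name, melting_temp_C, product_size_bp) tuple.
from itertools import combinations

def multiplex_compatible(pairs, max_tm_diff=1.0, min_size_gap=100):
    for (n1, tm1, s1), (n2, tm2, s2) in combinations(pairs, 2):
        if abs(tm1 - tm2) > max_tm_diff:
            return False  # melting temperatures too far apart
        if abs(s1 - s2) < min_size_gap:
            return False  # bands would be hard to resolve on a gel
    return True

cattle = [("C1", 60.0, 250), ("C2", 60.5, 400), ("C3", 59.8, 550)]
print(multiplex_compatible(cattle))                        # True
print(multiplex_compatible(cattle + [("C4", 63.0, 700)]))  # False: Tm differs by 3 C
```

A pair that fails such a check for every existing multiplex would end up in a catch-all group, mirroring how the "Mix multiplex" below collects strains that do not fit elsewhere.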
The oligonucleotide primers were synthesized by Eurofins Genomics (Ebersberg, Germany). Their sequences are listed in Table 2. Primers for qPCR were optimized for use with the Luna qPCR mastermix (New England Biolabs Inc., Ipswich, MA, USA). Amplicon sizes ranged between 100 and 250 bp, with a GC content from 40 to 60%, a melting temperature not greater than 61 °C with less than 1 °C difference between primers of the same pair, and a primer length of 19 to 25 nucleotides. If possible, the ORFan gene and ORFan gene region used to design the qPCR primers were the same as the ones used for PCR primer design. The oligonucleotide primers were manufactured by Eurofins Genomics (Ebersberg, Germany). Their sequences are listed in Table 3. PCR multiplexes Four multiplexes were needed in order to detect all 17 experimental strains. As much as possible, multiplexes targeted strains originating from the same host. The "Mix multiplex" contained three strains that, due either to fragment size or melting temperature, did not fit in the other multiplexes. Multiplex PCR conditions were optimised following the recommendations published by Zangenberg et al. [24] or the PCR mastermix manufacturer. PCR was performed in a total volume of 25 μl containing 2 μl of purified DNA, 12.5 μl of One-Taq 2x Master Mix with Standard Buffer (New England Biolabs Inc., Ipswich, MA, USA), 9.5 μl of nuclease-free water, and 0.5 μl each of 10 μM forward and reverse primer. The reactions were performed in a Biometra T3 Thermocycler System (Analytik Jena, Jena, Germany) using the conditions specified in Table 4. Gel electrophoresis was performed by using 1.0% all-purpose, high-purity agarose (VWR International, Radnor, PA, USA) gels with 0.25X SERVA DNA stain clear G (SERVA Electrophoresis GmbH, Heidelberg, Germany) in 1X Tris-borate-EDTA buffer (VWR International, Radnor, PA, USA) in a Perfect Blue Gel System (VWR International, Radnor, PA, USA) at 150 V for 1 h. 
Two microliters of amplified DNA were mixed with 4 μl of gel loading dye (New England Biolabs Inc., Ipswich, MA, USA) for analysis. For reference, a Quick-Load Purple 100 bp DNA Ladder (New England Biolabs Inc., Ipswich, MA, USA) was used (bands every 100 bp up to 1000 bp, plus two additional bands at 1200 bp and 1500 bp). Quantitative PCR reaction setup Quantitative PCR conditions were optimised following the qPCR mastermix manufacturer's specifications. Assays for qPCR were performed on a CFX96 Touch Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, USA). Reactions contained a total volume of 20 μl, in which 2 μl of purified DNA were used together with 10 μl of Luna Universal qPCR Master Mix (New England Biolabs Inc., Ipswich, MA, USA), 7 μl of nuclease-free water, and 0.5 μl each of 10 μM forward and reverse primer. Each reaction was performed in triplicate. The cycling conditions included an initial denaturation step of 1 min at 95 °C followed by 40 cycles of 95 °C for 15 s and 60 °C for 30 s. No-template controls (2 μl of nuclease-free water instead of DNA extract) and an internal calibrator control for each strain (2 μl of each strain's purified DNA at a concentration of 10 ng/μl with a known Ct value, ranging between 12 and 14 Ct, used to account for possible variations between plate runs) were performed with each batch of samples tested. The uidA gene encoding a β-glucuronidase specific for E. coli was included in the qPCR assay as housekeeping gene [25]. Specificity and efficiency testing Primer pairs were individually tested with their respective target strain first by simplex PCR and afterwards together in multiplexes. For this, each strain was streaked onto a Gassner agar plate (Sifin, Berlin, Germany) containing ceftiofur at 4 μg/ml (ceftiofur hydrochloride). The efficiency of the qPCR primers was calculated following the recommendations published by Svec et al. [26]. 
Ten-fold dilutions ranging from 10 ng down to 0.0001 ng of DNA were tested in triplicate. The mean of the triplicates was plotted on a logarithmic scale along with the corresponding template concentrations. A linear regression curve was applied to the data points, and the slope of the trend line was calculated. Finally, efficiency was calculated by using the equation E = 10^(−1/slope) − 1. All primer pairs tested showed efficiency values between 90 and 95%. No-template controls did not show any amplification, and internal calibrator values always stayed within the determined Ct value range of 12-14 cycles depending on the respective calibrator used. Tracing of bacterial strains after cocktail infection of piglets For in vivo analysis, pigs were inoculated with a bacterial cocktail containing 17 different strains (Table 1). The animal experiment was approved by the competent authority (State Office for Agriculture, Food Safety and Fisheries of Mecklenburg-Western Pomerania, Rostock, Germany, reference no. 7221.3-1-034/19). Eight German landrace pigs, 42-45 days old and healthy as per veterinary guidelines, were purchased from a conventional pig farm (bhzp Garlitz, Langenheide, Germany) and housed in an environmentally controlled animal facility at the Friedrich-Loeffler-Institut (FLI) on the Isle of Riems, Greifswald. The animals adapted to the environmental conditions for 3 weeks prior to experimental infection. Meanwhile, faecal samples were collected to determine the resistance status of the coliform bacterial population in the intestinal tract of the pigs. Some samples tested positive for ceftiofur resistance, but all samples tested negative for ceftiofur/rifampicin double-resistant bacteria. All inoculation strains were grown individually on Gassner plates containing 4 μg/ml ceftiofur and 50 μg/ml rifampicin and stored at 4 °C. 
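The efficiency calculation described above (a linear regression of mean Ct against log10 of template amount over the ten-fold dilution series, followed by E = 10^(−1/slope) − 1) can be checked numerically. The Ct values below are illustrative, not measured data from the study:

```python
# Sketch of the qPCR efficiency calculation with invented Ct values.
import math

def qpcr_efficiency(amounts_ng, mean_cts):
    """Efficiency from a dilution series via least-squares slope of Ct vs log10(amount)."""
    logs = [math.log10(a) for a in amounts_ng]
    n = len(logs)
    mx, my = sum(logs) / n, sum(mean_cts) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(logs, mean_cts)) / \
            sum((x - mx) ** 2 for x in logs)
    return 10 ** (-1.0 / slope) - 1

# A perfectly efficient assay loses ~3.32 Ct per ten-fold dilution.
amounts = [10, 1, 0.1, 0.01, 0.001, 0.0001]
cts = [13.0, 16.32, 19.64, 22.96, 26.28, 29.60]
print(round(qpcr_efficiency(amounts, cts), 2))  # 1.0, i.e. ~100% efficiency
```

A slope shallower than −3.32 (e.g. −3.5) would yield an efficiency in the 90-95% range reported above.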
The bacterial cocktail was prepared before inoculation, by mixing liquid cultures of all 17 strains at equal numbers (5.88 × 10^8 cells per strain) in order to reach a total of 10^10 bacteria per inoculation dose. Mixtures were gently centrifuged, the media removed, and the bacterial pellets resuspended in 10 ml of physiological saline containing 10% sodium bicarbonate to buffer stomach acid. After re-suspension, cooled individual doses were immediately transported to the animal facility. For inoculation, all animals were lightly sedated intramuscularly with azaperon (Stresnil®, Elanco, Greenfield, IN, USA) using 0.5 ml/20 kg of body weight. The inoculation of the strain cocktail was performed intra-gastrically using a gastric tube (B. Braun, Melsungen, Germany). The animals recovered quickly and were fed immediately after the procedure. Post-inoculation, clinical observation of the animals was performed once per day during the entire experiment. Rectal swabs, in addition to faecal samples from the pen, were collected daily from day 1 to 14 post-infection, and every 2 days from day 15 until day 56 at the end of the experiment. Rectal swabs were suspended in 1 ml of LB medium and allowed to rest at 37 °C for 30 min. The swab wash-offs were serially diluted from 10^−1 to 10^−4 and used for plate spotting on Gassner agar plates containing either no antibiotics or ceftiofur (4 μg/ml) and rifampicin (50 μg/ml). For spotting, 10 μl droplets of the 10^−1 to 10^−4 dilutions were gently spread on each plate in duplicate and plates were left open inside the bench for 1-2 min to allow excess media to be absorbed by the agar. After overnight aerobic incubation at 37 °C, colonies were counted in each droplet. In addition, 100 μl of the suspension from the rectal swabs were plated on Gassner agar plates containing ceftiofur (4 μg/ml) and rifampicin (50 μg/ml). 
After overnight incubation at 37 °C, plates were washed off using 2 ml of LB, and the suspension was used to isolate DNA with a commercial kit (peqGOLD Bacterial DNA Mini Kit, Peqlab, Erlangen, Germany). To collect colonic content and tissue, four animals each were euthanized by intravenous administration of Pentobarbital (Release® 500 mg/ml, WDT) on days 43 p.i. and 56 p.i., respectively. At post-mortem examination, the intestinal tract of each animal was removed and the colon section was separated by a double ligation. After opening, approximately 50 ml of the content was collected and a large piece of intestinal tissue (approximately 2-3 cm) sampled and gently washed to remove any remaining content. One gram of content was weighed, diluted in 9 ml of LB medium and allowed to rest at 37 °C for 30 min. One gram of tissue was weighed, finely chopped, suspended in 9 ml of LB medium and allowed to rest at 37 °C for 30 min. One hundred microliters of each suspension were serially diluted from 10^−1 to 10^−4 and used for plate spotting (see detailed description above) on Gassner agar plates containing either no antibiotics or ceftiofur (4 μg/ml) and rifampicin (50 μg/ml). After overnight incubation at 37 °C, colonies were counted. One millilitre of the initial dilution (10^−1) was used to isolate DNA with a commercial kit (peqGOLD Bacterial DNA Mini Kit, Peqlab, Erlangen, Germany) prior to enrichment. Sample enrichment was performed by adding rifampicin to the initial content suspensions, which were further incubated overnight at 37 °C. The next day, 100 μl of the overnight suspension were plated on Gassner agar plates containing ceftiofur (4 μg/ml) and rifampicin (50 μg/ml). After overnight aerobic incubation at 37 °C, plates were washed off using 2 ml of LB. One millilitre of the suspension was used for generating glycerol stocks and the remaining millilitre was used to isolate DNA with a commercial kit (peqGOLD Bacterial DNA Mini Kit, Peqlab, Erlangen, Germany). 
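As a worked example of the droplet-spotting arithmetic used in the protocol above (10 μl droplets of serial dilutions, colonies counted per droplet), a droplet count can be converted back to CFU per millilitre of the undiluted suspension. The colony count and dilution chosen below are invented for illustration:

```python
# Back-calculation of CFU/ml from a single spotted droplet (assumed numbers).

def cfu_per_ml(colonies, dilution, droplet_ul=10):
    """CFU/ml of the undiluted suspension from one droplet count."""
    droplet_ml = droplet_ul / 1000.0
    return colonies / droplet_ml / dilution

# e.g. 23 colonies in a 10 ul droplet of the 10^-3 dilution:
print(round(cfu_per_ml(23, 1e-3)))  # ~2.3e6 CFU/ml
```

For the faecal and tissue samples above, where 1 g was first suspended in 9 ml of medium (itself a 10^−1 dilution), the result would additionally be multiplied by 10 to obtain CFU per gram.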
Statistical analysis Statistical analyses were performed with GraphPad Prism Software (GraphPad Prism version 9.0.2 for Windows, GraphPad Software, San Diego, California, USA, www.graphpad.com). The one-way ANOVA on ranks (Kruskal-Wallis) test for multiple comparisons was used to determine the significance of differences between ∆Ct (∆Ct = Ct(gene of interest) − Ct(housekeeping gene)) values from strains. Values with p ≤ 0.05 were considered significant. ORFan gene identification A total of 299 strain-specific genes were identified in the whole genome sequences of the 17 experimental strains (Table 5). The number of strain-specific genes per strain was highly variable, ranging from a maximum of 158 in one strain to zero in two strains. Approximately one-third (86) of the 299 strain-specific genes were classified as plasmid-borne and, therefore, not deemed suitable for strain identification. They were not further analysed as to their ORFan status. Nearly all of the strain-specific genes that were not potentially plasmid-encoded (213) turned out to be ORFan genes (206). For the three remaining strains that showed no ORFan genes, a comparison within the 17 strains selected for the bacterial cocktail was made. Genes that were unique among the cocktail strains and showed less than 5 hits with other E. coli in the GenBank search were then used as strain identifiers. These primers were tested with DNA isolated from faecal samples from clinically healthy (non-inoculated) pigs with no positive matches, and were deemed specific enough to be used during the animal experiment. PCR primer specificity and multiplex functionality A primer pair was designed for each experimental strain (Table 2). Only primer pairs that exclusively amplified DNA of the corresponding strain were considered target-specific. 
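The ∆Ct readout defined in the statistical analysis above (∆Ct = Ct(gene of interest) − Ct(uidA housekeeping gene), where a smaller ∆Ct indicates higher relative strain abundance) can be sketched as follows. All Ct values here are invented, and the fold-difference conversion assumes roughly 100% amplification efficiency:

```python
# Sketch of the DeltaCt comparison between two hypothetical strains.

def delta_ct(target_cts, uida_cts):
    """Per-replicate DeltaCt = Ct(target gene) - Ct(uidA housekeeping gene)."""
    return [t - h for t, h in zip(target_cts, uida_cts)]

def mean(xs):
    return sum(xs) / len(xs)

# Invented triplicate Ct values for two strains and their uidA controls.
p3 = mean(delta_ct([18.1, 18.3, 18.0], [15.0, 15.1, 14.9]))  # ~3.13 (abundant)
c2 = mean(delta_ct([27.5, 27.9, 27.6], [15.2, 15.0, 15.1]))  # ~12.57 (scarce)

# At ~100% efficiency, the fold-difference in abundance is 2^(DeltaCt difference).
print(round(2 ** (c2 - p3)))  # ~691-fold more of the abundant strain
```

In the study itself, the per-strain ∆Ct distributions were compared with the Kruskal-Wallis test in GraphPad Prism rather than converted to fold-differences; this conversion is shown only to make the direction of the ∆Ct scale concrete.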
Among the 34 initially designed primers, seven pairs failed (41.2%), either by not amplifying their respective target or by yielding non-specific bands when tested against other strains. Consequently, seven new primer pairs were designed, and two pairs again failed (28.5%) for the same reasons. The third round of primer design yielded specific signals for the last two strains. Combining the primer pairs into multiplexes affected neither their specificity nor their ability to identify the matching strain (Fig. 1). No non-specific bands were seen after multiplexing the primers; however, it was observed that the bands of the smallest fragments in the multiplex (smaller than 150 bp) had reduced intensity in those multiplexes containing a larger number of primer pairs (e.g., Cattle multiplex with six pairs of primers and Pig multiplex with five pairs of primers; see Table 4). During multiplex testing with DNA isolated from pure cultures, the detection limit of the reactions ranged between 1 and 10 ng/μl DNA, which is equivalent to 200 to 2000 genome copies. However, DNA isolated directly from faecal samples spiked with 10^8 bacteria did not yield positive results, indicating that inhibitory substances or the high endogenous background of bacterial and eukaryotic DNA might be problematic in the aforementioned setup. To overcome this obstacle, a selection step was added for experimental strain detection from faecal samples. Since all experimental strains possessed ESBL genes and were rifampicin resistant, a plating step on Gassner agar containing ceftiofur (4 μg/ml) and rifampicin (50 μg/ml) was added to limit strain isolation to the experimental strains and to eliminate the presence of inhibitory faecal substances. This eliminated growth of other endogenous ESBL bacteria that were already present in the animals pre-inoculation, and facilitated specific detection of experimental strains in a multiplex PCR approach using boiled lysates or DNA prepared from the pooled bacteria. 
Monitoring of faecal samples The suitability of the ORFan approach to qualitatively monitor shedding of different E. coli strains by pigs when inoculated with 17 strains simultaneously was assessed by conducting colony counting of resistant E. coli and detection of strains by classical PCR. All experimental strains were confirmed to be present in the initial inoculation cocktail prepared to be given to the animals (Fig. 2). After inoculation of the animals with the bacterial cocktail, faecal samples were collected for 56 days. Results after the first 24 h post-inoculation were highly variable between animals. After 48 h, 12 of the 17 strains were detected in at least one animal, with a minimum of four strains and a maximum of 12 strains detected in the eight animals. The remaining five strains were not detected at all in faecal matter during the experiment. Experimental strain counts, i.e., total colony counts without differentiation of the single strains, remained high up to day 3-4 post-inoculation, after which they slowly declined. A significant number of strains from the inoculum mixture were detectable at day 3 (Fig. 3), with three animals being positive for four strains, one animal for six strains, one animal for nine strains, two animals for eleven strains, and one animal positive for twelve strains. Inoculated bacteria were shed and identified by PCR up to day 21 p.i., when their counts on selective plates had declined to single-digit numbers per gram of faeces or were absent, requiring an enrichment protocol for further detection of experimental strains. By day 29 p.i., only 5-6 different strains were shed, even after enrichment, and by day 53 p.i., only four experimental strains were detected. qPCR system results The qPCR system was used to prove the general suitability of the ORFan approach to compare different E. coli strains quantitatively within complex intestinal porcine microbiomes. 
To this end, strains that gave positive signals in the PCR multiplex setup were further analysed via qPCR for their presence in the dissection sample contents. Delta-Ct (ΔCt) values obtained from the qPCRs showed significant differences in quantities between the strains. The strain classified as P3 (39533) showed the lowest ΔCt values, indicating a high presence in the colonic content. This strain's presence in the samples was significantly different from all other strains tested via qPCR (Fig. 4; Table 6). The strains classified as C2 (R45) and M1 (21225_2#178) showed the highest ΔCt values on average, denoting both strains' lower presence in the intestinal content of the inoculated animals. Also, M1, together with P2 (IMT28138), showed the lowest degree of significance when compared against the other strains tested (Fig. 4; Table 6). Discussion Current strain identification methods based on DNA fragment patterns, including pulsed-field gel electrophoresis (PFGE), restriction fragment length polymorphisms (RFLP), repetitive sequence-based PCR (REP-PCR), Enterobacterial Repetitive Intergenic Consensus PCR (ERIC-PCR), and multiple-locus variable-number tandem repeat analysis (MLVA) [3], cannot be used to characterize mixtures of strains because it is not possible to assign individual bands to their cognate isolate unambiguously. DNA sequencing-based methods cannot resolve individual strains in a mixture unless sequences with differentiating SNPs have been identified for all strains. The analysis would then require prior testing of the animals to ensure the absence of these discriminatory SNPs in the endogenous bacterial population. DNA hybridization-based methods, such as cDNA and oligonucleotide microarrays, permit the detection of individual genes or gene fragments and, consequently, strains, but they also require previous knowledge of strain-specific sequences and have to be individually adapted for each new combination of to-be-detected strains. 
Other strategies, commonly used in infection models, like introducing artificial selection markers, such as antimicrobial resistance genes or genes coding for fluorescent proteins, are of limited feasibility since they allow only a limited number of strains to be introduced simultaneously [27]. The ORFan gene targeting approach utilized herein has the advantage of allowing the introduction of multiple strains simultaneously into an experimental setup, including an in vivo animal trial, as demonstrated. As opposed to existing methods for strain identification, which are either extremely time-consuming, expensive or need specific equipment, the ORFan identification system can be implemented relatively fast and is accessible to everyone with standard laboratory equipment and moderate knowledge of bioinformatics. This method can also be flexibly and easily scaled up, because it only requires the identification of a specific ORFan gene for any novel strain to be introduced. The ORFan gene approach allows the combination with other visualization techniques to expand strain-specific detection to other types of biological samples. Recently, a HiPR-FISH (high phylogenetic resolution - fluorescence in situ hybridization)-based identification technique was described [28]. In the aforementioned study, HiPR-FISH was employed to identify (i) over 1000 E. coli isolates, using artificially introduced barcoded sequences to generate fluorescent probes for individual strain identification and (ii) bacterial genera present in the murine gut microbiome or in human plaque biofilms using 16S rRNA sequences to identify different genera [28]. Using ORFan genes instead of 16S rDNA as targets for such strain-specific fluorescent probes appears feasible. Another possible application could be specific strain detection in tissue sections from infected animals, which would even permit comparative spatial-temporal resolution of multiple strains in a host coupled with respective niche identification. 
The ORFan gene identification workflow pinpointed a large enough number of ORFan genes for most strains to allow the successful selection of specific primers. Previous E. coli phylogenetic studies unveiled that ORFan genes could compose approximately 1% of the bacteria's total core genome, with an increase of approximately 26 genes per new genome sequenced [29]. The average number of 18 specific genes per strain was lower for the set of strains selected here for the animal test, and the respective total number of strain-specific genes was lower for 15 of the 17 strains. An additional complication is that these genes could also be located on plasmids and, thus, subjected to horizontal transmission, disqualifying them as stable markers. In our study, approximately one-third of the strain-specific genes were classified as plasmid-derived, but all affected strains, except one, harboured alternate ORFans for which a specific detection system could be designed. Primer implementation was an iterative process, requiring up to three rounds of sequence selection until all 17 strains could be explicitly identified in multiplex PCR assays. Each round required up to 2 weeks between ORFan gene selection and successful testing of the specific primer pairs. For the three strains for which no ORFan genes were identified, an additional search of their accessory genomes was performed to identify unique genes within the sequence context of the 17 experimental strains. ORFan genes found using this approach were also subjected to the workflow presented in the "Materials and Methods" section to assess exclusivity. As expected, most of the newly identified genes did have more than 10 hits in the GenBank search. For this reason, only nucleotide stretches within a selected ORFan gene that showed a higher degree of sequence variation between the experimental strain's sequence and the sequences available in GenBank were used to generate primers. 
Furthermore, designated cocktail strains were artificially rendered rifampicin-resistant to distinguish them from cephalosporin-resistant E. coli in the endogenous animal microbiota. The combination of these measures was successful for expanding the limits of the ORFan approach, at least for the list of bacteria under study and in the group of animals used. An aliquot from the strain cocktail used to infect the animals was immediately stored at −80 °C after preparation. DNA was extracted, and the four multiplexes performed to corroborate the experimental strains' presence. All 17 strains were detected, confirming their presence in the inoculation cocktail. Their respective band intensities were also similar, indicating that each strain had been added in a similar quantity to the cocktail. Among the 17 strains used to infect the pigs, five strains were not detected in the faecal samples throughout the experiment and were also not detected in the intestinal content at necropsy. Eight strains were only detectable intermittently during the entire experiment, with all of them displaying higher prevalence during the first 14 days p.i. The remaining four strains were consistently detected throughout the entire experiment. The presence or absence of the 12 successfully detected strains was closely monitored during the entire experiment via the multiplex PCRs, demonstrating that the detection system can be used to follow dynamic changes in a strain's presence. All qPCR primers tested showed high efficiency with values ranging from 90 to 95%. Several tests with various samples corroborated that the primers were indeed sufficiently specific to allow the direct use of faecal DNA to detect experimental bacteria from the mixed background of total bacteria (Table 6). Strain presence in gut content from the animals' colon, which had already been demonstrated in the multiplex PCRs, was also seen with qPCR primers. 
The qPCR data indicated colonization differences between experimental strains. Based on the qPCR results, strains with high colonization capacity were also easily re-isolated from the faecal samples, demonstrating that the qPCR results are accurate at monitoring specific strain abundances in the colonic samples. Next-generation metagenome sequencing provides vital information on microbial populations and genetic diversity at all taxonomic levels. A fast and easy but robust and reliable method for individual strain identification based on information derived from whole-genome sequencing has yet to be described. The usage of ORFan genes, specific for individual strains, could be such a valuable tool, as it allows the development of precise PCR markers for tracing the strains in complex mixed-culture experiments. The method has the potential to be applied in multiple ways to foster our understanding of, for example, the population dynamics of closely related strains of a pathogen. A probiotic or any other strain of interest can be evaluated as to its colonization ability, general strain fitness, or zoonotic risk and might guide the development of intervention strategies [3]. This method could also potentially be used as a fast and powerful tool for back-tracking the identification of specific pathogens in the event of an outbreak. At present, whole-genome sequencing is an essential part of outbreak investigations. After acquiring the sequence data of the suspected outbreak agent, the strain detection method presented here could be used for rapid identification of a specific outbreak strain following identification of ORFan genes unique to the specific outbreak agent and the design of strain-specific primers. 
Without the need to amplify a whole set of virulence markers characteristic of an outbreak strain or to isolate the pathogen from multiple samples, ORFan genes may be used for specific-strain identification via PCR, allowing the rapid pre-screening of many different samples to narrow down the potential sources that could have served as the origin or transmission route of the pathogen in the outbreak. In the ensuing second step of outbreak analysis, PCR-positive samples would be subjected to classical approaches involving strain isolation and characterization for unambiguous identification of the culprit strain. This would make it possible to perform large-scale surveys of many different samples to identify potential outbreak sources and clusters. Similar approaches have been used in the past, such as the one described by Bielaszewska et al. [30], where an outbreak strain was successfully identified by detecting a specific virulence gene profile. A related approach was described by Kiel et al. [31], where two pipelines were run simultaneously, comparing Shiga-toxin-expressing E. coli (STEC) genomes versus control genomes and an STEC core proteome versus control proteomes. Lineage- and serotype-specific genes were identified this way and used for monitoring specific STEC strains. Conclusion The method described in this manuscript, using single unique genes to identify specific strains, proved easy to implement and very reliable in identifying and following individual strains and their dynamics in an in vivo animal model of experimental infection, thus representing a fast, inexpensive and reliable alternative to more costly and laborious identification methods. Supplementary Information The online version contains supplementary material available at https://doi.org/10.1186/s12866-022-02508-y. Table S1. Isolate and sequencing metadata for the strains used in the study.
8,178.8
2021-06-25T00:00:00.000
[ "Biology" ]
An IoT Framework for Modeling and Controlling Thermal Comfort in Buildings Humans spend more than 90% of their day in buildings, where their health and productivity are demonstrably linked to thermal comfort. Building thermal comfort systems account for the largest share of U.S. energy consumption. Despite this high energy cost, due to building design complexity and the variety of building occupants' needs, addressing thermal comfort in buildings remains a difficult problem. To overcome this challenge, this paper presents an Internet of Things (IoT) approach to efficiently model and control comfort in buildings. In the modeling phase, a method to access and exploit wearable device data to build a personal thermal comfort model is presented. Various supervised machine-learning algorithms are evaluated to produce accurate personal thermal comfort models for each building occupant that exhibit superior performance compared to a general model for all occupants. The developed comfort models were used to simulate an intelligent comfort controller that uses the particle swarm optimization (PSO) method to search for optimal control parameter values to achieve maximum comfort. Finally, a framework for experimental validation of the newly proposed comfort controller, which works interactively with the HVAC system, is introduced. INTRODUCTION Nowadays, in developed countries, people spend more than 90% of their time in indoor spaces (Höppe and Martinac, 1998; Frontczak and Wargocki, 2011). Most of these indoor spaces are conditioned with different types of HVAC systems that consume about 50% of the primary energy in a building (Pérez-Lombard et al., 2008) to ensure occupant thermal satisfaction (Wagner et al., 2007) and health (Allen et al., 2015).
While the impact of thermal satisfaction on productivity in workplaces is well-established (Leaman and Bordass, 1999; Salonen et al., 2016), there is a misconception that treats air temperature as an accurate indicator of thermal comfort, as opposed to including the variability in each individual's thermal responses. Thus, common practice in buildings is limited to setting universal temperature set-points that may take seasonal changes into account but do not include the human in the control loop. These practices may simply result in a violation of the recommendations of many health organizations, such as the Health and Safety Executive (UK), for establishing the minimum requirement of a reasonably comfortable environment [i.e., at least 80% of the indoor occupants feeling comfortable (Contributors, 2016)]. The conventional comfort model is the Predicted Mean Vote (PMV) model (Fanger, 1970). The PMV model, the most commonly used comfort model and the one adopted in ASHRAE Standard 55, Thermal Environmental Conditions for Human Occupancy (ANSI/ASHRAE Standard 55, 2013), is meant to estimate the average thermal sensation that a group of people would report when occupying a space. It correlates multiple environmental parameters (air temperature, air velocity, relative humidity, and radiant temperature) and personal parameters (metabolism and clothing) with different levels of comfort based on a rating between −3 and 3, where −3 means the body's thermal sensation is very cold and 3 means it is very hot. The PMV value can be directly calculated using a system of highly non-linear and iterative equations. One key challenge of the PMV model is that it cannot be applied to estimate a personal comfort level, because it is built to estimate the statistical average thermal sensation of a large population.
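For concreteness, the iterative PMV calculation referred to above can be sketched from the textbook Fanger equations as standardized in ISO 7730. This is a minimal illustrative implementation, not the authors' code; the damped fixed-point iteration for the clothing surface temperature is our own simplification of the standard's iterative scheme:

```python
import math

def pmv_fanger(ta, tr, vel, rh, met=1.2, clo=0.5):
    """Predicted Mean Vote from the textbook Fanger/ISO 7730 equations.

    ta, tr: air / mean radiant temperature (deg C)
    vel: relative air velocity (m/s); rh: relative humidity (%)
    met: metabolic rate (1 met = 58.15 W/m^2); clo: clothing (1 clo = 0.155 m^2K/W)
    External work is assumed to be zero.
    """
    m = met * 58.15                 # metabolic rate, W/m^2
    icl = clo * 0.155               # clothing insulation, m^2K/W
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    fcl = 1.0 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl

    # Damped fixed-point iteration for the clothing surface temperature.
    tcl = ta
    for _ in range(200):
        hc = max(2.38 * abs(tcl - ta) ** 0.25, 12.1 * math.sqrt(vel))
        rad = 3.96e-8 * fcl * ((tcl + 273.0) ** 4 - (tr + 273.0) ** 4)
        tcl_new = 35.7 - 0.028 * m - icl * (rad + fcl * hc * (tcl - ta))
        tcl = 0.5 * tcl + 0.5 * tcl_new

    hc = max(2.38 * abs(tcl - ta) ** 0.25, 12.1 * math.sqrt(vel))
    rad = 3.96e-8 * fcl * ((tcl + 273.0) ** 4 - (tr + 273.0) ** 4)
    # Thermal load: metabolic heat minus all heat-loss terms.
    load = (m
            - 3.05e-3 * (5733.0 - 6.99 * m - pa)    # skin vapour diffusion
            - max(0.0, 0.42 * (m - 58.15))          # sweat evaporation
            - 1.7e-5 * m * (5867.0 - pa)            # latent respiration
            - 0.0014 * m * (34.0 - ta)              # dry respiration
            - rad - fcl * hc * (tcl - ta))          # radiation + convection
    return (0.303 * math.exp(-0.036 * m) + 0.028) * load
```

At 22 °C (air and radiant), 0.1 m/s, 60% RH, 1.2 met and 0.5 clo, this sketch returns a mildly negative PMV (slightly cool), which is the kind of group-average value the model is designed to produce; it says nothing about any one individual, which is the limitation discussed above.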
Moreover, our recent sensitivity analysis of the PMV model revealed that its thermal comfort prediction is very sensitive to the personal parameters (metabolism and clothing) (Hasan et al., 2016). In practice, however, the PMV is implemented with predefined constant values for these personal parameters and no feedback from occupants (Van Hoof, 2008; Auffenberg et al., 2015). All these limitations lead to significant error and high occupant dissatisfaction when the PMV model is adapted to model and control comfort in buildings. To address the PMV limitations, the study in Kim et al. (2018) reviewed the developments in comfort modeling during the last 10 years and categorized the research into two groups. The first group follows a data-driven approach to model and predict the thermal comfort of a general population (Chen et al., 2015; Dai et al., 2017), and the second group uses synthetic data to model personal comfort (Ari et al., 2008; Zhang et al., 2018). For the model output, most studies used 3-point thermal preferences (warmer/no change/cooler) or the ASHRAE 7-point thermal sensation scale. Indoor air temperature, mean radiant temperature, and relative humidity, along with individual information such as metabolism and skin temperature, were mostly used as the model inputs (Peng and Hsieh, 2017). Recently, machine learning methods have been used extensively in modeling thermal comfort. For example, in Zhang et al. (2018) a deep neural network (DNN) was used to model and control thermal comfort. In Chaudhuri et al. (2017), a machine-learning-based prediction model of thermal comfort in buildings in Singapore was run in real time. The model was trained using environmental and human factors such as the six Fanger factors and newly proposed factors such as age, gender, and outdoor weather. While the proposed model requires data from many sensors, it was shown to offer high computational speed compared to the PMV model.
Kim et al. (2018) developed personal comfort models to predict individuals' thermal preference using six different machine learning algorithms; a median accuracy of 0.73 was achieved for the best-performing algorithm. A newer work (Gao et al., 2019) proposed a deep-reinforcement-learning-based framework for controlling comfort in buildings while minimizing the energy consumption of the HVAC systems. To achieve this goal, a deep neural network was used for predicting the occupants' thermal comfort, and a deep deterministic policy gradient (DDPG) approach was used for learning the thermal control policy. A recent work (Jung et al., 2019) used machine learning to create a heat-flux sensing model to infer personal thermal comfort under transient ambient conditions. Finally, an online learning approach was introduced in Ghahramani et al. (2015) for modeling personalized thermal comfort via stochastic modeling. In this model, a Bayesian network is used to create a personalized comfort model from multiple probability-distribution comfort models. Very few studies have explored the use of wearable biometric data to enhance comfort modeling (Huang et al., 2015; Hasan et al., 2016; Rafaie et al., 2017), even though it reduces the need for many building sensors to model comfort. For example, in our work (Hasan et al., 2016; Rafaie et al., 2017) we have shown the use of wearable device biometric data to augment the PMV comfort model with continuous feedback for the personal parameter data. The work of Huang et al. (2015) built a single generalized/global comfort model (GM) from wearable device sensing data and comfort votes for eleven human subjects. As the differences among people are poorly captured in a single model, the global model led to high error in predicting individual comfort for these eleven subjects.
Thus, a much higher prediction error is expected when the occupant population is very large, i.e., when the comfort management system is deployed in a large building, and when trying to predict comfort for new users whose data were not used in training the GM model. From the aforementioned literature review, it is concluded that using machine learning methods for modeling thermal comfort has been gaining great attention recently. However, most of these models were trained using data from many conventional building sensors. In a real-life implementation, these sensors are assumed to be fully integrated into the building, which increases installation complexity and cost. Wearable devices, however, offer an affordable alternative to provide most of the data required for training machine learning comfort models, yet their potential to accurately train personalized comfort models has not been fully explored in the literature. To fill this research gap, in this paper we develop a wearable-based personalized comfort model, which exploits machine learning schemes to infer and predict the comfort level of each person by fusing multidimensional sensing data, including (1) minimal environment sensing data from static sensors deployed in the building, (2) human biometric data from wearable devices, and (3) direct subjective feedback from the occupants. The organization of the paper is as follows. In section The Thermal Comfort Framework, the overall thermal comfort framework is discussed in detail; the wearable as well as indoor ambient sensors employed in this work are presented, the machine learning algorithms used for comfort modeling and the intelligent control approaches are introduced, and the potential improvement in human thermal comfort is presented. In section Future and Ongoing Work, our ongoing and future experimental work on studying the impact of the new comfort controller on HVAC energy use is briefly presented.
In section Conclusions, the conclusions of the paper are presented. THE THERMAL COMFORT FRAMEWORK The general setup of the comfort control framework is shown in Figure 1. The figure shows the framework's three major components: (1) data collection through wearable devices and indoor ambient thermal condition sensors, (2) a thermal comfort modeling module, and (3) an intelligent control module. Next, we provide more details on each of these components. Data Collection Through Wearable Devices and Indoor Ambient Conditions Sensors The second generation of the Microsoft smart band (Microsoft Band 2) was selected in this study for collecting occupants' biometric data. The band combines features of a smart band, a smartwatch, and an activity tracker. Like other smart bands on the market, it uses a Bluetooth connection to pair with a phone and interact with the cloud service. Figure 2 shows a picture of the band with numbers referring to some of its sensors and features (1: barometer, 2: heart rate monitor, 3: UV sensor, 4: charging port, 5: microphone, 6: galvanic skin response (GSR) sensor). The band has 11 sensors, listed in Table 1, including a UV sensor (current ultraviolet radiation exposure intensity), a band-contact sensor (whether the band is being worn), a calorie counter (total calories the wearer has burned), a GSR sensor (current skin resistance of the wearer in kOhm), and an RR-interval sensor (interval in seconds between the last two continuous heartbeats). The band also has a microphone for speaking with the Microsoft personal digital assistant (Cortana), which helps operate voice commands such as sending texts. Mobile applications were implemented to collect the Microsoft Band biometric data and participant feedback.
The collected data were pre-processed before applying the different machine learning classifiers, as explained in the next section. In addition to the Microsoft Band 2, a Hobo UX100 data logger with wireless temperature and humidity sensors was carried by users throughout the day to measure ambient thermal conditions. Figure 3 summarizes the data flow and the mobile application architecture implemented for data collection. The mobile application is an Android application capable of connecting to the Microsoft Band 2 and receiving the sensor data in a customized fashion, as shown in Figure 4. The application was designed to allow users to enter their clothing condition (the clo value). Feedback on a user's thermal comfort is received as a number (called the comfort vote here). The user's vote and bio-information data are stored and labeled with an accurate date-time. A notification in the mobile application was another customizable functionality, built to remind users to enter their current thermal votes. The voting scale was initially chosen to be twice the PMV range, from −6 to 6, with 6 being very hot, −6 being very cold, and 0 being comfortable. While this scale offers a large amount of variation, it was determined that people struggle to distinguish between minimal differences on this scale, such as that between 5 and 6, introducing unnecessary human error. For this reason, an alternative scale was created, as shown in Table 2. In this scale, values from ±2 to ±6 were classified as ±1, respectively, while values from −1 to 1 were classified as 0.
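The collapse of the ±6 voting scale into the 3-point scale of Table 2 can be expressed as a one-line mapping; this is simply a direct formalization of the rule as described:

```python
def collapse_vote(vote):
    """Map a raw comfort vote on the -6..6 scale to the 3-point scale
    of Table 2: votes of +/-2 to +/-6 become +/-1, and -1..1 become 0."""
    if vote >= 2:
        return 1
    if vote <= -2:
        return -1
    return 0

print([collapse_vote(v) for v in range(-6, 7)])
# → [-1, -1, -1, -1, -1, 0, 0, 0, 1, 1, 1, 1, 1]
```

Collapsing the labels this way trades resolution for label reliability, which is what the classification results in the next section are built on.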
Comfort Modeling The wearable-based personalized comfort model, designed to take into account the subjective nature of thermal comfort, initially takes biometric data (such as heart rate and skin temperature), environmental sensing data (temperature and humidity), and the human's direct vote/feedback, and gradually creates the mapping between the features (e.g., temperature, heart rate) and the predicted comfort level for each person. The model can then infer the comfort level of each person without asking for direct subjective comfort votes. A small experiment was conducted to provide preliminary results for this comfort model. Three individuals, described in Table 3, were invited to take part in this study and were periodically prompted to vote on their thermal comfort throughout the day. For this task, five of the most prominent machine learning algorithms were applied to create three personalized models, one per occupant, and one general model trained on the combined data of the three occupants, as described in Table 3. The machine learning algorithms used are the decision tree (Freund and Schapire, 1995; Quinlan, 2006), adaptive boosting classifier (Mason et al., 2000), gradient boosting classifier (Vezhnevets and Barinova, 2007), random forest classifier (Ho, 1998), and support vector machines (Chang et al., 2010). Next, we briefly introduce these algorithms. Decision Tree The decision tree algorithm is a non-parametric supervised learning method and one of the simplest and yet most successful forms of machine learning for classification and regression. It has a tree-like graph representation that can be trained as a classifier to decide among multiple possible choices. The depth of the tree is one of the main parameters that can be tuned to enhance learning performance.
Adaptive Boosting Adaptive boosting, known as AdaBoost, is a learning method designed to select a collection, or ensemble, of hypotheses from the hypothesis space and combine their predictions. In our investigation, for one of the AdaBoost tuning parameters, the estimator count N, we varied the value from 5 to 1,000 in steps of 5. Gradient Boosting Classifier The gradient boosting classifier is another ensemble learning technique used for classification and regression, and is known as a robust method for avoiding overfitting. Since higher estimator counts were found to generate better performance for this method in this study, we employed the same estimator counts used for AdaBoost. Random Forest Classifier The random forest classifier aggregates decision trees built from various sub-samples of the dataset and averages them to improve the predictive accuracy. Similar to the previous classifiers, it was employed while varying the estimator count to achieve better accuracy. Support Vector Machines The support vector machine, known as SVM, is the most popular approach for "off-the-shelf" supervised learning. Besides the linear classification approach, it adopts the kernel approach to perform non-linear classification; linear, polynomial, radial basis function (RBF), sigmoid, and precomputed kernels are the main options. In this work, we have utilized the RBF kernel. SVM with the RBF kernel can be tuned with a regularization variable called C; in this investigation, we varied the C parameter with values from 0.1 to 35. The advantages and limitations of the five algorithms are summarized in Table 4. The scikit-learn package (Pedregosa et al., 2011) has been used to run the above machine learning methods. Scikit-learn is a Python library built on top of SciPy and distributed under the 3-Clause BSD license. The Holland Super Computing Center at the State University of Nebraska was used to carry out the heavy calculations needed in this investigation.
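The evaluation of these classifiers in the next section rests on k-fold cross-validation. The protocol itself can be sketched with the standard library alone; here a toy nearest-centroid classifier stands in for the five algorithms above, and both the data and the feature names are purely illustrative:

```python
import random
import statistics

def nearest_centroid_predict(train_X, train_y, x):
    """Classify x by the label whose training centroid is closest."""
    best, best_d = None, float("inf")
    for lab in set(train_y):
        pts = [p for p, y in zip(train_X, train_y) if y == lab]
        centroid = [sum(c) / len(pts) for c in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(x, centroid))
        if d < best_d:
            best, best_d = lab, d
    return best

def cross_val_accuracy(X, y, k=5, seed=0):
    """k-fold cross-validation: each fold is held out once as the test set."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        held_out = set(fold)
        tr_X = [X[i] for i in idx if i not in held_out]
        tr_y = [y[i] for i in idx if i not in held_out]
        hits = sum(nearest_centroid_predict(tr_X, tr_y, X[i]) == y[i] for i in fold)
        scores.append(hits / len(fold))
    return statistics.mean(scores)

# Toy two-class data: "warm" votes cluster at higher skin and room temperature.
rng = random.Random(1)
X = [[rng.gauss(33.0 + lab, 0.5), rng.gauss(24.0 + lab, 0.8)]
     for lab in (0, 1) for _ in range(40)]
y = [lab for lab in (0, 1) for _ in range(40)]
print(f"5-fold accuracy: {cross_val_accuracy(X, y):.2f}")
```

Swapping the toy classifier for any of the five algorithms above (via scikit-learn) leaves the cross-validation loop unchanged, which is the point of the protocol: the evaluation is independent of the model being scored.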
To evaluate the accuracy of the five machine learning algorithms, a cross-validation method has been used (random parts of the data are used for learning and evaluation). Each of these algorithms was applied to 45 feature groups (lists), as shown in Table 5. These groups consist of different combinations of the predictors (variables). When creating these groups, it was required that all groups have at least one piece of external data (temperature and relative humidity) and one piece of wearable data (such as heart rate, metabolism, and skin temperature). The significance of creating these groups was to allow individual variables to be separated from one another so that their individual effects could be studied. Table 6 shows the top 15 feature lists with the highest median accuracy, and Table 7 shows the accuracy of the different machine learning methods when the best feature lists are considered. Table 6 shows that 12 of the 15 most accurate feature lists include room temperature, 8 include metabolism, and 9 include skin temperature; meanwhile, only one feature list includes CLO. Therefore, it seems likely that using room and skin temperatures and metabolism to predict thermal comfort will give a relatively accurate prediction, while using the clothing insulation will result in a less accurate one. As we believe clothing is an important factor in human comfort, one can argue that skin temperature might better represent that factor than the individual's self-reporting of their clothing status. The result for the heart rate is less conclusive: while 10 of the 15 most accurate feature lists include the heart rate, this variable is not seen in any of the top 3 most accurate feature lists, so it appears moderately but not decisively useful for predicting thermal comfort. Finally, Table 7 shows that SVC is the most accurate machine learning method.
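One possible construction that reproduces a count of exactly 45 feature lists under the stated constraint (at least one environmental and one wearable feature per list) is to take every non-empty subset of two environmental features crossed with every non-empty subset of four wearable features: (2² − 1) × (2⁴ − 1) = 3 × 15 = 45. The feature split below is our assumption for illustration; the study's actual Table 5 includes further features such as CLO, so this sketch shows only the combinatorial idea:

```python
from itertools import combinations

def nonempty_subsets(items):
    """All non-empty subsets of a feature list, as lists."""
    return [list(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

# Assumed feature split (illustrative; the paper's Table 5 may differ).
environmental = ["room_temperature", "relative_humidity"]
wearable = ["heart_rate", "skin_temperature", "metabolism", "gsr"]

# Every list keeps >= 1 environmental and >= 1 wearable feature by construction.
feature_lists = [env + wear
                 for env in nonempty_subsets(environmental)
                 for wear in nonempty_subsets(wearable)]
print(len(feature_lists))  # 3 environmental subsets x 15 wearable subsets = 45
```

Enumerating the lists this way makes the stated constraint hold by construction, rather than by filtering the full power set after the fact.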
As mentioned above, our initial analysis revealed that, as one would intuitively predict, indoor temperature and skin temperature are the most salient features capturing the thermal comfort level of occupants. However, since the above results were generated using one general model that lumped all three users' data together, it is worth comparing them with results obtained from machine learning models for each user (personalized models). Moreover, while doing this work, it was observed that the galvanic skin response (GSR), also referred to as skin conductance, which is usually ignored in comfort modeling, plays a vital role in improving the accuracy of thermal comfort modeling. To shed more light on these observations, next we investigate the performance of personalizing the machine learning algorithms as well as the relevance of GSR in determining an accurate comfort model. Figures 5-9 compare the performance of the generalized comfort model with the personalized comfort model for each occupant, for each of the five machine learning methods, while varying a model parameter of the machine learning method with and without GSR. The figures show that, in almost all cases, the models built considering skin conductance are more accurate. In the same fashion, it is evident that the private models are more representative of an individual's comfort level than the general models. Another interesting finding from Figures 5-9 is that they confirm that thermal comfort is a highly subjective matter that may be attributable to many other factors: three people sitting in similar room conditions exhibit different responsiveness characteristics, as seen in the differences in the accuracy levels of the study subjects.
Specifically, the same machine-learning algorithm generates different performance levels for each individual, mainly because comfort depends on many physiological and biological factors. Table 8 summarizes our findings from Figures 5-9. The table presents a comprehensive performance measure of the five best-performing machine learning algorithms used on the data from the three subjects. Except for a few cases where the data collected from an individual are limited, the private models outperform the general model, which was trained with no regard to the identity of the person reporting the comfort feedback data. The private comfort models for Person 3, with the biggest reported data size, are shown to have better accuracy levels than those of the other individuals for all machine learning techniques applied. In conclusion, our findings presented in the tables and figures in this section can be summarized as follows:
• The accuracy of the personalized models is in most cases higher than that of the general model.
• Including the GSR sensor data in most cases improves the accuracy of both the personalized and general models.
• The random forest classifier exhibits a one-time best accuracy of about 88%, the highest among the machine learning methods; overall, however, SVM-RBF outperforms the others in mean accuracy.
Intelligent Thermal Comfort Control An accurate individual comfort model is the first necessity for providing thermal comfort in a building. The next challenge is determining how information from this model can be integrated with the building HVAC system controller (e.g., the building thermostat). Typically, the thermal conditions are set to a temperature set-point, which a typical building occupant tends to change dramatically in response to temporarily cold or hot situations, resulting in more discomfort and high energy cost.
Ideally, the control parameter set-point should be selected automatically to satisfy occupant comfort, and it should address the conflicting comfort preferences of different people in one conditioned space. To achieve this, we show next how to integrate our new comfort models with the building's HVAC controls by inferring an adaptive set-point from occupants' comfort information. In particular, the comfort level of each occupant in a building is calculated from his or her learned comfort model. Then, the particle swarm optimization technique (PSO) searches for the optimal control parameter set-points that resolve any comfort conflicts by solving the comfort model inverse problem. The PSO, developed through attempts to model bird flocks, treats each moving particle as a potential solution and records its own and the group's best positions over iterations. The velocity of each particle in the swarm is updated by

v_i(k+1) = v_i(k) + γ1 (p_i − x_i(k)) + γ2 (G − x_i(k)),

and the position of the i-th particle is then updated by

x_i(k+1) = x_i(k) + v_i(k+1),

where i is the particle index, k is a discrete-time index, v_i is the velocity of the i-th particle, x_i is its position, p_i is the best position found by the i-th particle (personal best), G is the best position found by the swarm (global best), and γ1, γ2 are random numbers on the interval [0,1] drawn for each particle. Inertial and acceleration weights could also be included to improve the algorithm's convergence. The PSO supports multi-dimensional optimization; hence, the comfort model can be searched simultaneously for a set of control parameters (such as ambient temperature, humidity level, and air velocity) to achieve a certain comfort level for the building occupants.

FIGURE 10 | Simulation example of the PSO searching for the best comfort control parameters for (A) one user and (B) two users. Blue circles represent the particles, green circles the local best particles, and the red circle the global best particle.
Preliminary simulations were performed to evaluate the use of the PSO method to search for optimal control parameter values that achieve maximum comfort. For example, Figure 10A shows the use of PSO to find the optimal temperature (input 1) and humidity (input 2) to move a user's comfort level from −3 (very cold) to 0. The initial temperature and humidity values were 15 °C and 70%, and the set-points suggested by the PSO are 20 °C and 67%. The simulation assumed a metabolism value of 1.1 MET. Figure 10B shows an example of expanding the use of PSO to negotiate comfort differences among multiple users sharing the same conditioned space. To simulate a personal comfort difference, a metabolism value of 2.0 MET was assumed for the second user. Figure 10B shows that an ambient temperature of 20 °C and a humidity ratio of 46% are the set-points suggested to make both users comfortable. With respect to comfort control, Table 9 (second column) summarizes some of our preliminary results on the comfort improvement for the three human subjects. In this 24 h test, we used the wearable-based comfort model to select the right thermostat set-point, compared against using an average thermostat set-point. Table 9 (third column) shows the comfort improvement when the subjects' comfort preferences are negotiated with all of them present in the same conditioned space. While the comfort improvement in Table 9 is less than in Table 8, the multi-occupant case demonstrates a more practical use of the algorithm, as most homes will have more than one occupant with conflicting needs. FUTURE AND ONGOING WORK Wearable device sensor accuracy, the usability of our new comfort app, and the small data size with its attendant risk of overfitting, as well as other factors, remain limitations of the private thermal comfort model work. Moreover, as a continuation of this work, we plan to validate the impact of thermal comfort control on energy use with experimental data from a real HVAC system.
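The PSO set-point search described above can be sketched directly from the update rule. Here toy quadratic discomfort models stand in for the learned comfort models, all preferred temperatures and humidities are illustrative numbers of our own, and the optional inertia and acceleration weights mentioned in the text are included for convergence:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=42):
    """Minimal particle swarm: velocities pulled toward personal and global bests.

    f: objective to minimize; bounds: [(lo, hi), ...] per dimension.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = list(pbest[pbest_f.index(min(pbest_f))])
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                g1, g2 = rng.random(), rng.random()
                # v_i(k+1) = w*v_i(k) + c1*g1*(p_i - x_i) + c2*g2*(G - x_i)
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * g1 * (pbest[i][d] - xs[i][d])
                            + 1.5 * g2 * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fi
                if fi < f(g):
                    g = list(xs[i])
    return g

# Toy discomfort models for two occupants (purely illustrative numbers):
# each prefers a different temperature / humidity combination.
def total_discomfort(x):
    t, h = x
    user1 = (t - 22.0) ** 2 + 0.01 * (h - 60.0) ** 2
    user2 = (t - 18.0) ** 2 + 0.01 * (h - 40.0) ** 2
    return user1 + user2

setpoint = pso_minimize(total_discomfort, [(10.0, 35.0), (20.0, 80.0)])
print(f"suggested set-point: {setpoint[0]:.1f} C, {setpoint[1]:.1f}% RH")
```

With these two quadratic users, the combined discomfort is minimized at the compromise point (20 °C, 50% RH), mirroring how Figure 10B negotiates a single set-point between occupants with conflicting preferences.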
Toward this goal, and as shown in Figure 11, we have heavily instrumented a packaged rooftop HVAC unit whose operation will be controlled by the new thermal comfort model. The unit has two separate cooling circuits allowing two-stage capacity modulation, with a partial load of 7.5 tons and a full load of 12.5 tons. Multiple universal superheat controllers produced by DunAn Microstaq, Inc. will be used to log the temperature and pressure values of both cooling circuits. Users can utilize the MODBUS RTU communication protocol or a Windows-based graphical user interface to communicate with the superheat controller and retrieve the measured data. The measured data will be used to evaluate the HVAC system runtime before and after applying the new comfort controller. For example, Figure 12 shows the HVAC system cooling run time for more than 10 months before applying the comfort controller (the baseline). Data will be collected for a similar duration once the comfort controller is applied. As these durations are long, a typical HVAC system might experience some faults or wrong operation. Thus, in our work, the online pressure and temperature measurements shown in Figure 11 will be used to evaluate the system health and factor out any excessive run time due to such faulty operation that might bias our comparison. For example, as shown in Figure 13, very low suction temperature and compressor inlet pressure values indicate an evaporator frost event that occurred due to running the HVAC system at a low outside temperature.

FIGURE 11 | The HVAC system planned for use in the experimental validation of the new comfort controller will be heavily instrumented so its operation can be integrated with and controlled by the new comfort controller.

FIGURE 13 | An example of an excessive run time that was detected using the HVAC system pressure and temperature measurements.
This frost event resulted in a very excessive HVAC run time that should be ignored in our planned comparison. CONCLUSIONS In this work, we have presented a framework for modeling and controlling thermal comfort in buildings. Specifically, an improved private comfort model has been developed from biometric data gathered via wearable devices. In this model, we have related the model accuracy to the features used to learn the model and to the machine learning type and its tuning parameters. Thus, the best features that capture the comfort characteristics and the best machine learning method and parameters to model human comfort have been identified. Apart from the typical biometric sensors proposed in the literature to model thermal comfort, such as skin temperature, skin conductance has been introduced and observed to be an important feature in creating a private comfort model. The difference in the accuracies of the three private models of the individuals presented in this work shows that comfort is a subjective state of being. While the general comfort model failed to guarantee an accurate and reliable model representative of the study subjects, limited data size was the main limitation of the private comfort models. Finally, we have presented an intelligent control approach that utilizes the newly developed comfort model to control thermal comfort in a building. Simulation results using the PSO algorithm were presented, showing superior performance compared to using an average thermostat set-point. A framework for experimental validation of this new comfort controller has been developed along with a new HVAC setup. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. AUTHOR CONTRIBUTIONS FA developed the idea, managed the work, and performed the full edit of the paper. PA provided the idea and the wireless infrastructure integration for the future work section.
DB implemented the wireless infrastructure and data collection for the experimental HVAC system data. KS provided the real-time data from the thermostat API and weather station API for the future work section that includes the HVAC system run time. MR developed the machine learning models. MT wrote the initial version of the paper and performed the data cleaning and simulation. All authors contributed to the article and approved the submitted version. FUNDING Part of this project was funded by DunAn Microstaq, Inc.
6,760.6
2020-06-23T00:00:00.000
[ "Engineering" ]
On Solving a System of Volterra Integral Equations with Relaxed Monte Carlo Method A random simulation method was used for the treatment of systems of Volterra integral equations of the second kind. Firstly, a linear algebraic system was obtained by discretization using a quadrature formula. Secondly, this algebraic system was solved using a relaxed Monte Carlo method with importance sampling, and numerical approximation solutions of the integral equation system were achieved. It is theoretically proved that the validity of the relaxed Monte Carlo method is based on importance sampling for solving the integral equation system. Finally, some numerical examples from the literature are given to show the efficiency of the method. Introduction In engineering, social, and other areas, many problems can be converted into Volterra integral equations, such as elastic systems in aviation, viscoelastic and electromagnetic material systems, and biological systems; some differential equations are also transformed into integral equations in order to simplify the calculation. For example, the drying process in airflow, pipe heating, gas absorption, and some other physical processes can be reduced to the Goursat problem. Some instances of the Goursat problem can then be described by Volterra integral equations [1]. As another example, when one-dimensional situations are concerned and the coolant flow is incompressible, the definite solution problem of transpiration cooling control with surface ablation appears as a system of Volterra integral equations of the second kind [2]. In practice, analytical solutions for this kind of integral equation are difficult to obtain. Therefore, it is more practical to study numerical methods for solving them.
The main aim of this paper is to propose a numerical algorithm based on the Monte Carlo method for approximating solutions of the following system of Volterra integral equations:

ϕ_p(x) = f_p(x) + Σ_{q=1}^{2} ∫_a^x k_{pq}(x, t) ϕ_q(t) dt, p = 1, 2, (1)

where k_{pq}(x, t), p, q = 1, 2, are known kernel functions, the functions f_1(x), f_2(x) are given and defined on a ≤ x ≤ b, and ϕ_1(x), ϕ_2(x) are the unknown functions to be determined. One of the earliest methods for solving integral equations using the Monte Carlo method was proposed by Albert [3], and was later developed further [4]. The studies [5]-[8] employed the Monte Carlo method to obtain numerical solutions of Fredholm integral equations of the second kind, but very few studies are devoted to employing the Monte Carlo method to solve Volterra integral equations and systems of Volterra integral equations. In this paper, we present and discuss a relaxed Monte Carlo approach with importance sampling to solve systems of Volterra integral equations numerically. To offset the limited accuracy and efficiency of the plain Monte Carlo method, a combination of Monte Carlo and quadrature formulas is used to deal with Equation (1), and importance sampling is applied to accelerate the convergence and improve the accuracy of the Monte Carlo method. Some numerical examples are given to show the efficiency and feasibility of the proposed Monte Carlo method. Discretizing the System of Integral Equations Here, the Newton-Cotes quadrature formula is used to discretize Equation (1). Divide the interval (a, b) into n equal subintervals with nodes a = x_0 < x_1 < ⋯ < x_n = b.
For convenience, denote ϕ_{p,i} = ϕ_p(x_i) and f_{p,i} = f_p(x_i), where i = 0, 1, …, n and p = 1, 2. Applying the quadrature formula to the integrals in (1) at the nodes yields the linear algebraic system

Φ = F + BΦ, i.e., (I − B)Φ = F, (2)

where Φ and F collect the values ϕ_{p,i} and f_{p,i}, the entries of B are ω_j k_{pq}(x_i, x_j), and ω_j is the weight of the Newton-Cotes quadrature formula. The matrix of coefficients of Equation (2) is A = I − B. If we assume that there exists a unique solution of (2), that solution is a numerical approximation of (1). This process produces an error which is determined by the numerical quadrature formula and can be reduced by increasing the number of nodes for a given quadrature formula. For a large number of nodes, Equation (2) is too large to solve directly. It is well known that the Monte Carlo technique has a unique advantage for large systems and high-dimensional problems. At the same time, this method can estimate function values at specified points, or a linear combination of them, which is often just what researchers need, whereas deterministic numerical methods must compute the function values at all nodes in order to obtain the value at a single point. Here, a relaxed Monte Carlo method is applied to Equation (2), based on random samples from a Markov chain with a discrete state space; according to the theory of importance sampling, the probability transition kernel is selected to suggest possible moves. To obtain the solution of the linear algebraic system (2), the following iterative formula is considered:

Φ^(k) = LΦ^(k−1) + F_1, with L = I − DA and F_1 = DF, (3)

where D is a diagonal matrix of relaxation parameters chosen such that it minimizes the norm of L for accelerating the convergence. The iterative formula (3) defines a Neumann series,

Φ = Σ_{m=0}^{∞} L^m F_1. (4)

Setting the iterative initial value Φ^(0) = 0 and letting Φ denote the exact solution of Equation (2), the truncation error and convergence of the iterative formula (3) satisfy

‖Φ^(k) − Φ‖ ≤ ‖L‖^k ‖Φ^(0) − Φ‖.

This conclusion can be proved by using theories in numerical analysis. Here, the iterative matrix L must satisfy ‖L‖ < 1. To achieve a desirable norm in each row of L, a set of relaxation parameters, {γ_i}_{i=1}^{2(n+1)}, will be used in place of a single γ value. According to the arguments of Faddeev and
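The discretization and relaxed iteration described above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code: the kernels and right-hand sides are made up, the composite trapezoidal rule stands in for a general Newton-Cotes formula, and a single relaxation parameter gamma is used instead of the row-wise set {γ_i}.

```python
import numpy as np

def discretize(kernels, fs, a, b, n):
    """Trapezoidal-rule discretization of a 2x2 Volterra system of the
    second kind: returns B and F such that Phi = F + B Phi."""
    x = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    B = np.zeros((2 * n, 2 * n))
    F = np.concatenate([fs[0](x), fs[1](x)])
    for p in range(2):
        for q in range(2):
            for i in range(1, n):          # row node x_i (no integral at x_0)
                for j in range(i + 1):     # Volterra: integrate only on [a, x_i]
                    w = 0.5 * h if j in (0, i) else h
                    B[p * n + i, q * n + j] = w * kernels[p][q](x[i], x[j])
    return B, F, x

# Illustrative kernels and right-hand sides (not from the paper):
kernels = [[lambda x, t: 0.5 * np.exp(-(x - t)), lambda x, t: 0.1 * x * t],
           [lambda x, t: 0.2 * np.cos(x - t),    lambda x, t: 0.3]]
fs = [lambda x: np.ones_like(x), lambda x: x]
B, F, _ = discretize(kernels, fs, 0.0, 1.0, 41)

# Relaxed iteration Phi_(k) = L Phi_(k-1) + gamma*F with L = I - gamma*(I - B);
# it converges when the row norm of L is below one.
A = np.eye(len(F)) - B
gamma = 0.9
L = np.eye(len(F)) - gamma * A
Phi = np.zeros_like(F)
for _ in range(200):
    Phi = L @ Phi + gamma * F

direct = np.linalg.solve(A, F)
print(np.max(np.abs(Phi - direct)))   # tiny: the iteration matches a direct solve
```

With ‖L‖ < 1 the fixed point of the iteration is the solution of (I − B)Φ = F, so the printed discrepancy against the direct solve sits at round-off level.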
Faddeeva [9] [10], the relaxed Monte Carlo method will converge if the norm of L in each row is less than one. Relaxed Monte Carlo Method with Importance Sampling For the Neumann series (4), we have

Φ^(k) = Σ_{m=0}^{k} L^m F_1.

In order to obtain the approximation solution of the linear system (2) and of the system of integral equations (1), the kth iterate Φ_i^(k) of Φ_i is evaluated by means of computing the series

Φ_i^(k) = Σ_{m=0}^{k} (L^m F_1)_i.

Construct a Markov chain i_0 → i_1 → ⋯ → i_m on the state space {1, 2, …, 2(n+1)} with transition probabilities p_{i_{m−1} i_m}. (7)

The weight function W_m of the Markov chain is defined as follows:

W_0 = 1, W_m = W_{m−1} l_{i_{m−1} i_m} / p_{i_{m−1} i_m}, (8)

where l_{ij} denotes the (i, j) entry of L. By expressions (7) and (8), the following conclusion can be obtained. Theorem 3.1. For given m (m > 0), we have E[W_m F_{1,i_m}] = (L^m F_1)_{i_0}. This theorem is easy to prove. In the light of expression (7), the following estimator is defined:

ξ^(k) = Σ_{m=0}^{k} W_m F_{1,i_m}.

Due to Theorem 3.1, E[ξ^(k)] = Φ_{i_0}^(k), and this conclusion is easy to prove. The chain is truncated when |W_m| < ε, where ε is the precision of the truncation error and is given in advance. One can then evaluate the sample mean of ξ^(k) over independent chains. Since ξ^(k) is bounded, according to the Central Limit Theorem, the precision of the estimator ξ^(k), in the sense of probability, can be measured by its variance Var(ξ^(k)). Based upon minimizing the variance of the estimator, the transition probability p_{ij} of the Markov chain should be chosen in the following form:

p_{ij} = |l_{ij}| / Σ_j |l_{ij}|.

This form of p_{ij} leads to more samples being taken in regions which have higher function values. This is importance sampling. Numerical Examples In this section, we employ the proposed relaxed Monte Carlo method with importance sampling (say RMCIS) to compute the numerical solution of some examples and compare it with their exact solutions. The numerical results are presented in Table 1 and Table 2, where AE means the absolute error for ϕ_p(x), p = 1, 2. The average absolute errors are plotted in Figure 1 and Figure 2.
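A minimal sketch of the estimator itself, under the same caveats (a small made-up matrix with norm below one rather than one produced by the discretization; all names are invented): each random walk follows transitions p_ij proportional to |l_ij|, accumulates the weights W_m, and is truncated once |W_m| falls below a tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_component(L, g, i0, n_walks=5000, eps=1e-9, max_steps=100):
    """Monte Carlo estimate of component i0 of the solution of
    Phi = L Phi + g, with importance-sampled transitions p_ij ~ |l_ij|."""
    n = len(g)
    absL = np.abs(L)
    P = absL / absL.sum(axis=1, keepdims=True)   # transition probabilities
    total = 0.0
    for _ in range(n_walks):
        i, w, acc = i0, 1.0, g[i0]               # W_0 = 1, first term of the series
        for _ in range(max_steps):
            j = rng.choice(n, p=P[i])
            w *= L[i, j] / P[i, j]               # W_m = W_{m-1} * l / p
            acc += w * g[j]
            i = j
            if abs(w) < eps:                     # truncate the chain when |W_m| < eps
                break
        total += acc
    return total / n_walks

# Small illustrative system (not from the paper):
L = np.array([[0.10, 0.20, 0.00],
              [0.05, 0.10, 0.30],
              [-0.20, 0.00, 0.10]])
g = np.array([1.0, 2.0, 0.5])
exact = np.linalg.solve(np.eye(3) - L, g)
est = mc_component(L, g, i0=0)
print(est, exact[0])   # the Monte Carlo estimate tracks the direct solution
```

Because the weight ratio l_ij / p_ij has constant magnitude along each row, this choice of p_ij keeps the walk weights well behaved, which is the variance-reduction role of importance sampling described above.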
Conclusion In this paper, a relaxed Monte Carlo numerical method is provided to solve a system of linear Volterra integral equations. The most important advantage of this method is its simplicity and ease of implementation, in comparison with other methods. The implementation of the current approach, RMCIS, is effective. The numerical examples presented in the paper and the compared results support our claims. Throughout, ‖A‖ denotes the row norm of the given matrix A. Figure 2. Average absolute errors (MAE) for Example 2 at eleven points, (a) for ϕ_1(x). Table 1. Numerical results of Example 1.
1,829.8
2016-06-30T00:00:00.000
[ "Mathematics" ]
Editorial: Ferroptosis in cancer and Beyond—volume II It has been 11 years since Brent Stockwell identified and named ferroptosis (Dixon et al., 2012). Ferroptosis results from iron-dependent lipoxidation at various cellular membrane structures. Searching the PubMed database by using the keyword “ferroptosis” results in more than 8,000 papers. Why has ferroptosis received such intensive attention? There are at least three fundamental reasons. First, ferroptosis has a unique mechanism distinct from other known regulated cell death types. Ferroptosis is tightly associated with cell metabolism, such as amino acid, iron, and ROS metabolism. There are three key elements for ferroptosis: substrate of lipid peroxidation, executor of lipid peroxidation, and anti-ferroptosis system (Liu and Gu, 2022a). The balance between the three elements dictates the sensitivity of a cell to ferroptosis. Second, there are multiple ways to induce ferroptosis meaning it has a complex regulatory network. Many pathways are involved in ferroptosis mediation. Key factors regulating ferroptosis, including GPX4, p53, FSP1, and ALOXs have been identified (Dixon and Stockwell, 2019; Liu et al., 2019; Liu and Gu, 2022a; Liu and Gu, 2022b). However, new pathways and regulators are still emerging. Third, ferroptosis participates in the regulation of numerous physiological or pathological processes, such as normal development, degenerative diseases, ischemic injuries, immune system activities, and particularly cancer. This means that ferroptosis has amazing potential as a therapeutic target in many diseases (Stockwell et al., 2020). To examine progress in the ferroptosis field and advances in basic research and clinical applications focusing on ferroptosis, we launched a Research Topic named Ferroptosis in Cancer and Beyond in early 2022, which was a great success. 
Given the rapid progression in this field, we opened a second call for this same Research Topic in late 2022, which has now been successfully closed. This second volume brings together 13 papers, including 6 research articles and 7 reviews. These articles outline recent information about ferroptosis from both basic research and clinical translation angles. The papers are briefly introduced below. To comprehensively understand the contribution of iron regulatory proteins (IRPs) to ferroptosis, McKale Montgomery and Cameron Cardona reviewed the regulatory processes governing iron homeostasis, from absorption and metabolism to its participation in ferroptosis, and discussed the essential roles of various IRPs in ferroptosis and their potential to be therapeutically maneuvered in cancer treatment. To explore how ferroptosis is regulated at the post-translational level, Zhang et al.
introduce emerging evidence for the O-GlcNAc modification (O-GlcNAcylation) in ferroptosis in a review article and discuss the crosstalk between O-GlcNAcylation and ROS and related antioxidant defense systems. The authors elucidate the role of O-GlcNAcylation in proteins involved in iron metabolism and the regulation of lipid metabolism and peroxidation during ferroptosis. Furthermore, the underlying mechanisms, including mitochondrial dysfunction and endoplasmic reticulum alteration brought about by O-GlcNAcylation, are discussed. In their original research study, Nikulin et al. identified that ELOVL5 and IGFBP6 may modulate the sensitivity of breast cancer cells to ferroptosis, possibly via enhancing the activity of GPX4, an antioxidant enzyme that plays a critical role in ferroptosis. Through analysis of the transcriptomic database and validation with HPLC-MS, the knockdown of either ELOVL5 or IGFBP6 was shown to cause remarkable changes in the production of long and very long fatty acids. In addition, the knockdown of ELOVL5 or IGFBP6 in MDA-MB-231 cells promotes cell death induced by PUFAs, and the potential benefit of PUFA addition for improving chemotherapeutic effects was proposed in the condition of low IGFBP6 (and maybe ELOVL5) gene expression.
Glutathione S-transferase P1 (GSTP1) was proposed to be a potential target to tackle radioresistance in cancer therapy by Tan et al. GSTP1 is fundamental to maintaining cellular oxidative homeostasis and is involved in ferroptosis. Based on increasing evidence showing that iron metabolism, lipid peroxidation, and GSH levels are modulated by radiotherapy, the authors elaborated on the potential to control GSTP1 levels to enhance the efficacy of radiotherapy in cancer treatment. More pathways in ferroptosis induced by radiotherapy and their implications for radiotherapy were reviewed by Giovanni Luca Beretta and Nadia Zaffaroni, and other strategies were proposed to improve the efficacy of radiotherapy, including enhancing ionizing radiation with other reagents or selectively inducing ferroptosis with metal-based nanoparticles. Lu et al. introduced the range of therapies for glioblastoma, including immunotherapy, radiotherapy, and chemotherapy, and discussed how ferroptosis participates in and affects the efficacy of the different therapeutic treatments. In an original research article, Shi et al. found that dihydroartemisinin (DHA), an adjuvant drug enhancing chemotherapy, induced cervical cancer cell death via initiating ferroptosis, and explored the involvement of ferritinophagy induced by DHA. Furthermore, DHA was also shown to have a synergistic role with doxorubicin (DOX) in promoting cervical cancer cell death. Growing evidence has revealed the impact of T cell infiltration on the development of various types of cancer. Jiang et al. analyzed the differential gene expression in CD8+ T cells from CD8+ highly or lowly infiltrated samples in acute myeloid leukemia (AML) and conducted extensive bioinformatics analysis, and six ferroptosis-related genes (FRGs) were identified to generate a prognostic prediction model, which was validated to be helpful for risk stratification and prognostic prediction of AML patients. Han et al.
identified several ferroptosis-related genes (FRGs) which correlate well with the immune microenvironment and established a model to predict the prognosis of cervical cancer patients. Further mechanisms underlying iron homeostasis, ROS and lipid peroxidation, GPX4-GSH, and other regulator systems in cervical cancer were discussed in a review by Xiangyu Chang and Jinwei Miao. In another review, Lai et al. specifically elaborated on the influence of steroid hormone signaling on ferroptosis and discussed the involvement of ferroptosis in gynecologic cancers and potential therapies targeting ferroptosis for their treatment. With data from the FerrDb and TCGA databases, Li et al. established a prognostic prediction model for colorectal cancer (CRC) patients with 8 FRGs, among which NOS2 is one of the most significantly affected examples; the model was validated with a CRC mouse model, and the involvement of the NF-κB pathway was elucidated. To investigate whether ferroptosis is associated with colon adenocarcinoma (COAD), Baldi et al. identified a 4-gene signature that distinguishes high-risk and low-risk patients; those FRGs were further shown to be implicated in many pathology-related pathways, and a variety of miRNAs and transcription factors were found to be involved. These studies consolidated the idea that disease-associated cell death has a specific gene expression profile relevant to the prognosis of the patient (Liu et al., 2022; Ye et al., 2022; Liu et al., 2023). Taken together, this second volume of the Research Topic Ferroptosis in Cancer and Beyond adds new knowledge to this field, furthering research and the clinical translation of ferroptosis.
1,687
2023-08-29T00:00:00.000
[ "Biology" ]
Aggregation Controlled Charge Generation in Fullerene Based Bulk Heterojunction Polymer Solar Cells: Effect of Additive Optimization of charge generation in polymer blends is crucial for the fabrication of highly efficient polymer solar cells. While the impacts of the polymer chemical structure, energy alignment, and interface on charge generation have been well studied, not much is known about the impact of polymer aggregation on charge generation. Here, we studied the impact of aggregation on charge generation using transient absorption spectroscopy, neutron scattering, and atomic force microscopy. Our measurements indicate that the 1,8-diiodooctane additive can change the aggregation behavior of poly(benzodithiophene-alt-dithienyl difluorobenzotriazole) (PBnDT-FTAZ) and phenyl-C61-butyric acid methyl ester (PCBM) polymer blends and impact the charge generation process. Our observations show that the charge generation can be optimized by tuning the aggregation in polymer blends, which can be beneficial for the design of highly efficient fullerene-based organic photovoltaic devices. Introduction Organic semiconductors (OSCs) have been intensively studied due to their unique electronic and optical properties. Their properties, including relatively easy and inexpensive fabrication, light weight, mechanical flexibility and compatibility with stretchability, and potential for non-toxic processing methods, open broad prospects for their applications in a variety of industrial and technological areas, including solar cells [1,2]. Considerable efforts have been dedicated to the development of polymer solar cells (PSCs) due to several advantages, such as high absorption coefficients [3], highly tunable molecular energy levels [4], and low reorganization energy associated with low voltage loss [5,6]. To date, a power conversion efficiency of over 17% [7] has been achieved in PSCs.
There are several factors that influence charge generation and transport in bulk heterojunction (BHJ) polymer solar cells. These include the miscibility of donors and acceptors [8], the molecular orientation of donors and acceptors at the interface [9], the energy difference between the bulk excitonic states and interfacial charge transfer (CT) states [10,11], the domain size [12], and the interaction between donors and acceptors [13,14]. In addition, the molecular order [15] and packing [16,17] determine the electronic interactions [18], which influence, for instance, exciton delocalization [19] and charge generation [20]. Furthermore, changes in morphology can impact processes such as charge generation [21], charge transport [22], and optical absorption and emission [23,24]. It has been reported that the addition of solvent additives such as 1,8-diiodooctane (DIO) to polymer blends results in a change in the nanomorphology of the BHJ active layer [25,26]. The morphology improved by DIO resulted in the high charge-transfer and charge-transport efficiencies that are needed for high-efficiency PSCs. Therefore, it is very important to understand how the molecules assemble in thin BHJ films and how the species of the assemblies affect the solar cell efficiency. As the charge carrier motion in polymer films depends on the coupling of the electronic states, the electronic coupling in these systems has direct implications for charge generation and transport; hence, it impacts the device performance [23]. When the molecular arrangement leads to the transition dipole moments interacting along the polymer backbone, strong intrachain electronic coupling is obtained, which is referred to as J-like, in analogy to J-aggregates in molecular systems [23,27]. In contrast, parallel π-π packing of multiple chains favors strong interchain electronic coupling, referred to as H-like [23,24,28].
Therefore, it is crucial to understand the impact of electronic coupling on the optoelectronic properties of these materials for their applications in photovoltaic devices. In this work, we prepared thin blended films from the medium-band gap copolymer poly(benzodithiophene-alt-dithienyl difluorobenzotriazole) (PBnDT-FTAZ) [29] and phenyl-C61-butyric acid methyl ester (PCBM). The PBnDT-FTAZ polymer consists of a benzodithiophene (BnDT) donor moiety and a fluorinated benzotriazole (FTAZ) acceptor moiety. This donor polymer has shown planar conformation, molecular face-on orientation, and high hole mobility [9,29]. The acceptor PCBM has (i) high electron mobility, (ii) the ability to aggregate in the BHJ, and (iii) good charge transport due to a delocalized lowest unoccupied molecular orbital over the entire surface of the molecule [30]. These properties of the donor and acceptor molecules are desired for the fabrication of highly efficient PSCs. In this contribution, we used a PBnDT-FTAZ:PCBM blend with and without the solvent additive DIO to investigate the impact of DIO on aggregation, which can modify the optical absorption and emission spectra in polymer blends. Using transient absorption spectroscopy (TAS), small-angle neutron scattering (SANS), and atomic force microscopy (AFM), we probed the changes in aggregation, charge dynamics, exciton delocalization, and morphology. Our observations indicate that the electronic coupling in conjugated polymer blends can be tuned by processing methods and can be probed using optical absorption and emission measurements. Materials PBnDT-FTAZ polymer was synthesized by Prof. Wei You's lab at the University of North Carolina, Chapel Hill in the same way as previously reported [29]. PCBM and the DIO additive were purchased from Sigma Aldrich and were used as received. Thin Film Preparation PBnDT-FTAZ:PCBM blend solution (20 mg/mL) was prepared in chlorobenzene with a donor/acceptor weight ratio of 1:2.
The blend solution was heated at 80 °C and stirred overnight before mixing with DIO (3 wt.%, defined as a percentage of total PBnDT-FTAZ:PCBM weight) and stirred for one additional hour. Glass substrates were cleaned ultrasonically using DI water, acetone, and isopropanol for 15 min per cleaning solvent before spin casting. Blend films with and without DIO were prepared by spin casting the hot solution onto the precleaned glass substrates at 500 rpm for 60 s. The thin film samples were encapsulated using UV-curable glue before optical measurements [31]. Thin films for SANS were prepared by casting the solution on a one-inch diameter silicon wafer. All the thin films were prepared inside a nitrogen-filled glove box. Absorption and Photoluminescence Measurements A Cary 100 UV-spectrophotometer from Varian was used for the ground state absorption measurement in the spectral range 350-900 nm, which was carried out at ambient conditions. The room temperature, steady state photoluminescence (PL) spectra in the UV-NIR spectral range were recorded using an Edinburgh Instruments Fluorescence Spectrometer (Model: FLS900) equipped with a xenon lamp (Xe 900, xenon arc lamp) (Polymers 2021, 13, 115). The samples were excited with 2.48 eV (500 nm) excitation energy and the emitted PL was detected using the red-sensitive PMT [31]. Transient Absorption Spectroscopy Transient absorption spectroscopy (TAS) was measured using an Ultrafast Systems Helios pump-probe transient absorption spectrometer. A Coherent Libra Ti:Sapphire femtosecond regenerative amplifier (4 W, 1 kHz, 800 nm, 100 fs) was used as the source for both pump and probe pulses. The output of the amplifier was split into two beams. The first beam pumped a Coherent OperA Solo optical parametric amplifier, which converted the 800 nm input to a 2.48 eV (500 nm) output. We kept the pump excitation intensity low (~2 µJ/cm²) to avoid possible exciton-exciton annihilation and non-linear effects.
The second beam generated a broadband white light continuum (WLC) from 0.8 eV to 1.55 eV by focusing 800 nm light into a sapphire plate. The pump and probe beams were then overlapped spatially on the thin film sample. The WLC transmitted through the sample was sent to a fiber-optics-coupled linear array spectrometer. The pump-probe delay was controlled by an optical delay line with a range of approximately 5 ns [31]. AFM Measurements AFM topographic images and phase images were taken using the AAC mode with a Keysight 5500 AFM/SPM system (Keysight Technologies, Inc., CO, USA). A Bruker Sharp Nitride Lever probe, SNL-10, with a nominal frequency of 65 kHz and a nominal spring constant of 0.35 N/m was used in the AFM scanning (Bruker AFM Probes, Camarillo, CA 93012, USA). SANS Measurements Small-angle neutron scattering experiments were carried out at the NGB 30 m SANS beamline at the NIST Center for Neutron Research (NCNR), National Institute of Standards and Technology (NIST) [32]. Five instrumental configurations were used to collect SANS data from q ≈ 0.001 to q ≈ 1.0 Å⁻¹ (q is the momentum transfer defined as q = 4π sin θ/λ, with θ and λ being half of the scattering angle and the neutron wavelength, respectively). The neutron wavelength was 6 Å (wavelength spread ∆λ/λ ≈ 14%) at sample-to-detector distances (SDDs) of 1 m, 4 m, and 13 m, which covers a q-range between q ≈ 0.003 and q ≈ 0.5 Å⁻¹. Low-q scattering data, extending to q ≈ 0.001 Å⁻¹, were collected using a focused neutron beam at λ ≈ 8.4 Å. High-q scattering data (between q ≈ 0.5 and q ≈ 1.0 Å⁻¹) were collected using a 3 Å wavelength at a broader wavelength spread of ∆λ/λ ≈ 22%. To obtain the scattering from the polymer films, scattering data from polymer films deposited on silicon wafers and from a blank wafer were measured separately, and then the scattering from the blank wafer was subtracted, with the transmission coefficients of the samples handled properly.
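The momentum transfer definition quoted above is easy to sanity-check numerically; the scattering angle used below is an illustrative value, not one reported in the text.

```python
import math

def q_momentum_transfer(two_theta_deg, wavelength_angstrom):
    """q = 4*pi*sin(theta)/lambda, where theta is half the scattering angle."""
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength_angstrom

# With the 6 A neutrons, a scattering angle near 27.6 degrees reaches the
# upper end of the quoted pinhole q-range (~0.5 A^-1):
print(round(q_momentum_transfer(27.6, 6.0), 3))   # → 0.5
```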
All 1D scattering profiles were normalized to absolute scale. Details of the data reduction protocol can be found in Ref. [33]. Results and Discussion Figure 1a shows the chemical structure of the donor PBnDT-FTAZ polymer, the electron acceptor PCBM molecule, and the DIO additive, whereas Figure 1b shows the absorption spectra of PBnDT-FTAZ:PCBM with and without DIO. Vibronic features below 650 nm are reflected in the absorption spectra of both films. The intensities of the first two vibronic peaks, attributed to the 0-0 and 0-1 transitions in the absorption spectra, identify the different types of aggregation [26]. When the 0-0 absorption is stronger than that of the 0-1 transition, the polymer is J-like, and when the reverse is the case, the material is H-like aggregated. The DIO additive leads to slight differences in the aggregation of PBnDT-FTAZ:PCBM, which is reflected in the relative intensities of the 0-0 and 0-1 vibronic transitions in the absorption spectra. The ratio of the intensity of the 0-0 peak to the 0-1 peak is 1.03 for the pristine PBnDT-FTAZ:PCBM blend, whereas it increases to 1.06 for the DIO-added blend, suggesting J-like aggregation in both blends [28,34]. However, as the ratio of the 0-0 to 0-1 vibronic peaks increases with the addition of DIO, this suggests that the additive can induce more J-aggregated behavior in the PBnDT-FTAZ:PCBM films. The slightly red-shifted absorption in the DIO-added blend further supports this assignment [24]. It has been reported that PBnDT-FTAZ also shows more J-like behavior when it is blended with a non-fullerene acceptor and DIO is added [35]. The observed strong J-aggregation leads to stronger intrachain exciton coupling, planarization of the polymer backbone, enhanced crystallinity, and a higher hole mobility [24,36].
The more J-like absorption characteristics in the DIO-added blend are attributed to: (i) conformational changes, which may increase the planarity of the polymer backbone, and (ii) a higher delocalization of the π-electron density along the more planarized conjugation system, which may result in enhanced intrachain exciton interactions [24]. These subtle conformational differences have direct consequences on the head-to-tail interactions of the transition dipole moments of the chromophores, which ultimately influence the spectral line shapes [37].
The ratio of the 0-0 and 0-1 peak absorbances is related to the free exciton bandwidth (W) [38] and the energy of the intermolecular vibration E_p. We calculated W of these blended films using the weakly interchain-coupled modified Franck-Condon model [28,39], A_{0-0}/A_{0-1} ≈ ((1 − 0.24 W/E_p)/(1 + 0.073 W/E_p))², where A_{0-0} and A_{0-1} are the peak absorbances from Figure 1b, and E_p was obtained from the difference in energy of the 0-0 and 0-1 absorbance peaks. We obtained exciton bandwidths (W) of −8.60 meV and −16.9 meV for the blend films with and without DIO, respectively. The negative W further indicates the J-aggregate molecular packing in these blends [28]. The different exciton bandwidths of these two blends indicate that DIO addition does, in fact, alter the conjugation length of the polymer chain. In reality, both coupling mechanisms are present in conjugated polymers, as described by the generalized HJ aggregation model [18,40].
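The bandwidth extraction can be sketched as simple arithmetic. The closed form below is one commonly used weakly coupled (modified Franck-Condon) ratio expression, A00/A01 ≈ ((1 − 0.24 W/Ep)/(1 + 0.073 W/Ep))²; whether this exact variant matches refs [28,39] is an assumption of this sketch, as is the ~180 meV vibronic spacing E_p.

```python
import math

def exciton_bandwidth_meV(ratio_00_01, Ep_meV):
    """Invert A00/A01 = ((1 - 0.24*W/Ep) / (1 + 0.073*W/Ep))**2 for W.
    Assumed form of the weakly coupled modified Franck-Condon relation."""
    r = math.sqrt(ratio_00_01)
    x = (1.0 - r) / (0.24 + 0.073 * r)   # x = W / Ep
    return x * Ep_meV

# Peak ratios quoted in the text; Ep = 180 meV is an assumed vibronic spacing.
for ratio in (1.03, 1.06):
    print(ratio, round(exciton_bandwidth_meV(ratio, 180.0), 1))
```

A ratio above one yields a negative W, the signature of J-like packing noted in the text; with this assumed E_p the two ratios give W of roughly −8.5 meV and −16.9 meV, close in magnitude to the values quoted above.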
It is noted that the H- and J-aggregate analysis of steady-state polymer UV-Vis spectra has provided significant insights into the interplay between the structural and optoelectronic properties of polymers and unraveled some of the physics behind the differences in measured spectral line shapes in solution, as well as the amorphous and aggregated fractions of the material in the solid state [28,34,41]. In contrast to the absorption spectra, the steady-state PL spectra of the pristine blend and DIO-added blend films do not exhibit J-like character. The 0-1 emission is stronger compared to the 0-0 transition (Figure 2a). The ratio of the intensity of the 0-0 peak to the 0-1 peak is 0.88 and 0.97 for the pristine PBnDT-FTAZ:PCBM and DIO-added blend films, respectively. This difference thus suggests that the photo-excited species created in the absorption process and those that recombine during the emission exhibit very different intra- and interchain electronic interactions, because there is a change in electronic coupling during the relaxation process [42]. Spano et al. established the connection between the vibronic progression and exciton delocalization through the ratio of 0-0 to 0-1 photoluminescence intensity in molecular aggregates [43]. This ratio is proportional to the exciton coherence length [44], which is related to exciton delocalization [43]. Therefore, there can be slight differences in the 0-0/0-1 ratios in the absorption and emission lines, because the PL is more sensitive to the exciton coherence length, while the absorption is more sensitive to the free exciton bandwidth. As a result, it is common to observe stronger characteristics of one type over the other when comparing absorption and emission profiles. In general, though, within the Spano model, the emission is not completely symmetrical with the absorption [23].
The spectroscopic measurements, such as ground-state absorption and PL, provided information about the effect of DIO on polymer aggregation and exciton delocalization in PBnDT-FTAZ:PCBM. To understand the interrelation between exciton delocalization and charge transfer, we measured the PL of the neat PBnDT-FTAZ and blended films. Comparison of the PL between the blended and the neat films indicates quenching of the polymer photoluminescence. We observed a 73% PL-quenching efficiency in the pristine blend, whereas it is 78% in the DIO-added blend (Figure 2b). The increased PL-quenching efficiency in the DIO-added blend suggests an increase in charge separation in this blend [45,46]. PL quenching is an indicator of exciton splitting at the donor-acceptor interface and provides an indication of an upper limit to the yield of dissociated charges [46]. As the PL is insensitive to non-radiative species, such as polaron pairs and charges, we utilized TAS, a widely used technique that is crucial for detecting non-luminescent species, such as polarons/charges or polaron pairs, and their time evolution. Figure 3 shows the TAS spectra and the exciton and charge (polaron) separation dynamics of pristine and DIO-added PBnDT-FTAZ:PCBM blends. Figure 3a,b show the transient absorption spectra at different time delays after the samples are excited using pump pulses tuned to 2.48 eV (500 nm). 
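The quoted quenching efficiencies follow from a simple intensity ratio. A minimal sketch, assuming efficiency = 1 − PL_blend/PL_neat under matched measurement conditions, with placeholder intensities (the actual counts are not given in the text):

```python
def pl_quenching_efficiency(pl_neat, pl_blend):
    """Fraction of the neat-film photoluminescence suppressed in the
    blend: 1 - PL_blend / PL_neat. Intensities are in the same
    arbitrary units and must be measured under matched conditions."""
    return 1.0 - pl_blend / pl_neat

# Placeholder integrated intensities chosen only to reproduce the
# quoted efficiencies; the real counts are not stated in the text.
print(pl_quenching_efficiency(100.0, 27.0))  # pristine blend -> 0.73
print(pl_quenching_efficiency(100.0, 22.0))  # 3% DIO blend   -> 0.78
```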
The transient absorption spectrum of both blends exhibits two features, at ~1250 nm and ~880 nm. Based on previously published results on conjugated polymers, we assign these two peaks to the excited-state absorption of the polymer singlet exciton and polaron, respectively [10,47-49]. Exciton and polaron dynamics were monitored by plotting the time evolution of the features at ~1250 nm and ~880 nm. The averaged exciton lifetimes, obtained from a double-exponential fit, are 4.1 ps and 2.6 ps for the pristine and the DIO-added blend, respectively, indicating that electron transfer from PBnDT-FTAZ to PCBM is more favored in the DIO-added blend (Figure 3c). The polaron lifetimes for the pristine and the DIO-added blend are 365 ps and 420 ps, with 30% and 35% residual charges, respectively, at 5 ns (Figure 3d). These results indicate efficient electron transfer and long-lived charge generation in the DIO-added blend. These observations are consistent with the improved fill factor and higher short-circuit current density observed in the 3% DIO-added PBnDT-FTAZ:PCBM solar cell device compared to that of the 0% DIO PBnDT-FTAZ:PCBM device [45]. To verify that the changes in optical properties are microstructural in origin, we studied the surface morphology of these thin films using AFM. Figure 4 shows the AFM topographic and phase images of PBnDT-FTAZ:PCBM blend films with 0% and 3% DIO. We observed morphological changes in the PBnDT-FTAZ:PCBM blend upon the addition of DIO. Specifically, under the same AFM scanning conditions, the 3% DIO-added blend formed a firmer surface with less drifting (Figure 4c,d) than the PBnDT-FTAZ:PCBM blend without DIO (Figure 4a,b) in the AFM images. 
Statistical analysis showed a 0.84 µm² surface area increment for the 3% DIO-added blend, compared to only a 0.61 µm² increment for the 0% DIO-added blend over a 9 µm² range, a significant 37.7% increase. The phase image median root mean square (RMS) was measured to be 0.14 deg for the 3% DIO-added and 0.024 deg for the 0% DIO-added blend (Table 1). These facts suggest an increased phase separation between the PBnDT-FTAZ polymer and PCBM and more rigid aggregates formed after adding 3% DIO, a desired effect for efficient charge generation and transport. These observations are consistent with the optical spectroscopy data, which show more exciton delocalization, efficient electron transfer, and long-lived charges in the DIO-added PBnDT-FTAZ:PCBM blend. We extended our morphological study by performing SANS measurements. Figure 5a shows SANS profiles of PBnDT-FTAZ:PCBM films with and without DIO. The SANS profiles are distinctly different in three aspects. First, in the high-q region above q ≈ 0.2 Å−1, the profile of the DIO-added sample shows broad peaks that are subject to instrumental broadening (particularly associated with the neutron wavelength spread). Nevertheless, the positions of the first and the second peaks can be approximately identified at q ≈ 0.33 and q ≈ 0.65 Å−1, which is consistent with reported body-centered cubic (BCC) packing [50], allowing for uncertainties associated with the broad peaks. 
The second peak was absent in the scattering profile from the sample without DIO, which might be due to the more severe disorder of packing, given that the two samples were measured using the same instrumental configurations. Second, a broad 'shoulder' shows in the intermediate-q region between q ≈ 0.004 and q ≈ 0.02 Å−1 for both profiles. The 'shoulder' is a manifestation of interference of waves scattered from nanoscale domains. 
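A scattering feature at momentum transfer q_m corresponds to a characteristic real-space spacing of roughly d ≈ 2π/q_m. A quick evaluation at the reported 'shoulder' maxima (q in Å⁻¹, so d comes out in Å; the relative shrinkage is independent of the units assumed for q):

```python
import math

def domain_size(q_m):
    """Characteristic real-space size d = 2*pi/q_m for a scattering
    feature at q_m; d is returned in the reciprocal of q_m's units."""
    return 2.0 * math.pi / q_m

d_no_dio = domain_size(0.0087)  # shoulder maximum without DIO, Angstrom^-1
d_dio = domain_size(0.011)      # shoulder maximum with 3% DIO
print(f"without DIO: {d_no_dio:.0f} A, with 3% DIO: {d_dio:.0f} A")
print(f"relative shrinkage: {100 * (1 - d_dio / d_no_dio):.0f}%")
```

The ~21% relative shrinkage of the domain spacing upon DIO addition is the unit-independent quantity here.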
Note that for the polymer film with 3% DIO, the 'shoulder' shifts toward higher q, suggesting that the domain is smaller compared with that of the film without DIO. The slight change of the position of the 'shoulder' can be clearly seen in the Iq² versus q plot (inset of Figure 5a), where the maxima are at q ≈ 0.0087 and q ≈ 0.011 Å−1, respectively, for the samples without and with 3% DIO. This corresponds to a shrinkage of the domain size from ≈7.2 nm to ≈5.7 nm (estimated as 2π/q_m, with q_m being the position of the maximum in the Iq² versus q plot), owing to the addition of DIO. Third, the two profiles show scattering upturns in the region of q ≲ 0.02 Å−1. The upturns phenomenologically follow a power-law decay, and the sample with 3% DIO shows a larger asymptote, which suggests the existence of larger macroscopic aggregates with sizes beyond the probe limit of SANS. Combining all the observations, hierarchical structures in the PBnDT-FTAZ:PCBM films can be assessed, as schematically shown in Figure 5b. PCBM particles are dispersed in the polymer matrix, forming nanoscale domains. DIO can promote close packing of PCBM, which has two consequences. On the one hand, the PCBM domain size is smaller in the polymer film with DIO compared with that without DIO. On the other hand, the domains with more closely packed PCBM tend to form aggregates at an even larger length scale [51,52]. Conclusions In this work, we prepared pristine and DIO-added PBnDT-FTAZ:PCBM blend films to investigate the role of the 1,8-diiodooctane additive on optical properties. We observed changes in aggregation, exciton delocalization, electron transfer efficiency from donor to acceptor, and charge generation when 3% DIO was added to the pristine blend solution. 
Ground-state absorption and PL measurements indicate a longer conjugation length and more exciton delocalization in the DIO-added PBnDT-FTAZ:PCBM, whereas a higher charge separation ability at the interface was observed in this blend in the TAS measurements. In addition, the AFM and SANS data indicate greater phase separation and aggregate formation in this blend. Our work indicates that the molecular conformation and aggregation changes caused by the DIO additive can account for the contrast in the device performance of fullerene-based PSCs. This suggests that understanding and controlling the microstructures of polymer-blend films using additives is important for optimizing the performance of PSCs.
6,231.4
2020-12-30T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
GARPOS: analysis software for the GNSS-A seafloor positioning with simultaneous estimation of sound speed structure The Global Navigation Satellite System – Acoustic ranging combined seafloor geodetic technique (GNSS-A) has extended the geodetic observation network into the ocean. The key issue in analyzing GNSS-A data is how to correct for the effect of sound speed variation in the seawater. We constructed a generalized observation equation and developed a method to directly extract the gradient sound speed structure by introducing appropriate statistical properties into the observation equation, especially the data correlation term. In the proposed scheme, we calculate the posterior probability based on the empirical Bayes approach, using Akaike's Bayesian Information Criterion (ABIC) for model selection. This approach enabled us to suppress the overfitting of sound speed variables and thus to extract a simpler sound speed field and stable seafloor positions from the GNSS-A dataset. The proposed procedure is implemented in the Python-based software "GARPOS" (GNSS-Acoustic Ranging combined POsitioning Solver). Basic configurations of the GNSS-A observation Precise measurement of seafloor position in the global reference frame opens the door to "global" geodesy in the true sense of the word. It extends the observation network for crustal deformation into the ocean and has revealed tectonic processes in subduction zones, including megathrust earthquakes (e.g., Bürgmann and Chadwell, 2014; Fujimoto, 2014, for review). Many findings have been reported, especially in the northwestern Pacific along the Nankai Trough (e.g., Yokota et al.), based on the GNSS-A (GNSS-Acoustic ranging combined) seafloor positioning technique proposed by Spiess (1980). Observers can take various approaches to designing the GNSS-A observation for the positioning of the seafloor benchmark. 
They have to resolve difficulties not only in the technical realization of GNSS-A subcomponents, such as the acoustic ranging and the kinematic GNSS positioning, but also in designing the observation configurations and analytical models to resolve the strongly correlated parameters. For example, because the acoustic ranging observations are performed only on the sea surface, the errors in sound speed perturbations are strongly correlated with the relative distances, typically the depths of the benchmarks. In the very first attempt at realization, Spiess et al. (1998) derived horizontal displacement using a stationary sea-surface unit which was approximately placed at the horizontal center of an array of multiple seafloor mirror transponders. They determined the relative positions and depths of the transponders in advance. The relative horizontal positions of the sea-surface unit with respect to the transponder array can be determined from acoustic ranging data, to be compared with the global positions determined by the space geodetic technique. In this "stationary" GNSS-A configuration, the temporal variation of sound speed is less likely to affect the apparent horizontal position under the assumption that the sound speed structure is horizontally stratified. Inversely, by comparing the residuals of acoustic travel time from multiple transponders, Osada et al. (2003) succeeded in estimating the temporal variation of sound speed from the acoustic data. Kido et al. (2008) modified the expression to validate the stationary configuration for a loosely tied buoy, even in the case where the sound speed has spatial variations. The stationary GNSS-A configuration is applied mainly by groups at the Scripps Institution of Oceanography (e.g., Gagnon et al., 2005; Chadwell and Spiess, 2008) and at Tohoku University (e.g., Fujimoto et al., 2014; Tomita et al., 2015). On the other hand, Obana et al. 
(2000) and Asada and Yabuki (2001) took a "move-around" approach, in which the 3-dimensional position of a single transponder can be estimated by collecting acoustic data from various relay points on the sea surface. In order to enhance the stability of positioning, the assumption that the geometry of the transponder array is constant over the whole observation period is usually adopted (e.g., Matsumoto et al.); otherwise, errors in the sound speed estimates map onto the transponder positions along the averaged acoustic-ray direction, which results in distortion of the estimated array geometry. Constraining the array geometry contributes to reducing the bias error in the sound speed estimates and in the transponders' centroid position. It should be noted that these two configurations are compatible under adequate assumptions and constraints. Recently, the group at Tohoku University has used not only the stationary but also the move-around observation data for determining the array geometry (Honsho and Kido, 2017). Recent improvements on GNSS-A analytical procedures In the late 2010s, analytical procedures with estimation of the spatial sound speed gradient for the move-around configuration were developed. In the earlier stages of move-around GNSS-A development, the spatial variations of sound speed were approximated as temporal variations, because most of the sound speed change is confined to the shallowest portion of the acoustic ray paths (e.g., Watanabe and Uchida, 2016). The ATD offset values should be measured before the GNSS-A observation. Underwater acoustic ranging Another key subcomponent is the technique to measure the acoustic travel time between the sea-surface transducer and the seafloor transponders. The techniques for precise ranging using acoustic mirror-type transponders were developed and put into practice in early studies (e.g., Spiess, 1980; Nagaya, 1995). 
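These mirror-type transponders measure round-trip travel time, which cancels the first-order effect of a steady current (advection) along the path: the one-way times L/(c+v) and L/(c−v) sum to (2L/c)/(1 − (v/c)²), so the bias is second order in v/c. A numerical check under assumed values:

```python
# Round-trip timing vs one-way timing in a steady along-path current.
# All values are illustrative, not taken from the paper.
L = 3000.0   # one-way path length, m (assumed)
c = 1500.0   # mean sound speed, m/s (typical seawater)
v = 0.5      # current speed along the path, m/s (assumed)

t_round = L / (c + v) + L / (c - v)          # measured round trip
t_ideal = 2.0 * L / c                        # round trip in still water
one_way_bias = abs(L / (c + v) - L / c) / (L / c)   # first order, ~v/c
round_trip_bias = abs(t_round - t_ideal) / t_ideal  # second order, ~(v/c)^2
print(f"one-way relative bias:    {one_way_bias:.2e}")
print(f"round-trip relative bias: {round_trip_bias:.2e}")
```

With v/c ≈ 3×10⁻⁴, the round-trip bias (~10⁻⁷) is three orders of magnitude below the one-way bias, which is why round-trip ranging suppresses advection effects.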
Measuring round-trip travel time reduces the effect of advection of the medium between the instruments. When the sound speed structure is assumed to be horizontally stratified, we can apply a heuristic approach based on Snell's law (e.g., Hovem, 2013), which has an advantage in computation time (e.g., Chadwell and Sweeney, 2010; Sakic et al., 2018). Therefore, we decomposed the 4-dimensional sound speed field into a horizontally stratified stationary sound speed profile and a perturbation, obtaining a travel time expression with a fractional correction coefficient γ. Sound speed perturbation model In seawater, sound speed is empirically determined as a function of temperature, salinity, and pressure (e.g., Del Grosso, 1974). Because these variables strongly depend on the water depth, the vertical variation of the sound speed is much larger than the horizontal variation at the observation scale. Thus, |γ| ≪ 1 will be satisfied in most cases where the reference sound speed appropriately represents the sound speed field. In such cases, the average sound speed along the actual ray path is expressed as c̄₀ + δc̄ ≈ c̄₀ + γc̄₀, where c̄₀ denotes the average sound speed of the reference profile. Recalling that the sound speed field is continuous and usually smooth in time and space compared to the sampling rates of the acoustic data, the acoustic ray path also has continuity in time and in the positions of both ends, within the observation scale. This means that acoustic rays from/to neighboring ends transmitted at almost the same time will take almost the same paths. Thus, the correction coefficient γ can be modeled as a smooth function of time and of the acoustic instruments' positions for the transmission and return paths. Let us assume that the model parameters follow normal distributions whose variance-covariance matrices can be scaled by hyperparameters. 
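If the average sound speed along a fixed ray is c̄₀(1+γ) with |γ| ≪ 1 (γ the fractional perturbation), the travel time scales as T = T₀/(1+γ) ≈ T₀(1−γ). A toy check of this linearization, with assumed (not paper-specific) values:

```python
# First-order travel-time correction for a fixed ray of length L:
# T = L / (c0 * (1 + gamma)) ~= (L / c0) * (1 - gamma) for small gamma.
# Values below are illustrative assumptions, not from the paper.
L = 4500.0    # slant range, m (assumed)
c0 = 1500.0   # reference mean sound speed, m/s (typical seawater)
gamma = 1e-3  # fractional sound speed perturbation (assumed)

t_exact = L / (c0 * (1.0 + gamma))
t_approx = (L / c0) * (1.0 - gamma)
rel_err = abs(t_exact - t_approx) / t_exact
print(f"exact {t_exact:.6f} s, first-order {t_approx:.6f} s")
print(f"relative error of the linearization: {rel_err:.2e}")
```

The linearization error is of order γ² (here ~10⁻⁶), which is why treating γ as a small additive correction is adequate for centimeter-level positioning.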
The prior probability density functions (pdfs) for these constraints are written as normal distributions, and the overall prior pdf is obtained by combining them; its normalization constant is expressed through the rank and the absolute value of the product of the non-zero eigenvalues of the (generally rank-deficient) prior covariance matrix. Variance-covariance of data For the observed data, we assume that the travel time residuals also follow a normal distribution with a variance-covariance matrix scaled by an overall variance hyperparameter, in which two further hyperparameters control the non-diagonal components. The observation error has a temporal correlation which comes from the kinematic GNSS noise, while the modeling error has a spatio-temporal correlation because the sound speed variation is modeled by a smooth function of space and time; the assumed covariance terms reflect both. The relative data error is taken to be independent of the travel time value, so each datum is scaled by its calculated travel time and all measured data have the same weight on the real scale. Posterior probability The posterior pdf after the data acquisition, which can be defined to be equal to the likelihood of the model parameters given the data, is then evaluated for model selection. The related quantities are summarized in Tables 1 and 2, respectively. We developed GARPOS to be compatible with both observation configurations. When handling GNSS-A data collected in the stationary configuration, we should process the data with some constraints on the model parameters. Specifically, (1) the upward components of the transponders' positions should be fixed to zero, and (2) the spatial gradient components of the sound speed perturbation model should not be solved for, because these parameters cannot be well resolved in the stationary configuration. 
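The data covariance just described (temporal correlation from kinematic GNSS noise plus spatio-temporal modeling correlation) can be sketched with a simple kernel. For illustration only, assuming an exponential decay of correlation with shot-time separation (the exact kernel used in GARPOS may differ):

```python
import math

def travel_time_covariance(times, sigma2, mu_t):
    """Toy data covariance: Cov[e_i, e_j] = sigma2 * exp(-|t_i - t_j| / mu_t).
    mu_t is the data correlation length in the same units as `times`;
    mu_t = 0 recovers an uncorrelated (diagonal) covariance."""
    n = len(times)
    cov = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if mu_t > 0.0:
                cov[i][j] = sigma2 * math.exp(-abs(times[i] - times[j]) / mu_t)
            else:
                cov[i][j] = sigma2 if i == j else 0.0
    return cov

# Shots 30 s apart with a 1-minute correlation length (assumed values).
cov = travel_time_covariance([0.0, 30.0, 60.0], sigma2=1e-8, mu_t=60.0)
print(cov[0][1] / cov[0][0])  # neighbouring-shot correlation, exp(-0.5)
```

A positive correlation length down-weights the effective information carried by closely spaced shots, which is the mechanism that suppresses overfitting of the sound speed variables.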
Although further parameter tuning may be required for optimization, users can solve for the seafloor position with GARPOS using stationary data in addition to move-around data. 5 Applications to the actual data Data and settings In order to verify the proposed analytical procedure, we reanalyzed the GNSS-A data at the sites named "TOS2" and "MYGI" (Table 3). Along with the acoustic observations, the profiles of temperature and/or conductivity were measured by CTD, XCTD or XBT probes several times. The reference sound speed profile was calculated from the observed temperature and salinity profiles using the empirical relationship proposed by Del Grosso (1974). To save computational cost for ray tracing, the profile was approximated by a polygonal curve with several tens of nodes (Figure 5). During a GNSS-A survey, the vessel sails on a pre-determined track over the seafloor transponder array to collect geometrically balanced acoustic data (e.g., Figure 1). The along-track observation (called a "subset", hereafter) is repeated several times with reversed sailing direction, in order to reduce the bias due to errors in the ATD offset. During an observation cruise, it occasionally took more than a few weeks to collect sufficient acoustic data at a single site due to weather conditions or other operational restrictions. Even so, we compiled a single dataset per site per cruise for the static seafloor positioning in practice, because the positional changes should be too small to detect. We call the collection of a single GNSS-A dataset an "observation epoch", or "epoch", hereafter. We set the numbers of basis functions in equation 5 to 15 each, for both the preprocess and the main process. 
Array geometry determination In order to calculate the proper array geometry for the rigid-array constraint, we first performed a preliminary array-free positioning. The preliminary array-free positioning was also used for verification of the collected data: we eliminated outliers whose discrepancies from the preliminary solution were larger than an arbitrary threshold, set to 5 times the root mean square (RMS) value of the travel time residuals. Hyperparameter search In order to obtain the solution, we should determine appropriate values for the various hyperparameters. In the scheme of ABIC minimization, the overall data variance can be determined analytically (equation 23). It is reasonable to assume a common value for the hyperparameters that control the smoothness of the spatial sound speed structure. For the purpose of single positioning, the prior scale for the positions should be a large number, for example of meter order; a large value hardly changes the ABIC value and thus the solution. In order to save computational resources, we further reduced the number of hyperparameters: we tentatively set one of the two data-correlation hyperparameters to 0.5, and, because the single-transducer GNSS-A requires assuming strong constancy of the spatial sound speed structure, we set the scale of the gradient terms to 0.1 times that of the base perturbation term. The last two hyperparameters, the data correlation length and the scale of the sound speed perturbation, were determined with the grid search method. The tested values were 0, 0.5, 1, 2, and 3 minutes for the correlation length and 10⁻³, 10⁻², 10⁻¹, 10⁰, 10¹, and 10² for the perturbation scale. TOS2 is located offshore south of Shikoku Island, southwestern Japan, above the source region of the 1946 Nankaido earthquake (e.g., Sagiya and Thatcher, 1999) along the Nankai Trough. 
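The hyperparameter search above reduces to a small grid search minimizing ABIC. A schematic sketch of the selection logic, with a stand-in scorer (the real ABIC integrates the model parameters out of the likelihood, which is beyond a few lines):

```python
import itertools

def grid_search_abic(mu_t_grid, scale_grid, abic):
    """Return the (correlation length, perturbation scale) pair that
    minimizes the supplied ABIC scorer over the Cartesian grid."""
    return min(itertools.product(mu_t_grid, scale_grid),
               key=lambda p: abic(*p))

# Grids from the text: correlation length in minutes, scale on a log grid.
mu_t_grid = [0.0, 0.5, 1.0, 2.0, 3.0]
scale_grid = [1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]

# Stand-in scorer with a known minimum at (1.0, 0.1), for illustration;
# a real implementation would evaluate ABIC from the marginal likelihood.
def dummy_abic(mu_t, scale):
    return (mu_t - 1.0) ** 2 + (scale - 0.1) ** 2

best = grid_search_abic(mu_t_grid, scale_grid, dummy_abic)
print(best)  # -> (1.0, 0.1)
```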
Results According to Yokota and Ishikawa (2020), who investigated the transient deformations at the GNSS-A sites along the Nankai Trough, no significant signal was detected at TOS2. The results of the proposed method show the same trends as the conventional results. Although the trend of horizontal displacement seems to change in 2018 or 2019, careful inspection is needed because the transponders were replaced during this period. MYGI is located offshore east of Miyagi Prefecture, northeastern Japan, which experienced the 2011 Tohoku-oki earthquake (Sato et al., 2011). After the earthquake, significant westward postseismic movement and subsidence due to viscoelastic relaxation have been observed at MYGI (Watanabe et al., 2014). The postseismic movements continue but appear to decay. It is true that the changes in the displacement rate at these sites are crucial for seismic and geodetic research, but discussing these matters is beyond the scope of the present paper. The point is that the seafloor positioning results were well reproduced by the proposed method. Consider the case where the gradient of slowness lies in a certain layer, from the sea surface down to some depth in the water. In such cases, the deep gradient term is proportional to the shallow one, which is exactly the same assumption as in the model by Yasuda et al. (2017). The model of Kinugasa et al. (2020) is the special case of those models where the layer depth equals the water depth. In the proposed method, the sound speed field is approximately interpreted by their models when the unit vectors of the two gradient terms coincide and the shallow term is not smaller in magnitude than the deep term. The depth of the gradient layer is then calculated from the ratio of the two gradient terms (equation 32). When the two terms are equal, this reduces to the model of Kinugasa et al. (2020). Conversely, when the deep term is much smaller than the shallow term, the sound speed gradient lies in a thin layer near the surface. 
In addition to the simple model above, the proposed method can extract a more complicated sound speed field, partly described by the extracted correction coefficients. The extracted parameters for the sound speed perturbation indicate the complexity of the oceanographic structure, as shown in the next section. Validity of extracted sound speed perturbation model Typical examples of the estimation results for each observation, i.e., the time series of travel time residuals and the sound speed perturbation interpreted from the correction coefficients, are shown in Figure 7. Results for all the datasets are available in Supplementary Figure 1. In most cases at site TOS2, both terms of the estimated sound speed gradient vector stably point south to southeast. Because the sound speed increases with the water temperature, this means that the water temperature is higher in the southern region. The result that the deep gradient term is comparable with the shallow one in many cases indicates that the gradient of water temperature continues to the deeper portion, as discussed in the previous section. This is consistent with the fact that the Kuroshio current continuously flows to the south of TOS2. In contrast, the directions of the gradient terms at MYGI are less constant than at TOS2. Unlike the area around TOS2, where the Kuroshio current dominantly affects the seawater structure, MYGI is located in an area with a complicated ocean current system (e.g., Yasuda, 2003; Miyazawa et al., 2009). Some residuals may also originate in other observation subcomponents, such as the random walk noise in GNSS positioning, the drifts of the gyro sensor, or the time synchronization error between the devices. The preferred models for all the tested epochs had positive values for the data correlation length, which contributed to avoiding overfitting of the correction coefficients. 
It is considered that the plausible estimation of sound speed is realized by introducing the statistical information criterion and the information on data covariances. Figure 8 shows examples of models selected without assuming data correlation, i.e., with the correlation length set to zero; in this case the preferred models were selected from perturbation scales spanning 10⁻³ to 10⁴. It is clear that the preferred models without data correlation have a larger perturbation scale. Although the residuals of travel time were reduced in these models, overfitting occurred for each term of the correction coefficients. Comparing the preferred and less-preferred results, the existence of data covariance components contributes to the selection of a model with less perturbation by decreasing the impact of individual data on the model parameters. Finally, we confirm the stability of the seafloor positioning results. The differences in seafloor position between the tested models and the most preferred models are summarized in Figure 9. The differences in estimated positions for most of the tested models converged to within several centimeters. For both sites, variations in the vertical component tend to be larger for larger values of the perturbation scale. This indicates that finer hyperparameter tuning is not required for the application to seafloor positioning. Conclusions We reconstructed the GNSS-A observation equation and developed the Python-based software GARPOS to solve for the seafloor position as well as the sound speed perturbations using an empirical Bayes approach. It provides a stable solution for a generally ill-posed problem caused by correlation among the model parameters, by introducing hyperparameter tuning based on ABIC minimization and a data covariance that rationalizes the normalization constant of the posterior pdf. 
The most important point is that the proposed method succeeded in directly extracting the time-dependent sound speed field with two end members of spatial gradient terms, roughly characterized by their depths, even when the observers used only one sea-surface unit. The statistical approach allowed us to suppress the overfitting and thus to obtain a simpler sound speed field from a densely collected dataset. It successfully reproduced the stationary southward sound speed gradient at TOS2, which is consistent with the Kuroshio current. On the other hand, model overfits were seen in several epochs. These overfits can be caused not only by an actually complicated sound speed field but also by other error sources which were not well included in the model. This means that the hyperparameter tuning also plays a role in the verification of the dataset and model. Error analyses in such cases might rather help improve the GNSS-A accuracy and methodology. We suggested a simplified formatting for the GARPOS input files. Researchers can enter the field of seafloor geodesy by collecting the listed data with adequate precision. Since each subcomponent of the GNSS-A technique, i.e., GNSS positioning, acoustic ranging, and so on, has been well established, observers can combine them on their own platforms. In particular, GNSS-A is expected to be put to practical use in the near future with unmanned surface vehicles (Chadwell, 2016).
A point mutation in the DNA-binding domain of HPV-2 E2 protein increases its DNA-binding capacity and reverses its transcriptional regulatory activity on the viral early promoter Background The human papillomavirus (HPV) E2 protein is a multifunctional DNA-binding protein. The transcriptional activity of HPV E2 is mediated by binding to its specific binding sites in the upstream regulatory region of the HPV genomes. Previously we reported an HPV-2 variant from a verrucae vulgaris patient with huge extensive clustered cutaneous lesions, which harbors five point mutations in its E2 ORF: L118S, S235P, Y287H, S293R and A338V. Under the control of the HPV-2 LCR, co-expression of the mutated HPV E2 induced an increased activity of the viral early promoter. In the present study, a series of mammalian expression plasmids encoding E2 proteins with one to five amino acid (aa) substitutions for these mutations was constructed and transfected into HeLa, C33A and SiHa cells. Results CAT expression assays indicated that the enhanced promoter activity was due to the co-expression of the E2 constructs containing the A338V mutation within the DNA-binding domain. Western blot analysis demonstrated that the transiently transfected E2-expressing plasmids, whether prototype or the A338V mutant, were continuously expressed in the cells. To study the effect of the E2 mutations on its DNA-binding activity, a series of recombinant E2 proteins of various lengths was expressed and purified. Electrophoretic mobility shift assays (EMSA) showed that the binding affinities of the E2 protein with the A338V mutation, both to an artificial probe with two E2 binding sites and to the HPV-2 and HPV-16 promoter-proximal LCR sequences, were significantly stronger than those of the HPV-2 prototype E2. Furthermore, co-expression of the construct containing the A338V mutant exhibited increased activity on the heterologous HPV-16 early promoter P97 compared with the prototype E2.
Conclusions These results suggest that the mutation from Ala to Val at aa 338 is critical for E2 DNA binding and its transcriptional regulation. Background Human papillomaviruses (HPVs) are small, double-stranded DNA viruses that infect the mucosal epithelial tissues of the anogenital tract, oral cavity and upper alimentary tract, as well as the cutaneous epithelial tissues of hands, feet and trunk. HPVs have been grouped into cutaneous types, which cause cutaneous warts and epidermodysplasia verruciformis, and mucosal types, which predominantly induce benign and malignant lesions of the genital tract; among the cutaneous types, HPV-2 has been frequently associated with verrucae vulgaris [1]. The HPV-2 genome is composed of eight open reading frames (ORFs) encoding the regulatory proteins essential for completion of the viral life cycle and the structural components of the virion, respectively [2]. HPV E2 proteins are believed to control the transcription of viral genes through binding to specific sites in viral DNA, multiple copies of which are found in the viral upstream regulatory regions (URRs) [3]. The HPV E2 protein can function as either a repressor or an activator of early gene transcription, depending on the location of the E2 binding sites in the viral regulatory region, as previously demonstrated for genital HPVs [4]. The structure of the E2 protein resembles that of a typical transcription factor, with an amino-terminal transcriptional activation domain (TAD) and a carboxyl-terminal DNA-binding/dimerization domain (DBD), separated by a variable hinge region [2]. The E2 protein exists in solution and binds to the target DNA as a dimer. The HPV-16 E2 DBD forms a dimeric β-barrel, with each subunit contributing an anti-parallel 4-stranded β-sheet "half-barrel" [5]. Several studies showed that E2 acts as a transactivator at low concentrations and as a repressor at high concentrations.
Recently, it has been reported that the locations of E2 binding sites are important for transcriptional repression, independent of binding affinities [6]. Besides being a transcriptional regulator in the viral life cycle, the E2 protein is believed to play an important role in the carcinogenesis of HPV-associated cancers. The HPV genome can exist in malignant cells in two forms, integrated into the host chromosome or as episomal DNA. The majority of HPV-associated cancers, especially cervical carcinomas, contain integrated HPV DNA [7]. Usually, integration of the viral genome into the host chromosome results in disruption of the E2 and E1 ORFs, leading to increased transcription from the viral early promoter and elevated expression of the viral oncogenes E6 and E7 [7,8]. About 15-20% of HPV-positive cervical cancers contain intact HPV genomes in an extrachromosomal state. Various point mutations or deletions in the HPV genome have been reported to be related to the viral oncogenic potential, e.g. in the long control region (LCR) [9] and the E2 and E1 ORFs [10]. Previously we reported a verrucae vulgaris patient with huge extensive clustered cutaneous lesions who was confirmed to be infected by an HPV-2 variant [11,12]. Several point mutations were detected in the LCR of this HPV-2 variant that led to an increased promoter activity. In addition, five point mutations were found within the E2 ORF. Expression of the E2 mutant exhibited increased activity on the viral early promoter as compared with the prototype E2 [13]. In order to gain insight into the potential influences of these mutations within E2, we constructed a series of mammalian and prokaryotic expression plasmids encoding E2 proteins with one to five amino acid (aa) substitutions. Upon co-transfection with a CAT reporter under the control of the HPV-2 LCR, the E2 construct containing the A338V mutation within the DNA-binding domain functioned as a transcriptional activator instead of a repressor.
Electrophoretic mobility shift assays (EMSA) demonstrated that the ability of the E2 protein with the A338V mutation to bind a double-stranded DNA sequence containing two E2 binding sites is markedly stronger than that of the HPV-2 prototype E2. The binding affinities of the E2 A338V mutant for the promoter-proximal LCR sequences of HPV-2 and HPV-16 were also significantly increased. Structural analyses indicated that the A338V mutation is located in the β-barrel region. These results imply that the mutation A338V is critical for E2 DNA binding and promoter regulation. Results The A338V mutation within the HPV-2 E2 DNA-binding domain is critical for E2 transcriptional regulatory activity on the HPV-2 early promoter To assess the effect of the point mutations within E2 on its transcriptional activity, a series of HPV-2 E2 mammalian expression plasmids was constructed. These include the point mutations within the E2 transactivation, hinge and DNA-binding regions. In addition, two plasmids expressing truncated E2 proteins were also generated (Figure 1A). To detect expression of HPV-2 E2 from the transiently transfected plasmids in cultured cells, HeLa and C33A cells transfected with pcDNA-E2-proto and pcDNA-E2-A338V were harvested 24, 48 and 72 h post-transfection, respectively. The presence of HPV E2 proteins in cell lysates was confirmed by Western blot with an HPV-2 E2 specific monoclonal antibody (mAb) prepared with a full-length recombinant HPV-2 E2 protein as immunogen, which recognizes segments of the N-terminus and hinge region of the E2 protein (unpublished) (Figure 1B). In addition, E2 expression in HeLa cells was also evaluated by Western blot after co-transfection of the HPV-2 E2 plasmids with the pCAT-LCR (L) reporter plasmid. As shown in Figure 1C, the full-length E2 (43 kDa) and N-terminal E2 (22 kDa) were detected, whereas no signal was observed for the C-terminal E2.
Under our experimental conditions, transfection of the blank CAT reporter vector (pBL-CAT6) did not induce detectable CAT expression (data not shown). Consistent with previous observations in HeLa cells containing the HPV-18 genome [13], co-transfection of pcDNA-E2-proto significantly reduced the HPV-2 LCR driven CAT expression (Figure 2A, pcDNA-E2-proto), whereas co-transfection of pcDNA-E2-Mut significantly increased the CAT expression (pcDNA-E2-Mut). Co-transfection with the plasmids encoding N-terminal (pcDNA-E2-N) and C-terminal (pcDNA-E2-C) E2 resulted in significantly higher CAT expression than that with the plasmid encoding full-length prototype E2. Transfection of the plasmids expressing a single point mutation within the transactivation domain (Figure 2A, pcDNA-E2-L118S) or the hinge region (Figure 2A, pcDNA-E2-S235P) resulted in CAT expression comparable to that of prototype E2, while pcDNA-E2-L118S/S235P induced a relatively higher CAT expression. Interestingly, transient transfection of pcDNA-E2-A338V containing a single point mutation in the E2 DNA-binding domain led to a significantly increased CAT expression that was even slightly higher than that of pcDNA-E2-Mut. In contrast, transfection of pcDNA-E2-Mut (-) containing all four point mutations but A338V caused a significant repression of CAT expression that was comparable with pcDNA-E2-proto.

Figure 1 Expressions of various HPV-2 E2 proteins in the cultured cells. A. Schematic structures of the full-length E2 and various mutated E2. The black crosses indicate the amino acid exchanges of these mutants. B. HeLa (left panel) or C33A (right panel) cells were either mock transfected or transfected with 500 ng of pcDNA-E2-proto or pcDNA-E2-A338V as indicated. Cells were harvested 24, 48 or 72 h post-transfection. The prepared cell extracts were separated by 15% SDS-PAGE, transferred to a nitrocellulose membrane and probed with E2 or β-actin antibody as indicated. Exposure time was 5 min for E2 and 2 min for β-actin.
The blot shown is representative of three experiments. C. 500 ng of the various E2 constructs were co-transfected with 2 μg of the CAT reporter plasmid pCAT-LCR into HeLa cells, respectively. Cells were harvested 48 h post-transfection. The blot shown is representative of three experiments. The blots of E2, E2-N and β-actin are indicated on the left and relative molecular weights are marked by arrows on the right.

Figure 2 The influences of various HPV-2 E2 constructs on the promoter activity under control of the LCR of HPV-2. A. The relative CAT expressions co-transfected with different E2 expressing plasmids in HeLa cells. The schematic structure of the HPV-2 LCR cloned in the CAT reporter plasmid pBL-CAT6 is shown above. The HPV-2 LCR covers the sequences from nt 6934 to 134. The positions of the TATA box and four potential E2 binding sites are indicated with the starting nucleotide (nt) below. HeLa cells were transfected with 2 μg of plasmid pCAT-LCR and 500 ng of either mock plasmid (pcDNA3.1) or the plasmids for various E2 constructs (as indicated). 1 μg pCMV-β-galactosidase was transfected as an internal control. Cells were harvested 48 h after transfection. The expressions of CAT and β-galactosidase were determined. The CAT expression of each preparation was normalized with its β-galactosidase value. The relative CAT expressions are averaged from at least three independent experiments and presented relative to that of pcDNA3.1. Data are represented as mean ± SEM. B. The relative CAT expressions co-transfected with different E2 expressing plasmids in C33A and SiHa cells. C33A and SiHa cells were transfected with 2 μg of plasmid pCAT-LCR and 500 ng of either mock plasmid (pcDNA3.1) or the plasmids for various E2 constructs (as indicated). 1 μg pCMV-β-galactosidase was transfected as an internal control. The relative CAT expressions are averaged from at least three independent experiments in C33A and SiHa cells and presented relative to that of pcDNA3.1.
Data are represented as mean ± SEM.

These results demonstrate that the A338V mutation within the DNA-binding domain is responsible for the loss of the E2 repression activity. Next, we examined the transcriptional activity of E2 with the A338V mutation in the HPV-negative cervical cancer cell line C33A and the HPV-16 genome-containing SiHa cells. Consistent with the results in HeLa cells, under the control of the HPV-2 LCR, co-expression of pcDNA-E2-proto and pcDNA-E2-Mut (-) led to clearly lower CAT expression compared with mock, whereas co-expression of pcDNA-E2-Mut and pcDNA-E2-A338V caused high CAT expression (Figure 2B). Notably, in pcDNA-E2-A338V-transfected cells, the relative CAT expression was higher than that of mock. These results imply that the transcriptional regulatory activity of the E2 mutant A338V is independent of the endogenous HPV genome. The A338V E2 mutant increased the binding capacity to DNA sequences containing conserved E2 binding sites in vitro To explore the mechanism for the derepression of HPV-2 promoter activity caused by the A338V mutation, a series of recombinant E2 proteins was expressed and purified in E. coli. Figure 3A summarizes the E2 proteins in different contexts, including one construct of the E2 transactivation domain, two constructs of the E2 DNA-binding region, four constructs of the E2 hinge region plus DNA-binding domain and four constructs of full-length E2. All proteins were expressed in soluble form as GST-fusions (Figure 3B). Using the biotin-labeled double-stranded oligo HPV-E2BS containing two E2 protein binding sites (E2BS), the DNA-binding activities of the different expressed E2 proteins were evaluated by EMSA. The specificity of oligo HPV-E2BS for the HPV E2 protein was first evaluated by competition experiments with homologous or heterologous unlabeled oligos.
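As an aside, candidate E2 binding sites such as those on probes of this kind can be located computationally by scanning for the published papillomavirus E2BS consensus ACCG-N4-CGGT. A minimal sketch, using an invented toy sequence rather than the actual HPV-E2BS probe:

```python
import re

# Papillomavirus E2 binding-site consensus: ACCG-N4-CGGT
E2BS = re.compile(r"ACCG[ACGT]{4}CGGT")

def find_e2bs(sequence):
    """Return (start index, matched site) for every consensus hit."""
    return [(m.start(), m.group()) for m in E2BS.finditer(sequence)]

# Invented toy fragment carrying two sites, loosely mimicking the
# double-E2BS layout of the probe described in the text
toy = "TTACCGAAAACGGTGCGCACCGTTTTCGGTAA"
sites = find_e2bs(toy)  # two hits, at indices 2 and 18
```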
Compared with the clear DNA-protein complex formation in the mixture of HPV-2 E2 and oligo HPV-E2BS, addition of excess cold homologous oligo, but not of the heterologous oligo T7, abolished the complex formation (Figure 4A, left panel). To obtain more evidence on the specificity of the binding of oligo HPV-E2BS to the E2 protein, the recombinant HPV-2 E2 was incubated with a mAb against HPV-2 E2 prior to EMSA. Along with the reduction of the signals of the DNA-E2 complexes in the presence of the anti-E2 mAb, obvious supershifts were detected (Figure 4A, right panel). These results indicate that the interaction between oligo HPV-E2BS and the E2 protein is specific. In the context of the E2 binding domain, both prototype E2 and the A338V mutant E2 formed protein-DNA complexes with the probes in a dose-dependent manner. Interestingly, the binding activity of the E2 mutant was significantly stronger than that of prototype E2 (Figure 4B). To confirm this phenomenon, four E2 proteins covering the E2 hinge region and DNA-binding region were employed in EMSA. Figure 4C shows that the DNA-binding activity of E2-HC-A338V was stronger than that of the prototype E2-HC. Additionally, E2-HC-Mut, with A338V and the other three point mutations in the hinge region, showed similar DNA-binding activity as E2-HC-A338V, while E2-HC-Mut (-), with only the three mutations in the hinge region, showed similar DNA-binding activity as the prototype E2-HC. As expected, neither GST nor the E2 transactivation domain (E2-N) formed a complex in EMSA. A similar pattern was observed in the EMSA in the context of the full-length E2 protein, in which the two constructs containing the A338V mutation (E2-FL-A338V and E2-FL-Mut) formed more obvious protein-DNA complexes than the two constructs without the A338V mutation (E2-FL and E2-FL-Mut (-)), regardless of the point mutations in the transactivation domain and hinge area (Figure 4D).
These results strongly suggest that the substitution of Ala to Val at residue 338 in the HPV-2 E2 protein critically influences its DNA-binding affinity. E2 DNA-binding affinities were influenced by the length of the E2 peptides From the EMSA results shown in Figure 4, it seemed that the DNA-binding activities of E2 were also affected by the length of the peptides. To address this possibility, the same molar amounts of E2 proteins of three different lengths were mixed with biotin-labeled oligos. With 12.5 fM of oligos, only the E2-C construct formed detectable protein-DNA complexes (Figure 5, left panel). The protein-DNA complexes of the E2-HC constructs were clearly observed when the amount of oligo was increased to 125 fM, while that of E2-C became much stronger. The A338V E2 mutants increased the binding affinity to the promoter-proximal LCR sequences of HPV-2 and HPV-16 To evaluate the DNA-binding activities of E2 with A338V to the wild-type HPV sequences, biotin-labeled double-stranded oligos derived from the sequences of the prototype HPV-2 and HPV-16 LCRs, which contained two E2 protein binding sites, were mixed with equal amounts of the different recombinant E2 proteins. Consistent with Figure 4, the A338V E2 mutants showed clearly stronger binding affinities to both the HPV-2 (Figure 5B) and HPV-16 (Figure 5C) oligos than the HPV-2 prototype E2, in the context of either the full-length or truncated forms. No difference was observed in the binding affinity of HPV-2 E2 to the LCR sequences of homologous or heterologous HPV genotypes. These results show that the A338V E2 mutant has stronger binding affinity to the promoter-proximal LCR sequences of wild-type HPVs. The E2 C-terminus (E2-C) possessed much stronger binding activities to the HPV-2 and HPV-16 LCRs than E2-HC and E2-FL, which coincided well with the binding tendencies of the different E2 lengths shown in Figure 4B, C and D. The multiple bands at higher molecular weight positions in the gels (Figures 4C, D and 5) may represent dimers of the E2 proteins.
Co-expression of HPV-2 E2 mutants with A338V induced higher activity on the heterologous HPV-16 early promoter P97 than the HPV-2 prototype E2. In order to determine whether the E2 mutation A338V exerted a similar effect on the viral early promoter of a heterologous HPV genotype, a CAT-reporter plasmid under the control of the HPV-16 LCR was co-transfected with the same amounts of various HPV-2 E2 expressing plasmids, including pcDNA-E2-proto, pcDNA-E2-Mut(-), pcDNA-E2-Mut and pcDNA-E2-A338V, respectively. Remarkably decreased CAT expression was observed when pcDNA-E2-proto was co-transfected (Figure 6, column 2), indicating that HPV-2 E2 was able to inhibit the activity of the HPV-16 promoter P97. Similar to the observations under the control of the HPV-2 LCR, expression of either E2 with the single A338V mutation (pcDNA-E2-A338V, column 4) or A338V plus the other four point mutations (pcDNA-E2-Mut, column 3) resulted in significantly higher CAT expression under the control of the HPV-16 LCR. As expected, transfection of pcDNA-E2-Mut (-) (column 5), with the other four point mutations but not A338V, still maintained the same repression of P97 activity as pcDNA-E2-proto. These data suggest that the A338V E2 mutant may reverse its regulatory activity on the viral early promoters of HPVs with similar upstream constructs. Discussion In this study we have provided evidence that a naturally occurring mutation, A338V, in HPV-2 E2 increases the E2 DNA-binding capacity and reverses its transcriptional regulatory activity on the viral early promoter. The effect of this mutation on the biological functions of E2 seems to be very critical, since the other four amino acid exchanges, located in the transactivation domain and the hinge region of E2, have little impact. The E2 protein has the typical structure of a transcriptional regulator, which consists of a multiple-protein-binding transactivation domain, a DNA-binding/dimerization domain, and a flexible linker [14].
Consistent with other previous studies, our data confirm that although the C-terminal segment of E2 alone has DNA-binding capacity, lacking the N-terminal portion makes the truncated E2 almost completely lose its promoter repressor activity. Our data indicate that the E2 N-terminus alone works as a transcriptional activator, inducing an approximately 1.5-fold increase in promoter activity. However, this positive effect on the promoter is totally abolished in the context of the whole E2 protein. The substitution L118S in the E2 transactivation domain shows no influence on either DNA binding or promoter activity. The contribution of the E2 hinge region to its transcriptional regulatory function is believed to be nonessential [15], as the three naturally occurring mutations in this area together influence neither E2 DNA binding nor transcriptional activation. Structural analyses of the C-terminal DBD from several PV E2 proteins, e.g. HPV-16, -18, -31 and bovine papillomavirus (BPV-1), either alone or together with the TAD, suggest it to be a tight dimer upon DNA binding [16,17]. The structure of the E2 DNA-binding domain is conserved among HPV families [18]. The E2 DNA-binding domains of HPV-2 and HPV-18 have 52% identity.

Figure 6 The influences of various HPV-2 E2 constructs on the promoter activity under control of the LCR of HPV-16. The relative CAT expression co-transfected with different E2 expressing plasmids was evaluated. HeLa cells were transfected with 2 μg of plasmid pCAT-HPV16-LCR and 500 ng of either mock plasmid (pcDNA3.1) or the plasmids for various E2 constructs (as indicated). 1 μg pCMV-β-galactosidase was transfected as an internal control. Cells were harvested 48 h after transfection and the expressions of CAT and β-galactosidase were determined. The CAT expression of each preparation was normalized with its β-galactosidase value. The relative CAT expressions are averaged from three independent experiments and presented relative to that of pcDNA3.1.
Data are represented as mean ± SEM.

There is only one gap between the alignments (Figure 7A). With the software Modeller9.5 and NAMD2.6, we constructed 3D structures of the DNA-binding domains of the wild-type (338A) and mutant (338V) E2 proteins using the published crystal structure of the HPV-18 E2 DNA-binding domain as the template. The amino acid residue 338 is located in the β-barrel region, far away from the helix region that binds DNA (Figure 7B), indicating that the influence of the mutation on DNA binding is not due to a direct alteration of the helix region. However, in β-barrel structures the hydrophobic residues are oriented into the interior of the barrel to form a hydrophobic core, and the stability of the β-barrel depends largely on the interactions of the inner hydrophobic amino acid residues. The mutation from Ala to Val at aa 338 increases the hydrophobicity and subsequently stabilizes the dimeric structure of E2, which is possibly responsible for the enhanced DNA-binding activities observed in the EMSA. A previous study showed that binding of the full-length wild-type BPV-1 E2 protein to the LCR sequences leads to the formation of DNA loops and that the transcriptional activation domain of E2 is necessary for this looping [19]. Such a structure will bring the tissue-specific enhancers closer to the core transcription complex for transcriptional activation [20,21]. Meanwhile, some studies have indicated that binding of the intact E2 to the LCR sequences may spatially prevent the transcriptional machinery from activating the promoter, which is the main molecular mechanism for E2 transcriptional repression [22,23]. E2 has also been shown to interact with other cellular factors, e.g. Brd4, to regulate its transcriptional activity [24]. The identification of Brd4 as a component in a dominant form of E2 complexes indicates that Brd4 may be the cofactor for the HPV E2 repressor function [25].
Apparently, Brd4 recruits E2, which in turn prevents the recruitment of TFIID and pol II to the HPV promoter [26]. Amino acid substitutions within the E2 transactivation domain impaired both the transcriptional activity and the binding to Brd4 [27]. Furthermore, Brd4 is a host chromatin adaptor for papillomaviruses. The dimerization of E2 is required for efficient Brd4 binding [28]. The mutation from Ala to Val at aa 338 of HPV-2 E2, which would change the hydrophobicity and/or tertiary structure of E2, may lead to a modification of its interaction with the chromatin and thus modulate its transcriptional regulatory activity. Although our data highlight a close correlation between the increased DNA-binding activity and the enhanced activity on viral early promoters of the mutated E2 protein, the exact mechanism remains unclear. Our data indicate that the DNA-binding capacity of the C-terminal fragment of E2 is stronger than that of the fragments with the hinge region, and much stronger than that of the full-length E2. An earlier study found that besides the full-length E2, the bovine papillomavirus (BPV) E2 ORF also encodes two other E2 peptides, the E2-TR and E8/E2 proteins [29]. These shorter E2 proteins contain the DNA-binding and dimerization domains of the C-terminus and the hinge region, but lack the transactivation domain. Relative abundance of the truncated E2 proteins has been observed in BPV-transformed cells (the molar ratio of E2:E2-TR:E8/E2 is 1:10:3) [30]. Expression of the HPV-31 E8E2C protein has been reported to inhibit HeLa cell growth [31]. However, the transcriptional profiles of other HPV E2 ORFs, whether in benign or malignant cells, are rarely addressed. The fact that the C-terminal E2 binds DNA more strongly suggests that it is more competitive than the full-length E2 in the cells.
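The hydrophobicity argument for the A338V effect can be illustrated numerically with the standard Kyte-Doolittle hydropathy scale, on which valine (4.2) is markedly more hydrophobic than alanine (1.8). The five-residue window below is a hypothetical sequence, not the actual HPV-2 E2 β-barrel segment:

```python
# Kyte-Doolittle hydropathy values (Kyte & Doolittle, 1982);
# higher = more hydrophobic
KD = {"A": 1.8, "V": 4.2, "L": 3.8, "I": 4.5, "G": -0.4,
      "S": -0.8, "T": -0.7, "F": 2.8, "Y": -1.3, "W": -0.9}

def mean_hydropathy(seq):
    """Average hydropathy over a peptide window."""
    return sum(KD[aa] for aa in seq) / len(seq)

# Hypothetical 5-residue beta-barrel core window centered on position 338
wild_type = "LVAIG"  # Ala at the central position (A338)
mutant    = "LVVIG"  # Ala -> Val substitution (A338V)

delta = mean_hydropathy(mutant) - mean_hydropathy(wild_type)
# A -> V raises the window's mean hydropathy by (4.2 - 1.8) / 5
```

Whatever the exact window, any Ala-to-Val exchange shifts the local mean hydropathy upward by the same fixed increment, consistent with the stabilization argument above.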
Our study provides evidence that HPV-2 E2, whether wild-type or mutant (A338V), induces similar biological effects under the control of homologous and heterologous HPV LCRs. This suggests that the E2 protein may exert the same regulatory activity on the viral early promoters of different HPVs with similar upstream components. Although there are more than 100 genotypes of HPVs involved in various human benign or malignant proliferating diseases, the sequences of the viral genomes are relatively conserved. Hence, the effectiveness of HPV-2 E2 may represent a common property of HPV E2 proteins. In addition to its role in regulating viral transcription, the HPV E2 protein is involved in enhancing E1-dependent viral DNA replication and genome maintenance. In HPV genomes the viral DNA replication initiation site co-localizes with the viral transcription region. However, the regulatory function of E2 in viral DNA replication is far less well understood than its role in transcriptional regulation. Although the point mutations in the TAD and hinge region of this E2 mutant do not affect DNA binding or transcriptional regulation, their influence on viral genome replication cannot be excluded. Sequence analyses of this variant HPV-2 strain have also identified several point mutations in its E1 ORF.

Figure 7 A: Alignment of the E2 DNA-binding domains of HPV-2 and HPV-18. There is only one gap between the alignment. In the HPV-2 sequence, the A338V mutation is colored red. The secondary structures are derived using the software DSSP. The arrows represent β-strands, and zigzag lines indicate helices. B: The modeled structures of the HPV-2 E2 dimeric DNA-binding domain. Compared to the crystal structure of the HPV-18 E2 DNA-binding domain (PDB: 1F9F), the two helix regions which bind DNA are confirmed (colored cyan). The right structure is the mutant form and residue 338 (Val) is colored red. The modeled structures show that residue 338 is located in the β-barrel region.
Further studies of viral genome replication will help explore the underlying reason for such huge verrucae vulgaris. Conclusions Our study provides evidence that the Ala to Val mutation at aa 338 of HPV-2 E2 is critical for E2 DNA binding and its transcriptional regulation. The binding abilities of E2 proteins with A338V to either an artificial probe containing two E2 binding sites or the HPV-2 and HPV-16 promoter-proximal LCR sequences were significantly stronger than those of the HPV-2 prototype E2. Furthermore, co-expression of the E2 constructs containing the A338V mutation induced higher activities on the heterologous HPV-16 early promoter P97 than the prototype E2. Plasmids construction The mammalian expression plasmid pcDNA-E2-proto containing the whole E2 sequence of the HPV-2 prototype, the plasmid pcDNA-E2-Mut containing the whole E2 sequence of isolate 1 and the CAT reporter plasmid pCAT-LCR(L) containing the HPV-2 prototype LCR sequence (from nt 6934 to 134) were generated previously [13]. The plasmid pCAT-LCR-HPV16 containing the HPV-16 LCR was generated previously [32]. Site-directed mutagenesis PCR was performed using pcDNA-E2-proto or pcDNA-E2-Mut as the template to generate various E2 sequences with one to four point mutations. Table 1 summarizes the primers used in PCR, in which the primers amplifying the whole E2 ORF (from nt 2685 to 3860), the E2 N-segment (from nt 2685 to 3279) and the E2 C-segment (from nt 3618 to 3860) contained a Hind III site in the upstream and a Bam HI site in the downstream primers. PCR amplification was performed in 25 μl of a reaction mixture containing 2.5 U of Taq DNA polymerase (TaKaRa, Dalian, China), 20 mM dNTP and 150 ng of each HPV-2 E2 specific primer mixture, under cycling conditions of denaturation at 94°C for 30 s, annealing at 58°C for 30 s and extension at 72°C for 1 min, for a total of 30 cycles.
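As a quick sanity check, the cycling program above can be tabulated and its total programmed hold time computed. This sketch ignores ramp times and any initial denaturation or final extension steps, which the text does not specify:

```python
# Cycling program from the text: (step, temperature in deg C, seconds)
PROGRAM = [
    ("denaturation", 94, 30),
    ("annealing",    58, 30),
    ("extension",    72, 60),
]
CYCLES = 30

def total_cycling_minutes(program, cycles):
    """Total programmed hold time for the cycled steps, excluding ramps."""
    per_cycle_seconds = sum(seconds for _, _, seconds in program)
    return cycles * per_cycle_seconds / 60.0

minutes = total_cycling_minutes(PROGRAM, CYCLES)
# 30 cycles x (30 + 30 + 60) s = 3600 s = 60 min of programmed holds
```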
Briefly, to construct the E2 sequences containing a single point mutation, including the mutants L118S, S235P and A338V, two separate PCR amplifications were conducted using pcDNA-E2-proto as the template, with the primer mixture of E2-up with 118-down, 235-down or m338-down, and the mixture of E2-down with 118-up, 235-up or m338-up, respectively. After purification, the two individual PCR products were mixed and annealed, and the sequence covering the whole E2 ORF of each mutant was constructed by another PCR amplification with primers E2-up and E2-down, generating E2-L118S, E2-S235P and E2-A338V, respectively. To generate the E2 sequence containing the two point mutations L118S and S235P, two separate PCRs were conducted based on the sequence of E2-L118S, with the primers E2-up and 235-down, as well as 235-up and E2-down, respectively. The whole E2 sequence of this mutant was obtained with the same protocol as above, generating E2-L118S/S235P. To construct the E2 sequence containing the other four mutations except A338V, the PCR reactions were performed separately using the E2-Mut sequence as the template, with the primers E2-up and 338-down, as well as 338-up and E2-down. The whole E2 sequence was obtained based on the protocol above, generating E2-Mut (-). The E2 N-segment (from nt 2685 to 3279) and E2 C-segment (from nt 3618 to 3860) were generated by PCR with the respective primer mixtures. The generated E2 sequences were cloned into the plasmid pMD18-T. After verification by sequencing, the various E2 segments were released from the cloning vectors and subcloned into pcDNA3.1, generating the mammalian expression plasmids pcDNA-E2 (Figure 1A).
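The two-round mutagenesis strategy above (two half-products sharing the mutagenic primer region, annealed and extended into the full ORF) can be mimicked on strings. The fragments below are invented toy sequences, not HPV-2 E2 sequence:

```python
def overlap_extend(fragment_a, fragment_b, min_overlap=6):
    """Join two 5'->3' fragments at their longest exact overlap
    (a suffix of A matching a prefix of B), mimicking the annealing
    and extension step of overlap-extension PCR."""
    for k in range(min(len(fragment_a), len(fragment_b)), min_overlap - 1, -1):
        if fragment_a.endswith(fragment_b[:k]):
            return fragment_a + fragment_b[k:]
    raise ValueError("no sufficient overlap between fragments")

# Invented toy half-products; the shared 9-base region stands in for the
# mutagenic primer sequence present in both fragments
upstream   = "ATGGCACCTGTTGAC"  # product of E2-up + mutagenic reverse primer
downstream = "CCTGTTGACGGATAA"  # product of mutagenic forward primer + E2-down

full_length = overlap_extend(upstream, downstream)
# -> "ATGGCACCTGTTGACGGATAA"
```

The final amplification with the flanking primers (E2-up and E2-down in the text) then corresponds to copying this joined template in full.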
To construct the different HPV-2 E2 prokaryotic expression plasmids, including E2 of the prototype (E2), E2 of isolate 1 (E2-Mut) and the various mutated E2, three lengths of E2 sequences, including the full-length E2 ORF (from aa 1 to 391, FL), the sequence from the hinge region to the end (from aa 197 to 391, HC) and the C-terminal segment (from aa 311 to 391, C), were generated by PCR with different primer mixtures, using the individual pcDNA-E2 plasmids as templates. The PCR products were cloned into the plasmid pMD18-T and subcloned into a glutathione S-transferase (GST) expression vector, pGEX-2T, generating the various pGST-E2 plasmids (Figure 3A). Cell lines, transfection and CAT assay The human cervical cancer cell line HeLa was maintained in Dulbecco's modified Eagle's medium (Invitrogen) with 10% fetal calf serum (HyClone). The C33A and SiHa cell lines were maintained in ATCC-formulated Eagle's Minimum Essential Medium with 10% fetal calf serum. Cells were plated into 60 mm 6-well plates (Falcon, Japan) one day before transfection. 2 μg of plasmid pCAT-LCR were transfected with Lipofectamine 2000 transfection reagent (Invitrogen, USA), together with 1 μg of pCMV-β-galactosidase as an internal control. To evaluate the effect of the E2 proteins on the promoter activity, the various E2 expression plasmids (500 ng) were co-transfected into the cells. Cells were harvested 48 h after transfection. CAT expression was measured quantitatively using a CAT ELISA kit (Roche, Switzerland), according to the instruction manual. β-galactosidase activity was determined using O-nitrophenyl-β-D-galactopyranoside (ONPG) as a colorimetric substrate. HPV-2 promoter activities were determined by calculating the ratios of the CAT and β-galactosidase values. Each experiment was independently performed three to five times.
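The normalization described above (CAT divided by β-galactosidase per preparation, then expressed relative to the pcDNA3.1 mock as mean ± SEM) can be sketched as follows, with placeholder readings rather than measured values:

```python
import statistics

def relative_cat(cat, bgal, mock="pcDNA3.1"):
    """Per replicate: CAT / beta-gal; then express each construct's
    ratios relative to the mean mock ratio. Returns {name: (mean, SEM)}."""
    ratios = {name: [c / b for c, b in zip(cat[name], bgal[name])]
              for name in cat}
    mock_mean = statistics.mean(ratios[mock])
    result = {}
    for name, r in ratios.items():
        rel = [x / mock_mean for x in r]
        sem = statistics.stdev(rel) / len(rel) ** 0.5 if len(rel) > 1 else 0.0
        result[name] = (statistics.mean(rel), sem)
    return result

# Placeholder triplicate readings (arbitrary units)
cat_signal  = {"pcDNA3.1": [10.0, 11.0, 9.0], "pcDNA-E2-proto": [4.0, 5.0, 3.0]}
bgal_signal = {"pcDNA3.1": [1.0, 1.0, 1.0],   "pcDNA-E2-proto": [1.0, 1.0, 1.0]}
rel = relative_cat(cat_signal, bgal_signal)
# The mock normalizes to 1.0; a repressing construct falls below 1.0
```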
Western blots HeLa and C33A cells transfected with 500 ng of the various E2-expressing plasmids, together with or without 2 μg pCAT-LCR, were harvested 24, 48 and 72 h post-transfection. Cells were pelleted by brief centrifugation and suspended in lysis buffer (10 mM Tris-HCl, pH 7.8, 0.5% sodium deoxycholate, 0.5% Nonidet P-40, 100 mM NaCl, 10 mM EDTA), supplemented with a complete protease inhibitor mixture. Cell lysates were separated by 15% SDS-PAGE and electro-transferred onto nitrocellulose membranes. After blocking with 5% nonfat dried milk in PBS (phosphate buffered saline, pH 7.6) overnight at 4°C, the membranes were incubated with a 1:1,000 HPV E2-specific monoclonal antibody at room temperature (RT). After washing with PBST (phosphate buffered saline, pH 7.6, containing 0.05% Tween-20), the membranes were incubated with a 1:5,000 horseradish peroxidase (HRP)-conjugated anti-mouse antibody. The E2 protein signals were visualized with an ECL kit (PE Applied Biosystems, USA). To reuse the blotted membrane, the developed membrane was treated with Restore Western Blot Stripping Buffer (Thermo, USA) for 10 min at RT. A 1:1,000 mAb against human β-actin (Santa Cruz, USA) and an HRP-conjugated anti-mouse antibody were used to detect β-actin; the ECL kit was used to visualize the signals. Expression and purification of E2 proteins The recombinant GST-tagged proteins were bacterially expressed in E. coli BL21 and purified with Glutathione Sepharose 4B Agarose (Pharmacia, USA) according to the protocol described in our previous study [33]. The purity of the proteins was verified by 15% SDS-PAGE. Quantitative analysis of images was carried out using the computer-assisted software Image Total Tech (Pharmacia, USA). The image was scanned with a Typhoon scanner (Pharmacia, USA), digitalized and saved in TIF format. 
Molecular modeling Three-dimensional models of the prototype and mutated HPV-2 E2 DNA-binding domains were constructed with the homology modeling software Modeller 9.5 [8], using the closely related crystal structure of the HPV-18 E2 DNA-binding domain (PDB: 1F9F) [34] as the template. The resulting structure files were subjected to energy minimization using NAMD 2.6 [18].
Research on Human Action Recognition in Dance Video Images This article investigates techniques for recognizing human movement in dance videos. Image preprocessing, codebook establishment, Zernike moments and support vector machines are used to classify and recognize human movements in dance videos. The simulation experiments show that the recognition method proposed in this paper effectively improves the recognition rate of human movements in the dance videos of the database, and can therefore better guide dancers' movements. Introduction One of the hotspots in computer research in our country is the recognition of human actions in videos. This technology uses image processing, recognition and analysis techniques to extract and analyze the actions of people in a video, determine their actions and behaviors, and obtain effective information; its uses are extensive. The key to implementing this technology is to properly preprocess the original video, then extract the image features in the video, and classify and describe them. Overall research approach Our country's recognition technology for actions in video images has only just begun to develop, let alone its combination with dance art. Using this technology to recognize human movements in dance videos can effectively extract dance knowledge from the videos. After the dance moves are recognized, they are compared with standard dance moves, the dancers' movements are evaluated objectively on this basis, and the dancers are given corresponding suggestions for movement correction. It is a new type of auxiliary training method for dance movements. This article applies human action recognition techniques, first training an SVM classifier on the relatively simple KTH database, and then intensively retraining the classifier on the A-go-go dance video database to improve its performance. 
Finally, the trained classifier is used to recognize the human actions in the A-go-go dance videos, so as to achieve a better classification effect [1][2][3]. Transform gray scale Before processing the video, it is necessary to extract the images from the video first and perform operations such as grayscale conversion, image thresholding and image segmentation, in order to reduce the amount of computation and facilitate the extraction of more effective information. The most common video image in daily life is a true-color image, that is, an RGB image in which each pixel is composed of the three primary colors (R, G, B). Because of the complexity of true-color images, processing them directly would dramatically increase the amount of computation and decrease the analysis efficiency. Therefore, the true-color image is first transformed into a grayscale image, reducing the color information contained in the video image [4-6]. Thresholding moving images In order to obtain a binary image of a moving image, the image needs to be thresholded first. The main idea of thresholding is to select a reasonable threshold and divide the pixel gray values by that threshold. To segment the moving image, the general form of the threshold can be written as T = T(x, y, p(x, y), f(x, y)), where f(x, y) denotes the gray value of the pixel at (x, y) and p(x, y) denotes the gray gradient function at that point. After applying this threshold, the binarized image is obtained. Segmentation of moving images The above operation yields the binarized image of the current frame of the video; next, the scene and the motion area in the video need to be separated, which is the task of segmenting the motion region. 
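The grayscale-conversion step can be sketched as follows. The paper does not state which channel weighting it uses, so this sketch assumes the common ITU-R BT.601 luminance weights; the nested-list image representation is purely illustrative.

```python
def rgb_to_gray(r, g, b):
    # Standard luminance weights (ITU-R BT.601); the paper does not
    # specify its weighting, so this common default is assumed.
    return 0.299 * r + 0.587 * g + 0.114 * b

def grayscale_image(rgb_image):
    # rgb_image: nested list of (r, g, b) tuples, one per pixel.
    # Returns a single-channel image of the same dimensions.
    return [[rgb_to_gray(*px) for px in row] for row in rgb_image]
```

Collapsing three channels into one reduces the per-pixel data by a factor of three, which is exactly the computational saving the text appeals to.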
This article uses the binary image processing functions in the Matlab software and establishes a reasonable threshold to find the contour of the human body in motion. The specific operation is as follows: assume the frame size is M × N, the current time is t, and the frame at time t in the video is P(x, y, t), from which the binary image A(x, y, t) is obtained. At time t, the background gray value of the binarized image A(x, y, t) is 0 and the foreground gray value is 255. Scan along the columns of A and count the number of foreground pixels in each of the n columns. Select the largest of these column counts, denoted Ci, and let i be the column number at which the maximum occurs. If the ratio between Ci and the number of rows m is greater than 1/6, the frame contains the region of a moving human body; otherwise, the frame only includes partial motion of the human body. Update the current time, t = t + 1, until all frames in the video have been scanned; the images after thresholding and segmentation are shown in the following figure. Zernike moments extract overall features After binarizing the images in the motion video, this paper uses Zernike moments to describe the characteristics of the binary images, and then classifies and recognizes them. The Zernike moment is an extremely effective orthogonal moment for describing shape and is widely used in image processing. Its main advantage is that the information extracted from the video is more complete and contains less redundancy. 
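The column-scan heuristic described above can be sketched as follows. This is a minimal interpretation of the described procedure (count foreground pixels per column, take the largest count Ci, and compare Ci/m against 1/6), using a toy nested-list binary frame rather than any real Matlab function.

```python
def contains_moving_body(binary_image, foreground=255, ratio=1 / 6):
    """Flag a frame as containing a moving human body when the largest
    per-column foreground count Ci exceeds `ratio` of the row count m."""
    m = len(binary_image)        # number of rows
    n = len(binary_image[0])     # number of columns
    col_counts = [sum(1 for r in range(m) if binary_image[r][c] == foreground)
                  for c in range(n)]
    ci = max(col_counts)
    return ci / m > ratio

# Toy 6x4 binary frame: one column is mostly foreground
frame = [[0, 255, 0, 0],
         [0, 255, 0, 0],
         [0, 255, 0, 0],
         [0,   0, 0, 0],
         [0,   0, 0, 0],
         [0,   0, 0, 0]]
```

Here Ci = 3 and m = 6, so Ci/m = 0.5 > 1/6 and the frame is flagged; an all-background frame is not.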
For an image sequence, the calculation formula of the Zernike moment is as follows. In the formula, images represents the total number of images in the entire sequence, and U(i, μ, γ) represents the introduced third dimension, in which x_i represents the center of gravity of the current image and x_(i-1) that of the previous image; Y and X carry the same meaning, and μ and γ are parameters set by the user. Differences between sequences may cause differences in the number of images; therefore, after calculating the 3D Zernike moment, it must be normalized with the following formula, in which A represents the number of pixels of the target (i.e., the average area) and images again represents the total number of images in the sequence. Applying these formulas to the image sequence of the human silhouette yields the corresponding 3D Zernike moment, which serves as the overall feature. Create codebook Representative samples in the sample space are combined to form a codebook; the samples in the codebook make it easy to distinguish one category from the others. The codebook in this paper is created by cluster analysis: similarity measures for the various categories are defined, the similarity measures between different categories are described and analyzed, and the codebook is then completed on the basis of the similarity measure of the 3D Zernike moments. Whether the directions of two vectors are similar is the basis for judging the similarity measure. Representative descriptors are selected as the standard descriptor matrices; the number of actions in the database determines how many descriptor matrices can be selected [7][8][9]. Classification of SVM After the codebook is created, any action selected from it is represented by a series of key gestures. 
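The paper's similarity criterion (whether the directions of two descriptor vectors are similar) is, in effect, cosine similarity. A minimal sketch of assigning a descriptor to its nearest codebook entry under this criterion follows; the two-dimensional codebook and function names are illustrative, not from the paper.

```python
import math

def cosine_similarity(u, v):
    """Direction similarity of two feature vectors: 1.0 for parallel
    vectors, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def assign_to_codebook(descriptor, codebook):
    """Index of the codebook entry (key posture) whose direction is
    most similar to the descriptor."""
    return max(range(len(codebook)),
               key=lambda i: cosine_similarity(descriptor, codebook[i]))

# Toy codebook with two key postures represented as unit vectors
codebook = [[1.0, 0.0], [0.0, 1.0]]
```

In this way a sequence of frame descriptors is mapped to a sequence of key-posture indices, which is the representation the SVM stage consumes.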
When a frame of the image corresponds to a certain descriptor, the descriptor indicates that it is very close to a certain key posture in the codebook; thus each action is composed of a series of key postures from the codebook, and SVM is used to carry out the classification. SVM (Support Vector Machine) is a two-class classification model: a generalized linear classifier that performs binary classification of data by supervised learning, extended to a multiclass classifier using the one-to-one (one-vs-one) method. Its representation is as follows: (1) suppose the training set is {(x_1, y_1), ..., (x_n, y_n)}, where each x_i is a feature vector and y_i its class label. Simulation test In order to verify the effectiveness of the method proposed in this article, two different types of video databases, the A-go-go dance videos and the KTH database, are used. The KTH database contains 6 different types of common actions, namely jogging, running, walking, waving, punching, and clapping, all of which are simple, basic actions. Each behavior is preprocessed frame by frame with the algorithm introduced in the previous chapter, features are extracted, and codebooks are built. The resulting descriptor matrices are then used as training samples for the SVM classifier. After simulation and comparison, this paper decides to use the card house kernel function in the Matlab software as the decision function, and the SVM classifier is used to classify the 6 different types of common actions, with the following results: Figure 2. The recognition rate of actions in the KTH database It is not difficult to see from Figure 2 that using the SVM classifier to recognize the actions in the different videos of the KTH database effectively improves the recognition rate of human actions. The highest recognition rate is for walking, at 92%, the lowest recognition rate is 85%, and the average recognition rate over all samples in the KTH database is 88.71%. 
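The one-to-one (one-vs-one) multiclass expansion mentioned above can be sketched as follows: train one binary classifier per pair of classes and predict by majority vote. The toy midpoint-threshold classifier stands in for a real binary SVM; all data and names are illustrative, not from the paper.

```python
from itertools import combinations

class MidpointBinary:
    """Toy binary classifier on 1-D features: threshold at the midpoint
    of the two class means. Stands in for any pairwise binary SVM."""
    def fit(self, xs, ys):
        a, b = sorted(set(ys))
        self.a, self.b = a, b
        mean = lambda c: sum(x for x, y in zip(xs, ys) if y == c) / ys.count(c)
        self.t = (mean(a) + mean(b)) / 2
        return self
    def predict(self, x):
        return self.a if x < self.t else self.b

def train_one_vs_one(xs, ys):
    # One binary classifier for each pair of classes
    classifiers = []
    for a, b in combinations(sorted(set(ys)), 2):
        pairs = [(x, y) for x, y in zip(xs, ys) if y in (a, b)]
        classifiers.append(MidpointBinary().fit([x for x, _ in pairs],
                                                [y for _, y in pairs]))
    return classifiers

def one_vs_one_predict(x, classifiers):
    # Majority vote over all pairwise classifiers
    votes = {}
    for clf in classifiers:
        label = clf.predict(x)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Three well-separated toy classes (labels 0, 1, 2)
xs = [0.0, 0.2, 5.0, 5.2, 10.0, 10.2]
ys = [0, 0, 1, 1, 2, 2]
clfs = train_one_vs_one(xs, ys)
```

For k classes this scheme trains k(k-1)/2 binary classifiers, which for the 6 KTH actions would mean 15 pairwise models.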
From these results, it can be seen that the recognition rate obtained by using the SVM classifier is higher than usual. In order to verify the classification and recognition effect of this method on dance videos, the SVM classifier is used to classify and recognize the dance movements in the videos of the A-go-go dance video database. Unlike the KTH database, the A-go-go dance video database contains 19 different basic dance moves; 5 dancers of different dance styles each performed the 19 moves 3 times, so each type of movement yields 15 different samples and the 19 moves give a total of 285 human motion samples. In order to make full use of the SVM classifier, 200 samples were extracted from the 285 and divided into 200 groups, and the SVM classifier obtained from the KTH database was trained on these 200 groups of samples. Finally, the SVM classifier is used to classify and recognize the dance movements in the A-go-go dance video database. The results are as follows: Figure 3. The recognition rate of dance action video data in the A-go-go database It can be seen from the figure above that using the enhanced SVM classifier to recognize the dance movements in the videos of the A-go-go dance video database effectively improves the recognition rate of dance movements. The lowest recognition rates are for actions 2 and 15, both at 86%; the highest is for action 3, at 95%; and the average recognition rate over the entire action sample is 90.4%. From these results, we can see that this recognition method achieves a high recognition rate for human actions in dance videos [10]. Conclusions This article analyzes recognition technology for video images, focusing on the method and effect of recognizing a dancer's body movements in dance videos. 
This article first extracts the images from the video and applies operations such as grayscale conversion, binarization and image segmentation to them. Then 3D Zernike moments are extracted from the binarized frames as overall features, and a codebook is built on the basis of the similarity measure of the 3D Zernike moments. The last step is to use the SVM classifier to classify and recognize, one by one, the dance videos in the A-go-go dance video database and the videos in the KTH database. From the results, it is not difficult to see that this method achieves a high classification and recognition rate for human actions.
The Role of Cytokines, Chemokines, and Growth Factors in the Pathogenesis of Pityriasis Rosea Introduction. Pityriasis rosea (PR) is an exanthematous disease related to human herpesvirus- (HHV-) 6/7 reactivation. The network of mediators involved in recruiting the infiltrating inflammatory cells has never been studied. Objective. To investigate the levels of serum cytokines, growth factors, and chemokines in PR patients and healthy controls in order to elucidate PR pathogenesis. Materials and Methods. Interleukin- (IL-) 1, IL-6, IL-17, interferon- (IFN-) γ, tumor necrosis factor- (TNF-) α, vascular endothelial growth factor (VEGF), granulocyte colony stimulating factor (G-CSF), and the chemokines CXCL8 (IL-8) and CXCL10 (IP-10) were measured simultaneously by a multiplex assay in sera of patients with early acute PR and in healthy controls. Subsequently, sera from PR patients were analysed at 3 different times (0, 15, and 30 days). Results and Discussion. Serum levels of IL-17, IFN-γ, VEGF, and IP-10 were found to be upregulated in PR patients compared to controls. IL-17 has a key role in host defense against pathogens, stimulating the release of proinflammatory cytokines/chemokines. IFN-γ has a direct antiviral activity, promoting NK cell and virus-specific T cell cytotoxicity. VEGF stimulates vasculogenesis and angiogenesis. IP-10 can induce chemotaxis, apoptosis, cell growth, and angiogenesis. Conclusions. Our findings suggest that these inflammatory mediators may modulate PR pathogenesis in a synergistic manner. Introduction Pityriasis rosea (PR) is an exanthematous disease associated with the systemic reactivation of human herpesvirus- (HHV-) 6 and/or 7 [1][2][3][4][5][6]. It usually begins with a single erythematous plaque (herald patch), followed in about 2 weeks by smaller lesions along the cleavage lines of the trunk (Christmas tree distribution). The duration may vary, but the eruption usually disappears gradually in 4 weeks. 
Up to 69% of the patients experience prodromal symptoms such as fever, headache, arthralgia, or malaise [5]. The constitutional symptoms, the frequent clustering of cases, and the almost complete absence of recurrent episodes support the infectious etiology of the disease [5]. The large positivity of HHV-6/7 in the general population, the low human-to-human transmission rate, the variable severity of the eruption, and its occurrence and recurrences in states of altered immunity all favor the hypothesis that PR is a clinical presentation of HHV-6 and/or HHV-7 reactivation [1][2][3][4][5][6]. Immunohistological studies showed the presence of T cells and Langerhans cells within the inflammatory dermal infiltrate of PR lesions, suggesting a role of cell-mediated immunity in the disease [7]. We previously demonstrated that fractalkine, the only member of the CX3C chemokine family, and interleukin- (IL-) 22, a cytokine expressed by Th17 cells, are increased in the sera of active PR patients [8,9]. The involvement of these chemokines, which promote antimicrobial defense and protect against damage, suggests an active immunological response in PR. However, knowledge of PR pathogenesis and its cytokine profile is still limited. To elucidate the mechanisms that underlie PR, the levels of serum inflammatory cytokines, growth factors, and chemokines were investigated in the same samples from PR patients and healthy controls and correlated with PR activity. Patients and Controls. The study included 24 patients (10 males and 14 females, mean age: 27.5 ± 9.5 years, age range: 7-49 years) with PR. The inclusion criteria were patients who sought care for a skin rash at our Dermatology Department between January and June 2014, who gave informed consent to take blood samples for laboratory investigations, and in whom a final diagnosis of PR was made. All patients had the classical clinical findings of PR and the diagnosis was made clinically. 
Twenty-four healthy blood donors, sex- and age-matched (11 males and 13 females, mean age: 29.0 ± 8.0 years, age range: 9-51 years), were enrolled as controls. Each subject gave written informed consent to the study. Exclusion criteria were (a) PR patients who had received cytostatic or immunosuppressive drugs and (b) PR patients with an atypical rash or with infectious diseases. The last sampling time point, (c), corresponds to the convalescent phase or remission (28 days from the beginning of the eruption), when clinical resolution is often obtained (time 2). Serum. Blood was collected in endotoxin-free silicone-coated tubes without additive. The blood samples were allowed to clot at room temperature for 30 minutes before centrifugation (1000 ×g, 15 minutes); the serum was removed and stored at −80°C until analysed. Statistical Analyses. Statistical analyses were performed using Prism 6 software. Continuous variables are presented as mean ± standard error of the mean (SEM), whereas categorical variables are presented as absolute and relative frequencies. Mann-Whitney's test was used to compare continuous data between the studied and control groups. Pearson's correlation coefficient was used in correlation analyses, and a 0.05 significance level was assumed in the statistical tests. Diverse Cytokine Profiles of PR Patients and Healthy Controls 3.1.1. Cross-Sectional Evaluation. To facilitate functional interpretation of the results, cytokines were sorted into three functional groups: a group denoted "cellular cytokines", which drive, albeit not exclusively, cytotoxic and antiviral responses (e.g., IL-1, IL-6, IL-17, TNF-α, and IFN-γ); a group denoted "growth and angiogenic factors" (e.g., VEGF and G-CSF); and a group denoted "chemokines" (IL-8 and IP-10). In each group, we compared serum cytokine levels in patients with PR and controls. Evaluation of Selected Serum Cytokines Levels in PR Patients: A Cross-Sectional Evaluation. 
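The Mann-Whitney test used above to compare patient and control groups reduces to ranking the pooled samples and computing a U statistic. A pure-Python sketch of that statistic follows (an illustration of the method, not the Prism implementation):

```python
def ranks(values):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """U statistic for comparing two independent samples, as used here
    to compare cytokine levels between patients and controls."""
    r = ranks(list(x) + list(y))
    r1 = sum(r[:len(x)])
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)
```

The smaller U is, the more completely one group's values lie above the other's; U = 0 means total separation, as with the fully separated toy samples below.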
Figure 1 summarizes the serum levels of cellular cytokines, chemokines, and growth and angiogenic factors of patients and controls at the onset of the disease. Among the "cellular cytokines," only IFN-γ (p < 0.0001) and IL-17 (p = 0.0008) were significantly increased in PR patients compared with the levels found in healthy individuals (Figure 1(a)). The comparison of the serum levels of growth factors revealed a significant increase of VEGF (p < 0.0001) in PR patients compared to healthy controls. Among the chemokines, a marked increase of CXCL10 (p < 0.0056) in PR patients was detected compared to controls. Evaluation of Selected Serum Cytokines Levels in PR Patients: A Longitudinal Analysis. No significant difference was found among cytokine, chemokine, and growth factor levels at the different time points, suggesting that there was no correlation between cytokine levels and disease activity in PR. Figure 2 shows the cellular cytokine levels and the chemokine and growth factor levels at the different time points in PR patients: no significant changes were detected, except for a decrease of VEGF from the onset until clinical resolution. Differences in Cytokine Levels in Onset PR Patients. Spearman's rank correlation was used for calculating the correlations between the different cytokines (evaluating all possible combinations of cytokine levels) in onset PR patients. We performed correlation analysis among all studied cytokines. We found an intrasubject correlation between IFN-γ and IL-17 (ρ = 0.49, p = 0.008), IFN-γ and VEGF (ρ = 0.61, p = 0.005), and IFN-γ and IP-10 (ρ = 0.54, p = 0.006). A positive correlation was also found between IL-17 and VEGF (ρ = 0.58, p = 0.009), IL-17 and IP-10 (ρ = 0.51, p = 0.003), and VEGF and IP-10 (ρ = 0.55, p = 0.004). Finally, we did not find any correlation between the other pairs of studied cytokines. 
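Spearman's rank correlation, used above for the pairwise cytokine comparisons, can be sketched with the classical d-squared formula. This sketch assumes no tied values (real cytokine data may have ties, which need average ranks) and is only an illustration of the statistic, not of the study's software.

```python
def spearman_rho(x, y):
    """Spearman's rank correlation via rho = 1 - 6*sum(d^2)/(n(n^2-1)).
    Assumes no tied values within either sample."""
    n = len(x)
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A value near +1, as for several cytokine pairs reported above, means the two mediators rise and fall together across subjects, regardless of the absolute scale of either.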
Discussion HHV-6 and HHV-7 are closely related viruses, members of the Roseolovirus genus of the human herpesviruses, widespread in the general population (seroprevalence in the healthy adult population is 80-90%) and commonly acquired in early childhood. Saliva, through which HHV-6 is chronically shed, is probably the usual mode of transmission. These viruses cause a primary infection that is usually asymptomatic, or may cause exanthema subitum or a febrile illness without any rash, rarely accompanied by convulsions, after which they establish a latent infection. HHV-6 persists in salivary glands and possibly also in terminal bronchi and neuroglial cells. HHV-7 persists in CD4 T lymphocytes and also in the epithelia of salivary glands, bronchi, and cells of the skin, liver, and kidney [5]. During all HHV infections, cell-mediated immunity is crucial for the control of viral infection and replication. The latency is maintained particularly by the infiltrating CD4 and CD8 T cells in the skin or mucosa and in the dorsal root ganglion, where replication takes place. Those lymphocytes are specific for the structural proteins of the virus, lie in apposition to neurons, and secrete IFN-γ. These viruses may reactivate, possibly due to a drop in cell-mediated surveillance on the occasion of other infections, exposure to endotoxins, or endocrine stimulation (including stress situations) [5]. The relationship between PR and HHV-6 and HHV-7 has been well established [1][2][3][4][5], but the elusive pathogenesis of the disease remains a subject of interest. We have shown previously that HHV-7 or HHV-6 is active during the early stage of PR, suggesting that they might play an etiological role in this disease. 
In addition, we have shown that the plasma load of HHV-6 and HHV-7, a direct marker of viral replication, is associated with the development of systemic symptoms as well as with a significant reduction of the humoral neutralizing response against HHV-7, further suggesting that PR may be due to the endogenous reactivation of HHV-7 or HHV-6 infection [6]. Moreover, within the dermal infiltrate of PR lesions, an increase in Langerhans cells and the presence of activated T cells (with an increased CD4+ versus CD8+ T ratio) have been described, confirming that the interaction between T-helper cells, Langerhans cells, and inflammatory dendritic dermal and epidermal cells (IDECs) may represent a pathogenetic mechanism in PR in a virus-triggered microenvironment [7]. In fact, CD4+ T cells, together with the salivary glands, are the sites where the HHV-6 and HHV-7 infection state is established [11,12]. Studies of the cytokine profile in PR are necessary to decipher the role of the activated T cells and to further characterize the T-helper (Th1 and Th2) paradigm in PR. Recent studies challenged the Th1/Th2 paradigm by discovering several T-helper cell subsets with specific differentiation programs and functions, including Th17 cells, regulatory T (Treg) cells, and follicular helper T (Tfh) cells. Th17 cells are characterized by the production of IL-17, IL-21, IL-22, and other cytokines. IL-21, besides stimulating the proliferation and differentiation of activated leukocytes, also acts by an autocrine mechanism on Th17 cells, stimulating in turn the production of IL-17 and the other cytokines [13]. Our study is the first on the cytokine network in PR. The key cytokines that discriminate between onset PR and healthy individuals were IFN-γ, IL-17, VEGF, and CXCL10. We speculate on the putative role of these key cytokines in PR pathogenesis. IL-17. The serum IL-17 levels were found to be significantly higher than in healthy controls (p < 0.001). 
An intrasubject positive correlation was also found between IL-17 and VEGF (ρ = 0.58, p = 0.009) and between IL-17 and IP-10 (ρ = 0.51, p = 0.003). Like other authors, we found a positive correlation between IL-17 and VEGF levels, confirming the involvement of IL-17 in angiogenesis [14][15][16]. To the best of our knowledge, a positive correlation between the other pairs of cytokines assessed in our study has never been described. IL-17 has a key role in host defense against certain pathogens through stimulating the release of antimicrobial peptides and proinflammatory cytokines and chemokines [13,17]. The significant increase of IL-17 serum levels in PR patients indirectly supports the involvement of HHV-6/HHV-7 in PR pathogenesis, as has been demonstrated for IL-22, another Th17 cytokine [9]. In fact, an antigenic trigger, such as a reactivated quiescent HHV-6/HHV-7, via Th17 cell stimulation and IL-21 production, could boost IL-22 and IL-17 secretion by an autocrine mechanism [9,18,19]. IL-17, by enhancing the production of proinflammatory and antimicrobial molecules, could start an inflammatory response that limits the spread of the HHV-6/HHV-7 reactivation. In fact, PR has a self-limited behavior, lasting about 4-8 weeks. IFN-γ. IFN-γ, produced mainly by T cells and NK cells, has a direct antiviral activity, promoting NK cell and virus-specific T cell cytotoxicity. In fact, it is considered an important molecule in antiviral host defense. Acting via its receptor, IFN-γ activates hundreds of genes, leading to proinflammatory effects by increasing antigen processing and presentation, and to anti-inflammatory effects due to its apoptotic and antiproliferative functions. In our study, we demonstrated an intrasubject positive correlation between IFN-γ and IL-17 (ρ = 0.49, p = 0.008), IFN-γ and VEGF (ρ = 0.61, p = 0.005), and IFN-γ and IP-10 (ρ = 0.54, p = 0.006). Usually, acute viral infections induce an increase in the serum levels of IFN-γ [20]. 
In fact, this cytokine has been demonstrated to be the most sensitive marker of the CD4+ T-cell response to acute HHV-6 infection [21]. In our study, we found IFN-γ plasma levels to be significantly higher in PR patients compared to controls, especially in the samples collected 15 days after diagnosis, which usually corresponds to the peak of the clinical manifestations (time 1). Conversely, one important study found a significant decrease in the serum level of IFN-γ in patients with PR compared to healthy controls and postulated that such a decrease is linked to a decreased number or impaired function of peripheral CD4+ T cells in PR patients [22]. Unfortunately, that study did not specify in which phase of the eruption the blood samples were collected. We can hypothesize that it was done during a very early stage of the disease, when a transiently weakened Th1 response to the latent HHV-6/HHV-7 infection with a temporarily lower expression of IFN-γ may occur. VEGF. Among the 5 VEGFs, VEGF-A is the most potent vascular permeability agent. It is produced by various cell types (including endothelial cells) and stimulates vasculogenesis and angiogenesis upon binding to its tyrosine kinase receptors (VEGFRs) [23]. In herpetic stromal keratitis, it has recently been demonstrated that HSV-1-infected corneal epithelial cells are the primary source of VEGF-A during acute ocular infection [24]. Moreover, in many inflammatory skin diseases associated with vascular hyperpermeability (atopic dermatitis, psoriasis, and dermatitis herpetiformis), an overproduction of VEGF in lesional keratinocytes has been demonstrated. Although the mechanism is still unclear, it is possible that keratinocytes release greater amounts of VEGF, which in turn could contribute to the subsequent increase in plasma concentration. 
Analogous cell types may be responsible for the VEGF increase in PR patients: not only lesional keratinocytes but also HHV-6/HHV-7-infected peripheral blood mononuclear cells (PBMCs) may be able to synthesize and release VEGF during all phases of the exanthem. Whether circulating VEGF contributes directly or indirectly to PR pathogenesis, or is merely a secondary phenomenon, remains to be determined. In our study, we demonstrated an intrasubject positive correlation between VEGF and IP-10 (ρ = 0.55, p = 0.004). Notably, viral pattern recognition receptors almost universally activate IFN pathways [20], and IFN, especially type I (IFN-α and IFN-β), inhibits expression of VEGF-A [25]. In fact, in our PR patients, the blood interferon increase at time 1 (usually the peak of the clinical manifestations) corresponds to a VEGF decrease at the same time point. CXCL10. CXCL10 plays a major role in the control of viral replication, as it is highly upregulated by type I and type II IFN and strongly promotes the chemotaxis of NK, CD4+, and CD8+ T cells [26][27][28]. Our study suggests that CXCL10 is upregulated by IFN-γ, since an intrasubject positive correlation between IFN-γ and IP-10 (ρ = 0.54, p = 0.006) was found and both mediators were significantly increased in PR patients compared to controls. CXCL10 levels rose mainly during the final phase of the disease (time 2), whereas IFN-γ has its major increment during the "intermediate" phase of PR, corresponding to the peak of the clinical manifestation of the exanthem. Therefore, it may be hypothesized that IFN-γ upregulates CXCL10 with a certain latency. Conclusions In conclusion, our study investigated the cytokine and chemokine network in PR, providing evidence that circulating IL-17, IFN-γ, VEGF, and CXCL10 are increased in PR patients. The present results underscore the active immunological response in PR and may contribute to a better definition of the skin defense network. 
Incidentally, the cytokine pattern could support a virus-induced disease process in PR pathogenesis.
A new method for the prediction of diffusion coefficients in poly(ethylene terephthalate)—Validation data Prediction of migration is a useful tool in the compliance evaluation of food contact materials. In our previous work, such a prediction model was established for polyethylene terephthalate (PET), which is widely used as a packaging material for beverages as well as for meat and cheese. In the present study, 263 diffusion coefficients in PET for 66 substances at temperatures between 40°C and 120°C were determined from permeation kinetic experiments. The diffusion coefficients DP were compared with the predicted values by use of a log/log plot as well as by direct comparison of the diffusion coefficients. When applying the migration prediction model for compliance evaluation of food packaging materials, it is mandatory that the prediction is over-estimative in any case. The predicted values are in good agreement with the experimental results: the model slightly over-estimates the real migration, by a factor of 1.3 on average. Reducing the molecular volume by 20% results in an average over-estimation of the migration by a factor of 3 (worst case). Another finding of this study is that the diffusion behaviour does not change significantly at the glass transition temperature; the prediction model is applicable both below and above it. K E Y W O R D S diffusion coefficients, diffusion modelling, permeation kinetics, polyethylene terephthalate | INTRODUCTION The prediction of migration is a useful tool in the compliance evaluation of food contact materials, especially for packaging polymers with low diffusion. For low-diffusive polymers, the migration process is very time consuming and results in low concentrations in food, which challenges the analytical approaches. Increasing the temperature is one approach to increase the speed of the mass transfer. However, higher temperatures like 60°C are far away from realistic storage conditions, which in most cases means room temperature. One of these low-diffusive polymers for which migration prediction is important in food law compliance evaluation is polyethylene terephthalate (PET). PET is mostly used for beverage packaging, but also for trays for meat and cheese. 
The most important factors influencing the mass transfer from the packaging material into food are the concentration of the migrant in the polymer, the storage time, the storage temperature, the diffusion coefficient D P , and the partition coefficient K P/F . Other factors are the surface-to-volume ratio and the film thickness of the packaging material. Most of the abovementioned factors are known or analytically available. Only the diffusion coefficients D P and the partition coefficients K P/F are rarely available from the scientific literature. However, for low-diffusive polymers like PET, the partition coefficients are negligible because the equilibrium between the polymer and the food will not be reached under normal storage conditions of packed foods. Therefore, the diffusion coefficient D P is the most important factor in the prediction of the migration for low-diffusive polymers like PET. Predictive models for the migration from food packaging materials have been developed within the last 25 years. 1-5 A comprehensive review on the different approaches is given in the scientific literature. 6 These prediction models should be over-estimative to show food regulatory compliance with a safety factor. 7 Regulation 10/2011 stated that "To screen for specific migration the migration potential can be calculated on the residual content of the substance in the material or article applying generally recognized diffusion models based on scientific evidence that are constructed in a way that must never underestimate real migration levels." 8 However, the prediction should not be too over-estimative either; otherwise, compliance with current food law cannot be shown in some cases. Therefore, realistic but still over-estimative diffusion coefficients D P should be available or should be predicted. In our previous work, a prediction model for the diffusion coefficients in PET was developed.
9 This prediction model is based on the molecular volume V of the substances and the temperature in Kelvin (Equation 1). The prediction model was derived from two correlations: (i) a correlation between the activation energy of diffusion E A and the molecular volume V of the substance and (ii) a correlation between the activation energy E A and the pre-exponential factor D 0 of the Arrhenius equation. 9 The activation energy of diffusion E A and the pre-exponential factor D 0 were experimentally determined mainly from desorption kinetics of spiked PET sheets into the gas phase. 10 The parameters a and b are the slope and the intercept of the correlation between the activation energy E A and the pre-exponential factor D 0 . The parameters c and d are the intercept and the slope of the correlation between the activation energy E A and the molecular volume V. The parameters a to d for the prediction of diffusion coefficients D P in PET are given in Table 1. Because PET is a low-diffusive polymer, high temperatures had to be applied for the desorption kinetics in our previous study. Therefore, most of the activation energies of diffusion E A were derived from diffusion coefficients D P above the glass transition temperature T g. 10 The activation energies of an alternative prediction model were also determined mainly above T g. 3 On the other hand, the low diffusion of PET is also the reason that, at temperatures below the glass transition temperature, only the diffusion coefficients for small molecules are available. 11 | RESULTS Within this study, a permeation method was applied to determine the diffusion coefficients D P for several substances in PET. From the experimental permeation data, the diffusion coefficients D P were derived from the so-called lag time according to Equation 2, where l is the thickness of the PET film.
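The lag-time relation referenced as Equation 2 is not reproduced in this extract; the standard form for permeation through a film of thickness l is t_lag = l²/(6 D_P), so D_P = l²/(6 t_lag). A minimal sketch of this arithmetic follows (the film thickness of 11.9 μm is taken from the experimental section; the lag-time value is purely illustrative, not a measured value from the study):

```python
def diffusion_coefficient_from_lag_time(thickness_m: float, lag_time_s: float) -> float:
    """Derive the diffusion coefficient D_P (m^2/s) from the permeation
    lag time via the standard relation D_P = l^2 / (6 * t_lag)."""
    return thickness_m ** 2 / (6.0 * lag_time_s)

# Film thickness from the experimental section: 11.9 um biaxially oriented PET.
l = 11.9e-6  # m
# Illustrative lag time only (not a value reported in the study):
t_lag = 3600.0  # s
D_P = diffusion_coefficient_from_lag_time(l, t_lag)
print(f"D_P = {D_P:.3e} m^2/s")
```

The relation shows why thin films are preferred experimentally: halving l shortens the lag time by a factor of four for the same D_P.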
This method was applied in previous studies for the determination of diffusion coefficients in PET, 14 polyamide PA6, 15 polyethylene naphthalate PEN, 16 and general purpose polystyrene GPPS. 17 However, in these studies, the permeants were limited to homologous rows of n-alkanes and 1-alcohols. In the present study, we also tested substances other than n-alkanes and 1-alcohols for their permeation through a thin PET film and derived diffusion coefficients for a broad range of substances and functional groups. The permeants used within this study are summarized in Table 2. The substances cover various molecular volumes V, functional groups, and therefore also polarities. Overall, 66 substances were tested at 14 temperatures between 40°C and 120°C. The experimental diffusion coefficients were compared with the values predicted from Equation 1 using the parameters given in Table 1 (parameters for the prediction of diffusion coefficients in PET from Equation 1 9 ). This comparison is visualized in Figure 1 for each of the tested temperatures. We tried to expand the molecular weight range of the permeants as much as possible; however, the boundaries are very narrow, so diffusion coefficients below T g are only available for low molecular weight substances. It is important to note that all diffusion coefficients D P were derived from permeation kinetics into the gas phase. Interactions between the PET polymer and food (simulants), such as swelling, are therefore excluded. The diffusion coefficients derived from this method can be considered as the pure diffusion coefficients in PET. Swelling effects of food simulants on the PET surface are well known, especially at high temperatures such as 60°C, which is mostly used for PET beverage bottle testing.
12,18,19 These swelling effects significantly increase the migration into the simulants and lead in some cases to migration levels that exceed the specific migration limits. On the other hand, real foods do not significantly swell PET, which leads to much lower migration levels. Diffusion modelling is therefore more suitable for realistically predicting the migration into food at the end of shelf life than experimental migration tests using high-ethanolic food simulants. 19 It is important to note that moisture, too, can swell the PET polymer, especially at high temperatures. In order to reduce these swelling effects, the diffusion coefficients in this study were determined at virtually zero levels. Therefore, the pure diffusion coefficients in the PET polymer were determined without any swelling effects. Regarding real applications, foods in contact with PET include high-moisture conditions, but only at low temperatures. At low temperatures, swelling is negligible for moisture but significant for high-ethanolic simulants. 12,18,19 As mentioned above, most of the diffusion coefficients were determined above the glass transition temperature T g of 81°C. At the glass transition temperature T g , the diffusion behaviour might change. In order to investigate this effect, the diffusion coefficients derived from this study were compared with previous studies. For four permeants, diffusion coefficients were available in the scientific literature from desorption experiments at temperatures between 120°C and 180°C 20 and from migration kinetics into mineral water and 10% ethanol at temperatures between 23°C and 50°C. 12,13 The permeation experiments of this study lie between these temperature intervals. The results are visualized in Figure 3 as the correlation between the reciprocal temperature (in Kelvin) and the logarithm of the diffusion coefficient D P (Arrhenius plot).
The results for these four substances show that at the glass transition temperature T g no significant change in the diffusion behaviour of PET was detectable. In addition, diffusion coefficients obtained from three independent methods for the determination of diffusion coefficients (desorption kinetics, 20 migration kinetics, 12,13 and permeation kinetics [this study]) are in good agreement with each other. | PET film sample and model chemicals For the permeation tests, a commercial biaxially oriented PET film was used. The thickness was determined to be 11.9 ± 0.1 μm. The glass transition temperature of the investigated PET film was determined by differential scanning calorimetry (DSC) to be 81°C. The melting range was also determined by DSC to be 243-260°C (peak at 255°C). [Figure 2: Comparison between the predicted diffusion coefficients and the experimental diffusion coefficients (log/log plot). Dotted lines: 95% confidence interval; solid dots: diffusion coefficients below T g ; open dots: diffusion coefficients above T g .] The permeation was tested with 66 different substances (Table 2). The substances were purchased with a purity of 99% and used without further purification. The concentrations applied in the lower space of the permeation cell are given in the supporting information together with the corresponding diffusion coefficients. | Molecular volume The molecular volume V of the molecules was calculated with the free internet program Molinspiration. 21 | CONCLUSIONS The modelling parameters derived from our previous study 9 were validated with 263 diffusion coefficients from 66 different organic substances with various functional groups, volatility, and polarity. [Figure 3: Correlation between the diffusion coefficients and reciprocal temperature (Arrhenius plots) for (a) benzene, (b) toluene, (c) chlorobenzene, and (d) tetrahydrofuran; desorption data from Ewender and Welle, 20 migration data into water and 10% ethanol from Franz and Welle 12 and Welle and Franz, 13 and permeation data from this study.] The parameters given in Table 1 in combination with Equation 1 predict the diffusion coefficients D P of organic substances in PET with a slight average over-estimation factor of 2.6. When using the molecular volume V of the substances minus 20%, the modelling parameters are over-estimative in every case, which is mandatory for compliance evaluation of food packaging materials according to European Regulation 10/2011. This worst-case prediction results in an average over-estimation factor of 9.2 in the diffusion coefficients. Increasing the diffusion coefficient D P by one order of magnitude (a factor of 10) results in an increase of the migration by a factor of 3.16 (the square root of 10). An average over-estimation factor of 9.2 therefore results in a factor of 3.0 higher migration, which seems suitable for food law compliance evaluation purposes. The average over-estimation without virtual volume reduction is 1.6, which results in an over-estimation of the migration of only 1.3. The parameters in Table 1 therefore realistically describe the migration from PET. Such a realistic migration prediction is useful for the evaluation of non-intentionally added substances [22][23][24] as well as in the evaluation of post-consumer PET recyclates in direct food contact applications. 25 Another important finding of this study is that the diffusion behaviour does not change significantly at the glass transition temperature. Even though most of the diffusion coefficients were derived above the glass transition temperature, the diffusion coefficients D P can also be predicted below the glass transition temperature.
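The square-root relationship invoked above (short-term Fickian migration scales with the square root of D P , so an over-estimation of the diffusion coefficient by a factor f inflates the predicted migration by √f) can be checked numerically. This is a sketch of the arithmetic only, not of the authors' full migration model:

```python
import math

def migration_overestimation(d_factor: float) -> float:
    """Short-term Fickian migration scales with sqrt(D_P), so an
    over-estimation of D_P by `d_factor` inflates the predicted
    migration by sqrt(d_factor)."""
    return math.sqrt(d_factor)

print(round(migration_overestimation(10.0), 2))  # factor 10 in D_P -> 3.16 in migration
print(round(migration_overestimation(9.2), 1))   # worst-case factor 9.2 -> 3.0
print(round(migration_overestimation(1.6), 1))   # realistic factor 1.6 -> 1.3
```

The three outputs reproduce the 3.16, 3.0, and 1.3 factors quoted in the conclusions.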
On the other hand, assuming that there is a slight change, the diffusion will probably be higher above the glass transition temperature than below it. Therefore, the modelling parameters given in Table 1 can be considered as a worst case for temperatures below the glass transition temperature T g of PET.
3,292.8
2022-01-23T00:00:00.000
[ "Materials Science" ]
Experimental study of shear rate dependence in perpetually sheared granular matter. We study the shear behaviour of various granular materials by conducting novel perpetual simple shear experiments over four orders of magnitude of relatively low shear rates. The newly developed experimental apparatus employed is called the "3D Stadium Shear Device", an extended version of the 2D Stadium Shear Device [1]. This device is able to provide a non-radially dependent perpetual shear flow and a nearly linear velocity profile between two oppositely moving shear walls. Using this device, we are able to test a large variety of granular materials. Here, we demonstrate the applicability of the device on glass beads (diameter 1 mm, 3 mm, and 14 mm) and rice. We particularly focus on studying these materials at very low inertial number I, ranging from 10⁻⁶ to 10⁻². We find that, within this range of I, the friction coefficient μ of glass beads has no shear rate dependence. A particularly appealing observation comes from testing rice, where the attainment of the critical state develops over a much longer duration than in other materials. Initially during shear we find a value of μ similar to that found for glass beads, but with time this value decreases gradually towards the asymptotic critical state value. The reason, we believe, lies in the fact that rice grains are strongly elongated; hence the time to achieve the stable μ is primarily controlled by the time for particles to align themselves with respect to the shear walls. Furthermore, the initial packing conditions of the samples also play a role in the evolution of μ when the shear strain is small, but that impact is eventually erased after sufficient shear strain. Introduction The mechanics of granular flow have been widely studied experimentally.
Three flow regimes of granular materials are the quasi-static, inertial and collisional regimes [2], which are classified using the dimensionless inertial number I [3]. It is defined as the ratio of the confinement timescale and the typical time of deformation [4,5]: I = γ̇ d √(ρ/σ yy ), where γ̇ is the shear rate, σ yy is the normal stress, and d and ρ are the diameter and density of the grain, respectively. This number is treated as the index of inertial effects in granular flows at various shear rates. Collisional rapid flows have been described by kinetic theories, which provide a set of constitutive equations connecting the mean density, the mean velocity, and the granular temperature to the fluctuational energies induced by binary collisions between particles [6]. The behaviour of dense confined granular assemblies under extremely low shear rates is usually captured by an elasto-plastic, rate-independent constitutive law that characterises the critical state by the internal friction angle φ and the critical solid fraction ν c [7]. However, a comprehensive constitutive law is still lacking to fully capture the characteristics of granular flow in the intermediate inertial regime. Many studies [4,5,8,9] have indicated a positive shear rate dependence of the friction coefficient μ, defined as shear stress over normal stress. In the study of da Cruz et al. [5], friction and dilatancy laws were proposed by conducting DEM simulations of a 2D disc particle assembly in a plane shear configuration. It stated that the solid fraction ν decreases linearly, but the friction coefficient μ increases linearly, with increasing I over 10⁻⁴ to 10⁻¹. However, Kuwano et al. [10] carried out annular shear cell testing with 270 μm diameter glass beads, and they reported a crossover from negative to positive shear rate dependence of the friction coefficient over the range of inertial number I from 10⁻⁶ to 1.
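As a quick numerical illustration of the inertial number defined above, the sketch below uses values representative of the 3 mm glass-bead tests in this study; the normal stress of 4 kPa is an assumed value of the right order (matching the critical-state normal stress reported later for rice), not a reported glass-bead measurement:

```python
import math

def inertial_number(shear_rate: float, d: float, rho: float, sigma_yy: float) -> float:
    """I = gamma_dot * d * sqrt(rho / sigma_yy): the ratio of the
    confinement timescale to the macroscopic deformation timescale."""
    return shear_rate * d * math.sqrt(rho / sigma_yy)

# 3 mm glass beads (rho ~ 2400 kg/m^3) at the lowest belt shear rate;
# sigma_yy = 4 kPa is an assumption for illustration.
I = inertial_number(shear_rate=0.00645, d=3e-3, rho=2400.0, sigma_yy=4000.0)
print(f"I = {I:.1e}")  # well inside the quasi-static regime (I << 1e-2)
```

This confirms that even the device's lowest shear rate places millimetre-scale beads deep in the quasi-static regime probed by the experiments.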
In this study, we employed a novel experimental apparatus called the "Stadium Shear Device" (SSD) that enables testing granular materials under perpetual shear conditions, here for the first time in three dimensions (i.e. the 3D SSD). Unlike the Couette geometry, which can only produce a radially dependent stress field, our new device is able to produce a non-radially dependent stress field in the central region [1]. We conducted perpetual shear tests on a large variety of materials, and present results examining the shear rate dependency of the friction coefficient μ and the evolution of μ with shear strain. Introducing the 3D SSD A schematic diagram of the 3D Stadium Shear Device is shown in Fig. 1(a). A belt customised from the belt designation Bando HTS 966-14M is centrally placed in a confining case (540 mm length × 250 mm width × 190 mm height). The belt has the same height as the confining case, and can be driven at shear rates in the range of 0.00645 s⁻¹ ∼ 0.645 s⁻¹ by the power supplied by a motor and gear assembly installed on top of the driven sprocket. The granular material filled inside the belt is horizontally confined by two side plates of 175 mm length × 100 mm height, and vertically confined by top and base plates as indicated in Fig. 1(a). Note that the sample has to be filled to a height above the side plates to accurately measure the normal stress using the two load cells (Mark-10 MR01-100#, one at the front and one at the back) attached to the side plates. Shear force is recorded by a torque transducer (Kistler 4520A050, range: 50 N·m) coaxially installed under the motor. There are 6 rows of 7 roller bearings half-embedded in the surface of the side plates to allow the belt to run with minimal resistance. The friction between the running belt and the top plate is minimised using a brush-like draught excluder glued along the side of the top plate.
In addition, the belt sits inside a U-shaped groove on the base plate with a Teflon sheet inserted to reduce the friction. The shear force is obtained from the applied torque via the sprocket and belt geometry, where S is the shear force, T the torque applied by the motor and gear assembly, r the pitch radius of the sprocket, h the sample height and p = 966 mm the circumference of the belt, respectively. The perpetual shear strain condition of the granular assembly within the belt is schematically illustrated in Fig. 1(c). Thick red arrows represent the global shear direction and grey arrows indicate the localised shear directions within the sample. Due to the particular "stadium" shape of the belt, this device is able to perpetually shear particles and produce a continuous plane shear condition in the central region while re-circulating particles at either end without particle congestion. Although not shown here for brevity, we used DEM, FEM and radiography studies to extract velocity profiles quite similar to the schematic one shown in Fig. 1(c). Calibration procedure A first calibration procedure was carried out to account for the forces induced by bending of the belt and friction along the interfaces between the metal plates and the belt. Before any experiment, the top plate was first kept at a height similar to the one it would have during the experiments (without the material to be tested), and the readings from both the load cells and the torque transducer were synchronously recorded while the belt was running at the velocity to be used in the subsequent experiment. To avoid any possible hysteresis effects from the measuring instruments, the same calibration procedure was repeated after every experiment. Since the calibration was made without the material, the forces measured were purely due to the mechanical friction mentioned above.
The average force from the two calibrations (before and after the experiment) was subtracted from the corresponding shear and normal stresses during the experiments. A second calibration procedure was conducted to examine the accuracy of the two load cells symmetrically attached to the adapter plates, which are connected to the side plates via bars on each side; this arrangement might lead to inaccurate readings of the normal force generated by shearing the sample. The calibration first involved tilting the device over into a horizontal position, and then evenly placing a bag of lead shot of known weight over the side of the belt. This procedure was repeated many times with different weights. By comparing the readings of the load cells with the actual weights, a factor of 1.71 with an error bar of 6% was adopted for the measured normal forces. Experimental procedure In the present study, we tested glass beads with three different diameters (1 mm, 3 mm and 14 mm), for which the grain density is roughly 2400 ± 100 kg/m³ with less than 10% polydispersity. The mean aspect ratio (ratio between the smallest diameter and the largest diameter orthogonal to it) is 0.926. The other material tested was rice with a mean equivalent diameter of 3.8 mm and a density of 1300 kg/m³. The mean aspect ratio of the rice, measured by macrophotographic imaging, was 0.304, and the mean circularity, defined as a function of perimeter P and area A as 4πA/P², was found to be 0.542. A set of tests with glass beads was conducted to investigate the relation between μ and the inertial number I over the range of 10⁻⁶ ∼ 10⁻². The granular system was sheared to the same shear strain under different shear rates, and the normal and shear stresses were measured only after the system had achieved the critical state. Further discussion of the critical state is given in the results and discussions section.
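The circularity measure used above, 4πA/P², equals 1 for a perfect circle and decreases for elongated shapes such as rice grains. A minimal sketch follows; the ellipse perimeter uses Ramanujan's approximation, which is my assumption for illustration, not the authors' image-analysis pipeline:

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Circularity 4*pi*A / P^2: 1.0 for a circle, < 1 for elongated shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

# Reference case: a circle of radius 1.
r = 1.0
print(round(circularity(math.pi * r**2, 2.0 * math.pi * r), 3))  # ~1.0

# An elongated ellipse (semi-axes a = 3, b = 1), perimeter via
# Ramanujan's approximation -- qualitatively similar to a rice grain.
a, b = 3.0, 1.0
h = ((a - b) / (a + b)) ** 2
perimeter = math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))
print(round(circularity(math.pi * a * b, perimeter), 3))  # well below 1
```

The ellipse value landing well below 1 mirrors the measured rice circularity of 0.542.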
We first measured the shear and normal stresses at the lowest shear rate, and incrementally increased the shear rate to the highest possible. High reproducibility was indicated by repeating the same test from the highest shear rate to the lowest. The shear rate in our experiment is defined as the velocity of the belt V divided by the width between the two shearing walls, w = 124 mm. Furthermore, the evolution of μ with nominal shear strain γ = x/w (x being the distance travelled by the belt) was analysed for different particle shapes. In this test, the samples (3 mm glass beads and rice) were sheared to a shear strain of 240 at a constant shear rate with the vertical stress σ zz fixed. In addition, since the mechanical behaviour of a granular assembly is strongly dependent on its initial packing conditions, we compared the stress response and the critical states of densely and loosely packed rice after more than 15000 shear strain. The loose sample was prepared by gently pouring a small amount of material into the device from minimal height until the target depth was reached, while the dense one was prepared by compacting the sample layer by layer by hand. Results and discussions In Fig. 2, the friction coefficient μ of glass beads at steady state is shown as a function of the inertial number I over the range of 10⁻⁶ ∼ 10⁻². The low values I < 10⁻² correspond to granular flows in the quasi-static regime. The symbols and error bars are the means and standard deviations of the stress readings over 10 belt turns, for which the accumulated shear strain is γ = 79.2. By comparing the results from the 14 mm glass beads with the smaller glass beads, one can see that the error bars are relatively large due to the smaller w/d ratio (w/d equal to 8.7, 40.7 and 122 for 14 mm, 3 mm and 1 mm glass beads, respectively). The value of μ fluctuates slightly between 0.32 ∼ 0.38 and shows no dependence on the shear rate. This shear rate independence in the quasi-static regime is in agreement with previous studies.
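The kinematic definitions above (shear rate V/w and nominal strain x/w, with w = 124 mm) are simple enough to sketch directly; the belt velocity below is back-calculated from the reported shear-rate range and is therefore an inferred, not a reported, quantity:

```python
# Gap between the two shear walls (from the device description):
W = 0.124  # m

def shear_rate(belt_velocity_m_s: float) -> float:
    """Shear rate gamma_dot = V / w: belt velocity over the wall gap."""
    return belt_velocity_m_s / W

def nominal_shear_strain(belt_travel_m: float) -> float:
    """Nominal strain gamma = x / w, x being the distance travelled by the belt."""
    return belt_travel_m / W

# Belt velocity implied by the lowest reported shear rate (inferred):
v_min = 0.00645 * W
print(f"{shear_rate(v_min):.5f} 1/s")
# Belt travel needed to reach the nominal strain of 240 used in the shape tests:
print(f"{240 * W:.2f} m of belt travel")
```

The second print shows that a strain of 240 corresponds to roughly 30 m of belt travel, which is why the perpetual (re-circulating) geometry is essential.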
For example, da Cruz et al. [11] and Iordanoff and Khonsari [12] identified shear rate independence in the quasi-static regime from discrete numerical simulations of 2D discs in a plane shear configuration, and the same characteristic was captured in [13] for polystyrene beads and glass beads in a Couette geometry. In particular, the value of μ from our experiments is very close to μ s , an important parameter in the μ(I) rheology relationship [14]. This model assumes that the friction coefficient tends towards a minimum value μ s at very low I, and μ s is found to be equal to 0.38 for glass beads [15]. In [9], a weakly negative dependence of the friction coefficient μ was found when I < 10⁻³, and μ started increasing noticeably when I > 10⁻². However, no crossover from negative to positive shear rate dependence within this range of I could be observed in our experiments. At this stage, we cannot obtain data at I larger than 10⁻² for glass beads because the velocity after the solid-liquid transition is beyond the range covered by our device. In the future, further improvements will be implemented to increase the velocity range of the device. To study the impact of particle shape on the friction coefficient μ, we compare μ as a function of shear strain for the tests with 3 mm glass beads and rice. As shown in Fig. 3, the rice exhibits a softening of the friction coefficient with increasing shear strain, in contrast to the steady flow observed for glass beads. The value of μ decreases to two thirds of its initial value after shearing the rice to a shear strain of 240. The same amount of friction coefficient reduction can be found in [16]. We believe that, compared to spherical grains, elongated particles need a much longer shear duration to align themselves with the streamlines, which increases the time needed to achieve the critical state. Further investigation into the alignment time and the orientation of the particles is beyond the scope of the current paper.
The mechanical behaviour of a granular system is highly dependent on its initial preparation, but is expected to eventually reach a critical state that is independent of the initial packing conditions after sufficient shear strain [17]. In Fig. 4, we show the shear and normal stresses as a function of shear strain γ for both the initially dense (e = 0.54) and loose (e = 0.69) samples of rice. The samples were sheared at a shear rate of 0.38 s⁻¹ to a large deformation (15000 strain) under a constant vertical stress. The stress of the loose sample first reaches a peak at a shear strain of 600 before declining to a plateau at around 4000 shear strain, while the stress of the dense sample decreases from a higher initial value than the loose sample and may require more than 12000 shear strain to reach its critical state. Both systems converge to identical values of 4 kPa for the normal stress and 1 kPa for the shear stress at the critical state. We also plot in Fig. 5 the friction coefficient μ as a function of shear strain, and the same characteristics are observed: the value of μ converges to 0.25 after 16000 shear strain for the dense system, but only 4000 shear strain is required to reach the critical state for the loose one. Radjaï and Roux [7] also studied the critical state of initially dense and loose packed samples by conducting plane shear simulations with an assembly of rigid discs in two dimensions. Similar results were found in [7], except that the peak in the stress ratio was only observed in the densely packed system. Conclusion After conducting robust experiments with the 3D Stadium Shear Device, we found that the friction coefficient μ of glass beads does not depend on the shear rate over four orders of magnitude at relatively low inertial numbers. The particle shape plays a role in the evolution of μ as a function of the shear strain.
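The critical-state friction coefficient quoted above is consistent with the reported stresses, since μ is simply shear stress over normal stress; this can be checked directly:

```python
def friction_coefficient(shear_stress_pa: float, normal_stress_pa: float) -> float:
    """Bulk friction coefficient mu = tau / sigma."""
    return shear_stress_pa / normal_stress_pa

# Critical-state values reported for rice: tau = 1 kPa, sigma = 4 kPa.
mu = friction_coefficient(1000.0, 4000.0)
print(mu)  # 0.25, matching the converged value in Fig. 5
```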
In particular, the μ of a granular assembly comprising elongated particles declines gradually to a plateau and reaches the critical state only after very large deformation. By contrast, the μ of spherical particles is fairly constant with increasing shear strain. The evolution of the stress and μ at the beginning of shear is strongly influenced by the initial packing state, but they eventually collapse to the same values. Further research into the initial packing condition and its impact on the stress responses and dilation curves will be an interesting topic to study. It would also be worthwhile to further examine (by numerical simulations) the properties of the device, for example, the shear strain condition and the stresses generated in the non-affine and rotational flows under the circular cap.
3,784.2
2017-06-01T00:00:00.000
[ "Physics", "Materials Science", "Engineering" ]
Spatial non-cyclic geometric phase in neutron interferometry We present a split-beam neutron interferometric experiment to test the non-cyclic geometric phase tied to the spatial evolution of the system: the subjacent two-dimensional Hilbert space is spanned by the two possible paths in the interferometer and the evolution of the state is controlled by phase shifters and absorbers. A related experiment was reported previously by some of the authors [Hasegawa et al., PRA 53, 2486 (1996)] to verify the cyclic spatial geometric phase. The interpretation of this experiment, namely to ascribe a geometric phase to this particular state evolution, has met severe criticism [Wagh, PRA 59, 1715 (1999)]. The extension to non-cyclic evolution manifests the correctness of the interpretation of the previous experiment by means of an explicit calculation of the non-cyclic geometric phase in terms of paths on the Bloch-sphere. The theoretical treatment comprises the cyclic geometric phase as a special case, which is confirmed by experiment.
Another definition of a mixed state geometric phase is due to Sjöqvist et al. [10], using an interferometric approach. For these concepts one has to keep in mind that there exist points in parameter space for which the mixed state geometric phases remain undefined, provoking an extension to off-diagonal mixed state geometric phases [11]. The geometric phase is associated with an evolution of a system governed by a Hamiltonian, e. g., a neutron in a magnetic field, where the geometric phase arises from the spinor evolution due to the coupling with the magnetic field. Here, we observe a geometric phase as an effect of the change in the spatial degrees of freedom in an interferometry setup. A proposal to verify the spatial geometric phase is due to Sjöqvist [12], using polarized neutrons by reversing the roles of the magnetic field and the spatial degrees of freedom. Moreover, an experiment using unpolarized neutrons has been performed by Hasegawa et al. [13,14] to test the cyclic spatial geometric phase by inducing a relative phase shift of 2π between the interfering neutron beams in a perfect silicon single-crystal interferometer. The geometric interpretation of this experiment has been dismissed by Wagh [15], demanding further investigations, namely in the non-cyclic case, which is the purpose of the current article. Geometric Phases Let us briefly review the basic concepts of geometric phases: A geometric phase is a quantity which is deeply connected to the curvature of some underlying (state- or parameter-) space. A two-dimensional plane in three-dimensional real space does not have an intrinsic curvature, but when considering a sphere embedded in euclidean real space, we have to take the curvature of this manifold into account. In geometry this curvature is reflected, for example, in the angle difference of a vector transported around a loop along geodesics, i.
e., great circles: If a vector is pinned onto a sphere and then transported along a meridian to the equator, for some angle α along the equator and back to the initial point without changing its length and its direction in the tangent plane to the surface of the sphere, the vector will point in a different direction with a relative angle of α as the holonomy associated with the loop. If we do the same on a two-dimensional plane, the initial and the final vector will point in the same direction. Berry [1] was the first who addressed this issue in quantum mechanics: He considered a system initially in an eigenstate |n(R(t))〉 t=0 of the governing Hamiltonian H(R(t)), dependent on the parameters R(t) changing with time t. As a demonstrative example one may consider a neutron coupling to a magnetic field, H(R(t)) = -µ · B(R(t)), due to its magnetic dipole moment µ = µ n σ, where σ = (σ x , σ y , σ z ) are the Pauli matrices and µ n denotes the magnetic moment of the neutron. Suppose now that the neutron is initially polarized in the direction of the magnetic field. If the direction of the magnetic field is changed adiabatically, i. e., slowly enough to avoid transitions to an orthogonal state, the system will stay in the eigenstate |n(R(t))〉 at all times t. Furthermore, when tracing out a loop in parameter space, the final state |Ψ(τ)〉 at time τ will be the same as the initial state up to an additional phase factor (Equation 1). The first phase, the dynamical phase φ d , depends on the time needed to traverse the loop and on the instantaneous energy E n (t) = 〈Ψ(t)|H(t)|Ψ(t)〉 of the system, whereas the second phase, φ g = i ∮ 〈n(R)|∇ R n(R)〉 · dR, depends only on the circuit integral in parameter space, revealing the geometric structure. The latter is termed the Berry phase or, more generally, the geometric phase, in contrast to the former dynamical phase φ d .
φ_g can be rewritten as a surface integral by use of Stokes' theorem, φ_g = i ∫∫_S V_n · dS, where S is the surface enclosed by the loop in parameter space, dS denotes the area element, and V_n = ∇ × 〈n|∇n〉 in an obvious abbreviated notation. For the neutron example (or more generally for any spin-1/2 particle) φ_g equals half of the solid angle enclosed by the loop as seen from the degeneracy point |R| = 0 in parameter space. This can also be related to the example from geometry above, where the holonomy after the transport of the vector pinned initially to the north pole of a sphere equals the solid angle as seen from the origin of the sphere. Several restrictions have been relaxed in the course of the years, e.g., extensions to nonadiabatic [3], noncyclic and nonunitary [4], and nonpure [9,10] evolutions. The construction of Aharonov and Anandan [3] rests on an equivalence relation between vectors in Hilbert space H which differ only by an overall phase factor, |φ〉 ∼ e^{iα}|φ〉. (2) The stress is therefore shifted from the parameter space of the Hamiltonian, in case of Berry's construction, to state space. We are not interested in the changes of the driving parameters (such as the direction and strength of the magnetic field) but in the changes of the state itself. In Berry's considerations these two spaces are identical since the state follows the changes in parameters due to the adiabaticity condition. In the construction by Aharonov and Anandan one considers an open path C in Hilbert space which is projected to a path in Ray space by use of the equivalence relation in Eq. (2), i.e., the curve C∼ = π(C). The geometric phase is a property of Ray space, where C∼ is closed due to the equivalence of the initial and the final state, |φ(0)〉 ∼ |φ(τ)〉, and can be calculated via a surface integral over the area enclosed by C∼. One can find many different curves C′, C″, . . . in Hilbert space differing by a phase factor e^{iα(t)} and yielding the same curve in Ray space under the projection map π.
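The statement that φ_g equals minus half the enclosed solid angle for a spin-1/2 particle can be checked numerically. The following sketch is not from the paper; the function names, step count, and sign convention are our own illustration. It discretizes a loop of field directions at constant polar angle and accumulates the arguments of the Pancharatnam overlaps between successive eigenstates:

```python
import numpy as np

def eigenstate(theta, phi):
    """Spin-up eigenstate of n.sigma for n = (sin t cos p, sin t sin p, cos t)."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def berry_phase_cone(theta, n_steps=2000):
    """Geometric phase for a loop of field directions at constant polar angle,
    accumulated from the arguments of overlaps between successive eigenstates."""
    phis = np.linspace(0.0, 2 * np.pi, n_steps + 1)
    states = [eigenstate(theta, p) for p in phis]
    # Summing the per-step arguments avoids 2*pi ambiguities of a single product.
    total = sum(np.angle(np.vdot(states[k], states[k + 1])) for k in range(n_steps))
    return -total  # convention: phi_g = -Omega/2

theta = np.pi / 3
phi_g = berry_phase_cone(theta)
solid_angle = 2 * np.pi * (1 - np.cos(theta))  # spherical cap enclosed by the loop
```

For theta = π/3 the discrete sum converges to −Ω/2 = −π/2, matching the half-solid-angle rule stated above.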
On the other hand, for a given curve in Ray space there exists one distinct curve in Hilbert space fulfilling the parallel transport conditions, namely that two neighbouring states |φ(t)〉 and |φ(t + dt)〉 in H have the same phase, that is to say, 〈φ(t)|φ(t + dt)〉 is real and positive. This implies by Taylor expansion that 〈φ(t)| d/dt |φ(t)〉 = 0. For this curve the dynamical phase vanishes, as one can verify by inserting the Schrödinger equation into the parallel transport condition. The concept can be extended to apply to open paths in Ray space, where |φ(τ)〉 is not equivalent to |φ(0)〉, by closing the curve by a geodesic, i.e., a path in Ray space with the shortest distance from |φ(τ)〉〈φ(τ)| to |φ(0)〉〈φ(0)|. Then one obtains a well-defined surface area enclosed by the path generated by the evolution of the system plus the geodesic closure. This surface provides an expression for the geometric phase, which has been proven by Samuel and Bhandari [4]. To sum up, for a general evolution of a quantum state the state obtains a dynamical phase dependent on the energy and time as well as a geometric phase dependent only on the subjacent geometry of state space. For special Hamiltonians which fulfill the parallel transport conditions the dynamical phase vanishes, which is also the case when the state is transported along a geodesic. An example of the latter is an evolution along a great circle on a sphere for a two-level system, which we will encounter in the forthcoming discussion. Interferometric Setup As shown by Feynman [16], the description of any two-level quantum system is equivalent to the description of a spin-1/2 particle. Exploiting this equivalence, there is in principle no difference between manipulations in the spin space of neutrons with the orthogonal basis {| ↑ 〉, | ↓ 〉} as eigenstates of σ_z, and momentum space with {| k 〉, | k′ 〉} as orthogonal basis vectors corresponding to two directions of the neutron beam in an interferometer.
In both cases one can assign a geometric phase to the particular evolution of the initial state. An even more appropriate description for the interferometric case in the forthcoming discussion is in terms of "which-way" basis states {| p 〉, | p⊥ 〉}: if the neutron is found in the upper beam path after a beamsplitting plate it is said to be in the state | p 〉, or in the state | p⊥ 〉 if found in the lower beam path. In case of a 50 : 50 beamsplitting of the incident (neutron) beam into a transmitted and a reflected beam, the associated state vector after the beamsplitter can be written as an equally weighted coherent superposition of the two paths, (| p 〉 + e^{iδ}| p⊥ 〉)/√2, with the relative phase δ ∈ R depending on the particular physical realization of the beamsplitter. For testing the spatial geometric phase we use a double-loop interferometer (Fig. 1). The geometric phase can then be extracted from the argument of the complex-valued scalar product between the initial and the final state, arg 〈ψ_t|ψ_f〉 (when removing dynamical contributions, as will be discussed later). This is where the reference beam comes into play: |ψ_ref〉 is not subjected to any further evolution, but is stationary apart from adding a phase factor e^{iη} by use of the phase shifter PS1. |ψ_ref〉 propagates towards the beamsplitter BS2 from the upper path, thus we can assert it to be in the state e^{iη}|p〉. Then, by the variable phase shift e^{iη}, one can measure the shift of the interference fringes reflecting the phase difference between |ψ_ref〉 and |ψ_f〉. This preparation of the states is followed by the recombination of the two beams |ψ_f〉 and |ψ_ref〉 at the beamsplitter BS2 and the detection at the detector D_0 in the forward beam.
This step can be described by the application of the projection operator |q〉〈q| = 1/2 (|p〉 + |p⊥〉)(〈p| + 〈p⊥|) (with δ = 0, which can always be achieved by an appropriate choice of the phase of the basis states) to |ψ_f〉 as well as to |ψ_ref〉: where K is some scaling constant. The intensity I measured in the detector D_0 is proportional to the absolute square of the superposition |ψ′_f〉 + e^{iη}|ψ′_ref〉: We notice a phase shift of the interference pattern by arg 〈ψ′_ref|ψ′_f〉. This phase shift corresponds to the Pancharatnam connection [5] between the state |ψ′_f〉 and the state |ψ′_ref〉 = |q〉〈q|ψ_t〉 = |q〉, from which we can extract the geometric phase. Explicitly we obtain arg 〈ψ′_ref|ψ′_f〉 = φ_1 + arctan[√T sin ∆φ / (1 + √T cos ∆φ)], (6) where ∆φ ≡ φ_2 − φ_1. The geometric phase is defined as [6] φ_g = arg 〈ψ′_ref|ψ′_f〉 − φ_d, (7) where φ_d denotes the dynamical part. From Refs. [13] and [15] we know that the dynamical part stemming from the phase shifter PS2 is given by a weighted sum of the phase shifts φ_1 and φ_2 with the weights depending on the transmission coefficient T. In particular we have φ_d = (φ_1 + T φ_2)/(1 + T), (8) which vanishes by an appropriate choice of phase shifts and transmission, i.e., φ_d = 0 for φ_1/φ_2 = -T. By varying the relative phase ∆φ from 0 to 2π and setting φ_d = 0, the geometric phase φ_g can be plotted over ∆φ (Fig. 2). Bloch-Sphere Description For every two-level system we can use the Bloch sphere for depicting the state vectors and evolutions thereof as points and curves on a sphere. Then the shift of the interference pattern in Eq. (5) can be related to the surface enclosed by the paths of the state vectors on the Bloch sphere or, equivalently, to the solid angle traced out by the state vectors as seen from the origin of the sphere. As we can observe in Fig. 3, the north pole of the sphere can be identified with a state with well known path, i.e., an eigenstate of the observable |p〉〈p|.
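The fringe-shift phase and the vanishing of the dynamical part can be sketched numerically. This is our own illustration, assuming the standard model of the setup in which the absorber multiplies the amplitude of one path by √T; the function names and the experimental value T = 0.118 quoted later are used for concreteness:

```python
import numpy as np

def fringe_phase(dphi, T):
    """Pancharatnam phase arg<q|psi_f> for psi_f ~ e^{i phi1}|p> + sqrt(T) e^{i phi2}|p_perp>,
    with the split phi1/phi2 = -T chosen so that the dynamical phase phi_d vanishes."""
    phi2 = dphi / (1 + T)
    phi1 = -T * phi2          # then phi2 - phi1 = dphi and phi1/phi2 = -T
    return np.angle(np.exp(1j * phi1) + np.sqrt(T) * np.exp(1j * phi2))

T = 0.118                             # absorber transmission used in the experiment
theta = 2 * np.arctan(np.sqrt(T))     # Bloch-sphere polar angle, T = tan^2(theta/2)
cyclic = fringe_phase(2 * np.pi, T)   # reduces to the cyclic value pi*(cos(theta) - 1)
```

At ∆φ = 2π the expression collapses to φ_g = −2πT/(1 + T), which is exactly π(cos θ − 1) with T = tan²(θ/2), i.e., the cyclic special case discussed below.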
After the beamsplitter BS1 the state |ψ_t〉 evolves to an equal superposition of upper and lower path; therefore the evolution on the Bloch sphere is given by a geodesic from the north pole to the equatorial line (the particular point on the equator is arbitrary due to the arbitrary choice of the phases of the basis vectors). The absorber changes the weights of the superposed basis states. In particular, for the extremal values of T, parameterized by the angle θ with T = tan²(θ/2), we either end up again with an equally weighted superposition for no absorption (T = 1 or θ = π/2), or the state is on the north pole for total absorption (T = 0 or θ = 0), since in the latter scenario we know the particle has taken the upper path when detecting a neutron in D_0. For T ∈ (0, 1) the state is encoded as a point on the geodesic from the north pole to the equatorial line. Due to the phase shifter PS2 we obtain a relative phase shift between the superposed states of ∆φ = φ_2 − φ_1: This can be depicted as an evolution along a circle of latitude on the Bloch sphere with a periodicity of 2π. The recombination at BS2 followed by the detection of the forward beam in D_0 is represented as a projection to the starting point on the equatorial line, i.e., we have to close the curve associated with the evolution of the state by a geodesic to the point |q〉〈q| on the sphere, as discussed for non-cyclic paths in Sec. 2. As for the reference state |ψ_ref〉, we note that the phase shift of η has no impact on the position of the state on the Bloch sphere; it stays at the north pole. Due to the recombination at BS2 and the detection, the state is also projected to |q〉〈q|, contributing to the forward beam incident on the detector D_0. The paths are depicted in detail in Fig. 4 for cyclic, Fig. 4(a), as well as non-cyclic evolution, Fig. 4(b). For a relative phase difference greater than π/2 we have to take the direction of the loops into account. In Fig.
4(b) the first loop is traversed clockwise, whereas the second loop is traversed counterclockwise, yielding a positive or negative contribution to the geometric phase, respectively. The numerical calculation of the surface area F enclosed by the path traversed by the neutron is straightforward by evaluating the solid angle Ω = ∫_F sin θ dθ d(∆φ) via a surface integral and using φ_g = -Ω/2. For the cyclic case this integral can be solved easily by calculating the segment on the sphere according to Fig. 4(a) to obtain φ_g = -Ω/2 = π(cos θ - 1). For the non-cyclic case no analytic expression has been found, so the surface integral is evaluated numerically to compare with the phase shift of the interference fringes. Experimental Results For an experimental test of the spatial geometric phase we have used the double-loop perfect-crystal interferometer installed at the S18 beamline at the high-flux reactor at the Institut Laue-Langevin, Grenoble [17]. A schematic view of the setup is shown in Fig. 1. Before falling onto the skew-symmetric interferometer, the incident neutron beam was collimated and monochromatized by the 220-Bragg reflection of a Si perfect crystal monochromator placed in the thermal neutron guide H25. The wavelength was tuned to give a mean value of λ_0 = 0.2715 nm. To eliminate the higher harmonics we have used prism-shaped silicon wedges. The beam cross-section was confined to (5 × 5) mm² and an isothermal box enclosed the interferometer to achieve reasonable thermal isolation from the environment. For the phase shifters, parallel-sided aluminium plates have been used: 4 mm inserted in the first loop as PS1, and 4 mm and 0.5 mm, respectively, in the second loop for PS2, yielding a ratio of 1/8 for φ_1/φ_2. To avoid dynamical phase contributions a gadolinium solution with T = 0.118 [19] has been used as an absorber.
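Since no closed-form expression is available in the non-cyclic case, the enclosed surface can be evaluated numerically. One way to do this (a sketch of our own; the discretization, path parametrization, and sign convention are illustrative, not taken from the paper) is to sample the Bloch-sphere path described above, down the meridian, along the circle of latitude, plus a single overlap for the geodesic closure, and accumulate the Pancharatnam overlap arguments; for a closed discrete loop this yields −Ω/2 directly:

```python
import numpy as np

def state(theta, phi):
    # Point (theta, phi) on the Bloch sphere in the which-way basis {|p>, |p_perp>}
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def noncyclic_phase(theta_a, dphi, n=800):
    """phi_g = -Omega/2 for the loop: |q> down the meridian to (theta_a, 0),
    circle of latitude to (theta_a, dphi), geodesic closure back to |q>."""
    pts = [state(th, 0.0) for th in np.linspace(np.pi / 2, theta_a, n)]
    pts += [state(theta_a, ph) for ph in np.linspace(0.0, dphi, n)]
    pts.append(pts[0])  # one overlap between endpoints acts as the geodesic closure
    return -sum(np.angle(np.vdot(pts[k], pts[k + 1])) for k in range(len(pts) - 1))

T = 0.118
theta_a = 2 * np.arctan(np.sqrt(T))       # latitude set by the absorber, T = tan^2(theta/2)
cyclic = noncyclic_phase(theta_a, 2 * np.pi)
```

In the cyclic limit ∆φ = 2π the meridian legs cancel and the result reproduces the analytic segment value φ_g = π(cos θ_a − 1); intermediate ∆φ give the non-cyclic values without any closed-form expression.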
For a comparison with the theoretically predicted values one has to keep in mind that the contrast, reflecting the coherence properties, differs between the beams in the interferometer. Accounting for this experimental fact in the theoretical derivation of the geometric phase, we notice a slightly flattened curve in Fig. 5 compared to Fig. 2. Nevertheless, one can recognize the increase in geometric phase for ∆φ ∈ [0, π/2] due to the positively oriented surface, followed by a decrease due to the appearance of a counterclockwise traversed loop on the sphere yielding a negative phase contribution. Conclusions In summary, we have shown that one can ascribe a geometric phase not only to spin evolutions of neutrons, but also to evolutions in the spatial degrees of freedom of neutrons in an interferometric setup. This equivalence is evident from the description of both cases via state vectors in a two-dimensional Hilbert space. However, there have been arguments against the experimental verification in [13], which we believe can be settled in favour of a geometric phase appearing in the setup described above. The twofold calculation of the geometric phase, either in terms of a shift in the interference fringes or via surface integrals in an abstract state space, allows for a geometric interpretation of the obtained phase shift.
Analysis of the Extreme Equilibrium Conditions of an Internal Cavity Located Inside a Flat Metal Plate Subjected to an Internal Pressure p The influence of surface bulges and cavities within metals is an important metallurgical-mechanical problem that has not been fully solved and motivates multiple discussions. This is related not only to the generation of interfaces, but also to the distribution of alloying components and elements. In this study, Laplace's equation was used to develop a set of equations to describe these kinds of defects in plates, which arise during metallurgical processes, and this can be used for the prediction of failures of pipelines subjected to internal pressure. In addition, the stability conditions of a cavity under an internal pressure are analyzed. The developed method allows identification of the stress state during the generation of the cavity and its propagation. Furthermore, finite element analyses were carried out in order to show, first, the stress distribution around a cavity subjected to a series of theoretical operating conditions and, second, the crack growth at the tip of the cavity. Introduction In recent years, significant advances have been made in the development of new criteria that explain the emergence, evolution, and propagation of cracks in materials. Fracture mechanics within the framework of linear and nonlinear mechanics is based on Griffith's theory, which considers the extreme equilibrium of the stress applied to a crack or a defect of critical length [1][2][3]. Linear fracture mechanics treats a fractured body as a structure, and all the attention is focused on the small area adjacent to the tip of the crack. We note that this theory is applicable to materials such as glass, but metals have the property of being significantly deformed in the plastic range, and accounting for this plasticity is a complex problem.
The objective of this work is focused on establishing the complex state of stresses that arises at the tip of the crack. High-strength steels are currently used for manufacturing pipelines in order to have the best safety conditions in the transport of gases and fuels [4,5]. However, serious accidents may occur during operation if there are porosities in the material, because cavities and micro-discontinuities can be generated in the material structure when it is subjected to a high internal pressure. The most probable mechanism in the formation of fractures is the deceleration of dislocation groups at the grain boundaries in combination with the diffusion of vacancies. The micro-discontinuities and micro-fractures will eventually produce a crack whose propagation will cause fragmentation of the body. The analysis of defects and cracks in metal plates, as well as their causes, has been widely studied. In some cases, hydrogen diffusion can produce embrittlement and can be associated with stress-corrosion cracking [6][7][8]. In other cases, hydrogen bubbles can appear; they are cavities below the surface of the material due to excessive internal pressure and can be the origin of hydrogen-induced cracking [9][10][11]. Hydrogen embrittlement commonly occurs in low-strength steels with yield strengths below 80 ksi [12], although there are also cases in aluminum [9][10][11][13]. The presence of cavities within the structure of a gas or oil pipe generates serious problems related to the combination of mechanical as well as physical and chemical fields. If the metal is used to withstand internal pressures, as in transport pipelines, the internal pressures create serious conditions for the formation and growth of cavities.
The aim of this study is to propose a new theoretical analysis of this failure by solving Laplace's equation, which is relevant since these cavities create a complex stress state that can induce fracture of the body [14][15][16][17][18][19][20][21]. Plastic deformation develops in several stages and at multiple levels; however, the physical foundation of this process was established only a few years ago. Three stages (I-III) in the crack growth process were previously identified. The transition from one stage to the other is a process based on the evolution of the dislocational structure of the metal. This was demonstrated by tests and studies carried out in situ in the HVTEM [22]. These studies demonstrated that the dislocational structure spontaneously reorganizes itself with dramatic alterations of the physical and mechanical properties. These are self-organizing processes because the dislocations, being non-equilibrium defects and energy carriers, under the action of external factors allow the system to acquire a minimum entropy during its evolution. Because of the reconfiguration of the dislocational structure, the frontal area of a propagating fracture evolves into an extraordinarily complex structure, which numerical simulation approximates closely to what actually takes place. The conditions for a crack to arise and propagate depend on the state of stresses appearing at the tip of the crack, and it is precisely here that the contribution of this work is focused. Analytical Methodology Consider that, because of the internal pressure p under which a pipeline operates, a cavity propagates and subsequently causes a fracture. Assuming that the cavity has a very small area compared to the diameter P_d of the pipeline, it is possible to simplify the problem of the cavity critical balance through the analysis of the extreme equilibrium, which holds when l ≪ P_d, where l is the length of the cavity.
Additionally, we neglect the curvature of the pipeline, simplifying a problem that is originally three-dimensional. It is also assumed that the original cavity was an internal notch of circular shape with a diameter l. In Figure 1, the scheme of the geometry of the plate and its cavity is shown. Without diminishing the generality of the problem, it is initially assumed that the plane of incision of the original cavity coincides with the central plane of the plate, in which case h_1 = h/2, where h is the thickness of the plate. Subsequently, when a pressure p is applied to the incision plane, the cavity grows. In Figure 2a, the equivalent system 1 is shown, that is, two flat plates of thickness h_1, fixed along the border and loaded with a uniformly distributed force p. Selection of the Differential Segment To analyze the evolution of the cavity as the pressure increases, the solution for a circular plate subjected to bending can be used. The known solution applicable to this problem is only valid when the load does not cause plastic deformation in the plate.
The theory of elastic bending for similar plates has been previously analyzed [23][24][25][26][27]. Taking this theory for the development of the present study, we consider a load applied perpendicularly at the center of a circular plate, which is thereby bent so that its curvature is altered. This gives the plate a curved middle surface with double curvature. The shape of this surface can be expressed by a radial function of the deflection as: By the elastic property of the plate it is assumed that: In this case the bending of the plate can be analyzed independently of its stretching. When the deflection w is comparable with the thickness h_1, the solution is required to consider the stretching of the middle surface. In the theory of plate bending, two hypotheses are used to simplify the problem. First, the Kirchhoff hypothesis of invariant normals, which considers that the points initially located along a line normal to the middle plane remain on this normal line after the deformation. Second, the hypothesis of the non-deformable layers of the plate, which considers the normal stresses negligible in comparison with the bending stresses. Considering a circular plate of constant thickness with the Z axis through its center, it is easy to see that the problem is symmetric, so the deflection is a function of the radial coordinate only. If the angle of rotation of the normal is θ, then ω is related to it by θ = -dω/dr (Materials 2020, 13, 2043). The negative sign is due to the bending direction: as shown in Figure 3a, with the decrement of ω, θ increases. Now, if we consider radial sections along the plate before and after the load is applied, it is noted that as a result of the plate deflection the A_1B_1 normal rotates by an angle θ, and the A_2B_2 normal rotates by an angle θ + dθ, as shown in Figure 3b.
The v-v′ segment, which is at a distance z from the middle plane, is extended by z(θ + dθ) - zθ = z dθ. With this, the radial elongation is: The elongation of point v′ in the tangential direction is determined by comparing the length of the circumference on which it is located before and after the application of the load. If, before deflection, this circumference was equal to 2πr and after the loading it becomes 2π(r + zθ), then: For a better understanding, in Figure 4 a differential plate element is shown, bounded by two radial sections with the angle dϕ between them, and by two cylindrical surfaces of radii r and r + dr.
Now we analyze the equilibrium of this differential element after loading, replacing the effect of the surrounding plate by internal stresses. The generalized Hooke's law for our two-dimensional problem is expressed by: where E is the modulus of elasticity, µ is Poisson's ratio, σ_r is the normal stress in the radial direction and σ_t is the normal stress in the tangential direction. Expressing the stresses from Equations (8) and (9) and considering (6) and (7), it is possible to write the stresses in the following form: The resultant forces applied on the faces of the elementary body of the plate can now be determined. The tangential stresses on the A_1B_1 face contribute to the resulting shear load, which is parallel to the Z axis. We represent the intensity of this force, that is, the corresponding magnitude per unit length of the arc r dϕ, by Q in kg/cm.
Its resultant will be equal to Q r dϕ. Similarly, for the A_2B_2 face, the resultant shear force acting on this face is -(Q + dQ)(r + dr)dϕ. Since the normal forces acting on the element faces are equal in magnitude but opposite in sign, the resultant of the normal forces is equal to zero. Their effect produces only bending moments in the vertical planes of the element. We represent the moment intensity per unit length on the faces A_1B_1 and A_2B_2 by M_r and M_t in kg cm/cm. In this case, the moment balance equations are: Using Equations (10) and (11) we obtain: Since the integrand of Equations (14) and (15) can be expressed in terms of h³/12, it is possible to write: Thus: where D is the bending stiffness of the plate. The balance equation of the selected element of the plate can be obtained by projecting the forces along the Z axis and summing all the moments with respect to the Y axis, which is tangent to the arc of radius r in the middle plane; we obtain: From which: and By neglecting higher-order terms, Equation (21) can be expressed as: The remaining balance equations reduce to identities by the symmetry conditions. Substituting Equation (18) into Equation (22), under the condition that D is constant, yields: Now we rearrange as follows: d/dr [(1/r) d(rθ)/dr] = -Q/D. Next, we integrate twice to obtain the radial distribution of the rotation angles: The integration constants C_1 and C_2 are determined from the boundary conditions of each specific case. Equation (6) is used to determine the radial function of the deflection, and using Equation (12), the moment diagrams can be obtained according to Equations (14) and (15).
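For a plate clamped along its contour under a uniform pressure p, the integration just described with Q = pr/2 leads to the classical closed forms θ(r) = pr(R² − r²)/(16D) and ω(r) = p(R² − r²)²/(64D). The short numerical sketch below checks these against the boundary conditions and the edge stress; the material and geometry values are made-up illustration values, not taken from the paper:

```python
# Clamped circular plate of radius R under uniform pressure p (illustrative values).
E, mu = 2.1e6, 0.3            # kg/cm^2 and Poisson's ratio (assumed steel)
h1, R, p = 0.2, 2.0, 50.0     # plate thickness, radius (cm), pressure (kg/cm^2)
D = E * h1**3 / (12 * (1 - mu**2))    # bending stiffness of the plate

def theta(r):   # rotation of the normal, from integrating the plate equation
    return p * r * (R**2 - r**2) / (16 * D)

def w(r):       # deflection, from theta = -dw/dr with w(R) = 0
    return p * (R**2 - r**2)**2 / (64 * D)

def M_r(r):     # radial moment intensity M_r = D*(dtheta/dr + mu*theta/r)
    return p * ((1 + mu) * R**2 - (3 + mu) * r**2) / 16

w_max = w(0.0)                        # maximum deflection at the center: p R^4 / (64 D)
sigma_edge = 6 * abs(M_r(R)) / h1**2  # maximum bending stress at the clamped contour
```

Both boundary conditions (θ = 0 at the center and at the fixed contour, w = 0 at the contour) are satisfied, and the edge stress reduces to 3pR²/(4h₁²), the largest value in the plate, consistent with the moment diagrams discussed next.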
From Equations (10), (11), and (18), the functions σ_r = f(M_r) and σ_t = f(M_t) are: Substituting the expression for the stiffness of the plate: In this case, the maximum stress values are reached on the surfaces of the plate, at z = ±h/2: On the Curvature of the Cavity Considering the problem as symmetrical with respect to the central plane, we analyze just the lower part; later we analyze the case when the convexity is formed on the upper side. It can be considered that the initial position of the plate corresponds to the diagram in Figure 2. We can determine Q(r) from the equilibrium conditions for a concentrically sectioned circular plate element: So, substituting Equations (33) and (34) into (24) and integrating twice, we obtain: At the center of the plate, r = 0 and θ = 0, from which it follows that C_2 = 0. For a fixed contour, θ(R) = 0, and we obtain: Finally, Equation (36) can be rewritten as: Using Equation (6), we can see that: Imposing ω(R) = 0 gives: From Equation (18) we find the expressions for M_r(r) and M_t(r): In Figure 5, the moment diagrams calculated with these expressions are shown. The maximum stresses are reached at the fixed contour, on the internal side of the cavity.
We analyze the stress of the plate at the fixed support: The equivalent stresses are: The maximum deflection is reached in the center of the plate, for r = 0. The stress state at the plate center is more adverse on the outer surface of the cavity, where the tensile stresses are: So far, we have analyzed the formation of cavities without taking into account that the pipeline is exposed to an internal pressure p 0 . In the membrane approximation, Laplace's equation allows the σ m and σ t stresses produced by the pressure p 0 to be calculated: where σ m is the stress in the meridional direction of the pipeline, σ t is the stress in the tangential direction, R m is the radius of curvature in the meridional direction, and R t is the radius of curvature in the tangential direction of the pipeline. For a pipeline, R m = ∞. Analyzing the balance of the pipeline in the axial direction, we can write: from which we obtain: The selected differential element of the pipeline with a bulge or a cavity is shown in Figure 6. In this approach there are no stresses in the direction normal to the pipeline surface. Therefore, at the contours of the plate that forms the cavity, the stresses arising from the pressure in the duct and the pressure inside the cavity must be added. Analyzing only the axial direction: and in the tangential direction: To analyze the axial section of the pipe, we separate the portion of the plate that bounds the cavity, which is under a pressure p, and replace its effect by an internal stress distribution on the contour. Thus, we can analyze the axial section of the pipeline with a cavity as a plate with a crack, whose contour is under a uniformly distributed stress and moment. The probability of this crack propagating should be analyzed within the framework of fracture mechanics, both before σ 1 = σ m + σ r max reaches the yield strength in the elastic regions and after the yield point is exceeded in the elastic-plastic range, σ 1 > σ 0.2 . The problem can be analyzed in the plastic range, until all the segments of the duct that surround the cavity are under a pressure p. The behavior of the material enclosing the cavity under this pressure, after its transition to a plastic state, requires a special analysis. To solve this problem, we apply an approach developed by Feodosiev [28] to describe the bending of a membrane.
First, we assume that the stress is uniformly distributed through the plate thickness and that the surface of the plate acquires a spherical shape. Figure 7 shows the geometry of the plate after being subjected to the load. Considering that ρ is the radius of curvature of the plate after plastic bending, we get: Since α is very small, we can assume that: The deflection of the plate, f, is: and the meridional and tangential stresses in the plate are: From Laplace's equation, ρ = R²/(2f), thus: Furthermore, using relationships based on the theory of plasticity, the coordinate axes are oriented so that σ z = 0, σ x = σ m , σ y = σ t , thus: from which σ m and σ t follow. Substituting into the plasticity equation: Now, substituting Equation (66) into the equation for the strain intensity, we obtain: The elongation of the plate after bending in the plastic range can be determined from the difference in length between the AC arc and the AB line: Substituting Equation (68) into Equation (67'), we obtain: To calculate the stress intensity σ i , considering that σ z = 0 and σ m = σ t , Equations (71) and (69) can be used to obtain the function f = f (p). To achieve this, we assign a value to the deflection f, calculate ε i with Equation (69), and determine σ i from the stress-strain curve in σ i -ε i coordinates. Then, using Equation (71), we calculate the pressure. Knowing the values of f and p, we plot the curve. Alternatively, the tensile curve expressed in parabolic form, σ i = σ 0 + K ε i 1/2 , may be employed.
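The f-p procedure just described can be sketched numerically. The specific formulas below are our assumptions, standing in for Equations (69) and (71), which are not reproduced in the text: a spherical-cap strain intensity ε i ≈ 2f²/(3R²), the parabolic hardening law σ i = σ 0 + K√ε i , and the Laplace relation p = 4σ i h f/R² following from ρ = R²/(2f). Treat this as an illustrative sketch, not the paper's exact working:

```python
import math

def pressure_for_deflection(f, R, h, sigma0, K):
    """Illustrative f -> p mapping for plastic membrane bending.

    Assumed forms (hypothetical stand-ins for Eqs. (69) and (71)):
      eps_i   = 2*f**2 / (3*R**2)        # strain intensity of spherical cap
      sigma_i = sigma0 + K*sqrt(eps_i)   # parabolic stress-strain curve
      p       = 4*sigma_i*h*f / R**2     # Laplace, with rho = R**2/(2*f)
    """
    eps_i = 2.0 * f**2 / (3.0 * R**2)
    sigma_i = sigma0 + K * math.sqrt(eps_i)
    return 4.0 * sigma_i * h * f / R**2

# Tabulate an f-p curve for a 100 mm radius, 12.7 mm thick plate (SI units).
for f in (0.001, 0.002, 0.005):
    print(f, pressure_for_deflection(f, R=0.1, h=0.0127, sigma0=555e6, K=1e9))
```

Sampling f and evaluating p in this way reproduces the tabulation procedure described above: assign f, obtain ε i and σ i , then compute p.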
Results and Discussion Two cases were analyzed using the Academic version of the ANSYS® software (2020 R1) in order to illustrate the mechanical behavior around the cavity; a group of numerical analyses with individual CAD models was created for each case. The first case is divided into three subcases, (a) to (c). The purpose of these computational finite element analyses was to identify the deformations of the cavity along the horizontal and vertical axes. The second case is a qualitative analysis of fracture propagation in a small portion of the cavity, specifically at the tip of the crack, for case (a); in case (b), the propagation of the crack is caused by a pressure load applied on the internal faces of the crack tip. The material properties of API X80, with a yield strength of 555 MPa, were considered for the analysis [29]. The analyses of the first stage should be viewed as a set of instantaneous static states in which the stresses can change according to the applied boundary conditions described below; the second stage was calculated with the fracture tool of the academic ANSYS® software. First Stage A commercial tube of 20 inch in diameter and 0.5 inch (0.0127 m) in thickness was modelled for this stage with the following boundary conditions. Case 1a considers three different external load temperatures, T1 = −30 °C, T2 = 22 °C, and T3 = 50 °C, applied on the external face of the tube to create contraction or dilatation; the solution is carried out with the ANSYS® 2020 R1 thermal analysis module. The results of this analysis are loaded into the static structural module to include the effect of the applied pressure, creating a coupled thermal-structural analysis. For each selected temperature, the tube is subjected to an internal pressure P of 10, 90 and 180 PSI, as shown in Figure 8a. The right side of the tube was restrained such that the coordinate displacements are x = 0, y = 0 and z = 0.
Case 1b also considers the operating conditions of temperature and pressure of the previous case, plus the presence of a 200 µm cavity located on the side opposite to the displacement restraint, as shown in Figure 8b. This case applies the same pressure magnitude inside the cavity (p c ) and the tube (p t ). Case 1c considers the same model as the previous case but without the pressure in the tube; p t is eliminated. The model required sliced areas to create result paths; the paths were used to read the results always at the same nodes for the x and y coordinates, as shown in Figure 8c. Tetrahedron-type elements were selected to construct the continuum. Figure 9 shows the resulting mesh for the tube model and a close view of the refined mesh around the cavity; creating the mesh required the use of various size controls for the edges, areas and volumes. The selection of the cases comes from the fact that, in real operation, tubes are subjected to a series of complex conditions, such as external temperature changes depending on the geographic location. Pressure changes were considered because of the flow-rate variations caused, for example, during maintenance operations. Many other real conditions, such as tube inclination, friction loss, internal liquid reactions, etc., are not considered in this study. As a result, Figure 10 shows the plate thickness changes, with maximum values found at T1 = −30 °C. The cavity inside the plate, subjected to different temperatures and pressures, suffers dimensional changes, which are shown in Figure 11a,b: ∆x and ∆y obtained with the presence of p c and p t . Figure 11c,d shows the resulting ∆x and ∆y obtained when p t is eliminated. With respect to the stress distribution, the presence of the pressure inside the tube creates a difference around the cavity, as shown in Figure 11e. The maximum stress levels are located on the horizontal sides, and their magnitude depends on the applied pressure. Figure 11f shows the stress distribution around the cavity without the pressure inside the tube.
The resulting tetrahedral mesh for the second case is shown in Figure 13. The model must be regarded as a pre-meshed crack; that is, it contains a generated tip from which the fracture can propagate. The mesh size changes are a consequence of the sphere of influence located at the tip of the crack. Figure 14a shows the stress distribution created during the propagation of the crack for case 2a; a straight propagation is observed, with a slight downward inclination, in the same direction as the initial displacement. The maximum stress level is maintained at the front of the crack, and the persistence of a horizontal line across the width of the crack growth can be observed in Figure 14b, which is slightly rotated about the vertical axis to show the mentioned line. Figure 14c shows the final geometry obtained from the crack growth created by the pressure on the internal walls. Here the pressure generates a random break at the tip of the crack, in both direction and magnitude, and the creation of instantaneous areas with normal planes different from those of the original geometry is observed during that process. Figure 14d shows that the horizontal line along the tip of the crack no longer remains uniform on any of the x, y and z coordinate axes. Figure 14e shows an example micrograph of a similar crack growth, 200 µm in length, produced by internal pressure; it was taken in a High Voltage Transmission Electron Microscope (HVTEM), and the random nature of the crack growth is observed. Conclusions The equilibrium and growth of a cavity inside a pipeline were analyzed, considering the effect of the internal pressure under which the pipeline works. The problem was considered as two-dimensional, for a cavity of the same characteristics but located inside a flat plate, and was solved using the theory of elastic bending.
Subsequently, by using the Laplace equation, the increased stresses in the meridional and tangential directions of the pipe under its internal pressure were determined. Consequently, an equation to determine the behavior of the cavity in the flat plate in the plastic range was also obtained. A sudden increment in the stress levels was observed at the tip of the crack in comparison with the initial stage of cavity formation. It can also be observed that the appearance of maximum stress levels, the instantaneous formation of small portion areas, and the crack growth caused by pressure increments follow a random pattern, which complicates the prediction of the future trajectory.
Ab initio instanton rate theory made efficient using Gaussian process regression Ab initio instanton rate theory is a computational method for rigorously including tunnelling effects into calculations of chemical reaction rates based on a potential-energy surface computed on the fly from electronic-structure theory. This approach is necessary to extend conventional transition-state theory into the deep-tunnelling regime, but it is also more computationally expensive, as it requires many more ab initio calculations. We propose an approach which uses Gaussian process regression to fit the potential-energy surface locally around the dominant tunnelling pathway. The method can be converged to give the same result as an on-the-fly ab initio instanton calculation but requires far fewer electronic-structure calculations. This makes it a practical approach for obtaining accurate rate constants based on high-level electronic-structure methods. We show fast convergence to reproduce benchmark H + CH4 results and evaluate new low-temperature rates of H + C2H6 in full dimensionality at the UCCSD(T)-F12b/cc-pVTZ-F12 level. I. INTRODUCTION Transition-state theory (TST) has surely become the most popular method for evaluating reaction rates in gas-phase chemistry [1]. It has achieved this status due to its simplicity and the fact that it can be evaluated with efficient computational algorithms. Only two geometry optimisations are needed, for the reactant and transition states, along with two Hessian calculations, one at each stationary point. As only a small number of electronic-structure calculations are needed to evaluate the TST rate, expensive high-level ab initio methods can be used. This is necessary to achieve a good prediction, as small errors in the PES lead to exponential errors in the rate [5,6]. Ring-polymer instanton theory has proved itself to be a useful and accurate method for computing the rate of a chemical reaction dominated by tunnelling [7].
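The TST evaluation described above (two optimisations plus two Hessians) reduces to a simple Eyring-form expression once the barrier height and the partition functions are in hand. A minimal sketch, assuming the textbook form k = (k_B T/h)(Q_ts/Q_r) exp(−V/k_B T); the function name and inputs are illustrative, not taken from the paper:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def harmonic_tst_rate(barrier, q_ts, q_reactant, temperature):
    """Textbook Eyring-form TST rate: k = (kB*T/h) * (Q_ts/Q_r) * exp(-V/(kB*T)).

    barrier: classical barrier height in J; q_ts, q_reactant: partition
    functions (transition state without the reactive mode, and reactant).
    """
    kbt = KB * temperature
    return (kbt / H) * (q_ts / q_reactant) * math.exp(-barrier / kbt)
```

The exponential factor makes the point in the text concrete: a small error in the barrier enters the rate exponentially.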
The method is based on a first-principles derivation from the path-integral representation of the quantum rate [8-10] and can be thought of as a quantum-mechanical generalisation of TST [15-18]. When compared with benchmark quantum dynamics approaches applied to polyatomic reactions, the instanton method typically gives low-temperature rates within about 20-30% of an exact calculation on the same PES [15,19]. This is, in many cases, less than the error in the rate which can be expected to result from the best achievable convergence of the electronic Schrödinger equation, implying that the accuracy of instanton theory itself is not the major issue. The ab initio instanton method is very efficient when compared with other quantum dynamics approaches, including path-integral molecular dynamics or wave-function propagation. However, it remains considerably more computationally expensive than a TST calculation. The major reason for this expense is that energies, gradients and Hessians of the PES are required, not just at the transition state, but for each ring-polymer bead along the instanton, of which about 100 may be required. For high-accuracy electronic-structure methods, such as coupled-cluster theory, gradients and Hessians are typically evaluated using finite differences, and can thus consume a lot of computational power. If the ring-polymer instanton method is to become widely applied in place of TST, the number of ab initio points will need to be reduced to bring the computational expense down, closer to that of a TST calculation. It is important that high-quality electronic-structure calculations are employed, as results can be strongly dependent on the PES and show significant errors when cheaper, less accurate surfaces are used [19,20].
One suggestion for decreasing the computational effort required is to run the instanton calculation using a low-level surface and partially correct the result using a few high-level single-point calculations along the optimised pathway [14,21,22]. This approach (termed the 'dual-level instanton approach') certainly improves results, but cannot always be relied upon, as in certain cases the location of the instanton pathway may vary considerably depending on the quality of the PES [24-27]. These approaches also have the potential to break down when the instanton pathway exhibits strong corner-cutting behaviour and deviates significantly from the transition state. The procedure which has generally been followed for ring-polymer molecular dynamics rate theory [28,29] or wave-function propagation methods [30,31] has been to use an analytical function for the PES which is fitted to approximately reproduce ab initio points on the surface. In particular, much attention has been given to water potentials [32-34], on which instanton calculations have also been carried out for comparison with high-resolution spectroscopy [35,36]. Despite improvements and automation of this procedure, it remains a difficult task to fit a global potential, which is often based on tens of thousands of ab initio points [37], computations which we wish to avoid.
The reason why these fitting procedures are typically difficult to carry out in practice is that a PES is a complex high-dimensional function. For many applications, including molecular dynamics or wave-function propagation, it is important to have a globally accurate PES. In particular, if non-physical minima exist in the PES, the dynamics could be attracted there and give nonsensical results. Instanton theory has a particular advantage in that it only requires knowledge of a small region of the PES, located along a line representing the dominant tunnelling pathway. This implies that it might be possible to fit a locally accurate surface around this small region in an efficient manner, as represented by Figure 1. In this way we ensure that no extrapolation is used, only interpolation, which is expected to be well behaved. In this paper, we describe how we use Gaussian process regression (GPR) [38] to fit a local representation of the PES and thereby obtain the instanton rate using only a small number of ab initio calculations. By converging the rate with respect to the number of electronic-structure calculations, it is possible to obtain the same results as ab initio instanton theory for a fraction of the cost. In this way, our GPR approach is almost as efficient as a TST calculation, but has the accuracy of a fully converged ab initio instanton calculation. We are then able to take advantage of recent developments in high-accuracy electronic-structure methods [39], which might otherwise be too expensive for an on-the-fly calculation. A similar combination of GPR and path optimisation has been used successfully by the group of Jónsson [40,41]. A number of new developments are necessary for our implementation, as instanton theory also requires accurate knowledge of Hessians along the path, and because we apply the approach to gas-phase reactions, we must account for rotational invariance.
In the following, we describe the background theory as well as the particulars of our implementation of the approach. Results are then presented for two applications and the convergence properties discussed. FIG. 1. The only areas of the PES which need to be accurately known are those around the instanton pathway and the reactant minima (in order to obtain their partition functions). In this image, they are represented by the coloured areas, whereas those that are not built into the GPR are unshaded. The blue points represent the beads along the instanton path, while the black points represent the reactant and the transition state. Note that the tunnelling pathway cuts the corner to explore a space far from the transition state. II. THEORY The results in this paper are computed by combining a number of different approaches. Ring-polymer instanton theory is used to evaluate the rate based on a GPR fit to the PES, which has a training set comprised of coupled-cluster electronic-structure calculations. It will be necessary to transform some data between different coordinate systems, so as to use an appropriate set for each part of the calculation. The instanton equations are defined in Cartesians, as are the inputs and outputs of the electronic-structure calculations, but the GPR is best built using internal coordinates to ensure that it is rotationally invariant. In this way we formally make no further approximations to instanton theory and also avoid having to construct a kinetic-energy operator in curvilinear coordinates. A. Ring-Polymer Instanton Theory In the ring-polymer version of instanton theory [10], the dominant tunnelling pathway is represented by a path discretised into N segments. The points where the segments begin and end are given by Cartesian coordinates, x i , called "beads". Because the instanton pathway folds back on itself, only one half of the path need be specified [12,13]. A path defined by a set of N/2 beads, {x 1 , . .
., x N/2 }, has the associated half-ring-polymer potential of Eq. (1), where x i,j is the Cartesian coordinate of the ith bead in the jth nuclear degree of freedom, with associated mass m j . The number of degrees of freedom is 3n, where n is the number of atoms. The spring constants are defined by the temperature, T, such that ω N = 1/(β N ħ), with β N = 1/(N k B T). The instanton is located by optimisation methods [12,13]; these require gradients of the target function at each iteration, but use update formulae to avoid recomputing the Hessians [42]. The gradient of the ring-polymer potential depends on the gradients of the underlying PES at each bead geometry. In the on-the-fly implementation, these are obtained directly from an electronic-structure package; here they are derivatives of the GPR-fitted potential. Once the instanton pathway is optimised, the theory accounts for fluctuations up to second order. Thus, in order to evaluate the rate, we require Hessians at each bead. Again, these can be computed by an electronic-structure package or from the GPR. The calculation of a Hessian is usually carried out using second-order finite differences, and is therefore on the order of 3n times more expensive than a gradient calculation. Under the instanton approximation, the rate is given by an expression in which the action is S = 2β N ħU N/2 ; explicit expressions for the instanton vibrational, rotational and translational partition functions are given in Ref. 10. The result should be converged with respect to the number of beads, N. Typically, on the order of N = 100 beads are needed to obtain a rate converged to two significant figures. B. CCSD(T)-F12 Theory For electronic structures where the independent-particle model is qualitatively correct, electronic energies computed at the basis-set-limit CCSD(T) level of theory are expected to be accurate to better than 1 kcal/mol for reaction barriers, 0.1 pm for structures and 5 cm−1 for harmonic vibrational wavenumbers [43].
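Returning to the ring-polymer potential of Sec. II A, a minimal numerical sketch of the half-ring-polymer potential can be written as follows, assuming open-chain springs between successive beads with ω N = 1/(β N ħ); endpoint weighting conventions differ between formulations and are glossed over here, and the function names are ours:

```python
import numpy as np

def half_ring_polymer_potential(beads, masses, beta_n, potential, hbar=1.0):
    """Sketch of U_{N/2}: spring terms linking successive beads plus the
    PES evaluated at each bead.

    beads:  array of shape (N/2, dof), Cartesian bead coordinates
    masses: array of shape (dof,), mass associated with each degree of freedom
    """
    omega_n = 1.0 / (beta_n * hbar)            # ring-polymer spring frequency
    springs = 0.0
    for i in range(len(beads) - 1):
        d = beads[i + 1] - beads[i]
        springs += 0.5 * omega_n**2 * np.sum(masses * d**2)
    return springs + sum(potential(x) for x in beads)

# Example: two beads of a single 1-D particle on a harmonic PES V(x) = x^2/2.
beads = np.array([[0.0], [1.0]])
value = half_ring_polymer_potential(beads, np.array([1.0]), beta_n=1.0,
                                    potential=lambda x: 0.5 * x[0]**2)
print(value)  # 0.5 (spring) + 0.0 + 0.5 (potentials) = 1.0
```

An optimiser minimising this function over the bead coordinates, fed with its gradient, is what locates the instanton pathway in the scheme described above.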
Until relatively recently, the cost associated with the large basis sets traditionally required to reach the basis-set limit has prevented this high level of theory from being routinely used in quantum dynamics simulations, which typically require many thousands of energy evaluations. With the maturation of modern F12 explicitly correlated theory [44], near-basis-set-limit CCSD(T) energies can now be computed using a small (triple-zeta) orbital basis, at a cost only 15% larger than a traditional CCSD(T)/TZ calculation, and quantum dynamics studies can be performed using near-basis-set-limit CCSD(T) Born-Oppenheimer potential-energy surfaces on a routine basis. In F12 theory, the standard manifold of correlating orbitals |ab⟩ that parameterise two-body correlation functions in pair theories is supplemented with one geminal basis function per occupied orbital pair ij, chosen to directly model the Coulomb hole in the first-order pair correlation function. The correlation factor f (r 12 ) is chosen to be a linear combination of Gaussians [45] fit to an exponential function [46] with a length-scale of 1 a 0 , appropriate for valence electrons, and the many-electron integrals that arise due to the explicit dependence on the interelectronic distance r 12 and the presence of the strong-orthogonality projector Q are decomposed into one- and two-electron components by inserting approximate resolutions of the identity [47]. The coefficients t ab ij are optimised in the presence of fixed geminal contributions [48], to reduce geminal basis-set superposition error [49], with coefficients chosen to satisfy the first-order singlet and triplet cusp conditions [50]. Small but numerically expensive geminal contributions to the energy Lagrangian function are neglected if they rank higher than third order in perturbation theory [51], resulting in the CCSD(T)(F12*) approximation [39].
In this work we use the Molpro electronic-structure package [52] and are restricted to the slightly less accurate CCSD(T)-F12b approximation [53], in which geminal contributions from third-order ring diagrams are also neglected. Nevertheless, CCSD(T)-F12b energies computed in a TZ basis set are within 0.2 kJ/mol per valence electron of the CCSD(T) basis-set limit and retain the intrinsic accuracy of the wavefunction ansatz [54]. C. Gaussian Process Regression (GPR) Gaussian process regression is a machine-learning algorithm which can be used to efficiently generate complex hypersurfaces from limited data [38]. Recent work has applied this technique to constructing potential-energy surfaces [55-57] and determining minimum-energy paths [40,41] at a much lower computational cost. In this paper, a local representation of the PES is constructed around the instanton pathway and used to evaluate reaction rate constants. Before carrying out the construction of a local PES with GPR, we first note that we have to utilise an internal coordinate system that accounts for rotational invariance. We define this internal coordinate system as q = q(x), where x is a set of Cartesian coordinates. This transformation to a rotationally invariant coordinate system is defined in Section II D. In the simplest case, the training set consists of known values of the potential, V(q j ), at the M reference points {q 1 , . . ., q M }. This defines the column vector y with elements y j = V(q j ) − V 0 , where V 0 is an energy shift chosen such that the average of these elements is approximately zero. Noting that the derivative of a Gaussian process is also a Gaussian process [38,40,41], it is also possible to include gradients and Hessians in the training set, as described in Ref. 40.
The potential at an unknown point q* can be predicted from GPR as

V(q*) = Σ_{j=1}^{M} w_j k(q_j, q*) + V0,

where k(q_i, q_j) is a covariance function for the prior. We chose a squared-exponential covariance function with length-scale γ and a prefactor f:

k(q_i, q_j) = f exp(−|q_i − q_j|² / 2γ²).

The elements, w_j, of the vector, w, are determined by solving the linear equations

(K + σ² I) w = y,

where the covariance matrix is defined by K_ij = k(q_i, q_j). By differentiating Eq. 5, one obtains expressions for the gradient and Hessian of the PES. Because the covariance function is smooth, these derivatives are always well defined. σ is a noise term, which is introduced to avoid overfitting, and should be chosen to be the expected self-consistent error in the reference data. Together, f, σ and γ are known as hyperparameters. Their values can be optimised by maximising the log marginal likelihood,

log p(y) = −(1/2) yᵀ (K + σ²I)⁻¹ y − (1/2) log |K + σ²I| − (M/2) log 2π.

Alternatively, one can also optimise the hyperparameters through the minimisation of errors by cross-validation. 38

The method above allows us to construct a local PES from a training set of M points. In our implementation, the general idea is as follows. Firstly, we construct an approximate PES with GPR using a small number of points, and then optimise the ring polymer on this PES. After this, we refine the PES by adding new ab initio evaluations of points along the previously predicted ring-polymer configuration. Using the refined PES, we obtain a new ring-polymer configuration and compare it with the previous one to check whether the pathway has converged to the true instanton pathway. If it has not, the PES is refined again through the addition of ab initio evaluations; this is continued iteratively until convergence is achieved. The above-mentioned scheme is further elaborated in section III.
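A minimal sketch of the fit and of hyperparameter selection via the log marginal likelihood; this is one-dimensional, uses invented training data, and replaces a proper optimiser with a crude grid search over the length-scale:

```python
import numpy as np

def gpr_weights(Q, y, f=1.0, gamma=1.0, sigma=1e-4):
    # solve (K + sigma^2 I) w = y for the GPR weights
    Q = np.asarray(Q, float)
    K = f * np.exp(-(Q[:, None] - Q[None, :]) ** 2 / (2 * gamma ** 2))
    Ky = K + sigma ** 2 * np.eye(len(Q))
    return np.linalg.solve(Ky, y), Ky

def log_marginal_likelihood(Q, y, f=1.0, gamma=1.0, sigma=1e-4):
    # log p(y) = -1/2 y^T (K+s^2 I)^-1 y - 1/2 log|K+s^2 I| - M/2 log 2pi
    w, Ky = gpr_weights(Q, y, f, gamma, sigma)
    _, logdet = np.linalg.slogdet(Ky)
    return -0.5 * y @ w - 0.5 * logdet - 0.5 * len(y) * np.log(2 * np.pi)

# illustrative training data: a quadratic "potential" with the mean shifted
# to approximately zero, as done for the vector y in the text
Q = np.linspace(-1.0, 1.0, 9)
y = Q ** 2 - np.mean(Q ** 2)
gammas = [0.1, 0.3, 1.0, 3.0]
best_gamma = max(gammas, key=lambda g: log_marginal_likelihood(Q, y, gamma=g))

w, _ = gpr_weights(Q, y, gamma=best_gamma)
def predict(qs):
    return np.exp(-(Q - qs) ** 2 / (2 * best_gamma ** 2)) @ w
```

In practice all three hyperparameters (f, σ, γ) would be optimised together with a gradient-based optimiser rather than a grid.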
The general scheme described above is similar to that of the group of Jónsson, 40,41 wherein the minimum-energy path is obtained using a GPR-aided nudged elastic band (NEB) method. This appears to have been highly successful, effectively reducing the number of ab initio evaluations required by an order of magnitude in comparison with a conventional NEB calculation. In this paper, we intend to emulate this drastic reduction in computational effort for locating the instanton pathway and evaluating rates. As mentioned before, there are some differences in our implementation, such as the need for rotational invariance and accurate knowledge of the Hessians. We have found that the accuracy of the Hessians returned by GPR is significantly improved by explicitly providing Hessian data in the training set.

D. Non-redundant internal coordinate system

We would like to build the GPR representation of the PES using an internal coordinate system which is rotationally and translationally invariant. This is necessary as the relative rotational orientation of individual beads along the instanton pathway is not known a priori. However, we will need to be able to convert the information obtained from the GPR-based PES back into a Cartesian coordinate system in order to evaluate the instanton rate. Also note that the data available from electronic-structure packages are in Cartesian coordinates, which will need to be converted into internal coordinates in order to build the GPR-based PES. Representations which are additionally invariant under permutations of identical atoms have also been developed. 59-61 Such advanced approaches could also be applied to our problem. However, as we only need to fit the potential locally, this is an unnecessary complication, and we therefore choose here to neglect permutational symmetry. For our studies here, this is no inconvenience, as we only need to compute the instanton rate for one of the equivalent reaction pathways and multiply the rate by the degeneracy.
A simple translationally and rotationally invariant coordinate system for representing molecular geometries is provided by the n × n distance matrix, 62 defined as

D_ij = |r_i − r_j|,

where r_i are the three-dimensional Cartesian coordinates of atom i, such that r_1 = (x_1, x_2, x_3), r_2 = (x_4, x_5, x_6), etc. Although it is possible to convert data from a Cartesian coordinate system into this set, 63 the back transformation is not well defined, as the internal coordinates are redundant. In order to obtain a non-redundant set of internal coordinates, we follow the approach of Baker et al. 64 Firstly, we unravel the matrix D to give the coordinates as a vector d of length n². The B matrix is defined to describe how changes in the Cartesian coordinates affect these redundant coordinates,

dd = B dx.

The elements of this n² × 3n matrix are given explicitly by

B_(ij),(kα) = ∂D_ij/∂x_kα = (δ_ik − δ_jk)(r_iα − r_jα)/D_ij,

where α runs over the indices of three-dimensional space.

A square matrix, G = BBᵀ, is formed and then diagonalised to obtain the eigenvalues and eigenvectors. The non-redundant eigenvectors are those corresponding to the nonzero eigenvalues (of which there will be 3n − 6 for a nonlinear isolated molecule), whereas the redundant eigenvectors have zero eigenvalues. The non-redundant eigenvectors are collected into the columns of a matrix, U. With this, we can now transform d into a non-redundant coordinate system, defined by

q = Uᵀ d.

It is this internal coordinate system which is used to build the GPR representation. Note that the matrix U is built only once, at a reference geometry, and is used to define the transformation to q at all other geometries. The reference geometry used in our studies was the transition state, although this is not a requirement. The same U matrix is then used for new geometries to give a consistent definition of the internal coordinates q = q(x).
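The construction of the non-redundant coordinates, and the pseudo-inverse used to bring Cartesian gradients into them, can be sketched as follows; the B matrix is built here by finite differences purely for brevity (the analytic derivatives of the distances are straightforward):

```python
import numpy as np

def distance_coords(x):
    # unravel the n x n distance matrix D into a redundant vector d (length n^2)
    r = x.reshape(-1, 3)
    D = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=2)
    return D.ravel()

def b_matrix(x, h=1e-6):
    # B describes how Cartesian displacements change the redundant
    # coordinates, dd = B dx; central finite differences for simplicity
    d0 = distance_coords(x)
    B = np.empty((d0.size, x.size))
    for k in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[k] += h
        xm[k] -= h
        B[:, k] = (distance_coords(xp) - distance_coords(xm)) / (2 * h)
    return B

def nonredundant_U(B, tol=1e-6):
    # diagonalise G = B B^T and keep eigenvectors with nonzero eigenvalues;
    # there are 3n - 6 of them for a nonlinear molecule
    G = B @ B.T
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, vals > tol * vals.max()]

def internal_coords(x, U):
    # q = U^T d, with U fixed once at a reference geometry
    return U.T @ distance_coords(x)

def grad_to_internal(g_x, Bq):
    # Cartesian gradient into internal coordinates via the pseudo-inverse;
    # the back transformation is simply g_x = Bq^T g_q
    return np.linalg.pinv(Bq.T) @ g_x
```

Because the distances are exactly invariant under rotation and translation, so is q, which is the property required for fitting the beads without knowing their relative orientations.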
Therefore the required relationship between the internal coordinates and Cartesians is given by dq = B_q dx, where

B_q = Uᵀ B.

The gradient and Hessian in the non-redundant internal coordinate system are defined as g_q = ∂V/∂q and H_q = ∂²V/∂q∂qᵀ, and similarly, in Cartesian coordinates, g_x = ∂V/∂x and H_x = ∂²V/∂x∂xᵀ. Given a geometry x to define the appropriate orientation, gradients and Hessians obtained from the GPR in internal coordinates can be transformed back to Cartesian coordinates. Obtained using the chain rule, the transformations are

g_x = B_qᵀ g_q,
H_x = B_qᵀ H_q B_q + Σ_k (g_q)_k B'_k,

where B'_k = ∂²q_k/∂x∂xᵀ. In order to transform the gradients and Hessians obtained from electronic-structure calculations into the q coordinate system, these equations need to be inverted. However, as B_q is not a square matrix, we need to define the generalised inverse

B_q⁺ = B_qᵀ (B_q B_qᵀ)⁻¹.

The required transformations are then

g_q = (B_q⁺)ᵀ g_x,
H_q = (B_q⁺)ᵀ (H_x − Σ_k (g_q)_k B'_k) B_q⁺.

These equations define all the necessary transformations needed for converting the ab initio data into reduced coordinates, and for converting them back to a Cartesian system at a given orientation.

III. METHOD

Our aim is to reproduce the same result as an ab initio instanton calculation performed on the fly. As with those calculations, we must therefore consider convergence with respect to N. For our new approach based on GPR, we must also simultaneously converge the result with respect to the number of points in the training set.

Here we outline our standard protocol for computing converged instanton rates using GPR. This consists of two parts: the first, in which the instanton pathway is located, and the second, in which the fluctuation terms are converged to yield the final instanton rate. We have attempted to design this protocol to be stable and efficient. In our study, we found that it posed no significant problems for the systems tested here. In future studies, one could consider improvements which may increase the efficiency further. In a realistic working environment, a researcher has the freedom to add information to the GPR however they like until the result is converged.
Our protocol is designed for the case where single-point ab initio calculations are by far the most expensive part of the calculation. We also assume that Hessian calculations are orders of magnitude slower than potential or gradient evaluations. This is commonly the case for many electronic-structure methods, especially if the Hessians are computed using finite differences. The efficiency of our protocol should thus be measured in terms of the number of ab initio calculations required, and in particular the number of Hessians. We show these figures for specific examples in the next section.

The protocol described below is intended for the calculation of a single instanton rate at a given temperature, as is the approach used in the H + CH4 benchmarks we present in Sec. IV A. If, as is common, one needs the rate at multiple temperatures, it is recommended to start just below the crossover temperature, T_c. The optimised instanton can then be used as the initial guess and GPR training set for a calculation at a lower temperature. We use this more efficient approach for our H + C2H6 calculations in Sec. IV B.

A. Protocol

1. Optimise the reactants and transition state (using a standard quantum chemistry package), and obtain gradients and Hessians for the optimised geometries. The optimised transition-state geometry in Cartesian coordinates is denoted x‡.

2. By diagonalising the mass-weighted Hessian at the transition state, calculate the crossover temperature,

T_c = ħ ω_b / (2π k_B),

where ω_b is the magnitude of the imaginary frequency.

3. An initial guess for the instanton configuration is obtained by spreading the beads along the imaginary mode, 13

x_i = x‡ + Δ sin(2πi/N) z,

where z is the normalised non-mass-weighted eigenvector corresponding to the imaginary mode at the transition state and Δ is a user-defined spread of points. Typically we choose Δ ∼ 0.1 Å and N = 16 for an initial guess. However, if previous instanton optimisations at a higher temperature have been performed successfully, those configurations usually provide a better initial guess.

4.
Calculate ab initio potentials and gradients for the points obtained in step 3.

5. Repeat until convergence:
(a) Optimise the hyperparameters using the methods defined in section II C.
(b) Starting with a low number of beads N, locate the ring-polymer path, increasing N until the action S/ħ is converged to 2 decimal places.
(c) Check whether the mean bead displacement between successive iterations satisfies Δx = (1/N) Σ_i |x_i^new − x_i^old| ≤ P_C, where P_C corresponds to the path-convergence limit. Also check the convergence of the action.
• If this is satisfied, the ring-polymer path has converged. Continue to step 6.
• Otherwise, provide new inputs (ab initio energies and gradients) to the GPR training set along the current ring polymer (i.e. increase the number of training points M) and then go back to step 5a.

In step 6 (whose substeps are listed at the end of this document), the rate-convergence test is applied at each iteration:
• If it is satisfied, the iterative algorithm is terminated, and the current value of k is taken as the converged instanton rate.
• Otherwise, return to step 6a.

We have outlined the simplest protocol which has the desired properties of converging the instanton rate without needing a large number of ab initio calculations. However, it is not necessarily the optimal choice for all problems. In particular, it should be noted that in this work, new information is provided to the GPR training set at the positions of beads chosen by hand. This was done in a systematic way: during the path-convergence step, the beads were chosen such that they are evenly distributed along the current ring polymer. Once the path is converged, the beads at which Hessians are to be included were chosen in a similar manner (i.e. evenly distributed along the converged pathway). There may be better ways of providing new information to the GPR training data; for instance, one could evaluate the expected fitting error along the current pathway and then provide points in the areas with high variance. By being more selective, one can potentially further reduce the number of ab initio calculations required.

IV.
RESULTS

The method described above was applied to two systems: H + CH4 and H + C2H6. The first is a standard benchmark reaction for testing quantum rate theories and has been studied with various methods including MCTDH, 65,66 ring-polymer molecular dynamics, 67 quantum instanton, 68 as well as ring-polymer instanton theory. 12,15,19 The second reaction is beyond the current limits of exact quantum mechanics unless reduced-dimensionality models are used. Using the GPR formalism, we are able to present here a converged ab initio instanton rate for the first time. We compare these results with those predicted by other semiclassical methods.

A. H + CH4

An on-the-fly ab initio instanton calculation has previously been performed by one of us for this polyatomic reactive system. 15 Here, we use this reaction as a benchmark case for our GPR-aided instanton calculation and show that we are able to obtain the same result as the on-the-fly calculation with a significant reduction in the number of potentials, gradients and, most importantly, Hessians required.

The electronic-structure method used in Ref. 15 was RCCSD(T)-F12a/cc-pVTZ, and we use exactly the same method for the GPR training set. Note that in this paper, as well as in Ref. 15, this incomplete basis set is also used to define the energy of the isolated reactant H atom. The results in Ref. 15 were computed using N = 128, which we also use here. This required the calculation of 64 ab initio potentials and gradients per iteration of the instanton optimisation scheme. Because approximately 10 iteration steps are usually required for an instanton optimisation, about 640 gradients were computed, in addition to the 64 Hessians needed once the instanton had been optimised. To account for the indistinguishability of the H atoms, the instanton rate formula is multiplied by 4.
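Steps 2 and 3 of the protocol of Sec. III can be sketched directly; the crossover-temperature formula T_c = ħω_b/(2πk_B) is standard, while the sinusoidal form of the bead spreading is our reading of the initial-guess prescription, and the exact convention of Ref. 13 may differ:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J K^-1
C = 2.99792458e10        # speed of light in cm s^-1

def crossover_temperature(nu_cm):
    # T_c = hbar * omega_b / (2 pi k_B), with omega_b = 2 pi c * wavenumber
    omega_b = 2.0 * np.pi * C * nu_cm
    return HBAR * omega_b / (2.0 * np.pi * KB)

def initial_guess(x_ts, z, N=16, delta=0.1):
    # step 3: spread N beads about the transition-state geometry x_ts along
    # the normalised imaginary-mode vector z with user-defined spread delta
    return [x_ts + delta * np.sin(2.0 * np.pi * i / N) * z for i in range(N)]
```

With the cc-pVTZ-F12 imaginary wavenumber of 1469 cm−1 quoted in Sec. IV B for H + C2H6, this formula reproduces the ≈337 K crossover temperature reported there.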
We followed the protocol outlined in the previous section independently for three different temperatures. This allows us to accurately determine the computational effort required for a converged rate. In table I, the rows correspond to iterations of step 5 of the protocol, in which the pathway is optimised by adding more potentials and gradients to the GPR training set. The action is seen to converge to two decimal places after only a few iterations. Here, this was achieved with fewer than 50 potentials and gradients for all three temperatures, meaning that a reduction in the number of gradient evaluations by an order of magnitude has been achieved. This fast convergence is also illustrated in figure 2, where it is seen that, at the lowest temperature studied, the pathway already has the correct shape after the second iteration. In this figure, the potential along the pathway is plotted as a function of the cumulative mass-weighted path length,

l_i = Σ_{j<i} [Σ_k m_k |r_k^(j+1) − r_k^(j)|²]^(1/2),

where the sum over k runs over the atoms. It should be noted that the plots are shifted such that they are centred around l = 0. In table II, the GPR model is further refined, as described in step 6 of the protocol, by providing more observations (i.e. more ab initio potentials, gradients and Hessians) to the GPR training set. Our findings show that it is necessary to include a few Hessians directly in the GPR training set, and that the transition-state Hessian alone is not sufficient to describe the fluctuation terms. Note that at low temperatures, the GPR requires slightly more Hessians to converge the rate. This is because the instanton stretches out more at lower temperatures, so that it covers a larger area of the PES and the GPR therefore needs more information.
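The cumulative mass-weighted path length used for the abscissa of figure 2 can be computed as follows; centring l = 0 at the midpoint of the total arc length is an assumed convention here:

```python
import numpy as np

def mass_weighted_path_length(beads, masses):
    # cumulative mass-weighted path length along the instanton, shifted so
    # that l = 0 falls at the midpoint of the total arc length (the exact
    # centring convention of the figures is an assumption)
    w = np.sqrt(np.repeat(np.asarray(masses, float), 3))
    l = np.zeros(len(beads))
    for i in range(1, len(beads)):
        l[i] = l[i - 1] + np.linalg.norm(w * (beads[i] - beads[i - 1]))
    return l - 0.5 * l[-1]
```

Each bead is a flat Cartesian vector and masses holds one mass per atom, repeated over the three Cartesian components.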
The convergence is fast: no more than 6 Hessians are needed to converge the rates at all temperatures to within 2% of the on-the-fly ab initio result. This is a remarkable improvement in the computational effort required for an ab initio instanton calculation, as the Hessian calculations account for the bulk of it. Having reduced the number of Hessians required from 64 to 6, the reduction in computational cost allows us to investigate problems involving larger molecules and also to use higher-level electronic-structure methods.

B. H + C2H6

The H abstraction reaction from ethane follows the same mechanism as abstraction from methane. From a theoretical point of view, it is of interest because the number of degrees of freedom is significantly higher, such that full-dimensional exact quantum methods are not applicable and approximations must be made. There are two types of approximation which can be used to make the simulation tractable. One makes use of semiclassical dynamics, and the other involves reducing the dimensionality of the system. The instanton method is an example of the former, as are other semiclassical extensions of transition-state theory 25,69 and ring-polymer molecular dynamics. 70 Reduced-dimensionality models allow quantum scattering theory to be applied 71 and can also be combined with semiclassical approaches.

TABLE III. Barrier heights and imaginary frequencies for H + C2H6 using increasingly large basis sets at the UCCSD(T)-F12b level.

Method                      V‡ (kJ mol−1)   ω_b (cm−1)
UCCSD(T)-F12b/cc-pVDZ       54.57           1398
UCCSD(T)-F12b/cc-pVTZ       51.24           1461
UCCSD(T)-F12b/cc-pVTZ-F12   50.03           1469
UCCSD(T)-F12b/cc-pVQZ       50.46           -
UCCSD(T)-F12b/cc-pV5Z       50.07           -
25,26,72 Experimental results are available at 300 K, 73,74 but unfortunately not at lower temperatures, where the tunnelling effect is more important. Here, we compare the results of our instanton rate calculations with other theoretical calculations, and discuss the relative efficiency of the various methods.

Ab initio calculations

Owing to the efficiency of the GPR-aided instanton approach seen in our benchmark tests, we are able to use high-accuracy, computationally expensive electronic-structure methods. The method we choose is UCCSD(T)-F12b, as discussed in Sec. II B. Table III shows the predictions for barrier heights, V‡, and imaginary frequencies, ω_b, with increasingly large basis sets. Hessians with the cc-pVQZ and cc-pV5Z basis sets were not evaluated owing to the large amount of computational resources that would be required. However, we can see that cc-pVTZ-F12 reproduces almost the same barrier height as cc-pV5Z, in accordance with the study by Spackman et al., 75 which suggested that cc-pVnZ-F12 basis sets perform similarly to cc-pV(n+2)Z basis sets (where n = D, T, Q, etc.) when using CCSD(T)-F12. Hence, in the following calculations we use the cc-pVTZ-F12 basis set.

With our chosen method, the crossover temperature is predicted to be 337 K. We ran three instanton calculations, first at 300 K, then using this result as a starting point for a calculation at 250 K, and in the same way for 200 K. This approach may slightly reduce the number of iterations needed for convergence. For instance, it can be seen in figure 3 that the optimisation of the 200 K instanton is obtained in only a few iterations and that the path is almost correct even after the first. The convergence criteria used for this system were similar to those used for H + CH4.

The Cartesian representation of the optimised path for the H abstraction from ethane is shown in figure 4. It is seen that the mechanism is similar to that of H + CH4, shown in Ref.
15, in that the abstracted hydrogen does most of the tunnelling, accompanied by a small movement of its neighbouring hydrogens. The atoms on the far end of the ethane molecule hardly participate in the instanton at all. Note, however, that they still make a contribution to the fluctuations, and thus cannot be neglected. 76 The results of our GPR-based instanton calculations are presented in table IV. These rates account for the degeneracy of the reaction by multiplying the formula in Eq. 2 by a factor of 6. The 300 K result was obtained with a training set including 33 potentials and gradients, and 6 Hessians. Calculations at the lower temperatures of 250 K and 200 K each added a further 6 Hessians to the training set (i.e. at 250 K, the training data include 6 Hessians from 300 K and 6 Hessians from 250 K) in order to converge the rates. This represents a reduction in computational effort by an order of magnitude, similar to what was observed for H + CH4.

It is clear from our calculation that the tunnelling effect makes a large contribution to the rate, even at 300 K. This is consistent with experimental results at this temperature, which, in various setups, have been measured as 3.13 × 10−17 cm3 s−1 73 or 7.47 × 10−17 cm3 s−1, 74 both of which lie in the same order of magnitude as our prediction. Note that we expect the instanton approach to slightly overpredict the rate (by up to a factor of 2) at 300 K, as this temperature lies close to T_c. 77 Unfortunately, no experimental results are available for comparison at lower temperatures, where the tunnelling effect is predicted to increase dramatically. Table IV also compares our predicted rate with those of reduced-dimensionality quantum scattering (RD-QS) calculations by Horsten et al. 71 and a full-dimensional semiclassical transition-state theory (SCTST) rate calculation by Greene et al.
25 The RD-QS calculations utilised a similar electronic-structure method to ours, albeit with F12a rather than F12b, which gives a barrier height only 0.1 kJ mol−1 lower. The SCTST calculations employed the CCSD(T)/cc-pVTZ method for the energies at the stationary points, which gives a barrier height 0.4 kJ mol−1 lower. We expect these differences to lead to only minor deviations.

The instanton results are in quite close agreement with RD-QS, with rates differing by no more than 25%. This is what is typically expected when comparing results obtained with the instanton method and those obtained with exact quantum methods. 19 This confirms that, at least for this system, the reduced-dimensionality approach does not cause an appreciable error in the tunnelling effect.

There is a slightly larger discrepancy between the instanton and SCTST results, 25 which increases at lower temperatures. The SCTST rate calculation involved a total of 118 ab initio Hessians at the MP2/cc-pVTZ level, with energies at the stationary points evaluated with CCSD(T)/cc-pVTZ. The GPR-aided instanton method required only 6 Hessians to converge the rate at each temperature, and thus a high level of theory can be used for the Hessian calculations as well. There are two reasons for the discrepancy in the SCTST rates. One is that lower-level electronic-structure theory was used in Ref. 25 for the Hessian calculations. The second is that, at low temperatures, the instanton pathway stretches far from the transition state, and the PES can therefore not be well represented by a Taylor series around the transition state.

In this case, there are no dramatic differences between the theoretical predictions. It seems that the H + C2H6 reaction follows a simple pathway for which reduced-dimensionality models are applicable. However, we expect that for more complex reactions there will be larger discrepancies and that, in many cases, full-dimensional instanton theory will be the most accurate.
Results on a fitted PES

In order to assess the accuracy of the instanton approach for this reaction, we compare instanton rates with those of other semiclassical approaches based on the fitted, global CVBMM potential-energy surface. 69 This PES was constructed by dividing the system into a reactive part, treated with semiempirical valence bond theory, and a non-reactive part, treated with molecular mechanics. It was parametrised against density functional theory; more details can be found in Ref. 69. The barrier height obtained with the CVBMM PES is 47.90 kJ mol−1, with a predicted crossover temperature of 352 K.

Table V presents rates from three methods: instanton theory (this work), quantum instanton theory (QI) 78 and canonical variational TST with the small-curvature tunnelling correction (CVT/SCT). 69 The tunnelling factors are seen to be about a factor of 2 larger than those from the ab initio method, mainly because the CVBMM barrier is too narrow and thus overpredicts the tunnelling factors.

The CVT/SCT rate is in close agreement with that of instanton theory, which implies that, at least in this case, the dominant tunnelling pathway is well approximated by the minimum-energy pathway used by CVT/SCT. It is expected that, in general, for more complex reactions the instanton method, which defines the tunnelling pathway in a rigorous manner, will give a more accurate result.

Unlike the ring-polymer instanton approach, the QI method does not use a steepest-descent approximation and thus includes anharmonic vibrational effects in full dimensionality. In order to do this, it samples over a statistically large number of path-integral configurations and would therefore not be a practical computational method when combined with high-level ab initio potentials. Nonetheless, these anharmonic effects change the rate by less than 50% at the lowest temperature studied. This is in agreement with the findings of Ref.
78, which showed that, at low temperatures, only a small increase in the rate resulted from making a harmonic approximation to the internal rotation. This confirms that instanton theory gives a reliable prediction of the order of magnitude of the rate. The real advantage of the instanton approach over this method is that it can be applied to new reactions without needing to build a global PES at all.

V. CONCLUSIONS

We have demonstrated how ab initio instanton theory can be made efficient by using GPR to fit the PES locally around the dominant tunnelling path. This was demonstrated first using the H + CH4 reaction as a benchmark, for which we have shown that the number of electronic-structure calculations can be reduced by an order of magnitude while converging the rate to within 1% of the benchmark result. We then proceeded to evaluate instanton rates for H + C2H6, based on UCCSD(T)-F12b/cc-pVTZ-F12 electronic-structure calculations. Most importantly, the number of Hessians needed for each of these calculations is only about 6, which makes the method more efficient than full-dimensional SCTST calculations and almost as efficient as a classical TST calculation.

When studying a complex network of reactions, TST is commonly used to obtain rates for the many possible reaction steps. 79 By evaluating the crossover temperature for each step, it can easily be determined whether tunnelling is likely to play a role, and instanton calculations can be run for these steps only. As there are typically many more steps for which tunnelling is unimportant than steps for which it is important, the number of ab initio calculations needed for the instanton calculations would be small in comparison with the overall total. In this way, tunnelling can be rigorously accounted for without significantly increasing the computational effort.
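The screening strategy described above is simple to implement: compare each step's crossover temperature with the temperature of interest and flag only the steps for which tunnelling may matter. The step names and wavenumbers below are made up for illustration:

```python
# T_c = hbar*omega_b/(2 pi kB); with omega_b = 2 pi c nu this reduces to
# T_c = (hc/kB) * nu / (2 pi), using the second radiation constant hc/kB.
HC_OVER_KB_2PI = 1.4388 / (2 * 3.141592653589793)   # cm K

def needs_instanton(imag_wavenumbers_cm, T):
    # flag the steps whose crossover temperature lies above T, i.e. the
    # steps where deep tunnelling is expected at the temperature of interest
    tc = {step: HC_OVER_KB_2PI * nu for step, nu in imag_wavenumbers_cm.items()}
    return sorted(step for step, t in tc.items() if t > T)

# hypothetical network: only the abstraction step has a stiff imaginary mode
steps = {"H-abstraction": 1469.0, "low-barrier-step": 300.0}
flagged = needs_instanton(steps, T=250.0)
```

All unflagged steps can be treated with ordinary TST, so the extra ab initio cost of the instanton calculations stays small relative to the network as a whole.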
In this work, we suggested a simple protocol which, in our tests, posed no particular problems. We note, however, that it could still be improved in a number of ways that would further increase the efficiency. For instance, by using estimates of the GPR fitting error, we could select new points to be added to the training set in a more systematic way. These estimates could also be used to estimate the fitting error in the rate constant, in a similar way to what has been done for TST calculations. 80 Other techniques might allow us to reduce the number of high-level calculations by including low-level ab initio information in the GPR training set. One possibility would be to use this low-level information only for the initial iterations, to locate the region of space where the instanton is likely to exist on the high-level surface. The final iteration could then be done using only high-level information to ensure convergence to the correct result. However, one could also consider combining the high- and low-level information in the training set, as in the dual-level approach. 22 By using a larger value of the noise term for the low-level points, the GPR would fit itself accurately to the high-level points and use the low-level information only as a rough guide for the shape. Typically, the frequencies from low-level calculations are a good approximation even if the absolute energies are not, and so most Hessians could be derived from low-level calculations. One could imagine systematically converging to the correct result by adding more high-level ab initio points, such that the accuracy would not be compromised.
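The dual-level weighting described above amounts to a per-point (heteroscedastic) noise term in the GPR. A one-dimensional sketch with invented data, in which the low-level points are offset from the true surface but carry a large noise:

```python
import numpy as np

def dual_level_gpr(Q, y, sigmas, f=1.0, gamma=1.0):
    # GPR with a per-point noise level: low-level data enter with a large
    # sigma (rough guide only), high-level data with a small one
    Q = np.asarray(Q, float)
    K = f * np.exp(-(Q[:, None] - Q[None, :]) ** 2 / (2 * gamma ** 2))
    w = np.linalg.solve(K + np.diag(np.asarray(sigmas, float) ** 2), y)

    def predict(qs):
        return (f * np.exp(-(Q - qs) ** 2 / (2 * gamma ** 2))) @ w
    return predict

# invented data: high-level points follow V(q) = q^2 exactly; low-level
# points are systematically offset by +0.3 but carry a large noise
Q      = [-1.0, 0.0, 1.0, -0.5, 0.5]
y      = [ 1.0, 0.0, 1.0,  0.55, 0.55]
sigmas = [1e-6, 1e-6, 1e-6, 0.5,  0.5]
predict = dual_level_gpr(Q, y, sigmas)
```

Because of the large noise assigned to them, the offset low-level points only nudge the fit between the high-level points, while the fit passes essentially exactly through the high-level data.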
We have shown in this paper that we can converge the rate with respect to the number of ring-polymer beads, N, as well as with respect to the number of points included in the GPR training set. However, the accuracy of our method is still limited by the computational expense of electronic-structure methods, which are rarely possible to converge fully. Methods such as F12 have been very useful for increasing this efficiency, 44 as have linear-scaling methods 81 and the use of graphical processing units. 82 Nonetheless, we can say that we have expanded the range of systems which can be studied with ab initio instanton theory using high-level electronic-structure methods.

We did not find particularly large differences between the rate predictions of the instanton approach and those of other theories for the H + C2H6 reaction. This is due to the rather simple mechanism exhibited by the H abstraction reaction, which follows a pathway close to the minimum-energy path, making the CVT/SCT and reduced-dimensionality models valid. The advantage of instanton theory is that no a priori choice of reduced coordinates or tunnelling coordinate is made. This makes the approach applicable to more complex reactions as well as to tunnelling-splitting calculations. 35 In these cases, it is expected that the instanton path will deviate more strongly from the minimum-energy path, and full-dimensional instanton theory will be required to obtain an accurate prediction. The proof of principle outlined in this work for combining GPR with instanton theory will then be exploited in future studies of new reactions.

CONFLICTS OF INTEREST

There are no conflicts to declare.

Protocol step 6 (Sec. III A, continued):

6. Repeat until convergence:
(a) Provide a couple of new points along the converged instanton pathway to the GPR training set, this time also including Hessians.
(b) Optimise the hyperparameters using the methods defined in section II C.
(c) Locate the instanton pathway and calculate the rate, k, increasing N until this converges.
(d) Test whether |k_new − k_old|/k_new ≤ RC, where RC corresponds to the rate-convergence limit.

FIG. 2. Convergence of the ring-polymer instanton at 200 K for H + CH4. The initial GPR training set was defined by Eq. 23. The ring-polymer beads are plotted as a function of their potential energy and the path length, l, as defined by Eq. 24.

FIG. 3. Convergence of the ring-polymer instanton at 200 K for H + C2H6. The initial GPR training set was given by points along the 250 K instanton path. The path length, l, is defined by Eq. 24.

TABLE I. Convergence of the instanton path with iteration of protocol step 5. The number of potentials (V), gradients (G) and Hessians (H) included in the GPR training set is explicitly noted. Here the single Hessian in the training set corresponds to that of the transition state.

TABLE II. The rates obtained from the GPR-based instanton calculations as the information provided to the GPR training set is increased. The error is measured relative to the on-the-fly ab initio results of Ref. 15. Note that for the rate calculation, one further Hessian is needed at the reactant geometry, but this is not included in the GPR training set.

TABLE IV. Calculated rates (in cm3 s−1) for H + C2H6 obtained by the GPR-aided instanton method and other direct-dynamics methods. The tunnelling factor, κ_tun, is defined as the ratio between the instanton rate and Eyring TST.

TABLE V. Rate comparison between methods using the CVBMM PES. All rates are in cm3 s−1.
An Analysis on Students' Grammatical Errors in Writing Degrees of Comparison

Writing can be defined as one of the productive skills used to express ideas or thoughts in written form, and it serves as a tool to communicate with other people. According to Nunan (2003), writing is the mental work of inventing ideas, thinking about how to express them, and organizing them into statements and paragraphs that will be clear to a reader. Furthermore, Richards and Renandya (2002) state that writing is the most difficult skill for second- and foreign-language learners. They claim that writing involves not only generating and organizing ideas in our minds, but also translating these ideas into readable text. Therefore, grammatical ability is essential for students because it supports the mastery of writing skills. According to Swan (1998), grammar is the set of rules that describes how words change to show different meanings and how they combine into sentences. This means that grammar has a significant role in writing, as good writing has to contain good grammar. McKay (1987) also states that students need a good grammar foundation to communicate effectively in English. Grammar has many components, and the degree of comparison is one of them. The degree of comparison refers to adjectives written in different forms to compare one, two, or more nouns, that is, words describing people, places, and things. According to the 2013 Curriculum, the degree of comparison is included and taught in the second semester of the eighth grade of SMP/MTs. The degree of comparison may sound simple but actually involves various grammatical rules. Moreover, the degree of comparison is widely used in both speaking and writing. As second-language learners, students will inevitably make errors in their writing.
Hendrickson (1987) mentioned that errors are signals that indicate an actual learning process taking place and that the learner has not yet mastered or shown a well-structured competence in the target language. Dulay, Burt, & Krashen (1982) also stated that errors are a flawed side of a learner's speech or writing. In other words, an error can be defined as a form of language which deviates from its standard. Although errors are undesirable, they can bring some benefit, both for students, who will learn more from their own errors, and for teachers, because an error can tell how far the learner has progressed and what remains for the learner to learn. Saville-Troike (2006) said that learners' errors are windows into the language learners' minds, because learners' errors let teachers know about learners' language ability. An error has many near-synonyms in English, such as mistake, wrong, incorrect, and untrue. Even though these words are synonyms, in language learning they have different meanings, so it is important to differentiate them. The words wrong, incorrect, and untrue are simply used to describe something that is not correct or a learner who is not right about something. Meanwhile, errors and mistakes are frequently mentioned in the learning process. Richard (1985) stated that a mistake is made by a learner when it is caused by lack of attention, fatigue, carelessness, or other aspects of performance. On the other hand, an error is made by a learner when it is caused by a lack of ability in the target language. In this case, only errors are significant to the process of language learning. Moreover, it is important to investigate the errors that students make so that the students will be able to communicate effectively and so that learners are prevented from making the same errors again. 
According to Brown (1994), error analysis is a branch of applied linguistics in which the teacher can observe, analyze, and classify the errors that students make to reveal something of the system operating within the learner, which has led to a surge of studies of learners' errors. Dulay, Burt, & Krashen (1982) stated that studying students' errors serves two major purposes: providing data that can be used to determine the nature of the language learning process, and giving hints to teachers and curriculum developers about which types of errors prevent learners from communicating effectively. Error analysis is therefore important because it enables the teacher to find the source or cause of errors and the right treatment to decrease students' errors. Research Design This study is a mixed-method research project. According to Dornyei (2007), mixed-method research is a combination of qualitative and quantitative methods within a single research project. Likewise, Creswell & Plano Clark (2011) stated that mixed-method research is a methodology for conducting research that involves collecting, analyzing, and integrating quantitative and qualitative research in a single study or a longitudinal program of inquiry. The rationale for this form of research is that qualitative and quantitative data together provide a better understanding of a research problem or issue than either research approach alone. Setting and Participants This study took place at MTs Masmur Pekanbaru, which is located on Jl. Soekarno Hatta, Pekanbaru, Riau. The research data were collected and analyzed from November to January 2019. The population of this study is the third-grade students of MTs Masmur Pekanbaru in the academic year 2019/2020, which consists of 3 classes. Class A consists of 25 students, class B consists of 24 students, and class C consists of 25 students. 
To determine the sample of the research, a cluster sampling technique was used, and 25 third-grade students of MTs Masmur Pekanbaru were selected. Data Collection and Data Analysis The data of this research were obtained from written tests and semi-structured interviews. Nitko (1983) defined a test as a systematic procedure for observing and describing one or more characteristics of a person with the aid of either a numerical or a category system. In this research, the test consists of 20 questions in the form of cloze procedure and 10 questions in the form of essay. Meanwhile, the semi-structured interview is a qualitative data collection strategy in which the researcher asks informants a series of predetermined but open-ended questions (Given, 2008). The qualitative data from the interview and the quantitative data from the test were analyzed differently based on each instrument. In analyzing the qualitative data, the procedure of error analysis proposed by Corder in Ellis and Barkhuizen (2008) was used. The procedure consists of several stages. The first stage is collecting data. During this stage, the evidence that indicated grammatical errors was investigated in the students' writing. The students were given an hour to complete the written test, which consists of 20 items in the form of cloze procedure and ten items of essay. After all students finished, the test was collected in order to be checked. The second stage is the identification of errors. The identification of errors involved a comparison between what the learners produced and what a native speaker would produce in the same context. The writer identified the errors by circling the parts that deviated from the standard grammatical rules. The third stage is the classification of errors. In this stage, the number of errors was counted based on the error classification of Dulay, Burt, & Krashen (1982): omission, addition, misformation, and misordering. 
To get the quantitative result, the data were calculated using Microsoft Excel 2010. The fourth stage is the description of errors. During this stage, it was specified how the forms produced by the students differ from native speakers' counterparts. In short, this step aimed to describe why the words or sentences were identified as errors. The final stage is error evaluation. In this stage, the errors made by the students were reconstructed into the correct forms. One of the purposes of error analysis is to help students learn a second language. In this case, the correction of errors is given according to the nature and significance of the errors, especially errors that affect communication and cause misunderstanding. After analyzing the errors made by the students in writing degrees of comparison, the frequency of each error was presented in the form of tables and charts; finally, the conclusion was drawn and some recommendations were given. Meanwhile, the qualitative data were analyzed using the descriptive qualitative method proposed by Sugiyono (2008). In collecting the data, open-ended questions related to the topic, prepared beforehand, were asked to the students. The interview was recorded with a recording device, and the results of the interview were transcribed on paper. After the data were obtained, they were analyzed. There are three steps to analyze data in descriptive qualitative research: data reduction, data display, and conclusion drawing. First, the writer selected, identified, and focused on data that were considered important and gave valuable information. Next, the data were displayed in the form of narrative text in order to be easier to understand. Last, the writer drew a conclusion. Types of Errors It was found that four types of errors were committed by the students in writing degrees of comparison: omission, addition, misformation, and misordering. 
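The Excel tallying mentioned above, converting the per-category error counts into the percentages the study reports, can be sketched in a few lines; the function name is illustrative, and the counts used are the totals reported by the study:

```python
from collections import Counter

def error_percentages(counts):
    """Return the total number of errors and per-category percentages,
    rounded to two decimals as in the study's tables."""
    total = sum(counts.values())
    return total, {cat: round(100 * n / total, 2) for cat, n in counts.items()}

# Per-category counts reported for the 25 students' written tests.
counts = Counter(misformation=220, omission=97, addition=25, misordering=23)
total, pct = error_percentages(counts)  # total = 365; pct["misformation"] = 60.27
```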
The percentage of each error type can be seen in the table below:

Type of error    Frequency    Percentage
Misformation     220          60,27%
Omission         97           26,58%
Addition         25           6,85%
Misordering      23           6,30%
Total            365          100%

The table shows that the students made 365 errors in total in writing degrees of comparison, with misformation the most frequent: 220 errors of misformation (60,27%), 97 errors of omission (26,58%), 25 errors of addition (6,85%), and 23 errors of misordering (6,30%). Sources of Errors After identifying the data, this research also tried to explain the sources of the errors made by the students in writing degrees of comparison according to Brown's (1994) classification of sources of errors: interlingual transfer, intralingual transfer, communication strategy, and context of learning. In classifying the sources of errors, the writer also used the supporting data obtained from the semi-structured interview to strengthen the validity of this research. DISCUSSION Types of Error 1. Misformation Misformation is the most frequent type of error made by the students in writing degrees of comparison, with 220 errors (60,27%). Misformation errors are characterized by the use of the wrong form of a morpheme or structure (Dulay, Burt, & Krashen, 1982). This kind of error shows that the students are still having a hard time comprehending and applying English grammatical rules. Several cases of misformation errors are explained below: a. "Cheatting is badder than getting a bad mark." (committed by students 4, 5, 6, 9, 10, 15, 16, 17, 18, 19, 20, 21, 22, 24, & 25) In this sentence, the students wrote the comparative form badder instead of worse. This error occurred because the students ignored the degree of comparison rules for irregular words. Bad has an irregular form, and they used the regular form where the irregular form is required. b. "The purse is expensiver than the pencil case." (committed by students 11, 12, 13, 14, 22, & 23) In this sentence, the students wrote expensiver instead of more expensive. 
They ignored the grammatical rule of the degree of comparison that, to compare an adjective with two or more syllables, the determiner more is used instead of the suffix -er. c. "English is easyer than History." (committed by students 1, 11, 12, 14, & 17) In this sentence, the students wrote easyer instead of easier. It is wrong because of the grammatical rule of affixation: if a word ends in -y preceded by a consonant, the -y changes into -i before the suffix is added. So, the correct comparative form of easy is easier. d. "English is the easiest one, and history is the difficultest one." (committed by students 2, 3, 11, 12, 13, 22, 23) In this sentence, the students wrote the superlative form difficultest instead of most difficult. They ignored the grammatical rule of the degree of comparison that, to write the superlative form of an adjective with two or more syllables, the determiner most is used instead of the suffix -est. Omission An omission error occurs when the learner omits a necessary element of a word or sentence. In other words, this kind of error is characterized by the absence of an item that must appear in a well-formed utterance. This error shows that the students are still influenced by their first language. There are 97 omission errors (26,58%). 1. "The horse is biger than the donkey." (committed by students 2, 3, 4, 5, 11, 12, 14, 23) In this sentence, the students wrote biger instead of bigger. When a monosyllable adjective that ends in a single consonant (except w, x, and z) is changed into the comparative or superlative form, the final consonant is doubled before the affix. The students ignored this rule and omitted the double "g" in bigger. 2. "Stone is heaver than cotton." (committed by students 4, 5, 6, 8, 9, 10, 15, 16, 17, 19, 20, 21, 24, 25) In this sentence, the students wrote heaver instead of heavier. When an adjective that ends in -y is changed into the comparative or superlative form, the -y is changed into -i. 
The students omitted the -y instead of changing it into -i before adding the affix. 3. "The ocean ø deeper than the sea." (committed by students 3, 4, 5, 6, 8, 11, 12, 13, 14, 16, 20, 21, 22) In this sentence, the students omitted the verb be (is). A comparative adjective can be placed after the verb to be to make a proper sentence. The suggested correction is "The ocean is deeper than the sea". 4. "Gold is more expensive ø Silver." (committed by student 22) In this sentence, the student omitted the preposition "than". In a comparative sentence, than must be placed after the adjective. The suggested correction is "Gold is more expensive than Silver". 3. Misordering In this research, the writer found 23 errors of misordering, which is 6,30% of all errors. A misordering error occurs when the student puts the elements of an utterance in the wrong order. Several cases of misordering are explained below: a. "The purse is expensive more than the pencil case." (committed by students 2, 3) The students placed more after the adjective. More should have been placed before the adjective to form the comparative. The suggested correction is "The purse is more expensive than the pencil case". b. "The ocean deeper is than the sea." (committed by students 11, 13, 16, 20, 21, 22) The students wrote deeper is instead of is deeper. In an English sentence, the adjective is placed after be. The suggested correction is "The ocean is deeper than the sea". 4. Addition An addition error occurs when the students add an unnecessary element to their sentences. In this research, the writer found 25 errors of this kind, which is 6,85% of all errors. The writer found that the students tended to fail to delete certain items when needed, which leads to double marking. Several cases of addition are explained below: a. "The city is wideer than the town." (committed by students 1, 3) The students wrote "wideer" instead of "wider". 
When a monosyllable adjective that ends with "e" is changed into the comparative or superlative form, only -r or -st is added, so the double "e" must be omitted. b. "The pencil case is cheapper than the purse." (committed by students 6, 8, 9, 10, 15, 16, 17, 18, 19, 20, 21, 25) The students wrote "cheapper" instead of "cheaper". They added an unnecessary element to their sentence that is not applied in the target language. Sources of Error 1. Intralingual Transfer Intralingual transfer is the most frequent source of errors made by the students in writing degrees of comparison, with 226 errors (61,39%). When students acquire a new rule of a language, they must store it in their mind. However, if they fail to apply the rule due to their lack of ability in the target language, an error occurs; this kind of error belongs to intralingual transfer. For example, in "Stone is heavyer than cotton", the students who wrote this sentence ignored the grammatical rule of affixation: if a word ends in -y preceded by a consonant, the -y changes into -i before the suffix is added. Moreover, based on the interview, the students stated that degrees of comparison are hard to understand, both in usage and in form. Student 1 stated, "I personally think that this material (degrees of comparison) is hard to understand because it has different rules for different words, I can't remember all the rules". In fact, there are many forms of degrees of comparison: adjectives with one syllable, two syllables, and more than two syllables, and irregular forms of some adjectives, each with different rules. Meanwhile, concerning the usage of degrees of comparison, student 4 stated: "I'm still having hard time in differentiating comparative degree and superlative degree, it's because they almost look alike to each other". 
Moreover, student 2 also stated: "I think this material (degrees of comparison) is hard for me and all of my classmate, there may be only one or two smartest kid in the class that fully understands degrees of comparison". Based on the students' statements, the third-grade students of MTs Masmur Pekanbaru find the rules of degrees of comparison difficult to understand. Communication Strategy Communication strategy is related to learning style. Learners usually make an effort to get their message across, but sometimes this results in errors. The writer found that 17,82% of the errors were caused by communication strategy. For example, many students wrote "Cheating is badder than getting a bad mark". These students applied the wrong rule for an irregular adjective. According to student 2's statement, "I thought that adding -er or -more to construct a comparative form is applied to all words, I didn't remember about irregular or regular rules", the students understood that adjectives change form to construct a comparative sentence, but they learned only one rule and applied it to all adjectives, including those with different rules. Based on the interview, when asked about their learning style in writing degrees of comparison, student 7 stated, "When I was given a task to write a degree of comparison sentence, I usually just follow the examples or the pattern provided in the book". This statement explains that he/she tends to look at examples of degrees of comparison sentences and follow the pattern, ignoring the different rules for different adjectives. In line with this, student 4 stated: "Because this material (degrees of comparison) has many rules, I have a hard time in memorizing all of it. So I need English book when writing degrees of comparison". The writer concluded that the students' learning styles, such as memorizing and following a single example sentence, influence the students' writing and become a source of errors. 3. 
Interlingual Transfer Another cause of errors is interlingual transfer. It is natural that second language learners make errors due to interference from their mother language. In this research, the writer found that the students are still influenced by their mother language when writing sentences using degrees of comparison. There are 14,85% of errors caused by interlingual transfer. For example, student 5 wrote on her/his paper "The pencil case is cheap than the purse". In Indonesian, "than" means "daripada", so after the interview, it was found that the student thought that the sentence was already a comparison sentence without changing the adjective into the comparative form. Furthermore, even though the students know that comparison in English and in Indonesian is different, they are still confused. Student 7 stated, "When I am told to write in English, I write the sentence in Indonesian first and then translate it to English. I usually use English-Indonesian dictionary when I face difficulty". This statement shows that he/she is heavily influenced by his/her mother language. Likewise, student 8 stated, "Writing in English is difficult, because I barely know any vocabulary in English. I need the dictionary or English books to write". This statement supports the conclusion that the third-grade students of MTs Masmur Pekanbaru are still influenced by their mother language in writing degrees of comparison. 4. Context of Learning Context of learning is the source of errors caused by the learners' misinterpretation of the teacher's explanation or the textbook, or by inappropriate contextualization of a pattern. The writer found that 5,94% of the errors were caused by context of learning. For example, student 2 wrote "The purse is expensive more than the pencil case". Later, in the interview, she/he stated, "I've been told that more than one syllable adjective is using more instead of -er. But I don't really understand about the placement". 
Furthermore, based on the interview, the students stated that their teacher had explained the material about degrees of comparison, both the usage and the form, but the teacher's way of delivering it was so plain that they did not become interested in the material. Student 8 stated, "The teacher has taught the material clearly. But if I can be honest, the explanation about the material (degrees of comparison) is too plain and boring. It's hard to concentrate and understand when I got no motivation". The student admitted that her/his motivation in learning English is low, so even though the teacher taught them clearly, they still found English, and especially degrees of comparison, boring and difficult to understand. According to the findings and data analysis, there are 365 errors made by the students in writing degrees of comparison, divided into four types of error: misformation (220 errors or 60,27%), omission (97 errors or 26,58%), misordering (23 errors or 6,30%), and addition (25 errors or 6,85%). Furthermore, there are also 4 sources of errors: intralingual transfer (226 errors or 61,39%), communication strategy (64 errors or 17,82%), interlingual transfer (54 errors or 14,85%), and context of learning (21 errors or 5,94%). It can be understood that the students still made many errors in writing degrees of comparison. In general, the study's findings are similar to those of some other studies on the same topic, such as Mirnanda (2014), who found that the dominant, or highest-rate, type of error made by students in writing degrees of comparison is misformation, with 71,75%. Moreover, Tambunan & Nababan (2018), who focused on students' errors in writing degrees of comparison of adjectives, also found that the highest rate among all error types belongs to misformation, with 43,28%. 
It can be concluded that misformation is the dominant type of error in students' writing, and it shows that students are still having a hard time comprehending and applying English grammatical rules in their writing. However, there are also differences between the findings of this study and other studies on the same topic, such as Hidayatullah (2015), who found that the dominant source of errors made by students in writing degrees of comparison is communication strategy, with 42,63%, which occurred due to students' strategies for overcoming the communication problems they encountered in writing second language sentences. Meanwhile, in this study, the dominant source of errors is intralingual transfer, with 61,39%, which was caused by the students' lack of mastery of the target language's grammatical rules. CONCLUSIONS In Indonesia, according to Curriculum 2013, middle-school students are expected to be able to write degrees of comparison sentences. However, their English ability, especially in writing, is still below expectations. This research aimed to analyze the grammatical errors made by the third-grade students of MTs Masmur Pekanbaru in writing degrees of comparison. Based on the data provided in the findings, it can be concluded that the third-grade students of MTs Masmur Pekanbaru still made many errors in writing degrees of comparison. A total of 365 errors were found in the students' writing of degrees of comparison. The errors were classified into four categories: misformation with 220 errors (60,27%), omission with 97 errors (26,58%), misordering with 23 errors (6,30%), and addition with 25 errors (6,85%). Generally, such errors occurred due to four types of sources of errors: intralingual transfer with 226 errors (61,39%), communication strategy with 64 errors (17,82%), interlingual transfer with 54 errors (14,85%), and context of learning with 21 errors (5,94%).
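The formation rules discussed throughout the findings (irregular forms, more for longer adjectives, y → i, the silent-e case, and final-consonant doubling) can be collected into one naive sketch. The function and its caller-supplied syllable count are illustrative assumptions, not part of the study:

```python
IRREGULAR = {"bad": "worse", "good": "better", "far": "farther"}
VOWELS = "aeiou"

def comparative(adj, syllables):
    """Naive comparative-degree formation for an English adjective.

    Syllable counting is out of scope, so the caller supplies it.
    Superlatives follow the same rules with most / -est / -iest.
    """
    if adj in IRREGULAR:
        return IRREGULAR[adj]                 # bad -> worse (irregular form)
    if syllables >= 2 and not adj.endswith("y"):
        return "more " + adj                  # expensive -> more expensive
    if adj.endswith("y") and adj[-2] not in VOWELS:
        return adj[:-1] + "ier"               # easy -> easier, heavy -> heavier
    if adj.endswith("e"):
        return adj + "r"                      # wide -> wider (no "wideer")
    if (len(adj) >= 3 and adj[-1] not in VOWELS + "wxz"
            and adj[-2] in VOWELS and adj[-3] not in VOWELS):
        return adj + adj[-1] + "er"           # big -> bigger (double the consonant)
    return adj + "er"                         # cheap -> cheaper, deep -> deeper
```

The consonant-doubling test is a rough heuristic, but it covers every example cited in the findings above.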
5,510.4
2021-03-05T00:00:00.000
[ "Education", "Linguistics" ]
Language and culture in speech-language and hearing professions in South Africa: Re-imagining practice South African speech-language and hearing (SLH) professions are facing significant challenges in the provision of clinical services to patients from a context that is culturally and linguistically diverse (CLD) due to historic exclusions in higher education training programmes. Over 20 years postapartheid, little has changed in training, research, as well as clinical service provision in these professions. In line with the Health Professions Council of South Africa’s (HPCSA) SLH Professional Board’s quest to transform SLH curriculum and in adherence to its recently published Guidelines for Practice in a CLD South Africa, in this review article, the authors deliberate on re-imagining practice within the African context. They do this within a known demand versus capacity challenge, as well as an existing clinician versus patients CLD incongruence, where even the clinical educators, a majority of whom are not African, are facing the challenge of an ever more diverse student cohort. The authors systematically deliberate on this in undergraduate clinical curriculum, challenging the professions to interrogate their clinical orientation with respect to African contextual relevance and contextual responsiveness (and responsibility); identifying gaps within clinical training and training platforms; highlighting the influencing factors with regard to the provision of linguistically and culturally appropriate SLH clinical training services and, lastly, making recommendations about what needs to happen. The Afrocentric Batho Pele principles, framed around the concept of ubuntu, which guide clinical intervention within the South African Healthcare sector, frame the deliberations in this article. 
Introduction Seabi, Seedat, Khoza-Shangase and Sullivan (2014) explored how speech-language and hearing (SLH), psychology and social work students experience living, learning and teaching in higher education in South Africa, revealing that there was a general feeling of dissatisfaction with the lack of transformation of the curriculum in these fields. In this study, factors negatively impacting the learning process were found to include high workload, English as a medium of instruction and limited access to 'other' resources (epistemological freedom). This study asserts that this limited access to epistemological freedom remains the prevailing condition within the SLH programmes in South African universities because of the consequences of the historic exclusion of black and African language-speaking candidates from South African higher education training programmes. This is because the epistemological frames that enjoy purchase in South African universities are '... largely still based on a Eurocentric, Western epistemology with minimal, if any, inclusion of Afrocentric epistemology and ontology' (Khoza-Shangase & Mophosho, 2018, p. 3). The current authors published an article entitled 'Language and culture in SLH professions in South Africa: The dangers of a single story (2018)', framed around Novelist Chimamanda Ngozi Adichie's (2009) TED talk 'The danger of a single story'. This review article is a continuation of the process of confronting the dangers of 'a single story' started in 2018 by the current authors, which is a deliberate process for epistemic freedom and disobedience in the curriculum of these professions. In a study of South African medical students by Matthews and Van Wyk (2018), findings indicated a strong desire for cultural competence in the curriculum to improve delivery of services and provide support to culturally and linguistically diverse (CLD) groups. 
Factors in epistemological access and language of instruction have been presented by the current authors in detail elsewhere (Khoza-Shangase & Mophosho, 2018), cautioning against a 'single story' in SLH professions. 
The Health Professions Council of South Africa's (HPCSA) Board for SLH professions recently produced Guidelines for Practice in a Culturally and Linguistically Diverse South Africa, where one of its key recommendations is that training institutions ought to (HPCSA, 2019): [U]se findings from locally relevant research to inform curricula; acknowledge and interrogate colonial influence in all its forms (power, being and knowledge) and move towards a repositioning of the professions to better serve all South Africans. (p. 15) These guidelines are a major milestone in the tangible steps that the council has taken to ensure that clinical service provision within the South African context is Afrocentric and CLD competent. However, measures to ensure that the guidelines are adhered to by the South African SLH professionals are still not in place, but active enforcement of the guidelines by employers, professional associations through Continued Professional Development (CPD) initiatives and the council in its annual CPD compliance audits of practitioners can facilitate compliance. Speech-language and hearing practice in the culturally and linguistically diverse South African context Khoza-Shangase and Mophosho (2018) suggested recommendations for the South African context that as part of transformation and decolonisation: (1) the SLH professions need to actively engage with the national calls to Africanise institutions and clinical service delivery, (2) clinical care changes that are contextually relevant and responsive, (3) research focus on local needs and issues for local benefit, (4) clinical focus that will allow for 'next practice' and not just 'best practice', (5) language policy that respects that many people speak several languages and so teaching and learning (and clinical service provision) in only English and/or Afrikaans creates challenges, which need addressing and so on (p. 6). 
These arguments are anchored in the Constitution of the Republic of South Africa (1996) and are supported by evidence from Statistics South Africa (2018), which estimates the country's population to be 57.7 million. This population consists of a diversity of cultures, languages, religions, nationalities and ethnicities, with 80.2% of people being black Africans, a majority of whom speak isiZulu (23.8%) as a home language, with English being spoken as a home language by only 9.6% of the population. This evidence highlights the need for re-visiting and re-imagining practice in SLH to come in line with the lived context to ensure positive patient outcomes. This would also be in line with the Universal Declaration of Human Rights (United Nations, 1948), the 2003 National Language Policy Framework and the 2012 Use of Official Languages Act as outlined by the South African Department of Arts and Culture (2003). The Department of Health (DoH) standards for accreditation in undergraduate programmes in SLH state that curriculum for academic and clinical education ought to 'ensure that provision of services to clients or patients is not compromised where the clinician does not speak the client's or patient's language' (DoH, 2014, p. 7). Furthermore, on the African continent, it behoves local professions to move away from Euro-centric thinking and embrace decolonial practices. Decolonisation entails 'deconstruction and reconstruction', that is (Ndlovu-Gatsheni, 2018): [D]estroying what has wrongly been written, for instance, interrogating distortions of people's life experiences, negative labelling, deficit theorising, genetically deficient or culturally deficient models that pathologizes [sic] the colonized [sic] other, and retelling the stories of the past and envisioning the future. (p. 38) The intellectual landscape of SLH programmes in training institutions could adopt the ubuntu worldview. This worldview, '... 
recognises the importance of others, of history, of context and community in the formation of one's identity and the interdependent relations between individuals and collectives' (Oelofsen, 2015, p. 141). Sufficient evidence exists to support the positive outcomes linked to health interventions conceived and implemented with careful cognisance of patients' language and culture. Matthews and Van Wyk (2018) asserted that integrating culture and language in teaching and learning may be a facilitating factor in developing cultural competence for medical students and improving healthcare and outcomes in the South African context. This is supported by evidence on global health that indigenous groups worldwide tend to have worse health outcomes than corresponding non-indigenous populations (Anderson et al., 2016). Flood and Rohloff (2018) stated that global health interventions planned and delivered using indigenous languages are likely to be more successful; that groups that are not part of the dominant culture have less positive health outcomes than the dominant communities, with language presenting as a barrier to healthcare delivery; and that real indigenous language sensitivity includes not only the use of interpreters and translators, but also the creation of language-oriented programmes from the initial stages. Sue and Sue (2003) maintained that patients seen by healthcare providers whose racial and linguistic background matches their own continue to attend their treatment sessions and maintain treatment plans for a longer duration; this speaks directly to adherence, which is an important factor in SLH intervention success. Kathard and Pillay (2013, p. 85) recommended that speech-language therapists acquire political consciousness, which implies that they '... be tolerant and accepting of being disrupted in order to challenge and change historical, taken-for-granted practices'.
In this regard, Pillay and Kathard (2018) recommended using the Equitable Population-based Innovations for Communication (EPIC) framework, which aims to decolonise the SLH profession by developing services that are equitable and that meet the needs of underserved populations. Sue and Sue (2003) advocated for racial-cultural competence, which obligates therapists to be able to interpret not only the knowledge that they have about their client's culture, but also the ramifications of race. Consequently, acquiring a raised awareness of the racial aspects of diversity is important, because diversity is not only about culture, gender, socio-economic status, disability or language but also includes race. Frequently, white individuals regard racial-cultural differences (e.g. the expressiveness of verbal and non-verbal communication of African people) as a form of deviance (Sue & Sue, 2003), and this misinterpretation has serious ramifications for professions dealing with communication disorders. Clinicians ought to be cautious about these misconceptions, as they may lead to misdiagnosis. Oelofsen (2015) advised white academics in South Africa, such as herself, to move away from their identity of advantage and embrace a hybrid identity that incorporates values and concepts from traditional African thought. She advocates that white academics ought to learn an African language, with this forming part of performance appraisals. Universities ought not to stay silent when injustice unfolds; they are instead obliged to speak up, talk back and push the boundaries to become relevant to the context (Bell, Canham, Dutta, & Fernández, 2019). If university personnel adopt such strategies, they facilitate the development of multicultural competence within their spaces, which leads to lifelong practice outside.
It is important to note that for multicultural competence to become lifelong practice, it is essential that it be developed initially as part of clinical training (Falender, Shafranske, & Falicov, 2014b). The SLH professions, both in South Africa and more globally, must accept that sensitivity to their patients' cultural and linguistic context, in the form of critical diversity literacy, is pivotal. The ultimate point of achieving critical diversity literacy is being conscientised at a cognitive and affective level. The SLH professions in South Africa need to be conscientised to the fact that the burden of disease and structural inequalities affect the poor, who, because of apartheid oppression, are mostly black people. These constitute the majority of patients in public hospitals (Kathard & Pillay, 2013). As asserted by Freire (1998), conscientisation requires both reflection and action, with action being fundamental in dismantling the status quo and key to the SLH professions in South Africa. Stein (2010) reiterated that the symbolic and material values of hegemonic identities do become established in social relations. Healthcare professionals need to be able to identify the ways in which issues such as whiteness, masculinity, heterosexuality, able-bodiedness and middle-classness are accepted as the norm and are reproduced in contexts such as healthcare. Furthermore, it is highly challenging to change mindsets in society and shift the status quo; ethnic disparities in obtaining medical care are a good example in the South African context. As part of its mandate to 'protect the public' and 'guide the professions', in its concerted and sustained effort to transform the SLH professions in South Africa, the HPCSA (2019) SLH Professional Board released Guidelines for Practice in a Culturally and Linguistically Diverse South Africa. These guidelines propose five main principles to be adhered to by the SLH professions.
Principle 1: contextual relevance as an overarching philosophy for more relevant practice that will lead to more effective management of communication difficulties (taking cognisance of the processes and protocols adopted in clinical practice);
Principle 2: focus on assessment and intervention (practising epistemic disobedience in terms of what knowledge and which knower is the respected source);
Principle 3: the importance of local knowledge and a call for a shift in how the profession values it (Khunou, Canham, Khoza-Shangase, & Phaswana, 2019) (examining the 'how' of clinical training within the South African context);
Principle 4: focus on clinical training (remaining vigilant in the continued professional development of critical consciousness);
Principle 5: lifelong development of critical consciousness.

The current authors believe that these principles, if adhered to and practised, would not only lead to positive SLH service outcomes but would also contribute significantly towards the transformation goals of the SLH programmes to make them more Afrocentric. This would, therefore, require enforcement and accountability, as the voluntary, optional 'goodness of the heart' approach does not seem to have yielded tangible outcomes more than two decades into the democratic dispensation. This article is a clarion call to the HPCSA SLH Board, the Departments of Health and Education (as the largest employers) and SLH training institutions to ensure that translation of these principles into practice occurs.
On the importance of epistemological and language diversity access, for example, Southwood and Van Dulm (2015) argued that even though more speech-language therapists who can work in African languages are entering the profession now than before, the lack of access will continue to make it difficult to distinguish children who are speakers of African languages and who are typically developing but still in the process of acquiring English (or Afrikaans) from those who have an underlying language impairment. They lament that this is because such a differential diagnosis requires knowledge of the status of the child's language skills in his or her first language, to which most clinicians are not privy. They therefore conclude, as do the current authors, that recruiting first or other language speakers of African languages into SLH training programmes promises to change levels of service delivery more than specialised training or maturation as an SLH practitioner. These authors, as argued by Khoza-Shangase and Mophosho (2018), suggested that such recruitment and the appropriate training of African language speakers in the SLH professions ought to be performed concurrently with the development of linguistically and culturally appropriate assessment and remediation materials if efficacious, contextually relevant, research evidence-based assessment and intervention is to be provided (Southwood & Van Dulm, 2015). Pascoe and Smouse (2012) also argued that: [C]linicians within the SLT profession have an ethical responsibility to effectively assess and manage their clients in the client's first language, even where a language mismatch between client and clinician exists. (p. 471)

Gaps in the South African speech-language and hearing clinic

The South African context shows a significant mismatch between SLH practitioners and the population they serve with regard to language (English and/or Afrikaans) and culture (mostly Western).
This incongruence creates a barrier to the provision of efficacious clinical services and prevents growth and development of the professions. With only approximately 5% of HPCSA-registered practitioners being black African language speakers, in a country where over 70% of the population speaks an African language as their first language, an obvious gap exists. This gap is not localised to clinical service provision by graduates but extends to the staffing profile of academics and clinical supervisors and/or educators in universities and clinical training platforms. This reality means that even African language-speaking students who enrol in South African training programmes receive training that is not 'Africanised'. Consequently, they do not necessarily graduate in a better professional position than non-African language speakers. This is well described by Letsoalo and Pero (2020), who pointed to the white gaze and its persistent influence over the curriculum. If the demographic profile of teaching and research staff is not diversified to reflect the contextual profile, there is limited, if any, epistemological access, which in turn influences the beliefs held and approaches adopted about communication disorders and their assessment and management. This extends to the development of assessment and intervention resources that are linguistically and culturally appropriate, over and above the translation and adaptation of resources, which has its own limitations and weaknesses in this context. Given the very nature of the SLH professions, which assess and treat language and communication pathology in patients, this gap cannot continue to be ignored (Khoza-Shangase & Mophosho, 2018; Mophosho, 2018; Mdlalo et al., 2019; Pascoe et al., 2017).
In fact, Khoza-Shangase (2019) presented a tripartite threat to the helping (SLH) professions in South Africa (Figure 1) as including ignorance and naivety about the influences of linguistic and cultural diversity, over and above contextual realities such as the burden of disease, resource constraints, poverty and inequality. Moreover, it is pivotal for clinical educators to acknowledge that their primary goal is to facilitate: '... thoughtful, systematic reflection about the contextual and multicultural factors affecting the clinical and supervisory relationships' (Falender et al., 2014b, p. 60).

Influencing factors in the provision of linguistically and culturally appropriate speech-language and hearing clinical training services

The current authors believe that a key challenge in achieving linguistically and culturally appropriate SLH clinical services in South Africa is a knowledge base, theory and research evidence (academic curriculum) that fail to reflect diverse South African realities. This study is grounded in postcolonial theory, which addresses the current political, aesthetic, economic, historical and social impact of European colonial rule around the world (Elam, 2019). 2 Although postcolonial theory takes many different shapes and interventions, the authors support Elam's (2019) assertion that all the variations incorporate an important contention, namely that our understanding of the world is unfeasible outside of relating it to the history of imperialism and colonial rule, in terms of Europe's colonial encounters and oppression around the world. This is argued here to be the case in SLH within the African context.
These influencing factors within the clinical training and training platforms in the South African context, as reflected in Figure 2, are further impacted by several other factors, which can be grouped into five aspects: people, places, practices, processes and policies, guided by the National Center for Culturally Responsive Educational Systems and invitational education theory (Purkey, 1999), which the current authors adapted for the current review article.

In terms of people, the current cohort of students, including clinical supervisors and/or educators, as well as the current combination of clinicians and patients, are mismatched both culturally and linguistically. The current cohort of clinical educators is not reflective of the demographic profile of the country, and consequently their linguistic and cultural competence and awareness of the context are limited. Furthermore, training in African cultural and linguistic knowledge, for example cultural humility, is seen to be generally lacking in the SLH curriculum. Limited tuition in courses such as African Anthropology and Sociology, over and above African languages, has been observed in all training programmes. These weaknesses exist in the context of a student body that remains largely non-diverse linguistically and culturally, with small numbers of African students assimilating into a distinctly non-African curriculum.

Firstly, as far as practices as an influencing factor are concerned, the highly structured curricula of the various programmes, in line with the HPCSA regulations, seem to operate to the exclusion of CLD, unless there is a specific staff member with an interest. This leads to optional input in different programmes and unequal access and competency for students from different institutions.

2. Other anti-colonial positions, such as African Humanism, black Marxism and social theory, also pertain.
Secondly, the current authors argue that there is generally limited critical consciousness in clinical practice, with supervisors and/or educators not being aware of the influence of factors such as power and cultural capital in clinical provision and clinical training. We need to recognise that power and culture intersect within communicative interactions. According to Hyter and Salas-Provance (2019), power can be exerted through different measures, such as structural violence, physical violence, symbolic violence, manufactured consent and the organisation of work. Speech-language and hearing practices ought to avoid all these power differentials, especially symbolic violence, where the benefits of one group depend on the disadvantage or deprivation of another group. There is a need for programmes to deliberate on culturally responsive pedagogy and practice, with training provided as part of clinical educators' training. Thirdly, there is unabated use of CLD-inappropriate resources, where it is deemed acceptable to simply translate resources without following structured protocols or considering the adaptations and expertise of translators. Barratt, Khoza-Shangase and Msimang (2012) stated that translating and adapting a test are critical processes that need to consider the personal characteristics of the individuals translating the test, which will influence how the test is translated. These authors caution that, for example, inappropriate use of vocabulary or sentence structure may dilute the complexity of the translated material. The current authors feel strongly that translations and adaptations ought to be performed by expert speakers of the target language, preferably from an academic linguistics department, who would understand the importance of maintaining the validity and usefulness of the test measure.
Furthermore, current authors recommend that where translations and adaptations are performed, specific documented protocols ought to be followed, over and above simply translating and piloting the materials. Specifically, protocols followed ought to include translating and adapting the measure, reviewing the translated or adapted version of instrument, adapting the draft instrument based on the comments of the reviewers, pilot testing the instrument, field testing the instrument, standardising the scores and performing validation research (Geisinger, 1994). Peña (2007) aptly asserts that during translation: [T]ypically, the translation process focuses on ensuring linguistic equivalence. However, establishment of linguistic equivalence through translation techniques is often not sufficient to guard against validity threats. In addition to linguistic equivalence, functional equivalence, cultural equivalence, and metric equivalence are factors that need to be considered… (p. 1255) The current culture of using planisa as a lingua franca in CLD resources ought to be called into question, as it not only negatively impacts the accuracy and efficacy of clinical practice, but also infringes on patients' rights. Mophosho (2016, p. 162) describes planisa as '... they get on and get by in the hospitals through their improvisation and the teams' self-initiated action'. 3 Furthermore, the use of untrained interpreters during assessments and intervention with limited use of cultural brokers forms part of the planisa-linked factors influencing the provision of linguistically and culturally appropriate SLH clinical training services within the South African context. South Africa has a strong constitutional, legal and policy dispensation that protects patient rights. 
At the policy level, significant challenges include: implementation and monitoring of the Language Policy in the DoH and the Batho Pele (People First) principle; translating existing policies and regulations around CLD in healthcare into practice; confronting the use of academic freedom by training institutions to defend a lack of social accountability and responsibility around CLD matters; raising and sharpening political will and the government mandate around CLD; as well as confronting the silence around clinical practice without CLD competence in the ethical codes of conduct. Councils such as the Council on Higher Education (CHE) and the HPCSA, involved in quality assurance of higher education training, have a significant role to play here. Their failures in this regard illustrate the disjuncture between the universal aspiration of human rights as norms and the complexities arising in their implementation.

As far as places (the physical environment in which people interact) are concerned, ours is a context in which many speech-language therapists and audiologists do not speak the language of their clients, consequently limiting their work. The paradox remains a structural mechanism in the contemporary South African public health system, which does not seem to be responsive to the needs of its citizens. The context is also limited by existing interpreter resource challenges. Given that cultural and linguistic diversity has a profound effect on the ways in which families and professionals inter-relate cross-culturally and participate together in treatment programmes, the DoH ought to invest in trained interpreters to assist healthcare providers and patients in government institutions.

3. Phakathi (2013) describes planisa as a type of Fanakalo (mining lingua franca), collaged together from various languages for miners to help them problem-solve day-to-day challenges.
In public settings where there are no mediated or interpreter services, the objectives of the National Language Policy Framework are certainly contravened. Lastly, as far as processes (a systematic series of actions directed to some end) are concerned, Penn, Mupawose and Stein (2009) attributed the lack of preparedness of speech-language therapists and audiologists in the South African health context to professional, technical, systemic, managerial, interpersonal and ethical challenges. These can be attributed to a variety of complex factors, including gaps in resources such as research, culturally appropriate intervention tools and relevant human resources.

Solutions

Falender, Shafranske and Falicov (2014a) observed culturally responsive pedagogy in clinical training to include personal, institutional and instructional dimensions, as depicted in Figure 3. In line with Figure 3, which defines culturally responsive pedagogy as requiring attention to the personal, institutional and instructional dimensions, the aforementioned gaps and influencing factors, as numerous as they are, are not insurmountable. A multipronged approach is required, where a number of factors are simultaneously addressed as possible solutions to the current CLD conundrum facing the South African SLH professions. Key to this is the transformation of student and staff demographic profiles to become reflective of the country's CLD profile. The speech-language and hearing professions also need to understand change processes in public policy implementation within the institutional dimension (Wylie, McAllister, Davidson, & Marshall, 2013). Within the personal dimension, cultural competence humbles the practitioner and stimulates cultural humility. In addition, it enhances patient care, as the understanding of patient belief systems becomes integral to their healthcare (Juarez et al., 2006).
This leads to cultural humility, which is defined as a lifelong learning process that incorporates openness, power-balancing and critical self-reflection when interacting with people, for mutually beneficial partnerships and institutional change (Tervalon & Murray-Garcia, 1998). At an instructional level, SLH departments need to have a student-centred vision and mission statement and a commitment to diversity, and to provide culturally and linguistically appropriate services within equitable and supportive learning environments, such as the conceptual framework for Responsive Global Engagement (Hyter & Salas-Provance, 2019). Furthermore, in relation to the clinical component, the current researchers recommend that clinicians collaborate with other disciplines, such as Sociology and Anthropology, and include clients and their families, in order to understand the cultural history of the persons on their caseloads. The worldview, cultural assumptions and practices of the client are considered in the clinical service by incorporating them into assessment and intervention plans (Hyter & Salas-Provance, 2019). Research into the CLD knowledge base and clinical practice, leading to the development of CLD resources that are evidence based, is also paramount. Furthermore, taking a social action approach would be pivotal. This is the highest level in Banks' (1989) model, which involves students taking decisions and actions on important social issues. This can happen if training institutions have the political will and if programme leadership lobbies not only for policy formulation around clinical care and clinical practice generally but also specifically for training purposes on training platforms. For example, training institutions can play a major role in influencing the inclusion of official and trained interpreters in staff establishments such as hospitals, which serve as training platforms.
Furthermore, clinical educators need to be trained to engage diverse and/or Afrocentric minds when consuming literature or clinical training sources, so as to ensure that their teaching and assessment approaches are contextually relevant. In addition to this engagement of diverse minds, clinical educators need to be assisted in developing multicultural competence, where their awareness of the influence of culture on clinical presentation and clinical interaction is raised and where the influence of culture on the supervision process is highlighted. All of this speaks to culturally responsive (and competent) pedagogy (Falender et al., 2014a) and cultural humility (as a counter to ethnocentricity and white privilege) in supervisor-supervisee and therapist-client dynamics. CLD-competent clinical supervision of SLH students requires a commitment to social justice and advocacy, where such a commitment underlies the attitude necessary for effective diversity and multicultural competence (Vasquez, 2012). Those involved should have a clear understanding of environmental contributions to human development, including pathology. Therefore, the curriculum needs careful interrogation in order to be CLD sensitive and competent. This applies to both assessment and intervention, with the development of guidelines for clinical educators and clinician students forming part of the core clinical curriculum. The core curriculum should include diversity sensitivity programmes such as those provided by People for Awareness on Disability Issues (PADI), an academic service-learning programme for first-year SLH students with disability awareness workshops, specific to language and culture, run over the span of one semester. These should not be optional, as they currently are in most programmes. Such service-learning projects also enhance civic responsibility in students because they focus '...
on meeting both communities' needs and learning objectives; it emphasises critical thinking to improve skills and civic responsibility' (Pillay & Ramkissoon, 2020, p. 1075). Linguistic and cultural consciousness and critical diversity literacy should form part of the CLD-enhancing tuition in SLH programmes.

Way forward and conclusion

It is important that when South African SLH professionals deliberate on transformation of the profession, they also face the decolonisation of the curriculum by introducing political consciousness and critical diversity literacy. Political consciousness in the curriculum and training is not just about learning to pass; it teaches students how to respond to the cultural context, foster connection with patients or clients and invest in relationships with communities. The SLH professions in South Africa need to show greater recognition of the urgent need to both examine and re-imagine SLH clinical training, which will have a positive effect on clinical outcomes and will inform the academic curriculum. Training programmes need to identify the environmental and instructional elements of culturally (and linguistically) responsive training and training platforms so as to ensure that these elements are optimised for their students' clinical training. Training institutions need to deliberate on and define culturally (and linguistically) responsive pedagogy and hold themselves accountable to it, with education and training regulating bodies including this as part of institution evaluation and accreditation processes. This includes identifying features of culturally (and linguistically) responsive pedagogy, as recommended by the National Center for Culturally Responsive Educational Systems (NCCRESt, n.d.).
The South African undergraduate clinical curriculum needs to undergo reform, where the standards followed are international but Afrocentric, with SLH training programmes at all South African universities understanding that they have a moral obligation to actively engage with and address the legacy of apartheid in all their teaching and learning activities (including their CPD offerings). Student clinicians and qualified clinicians ought to demonstrate an understanding of their clients' challenges and the causes of those problems, and work collaboratively with their clients in identifying ways to overcome them. Supervisory relationships should be guided by a framework that includes multicultural perspectives and diversity mindfulness, as well as an explicit anti-racism and anti-oppression standpoint (Porter, 2019). As far as CPD offerings are concerned, professional associations and bodies have an important role to play, as accreditors of CPD events and as CPD providers themselves, in making sure that CLD activities are also prioritised. The South African government must be lobbied to address the CLD-related challenges in healthcare facilities, which serve as clinical training platforms, as a state responsibility. In re-imagining practice, the current authors have presented a number of suggestions, including decolonisation of the SLH professions with responsibility to context through best-practice-driven clinical training that is contextually relevant and responsive. This is important if the SLH professions are to provide efficacious services within the National Health Insurance, where universal health coverage is the goal. If the status quo remains, where there is a lack of demand-driven engagement, South African SLH practitioners will continue to rely on the single story, which impedes the provision of effective intervention.
This will continue to maintain the cycle of exclusion, with patients or clients entering the healthcare system but exiting without receiving effective treatment or care, because language and culture remain barriers in an environment where communication pathology is the presenting disability. Social structures of clients ought to be considered contextual factors when planning SLH assessments and interventions. For the SLH professions to be effective and relevant within the South African context, practitioners need to deal with the unequal relations of power and strive to maintain equality in these relations with those they serve -both in clinical training and clinical service provision. The authors have deliberately not included a template or a case example to illustrate implementation of the suggestions they have provided. This series remains a call for independent critical deliberations and engagements with the SLH practice, which must leave room for independent creativity in re-imagining SLH practice within the African context, thereby allowing us to carve out next practice for our context.
First de novo draft genome sequence of Oryza coarctata, the only halophytic species in the genus Oryza

An Oryza coarctata plant collected from the Sundarban delta of West Bengal, India, has been used in the present study to generate a draft genome sequence, employing hybrid genome assembly of Illumina reads and third-generation Oxford Nanopore sequencing data. We report the draft genome for the first time, with a coverage of 85.71%, and have deposited the raw data in the NCBI SRA under BioProject ID PRJNA396417.

Amendments from Version 1
In this revision, we have addressed all the issues raised by the referee. In addition, we have made a few grammatical corrections. We revised the manuscript as a result of reanalysis of the data. We have also improved clarity in the Methods regarding assembly of the genome. Further, as per the suggestions of the referee as well as requirements, we incorporated six more references.

Introduction
Soil salinity is a major abiotic stress for rice cultivation globally (Molla et al., 2015), and rice cultivation areas under soil salinity stress are increasing gradually. The genetic potential for salt tolerance that exists among the natural rice population has been largely exploited, and alternative useful alleles may further enhance salinity tolerance. Wild species are a potential source of many useful genes and QTLs that may not be present in the primary gene pool of the domesticated species. Oryza coarctata, known as Asian wild rice, grows naturally in the coastal regions of South-East Asian countries. It flowers and sets seeds in saline soil with an ECe as high as 40 dS m-1 (Bal & Dutt, 1986). It is the only species in the genus Oryza that is halophytic in nature.
However, with the exception of one transcriptomic (Garg et al., 2014) and one miRNA (Mondal et al., 2015) experiment, no large-scale generation of any other genomic resource is available for this important species, although several pinitol biosynthesis pathway genes have been cloned to study the functional genomics (Sengupta & Majumder, 2009). Methods The plant was collected from its native place, the Sundarban delta of West Bengal, India (21°36'N and 88°15'E), and established in our institute's net house through clonal propagation. To determine the genome size, 20 mg of young leaf tissue from net-house-grown plants was chopped into small pieces and stained with RNase-containing propidium iodide (50 μg/ml) (BD Science, India) as per the protocol of Dolezel et al. (2007). The samples were filtered through a 40-μm mesh sieve (Corning, USA) before analysis in a BD FACSCalibur flow cytometer (BD Biosciences, San Jose, CA, USA). A Pisum sativum leaf was used as the standard for calculating the genome size. Further, high-quality genomic DNA from 100 mg of young leaf tissue of a single plant was extracted using the CTAB method (Ganie et al., 2016) for the preparation of various genomic DNA libraries. We used the standard Illumina HiSeq 4000 platform (San Diego, CA, USA) to construct 151-bp paired-end libraries and four mate-pair libraries of four different insert sizes (averages of 2, 4, 6 and 8 kb). In addition, we also used third-generation sequencing (Oxford Nanopore) technology for a better assembly. Sequencing was performed on a MinION Mk1b (Oxford Nanopore Technologies, Oxford, UK) using a SpotON flow cell (R9.4) in a 48-h sequencing protocol on MinKNOW 1.4.32. Base calling was performed using Albacore. Base-called reads were processed using poRe version 0.24 (Watson et al., 2015) and poretools version 0.6.0 (Loman & Quinlan, 2014). Assembly of the high-quality reads was performed using PLATANUS v1.2.4 (Kajitani et al., 2014) and SSPACE v3.0 (Boetzer et al., 2011) with default parameters. 
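The flow-cytometry genome-size estimate described above follows the standard fluorescence-ratio arithmetic of the Dolezel protocol. As a rough illustration only (not the authors' actual measurements), the sketch below assumes hypothetical fluorescence peak positions; the Pisum sativum 2C value (~9.09 pg) and the pg-to-Mb conversion (1 pg ≈ 978 Mb) are widely used constants:

```python
# Genome-size estimation from flow-cytometry fluorescence ratios
# (Dolezel et al. 2007 approach). The peak positions below are
# hypothetical placeholders, chosen so the estimate lands near the
# reported ~665 Mb figure.

PISUM_2C_PG = 9.09   # 2C DNA content of the Pisum sativum standard, pg
MBP_PER_PG = 978.0   # approximate megabases per picogram of DNA

def genome_size_mb(sample_peak: float, standard_peak: float) -> float:
    """Estimate sample 2C genome size in Mb from PI fluorescence peaks."""
    sample_2c_pg = PISUM_2C_PG * (sample_peak / standard_peak)
    return sample_2c_pg * MBP_PER_PG

print(round(genome_size_mb(sample_peak=37.4, standard_peak=500.0)))  # → 665
```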
The simple sequence repeats (SSRs) of each scaffold were identified by the MISA perl script (Thiel et al., 2003). Gene model prediction was done by the ab initio gene predictor AUGUSTUS 3.1 (Stanke & Waack, 2003) and the sequence-evidence-based annotation pipeline MAKER v2.31.8 (Campbell et al., 2014), with O. sativa ssp. japonica as the reference gene model. The protein-coding genes were annotated using a BLAST-based approach against a database containing functional plant genes downloaded from NCBI with Blast2GO (version 4.01) (Conesa & Gotz, 2008). Genes with significant hits were assigned GO (Gene Ontology) terms and EC (Enzyme Commission) numbers. An InterProScan search and pathway analyses with the KEGG database were also performed using Blast2GO. Non-coding RNAs, such as miRNA, tRNA, rRNA, snoRNA and snRNA, were identified using Infernal v1.1.2 (Nawrocki & Eddy, 2013) with the Rfam database (release 9.1) (Nawrocki et al., 2015) and the snoscan distribution. Transfer RNA was predicted using tRNAscan-SE v1.23 (Lowe & Eddy, 1997). Discussion O. coarctata (2n=4x=48; KKLL; Sanchez et al., 2013) is a self-pollinated (Sarkar et al., 1993), tetraploid plant with a genome size, estimated by flow cytometry, of approximately 665 Mb. Illumina paired-end sequencing generated 123.78 Gb of data. The four mate-pair libraries together generated 36.54 Gb, and Nanopore sequencing generated 6.35 Gb of sequence data. Hence, we achieved 250.66X depth of the O. coarctata genome. The final assembly generated 58,362 scaffolds ranging in length from 200 bp to 7,855,609 bp, with an N50 of 1,858,627 bp, making a total scaffold length of 569,994,164 bp (around 570 Mb) of assembled genome and resulting in 85.71% genome coverage. The assembly contains only a very small amount of non-ATGC characters. Further, we also found that 19.89% of the assembled genome is repetitive in nature. 
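The depth and coverage figures in the Discussion can be cross-checked directly from the numbers stated in the text, and the small helper below illustrates, on a toy list of scaffold lengths, the N50 statistic the assembly reports:

```python
# Cross-check of the reported sequencing depth and genome coverage,
# using the figures stated in the text.
GENOME_SIZE_MB = 665.0  # flow-cytometry estimate
data_gb = {"illumina_pe": 123.78, "mate_pair": 36.54, "nanopore": 6.35}

total_gb = sum(data_gb.values())                 # ~166.67 Gb
depth = total_gb * 1000 / GENOME_SIZE_MB         # Gb -> Mb; ~250x
coverage = 569_994_164 / (GENOME_SIZE_MB * 1e6)  # assembled / estimated

def n50(lengths):
    """Largest L such that scaffolds of length >= L span half the assembly."""
    total, acc = sum(lengths), 0
    for length in sorted(lengths, reverse=True):
        acc += length
        if acc * 2 >= total:
            return length

print(f"{depth:.1f}x depth, {coverage:.2%} coverage, toy N50 = {n50([7, 4, 3, 2, 2, 1])}")
```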
We also identified approximately 5,512 different non-coding RNAs and around 230,968 SSRs. Gene Ontology analysis identified several salt-responsive genes. Data availability Raw sequence data are available at NCBI SRA under the BioProject ID: PRJNA396417. Competing interests No competing interests were disclosed. Grant information The author(s) declared that no grants were involved in supporting this work. This work describes the whole-genome sequence of the wild species Oryza coarctata, which grows exclusively under saline water and thus will be an important source of salinity-tolerance genes. These genes can later be used to introduce salinity tolerance into commercial cultivars of rice. The authors used Illumina and Oxford Nanopore sequencing platforms to generate 372.48X data. Open Peer Review The genome sequencing methods seem good enough, but the authors have discussed very little about the annotation of the genome data. I can understand that there is a word limit under Data Note in F1000Research, but still, looking at the discussion, I think the analysis portion is the weak point of this paper. Authors should provide a comparative note on the genomes of Oryza sativa and Oryza coarctata. How this species is tolerating such high saline conditions, and which kinds of genes/osmoregulators are involved in this adaptation, should be discussed along with a comparison to O. sativa. How many different genes were predicted should be mentioned. Authors found approximately 1605 non-coding RNAs? I am not sure what the authors are trying to tell here; this number should be higher in my opinion. There are some minor mistakes: in the affiliation, the word "Delhi" is not required. The word "primary" should be inserted in the last line of the first paragraph of the Introduction, so that the correct sentence reads "...in the primary gene pool...". 
Is the rationale for creating the dataset(s) clearly described? Yes Are the protocols appropriate and is the work technically sound? Yes Are sufficient details of methods and materials provided to allow replication by others? Yes Are the datasets clearly presented in a useable and accessible format? Yes No competing interests were disclosed. Competing Interests: I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. The authors report a whole-genome sequence dataset for a halophytic wild rice species. These data will be useful for discovery of novel alleles for rice improvement, and for comparative/evolutionary genomics within the Oryza genus. The report would benefit from more details on the plant accession used as the source of DNA for sequencing. It is stated that O. coarctata is tetraploid. Was that determined by the authors, or is there a citation to include? Is it known whether O. coarctata is typically self- or cross-pollinated, or is there other information about the expected degree of heterozygosity? When grown in the greenhouse to generate the plant tissue used for DNA extraction, were the plant(s) established from seeds, or via clonal propagation? Was the genomic DNA used to prepare sequencing libraries from a single plant, or a pool from multiple plants? This information is important to assess expected frequencies of variant types such as alleles or homeologs due to tetraploidy, which are likely collapsed to varying degrees in the subsequent assembly. There is mention of an assembly and its quality, but not of the method(s) used to produce it or the key parameters that guided the assembly. Can the authors provide that information, so that others have a benchmark upon which to compare future assemblies using these datasets? The sentence "Further, we also found that the repeat contain 19.89% of the genome." is not completely clear. 
I believe what the authors intend to say is that approximately 20% of the genome assembly is comprised of repeats. How was this sequence fraction defined as repeats: via a tool for matching to known repeat sequences, or a de novo approach? By inference, it is also likely that the portion of the estimated genome size not covered by the assembly (approximately 100 Mb) is comprised of repeats. Are the protocols appropriate and is the work technically sound? Yes Are sufficient details of methods and materials provided to allow replication by others? Yes Are the datasets clearly presented in a useable and accessible format? Yes No competing interests were disclosed. Competing Interests: Referee Expertise: Comparative genomics, brassica, polyploidy, regulatory evolution I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
2,381.4
2017-09-25T00:00:00.000
[ "Environmental Science", "Biology" ]
The pre-pregnancy fasting blood glucose, glycated hemoglobin and lipid profiles as blood biomarkers for gestational diabetes mellitus: evidence from a multigenerational cohort study Abstract Background Early prevention of gestational diabetes mellitus (GDM) is important to reduce the risk of adverse pregnancy outcomes and post-pregnancy cardiometabolic risk in women and offspring over the life course. This study aimed to investigate some blood biomarkers before pregnancy as GDM predictors. Methods We investigated the prospective association of blood biomarkers before pregnancy and GDM risk among women from the Mater-University of Queensland Study of Pregnancy (MUSP) cohort. A multiple logistic regression model was applied to estimate the odds of experiencing GDM by blood biomarkers. Results Out of 525 women included in this study, the prevalence of GDM was 7.43%. There was an increased risk of experiencing GDM among women who experienced obesity (odds ratio [OR] = 2.4; 95% confidence interval [CI] = 1.6–3.7), had high fasting blood glucose (OR = 2.2; 95% CI = 1.3–3.8), high insulin (OR = 1.1; 95% CI = 1.0–1.2), high insulin resistance (OR = 1.2; 95% CI = 1.0–1.3) and low high-density lipoprotein (OR = 0.2; 95% CI = 0.1–0.7) before pregnancy. Adjustment for potential confounders, such as age, marital status, and BMI, did not attenuate these associations substantially. Conclusion The pre-pregnancy fasting blood glucose, insulin, and insulin resistance were independent predictors of GDM. They may be used as early markers for predicting the incidence of GDM. Introduction Gestational diabetes mellitus (GDM) is a state of glucose intolerance, first detected anytime during the pregnancy, which does not meet the criteria for diagnosis of diabetes in non-pregnant women [1]. GDM is a common complication of pregnancy that affects about 12% of Australian pregnancies [2], potentially causing several short- and long-term health consequences for the mother and her child [3]. 
Most importantly, GDM is associated with an elevated risk of hypertensive disorders of pregnancy, operative delivery, and macrosomia [4][5][6]. It increases antenatal and postnatal care expenses [7]. In the long term, it is associated with an increased risk of metabolic disorders and cardiovascular diseases (CVDs) in the mother and her baby [8]. Therefore, early detection of GDM allows for risk management, improves the health of the mothers and newborn babies, and decreases the healthcare burden [7]. Pancreatic beta-cell dysfunction (lower insulin secretion) and peripheral insulin resistance play primary roles in GDM pathophysiology [9]. However, for most women with GDM, the pathological process has already begun before pregnancy [3,10], so early pregnancy lifestyle interventions for GDM prevention have had limited success [11]. The systematic review by Song et al. [12] has suggested that first-trimester lifestyle intervention may reduce GDM by 20%. However, preconception interventions may potentially be more successful [10]. Promising preliminary research from retrospective bariatric surgery studies suggests that pre-pregnancy reduction in body weight may help in GDM prevention and reduce GDM recurrence [13]. Currently, risk factors including age, body mass index (BMI), previous history of GDM, and family history of diabetes are used to identify women who would benefit from early pregnancy screening for GDM [14]. Studies have considered some biomarkers during pregnancy as GDM predictors [14,15]. For example, pre-pregnancy and early pregnancy fasting blood glucose (FBG) may help in excluding women who do not require further investigation for GDM, but it cannot replace the oral glucose tolerance test (OGTT) in GDM diagnosis [16]. Many studies have reported that elevated first-trimester FBG (within the range of normoglycemia) is an independent risk factor for the later development of GDM [17][18][19]. 
Furthermore, all 10 studies included in the systematic review of Kattini et al. reported that glycosylated hemoglobin (HbA1c) was associated with an increased risk of GDM and that women with levels between 5.7% and 6.4% in early pregnancy are more likely to develop GDM [20]. A recent study by Nissim et al. found that HbA1c ≥5.45% in the first trimester of pregnancy predicted GDM with 83.3% sensitivity and 69% specificity. Therefore, HbA1c in early pregnancy may serve as a simple, early predictor for GDM [21]. Multiple large-scale studies suggested that fasting blood glucose and HbA1c measurements in the first trimester may help diagnose women who would benefit from early treatment [14]. Furthermore, some studies reported increased fasting insulin in the first trimester of pregnancy in women who later develop GDM [22,23]. However, other investigators have found that fasting insulin was not an independent predictor for GDM after adjusting for clinical characteristics [22,24]. Moreover, other studies have reported increased lipid profiles in women with GDM compared to normal women, and lipid abnormalities are inducing factors for insulin resistance [9,25]. Thus, overall, the results of the studies considering lipid patterns and GDM are inconsistent, and most of these studies have focused on the association of lipid profiles in early or throughout pregnancy and GDM risk rather than before pregnancy. Several studies have tested whether such biomarkers can be identified and employed to identify women at risk of GDM [9,26,27]. However, most of these studies are cross-sectional and have examined the association during pregnancy rather than before pregnancy. Moreover, it is as yet unproven whether these biomarkers, together or separately, are of practical value as GDM prediction tools [14]. 
Detecting some pre-pregnancy blood biomarkers using a longitudinal design would provide valuable information to reduce the need for screening tests in all pregnant women and allow early intervention to improve GDM outcomes. This study aimed to determine whether pre-pregnancy blood biomarkers predict GDM, using the Mater-University of Queensland Study of Pregnancy (MUSP) cohort [28]. These blood biomarkers could be used as a pre-pregnancy screening tool for GDM and would decrease the need for further screening tests for women without risk factors and initiate early prevention and treatment for those women with GDM risk factors. Study data We used the Mater-University of Queensland Study of Pregnancy (MUSP) [28], which is a prospective cohort study of 7223 women (G1) and their offspring (G2) who received antenatal care at a major public hospital in South Brisbane between 1981 and 1984. The maternal cohort and their index children were subsequently followed up by maternal questionnaires at 6 months, 5, 14, and 21 years after childbirth. The mothers' cohort and offspring were followed up separately at 27 years and 30 years, respectively, due to the difference in the timing of the research grants. Recently (2016-2018), a 34-year follow-up of G2 and children of the offspring (G3) was undertaken. For this study, the participants are the female offspring (G2) of the original (G1) mothers at the 34-year follow-up. The analytical sample comprises a sub-sample of 525 offspring (G2) for whom we have information on some of their blood biomarkers (these women provided blood samples at the 30-year follow-up), their BMI, and additional factors at the 30-year follow-up (Figure 1). The ethics committees at the Mater Hospital and the University of Queensland approved each phase of the study. At each data collection phase, informed written consent was obtained from the adults. Full details of information about the study participants and measurements have been previously reported [29]. 
Measures Blood biomarkers at 30-year follow-up At the 30-year follow-up of G2, fasting blood samples were drawn from 1625 young adults (after written informed consent) to evaluate different biomarkers [30]. For those participants residing in Brisbane, samples were collected by Mater Pathology, Brisbane. For participants outside the Brisbane area, specimen collection was performed by their nearest Sonic Healthcare clinic. Those participants from Brisbane who were having difficulty attending a Sonic clinic underwent home specimen collection with a registered nurse from LifeScreen. Detailed instruction about fasting blood sample collection was provided to all the participants, who were advised to eat their normal evening meal by 7 pm and fast for a minimum of 9 h, with blood sampling before 9 am (an overnight fasting blood sample). The blood samples were used for testing blood biomarkers such as FBG, HbA1c, high-density lipoproteins (HDL), and low-density lipoproteins (LDL). Accordingly, FBG, HbA1c, insulin, and lipid markers were considered as potential pre-pregnancy predictors for GDM in our study, considering their associations with GDM in previous epidemiological studies [1,9,14,15,[17][18][19]21,25,31]. The glucose oxidase method was used for assessing FBG, and the cation-exchange HPLC method using a Bio-Rad D10 for HbA1c calculation. Prediabetes was defined by FBG ranging from 5.6-6.9 mmol/L or HbA1c 5.7-6.4%, based on American Diabetes Association recommendations [32]. Phosphotungstate/Mg²⁺ was used to assess serum lipids using the Ortho Clinical Diagnostics Vitros analyzer. According to the guidelines for preventive activities in general practice [33], 2.0 mmol/L for LDL, 1.0 mmol/L for HDL, and 5.5 for the total cholesterol:HDL ratio (TC/HDL) were considered as cutoff values. Figure 1. Flowchart demonstrating the study exposures, outcome and the number (%) of G1, G2 and G3 retained in the MUSP study at each phase (age) of data collection [29]. 
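The categorical cutoffs described above can be sketched as a small classifier. The function and field names are our own, and the direction of the lipid flags (high LDL above 2.0 mmol/L, low HDL below 1.0 mmol/L, high TC/HDL above 5.5) is our reading of the guideline thresholds, not an exact reproduction of the study's coding:

```python
# Sketch of the categorical cutoffs described in the text: ADA
# prediabetes ranges for FBG/HbA1c and the lipid thresholds from the
# Australian general-practice guidelines. Names are illustrative.

def classify(fbg_mmol_l, hba1c_pct, hdl, ldl, tc_hdl_ratio):
    return {
        # ADA: FBG 5.6-6.9 mmol/L or HbA1c 5.7-6.4% => prediabetes
        "prediabetes": 5.6 <= fbg_mmol_l <= 6.9 or 5.7 <= hba1c_pct <= 6.4,
        "low_hdl": hdl < 1.0,            # mmol/L (assumed direction)
        "high_ldl": ldl > 2.0,           # mmol/L (assumed direction)
        "high_tc_hdl": tc_hdl_ratio > 5.5,
    }

print(classify(fbg_mmol_l=5.8, hba1c_pct=5.5, hdl=0.9, ldl=2.6, tc_hdl_ratio=4.2))
```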
Insulin resistance was estimated using the homeostatic model assessment of insulin resistance (HOMA-IR) score, according to the formula fasting glucose (mmol/L) × fasting insulin (μU/mL)/22.5 [1]. Women with HOMA-IR >2 [34] and fasting insulin >10 mIU/L [35] were considered to have insulin resistance. Gestational diabetes mellitus The GDM status of G2 females was based on self-report at the 34-year follow-up. Our expert recruiter asked the women by telephone about their pregnancy history, pregnancy-related complications, delivery, lifestyle, and birth outcomes. GDM data were recorded using the question: "Did the doctor ever diagnose you with diabetes (high blood sugar) during pregnancy?" with response options "yes" or "no." However, the diagnosis did not identify which pregnancy was affected by GDM, and there was no information regarding treatment or biochemical confirmation of the diagnosis. In addition, at the 30-year follow-up, G2 women were asked whether they had experienced type 1 diabetes mellitus (T1DM) or type 2 diabetes mellitus (T2DM). This information was used to determine whether they had experienced diabetes mellitus before age 30 and was used as an exclusion criterion. Statistical analysis The overall distributions (mean and standard deviation) of FBG, HbA1c, insulin, HOMA-IR, HDL, LDL, and TC/HDL ratio were calculated. Statistical analysis was performed using F-tests for comparison of mean values, and the χ² test was used for categorical variables. Multiple logistic regression with adjustment for potential confounders was conducted. By comparing different models, we found which factor(s) confounded/mediated the association between the selected biomarkers and GDM. All analyses were interpreted as odds ratios (OR) with 95% confidence intervals (CI). The MUSP cohort has some loss to follow-up, as observed in other cohorts. 
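The HOMA-IR formula and the insulin-resistance cutoffs stated above can be expressed directly (the example input values are illustrative, not study data):

```python
# HOMA-IR as used in the text: fasting glucose (mmol/L) x fasting
# insulin (uU/mL) / 22.5, with the stated cutoffs (HOMA-IR > 2 or
# fasting insulin > 10 mIU/L) flagging insulin resistance.

def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    return glucose_mmol_l * insulin_uu_ml / 22.5

def insulin_resistant(glucose_mmol_l: float, insulin_uu_ml: float) -> bool:
    return homa_ir(glucose_mmol_l, insulin_uu_ml) > 2 or insulin_uu_ml > 10

# e.g. glucose 5.0 mmol/L, insulin 12 uU/mL
print(round(homa_ir(5.0, 12.0), 2))  # → 2.67
```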
The main study had revealed that the participants' mothers (G1) who were lost to follow-up were more likely to have been teenagers when they delivered, of lower educational status, of lower socioeconomic status, smokers during pregnancy, and to have had poorer mental health [29]. Therefore, standard analytic methods were applied to correct the potential bias induced by attrition. The proportion of missing data ranged from 0% to 35%. Missing values for variables were imputed using multiple imputation by chained equations (MICE) with 30 imputed datasets [36]. We assessed all covariates' influence in the primary analysis on our complete data in a logistic regression model. All analyses were undertaken using Stata version 16 (StataCorp, College Station, TX, USA). Results A sub-sample of 525 females who provided information about GDM during the 34-year follow-up of G2, and for whom we have data on blood biomarkers before pregnancy (at the 30-year follow-up), were included in this study. We divided the sub-sample (regardless of the number of pregnancies) into women who experienced GDM and women without GDM. As a result, 39 (7.43%) of the women self-reported a diagnosis of GDM during pregnancy. Participant women's characteristics stratified according to GDM status are shown in Table 1. Women who reported GDM were more likely to be married, non-smokers, to have an annual family income >$50k, high IPAQ scores, and BMI ≥30 kg/m² at age 30. The association between BMI and GDM was statistically significant, and about 50% of the women who reported GDM experienced obesity before pregnancy. However, no significant differences were observed between GDM and non-GDM women regarding age, marital status, family income, smoking, alcohol consumption, and physical activity at the 30-year follow-up. Table 2 demonstrates the mean values of the pre-pregnancy blood biomarkers with the risk of GDM. 
Women who reported a diagnosis of GDM had significantly increased mean FBG, insulin, HOMA-IR, and TC/HDL ratio, and decreased mean HDL concentrations, before pregnancy compared to the non-GDM group. The univariate and multivariate-adjusted estimates of the continuous associations between pre-pregnancy blood biomarkers and the risk of GDM are presented in Table 3. In the univariate analysis, FBG (OR = 2.2; 95% CI = 1.3-3.8), insulin (OR = 1.1; 95% CI = 1.0-1.2), HOMA-IR (OR = 1.2; 95% CI = 1.0-1.3) and TC/HDL ratio (OR = 1.5; 95% CI = 1.2-2.0) were positively associated with increased GDM risk, and HDL (OR = 0.2; 95% CI = 0.1-0.7) was inversely associated with GDM risk. Adjustment for potential confounders did not explain these associations. Furthermore, no statistically significant associations were noted between HbA1c and the risk of developing GDM in unadjusted and adjusted models. Table 3. Associations between pre-pregnancy blood biomarkers (continuous variables) and GDM. Table 4 shows the proportion of GDM status by the pre-pregnancy blood biomarker groups. The women who reported GDM were more likely to have increased insulin resistance (about 46% had an insulin level >10 mIU/L and almost 60% had HOMA-IR >2, p < .001), and about 10% were in pre-diabetes status before pregnancy based on FBG (p = .006). Table 5 shows the multiple logistic regression models of the categorical blood biomarker groups. Significantly higher odds were observed for FBG, insulin, and insulin resistance in the adjusted models (OR = 6.4; 95% CI = 2.0-36.6), (OR = 1.4; 95% CI = 1.1-3.4) and (OR = 3.0; 95% CI = 1.2-7.3), respectively. The association between pre-pregnancy HbA1c and GDM was not statistically significant. Discussion The present study showed that the associations between pre-pregnancy FBG, insulin, and HOMA-IR and GDM were statistically significant and remained so after adjustment for various potential confounders, indicating that metabolic impairments preceded the GDM pregnancy. 
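For readers unfamiliar with how odds ratios like those reported above arise, a minimal sketch of an unadjusted OR with a Wald 95% CI from a 2×2 exposure-by-outcome table follows; the counts are hypothetical, not the study's data (the study's estimates come from multiple logistic regression, which this simple table-based calculation does not reproduce):

```python
# Unadjusted odds ratio and Wald 95% CI from a 2x2 table.
# Counts below are hypothetical placeholders.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed cases, b=exposed non-cases,
    c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = or_ * math.exp(-z * se_log_or)
    hi = or_ * math.exp(z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=12, b=40, c=27, d=446)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```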
Therefore, they may be considered independent markers for the prediction of GDM. Our results suggest that pathophysiological changes related to glucose regulation, which may lead to GDM, are present before pregnancy and are not simply induced by pregnancy in women who develop GDM [37]. These findings are of clinical relevance, as they may be used in identifying women at risk of GDM before pregnancy and may allow for preconception interventions to reduce the risk of GDM in a future pregnancy. These may be more effective than interventions starting in early pregnancy for GDM prevention. A study by Gunderson et al. considered multiple biomarkers together for GDM prediction using the clinical data of the Coronary Artery Risk Development in Young Adults (CARDIA) study [27]. It found that impaired fasting glucose, elevated fasting insulin, and low HDL levels were present in 41% of women (58 out of 141) before their GDM pregnancy. Furthermore, the retrospective study of Noussitou et al., which examined the relationship between GDM and metabolic syndrome, found that metabolic abnormalities were already present in about 26% of the included women before their GDM pregnancy, with an associated increased risk of future T2DM [38]. In addition, studies of early pregnancy found that elevated first-trimester FBG (within the range of normoglycemia) was an independent risk factor for the later development of GDM [17,19] but did not measure other blood biomarkers. Moreover, higher first-trimester fasting glucose levels, even within the non-diabetic range, increased the risk of adverse pregnancy outcomes [17]. Table 4. Generation 2 participants' pre-pregnancy blood biomarker levels (categorical variables) in the GDM and non-GDM groups. The FBG cannot replace the OGTT in diagnosing GDM, but it could help exclude women who do not require further investigation for GDM [16]. The study of Immanuel [1] showed that insulin resistance contributed to early GDM. 
In another study of early pregnancy, it was reported that there is a positive correlation between high serum insulin levels and GDM. In addition, women with elevated first-trimester insulin should be managed the same as those with GDM despite a negative OGTT. Furthermore, it concluded that serum insulin measurement before 16 weeks of gestation was reliable for predicting GDM [23]. This study has some strengths, as it used data from a large longitudinal cohort. The longitudinal design reduces the likelihood of recall bias. The availability of a wide range of blood biomarkers and confounders also strengthened our study. However, there are several potential limitations to this study. First, we relied on self-reported GDM during the 34-year follow-up without confirmation by objective laboratory tests. Although the trained interviewer asked the women whether a doctor had ever diagnosed them with diabetes during pregnancy, women in this cohort were highly likely to have underdiagnosed gestational diabetes mellitus. However, our prevalence of GDM was consistent with the latest report from the Australian Institute of Health and Welfare [39]. Several studies have reported that self-reports of diabetes yielded high agreement when compared with medical records data, physical examination, and HbA1c [40,41]. Therefore, this limitation does not significantly affect the clinical utility of these biomarkers. Second, although we considered age, marital status, family income, smoking, alcohol drinking, physical activity, and BMI as potential confounders, there may be unmeasured confounders that impacted the associations but could not be considered. Finally, there was loss to follow-up in the cohort. Non-participants at different follow-ups were likely to be from poorer backgrounds; their mothers were more likely to have lower educational attainment and to have been younger at childbirth. 
Missing data because of attrition would have biased our results only if the associations we assessed were either non-existent or in the opposite direction in non-participants, which is unlikely. A variety of modeling strategies have been used in the MUSP study for attrition adjustment. However, these methods have minimally affected overall findings [42,43]. Therefore, multiple imputation by chained equations was carried out to adjust for missing data in our analyses [36]. We suggest that missing data are unlikely to have biased our results. Conclusion FBG, insulin, and insulin resistance were independent pre-pregnancy predictors of GDM. Therefore, women with high FBG and insulin resistance before pregnancy should be monitored and given more attention to prevent GDM and its adverse outcomes. Pre-pregnancy biomarker measurements might be more beneficial in GDM prediction than early-pregnancy values. Therefore, such analyses could be included in routine health checks before pregnancy to identify women with a high risk of GDM in subsequent pregnancies. Future longitudinal studies are needed to examine the association between these biomarkers before and during pregnancy and GDM. Furthermore, further research is needed to determine the most appropriate risk-benefit use of pre-pregnancy blood biomarkers.
4,303.6
2023-03-31T00:00:00.000
[ "Medicine", "Biology" ]
Novel DOX-MTX Nanoparticles Improve Oral SCC Clinical Outcome by Down Regulation of Lymph Dissemination Factor VEGF-C Expression in vivo: Oral and IV Modalities Mehran Mesgari Abbasi, Amir Monfaredan, Hamed Hamishehkar, Khaled Seidi, Rana Jahanban-Esfahlan BACKGROUND Oral squamous cell carcinoma (OSCC) remains one of the most difficult malignancies to control because of its high propensity for local invasion and cervical lymph node dissemination. The aim of the present study was to evaluate the efficacy of novel pH- and temperature-sensitive doxorubicin-methotrexate-loaded nanoparticles (DOX-MTX NP) in terms of their potential to change the VEGF-C expression profile in a rat OSCC model. MATERIALS AND METHODS 120 male rats were divided into 8 groups of 15 animals and administered 4-nitroquinoline-1-oxide to induce OSCCs. The newly formulated doxorubicin-methotrexate-loaded nanoparticles (DOX-MTX NP) and free doxorubicin were administered IV and orally. RESULTS Both oral and IV forms of the DOX-MTX nanoparticle complex caused a significant decrease in the mRNA level of VEGF-C compared to untreated cancerous rats (p<0.05). Surprisingly, VEGF-C mRNA was not affected by free DOX in either the IV or oral modality (p>0.05). Furthermore, in the DOX-MTX NP-treated group, fewer advanced-stage tumors and lower VEGF-C mRNA levels paralleled an improved clinical outcome (p<0.05). In addition, compared to untreated healthy rats, VEGF-C expression was not affected in healthy groups that were treated with IV and oral dosages of the nanodrug (p>0.05). CONCLUSIONS VEGF-C is one of the main prognosticators for lymph node metastasis in OSCC. Down-regulation of this lymph-angiogenesis-promoting factor is a new feature acquired in the group treated with dual-action DOX-MTX NPs. Besides the synergistic apoptotic properties of the concomitant use of DOX and MTX on OSCC, DOX-MTX NPs possessed anti-angiogenic properties, which were related to the improved clinical outcome in treated rats. 
Taken together, we conclude that our multifunctional doxorubicin-methotrexate complex exerts specific potent apoptotic and anti-angiogenic properties that could ameliorate the clinical outcome, presumably via down-regulating the expression of the dissemination factor VEGF-C, in a rat OSCC model. Introduction Oral squamous cell carcinoma (OSCC) ranks as the sixth most common cancer worldwide, as it accounts for nearly 95% of all oral neoplasms and 38% of all head and neck cancers, in particular of the tongue and lip (Kademani et al., 2005; Massano et al., 2006; Bell et al., 2007). In contrast to the good prognosis of lip cancers, tongue carcinomas generally exhibit much more aggressive biological behavior, with an unfavorable prognosis and high metastatic potential (Jones et al., 1992; Kademani et al., 2005; Montoro et al., 2008). Unfortunately, the increase in incidence has not been paralleled by the development of new therapeutic agents, while the survival rate has only improved slightly, with the 5-year survival rate staying at about 50% over the past 30 years. Patients with premalignant lesions and early-stage cancers have a high rate of survival, but the vast majority of Stage III and IV cases are fatal (Zwetyenga et al., 2003; Bell et al., 2007; Rusthoven et al., 2010; Albano et al., 2013). The prognosis of SCC depends on a series of factors such as the proliferative activity of the tumor, the degree of differentiation, and the invasion and metastatic potential (Han et al., 2008). 
Angiogenesis, the formation of new blood vessels from existing vasculature, is an important process in many malignancies including oral cancer (Zhao et al., 2013). It is the result of an intricate balance between pro-angiogenic and anti-angiogenic factors (Srivastava et al., 2014). The VEGF family is composed of several subtypes, including VEGF-A, VEGF-B, VEGF-C, and VEGF-D, which exist as numerous splice-variant isoforms (Sugiura et al., 2009). Shintani et al. (2004) described VEGF expression in OSCC, correlating subtypes A and B with tumor angiogenesis and subtypes C and D with the risk of nodal metastases. Uehara et al. (2004) found a significant correlation between high expression of VEGF in OSCC and worse prognosis. In view of this, the evaluation of VEGF-C and -D expression in oral SCCs could be a valuable tool for predicting their prognosis and for designing agents with the potential to inhibit their activity (Mohamed et al., 2004). Combination chemotherapy and nanoparticle drug delivery are two fields that have shown substantial promise in cancer therapy. Administration of two or more drugs results in synergistic effects between the different drugs and can combat drug resistance through different mechanisms of action (Kalaria et al., 2009; Hu et al., 2010; Nasiri et al., 2013). Furthermore, nanoparticle drug delivery enhances therapeutic effectiveness while reducing the side effects of the drug payloads by ameliorating their pharmacokinetics (Wang et al., 2010; Tacar et al., 2013; Salehi et al., 2014). Coupling these two active areas has resulted in measurable advances in improving the efficacy of cancer therapeutics (Mollazade et al., 2013), although there are also challenges and design specifications that need to be addressed in optimizing nanoparticle-based combination chemotherapy (Colleoni et al., 2002; Chen et al., 2011; Benival and Devarajan, 2012; Baykara et al., 2013; Lasrado et al., 2014). 
Doxorubicin, one of the most potent FDA-approved chemotherapeutic drugs, has shown substantial anti-cancer potential, limited only by its cardiotoxicity (Deng and Zhang, 2013; Duong and Yung, 2013). However, when combined with nanodelivery systems, DOX nanoparticles not only increase the intracellular uptake of DOX but also reduce its side effects significantly compared with conventional DOX formulations (Hu et al., 2010; Liboiron and Mayer, 2014). Methotrexate (MTX) is another central chemotherapeutic drug that is widely used either in monotherapy or in combination with other biologic and synthetic disease-modifying anti-cancer drugs (Rossi et al., 2010; Cipriani et al., 2014). DOX-MTX NP is a new combination chemotherapy and nanoparticle drug delivery system that showed initial promising results against OSCC in a rat model. However, more studies are required to evaluate its efficacy, safety and mechanism of action. In this respect, this study was conducted to evaluate the efficacy of IV and oral modalities of DOX-MTX-loaded nanoparticles in terms of their potential to affect the expression level of VEGF-C, since over-expression of this lymph-angiogenesis gene is strongly associated with metastasis and poor prognosis. Targeting the angiogenic pathway by combination chemotherapy and nanoparticle drug delivery may provide a promising approach for the treatment of aggressive tumors including oral cancer. 
Dual anticancer drug loaded nanoparticles The synthesis procedure of the nanoparticles was fully explained by Salehi et al. (2014). Briefly, an appropriate amount of the newly synthesized nanoparticles was ultrasonically dispersed in the MTX solution for 5 min. After stirring for 24 h under dark conditions, DOX-HCl was added to the MTX-loaded nanoparticle mixture and dispersed with the aid of ultrasonication (Sonics Vibra cell, Model: VCX 130 PB, Newton, CT) for 3 min. The final carrier/drug ratio was 5:1 for both drugs. The mixture was kept under magnetic stirring at room temperature for another 24 h under dark conditions. The MTX-DOX-loaded nanoparticle dispersion was then left for 2 h to allow sedimentation of the fine precipitates. DOX-MTX-loaded nanocomposites were collected by centrifugation at 14,000 rpm for 15 min, vacuum dried for 24 h at room temperature, and stored in a desiccator until used. The dual anticancer drug loaded nanoparticles were diluted with physiological saline solution to the appropriate concentration before administration to rats. Animals 120 male Sprague-Dawley rats weighing 180±20 grams were randomly divided into 8 groups. The animals were housed in standard polycarbonate cages in a temperature-controlled animal room (22±2°C) with a 12/12-hour light/dark cycle during the experiments. The animals were provided with a standard rat pellet diet ad libitum. Drinking water containing 4-NQO was prepared three times a week by dissolving the carcinogen in distilled water and was given in light-opaque bottles. 
Experimental design 120 rats were randomly divided into 8 groups of 15 animals each, as follows: i) served as a carcinoma control and received 4-NQO (Sigma) at a concentration of 30 ppm in drinking water for 14 weeks without any treatment; ii) and iii) served as treatment groups and received 4-NQO at 30 ppm in drinking water for 14 weeks plus oral doses (gavage) of doxorubicin and of the DOX-MTX-loaded nanoparticles, respectively, at 5 mg/kg of body weight once a day on days 2, 5 and 8 of the study; iv) and v) served as treatment groups and received 4-NQO at 30 ppm in drinking water for 14 weeks plus intravenous (IV) doses of doxorubicin and of the DOX-MTX-loaded nanoparticles at 1.5 mg/kg of body weight once a day on days 2, 5 and 8 of the study; vi) and vii) served as treated control groups that received oral and IV doses of DOX-MTX NPs (5 mg/kg and 1.5 mg/kg of body weight, respectively, once a day on days 2, 5 and 8 of the study); viii) served as the normal control group, whose rats did not receive any carcinogen or treatment material. The death rate of the animals was also recorded during the study. Ethics All ethical and humane considerations were observed according to the Declaration of Helsinki during the experiments and the euthanasia of the animals. All animal experiments were approved by the Ethics Committee of Tabriz University of Medical Sciences. 
Histological evaluation At the end of the interventional period, the animals were euthanized under anesthesia (pentobarbital, 150 mg/kg IP). Tongue tissue samples were taken from each animal and immediately fixed in 10% phosphate-buffered formalin. 5 μm thick microscopic sections were prepared after embedding the tissue samples in paraffin. The sections were then stained with hematoxylin-eosin, and histological evaluations were performed with light microscopy. Histopathological changes in tumors were evaluated blindly by two pathologists. Detection of VEGF-C mRNA expression by quantitative real-time PCR Briefly, total RNA (2 µg) was extracted from homogenized fine powder of the removed tongue tissues as described elsewhere (Jahanban Esfahlan et al., 2011). RNA was reverse transcribed to cDNA using the RevertAid first strand cDNA synthesis kit (Fermentas). The resulting cDNA was diluted 1:30 and the PCR reaction was performed with 2 µl cDNA, 10 pM of each forward and reverse primer, and 12.5 µl SYBR Green PCR Master Mix (Fermentas) in a final volume of 25 µl. The thermal profile for the real-time qPCR was 95°C for 10 min followed by 45 cycles of 95°C for 15 seconds and 60°C for 1 min. Gene expression was expressed as fold change from the GAPDH level, calculated as 2^-ΔΔCt. In addition, melting curve analysis was performed to assure the specificity of the PCR product. The following rat primers were used: VEGF-C (NM_053653.1): 5'-TTGTCTCTGGCGTGTTCCTTGC-3' (forward), 5'-AGGTCTTTGCCTTCGAAACCCTTG-3' (reverse). GAPDH (AF106860): 5'-ATGACTCTACCCACGGCAAG-3' (forward), 5'-CTGGAGATGGTGATGGGTT-3' (reverse). 
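The 2^-ΔΔCt fold-change used above can be computed as follows; the Ct values in the example are hypothetical, chosen only to illustrate a roughly 3-fold difference like that reported between cancerous and healthy groups:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method: the target
    gene (VEGF-C) is normalized to the reference gene (GAPDH) within
    each sample, then the sample is compared to the control."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: VEGF-C crosses threshold ~1.58 cycles earlier
# (relative to GAPDH) in the cancerous sample -> roughly 3-fold higher.
fc = fold_change(24.0, 18.0, 27.58, 20.0)
```

A lower Ct means earlier amplification, i.e. more starting transcript, which is why the exponent is negated.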
Data analysis The data were analyzed with SPSS 13. One-way analysis of variance (ANOVA) was used to compare fold-change differences of VEGF-C between and within the studied groups, followed by multiple comparisons with the Tukey post-hoc test. Fisher's exact test was used to analyze pathological changes between groups. The chi-square test was used to verify the possible relation between expression of the VEGF-C gene and pathological changes in the tissue samples. A p value <0.05 was considered significant. Establishment of the oral squamous cell carcinoma (OSCC) model in rat OSCC carcinogenesis usually develops through a multistep process that begins with hyperplasia and passes through mild, moderate and severe dysplasia before OSCC. 4-NQO-induced OSCC has been used to study the various stages of oral carcinogenesis because of its capability of sequentially inducing the phases of carcinogenesis (hyperplasia, mild dysplasia, moderate dysplasia, severe dysplasia, carcinoma in situ and OSCC). We have previously verified that 4-NQO successfully induces the different stages of the tongue carcinogenesis process in all cancer groups. The high mortality rate, low weight gain, frequency of OSCC and high proliferation severity of the cancer control group compared to the other groups demonstrate the efficacy of the 4-NQO-induced OSCC model (Mehdipour et al., 2013). Effect of DOX-MTX NP on mRNA expression of VEGF-C in tongue tissues of OSCC rat models Compared to all three healthy groups, VEGF-C was expressed approximately 2.98-fold higher in the cancerous group (p=0.02) (Figure 1). 
In the groups that received oral doses (5 mg/kg of body weight once a day on days 2, 5 and 8 of the study), results showed that compared to the untreated cancerous group, mRNA expression of VEGF-C decreased approximately 2.27-fold in the DOX-MTX NP treated group (p=0.046). On the other hand, no significant difference in VEGF-C expression was observed between the group that received oral dosages of free DOX and the untreated cancerous group (p=0.107) (Figure 2) (fold changes represented as mean±SE). After IV treatment with DOX and DOX-MTX NP (1.5 mg/kg of body weight once a day on days 2, 5 and 8 of the study), mRNA expression of VEGF-C in the DOX-MTX NP treated group decreased ~3.33-fold compared to the untreated cancerous group, which was statistically significant (p=0.048); in contrast, in the DOX treated group there was no significant decrease in the VEGF-C mRNA level (p=0.943) (Figure 3). Furthermore, we found no significant difference between the IV and oral routes of the DOX-MTX nanodrug (p=0.67) or of DOX (p=0.173) (Figure 4). All three healthy controls showed a significant difference in VEGF-C mRNA expression compared to the untreated cancerous group (p<0.01). To evaluate the safety of our nanodrug on normal cells and its specificity for targeting tumoral cells, we included three healthy controls. Our results indicated that both healthy groups treated with oral and IV doses of DOX-MTX NPs showed no significant difference in the mRNA level of VEGF-C compared to the untreated healthy group (p=0.285 and p=0.634, respectively) (Figure 5). Our results indicate that both oral and IV modalities of MTX-DOX NPs exhibit superior performance over free DOX in terms of decreasing the VEGF-C mRNA level in the rat OSCC model (p=0.047 and p=0.006, respectively) (Figure 6). Relation between VEGF-C mRNA level and histopathological changes As the IV mode of the nanodrug showed superior performance over the oral form, this group was subjected to evaluation of histopathological changes. 
Our results showed that in the DOX treated group, 6/13 of the lesions showed a low stage (no/mild/moderate dysplasia) while 7/13 were advanced (severe dysplasia, carcinoma in situ and OSCC). On the other hand, we observed a marked increase in the frequency of low-stage tumors (12/14 vs 2/14) in the group treated with IV doses of the nanodrug. Pathological changes differed significantly between the two groups (p<0.05). Furthermore, no pathological changes were detected in any of the healthy controls, whilst all rats of the cancerous group developed aggressive lesions. Subsequently, we tested the relation between the VEGF-C mRNA profile and the tumor stage in the DOX and DOX-MTX NP treated groups. In this respect, according to the observed mRNA fold changes, samples were categorized into two main groups: a group with high mRNA and a group with low/moderate mRNA levels. As shown in Table 1, high stages of tongue cancer tend to exhibit high expression of VEGF-C, while a low level of VEGF-C in samples indicated a good prognosis with improved clinical outcome (p=0.011). Discussion Squamous cell carcinoma (SCC) is the most common head and neck cancer, with poor clinical outcome (Montoro et al., 2008). One of the factors governing the poor prognosis of oral SCC is frequent metastasis of cancer cells to local lymph nodes or distant organs (Rusthoven et al., 2010). Combinatorial chemotherapy coupled with nanomedicine has opened an appealing window for current therapeutic approaches, which often failed due to tumor cell resistance and unwanted side effects of drugs on normal cells; with the advent of nano-based drug delivery systems and combination chemotherapeutic agents this problem has been drastically reduced (Liboiron and Mayer, 2014). (Figure caption: There was no significant difference in the VEGF-C level of treated healthy groups, in either modality of the nanodrug, compared with that of the untreated healthy control. *Indicates a significant p value, p<0.05.) Malignancies have the ability to induce growth of new blood vessels, which is important for tumor progression, 
aggressiveness, and the ability to metastasize. Angiogenesis is highly regulated to maintain the intricate balance of pro- and anti-angiogenic factors (Schliephake, 2003). Marked angiogenesis correlates well with the risk of nodal metastases and should probably imply more aggressive postoperative adjuvant therapy (Shintani et al., 2004). Among the VEGF family, it is known that VEGF-C and VEGF-D are expressed in lymph nodes and promote nodal metastasis in OSCC (Shintani et al., 2004; Uehara et al., 2004). Moreover, OSCC angiogenesis correlates with the T and N parameters and is an independent predictor of tumor recurrence and a reliable prognosticator (Zhao et al., 2013). In this respect, drugs that could inhibit the activity of lymph node metastasis dissemination factors may provide a promising approach for the treatment of aggressive tumors including OSCC. The cytotoxic effect of doxorubicin is most often attributed to its potential to induce apoptosis by recruiting divergent targets (Gibson et al., 2005; Huang et al., 2011). However, to our knowledge there is no report indicating an anti-angiogenic potential of doxorubicin in vitro or in vivo; indeed, in our study neither oral nor IV forms of DOX affected the VEGF-C mRNA level in the treated groups. Surprisingly, in the DOX treated groups even a subtle up-regulation of VEGF-C was observed, although the difference was not statistically significant. Interestingly, both oral and IV forms of the newly formulated DOX (DOX-MTX NP) showed high performance in affecting the VEGF-C mRNA level, and in particular this decrease in the amount of VEGF-C paralleled less aggressive tumors in this group. 
In a single study related to the anti-angiogenic properties of DOX, Fengjun Liu et al. indicated a role for DOX in inhibiting VEGF expression in hepatocellular carcinoma: "Gene transfer of antisense HIF-1α downregulated the expression of both HIF-1α and VEGF, whereas doxorubicin only downregulated VEGF expression. Both antisense HIF-1α and doxorubicin inhibited expression of proliferating cell nuclear antigen, and combined to exert even stronger inhibition of proliferating cell nuclear antigen expression" (Liu et al., 2007; 2008). In this paper, Fengjun Liu et al. clearly indicated the feasible anti-angiogenic properties of DOX via modulating the expression of PCNA. On the other hand, there are several studies that clearly indicate the anti-angiogenic properties of MTX. Methotrexate is a well-known effective therapeutic modality in the treatment of psoriasis. Apart from reducing the expression of CD31 (a known endothelial cell marker) in psoriatic lesions, a study by Olfat G. Shaker et al. in 2013 showed that the percentage of clinical improvement in the examined psoriatic plaque was significantly positively correlated with the percentage of reduction in the amount of VEGF mRNA (r=0.850, p<0.001) and the percentage of reduction in capillary perfusion (Shaker et al., 2013). Moreover, methotrexate inhibits in vitro corneal vascular endothelial cell proliferation and blocks VEGF- and bFGF-induced corneal neovascularization in vivo (Hirata et al., 1989). Others have also demonstrated a significant decrease in serum levels of VEGF after 2 and 6 months of treatment with low-dose methotrexate and cyclophosphamide in patients with metastatic breast cancer (Colleoni et al., 2002). In addition, neoadjuvant multidrug chemotherapy including high-dose methotrexate modifies VEGF expression in osteosarcoma (Rossi et al., 2010; Park et al., 2011). 
We have previously shown that beside its apoptotic effects, our nanodrug exerts potent anti-proliferative and anti-metastatic potential as well (Mesgari Abbasi et al., 2014). The anti-VEGF-C potential of DOX-MTX NP is an extra merit beside its enhanced apoptotic effects in its new format. These features make DOX-MTX NP a powerful multifunctional weapon designed to combat resistant/aggressive forms of a wide range of tumors whilst having the least cytotoxic effect on normal surrounding cells. In conclusion, according to the obtained results, DOX-MTX NPs are safe drugs that have the potential to inhibit the activity of VEGF-C and improve the clinical outcome in invasive stages of OSCC. Figure 3. The Effect of IV Dosage of DOX and DOX-MTX NP on mRNA Level of VEGF-C in the OSCC Cancer Model in Rat. The VEGF-C mRNA level decreased significantly with DOX-MTX NPs compared to the untreated cancerous group, whilst IV dosages of DOX could not alter the amount of VEGF-C in the treated group. *Indicates a significant p value (p<0.05). Figure 6. Comparison between the Efficacy of Oral and IV Dosages of DOX and DOX-MTX NP in Affecting VEGF-C mRNA Expression in the OSCC Model in Rat. Both oral and IV routes of DOX-MTX NP showed superior performance over DOX in affecting VEGF-C expression. *Indicates a significant p value (p<0.05). Figure 4. Comparison between the Efficacy of IV and Oral Administration of DOX-MTX NP in Affecting VEGF-C mRNA Expression in the OSCC Model in Rat. There was no difference between the oral and IV modalities in terms of affecting the VEGF-C mRNA level in the groups treated with DOX and DOX-MTX NPs. *Indicates a significant p value (p<0.05). DOI: http://dx.doi.org/10.7314/APJCP.2014.15.15.6227 Table 1. 
High Stages of Tongue Cancer tend to Exhibit High Expression of VEGF-C, While a Low Level of VEGF-C in Samples Indicated a Good Prognosis with Improved Clinical Outcome (p=0.011)
4,845.4
2014-01-01T00:00:00.000
[ "Medicine", "Biology" ]
Adaptive Nearest Neighbor Machine Translation kNN-MT, recently proposed by Khandelwal et al. (2020a), successfully combines a pre-trained neural machine translation (NMT) model with token-level k-nearest-neighbor (kNN) retrieval to improve translation accuracy. However, the traditional kNN algorithm used in kNN-MT simply retrieves the same number of nearest neighbors for each target token, which may cause prediction errors when the retrieved neighbors include noise. In this paper, we propose Adaptive kNN-MT to dynamically determine the value of k for each target token. We achieve this by introducing a light-weight Meta-k Network, which can be efficiently trained with only a few training samples. On four benchmark machine translation datasets, we demonstrate that the proposed method is able to effectively filter out the noise in retrieval results and significantly outperforms the vanilla kNN-MT model. Even more noteworthy is that the Meta-k Network learned on one domain can be directly applied to other domains and obtain consistent improvements, illustrating the generality of our method. Our implementation is open-sourced at https://github.com/zhengxxn/adaptive-knn-mt. kNN-MT, recently proposed in (Khandelwal et al., 2020a), equips a pre-trained NMT model with a kNN classifier over a datastore of cached context representations and corresponding target tokens, providing a simple yet effective strategy to utilize cached contextual information at inference. However, the hyper-parameter k is fixed for all cases, which raises some potential problems. Intuitively, the retrieved neighbors may include noise when the target token is relatively hard to determine (e.g., relevant context is not sufficient in the datastore). And empirically, we find that the translation quality is very sensitive to the choice of k, resulting in poor robustness and generalization performance. 
To tackle this problem, we propose Adaptive kNN-MT, which determines the choice of k for each target token adaptively. Specifically, instead of utilizing a fixed k, we consider a set of possible values of k that are smaller than an upper bound K. Then, given the retrieval results for the current target token, we propose a light-weight Meta-k Network to estimate the importance of all possible k-nearest-neighbor results, based on which they are aggregated to obtain the final decision of the model. In this way, our method dynamically evaluates and utilizes the neighbor information conditioned on each target token, thereby improving the translation performance of the model. We conduct experiments on multi-domain machine translation datasets. Across four domains, our approach achieves 1.44∼2.97 BLEU score improvements over the vanilla kNN-MT on average when K ≥ 4. The introduced light-weight Meta-k Network only requires thousands of parameters and can be easily trained with a few training samples. In addition, we find that the Meta-k Network learned on one domain can be directly applied to other domains and obtain consistent improvements. 2 Background: kNN-MT In this section, we briefly introduce the background of kNN-MT, which includes two steps: creating a datastore and making predictions based on it. Datastore Creation. The datastore consists of a set of key-value pairs. Formally, given a bilingual sentence pair in the training set (x, y) ∈ (X, Y), a pre-trained autoregressive NMT decoder translates the t-th target token y_t based on the translation context (x, y_<t). Denoting the hidden representation of the translation context as f(x, y_<t), the datastore is constructed by taking f(x, y_<t) as keys and y_t as values. Therefore, the datastore can be created through a single forward pass over the training set (X, Y). Prediction. 
During inference, at each decoding step t, the kNN-MT model aims to predict ŷ_t given the already generated tokens ŷ_<t as well as the context representation f(x, ŷ_<t), which is used to query the datastore for the k nearest neighbors w.r.t. the l2 distance. Denote the retrieved neighbors as N_t = {(h_i, v_i), i ∈ {1, 2, ..., k}}; their distribution over the vocabulary is computed as:

p_kNN(y_t | x, ŷ_<t) ∝ Σ_{(h_i, v_i) ∈ N_t} 1_{y_t = v_i} exp(−d(h_i, f(x, ŷ_<t)) / T)    (1)

where T is the temperature and d(·, ·) indicates the l2 distance. The final probability when predicting y_t is calculated as the interpolation of the two distributions with a hyper-parameter λ:

p(y_t | x, ŷ_<t) = λ p_kNN(y_t | x, ŷ_<t) + (1 − λ) p_NMT(y_t | x, ŷ_<t)    (2)

where p_NMT indicates the vanilla NMT prediction. Adaptive kNN-MT The vanilla kNN-MT method utilizes a fixed number of translation contexts for every target token, which fails to exclude the noise contained in retrieved neighbors when there are not enough relevant items in the datastore. We show an example with k = 32 in Figure 1. The correct prediction, spreadsheet, has been retrieved among the top candidates. However, the model will finally predict table instead because it appears more frequently in the datastore than the correct prediction. A naive way to filter the noise is to use a small k, but this will also cause over-fitting problems in other cases. In fact, the optimal choice of k varies when utilizing different datastores in vanilla kNN-MT, leading to poor robustness and generalizability of the method, which is empirically discussed in Section 4.2. To tackle this problem, we propose a dynamic method that allows each untranslated token to utilize a different number of neighbors. Specifically, we consider a set of possible values of k that are smaller than an upper bound K, and introduce a light-weight Meta-k Network to estimate the importance of utilizing different values of k. 
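Equations (1) and (2) can be sketched in plain Python; the neighbor list, temperature and λ below are toy values, not taken from the paper:

```python
import math

def knn_distribution(neighbors, temperature):
    """Equation (1): p_kNN(y) is a temperature-softmax over negative
    l2 distances, with weights accumulated per retrieved value.
    `neighbors` is a list of (value_token, l2_distance) pairs."""
    weights = [math.exp(-d / temperature) for _, d in neighbors]
    z = sum(weights)
    probs = {}
    for (v, _), w in zip(neighbors, weights):
        probs[v] = probs.get(v, 0.0) + w / z
    return probs

def interpolate(p_knn, p_nmt, lam):
    """Equation (2): p(y) = lam * p_kNN(y) + (1 - lam) * p_NMT(y)."""
    vocab = set(p_knn) | set(p_nmt)
    return {v: lam * p_knn.get(v, 0.0) + (1 - lam) * p_nmt.get(v, 0.0)
            for v in vocab}

# Two of three retrieved neighbors share the value "table", so its
# kNN probability accumulates across both occurrences.
p_knn = knn_distribution([("table", 1.0), ("table", 2.0), ("spreadsheet", 1.5)], 10.0)
p_final = interpolate(p_knn, {"table": 0.6, "spreadsheet": 0.4}, 0.5)
```

This toy case illustrates the failure mode in Figure 1: a value that merely appears more often among the retrieved neighbors accumulates probability mass regardless of whether it is correct.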
Practically, we consider the powers of 2 as the choices of k for simplicity, as well as k = 0, which indicates ignoring kNN and utilizing only the NMT model, i.e., k ∈ S where S = {0, 1, 2, 4, 8, ..., K}. The Meta-k Network then evaluates the probability of the different kNN results by taking the retrieved neighbors as inputs. Concretely, at the t-th decoding step, we first retrieve K neighbors N_t from the datastore, and for each neighbor (h_i, v_i) we calculate its distance from the current context representation, d_i = d(h_i, f(x, ŷ_<t)). Denoting d = (d_1, ..., d_K) as the distances and c = (c_1, ..., c_K) as the counts of values among all retrieved neighbors, we concatenate them as the input features to the Meta-k Network. The reasons for doing so are two-fold. Intuitively, the distance of each neighbor is the most direct evidence when evaluating its importance. In addition, the value distribution of the retrieved results is also crucial for making the decision, i.e., if the values of the retrieved results are all distinct, then the kNN predictions are less credible and we should depend more on the NMT predictions. We construct the Meta-k Network f_Meta(·) as two feed-forward networks with a non-linearity between them. Given [d; c] as input, the probability of applying each kNN result is computed as:

p_Meta(k) = softmax(f_Meta([d; c]))    (3)

Prediction. Instead of introducing the hyper-parameter λ as in Equation (2), we aggregate the NMT model and the different kNN predictions with the output of the Meta-k Network to obtain the final prediction:

p(y_t | x, ŷ_<t) = Σ_{k_i ∈ S} p_Meta(k_i) · p_{k_i NN}(y_t | x, ŷ_<t)    (4)

where p_{k_i NN} indicates the k_i-nearest-neighbor prediction results calculated as in Equation (1). Training. We fix the pre-trained NMT model and only optimize the Meta-k Network by minimizing the cross-entropy loss following Equation (4), which can be very efficient, requiring only hundreds of training samples. Experimental Setup We evaluate the proposed model on domain adaptation machine translation tasks, in which a pre-trained general-domain NMT model is used to translate domain-specific sentences with kNN searching over an in-domain datastore. 
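A minimal sketch of the aggregation in Equation (4), with the Meta-k Network itself replaced by fixed toy logits (the real f_Meta is two trained feed-forward layers over the [d; c] features):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def aggregate(meta_logits, nmt_dist, knn_dists):
    """Equation (4): mix the NMT distribution (the k = 0 choice) with
    each k_i-nearest-neighbor distribution, weighted by p_Meta."""
    weights = softmax(meta_logits)   # p_Meta over S = {0, 1, 2, 4, ...}
    dists = [nmt_dist] + knn_dists   # index 0 <-> k = 0 (NMT only)
    vocab = set().union(*dists)
    return {v: sum(w * d.get(v, 0.0) for w, d in zip(weights, dists))
            for v in vocab}

# Toy logits favoring neither choice: the result is a 50/50 mixture
# of the NMT distribution and a single kNN distribution.
p = aggregate([0.0, 0.0], {"table": 1.0}, [{"spreadsheet": 1.0}])
```

Because the mixture weights come from a softmax, the output is itself a normalized distribution, so no extra interpolation hyper-parameter λ is needed.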
This is the most appealing application of kNN-MT as it can achieve comparable results to an in-domain NMT model without training on any in-domain data. We denote the proposed model as Adaptive kNN-MT (A) and compare it with two baselines: the vanilla kNN-MT (V) and a uniform kNN-MT (U) in which we set equal confidence for each kNN prediction. Datasets and Evaluation Metric. We use the same multi-domain dataset as the baseline (Khandelwal et al., 2020a), and consider the IT, Medical, Koran, and Law domains in our experiments. The sentence statistics of the datasets are given in Table 1. The Moses toolkit 1 is used to tokenize the sentences and split the words into subword units (Sennrich et al., 2016) with the bpecodes provided by . We use SacreBLEU 2 to measure all results with case-sensitive detokenized BLEU (Papineni et al., 2002). We directly use the dev set (about 2k sentences) to train the Meta-k Network for about 5k steps. We use Adam (Kingma and Ba, 2015) to optimize our model; the learning rate is set to 3e-4 and the batch size to 32 sentences. Main Results The experimental results are listed in Table 2. We can observe that the proposed Adaptive kNN-MT significantly outperforms the vanilla kNN-MT on all domains, illustrating the benefits of dynamically determining and utilizing the neighbor information for each target token. In addition, the performance of the vanilla model is sensitive to the choice of K, while our proposed model is more robust, with smaller variance. More specifically, our model achieves better results when choosing a larger number of neighbors, while the vanilla model suffers from performance degradation when K = 32, indicating that the proposed Meta-k Network is able to effectively evaluate and filter the noise in the retrieved neighbors, while a fixed K cannot. 
We also compare our proposed method with another naive baseline, uniform kNN-MT, in which we set equal confidence for each kNN prediction, making it close to vanilla kNN-MT with a small k. This further demonstrates that our method really learns something useful rather than simply biasing toward smaller k. Generality. To demonstrate the generality of our method, we directly utilize the Meta-k Network trained on the IT domain to evaluate other domains. For example, we use the Meta-k Network trained on the IT domain together with the medical datastore to evaluate performance on the medical test set. For comparison, we collect the in-domain results from Table 2. We set K = 32 for both settings. As shown in Table 3, the Meta-k Network trained on the IT domain achieves performance on all other domains comparable to re-training the Meta-k Network with the in-domain dataset. These results also indicate that the mapping from our designed features to the confidence of retrieved neighbors is shared across different domains. Robustness. We also evaluate the robustness of our method in a domain-mismatch setting, where we consider a scenario in which the user inputs an out-of-domain sentence (e.g., IT domain) to a domain-specific translation system (e.g., medical domain). Specifically, in the IT ⇒ Medical setting, we first use the medical dev set and datastore to tune the hyper-parameters for vanilla kNN-MT or to train the Meta-k Network for Adaptive kNN-MT, and then use the IT test set to test the model with the medical datastore. We set K = 32 in this experiment. As shown in Table 4, the retrieved results are highly noisy, so the vanilla kNN-MT encounters drastic performance degradation. In contrast, our method can effectively filter out the noise and therefore prevents performance degradation as much as possible. Case Study. 
Table 5 shows a translation example selected from the test set in the Medical domain with K = 32.
Source: Wenn eine gleichzeitige Behandlung mit Vitamin K Antagonisten erforderlich ist, müssen die Angaben in Abschnitt 4.5 beachtet werden.
Reference: therapy with vitamin K antagonist should be administered in accordance with the information of Section 4.5.
Base NMT: If a simultaneous treatment with vitamin K antagonists is required, the information in section 4.5 must be observed.
kNN-MT: If concomitant treatment with vitamin K antagonists is required, please refer to section 4.5.
Adaptive kNN-MT: When required, concomitant therapy with vitamin K antagonist should be administered in accordance with the information of Section 4.5.
We can observe that the Meta-k Network determines the choice of k for each target token individually, based on which Adaptive kNN-MT better leverages the in-domain datastore to achieve proper word selection and language style. Analysis. Finally, we study the effect of the two designed features, the number of training sentences, and the hidden size of the proposed Meta-k Network. We conduct these ablation studies on the IT domain with K = 8. All experimental results are summarized in Table 6 and Figure 2. Both features contribute significantly to the excellent performance of our model, with the distance feature being more important. Surprisingly, our model can outperform the vanilla kNN-MT with only 100 training sentences, or with a hidden size of 8 that contains only around 0.6k parameters, showing the efficiency of our model. Conclusion and Future Works In this paper, we propose the Adaptive kNN-MT model to dynamically determine the utilization of retrieved neighbors for each target token by introducing a light-weight Meta-k Network. 
In the experiments, on the domain adaptation machine translation tasks, we demonstrate that our model is able to effectively filter the noise in retrieved neighbors and significantly outperform the vanilla kNN-MT baseline. In addition, the superiority of our method in generality and robustness is also verified. In the future, we plan to extend our method to other tasks like Language Modeling, Question Answering, etc., which can also benefit from utilizing kNN searching (Khandelwal et al., 2020b; Kassner and Schütze, 2020). A.1 Datastore Creation We first use a numpy array to save the key-value pairs over the training sets as the datastore. Then, faiss is used to build an index for each datastore to carry out fast nearest neighbor search. We utilize faiss to learn 4k cluster centroids for each domain, and search 32 clusters for each target token in decoding. The size of each datastore (count of target tokens) and the hard-disk space of the datastore as well as the faiss index are shown in Table 7. A.2 Hyper-Parameter Tuning for kNN-MT The performance of vanilla kNN-MT is highly related to the choice of hyper-parameters, i.e., k, T, and λ. We fix T as 10 for IT, Medical, and Law, and 100 for Koran in all experiments. Then, we tuned k and λ for each domain when using kNN-MT, and the optimal choices for each domain are shown in Table 9. The performance of kNN-MT is unstable across different hyper-parameters, whereas our Adaptive kNN-MT avoids this problem. A.3 Decoding Time We compare the decoding time on the IT test set of NMT, kNN-MT (our replication), and Adaptive kNN-MT with different batch sizes. In decoding, the beam size is set to 4 with length penalty 0.6. The results are summarized in Table 8.
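The interaction of the three hyper-parameters discussed in A.2 can be illustrated with a minimal numpy sketch of the vanilla kNN-MT scoring rule: a softmax over negative retrieval distances scaled by the temperature T, linearly interpolated with the NMT distribution via λ. The toy vocabulary, distances, and values below are purely illustrative, not drawn from the paper's datastores.

```python
import numpy as np

def knn_mt_probs(nmt_probs, neighbor_tokens, neighbor_dists, T=10.0, lam=0.7):
    """Vanilla kNN-MT: p(y) = lam * p_kNN(y) + (1 - lam) * p_NMT(y).

    p_kNN places softmax(-d_i / T) mass on the target token of each
    retrieved neighbor; duplicate tokens among the k neighbors accumulate.
    """
    weights = np.exp(-np.asarray(neighbor_dists, dtype=float) / T)
    weights /= weights.sum()
    p_knn = np.zeros_like(nmt_probs)
    for tok, w in zip(neighbor_tokens, weights):
        p_knn[tok] += w
    return lam * p_knn + (1.0 - lam) * nmt_probs

# Toy example: 5-token vocabulary, k = 4 retrieved neighbors.
p_nmt = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
tokens = [2, 2, 3, 1]          # target tokens of the retrieved neighbors
dists = [1.0, 1.5, 8.0, 9.0]   # L2 distances; smaller = more similar
p = knn_mt_probs(p_nmt, tokens, dists, T=10.0, lam=0.7)
```

With a large λ and a small T, the distribution concentrates on the nearest neighbors' tokens, which is exactly the regime in which retrieval noise hurts vanilla kNN-MT and the Meta-k Network's per-token weighting helps.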
Presynaptic mGluR5 receptor controls glutamatergic input through protein kinase C–NMDA receptors in paclitaxel-induced neuropathic pain Chemotherapeutic drugs such as paclitaxel cause painful peripheral neuropathy in many cancer patients and survivors. Although NMDA receptors (NMDARs) at primary afferent terminals are known to be critically involved in chemotherapy-induced chronic pain, the upstream signaling mechanism that leads to presynaptic NMDAR activation is unclear. Group I metabotropic glutamate receptors (mGluRs) play a role in synaptic plasticity and NMDAR regulation. Here we report that the Group I mGluR agonist (S)-3,5-dihydroxyphenylglycine (DHPG) significantly increased the frequency of miniature excitatory postsynaptic currents (EPSCs) and the amplitude of monosynaptic EPSCs evoked from the dorsal root. DHPG also reduced the paired-pulse ratio of evoked EPSCs in spinal dorsal horn neurons. These effects were blocked by the selective mGluR5 antagonist 2-methyl-6-(phenylethynyl)-pyridine (MPEP), but not by an mGluR1 antagonist. MPEP normalized the frequency of miniature EPSCs and the amplitude of evoked EPSCs in paclitaxel-treated rats but had no effect in vehicle-treated rats. Furthermore, mGluR5 protein levels in the dorsal root ganglion and spinal cord synaptosomes were significantly higher in paclitaxel- than in vehicle-treated rats. Inhibiting protein kinase C (PKC) or blocking NMDARs abolished DHPG-induced increases in the miniature EPSC frequency of spinal dorsal horn neurons in vehicle- and paclitaxel-treated rats. Moreover, intrathecal administration of MPEP reversed pain hypersensitivity caused by paclitaxel treatment. Our findings suggest that paclitaxel-induced painful neuropathy is associated with increased presynaptic mGluR5 activity at the spinal cord level, which serves as upstream signaling for PKC-mediated tonic activation of NMDARs. mGluR5 is therefore a promising target for reducing chemotherapy-induced neuropathic pain. 
Painful peripheral neuropathy is a major dose-limiting side effect of some commonly used chemotherapeutic agents, including paclitaxel, bortezomib, oxaliplatin, and vincristine (1,2). Persistent and severe pain may require dose reduction or cessation of chemotherapy, which can increase cancer-related morbidity and mortality. The pathophysiology of chemotherapy-induced neuropathic pain is not fully known, and treatment options remain limited. Glutamate released from the central terminals of nociceptive primary afferents is critically involved in nociceptive transmission in the spinal dorsal horn (3,4). Activation of glutamate-gated N-methyl-D-aspartate receptors (NMDARs) and the resulting Ca2+ influx through NMDAR channels are crucial for promoting synaptic plasticity in both physiological processes, such as learning and memory, and pathological conditions, such as neuropathic pain (5–7). Treatment with paclitaxel or bortezomib potentiates nociceptive input from primary afferent nerves by inducing tonic activation of presynaptic NMDARs in the spinal cord via protein kinase C (PKC) (8,9). However, the upstream signaling mechanism that leads to activation of PKC and presynaptic NMDARs in chemotherapy-induced neuropathic pain remains unclear. Glutamate activates 2 major types of receptors: fast-acting ionotropic glutamate receptors and G protein-coupled metabotropic glutamate receptors (mGluRs). Eight mGluR subtypes have been identified; they can be divided into 3 groups on the basis of their signal transduction mechanisms, pharmacology, and sequence homology (10). In contrast to Group II mGluRs (mGluR2 and mGluR3) and Group III mGluRs (mGluR4, mGluR6, mGluR7, and mGluR8), which are coupled to inhibitory Gαi/o proteins, Group I mGluRs (mGluR1 and mGluR5) preferentially activate phospholipase C via coupling to stimulatory Gαq/11 proteins (11). 
Activation of Group I mGluRs stimulates the phospholipase Cβ-associated pathway, producing inositol 1,4,5-triphosphate and diacylglycerol. Inositol 1,4,5-triphosphate releases calcium from cellular stores, activating calcium-dependent ion channels; intracellular calcium and diacylglycerol stimulate PKC and its associated downstream signaling pathways (11,12). In the brain, Group I mGluRs are often activated when synaptic glutamate release increases (11,13). Chemotherapy-induced tonic activation of presynaptic NMDARs potentiates synaptic glutamate release in the spinal dorsal horn (8,9). Both Group I mGluRs and NMDARs are closely associated with maintaining long-lasting enhancement of excitatory synaptic transmission (14,15). Nevertheless, the role of Group I mGluRs and their link to PKC-mediated presynaptic NMDAR activation in chemotherapy-induced neuropathic pain remain unknown. Therefore, in this study, we sought to determine the role of Group I mGluRs in the regulation of glutamatergic synaptic transmission in the spinal dorsal horn in paclitaxel-induced neuropathic pain. We found that presynaptic mGluR5, a Group I mGluR, acts in concert with PKC and NMDARs to form a signaling cascade that maintains long-lasting enhancement of synaptic glutamate release to spinal dorsal horn neurons in paclitaxel-induced neuropathic pain. This new information extends our understanding of the molecular mechanisms of chemotherapy-induced neuropathic pain and points to new strategies to treat this condition. Stimulation of mGluR5 increases glutamatergic input to spinal dorsal horn neurons Group I mGluRs are expressed in the dorsal root ganglion (DRG) and spinal dorsal horn (12,16,17). We first recorded miniature excitatory postsynaptic currents (mEPSCs) in spinal cord lamina II neurons and determined whether stimulating mGluR1 or mGluR5 affects synaptic glutamate release to these neurons in naive rats. 
Bath application of (S)-3,5-dihydroxyphenylglycine (DHPG, 20 μM), a combined mGluR1 and mGluR5 agonist (15,18), significantly increased the baseline frequency, but not the amplitude, of mEPSCs in all lamina II neurons examined (n = 8 neurons, Fig. 1, A and B). Stimulation of mGluR5 increases glutamatergic input from primary afferent terminals to spinal dorsal horn neurons To determine specifically whether mGluR5 at primary afferent terminals is involved in regulating glutamatergic input to spinal cord dorsal horn neurons, we recorded monosynaptic EPSCs of spinal lamina II neurons evoked from the dorsal root, which correspond to induced glutamate release from primary sensory nerves (8,21). Because the amplitude of evoked EPSCs largely depends on the intensity of the stimulation, we normalized it to the baseline amplitude for each recorded neuron. Bath application of 20 μM DHPG significantly increased the amplitude of monosynaptic EPSCs of lamina II neurons from naive rats (n = 9 neurons, Fig. 2, A and B). To confirm the presynaptic action of DHPG, we also examined the effect of DHPG on the paired-pulse ratio (PPR) of monosynaptically evoked EPSCs in spinal dorsal horn neurons. Bath application of DHPG significantly reduced the PPR of evoked EPSCs in lamina II neurons (n = 10 neurons, Fig. 2, A and B). In addition, we determined whether selectively stimulating mGluR1 or mGluR5 affects synaptic glutamate release from primary afferent terminals. Bath application of 20 μM DHPG plus 50 μM LY367385 (to isolate mGluR5 activation) significantly increased the amplitude of monosynaptic EPSCs of lamina II neurons from naive rats (n = 14 neurons, Fig. 2, C and D). However, bath application of DHPG plus 20 μM MPEP (to isolate mGluR1 activation) had no effect on the amplitude of evoked EPSCs of lamina II neurons (n = 10 neurons). Furthermore, we examined the effect of stimulating mGluR1 or mGluR5 on the PPR of monosynaptically evoked EPSCs in spinal dorsal horn neurons from naive rats. 
Bath application of DHPG plus LY367385 significantly reduced the PPR of evoked EPSCs from baseline levels (n = 14 neurons, Fig. 2, C and D). In contrast, bath application of DHPG plus MPEP had no significant effect on the PPR of evoked EPSCs (n = 14 neurons). Together, these results suggest that presynaptic mGluR5 at primary afferent terminals controls glutamatergic input to spinal dorsal horn neurons. Presynaptic mGluR5 mediates paclitaxel-induced increases in synaptic glutamate release to spinal dorsal horn neurons We next determined whether presynaptic mGluR5 plays a role in the paclitaxel-induced increase in glutamatergic input to spinal dorsal horn neurons. We examined the differential effect of MPEP on the mEPSCs of lamina II neurons from vehicle- and paclitaxel-treated rats. Bath application of 1 to 20 μM MPEP decreased the frequency, but not the amplitude, of mEPSCs in lamina II neurons from paclitaxel-treated rats in a concentration-dependent manner (n = 9 neurons, Fig. 3). At 20 μM, MPEP application completely normalized paclitaxel-induced increases in the frequency of mEPSCs to the level observed in neurons of vehicle-treated rats. In contrast, MPEP had no effect on the frequency or amplitude of mEPSCs in lamina II neurons from vehicle-treated rats (n = 8 neurons, Fig. 3). These data suggest that presynaptic mGluR5 plays a critical role in the paclitaxel-induced increase in synaptic glutamate release to spinal dorsal horn neurons. mGluR5 mediates paclitaxel-induced increases in glutamatergic transmission from primary afferent nerves to spinal dorsal horn neurons We next determined whether mGluR5 is involved in paclitaxel-induced increases in glutamatergic transmission between primary afferent nerves and spinal dorsal horn neurons. We recorded EPSCs of spinal lamina II neurons monosynaptically evoked from the dorsal root using a constant stimulation intensity. 
The baseline amplitude of the evoked EPSCs was significantly higher in paclitaxel-treated rats than in vehicle control rats (Fig. 4, A-C). Bath application of 20 μM MPEP normalized the increased amplitude of evoked EPSCs of lamina II neurons from paclitaxel-treated rats (n = 10 neurons, Fig. 4, A-C) but had no effect in neurons from vehicle-treated rats (n = 9 neurons). Furthermore, we examined the effect of MPEP on the PPR of monosynaptically evoked EPSCs in spinal dorsal horn neurons. Bath application of MPEP reversed the reduction in the PPR of evoked EPSCs observed in neurons from paclitaxel-treated rats (n = 10 neurons, Fig. 4, B-D) but had no effect in neurons from vehicle-treated rats (n = 9 neurons). These results suggest that increased presynaptic mGluR5 activity at primary afferent terminals mediates paclitaxel-induced potentiation of glutamatergic synaptic input to spinal dorsal horn neurons. Paclitaxel treatment increases the mGluR5 protein levels in the DRG and dorsal spinal cord synaptosomes Because mGluR5 is expressed in primary sensory neurons in the DRG (12), we quantified paclitaxel-induced changes in the level of mGluR5 protein in the DRG. Western immunoblotting detected a single band corresponding to the correct molecular mass (~160 kDa) of mGluR5 protein in DRG tissues (Fig. 5A). The mGluR5 level in the DRG tissue was significantly higher in paclitaxel-treated rats than in vehicle-treated rats (n = 8 rats in each group, Fig. 5, A and B). Because the electrophysiological recording data (Figs. 3 and 4) showed that presynaptic mGluR5 activity in the spinal dorsal horn is increased by paclitaxel treatment, we also measured mGluR5 protein levels in synaptosomes isolated from the dorsal spinal cord. Western immunoblotting showed that mGluR5 protein levels in spinal cord synaptosomes were also significantly higher in paclitaxel- than in vehicle-treated rats (n = 8 rats in each group, Fig. 5, C and D). 
mGluR5 activation increases synaptic glutamate release to spinal dorsal horn neurons via PKC PKC plays a critical role in the paclitaxel-induced increase in synaptic glutamate release in the spinal cord (8). Because mGluR5 activation leads to increased PKC activity (11), we determined whether PKC plays a role in mGluR5-mediated increases in synaptic glutamate release. It has been shown that chelerythrine, a specific membrane-permeable PKC inhibitor, inhibits PKC activity in spinal cord slices (8,22). In vehicle-treated rats, bath application of 20 μM DHPG had no effect on the baseline frequency or amplitude of mEPSCs in spinal cord slices that had been incubated with 10 μM chelerythrine for 30 to 60 min (n = 8 neurons, Fig. 6, A and B, versus Fig. 1, A and B). In paclitaxel-treated rats, chelerythrine treatment normalized the increased baseline frequency of mEPSCs of lamina II neurons (n = 10 neurons, Fig. 6, C and D, versus Fig. 3, B and C). In these neurons, further bath application of 20 μM DHPG had no effect on the frequency of mEPSCs. These findings suggest that PKC is critically involved in the mGluR5-mediated glutamatergic input to spinal dorsal horn neurons that is potentiated by paclitaxel treatment. mGluR5 activation increases synaptic glutamate release to spinal dorsal horn neurons via presynaptic NMDARs Blocking mGluR5 (Figs. 3 and 4) or NMDARs (8) normalizes paclitaxel-induced increases in synaptic glutamate release to spinal dorsal horn neurons. We next determined whether NMDARs are involved in mGluR5-mediated increases in synaptic glutamate release to spinal dorsal horn neurons. In the presence of the specific NMDAR antagonist 2-amino-5-phosphonopentanoic acid (AP5, 50 μM), application of 20 μM DHPG had no significant effect on the frequency or amplitude of mEPSCs recorded from lamina II neurons of vehicle-treated rats (n = 8 neurons; Fig. 7, A and B). 
In another 8 lamina II neurons recorded from paclitaxel-treated rats, bath application of AP5 normalized the increased mEPSC frequency. In these neurons, subsequent application of DHPG failed to increase the frequency of mEPSCs in the presence of AP5 (n = 8 neurons; Fig. 7, C and D). These results suggest that mGluR5 activation increases the glutamatergic input to spinal dorsal horn neurons that is stimulated by paclitaxel treatment through presynaptic NMDARs. Blocking mGluR5 at the spinal cord level reverses paclitaxel-induced pain hypersensitivity In addition, we determined the functional significance of mGluR5 in the regulation of nociceptive transmission at the spinal cord level in paclitaxel-treated rats. We tested the effect of intrathecal injection of MPEP on the tactile and nociceptive withdrawal thresholds in both vehicle- and paclitaxel-treated rats. Paclitaxel treatment caused a large reduction in the paw withdrawal threshold in response to the tactile and noxious pressure stimuli (Fig. 8). Intrathecal administration of 10–60 μg of MPEP significantly reduced tactile allodynia and mechanical hyperalgesia in paclitaxel-treated rats in a dose-dependent manner (n = 8 rats in each group, Fig. 8). The effect of MPEP reached its maximum about 30 min after intrathecal injection and lasted for about 120 min. By contrast, intrathecal injection of 60 μg of MPEP had no significant effect on the tactile or pressure withdrawal thresholds in vehicle-treated rats (n = 8 rats, Fig. 8). These data indicate that increased mGluR5 activity at the spinal cord level contributes to the pain hypersensitivity induced by paclitaxel treatment. Discussion In this study, we showed that stimulation of presynaptic mGluR5, but not mGluR1, increases glutamatergic input to spinal dorsal horn neurons. In this regard, DHPG, a selective Group I mGluR (mGluR1 and mGluR5) agonist, profoundly increased the frequency of mEPSCs of spinal dorsal horn neurons. 
DHPG also significantly increased the amplitude of EPSCs monosynaptically evoked from the primary afferent nerves and reduced the PPR of evoked EPSCs. Furthermore, these effects of DHPG were abolished by blocking mGluR5 with MPEP, but not by blocking mGluR1 with LY367385. Our findings are consistent with those of previous molecular and biochemical studies showing that mGluR5, but not mGluR1, is expressed in DRG neurons and primary afferent nerve terminals in the superficial spinal dorsal horn (12,17). Our electrophysiological data indicate that stimulation of mGluR5 on primary afferent nerve terminals potentiates synaptic glutamate release to spinal dorsal horn neurons. A major finding of our study is that paclitaxel treatment induces tonic activation of presynaptic mGluR5, which in turn is responsible for the increased glutamatergic input to spinal dorsal horn neurons that promotes neuropathic pain. We demonstrated that MPEP reduced the frequency of mEPSCs of neurons of paclitaxel-treated rats in a concentration-dependent manner. MPEP also normalized the amplitude of EPSCs evoked from primary afferent nerves and the PPR of evoked EPSCs, both of which had been altered by paclitaxel treatment. These findings indicate that presynaptic mGluR5 is tonically activated by ambient glutamate in paclitaxel-treated rats. Furthermore, our results suggest that mGluR5 is up-regulated at the synaptic sites of the spinal dorsal horn in paclitaxel-induced neuropathic pain. Because mGluR5 is highly expressed in the DRG (12,16), it is possible that mGluR5 is synthesized in DRG neurons and transported to the central terminals of primary afferent nerves in the spinal cord. However, we found that blocking mGluR5 activity had no effect on synaptic glutamate release from primary afferent nerves in vehicle-treated rats. 
This differential effect of mGluR5 blockade in paclitaxel-treated and control rats suggests that presynaptic mGluR5 is silent under normal conditions but is tonically activated at primary afferent nerve terminals after paclitaxel treatment. It is unclear how paclitaxel treatment increases the mGluR5 protein level at presynaptic terminals in the spinal cord. mGluR5 physically interacts with tubulins (23) for microtubule-dependent active axonal transport. Paclitaxel, by causing microtubule stabilization, may impair axonal transport of mGluR5 in DRG neurons (24) to increase its accumulation in the DRG and at the primary afferent nerve terminals. Alternatively, paclitaxel-induced DRG neuronal damage may lead to augmented mGluR5 protein synthesis and transport to the presynaptic terminals in the spinal dorsal horn. The sources of glutamate for the paclitaxel-induced tonic activation of presynaptic mGluR5 probably include the same primary afferent terminals, excitatory interneurons in the dorsal horn, and/or surrounding synapses and glial cells (4, 25–27). Data from this and other recent studies indicate that blocking mGluR5, PKC, or NMDARs normalizes the increased glutamatergic input from the central terminals of primary afferent nerves caused by paclitaxel (8). However, the direct link between mGluR5, PKC, and NMDARs in the spinal dorsal horn remains elusive. PKC is involved in NMDAR phosphorylation (9) and in mGluR5-mediated potentiation of NMDAR currents in the brain (15,28). In the present study, PKC inhibition abolished DHPG-induced increases in the mEPSC frequency of dorsal horn neurons from vehicle- and paclitaxel-treated rats. We also found that blocking NMDARs abolished the stimulatory effect of DHPG on the mEPSC frequency in spinal cord neurons from both vehicle- and paclitaxel-treated rats. 
Thus, presynaptic mGluR5 probably serves as an upstream signal for PKC and NMDAR activation, increasing glutamatergic input to spinal dorsal horn neurons and leading to chemotherapy-related neuropathic pain. Our study provides new in vivo evidence about the causal relationship between augmented mGluR5 activity at the spinal cord level and pain hypersensitivity induced by paclitaxel. We showed that blocking mGluR5 at the spinal cord level profoundly attenuated tactile allodynia and mechanical hyperalgesia induced by paclitaxel. However, mGluR5 blockade had no effect on normal nociception in control rats. A similar differential effect of mGluR5 antagonists has been shown in other chronic pain models (16,29). Our findings have important therapeutic implications, as blocking mGluR5 at the spinal cord level could conceivably be an effective strategy for treating chemotherapy-induced painful neuropathy. In summary, our findings indicate that paclitaxel treatment causes up-regulation and tonic activation of presynaptic mGluR5, which in turn augments glutamatergic input to spinal dorsal horn neurons via PKC and NMDARs. Together, mGluR5, PKC, NMDARs, and glutamate release may form a signaling cascade and a positive feedback loop leading to sustained nociceptive input and pain hypersensitivity after paclitaxel treatment. Because mGluR5 is preferentially involved in augmenting spinal nociceptive transmission in paclitaxel-induced chronic pain but not in physiological pain conditions, mGluR5 could be a desirable target for the development of effective therapies specifically for chemotherapy-induced neuropathic pain. Animal model and paclitaxel treatment Experiments were carried out using adult male Sprague-Dawley rats (220–250 g; Harlan, Indianapolis, IN). A total of 78 rats were used for the entire study. 
All procedures and protocols were approved by the Institutional Animal Care and Use Committee of The University of Texas MD Anderson Cancer Center and were performed in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. To induce peripheral neuropathy, we injected rats intraperitoneally with 2 mg/kg of paclitaxel (TEVA Pharmaceuticals, North Wales, PA) on 4 alternate days, as described previously (8,30). Rats in the control group received intraperitoneal injection of the vehicle (Cremophor EL/ethanol, 1:1) on the same 4 alternate days. We confirmed the presence of mechanical hyperalgesia and tactile allodynia in the hindpaws of the rats 10 to 12 days after the completion of paclitaxel treatment. All terminal experiments were conducted 2 to 3 weeks after the last paclitaxel or vehicle injection. Nociceptive behavioral tests To determine tactile sensitivity, we placed rats individually on a mesh floor within suspended chambers and allowed them to acclimate for at least 30 min. We applied a series of calibrated von Frey filaments (Stoelting, Wood Dale, IL) perpendicularly to the plantar surface of both hindpaws with sufficient force to bend the filament for 6 s. Brisk withdrawal or flinching of the paw was considered a positive response. In the absence of a response, we applied the filament of the next greater force. After a response, we applied the filament of the next lower force. The tactile stimulus with a 50% likelihood of producing a withdrawal response was calculated using the "up-down" method (31). 
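The testing rule above (step down after a response, step up after a non-response) can be sketched as a simple staircase. The filament set and the crude threshold estimate (the mean force at the turning points) are illustrative simplifications; the paper's ref. 31 uses Dixon's tabulated up-down formula, which this sketch does not reproduce.

```python
# Typical von Frey filament forces in grams (illustrative set, not
# necessarily the exact filaments used in the paper).
FILAMENTS = [0.4, 0.6, 1.0, 2.0, 4.0, 6.0, 8.0, 15.0]

def up_down_sequence(responds, start_index=4, n_trials=6):
    """Staircase sketch of the up-down rule.

    responds(force) -> True if the paw withdraws at that force.
    Returns the tested (force, response) pairs and a crude 50%%
    threshold estimate: the mean force at the turning points.
    """
    i = start_index
    tested, turning_points = [], []
    prev = None
    for _ in range(n_trials):
        force = FILAMENTS[i]
        r = responds(force)
        tested.append((force, r))
        if prev is not None and r != prev:  # response pattern reversed
            turning_points.append(force)
        prev = r
        # Response -> next lower filament; no response -> next higher.
        i = max(i - 1, 0) if r else min(i + 1, len(FILAMENTS) - 1)
    estimate = (sum(turning_points) / len(turning_points)
                if turning_points else tested[-1][0])
    return tested, estimate

# Deterministic toy animal that withdraws whenever force >= 2.0 g:
# the staircase oscillates between 1.0 g and 2.0 g, giving 1.5 g.
seq, thr = up_down_sequence(lambda f: f >= 2.0)
```

The staircase converges around the true threshold, so the turning-point forces bracket it from above and below.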
To measure the mechanical nociceptive threshold, we conducted the paw pressure (Randall-Selitto) test on the left hindpaw using an analgesiometer (Ugo Basile, Varese, Italy). We pressed a foot pedal to activate a motor that applied a constantly increasing force on a linear scale. When the animal displayed pain by either withdrawing the paw or vocalizing, the pedal was immediately released, and the animal's nociceptive threshold was recorded (32,33). Each trial was repeated 2 or 3 times at ~2-min intervals, and the mean value was used as the force required to produce a withdrawal response. Spinal cord slice preparation Rats were anesthetized with 2 to 3% isoflurane, and the lumbar spinal cord at the L3-L6 level was removed through laminectomy. The spinal cord tissues were placed in an ice-cold sucrose artificial cerebrospinal fluid presaturated with 95% O2 and 5% CO2. The artificial cerebrospinal fluid contained (in mM) 234 sucrose, 3.6 KCl, 1.2 MgCl2, 2.5 CaCl2, 1.2 NaH2PO4, 12.0 glucose, and 25.0 NaHCO3. A vibratome was used to slice the spinal cord tissue transversely into 400-μm sections. The slices were preincubated in Krebs solution oxygenated with 95% O2 and 5% CO2 at 34°C for at least 1 h before being transferred to the recording chamber. The Krebs solution contained (in mM) 117.0 NaCl, 3.6 KCl, 1.2 MgCl2, 2.5 CaCl2, 1.2 NaH2PO4, 11.0 glucose, and 25.0 NaHCO3. Electrophysiological recordings in spinal cord slices The spinal cord slice was placed in a glass-bottomed chamber and continuously perfused with Krebs solution at 5.0 ml/min at 34°C. Lamina II outer zone neurons were visualized on an upright fixed-stage microscope with differential interference contrast/infrared illumination and selected for recordings. Most (>85%) lamina II neurons are glutamate-releasing excitatory interneurons (34) that predominantly receive nociceptive input from primary afferent nerves (3). 
EPSCs were recorded using whole-cell voltage-clamp techniques (3,35). The impedance of the glass recording electrode was 4–7 MΩ when it was filled with the internal solution containing (in mM). EPSCs were evoked from the dorsal root using a bipolar tungsten electrode connected to a stimulator (0.2 ms, 0.5 mA, 0.1 Hz) (27,35). Monosynaptic EPSCs were identified on the basis of the constant latency and absence of conduction failure of evoked EPSCs in response to a 20-Hz electrical stimulation, as we described previously (27,35). To measure the PPR, 2 EPSCs were evoked by a pair of stimuli administered at 50-ms intervals. The PPR was expressed as the ratio of the amplitude of the second synaptic response to the amplitude of the first synaptic response (8,27). In addition, miniature EPSCs (mEPSCs) were recorded in the presence of 2 μM strychnine, 10 μM bicuculline, and 1 μM tetrodotoxin at a holding potential of −60 mV (32,36). The input resistance was monitored, and the recording was abandoned if it changed by more than 15%. Synaptic currents were recorded using an amplifier (MultiClamp 700A, Axon Instruments, Foster City, CA), filtered at 1-2 kHz, and digitized at 10 kHz. Only 1 neuron was recorded in each spinal cord slice, and at least 3 rats were used in each recording protocol. 
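The PPR computation described above is a simple ratio, but its interpretation carries the presynaptic argument of the paper; a minimal sketch (with made-up amplitudes, not measurements from the study):

```python
def paired_pulse_ratio(amp1, amp2):
    """PPR = amplitude of the 2nd evoked EPSC / amplitude of the 1st.

    A lower PPR is conventionally read as a higher presynaptic release
    probability: more vesicles released by the 1st pulse leave fewer
    available for the 2nd pulse 50 ms later.
    """
    return amp2 / amp1

# Illustrative amplitudes in pA (hypothetical values).
baseline_ppr = paired_pulse_ratio(120.0, 96.0)   # 96/120 = 0.80
after_dhpg = paired_pulse_ratio(180.0, 117.0)    # 117/180 = 0.65
```

A drop in PPR together with an increased first-pulse amplitude, as reported for DHPG, localizes the effect to the presynaptic terminal rather than to postsynaptic receptor sensitivity.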
Western immunoblotting Two weeks after the end of treatment with paclitaxel or vehicle, the rats were anesthetized with 3% isoflurane, and the lumbar DRGs and dorsal spinal cord tissues were rapidly removed. The DRG tissues were homogenized in ice-cold radioimmunoprecipitation assay buffer containing protease and phosphatase inhibitor mixtures. The DRG homogenate was centrifuged at 12,000 × g for 10 min at 4°C, and the supernatant was collected. The spinal cord tissues were homogenized in 10 volumes of ice-cold HEPES-buffered sucrose (0.32 M sucrose, 1 mM EGTA, and 4 mM HEPES, pH 7.4) containing the protease inhibitor mixture (Sigma). The homogenate was centrifuged at 1,000 × g for 10 min at 4°C to remove the nuclei and large debris. The supernatant was centrifuged at 10,000 × g for 15 min to obtain the crude synaptosomal fraction. The synaptosomal pellet was lysed via hypo-osmotic shock in 9 volumes of ice-cold HEPES buffer with the protease inhibitor mixture for 30 min. The lysate was centrifuged at 25,000 × g for 20 min at 4°C to obtain the synaptosomal membrane fraction. Samples were subjected to 12% SDS-polyacrylamide gel electrophoresis and transferred to a polyvinylidene difluoride membrane. The blots were probed with rabbit polyclonal anti-mGluR5 antibody (catalog number AGC-007, Alomone Laboratories, Jerusalem, Israel). We have shown that this mGluR5 antibody detects a single protein band that matches the molecular mass (~160 kDa) of mGluR5 (16). An ECL kit (GE Healthcare) was used to detect the protein band, which was visualized and quantified with an Odyssey Fc Imager (LI-COR Biosciences, Lincoln, NE). We used PSD95 (catalog number 75-348, NeuroMab, Davis, CA) as the protein loading control for spinal cord synaptosomes, and GAPDH (catalog number ab9485, Abcam) as the loading control for the DRG tissue. 
The amount of mGluR5 protein in the DRG and spinal cord synaptosomes was normalized to the amount of GAPDH and PSD95, respectively, in the same blots. The mean value of mGluR5 protein in vehicle-treated rats was considered to be 1. Data analysis The amplitude of the evoked EPSCs was quantified by averaging 10 consecutive EPSCs off-line with Clampfit 10.0 software (Axon Instruments). The mEPSCs were analyzed using the MiniAnalysis program (Synaptosoft, Leonia, NJ), and the cumulative probability of the amplitude and the inter-event interval of the mEPSCs were compared using the Kolmogorov-Smirnov test. The group data were presented in box-and-whisker plots, which show the minimum, 25th percentile, median, 75th percentile, and maximum values. We used a Student's t test to compare the mGluR5 protein levels in the 2 groups. We used one-way analysis of variance followed by Tukey's post hoc test to compare values in more than 2 groups. To determine the effects of treatment on hyperalgesia and allodynia, we used repeated measures analysis of variance followed by Dunnett's post hoc test. p values < 0.05 were considered to be statistically significant.
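The Kolmogorov-Smirnov comparison of mEPSC inter-event-interval distributions described above can be sketched with scipy; the exponential waiting times below are synthetic stand-ins for recordings, with the "paclitaxel" cell firing at twice the rate of the "vehicle" cell (a shorter mean interval corresponds to a higher mEPSC frequency).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic inter-event intervals in seconds (illustrative data only,
# not the paper's recordings): exponential waiting times, with the
# "paclitaxel" neuron firing at twice the rate of the "vehicle" neuron.
iei_vehicle = rng.exponential(scale=0.50, size=300)
iei_paclitaxel = rng.exponential(scale=0.25, size=300)

# Two-sample Kolmogorov-Smirnov test on the cumulative distributions
# of inter-event intervals, as used for the mEPSC analysis above.
stat, p = stats.ks_2samp(iei_vehicle, iei_paclitaxel)

# Mean inter-event interval -> mEPSC frequency in Hz.
freq_vehicle = 1.0 / iei_vehicle.mean()
freq_paclitaxel = 1.0 / iei_paclitaxel.mean()
```

With 300 intervals per group and a twofold rate difference, the test rejects comfortably; `stats.ttest_ind` and `stats.f_oneway` cover the two-group and multi-group comparisons mentioned above in the same way.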
From Kant to Popper: Reason and Critical Rationalism in Organization Studies The objective of this essay is to revisit the theoretical construction of Critical Rationalism, starting from the philosophy of Kantian reason contained in the works Critique of pure reason and Critique of practical reason to discuss their respective influences over the work of Karl Popper. We aim, with this exercise, to shed light on the critical-rationalist approach in Organization Studies. Our argument is that Kantian thought has been conducive, on the one hand, to a negative philosophy that considers idealist prior notions and a priori knowledge fundamental to the creative conception of knowledge and, on the other hand, to a hypothetical-deductive science that seeks to bring us closer to truth through criticism. The basis of critical rationalism lies in the search for reason and transcendental truth. This is a call not only for the production of theories, but also for dedication to testing their validity – a problem that has not received much attention from researchers in the field of organization studies. most basic needs. We perform those actions because we believe in their purpose, in the sense that we believe that taking a particular course of action will lead to a certain outcome, which may or may not interest us. Understanding and vindicating the concept of universal reason is a task that has seduced many philosophers and scholars since the dawn of human civilization. Plato was one of the first philosophers to pose notions of reason. His "Allegory of the cave" is taught to this day in undergraduate courses so that students have the opportunity to grasp the transcendentality of knowledge, accessible only to those who are able to disentangle themselves from the currents of experience in order to witness true knowledge, a privilege granted exclusively to the most enlightened philosophers, the ones capable of engendering universal discourse and ideas. 
Plato's philosophical constructions come, as Châtelet (1994) points out, from the realization that men are unhappy, and that they are unhappy because they suffer and commit injustices while finding no empirically obtainable remedy able to overcome the pain caused by injustice. In dealing with reason, Plato's proposal is to make it a remedy for unjust human action. Reason, in this sense, would be a universal discourse, or a coherent, well-organized set of statements, legitimized in their process of development, so that any bona fide individual is forced to accept them as valid. Such a discourse would be able to answer every question man is able to ask and, to this end, its every word ought to have a correspondence in the real world. One needs to make sure that their discourse is not a hollow shell, that there is something that corresponds to it, and that there are consistent data to support it. Thus, for Plato the driving force of discourse is the Idea, which corresponds to a transcendental abstraction wherein man is able to find the matrix of true knowledge, freeing himself from sophism (Châtelet, 1994). The idea, for Plato, is an archetype. The world of appearances presents itself to us as the real, and this appearance of the real may be the object of a great diversity of interpretations or opinions (this diversity Plato called doxa), guided by each individual's passions, interests, desires and circumstances. Plato considers that "each one sees the real as it suits him, and calls 'reality' all of that which corresponds to his subjective dispositions" (Châtelet, 1994, p. 37, our translation). Yet, universal discourse, the true reality, is composed of ideas that are unchanging, unlike appearances, which change incessantly. The knowable reality is nothing more than the product of these ideas, an imperfect imitation. Understanding reality is the task of the philosopher, who presents himself as a legitimate idealist (Châtelet, 1994). 
Aristotle, however, believed that the philosopher's formation should be based on what he is, on the sensible world in which he believes, because certainties are built from experience – from concrete views and beliefs. Since experience has true existence, as Aristotle thought, human beings are able to formulate discourses and build theories that are convincing insofar as they correspond to reciprocal experience. This is because "adherence [to a discourse] is not only a sign that the discourse is well constructed, but is also proof that the recipient is convinced and sees things in the same way as the emitter of the discourse" (Châtelet, 1994, p. 44, our translation). We refer to this kind of thinking as common sense, which unveils the real as the what it is (to ti esti) of a thing. In other words, the real is the essence of appearance (Châtelet, 1994). This clash between the notion of knowledge as originating in the world of ideas (idealism) and knowledge as originating in the world of life (materialism) has yet to be overcome by either philosophy or the social sciences. The question of what is real is still one of the great human dilemmas. This divergence concerns not only the ontology of the real, but also the epistemological position of researchers who choose to affiliate with one theoretical line or another. Immanuel Kant (1724-1804) is one of the great representatives of the idealist movement, and one of the great metaphysical thinkers of the 18th century. The aim of this essay is to revisit the theoretical construction of Critical Rationalism, starting from the philosophy of Kantian reason contained in the books Critique of pure reason (1781/2012) and Critique of practical reason (1788/2015) in order to discuss their respective influences on Karl Popper's work. With this exercise, we aim to shed light on the critical-rationalist approach to Organization Studies. 
Our argument is that Kantian thought has been conducive, on the one hand, to a negative philosophy that considers idealism, pre-conceived notions and a priori knowledge as fundamental to the creative conception of knowledge and, on the other, to a hypothetical-deductive science which aims to bring us closer to the truth through criticism. The basis of critical rationalism lies in the pursuit of reason and transcendental truth, a pursuit that calls not just for the production of theories but also for dedication to testing their validity. Science and philosophy, according to Friedman (2001), have been closely related throughout our intellectual history. In addition to being born together in Greece between the 6th and 3rd centuries BC, they flourished together in the Medieval Period, the Renaissance, and Modernity (13th to 17th centuries). Their historical development gave rise to the philosophy and modern sciences we practice today – even though there was no strict distinction between these concepts until the end of the 17th century. Philosophy, as a form of transcendental inquiry, is not only different from all empirical sciences, but also from the a priori theoretical constructs we have devised to explain reality (such as geometry, epistemology itself, etc.). What matters to philosophy is the distinction between objects and reality, and reason is, in Kantian terms, a model for the cognition of these objects, insofar as it is possible to know them a priori (Friedman, 2001). The duty of philosophy, in Kant's own words (1781/2012), was "to abolish the semblance arising from misinterpretation, even if many prized and beloved delusions have to be destroyed in the process" (p. 100). The goal of philosophy, in this sense, is the logical clarification of thoughts; philosophy is not a doctrine or body of dogmas, but an activity. Thus, a philosophical work consists essentially of elucidation (Friedman, 2001; Wittgenstein, 1992). 
For Kant (1781/2012), pure reason is such a perfect and reliable unit that it would be able to answer any and all questions submitted to it. This is because "[h]uman reason has the peculiar fate ... that it is burdened with questions which it cannot dismiss, since they are given to it as problems by the nature of reason itself, but which it also cannot answer, since they transcend every capacity of human reason." (Kant, 1781/2012). Kant rejects the foundation of reason in experience, and instead chooses to rely on the distinction between phenomena and things-in-themselves. That is, he considers that the object of cognition is not fully comprehensible only by empirical means, since, no matter how much human knowledge inevitably begins in the course of experience, it is endlessly questioned until even its last refuge in the use of experience is conquered. Kantian reason, in this sense, is grounded in principles that are ultimately transcendental and have little relationship with empirical praxis, considering their high level of abstraction, breadth and generalization – the reason why these principles are called metaphysical. The task of Kantian metaphysics, therefore, is "to discover the inner forces of things, the first causes of the laws of motion and the ultimate constituents of matter" (Beiser, 1992, p. 31). And not only that, but also to extract from this knowledge the universal principles that govern human knowledge of reality. Unlike empirical physics, in which the mechanics of nature and the laws of motion are analyzed, Kantian metaphysics provides us with rational knowledge of the intelligible world (Beiser, 1992). The basis of Kantian metaphysics is to distinguish between two faculties of knowledge, sensibility and rationality. Sensibility, in this sense, is the receptivity of the subject, through which he is affected by experiential objects. 
Rationality, on the other hand, is the subject's activity, through which he creates a priori representations that have not been obtained sensorially. These concepts, rather than derived from experience, are expressed by mathematical formulas, for example, which stem from man's reflection on a particular object – i.e., from one's ability to theorize. According to Wartenberg (1992), what Kant presents is "an account of the use of theoretical concepts in the development of scientific theories under the rubric of the 'regulative use of reason'" (p. 228). Our premise in this essay is that understanding Kantian reason means not only analyzing our own ability to theorize, but also seeking to approximate thought, ideas and critical reflections on the empirical world. In regards to the latter, we apply the critical rationalism of Karl Popper (1902-1994). His view on the falsifiability of theories and hypothetical-deductive science attributes great potentiality to Kant's writings, structuring a way of conceiving scientific knowledge in accordance with the metaphysics of real knowledge. We propose, therefore, to reconstruct this theoretical movement so as to explain the foundations of pure reason and critical rationalism, transposing these constructions into the field of organizational studies. To this end, we will first revisit historical aspects related to the figure of Immanuel Kant, his main arguments about transcendental reason, his influence on Karl Popper's critical rationalism and, at last, draw our final considerations. About the philosopher of the Enlightenment Immanuel Kant (1724-1804) Several authors have dedicated their academic trajectory to analyzing Immanuel Kant's thought. His texts, although loaded with convoluted language often characterized by redundancy, pose the essence of an idealistic thought that values man's capacity for inspiration and reflection more than empiricism. 
Guyer's (1992) biographical essay on Kant narrates the story of a professor walking the narrow alleys of his place of birth, a small town called Königsberg which no longer exists, having been destroyed in World War II and replaced by the Russian naval base of Kaliningrad. Kant's career as an academic begins with his entry into university at the age of 16, following a preparatory education financially supported by the family's pastor and supplemented by classes that were not able to ensure much more than a life of relative poverty. Only in 1770, at the age of 46, was Kant appointed to the chair of metaphysics, after a decade of continuous publication. Having assumed this long-sought position, Kant fell into a decade of silence "which must have persuaded many that his long wait for a chair even at such a provincial university had been fully deserved." (Guyer, 1992, pp. 3-4). From the monotony of solitary studies, however, emerged one of the great philosophical authors in history, of a kind rarely witnessed before. Beginning at the age of 57, Kant published a great work almost every year for over a decade and a half (Guyer, 1992). His most mature works are from this period, especially the three famous critiques: Critique of pure reason (1781, revised 1787) offered new foundations for human knowledge by deconstructing the main aspects of traditional metaphysics; Critique of practical reason (1788) inextricably linked human freedom with morality while reconstructing the foundations of metaphysics on practical rather than theoretical bases; The critique of the faculty of judgment (1790) ostensibly discussed the topics of aesthetics and teleological judgment, but also struggled to refine and even revise some of his earlier basic conceptions of practical and theoretical reason (Guyer, 1992). 
Beiser (1992) subdivides Kant's career into four phases: the first phase (1746-1759) is the period of obsession with seeking a proper ground for metaphysics and developing a rationalist epistemology able to justify the possibility of the knowledge of God and of the first causes of nature; the second phase (1760-1766) is the period of disillusionment, in which Kant breaks with his early rationalist epistemology and bends to skepticism, ultimately rejecting the possibility of a metaphysics that transcends the limits of experience; the third phase (1766-1772) is a period of partial reconciliation, in which Kant returns to metaphysics in the belief that he could finally provide it with a solid foundation; and, finally, the fourth phase (1772-1780) is the period of divorce, as Kant comes to realize that his renewed reliance on metaphysics could not solve a fundamental problem: how would a priori synthetic judgments of experience be valid if they have not been derived from experience itself? (Beiser, 1992). From 1772 onwards, he devotes his life and philosophical studies to formulating a mature critical doctrine on the possibility of metaphysics (Beiser, 1992). Like many philosophers since the time of René Descartes and Thomas Hobbes, Kant tried to explain both the possibility of new scientific knowledge and the possibility of human freedom, believing that "the validity of the laws of the starry sky above, as well as the moral law within, had to be sought in the legislative power of the human intellect itself" (Guyer, 1992, p. 2). Kant's proposition, according to Guyer (1992), is that we can be right in regards to the foundations of physical science because we ourselves impose on nature at least the basic form of the scientific laws that are given to us by our senses; it is precisely for this reason that we are free to observe the world from the standpoint of rational agents – whose actions are the result of choices and not simply predicted according to deterministic laws. 
In other words, we interpret and define the laws of the universe on the basis of our ideas; we are free because nothing prevents us from coming up with other ideas to interpret the world, and there is no deterministic law that prevents us from reframing reality. However, Kant never understood human freedom as complete. For the author, although it is possible to legislate on the most basic forms of the laws of nature and, in effect, bring these laws ever closer to the particularities of nature by means of concrete concepts, we can only do so incompletely, as there is an insurmountable distance between human perception and experience and the reality of nature, given its infinite extensionality and the vastness of space and time. In this sense, we will never have full knowledge of nature, nor will we ever realize its true dimension (Guyer, 1992). Therefore, it is important to note that Kant's philosophy understands that while we may seek to legislate towards establishing rational laws for our actions, we must always seek to understand nature not only as an external reality, but also as internal to our own reason -which we constitute and are constituted by. . . . Kant radically and irreversibly transformed the nature of Western thought. After he wrote, no one could ever again think of either science or morality as a matter of the passive reception of [an] entirely external truth or reality. In reflection upon the methods of science, as well as in many particular areas of science itself, the recognition of our own input into the world we claim to know has become inescapable. In the practical sphere, few can any longer take seriously the idea that moral reasoning consists in the discovery of external norms -for instance, objective perfections in the world or the will of God -as opposed to the construction for ourselves of the most rational way to conduct our lives . . . . 
Of course not even Kant could have single-handedly transformed the self-perception of an entire culture; but at least in the philosophical realm of the transformation of the Western conception of the human being from a mere spectator of the natural world and a mere subject in the moral world to an active agent in the creation of both, no one played a larger role than Immanuel Kant. (Guyer, 1992, p. 3) For Châtelet (1994), Kant did not share the Cartesian program of dominating nature, but was ambitious about man, since he believed that mankind could improve itself. For Kant, through pure reason, man is able to assign meaning to and legislate over nature. Many of his constructions arose in answer to those of David Hume, a philosopher who took up John Locke's ideas of knowledge as stemming from experience. The Kant-Hume relationship is very similar to that of Plato and Aristotle, since both believed in the dichotomy between the enlightenment of man and natural reality as important for the discovery of truth. For Hume, merely analyzing or reflecting on natural reality is not enough if we are to find necessary causes and connections between objects; only controlled experimentation allows humans to realize that nature obeys sometimes simple, sometimes complex laws (Châtelet, 1994). In this sense, it is nature, as we discover it through experimentation, that must command, not thoughts resulting from preconceived reflections. Guyer (1992) similarly understands the development of Kant's philosophy as contrary to the dreams of every rationalist philosopher since Descartes, since it claimed that philosophy could not appropriate the same methods as mathematics. 
While mathematics could start from definitions and then prove possible outcomes by constructing objects according to those definitions, "philosophy, however, could never begin with definitions but only with 'certain primary fundamental judgments' the analysis of which could lead to definitions as its conclusion, not its commencement." (Guyer, 1992, p. 8). For Kant, empirical statements about cause and effect relations, substance, and action could only serve as starting points for philosophy; however, the deeper the relationships between the objects are, the broader the philosophical activity, and thus we can never completely understand the complexity of the real world only through experience, without the reflective exercise that is the attribution of reason. In the words of Deleuze (1985), the Kantian definition of philosophy is to be "the science of the relation of all knowledge to the essential ends of human reason" (p. 1). Deleuze acknowledges that Kant wages a double struggle: one against empiricism, and another against dogmatic rationalism. For empiricism, reason is not the faculty of ends, because all ends refer to a primordial affectivity, to a 'nature' capable of establishing them. Thus, the originality of reason consists in a certain way of accomplishing ends that are common to man and nature. The supreme ends of reason, according to Deleuze (1985), are to form the system of culture; in Kantian philosophy, this means that "[t]he final end is not an end which nature would be competent to realize or produce in terms of its idea, because it is one that is unconditioned" (p. 1). And it is through the exercise of metaphysics, in the words of Kant himself (1781/1998), that a science of perfectly designed reason is made possible, in such a way that posterity will have nothing left to do except didactically adapt it to our purposes, without however expanding its contents at all. 
Because reason "is nothing but the inventory of all we possess through pure reason, ordered systematically" (Kant, 1781/1998). This is because the knowledge sought by reason consists of pure concepts – without anything from experience or even from a particular intuition (which should lead to a particular experience) having any kind of influence to extend it (Kant, 1781/1998). Understanding reason and its limits is fundamental for understanding the theorizing of reality, as well as the possibility of overcoming and improving man's knowledge; this subject will be further discussed in the next section. Reason in Kant: a priori knowledge, principiology and critical rationalism Kant's great treatises on reason are Critique of pure reason (1781/1998) and Critique of practical reason (1788/2015). In these works, the author began to delineate the framework of his metaphysics and discuss the underlying ideas of Newtonian physics (Friedman, 2001). Knowing these foundations allows researchers in organization studies to rethink organizational theorizing as a function of reflection on experience. Some of the ideas of these two seminal works will now be analyzed more closely, though not exhaustively. Two major elements constitute the driving force of human science: curiosity and reason; the latter is crucial because of the impossibility of using faith as a valid scientific source of knowledge. Kant (1781/1998) understands that insofar as reason must be present in the sciences, something in them has to be composed of a priori knowledge, and such knowledge must establish a relationship to its object in two ways: (1) by conceptualizing the object, or (2) by giving it concrete form. The first kind of knowledge Kant called theoretical knowledge; the second he termed practical knowledge. 
The pure part of both, according to the author, is that through which reason determines its object in an entirely a priori manner, and must be presented alone, beforehand, without contamination from other sources. It may seem difficult at first glance to understand the difference between both types of knowledge, but Kant seeks to further explain this proposition in the following passage: When Galileo rolled balls of a weight chosen by himself down an inclined plane, or when Torricelli made the air bear a weight that he had previously thought to be equal to that of a known column of water, or when in a later time Stahl changed metals into calx and then changed the latter back into metal by first removing something and then putting it back again, a light dawned on all those who study nature. They comprehended that reason has insight only into what it itself produces according to its own design; that it must take the lead with principles for its judgments according to constant laws and compel nature to answer its questions, rather than letting nature guide its movements by keeping reason, as it were, in leading-strings; for otherwise accidental observations, made according to no previously designed plan, can never connect up into a necessary law, which is yet what reason seeks and requires. Reason, in order to be taught by nature, must approach nature with its principles in one hand, according to which alone the agreement among appearances can count as laws, and, in the other hand, the experiments thought out in accordance with these principles – yet in order to be instructed by nature not like a pupil, who has recited to him whatever the teacher wants to say, but like an appointed judge who compels witnesses to answer the questions he puts to them. (Kant, 1781/1998) Knowledge, for Kant (1781/1998), comes from within the individual, not from external sources, as previously thought. 
The author rejects the proposition that our knowledge must be regulated by objects; on the contrary, he posits that objects must be regulated by our knowledge, in agreement with the possibility of prior knowledge. Knowledge, to the same extent, is not composed of experience, but of ideas; this is because "experience itself is a kind of cognition requiring the understanding, whose rule I have to presuppose in myself before any object is given to me" (Kant, 1781/1998), and is expressed in a priori concepts to which all objects of the experiment will have to adjust. In other words, experience presupposes understanding; this, in turn, is founded on prior knowledge arising from the fundamental ideas we make of an object (Deleuze, 1985; Wartenberg, 1992). After inductively arriving at the a priori faculty of knowledge, Kant is disturbed to conclude that we can never use this faculty to surpass the limits of our experience of phenomena, so as to know the "thing in itself" – which is, however, the most essential interest of science (Kant, 1781/1998). Epistemology, or the science of knowing, translates Kant's thinking well: it is not possible to understand how we know, but we can speculate in regards to the ways in which we make our knowledge real, and from these ideal constructions we can derive the means by which we articulate our convictions. However, for Kant, there is no doubt that all our knowledge begins in experience; for it is by stimulating our senses that our faculty of knowledge is awakened. These senses, while producing representations for themselves, set in motion the activity of our understanding, leading us to compare them, connect them, separate them, and thus transform gross sensory matter from impressions into a knowledge of objects (Kant, 1781/1998). Nevertheless, argues the author, even if our knowledge begins with experience, it does not arise exclusively from experience. 
This is because all knowledge could be a composite of what we feel plus what our own faculty of knowing or interpreting produces by itself. Kant states, however, that there is a type of knowledge that is independent of experience or sense impression: For it is customary to say of many a cognition derived from experiential sources that we are capable of it or partake in it a priori, because we do not derive it immediately from experience, but rather from a general rule that we have nevertheless itself borrowed from experience. So one says of someone who undermined the foundation of his house that he could have known a priori that it would collapse, i.e., he need not have waited for the experience of it actually collapsing. Yet he could not have known this entirely a priori. For that bodies are heavy and hence fall if their support is taken away must first have become known to him through experience. In the sequel therefore we will understand by a priori cognitions not those that occur independently of this or that experience, but rather those that occur absolutely independently of all experience. Opposed to them are empirical cognitions, or those that are possible only a posteriori, i.e., through experience. Among a priori cognitions, however, those are called pure with which nothing empirical is intermixed. Thus, e.g., the proposition "Every alteration has its cause" is an a priori proposition, only not pure, since alteration is a concept that can be drawn only from experience. (Kant, 1781/1998) For example, if a proposition is thought of simultaneously with its necessary outcome (or necessity), it is an a priori judgment: "when I eat, my hunger is satisfied." However, if, in addition, this proposition cannot be deduced from any judgment other than a judgment that is valid as a necessary proposition in itself, then it is called an absolutely a priori proposition: "Eating satisfies hunger." 
In this sense, Kant understands that experience never assigns its judgments a true or strict universality, but only a conditional and comparative one. In other words, experience defines what we have perceived, so far, as devoid of exceptions: "Until proven otherwise, eating satisfies hunger." Therefore, when a judgment is thought to be strictly universal, it has not been deduced from experience, but rather from an a priori judgment. Empirical universality "is therefore only an arbitrary increase in validity from that which holds in most cases to that which holds in all" (Kant, 1781/1998). To ensure the robustness of knowledge, investigations must be careful about the foundations of a given object. Kant understands that speculative reason is the spring of the rapid production of a great amount of knowledge, but he also admits that the greatest work of reason consists in the decomposition of our current concepts, since this provides us with a variety of knowledge and, while no more than a clarification of what we already thought through our existing concepts, allows us to broaden our view on the subject matter (Kant, 1781/1998). Metaphysics, for Kant (1781/1998), is an indispensable science because of the nature of human reason; it must contain a priori knowledge and, therefore, does not deal only with the decomposition or analytical clarification of our concepts of things. In this sense, there is a need for principles that compound the given concept with something that was not previously contained in it, principles that are so far-reaching that experience itself cannot accompany them. 
Principles, in the Kantian sense, are the purest expression of knowledge, that is, propositions about a particular object that are known a priori and in the purest manner; so pure that they could be established only by pure reason, and which define objects not as things in themselves, but only to the extent our experience allows us to have knowledge about such objects. Reason, in this sense, becomes "the faculty that provides the principles of cognition a priori. Hence pure reason is that which contains the principles for cognizing something absolutely a priori." (Kant, 1781/1998). Thus, science is a mere judgment of pure reason, its sources and limits – a kind of propaedeutic to a system of reason. Such a science would not have to be called a doctrine, but a critique of pure reason, and its use would be strictly negative, i.e., it would not be directed to the expansion of knowledge, but rather to the purification of reason, so as to keep it free of error (O'Neill, 1992). Knowledge becomes transcendental when it is not concerned with objects more than with cognition itself, to the extent that this is possible a priori (Deleuze, 1985). Kant's transcendental philosophy is "the idea of a science for which the critique of pure reason is to outline the entire plan architectonically, i.e., from principles, with a full guarantee for the completeness and certainty of all the components that comprise this edifice." (Kant, 1781/1998). Principles are the generic and pure maxim of a priori knowledge, from which knowledge applied to empirical reality is derived. In the author's words: I would therefore call a "cognition from principles" that cognition in which I cognize the particular in the universal through concepts. Thus every syllogism is a form of derivation of a cognition from a principle. For the major premise always gives a concept such that everything subsumed under its condition can be cognized from it according to a principle. 
Now since every universal cognition can serve as the major premise in a syllogism, and since the understanding yields such universal propositions a priori, these propositions can, in respect of their possible use, be called principles. (Kant, 1781/1998) Reason, according to Kant, "is driven by a propensity of its nature to go beyond its use in experience, to venture to the outermost bounds of all cognition by means of mere ideas in a pure use, and to find peace only in the completion of its circle in a self-subsisting systematic whole." (1781/1998, p. 673). Pure reason's sole purpose lies, therefore, in its negative use as a means to refine the knowledge we acquired on the basis of speculative or hermeneutic reason, in order to arrive at the principles governing the experiential limits of cognition – axioms. Knowledge, thus, becomes subjectively historical; this is because wherever knowledge is assumed as natural or ahistorical, it will nevertheless be subjected to scrutiny by the critique of pure reason. According to Wartenberg (1992), adopting a Kantian perspective of reason means believing that the use of ideas in scientific theorizing implies rejecting an instrumentalist conception of science. Theoretical ideas, in this sense, constitute a basis for questioning nature, since these ideas provide scientists with specific instructions on what to look for when they return to the field of experimentation. This view of scientific practice raises experimentation to a high level of importance, while stipulating that experiments will always take place in the light of theoretical ideas. For Wartenberg (1992), "Experiments are not simple observations of the phenomenal world, but directed interrogations of nature that take place in accordance with goals set up by the practice of science itself." (p. 243). Such ideas, therefore, are legitimized by discoveries and can constitute the perfection of knowledge – as limitless as the potential of ideas. 
Given these premises, we believe that in establishing his transcendental philosophy, Kant idealized a true critical rationalism (Châtelet, 1994;Deleuze, 1985), whose purpose is the regulative use of reason, applicable to the knowledge generated by research practice. We must still approach the relationship between pure reason and practical reason as applied to conventional research. If the theoretical use of reason concerns the faculty of knowledge, the practical use of reason must be concerned with the determinant foundations of the will, which means the investigation of the faculty of producing objects corresponding to representations, or the faculty of determining their causality (Kant, 1788/2015). Here, Kant notes, reason in its practical sense must address the question of the interest behind the scientist's representations -since Kantian thought does not regard knowledge as neutral. Pure reason, as opposed to practice, wants to strip individual subjectivity from the cognition of objects, since reason is mediated by interest: In practical cognition -that is, cognition having to do only with determining grounds of the will -the principles that one makes for oneself are not yet laws to which one is unavoidably subject, because reason, in the practical, has to do with the subject, namely with his faculty of desire, to whose special constitution the rule can variously conform. A practical rule is always a product of reason because it prescribes action as a means to an effect, which is its purpose. But for a being in whom reason quite alone is not the determining ground of the will, this rule is an imperative, that is, a rule indicated by an "ought," which expresses objective necessitation to the action and signifies that if reason completely determined the will the action would without fail take place in accordance with this rule. 
(Kant, 1788/2015) Pure practical reason, according to Kant, concerns universal will, which, without the interference of individual interests and appetites, would prevail in all cases because it is imperative for individuals (Kant, 1788/2015). The maxims of science would be the hypothetical imperatives when they concern casuistic relationships between elements and categorical imperatives when they determine practical laws that determine will. Hypothetical imperatives are specific, intentional practical precepts, but they are not laws, since laws must determine will in a generic, pure way (Kant, 1788/2015). In other words, the critique of practical reason is the search for imperatives that can strip the human will of individual appetites that constitute hypothetical imperatives. In making his critique, Kant recognizes that neither philosophers nor scientists can be neutral in the production of their knowledge, since they aim, first of all, at happiness itself, at well-being in seeking knowledge. Understanding the will behind cognition becomes an important task to purify knowledge. Thus, pure reason purifies impure knowledge; pure practical reason seeks to understand that the knowledge generated was established with legitimate motives, directed not at the philosopher's welfare but at higher intentions towards pure knowledge, the essence of truth contained in the thing itself. These laws Kant termed moral laws -laws that are directed towards a higher good (Deleuze, 1985) and that constitute a superior will or the categorical imperatives governing such a will. Kantian thought is idealistic due to being centered around the notion of pure good. Theorizing is a reflexive act that turns to the criticism of everything that destroys the purity of knowledge, be it the experience and hermeneutics of the empirical or the will of the individual. As an idealistic philosopher, Kant believed in man's potential to transcend his own humanity. 
While establishing a philosophy by which we must constantly seek to refine our knowledge, he recognizes that the object itself, even though it has true existence and reality, will always be unreachable -we always have more to know about things in themselves. The task that remains to us is, through reason, to question knowledge and the way knowledge is generated.

Kant's influence on Popper: critical rationalism in organization studies

One of Kant's great legacies was the possibility of approaching cognition from an idealistic standpoint, strictly based on a priori knowledge, negative reason and the ability to revise knowledge. Sir Karl Raimund Popper (1902-1994) was one of the great heirs of Kant's critical thinking, and his ideas originated the philosophy of critical rationalism as we know it today (Chiappin, 2008;Ferrarin, 2016). In the penultimate section of this article, we will explore critical rationalism as a democratic attitude, marked by intellectual autonomy and modesty, as well as a necessary research approach to contemporary organizational studies -whose corpus is characterized by the large-scale production of justificationist studies (Thomas, 2010). Popper, like Kant, believed that all men and women are capable of sound reasoning, making them potential philosophers. The author starts from the Kantian assumptions that cognition of the truth is unreachable, and that no induction is genuine; in this sense, he emphasizes the idea that pure observation, in which the researcher's mind must be free of assumptions and hypotheses, is but a philosophical myth. Therefore, all observation is made in the light of a theory, as this would be the only possible way to make inferences (Chiappin, 2008;Popper, 1934/2005). Thus, "every observation is always guided by theoretical, conscious or unconscious expectations. 
That is, our body of theories and expectations about reality guide what in the perceptual field we shall highlight as relevant for observation" (Castañon, 2007, p. 280, our translation). Karl Popper was largely responsible for the decline of logical positivism -which stood for the pursuit of indisputable universal laws. He not only contradicted, but also refuted all the main positions of logical positivism, and criticized the principle of verification as a demarcation criterion, replacing it with the almost diametrically opposite concept of falsifiability (Castañon, 2007;Popper, 1934/2005). Popper was also largely responsible for the fall of the inductive method, replacing it with its opposite, the hypothetical-deductive method, in the defense of a perfectible science that completely rejected positivist anti-metaphysics, establishing metaphysics as a granary of scientific ideas (Galván, 2016;Thomas, 2010). In short, Popper considered falsifiability as a key criterion of the demarcation between science and pseudoscience, and argued that a theory would be more or less credible in terms of the number of times it was able to stand its ground against attempts of denial. Thus, the justification for maintaining a theory would lie in its ability to withstand the tests of ratio negativa [negative reason] (Popper, 1934/2005). Initially, his view was considered a form of "intellectual conservatism," given that in Popper's framework the attachment to the falsificationist impulse would be greater than the desire for a wider variety of theoretical thoughts and views about the real object (Castañon, 2007;Thomas, 2010). However, such criticisms have succumbed to the consistency of his view that the exercise of reason lies in problem-solving and in an uninterrupted effort to detect and eliminate errors. Falsificationism is, for Critical Rationalism, the new criterion of demarcation between scientific and unscientific assertions. 
It replaces the weakened criterion of verification in the demarcation of scientific propositions. This implies a major shift in the scientific outlook, poised to become absolutely vital to scientific claims: it is not the direct observation of certain phenomena that should provide us with testable hypotheses; rather, these hypotheses could be created in any way whatsoever. What will allow their integration into the field of scientific knowledge is whether they are able to generate falsifiable predictions. This is because hypotheses stand at the beginning of the process, not at its conclusion. A hypothesis is falsifiable if there is any logically possible proposition of observation that, if established as true, would imply the rejection of said hypothesis as false. (Castañon, 2007, p. 281, our translation) The Popperian framework is therefore a universal theory of criticism and error. Popper's critical rationalism is radically opposed to any kind of classical rationalism (which translates into contemporary forms of instrumental, substantive, or communicative rationality, for example), as these must be provincialized as something that derives from the subject, and not from truth itself. The crux of Popper's critical argument is that the exercise of classical rationalism is, in essence, authoritarian and unilateral, because it imposes a view as real from justifications anchored in observations carried out by a subject and interpreted in the light of this subject's experience. Now, if all reason is negative (refutation), all rationality assumes a positive bias (construction) based on some kind of justification (this movement has been called justificationism). 
If the premises of Kant's and Popper's rationality are correct -and knowledge is influenced by previous theories and the subject's practical perception -any justificationist attempts to support a theory are unilateral, authoritarian, and presume that theoretical constructs are a priori entities, always correct and true -which is unacceptable within the framework of pure reason (Thomas, 2010). Well, what justifies that organizations should be profitable? What justifies capitalism as an economic system? What justifies that the best form of labor organization is the division of tasks? What justifies the mainstream theories of organization studies? For Popper, nothing justifies knowledge other than the beliefs and constructions that humans impose on their peers. In this sense, there is no way to speak of a "scientific administration" or even a "science of organization studies" that treats prescriptive models and assumptions as objective knowledge and data. Every assertive statement about organizations is fallible and might be proven invalid when viewed from the scientific lens of reason. "What do I know about organizations? I know that I know almost nothing." In positing this kind of statement, I assume, as a researcher, an anti-authoritarian stance on what organizations are, and in no way do I impose my perspective on others by confirming it by subjecting the data to tests and experiments (Thomas, 2010). Empirical data, therefore, can never be used to confirm theories, but only to refute them, since there is no validity in induction (Chiappin, 2008;Galván, 2016). This view brings true inspiration to skeptical, humble, and democratic conduct by researchers, as it conditions them to be less pretentious in their claims, and more focused on their rational duty to challenge and provoke thought towards its cognizable limits. In this sense, no knowledge is definitive, and no premise can be presupposed as true. 
What would the world look like if the attitude of critical rationalism were widely adopted? It is a peculiar attitude because it accepts that some statements and theories may be true and some may be false -but at the same time it contends that no one will know which is which with certainty. Critical rationalists would have to exercise their personal autonomy, adopting what is in effect a critical preference in accepting a basic statement, or a theory, but that critical preference has no necessary permanence. (Thomas, 2010, p. 30) Following Thomas (2010), our understanding is that traditional theories of knowledge in organization studies seek their source in justificationism. In this sense, we understand that the field of organization studies has been permeated more by classical rationalism than by critical reason. This is because "classical rationalism installed the authority of the intellect, whereas empiricism installed the authority of the senses" (Thomas, 2010, p. 13). In this sense, we affirm that functionalist, hermeneutic, radical structuralist and even classical humanist studies have been based more on the erudition/eloquence of arguments structured upon quali-quantitative data than on a negative dialectic that submits theories to logical tests aimed at falsification. This inevitably leads to the advancement of pseudoscience through: (1) the "argument of authority," that is, the power exerted by certain actors to profess what is right (and what is not) in science, commonly done through blind-review evaluations, and (2) the infinite tautological regression of justifications, because every justification used to defend an argument is based on other assumptions that are not necessarily true, and also require justifications. The result is a tangled chain of justifications based on paradigms that are themselves falsifiable (Thomas, 2010). 
The problem with authority arguments is that, once again, they rely on the unilateral views of actors who are guided by the justifications of their beliefs in their affirmation of "what is" -an inadmissible attitude in the rational domain (Popper, 1934/2005). Similarly, to publish scientific papers, we are required to assemble an extensive web of theoretical arguments to assign "validity" to our thoughts. Each choice made throughout the work has to be justified: the authors' selection criterion, the data sampling criterion, the epistemological paradigms in which we are anchored, among so many other choices that are potentially exposed to criticism of various kinds. All these choices are subject to scrutiny by the "authority" of someone who inevitably verifies the adequacy of the article to the field of study according to previous beliefs, largely justified by pre-conceived notions, without taking care to critically assess whether there is any logical sense in the proposed argument or whether any idea therein has already been refuted. For Thomas (2010), justificationism is an invitation to relativism and conservatism that legitimizes the knowledge of those with social recognition in relation to alternative ideas that emerge from less celebrated researchers. Such a paradigm does not correspond to the form of scientific inquiry upheld by Kantian reason. Popper argued that "enlightened authorities" were nowhere to be found, while justificationist philosophies would not have the power of scientific veridiction, but rather (a) the power to propel man's creativity into grasping new ideas in order to scientifically test them, and (b) the power to describe the prevalent perceptions about the object according to different disciplines. Knowledge, in this sense, would be too human and too flawed not to be prone to error, individual whim, and arbitrariness (Thomas, 2010). 
According to Kantian reason and Popperian critical rationalism (Galván, 2016), all knowledge that assumes itself as true is criticizable. In critical rationalism, we find no synthesis whatsoever, since every conjecture or interpretation is merely temporary; likewise, the Popperian interpretation does not organize what comes from the world, since it is already organized by the theories invented by scientists, which they derive from their observations of the world. What the Popperian view does is engender a separate moment from the moment when knowledge is actually constituted -a closed, juxtaposed moment in which all empirically obtained knowledge is put to the test of logic. What is sought is the conclusive verification of generated knowledge (Chiappin, 2008;Galván, 2016). Table 1 provides a synthesis of potential objects of critical analysis, considering the premises hitherto analyzed.

Critique of a Basic Statement: All observation is impregnated with theories, since every belief is constituted by a theoretical view. In this sense, there is no independent or neutral point from which we can objectively observe the world, grasping its reality.
Critique of a Scientific Theory: Every scientific theory comprises universal empirical statements about the structure of the world and is therefore logically falsifiable -and thus subject to the hypothetical-deductive method.
Critique of a Universal Empirical Statement: Popper distinguishes falsifiability from falsification. Falsification is to accept basic statements that contradict a universal empirical statement, a skeptical conduct of the researcher who strives to subject any statements whatsoever to critical reasoning. Importantly, no argument, even a logical refutation, is conclusive or final.
Critique of a Pure Existential Statement: Popper understands that purely existential statements can be criticized, even if they can be refuted neither logically nor empirically. 
Critique of an Instrument: An instrumental theory is subject to criticism as much as any scientific theory, to the extent that it fails to solve the problems it intends to address, or to the extent that it implies unintended consequences.
Critique of a Philosophical Doctrine: The criticism of justificationist theories serves to demonstrate that it is possible for a nonempirical philosophical doctrine to be criticized.
Critique of a Problem Situation: Popper believed that some problem situations are nothing more than pseudo-problems, created by a lack of conceptual clarity in the discussion of a particularly relevant situation.
Literary Critique: Poetry and science are related by blood, since they both derive from creative work. In this sense, some principles of the production of art and literature are as objectionable as scientific theories.
Source: Adapted from Thomas (2010).

Based on the ideas of Immanuel Kant, Karl Popper was able to develop a strongly cohesive system of thought, creating a systematic philosophy in the great tradition of the centrality of the subject. His primary concern would be to separate science from pseudoscience, upholding reason as an end in itself, an ethical duty of human beings to provide themselves with transcendental truth (Ferrarin, 2016). In this sense, critical rationalism can be seen as the rational action of seeking truth through argument and compromise; in the same sense, based on its premise that all knowledge is transient and falsifiable, critical rationalism enables a posture of openness to dialogue with different views (Popper, 1934/2005;Thomas, 2010). For the organization studies field, the approach of critical rationalism may be interesting as a way to question the models and beliefs brought to the field on various fronts. In this article, some examples of critical objects have been discussed. 
We argued that the Popperian spirit, based on Kantian reason, is first and foremost the researcher's commitment to a professional stance, not an unrestricted adherence to the hypothetical-deductive method. We understand that the basis of critical rationalism lies in the pursuit of reason and transcendental truth. This is a call not only for the production of theories, but rather for further dedication to testing their validity -a task that has not received enough attention from researchers in the field of organization studies, who have nevertheless devoted a significant amount of effort to producing research that brings theories to the most varied empirical situations. Finally, we understand, following Thomas (2010), that the critical rationalist posture is necessary for contemporaneity. The rationalist is "one who holds everything -including standards, goals, criteria, authorities, decisions and especially any framework or way of life -open to criticism" (Thomas, 2010, p. 27). Such a stance, ironically, demands faith. An irrational faith in reason, which comes to be seen as the only true path to the transcendence that brings us, as human beings, closer to the truth.

Transitional considerations

The aim of this essay was to revisit the theoretical construction of the concepts of reason in Kant's work and to delineate their influence on Popperian Critical Rationalism. 
Some points may be helpful to summarize our findings as well as the discussion of Kant's work proposed in this article: (a) Kantian philosophy values creativity and idealism, stimulating the production of a priori ideas and knowledge that contribute to the refinement of our way of thinking and interpreting the world; (b) equipped with these ideas, it falls to reason to do its negative work of eliminating errors in our thinking; (c) pure reason is undisputed truth that cannot be fully attained due to being on a metaphysical plane of knowledge, meaning that human knowledge will always have flaws; (d) practical reason is the basis of the interests and motivations that constitute the desire for discovering new knowledge, and varies according to individuals -therefore, our production of knowledge is not neutral, influenced as it is by our ideological and subjective biases (passions, demands, appetites); and, finally, (e) human beings have a moral duty to engage in perfecting knowledge in order to achieve transcendental status. 
Each of Kant's contributions to philosophy had implications in Karl Popper's work, which we also summarize here for the purpose of organizing the proposed arguments: (a) every theory is imperfect and falsifiable, and the scientist's job is to persistently and continuously subject his views to attempts at falsification; (b) the inductive method is not the most appropriate because it is affiliated with classical rationalism and justificationism, which are guided by the persistent task of committing to a theoretical view and defending its potential to explain phenomena in the field; (c) justificationism falls short since, to support a particular theoretical view, there is no need to speak of "authority" or to assess the consistency of the complex network of theoretical justifications; (d) the hypothetical-deductive method would be an interesting way of constituting tests of theoretical validity; (e) there is no object that cannot be falsified, and this creates a wide variety of validity-testing possibilities. Both authors' respective contributions provide a comprehensive picture in which science assumes a noble function as the driving force of human self-improvement through knowledge. It allows researchers to exercise self-criticism regarding their role in society: what are we building science for, and for whom? What kind of truth(s) are we seeking in our research work? To what extent are we open to criticism of the validity of our theoretical views? What are the mechanisms we create to keep our own lines of thought alive in the scientific field? Nowadays, these are relevant provocations, especially in a context in which a large portion of the work published in the field of
Dangerous connections: on binding site models of infectious disease dynamics

We formulate models for the spread of infection on networks that are amenable to analysis in the large population limit. We distinguish three different levels: (1) binding sites, (2) individuals, and (3) the population. In the tradition of physiologically structured population models, the formulation starts on the individual level. Influences from the ‘outside world’ on an individual are captured by environmental variables. These environmental variables are population level quantities. A key characteristic of the network models is that individuals can be decomposed into a number of conditionally independent components: each individual has a fixed number of ‘binding sites’ for partners. The Markov chain dynamics of binding sites are described by only a few equations. In particular, individual-level probabilities are obtained from binding-site-level probabilities by combinatorics while population-level quantities are obtained by averaging over individuals in the population. Thus we are able to characterize population-level epidemiological quantities, such as $R_0$, r, the final size, and the endemic equilibrium, in terms of the corresponding variables.

Introduction

Consider an empirical network consisting of individuals that form partnerships with other individuals. Suppose an infectious disease can be transmitted from an infectious individual to any of its susceptible partners and thus spread over the network. Consider an individual in the network at a particular point in time. 
We are interested in the disease status of the individual, but also in the presence of the infection in its immediate surroundings that are formed by the individual's partners. We may label this individual by listing:
- its disease status in terms of the S, I, R classification, where, as usual, S stands for susceptible, I for infectious and R for recovered (implying immunity)
- how many partners this individual has
- the disease status of these partners

In this spirit, we may provide a statistical description of the network at a particular point in time by listing, for each such label, the fraction of the population carrying it. Is it possible to predict the future spread of the disease on the basis of this statistical description? The answer is 'no', simply because the precise network structure is important for transmission and we cannot recover the structure from the description. But if we are willing to make assumptions about the structure (and to consider the limit of the number of individuals going to infinity), the answer might be 'yes'. And even if the true answer is still 'no', we may indulge in wishful thinking and answer 'to good approximation'. When considering an outbreak of a rapidly spreading disease, we can consider the network as static. If we are willing to assume that the network is constructed by the configuration procedure (Durrett 2006;van der Hofstad 2015), the answer is indeed 'yes' (Decreusefond et al. 2012; Barbour and Reinert 2013;Janson et al. 2014). But if the disease spreads at the time scale of formation and dissolution of partnerships, we need to take these partnership dynamics into account and next indeed rely on wishful thinking (though the answer may very well be 'yes'). In case of HIV, the disease spreads on the time scale of demographic turnover and this motivated our earlier work (Leung et al. 2012, 2015) that also takes birth and death into account [here we know that the answer is 'no', see Leung et al. 
(2015, Appendix B)]. In the rest of this introduction we first discuss the model formulation used and the relation between our work and existing literature. Next, we consider three different settings based on the time scales of disease spread, partnership dynamics, and demographic turnover. Individuals are decomposed into conditionally independent components (the 'binding sites') and we discuss how the dynamics of these binding sites can be specified. We end the introduction with an outline of the structure of the rest of the paper.

Physiologically structured population models

As in our earlier paper (Leung et al. 2015), our model formulation is in the tradition of physiologically structured population models (PSPM; Metz and Diekmann 1986;Diekmann et al. 1998b, 2001). This means that we start from the notion of state at the individual level, called i-state (where i stands for individual).

[Fig. 1. An illustration of binding sites with three individuals u, v, and w. In this example, u, v, and w have four, three, and two binding sites for partners, respectively. On the left, all binding sites are free. On the right, a partnership between u and w is formed and they both have one occupied binding site.]

Model specification involves, first of all, a description of changes in time of the i-state as influenced by i-state itself and the relevant environmental variables that capture the influence of the outside world. Next the model specifies the impact of individuals on the environmental variables. Thus the feedback loop that creates density dependence, i.e. dependence among individuals, is described in a two step procedure. To lift the i-level model to the population level (p-level) is just a matter of bookkeeping, see Diekmann and Metz (2010) for a recent account. In the setting considered here, i-state ranges over a finite set. As a consequence, the p-level equations are ordinary differential equations (ODE). 
These ODE describe, apart from death and birth of individuals, the dynamical changes of i-state, i.e. how individuals jump back and forth between the various states. In the spirit of the theory of Markov chains (Taylor and Karlin 1998), we describe an individual not by its actual state but by the probability distribution, i.e. the probability of being in the various states. Equating a p-level fraction to an i-level probability provides the link between the two levels. The approach of both earlier work and this paper is to pretend that the label can be considered as the i-state, the information about the individual that is relevant for predicting its future. The i-state contains information about partners, but not about partners of partners. Implicitly this entails that we use a mean field description of partners of partners. We call this the 'mean field at distance one' assumption. The description of partners of partners is incorporated in an environmental variable, the information about the 'outside world' that is relevant for a prediction of the future of the individual. A rather special feature of the models considered here is that i-state involves a number of conditionally independent components: the binding sites. An individual has binding sites for partners. Two free binding sites can be joined together to form a partnership between two individuals (see Fig. 1 for an illustration). In graph theory the words 'half-edge' or 'stub' are often used. We think that for static networks these terms capture the essence much better than the word 'binding site'. But the latter provides, in our opinion, a better description for dynamic networks. The fact that our research started with dynamic networks is responsible for our choice of terminology. It is attractive to model the dynamics of one binding site and next use combinatorics to describe the full i-state. It is precisely this aspect that we did not yet elaborate in Leung et al. (2015) but highlight now. 
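The correspondence just described, in which a p-level fraction is equated to an i-level probability, can be made concrete with a toy computation: integrating the forward (master) equation dp/dt = Q^T p for a small Markov chain. The two-state chain and the rates a and b below are our own illustrative choices, not taken from the paper.

```python
# Toy two-state continuous-time Markov chain: i-states {0, 1} with jump rates
# a (0 -> 1) and b (1 -> 0). We integrate the forward equation dp/dt = Q^T p
# with forward Euler; in the large-population limit p(t) doubles as the p-level
# fraction of individuals in each i-state.
a, b = 0.3, 0.1          # illustrative rates, not taken from the paper
p = [1.0, 0.0]           # everyone starts in i-state 0
dt, steps = 0.01, 10_000

for _ in range(steps):
    dp0 = -a * p[0] + b * p[1]
    dp1 = a * p[0] - b * p[1]
    p = [p[0] + dt * dp0, p[1] + dt * dp1]

# p approaches the stationary distribution (b/(a+b), a/(a+b)) = (0.25, 0.75)
print(p)
```

Note that probability mass is conserved exactly at each Euler step, mirroring the bookkeeping property that p-level fractions always sum to one.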
It is precisely this aspect that uncovers the relationship between the work of Lindquist et al. (2011) and Leung et al. (2015) on the one hand and the edge-based modelling approach of Volz and Miller (2007, 2008, 2009) on the other hand. Volz and Miller focus on the binding site (=half-edge/stub) and individual level and draw p-level conclusions by a clever use of probabilistic arguments to determine the relevant environmental variables. Lindquist et al. (2011) systematically formulate and analyze the p-level equations. In Leung et al. (2015) we too emphasized the p-level equations, but used the i-level version to derive an expression for R_0. The link between the two was established by somewhat contrived linear algebra arguments. In the present paper we build our way upwards from binding site- via individual- to population level. One of the secondary aims of this paper is to show that the systematic methodology of PSPM is also very useful when i-state space is discrete, rather than a continuum, and when i-state involves multiple identical components. In the present paper we do not specify any stochastic model that in the large population limit may lead to the equations that we consider. But we do specify, by way of time-dependent transition rate matrices, various continuous-time Markov chains (where the time-dependence is through the environmental variables). These describe how the states of binding sites change in time. Most of the dependence among binding sites is heuristically captured by the time-dependent coefficients in the transition rate matrix (partly based on the 'mean-field-at-distance-one' assumption). Binding sites that belong to the same individual (the 'owner') experience additional dependence, most notably through transmission of infection: if the owner is infected along one of them, it affects all of them. 
When we use the words 'probability', 'expectation', and 'conditioning', we refer to the Markov processes describing binding sites (and hence individuals) for given environmental conditions (i.e. given time-dependent coefficients in the transition rate matrices) and not to the population process as a whole.

Three network cases

Now, consider a network. An epidemic starts when, at some point in time, a small fraction of the population is infected from outside. Our idealized description shifts the 'point in time' towards −∞ while letting the fraction become smaller and smaller. In other words, our story starts 'far back' in time when all individuals are still susceptible (see Appendix 1 for elucidation). We consider three different situations, characterized by the relation between the time scales of, respectively, transmission, partnership dynamics and demographic turnover:

I. The disease dynamics are fast relative to any partnership or demographic changes. The network is static and everyone is susceptible at time t = −∞.
II. The disease dynamics are on the same time scale as the partnership dynamics, but fast relative to demographic turnover. In this network individuals can acquire and lose partners over time. Everyone is susceptible at time t = −∞.
III. The disease dynamics and partnership and demographic changes are on the same time scale. Here the age of an individual matters and we assume that, at birth, an individual enters the population as a susceptible without any partners.

We assume that infection is transmitted from an infectious individual to a susceptible partner at rate β and infectious individuals recover at rate γ (but see Sect. 2.5 for a far more general setting). We also assume that infection does not influence the partnership dynamics or the probability per unit of time of dying in any way.
Each individual in the population is assumed to have a so-called partnership capacity n, which denotes the number of binding sites it has (so n is the maximum number of simultaneous partners it may have). Throughout the life of the individual this partnership capacity does not change. An individual with partnership capacity n can be thought of as having n binding sites for partners (in Fig. 1, individuals u, v, and w have partnership capacities 4, 3, and 2, respectively). We call the individual to which a binding site belongs its owner. For the purpose of this paper, we will assume that all individuals have the same partnership capacity n. One can generalize this by allowing individuals to have different partnership capacities; in that case, one needs to average over n in the correct way (see Sect. 2.5 for the static case).

Binding sites

An individual with partnership capacity n is to some extent just a collection of n binding sites. These n binding sites are coupled through the disease status (or death) of their owner. We assume that this is the only manner in which the binding sites of an individual are coupled. As long as the disease status of the owner does not change (and the owner does not die), binding sites behave independently of one another and the 'rules' for changes in binding site states are the same for each binding site. Obviously the latter depends on the network dynamics under consideration (either case I, II, or III). As a port to the world, a binding site can be in one of four states:

- 0: free
- 1: occupied by a susceptible partner
- 2: occupied by an infectious partner
- 3: occupied by a recovered partner

Here (and in the remainder of this introduction) our formulation is precise for case II while sometimes requiring minor adaptations to capture cases I and III. A key component of the model is the description of the dynamics of a binding site.
The state of an individual is specified by listing its disease status and the states of each of its n binding sites. So it makes sense to first consider a binding site as a separate and independent entity and to only take the dependence (by way of a change in the disease status of the owner) into account when we combine n binding sites into one individual. The case of a susceptible binding site (i.e. a binding site with a susceptible owner) is, as will become clear, far more important than the other cases. This is partly due to our assumption that all individuals start out susceptible, i.e. are susceptible at time t = −∞ (I and II) or at birth (III). It is tempting to think of the dynamics of susceptible binding sites in terms of jumps between the various states until, sooner or later, transmission occurs along the binding site under consideration (with 'transmission' having no effect at all if the owner was infected earlier along another of its binding sites). There is nothing wrong with this mental image, but it ignores that the binding site under consideration acts as a source of infection once the owner is infected. So one should focus on the first binding site that facilitates transmission and label all binding sites as infected once this happens. The dynamics of a susceptible binding site are described by a differential equation for the variable x(t) = (x_i(t)), i = 0, 1, 2, 3. Here x_i can be interpreted as the probability that a binding site is susceptible and has state i at time t, given that its owner does not become infected through one of its other n − 1 binding sites (in other words, by conditioning on the individual not getting infected through its n − 1 other binding sites, the only way the individual could get infected is through the binding site under consideration).
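Because binding sites of one owner are exchangeable, an i-state is determined by counts rather than by an ordered list of binding-site states. A small hypothetical sketch counts the distinct configurations k = (k_1, k_2, k_3) with k_1 + k_2 + k_3 = n, which is the (n + 1)(n + 2)/2 figure that reappears later when the p-level ODE systems are counted.

```python
n = 4

# all ways to distribute n exchangeable binding sites over the three
# 'occupied' states 1, 2, 3 (on a static network no binding site is free)
configs = [(k1, k2, n - k1 - k2)
           for k1 in range(n + 1)
           for k2 in range(n + 1 - k1)]

count = len(configs)   # stars-and-bars: (n + 1)(n + 2) / 2 configurations
```

For n = 4 this gives 15 distinct i-states per disease status, illustrating how the combinatorial bookkeeping stays small even though an ordered list of binding sites would have 3^n entries.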
In particular, x̄(t) = x_0(t) + x_1(t) + x_2(t) + x_3(t) (1.1), given that its owner does not become infected through one of its other binding sites, is the probability that the binding site is susceptible at time t (or, in other words, that the owner is not infected along this binding site before time t). Accordingly, the probability that an individual is susceptible at time t is equal to x̄(t)^n (1.2). In order to arrive at a closed system of equations for x, we need to go through several steps. The variable x contains information about a partner. Consequently the dynamics of x is partly determined by partners of partners, hence by one or more environmental variables. The 'mean field at distance one' assumption yields expressions for environmental variables in terms of subpopulation sizes (for a given label, the corresponding subpopulation size is the fraction of the population that carries this label). In turn, p-level fractions can be expressed in terms of i-level probabilities. And since a susceptible individual is in essence a collection of n conditionally i.i.d. binding sites, we can use combinatorics to express i-level probabilities in terms of binding-site-level probabilities as incorporated in x. The exchangeability of the binding sites is broken by the infection event. There is exactly one binding site along which infection took place, viz. the binding site occupied by the individual's epidemiological parent, and for this binding site we know with certainty that it is in state 2 at the time of infection t_+. We call the binding site through which the change in the owner's disease status occurred the 'exceptional' binding site. The other n − 1 binding sites are i.i.d. and, at time t_+, they are distributed according to x(t_+)/x̄(t_+). Recovery (and death) occurs at a constant rate for an infectious individual, independently of the binding site states. Therefore, also after recovery, there remains exactly one exceptional binding site, viz. the one through which transmission occurred. See also Fig.
2 for an illustration of the exceptional binding site.

Fig. 2 (caption): An illustration of the exceptional binding site. Susceptible, infectious, and recovered individuals are displayed in black, red, and blue, respectively. Three time points in the life of individual u are displayed. Suppose u is susceptible and becomes infected by an infectious partner v at time t_+. From that moment on, the binding site along which transmission occurred is the exceptional binding site. This binding site remains exceptional throughout u's life and no other binding site can become exceptional, regardless of whether or not v is still a partner or u is still infectious.

Structure of the paper

In Sects. 2, 3, and 4 below, we will discuss the three network cases I, II, and III separately. For each of the three cases we will explain how the model can be formulated and described in terms of susceptible binding sites. By considering the susceptible binding site perspective we can write a closed system of only a few equations that fully determine the dynamics of i-level probabilities and p-level fractions. This system is then used to determine epidemiological quantities of interest: R_0, r, the final size (in cases I and II), and the endemic steady state (in case III). In all three cases, an explicit expression can be given for R_0. In case I, one can derive a simple scalar equation for the final size. In cases II and III, we could only implicitly characterize the final size and endemic equilibrium, respectively. In Sect. 2 case I of a static network is considered. This is the simplest case among the three. The relative simplicity allows for the derivation of an ODE system for susceptible binding sites directly from the interpretation. This will be the first way in which we formulate the model for this case. But case I will also serve to illustrate the systematic procedure for model formulation in the spirit of PSPM. This systematic procedure allows us to connect the three different levels, viz.
(1) binding sites, (2) individuals, and (3) the population, to each other. In network case I, since it is relatively simple, one can derive a one-dimensional renewal equation from which R_0, r, and the final size almost immediately follow. This renewal equation will be treated in Sect. 2.5 for a much more general class of infectious disease models than only SIR. Part of the systematic procedure in cases II and III focuses on infectious binding sites. We use case I to illustrate the model formulation concerning infectious (and recovered) binding sites, even though, for case I, these are not needed to obtain a closed system for susceptible binding sites. However, depending on the network features of interest (e.g. fractions of infectious individuals) one may still want to consider infectious (and recovered) binding sites. In network cases II and III, there are also network dynamics in the absence of infection, due to partnership changes (and demographic changes). We will only describe the essential characteristics of the network dynamics that we use in this paper. Certainly, much more can be said about the networks in absence of infection (Leung et al. 2012). Finally, in Sect. 5 we discuss the issues that we have encountered in the three different network cases and pose some open problems. We end the discussion by considering a few generalizations that can be implemented using the systematic model formulation of Sect. 2.2.

Part I: static network

2.1 Model formulation

We derive a closed system of ODE for x purely on the basis of the interpretation of binding sites (without explicitly taking into account i-level probabilities or p-level fractions). The relatively simple setting of a static network allows us to do so. We are able to consider a binding site as a separate and independent entity all throughout its susceptible life. Implicitly, this uses (2.8) below.
One can show that the system of ODE for x indeed captures the appropriate large population limit of a stochastic SIR epidemic on a configuration network. This requires quite some work; see Decreusefond et al. (2012), Barbour and Reinert (2013), and Janson et al. (2014). Consider a susceptible binding site and assume its owner does not become infected through one of its other n − 1 binding sites for the period under consideration. If a susceptible binding site is in state 2, it can become infected by the corresponding infectious partner. This happens at rate β and, when it happens, the binding site is no longer susceptible, so it 'leaves' the x-system. It is also possible that the infectious partner recovers. This happens at rate γ. Finally, there is the possibility that a susceptible partner of a susceptible binding site becomes infectious (corresponding to a transition from state 1 to state 2). The rate at which this occurs depends on the number of infectious partners that this susceptible partner has. So here we use the mean field at distance one assumption: we average over all possibilities at the p-level to obtain one rate at which a susceptible partner of a susceptible binding site becomes infected. More specifically, we assume that there is a rate βΛ_−(t) at which a susceptible partner of a susceptible binding site becomes infected at time t. Here Λ_−(t) has the interpretation of the expected number of infectious partners of a susceptible partner of a susceptible individual. Inserting an expectation into a stochastic i-level model in order to lift it to the p-level is reminiscent of the work of Nåsell and Hirsch around 1972; see the book Nåsell (1985). Then, putting together the various assumptions described above, the dynamics of x is governed by the following system (please note that the environmental variable Λ_− is a p-level quantity that we have yet to specify):

dx_1/dt = −βΛ_−(t) x_1,
dx_2/dt = βΛ_−(t) x_1 − (β + γ) x_2,   (2.1)
dx_3/dt = γ x_2,

with 'far past' conditions

x_1(t) → 1, x_2(t) → 0, x_3(t) → 0 as t → −∞.   (2.2)

We write x̄(t) = x_1(t) + x_2(t) + x_3(t). (2.3) To express Λ_− in terms of x we use the interpretation.
Consider a susceptible partner v of a susceptible individual u. Then, since u is susceptible, we know that v has at most n − 1 binding sites that are possibly in state 2 (i.e. occupied by infectious partners). Since v is known to be susceptible, also all its binding sites are susceptible (in the sense that their owner v is). The probability that a binding site is susceptible at time t is x̄(t) (recall (1.1) and note that in case I we have x_0(t) = 0). The probability that a binding site is in state 2, given that the binding site is susceptible, is x_2(t)/x̄(t). Therefore,

Λ_−(t) = (n − 1) x_2(t)/x̄(t).   (2.4)

By inserting (2.4) into (2.1) we find that the x-system is fully described by an ODE system in terms of the x-variables only:

dx_1/dt = −β(n − 1) (x_2/x̄) x_1,
dx_2/dt = β(n − 1) (x_2/x̄) x_1 − (β + γ) x_2,   (2.5)
dx_3/dt = γ x_2,

with 'far past' conditions x_1(t) → 1, x_2(t) → 0, x_3(t) → 0 as t → −∞.

Remark 1 In the pioneering paper (Volz 2008) an equivalent system of three coupled ODE was introduced to describe the binding-site level of the model. The variables of Volz are connected to our x-system as follows: θ = x̄, p_S = x_1/x̄ and p_I = x_2/x̄.

Systematic procedure for closing the feedback loop

Before analyzing (2.5) in the next section, we describe a systematic procedure, consisting of five steps, for deriving the complete model formulation. A key aim is to rederive the crucial relationship (2.4) in a manner that can be extended to the dynamic networks. Thus the present section serves to prepare for a quick and streamlined presentation of cases II and III in Sects. 3 and 4, respectively. The various steps reveal the relation between binding site probabilities, i-level probabilities and p-level fractions. In addition we introduce some notation.

Step 1. Susceptible binding sites: x-probabilities

The first step is to describe the dynamics of x while specifying the environmental variable Λ_− only conceptually, i.e. in terms of the interpretation. We then arrive at system (2.1)-(2.2). Fig.
3 (caption): The susceptible partner v of a susceptible individual u has a certain number of infectious partners and the mean of this number equals Λ_−.

Next, we introduce P_(d,k)(t), denoting the fraction of the population with label (d, k). Here k = (k_1, k_2, k_3) denotes the number of partners of an individual with each of the different disease statuses, i.e. k_1 susceptible, k_2 infectious, and k_3 recovered partners. Furthermore, d ∈ {−, +, *} denotes the disease status of the individual itself, with − corresponding to S, + to I, and * to R. In the second step, the environmental variable Λ_− is, on the basis of its interpretation, redefined in terms of p-level fractions P_(−,k)(t).

Step 2. Environmental variables: definition in terms of p-level fractions

The mean field at distance one assumption concerns the environmental variable Λ_−. This variable is interpreted as the mean number of infectious partners of a susceptible partner v of a susceptible individual u (in terms of Fig. 3: we envisage the uv-connection as a random choice among all such connections). We define it in terms of p-level fractions as follows:

Λ_−(t) = Σ_m m_2 (m_1 P_(−,m)(t)) / (Σ_k k_1 P_(−,k)(t)).   (2.6)

Here the sums are over all possible configurations of m and k with m_1 + m_2 + m_3 = n, k_1 + k_2 + k_3 = n. The second factor in each term of this sum denotes the probability that a susceptible partner of a susceptible individual is in state (−, m). The number of infectious partners is then given by m_2, and we find the expected number of infectious partners Λ_− by summing over all possibilities. In the third step, we let p_(−,k)(t) denote the probability that an individual is in state (−, k) at time t. This i-level probability can be expressed in terms of x-probabilities.

Step 3.
i-level probabilities in terms of x-probabilities

The probability that an individual is, at time t, susceptible with k_1 susceptible, k_2 infectious, and k_3 recovered partners is given by the multinomial expression

p_(−,k)(t) = (n! / (k_1! k_2! k_3!)) x_1(t)^{k_1} x_2(t)^{k_2} x_3(t)^{k_3}.   (2.7)

The solution of the x-system then gives us a complete Markovian description of the i-state dynamics of susceptible individuals. In this setting of a static network age does not play a role. Therefore, i-level probabilities can immediately be linked to p-level fractions in step 4 below.

Step 4. p-level fractions in terms of i-level probabilities

The i-level probabilities and p-level fractions coincide, i.e.

P_(d,k)(t) = p_(d,k)(t).   (2.8)

In a way, individuals are interchangeable as they all start off in the same state at t = −∞. In the last step (step 5), by combining steps 2, 3, and 4, we can express Λ_− in terms of the x-probabilities. Steps 1 to 5 together then yield the closed system (2.5) of ODE for x. The dynamics of the (n + 1)(n + 2)/2 i-level probabilities p_(−,k)(t) are fully determined by the system of three ODE for x. We can use this three-dimensional system of ODE to determine r, R_0, and the final size as we will show in Sect. 2.3. In this particular case of a static network, we can do even better by considering one renewal equation for x̄. This one equation then allows us to determine the epidemiological quantities as well. This is the topic of Sect. 2.5 where we consider epidemic spread on a static configuration network in greater generality.

The beginning and end of an epidemic: R_0, r, and final epidemic size

In this section we consider the beginning and end of an epidemic. We first focus on R_0 and r, so on the start of an epidemic. Note that we can very easily find an expression for R_0 from the interpretation: when infected individuals are rare, a newly infected individual has exactly n − 1 susceptible partners. It infects any given such partner, before recovering from infection, with probability β/(β + γ).
Therefore, the expected number of secondary infections caused by one newly infected individual is

R_0 = (n − 1) β/(β + γ).   (2.9)

We now rederive R_0 and derive r from the binding site system (2.5). Note that the p-level fractions P_(−,k)(t) can be fully expressed in terms of the binding site level probabilities x_i (Eqs. (2.8), (2.7)). Furthermore, the P_(−,k)(t) fractions, i.e. the fractions concerning individuals with a − disease status, form a closed system. Therefore, a threshold parameter for the disease free steady state of the binding-site system x is also a threshold parameter for the disease free steady state of the p-level system. (This argument extends to the dynamic network cases II and III in Sects. 3 and 4.) Linearization of system (2.5) in the disease free steady state x_1 = 1, x_2 = 0 = x_3 yields a decoupled ODE for the linearization of the ODE for x_2. To avoid any confusion, let x̂_2 denote the linearized x_2 variable. Then the linearization yields

dx̂_2/dt = β(n − 1) x̂_2 − (β + γ) x̂_2,

with (for R_0 > 1) 'far past' behaviour x̂_2(t) ∼ e^{(β(n−1)−(β+γ))t} for t → −∞. In particular, the right-hand side of the ODE for x̂_2 depends only on x̂_2. To illustrate the method used in cases II and III in Sects. 3.3 and 4.3, we derive expressions for R_0 and r from a special form of the characteristic equation. Variation of constants for the ODE of x̂_2 yields

x̂_2(t) = β(n − 1) ∫_{−∞}^{t} e^{−(β+γ)(t−s)} x̂_2(s) ds.

Substituting the ansatz x̂_2(t) = e^{λt} yields the characteristic equation

1 = β(n − 1) ∫_{0}^{∞} e^{−λτ} e^{−(β+γ)τ} dτ.

Then there is a unique real root to this equation for λ that we denote by r and call the Malthusian parameter. Evaluating the integral we find that r = β(n − 1) − (β + γ). Likewise, we can derive the expression (2.9) for R_0 by evaluating the integral with λ = 0. Next, we consider the final epidemic size. We do so by considering the dynamics of x̄ defined in (2.3). Recall (1.2), i.e. the probability that an individual is susceptible at time t is given by x̄(t)^n.
We observe that, by (2.8), x̄(t)^n is also equal to the fraction of susceptible individuals in the population at time t (alternatively, one can show that Σ_k P_(−,k)(t) = x̄(t)^n by combining (2.8) and (2.7)). In fact, it is possible to describe the dynamics of x̄ in terms of only x̄ itself. This was first observed in Miller (2011), where the Volz equations of (Volz 2008) were taken as a starting point. The most important observation is the consistency relation

x_1(t) = x̄(t)^{n−1}.   (2.10)

We can use the interpretation to derive (2.10): x_1 is the probability that a susceptible binding site with owner u is occupied by a susceptible partner v, and x̄^{n−1} is the probability that v is susceptible given that it is a partner of a susceptible individual u (see also (2.27) below). Then, using (2.10) together with algebraic manipulation of the ODE system (2.5) (see Miller 2011 for details), one is able to find a decoupled equation for x̄:

dx̄/dt = −β x̄ + β x̄^{n−1} + γ(1 − x̄).   (2.11)

The fraction of susceptible individuals at the end of the outbreak is determined by the probability x̄(∞). Since x̄ satisfies (2.11) and x̄(∞) is a constant, we find that necessarily x̄(∞) is the unique solution in (0, 1) of

β x̄ = β x̄^{n−1} + γ(1 − x̄).

The final epidemic size is given by 1 − x̄(∞)^n. In Sect. 2.5 we show that one can actually describe the dynamics of the probability x̄ for deterministic epidemics on configuration networks for a much larger class of submodels for infectiousness. The SIR infection that we consider here is a very special case of the situation considered in Sect. 2.5. There we show that it is possible to derive a renewal equation for x̄. The final size equation is then obtained by simply taking the limit t → ∞. We highly recommend reading Sect. 2.5 to understand the derivation of the renewal equation for x̄ based on the interpretation of the model (with a minimum of calculations).

After susceptibility is lost

In the preceding section we have seen that the x-system (2.5) for susceptible binding sites is all that is needed to determine several epidemiological quantities of immediate interest.
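As a numerical illustration of the preceding section (hypothetical parameter values; the code assumes the Volz-type closed system with Λ_− = (n − 1)x_2/x̄ and the final size condition βx̄ = βx̄^{n−1} + γ(1 − x̄), as reconstructed above), one can integrate the binding-site ODE from near the disease-free state and check that x̄(t) approaches the root of the final size equation; the threshold quantities R_0 and r are computed along the way.

```python
beta, gamma, n = 0.6, 0.3, 4

R0 = (n - 1) * beta / (beta + gamma)        # expected secondary cases
r = beta * (n - 1) - (beta + gamma)         # Malthusian parameter

# integrate the closed x-system by forward Euler, starting near the
# disease-free 'far past' state (x1, x2, x3) = (1, 0, 0)
x1, x2, x3, dt = 1.0 - 1e-6, 1e-6, 0.0, 1e-3
for _ in range(200_000):                    # up to t = 200
    xbar = x1 + x2 + x3
    lam = (n - 1) * x2 / xbar               # mean field at distance one
    d1 = -beta * lam * x1
    d2 = beta * lam * x1 - (beta + gamma) * x2
    d3 = gamma * x2
    x1, x2, x3 = x1 + dt * d1, x2 + dt * d2, x3 + dt * d3
xbar_ode = x1 + x2 + x3

# solve the final size condition for the nontrivial root in (0, 1)
def g(x):
    return -beta * x + beta * x**(n - 1) + gamma * (1 - x)

lo, hi = 0.0, 0.999                         # g(0) = gamma > 0, g(0.999) < 0 here
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
xbar_inf = 0.5 * (lo + hi)
final_size = 1 - xbar_inf**n                # fraction ever infected
```

With these illustrative values R_0 = 2 and r = 0.9, the epidemic takes off and the long-time value of x̄ from the ODE agrees with the fixed point of the final size equation.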
On the other hand, we might not only be interested in the fraction (1.2) of susceptibles in the population, but also in the dynamics of i-level probabilities p_(d,k)(t) (and likewise p-level fractions P_(d,k)(t) given by (2.8)) for d = +, *. So what happens after an individual becomes infected? We work out the details for infectious individuals and only briefly describe recovered individuals. Again, we are able to formulate the model following steps 1-5 of Sect. 2.2 (where the word 'susceptible' should be replaced by 'infectious' or 'recovered' whenever appropriate and step 3 should be replaced by a slightly different step 3', but we will come back to this later on in this section). But now we need to take into account the exceptional binding site, i.e. the binding site through which infection was transmitted to the owner (see also Fig. 2). In step 1 one considers the dynamics of infectious binding sites, i.e. binding sites having infectious owners. Suppose that the owner became infected at time t_+ and that it does not recover in the period under consideration. Let y^e_i(t | t_+) denote the probability for the exceptional binding site to be in state i at time t, i = 1, 2, 3. Similarly, y_i(t | t_+) denotes the probability for a non-exceptional binding site to be in state i at time t, i = 1, 2, 3. Here the probabilities are defined only for t ≥ t_+.

Fig. 4 (caption): The susceptible partner v of an infectious individual u has a certain number of infectious partners and the mean of this number equals Λ_+ (note that this number is always larger than or equal to 1, since u is a partner).

Note that y and y^e are probability vectors, i.e. the components are nonnegative and sum to one. Instead of 'far past' conditions we now have to take into account the distribution of binding site states at the time of infection t_+. Whether or not an infectious binding site is exceptional has an influence on the state it has at epidemiological birth.
Indeed, the exceptional binding site is in state 2 at time t_+ with probability 1, while the distribution of the state of a non-exceptional binding site at time t_+ is given by x(t_+)/x̄(t_+). So we have boundary conditions

y^e(t_+ | t_+) = (0, 1, 0),  y(t_+ | t_+) = x(t_+)/x̄(t_+).   (2.13)

The mean field at distance one assumption again plays a role. Here, we need to deal with the environmental variable Λ_+ that is defined as the expected number of infectious partners of a susceptible partner of an infectious individual (in terms of Fig. 4: we envisage the uv connection as a random choice among all such connections; compare with Fig. 3). We can redefine Λ_+ in terms of p-level fractions P_(−,k) for susceptible individuals:

Λ_+(t) = Σ_k k_2 (k_2 P_(−,k)(t)) / (Σ_l l_2 P_(−,l)(t)).   (2.14)

In particular, once again, Λ_+ can be expressed in terms of x by combining steps 2, 3, and 4. Using (2.14), (2.8), and (2.7) we find that

Λ_+(t) = 1 + (n − 1) x_2(t)/x̄(t)   (2.15)

(alternatively, one can find the same expression for Λ_+ in terms of x-probabilities directly from the interpretation, exactly as before in the case of Λ_−). The rates at which changes in the states (1, 2, 3) of infectious binding sites occur are the same for each binding site, including the exceptional one. There is a rate γ at which an infectious partner of an infectious binding site recovers (this corresponds to a change in state from 2 to 3). And there is a rate at which a susceptible partner of an infectious binding site becomes infected (either along the binding site under consideration or by one of its other infectious partners), corresponding to a change in state from 1 to 2. The rate at which this occurs is βΛ_+, where Λ_+ is defined by (2.14) and hence given by (2.15). Recall that we condition on the owner of the infectious binding site under consideration not recovering; therefore, these are all state changes that can occur. So we find that the dynamics of y and y^e are described by the same ODE system and case specific boundary conditions (2.13).
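To make this concrete, the sketch below uses hypothetical values and assumes the y-system takes the form dy_1/dt = −βΛ_+ y_1, dy_2/dt = βΛ_+ y_1 − γ y_2, dy_3/dt = γ y_2, as suggested by the rates just listed, and that Λ_+ reduces to 1 + (n − 1)x_2/x̄ under the multinomial expression for p_(−,k). It first checks that reduction by brute-force enumeration, then integrates the exceptional binding site from its boundary condition (0, 1, 0) with Λ_+ frozen for simplicity.

```python
from math import factorial

beta, gamma, n = 0.6, 0.3, 4
x1, x2, x3 = 0.5, 0.3, 0.2       # illustrative x-probabilities (xbar = 1 here,
xbar = x1 + x2 + x3              #  and the ratio in (2.14) is scale-invariant)

def p_minus(k1, k2, k3):
    """Multinomial expression for p_{(-,k)} (step 3)."""
    c = factorial(n) // (factorial(k1) * factorial(k2) * factorial(k3))
    return c * x1**k1 * x2**k2 * x3**k3

states = [(k1, k2, n - k1 - k2) for k1 in range(n + 1) for k2 in range(n + 1 - k1)]

# Lambda_+ : size-biased by k2, i.e. pick a random S-I connection and
# count the infectious partners on the susceptible side
lam_plus = (sum(k[1] * k[1] * p_minus(*k) for k in states)
            / sum(k[1] * p_minus(*k) for k in states))

# integrate the exceptional binding site y^e from (0, 1, 0),
# with lam_plus held constant purely for illustration
ye, dt = (0.0, 1.0, 0.0), 1e-3
for _ in range(50_000):          # up to t - t_+ = 50
    y1, y2, y3 = ye
    ye = (y1 + dt * (-beta * lam_plus * y1),
          y2 + dt * (beta * lam_plus * y1 - gamma * y2),
          y3 + dt * (gamma * y2))
```

The first component of y^e stays exactly zero (no flow into state 1 on a static network), the components keep summing to one, and for large t − t_+ the infectious partner has almost surely recovered.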
Observe that this means that y^e_1(t | t_+) = 0 for all t ≥ t_+. This also immediately follows from the interpretation: at time t_+, the binding site is occupied by an infectious partner, the network is static, and an infectious individual cannot become susceptible again. Next, we turn to infectious individuals. Compared to susceptible i-level probabilities, it is more involved to express infectious i-level probabilities in terms of y^e- and y-probabilities. Therefore, we first consider conditional i-level probabilities before finding an expression for the unconditional probabilities. We replace step 3 by step 3'.

Step 3'. Infectious i-level probabilities in terms of y and y^e

We let φ_(+,k)(t | t_+) denote the probability that an infectious individual, infected at time t_+, is in state (+, k) at time t, given no recovery. As in the case of a susceptible individual, this is given by a multinomial expression (note that there is one exceptional binding site, which is either in state 2 or 3):

φ_(+,k)(t | t_+) = y^e_2 ((n − 1)! / (k_1! (k_2 − 1)! k_3!)) y_1^{k_1} y_2^{k_2 − 1} y_3^{k_3} + y^e_3 ((n − 1)! / (k_1! k_2! (k_3 − 1)!)) y_1^{k_1} y_2^{k_2} y_3^{k_3 − 1},   (2.17)

where all probabilities are evaluated at (t | t_+) and a term is absent whenever k_2 − 1 < 0 or k_3 − 1 < 0, respectively. Note that φ_(+,k)(t | t_+) = 0 for k = (n, 0, 0), i.e. for all t ≥ t_+ at least one partner is not susceptible. A susceptible individual becomes infected at time t_+ if infection is transmitted to this individual through one of its n binding sites. Infection is transmitted at rate β. Therefore, the force of infection at time t_+, i.e. the rate at which a susceptible individual becomes infected at time t_+, equals βn (x_2/x̄)(t_+), and consequently the incidence at time t_+, i.e. the fraction of the population that becomes, per unit of time, infected at time t_+, equals βn x_2(t_+) x̄(t_+)^{n−1} (recall that x̄^n is the fraction of the population that is susceptible). Furthermore, an infectious individual that is infected at time t_+ is still infectious at time t if it does not recover in the period (t_+, t).
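The multinomial structure of the conditional probabilities φ_(+,k) can be sanity-checked numerically. The sketch below uses hypothetical values for y and y^e and assumes the reconstruction of (2.17) given above, in which the exceptional binding site contributes a factor y^e_2 or y^e_3 while the remaining n − 1 binding sites are i.i.d.; it verifies that the φ_(+,k) sum to one over all configurations and vanish at k = (n, 0, 0).

```python
from math import factorial

n = 4
y = {1: 0.5, 2: 0.3, 3: 0.2}     # illustrative non-exceptional probabilities
ye = {1: 0.0, 2: 0.7, 3: 0.3}    # exceptional binding site: never in state 1

def phi(k1, k2, k3):
    """phi_{(+,k)}: one exceptional binding site in state 2 or 3, the
    other n-1 i.i.d. with distribution y (a reconstruction sketch)."""
    total = 0.0
    for i, (d1, d2, d3) in ((2, (0, 1, 0)), (3, (0, 0, 1))):
        m1, m2, m3 = k1 - d1, k2 - d2, k3 - d3
        if min(m1, m2, m3) < 0:
            continue             # this term is absent
        c = factorial(n - 1) // (factorial(m1) * factorial(m2) * factorial(m3))
        total += ye[i] * c * y[1]**m1 * y[2]**m2 * y[3]**m3
    return total

states = [(k1, k2, n - k1 - k2) for k1 in range(n + 1) for k2 in range(n + 1 - k1)]
mass = sum(phi(*k) for k in states)
```

The total mass equals y^e_2 + y^e_3 = 1, confirming that the exceptional/non-exceptional bookkeeping distributes probability consistently over the i-states.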
Since the infectious period of an individual is assumed to be exponentially distributed with rate γ, the probability that an individual infected at time t_+ is still infectious at time t is e^{−γ(t−t_+)}. We then find an expression for the unconditional i-level probabilities p_(+,k)(t) that a randomly chosen individual is in state (+, k) at time t in terms of infectious binding site probabilities and the history of susceptible binding site probabilities:

p_(+,k)(t) = ∫_{−∞}^{t} βn x_2(t_+) x̄(t_+)^{n−1} e^{−γ(t−t_+)} φ_(+,k)(t | t_+) dt_+,   (2.20)

where φ_(+,k)(t | t_+) is given by (2.17). The i-level probabilities p_(+,k)(t) are lifted to the p-level by (2.8). In this way we can use infectious binding sites as building blocks for infectious individuals. We see that y and y^e explicitly depend on the dynamics of x through the boundary conditions (2.13) and the environmental variable Λ_+ (2.15). In addition, x_2 plays a role in determining the time of infection of an individual.

Remark 3 Similar to the ODE system for − individuals considered in Remark 2, one obtains the p-level ODE system by differentiation of (2.20) and use of (2.16), (2.7) and (2.8). In doing so, one obtains a system of (n + 1)(n + 2)/2 ODE for the p-level fractions concerning individuals with a + disease status (cf. Lindquist et al. 2011, Eq. (13)).

In the case of recovered individuals, one considers their binding sites and first conditions on the time of infection t_+ and the time of recovery t_*. Again one needs to distinguish between the exceptional and the non-exceptional binding sites. The dynamics of recovered binding sites are described by taking into account the mean field at distance one assumption for the mean number Λ_* of infectious partners of a susceptible partner of a recovered individual. Boundary conditions are given by y(t_* | t_+) and y^e(t_* | t_+) for non-exceptional and exceptional binding sites, i.e.

z(t_* | t_+, t_*) = y(t_* | t_+),  z^e(t_* | t_+, t_*) = y^e(t_* | t_+).

The dynamics for z and z^e can be described by a system of ODE identical to the ODE systems for y and y^e, but with Λ_+ replaced by Λ_*. The environmental variable Λ_* is given by
Λ_*(t) = Σ_k k_2 (k_3 P_(−,k)(t)) / (Σ_l l_3 P_(−,l)(t)).   (2.21)

By combining (2.21) with (2.8) and (2.7) we find

Λ_*(t) = (n − 1) x_2(t)/x̄(t).   (2.22)

We find an expression for the probability ψ_(*,k)(t | t_+, t_*) that a recovered individual, infected at time t_+ and recovered at time t_*, is in state (*, k) at time t ≥ t_*, in terms of z and z^e probabilities for recovered binding sites, with the same reasoning as for φ_(+,k)(t | t_+) (one can simply replace φ by ψ, y_i by z_i, and y^e_i by z^e_i in (2.17)). Then, to arrive at an expression for the unconditional probability p_(*,k)(t), we again need to take into account the incidence βn x_2(t_+) x̄(t_+)^{n−1} at t_+. The probability that recovery does not occur in the time interval (t_+, t_*) is given by e^{−γ(t_*−t_+)} and the rate at which an infectious individual recovers is γ; therefore

P_(*,k)(t) = p_(*,k)(t) = ∫_{−∞}^{t} ∫_{t_+}^{t} βn x_2(t_+) x̄(t_+)^{n−1} e^{−γ(t_*−t_+)} γ ψ_(*,k)(t | t_+, t_*) dt_* dt_+,   (2.23)

where the first equality in (2.23) follows from (2.8).

The renewal equation for the Volz variable

So far we dealt with the SIR situation, where an individual becomes infectious immediately upon becoming infected and stays infectious for an exponentially distributed amount of time, with rate parameter γ, hence mean γ^{−1}. During the infectious period any susceptible partner is infected at rate (= probability per unit of time) β. Here we incorporate randomness in infectiousness via a variable ξ taking values in a set Ω according to a distribution specified by a measure m on Ω. This sounds abstract at first, but hopefully less so if we mention that the SIR situation corresponds to Ω = (0, ∞) with m the exponential distribution with parameter γ, and with ξ corresponding to the length of the infectious period. In this section we only consider the setting where the 'R' characteristic holds, i.e. after becoming infected, individuals cannot become susceptible to infection any more. In order to describe how the probability of transmission to a susceptible partner depends on ξ, we need the auxiliary variable τ corresponding to the 'age of infection', i.e. the time on a clock that starts when an individual becomes infected.
As a key model ingredient we introduce π(τ, ξ) = the probability that transmission to a susceptible partner happens before τ, given ξ. In the SIR example we have the corresponding explicit expression. It is important to note a certain asymmetry. On the one hand, there is dependence in the risk of infection of partners of an infectious individual u: their risk of getting infected by u depends on the length of the infectious period of u (and, possibly, other aspects of infectiousness encoded in ξ). On the other hand, if u is susceptible, the risk that u itself becomes infected depends on the lengths of the infectious periods of its various infectious partners. But these partners are independent of one another when it comes to the length of their infectious period (see also Diekmann et al. 2013, Sect. 2.3 'The pitfall of overlooking dependence'). In particular, the probability that an individual escapes infection from its partner, up to at least τ units of time after the partner became infected, equals (2.24). For the Markovian SIR example, (2.24) boils down to a formula that can also be understood in terms of two competing events (transmission versus ending of the infectious period) that occur at respective rates β and γ.

As in Diekmann et al. (1998a) and earlier subsections, we consider a static configuration network with uniform degree distribution: every individual is connected to exactly n other individuals. At the end of this section we shall formulate the renewal equation for an arbitrary degree distribution. In Diekmann et al. (1998a) an expression for R_0 and equations for both the final size and the probability of a minor outbreak were derived. In addition, it was sketched how to formulate a nonlinear renewal equation for a scalar quantity, but the procedure is so complicated that the resulting equation was not written down.
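For the Markovian SIR special case, the competing-risks reading of (2.24) is easy to check numerically. The sketch below (with illustrative parameter values; the closed form is our reading of the Markovian case) compares the escape probability (γ + βe^{−(β+γ)τ})/(β + γ) with a Monte Carlo estimate obtained by drawing exponential infectious periods:

```python
import math
import random

def escape_prob(tau, beta, gamma):
    # Probability that a susceptible partner escapes infection up to time tau
    # after its partner became infected, in the Markovian SIR case:
    # transmission (rate beta) competes with recovery (rate gamma).
    return (gamma + beta * math.exp(-(beta + gamma) * tau)) / (beta + gamma)

def escape_prob_mc(tau, beta, gamma, n_samples=200_000, seed=1):
    # Monte Carlo check: draw the infectious period T ~ Exp(gamma) and the
    # potential transmission time X ~ Exp(beta); the partner escapes up to tau
    # iff no transmission occurred in [0, min(T, tau)].
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        T = rng.expovariate(gamma)
        X = rng.expovariate(beta)
        if X > min(T, tau):
            hits += 1
    return hits / n_samples

beta, gamma = 0.5, 1.0 / 3.0
print(escape_prob(2.0, beta, gamma), escape_prob_mc(2.0, beta, gamma))
```

At τ = 0 the formula gives 1, and as τ → ∞ it tends to γ/(β + γ), the overall probability of escaping infection by this partner.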
The brilliant idea of Volz (2008) is to focus on the variable θ(t) corresponding to the probability that along a randomly chosen partnership between individuals u and v no transmission occurred from v to u before time t, given that no transmission occurred from u to v (see also Fig. 5 for a schematic representation). Here one should think of 'probability of transmission' as being defined by π (and hence F) and not require that the individual at the receiving end of the link is indeed susceptible (though, if it actually is, or has been, infectious, the condition of no transmission in the opposite direction is indeed a nontrivial condition). The variable θ corresponds to x̄ introduced in Sect. 2.1 and therefore we use that symbol also in this section. We reformulate (2.3) as

x̄(t) = prob{a binding site is susceptible at time t | its owner does not become infected through one of its other binding sites before time t} (2.26)

[Fig. 6 Schematic representation of x̄. The binding site under consideration is indicated in green. Its owner has three binding sites in total. It is given that no transmission occurs through its other two binding sites.]

(see also Fig. 6). There is an underlying stochastic process in the definition for x̄ that we have not carefully defined here. Yet we shall use the words from the definition to derive a consistency relation that takes the form of a nonlinear renewal equation for x̄(t). The renewal equation describes the stochastic process starting 'far back' in time when all individuals were still susceptible.
A precise mathematical definition and an in-depth analysis of a more general stochastic process (that allows the contact intensity between two connected individuals to depend on the number of binding sites of both of them) can be found in Barbour and Reinert (2013). See (Karrer and Newman 2010, Sect. V) for a different way of specifying initial conditions.

To derive the consistency relation for x̄(t) we shift our focus to the partner that occupies the binding site under consideration. For convenience we call the owner of the binding site under consideration u and the partner that occupies this binding site v. Then, given that u does not become infected through one of its n − 1 other binding sites, u is susceptible at time t if (1) v is susceptible at time t or (2) v is not susceptible at time t but has not transmitted infection to u up to time t. We begin by determining (1). Given its susceptible partner u, individual v is susceptible if its n − 1 other binding sites are susceptible. Conditioning on its n − 1 other binding sites not transmitting to v, a binding site of v is susceptible at time t with probability x̄(t). Therefore, given susceptibility of partner u, v is susceptible at time t with probability

x̄(t)^{n−1}. (2.27)

This just repeats the consistency relation (2.10), x_1 = x̄^{n−1}, stating that the probability x_1 that a susceptible binding site is occupied by a susceptible partner is equal to the probability x̄^{n−1} that a partner of a susceptible individual is susceptible. Next, suppose that v gets infected at some time η < t; then u is not infected by v before time t if no transmission occurs in the time interval of length t − η.
The expression (2.27) has as a corollary an expression for the probability per unit of time that v becomes infected at time η. Noting that the probability of no transmission to u in the time interval (η, t) is F(t − η), we conclude that, necessarily, (2.28) holds. Finally, by integration by parts, we obtain the renewal equation (2.29).

For a configuration network with general degree distribution (p_n) for the number of binding sites n of an individual, exactly the same arguments hold. But now there is randomness in the partnership capacity n of susceptible partners. This leads to the renewal equation (2.30) (compare with (2.29)). The solution x̄(t) = 1, −∞ < t < ∞, of (2.30) corresponds to the disease free situation. If we put x̄(t) = 1 − h(t) and assume h is small, we easily deduce the linearized equation (2.31). The corresponding Euler–Lotka characteristic equation reads (2.32). If we evaluate the right hand side of (2.32) at λ = 0, we obtain R_0; cf. Diekmann et al. (2013, Eq. (12.32), p. 294). In short, the relevant characteristics of the initial phase of an epidemic outbreak are easily obtained from the linearized renewal equation (2.31) (see Pellis et al. 2015 for a study of the Malthusian parameter, i.e. the real root of the characteristic equation).

In the case that F is given by (2.25), the renewal equation can be transformed into an ODE for x̄ by differentiation. In the special case of Sects. 2.1–2.4, we have p_n = 1 and p_k = 0 for all k ≠ n, so g(x̄) = x̄^{n−1} and we recover (2.11). As explained in (Diekmann et al. Finite dimensional state representation of linear and nonlinear delay systems. Submitted), the natural generalization of (2.25) assumes that F is of the form (2.34), where, for some m ∈ N, β and V are non-negative vectors in R^m while Σ is a positive-off-diagonal (POD) m × m matrix. (In Diekmann et al. Finite dimensional state representation of linear and nonlinear delay systems.
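For the Markovian SIR case on an n-regular network, the ODE obtained from the renewal equation by differentiation is, in our reading of (2.11), of the standard Volz–Miller form dx̄/dt = −βx̄ + βx̄^{n−1} + γ(1 − x̄), with the fraction of susceptibles given by S = x̄^n; treat this exact form as an assumption of the sketch below, which integrates the equation numerically and reports the final size:

```python
def simulate(beta, gamma, n, x0=0.999, dt=0.001, t_end=200.0):
    # Forward-Euler integration of the (assumed) Volz-Miller type ODE
    #   dx/dt = -beta*x + beta*x**(n-1) + gamma*(1 - x)
    # for an n-regular configuration network; S(t) = x(t)**n.
    x = x0  # x starts slightly below 1, i.e. a small initial infection
    steps = int(t_end / dt)
    for _ in range(steps):
        x += dt * (-beta * x + beta * x ** (n - 1) + gamma * (1.0 - x))
    return x

beta, gamma, n = 0.6, 0.3, 4
x_inf = simulate(beta, gamma, n)
attack_rate = 1.0 - x_inf ** n  # final fraction ever infected
print(x_inf, attack_rate)
```

For these illustrative parameters the linearization grows at rate β(n − 2) − γ > 0, so the epidemic takes off and x̄ decreases to the nontrivial root of the right hand side.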
Submitted) it is explained how this relates to the method of stages from queueing theory, see (Asmussen 1987), but with the kernel F not only incorporating the progress of infectiousness of the focus individual but also the fact that any partner can be infected at most once. Concerning infectiousness, one might think of SEIR, SEI_1I_2, etcetera.) If F is given by (2.34), we introduce a new variable and, since (2.30) can be rewritten accordingly, Eq. (2.35) is a closed system once we replace x̄ at the right hand side of (2.35) by the right hand side of (2.36). So one can solve/analyze (2.35) and next use the identity (2.36) to draw conclusions about x̄. We conclude that various ODE systems as derived in Miller et al. (2012) are subsumed in (2.30) and can be deduced from (2.30) by a special choice of F and differentiation.

Part II: dynamic network without demographic turnover

In Sect. 2, only one environmental variable Λ_− is involved in the specification of the dynamics of the susceptible binding sites. In dynamic networks, additional environmental variables play a role. Notably, we have to specify the (probability distribution of the) disease status of a new partner. Before formulating the model for susceptible binding sites, we first consider the network itself in Sect. 3.1. This is needed in order to determine the appropriate 'far past' conditions of the susceptible binding site system. In Sect. 3.2, the model formulation is divided into three subsections. First, we formulate the model in terms of susceptible binding site probabilities x by following the scheme of five steps presented in Sect. 2.2. This allows us to express in terms of x those environmental variables that are defined in terms of susceptible p-level fractions P^{(−,k)}. We then consider infectious and recovered binding site systems and these allow us to express the other environmental variables in terms of (the history of) x as well.

Network dynamics

Binding sites are either free or occupied.
We denote the fraction of free binding sites in the population by F. We assume that a binding site that is free becomes occupied at rate ρF, while an occupied binding site becomes free at rate σ. Similar to (Leung et al. 2012) (set μ = 0), we find that F satisfies an ODE from which it follows that F converges to a constant for t → ∞. Therefore, we assume that the fraction of free binding sites is constant, and this constant is again denoted by the symbol F. Then F satisfies (3.1). Although we could give an explicit expression in terms of σ and ρ for F, we prefer to state the more useful identity (3.1) that, viewed as an equation, has F as its unique positive root. The network structure, although dynamic, is stable. A randomly chosen binding site (in the pool of all binding sites) is free with probability F and occupied by a partner with probability 1 − F. Later on we shall use that, given that a binding site is free with probability F at time τ, the probability that a binding site is free at time ξ + τ is F (and the probability that it is occupied at time ξ + τ is 1 − F). Finally, later on in Sect. 3.2.2, we also need the probability ϕ_1(ξ) that a binding site is free at time ξ + τ if it is occupied at time τ. Note that, by the Markov property, this probability only depends on the length ξ of the time interval. The probability ϕ_1(ξ) is the unique solution of an initial value problem; solving it gives (3.2), where we used (3.1) in the second equality.

Susceptibles

We describe the dynamics of susceptible binding sites in terms of x-probabilities. Long ago in time, by assumption, all individuals (and therefore all binding sites) are susceptible. In accordance with Sect. 3.1, the fraction of free and susceptible binding sites is equal to F and the fraction of susceptible binding sites occupied by susceptible partners is equal to 1 − F; this gives the 'far past' conditions (3.3). The environmental variables F and Λ_− are p-level quantities that we have yet to specify.
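The identity determining F and the probability ϕ_1 can be made concrete. Under the dynamics just described (free → occupied at rate ρF, occupied → free at rate σ), the balance condition reads σ(1 − F) = ρF², and the resulting two-state Markov chain gives ϕ_1(ξ) = F(1 − e^{−(σ+ρF)ξ}); both closed forms are our reading of (3.1)–(3.2) and should be treated as assumptions of this sketch:

```python
import math

def free_fraction(rho, sigma, tol=1e-12):
    # Solve sigma*(1 - F) = rho*F**2 for the unique root F in (0, 1)
    # by bisection (the left side decreases in F, the right side increases).
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rho * mid * mid > sigma * (1.0 - mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def phi1(xi, rho, sigma, F):
    # Probability that a binding site occupied at time tau is free at tau + xi:
    # two-state chain with rates rho*F (free -> occupied) and sigma (occupied -> free).
    # By the balance condition its stationary free-probability equals F itself.
    return F * (1.0 - math.exp(-(sigma + rho * F) * xi))

rho, sigma = 1.0, 0.5
F = free_fraction(rho, sigma)
print(F, phi1(1.0, rho, sigma, F))
```

For ρ = 1, σ = 0.5 the balance condition has the exact root F = 1/2, which the bisection recovers, and ϕ_1(ξ) rises from 0 to F as ξ grows.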
Putting together the various assumptions described above, the dynamics of x are governed by the system (3.4), with 'far past' conditions (3.3). Next, in step 2, we define the environmental variables in terms of p-level fractions. The definition (2.6) of Λ_− in terms of p-level fractions carries over. We define the fractions of free binding sites in terms of p-level fractions as in (3.6), where the sum is over all possible configurations of k with 0 ≤ k_1 + k_2 + k_3 ≤ n. In step 3 we define the i-level probabilities p^{(−,k)}(t) in terms of the x-probabilities by using the conditional independence of binding sites (3.7). As in the static network case I, the i-level probabilities coincide with the p-level fractions, i.e. (2.8) holds. This is step 4 in our model formulation. Then, in step 5, we can express the environmental variables Λ_− and F_− in terms of x-probabilities. By combining (3.7) with (2.8) and (2.6), we again find (2.4) to hold (only now the x are defined by the system of ODE (3.4)). By combining (3.7) with (2.8) and (3.6) we find (3.8), exactly as the interpretations of F_− and x would suggest. Before we can specify F_+ and F_* in terms of (the history of) x we need to define the p-level fractions P^{(+,k)}(t) and P^{( * ,k)}(t). We do so in the next section, where we turn to infectious and recovered binding site systems.

After susceptibility is lost

If an individual becomes infected at time t_+, the binding site through which infection is transmitted is from that point on 'exceptional'. Then, given that its owner became infected at time t_+ and does not recover for the time under consideration, we consider an infectious binding site. Let y_i^e(t | t_+) denote the probability that the exceptional binding site is in state i at time t and y_i(t | t_+) this same probability for a non-exceptional binding site. As in Sect. 2.4, the exceptionalness plays a role only in the states at epidemiological birth, i.e. at time t_+.
The exceptional binding site is with probability one in state 2 at time t_+. The states of all other binding sites are distributed according to x(t_+)/x̄(t_+). Therefore, we put the boundary conditions (3.9). Since the individual does not recover in the period under consideration, the infectious binding sites behave independently of one another. The dynamics of y^e and y are both governed by the same system. Note that there is no rate γ of leaving the infectious state by the owner, as we condition on the owner remaining infectious in the period under consideration (this is compensated by a factor e^{−γ(t−t_+)} in various integrals below). Furthermore, note that, contrary to the network case I of Sect. 2.4, the exceptional binding site can lose its epidemiological parent by separation. Therefore y_1^e(t | t_+) > 0 for t > t_+. Next, similarly to Sect. 2.4, by combinatorics (but now the probabilities y_0^e and y_1^e are not equal to zero for t ≥ t_+), we find that the probability φ^{(+,k)}(t | t_+) that an individual, infected at time t_+, is in state (+, k) at time t ≥ t_+ is given by

(k_0/n) y_0^e y_0^{k_0−1} y_1^{k_1} y_2^{k_2} y_3^{k_3} + (k_1/n) y_1^e y_0^{k_0} y_1^{k_1−1} y_2^{k_2} y_3^{k_3} + (k_2/n) y_2^e y_0^{k_0} y_1^{k_1} y_2^{k_2−1} y_3^{k_3} + (k_3/n) y_3^e y_0^{k_0} y_1^{k_1} y_2^{k_2} y_3^{k_3−1}, evaluated at (t | t_+). (3.10)

The probability P^{(+,k)}(t) that a randomly chosen individual is in state (+, k) at time t is obtained by taking into account the time of infection t_+ and the probability (2.19) that an individual has not recovered a time t − t_+ after infection. The definition (2.18) of the incidence carries over (but now with the x defined by the ODE system (3.4)). By combining (2.8) with the expression (3.11) for P^{(+,k)} in terms of y and y^e, we can redefine F_+ in terms of the history of x, as we will show now. First of all, combining (3.10), (3.11) and (3.6), we express F_+ in terms of y and y^e:

F_+(t) = ∫_{−∞}^{t} βn x_2 x̄^{n−1}(t_+) e^{−γ(t−t_+)} [ y_0^e ȳ^{n−1} + (n − 1) ȳ^e y_0 ȳ^{n−2} ](t | t_+) dt_+. (3.12)

Next, we consider the probabilities y_0, y_0^e.
Note that y_0^e(t | t_+) = ϕ_1(t − t_+), with ϕ_1 given by (3.2). The dynamics of y_0 are described in terms of y_0 and the history of x_0/x̄ (by means of the boundary condition). We can solve for y_0 (note that the time of infection t_+ matters in this probability and not only the length t − t_+ of the time interval). We can further simplify (3.12) to obtain (3.13), which only depends on the model parameters and past probabilities x_i for susceptible binding sites. We can use the consistency condition (3.14) for the total fraction of free binding sites to express F_* in terms of the history of x (use (3.8) for F_− and (3.13) for F_+). So this specifies all environmental variables for the susceptible binding site system x in terms of (the history of) x.

Next, similar to case I of Sect. 2.4, we consider recovered individuals and their binding sites. Suppose that the infectious individual, that was infected at time t_+, recovers at time t_*. After recovery, we still distinguish between the exceptional binding site and the n − 1 other binding sites. We introduce probabilities z_i^e(t | t_+, t_*) and z_i(t | t_+, t_*) for recovered binding sites. The y and y^e probabilities yield the conditions for z and z^e at time t = t_*. The dynamics of z and z^e are described by the system of ODE for y and y^e, with the mean field at distance one quantity Λ_+ replaced by Λ_*, where Λ_* is defined in terms of p-level fractions by (2.21) and hence is given by (2.22) in terms of x-probabilities. Let ψ^{( * ,k)}(t | t_+, t_*) denote the probability that a recovered individual is in state ( * , k) given that it was infected at time t_+ and recovered at time t_*. Then ψ^{( * ,k)}(t | t_+, t_*) can be expressed in terms of z and z^e by replacing φ in (3.10) by ψ, y_i by z_i, and y_i^e by z_i^e. The unconditional probability p^{( * ,k)}(t) is then obtained by taking into account the time of infection t_+ and the recovery time t_*, which, by relation (2.8), equals the p-level fraction P^{( * ,k)}(t).
Note that we can also use this definition of P^{( * ,k)}(t) to define F_* in terms of x, similar to the way we did for F_+ in (3.13).

One renewal equation or a system of six ODE, whatever you like

We ended the model formulation in Sect. 3.2.1 by defining the environmental variables Λ_− and F_− in terms of x (Eqs. (2.4), (3.8)). Subsequently, in Sect. 3.2.2, by considering the infectious binding site probabilities y and y^e, we also defined F_+ and F_* in terms of x (Eqs. (3.13), (3.14)). Combining these formulas, we find that the system describing the dynamics of susceptible binding sites is given by (3.16), with F_+ given by (3.13) and with 'far past' condition (3.17). The ODE (3.16) for x together with the expression (3.13) for F_+ yields a closed system of five equations. By substituting expression (3.13) in system (3.16), one can view (3.16) as a system of four delay differential equations. The dynamics of the 1/6(n + 1)(n + 2)(n + 3) i-level probabilities (hence p-level fractions) p^{(−,k)} for susceptible individuals are fully determined by this set of four delay differential equations (regardless of n). Alternatively, we can view the solution x(t) of (3.16)–(3.17) as fully determined by F_+|_{(−∞,t]}. Interpreting x_2, x̄, and x_0 at the right hand side of (3.13) in this manner, we arrive at the conclusion that the dynamics are fully determined by a single renewal equation for F_+.

One may prefer a system consisting only of ODE rather than a delay system. We can in fact reason directly in terms of the interpretation to derive an ODE for F_+. In order to do so, we first consider the fraction I(t) = Σ_k P^{(+,k)} of infectious binding sites in the population. This fraction decreases when infecteds recover. Infecteds recover at a constant rate γ. The fraction I increases when a susceptible individual becomes infected, so there is the positive term (2.18) in the ODE for I (combine (3.7) with (2.8) and (2.18); the x are defined by the ODE system (3.4)).
We find that the dynamics of I are described by the ODE (3.18), with 'far past' condition I(−∞) = 0. Next, we consider F_+. Any infectious owner recovers at constant rate γ. In addition, partnership formation and separation affect the fraction of free infectious binding sites. There is a rate ρF at which free binding sites become occupied. The fraction of infectious binding sites that are occupied is given by I − F_+ and the rate at which these binding sites become free is σ. Then, finally, a susceptible individual with k_2 infectious partners becomes infected at rate βk_2; taking into account all 0 ≤ k_2 ≤ n, we find the probability per unit of time βn x_2 x̄^{n−1} at which a susceptible individual becomes infected. The probability that a non-exceptional binding site is free and susceptible upon infection is x_0/x̄, so the expected fraction of free binding sites created upon infection of a susceptible individual is (1/n)(n − 1) x_0/x̄. Hence there is a flow β(n − 1) x_0 x_2 x̄^{n−2} into F_+. We have the ODE (3.19) for F_+, with 'far past' condition F_+(−∞) = 0. Alternatively, we can derive the ODE (3.19) for F_+ by differentiating (3.13) with respect to t. Note that we can express I in terms of x by first expressing it in terms of y and y^e (similar to F_+ in Sect. 3.2.2); this yields

I(t) = ∫_{−∞}^{t} βn x_2 x̄^{n−1}(t_+) e^{−γ(t−t_+)} dt_+.

The combination of (3.16) with (3.18) and (3.19) yields a six-dimensional closed system of ODE. (Compare with the slightly different but related network model called the 'dormant contacts' model of (Miller et al. 2012). Presumably (3.16), (3.18), (3.19) is a transformed but equivalent version of their system (3.11)–(3.19) for constant n, i.e. for a degree distribution concentrated in one point.) Both (3.13) and (3.16)–(3.19) can be used to represent the system. In terms of the number of equations, it does not matter too much which system one considers: in the first case, one renewal equation is needed, compared to six ODE in the second case.
In both formulations one can determine r and R_0 with not too much effort. In Sect. 3.3 below, we will use a pragmatic mixture. This gives us a way of determining r and R_0 that prepares for the characterization of r and R_0 in case III in Sect. 4.3 (where a model formulation in terms of only ODE becomes troublesome).

The beginning and end of an epidemic: R_0, r and final size

First, just as in case I, the final size is determined by x̄(∞). But while in case I we derived a simple scalar equation for x̄(∞) ((2.12) or (2.33)), depending explicitly on the parameters, we did not, despite fanatical efforts, manage to derive such an equation from the implicit characterization by (3.16), (3.18), (3.19); see also Appendix 1. Next, in the rest of this section, we use the binding site level system (3.16)–(3.19) to consider the beginning of an epidemic and determine R_0 and r. The point here is not only to use (3.16)–(3.19) to find threshold parameters, but to find threshold parameters with their usual interpretation of R_0 and r. Using the same arguments as in network case I of Sect. 2, we find that a threshold parameter for the disease free steady state of system (3.16)–(3.19) on the binding site level is also a threshold parameter for the disease free steady state of the p-level system. The disease free steady state of (3.16) is given by x_0 = F, x_1 = 1 − F, x_2 = 0 = x_3. Linearization in this state yields a decoupled system of equations for the linearized x_2 and F_+ equations. We let x̃_2 and F̃_+ denote the variables in the linearization in the disease free steady state. Note that, in the disease free steady state, ỹ_0, the probability that an infectious binding site is free at time t given that it is free at time t_+, is equal to the probability F that a randomly chosen binding site is free. This leads to (3.20), which can be viewed as a linear delay differential equation for x̃_2.
In order to obtain an informative version of the corresponding characteristic equation, we rewrite it as a renewal equation for x̃_2. Variation of constants for the ODE for x̃_2, with (3.20) substituted, yields the renewal equation for x̃_2 (where the rearrangement of the terms in the integrals is in preparation for the interpretation). Next, we substitute the ansatz x̃_2(t) = e^{λt} and obtain the characteristic equation (3.21). There is a unique real root of (3.21) and this root is by definition the Malthusian parameter r. We define R_0 = ∫_0^∞ k(ξ) dξ. Then sign(r) = sign(R_0 − 1), and we find that R_0 is a threshold parameter with threshold value one for the stability of the disease free steady state. We can evaluate the integrals and find an explicit expression for R_0; however, the interpretation is easier in the form it is written now. First of all, consider a newly infected individual u. Individual u transmits infection to a susceptible partner with probability ∫_0^∞ β e^{−(σ+β+γ)ξ} dξ = β/(β + σ + γ). By multiplying this probability with the expected number of susceptible partners u has at epidemiological birth plus the expected number of susceptible partners u acquires during its infectious period after epidemiological birth, we obtain R_0. As we will explain now, these are exactly the two terms in {. . .} of (3.22). The mean number of susceptible partners of u at epidemiological birth is (n − 1)(1 − F) (note that, in addition to the susceptible partners, u has (n − 1)F free binding sites and 1 exceptional binding site). This is the first term in {. . .} of (3.22). We are left with determining the expected number of susceptible partners u acquires after epidemiological birth. This goes as follows. At time τ after u became infected, u has not recovered yet with probability e^{−γτ}. The exceptional binding site of u is free at time τ with probability ϕ_1(τ) (see (3.2)).
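Numerically, the two terms in {. . .} of (3.22) (the susceptible partners present at epidemiological birth, and those acquired afterwards at rate ρF per free binding site) can be combined into an estimate of R_0. The sketch below assumes the explicit form ϕ_1(τ) = F(1 − e^{−(σ+ρF)τ}) for (3.2) and uses illustrative parameter values:

```python
import math

def r0_case2(beta, gamma, sigma, rho, n, d_tau=1e-3, tau_max=200.0):
    # Free-binding-site fraction F: unique positive root of sigma*(1-F) = rho*F**2.
    F = (-sigma + math.sqrt(sigma * sigma + 4.0 * rho * sigma)) / (2.0 * rho)
    # phi1: probability that the (occupied) exceptional binding site is free tau later.
    phi1 = lambda tau: F * (1.0 - math.exp(-(sigma + rho * F) * tau))
    # Expected susceptible partners acquired after epidemiological birth:
    # integrate exp(-gamma*tau) * rho*F * (phi1(tau) + (n-1)*F) over tau > 0
    # (midpoint rule; tau_max is large enough that the tail is negligible).
    acquired = 0.0
    tau = 0.5 * d_tau
    while tau < tau_max:
        acquired += math.exp(-gamma * tau) * rho * F * (phi1(tau) + (n - 1) * F) * d_tau
        tau += d_tau
    # Per-partner transmission probability times expected susceptible partners.
    return beta / (beta + sigma + gamma) * ((n - 1) * (1.0 - F) + acquired)

print(r0_case2(beta=0.6, gamma=0.3, sigma=0.5, rho=1.0, n=4))
```

Since the bracketed term does not involve β, the estimate is increasing in β, consistent with the threshold interpretation.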
Each of the n − 1 non-exceptional binding sites of u is free with probability F, regardless of whether it was free or occupied at epidemiological birth (recall Sect. 3.1). Note that a free binding site becomes occupied by a susceptible partner at rate ρF (at the beginning of the epidemic). Integrating over all possible lengths τ > 0 of the infectious period, we find that

∫_0^∞ e^{−γτ} ρF [ϕ_1(τ) + (n − 1)F] dτ

is the expected number of additional susceptible partners of u in its infectious period after epidemiological birth. Note that we made the distinction between the susceptible partners at and after epidemiological birth of u, but what really matters is the total number of susceptible partners in the infectious period of u. So really, we did not need to make any distinction between at and after epidemiological birth. But this distinction is essential in Sect. 4.3 of case III. The distinction here serves both to illustrate this difference with case III and as a preparation for case III. Finally, in the same spirit, we would like to mention that rather than taking the perspective of an infectious individual/binding site, we can also take the perspective of susceptible binding sites 'at risk' of infection, i.e. susceptible binding sites occupied by infectious partners, and interpret R_0 in that way. In the present context this does not change much. Therefore we refrain from elaborating. We leave this for Sect. 4.3 of case III, where this different perspective leads to a major simplification compared to the 'standard' perspective of infectious binding sites that we took here.

Part III: dynamic network with demography

In this part, the network is not only dynamic due to partnership formation and separation but also due to demographic turnover. We assume that there is a constant per capita death rate μ and a constant population birth rate, so that the population size is in equilibrium and the age of individuals is exponentially distributed with parameter μ.
At birth, an individual does not have any partners. Details are presented in Leung et al. (2012).

Network dynamics

In a world with demographic turnover, next to calendar time, also age matters. We keep track of both the age a and the time of birth t_b of an individual (calendar time is then given by t = a + t_b). When we speak about the age and time of birth of a binding site, we mean the age and time of birth of its owner. Often, we assume that the owner of a binding site does not die in the period under consideration. By assumption, at age zero, a binding site is free. A free binding site becomes occupied at rate ρF, where F denotes the total fraction of free binding sites in the population. This F is assumed to be constant (see Leung et al. 2012 for the justification) and satisfies (4.1) (compare with (3.1)). If the binding site is occupied, then it becomes free at rate σ + μ, where σ and μ represent separation and death of the partner, respectively. In this section we will also make use of the following binding site probabilities (where, as usual, we condition on the owner not dying in the period under consideration). We let ϕ_0(a) denote the probability that a binding site is free at age a + α, given that it was free at age α, and ϕ_1(a) the probability that a binding site is free at age a + α, given that it was occupied at age α. Note that, by the Markov property, these probabilities only depend on the length a of the time interval (recall that F is constant). The dynamics of the ϕ_i as a function of a are described by a linear system of ODE with initial conditions ϕ_0(0) = 1 and ϕ_1(0) = 0, from which the explicit expressions for the ϕ_i follow. See also (Leung et al. 2012, Eq. (10)) (where the quantity considered there can be identified with 1 − ϕ_0(a)) and (Leung et al. 2015, Eq. (67)) (where the two quantities considered there can be identified with 1 − ϕ_0(t) and 1 − ϕ_1(t), respectively). Furthermore, we have the identity (use (4.1)) expressing that a randomly chosen binding site is free with probability F.
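To make these age-dependent probabilities concrete: conditional on survival of the owner, a binding site moves between 'free' and 'occupied' with rates ρF and σ + μ, so ϕ_0 and ϕ_1 solve a linear two-state system, and F itself can be found as the fixed point of F = ∫_0^∞ μe^{−μa} ϕ_0(a) da over the stationary age density. The explicit forms below are derived under that reading of (4.1) and should be treated as assumptions, not as formulas taken from the text:

```python
import math

def phis(a, rho, sigma, mu, F):
    # Conditional two-state chain: free -> occupied at rate rho*F,
    # occupied -> free at rate sigma + mu (separation or death of the partner).
    rate = rho * F + sigma + mu
    q = (sigma + mu) / rate                      # stationary free-probability
    phi0 = q + (1.0 - q) * math.exp(-rate * a)   # free at a+alpha | free at alpha
    phi1 = q * (1.0 - math.exp(-rate * a))       # free at a+alpha | occupied at alpha
    return phi0, phi1

def free_fraction_demography(rho, sigma, mu, iters=200):
    # Fixed-point iteration for F = integral of mu*exp(-mu*a)*phi0(a) da;
    # with the explicit phi0 the integral has the closed form below.
    F = 0.5
    for _ in range(iters):
        rate = rho * F + sigma + mu
        q = (sigma + mu) / rate
        F = q + (1.0 - q) * mu / (mu + rate)
    return F

rho, sigma, mu = 1.0, 0.5, 0.05
F = free_fraction_demography(rho, sigma, mu)
print(F, phis(1.0, rho, sigma, mu, F))
```

Setting μ = 0 recovers the case II balance between σ and ρF, since then the stationary free-probability of the chain coincides with F.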
So, according to Bayes' Theorem, the probability density function of the age of (the owner of) a free binding site is given by π_0; similarly, the probability density function of the age of (the owner of) a randomly chosen occupied binding site is π_1 (in view of the derivation of a formula for R_0 in Sect. 4.3 below, we remark that π_0 and π_1 should be compared to the probability distributions q and Q, respectively, in Leung et al. (2015); the difference is that q and Q concern the number of partners while π_0 and π_1 concern the age; the probability distributions, however, provide the same information).

Susceptibles

Demography does not give rise to any additional environmental variables; we still deal with the mean field at distance one variable Λ_− and the fractions F_d of free binding sites with disease status d, d ∈ {−, +, ∗}. We follow the steps 1–5 of Sect. 2.2. In step 1 we consider x-probabilities. Consider a susceptible binding site, born at time t_b, and suppose that its owner, for the period under consideration, does not die and does not become infected through one of its other n − 1 binding sites. Let F = (F_−, F_+, F_*). The dynamics of x as a function of age are described by the system (4.7). An individual is assumed to be susceptible without any partners at birth (and therefore the same applies to all its binding sites), so we have the birth conditions (4.8). Given the environmental variables F and Λ_−, we can formally view x(a | t_b) as a function of the environmental variables (4.9). We now define the environmental variables in terms of p-level fractions. Note that Λ_− has the exact same interpretation as in network cases I and II. It should therefore come as no surprise that the definition of Λ_− in terms of p-level fractions is again (2.6). The fractions of free binding sites with disease status d are again defined by (3.6). This is step 2. Next, in step 3, we define the i-level probabilities p^{(−,k)}(a | t_b) in terms of x.
Given that an individual was born at time t_b and does not die in the period under consideration, the probability that the individual is at age a in state (−, k) is given by the multinomial expression (4.10) (compare with Eq. (3.7) and note that we condition on the survival of the individual). In step 4 we relate the p-level fractions P^{(d,k)}. In order to do so, we use the stationary age distribution with density a ↦ μe^{−μa}. The fraction of the population that is in state (d, k) at time t is obtained by adding all individuals in that state that are born before time t and are still alive at time t. We find (4.11). In step 5, we express the environmental variables Λ_− and F_− in terms of x. This can be done by combining (4.10) and (4.11) with (2.6) (for Λ_−) or (3.6) (for F_−); we find the corresponding expressions, with (4.13) for F_−. In order to complete step 5 (expressing the environmental variables F_+ and F_* in terms of x) we need to consider infectious and recovered binding sites.

After susceptibility is lost

Consider a binding site that was born at time t_b and infected at age a_+, and suppose its owner remains alive and infectious for the period under consideration. Note that age a_+ for this individual corresponds to calendar time t_+ = t_b + a_+. Let y_i^e(a | t_b, a_+) denote the probability that the exceptional binding site is in state i at age a and y_i(a | t_b, a_+) the same probability for a non-exceptional binding site. Then, at age a_+, the exceptional binding site is with certainty in state 2, while the other n − 1 binding site states are distributed according to x(a_+ | t_b)/x̄(a_+ | t_b). The dynamics of infectious binding sites are described by (4.14). Again, there is no rate γ in M_+(F, Λ_+) for the owner leaving the system of infectious binding sites, as we assume that infectious binding sites remain infectious in the period under consideration (which is compensated by a factor e^{−γ(a−a_+)} in various integrals below). In (4.14) we can consider Λ_+ as 'known'.
Indeed, by combining (2.14) with (4.11) and (4.10), we can express Λ_+ in terms of x. We now set out to derive an expression for F_+. The probability φ_(+,k)(t_b, a | a_+) that an individual, born at time t_b and infected at age a_+, is in state (+, k) at age a ≥ a_+ is given by

\[
\Bigl[\,y^e_0\, y_0^{k_0-1} y_1^{k_1} y_2^{k_2} y_3^{k_3}
+ \tfrac{k_1}{n}\, y^e_1\, y_0^{k_0} y_1^{k_1-1} y_2^{k_2} y_3^{k_3}
+ \tfrac{k_2}{n}\, y^e_2\, y_0^{k_0} y_1^{k_1} y_2^{k_2-1} y_3^{k_3}
+ \tfrac{k_3}{n}\, y^e_3\, y_0^{k_0} y_1^{k_1} y_2^{k_2} y_3^{k_3-1}\Bigr](a \mid t_b, a_+). \tag{4.15}
\]

The contribution to the incidence of individuals of age a_+, born at time t_b and alive for the period under consideration, is obtained by reasoning similar to cases I and II. Then, taking into account all possible ages of infection 0 ≤ a_+ ≤ a, and the probability that as yet recovery did not occur, we obtain the probability that an individual, born at time t_b, is in state (+, k) at age a. The p-level fractions P_(+,k)(t) at time t are obtained through relation (4.11). In this way, the dynamics of infectious binding sites describe the dynamics of infectious individuals and the population of such individuals. In particular, we find that F_+ is defined in terms of infectious (and susceptible) binding sites through an integral whose integrand reads

\[
\bigl[\,y^e_0\,\bar y^{\,n-1} + (n-1)\,\bar y^e\, y_0\,\bar y^{\,n-2}\bigr](a \mid t-a, a_+)\, da_+\, da .
\]

Since y^e and y are probability vectors, they sum to one: ȳ^e(a | t_b, a_+) = 1 = ȳ(a | t_b, a_+). Moreover, with ϕ_1 given by (4.3), we can express F_+ in terms of the history of x, arriving at (4.18). We can use the consistency condition (4.19) for the total fraction of free binding sites to express F_∗ in terms of the history of x, by using (4.13) and (4.18). Thus we have specified all environmental variables for (4.7) in terms of (the history of) x. For completeness we briefly consider recovered binding sites.
Suppose a recovered binding site was born at time t_b, infected at age a_+, and recovered at age a_∗ (and, as usual, suppose its owner does not die in the period under consideration). We consider probabilities z^e_i(a | t_b, a_+, a_∗) and z_i(a | t_b, a_+, a_∗) for recovered exceptional and non-exceptional binding sites in state i, respectively. The y and y^e probabilities yield the conditions for z and z^e at age a = a_∗. The dynamics for z and z^e can be described by a system of ODE similar to the ODE system (4.14) for y and y^e. Only now the mean field at distance one quantity Λ_+ needs to be replaced by Λ_∗, where Λ_∗ is defined in terms of p-level fractions by (2.21). By combining (2.21) with (4.11) and (4.10) we can express Λ_∗ in terms of x-probabilities. Let ψ_(∗,k)(a | t_b, a_+, a_∗) denote the probability that a recovered individual is in state (∗, k) given that it was born at time t_b, infected at age a_+ and recovered at age a_∗, and does not die in the period under consideration. Then ψ_(∗,k)(a | t_b, a_+, a_∗) can be expressed in terms of z and z^e by replacing φ by ψ, y_i by z_i, and y^e_i by z^e_i in (4.15). The probability p_(∗,k)(a | t_b) is then obtained by taking into account all possibilities for the age of infection a_+ and the age of recovery a_∗. Finally, by relation (4.11), we obtain the p-level fractions P_(∗,k)(t).

A system of three renewal equations

To summarize, by replacing F_∗ by (4.19), we are left with three environmental variables Λ_−, F_−, and F_+, which are defined by (4.20)-(4.22). Recall that x(a | t − a) is completely determined by F|_[t−a,t] and Λ_−|_[t−a,t] via (4.7)-(4.9). Therefore (4.20)-(4.22) is a closed system of three renewal equations. Together, the three renewal Eqs. (4.20)-(4.22) fully determine the dynamics of i-level probabilities p_(−,k)(a | t_b) and p-level fractions P_(−,k)(t). (Note that there are in total 1/6 (n + 1)(n + 2)(n + 3) states of the form (−, k).) One may not particularly like to work with renewal equations.
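The combinatorial count quoted above is the stars-and-bars number of 4-tuples summing to n. A brute-force enumeration (ours, purely as a sanity check, not from the paper) confirms it:

```python
# Counting i-level states (-, k) with k = (k_0, k_1, k_2, k_3),
# k_0 + k_1 + k_2 + k_3 = n: brute-force enumeration versus the
# stars-and-bars formula (n+1)(n+2)(n+3)/6 quoted in the text.
from itertools import product

def count_states(n):
    """Enumerate all 4-tuples of nonnegative integers summing to n."""
    return sum(1 for k in product(range(n + 1), repeat=4) if sum(k) == n)

for n in (1, 2, 3, 10):
    assert count_states(n) == (n + 1) * (n + 2) * (n + 3) // 6
```

The same enumeration with a free disease status d ∈ {−, +} reproduces the 1/3 (n + 1)(n + 2)(n + 3) count of P_(±,k) variables mentioned just below.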
However, the ODE system (4.7) has t_b as a parameter, so it is not finite dimensional. Therefore, contrary to Sect. 3, in order to describe the model with a closed finite system of ODE one needs to turn to the p-level fractions P_(−,k) and P_(+,k) (the p-level system of ODE can be written down directly from the interpretation; see also Leung et al. (2015) and Remarks 2 and 3). Together with the definition of the environmental variables F_± and Λ_± in terms of p-level fractions, the system is then closed. However, there are in total 1/3 (n + 1)(n + 2)(n + 3) variables of the form P_(±,k). As the system of three renewal Eqs. (4.20)-(4.22) has a clear interpretation, and R_0, r, and the endemic steady state can very nicely be characterized from this system (see Sect. 4.3 below), we strongly advocate this formulation of the model rather than a (very high-dimensional) system with only ODE.

The beginning of an epidemic: R_0 and r

To describe the beginning of an epidemic, we are interested in characterizing R_0 and r. We have done so for the full p-level ODE system in Leung et al. (2015). In that paper, the characterization of R_0 involved the dynamics of infectious binding sites in the beginning of the epidemic. This infectious binding site system was then, via a linear map, coupled to the linearized p-system to show that the definition of R_0 via the interpretation indeed yields a threshold parameter with threshold value one for the p-level system. In this section, we use the system of three renewal Eqs. (4.20)-(4.22) to characterize R_0 and r. Using the same arguments as in Sects. 2.3 and 3.3 of network cases I and II, we deduce that, in order to find a threshold parameter for the disease free steady state of the p-level system, we can focus on a threshold parameter for the stability of the disease free steady state of the binding site level system (4.7). Hence we can focus on (4.20)-(4.22). The linearization of (4.20)-(4.22) involves the linearization of (4.7).
The disease free steady state of (4.7) is given by x̄_0, where ϕ_0(a), the probability that a binding site is free at age a given that it was born free (i.e. free at age 0), is given by (4.2). We again put a hat on the symbols to denote the variables in the linearized system. The ODE for the linearized variable x̂_2 is straightforward. In the following we condition (as usual) on the owner of the binding site staying alive in the period under consideration. The probability y^e_0(a | t_b, a_+) is independent of t_b and given by (4.16). On the other hand, y_0(a | t_b, a_+) in the disease free steady state can be interpreted as the probability that a binding site is free at age a given that it is free at age a_+ with probability ϕ_0(a_+). But this is equal to the probability ϕ_0(a) that a binding site is free at age a given that it was born free at age 0 (since then, the probability that it is free at age a_+ is exactly ϕ_0(a_+)). So we find, in the disease free steady state, a pair of equalities: the first follows from simply evaluating (4.17) in the disease free steady state, and the second can be deduced (as above) from the interpretation (or by algebraic manipulation). So we find that F̂_+ satisfies an equation in which we used relation (4.4) between F and ϕ_0. We now derive two renewal equations for F̂_+ and Λ̂_−. Variation of constants yields an expression (4.26) for x̂_2 in terms of F̂_+ and Λ̂_−. We substitute this in the expressions for F̂_+ and Λ̂_− to find the system of two renewal equations (4.27)-(4.28). In preparation for defining and interpreting R_0 we write these integrals in convolution form (the densities π_0 and π_1 appear by multiplying with F/F and (1 − F)/(1 − F)). This is a system of two renewal equations of the form (4.29) with non-negative kernel K̂. From these two renewal equations (4.27) and (4.28), we can obtain the characteristic equation and deduce threshold parameters r and R_0.
We define R_0 by (4.30) as the spectral radius of ∫_0^∞ K̂(τ)dτ. Note that ∫_0^∞ K̂(τ)dτ is a 2 × 2 matrix that can be evaluated explicitly, so we have an explicit expression for R_0. We define r to be the real root (if it exists) of the characteristic equation (4.31), i.e. the requirement that the spectral radius of ∫_0^∞ e^{−λτ} K̂(τ)dτ equals 1. Note that r is necessarily the rightmost solution of the characteristic Eq. (4.31). Then r is a threshold parameter with threshold value zero for the stability of the disease free steady state of the system of renewal Eqs. (4.20)-(4.22). Furthermore sign(R_0 − 1) = sign(r), so the definition (4.30) of R_0 indeed has the right threshold property. For R_0 > 1, to see that sign(R_0 − 1) = sign(r), one uses that each matrix element of ∫_0^∞ e^{−λτ} K̂(τ)dτ is a strictly monotonically decreasing function of λ, and therefore the dominant eigenvalue of ∫_0^∞ e^{−λτ} K̂(τ)dτ is strictly monotonically decreasing as a function of λ (Li and Schneider 2002; Diekmann et al. 2013, Sect. 8.2 on the intrinsic growth rate). For R_0 < 1, one uses that the rightmost real solution r of (4.31) (if it exists) is strictly less than zero, and this establishes the stability of the disease free steady state (Heijmans 1986; Inaba 1990; Thieme 2009). In the epidemic context, 'reproduction' corresponds to transmission of the infectious agent to another host. The definition of (and the derivation of an expression for) R_0 in Leung et al. (2015) is in this spirit: it follows infectious binding sites in time and counts how many new infectious binding sites are formed when transmission occurs. A slight modification of the derivation in Leung et al. (2015) is required to generalize from SI to SIR. We did check that (4.30) is identical to the appropriately modified version of the dominant eigenvalue of (59) in Appendix C of Leung et al. (2015). Yet we would like to have a direct interpretation of the would-be reproduction number (4.30). To achieve this, it is helpful to think in terms of reproduction 'opportunities'.
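The relation sign(R_0 − 1) = sign(r) can be illustrated numerically. The kernel below, K(τ) = A e^{−cτ} with a made-up nonnegative 2 × 2 matrix A, is a hypothetical stand-in for the model's kernel K̂, chosen only because its Laplace transform is computable in closed form:

```python
# Toy illustration of the threshold relation sign(R0 - 1) = sign(r).
# K(tau) = A * exp(-c * tau) is a hypothetical stand-in for the model's
# 2x2 kernel K-hat; the numbers in A and c are invented for illustration.
import numpy as np

A = np.array([[0.4, 0.3],
              [0.2, 0.5]])   # nonnegative matrix (hypothetical)
c = 0.6                      # kernel decay rate (hypothetical)

# R0 = spectral radius of  \int_0^infty K(tau) dtau = A / c.
R0 = max(abs(np.linalg.eigvals(A / c)))

# r solves: spectral radius of \int_0^infty e^{-lambda tau} K(tau) dtau
#         = rho(A) / (c + lambda) = 1,  hence  r = rho(A) - c.
r = max(abs(np.linalg.eigvals(A))) - c

# The threshold property of the text, for this kernel:
assert np.sign(R0 - 1.0) == np.sign(r)
```

For this A the dominant eigenvalue is 0.7, so R_0 = 0.7/0.6 > 1 and r = 0.1 > 0; shrinking A pushes both below their thresholds together, exactly as the monotonicity argument in the text predicts.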
In the present context, these consist of +− links. In Leung et al. (2015) the spotlight is on the + side of the link. The present bookkeeping scheme focuses on x, so on − binding sites. So now the spotlight is on the − side of the link. The difference is just a matter of perspective. A key point, however, is that after transmission the link disappears from the x stage. This forces us to formulate the interpretation in terms of reproduction opportunities rather than reproductions. (Note that, in traditional epidemiological models involving the random mixing assumption, contacts between individuals are instantaneous, so there are no −+ links or 'reproduction opportunities' in the above sense.) We distinguish two birth-types of −+ links, according to the way they originate:

Type 0: the −+ link was formed when a − binding site and a + binding site linked up
Type 1: the −+ link is a transformed −− link (one of the two owners got infected by one of its other partners)

The relevant difference is the age distribution of the − binding site at the 'birth' of the −+ link (see Fig. 7):
- for type 0 this distribution has density π_0, since the − binding site was free until that moment
- for type 1 this distribution has density π_1, since the − binding site was (and remains) occupied

[Fig. 8: an example of a −−+ configuration: u is one of the k_1 − partners of the 'middle' − individual v in state (−, k_1, k_2, k_3), and w is one of the k_2 + partners of v.]

So the density of the age distribution of the − binding site at birth depends on the birth-type, making it necessary to distinguish between the two birth-types 0 and 1, while in case II there is no need to make this distinction. In the nonlinear setting, the total rate in the population at which −+ links of type 0 are formed is equal to ρ F_− Σ_k k_0 P_(+,k) = ρ F_− n F_+ (note the −+ asymmetry here, which is in preparation for the linearization).
The rate at which type 1 −+ links are formed is equal to β Σ_k k_1 k_2 P_(−,k). Indeed, the expected number of free infectious binding sites in the population is Σ_k k_0 P_(+,k), and the rate at which a free and infectious binding site acquires a susceptible partner is ρ F_−. The expected number of −−+ configurations per 'middle' − individual is Σ_k k_1 k_2 P_(−,k) (see also Fig. 8), and the rate of transmission is β. Rescaling (4.32) yields a system of renewal equations (4.34). (Again we note that each of these four integrals can be evaluated explicitly.) We now explain how (4.34) can be interpreted in terms of reproduction opportunities of types 0 and 1. A −+ link has no 'descendants' when transmission does not occur. When transmission occurs, it has at that very moment descendants of type 1, because the 'other' partners of the owner u of the − binding site then all of a sudden are connected to a + individual. In addition, it has descendants of type 0 when empty binding sites of u get occupied (necessarily by a − partner, since we consider the initial phase when + individuals are rare). Note that we should follow all binding sites of u until u either dies or becomes removed, since occupied binding sites may become free, occupied again, etcetera. We now compute the expected number of descendants of either type for a −+ link, given that the owner u of the − binding site has age a at the birth of the −+ link. The force of infection on u along the link equals β as long as
- the + partner is alive and infectious
- separation did not occur
- u is alive and not yet infected

Hence the probability per unit of time that u is infected at age a + τ is given by βe^{−(μ+γ+σ+μ+β)τ}.

The infectious binding site perspective

In Leung et al. (2015) the focus was on infectious binding sites. As exhibited by the densities π_0 and π_1 of the age-distribution of − individuals in the two birth-types of −+ links, it matters whether a newly created −+ link is of type 0 or type 1.
To take this into account, in Leung et al. (2015), we kept track of the number of partners of susceptible partners of infectious binding sites. This led to the reduction to a 2 × 2 next-generation-matrix involving mean times spent with a susceptible partner with k partners, k = 1, . . . , n (in the form of the inverse of an n × n matrix). We were able to find an explicit expression for R_0, although it required quite a lot of work to deal with this n × n inverse matrix. The results in this paper teach us that, to take into account the birth-types of −+ links, we can also keep track of the age of susceptible partners rather than of partners of partners. While age can be anything from zero to infinity, it can only move forward in time, i.e. individuals can only grow older. The same 2 × 2 NGM is obtained in a much more straightforward manner. Whether doing the bookkeeping of partners of partners or of the age of partners, a big downside of taking the infectious binding site perspective is that it takes quite some work to prove that the so-obtained R_0 is actually a threshold parameter for the stability of the disease free steady state of the p-level system (see Leung et al. 2015). This comes almost for free when taking the − perspective, as we did in this paper. Finally note that, whether we consider actual 'reproductions' (taking the + perspective) or 'reproduction opportunities' (taking the − perspective), both yield the exact same threshold parameter R_0, so in that sense it does not matter which perspective we take. However, while the dominant eigenvalue R_0 of the next-generation-matrix is the same with both perspectives, the matrices themselves are different. And so are the underlying interpretations.

Endemic steady state

Let E = (F_+, F_∗, Λ_−) be the vector of environmental variables. Note that we use consistency relation (4.19) to substitute the environmental variable F_∗ for F_−.
This choice of environmental variables leads to the disease free steady state corresponding to E = (0, 0, 0). Then we have a system of three renewal equations for E. Recall that x(a | t − a) is completely determined by E|_[t−a,t] via (4.7)-(4.9). Therefore this is a closed system of three renewal equations. In endemic equilibrium, the environmental variable E is constant (note that, if E is constant, then also the p-level fractions are constant, and the binding-site- and i-level probabilities are constant as functions of the time of birth t_b). So the endemic steady state is characterized as a solution to the fixed point problem (4.35), where now the symbols denote the values of constant functions. The fixed point problem always has a trivial solution given by the disease free steady state E = (0, 0, 0). Note that a solution E to (4.35) needs to have biological meaning. Therefore, we only consider solutions that satisfy F_+, F_∗ ≥ 0, 0 ≤ F_+ + F_∗ ≤ F, and 0 ≤ Λ_− ≤ n − 1.

Conjecture: If R_0 < 1, then the only solution to the fixed point problem is the trivial solution. If R_0 > 1, then there is a unique nontrivial solution.

Open problem: Prove (or disprove) the conjecture.

In Appendix 2 we elaborate on an unsuccessful attempt at a proof of the conjecture for the simpler case of an SI infection, rather than an SIR infection, obtained by setting γ = 0. This attempt tried to use Krasnoselskii's method (Krasnoselskii 1964; see also Hethcote and Thieme 1985). Note that the three-dimensional fixed point problem (4.35) provides a way to find the endemic steady state numerically. Furthermore, even though we did not manage to prove the conjecture, numerical investigations strongly suggest that all conditions for Krasnoselskii's method are satisfied and that the conjecture holds true.

Conclusions and discussion

In this paper we formulated binding site models for the spread of infection on networks. The binding sites serve as building blocks for individuals.
In fact we considered three different levels: (1) binding sites, (2) individuals, and (3) the population. On both the binding site and the individual level, we have a Markov chain description of the dynamics, where feedback from the population is captured by environmental variables. These environmental variables are population-level quantities. By lifting the individual level to the population level (where the model is deterministic), the feedback loop can be closed. In the end, this leads to a model description in terms of susceptible binding sites in case I and in terms of just environmental variables in cases II and III. The systematic model formulation leads, in all three cases, to only a few equations that determine the binding site, individual, and population dynamics. Moreover, from these equations we derive the epidemiological quantities of interest, i.e. R_0, r, the final size (in cases I and II) and the endemic steady state (in case III). Quite generally, understanding is enhanced by an elaboration of the interpretation of R_0 in a specific context. In cases I and II we have taken the obvious perspective of a + binding site to do so. But in case III, cf. Sect. 4.3, we reasoned in terms of 'reproduction opportunities'. These consist of +− links. From these links we took the − perspective. Somewhat surprisingly, this turned out to lead quickly and efficiently to a simple interpretation. Moreover, the derivation of R_0 follows from the system of equations in a natural manner. One can adopt the − perspective in cases I and II too, but there it does not change much. Yet we wouldn't be surprised if the − perspective turns out to be powerful in other dynamic network models of infectious disease transmission. Several open problems remain. Although we are able to implicitly characterize the final size in case II, we have not been able to make it more explicit.
We would like a characterization in the same spirit as (2.33) for case I, but we have not succeeded, and our optimism has subsided. A more useful characterization of the endemic steady state was given for case III as a three-dimensional fixed point problem. Unfortunately, we have not (yet) been able to prove the existence and uniqueness of a nontrivial fixed point for R_0 > 1 (and that no such fixed point exists for R_0 < 1), and therefore we posed this as a conjecture in Sect. 4.4. Of another nature are open problems related to the mean field at distance one assumption. While, in case I, the mean field at distance one assumption is proven to be exact in the appropriate large population limit of a stochastic SIR epidemic on a configuration network (Decreusefond et al. 2012; Barbour and Reinert 2013; Janson et al. 2014), it remains an open problem whether or not this also holds for the dynamic network case II (we conjecture it does). In the dynamic network case III, we know that the mean field at distance one assumption is really an approximation of the true dynamics, as we pointed out in the introduction of this paper (see also Leung et al. 2015). What we have not discussed is how good or bad an approximation it is. In particular, are there conditions for which the approximation works nicely, and can we understand intuitively the extent to which this assumption violates the truth? In both cases II and III, we ended the model formulation with renewal equations. In case II one can just as easily consider a system of ODE, and we presented this view also in Sect. 3.2.3. In case III, a system of ODE clearly becomes inconvenient. An ODE formulation in that case would require at least 1/3 (n + 1)(n + 2)(n + 3) variables, while, by considering a system of renewal equations, only three equations are needed.
More importantly, the system of renewal equations has the huge advantage that R_0 and r more or less immediately follow from the linearization of the system in the disease free steady state. The calculations are straightforward, the expressions are interpretable biologically, and the proof that R_0 and r are threshold parameters for the disease free steady state of the p-level system comes more or less for free. By distinguishing the three different levels, and formulating the model on the binding site level, one can consider several generalizations (see also Leung et al. 2012, 2015 for a discussion). In principle, any generalization that maintains the (conditional) independence assumption for binding sites of an individual fits within this framework. One can think of generalizations concerning the network or generalizations concerning the infectious disease. For the infectious disease, one can easily take any compartmental model such as SIR, SEIR, SI, SI_1I_2 (as long as infected individuals cannot return to the susceptible class within their lifetime). The main difference is in the different states that a binding site can be in. Generalizations of the network that one can think of are (i) a heterosexual population rather than a homosexual population, (ii) allowing for different n in the population, i.e. letting n be a random variable (which we already considered in the static network case in Sect. 2.5), (iii) allowing for multiple types of binding sites, e.g. binding sites for casual and steady partnerships, and combinations of the three. One can formulate models incorporating these generalizations by following the five steps described in Sect. 2.2. The main added difficulty is in the bookkeeping, which becomes more involved. But in terms of the characterization of R_0 and the endemic steady state, mathematically speaking the situation does not become more complex. Generalization (ii) is in a sense different from the other generalizations.
At first sight this generalization seems rather straightforward: one only needs to average over n in the right way. But things are more subtle than they seem. One needs to also take into account the partnership capacity of partners in the bookkeeping. In particular, the mean field at distance one assumption is expressed in environmental variables Λ_−(n) that can be interpreted as the expected number of infectious partners of a susceptible partner with partnership capacity n of a susceptible individual. However tempting it is, we cannot simply assume that we can average over all possible partnership capacities of susceptible partners, i.e. we cannot average over these terms Λ_−(n) and consider only one averaged term Λ̄_− (at least in the static network case such an assumption would yield a system different from the Volz equations (Volz 2008)). Our modelling framework is powerful in the way that such subtleties are exposed. In particular, the systematic approach that connects the binding site level, i-level, and p-level yields better insight into the ramifications that generalizations have at the different levels. Finally, in the current framework, and as usual in the literature, demographic turnover as considered in case III takes the individual's lifetime to be exponentially distributed. This assumption is mainly for mathematical convenience and is not realistic for many populations. We believe that it is possible to relax the assumption on the age distribution to consider more general survival functions. In that case, lifting the i-level to the p-level changes, and one needs to take into account the age of partners (but hopefully this may be done by simply averaging in the right way). Moreover, in the current framework, disease does not impact mortality. In the context of HIV, disease-related mortality is certainly very relevant. We believe that the framework presented in this paper provides a way to incorporate this by means of the infectious y binding sites.
While these generalizations relating to the demographic process are less straightforward to implement than the ones described in the previous paragraph, the current framework provides an excellent starting point.

Linearization of an epidemic system in the disease free steady state leads to a linear system that leaves a cone, characterized by positivity, invariant. Perron-Frobenius theory, or its infinite dimensional Krein-Rutman variant, yields the existence of a simple eigenvalue r such that (i) the corresponding eigenvector is positive, and (ii) Re λ < r for all eigenvalues λ ≠ r. The theory of stable and unstable manifolds yields a nonlinear analogue: the nonlinear system has exactly one orbit that is tangent to the eigenvector corresponding to the eigenvalue r. If r > 0 then this orbit belongs to the unstable manifold and tends to the disease free steady state for t → −∞. If r < 0 then the orbit belongs to the stable manifold and tends to the disease free steady state for t → +∞. Our interest is in the case r > 0. Note that one orbit of an autonomous dynamical system corresponds to a family of solutions that are translates of each other. See Diekmann (1977) for an early example of this type of result (but note that the proof in that paper has a flaw; see Diekmann and van Gils (1984, Sect. 7) for a flawless proof). These ideas apply directly to the three-dimensional ODE system (2.5) in case I. For the scalar renewal Eq. (2.30) we can refer to Sect. 1 of Diekmann and van Gils (1984), provided that we are willing to assume that F has compact support. For the ODE system of case II there exists an eigenvalue zero (corresponding to conservation of binding sites). This eigenvalue zero creates havoc. Presumably, the difficulties can be overcome by the introduction of a tailor-made cone, but we did not elaborate this in all required detail. The alternative is to consider the scalar renewal Eq. (3.13) for F_+ and to combine ideas from Diekmann et al.
(2007) with theory developed in Diekmann and Gyllenberg (2012). This combination should, we think, also cover the system of renewal Eqs. (4.20)-(4.22) for case III.

Appendix 2: Endemic steady state: unsuccessful attempt at a proof

We explain our attempt to prove the conjecture of Sect. 4.4 about the existence and uniqueness of solutions to the fixed point problem (4.35) for the simpler case of an SI infection rather than an SIR infection (set γ = 0). We only need to consider two environmental variables, rather than three, as we will explain. This attempt to prove the conjecture uses the sublinearity method of Krasnoselskii (1964) (see also Hethcote and Thieme 1985), the idea of which, for one dimension, is represented in Fig. 11. The x-dynamics are now given by (5.1) with boundary condition (5.2). As before in Sect. 4, x(a | t_b) is completely determined by F_+ and Λ_− via (5.1)-(5.2). In particular, there are now only two environmental variables, F_+ and Λ_−. These environmental variables satisfy renewal equations. Writing the right hand sides of these renewal equations as a map G, we obtain a fixed point problem (5.3) for the environmental variables F_+ and Λ_−. Note that in endemic equilibrium the environment is constant, i.e. F_+(t) = F̄_+, Λ_−(t) = Λ̄_−. Therefore x no longer depends on the time of birth. In what follows we write x = x(a). The fixed point problem (5.3) can be related to R_0 by considering the linearization of the right hand side of (5.3) in the disease free steady state (F_+, Λ_−) = (0, 0). Indeed, the linearization DG(0, 0) has dominant eigenvalue R_0. Monotonicity and sublinearity of G_1 in both variables F_+ and Λ_− are easily proven. One can show that the derivatives of x_0, x_1, and x_1 + x_2 with respect to F_+ and Λ_− are nonpositive, while the mixed second order derivatives are all nonnegative. Then one can easily prove that the derivatives D_i G_1(F_+, Λ_−) ≥ 0, showing that G_1 is a monotonically increasing function of both F_+ and Λ_−.
Sublinearity can be proven by showing that the function f(t) = G_1(t(F_+, Λ_−)) − t G_1(F_+, Λ_−) satisfies f(t) > 0 for 0 < t < 1. We work out only the proof that D_1 G_1(F_+, Λ_−) ≥ 0; the derivative of G_1 with respect to F_+ can be written as an explicit integral.

Remark 4: The variable x_2 is not necessarily monotone in F_+ or Λ_−. One can find parameter values for which ∂x_2/∂E(a), E = F_+, Λ_−, is neither nonpositive nor nonnegative as a function of a. Note that the feedback function G_2 for Λ_− involves x_2. The arguments to prove monotonicity and sublinearity do not seem to work for G_2. Numerical investigations strongly suggest that G_2 is indeed monotonically increasing as a function of both F_+ and Λ_−, as well as sublinear. So far, we have not been able to provide a proof. Nevertheless, once one shows that both G_1 and G_2 are monotonically increasing functions of the environmental variables F_+ and Λ_− and sublinear, Krasnoselskii's method provides a proof that for R_0 < 1 only the trivial solution exists and for R_0 > 1 there exists a unique nontrivial solution to (5.3).
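To see how monotonicity plus sublinearity forces a unique nontrivial fixed point, a one-dimensional toy map can be iterated numerically. The map G(x) = R_0 x/(1 + x) below is purely hypothetical (it is not the actual G of the appendix); it is increasing, sublinear, and has G'(0) = R_0, which is all the Krasnoselskii argument uses:

```python
# One-dimensional toy illustration of the Krasnoselskii-style argument.
# G(x) = R0 * x / (1 + x) is monotone increasing and sublinear
# (G(t*x) > t*G(x) for 0 < t < 1, x > 0), with G'(0) = R0.
# For R0 > 1 the unique nontrivial fixed point is x* = R0 - 1;
# for R0 <= 1 only the trivial fixed point x = 0 remains.

def G(x, R0):
    return R0 * x / (1.0 + x)

def fixed_point(R0, x0=1.0, iters=200):
    """Iterate x -> G(x, R0) starting from x0 > 0."""
    x = x0
    for _ in range(iters):
        x = G(x, R0)
    return x

# Supercritical case: the iteration converges to R0 - 1 = 1.5.
assert abs(fixed_point(2.5) - 1.5) < 1e-6
# Subcritical case: the iteration collapses to the trivial fixed point 0.
assert fixed_point(0.8) < 1e-6
# Sublinearity at a sample point: G(t*x) > t*G(x).
t, x, R0 = 0.5, 2.0, 2.5
assert G(t * x, R0) > t * G(x, R0)
```

The conjecture of Sect. 4.4 amounts to the statement that the actual (two- or three-dimensional) map G behaves like this toy: sublinearity excludes a second nontrivial crossing of the diagonal.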
Jet Vetoes Interfering with H → WW

Far off-shell Higgs production in $H \rightarrow WW, ZZ$ is a particularly powerful probe of Higgs properties, allowing one to disentangle Higgs width and coupling information unavailable in on-shell rate measurements. These measurements require an understanding of the cross section in the far off-shell region in the presence of realistic experimental cuts. We analytically study the effect of a $p_T$ jet veto on far off-shell cross sections, including signal-background interference, by utilizing hard functions in the soft collinear effective theory that are differential in the decay products of the $W/Z$. Summing large logarithms of $\sqrt{\hat s}/p_T^{veto}$, we find that the jet veto induces a strong dependence on the partonic centre of mass energy, $\sqrt{\hat s}$, and modifies distributions in $\sqrt{\hat s}$ or $M_T$. The example of $gg \rightarrow H \rightarrow WW$ is used to demonstrate these effects at next-to-leading-log order. We also discuss the importance of jet vetoes and jet binning for the recent program to extract Higgs couplings and widths from far off-shell cross sections.

Introduction

With the recent discovery of a boson resembling a light Standard Model (SM) Higgs [1][2][3][4][5], a large program has begun to study in detail the properties of the observed particle. Of fundamental interest are the couplings to SM particles and the total width of the observed boson, which is a sensitive probe of BSM physics [35][36][37][38][39][40][41]. Most studies have focused on the extraction of Higgs properties from on-shell cross sections. In this case, the effect of jet vetoes and jet binning, which is required experimentally in many channels to reduce backgrounds, has been well studied theoretically [42][43][44][45][46][47][48][49][50].
A jet veto, typically defined by requiring that there are no jets with $p_T \geq p_T^{veto}$, introduces large logarithms, $\log(m_H/p_T^{veto})$, potentially invalidating the perturbative expansion and requiring resummation for precise theoretical predictions. In this paper, we analytically study the effect of an exclusive jet $p_T$-veto on off-shell particle production, resumming logarithms of $\sqrt{\hat s}/p_T^{veto}$, where $\sqrt{\hat s}$ is the invariant mass of the off-shell particle or, more precisely, the invariant mass of the leptonic final state. We use $gg \rightarrow H \rightarrow WW$ as an example to demonstrate these effects, although the formalism applies similarly to $gg \rightarrow H \rightarrow ZZ$ if a jet veto is imposed. We find that the off-shell cross section is significantly suppressed by a jet veto, and that the suppression has a strong dependence on $\sqrt{\hat s}$. This results in a modification of differential distributions in $\sqrt{\hat s}$, or in any transverse mass variable in the case that the invariant mass cannot be fully reconstructed. The jet veto also has an interesting interplay with signal-background interference effects, which typically contribute over a large range of $\sqrt{\hat s}$. We use two cases, $m_H = 126$ GeV and $m_H = 600$ GeV, to demonstrate the effect of the jet veto on the signal-background interference in $gg \rightarrow H \rightarrow WW$. There are several reasons why it is important to have a thorough understanding of the far off-shell region in Higgs production, and of the impact of a jet $p_T$ veto on this region. As has been emphasized in a number of recent papers [15,[51][52][53][54], the separate extraction of the Higgs couplings and total width is not possible using only rate measurements for which the narrow width approximation (NWA) applies. In the NWA the cross section depends on the couplings and the width in a combination which is invariant under a simultaneous rescaling, preventing their individual extraction from rate measurements alone.
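The elided expressions presumably have the standard form. As a hedged sketch (the notation $g_p$, $g_d$ for the production and decay couplings is our assumption, not taken from the text):

```latex
% Sketch of the standard NWA degeneracy argument (notation assumed):
% the on-shell rate scales as
\sigma_{\text{NWA}} \;\propto\; \frac{g_p^2\, g_d^2}{\Gamma_H},
% which is invariant under the simultaneous rescaling
g_p^2 \to \xi\, g_p^2, \qquad g_d^2 \to \xi\, g_d^2, \qquad \Gamma_H \to \xi^2\, \Gamma_H .
```

Any on-shell rate measurement therefore constrains only the combination $g_p^2 g_d^2/\Gamma_H$, not the couplings and the width separately.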
The direct measurement of the width of the observed Higgs-like particle, expected to be close to its SM value of 4 MeV, is difficult at the LHC, but is of fundamental interest as a window to new physics [35][36][37][38][39][40][41]. It is also important for model-independent measurements of the Higgs couplings. Proposals to measure the Higgs width include those that rely on assumptions on the nature of electroweak symmetry breaking [15], direct searches for invisible Higgs decays [13,[55][56][57][58], and a proposed measurement of the mass shift in $H \to \gamma\gamma$ relative to $H \to ZZ \to 4\ell$ caused by interference [51]. More recently, it has been proposed [52][53][54] that the Higgs width can be bounded by considering the far off-shell production of the Higgs in decays to massive vector bosons. In this region there is a contribution from signal-background interference [59][60][61][62], and from far off-shell Higgs production [63][64][65]. Far off-shell, the Higgs propagator is independent of $\Gamma_H$, giving rise to contributions to the total cross section from the signal-background interference and the off-shell cross section that scale differently with the couplings. The method proposed in [52] takes advantage of the fact that these components of the cross section scale differently than the NWA cross section. A measurement of the off-shell and interference cross sections then allows one to directly measure, or bound, the total Higgs width. This method is not completely model independent; indeed, some of its limitations were recently discussed in [66], along with a specific new physics model which decorrelated the on-shell and off-shell cross sections, evading the technique. However, interpreted correctly, this technique places restrictions on the Higgs width in many models of BSM physics.
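The scalings in question are the standard ones; in the same hedged notation as for the NWA rate (production and decay couplings $g_i$, $g_f$, our labels), the $\Gamma_H$-independent far off-shell contributions behave as:

```latex
\sigma_{\rm interference} \;\propto\; g_i\, g_f\,,
\qquad
\sigma_{\rm off\text{-}shell} \;\propto\; g_i^2\, g_f^2\,,
```

so that under the NWA-preserving rescaling $g^2 \to \xi g^2$, $\Gamma_H \to \xi^2 \Gamma_H$ they grow as $\xi$ and $\xi^2$ respectively, breaking the degeneracy between couplings and width.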
The study of the off-shell cross section as a means to bound the Higgs width was first discussed in the $H \to ZZ \to 4\ell$ channel [52,53], where the ability to fully reconstruct the invariant mass of the decay products allows for an easy separation of the on-shell and off-shell contributions. Recently, CMS has performed a measurement following this strategy and obtained a bound of $\Gamma_H \leq 4.2\,\Gamma_H^{SM}$ [67]. The method was extended in [54] to the $gg \to H \to WW \to \ell\nu\ell\nu$ channel. The $WW$ channel has the advantage that the $2W$ threshold is closer than the $2Z$ threshold relevant for $H \to ZZ$, as well as a higher branching ratio to leptons and a higher total cross section. It does, however, also have the disadvantage of large backgrounds, which necessitate the use of jet vetoes, as well as final-state neutrinos, which prevent the reconstruction of the invariant mass. To get around the latter issue one can exploit the transverse mass variable, which has a kinematic edge at $M_T = m_H$ for the signal. This variable was shown to be effective in separating the region where the off-shell and interference terms are sizeable, namely the high-$M_T$ region, from the low-$M_T$ region where on-shell production dominates, allowing for the extraction of a bound on the total Higgs width. Although the experimental uncertainties are currently large in the high-$M_T$ region, the authors estimate that with a reduction in the background uncertainty to 10%, the $WW$ channel could be used to place a bound on the Higgs width competitive with, and complementary to, the bound from the $H \to ZZ \to 4\ell$ channel. They therefore suggest a full experimental analysis focusing on the high-$M_T$ region of the $WW$ channel. More generally, it was proposed in [68] that a similar method can also be used to probe couplings to heavy beyond-the-Standard-Model states.
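The transverse mass referred to here is conventionally built from the dilepton system and the missing transverse momentum; a minimal sketch, assuming the standard definition (the function name and inputs are ours, not taken from the paper):

```python
import math

def transverse_mass(pt_ll, m_ll, pt_miss):
    """Transverse mass M_T from the dilepton transverse momentum vector
    pt_ll = (px, py), the dilepton invariant mass m_ll, and the missing
    transverse momentum vector pt_miss = (px, py)."""
    # Dilepton transverse energy includes the dilepton invariant mass
    et_ll = math.sqrt(pt_ll[0]**2 + pt_ll[1]**2 + m_ll**2)
    # Neutrino system treated as massless
    et_miss = math.sqrt(pt_miss[0]**2 + pt_miss[1]**2)
    # Vector sum of the two transverse momenta
    sx, sy = pt_ll[0] + pt_miss[0], pt_ll[1] + pt_miss[1]
    mt2 = (et_ll + et_miss)**2 - (sx**2 + sy**2)
    return math.sqrt(max(mt2, 0.0))

# Back-to-back configuration: the vector sum vanishes, so M_T = E_T^ll + E_T^miss
mt = transverse_mass((30.0, 0.0), 50.0, (-30.0, 0.0))
```

For on-shell signal events this variable has the kinematic edge at $M_T = m_H$ mentioned above, which is what separates the on-shell from the far off-shell region.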
Independent of bounding the Higgs width, the study of the off-shell cross section opens up a new way to probe Higgs properties, which is particularly interesting as it probes particles coupling to the Higgs through loops over a large range of energies. Further benefits of the measurement of the off-shell cross section for constraining the parity properties of the Higgs, as well as for bounding higher-dimensional operators, were also discussed in [66,69]. A full theoretical understanding of the far off-shell region, especially in the presence of realistic experimental cuts, is therefore well motivated to allow for a proper theoretical interpretation of the data, and of bounds on new physics. Indeed, the current limits on the Higgs width from the off-shell region are based on leading-order calculations combined with a parton shower. There has recently been progress on the calculation of the perturbative amplitudes required for an NLO description of the off-shell cross section, including signal-background interference, with the calculation of the two-loop master integrals with off-shell vector bosons [70,71]. However, one aspect that has not yet been addressed theoretically is the effect of jet vetoes, and more generally jet binning, on far off-shell cross sections, and on the signal-background interference. Jet vetoes and jet binning are used ubiquitously in LHC searches to reduce backgrounds. They are typically defined by constraining the $p_T$ of jets in the event. The $H \to WW$ channel is an example of such a search, where the exclusive zero-jet bin, defined by enforcing that all jets in the event satisfy $p_T < p_T^{veto}$, is used to reduce the large background from $t\bar t$ production. Indeed, the analysis of [54] used the exclusive $N_{jet} = 0$ bin in the large-$M_T$ region to estimate the bound on the Higgs width achievable in the $H \to WW$ channel.
Furthermore, the recent bound by CMS [67] on the Higgs width from the $H \to ZZ \to 2\ell 2\nu$ channel used jet bins to optimize sensitivity, splitting data into exclusive 0-jet and inclusive 1-jet samples, which were each analyzed and then combined to give the limit. The proper interpretation of the off-shell cross section measurements requires understanding, preferably analytically, the impact of the jet veto and jet binning procedures. As is well known, the jet veto introduces a low scale, typically $p_T^{veto} \sim 25{-}30$ GeV, into a problem which is otherwise characterized by the scale $Q$ of the hard collision. This causes large logarithms of the form $\alpha_s^n \log^m(Q/p_T^{veto})$, $m \leq 2n$, to appear in perturbation theory, forcing a reorganization of the perturbative expansion. Physically, these logarithms arise due to constraints placed on the radiation in the event, which prevent a complete cancellation of real and virtual infrared contributions. A resummation to all orders in $\alpha_s$ is then required to make precise predictions. For the leading logarithms this resummation can be implemented by a parton shower. This approach is, however, difficult to systematically improve, and does not allow for higher-order control of the logarithmic accuracy, or for a systematic analysis of theoretical uncertainties in the correlations between jet bins. An alternative approach, which allows for the analytic resummation of large logarithms appearing in the cross section, is to match to the soft collinear effective theory (SCET) [72][73][74][75][76], which provides an effective field theory description of the soft and collinear limits of QCD. In SCET, large logarithms can be resummed through renormalization group evolution to the desired accuracy, providing analytic control over the resummation. This framework also provides control over the theoretical uncertainties, including the proper treatment of correlations between jet bins [42,77,78].
The effect of jet vetoes on Higgs production in the on-shell region has attracted considerable theoretical interest [42-47, 49, 50]. For on-shell Higgs production, $Q \sim m_H$, and hence the resummation is of logarithms of the ratio $m_H/p_T^{veto}$. The use of a jet clustering algorithm in the experimental analyses complicates resummation and factorization [47,79], and leads to logarithms of the jet radius parameter [44,47,80,81]. Current state-of-the-art calculations achieve NNLL$'$+NNLO accuracy [49,50], along with the incorporation of the leading dependence on the jet radius, allowing for precise theoretical predictions in the presence of a jet veto. Such predictions are necessary for reliable extractions of Higgs couplings from rate measurements. Indeed, the exclusive zero-jet Higgs cross section is found to decrease sharply as the $p_T^{veto}$ scale is lowered. In this paper we use SCET to analytically study the effect of a jet veto on off-shell cross sections. In particular, we are interested in processes with contributions from a large range of $\hat s$, where $\sqrt{\hat s}$ is the partonic centre of mass energy. In Sec. 2, we present a factorization theorem allowing for the resummation of large logarithms of the form $\log(\sqrt{\hat s}/p_T^{veto})$ in the cross section for the production of a non-hadronic final state. Working to NLL order, and using canonical scales for simplicity, gives the expression of Eq. (1.6) [43], where $\sigma_0(p_T^{veto})$ is the exclusive zero-jet cross section. In this formula, $f_i$, $f_j$ are the parton distribution functions (PDFs) for species $i$, $j$, $M_{ij}$ is the hard matrix element, $\Phi$ is the leptonic phase space, and $E_{cm}$ is the hadronic centre of mass energy. $K^i_{NLL}$ is a Sudakov factor, defined explicitly in Sec. 2, which depends only on the identity of the incoming partons. The form of Eq. (1.6) shows that the effect of the jet veto can be captured independent of the hard underlying process, which enters into Eq. (1.6) only through $M$.
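The qualitative content of Eq. (1.6) can be illustrated with a toy fixed-coupling LL Sudakov in place of $K^i_{NLL}$; the exponent, the frozen coupling, and the numerical values below are illustrative assumptions, not the paper's NLL result:

```python
import math

CA, CF = 3.0, 4.0 / 3.0   # colour factors for gluons and quarks
ALPHA_S = 0.11            # frozen coupling, purely illustrative

def ll_sudakov(sqrt_s_hat, pt_veto, color_factor, alpha_s=ALPHA_S):
    """Toy LL Sudakov suppression of the exclusive 0-jet cross section:
    exp[-2 C_i (alpha_s/pi) ln^2(sqrt(s_hat)/pt_veto)], a schematic
    stand-in for the NLL factor K^i_NLL."""
    L = math.log(sqrt_s_hat / pt_veto)
    return math.exp(-2.0 * color_factor * alpha_s / math.pi * L * L)

# Suppression grows with sqrt(s_hat), and is stronger for gluons (CA > CF)
for roots in (126.0, 300.0, 600.0):
    print(roots, ll_sudakov(roots, 25.0, CA), ll_sudakov(roots, 25.0, CF))
```

Even this crude model reproduces the two features stressed in the text: the exclusive 0-jet cross section falls steeply as $\sqrt{\hat s}$ grows at fixed $p_T^{veto}$, and the fall-off depends only on the identity of the incoming partons, not on the hard matrix element.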
At higher logarithmic order a dependence on the jet algorithm is also introduced, but the ability to separate the effect of the jet veto from the particular hard matrix element using the techniques of factorization remains, and allows one to make general statements about the effect of the jet veto. The resummation of the large logarithms, $\log(\sqrt{\hat s}/p_T^{veto})$, introduced by the jet veto leads to a suppression of the exclusive zero-jet cross section, evident in Eq. (1.6) through the Sudakov factor, and familiar from the case of on-shell production. The interesting feature in the case of off-shell effects is that this suppression depends on $\sqrt{\hat s}$. For example, when considering off-shell Higgs production, or signal-background interference, which contribute over a large range of $\sqrt{\hat s}$, the jet veto re-weights contributions from different $\sqrt{\hat s}$ regions in a strongly $\sqrt{\hat s}$-dependent manner. In particular, this modifies differential distributions in $\sqrt{\hat s}$, or in any similar variable, such as $M_T$. This is of particular interest for the program to place bounds on the Higgs width using the off-shell cross section in channels which require a jet veto, as this procedure requires an accurate description of the shape of the differential cross section. Furthermore, the jet veto has an interesting effect on the signal-background interference, which often exhibits cancellations from regions widely separated in $\sqrt{\hat s}$. The study of these effects is the subject of this paper. Our outline is as follows. In Sec. 2 we review the factorization theorem for the exclusive zero-jet bin, with a jet veto on the $p_T$ of anti-$k_T$ jets, focussing on the dependence on $\sqrt{\hat s}$. Sec. 3 describes the generic effects of jet vetoes on off-shell production, including the dependence on the jet veto scale, the identity of the initial-state partons, and the hadronic centre of mass energy.
In particular, we show that off-shell production in the exclusive zero-jet bin is suppressed by a strongly $\sqrt{\hat s}$-dependent Sudakov factor, and comment on the corresponding enhancement of the inclusive 1-jet cross section. In Sec. 4 we perform a case study for the $gg \to H \to WW \to \ell\nu\ell\nu$ process, resumming to NLL accuracy the off-shell cross section including the signal-background interference. For the signal-background interference, we consider two Higgs masses, $m_H = 125$ GeV and $m_H = 600$ GeV, whose interference depends differently on $\sqrt{\hat s}$, to demonstrate different possible effects of the jet veto on the signal-background interference. Since $\sqrt{\hat s}$ is not experimentally reconstructible for $H \to WW$, in Sec. 4.5 we demonstrate the suppression as a function of $M_T$ caused by the jet-veto restriction. In Sec. 5 we discuss the effect of the jet veto and jet binning on the extraction of the Higgs width from the off-shell cross section in $H \to WW$ (commenting also on $H \to ZZ$). We conclude in Sec. 6.

Cross Sections with a Jet Veto: A Review

In this section we review the factorization theorem, in the SCET formalism, for $pp \to L + 0$ jets, where $L$ is a non-hadronic final state. We consider a jet veto defined by clustering an event using an anti-$k_T$ algorithm with jet radius $R$ to define jets, $J$, and imposing the constraint that $p_T^J < p_T^{veto}$ for all jets in the event. This is the definition of the jet veto currently used in experimental analyses, with the experimental value of $p_T^{veto}$ typically $25{-}30$ GeV, and $R \sim 0.4{-}0.5$.

Factorization Theorem

Following the notation of [50], the factorization theorem for $pp \to L + 0$ jets with a jet veto on $p_T$ can be computed in the framework of SCET.
For a hard process where $L$ has invariant mass $\sqrt{\hat s}$ (on-shell or off-shell), the cross section takes the factorized form of Eq. (2.1). In this formula, $\Phi$ denotes the leptonic phase space, $i$, $j$ denote the initial partonic species, $H_{ij}$ is the hard function for a given partonic channel, $B_i$ are the beam functions which contain the PDFs, and $S_{ij}$ is the soft function, each of which will be reviewed shortly. Since this factorization theorem applies to the production of a color-singlet final state, we either have $i = j = g$, or $i = q$, $j = \bar q$. Eq. (2.1) is written as the sum of three terms. The first term in Eq. (2.1) contains the singular logarithmic terms, which dominate as $p_T^{veto} \to 0$, or, in the case of off-shell production that we are considering, as $\hat s \to \infty$ with $p_T^{veto}$ fixed. The second term, $\sigma_0^{Rsub}$, contains corrections that are polynomial in the jet radius parameter $R$, and $\sigma_0^{ns}$ contains non-singular terms which vanish as $p_T^{veto} \to 0$, and are suppressed relative to the singular terms when the ratio $p_T^{veto}/\sqrt{\hat s}$ is small. The factorization theorem allows each component of Eq. (2.1) to be calculated at its natural scale and evolved via renormalization group evolution (RGE) to a common scale, resumming the large logarithms of $p_T^{veto}/\sqrt{\hat s}$. For the case of a veto on the jet $p_T$, the factorization follows from SCET$_{II}$, where the RGE is in both the virtuality scale, $\mu$, and the rapidity scale, $\nu$ [82,83]. In this section, we will briefly summarize the components of the factorization theorem with a particular focus on their dependence on the underlying hard matrix element, the identity of the incoming partons, the jet algorithm, and the jet veto measurement. We will also review their RGE properties. Further details, including analytic expressions for the anomalous dimensions, can be found in [42,47,50], and references therein.

Soft Function

The soft function $S_{ij}(p_T^{veto}, R, \mu, \nu)$ describes the soft radiation from the incoming partons $i$, $j$, which are either both gluons or both quarks.
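Schematically, and consistent with the description given here, the singular part of the 0-jet factorization takes the form below (arguments and convolutions abbreviated; this is an indicative sketch, not necessarily the paper's exact Eq. (2.1)):

```latex
\frac{d\sigma_0(p_T^{veto})}{d\Phi}
\;=\; H_{ij}(\Phi,\mu)\,
B_i(p_T^{veto},\mu,\nu)\, B_j(p_T^{veto},\mu,\nu)\,
S_{ij}(p_T^{veto},R,\mu,\nu)
\;+\; \sigma_0^{Rsub} \;+\; \sigma_0^{ns}\,.
```

Each factor carries its own natural scale, which is what permits the RGE resummation of $\log(p_T^{veto}/\sqrt{\hat s})$ described in the text.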
It is defined as a matrix element of soft Wilson lines along the beam directions, with a measurement operator, $M_{jet}$, which enforces the jet veto condition. The soft function depends only on the identity of the incoming partons, through the representation of the Wilson lines, which has not been made explicit. It also depends on the definition of the jet veto through the measurement function. The soft function is naturally evaluated at the soft scales $\mu_S \sim p_T^{veto}$ and $\nu_S \sim p_T^{veto}$, and satisfies a multiplicative renormalization in both $\mu$ and $\nu$. Further details, including the solution of the RGE and expressions for the anomalous dimensions, are given in [50]. In the case of interest, where the jets are defined using a clustering algorithm with a finite $R$, the soft function also contains clustering logarithms from the clustering of correlated soft emissions, which first arise at NNLL. These appear in the cross section as logarithms of the jet radius, $\log(R)$, but are not resummed by the RGE. For experimentally used values of $R$, the first of these logarithms is large [44], while the leading $O(\alpha_s^3)$ term was recently calculated and found to be small [81]. We therefore treat these $\log(R)$ factors in fixed-order perturbation theory. We discuss the impact of these logarithms on our results in Sec. 4.3.

Beam Function

The beam function [84][85][86], $B_i$, describes the collinear initial-state radiation from an incoming parton, $i$, as well as its extraction from the colliding protons through a parton distribution function. The beam function depends only on the identity of the incoming parton $i$, and the measurement function. In the case of a $p_T^{veto}$, the beam function can be calculated perturbatively by matching onto the standard PDFs at the beam scales $\mu_B \sim p_T^{veto}$, $\nu_B \sim \sqrt{\hat s}$. The lowest-order matching coefficient is trivial, so that to leading order the beam function is simply the corresponding PDF, but evaluated at the beam scale $\mu_B \sim p_T^{veto}$.
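The matching onto the PDFs takes the standard convolution form; a schematic version consistent with the description above (our abbreviated notation):

```latex
B_i(x, p_T^{veto}, \mu_B, \nu_B)
= \sum_j \int_x^1 \frac{dz}{z}\,
\mathcal{I}_{ij}(z, p_T^{veto}, \mu_B, \nu_B)\,
f_j\!\left(\frac{x}{z}, \mu_B\right),
\qquad
\mathcal{I}^{(0)}_{ij}(z) = \delta_{ij}\,\delta(1-z)\,,
```

so that at lowest order $B_i(x) = f_i(x, \mu_B)$, i.e. the PDF evaluated at the beam scale.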
This was seen explicitly in the NLL expansion of Eq. (1.6). Higher-order matching coefficients involve splitting functions, allowing for a mixing between quarks and gluons. This matching procedure corresponds to the measurement of the proton at the scale $\mu_B \sim p_T^{veto}$ by the jet veto. Above the scale $\mu_B$, the beam function satisfies a multiplicative RGE in both virtuality, $\mu$, and rapidity, $\nu$, describing the evolution of an incoming jet for the off-shell parton of species $i$. Unlike the RGE for the PDFs, the RGE for the beam function leaves the identity and momentum fraction of the parton unchanged. The solution to the RGE resums the logarithmic series associated with the collinear radiation from the incoming parton. Further details and expressions for the anomalous dimensions are again given in [50]. As with the soft function, the beam function also contains logarithms of, and polynomial dependence on, the jet radius, $R$, from the clustering of collinear emissions. These logarithms can be numerically significant, but are not resummed by the RGE. We again treat these terms in fixed-order perturbation theory.

Hard Function

The hard function $H_{ij}$ encodes the dependence of the singular term of Eq. (2.1) on the underlying hard partonic matrix element of the $pp \to L + 0$ jets process. It can be obtained by matching QCD onto an appropriate SCET operator at the scale $\sqrt{\hat s}$, giving a Wilson coefficient, $C_{ij}$. The Wilson coefficient satisfies a standard RGE in virtuality, allowing it to be evolved to the scale $\mu$. The hard function is then given by the square of the Wilson coefficient, where $Q$ denotes dependence on all variables associated with the final leptons as well as parameters like the top mass, and the Higgs and $W/Z$ masses and widths. The solution to the RGE for the hard function involves a Sudakov form factor, whose integrals involve the $\beta$-function and the anomalous dimensions for channel $i$, either quarks or gluons.
Explicit results for the cusp and regular anomalous dimensions, and for the functions in Eq. (2.10), can be found, for example, in Ref. [42]. Since we will be considering far off-shell production, and including signal-background interference effects, which have not been discussed in SCET factorization theorems before, we will discuss in more detail the definition of the hard function for the specific case of $gg \to \ell\nu\ell\nu$ in Sec. 4.1. The beam and soft functions are universal, depending only on the given measurement and the identity of the incoming partons; it is the hard function that needs to be calculated separately for different processes. The beam and soft functions are known to NNLL for the case of a jet veto defined using a cut on $p_T$, and it is the hard coefficient that prevents resummation to NNLL for several cases of interest. In particular, since we are interested here in the case of off-shell production, one needs the full top-mass dependence of the loops, significantly complicating the computation. Indeed, for the case of signal-background interference for $gg \to H \to WW \to \ell\nu\ell\nu$, only the leading-order hard function is known [59], while for direct gluon-fusion Higgs production, analytic results exist for the NLO virtual corrections including quark mass dependence [87]. This restricts our predictions to NLL accuracy for signal-background interference for $gg \to H \to WW \to \ell\nu\ell\nu$.

Non-Singular Terms

The non-singular term $\sigma_0^{ns}(p_T^{veto}, R, \mu_{ns})$ is an additive correction to the factorization theorem, containing terms that vanish as $p_T^{veto} \to 0$. This term scales as $p_T^{veto}/\sqrt{\hat s}$. The non-singular piece is important when $p_T^{veto}$ is of the same order as $\sqrt{\hat s}$, where both singular and non-singular pieces contribute significantly to the cross section. In this paper, we will be focusing on the effect of a jet veto on far off-shell effects, and we will therefore always be considering the case $p_T^{veto} \ll \sqrt{\hat s}$.
We will therefore not discuss the non-singular pieces of the cross section, and focus on the singular contributions.

Uncorrelated Emissions

Beginning with two emissions, the jet algorithm can cluster uncorrelated emissions from the soft and collinear sectors [44,45,47]. This produces terms proportional to powers of $R^2$, which can formally be treated as power corrections for $R \ll 1$, and are included in $\sigma_0^{Rsub}$. For the jet radii of 0.4-0.5 used by the experimental collaborations, these effects are numerically very small, especially compared to the $\log R$ terms from correlated emissions. We make use of the expressions from [50].

Expansion to NLL

It is useful to consider the factorization theorem at NLL order with canonical scale choices, to see the main factors that control its behaviour. The result at NLL was first given in [43] for on-shell production with $\sqrt{\hat s} = m_H$. Allowing for off-shell production, and using canonical scales, the cross section with a $p_T^{veto}$ cut is given at NLL by Eq. (2.11), where $\Phi$ are the phase space variables for the final-state leptonic decay products. In this equation, $f_i$ and $f_j$ are the appropriate PDFs; for example, they are both $f_g$ for the case of gluon fusion, since direct contributions from the quark PDFs do not enter until NNLL order. For a partonic centre of mass energy $\sqrt{\hat s}$, Eq. (2.11) resums to NLL accuracy the logarithms of $\sqrt{\hat s}/p_T^{veto}$. Eq. (2.11) does not include the non-singular contribution to the cross section. As discussed previously, in the far off-shell region, $p_T^{veto} \ll \sqrt{\hat s}$, and the singular contributions to the cross section dominate. It should also be emphasized that at NLL one is not sensitive to the jet algorithm or jet radius, as at $O(\alpha_s)$ there is only a single soft or collinear emission. Although the $R$ dependence is important for accurate numerical predictions, it does not affect the qualitative behaviour of the jet veto. The $R$ dependence appears in the factorization theorem at NNLL.
The only dependence on the hard partonic process in Eq. (2.11) is in the matrix element $M_{ij}(\hat s)$. The Sudakov form factor $K_i$ given in Eq. (2.9) arises from restrictions on real radiation in QCD, and depends only on the identity of the incoming partons. At NLL the Sudakov factor is given by Eq. (2.12), where $r = \alpha_s(\mu)/\alpha_s(\mu_H)$. The form in Eq. (2.12) allows for the use of complex scales, such as $\mu_H = -i\sqrt{\hat s}$, to minimize the appearance of large $\pi^2$ factors in the hard function. On the other hand, with canonical scales we would take $\mu_H = \sqrt{\hat s}$. At LL order the terms with $\Gamma_1$, $\beta_1$, and $\gamma_0$ do not yet contribute, and the LL running coupling is used. There are two important features of the expression in Eq. (2.11) compared with the case of no jet veto. First, the PDFs are evaluated at the scale $\mu = p_T^{veto}$ instead of $\mu = \sqrt{\hat s}$. Secondly, the cross section is multiplied by a Sudakov factor, which depends on logarithms of the ratio $\sqrt{\hat s}/p_T^{veto}$. These have a strong impact on the cross section, which will be the focus of Sec. 3.

Jet Vetoes and Off-Shell Effects

In this section we will discuss quite generally the effect of jet vetoes on off-shell cross sections. We focus on the dependence on the identity of the initial-state partons, and the relation between the exclusive 0-jet and inclusive 1-jet bins. We conclude with a discussion of the dependence on the hadronic centre of mass energy. For simplicity, in this section we will use the NLL expansion of Eq. (2.11) with canonical scale choices. The NLL expansion demonstrates the essential features that persist at higher logarithmic order, and makes transparent how these effects depend on various parameters of interest. This serves the purpose of demonstrating the generic effects of jet vetoes, and their dependencies. In Sec. 4, we will perform a more detailed study for the specific case of $gg \to H \to WW$.
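For orientation, in the fixed-coupling limit the LL part of the Sudakov suppression reduces to the familiar double-logarithmic form (a schematic expression; the running-coupling structure of the full NLL factor is neglected here, and the overall normalization should be taken as indicative):

```latex
K_i^{\rm LL} \;\approx\;
\exp\!\left[-\frac{\alpha_s\, C_i}{2\pi}\,
\log^2\!\frac{\hat s}{(p_T^{veto})^2}\right],
\qquad C_g = C_A = 3\,,\quad C_q = C_F = \tfrac{4}{3}\,,
```

which already exhibits the two features emphasized in the text: the suppression grows with $\hat s$ at fixed $p_T^{veto}$, and is stronger for gluons than for quarks.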
Unlike on-shell effects, which contribute to the cross section over a small region in $\sqrt{\hat s}$, of order the width, off-shell effects, including signal-background interference and off-shell production, typically contribute over a large range of values of $\sqrt{\hat s}$. In this case the $\sqrt{\hat s}$ dependence of the jet veto suppression can produce interesting effects. In particular, it modifies differential distributions in $\sqrt{\hat s}$, or in any substitute such as $M_T$ in cases where the full invariant mass cannot be reconstructed, such as $H \to WW$. Furthermore, for signal-background interference, the $\sqrt{\hat s}$ dependence of the jet veto suppression can cause an enhancement or suppression of the interference relative to the on-shell contribution to the cross section, or enhance/suppress interference contributions with different signs relative to one another. With this motivation, we now study the $\sqrt{\hat s}$ dependence of the jet veto suppression of the exclusive zero-jet cross section using the NLL expression of Sec. 2.2. The benefit of the factorized expression is that this discussion can be carried out essentially independent of the matrix element prefactor $|M_{ij}|^2$. From Eq. (2.11), the NLL cross section is modified compared with the LO cross section only by the evaluation of the PDFs at the jet veto scale, and by the Sudakov factor, which is a function of $\sqrt{\hat s}$. To study the suppression due to the jet veto as a function of $\sqrt{\hat s}$, we will therefore consider the ratio of Eq. (3.1), whose numerator is the NLL exclusive zero-jet cross section. In Eq. (3.1), the cross section in the denominator is evaluated at LO, namely at the same order as the matrix element that appears in Eq. (2.11) for the NLL resummed cross section. When forming this combination, one could choose to evaluate the denominator at various orders, for example using the full NLO result calculated without the $p_T^{veto}$. Since NLO corrections are typically large, especially for gluon-initiated processes, this would typically decrease the above ratio.
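Consistent with this description (numerator resummed to NLL, denominator at LO), the ratio of Eq. (3.1) can be written schematically as:

```latex
E_0(\hat s) \;\equiv\;
\frac{d\sigma_0^{\rm NLL}(p_T^{veto})/d\hat s}
     {d\sigma^{\rm LO}/d\hat s}\,,
```

so that $E_0(\hat s)$ isolates the PDF scale change and the Sudakov factor, with the common matrix element cancelling between numerator and denominator.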
However, we have in mind an application to processes, such as signal-background interference in $gg \to \ell\nu\ell\nu$, for which the NLO corrections are not yet known, so that current calculations are restricted to LO results. In this case, we can incorporate the effect of the jet veto at NLL using Eq. (2.11), and the ratio of Eq. (3.1) will characterize the effect of the resummation compared to previous calculations in the literature [53,54,59]. This approach also has the benefit that it can be carried out independent of the particular matrix element, as the NLO corrections are clearly process dependent. However, all of the general features described in this section persist at NNLL resummation, as will be demonstrated in Sec. 4. As was mentioned previously, at NLL one does not have sensitivity to the jet radius $R$. While this dependence is important for precise predictions, it does not dominate the behaviour of the jet-vetoed cross section as a function of $\hat s$, or modify in any way the conclusions of this section. For numerical calculations in this section we use the NLO PDF fit of Martin, Stirling, Thorne and Watt [88] with $\alpha_s(m_Z) = 0.12018$. Unless otherwise stated, we use a hadronic centre of mass energy of $E_{cm} = 8$ TeV. In Sec. 3.3 we discuss the dependence on $E_{cm}$, comparing behaviour at 8, 13, and 100 TeV. In Fig. 1a we demonstrate the effect of the jet veto for a gluon-gluon initial state, as a function of $\sqrt{\hat s}$. (Note that a similar effect was considered in [42], which performed resummation for gluon-fusion Higgs production with a veto on the global beam thrust event shape, as a function of the Higgs mass.) We plot the ratio $E_0(\hat s)/E_0(m_H^2)$, for $m_H = 126$ GeV. We have chosen to plot this particular ratio to focus on the $\hat s$ dependence, rather than the impact that the jet veto has on the on-shell Higgs production cross section, which is given by $E_0(m_H^2)$. The ratio $E_0(\hat s)/E_0(m_H^2)$ describes the impact of the jet veto for off-shell effects relative to its impact for on-shell production. It will also be useful when discussing the impact on Higgs width bounds in Sec. 5.

Figure 1: The ratio $E_0(\hat s)/E_0(m_H^2)$ as a function of $\sqrt{\hat s}$ [GeV], for both a gluon-gluon initiated process in (a), and a quark-antiquark initiated process in (b). In both cases we consider $p_T^{veto} = 20, 30$ GeV. The jet veto causes an $\hat s$-dependent suppression, which is significantly stronger for initial-state gluons than initial-state quarks, due to the larger colour factor appearing in the Sudakov.

Fig. 1a shows that the suppression of the exclusive zero-jet cross section has a strong dependence on $\hat s$. The comparison between $p_T^{veto} = 20$ GeV and $p_T^{veto} = 30$ GeV shows that a lower cut on the $p_T$ of emissions causes a more rapid suppression, as expected. Fig. 1a demonstrates that at scales of $\sqrt{\hat s} \sim 500$ GeV, the suppression relative to that for on-shell production is of order 50%.

Quarks vs. Gluons

We now consider the difference in the jet veto suppression for quark-initiated and gluon-initiated processes. This is relevant in the case where multiple partonic channels contribute to a given process, or if the signal and background processes are predominantly from different partonic channels. This is the case for both $gg \to H \to WW, ZZ$, which have large $q\bar q$-initiated backgrounds. The factorization theorem in Eq. (2.1) allows one to easily study the dependence of the jet veto suppression on the identity of the incoming partons, which is carried by the hard, beam, and soft functions. The difference in the suppression arises from the differences in the anomalous dimensions, where for the 0-jet cross section they involve $C_F$ for quarks, and $C_A$ for gluons. The clustering and correlation logarithms are also multiplied by the colour factors $C_F$ and $C_A$. This phenomenon is similar to quark vs. gluon discrimination for jets [89], where the same factors of $C_F$ and $C_A$ appear in the Sudakov and allow one to discriminate between quark and gluon jets.
However, in this case, the discrimination is between incoming quarks and gluons. Comparing Fig. 1a and Fig. 1b, we see a significant difference between a gluon-gluon and a quark-antiquark initial state. The jet veto suppression increases more rapidly with $\hat s$ in the case of gluon-fusion induced processes than for quark-antiquark induced processes, the suppression due to the jet veto being approximately twice as large for the case of gluon fusion as for quark-antiquark fusion, for the values considered in Fig. 1. (Note that for the quark-antiquark initial state, we have used the up quark for concreteness; however, the result is approximately independent of flavour for the light quarks, with the suppression being dominated by the flavour-independent Sudakov factor. A small dependence on flavour comes from the scale change in the PDF.) The effect of the jet veto is therefore of particular interest for gluon-initiated processes, such as Higgs production through gluon-gluon fusion, to be discussed in Sec. 4. This difference in the suppression is interesting for a proper analysis of the backgrounds for $H \to WW, ZZ$ in the off-shell region, and deserves further study, since one may wish to vary $p_T^{veto}$ as a function of $\sqrt{\hat s}$ or $M_T$.

Inclusive 1-Jet Cross Section

We have up to this point focused on the exclusive zero-jet cross section. However, since the total inclusive cross section is unaffected by the jet veto, the inclusive 1-jet cross section has the same logarithmic structure as the exclusive zero-jet cross section, and can be related to it by $\sigma_{\geq 1}(p_T^{veto}) = \sigma - \sigma_0(p_T^{veto})$. In this equation, $\sigma_{\geq 1}(p_T^{veto})$ is the inclusive 1-jet cross section, defined by requiring at least one jet with $p_T \geq p_T^{veto}$, $\sigma_0$ is the exclusive zero-jet cross section, and $\sigma$ is the inclusive cross section. This relation allows us to discuss the properties of the inclusive 1-jet bin as a function of $\hat s$ using the factorization theorem for the exclusive 0-jet cross section.
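The relation $\sigma_{\geq 1} = \sigma - \sigma_0$ can be sketched with a toy Sudakov; the 0-jet efficiency model below is an illustrative assumption, not the paper's NLL $E_0$:

```python
import math

ALPHA_S, CA = 0.11, 3.0   # frozen coupling and gluon colour factor, illustrative only

def e0(s_hat, pt_veto=30.0):
    """Toy 0-jet efficiency E_0(s_hat): a schematic fixed-coupling LL
    Sudakov standing in for the NLL result."""
    L = 0.5 * math.log(s_hat / pt_veto**2)
    return math.exp(-2.0 * CA * ALPHA_S / math.pi * L**2)

def e_ge1(s_hat, pt_veto=30.0):
    """Inclusive 1-jet fraction, from sigma_{>=1} = sigma - sigma_0."""
    return 1.0 - e0(s_hat, pt_veto)

# Migration from the 0-jet bin to the inclusive 1-jet bin grows with s_hat
for roots in (126.0, 300.0, 600.0):
    print(roots, e0(roots**2), e_ge1(roots**2))
```

Because the inclusive cross section is unaffected by the veto, the two fractions sum to one at every $\hat s$; the Sudakov suppression of the 0-jet bin therefore appears directly as an $\hat s$-dependent enhancement of the inclusive 1-jet bin.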
Of particular interest is the split of the total cross section between the exclusive zero-jet bin and the inclusive 1-jet bin, and the migration between the two bins as a function of ŝ. This relation also implies a correlation between the theory uncertainties from the resummation for the two jet bins, which is important for experimental analyses using jet binning [78]. In Fig. 2 we plot E_0(ŝ) and E_≥1(ŝ) as a function of ŝ for a gluon-gluon initial state with p_T^veto = 30 GeV. The behaviour in this plot is of course evident from Fig. 1, but it is interesting to interpret it in this fashion: as an ŝ-dependent migration between jet bins. Although our calculation is only for the inclusive 1-jet bin, the dominant increase will be in the exclusive 1-jet bin. This migration between the jet bins as a function of ŝ is important for the proper understanding of off-shell cross section predictions in the presence of jet vetoes.

Figure 2: The ratios E_0(ŝ), E_≥1(ŝ) for a gluon-gluon initial state at NLL, with p_T^veto = 30 GeV. There is a large migration from the exclusive 0-jet bin to the inclusive 1-jet bin as a function of ŝ. This phenomenon is important for understanding the impact of jet binning on off-shell cross sections.

For CMS's recent off-shell H → ZZ → 2l2ν analysis, ignoring the VBF category, the events were categorized into exclusive zero-jet and inclusive one-jet bins [67], both of which have high sensitivity due to the clean experimental signal. For H → WW, jet binning is also used, although the experimental sensitivity is largest in the 0-jet bin, where the backgrounds are minimized. The effect of the migration is therefore different in the two cases. For H → ZZ, since the backgrounds are easier to control, the jets that migrate from the exclusive 0-jet bin are captured in the inclusive 1-jet bin. Since both are used in the experiment, there is not a significant loss in analysis power.
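The bookkeeping behind this migration is simply σ = σ_0 + σ_≥1, so the two efficiencies sum to one. A minimal sketch (the cross section values are placeholders, not the paper's results):

```python
def jet_bin_split(sigma_incl, sigma_0):
    """Split the inclusive cross section into the exclusive 0-jet bin
    and the inclusive 1-jet bin: sigma_ge1 = sigma_incl - sigma_0.
    Returns the efficiencies (E0, E_ge1), which sum to one."""
    sigma_ge1 = sigma_incl - sigma_0
    return sigma_0 / sigma_incl, sigma_ge1 / sigma_incl

# Placeholder values, NOT the paper's numbers: a jet veto that retains
# 60% of the inclusive cross section at some value of s-hat.
e0, e_ge1 = jet_bin_split(10.0, 6.0)
print(e0, e_ge1)  # 0.6 0.4
```

Because both efficiencies are ratios of the same two underlying quantities, their theory uncertainties are anti-correlated: a shift that lowers E_0(ŝ) necessarily raises E_≥1(ŝ) by the same amount, which is the correlation relevant for jet-binned analyses.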
Accurate predictions for the two jet bins should still be used, and the correlations in the theory uncertainties due to resummation should still be treated properly. For the case of H → WW, where the jet veto plays a more essential role in removing backgrounds, the migration causes a loss in sensitivity. For example, the analysis of [54] used the exclusive zero-jet bin of H → WW to bound the Higgs width without a treatment of the ŝ dependence induced by the jet veto. This will be discussed further in Sec. 5. Calculations for the exclusive 1-jet and 2-jet bins are more difficult. Although NLL resummed results exist for the case p_T^jet ∼ √ŝ [46,90], the treatment of p_T^jet ≪ √ŝ is more involved [91]. The latter is the kinematic configuration of interest for far off-shell production.

Variation with E_cm

Here we comment briefly on the dependence of the exclusive zero-jet cross section on the hadronic centre-of-mass energy, E_cm. This is of interest as the LHC will resume at E_cm = 13 TeV in the near future, and Higgs coupling and width measurements are important benchmarks for future colliders at higher energies. Here we discuss only how the ŝ dependence of the jet veto suppression, the ratio of Eq. (3.1), varies with E_cm. Of course, with an increased E_cm one can more easily achieve higher ŝ, allowing for off-shell production over a larger range and magnifying the importance of off-shell effects. In Fig. 3 we compare the ratio E_0(ŝ)/E_0(m_H²) for E_cm = 8, 13, 100 TeV. As the value of E_cm is raised, the ŝ dependence of the jet veto suppression systematically increases. Although the effect is relatively small between 8 TeV and 13 TeV, it is significant at 100 TeV. A similar effect was discussed in [92], where the exclusive zero-jet fraction for on-shell Higgs production was observed to decrease with increasing E_cm.
Since the Sudakov factor is independent of E_cm, this difference arises because, as E_cm is increased, the PDFs are probed over a larger range of Bjorken x, including smaller x_a,b values. In the NLL factorization theorem of Eq. (2.11) the PDFs are evaluated at the scale p_T^veto instead of at the scale √ŝ. The impact of this change of scales in the PDFs depends on the x values probed, and causes an increasing suppression as E_cm is increased. For the majority of this paper we will restrict ourselves to E_cm = 8 TeV, although in Sec. 4.6 we will further discuss the effect of an increased E_cm.

gg → H → WW: A Case Study

In this section we use gg → H → WW to discuss the effect of an exclusive jet veto in more detail. H → WW is a particularly interesting example for demonstrating the √ŝ dependence of the jet veto suppression, since it has a sizeable contribution from far off-shell production [63,65] and, furthermore, has interference with continuum gg → WW → lνlν production, which contributes over a large range of √ŝ [59,60,62]. A jet veto is also required experimentally for this channel due to large backgrounds. For the signal-background interference, we will consider two different Higgs masses, m_H = 126 GeV and m_H = 600 GeV, whose interference depends differently on √ŝ; these therefore cover two interesting scenarios for the different effects that the jet veto can have. In Sec. 4.1 we discuss in detail the hard coefficients and the matching to SCET. Default parameters are given in Sec. 4.2. In Sec. 4.3 we use gg → H → WW → lνlν, which can be calculated to NLL and NNLL, to study the convergence in the off-shell region. The extension to NNLL allows us to study the effect of the finite radius of the jet veto. In Sec. 4.4 we show results for the NLL resummation of the signal-background interference. Although we are unable to go to NNLL without the NLO hard function for the interference, the results of Sec.
4.3 give us confidence that the NLL result captures the dominant effects imposed by the jet veto restriction. In Sec. 4.5 we consider the jet veto suppression in the exclusive zero-jet bin as a function of the experimental observable M_T.

Hard Function and Matching to SCET

In this section we discuss the hard function appearing in the SCET factorization theorem, which carries the dependence on the hard underlying process. This is discussed in some detail, as we will be considering signal-background interference, which has not previously been discussed in the language of SCET. It was shown in [59] that only two Feynman diagram topologies contribute to the process gg → ν_e e⁺ µ⁻ν̄_µ at LO, due to a cancellation between diagrams with an s-channel Z boson. The two contributing diagrams are the gluon-fusion Higgs diagram and a quark box diagram for the continuum production, both shown in Fig. 4. The gg → ν_e e⁺ µ⁻ν̄_µ cross section consists of Higgs production, continuum production, and the interference between the two diagrams. Although the interference contribution is small for on-shell Higgs production, it becomes important in the off-shell region. In the effective field theory formalism, these two diagrams are matched onto effective operators in SCET. It is convenient, both for understanding the interference and for comparing with fixed-order QCD calculations, to work in a helicity and colour operator basis in SCET [93,94]. For this process the colour structure is unique, as we are considering the production of a colour-singlet state from two gluons. We therefore focus on the helicity structure. The helicity of the outgoing leptons is fixed by the structure of the weak interactions, so we need only construct a helicity basis for the incoming gluons.
We write the amplitudes for the above diagrams as A_H^{λ1λ2} and A_C^{λ1λ2}, where the subscripts H, C denote the Higgs-mediated and continuum box-mediated diagrams respectively, and the superscripts denote the gluon helicities. In the following we will mostly suppress the lepton arguments, as their helicities are fixed, and focus on the gluon helicities. Since the SM Higgs boson is a scalar, only the ++ and −− gluon helicity configurations contribute to the Higgs-mediated amplitude. In this paper our focus is on Higgs production and the signal-background interference. Since there is no interference between distinct helicity configurations, we can therefore also ignore the continuum production diagrams with the +−, −+ helicity configurations. These do contribute to the background, but their contribution is small compared to the qq̄ → lνlν process. The above amplitudes are matched onto operators in the effective theory. The SCET operators at leading power are constructed from collinear gauge-invariant gluon fields [73,74],

B^µ_{n,ω⊥} = (1/g) [δ(ω − P̄_n) W_n† iD^µ_{n⊥} W_n],

where n, n̄ are lightlike vectors along the beamline. The collinear covariant derivative is defined as

iD^µ_{n⊥} = P^µ_{n⊥} + g A^µ_{n⊥},   (4.4)

with P a label operator which extracts the label component of the momentum in the effective theory, and W_n the standard collinear Wilson line built from n̄·A_n gluon fields. A helicity basis of SCET operators for the process of interest is constructed from these fields, with a factor of 1/2 included as a bosonic symmetry factor to simplify the matching to the effective theory. We have defined collinear gluon fields of definite helicity, B^a_{n,±}, by contracting with the polarization vectors ε_∓^µ, as well as leptonic currents of definite helicity. In these expressions, and in the expressions for the Wilson coefficients given below, we use the standard spinor-helicity notation, with ⟨ij⟩ = ū_−(p_i) u_+(p_j) and [ij] = ū_+(p_i) u_−(p_j). Note that we use hard functions that are fully differential in the leptonic momenta. This allows realistic experimental cuts on the leptonic phase space to be straightforwardly incorporated.
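For reference, the spinor products used above satisfy the standard relations (the identity relating them to the invariant s_ij holds for massless momenta; conventions as stated in the text):

```latex
\langle ij \rangle \;=\; \bar u_-(p_i)\, u_+(p_j), \qquad
[ij] \;=\; \bar u_+(p_i)\, u_-(p_j), \qquad
\langle ij \rangle\, [ji] \;=\; 2\, p_i \cdot p_j \;=\; s_{ij}\,.
```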
It is important to note that operators with distinct external helicities do not mix under the SCET RGE at leading power. The jets from the incoming partons, which are described by the beam functions, can only exchange soft gluons, described by the soft function. At leading power the soft gluons cannot exchange spin, only colour, and therefore the RGE can only mix Wilson coefficients in colour space, which in this case is trivial. This allows one to consistently neglect the operators O_{+−}, O_{−+}, which would arise from matching A_C(1g⁻, 2g⁺) and A_C(1g⁺, 2g⁻) onto SCET. They do not contribute to the process of interest, and do not mix under the RGE with the operators that do contribute. We are interested in considering the direct Higgs production and the signal-background interference separately, so it is convenient to maintain this distinction in SCET. Although the SCET operators are the same in both cases, we can separate each Wilson coefficient into a component from the Higgs-mediated diagram and a component from the box-mediated continuum diagram. We then have four Wilson coefficients: C^H_{++}, C^H_{−−}, C^C_{++}, and C^C_{−−}. Since the operators are in a helicity basis, these four Wilson coefficients are simply the finite parts of the helicity amplitudes for the given processes (more precisely, MS̄ Wilson coefficients in SCET are the finite parts of the helicity amplitudes computed in pure dimensional regularization). These were computed in [59], and can be obtained from the MCFM code [95]. In these expressions, the function P_i is the ratio of the propagator for particle species i to that of the photon. We have also used F_H(s_12) for the usual loop function for gluon-fusion Higgs production, given in Eq. (4.15). The Wilson coefficients C^C_{++} and C^C_{−−} for the box diagram depend on the W mass and width, as well as on the kinematic invariants formed by the external momenta. In the presence of massive quarks in the loops they are extremely lengthy, so we do not reproduce them here.
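A common convention for the propagator ratio described in words above is the following; since the text defines P_i only verbally, the precise normalization here is an assumption:

```latex
P_i(s) \;=\; \frac{s}{\,s - m_i^2 + i\, m_i \Gamma_i\,}\,, \qquad P_\gamma(s) \;=\; 1\,,
```

so that P_i reduces to unity when the massive propagator is replaced by the photon propagator.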
We refer interested readers to [59], and to the MCFM code from which we have extracted the required results for our analysis. We have verified that our extracted expressions reproduce the quoted numerical results and distributions in [59]. The hard coefficient, H, appearing in the factorization theorem, Eq. (2.1), is given by the square of the Wilson coefficients. As is typically done for squared matrix elements, we can separate the hard function into the sum of a hard function for the Higgs-mediated process, H_H, a hard function for the interference, H_int, and a hard function for the background arising as the square of the Wilson coefficient for the continuum process, H_C (which we will not use here). The first two are given by the square of the Higgs Wilson coefficient and by the cross term between the Higgs and continuum Wilson coefficients, respectively. This decomposition allows us to discuss the resummation of the interference and of the Higgs-mediated process separately in the effective theory, in a language identical to that used in Feynman diagram calculations. In Secs. 4.3 and 4.4 we will discuss the effect of resummation on both the Higgs-mediated contribution and the signal-background interference.

Parameters for Numerical Calculations

For the numerical results we use the default set of electroweak parameters from MCFM, following [54,59]; the widths are determined from HDECAY [96]. We use the NLO PDF fit of Martin, Stirling, Thorne and Watt [88] with α_s(m_Z) = 0.12018. The results in this section were obtained using the analytic results for the partonic process documented in [59]. Scalar loop integrals were evaluated using the LoopTools package [97], and phase-space integrals were done using the Cuba integration package [98]. For all the results presented in this section we have integrated over the leptonic phase space, and allow for off-shell vector bosons.

Off-Shell Higgs Production

We begin by studying the effect of the jet veto on far off-shell Higgs production in gg → H → WW → e⁺ν_e µ⁻ν̄_µ.
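Schematically, the decomposition described above follows from squaring the sum of the Higgs and continuum Wilson coefficients (helicity sums and overall normalization suppressed):

```latex
H \;\propto\; \left| C^{H} + C^{C} \right|^2
  \;=\; \underbrace{\left|C^{H}\right|^2}_{H_H}
  \;+\; \underbrace{2\,\mathrm{Re}\!\left[ C^{H} \left(C^{C}\right)^{*} \right]}_{H_{\rm int}}
  \;+\; \underbrace{\left|C^{C}\right|^2}_{H_C}\,.
```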
While a full analysis of the off-shell region also requires the inclusion of signal-background interference, for which the hard function at NLO is not known, we use the off-shell Higgs-mediated process to study the convergence of the resummed predictions. In particular, one is first sensitive to the jet radius at NNLL. The ability to perform the resummation to NNLL for the Higgs-mediated signal enables us to assess the convergence of the resummed predictions in the off-shell region. It also allows us to check that the NLL result, which will be used when signal-background interference is included, captures the effect of the jet veto reasonably well. In particular, we will focus on the shape of the differential distribution in ŝ. As will be discussed in more detail in Sec. 5, the procedure for extracting a bound on the Higgs width from the off-shell cross section uses a rescaling to the on-shell cross section. Because of this, the shape, but not the normalization, of the distribution is important for an accurate application of this method. Therefore, as in Sec. 3, we will rescale the differential cross sections by E_0(m_H²), allowing us to focus on the shape alone. The NNLL calculation requires the NNLL beam and soft functions, which are known in the literature for a jet veto defined by a cut on p_T [50], as well as the virtual part of the NLO gluon-fusion hard function. The NLO virtual contributions for gluon-fusion Higgs production are known analytically with full dependence on the top and bottom quark masses [87], which is necessary, as in the off-shell region one transitions through the √ŝ = 2m_t threshold. The NLO hard function is determined by matching onto the gluon-fusion operators in SCET, as discussed in Sec. 4.1. We do not include the non-singular pieces in our calculation, as we focus on the shape of the distribution at large √ŝ. In Fig.
5 we plot the resummed distribution, normalized to the jet veto suppression at the Higgs mass, (dσ_0/dm_4l)/E_0(m_H²), for off-shell gg → H → WW → e⁺ν_e µ⁻ν̄_µ. Note that in the case without a jet veto, the jet veto suppression at the Higgs mass is defined to be 1. We have integrated over the leptonic phase space. Here m_4l = √ŝ is the invariant mass of the four-lepton final state. In Fig. 5a we use p_T^veto = 30 GeV, and in Fig. 5b we use p_T^veto = 20 GeV. In both cases we use a jet radius of R = 0.5, as is currently used by the CMS collaboration. The uncertainty bands are rough estimates from scale variations by a factor of 2. Note that in the calculation we use a five-flavour scheme, even above m_t, since the difference from using a six-flavour coupling is well within our error band. Figs. 5a and 5b show a small modification to the differential distribution between NLL and NNLL. This arises primarily from the clustering logarithms, which introduce dependence on the jet radius that is not present at NLL. The R dependence reproduces the expected physical dependence of the cross section on R: for a fixed p_T^veto cut, the restriction on radiation from the initial partons becomes weaker as the jet radius is decreased, causing a smaller suppression of the cross section. Despite this, the shape is well described by the NLL result. In particular, the NLL result captures the dominant effect of the exclusive jet veto on the off-shell cross section. This is important for the resummation of the interference, considered in Sec. 4.4. In that case, higher-order results are not available (for some approximate results, see [100]), and therefore one is restricted to an NLL resummation. However, the results of this section demonstrate that the NLL result accurately captures the effects of the jet veto on the shape of the distribution as a function of ŝ.
Signal-Background Interference

Signal-background interference for the process gg → lνlν has been well studied in the literature [59,60,62]. The interference comes almost exclusively from the √ŝ > 2M_W region. For a light Higgs, m_H < 2M_W, this means that the interference comes entirely from √ŝ > m_H. For a heavy Higgs, the Higgs width is sufficiently large that there are contributions to the interference from a wide range of √ŝ. The signal-background interference is therefore, in both cases, an interesting process on which to demonstrate the effect of the jet veto. The NLO virtual corrections are not available for the interference process, restricting the resummation accuracy to NLL. However, as argued in Sec. 4.3, if one is interested in the shape of the distribution, and not the normalization, the NLL result captures the effects of the jet veto. Without a full calculation of the NLO virtual contributions to the interference, one cannot know whether they differ from those for the signal. For the case of interference in H → γγ, where they are known, the virtual contributions for the interference were found to be smaller than for the signal [51]. Due to the similar structure of the diagrams for H → WW, the same may well be true here. However, we expect this to be a minor correction compared to the effects of the jet veto; in particular, we do not expect the K-factor to have strong ŝ dependence, which is the important effect captured by the resummation. In this section we use the LO result for gg → eνµν, fully differential in the lepton momenta, which is available in the MCFM code and is documented in [59]. We begin by reviewing the notation for the signal-background interference in gg → eνµν at LO, following [59]. It is convenient to pull out the dependence on m_H and Γ_H coming from the s-channel Higgs propagator.
Defining C̄_H = (ŝ − m_H² + i m_H Γ_H) C_H, we can separate the hard function for the signal-background interference into its so-called "Real" and "Imaginary" contributions,

H_int ∝ 2 Re[C_C* C_H] = [2 (ŝ − m_H²) Re(C_C* C̄_H) + 2 m_H Γ_H Im(C_C* C̄_H)] / [(ŝ − m_H²)² + m_H² Γ_H²],   (4.19)

where a sum over the helicities of the Wilson coefficients is implicit. Note that the imaginary part of the interference is multiplied by an explicit factor of Γ_H, and is therefore negligible for a light Higgs. The interference without a jet veto is shown in Fig. 6a for a 126 GeV Higgs and in Fig. 6b for a 600 GeV Higgs, as a function of m_4l. We have integrated over the phase space of the leptons, including allowing for off-shell vector bosons. The interference is negligible below the √ŝ = 2m_W threshold. For the case of m_H = 126 GeV, the only non-negligible contribution is the real part of the interference above the Higgs pole, which gives a negative contribution to the total cross section. In the case of m_H = 600 GeV, there is significant interference both above and below the Higgs pole, and from both the real and imaginary parts. The interference below the pole dominates, leading to a net positive contribution to the total cross section. We have chosen these two Higgs masses, for which the interference has a different √ŝ dependence, so as to demonstrate the different effects that a jet veto can have on signal-background interference. Fig. 6 also shows, as a function of √ŝ, the result for the interference including a jet veto of p_T^veto = 20, 30 GeV with NLL resummation, which can be compared with the interference without a jet veto. To make the interpretation of Fig. 6 as simple as possible, we have rescaled the interference by E_0(m_H²), the jet veto efficiency at m_H. Therefore, enhancements and suppressions in the jet-vetoed interference correspond to enhancements and suppressions of the interference relative to the on-shell Higgs contribution when a jet veto is applied. As expected from the discussion in Sec.
3, we find a significant suppression of the interference at higher √ŝ, and this suppression increases with √ŝ. For m_H = 126 GeV, shown in Fig. 6a, the interference comes entirely from above √ŝ = m_H, and is therefore more strongly suppressed by the jet veto relative to the on-shell Higgs cross section. However, the situation is quite different for m_H = 600 GeV, shown in Fig. 6b. Here the dominant contribution to the interference is from the real part in Eq. (4.19), which changes sign at √ŝ = m_H. The real part of the interference coming from below √ŝ = m_H is positive, and is partly cancelled by negative interference from above √ŝ = m_H if we integrate over ŝ. The jet veto suppresses the on-shell cross section and the negative interference from above √ŝ = m_H more than the positive interference from below √ŝ = m_H, and therefore the jet veto acts to enhance the interference contribution relative to the signal. This enhancement is significant for m_H = 600 GeV, as the interference has contributions starting at m_4l ∼ 2m_W, where the suppression due to the jet veto is smaller. To quantify this further, we can consider the effect of the jet veto on the ratio R_I = σ_{H+I}/σ_H, where σ_{H+I} is the cross section including the signal-background interference and σ_H is the Higgs-mediated cross section. The behaviour of this ratio is different for the two Higgs masses considered. Numerical values of R_I are shown in Table 1. The effect of interference for m_H = 126 GeV, with or without the jet veto, is fairly small, and would be made even smaller by cuts designed to eliminate the interference. However, for m_H = 126 GeV the effect of the jet veto can also be significantly amplified when cuts are used to maximize sensitivity to the Higgs width. For example, the analysis of [54] considered the region M_T > 300 GeV to bound the Higgs width. Since m_4l ≥ M_T, we see from Fig.
6a that in this region the effect of the exclusive jet veto is by no means small, giving a suppression of ∼1.5–2. A representative error band from scale variation is also shown in Fig. 6a. The effect on the derived bound will be discussed in Sec. 5. These two examples demonstrate that a jet veto can have an interesting interplay with signal-background interference, enhancing or suppressing its contribution relative to the Higgs-mediated cross section, depending on the particular form of the interference. A detailed understanding of the interference is of phenomenological interest for both a light and a heavy Higgs. In the case of m_H = 126 GeV, the interference can be efficiently removed by cuts when studying the on-shell cross section [59], but is important for the understanding of the off-shell cross section. In the case of a heavy Higgs, the interference is important for heavy Higgs searches [59,101,102], where it is a large effect and cannot be easily removed by cuts. The effect of the jet veto must therefore be incorporated in such searches.

Suppression as a Function of M_T

We have so far discussed the effect of the jet veto on the cross section as a function of √ŝ, as the Sudakov factor is explicitly a function of √ŝ. However, in the case of H → WW → lνlν, the total invariant mass of the leptons cannot be reconstructed. A substitute for √ŝ, used in [59] and measured by the CMS and ATLAS collaborations [101,103,104], is the transverse mass M_T.

Figure 6 (caption): As shown, the jet veto causes a suppression of the importance of the interference relative to the Higgs-mediated process for a light Higgs, and an enhancement for a heavy Higgs.

Similarly to the ratios of the exclusive zero-jet cross section to the total cross section considered in Sec. 3 as a function of ŝ, in Fig. 7 we plot the corresponding variable as a function of M_T for gg → H → WW → ν̄_µ µ⁻ e⁺ ν_e in the far off-shell region. Since M_T is designed as a proxy for ŝ, the behaviour is as expected from the discussion of Sec.
3; however, since √ŝ ≥ M_T, the events contributing at a given M_T all have a larger √ŝ. Since it is √ŝ that governs the Sudakov suppression due to the jet veto, the suppression at a given M_T is larger than at the same value of √ŝ. We note that, while the values of M_T at which the suppression due to the jet veto becomes significant are larger than are normally considered or studied experimentally, the authors of [54] show that with an improved understanding of the backgrounds in the ATLAS N_jet = 0 bin of the WW data, the M_T > 300 GeV region, where the jet veto effects are indeed significant, can be used to place a competitive bound on the Higgs width. As will be discussed in Sec. 5, their method relies heavily on having an accurate description of the shape of the M_T distribution, which is modified by the jet veto. This section demonstrates that in the exclusive zero-jet bin there is a suppression by a factor of ∼2 above M_T > 300 GeV, which is a significant effect. This will cause a corresponding weakening of the bound on the Higgs width by a similar factor, which we discuss further in Sec. 5.

From 8 TeV to 13 TeV

Since the focus will soon shift to the 13 TeV LHC, in this section we briefly comment on how the effects discussed in the previous sections will be modified at higher E_cm. In Sec. 3.3 we noted that at higher E_cm the jet veto suppression has an increased dependence on ŝ, due to the larger range of Bjorken x probed in the PDFs. The larger range of available x increases the gluon luminosity at high ŝ, allowing for an increased contribution to the cross section from far off-shell effects [53,54] and increasing the range over which they contribute, potentially amplifying the effects of the jet veto discussed in the previous sections. In this section we consider one example to demonstrate this point: the signal-background interference for m_H = 126 GeV.
The signal-background interference distribution for E_cm = 8 and 13 TeV is shown in Fig. 8, along with the signal-background interference in the exclusive zero-jet bin at 13 TeV. As was done in Fig. 6 for the NLL predictions, we have normalized the distribution by the jet veto suppression at m_H, so that the suppression is relative to on-shell production. The most obvious modification compared with E_cm = 8 TeV is the significant enhancement of the signal-background interference cross section, due to the large enhancement of the gluon luminosity at larger ŝ. In particular, the contribution to the signal-background interference cross section from the peak at m_4l = 2m_t is enhanced at higher E_cm, relative to the contribution from m_4l ∼ 2m_W. Since there is a larger relative contribution from higher invariant masses, where the suppression due to the jet veto is larger, the effect of an exclusive jet veto is larger at 13 TeV. This is in addition to the fact that at 13 TeV the ŝ dependence of the suppression due to the jet veto is slightly stronger, as was demonstrated in Sec. 3.3. We again emphasize that when cuts are applied to gain sensitivity to the off-shell region, the effect of the jet veto is not small. In particular, at 13 TeV there is a significant region above m_4l ∼ 350 GeV where the suppression due to the jet veto is ≳ 2, as seen in Fig. 8. Although we have focused on the effect of an increased centre-of-mass energy on a particular observable, the conclusions apply generically, for example to the H → ZZ → 4l, 2l2ν channels, which exhibit similar signal-background interference. Indeed, as the centre-of-mass energy is increased, one can probe phenomena over an increasingly large range of ŝ. This amplifies the effects of off-shell physics, as well as the effect of an exclusive jet veto. These effects will be important in any physics channel for which a jet veto is applied and in which one is interested in the physics over a range of ŝ.
Effect of Jet Vetoes on Higgs Width Bounds

In this section we discuss the impact of jet binning and jet vetoes on the recent program to use the off-shell cross section in H → WW, ZZ to bound the Higgs width [52-54,67]. Although we have focussed on the case of H → WW, we will also review the strategy for H → ZZ, which is similar, also exhibiting a large contribution from the far off-shell cross section, as well as signal-background interference analogous to that in H → WW. We first discuss the procedure used to bound the Higgs width, and then relate it to our discussion of the suppression of the off-shell cross section in the exclusive zero-jet bin. Our focus will be on the effect of the jet vetoes, rather than on carrying out a complete numerical analysis; in particular, the proper incorporation of backgrounds and additional experimental cuts is beyond the scope of this paper. The method used to bound the Higgs width in Refs. [52-54] can be phrased in a common language for both H → ZZ and H → WW. It is based on the different scalings of the on-shell, off-shell, and interference contributions to the Higgs cross section, as discussed in Sec. 1. Recalling the scaling from Eq. (1.4), the total cross section can be written as a sum of three terms whose coefficients A, B, C correspond to the on-shell, off-shell Higgs-mediated, and signal-background interference contributions, respectively. The coefficients depend strongly on the set of cuts that are applied. To extract a bound on the Higgs width, the procedure is as follows. First, one determines a normalization factor between the experimental data and the theoretical prediction which is as independent as possible of the Higgs width. This can be done for WW by using a strict M_T cut, for example 0.75 m_H ≤ M_T ≤ m_H, and for ZZ by using a strict cut on ŝ. In both cases, this essentially eliminates the coefficients B, C corresponding to off-shell production and interference.
Once this normalization factor is determined, it is scaled out from the entire differential distribution, so that we must now consider the ratio of off-shell and on-shell production cross sections. One can then compute the predicted number of events above some M_T or ŝ value, for example M_T, √ŝ ≥ 300 GeV. In this region, the interference and off-shell production dominate the cross section, so that the coefficients B, C are significant, and the expected number of events is sensitive to the Higgs width, as can be seen from the scalings in Eq. (5.1). By comparing with the number of events observed by the experiment in this region, one can place a bound on the Higgs width. This method relies on the ability to normalize the theoretical prediction to data in the low-M_T or low-ŝ region, which is insensitive to the Higgs width, and then to use the same normalization in the high-M_T or high-ŝ region, where there is large sensitivity to the Higgs width through off-shell production and signal-background interference. However, to be able to do this, one needs an accurate theory prediction for the shape of the M_T or ŝ distribution, particularly in the high-M_T or high-ŝ region. As we have seen, the jet veto significantly modifies the shape of the M_T or ŝ distribution, causing it to fall off more rapidly at high M_T or ŝ. We have often presented our results by normalizing the off-shell cross sections to the cross section at the Higgs mass. Given the agreement between theory and experiment at m_H, this normalization corresponds exactly to what is done when the theory prediction is normalized to the experimental data in the on-shell region, and therefore shows the extent to which a prediction without a jet veto overestimates the contribution to the cross section at high M_T or ŝ compared with that in the exclusive zero-jet bin.
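The normalize-then-extrapolate procedure above can be sketched numerically. As one common parametrization (an assumption for illustration, consistent with the scalings described in the text once the on-shell rate is fixed), take the expected high-M_T count to be N(r) = N_cont + B·r + C·√r with r = Γ_H/Γ_SM; all coefficient values below are illustrative, not the paper's numbers:

```python
import math

def n_offshell(r, B, C, n_cont=50.0):
    """Expected high-M_T event count versus r = Gamma_H / Gamma_SM, after
    normalizing the prediction to on-shell data: the off-shell Higgs piece
    scales like r, the interference like sqrt(r), and n_cont is the
    width-independent continuum. All numbers are illustrative."""
    return n_cont + B * r + C * math.sqrt(r)

def width_bound(n_max, B, C, lo=1.0, hi=500.0, tol=1e-9):
    """Largest r with n_offshell(r) <= n_max, found by bisection
    (the count is monotonically increasing in r for the values used)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if n_offshell(mid, B, C) <= n_max else (lo, mid)
    return lo

# Toy comparison: coefficients without a jet veto, vs. a veto that
# suppresses the far off-shell prediction by roughly a factor of 2.
r_inclusive = width_bound(100.0, B=10.0, C=-4.0)   # ~ 6
r_vetoed    = width_bound(100.0, B=5.0,  C=-2.0)   # ~ 11, a weaker bound
```

Halving the off-shell coefficients roughly doubles the extracted upper limit on r, mirroring the statement that an unmodelled jet veto suppression of ∼2 weakens the width bound by a similar factor.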
In both the H → ZZ and H → WW analyses, jet vetoes or jet binning are used, so it is interesting to consider how they will affect the width bounds. Their use in the two channels is quite different, so we discuss them separately. For the case of H → WW the jet veto plays an important role because the exclusive zero-jet bin dominates the sensitivity, so the jet veto has a large impact on the potential Higgs width bound; effectively, it is difficult to recover the events which migrate out of the zero-jet bin. The plots of the off-shell distributions in Sec. 4 show the extent to which a prediction without the inclusion of a jet veto overestimates the contribution to the cross section at high M_T. This will weaken the bound on the Higgs width compared with a calculation that does not incorporate the effect of the jet veto. For example, in [54], which first proposed the use of the H → WW channel, the estimated sensitivity was derived by comparing an inclusive calculation for the off-shell cross section with data in the exclusive zero-jet bin. Here the effect of the restriction to the zero-jet bin is not small, and will worsen the bounds by a factor of ∼2, as can be seen from the suppression of the far off-shell cross section in the exclusive zero-jet bin shown in Sec. 4. In an analogous experimental analysis this Sudakov suppression from the jet veto will be accounted for, up to some level of precision, by the use of a parton shower. Because this is such a large effect, we believe that an experimental analysis of the high-M_T region performed to bound the Higgs width would benefit from using an analytic calculation of the jet veto suppression in the exclusive zero-jet bin, instead of relying on the parton shower. We have demonstrated that the resummation, including the signal-background interference, can be achieved to NLL.
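The factor-of-∼2 weakening can be illustrated with a one-line toy calculation (invented yields, interference neglected for simplicity): if the jet veto suppresses the far off-shell SM yield by a factor S ≈ 0.5 but the limit is set against the inclusive, unsuppressed prediction, the true bound on r = Γ_H/Γ_H^SM is weaker by 1/S:

```python
def r_bound(n_limit, n_offshell_sm):
    # Largest width ratio r allowed by n_offshell_sm * r <= n_limit
    # (toy counting estimate; interference neglected for simplicity).
    return n_limit / n_offshell_sm

n_limit = 30.0      # allowed signal events above the high-M_T cut (toy number)
n_incl = 10.0       # inclusive SM off-shell yield (toy number)
suppression = 0.5   # ~factor-2 Sudakov suppression in the exclusive 0-jet bin

bound_inclusive = r_bound(n_limit, n_incl)                # ignores the veto
bound_with_veto = r_bound(n_limit, n_incl * suppression)  # weaker by 1/suppression
```

This is only a counting sketch; the resummed calculation in the text determines the actual ŝ-dependent suppression factor.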
Once the NLO virtual corrections are calculated for the interference, these results can also easily be extended to NNLL. In the H → ZZ → 2l2ν channel the situation is different, as the jet binning procedure is used to optimize sensitivity, splitting the data into exclusive 0-jet and inclusive 1-jet categories, with comparable bounds coming from each category [67]. Because the inclusive 1-jet channel is still experimentally clean, the large migration to the inclusive 1-jet bin discussed in Sec. 3.2 should have a small (or no) impact on the width bounds derived from the combination of the two categories in H → ZZ. A proper treatment of the migration of events with changing ŝ is still important when considering any improvement that is obtained by utilizing jet binning. The analytic results for the Sudakov form factor discussed here for H → WW could be utilized for jet bins in H → ZZ in a straightforward manner. Conclusions In this paper a study of the effect of jet vetoes on off-shell cross sections was made. A factorization theorem in SCET allowed us to analytically treat the summation of Sudakov logarithms, and to make a number of general statements about the effect of the jet veto. In particular, the restriction on radiation imposed by the jet veto causes a suppression of the exclusive 0-jet cross section, and correspondingly an enhancement of the inclusive 1-jet cross section, which depends strongly on ŝ. For gluon-initiated processes the ŝ dependence of the suppression is greater than for quark-initiated processes, which is important for channels where the signal is dominated by one production channel and the background by another. The fact that the jet veto suppression is ŝ dependent has interesting effects on differential distributions in ŝ, as well as on signal-background interference. To demonstrate these effects, we considered the gg → H → WW process, which has large off-shell contributions as well as signal-background interference.
We performed an NLL resummation for the gg → H → WW → lνlν process, including a discussion of the resummation for the signal-background interference, for m_H = 126 GeV and m_H = 600 GeV. These two examples demonstrated that, depending on the structure of the interference, the jet veto can either enhance or suppress interference effects relative to the on-shell production. For a low-mass Higgs a suppression is observed, while for a high-mass Higgs there is a significant enhancement of the interference. These effects must be properly incorporated in high-mass Higgs searches that use jet vetoes. The modification of differential distributions in ŝ or M_T due to the ŝ dependence of the jet veto suppression is particularly relevant to a recent program to bound the Higgs width using the off-shell cross section [52][53][54][67]. In particular, for the H → WW channel, where an exclusive 0-jet veto is imposed to mitigate large backgrounds, the jet veto weakens the bound on the Higgs width by a factor of ∼2 compared to estimates that do not account for the jet veto. Furthermore, since the suppression in the exclusive 0-jet bin corresponds to an enhancement in the inclusive 1-jet bin, and the migration is significant as a function of √ŝ, a proper understanding of the effect of the jet veto is crucial for experimental analyses which use jet binning. This migration may, for example, play some role in H → ZZ → 2l2ν, which was recently used by CMS to place a bound on the off-shell Higgs width, and which uses jet binning in exclusive 0-jet and inclusive 1-jet bins [67]. We presented a factorization theorem in SCET which allows for the resummation of large logarithms of √ŝ/p_T^veto, including for the signal-background interference, in a systematically improvable manner. This allows for the analytic study of the effect of the jet veto on the exclusive 0-jet and inclusive 1-jet bins, including the correlations in their theory uncertainties.
A complete NNLL calculation would require the calculation of the NLO virtual corrections to the interference, but would allow for the analytic incorporation of jet radius effects, and would place the study of the off-shell cross section on a firmer theoretical footing. Furthermore, since our hard functions are fully differential in the leptonic momenta, realistic experimental cuts on the leptonic phase space can be easily implemented. We leave a more detailed investigation, including the treatment of such cuts and a calculation of the effect of the jet veto on the backgrounds, for future study. With the LHC beginning its 13 TeV run in the near future, the importance of the effects discussed in this paper will be amplified. As the centre of mass energy is raised, the range of ŝ which can be probed increases. This typically increases the importance of off-shell effects, as well as the impact of the jet veto, which is essential for an accurate description of the differential distribution in ŝ. In general, a proper theoretical understanding of jet vetoes and jet binning at large ŝ can be achieved through resummation, and is important when theoretical cross sections are needed for the interpretation of experimental results.
Laser-Based Monitoring of CH4, CO2, NH3, and H2S in Animal Farming—System Characterization and Initial Demonstration In this paper, we present a system for sequential detection of multiple gases using the laser-based wavelength modulation spectroscopy (WMS) method combined with a Herriott-type multi-pass cell. Concentrations of hydrogen sulfide (H2S), methane (CH4), carbon dioxide (CO2), and ammonia (NH3) are retrieved using three distributed feedback laser diodes operating at 1574.5 nm (H2S and CO2), 1651 nm (CH4), and 1531 nm (NH3). Careful adjustment of system parameters allows for H2S sensing at the single parts-per-million by volume (ppmv) level with strongly reduced interference from adjacent CO2 transitions, even at atmospheric pressure. System characterization in laboratory conditions is presented and results from initial tests in a real-world application are demonstrated. Introduction Monitoring gas emissions has become an important issue in the livestock sector [1][2][3][4]. Air quality affects animals and has an impact on people who live nearby (due to odors) and on the climate (due to greenhouse gas emissions). The environmental impact of pig farming (the pig sector is the biggest contributor to global meat production) on air quality primarily includes emission of methane, ammonia, and hydrogen sulfide [5][6][7][8]. Proper assessment of emission rates requires all these gases to be detected in a continuous manner, at low cost, and with sensitivity and accuracy at the single-ppmv level. The existence of this problem was confirmed by preliminary tests carried out in research facilities using portable, handheld devices or gas sampling bags for subsequent laboratory analysis. Fortunately, these requirements can be met when laser spectroscopy is applied. Laser spectroscopy is a powerful tool in chemical analysis. When implemented in the infrared spectral region, it can provide high sensitivity, high selectivity, robustness, and fast acquisition times.
The strongest fundamental molecular transitions are located in the mid-infrared (beyond 3 µm), allowing for trace-gas detection down to or even below ppbv and pptv levels [9][10][11][12][13][14][15][16]. The near-infrared region offers overtone transitions that are usually one to three orders of magnitude weaker. However, when detection at ppmv levels is needed, it is still a very attractive alternative, as it provides relatively cheap light sources and detectors (both of which can be operated at room temperature) and often benefits from optical fiber-based components (fiber-coupled laser diodes and photodetectors, couplers, isolators, etc.) [17][18][19]. Near-infrared sources are frequently combined with wavelength modulation spectroscopy (WMS) and multi-pass cells to provide sensitivities at or below ppmv levels for various species, including methane [20], ammonia [21], hydrogen sulfide [22], and carbon dioxide [23]. Simple and robust systems can fill the gap between relatively expensive instruments based on mid-infrared quantum cascade lasers (which can be a few orders of magnitude more sensitive than required in our target application) [13,24] and portable, handheld devices which are cheap but not sensitive enough (e.g., instruments for methane sensing are designed to detect it starting from relatively high levels such as 0.1%). In this paper, we present a laser-based system operating in the near-infrared spectral region that allows for CH4, NH3, and H2S detection at single-ppmv levels using three fiber-coupled laser diodes operated at 1651, 1531, and 1575 nm. Additionally, the laser diode at 1575 nm enables CO2 monitoring. The system is characterized in laboratory conditions and initial results of the first field tests in a pig farm are demonstrated.
Figure 1 shows the schematic diagram of the system, whereas details on the chosen spectral regions and target transitions are provided in Figure 2 and Table 1 (required detection limits were defined based on expected changes of concentrations and preliminary tests using gas sampling bags and laboratory analysis). Three laser diodes (Gooch & Housego, Ilminster, UK, model AA1401 at 1531 nm; NTT Electronics, model NLK1L5EAAA at 1575 nm; and NTT Electronics, model NLK1U5EAAA at 1651 nm) were combined using fiber couplers and sent into two sensing paths. The first was a measurement path (shown in red), with a multi-pass cell, an off-axis parabolic mirror, and a photodetector (Thorlabs, Newton, NJ, USA, PDA10CS-EC). The multi-pass cell was built using two spherical mirrors (Thorlabs, CM508-200) placed 397 mm apart. An approximately 30 m path length with a volume of less than 800 mL was obtained using a Herriott-type design, which is relatively stable against small perturbations (e.g., due to temperature variations). The second path was a calibration section (shown in green) with four gas cells, a lens, and a photodetector (Thorlabs, PDA10CS-EC). The gas cells were: 5% NH3 in air (100 mm length), pure H2S (50 mm length), 4% CH4 in nitrogen (25 mm length), and 32% CO2 in air (150 mm length), all at atmospheric pressure. The same gas cells were also used in the system characterization described in the following section. Gas cells with methane and hydrogen sulfide were provided by Wavelength References (Corvallis, OR, USA). Cells containing ammonia and carbon dioxide were made in-house, and their concentrations were determined through recording of the target absorption line and spectral fitting using the HITRAN database. Signals from both photodetectors were fed into the input channels of a VirtualBench device (from National Instruments, Austin, TX, USA).
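The quoted cell geometry can be sanity-checked with a back-of-the-envelope calculation (a sketch, not from the paper): a ~30 m total path with mirrors 397 mm apart implies roughly 76 traversals between the mirrors.

```python
mirror_spacing_m = 0.397  # spherical mirror separation, from the text
target_path_m = 30.0      # approximate total optical path, from the text

# Each traversal between the mirrors adds one mirror spacing to the path.
n_passes = round(target_path_m / mirror_spacing_m)
actual_path_m = n_passes * mirror_spacing_m

print(n_passes, round(actual_path_m, 2))  # 76 30.17
```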
Data acquisition was synchronized with a function generator providing sinusoidal wavelength modulation (f_m = 2.5 kHz) to the selected laser diode, allowing for WMS-based measurements [20][21][22][23][25][26][27][28][29][30][31][32][33][34]. For the laser diodes at 1651 nm and 1531 nm we selected the modulation depth that maximized the 2f WMS signals. For the laser diode at 1575 nm, the modulation depth was adjusted to provide the smallest cross-interference between the spectral features of H2S and CO2 in the 4f WMS spectrum. Digital outputs of the VirtualBench could be used to turn the laser diodes on/off for sequential detection of CH4, CO2/H2S, and NH3. The acquired signal was processed using a LabVIEW program. It allowed for lock-in detection of selected harmonic signals in a line-locked mode, when the laser wavelength was adjusted to the transition center, or in a spectral scan mode, with the wavelength being scanned through changes of the injection current for full WMS spectrum recording and analysis. The WMS amplitude at 1 × f_m was also recorded for power normalization purposes [23,26,27]. Carbon Dioxide and Hydrogen Sulfide Detection Detecting CO2 and H2S at 1574.5 nm is more challenging. When the CO2 concentration is above 300 ppmv and the H2S concentration is at the single-ppmv level, the hydrogen sulfide line is surrounded by stronger features from carbon dioxide transitions [22,35]. This is demonstrated in Figure 4a, where two 2f WMS spectra are presented. The first was recorded with the H2S sample only (pure H2S at 50 mm, which corresponds to 5 × 10^4 ppmv × m). The second was measured after the setup was additionally filled with carbon dioxide, resulting in a CO2 concentration of approximately 50% (with a path length of 30 m; this is more than two orders of magnitude higher than the H2S concentration). It is clearly visible that the signal from hydrogen sulfide (both the baseline and its amplitude at the line center) is affected by the wings of the CO2 lines.
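The harmonic detection described above can be sketched with a toy numerical model (a unit-amplitude Lorentzian line and an ideal lock-in; this is not the authors' processing code): modulating the wavelength across the line and demodulating at 2f_m gives a signal whose magnitude peaks at line center, while odd harmonics vanish there by symmetry.

```python
import numpy as np

fm = 2.5e3                      # modulation frequency from the text [Hz]
fs = 1.0e6                      # simulation sampling rate [Hz]
t = np.arange(0, 0.02, 1 / fs)  # exactly 50 full modulation periods

def lorentzian(x, hwhm=1.0):
    """Toy absorption line versus detuning (in HWHM units)."""
    return 1.0 / (1.0 + (x / hwhm) ** 2)

def wms_harmonic(center_detuning, harmonic, mod_depth=2.2):
    """Lock-in demodulation of the detected signal at harmonic * fm.

    A modulation depth of ~2.2 HWHM is the classic choice that roughly
    maximizes the 2f signal for a Lorentzian line. Even harmonics are
    picked out with a cosine reference, odd harmonics with a sine.
    """
    x = center_detuning + mod_depth * np.sin(2 * np.pi * fm * t)
    sig = lorentzian(x)
    if harmonic % 2 == 0:
        ref = np.cos(2 * np.pi * harmonic * fm * t)
    else:
        ref = np.sin(2 * np.pi * harmonic * fm * t)
    return 2.0 * np.mean(sig * ref)

on_peak_2f = wms_harmonic(0.0, 2)   # largest |2f| at line center
off_peak_2f = wms_harmonic(5.0, 2)  # far from the line: much smaller
on_peak_1f = wms_harmonic(0.0, 1)   # odd harmonic vanishes at line center
```

The same sketch, evaluated at the fourth harmonic, illustrates why 4f features are narrower and less affected by the wings of neighboring lines.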
In these conditions H2S concentration retrieval is still possible, but it becomes challenging and prone to errors. This issue can be addressed with analysis of the WMS signal at higher harmonics of the modulation frequency [31]. In the presented system, detection of the fourth harmonic was implemented. The 4f/1f WMS signal typically has a smaller baseline. Moreover, all spectral features recorded at higher harmonics become narrower (compared to 2f WMS spectra), so cross-interference between neighboring transitions can be reduced [29,33,34]. This is demonstrated in Figure 4b. After filling the system with carbon dioxide, the spectral feature of hydrogen sulfide remains almost unchanged. Only small changes in the wings are observed, which do not affect the baseline or the signal amplitude at the transition center. This reduced interference from CO2 transitions requires using 4f/1f detection with a non-optimal wavelength modulation amplitude, but it simplifies concentration retrieval and makes it more accurate. At the same time, 2f/1f detection can still be used for CO2 detection. Minimum Detection Limits For minimum detection limit (MDL) characterization, each cell with the gas under study was placed in line with the multi-pass cell, the laser wavelength was adjusted to the transition center, and the WMS amplitude was recorded for subsequent Allan deviation analysis [36]. This configuration (gas cell in line with the multi-pass cell) guarantees that any drifts that result from using the multi-pass cell (e.g., from path length fluctuations, fringes, etc.) will be visible in the Allan deviations. At the same time, the multi-pass cell was sealed tightly in order to minimize the impact of any changes in ambient concentrations of the measured gases. For methane, ammonia, and carbon dioxide the 2f/1f WMS amplitude was recorded, whereas the 4f/1f WMS amplitude was measured for the hydrogen sulfide sample. Figure 5 shows the Allan deviation for each gas (the detection limit was calculated assuming a path length of 30 m).
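The Allan deviation analysis used for the MDL characterization can be sketched as follows (a generic non-overlapping implementation, not the authors' code): for white noise the deviation falls as 1/√τ as the integration time τ grows, until drifts take over, and the minimum of the curve marks the optimum integration time.

```python
import numpy as np

def allan_deviation(y, rate_hz, taus):
    """Non-overlapping Allan deviation of a time series y sampled at rate_hz.

    Returns (tau, sigma) pairs for the requested averaging times.
    """
    out = []
    for tau in taus:
        m = int(round(tau * rate_hz))  # samples per averaging bin
        if m < 1:
            continue
        n_bins = len(y) // m
        if n_bins < 2:
            continue
        bins = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(bins) ** 2)  # Allan variance
        out.append((tau, np.sqrt(avar)))
    return out

# White-noise sanity check: sigma(tau) should scale as 1/sqrt(tau).
rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, 200_000)  # e.g. 1 Hz concentration readings
adev = dict(allan_deviation(series, 1.0, [1, 10, 100]))
```

Applied to the recorded WMS amplitudes, the flat region of such a curve at long τ (as reported for NH3 and H2S) indicates the absence of significant drift.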
In all cases, the minimum is reached for an integration time of ~5 s (this is obtained without active wavelength stabilization). For ammonia and hydrogen sulfide measurements, the Allan deviation stays flat even for integration times longer than 100 s. In the case of methane and carbon dioxide, some drifts are observed, which are most likely due to changes in ambient CH4 and CO2 concentrations inside the multi-pass cell. The obtained MDLs are: 26 ppbv for CH4 (840 ppbv × m × Hz^−1/2), 53 ppbv for NH3 (1.8 ppmv × m × Hz^−1/2), 5.5 ppmv for CO2 (180 ppmv × m × Hz^−1/2), and 2 ppmv for H2S (82.5 ppmv × m × Hz^−1/2). These MDLs correspond to fractional absorptions of 2.9 × 10^−5 (methane), 1.7 × 10^−5 (ammonia), 0.6 × 10^−5 (carbon dioxide), and 9.9 × 10^−5 (hydrogen sulfide). During the experiments, we found that the long-term stability of the system is limited by three factors: stability of the laser wavelength, optical fringes, and fluctuations of the baseline. These issues can be addressed by applying wavelength scanning, with the full WMS spectrum being recorded and analyzed. In the scanned mode, 100 points were collected during each scan at 100 Hz (10 ms per point), and several consecutive scans could be acquired and averaged before spectral analysis. As a result, acquisition times on the order of 10 to 30 s must be used in order to obtain the detection limits estimated earlier in the line-locked mode. Signal post-processing included baseline and peak fitting (using a linear function and a second-order polynomial) for both the measurement and calibration paths. Figure 6 shows sample spectra recorded using calibrated gas mixtures from cylinders: a single 2f/1f WMS spectrum of 2 ppmv of methane (acquired within 1 s) is demonstrated in Figure 6a; the retrieved methane concentration as the multi-pass cell was filled with different gas mixtures is shown in Figure 6b; Figure 6c shows the 4f/1f WMS signal when 5 ppmv of H2S was passed through the multi-pass cell.
A detection limit for H2S at the single-ppmv level can be obtained after averaging 30 scans (acquisition time of 30 s), which is consistent with the Allan deviation analysis (performed in the line-locked mode). Figure 6. (a) Spectrum of methane at 1651 nm recorded using a calibrated gas mixture from a cylinder (2 ppmv). Acquisition time was 1 s; (b) retrieved concentration as nitrogen, gas from a cylinder (2 ppmv of CH4), and lab air were passed through the cell (magenta: 1 Hz data; black: 5 s moving average); (c) 4f/1f WMS spectra at 1574.5 nm of lab air (pink) and a calibrated gas mixture from a cylinder (5 ppmv of hydrogen sulfide; black). Acquisition time was 30 s. Linearity The linearity of the technique was experimentally analyzed by measuring the signal for different samples of methane and hydrogen sulfide. Verification of linearity for these two molecules is particularly important: for methane, because expected concentrations will result in the highest peak absorption among all four gases (up to a few percent); for hydrogen sulfide, because of the large difference between the concentrations that are expected to be measured (up to ~20 ppmv) and the concentration in the reference cell (pure H2S at 50 mm corresponds to 1667 ppmv at 30 m). For experimental verification of the linearity, the multi-pass cell in the setup was replaced with an appropriate glass cell. First, three samples with 5% of methane balanced with nitrogen at 740 Torr, enclosed in cells with optical path lengths of 25, 50, and 75 mm, were used. For an optical path length of 30 m these concentrations correspond to 42, 83, and 125 ppmv of methane. Figure 7a plots the obtained 2f/1f WMS signals together with simulated spectra based on the HITRAN database. A similar experiment was subsequently performed using three glass cells with pure H2S at 740 Torr, with optical path lengths of 25, 50, and 100 mm (corresponding to 833, 1667, and 3333 ppmv at 30 m). Measured 4f/1f WMS spectra are shown in Figure 7b.
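The cell-to-cell equivalences used above are a simple path-length rescaling; a quick check (values taken from the text) reproduces the quoted equivalent concentrations for the 30 m multi-pass path:

```python
def equivalent_ppmv(cell_fraction, cell_length_m, path_m=30.0):
    """Concentration giving the same absorbance over path_m as a cell
    containing volume fraction cell_fraction over cell_length_m."""
    return cell_fraction * 1e6 * cell_length_m / path_m

# 5% CH4 in cells of 25/50/75 mm -> ~42/83/125 ppmv over 30 m
ch4 = [round(equivalent_ppmv(0.05, L)) for L in (0.025, 0.050, 0.075)]
# pure H2S in cells of 25/50/100 mm -> ~833/1667/3333 ppmv over 30 m
h2s = [round(equivalent_ppmv(1.00, L)) for L in (0.025, 0.050, 0.100)]
```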
Linear fitting for both gases is shown in Figure 7c. R² values above 0.99 are obtained, which confirms the linearity of the sensor response. System Performance For its initial demonstration, the system was placed in a 19" rack cabinet (12U) and taken to a pig farm located in the Wielkopolska Region (in western Poland), where pigs of various ages are kept in different barn rooms. A set of Teflon tubes was installed to deliver air samples from these rooms to the system, which was installed in a boiler room (shown in Figure 8). Acquisition times for each spectral range were set to 5 s (CH4 detection), 15 s (CO2 and H2S detection), and 10 s (NH3 detection). Stability Test Three main sources of uncertainty during field measurements can be expected, all related to changes in ambient conditions (primarily temperature fluctuations). The first is due to temperature-induced changes in absorption line parameters, such as line strength and shape. This contribution can be estimated using a numerical model and data from the HITRAN database: depending on the target transition and the wavelength modulation amplitude, the impact of temperature on the measured amplitudes is between −0.12%/K and −0.15%/K (e.g., for methane, hydrogen sulfide, and ammonia this corresponds to a change of the retrieved concentration of only tens of ppbv per Kelvin at most). The second source of uncertainty is related to changes in the path length. The multi-pass cell was constructed using stainless steel rods. Assuming a thermal expansion coefficient of stainless steel of 17.3 × 10^−6 1/K, the impact of temperature changes on the measured signal is approximately 0.13%/K. Therefore, these two contributions (both being proportional to the measured amplitude) should nearly cancel each other. The third source of uncertainty is due to optical fringes and baseline drifts. To estimate this contribution, the system was installed and run with no air pump, with the multi-pass cell filled with laboratory air and sealed.
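Combining the two stated temperature coefficients (numbers from the text) makes the claimed near-cancellation explicit: the net residual sensitivity spans roughly +0.01 to −0.02 %/K.

```python
# Temperature coefficients of the measured WMS amplitude, from the text:
line_param_coeff = (-0.12, -0.15)  # %/K, spectroscopic (range over transitions)
path_length_coeff = 0.13           # %/K, thermal expansion of the cell

# Net residual sensitivity after the two effects partially cancel.
net = [round(c + path_length_coeff, 2) for c in line_param_coeff]
```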
The sensor was running for over one month, from 1 November to 4 December 2017, retrieving molecular concentrations and the temperature inside the instrument. The recorded data sets are shown in Figure 9. Some correlation between ambient temperature and retrieved concentrations can be noticed. The presence of temperature-induced changes in all four measurements (where the H2S and NH3 data sets have mean values ≈ 0) suggests that their origin is primarily in drifts of the baseline and optical fringes. Observed fluctuations correspond to fractional absorptions of ~10^−4 (CO2 measurement), ~2 × 10^−4 (H2S and NH3), and ~5.5 × 10^−4 (CH4). Small differences are primarily due to different wavelength modulation amplitudes and different harmonics being analyzed. In Figure 9, ambient temperature is also plotted to show the correlation between recorded signals and ambient conditions (the temperature inside the instrument and multi-pass cell was also recorded; it was ~1.5 degrees higher than ambient temperature during the whole measurement). Gas Emission Measurements When a small pump was installed, air was passed through the multi-pass cell (no pressure controllers were used) and samples from different rooms could be analyzed. Sample spectra are shown in Figure 10a-c, compared with signals recorded in the laboratory (before the system was taken to the actual measurement site). Samples from two mechanically ventilated rooms were analyzed: 18 sows with suckling piglets (0-4 weeks old) were kept in room #1, while about 240 older piglets (5-10 weeks old) were kept in room #2. The concentration of methane in room #1 was found to be 38 ppmv, while ~15 ppmv was measured in room #2. Similarly, the concentration of ammonia in room #1 was higher than in room #2 (~11.5 ppmv and ~8 ppmv, respectively). This is primarily because sows emit much larger amounts of methane and ammonia (as compared to piglets). Concentrations of carbon dioxide in rooms #1 and #2 were ~580 ppmv and ~1460 ppmv, respectively.
This difference may reflect the different ventilation rates in the two rooms, caused by the need to maintain higher temperatures in the room for the older piglets compared to the room for sows with piglets. No signal from hydrogen sulfide was recorded. The obtained results were consistent with NH3, CO2, and H2S measurements taken with a manually operated MultiRAE Lite device. Subsequently, the sensor was left for several days, analyzing air samples from one of the rooms in the farm with mid-size animals. Measured concentrations of methane, ammonia, and carbon dioxide are shown in Figure 10d-f. These are only preliminary results, but some periodicity can be observed in the retrieved data sets. It is visible especially for the ammonia measurement: the concentration appears to be higher during the day and lower during the night. These changes seem not to be correlated with ambient temperature fluctuations and may be related, e.g., to the activity of the pigs [37]. Conclusions In this paper, a transportable, laser-based system for sequential detection of hydrogen sulfide, methane, carbon dioxide, and ammonia was presented. Using near-infrared sources, wavelength modulation spectroscopy, and a Herriott-type multi-pass cell, sub-ppmv detection limits for CH4 and NH3 were obtained. Measuring concentrations of H2S was found to be the most challenging due to interference from CO2 lines located in the proximity of the hydrogen sulfide transition, especially since the instrument was designed to operate at atmospheric pressure and the measured absorption features were relatively broad. This issue was addressed by application of the WMS technique with signal detection at the fourth harmonic of the modulation frequency, which enabled H2S sensing at the single parts-per-million level with strongly reduced interference from adjacent CO2 transitions. This performance was obtained in a simple and robust configuration which does not require pressure controllers or external gas cylinders.
Setup characterization in laboratory conditions was presented. Initial tests in a pig farm facility enabled analyzing the system performance during field operation. Early results suggest that the sensor system is stable enough to perform long-term measurements of CH4 and NH3 concentrations with accuracy below 1 ppmv. With this performance, we can use it to analyze how the emission of methane and ammonia depends on various factors such as weather conditions, ventilation rate, animal activity, or the farm maintenance schedule. Further work will be focused on improving the accuracy of hydrogen sulfide detection. Potential modifications may involve implementing more advanced signal processing algorithms that include, e.g., temperature compensation of the measured signals, or improving system alignment and optical coatings for the reduction of optical fringes. We are also working on a methodology for verifying the results obtained from the measuring system at its assembly site. During long-term deployment, we will also analyze the impact of the gas sampling lines on system performance and accuracy.
The LIM-Homeodomain Protein Islet Dictates Motor Neuron Electrical Properties by Regulating K+ Channel Expression Summary Neuron electrical properties are critical to function and generally subtype specific, as are patterns of axonal and dendritic projections. Specification of motoneuron morphology and axon pathfinding has been studied extensively, implicating the combinatorial action of LIM-homeodomain transcription factors. However, the specification of electrical properties is not understood. Here, we address the key issue of whether the same transcription factors that specify morphology also determine subtype-specific electrical properties. We show that Drosophila motoneuron subtypes express different K+ currents and that these are regulated by the conserved LIM-homeodomain transcription factor Islet. Specifically, Islet is sufficient to repress a Shaker-mediated A-type K+ current, most likely due to a direct transcriptional effect. A reduction in Shaker increases the frequency of action potential firing. Our results demonstrate the deterministic role of Islet in the excitability patterns characteristic of motoneuron subtypes. Neurons grown in culture often express their normal complement of both voltage- and ligand-gated ion channels (O'Dowd et al., 1988; Ribera and Spitzer, 1990; Spitzer, 1994). This suggests a significant degree of cell autonomy in the determination of electrical properties that presumably facilitates initial network formation. Once part of a circuit, however, such neurons become exposed to synaptic activity. As a result, predetermined electrical properties are modified by a variety of well-described mechanisms (Davis and Bezprozvanny, 2001; Spitzer et al., 2002). Such tuning ensures consistency of network output in response to potentially destabilizing activity resulting from Hebbian-based synaptic plasticity (Turrigiano and Nelson, 2004).
The formation of functional neural circuits would seem, therefore, critically reliant on both intrinsic predetermination and subsequent extrinsic activity-dependent mechanisms to shape neuronal electrical properties. Key to understanding how intrinsic and extrinsic mechanisms are integrated will be the identification of factors that regulate predetermination. The fruit fly, Drosophila, has been central to studies that have identified intrinsic determinants of neuronal morphology. Within the Drosophila central nervous system (CNS) the transcription factor Islet is expressed in the RP1, RP3, RP4, and RP5 motoneurons (termed ventral motoneurons, vMNs) that project to ventral muscles (Broihier and Skeath, 2002; Landgraf and Thor, 2006; Thor et al., 1999). By contrast, motoneurons projecting to dorsal muscles (e.g., aCC and RP2, termed dorsal motoneurons, dMNs) express a different homeodomain transcription factor, Even-skipped (Eve) (Broihier and Skeath, 2002; Landgraf et al., 1999). Misregulation of these transcription factors is sufficient to alter subtype-specific axonal projections (Broihier and Skeath, 2002; Landgraf et al., 1999). Thus, Eve and Islet constitute what might be considered a bimodal switch, with each being deterministic for either dorsal- or ventral-projecting motor axon trajectories, respectively. Here, we report that the presence of Islet is also deterministic for expression of the Shaker (Sh)-mediated outward A-type K+ current. The vMN and dMN subgroups differ in the magnitude of outward K+ currents recorded by whole-cell patch clamp. We show that this difference is maintained by endogenous expression of islet in the vMNs. We also show that Islet is sufficient to repress expression of a Sh-mediated K+ current. By contrast, dMNs, which do not express islet, exhibit a robust Sh-mediated K+ current.
The deterministic function of Islet is evidenced first by the fact that loss of function results in a transformation of total outward K+ current in the vMNs to mirror that present in dMNs. Second, ectopic expression of islet in dMNs or body wall muscle is sufficient to repress expression of the endogenous Sh-mediated K+ current. Thus, in addition to being sufficient to predetermine aspects of neuronal connectivity, Islet is sufficient to specify electrical properties in those neurons in which it is expressed.

Dorsal and Ventral Motoneuron Subgroups Show Specific K+ Current Profiles

A crucial test of the hypothesis that Islet regulates ion channel gene expression is the demonstration that membrane electrical properties of Islet-expressing vMNs differ from those of Eve-expressing dMNs. To determine if this is true, we recorded total K+ currents from both motoneuron subtypes in first-instar larvae (1-4 hr after hatching; see Figure 1A). Motoneurons were initially identified on the basis of their medial dorsal position in the ventral nerve cord; following electrophysiological patch clamp recordings, precise subtype was confirmed on the basis of axonal projection, visualized by dye filling. We did not observe differences within either subgroup; therefore, recordings have been pooled for the vMN or dMN subtypes. Figure 1B shows averaged total outward K+ currents recorded from both the dMNs and vMNs. The outward K+ current is composed of a fast-activating and inactivating component (IKfast, indicated by the arrow in Figure 1B) and a slower-activating, noninactivating component (IKslow, indicated by the box in Figure 1B). Analyzing current densities for IKfast and IKslow (Figure 1C) shows that dMNs have significantly larger outward K+ currents compared to vMNs (Figure 1C; at a holding potential of +40 mV, IKfast: 60.1 ± 4.3 versus 42.6 ± 3.1 pA/pF; IKslow: 49.0 ± 4.4 versus 33.3 ± 2.4 pA/pF, dMNs versus vMNs, respectively, p ≤ 0.01). Thus, vMNs and dMNs differ in their electrical properties. The CNS of a first-instar larva is a mature functional neural network in which synaptic transmission is active. Hence, the differences we observe in K+ currents could be established entirely by network activity. Alternatively, subtype specificity might be determined prior to neuronal network formation and, as such, could be considered an intrinsic property of the specific motoneurons.

Figure 1 legend: (A) Schematic representation of dorsal and ventral motoneurons (dMNs and vMNs, respectively) within the ventral nerve cord of young first-instar larvae and their muscle targets in one half segment. dMNs (magenta) comprise the two Eve-positive motoneurons aCC (*) and RP2 (**), which project to dorsal muscles (magenta). vMNs (green) comprise the Islet-positive motoneurons RP1, RP3, RP4, and RP5 (not individually indicated), which project to ventral muscles (green). AC, anterior commissure; PC, posterior commissure. (B) Average total K+ currents recorded from dMNs and vMNs. Currents shown are composite averages made by combining currents obtained from at least eight individual neurons, normalized for cell capacitance. The voltage-clamp protocol (bottom trace) was −90 mV for 100 ms prior to voltage jumps of Δ10 mV increments/50 ms duration. Two parameters are measured from the current traces: IKfast (arrow) was measured at the beginning of the response and IKslow (gray box) was measured at the end of the voltage step. Scale bars 20 pA/pF and 10 ms for currents and 50 mV/10 ms for the voltage-clamp protocol. (C) Current-voltage (IV) plots show significant differences in magnitude of IKfast and IKslow in the two motoneuron populations. Both IKfast and IKslow are larger in dMNs (black lines) compared to vMNs (gray lines). Values shown are means ± SEM (n ≥ 8).
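Throughout the paper, IKfast is read out near the start of each voltage step and IKslow near its end, with each current normalized to cell capacitance (pA/pF). The two measurements can be sketched in a few lines of analysis code; this is an illustrative reconstruction, not the authors' pipeline, and the sampling rate, window widths, and capacitance value are assumptions.

```python
import numpy as np

def current_densities(current_pa, capacitance_pf, sample_rate_hz=20_000,
                      fast_window_ms=5.0, slow_window_ms=5.0):
    """Estimate I_Kfast (transient peak near step onset) and I_Kslow
    (steady state near step end) from one voltage-step response, in pA/pF."""
    trace = np.asarray(current_pa) / capacitance_pf  # normalize to cell size
    n_fast = int(fast_window_ms * sample_rate_hz / 1000)
    n_slow = int(slow_window_ms * sample_rate_hz / 1000)
    i_kfast = trace[:n_fast].max()    # inactivating peak at step onset
    i_kslow = trace[-n_slow:].mean()  # non-inactivating plateau at step end
    return i_kfast, i_kslow

# Synthetic 50 ms step response: a decaying transient riding on a
# sustained current, for a hypothetical 6 pF cell.
t = np.arange(0, 0.050, 1 / 20_000)
trace = 200 * np.exp(-t / 0.004) + 300  # pA
fast, slow = current_densities(trace, capacitance_pf=6.0)
```

Normalizing to capacitance before averaging is what allows composite traces from neurons of different sizes to be pooled, as done for Figure 1B.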
To determine this experimentally, we repeated our analysis following complete block of synaptic transmission (i.e., absence of network activity), achieved by expressing tetanus toxin light chain (TeTxLC) throughout the entire CNS. Using the GAL4 1407 driver, TeTxLC was expressed pan-neuronally starting at the early neuroblast stage. Since TeTxLC-expressing embryos do not hatch, we recorded K+ currents just prior to expected hatching (at late embryonic stage 17). At this stage motoneurons have become fully functional components of the motor network (Baines and Bate, 1998). We found that IK was not significantly perturbed, in either dMNs or vMNs, by blockade of synaptic release. Moreover, the difference in K+ currents between the dMNs and vMNs was maintained for both IKfast and IKslow. That differences in IK levels between dMNs and vMNs are established and maintained in the absence of synaptic release strongly suggests that they arise from intrinsic developmental mechanisms independent of evoked synaptic transmission.

Islet Determines the Electrical Properties of Ventrally Projecting Motoneurons

Drosophila larval motoneurons that project axons to ventral muscles express Islet, while those that innervate dorsal muscles express Eve. Loss of islet is sufficient to direct ventral-projecting axons dorsally, and loss of eve to direct dorsal-projecting axons ventrally (Landgraf et al., 1999; Thor and Thomas, 1997). These two distinct motoneuron subtypes therefore provide a tractable system to test whether the difference we observe in K+ conductance is also intrinsically determined. In order to test whether Islet is able to influence K+ currents, we recorded from vMNs in an islet null (−/−) mutant. This analysis indicated that Islet is sufficient to regulate K+ conductance in these motoneurons.
Figure 2 legend: (A) Composite averaged K+ currents (representing the average from at least eight individual neurons) and respective IV plots for WT and islet −/− mutant vMNs. Voltage-clamp protocol as in Figure 1. Current density of IKfast of vMNs (obtained from a prepulse of −90 mV) is significantly larger in islet −/− compared to WT at all test potentials above −40 mV. (B) Neurons were subjected to a prepulse of −20 mV to inactivate IKfast. The remaining IKslow of vMNs is indistinguishable between islet −/− and WT. (C) Measurements of IKfast (obtained from a prepulse of −90 mV) in dMNs in islet −/− and WT are not different. Values shown are means ± SEM (n ≥ 8). (D) Averaged responses of WT dMNs, WT vMNs, and islet −/− vMNs evoked by the highest test potential (−90 mV prepulse and +40 mV test) are superimposed. The absence of islet from vMNs increases K+ current magnitude to WT dMN levels. Scale bars are 20 pA/pF and 10 ms for voltage-clamp responses and 100 mV/10 ms for voltage-clamp protocol.

Thus, peak current density for IKfast was significantly increased in homozygous islet −/− mutants compared to WT (WT 42.6 ± 3.1 pA/pF; Figure 2A). To test for autonomy of effect, we also recorded from dMNs in the islet −/− mutant. Dorsal MNs do not express islet, and IKfast currents of WT and mutant larvae were statistically indistinguishable (Figure 2C; WT 60.1 ± 4.3 pA/pF versus islet −/− 68.2 ± 5.9 pA/pF, p = 0.28). We conclude that loss of islet affects IKfast only in vMNs, in which it is normally expressed, but not in dMNs, which lack expression of this transcription factor. We further noted that loss of islet from the vMNs resulted in a transformation of IKfast to recapitulate the magnitude of this same current recorded in dMNs. When averaged responses of islet −/− vMNs and WT dMNs were superimposed, only small kinetic differences remained (Figure 2D).
Such an observation is entirely consistent with, and indeed predictive of, the magnitude of IKfast being regulated by endogenous expression of Islet.

Islet Represses a DTx-Sensitive Current

Fast K+ currents in Drosophila neurons are encoded by one or more of at least three different genes: two voltage-gated, fast-activating and inactivating channels (A-currents) termed Shal and Shaker (Sh), and a Ca2+-activated BK channel termed slowpoke (Baker and Salkoff, 1990; Elkins et al., 1986; Singh and Wu, 1990). To determine which K+ current is increased in vMNs following loss of islet, we used specific blockers of these individual currents. We first explored whether IKslowpoke is repressed by Islet. To do so we added Cd2+ to the bath solution. Cd2+ blocks Ca2+ entry and, as a consequence, prevents activation of Ca2+-activated K+ channels. Addition of Cd2+ did not diminish the increase in IKfast observed in the vMNs in islet −/− mutants (data not shown). We conclude from this that Islet does not influence IKslowpoke. By contrast, the presence of α-Dendrotoxin (DTx), a potent and specific blocker of Sh-mediated K+ currents (Ryglewski and Duch, 2009; Wu et al., 1989), completely abolishes the increase of IKfast seen in the vMNs in islet −/− (Figure 3A; control 58.5 ± 6.9 versus DTx 43.1 ± 2.7 pA/pF, p ≤ 0.05). Indeed, IKfast values obtained in the presence of DTx closely mirrored those of untreated WT vMNs (43.1 ± 2.7 versus 42.6 ± 3.1, p = 0.9). That DTx negates the islet −/− phenotype is consistent with Islet inhibiting a Sh-mediated K+ current in WT vMNs. To verify this prediction, we recorded IKfast in a Sh;islet double mutant. Under these conditions, peak current density of IKfast in the double mutant was likewise indistinguishable from WT vMNs (Figure 3A; p = 0.24).

Sh Is Differentially Expressed in dMNs versus vMNs

Our data are consistent with Islet acting to repress expression of Sh in vMNs.
Moreover, removal of this repression results in expression of Sh-mediated K+ channels that confer "dorsal-like" electrical properties. This model posits, therefore, that dMNs normally express a Sh-mediated K+ current. To test this, we compared IKfast in dMNs between WT and either the presence of DTx or a Sh null mutant (Sh[14]). We performed these recordings in the presence of external Cd2+ to block Ca2+-activated fast K+ currents. Both acute block of Sh activity (DTx) and loss of Sh expression significantly reduced IKfast (Figure 3B; WT 40.5 ± 1.9 versus WT + DTx 29.3 ± 2.7 versus Sh[14] 26.1 ± 1.7 pA/pF; p ≤ 0.01 and p ≤ 0.01, respectively). Moreover, the IKfast recorded in dMNs under both conditions (WT + DTx 29.3 ± 2.7 and Sh[14] 26.1 ± 1.7 pA/pF) was indistinguishable from that of vMNs in WT (26.1 ± 2.3 pA/pF; DTx p = 0.38, Sh p = 1), in full agreement with our model. To further support the notion that the difference in IKfast between dMNs and vMNs is due, at least in part, to expression of Sh in dMNs, we recorded IKfast in vMNs under the same conditions. As expected, neither the presence of DTx nor loss of Sh had any marked effect on IKfast in vMNs (p = 0.51 and 0.23, respectively; Figure 3B). To further verify the differential expression of Sh in dMNs versus vMNs, we assessed transcription of Sh in these two cell types by in situ hybridization. We designed probes that specifically recognize the Sh pre-mRNA. These intron probes label the unspliced Sh transcript at the site of transcription within the nucleus, but not the fully mature message in the cytoplasm. We detected Sh transcription in dMNs, labeled with Eve antibody (Figure 3C, black arrowheads), but not in vMNs, labeled by expression of GFP (Lim3 > nlsGFP; Figure 3D, white arrowheads). Taken together, both electrophysiology and in situ hybridization are consistent with dMNs expressing Sh while the vMNs do not.
Islet Is Both Necessary and Sufficient to Repress Sh-Mediated K+ Currents

Next, we tested whether Islet is sufficient to repress Sh-mediated K+ currents in cells where Sh, but not islet, is normally expressed. We used two different preparations for these experiments. First, we ectopically expressed islet in dMNs. Driving a UAS-islet transgene with GAL4 RN2-0 significantly reduced IKfast (34.4 ± 2.6 versus 41.2 ± 1.9 pA/pF, experimental versus controls, which consisted of WT and the heterozygous GAL4 driver line, p ≤ 0.05; Figure 4A). These recordings were carried out in the presence of external Cd2+ to eliminate Ca2+-dependent K+ currents. The observed reduction in IKfast in dMNs could, however, be due to a reduction in either Sh- or Shal-mediated K+ currents. To distinguish between these two possibilities, we tested for DTx sensitivity, which is observed in WT dMNs and is an indicator for the presence of Sh currents. DTx sensitivity was lost when islet was ectopically expressed in dMNs (Figure 4A). In addition, when we expressed ectopic islet in dMNs in a Sh −/− background, there was no further reduction in IKfast compared to ectopic islet expression in a WT background (Figure 4A). We conclude from this that ectopic expression of islet in dMNs is sufficient to downregulate Sh-mediated IKfast. The second preparation we used takes advantage of the fact that IKfast in body wall muscle is due solely to Sh and Slowpoke (the latter of which can be easily blocked [Singh and Wu, 1990]). We recorded from muscle 6 in abdominal segments 3 and 4 in first-instar larvae. To remove the IKslowpoke component and hence isolate the Sh-mediated IKfast, recordings were done in low-calcium (0.1 mM) external saline. Figure 4B depicts the averaged responses from voltage-clamp recordings in control muscle (heterozygous GAL4 24B driver, upper trace) and muscle expressing islet (lower trace).
Peak current densities of IKfast (entirely due to Sh-mediated K+ current) and the slow noninactivating currents recorded at +40 mV are shown in Figure 4C. Ectopic expression of islet in muscle is sufficient to produce a significant reduction in IKfast (control 26.6 ± 2.4 versus 24B > islet 15.8 ± 1.0 pA/pF, p ≤ 0.01), while no effect was seen on the slow current. Thus, expression of islet in dMNs is sufficient to reduce a DTx-sensitive component of IKfast. Similar expression in muscle clearly demonstrates that Islet is sufficient to downregulate a Sh-mediated fast K+ current.

Islet Binds Directly to the Sh Locus

Our electrophysiology indicates that Islet is able to repress a Sh-mediated K+ current. To identify putative targets of Islet we used DamID, a well-accepted technique for demonstrating direct binding to chromatin or DNA in vivo (Choksi et al., 2006; Filion et al., 2010; Southall and Brand, 2009; van Steensel and Henikoff, 2000). Our analysis identifies 1,769 genes (exhibiting one or more peaks of Islet binding within 5 kb of the transcriptional unit) as direct targets of Islet (FDR < 0.1%). Consistent with our model of Islet regulating a Sh-mediated K+ current, we find three significant binding sites within introns of the Sh locus (arrows 1 to 3 in Figure 5). Intragenic binding of transcription factors is common in both vertebrates (Robertson et al., 2007) and invertebrates (Southall and Brand, 2009). A fourth significant peak is found upstream of Sh (arrow 4 in Figure 5). Binding of Islet at this site could regulate the expression of either Sh and/or CG15373, an adjacent, divergently transcribed gene. By contrast, Shal and slowpoke, which also encode fast neuronal K+ currents, were not identified as putative targets (Figure 5). Thus, these data show that Islet binds to the Sh locus and is likely to regulate transcription of the Sh gene directly. To confirm that Islet binds Sh and regulates its transcription, we used qRT-PCR to quantify levels of Sh transcripts. We compared Sh transcript levels in larval CNS between control, islet −/−, and pan-neuronal islet expression (1407 > islet). In comparison to control, loss of islet resulted in a 27% increase in Sh transcript (1.27 ± 0.01, n = 2, p < 0.05). By contrast, pan-neuronal expression of transgenic islet reduced the Sh transcript to 45% of control levels (0.45 ± 0.06, n = 2, p < 0.05). We also measured Sh transcript level in body wall muscle following ectopic expression of islet (24B > islet). Similar to the CNS, Sh transcripts were reduced, to 31% of control (0.31 ± 0.01, n = 2, p < 0.05). Taken together with the results obtained by DamID, this strongly suggests that Islet binds to, and represses transcription of, the Sh gene.

Figure 3 legend: The increase in IKfast observed in vMNs in the islet −/− mutant is effectively blocked by the presence of 200 nM DTx in the bath saline, indicating that the increased K+ current is Sh mediated. This conclusion is further supported by the observation that the effect of removing islet on IKfast requires the presence of Sh; no increase is seen in a Sh;islet double mutant. (B) The presence of DTx significantly reduces IKfast in dMNs, indicating that this neuron subgroup expresses an endogenous Sh-mediated K+ current. This is confirmed by a similar reduction in IKfast observed in a Sh null mutant (Sh −/−). By contrast, IKfast is unaffected in WT vMNs either by exposure to DTx or by loss of Sh. All recordings are done in the presence of Cd2+. Values shown are means ± SEM (n ≥ 8). (C and D) In situ hybridization with Sh intron probes. Intron probes detect pre-mRNA at the site of transcription within the nucleus, but not fully processed mRNA in the cytoplasm. The black arrowheads (C) indicate staining for Sh transcript in dMN nuclei, labeled with anti-Eve. White arrowheads (D) indicate vMN nuclei, labeled with nuclear GFP, which do not express Sh. Early stage 17 embryos were analyzed. Scale bar is 5 μm.
Sh Regulates Action Potential Frequency

Voltage-dependent K+ currents, such as those mediated by Sh, contribute to setting membrane excitability (and thus the ability to fire action potentials) (Goldberg et al., 2008; Peng and Wu, 2007). These currents are therefore critical for network function and the generation of appropriate behaviors (Smart et al., 1998). It has been shown that modulation of Sh-mediated current, using dominant-negative transgenes, can bring about significant changes in excitability (Mosca et al., 2005). We were interested in whether and how excitability differs between motoneurons that express a Sh-mediated K+ current (dMNs) and those that do not (vMNs). We recorded excitability in current clamp. Typical responses are shown in Figure 6A. We found that dMNs fired significantly fewer action potentials than vMNs at most current steps (Figure 6B; 10 pA: 18.2 ± 0.9 versus 22.1 ± 1.4, p = 0.04; 8 pA: 15.3 ± 1.0 versus 19.1 ± 1.1, p = 0.02; 6 pA: 11.5 ± 1.0 versus 15.2 ± 1.2, p = 0.04; 4 pA: 6.5 ± 1.2 versus 9.9 ± 1.4, p = 0.09; 2 pA: 0.8 ± 0.3 versus 3.8 ± 1.0, p = 0.03; 1 pA: 0.1 ± 0.1 versus 0.9 ± 0.4, p = 0.13; dorsal versus ventral, respectively). The above results suggest that the Sh-mediated K+ current (expressed only in dMNs) reduces action potential (AP) firing when present. To validate this conclusion, we acutely reduced Sh current in dMNs by adding DTx to the bath and recorded AP firing. AP firing increased from 18.2 ± 0.9 APs (WT) to 25.7 ± 1.9 APs (DTx, p < 0.05; Figure 6C). A similar result, although not significant, was obtained when APs were recorded from dMNs in a Sh mutant (18.2 ± 0.9 to 21.2 ± 1.5 APs, p = 0.07; Figure 6C). Indeed, in both treatments, firing rates between dMNs and vMNs were indistinguishable (Sh −/− 21.2 ± 1.5 versus 22.7 ± 1.1; DTx 25.7 ± 1.9 versus 23.0 ± 1.8 APs, dMNs versus vMNs, respectively, p > 0.05; Figure 6C).
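The excitability measure used here is simply the number of action potentials evoked per 500 ms current step. Counting evoked spikes reduces to counting upward threshold crossings in the current-clamp trace; a minimal sketch, with an assumed 0 mV detection threshold and a synthetic trace:

```python
import numpy as np

def count_spikes(vm_mv, threshold_mv=0.0):
    """Count action potentials in a current-clamp voltage trace as
    upward threshold crossings (one count per crossing, not per sample)."""
    vm = np.asarray(vm_mv)
    above = vm > threshold_mv
    # A spike is counted where the trace goes from below to above threshold.
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    return len(crossings)

# Toy trace: resting at -60 mV with three brief depolarizations above 0 mV.
trace = np.full(1000, -60.0)
for start in (100, 400, 700):
    trace[start:start + 5] = 20.0
```

Counting crossings rather than suprathreshold samples avoids counting one broad spike multiple times.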
As predicted, vMN excitability was not affected by either DTx or loss of Sh (22.1 ± 1.4 versus 23.0 ± 1.8 versus 22.7 ± 1.1 APs; WT, DTx, and Sh −/−, respectively; p > 0.05; Figure 6C). Perhaps unexpectedly, the increase in IKfast in vMNs, which results from the loss of islet, did not influence AP firing. Loss of islet also had no effect on APs fired in dMNs, which is predictable because dMNs do not express this protein (Figure 6C). Finally, determination of AP firing in a Sh;islet double loss-of-function mutant revealed no additional effects: AP firing is increased in dMNs and unaffected in vMNs (data not shown). Why loss of islet, which increases IKfast in vMNs, does not influence AP firing in these neurons is unknown, but may be indicative of additional homeostatic mechanisms.

Figure 4 legend: … GAL4 1407). Simultaneous application of DTx did not further reduce IKfast. Ectopic islet expression also had no effect on IKfast in a Sh −/− mutant. Taken together, these data indicate that islet decreases a Sh-mediated K+ current in dMNs. Recordings were carried out in the presence of external Cd2+ to block Ca2+-activated K+ currents. (B) Expression of islet in body wall muscle results in a significant reduction in IKfast. In low external Ca2+ (0.1 mM), IKfast in these muscles is mediated solely by Sh (see text for details). Traces show averaged composite K+ currents, obtained from at least eight individual muscle 6 recordings, in control and islet-overexpression backgrounds. The prominent IKfast (arrow) of control muscles is significantly reduced when islet is ectopically expressed. Scale bar 10 pA/pF and 10 ms for current and 50 mV/10 ms for voltage protocol. (C) Averaged peak current densities of IKfast and IKslow are shown. Ectopic expression of islet significantly reduces IKfast but has no effect on IKslow. Values shown are means ± SEM (n ≥ 8).
DISCUSSION

Diversity in neuronal electrical properties is dictated by the type, location, and number of ion channels expressed in individual neurons. While activity-dependent mechanisms that act to adjust these properties in mature neurons have been studied in detail (Davis and Bezprozvanny, 2001; Spitzer et al., 2002), the mechanisms that specify electrical properties in embryonic neurons, prior to network formation, are not understood. These mechanisms are, however, likely to be part of cell-intrinsic programs of specification. The demonstration of differential expression of transcription factors between neuronal cell types underpins the proposal of a combinatorial code sufficient to determine key aspects of neuron specification, including axon guidance and neurotransmitter phenotype (Polleux et al., 2007; Shirasaki and Pfaff, 2002; Thor and Thomas, 2002). However, whether these same factors are sufficient to set cell-specific electrical characteristics remains unknown. A wealth of studies on motoneuron specification, from flies to mammals, has shown that early developmental decisions, such as subclass identity, are dictated, at least in part, by a code of transcription factors (Dasen et al., 2005, 2008; De Marco Garcia and Jessell, 2008; Landgraf et al., 1999; Landgraf and Thor, 2006; Thor and Thomas, 1997). With its relatively simple CNS and powerful molecular genetics, Drosophila has been central to these studies. Embryonic Drosophila motoneurons express a stereotypic mix of identified transcription factors that are evolutionarily conserved with mammals (Thaler et al., 1999, 2002; Thor and Thomas, 1997). Motoneurons that predominantly innervate ventral muscles express islet, Lim3, and dHb9. Motoneurons that project dorsally express eve (Landgraf et al., 1999; Landgraf and Thor, 2006; Thor and Thomas, 1997).
A first indication that ion channel genes may also be targets of these transcription factors was provided by our demonstration that overexpression of eve was sufficient to alter the outward voltage-gated K+ current through transcriptional repression of slowpoke (encoding a BK Ca2+-activated K+ channel) in Drosophila motoneurons (Pym et al., 2006). However, while a common developmental regulation of neuronal morphology and function, at least in motoneurons, might be inferred from this study, only Eve-positive cells were investigated. This leaves open the question of whether Eve, or for that matter any of the other transcription factors, is deterministic for specific membrane currents. The principle of a dual role for transcription factors such as Eve and Islet is significant because it predicts that neuron morphology and electrical signaling are, at least in part, determined by common developmental mechanisms. Studies of vertebrate homologs of these transcription factors, widespread in the mammalian CNS, provide additional support for such a scenario. For example, Islet-1 and Islet-2 are known to regulate neuron identity, axonal guidance, and choice of neurotransmitter in the vertebrate CNS (Hutchinson and Eisen, 2006; Segawa et al., 2001; Thaler et al., 2004). Associated microarray analysis of murine mutant tissue identifies ion channels as putative targets of Islet-1, including the Shal-related K+ channel Kcnd2 and the Na+ channel Nav1.8; regulation of expression has, however, yet to be demonstrated (Sun et al., 2008). It is conceivable that the recently reported differences in outward K+ currents between two embryonic zebrafish motoneurons, dorsal MiP and ventral CaP (Moreno and Ribera, 2009), may be regulated by the differential expression of Islet1/2 in these neurons (Appel et al., 1995). We provide substantial evidence that differential expression of islet in vMNs versus dMNs is critical for determining subtype-specific differences in Sh-mediated K+ currents.
Because these Sh-mediated K+ currents regulate action potential frequency, they will contribute to network function. Comparable to our findings in Drosophila, in both the mouse cochlea and cortex, neurons that fire only a small number of action potentials in response to a given current pulse (termed rapidly adapting) express a DTx-sensitive Kv1 (Sh-like) K+ current. By contrast, neurons that fire many action potentials (slowly adapting) do not. The firing pattern of rapidly adapting neurons can be transformed into that of slowly adapting neurons by application of the Sh-specific blocker DTx (Miller et al., 2008). Our own data are consistent with such a role for Sh because we show that dMNs, which express Sh, fire fewer action potentials than vMNs. Moreover, the number of action potentials fired by dMNs is increased by genetic or pharmacological block of the Sh-mediated K+ current. We envisage, therefore, that regulation of action potential firing, through Islet-mediated transcriptional control of a Sh-like K+ current, might be well conserved. While the presence of early factors able to regulate ion-channel gene expression is predictive of predetermination of electrical signaling properties in embryonic neurons, a challenge remains to understand how individual neurons decode this information. In the Drosophila ventral nerve cord, we find that the presence or absence of a Sh-mediated K+ current is determined by whether islet is expressed or not. Thus, Islet seems to act as a binary switch: when present it prevents expression of Sh, and vice versa. However, it seems unlikely that all combinatorial factors act in this way. For example, the activity of Eve seems to be related to its relative level of expression, since endogenous Eve only partially represses transcription of slowpoke (a Ca2+-dependent K+ channel) in the dorsal motoneuron aCC (Pym et al., 2006).
It remains to be determined whether efficacy of regulatory activity is specific to individual transcription factors or to target genes. We show here that the LIM-homeodomain transcription factor Islet forms part of an intrinsic "decision-making" process that is critical to specifying subtype-specific electrical properties in developing motoneurons. It might be argued that input from pre- and postsynaptic partners is involved in setting early electrophysiological differences between neurons. Indeed, such inputs play a pivotal role during axonogenesis and synapse development. Blocking all synaptic transmission, however, showed that neural network activity is not required to establish early electrophysiological differences between motoneuron subgroups. Motoneurons also receive instructive cues from their postsynaptic muscle targets during NMJ development (Fitzsimonds and Poo, 1998). In this regard it is significant that the difference in IKfast we observe between dMNs and vMNs is abolished in a myosin heavy chain mutant (Mhc1) that fails to produce contractile muscles. Indeed, IKfast is decreased in dMNs to the level seen in WT vMNs (V.W. and R.A.B., unpublished observations). This is, perhaps, indicative that the dMNs require an instructive signal from their muscle targets in order to follow a different path of electrical development. Whether this path suppresses islet expression in dMNs remains to be determined. Significantly, vMNs were not affected in the Mhc1 mutant, suggesting that repression of Sh-dependent IK by Islet is independent of muscle-derived input. Why do motoneurons differ in their electrical properties, and what is the functional implication? dMNs and vMNs receive differential synaptic drive (Baines et al., 2002) and innervate distinct muscle targets, dorsal obliques and ventral longitudinals, respectively (Landgraf et al., 1997).
During larval crawling, ventral muscles are recruited prior to dorsal muscles (Fox et al., 2006), probably to facilitate coordinated movement. Interestingly, synaptic strength, based on EJP amplitude, is largest between vMNs and their target muscles. While the precise underlying mechanism is unknown, pharmacology suggests that terminals of dMNs express a larger Sh-dependent K+ current compared to vMNs. This current disproportionately reduces presynaptic neurotransmitter release and hence regulates synaptic strength (Lee et al., 2008). Whether this alone can account for the delay of dorsal muscle contraction is not known. Differences in electrical properties, specifically delay to first spike, have also been observed between Drosophila motoneurons (Choi et al., 2004). While the precise reasons for these differences remain speculative, they are consistent with differential contributions to the muscle activity that underlies locomotion in Drosophila larvae.

Figure 6 legend: Membrane Excitability Differs between dMNs and vMNs. (A) Example of whole-cell current-clamp recordings obtained from a dMN (aCC). Responses to 500 ms depolarizing current pulses of 2, 6, and 10 pA are shown. An example of a current step is shown underneath the responses. Scale bar is 10 mV/200 ms. (B) Number of action potentials fired per 500 ms current step by dMNs and vMNs, plotted against the amplitude of injected current. dMNs fire significantly fewer action potentials than vMNs at most current steps. (C) Number of action potentials evoked by a 10 pA current injection. WT dMNs fire significantly fewer action potentials than vMNs. Removal of the Sh-dependent K+ current by DTx or Sh −/− increases action potential firing in dMNs to levels seen in vMNs. Action potential firing in vMNs remains unaffected. Removal of islet (islet −/−) also has no effect on firing in either dMNs or vMNs. Values shown are means ± SEM (n ≥ 8).
We can recapitulate the repressive effect of ectopic islet expression on the Sh-mediated K+ current in body wall muscle. This is important for two reasons. First, it provides unequivocal support for the hypothesis that Islet is deterministic for expression of Sh in excitable cells, regardless of whether those cells are neurons or muscle. Second, body wall muscles are isopotential and do not therefore suffer from issues of space clamp (Broadie and Bate, 1993). Analysis of ionic currents in neurons can be complicated by such factors, which become more serious for currents located further away from the cell body in the dendritic arbor. Hence, electrophysiologically tractable muscles may offer the possibility to derive a more complete understanding of the differential activity of codes of transcription factors on the regulation of ion channel development within the developing nervous system.

Fly Stocks

For larval collections, flies were transferred into laying pots and allowed to lay eggs onto grape juice agar plates. Laying pots were kept at 25°C and 18°C for motoneuron and muscle experiments, respectively. The following fly strains were used: Canton-S as wild-type (WT); the islet mutant tup[isl-1];tup[Isl-1]/CyO act::GFP. The islet mutants and Sh;islet double mutants are embryonic lethal; however, a few homozygous escapers are viable up until the first-instar larval stage. Transgenes were expressed in a tissue-specific manner using the GAL4/UAS system (Brand and Perrimon, 1993). The driver line GAL4 1407 (homozygous viable on the second chromosome) was used to express UAS transgenes carrying the active (UAS-TNT-G) or inactive (UAS-TNT-VF) form of tetanus toxin light chain (TeTxLC) in all CNS neurons (Sweeney et al., 1995). GAL4 Lim3 was used to express GFP in vMNs for in situ hybridization.
GAL4 RN2-0 (homozygous viable on the second chromosome) or GAL4 RRa (homozygous viable on the 3rd chromosome) were used to express islet (UAS-islet x2) in dMNs. GAL4 24B (homozygous viable on the second chromosome) was used to express islet (UAS-islet x2) in body wall muscle. The dMN driver GAL4 RRa, as well as the UAS-islet construct, were crossed into the Sh[14] mutant background.

Embryo and Larval Dissection

Newly hatched larvae or late stage 17 embryos were dissected and central neurons were accessed for electrophysiology as described by Baines and Bate (1998). For muscle recordings, newly hatched larvae were dissected as for CNS electrophysiology, but the CNS was removed. The muscles were treated with 1 mg/ml collagenase (Sigma) for 0.5 to 1 min prior to whole-cell patch recording. Larvae were visualized using a water immersion lens (total magnification, 600×) combined with DIC optics (BX51W1 microscope; Olympus Optical, Tokyo, Japan).

Electrophysiology

Recordings were performed at room temperature (20°C to 22°C). Whole-cell recordings (current and voltage clamp) were achieved using borosilicate glass electrodes (GC100TF-10; Harvard Apparatus, Edenbridge, UK), fire-polished to resistances of 15-20 MΩ for neurons and 5-10 MΩ for muscles. Neurons were identified based on their position within the ventral nerve cord. Neuron type was confirmed after recording by filling with 0.1% Alexa Fluor 488 hydrazide sodium salt (Invitrogen), which was included in the internal patch saline. Recordings were made using a Multiclamp 700B amplifier controlled by pClamp 10.2 (Molecular Devices, Sunnyvale, CA). Only neurons with an input resistance > 1 GΩ were accepted for analysis. Traces were sampled at 20 kHz and filtered at 2 kHz.
The voltage-clamp protocols used to record total K+ currents were as follows: for neurons, from the resting potential of −60 mV, neurons were hyperpolarized to −90 mV for 100 ms, and the voltage was then stepped from −80 mV to +40 mV in increments of Δ10 mV for 50 ms. To isolate slow K+ currents, a prepulse of −20 mV for 100 ms was used (Baines and Bate, 1998). For muscles, a maintained holding potential of −60 mV was used, a −90 mV prepulse was applied for 200 ms, and voltage jumps in Δ20 mV increments were applied from −40 to +40 mV. Leak currents were subtracted off-line for central neuron recordings. For muscle recordings, however, on-line leak subtraction (P/4) was used. Recordings were made in at least four animals, and at least eight neurons/muscles were recorded in total for each experiment. Individual recordings were averaged, following normalization relative to cell capacitance, to produce one composite average representative of that group of recordings. Cell capacitance was determined by integrating the area under the capacity transients evoked by stepping from −60 to −90 mV (checked before and after recordings). Membrane excitability (i.e., action potential firing) was determined using injection of depolarizing current (1, 2, 4, 6, 8, 10 pA/500 ms) from a maintained membrane potential (Vm) of −60 mV. Vm was maintained at −60 mV by injection of a small amount of hyperpolarizing current. Real-Time RT-PCR Muscle tissue and CNS were collected from newly hatched larvae or late stage 17 embryos. Between 100 and 180 animals were dissected for each genotype. Following RNA extraction (QIAGEN RNeasy Micro kit), cDNA was synthesized using the Fermentas RevertAid H Minus First Strand cDNA Synthesis kit, according to the manufacturer's protocol. RNA concentration was matched for control and experimental samples prior to cDNA synthesis. qPCR was performed on the Roche LightCycler 1.5 (Roche, Lewes, UK) using the Roche LightCycler FastStart DNA Master SYBR Green reaction mix.
The thermal profile used was 10 min at 95°C followed by 40 cycles of 10 s at 95°C, 4 s at 59°C, and finally 30 s at 72°C. Results were recorded using the ΔΔCt method and are expressed as fold difference compared to control (isl−/− compared to isl+/−, 1407>islet to 1407>GFP, 24B>islet to 24B>GFP). Ct values used were the means of duplicate replicates. Experiments were repeated twice. PCR primers (forward and reverse primers in 5′ to 3′ orientation) were as follows: rp49 CTAAGCTGTCGCACAAATGG and GGAACTTCTTGAATCCGGTG; Sh CAACACTTTGAACCCATTCC and CAAAGTACCGTAATCTCCGA. DamID Analysis A pUASTattB-NDam vector was created (to allow integration of the Dam transgene into a specific site) by cloning the Dam-Myc sequence from pNDamMyc (van Steensel and Henikoff, 2000) into the multiple cloning site of pUASTattB (Bischof et al., 2007) using EcoRI and BglII sites. The full-length coding sequence of islet was PCR amplified from an embryonic cDNA library and cloned into pUASTattB-NDam using BglII and NotI sites. Transgenic lines were generated by injecting pUASTattB-NDam (control line) and pUASTattB-NDam-islet constructs (at 100 ng/ml) into FX-22A (with phiC31 expressed in the germline and a docking site at 22A) blastoderm embryos (Bischof et al., 2007). Preparation of Dam-methylated DNA from stage 17 embryos was performed as previously described (Pym et al., 2006). The Dam-only and Dam-islet samples were labeled and hybridized together on a whole-genome 2.1-million-feature tiling array, with 50- to 75-mer oligonucleotides spaced at approximately 55 bp intervals (NimbleGen Systems). Arrays were scanned and intensities extracted (NimbleGen Systems). Three biological replicates (with one dye-swap) were performed. Log2 ratios of each spot were median normalized. A peak finding algorithm with false discovery rate (FDR) analysis was developed to identify significant binding sites (PERL script available on request).
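The ΔΔCt fold-difference calculation used in the qPCR analysis above can be sketched in a few lines; the Ct values below are invented purely for illustration, not measured data.

```python
# Sketch of the delta-delta Ct calculation described above.
# Ct values are hypothetical, chosen only to illustrate the arithmetic.
def fold_difference(ct_target_exp, ct_ref_exp, ct_target_ctrl, ct_ref_ctrl):
    """Fold difference = 2^-[(Ct_target - Ct_ref)_exp - (Ct_target - Ct_ref)_ctrl]."""
    delta_exp = ct_target_exp - ct_ref_exp      # target normalized to reference (e.g. Sh to rp49), experimental
    delta_ctrl = ct_target_ctrl - ct_ref_ctrl   # same normalization, control
    return 2 ** -(delta_exp - delta_ctrl)

# e.g. a hypothetical 24B>islet sample versus a 24B>GFP control
print(fold_difference(26.0, 18.0, 24.0, 18.0))  # 2^-(8 - 6) = 0.25
```

A fold difference below 1 would correspond to reduced target expression relative to control, as reported here for Sh upon ectopic islet expression.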
All peaks spanning 8 or more consecutive probes (>∼900 bp) over a 2-fold ratio change were assigned an FDR value. To assign an FDR value, the frequencies of a range of small peak heights (from 0.1 to 1.25 log2 increase) were calculated within a randomized data set (for each chromosome arm) using 20 iterations for each peak size. This was repeated for a range of peak widths (6 to 15 consecutive probes). All of these data were used to model the exponential decay of the FDR with respect to increasing peak height and peak width, thereby enabling extrapolation of FDR values for higher and broader peaks. This analysis was performed independently for each replicate data set. Each peak was assigned the highest FDR value from the 3 replicates. Genes were defined as targets where a binding event (with an FDR < 0.1%) occurred within 5 kb of the transcriptional unit (depending on the proximity of adjacent genes). Statistics Statistical significance was calculated using a nonpaired t test with confidence intervals of p ≤ 0.05 (*) and p ≤ 0.01 (**). All quantitative data shown are means ± SEM.
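A minimal version of the run-length criterion used in the peak finding above (runs of at least 8 consecutive probes above a 2-fold, i.e. 1 log2-unit, ratio change) might look like the following sketch; the FDR modeling itself is not reproduced here, and the ratio values are synthetic.

```python
# Minimal sketch of the run-length criterion for candidate peak calling:
# report runs of >= 8 consecutive probes whose log2(Dam-islet / Dam-only)
# ratio exceeds 1 (a 2-fold change). FDR assignment is omitted.
def candidate_peaks(log2_ratios, min_probes=8, threshold=1.0):
    peaks, start = [], None
    for i, r in enumerate(log2_ratios):
        if r > threshold and start is None:
            start = i                               # run begins
        elif r <= threshold and start is not None:
            if i - start >= min_probes:
                peaks.append((start, i))            # half-open probe-index interval
            start = None
    if start is not None and len(log2_ratios) - start >= min_probes:
        peaks.append((start, len(log2_ratios)))     # run reaching the array end
    return peaks

ratios = [0.2] * 10 + [1.5] * 9 + [0.1] * 5 + [1.4] * 3 + [0.0] * 5
print(candidate_peaks(ratios))                      # [(10, 19)] -- the 3-probe run is too short
```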
Measurements of differential and double-differential Drell-Yan cross sections in proton-proton collisions at 8 TeV Measurements of the differential and double-differential Drell-Yan cross sections in the dielectron and dimuon channels are presented. They are based on proton-proton collision data at sqrt(s) = 8 TeV recorded with the CMS detector at the LHC and corresponding to an integrated luminosity of 19.7 inverse femtobarns. The measured inclusive cross section in the Z peak region (60-120 GeV), obtained from the combination of the dielectron and dimuon channels, is 1138 +/- 8 (exp) +/- 25 (theo) +/- 30 (lumi) pb, where the statistical uncertainty is negligible. The differential cross section d(sigma)/d(m) in the dilepton mass range 15 to 2000 GeV is measured and corrected to the full phase space. The double-differential cross section d2(sigma)/d(m)d(abs(y)) is also measured over the mass range 20 to 1500 GeV and absolute dilepton rapidity from 0 to 2.4. In addition, the ratios of the normalized differential cross sections measured at sqrt(s) = 7 and 8 TeV are presented. These measurements are compared to the predictions of perturbative QCD at next-to-leading (NLO) and next-to-next-to-leading (NNLO) orders using various sets of parton distribution functions (PDFs). The results agree with the NNLO theoretical predictions computed with FEWZ 3.1 using the CT10 NNLO and NNPDF2.1 NNLO PDFs. The measured double-differential cross section and ratio of normalized differential cross sections are sufficiently precise to constrain the proton PDFs.
Introduction At hadron colliders, Drell-Yan (DY) lepton pairs are produced via γ*/Z exchange in the s channel. Theoretical calculations of the differential cross section dσ/dm and the double-differential cross section d²σ/dm d|y|, where m is the dilepton invariant mass and |y| is the absolute value of the dilepton rapidity, are well established in the standard model (SM) up to the next-to-next-to-leading order (NNLO) in perturbative quantum chromodynamics (QCD) [1][2][3][4]. The rapidity distributions of the gauge bosons γ*/Z are sensitive to the parton content of the proton. The rapidity and the invariant mass of the dilepton system produced in proton-proton collisions are related at leading order to the longitudinal momentum fractions x+ and x− carried by the two interacting partons according to the formula x± = (m/√s) e^(±y). Hence, the rapidity and mass distributions are sensitive to the parton distribution functions (PDFs) of the interacting partons. The differential cross sections are measured with respect to |y| since the rapidity distribution is symmetric about zero. The high center-of-mass energy at the CERN LHC permits the study of DY production in regions of the Bjorken scaling variable and evolution scale Q² = x+ x− s that were not accessible in previous experiments [5][6][7][8][9][10]. The present analysis covers the ranges 0.0003 < x± < 1.0 and 600 < Q² < 750 000 GeV² in the double-differential cross section measurement. The differential cross section dσ/dm is measured in an even wider range, 300 < Q² < 3 000 000 GeV².
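The leading-order kinematic relation quoted above can be made concrete with a short calculation; the (m, y) point below is illustrative, chosen near the low-mass, high-rapidity corner of the measurement.

```python
import math

# Leading-order relation between dilepton (m, y) and parton momentum
# fractions: x± = (m/√s) e^(±y), with Q² = x+ x− s = m² at leading order.
def bjorken_x(m, y, sqrt_s):
    return (m / sqrt_s) * math.exp(y), (m / sqrt_s) * math.exp(-y)

sqrt_s = 8000.0                                   # GeV, as in this measurement
x_plus, x_minus = bjorken_x(20.0, 2.4, sqrt_s)    # illustrative low-mass, high-|y| point
q2 = x_plus * x_minus * sqrt_s**2
print(f"x+ = {x_plus:.4f}, x- = {x_minus:.6f}, Q^2 = {q2:.0f} GeV^2")
# Q² reduces to m² = 400 GeV², independent of y, as the formula implies.
```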
The increase in the center-of-mass energy at the LHC from 7 to 8 TeV provides an opportunity to measure the ratios and double-differential ratios of cross sections of various hard processes, including the DY process. Measurements of the DY process in proton-proton collisions depend on various theoretical parameters such as the QCD running coupling constant, PDFs, and renormalization and factorization scales. The theoretical systematic uncertainties in the cross section measurements for a given process at different center-of-mass energies are substantial but correlated, so that the ratios of differential cross sections normalized to the Z boson production cross section (double ratios) can be measured very precisely [11]. This paper presents measurements of the DY differential cross section dσ/dm in the mass range 15 < m < 2000 GeV, extending the measurement reported in [12], and of the double-differential cross section d²σ/dm d|y| in the mass range 20 < m < 1500 GeV and absolute dilepton rapidity from 0 to 2.4. In addition, the double ratios measured at 7 and 8 TeV are presented. The measurements are based on a data sample of proton-proton collisions with a center-of-mass energy √s = 8 TeV, collected with the CMS detector and corresponding to an integrated luminosity of 19.7 fb−1. Integrated luminosities of 4.8 fb−1 (dielectron) and 4.5 fb−1 (dimuon) at √s = 7 TeV are used for the double ratio measurements.
Imperfect knowledge of PDFs [13,14] is the dominant source of theoretical systematic uncertainties in the DY cross section predictions at low mass. The PDF uncertainty is larger than the achievable experimental precision, making the double-differential cross section and the double ratio measurements in bins of rapidity an effective input for PDF constraints. The inclusion of DY cross section and double ratio data in PDF fits is expected to provide substantial constraints for the strange quark and the light sea quark PDFs in the small Bjorken x region (0.001 < x < 0.1). The DY differential cross section has been measured by the CDF, D0, ATLAS, and CMS experiments [12,[15][16][17][18][19]. The current knowledge of the PDFs and the importance of the LHC measurements are reviewed in [20,21]. Measuring the DY differential cross section dσ/dm is important for various LHC physics analyses. DY events constitute a major source of background for processes such as top quark pair production, diboson production, and Higgs measurements with lepton final states, as well as for searches for new physics beyond the SM, such as the production of high-mass dilepton resonances. The differential cross sections are first measured separately for both lepton flavors and found to agree. The combined cross section measurement is then compared to the NNLO QCD predictions computed with FEWZ 3.1 [22] using the CT10 NNLO PDF. The d²σ/dm d|y| measurement is compared to the NNLO theoretical predictions computed with FEWZ 3.1 using the CT10 and NNPDF2.1 NNLO PDFs [23,24]. CMS detector The central feature of the CMS detector is a superconducting solenoid of 6 m internal diameter and 13 m length, providing a magnetic field of 3.8 T.
Within the field volume are a silicon tracker, a crystal electromagnetic calorimeter (ECAL), and a brass/scintillator hadron calorimeter (HCAL). The tracker is composed of a pixel detector and a silicon strip tracker, which are used to measure charged-particle trajectories and cover the full azimuthal angle and the pseudorapidity interval |η| < 2.5. Muons are detected with four planes of gas-ionization detectors. These muon detectors are installed outside the solenoid and sandwiched between steel layers, which serve both as hadron absorbers and as a return yoke for the magnetic field flux. They are made using three technologies: drift tubes, cathode strip chambers, and resistive-plate chambers. Muons are measured in the pseudorapidity window |η| < 2.4. Electrons are detected using the energy deposition in the ECAL, which consists of nearly 76 000 lead tungstate crystals distributed in the barrel region (|η| < 1.479) and two endcap (1.479 < |η| < 3) regions. The CMS experiment uses a two-level trigger system. The level-1 trigger, composed of custom processing hardware, selects events of interest at an output rate of 100 kHz using information from the calorimeters and muon detectors [25]. The high-level trigger (HLT) is software based and further decreases the event collection rate to a few hundred hertz by using the full event information, including that from the tracker [26]. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in [27].
Simulated samples Several simulated samples are used for determining efficiencies, acceptances, and backgrounds from processes that result in two leptons, and for the determination of systematic uncertainties. The DY signal samples with e+e− and µ+µ− final states are generated with the next-to-leading-order (NLO) generator POWHEG [28][29][30][31] interfaced with the PYTHIA v6.4.24 [32] parton shower generator. PYTHIA is used to model QED final-state radiation (FSR). The POWHEG simulated sample is based on NLO calculations, and a correction is applied to take into account higher-order QCD and electroweak (EW) effects. The correction factors, binned in dilepton rapidity y and transverse momentum pT, are determined in each invariant-mass bin as the ratio of the double-differential cross sections calculated at NNLO QCD and NLO EW with FEWZ 3.1 and at NLO with POWHEG, as described in [12]. The corresponding higher-order effects depend on the dilepton kinematic variables. Higher-order EW corrections are small in comparison to FSR corrections. They increase for invariant masses in the TeV region [33], but are insignificant compared to the experimental precision for the whole mass range under study. The NNLO QCD effects are most important in the low-mass region. The effect of the correction factors on the acceptance ranges up to 50% in the low-mass region (below 40 GeV), but is almost negligible in the high-mass region (above 200 GeV). The main SM background processes are simulated with POWHEG (DY → τ+τ−, single top quark) and with MADGRAPH [34] (tt, diboson events WW/WZ/ZZ). Both POWHEG and MADGRAPH are interfaced with the TAUOLA package [35], which handles decays of τ leptons. The normalization of the tt sample is set to the NNLO cross section of 245.8 pb [36]. Multijet QCD background events are produced with PYTHIA.
All generated events are processed through a detailed simulation of the CMS detector based on GEANT4 [37] and are reconstructed using the same algorithms used for the data. The proton structure is defined using the CT10 [23] PDFs. The simulation includes the effects of multiple interactions per bunch crossing [38] (pileup), with the simulated distribution of the number of interactions per LHC beam crossing corrected to match that observed in data. Object reconstruction and event selection The events used in the analysis are selected with a dielectron or a dimuon trigger. Dielectron events are triggered by the presence of two electron candidates that pass loose requirements on the electron quality and isolation, with a minimum transverse momentum pT of 17 GeV for one of the electrons and 8 GeV for the other. The dimuon trigger requires one muon with pT > 17 GeV and a second muon with pT > 8 GeV. The offline reconstruction of the electrons begins with the clustering of energy depositions in the ECAL. The energy clusters are then matched to the electron tracks. Electrons are identified by means of shower shape variables. Each electron is required to be consistent with originating from the primary vertex in the event. Energetic photons produced in a pp collision may interact with the detector material and convert into an electron-positron pair. The electrons or positrons originating from such photon conversions are suppressed by requiring that there be no more than one missing tracker hit between the primary vertex and the first hit on the reconstructed track matched to the electron; candidates are also rejected if they form a pair with a nearby track that is consistent with a conversion. Additional details on electron reconstruction and identification can be found in [39][40][41][42]. No charge requirements are imposed on the electron pairs, to avoid inefficiency due to nonnegligible charge misidentification.
At the offline muon reconstruction stage, the data from the muon detectors are matched and fitted to data from the silicon tracker to form muon candidates. The muon candidates are required to pass the standard CMS muon identification and track quality criteria [43]. To suppress the background contributions due to muons originating from heavy-quark decays and nonprompt muons from hadron decays, both muons are required to be isolated from other particles. Requirements on the impact parameter and the opening angle between the two muons are further imposed to reject cosmic ray muons. In order to reject muons from light-meson decays, a common vertex for the two muons is required. More details on muon reconstruction and identification can be found in [12] and [43]. Events are selected for further analysis if they contain oppositely charged muon pairs meeting the above requirements. The candidate with the highest χ² probability from a kinematic fit to the dimuon vertex is selected. Electron and muon isolation criteria are based on measuring the sum of energy depositions associated with photons and charged and neutral hadrons reconstructed and identified by means of the CMS particle-flow algorithm [44][45][46][47]. Isolation sums are evaluated in a circular region of the (η, φ) plane around the lepton candidate with ∆R < 0.3 (where ∆R = √((∆η)² + (∆φ)²)), and are corrected for the contribution from pileup. Each lepton is required to be within the geometrical acceptance of |η| < 2.4. The leading lepton in the event is required to have pT > 20 GeV and the trailing lepton pT > 10 GeV, which corresponds to the plateau of the trigger efficiency. Both lepton candidates in each event used in the offline analysis are required to match HLT trigger objects.
After event selection, the analysis follows a series of steps. First, backgrounds are estimated. Next, the observed background-subtracted yield is unfolded to correct for the effects of the migration of events among bins of mass and rapidity due to the detector resolution. The acceptance and efficiency corrections are then applied. Finally, the migration of events due to FSR is corrected. Systematic uncertainties associated with each of the analysis steps are evaluated. Background estimation The major background contributions in the dielectron channel arise from τ+τ− and tt processes in the low-mass region and from QCD events with multiple jets at high invariant mass. The background composition is somewhat different in the dimuon final state. Multijet events and DY production of τ+τ− pairs are the dominant sources of background in the dimuon channel at low invariant mass and in the region just below the Z peak. Diboson and tt production followed by leptonic decays are the dominant sources of background at high invariant mass. Lepton pair production in γγ-initiated processes, where both initial-state protons radiate a photon, is significant at high mass. The contribution from this channel is treated as an irreducible background and is estimated with FEWZ 3.1 [48]. To correct for this background, a bin-by-bin ratio of the DY cross sections with and without the photon-induced contribution is calculated. This bin-by-bin correction is applied after the mass resolution unfolding step, whereas the other backgrounds, for which simulated events are available, are subtracted before unfolding. This background correction is negligible at low mass and in the Z peak region, rising to approximately 20% in the highest mass bin.
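The sequence of corrections listed above amounts, per analysis bin, to arithmetic along the following lines. This is a hedged sketch only: every number is invented, and the unfolding step, which in the analysis is a full matrix procedure, is represented by a single placeholder factor.

```python
# Per-bin sketch of the correction chain described above: background
# subtraction, resolution unfolding (placeholder factor), acceptance x
# efficiency, FSR correction, then division by luminosity and bin width.
# All input values are hypothetical.
def differential_cross_section(n_obs, n_bkg, unfold_factor, acceptance,
                               efficiency, fsr_factor, lumi_pb, bin_width):
    n = (n_obs - n_bkg) * unfold_factor        # background-subtracted, "unfolded" yield
    n /= acceptance * efficiency               # extrapolate to full phase space
    n *= fsr_factor                            # correct to the pre-FSR level
    return n / (lumi_pb * bin_width)           # d(sigma)/dm in pb/GeV

# a hypothetical Z-peak-region bin
print(differential_cross_section(n_obs=1.2e6, n_bkg=2.4e4, unfold_factor=0.95,
                                 acceptance=0.50, efficiency=0.75,
                                 fsr_factor=1.1, lumi_pb=19.7e3, bin_width=5.0))
```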
In the dielectron channel, the QCD multijet background is estimated with a data sample collected with the trigger requirement of a single electromagnetic cluster in the event. Non-QCD events, such as DY, are removed from the data sample using event selection and simulation-based event subtraction, leaving a sample of QCD events with characteristics similar to those in the analysis data sample. This sample is used to estimate the probability for a jet to pass the requirements of the electromagnetic trigger and to be falsely reconstructed as an electron. This probability is then applied to a sample of events with one electron and one jet to estimate the background contribution from an electron and a jet passing the electron selection requirements. Since the contribution from two jets passing the electron selections is counted twice in this method, the contribution from a sample with two jets, multiplied by the square of the probability for a jet to pass the electron selection requirements, is then subtracted. The QCD multijet background in the dimuon channel is evaluated by selecting a control data sample before the isolation and charge sign requirements are applied, following the method described in [49]. The largest background consists of final states with particles decaying by the EW interaction, producing electron or muon pairs, for example tt, τ+τ−, and WW. Notably, these final states contain electron-muon pairs at twice the rate of electron or muon pairs. These electron-muon pairs can be cleanly identified from a data sample of eµ events and properly scaled (taking into account the detector acceptance and efficiency) in order to calculate the background contribution to the dielectron and dimuon channels.
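The double-counting correction in the dielectron multijet estimate described above can be written down in one line; the sample sizes and misidentification probability below are invented for illustration.

```python
# Sketch of the multijet estimate described above: apply the jet->electron
# misidentification probability p to the (electron + jet) sample, then
# subtract the dijet contribution, which is otherwise counted twice.
# All numbers are hypothetical.
def qcd_multijet_estimate(n_e_plus_jet, n_dijet, p_fake):
    return n_e_plus_jet * p_fake - n_dijet * p_fake**2

estimate = qcd_multijet_estimate(n_e_plus_jet=50000, n_dijet=2.0e6, p_fake=0.01)
print(f"{estimate:.1f}")   # 500 - 200 = 300.0 estimated background events
```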
Background yields estimated from an eµ data sample are used to reduce the systematic uncertainty due to the limited theoretical knowledge of the cross sections of the SM processes. The residual differences between background contributions estimated from data and simulation are taken into account in the systematic uncertainty assignment, as detailed in Section 9. The dilepton yields for data and simulated events in bins of invariant mass are reported in Fig. 1. The photon-induced background is absorbed in the signal distribution, so no correction is applied at this stage. As shown in the figure, the background contribution at low mass is no larger than 5% in both decay channels. In the high-mass region, background contamination is more significant, reaching approximately 50% (30%) in the dielectron (dimuon) distribution. Resolution and scale corrections Imperfect lepton energy and momentum measurements can affect the reconstructed dilepton invariant-mass distributions. Correcting for these effects is important in precise measurements of differential cross sections. A momentum scale correction to remove a bias in the reconstructed muon momenta, due to the differences in the tracker misalignment between data and simulation and the residual magnetic field mismodeling, is applied following the standard CMS procedure described in [50]. The electron energy deposits as measured in the ECAL are subject to a set of corrections involving information both from the ECAL and the tracker, following the standard CMS procedures for the 8 TeV data set [51]. A final electron energy scale correction, which goes beyond the standard set of corrections, is derived from an analysis of the Z → e+e− peak according to the procedure described in [49], and consists of a simple factor of 1.001 applied to the electron energies to account for the different selection used in this analysis.
The detector resolution effects that cause a migration of events among the analysis bins are corrected through an iterative unfolding procedure [52]. This procedure maps the measured lepton distribution onto the true one, while taking into account the migration of events in and out of the mass and rapidity range of this measurement. The effects of the unfolding correction in the differential cross section measurement are approximately 50% (20%) for the dielectron (dimuon) channel in the Z peak region, where the invariant-mass spectrum changes steeply. Less significant effects, of the order of 15% (5%) in the dielectron (dimuon) channel, are observed in other regions. The effect on the double-differential cross section measurement is less significant, as both the invariant mass and rapidity bins are significantly wider than the respective detector resolutions. The effect for dielectrons reaches 15% in the 45-60 GeV mass region and 5% at high mass; it is, however, less than 1% for dimuons over the entire invariant mass-rapidity range of study. Acceptance and efficiency The acceptance A is defined as the fraction of simulated signal events with both leptons passing the nominal pT and η requirements of the analysis. It is determined using the NNLO reweighted POWHEG simulated sample, after the simulation of FSR. The efficiency ε is the fraction of events in the DY simulated sample that are inside the acceptance and pass the full selection. The following relations hold: A = N_A/N_gen, ε = N/N_A, and hence A ε = N/N_gen, where N_gen is the number of generated signal events in a given invariant-mass bin, N_A is the number of events inside the geometrical and kinematic acceptances, and N is the number of events passing the event selection criteria. Figure 2 shows the acceptance, the efficiency, and their product as functions of the dilepton invariant mass. The DY acceptance is obtained from simulation. In the lowest mass bin it is only about 0.5%, rapidly increasing to 50% in the Z peak region and reaching over 90% at high mass.
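The iterative unfolding referenced above (a D'Agostini/Richardson-Lucy-style scheme) can be sketched in a few lines. This is a toy illustration under stated assumptions: the two-bin response matrix and yields below are invented, not the analysis response matrix, and regularization (stopping after a few iterations) is ignored.

```python
import numpy as np

# Toy sketch of iterative (D'Agostini-style) unfolding. R[j, i] is the
# probability that an event true in bin i is reconstructed in bin j; the
# matrix and yields are invented for illustration.
def iterative_unfold(response, measured, n_iter=200):
    n_true = response.shape[1]
    truth = np.full(n_true, measured.sum() / n_true)   # flat starting prior
    eff = response.sum(axis=0)                          # P(reconstructed at all | true in i)
    for _ in range(n_iter):
        folded = response @ truth                       # expected measured spectrum
        truth = truth * (response.T @ (measured / folded)) / eff
    return truth

R = np.array([[0.9, 0.2],
              [0.1, 0.8]])                              # toy migration matrix
measured = R @ np.array([100.0, 50.0])                  # fold a known truth
print(iterative_unfold(R, measured))                    # converges toward [100, 50]
```

In practice the iteration count acts as a regularization parameter; running it to convergence, as in this toy, amplifies statistical fluctuations on real data.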
The efficiency is factorized into the reconstruction, identification, and isolation efficiencies and the event trigger efficiency. The factorization procedure takes into account the asymmetric pT selections for the two legs of the dielectron trigger. The efficiency is obtained from simulation, rescaled with a correction factor that takes into account differences between data and simulation. The efficiency correction factor is determined in bins of lepton pT and η using Z → e+e− (µ+µ−) events in data and simulation with the tag-and-probe method [49] and is then applied as a weight to simulated events on a per-lepton basis. A typical dimuon event efficiency is 70-80% throughout the entire mass range. In the dielectron channel, the efficiency at low mass is only 20-40% because of tighter lepton identification requirements, and reaches 65% at high mass. The trigger efficiency for events within the geometrical acceptance is greater than 98% (93%) for the dielectron (dimuon) signal. The efficiency is significantly affected by the pileup in the event. The effect on the isolation efficiency is up to 5% (about 1%) in the dielectron (dimuon) channel. A dip in the event efficiency in the mass range 30-40 GeV, visible in Fig. 2, is caused by the combination of two factors. On the one hand, the lepton reconstruction and identification efficiencies decrease as the lepton pT decreases. On the other hand, below 30-40 GeV the kinematic acceptance requirements preferentially select DY events produced beyond the leading order, which result in higher-pT leptons with higher reconstruction and identification efficiencies. The effect is more pronounced for dielectrons than for dimuons because the electron reconstruction and identification efficiencies depend more strongly on pT.
For the dimuon channel, the efficiency correction factor is 0.95-1.10, rising up to 1.10 at high dimuon rapidity and falling to 0.95 at low mass. At low mass, the correction to the muon reconstruction and identification efficiency is dominant, falling to 0.94. In the dielectron channel, the efficiency correction factor is 0.96-1.05 in the Z peak region, and 0.90 at low mass. The correction factor rises to 1.05 at high dielectron rapidity. The correction to the electron identification and isolation efficiency is dominant in the dielectron channel, reaching 0.93 at low mass and 1.04 at high rapidity. Final-state QED radiation effects The effect of photon radiation from the final-state leptons (FSR effect) moves the measured invariant mass of the dilepton pair to lower values, significantly affecting the mass spectrum, particularly in the region below the Z peak. A correction for FSR is performed to facilitate the comparison to the theoretical predictions and to properly combine the measurements in the dielectron and dimuon channels. The FSR correction is estimated separately from the detector resolution correction by means of the same unfolding technique. An additional bin-by-bin correction is applied for the events in which the leptons generated before FSR modeling (pre-FSR) fail the acceptance requirements, while they pass after the FSR modeling (post-FSR), following the approach described in [12]. The correction for the events not included in the response matrix is significant at low mass, reaching a maximum of 20% in the lowest mass bin and decreasing to negligible levels in the Z peak region. The magnitude of the FSR correction below the Z peak is of the order of 40-60% (30-50%) for the dielectron (dimuon) channel. In other mass regions, the effect is only 10-15% in both channels. In the double-differential cross section measurement, the effect of FSR unfolding is not significant, typically a few percent, due to the larger mass bin size.
In order to compare the measurements corrected for FSR obtained in analyses with various event generators, the "dressed" lepton quantities can be considered. The dressed lepton four-momentum is defined as p^dressed = p^post-FSR lepton + Σ p^γ, where all the simulated photons originating from the lepton are summed within a cone of ∆R < 0.1 around it. The correction to the cross sections from the post-FSR to the dressed level reaches a factor of 1.8 (1.3) in the dielectron (dimuon) channel immediately below the Z peak; it is around 0.8 in the low-mass region in both decay channels, and is close to 1.0 at high mass. Systematic uncertainties Acceptance uncertainty. The dominant uncertainty sources pertaining to the acceptance are (1) the theoretical uncertainty from imperfect knowledge of the nonperturbative PDFs contributing to the hard scattering and (2) the modeling uncertainty. The latter comes from the procedure of applying weights to the NLO simulated sample in order to reproduce NNLO kinematics, and affects mostly the acceptance calculations at very low invariant mass. The PDF uncertainties for the differential and double-differential cross section measurements are calculated using the LHAGLUE interface to the PDF library LHAPDF 5.8.7 [53,54] by applying a reweighting technique with asymmetric uncertainties, as described in [55]. These contributions are largest at low and high masses (4-5%) and decrease to less than 1% for masses at the Z peak.
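The dressed-lepton definition above (adding the four-momenta of photons within ∆R < 0.1 of the lepton) can be sketched as follows; the particle kinematics are invented, and only the transverse components are tracked for brevity.

```python
import math

# Sketch of the dressed-lepton definition above: add to the post-FSR lepton
# the momenta of photons within a Delta R < 0.1 cone. Particles are
# (pt, eta, phi) triplets (massless approximation); all values are invented.
def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi   # wrap to (-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def dressed_pt(lepton, photons, cone=0.1):
    px = lepton[0] * math.cos(lepton[2])
    py = lepton[0] * math.sin(lepton[2])
    for pt, eta, phi in photons:
        if delta_r(lepton[1], lepton[2], eta, phi) < cone:
            px += pt * math.cos(phi)
            py += pt * math.sin(phi)
    return math.hypot(px, py)

lepton = (30.0, 0.0, 0.0)
photons = [(5.0, 0.05, 0.0),    # inside the cone: included
           (5.0, 1.00, 0.0)]    # well outside the cone: excluded
print(dressed_pt(lepton, photons))   # 35.0
```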
Efficiency uncertainty. The systematic uncertainty in the efficiency estimation consists of two components: the uncertainty in the efficiency correction factor estimation and the uncertainty related to the number of simulated events. The efficiency correction factor reflects systematic deviations between data and simulation. It varies up to 10% (7%) for the dielectron (dimuon) channel. As discussed in Section 7, single-lepton efficiencies of several types are measured with the tag-and-probe procedure and are combined into efficiency correction factors. The tag-and-probe procedure provides the efficiencies for each lepton type and the associated statistical uncertainties. A variety of possible systematic biases in the tag-and-probe procedure have been taken into account, such as the dependence on the binning in single-lepton pT and η, the dependence on the assumed shape of signal and background in the fit model, and the effect of pileup. In the dielectron channel, this uncertainty is as large as 3.2% at low mass, and 6% at high rapidity in the 200-1500 GeV region. The uncertainty in the dimuon channel is about 1% in most of the analysis bins, reaching up to 4% at high rapidity in the 200-1500 GeV mass region. The contribution from the dimuon vertex selection is small because its efficiency correction factor is consistent with being constant.
Electron energy scale. In the dielectron channel, one of the leading systematic uncertainties is associated with the energy scale corrections for individual electrons. The corrections affect both the placement of a given candidate in a particular invariant-mass bin and the likelihood of surviving the kinematic selection. The energy scale corrections are calibrated to a precision of 0.1-0.2%. The systematic uncertainties in the measured cross sections are estimated by varying the electron energy scale by 0.2%. The uncertainty is relatively small at low masses. It reaches up to 6.2% in the Z peak region, where the mass bins are the narrowest and the variation of the cross section with mass is the largest.

Muon momentum scale. The uncertainty in the muon momentum scale causes uncertainties in the efficiency estimation and background subtraction and affects the detector resolution unfolding. The muon momentum scale is calibrated to 0.02% precision. The systematic uncertainty in the measured cross sections is determined by varying the muon momentum scale within its uncertainty. The largest effect on the final results is observed in the detector resolution unfolding step, reaching 2%.

Detector resolution. For both channels, the simulation of the CMS detector, used for detector resolution unfolding, provides a reliable description of the data. Possible small systematic errors in the unfolding are related to effects such as differences in the electron energy scale and muon momentum scale and uncertainties in FSR simulation and in simulated pileup. The impact of each of these effects on the measurements is studied separately, as described in this section.
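The effect of a scale variation can be illustrated with a toy histogram study: shift all dilepton masses coherently up and down by the scale uncertainty, re-histogram, and record the per-bin yield change. This is only a schematic sketch under our own assumptions (the function, binning, and the approximation that a common lepton energy-scale shift moves the dilepton mass by the same relative amount), not the CMS procedure verbatim.

```python
def scaled_yield_shift(masses, bin_edges, scale_unc=0.002):
    """Toy propagation of a lepton energy-scale uncertainty: shift every
    dilepton mass up/down by the relative scale uncertainty, re-histogram,
    and take the larger per-bin deviation as the systematic uncertainty."""
    def hist(vals):
        counts = [0] * (len(bin_edges) - 1)
        for v in vals:
            for i in range(len(counts)):
                if bin_edges[i] <= v < bin_edges[i + 1]:
                    counts[i] += 1
                    break
        return counts
    nominal = hist(masses)
    up = hist([m * (1 + scale_unc) for m in masses])
    down = hist([m * (1 - scale_unc) for m in masses])
    return [max(abs(u - n), abs(d - n)) for n, u, d in zip(nominal, up, down)]
```

Events sitting close to a bin edge migrate under the shift, which is why the uncertainty is largest where the bins are narrow and the spectrum is steep, as in the Z peak region.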
The detector resolution unfolding procedure itself has been thoroughly validated, including a variety of closure tests and comparisons between different event generators; the systematic uncertainty assigned to the unfolding procedure is based on the finite size of the simulated samples and a contribution due to the systematic difference between data and simulation. The latter must be taken into account because the response matrix is determined from simulation.

Background uncertainty. The background estimation uncertainties are evaluated in the same way in both the dielectron and dimuon channels. The uncertainty in the background comprises the Poissonian statistical uncertainty of the predicted backgrounds and the difference between the predictions from data and simulation. The two components are combined in quadrature. The uncertainty in the background is no larger than 3.0% (1.0%) at low mass, reaching 16.3% (4.6%) in the highest mass bin in the dielectron (dimuon) channel.

γγ-initiated background uncertainty. The uncertainty in the correction for γγ-initiated processes is estimated using FEWZ 3.1 with the NNPDF2.3QED PDF set and consists of the statistical and PDF uncertainty contributions combined in quadrature.
FSR simulation. The systematic uncertainty due to the model-dependent FSR simulation is estimated using two reweighting techniques described in [12], with the same procedure in both decay channels. The systematic uncertainty from modeling the FSR effects is as large as 2.5% (1.1%) in the dielectron (dimuon) channel in the 45-60 GeV region. The systematic uncertainties related to the FSR simulation in the electron channel primarily affect the detector resolution unfolding procedure. The impact of these uncertainties is greater for the electron channel than for the muon channel because of the partial recovery of FSR photons during the clustering of electron energy in the ECAL. The effect of the FSR simulation on other analysis steps for the electron channel is negligible in comparison to other systematic effects associated with those steps.

Luminosity uncertainty. The uncertainty in the integrated luminosity recorded by CMS in the 2012 data set is 2.6% [56].

Table 1 summarizes the systematic uncertainties for the dielectron and dimuon channels.

Systematic uncertainties in the double ratio. In the double ratio measurements most of the theoretical uncertainties are reduced. The PDF and modeling uncertainties in the acceptance and the systematic uncertainty in the FSR modeling are fully correlated between the 7 and 8 TeV measurements. The relative uncertainty δσ_si/σ_si in the cross section ratio corresponding to a correlated systematic source of uncertainty s_i is estimated as the difference of the relative uncertainties, δσ_si/σ_si = |δ_si(8 TeV) − δ_si(7 TeV)|, where the δ_si are the relative uncertainties caused by a source s_i in the cross section measurements at √s = 7 and 8 TeV, respectively.
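To first order, a fully correlated fractional uncertainty largely cancels in the ratio of the 8 and 7 TeV measurements, leaving the difference |δ8 − δ7|, while an uncorrelated source adds in quadrature. The one-function sketch below is standard linear error propagation for a ratio with correlation coefficient ±1 or 0; the naming is our own.

```python
import math

def ratio_uncertainty(delta_8, delta_7, correlated=True):
    """Relative uncertainty of R = sigma_8 / sigma_7 from one systematic
    source with relative uncertainties delta_8 and delta_7.  A fully
    correlated contribution largely cancels in the ratio; an uncorrelated
    one adds in quadrature."""
    if correlated:
        return abs(delta_8 - delta_7)
    return math.hypot(delta_8, delta_7)
```

For example, a correlated 5% (8 TeV) vs. 4% (7 TeV) source contributes only 1% to the ratio, whereas the same pair treated as uncorrelated would contribute about 6.4%.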
The systematic uncertainties that are considered uncorrelated between the two center-of-mass energies are added in quadrature.

Results and discussion

The cross section measurements are first performed separately in the dielectron and dimuon decay channels and then combined using the procedure described in [57]. To assess the sensitivity of the measurement to PDF uncertainties, a comparison to theoretical calculations is performed using FEWZ 3.1 with CT10 and NNPDF2.1 NNLO PDFs [23,24]. While the theory predictions are presented for NNPDF2.1, similar results are expected from the use of the more recent NNPDF3.0 [58].

Differential cross section dσ/dm measurement

The pre-FSR cross section in the full phase space is calculated as

σ_i = N_u^i / (A_i ε_i L_int),   (4)

where N_u^i is the number of events after the background subtraction and the unfolding procedures for detector resolution and FSR, A_i is the acceptance, and ε_i is the efficiency in a given invariant-mass bin i; L_int is the total integrated luminosity.

The cross section in the Z peak region is calculated with Eq. (4) considering the mass region 60 < m < 120 GeV. The Z peak cross section measurements in the dielectron and dimuon channels are summarized in Table 2. The measurements agree with the NNLO theoretical predictions for the full phase space (i.e., 1137 ± 36 pb, as calculated with FEWZ 3.1 and CT10 NNLO PDFs), and also with the previous CMS measurement [38].

The pre-FSR cross section for the full phase space is calculated in mass bins covering the range 15 to 2000 GeV by means of Eq. (4). The results are divided by the invariant-mass bin widths ∆m_i.

Table 2: Absolute cross section measurements in the Z peak region (60 < m < 120 GeV). The uncertainties in the measurements include the experimental and theoretical systematic sources and the uncertainty in the integrated luminosity. The statistical component is negligible.

Channel      Cross section
Dielectron   1141 ± 11 (exp) ± 25 (theo) ± 30 (lumi) pb
Dimuon       1135 ± 11 (exp) ± 25 (theo) ± 30 (lumi) pb
Combined     1138 ± 8 (exp) ± 25 (theo) ± 30 (lumi) pb

The consistency of the differential cross section measurements obtained in the dielectron and dimuon channels is characterized by a χ2 probability of 82%, calculated from the total uncertainties. Therefore the measurements in the two channels are in agreement and are combined using the procedure defined in [57]. Based on the results in the two channels and their symmetric and positive definite covariance matrices, the estimates of the true cross section values are found as unbiased linear combinations of the input measurements having a minimum variance [59]. The uncertainties are considered to be uncorrelated between the two channels, with the exception of the modeling, PDF, and luminosity uncertainties. The effects of correlations between the analysis bins and different systematic sources are taken into account in the combination procedure when constructing the covariance matrix.

The result of the DY cross section measurement in the combined channel is presented in Fig. 3. The theoretical prediction makes use of the fixed-order NNLO QCD calculation and the NLO EW correction to DY production initiated by purely weak processes. The G_µ input scheme [33] is used to fix the EW parameters in the model. The full spin correlations as well as the γ*/Z interference effects are included in the calculation. The combined measurement is in agreement with the NNLO theoretical predictions computed with FEWZ 3.1 using CT10 NNLO. The uncertainty band in Fig.
3 for the theoretical calculation represents the combination in quadrature of the statistical uncertainty from the FEWZ 3.1 calculation and the 68% confidence level (CL) uncertainty from the PDFs. The uncertainties related to the QCD evolution scale dependence are evaluated by varying the renormalization and factorization scales simultaneously between the values 2m, m, and m/2, with m corresponding to the middle of the invariant-mass bin. The scale variation uncertainties reach up to 2% and are included in the theoretical error band.

Double-differential cross section d²σ/dm d|y| measurement

The pre-FSR cross section in bins of the dilepton invariant mass and the absolute value of the dilepton rapidity is measured according to

σ_ij = N_u^ij / (ε_ij L_int),   (5)

where the quantities N_u^ij and ε_ij are defined in a given bin (i, j), with i corresponding to the binning in dilepton invariant mass and j corresponding to the binning in absolute rapidity. The results are divided by the dilepton absolute rapidity bin widths ∆y_j. The acceptance correction to the full phase space is not applied to the measurement, in order to keep theoretical uncertainties to a minimum. The χ2 probability characterizing the consistency of the double-differential cross section measurements in the two channels is 45% in the entire invariant mass-rapidity range of study. The measurements in the two channels are thus in agreement and are combined using the same procedure as for the differential cross sections described earlier in the section. Figure 4 shows the rapidity distribution dσ/d|y| measured in the combined dilepton channel together with the predictions of FEWZ 3.1 with the CT10 and NNPDF2.1 NNLO PDF sets. The cross section is evaluated within the detector acceptance and is plotted for six different mass ranges.

The uncertainty bands in the theoretical expectations include the statistical and the PDF uncertainties from the FEWZ 3.1 calculations summed in quadrature. The statistical uncertainty is significantly smaller than the PDF uncertainty, which is the dominant uncertainty in the FEWZ 3.1 calculations. In general, the PDF uncertainty assignment is different for each PDF set. The CT10 PDF uncertainties correspond to 90% CL; to permit a consistent comparison with NNPDF2.1 the uncertainties are scaled to 68% CL.

In the low-mass region, the results of the measurement are in better agreement with the NNPDF2.1 NNLO than with the CT10 NNLO estimate, which is systematically lower than NNPDF2.1 NNLO in that region. The χ2 probability calculated between data and the theoretical expectation with total uncertainties on the combined results in the low-mass region is 16% (76%) for the CT10 (NNPDF2.1) PDFs. In the Z peak region, the two predictions are relatively close to each other and agree well with the measurements. The statistical uncertainties in the measurements in the highest mass region are of the order of the PDF uncertainty. The corresponding χ2 probability calculated in the high-mass region is 37% (35%) for the CT10 (NNPDF2.1) PDFs.
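For two independent measurements of a single quantity, the unbiased minimum-variance linear combination described above reduces to inverse-variance weighting. The scalar sketch below is a simplification under our own assumptions: the real combination uses full covariance matrices across analysis bins and correlated sources, and the numbers in the usage note take only the quoted experimental components for illustration.

```python
def combine(x1, s1, x2, s2):
    """Minimum-variance unbiased linear combination of two independent
    measurements x1 ± s1 and x2 ± s2 of the same quantity (the scalar
    case of the BLUE procedure): weights proportional to 1/s_i^2."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    s = (w1 + w2) ** -0.5
    return x, s
```

Fed the Table 2 experimental components (1141 ± 11 pb and 1135 ± 11 pb), this toy reproduces the combined central value of 1138 pb; the quoted ±8 pb experimental uncertainty also reflects correlations that this scalar sketch does not model.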
Double ratio measurements

The ratios of the normalized differential and double-differential cross sections for the DY process at the center-of-mass energies of 7 and 8 TeV in bins of dilepton invariant mass and dilepton absolute rapidity are presented. The pre-FSR double ratio in bins of invariant mass is calculated following the prescription introduced in [11] according to

R = [(1/σ_Z) dσ/dm](8 TeV) / [(1/σ_Z) dσ/dm](7 TeV),   (6)

while the pre-FSR double ratio in bins of mass and rapidity is calculated as

R_det = [(1/σ_Z) d²σ/dm d|y|](8 TeV, p_T > 10, 20 GeV) / [(1/σ_Z) d²σ/dm d|y|](7 TeV, p_T > 9, 14 GeV),   (7)

where σ_Z is the cross section in the Z peak region in the given channel and ℓ denotes e or µ. The same binning is used for the differential measurements at 7 and 8 TeV in order to compute the ratios consistently.

The double ratio measurements provide a high sensitivity to NNLO QCD effects and could potentially yield precise constraints on the PDFs; the theoretical systematic uncertainties in the cross section calculations at the different center-of-mass energies have substantial correlations, as discussed in Section 9. Because of cancellation in the double ratio, the effect of the γγ-initiated processes is negligible.

Figure 5 shows the pre-FSR DY double ratio measurement in the combined (dielectron and dimuon) channel as a function of the dilepton invariant mass, for the full phase space. The theoretical prediction for the double ratio is calculated using FEWZ 3.1 with the CT10 NNLO PDF set. The shape of the distribution is defined entirely by the √s and the Bjorken x dependencies of the PDFs, since the dependence on the hard scattering cross section cancels out. In the Z peak region, the expected double ratio is close to 1 by definition. It increases linearly as a function of the logarithm of the invariant mass in the region below 200 GeV, where partons with small Bjorken x contribute the most. The difference in the regions of x probed at the 7 and 8 TeV center-of-mass energies leads to a rapid increase of the double ratio as a function of mass above 200 GeV.
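Operationally, the double ratio is a bin-by-bin division of the two Z-peak-normalized spectra; normalizing each spectrum to its own Z-peak cross section first is what cancels the luminosity and many correlated systematic effects. A minimal list-based sketch (our own naming, not analysis code):

```python
def double_ratio(dsigma_8, sigma_z_8, dsigma_7, sigma_z_7):
    """Per-bin double ratio R: each differential cross section is first
    normalized to its own Z-peak cross section, then the 8 TeV and 7 TeV
    normalized spectra are divided bin by bin (identical binning assumed)."""
    return [(d8 / sigma_z_8) / (d7 / sigma_z_7)
            for d8, d7 in zip(dsigma_8, dsigma_7)]
```

Because the normalization is done per center-of-mass energy, a bin where the two normalized spectra coincide yields R = 1, which is why the expected double ratio is close to unity in the Z peak region.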
Figure 5: Measured DY double ratios at center-of-mass energies of 7 and 8 TeV in the combined dilepton channel as compared to NNLO FEWZ 3.1 calculations obtained with the CT10 NNLO PDF, for the full phase space. The uncertainty band in the theoretical predictions combines the statistical and PDF uncertainties; the latter contributions are dominant. The exact definition of R is given in Eq. (6).

The uncertainty bands in the theoretical prediction of the double ratio include the statistical and the PDF uncertainties from the FEWZ 3.1 calculations summed in quadrature. The experimental systematic uncertainty calculation is described in Section 9. We observe agreement of the double ratio measurement with the CT10 NNLO PDF theoretical prediction within uncertainties. The χ2 probability from a comparison of the predicted and measured double ratios is 87% with 40 degrees of freedom, calculated with the total uncertainties. At high mass, the statistical component of the uncertainty becomes significant, primarily from the 7 TeV measurements.

The double ratios within the CMS acceptance as measured and as predicted by FEWZ 3.1 CT10 and NNPDF2.1 NNLO PDF calculations as a function of dilepton rapidity in six mass bins are summarized in Fig.
6. The measurements having the smallest experimental systematic uncertainty are used in the calculation. Thus, the 8 TeV measurement entering the numerator is estimated in the combined channel, while the 7 TeV measurement in the denominator is estimated in the dimuon channel [12]. The shape of the theoretical prediction of the double ratio is nearly independent of the dilepton rapidity at low mass, showing an increase as a function of rapidity by up to 20% in the Z peak region and at high mass, and a significant dependence on rapidity in the 30-60 GeV region. The uncertainty bands in the theoretical predictions of the double ratio include the statistical and the PDF uncertainties from the FEWZ 3.1 calculations summed in quadrature. The uncertainties related to the QCD evolution scale dependence are evaluated by varying the renormalization and factorization scales simultaneously between the values 2m, m, and m/2, with m corresponding to the middle of the invariant-mass bin. The scale variation uncertainties reach up to 2% and are included in the theoretical error band.

The double ratio predictions calculated with the CT10 NNLO and NNPDF2.1 NNLO PDFs agree with the measurements. Below the Z peak, the NNPDF2.1 NNLO PDF theoretical predictions are in closer agreement with the measurement. In the Z peak region, a difference in the slope of both theoretical predictions as compared to the measurement is observed in the central absolute rapidity region. In the high-rapidity and high-mass regions, the effect of the limited number of events in the 7 TeV measurement is significant. In the 120-200 GeV region, the measurement is at the lower edge of the uncertainty band of the theory predictions.

The DY double-differential cross section and double ratio measurements presented here can be used to impose constraints on the quark and antiquark PDFs in a wide range of x, complementing the data from the fixed-target experiments with modern collider data.
Summary

This paper presents measurements of the Drell-Yan differential cross section dσ/dm and the double-differential cross section d²σ/dm d|y| with proton-proton collision data collected with the CMS detector at the LHC at a center-of-mass energy of 8 TeV. In addition, the first measurements of the ratios of the normalized differential and double-differential cross sections for the DY process at center-of-mass energies of 7 and 8 TeV in bins of dilepton invariant mass and absolute rapidity are presented. A previously published CMS measurement based on 7 TeV data [12] is used for the double ratio calculations.

The measured inclusive cross section in the Z peak region is 1138 ± 8 (exp) ± 25 (theo) ± 30 (lumi) pb for the combination of the dielectron and dimuon channels. This is the most precise measurement of the cross section in the Z peak region at √s = 8 TeV in CMS. The dσ/dm and d²σ/dm d|y| measurements agree with the NNLO theoretical predictions computed with FEWZ 3.1.

Figure 1: The dielectron (left) and dimuon (right) invariant-mass spectra observed in data and predicted by Monte Carlo (MC) simulation and the corresponding ratios of observed to expected yields. The QCD multijet contributions in both decay channels are predicted using control samples in data. The EW histogram indicates the diboson and W+jets production. The simulated signal distributions are based on the NNLO-reweighted POWHEG sample. No other corrections are applied. Error bars are statistical only.

Figure 2: The DY acceptance, efficiency, and their product per invariant-mass bin in the dielectron channel (left) and the dimuon channel (right), where "post-FSR" means dilepton invariant mass after the simulation of FSR.

Figure 3: The DY differential cross section as measured in the combined dilepton channel and as predicted by NNLO FEWZ 3.1 with CT10 PDF calculations, for the full phase space. The data point abscissas are computed according to Eq.
(6) in [60]. The χ2 probability characterizing the consistency of the predicted and measured cross sections is 91% with 41 degrees of freedom, calculated with total uncertainties while taking into account the correlated errors in the two channels.

Figure 4: The DY dilepton rapidity distribution dσ/d|y| within the detector acceptance, plotted for different mass ranges, as measured in the combined dilepton channel and as predicted by NNLO FEWZ 3.1 with CT10 PDF and NNLO NNPDF2.1 PDF calculations. There are six mass bins between 20 and 1500 GeV, from left to right and from top to bottom. The uncertainty bands in the theoretical predictions combine the statistical and PDF uncertainties (shaded bands); the latter contributions are dominant.

Figure 6: Measured DY double ratios as a function of the absolute dilepton rapidity within the detector acceptance, at center-of-mass energies of 7 and 8 TeV, plotted for different mass ranges and as predicted by NNLO FEWZ 3.1 with CT10 and NNPDF2.1 NNLO PDF calculations. There are six mass bins between 20 and 1500 GeV, from left to right and from top to bottom. The uncertainty bands in the theoretical predictions combine the statistical and PDF uncertainties (shaded bands); the latter contributions are dominant. The exact definition of R_det is given in Eq. (7).

Table 1: Typical systematic uncertainties (in percent) at low mass (below 40 GeV), in the Z peak region (60 < m < 120 GeV), and at high mass (above 200 GeV) for the dielectron and dimuon channels; "-" means that the source does not apply.
New Series of Hydrogen-Bonded Liquid Crystals with High Birefringence and Conductivity

Liquid crystals with high dielectric anisotropy, low operational thresholds, and significant birefringence (Δn) represent a key focus in soft matter research. This work introduces a novel series of hydrogen-bonded liquid crystals (HBLCs) derived from 4-n-alkoxybenzoic, 4-alkoxy-3-fluorobenzoic derivatives (nOBAF), 4-alkoxy-2,3-fluorobenzoic derivatives (nOBAFF), and 2-fluoro-4-nitrobenzoic acid. The HBLCs were characterized using Fourier transform infrared spectroscopy, and their thermal behavior was evaluated via differential scanning calorimetry. Optical observations were conducted using polarized optical microscopy. The results indicate that mixtures containing benzoic acid with a bilateral fluorine substituent exhibit both SmA and SmC phases, while those with a unilateral fluorine substituent exhibit nematic and SmA phases. Moreover, an increase in the length of the alkoxy chain results in an expanded mesophase temperature range. This study demonstrates that the presence of a fluorine substituent and the incorporation of an NO2 group in the molecular structure result in an increase in dielectric permittivity, DC conductivity, dielectric anisotropy, and birefringence.
Furthermore, display applications necessitate nematic LCs with a high birefringence [23-27] to ensure optimal performance across a wide range of spectral regions, including visible light [23], near infrared (NIR) [24], mid infrared (MIR) [25], millimeter wave [26], and terahertz [27]. LC mesogens have been developed by incorporating the acetylene group (C≡C) into the mesogenic segment, extending the π-electron conjugation of the molecular structure and allowing for high birefringence. However, these materials typically exhibit relatively high rotational viscosity and melting points. Various attempts have been made to reduce these parameters, including the introduction of unsaturated alkene groups, such as alkyloxy or but-3-enyl, into the LC molecules. This has been done in order to facilitate the formation of nematic phases with lower rotational viscosity [3,28].

It is well-established that the inclusion of a fluorine atom into a molecule can significantly alter its physical and chemical properties due to its high electronegativity, low polarizability, and strong bond strength. Consequently, the lateral fluorine substituent has been employed to reduce the rotational viscosity and the melting point of LCs [28][29][30][31]. In addition, this substituent also expands the temperature range of the nematic phase [8]. Moreover, the impact of lateral substituents is contingent upon the number and position of the fluorine atoms. Researchers have developed fluorinated calamitic mesogens that exhibit broad nematic temperature ranges and low melting points. Dabrowski et al. demonstrated that replacing hydrogen atoms with fluorine destabilizes the SmE phase and leads to a highly birefringent nematic phase [29]. The incorporation of a lateral fluorine substituent in LCs enhances the optical and electrical properties and the temperature range of the nematic phase, rendering it an attractive candidate for photovoltaic applications.
Significant research has been conducted with the objective of designing and synthesizing hydrogen-bonded LCs (HBLCs) and helical ferroelectric liquid crystals for applications in GHz [32] and THz [33,34] frequency domains. In particular, the potential of dichroism-free HBLC as an optical material for THz devices was demonstrated [33] in comparison to those using dichroic LC materials. The relationship between the molecular structures of LCs and their properties was elucidated, as demonstrated in references [12,13,26,32,33]. It is of particular importance to note that such HBLCs possess a high degree of polymorphism, displaying both smectic and nematic phases. It has been reported that HBLCs exhibit low anisotropy and high driving voltage [31,33,35], which highlights the need to improve these parameters. Consequently, the magnitude of the birefringence (∆n) and the dielectric anisotropy (∆ε) can be chemically controlled by the introduction of polar groups, such as NO2 and CN groups, into the mesogen core.

In this report, HBLC mixtures with varying molecular shape anisotropy will be examined. The bi-component mixtures contain one compound with a fluorine and an NO2 group, while the second one possesses either zero, one, or two fluorine atoms. This study specifically examines the effects of the position and number of fluorine atoms of the second compound on the thermal, dielectric, and electro-optic properties of the HBLC mixtures.
FTIR Analysis

In order to gain further insight at the molecular level, FTIR spectroscopy was employed. The FTIR spectra of 9OBAF, FNBA, and FNBA/9OBAF are presented in Figure 1a-c, respectively. The spectra of 9OBAF and FNBA/9OBAF exhibit a broad band at 2700-3300 cm−1, which is assigned to the ν(O-H) mode of carboxylic acid groups. In addition, the ν(C=O) mode was observed in the form of sharp bands at 1683 cm−1, 1694 cm−1, and 1703 cm−1 for 9OBAF, FNBA, and FNBA/9OBAF, respectively. The observed differences in the wavenumbers of the three compounds, in conjunction with the absence of absorbance at 3500 cm−1, which is characteristic of the free O-H group, provide compelling evidence that the FNBA and 9OBA molecules are hetero-associated through hydrogen bonding between the carboxylic acid groups. It is also noteworthy that a comparable outcome was observed for the other synthesized compounds.

Phase Behavior

The phase transition temperatures and the temperature range of the LC phase of the compounds under study were investigated using DSC and POM techniques. As an illustration, DSC analysis and POM texture investigations are presented for the FNBA/10OBA and FNBA/9OBAF mixtures. The DSC thermogram of FNBA/10OBA displays four endothermic peaks upon heating and three exothermic peaks upon cooling, which correspond to the presence of two mesophases (Figure 2a). Upon cooling from the isotropic phase, this compound exhibits the focal conic texture of SmA and the fan-shaped textures of SmC, as illustrated in Figure 3a,b. Furthermore, the heating scan indicates that the mixture FNBA/9OBAF melts at 96 °C and transitions to an isotropic phase at 132 °C (see Figure 2b). The mixture exhibits two liquid crystal (LC) mesophases, as identified by polarizing optical microscopy (POM) in Figure 3c,d. Upon cooling, the temperature range of [T_Iso-N − T_N-SmA = 115.4−101 °C] displays the nematic phase with a schlieren texture, whereas the temperature range of [T_N-SmA − T_SmA-Cr = 101−71.6 °C] exhibits the SmA phase characterized by a focal conic texture. Table 1 presents the mesomorphic transition temperatures (in degrees Celsius) and enthalpies (in joules per gram) upon cooling. The phase transitions of the FNBA/14OBAF mixture are analogous to those of the FNBA/9OBAF mixture. In comparison to FNBA/9OBAF, the temperature of crystallization is observed to decrease, while the Iso-N phase transition temperature increases by 6 °C. This results in an expanded temperature range of the LC phase. It should be noted that the occurrence of superimposed thermal peaks at the SmA-N phase transition is a consequence of the coexistence of nematic and SmA phases, as evidenced by POM observations.
Molecules 2024, 29, x FOR PEER REVIEW

Dielectric Properties

Figure 4 illustrates the frequency dependence of the real (ε′) and imaginary (ε″) parts of the complex permittivity at different temperatures for the FNBA/10OBA system, as an example. At frequencies below 1000 Hz, the real (ε′) and imaginary (ε″) parts of the permittivity decrease as the frequency increases, due to the ionic contribution. In the higher frequency regime, ε′ becomes nearly constant, corresponding to the static dielectric constant of the LC. However, the imaginary part of the complex permittivity (ε″) exhibits the soft mode (fluctuations of the short molecular axis), which is characteristic of the smectic and nematic phases [36,37]. In addition, the present data show relatively high ε″ values as compared to the dielectric results obtained for 10OBAFF/8OBA [31], which are due to the presence of the highly polar NO2 group. Dielectric spectroscopy is a valuable tool for studying the phase transitions of LCs [38,39]. Figure 5 illustrates the effect of temperature on the dielectric spectra by plotting ε′(T) for FNBA/9OBAF. In the isotropic phase, ε′ remains constant during the cooling process and then increases significantly to reach its maximum value at the Iso-N phase transition. In the nematic phase, the dielectric constant (ε′) remains relatively constant and does not vary significantly with the temperature. Discontinuities were observed at the N-SmA and SmA-Cr phase transitions, which are consistent with the findings in [37].

Conversely, the substantial enhancement in ε′ and ε″ at low frequencies due to the ionic contribution indicates the potential occurrence of ionic diffusion phenomena. This can be described by the ionic diffusion model of [16] (Equations (1) and (2)). In these equations, q represents the electric charge, d stands for the cell gap, ε0 is the permittivity of free space, kB represents the Boltzmann constant, T stands for the absolute temperature, n is the bulk ionic concentration, and ε′b represents the intrinsic dielectric constant of the LC bulk.
To extract the ionic concentration (nion) and the diffusion coefficient D, the spectra of ε′ and ε″ were fitted together in the range of 1-5 kHz using Equations (1) and (2). Table 2 presents the values for the diffusion constant D, the ionic concentration nion, and the mobility of free ions µ. The latter quantity was calculated using the diffusion coefficient D [38]. As anticipated, an elevation in temperature is accompanied by an enhancement in all parameters. The rise in the ionic concentration at elevated temperatures can be attributed to the thermal energy acquired by the ions, which enables them to detach from the LC molecules and become mobile. The rise in mobility and D is attributable to the reduction in viscosity at higher temperatures.

The measurement of conductivity represents a reliable method for describing the ionic behavior in a sample, as it is proportional to the concentration of space charge [40]: The horizontal part of the curve in the SmA and nematic phases represents the DC electric conductivity. This value was derived through a nonlinear fit of the σAC versus frequency plot to the universal power law [40]: where σDC is the DC conductivity, fc stands for the characteristic frequency, and m represents the degree of interaction between the mobile ions and their surroundings.
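The two fitting steps just described can be sketched numerically. The sketch below assumes the universal power law takes the common Jonscher form σAC(f) = σDC[1 + (f/fc)^m] and that the free-ion mobility follows from D via the Einstein relation µ = qD/(kBT), which we take to be the relation of [38]; the spectrum and parameter values are synthetic, illustrative stand-ins, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Universal (Jonscher) power law -- assumed form of the text's sigma_AC equation:
#   sigma_AC(f) = sigma_DC * (1 + (f / f_c)**m)
def jonscher(f, sigma_dc, f_c, m):
    return sigma_dc * (1.0 + (f / f_c) ** m)

# Synthetic conductivity spectrum (illustrative values, not the measured data)
f = np.logspace(0, 7, 200)                     # 1 Hz .. 10 MHz
sigma_true = jonscher(f, 2e-8, 1e5, 0.8)       # hypothetical sigma_DC, f_c, m
rng = np.random.default_rng(0)
sigma_ac = sigma_true * (1.0 + 0.01 * rng.standard_normal(f.size))

# Relative weighting (sigma=sigma_ac) so the low-frequency plateau is not swamped
popt, _ = curve_fit(jonscher, f, sigma_ac, p0=[1e-8, 1e4, 1.0], sigma=sigma_ac)
sigma_dc_fit, f_c_fit, m_fit = popt

# Free-ion mobility from the diffusion coefficient, assuming the Einstein relation
#   mu = q * D / (k_B * T)
q, k_B = 1.602e-19, 1.381e-23                  # C, J/K
D, T = 1e-11, 380.0                            # m^2/s, K (illustrative)
mu = q * D / (k_B * T)
```

With relative weighting, the fit recovers all three power-law parameters even though the high-frequency branch dominates the raw magnitudes.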
It is anticipated that an increase in σDC will be observed as a consequence of the temperature-dependent nature of both the ionic concentration and mobility. This is because σDC is directly proportional to these parameters. It was observed that the values of σDC in FNBA/9OBAF are higher than those in FNBA/10OBA (see Figure 6). This can be attributed to the higher mobility in the former mixture. Furthermore, these mixtures demonstrate a high level of DC conductivity in comparison to other calamitic LCs [38]. This can be explained by the presence of molecules with high dipole moments along their molecular long axes, influenced by polar terminal groups (NO2), fluoro substituents, and ester groups. Furthermore, a comparison of the σDC values of FNBA/9OBAF and FNBA/10OBA reveals that the lateral fluoro atom, which facilitates the delocalization of π-electrons in the aromatic group, plays a pivotal role in the enhancement of this parameter. As reported by Manabe et al. [22], a high longitudinal dipole moment and a lateral fluoro substituent lead to the appearance of high dielectric permittivity. Based on these considerations, it appears that a large dipole moment along the long axis of a rod-like molecule and the lateral fluoro substituent are of primary importance for the appearance of high conductivity. Another contributing factor is the degree of molecular order, which improves ionic mobility and, therefore, the conductivity.

Properties of the Nematic Phase

Figure 7 shows the real component (ε′) of FNBA/9OBAF in planar (ε⊥) and homeotropic (ε∥) alignments over a frequency range of [1 Hz-10 MHz]. At lower frequencies, there is a significant decrease in ε′ as the frequency increases due to the ionic contribution, which is analogous to that observed in FNBA/10OBA. At frequencies ranging from 10^3 to 10^5 Hz, ε′ remains nearly constant, representing the static permittivity. In this range, ionic impurities are no longer able to follow the periodic inversion of the electric field. The values of ε∥ are higher than those of ε⊥, indicating a positive dielectric anisotropy.

Figure 8 illustrates the variation in the dielectric anisotropy with temperature. The graph indicates a decrease in dielectric anisotropy with increasing temperature, with a significant decrease occurring at the Iso-N phase-transition temperature. The obtained Δε values are 4.6, 5.3, and 6.5 at 112 °C, 108 °C, and 104 °C, respectively, indicating that the target compounds FNBA/9OBAF possess high dielectric anisotropy. Furthermore, when compared to the dimeric 9OBAF (Δε = 0.6), the present compound exhibits a significantly larger Δε value. The presence of the NO2 group in these compounds is responsible for their non-symmetric configuration, increased dipole moment, and higher polarizability as compared to the dimeric compound. Consequently, a larger number of molecules align longitudinally, increasing the dielectric anisotropy.
The optical anisotropy, or birefringence (Δn), is defined as the difference between the ordinary index (no) and the extraordinary index (ne): Δn = ne − no. This parameter is a crucial physical property of HBLCs and plays an essential role in their applications. Figure 9 illustrates the temperature dependence of Δn, which exhibits a slight decrease over the nematic range with increasing temperature and a rapid decrease near the Iso-N phase transition. The curve illustrates a high Δn value for
FNBA/9OBAF. This compound exhibits higher Δn values (0.27 at 104 °C) as compared to the well-known 5CB (Δn = 0.17) and E7 (Δn = 0.24) [41] and the dimeric compounds nOBAF (Δn = 0.2), while it is lower than that obtained in tolane LCs (0.3-0.4) [29]. This indicates that the incorporation of the NO2 group and fluorine substituent in the molecular structure enhances the Δn value of the LC.

The threshold voltage (Vth) is defined as the voltage required to induce the Freedericksz transition. Figure 10 depicts the variation in Vth as a function of temperature. The synthesized FNBA/9OBAF blend exhibits threshold voltages between 4.2 and 5.7 V and between 3.8 and 5.2 V. The former values were achieved when the color of the texture began to
change. The latter values, obtained from capacitance measurements, were found to be approximately 1 V lower than those obtained by POM observations across the entire investigated temperature range. It should be noted that the threshold voltages obtained from the FNBA/9OBAF blend were lower than those from the other HBLC materials previously reported in the literature [31,33,35]. For example, the dimeric compound presents a higher threshold voltage (6-7 V) [31,35]. The enhanced dielectric anisotropy is responsible for the relatively low Vth, as described by the following equation [15]:

Vth = π √(K1/(ε0 Δε))

Figure 9. Birefringence as a function of temperature for the FNBA/9OBAF mixture.

It is crucial to acknowledge that comparable data have also been documented by Yamaguchi [33].
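The link between Δε and Vth can be made concrete with a quick numerical check. The sketch below assumes equation [15] is the standard Freedericksz threshold, Vth = π√(K1/(ε0Δε)); the splay elastic constant K1 is a hypothetical illustrative value, while the Δε values are those quoted for FNBA/9OBAF above.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def freedericksz_vth(K1, delta_eps):
    """Assumed form of equation [15]: V_th = pi * sqrt(K1 / (EPS0 * delta_eps))."""
    return math.pi * math.sqrt(K1 / (EPS0 * delta_eps))

K1 = 1.2e-11  # splay elastic constant, N (hypothetical illustrative value)
vth_high_anisotropy = freedericksz_vth(K1, 6.5)  # delta_eps quoted at 104 C
vth_low_anisotropy = freedericksz_vth(K1, 4.6)   # delta_eps quoted at 112 C
# Larger delta_eps -> lower threshold voltage
```

This reproduces the qualitative point of the text: the mixture with the larger dielectric anisotropy switches at the lower voltage.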
Polymer Dispersed Liquid Crystals

In order to demonstrate one of the potential applications of the synthesized HBLC mixtures, polymer-dispersed liquid crystal (PDLC) samples were prepared. PDLCs have garnered significant interest due to their versatile electro-optical properties, which render them suitable for a wide array of applications [42-46]. Figure 11 illustrates an image from the POM observations of a PDLC film elaborated by the thermally induced phase separation of FNBA/9OBAF as a HBLC blend with styrene as a monomer. The image demonstrates the presence of a phase-separated sample morphology, revealing a polystyrene matrix phase-separated from liquid crystalline domains consisting of a FNBA/9OBAF HBLC blend.
However, because the electric field responsible for reorienting the LC molecules is inversely proportional to the radius (R) of the HBLC domains, a radius of several tens of micrometers, as illustrated in Figure 11, would result in a relatively low electric field. A straightforward formula for assessing the reorienting field is provided by the following equation:

E ≈ (1/R) √(K/(ε0 Δε))

Given the high dielectric anisotropy, the estimated reorienting electric field is relatively low.

The HBLC mixtures FNBA/10OBA, FNBA/9OBAF, FNBA/14OBAF, FNBA/7OBAFF, and FNBA/12OBAFF were prepared by dissolving a blend of FNBA and either benzoic acid or fluoro-benzoic acid in a 1:1 molar ratio in DMF. After thorough mixing, the sample was allowed to cool slowly in order to evaporate the solvent completely until the sample mass became constant. The resulting powders were subjected to vacuum drying for a minimum of 20 h prior to use. Figure 12 depicts the structures of these compounds.
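A quick numerical estimate illustrates the stated 1/R dependence of the reorienting field. The sketch assumes the field scales as E ≈ (1/R)√(K/(ε0Δε)) with any order-unity prefactor omitted; the elastic constant K and the droplet radii are hypothetical illustrative values, while Δε is a value quoted in the text.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def reorienting_field(R, K, delta_eps):
    """Estimate of the field reorienting the LC inside a droplet of radius R,
    assuming E ~ (1/R) * sqrt(K / (EPS0 * delta_eps)); any order-unity
    prefactor is omitted."""
    return (1.0 / R) * math.sqrt(K / (EPS0 * delta_eps))

K, delta_eps = 1.2e-11, 6.5  # hypothetical elastic constant; delta_eps from text
E_small_droplet = reorienting_field(5e-6, K, delta_eps)    # 5 um radius
E_large_droplet = reorienting_field(50e-6, K, delta_eps)   # tens of um, as in Figure 11
```

A tenfold increase in droplet radius lowers the required field by the same factor, which is why the large domains of Figure 11 switch at a relatively low field.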
Measurement Set-Up and Instruments

Fourier transform infrared spectra were recorded on the dry powders in attenuated total reflectance mode (FTIR-ATR) using a Nicolet™ iS50 FT-IR spectrometer (THORLABS, Munich, Germany). The spectra were recorded in the range of 4000-400 cm−1 at a resolution of 4 cm−1, using 32 scans for each spectrum. A differential scanning calorimeter (DSC) was utilized to perform the calorimetric measurements, with a nitrogen purge gas applied. The Perkin-Elmer DSC 7 apparatus was employed to perform the cooling and heating cycles at a rate of 3 °C per minute. The DSC measurements were performed on samples with a mass of approximately 3-4 mg. The enthalpies and transition temperatures were determined using the Perkin-Elmer Pyris software version 13.1.1. Polarized optical microscopy (POM) was conducted using an Olympus BX50 polarized optical microscope, equipped with a digital CCD camera (Sony XCD-U100CR). In order to investigate the thermal, electro-optic, and dielectric behavior, the compounds were filled by capillarity in the isotropic phase into commercially available planar alignment cells (EHC from Japan) with a thickness of 5.5 µm and an active area of 0.25 cm2. The dielectric permittivity (ε* = ε′ − iε″) of the HBLC mixtures was determined using an impedance/gain phase analyzer (Solartron SI1260) coupled to a 1296 dielectric interface
in the frequency range from 1 Hz to 10 MHz. The parallel permittivity (ε∥) was determined using a homeotropic cell, while the perpendicular permittivity (ε⊥) was measured under an AC voltage of 0.5 V. Consequently, the dielectric anisotropy Δε, defined as the difference between the parallel and perpendicular permittivities, ε∥ and ε⊥, respectively, could be estimated.

The temperature of the sample was meticulously regulated to within a margin of ±0.1 °C through the utilization of a Linkam TMS 94 apparatus. To assess the electro-optic responses of the HBLC mixtures, an electric voltage was applied to the cell, which was connected in series with an external electric resistance (1 kΩ). Subsequently, the sample was positioned between crossed polarizers under a polarizing microscope. An Agilent 33220A waveform function generator was employed to apply the electric voltage, which could be either a positive direct current (DC) or a sinusoidal signal with adjustable amplitude and frequency. The birefringence Δn was calculated using the phase difference δ, as given by the following formula:

δ = 2πd·Δn/λ (8)

where d is the cell gap, and λ = 546 nm represents the wavelength. The parameter Vth is of particular importance in the context of LCs, as it represents the minimum voltage value required to reorient LC molecules. In the present case, the value of Vth was obtained through a process of observation and measurement. The observations were made using a polarizing optical microscope (POM) [36], while the measurements were taken using an Agilent LCR meter (E4980A). The frequency of the applied voltage, which ranged from 0 to 15 V, was 1 kHz. The specifics of this procedure have been delineated in [37].
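Equation (8) can be inverted directly for Δn. The short sketch below uses the cell gap (5.5 µm) and wavelength (546 nm) given in the text; the Δn value used for the round trip is the one quoted for FNBA/9OBAF.

```python
import math

d = 5.5e-6     # cell gap from the text, m
lam = 546e-9   # wavelength from the text, m

def phase_difference(dn):
    """Equation (8): delta = 2*pi*d*dn / lam."""
    return 2.0 * math.pi * d * dn / lam

def birefringence(delta):
    """Equation (8) inverted for the birefringence: dn = delta*lam / (2*pi*d)."""
    return delta * lam / (2.0 * math.pi * d)

delta = phase_difference(0.27)   # dn quoted for FNBA/9OBAF at 104 C
```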
Polymer-dispersed liquid crystal (PDLC) films were prepared using the thermally induced phase separation technique, which involved a prepolymer mixture comprising a styrene monomer, FNBA/9OBAF, and azobisisobutyronitrile (AIBN) as the initiator. The purified and freshly prepared monomer (40 wt-%) was combined with the HBLC blend (60 wt-%) and a catalytic amount of AIBN, resulting in the formation of an optically homogeneous prepolymer mixture. Subsequently, the mixture was injected into 10-µm-thick ITO-coated LC cells (HG cells from AWAT, Warsaw, Poland) via capillary action and heated to 70 °C. This controlled heating process enables the precise regulation of the polymerization reaction, resulting in the formation of well-dispersed LC domains within the polystyrene matrix.

Conclusions

In conclusion, new fluorinated HBLCs derived from 4-n-alkoxybenzoic acid (10OBA), 2-fluoro-4-nitrobenzoic acid (FNBA), and lateral fluorine-substituted derivatives (nOBAF and nOBAFF) have been successfully analyzed. The investigated mixtures exhibited a diverse array of mesophases, including SmA and nematic phases, contingent on the composition of the HBLC mixture, including the length of the alkoxy chain and the number of fluorine atoms. The structural variations and composition significantly impacted the temperature range of the mesophase. It is noteworthy that the HBLCs under investigation exhibited high dielectric permittivity and DC conductivity. Moreover, the nematic phases exhibited high birefringence and dielectric anisotropy, which can be attributed to increased molecular polarizability and dipole moment. This study emphasizes the significance of molecular design and composition in tailoring the properties of HBLCs for specific applications. In particular, it highlights the potential of enhancing optical, electrical, and thermal characteristics relevant to advanced materials and device technologies through the manipulation of molecular composition.
3c,d. Upon cooling, the temperature range of [TIso-N − TN-SmA = 115.4-101 °C] displays a nematic phase with a Schlieren texture, whereas the temperature range of [TN-SmA − TSmA-Cr = 101-71.6 °C] exhibits the SmA phase, characterized by a focal conic texture. The table presents the mesomorphic transition temperatures (in degrees Celsius) and enthalpies (in Joules per gram) upon cooling. The phase transitions of the FNBA/14OBAF mixture are analogous to those of the FNBA/9OBAF mixture. In comparison to FNBA/9OBAF, the temperature of crystallization is observed to decrease, while the Iso-N phase-transition temperature increases by 6 °C. This results in an expanded temperature range of the mesophase. It should be noted that the occurrence of superimposed thermal peaks at the SmA-N phase transition is a consequence of the coexistence of nematic and SmA phases, as evidenced by POM observations.

Figure 6. Plot of the frequency dependence of the conductivity for the (a) FNBA/10OBA and (b) FNBA/9OBAF mixtures.

Figure 8. The variation in the dielectric anisotropy for the FNBA/9OBAF mixture as a function of the temperature.
Figure 9. Birefringence as a function of temperature for the FNBA/9OBAF mixture.

Table 2. Electro-physical parameters of the studied compounds.
Wearable Internet of Things Gait Sensors for Quantitative Assessment of Myers–Briggs Type Indicator Personality

Gait is a typical habitual human behavior and a manifestation of personality. The unique properties of individual gaits may offer important clues in the assessment of personality. However, assessing personality accurately through quantitative gait analysis remains a daunting challenge. Herein, targeting young individuals, standardized gait data are obtained from 114 subjects with a wearable gait sensor, and the Myers–Briggs Type Indicator (MBTI) personality scale is used to assess their corresponding personality types. Artificial intelligence algorithms are used to systematically mine the relationship between gaits and 16 personality types. The work shows that gait parameters can indicate the personality of a subject from the four MBTI dimensions of E-I, S-N, T-F, and J-P with a concordance rate as high as 95%, 96%, 91%, and 91%, respectively. The overall measurement accuracy for the 16 personality types is 88.16%. Moreover, a personality tracking experiment on all the subjects after one year to assess the stability of their personality is also conducted. This research, which is based on a smart wearable Internet of Things gait sensor, not only establishes a new connection between behavioral analysis and personality assessment but also provides a set of accurate research tools for the quantitative assessment of personality.

This motivates the use of advanced sensing and AI-related technologies to develop a systematic and accurate measurement tool for assessing personality characteristics. People's behaviors are closely related to their personality traits [1,9]. Behavioral analysis has powerful practical applications in mental health therapy and organizational psychology. The use of habitual behavioral patterns to assess personality can effectively prevent the interference of the social desirability effect and other subjective factors.
[10] Besides, behavior can be observed, recorded, and analyzed systematically with the aid of technological devices [13-16]. For instance, research teams from Nagoya University and Tsinghua University examined the relationship between facial behavior and personality [11,12,15]. Other research teams, such as those from MIT, investigated the relationship between voice and personality [13,14,16]. However, owing to various confounding factors, including gender and age effects, the concordance rate of such physiological indicators with conventional psychometric personality measures is low. Hence, systematically quantifying and evaluating the various dimensions of personality based on behavioral information from faces and voices may be difficult, and other behavioral indicators of personality should be identified to increase accuracy.

Gait is one of the salient features of human behavior, which is generated by the combined action of the brain and nerves [17]. Gait also reflects an individual's cognition, character, etc., and provides clues to an individual's mental and health conditions [18]. Compared with other biometrics, gait is difficult to camouflage; thus, it can offer a highly objective measurement. Collado-Vázquez et al. believed that gait reflects internal characteristics [19], while others showed that walking speed in adulthood can reflect personality to a certain extent [20]. For example, two dimensions of the Big Five Personality Traits are generally related to gait speed and reduced gait speed [21]. Specifically, the amplitude of upper and lower extremity movements and walking speed are associated with aggression [22]. However, the analysis of gait and personality in the aforementioned studies was not systematic and comprehensive, and gait was mostly limited to walking velocity.
[25][26][27] Examples of recent work using motion and pressure sensor data to relate gait features to human characteristics are shown in Table 1.Experiments at the Newcastle Neuroscience Institute demonstrated that gait is highly reliable in judging personality traits. [28]A research team from Shanghai Jiao Tong University used the Kinect system to discover gait characteristics that may be related to personality. [29]Another team from Changwon National University in South Korea used specific electric charges (GaitRite program) to understand the gait patterns of the Myers-Briggs Type Indicator (MBTI) personality types. [30]Meanwhile, a psychology team at Carleton University applied gait research to crime detection analysis. [31]Most recent studies examined the correlation between gait and personality, but no systematic and quantitative evaluation on how gait can explain personality preferences exists.Hence, a comprehensive examination of the processes behind gait-based patterns is necessary to advance our understanding of personality to a considerable extent. Currently, there are many assessment tools to describe personality and the most prominent and influential methods are 1) Cattell's 16-factor personality model, 2) the three dimensions of Eysenck Personality (Eysenck Big Three) model, 3) the MBTI, and 4) the Big Five assessment.Perhaps the most important reason that Cattell's 16-factor model never gained full academic acceptance is that it is harder to understand than simpler models such as the Eysenck Big Three or the Big Five; however, Eysenck's model is insufficient to account for the complexity of the wide range of human personalities. 
[32] Currently, the Big Five factors and the MBTI are the two most commonly used personality models, both of which are based on Carl Jung's inside-out dichotomy and provide personality insights. In essence, the Big Five assessment measures how many traits a person has and is a feature-based approach [33], while the MBTI assesses what preferences each person has on dominant functions and represents a choice of how people think and behave, which is a type-based approach that is consistent with the way humans choose to walk.

The MBTI is a psychometric inventory for assessing individuals' preferences in the dominant function, representing their choice in the way they think and behave and exhibiting consistency with the way they choose to walk. The MBTI is a type-based approach first developed by American psychologist Katharine Cook Briggs (1875-1968) and her psychologist daughter, Isabel Briggs Myers, based on the theory of mental types of the renowned psychoanalyst Carl G. Jung and their long-term observations and research on differences in human personality [34,35]. The MBTI serves as a reliable aid for defining personality types based on adequate research and validation [36], including applications in career development and team building [37,38]. Structurally, the division of the MBTI into four dimensions is a natural classification method, which matches machine learning technology and is naturally friendly in terms of technical application. Owing to its practicality, the MBTI is widely used in industries for personnel selection, career planning, and talent development. This evaluation model is used in 115 countries, is available in 29 languages, and has demonstrated satisfactory reliability and validity in recent years [39,40]. Figure 1 presents the MBTI 16 personality composition and structural chart. In the innermost circle lie the four worldviews, each with two dimensions. The outer circles include the 16 personality types around four pairs of human worldview categories.
[41] Myers described the two dimensions of the four worldviews as "preferences" in four pairs of categories: extraverted (E)-introverted (I), sensory (S)-intuitive (N), thinking (T)-feeling (F), and judging (J)-perceiving (P). The differences in the characteristics of the four dimensions are not mutually exclusive but habitual preferences, which also correspond well to Holland's "thing-person" dichotomy. Each of the 16 personality types is associated with a specific pattern of personality traits.[42,43]

However, from the experimental point of view, gait analysis is mostly limited to high-tech gait laboratories or expensive complex systems. With the growing demand for portable and precise analysis systems,[44] wearable technology has transformed the accessibility of gait analysis, providing an opportunity to assess human behaviors outside the laboratory.[45,46] Thus, such technology is becoming increasingly popular and advancing toward mainstream development. In human activity recognition, standard datasets that can be used for learning can be customized.[47] Therefore, in this study, we develop a single microelectromechanical systems (MEMS) inertial sensor (with wireless real-time transmission, small size and portability, low cost, and fast response) that can support the parallel acquisition of the motion states of multiple objects and be used for standard data acquisition of gait. As shown in Figure 2, the physical features and capability of our proposed system are compared with those of previous studies using sensors. The figure clearly indicates the advantages of our proposed system in terms of weight, volume, number of sensors, and "trait detection" functionality over existing systems.
In this study, we demonstrated a one-to-one correspondence between collected standardized gait data and personality type using only IMU sensors. By quantifying gait kinematic parameters to analyze the gait performance of groups of different or the same personality types, we systematically describe the different preferences of the MBTI 16 personality types. The four preference dichotomies comprise the source and focus of energy (extroverted-introverted), the way of understanding the world and information (sensory-intuitive), the benchmark for judging the world (thinking-feeling), and the way of processing or coping with the external world (judging-perceiving). These four aspects provide theoretical explanations for clarifying the relationship between gait and personality. Next, we extracted gait characteristics from IMU data as predictive factors and achieved an accurate description of personality preferences and measurement of the 16 personality types through machine learning algorithms. This significantly enhanced our understanding of personality. We successfully conducted research on human personality using IMU data and proposed a method for modeling IMU data based on gait and personality using machine learning.

Experimental Section

2.1. Materials

The framework of the system approach for assessing personality based on the self-developed intelligent wearable Internet of Things (IoT) sensor is shown in Figure 3. The first box illustrates the data acquisition platform (part A of Figure 3), the second box presents the data processing and analysis (part B), and the last box shows the machine learning algorithmic prediction (part C).
We obtained the data for this study from 114 university students. Specifically, we collected gait data from all the participants who completed an online questionnaire containing the MBTI and personal information. This study was approved by the Research Ethics Committee (Northeastern University Ethics Committee), and all the participants gave informed consent. We applied several exclusion criteria when collecting the data: we limited the age of the participants to 18-28 years, set the education level to cover all stages from the undergraduate level to the postgraduate level, and required the participants to have no history of nerve, muscle, bone, or other diseases. A summary of the subjects' age, height, weight, and body mass index is provided in Table 2.

In addition to the individual circumstances, to properly capture the relationship between walking and personality, considering the actual walking environment is crucial, including the weather, carried weight, route, obstacles, and so on. As the gait of the subjects differed under the observation of different numbers of experimenters,[48] we designed a 50 m × 26 m rectangular route in the school and a MEMS-based sensor, as shown in Figure 4, and instructed the subjects not to carry any load. The sensor was worn on the right ankle to achieve real-time normalized gait data acquisition. The sensor is composed mainly of a six-degree-of-freedom micro-IMU and a new low-power microcontroller (N52832) that supports Bluetooth. The IMU employs the MPU9250 chip, which integrates a 3-axis gyroscope and a 3-axis accelerometer. The entire sensor is a 12.0 mm × 12.5 mm × 1 mm miniature device, its power consumption is as low as 5 mA, its output is accurate to eight decimal places, and it is stable and reliable. We used the sensor to collect the acceleration and angular rate data of the X-, Y-, and Z-axes. The collected data are transmitted wirelessly to a mobile phone through Bluetooth for storage. After familiarization with
the scene, the subjects were asked to walk naturally and independently. At the same time, our acquisition system includes a vision sensor, which transmits a picture of the walking foot to the mobile phone in real time for saving. The video data are kept for subsequent reference and analysis to verify the validity of the data collected by the IMU sensor.

We divided the personality acquisition process into three stages: pretest explanation, test taking, and posttest communication.

Pretest explanation: One day before the test, we explained the purpose, significance, content scope, type, and specific procedures of the test to the participants. Before the test, we instructed the participants to read the instructions and adjust their emotional and physiological states accordingly.

Test taking: This stage involved the participants taking an online quiz after reading the instructions for each section.

Posttest communication: After the participants completed the test, we synthesized the test results and issued a personality result report to each participant (including explanations of the basic dimensions; descriptions of their strengths and possible blind spots, including as partners or parents; career development analysis suggestions; and so on). Finally, we communicated and explained the report results positively and objectively to each participant to ensure maximum understanding.
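As an illustration of the first processing step on the hardware side, the MPU9250's raw 16-bit readings must be scaled to physical units before any analysis. The helper below is a hypothetical sketch, not code from the paper; the scale factors assume the chip's default full-scale ranges (±2 g and ±250 °/s), which the paper does not specify.

```python
def raw_to_physical(ax, ay, az, gx, gy, gz,
                    accel_lsb_per_g=16384.0, gyro_lsb_per_dps=131.0):
    """Convert raw 16-bit MPU9250 readings to g and deg/s.

    The divisors are the chip's sensitivity scale factors at the
    default full-scale ranges (+/-2 g, +/-250 deg/s); other ranges
    use different factors.
    """
    accel = tuple(v / accel_lsb_per_g for v in (ax, ay, az))   # in g
    gyro = tuple(v / gyro_lsb_per_dps for v in (gx, gy, gz))   # in deg/s
    return accel, gyro
```

One sample per axis pair arrives per tick at the 50 Hz sampling rate; downstream code works entirely in these physical units.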
Data Analysis

The sampling rate of the experimental device was 50 Hz. The main sources of error during data collection were the random electronic noise generated by the sensing device and the uncertain jitter of the sensor in motion. To reduce the effect of these errors, we preprocessed the raw data (outlier removal and denoising). In addition, we screened the collected questionnaire information according to indicators such as completion time, degree of completion, and completion speed. The basic information of the sample is shown in Table 2. After the preprocessing, we compared and matched the data to the personality types one by one. Each personality dimension demonstrated differences in the maximum value or time within or between the groups, as shown in Figure 5, which helped in the subsequent data analysis.

We obtained the four-dimension preference index from the questionnaire and divided the participants into 16 personality types. The distribution of the personality types of the subjects is illustrated in Figure 6.

Gait Stability

In exploring the relationship between gait and personality, to ensure the stability of the relationship between the two elements, we conducted tracking experiments on gait and personality. For gait, we used a gait experimental protocol across time, that is, a second gait data acquisition period for all the subjects in the same outdoor environment after a year of normal life. We preprocessed the gait data from the second acquisition period and compared them with the data from the previous year. As seen in Figure 7, the gait of the same personality types was relatively stable before and after one year, whereas the gaits of different personality types exhibited obvious differences. Nevertheless, some subtle differences were evident in the gait data of the same personality types.
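The outlier-removal and denoising preprocessing described above can be sketched as follows. The median-based clipping threshold and the moving-average window are illustrative choices, since the paper does not state its exact filters.

```python
import statistics

def preprocess(signal, window=5, z_thresh=3.0):
    """Two-step preprocessing sketch: clip outliers to
    median +/- z_thresh * stdev, then smooth with a centered
    moving average. Window and threshold are illustrative."""
    med = statistics.median(signal)
    sd = statistics.pstdev(signal) or 1.0  # guard against a flat signal
    lo_b, hi_b = med - z_thresh * sd, med + z_thresh * sd
    clipped = [min(max(x, lo_b), hi_b) for x in signal]
    half = window // 2
    smoothed = []
    for i in range(len(clipped)):
        lo, hi = max(0, i - half), min(len(clipped), i + half + 1)
        smoothed.append(sum(clipped[lo:hi]) / (hi - lo))
    return smoothed
```

In practice one would tune the window to the 50 Hz sampling rate so that smoothing does not blur the stance/swing transitions.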
Personality Stability

At the same time, we also conducted a personality tracking experiment on all the subjects after one year in a different environment (i.e., in different classrooms) to assess the stability of their personality. According to the theory of the 16 personality types, stability should be positive.[40] Accordingly, we conducted a test-retest reliability assessment of the participants' personality. As shown in Figure 8, 83% of the subjects (the generally accepted standard in the field is within the range of 70%-90%) demonstrated no personality changes, indicating the positive stability of personality in this study.

During the gait data collection phase, we performed each experiment continuously. To avoid errors during the transition period, we compared the obtained data with the videos recorded during the experiment. We divided the preprocessed time-series gait data into two parts, the stance phase and the swing phase, and then selected 10 consecutive gait cycles that were relatively stable under natural motion for each individual for the data analysis.
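Splitting the time series into gait cycles, as done above before selecting 10 stable cycles per subject, can be approximated by detecting periodic peaks in one axis of the signal. This is a simplified stand-in for the paper's segmentation rule, which is not stated in detail:

```python
def split_gait_cycles(signal, threshold=None):
    """Segment a gait signal into cycles at local maxima above a
    threshold -- a simplified stand-in for heel-strike detection."""
    if threshold is None:
        threshold = sum(signal) / len(signal)  # default: the signal mean
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] >= signal[i - 1]
             and signal[i] > signal[i + 1]]
    # each pair of consecutive peaks delimits one gait cycle
    return [(peaks[k], peaks[k + 1]) for k in range(len(peaks) - 1)]
```

Each returned `(start, end)` index pair would then be checked against the recorded video and split further into stance and swing sub-phases.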
Feature Extraction

First, we extracted time-domain statistical properties from the preprocessed acceleration and angular velocity data. Second, we calculated gait kinematic parameters from the same data. In analyzing the gait parameters, we adapted the gait parameter algorithm to the ankle area according to the temporal and spatial variation laws of limb movements during human walking. The algorithm extracted multiple features and described microscopic differences in different types of gait from multiple dimensions. We selected common gait kinematic parameters (Table 3), including distance parameters such as step length, stride length, and foot angle (pitch angle and roll angle), and time parameters such as single step time, stride time, cadence, pace, stance phase time, swing phase time, gait cycle, and the ratio of stance phase time to swing phase time, among other parameters. We calculated and analyzed the statistical characteristics of some of the parameters. The final dataset included 170 gait parameter features and 20 personality criteria (16 personality types and four preferences) per person.
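The time-domain statistics that make up part of the 170-feature vector can be sketched for a single axis as follows; the statistic names mirror those used in the text (range, med, var, Hm, min, max, std, mean, rms), while the full pipeline would repeat this per axis and add the kinematic parameters of Table 3.

```python
import math
import statistics

def time_domain_features(x):
    """Per-axis time-domain statistics for one preprocessed signal.
    A sketch of one slice of the 170-feature vector, not the full
    feature set."""
    mags = [abs(v) for v in x if v != 0]  # harmonic mean needs positives
    return {
        "range": max(x) - min(x),
        "med": statistics.median(x),
        "var": statistics.variance(x),          # sample variance
        "Hm": statistics.harmonic_mean(mags) if mags else 0.0,
        "min": min(x),
        "max": max(x),
        "std": statistics.stdev(x),
        "mean": statistics.fmean(x),
        "rms": math.sqrt(sum(v * v for v in x) / len(x)),
    }
```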
Model

Based on the gait motion analysis of the above four personality dimensions, we used machine learning algorithms to evaluate the actual correlation between gait and personality. We took the corresponding gait data as meaningful personality predictors and employed five commonly used machine learning algorithms, namely decision tree, logistic regression, support vector machine, random forest, and naive Bayes, to predict the personality types. The subset accuracy is 1.0 if the entire predicted label set of a sample strictly matches the true label set, and 0.0 otherwise. If ŷ_i is the predicted value of the ith sample and y_i is the corresponding true value, then the proportion of correct predictions is defined as Equation (1), where 1(x) is the indicator function:

accuracy(y, ŷ) = (1 / n_samples) · Σ_{i=1}^{n_samples} 1(ŷ_i = y_i)    (1)

In the binary classification task, the terms "positive" and "negative" refer to the prediction of the classifier, and the terms "true" and "false" refer to whether that prediction corresponds to an external judgment. Given these definitions, we can formulate the following: tp (true positive), fp (false positive), fn (false negative), and tn (true negative). Intuitively, precision is the ability of a classifier not to label negative samples as positive, and recall is the ability of a classifier to find all positive samples. The F-measure (the F_β and F_1 measures) can be interpreted as a weighted harmonic mean of precision and recall; when β = 1, F_β and F_1 are equivalent. These quantities are defined as follows:

precision = tp / (tp + fp)    (2)

recall = tp / (tp + fn)    (3)

F_β = (1 + β²) · precision · recall / (β² · precision + recall)    (4)
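The accuracy, precision, recall, and F-measure defined above translate directly into code. The helper below follows the standard formulas (accuracy = correct/total, precision = tp/(tp+fp), recall = tp/(tp+fn), and F_β as the weighted harmonic mean of precision and recall):

```python
def binary_metrics(y_true, y_pred, beta=1.0):
    """Accuracy, precision, recall, and F-beta for binary labels
    (1 = positive, 0 = negative), following the standard definitions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    b2 = beta * beta
    f = ((1 + b2) * precision * recall / (b2 * precision + recall)
         if precision + recall else 0.0)  # avoid division by zero
    return acc, precision, recall, f
```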
Results

Analysis of the Relationship between Gait Characteristics and Personality

We performed dimensionality reduction on all gait features obtained from the algorithm to reduce their complexity, enhance interpretability, and facilitate visualization. Specifically, we initially computed the correlation coefficients between gait features and personality values and selected features with correlation coefficients greater than 0.3. Subsequently, we applied the random forest feature importance ranking algorithm to rank the features selected in the previous step. We then compared and analyzed the predictive factors that contributed over 90% of the importance in each of the four dimensions and present the results in Figure 9.

We combined the gait data and personality information to explore the correlation between gait and personality and tried to uncover the connection behind the correlation. By visualizing the quantitative data (Figure 9), we systematically analyzed and explained the differences in the four dimensions of E-I, S-N, T-F, and J-P between the groups. We believe that identifying the similarities and differences between the groups of personality preferences can help us understand the reasons behind the covariation between personality and behavior, specifically, which gait parameters can explain the similarities and differences in the groups and how such similarities and differences can be explained. We believe this is necessary to advance our understanding of personality to a considerable extent. To make the analysis of gait and personality more comprehensive, we added an analysis of the relationship between gait and personality for different genders, because gender is an important factor that affects gait and personality analysis.
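The first stage of the two-stage feature selection described above (keeping features whose absolute correlation with the personality value exceeds 0.3) can be sketched as below; the second stage, random-forest importance ranking, would then run on the surviving columns. Function and argument names are illustrative.

```python
import statistics

def correlation_filter(features, target, threshold=0.3):
    """Stage one of the selection: keep features whose absolute
    Pearson correlation with the target exceeds the threshold
    (0.3 in this study). `features` maps name -> column of values."""
    def pearson(x, y):
        mx, my = statistics.fmean(x), statistics.fmean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x)
               * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den if den else 0.0  # constant columns correlate 0
    return {name: col for name, col in features.items()
            if abs(pearson(col, target)) > threshold}
```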
E-I Dimension

The E-I dimension describes the way an individual directs his/her energy. Extroverts feel energized when spending time with others or in busy, active environments; they also tend to be expressive and outspoken. By contrast, introverts feel energized by spending quiet time alone or with a small group; they tend to be reserved and considerate.[41] We found that the difference between the two types in the gait parameters was obvious, as shown in Figure 9a. The range of the angular velocity in the forward direction of the extroverts was wider and more diffuse than that of the introverts, and the values were mostly higher. The acceleration data distribution of the introverts in the forward direction was not as wide as that of the extroverts, and the data were relatively concentrated at large values, indicating that the introverts were more focused on walking than the extroverts. The variance of the speed of the extroverts in the Y-axis direction was higher than that of the introverts on average and more dispersed, indicating that their walking fluctuated considerably. Moreover, the acceleration range in the vertical direction of the extroverts was wider and higher than that of the introverts. Judging from the angular velocity in the Y-axis direction, the values of the introverts were mostly higher than those of the extroverts, and the data changed within a small range. The stride frequency of the introverts was higher than that of the extroverts, and the data were similar and more concentrated in distribution, indicating that the introverts take faster and shorter strides. Overall, our findings were as follows: the individuals who walked briskly, with a large range of motion, and freely were extroverts, whereas those who walked cautiously, with a tight rhythm, and relatively calmly and attentively were introverts. This finding also confirmed the idea that
introverts prefer to enjoy their own time.

Based on the results of the gender analysis of gait and personality presented in Figures 10 and 11, female extroverts generally exhibit better social interaction and socializing skills, which may be reflected in their gait. They are more likely to display a wider roll angle, which could be associated with increased social interaction, observation, and reaction during walking. Extroverts may be more responsive to external stimuli, which could result in a larger roll angle while walking. In contrast, introverts tend to focus more on their own feelings and thoughts, concentrating on maintaining a steady pace while walking. This results in a lower roll angle but higher lateral acceleration during their gait. In males, the median vertical acceleration and the standard deviation of acceleration in the forward direction of extroverted (type E) individuals were significantly lower than those of introverted (type I) individuals. This finding may reflect that extroverted individuals tend to exhibit more stable and direct behavioral traits, while introverted individuals may focus more on internal thinking and reflection, leading to greater variation and fluctuation in their movements. This observation emphasizes that an individual's personality traits can impact their behavior, which can even be observed during everyday activities.
S-N Dimension

The S-N dimension describes how an individual obtains information. Sensibles focus on their senses and are interested in information that can be seen, heard, felt, and so on directly. Intuitives focus on abstract levels of thinking and are interested in theories, patterns, and explanations.[41] We found that the difference between the two types in the gait parameters was obvious, as shown in Figure 9b.

[Figure 9 caption: Gait-parameter differences between groups for a) E-I, b) S-N, c) T-F, d) J-P. Note: "a" represents acceleration, "v" represents velocity, and "g" represents angular acceleration; statistical features include "range" (range), "med" (median), "var" (sample variance), "Hm" (harmonic mean), "min" (minimum), "max" (maximum), "std" (standard deviation), "mean" (average), "cadence" (steps per second), "area" (sum of amplitudes), "rms" (root-mean-square value), and "roll" (angle of counterclockwise rotation around the positive direction of the X-axis).]

From the figure, we can see that in the Y-axis direction, the variance of the angular velocity of the sensibles was higher than that of the intuitives on average, and the fluctuation range was large. In the forward direction, the acceleration of the intuitives was mostly higher than that of the sensibles, and the variance was large, that is, the data fluctuated considerably. The average speed of the intuitives was slow and concentrated, and they walked smoothly. In the vertical direction, the speed of the intuitives hastened, and their walking was stable. The roll value of the sensibles was higher than that of the intuitives, and the data were widely distributed. A possible explanation is that, during walking, sensibles obtain cues from their environment through perception, whereas intuitives, who are alert to their surrounding environment, obtain cues through intuition. Overall, those who walked briskly and actively were sensibles, whereas those who walked steadily and demonstrated effective thinking abilities were intuitives.
According to the results, there were significant differences in walking patterns between N-type and S-type females. Specifically, N-type individuals exhibited a higher range of lateral angular velocity and higher average values in the forward direction while walking. This can be explained by the tendency of N-type individuals to focus on abstract thinking, future trends, and meanings, which is reflected in their more flexible and open walking style, making them more adaptable to different situations. In contrast, S-type individuals pay more attention to details and specific experiences, resulting in a more stable walking pattern that emphasizes maintaining a straightforward direction. In males, S-type individuals exhibited a significantly lower range of vertical acceleration and maximum acceleration while walking compared to N-type individuals. This reflects the sensory orientation of individuals: S-type individuals focus more on the specific details and sensory experiences of their surroundings, leading to a greater emphasis on maintaining a stable and regular pace while walking rather than on abstract thinking and future trends like N-type individuals, resulting in a smaller range of vertical acceleration.

T-F Dimension

The T-F dimension describes how an individual makes decisions. T-people tend to make decisions using logic and are interested in finding the most logical and reasonable options. F-people tend to make decisions using personal values and are interested in how their decisions will affect other people and whether they align with their values.
[41] We found that the difference between the two types in the gait parameters was obvious, as shown in Figure 9c. The area value of the T-people in the forward direction was mostly higher than that of the F-people and very concentrated, indicating that their acceleration range was wider. The acceleration rms of the T-people in the Y-axis direction was large, and the data were scattered. In addition, the area value of the speed of the T-people was large in the vertical direction, and the data range was large. The mean cadence of the F-people was slightly high, and the distribution was concentrated, indicating that their walking rhythm was fast and close. Overall, the T-people walked steadily and cautiously, whereas the F-people walked tightly and quickly.

In our observations of female walking patterns, we found that T-type individuals exhibited significantly lower maximum lateral angular velocity while displaying significantly higher average vertical acceleration compared to F-type individuals. This can be attributed to the tendency of T-type individuals to prioritize logic and facts, focusing more on details and planning, resulting in a preference for stable and organized movements during walking. In contrast, F-type individuals place more emphasis on emotions and interpersonal relationships, leading to fewer dynamic changes in the vertical direction. In males, T-type individuals showed a significantly higher range of forward acceleration and maximum lateral acceleration compared to F-type individuals. This reflects the personality trait of T-type individuals, who prioritize logic and rationality, leading them to adopt a more direct and goal-oriented walking style with higher forward and lateral acceleration. In contrast, F-type individuals place more emphasis on emotions and values, leading to a preference for smooth and cautious strides while walking, resulting in lower
acceleration values.

J-P Dimension

The J-P dimension describes how a person processes the structure of the world around him/her. J's appreciate structure and order and enjoy following a plan, whereas P's appreciate flexibility and spontaneity, enjoy being open, and can change their minds at any time.[41] We found that the difference between the two types in the gait parameters was obvious, as shown in Figure 9d. From the figure, we can see that in the vertical direction, the average area value of the speed of the J's was slightly higher than that of the P's, but the variation was small, indicating that the velocity range in this direction was wider. The acceleration range of the J's in the forward direction was larger than that of the P's, and the distribution was more concentrated, indicating that the acceleration fluctuation range of the J's when walking was wider. The harmonic mean and median of the angular velocity of the J's were small and concentrated in the Y-axis direction, which meant that their angular velocity was concentrated at relatively small values. Overall, the J's walked relatively steadily, whereas the P's walked smoothly and freely, focusing on creativity and randomness.
When walking, females who are J-type individuals exhibit significantly lower maximum vertical angular velocity and pitch angle integration area compared to P-type individuals. This can be attributed to the tendency of J-type individuals to prioritize organized and planned actions, as well as a stronger sense of responsibility. Therefore, J-type individuals tend to exhibit more cautious and controlled movements while walking, which is reflected in their lower vertical angular velocity and pitch angle integration area. In males, J-type individuals showed a significantly lower variance in forward acceleration and average vertical acceleration compared to P-type individuals. This reflects the characteristics of the two personality types. J-type individuals typically prioritize organized and planned actions, preferring stability and accuracy; therefore, they exhibit a lower variance in forward acceleration while walking, helping to maintain a consistent pace. In contrast, P-type individuals are more open and adaptable, exhibiting a higher average vertical acceleration and reflecting a faster response to environmental changes.

Personality Measurement

The classification results of the five classifiers are shown in Figure 12. Compared with the other methods, random forest performed best on all four evaluation metrics: accuracy, precision, recall, and F1-score. Thus, we selected the random forest algorithm to predict the four MBTI preferences.

The confusion matrix results are presented in Figure 13a. The accuracy for the E-I dimension was 95%, for the S-N dimension 96%, for the T-F dimension 91%, and for the J-P dimension 91%; all average accuracy rates were higher than 90%. This accuracy exceeds that of current predictions based on features such as face, voice, or everyday behaviors, demonstrating the high interpretability of the gait parameters selected in this study and their actual correlation with the 16 personality types.
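A plain k-fold cross-validation loop of the kind used to estimate classifier reliability can be sketched as follows. The `train_and_predict` callable is a placeholder for any of the five classifiers, not an API from this study:

```python
def kfold_accuracy(X, y, train_and_predict, k=5):
    """Mean accuracy over k folds; the average fold accuracy serves
    as the estimate of the algorithm's accuracy.
    train_and_predict(X_tr, y_tr, X_te) -> predicted labels for X_te."""
    n = len(X)
    fold_acc = []
    for f in range(k):
        test_idx = set(range(f, n, k))  # strided split keeps folds balanced
        X_tr = [x for i, x in enumerate(X) if i not in test_idx]
        y_tr = [v for i, v in enumerate(y) if i not in test_idx]
        X_te = [x for i, x in enumerate(X) if i in test_idx]
        y_te = [v for i, v in enumerate(y) if i in test_idx]
        preds = train_and_predict(X_tr, y_tr, X_te)
        fold_acc.append(sum(p == t for p, t in zip(preds, y_te)) / len(y_te))
    return sum(fold_acc) / k
```

Plugging in each of the five classifiers and comparing the returned means reproduces the kind of comparison shown in Figure 12.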
Furthermore, we chose a cross-validation method to verify the reliability of the model and used the average accuracy across folds as an estimate of the accuracy of the algorithm to describe the reliability of the experimental method. For the analysis of the experimental results, we selected metrics such as accuracy, precision, recall, and the F1-score (Table 4) to evaluate the performance of our trained models.

Based on the prediction results of the above four dimensions and our existing sample size, we predicted the 16 MBTI personality types. Owing to the uneven distribution of the personality types and the intersection of the personality dimensions within each personality type, we used a boosting-based technique to create a robust learner that can make accurate predictions of the 16 personality types for the participants. The results from the questionnaire test were used as the true labels, and the prediction accuracy over the 16 types reached 88.16% (i.e., the percentage of correctly predicted samples among all samples). The prediction accuracy of each type can be found in the diagonal cells of Figure 13b, where the other numbers show how the samples were misclassified.

Discussion

Some previous studies confirmed the correlation between human walking patterns and psychological characteristics, including personality traits.[23,24,47] However, limited by measurement methods for gait and personality or by the experimental environment, the application of engineering technology methods in psychological research is uncommon, and reports on systematic and quantitative measurement and research on personality through gait behavior are few.
[49] In this study, we first determined the subjects' 16 personality types through a standard MBTI assessment and measured their ankle movement data using a self-developed wearable sensor. Second, we computed the gait parameters programmatically to quantify the correlation between gait and personality with machine learning algorithms and systematically described the specific relationship between the four dimensions of the MBTI personality types and gait behavior. We found that gait had a high degree of explanatory power for the preferences of the E-I and S-N dimensions, with an accuracy rate of over 95%. Specifically, the extroverts walked briskly, with a considerable range of motion, and freely, giving the impression that they were very energetic. By contrast, the introverts tended to be slightly restrained and tight paced when walking as well as relatively calm and focused. The sensibles walked briskly and regularly and were accustomed to obtaining clues from the environment through perception during the walking process. Meanwhile, the intuitives had effective thinking skills and walked steadily. The accuracy rate of the gait parameters in describing the preferences of the T-F and J-P dimensions was above 90%, slightly lower than that for the E-I and S-N dimensions. The T-people walked steadily and cautiously and tended to be effective thinkers, whereas the F-people walked tightly and hastily. Furthermore, the P's were creative and casual in their walking movements, walking relatively actively, loosely, and adventurously, whereas the J's walked relatively steadily. These findings extend research on the correlation between gait and personality, and some are consistent with those of previous studies.
[21] In addition, we verified the timeliness and stability of the gait-personality relationship model, and the results showed that gait was relatively stable for at least one year. The personality comparisons also met the standard test-retest level; thus, the results of both assessments were positive. However, we also noticed some limitations of our study. Although our proposed method is relatively objective, and the measurement accuracy obtained for the 16 personality types is as high as 88.16% (higher than that obtained by other current methods of personality research, as shown in Table 5), the model may have errors due to the uncertainty or instability of human personality type variation over time (i.e., the reference data for personality type determination using the MBTI may drift over time). Among such errors, the low accuracy for the ISTJ, ESFJ, and ISFP types may be due to "sample misjudgment" owing to changes in certain personality dimensions during the retest. Small changes in the model can make a big difference in the results, which is especially evident in personality. Therefore, future research can expand the sample size to examine more subjects (different ages, cultural differences, geographical differences, and so on) and use other advanced AI algorithms to evaluate the validity of gait movement for the personality measures.
Conclusion

The walking styles of people of different personality types show differences and details in gait features. Currently, vision- or voice-based personality prediction systems have an accuracy rate of around 80% or less. In this study, we introduce a system for MBTI personality measurement through gait using a single wireless IoT wearable motion sensor. The system collects human gait movement data and uses them for the accurate measurement of the four dimensions of personality (E-I, S-N, T-F, and J-P) and the 16 corresponding personality types (ISTJ, ISFJ, INFJ, INTJ, ISTP, ISFP, INFP, INTP, ESTP, ESFP, ENFP, ENTP, ESTJ, ESFJ, ENFJ, and ENTJ). We extracted 170 gait parameter features from the ankle motion data based on an optimized algorithm and determined the most significant features to describe the gait differences along the four dimensions of personality. To perform the binary classification in each personality dimension, we tested a variety of machine learning algorithms (decision tree, logistic regression, support vector machine, random forest, and naive Bayes) to find the model with the highest accuracy. Based on the experimental data, we observe that the random forest algorithm demonstrates the best performance, with a prediction accuracy of more than 90% for each of the four MBTI dimensions. Finally, we use a boosting-based learner to predict the 16 personality types and obtain a measurement accuracy of 88.16%.

The comparison with other current personality research methods (Table 5) is as follows:

Study | Personality model | Data modality | Method | Accuracy
Beyan et al. [50] | Big Five | Video | CNN + LSTM | 77%
Marouf et al. [51] | Big Five | Text | NB, RF, DT, SLR, SVM | 61.89-72.13%
Mawalim et al. [52] | Big Five | Multimodal | RF | 63-70%
Our work | MBTI | IMU gait data | NB, RF, DT, LR, SVM | 88.16%

Figure 1. MBTI 16 personality composition and structural chart. In the innermost circle are the four worldviews, each with two dimensions; the outer colorful circles show the 16 personality types.
Figure 2. Radar chart comparing this study and other related studies in various dimensions. Note: "Analysis method" represents the depth of the method in the study, which is accumulated sequentially from the inside to the outside.
Figure 3. Overall system diagram. Part A shows the experiment and hardware diagram; the self-developed MEMS IoT sensor is on the left, and the simulated experimental environment is on the right. Part B presents the data algorithm analysis; the left side shows the main differences in gait characteristics of the four personality dimensions, and the right side shows the algorithm structure. Part C is the prediction results of the algorithm; the lower part is the prediction results of the four personality dimensions, which further correspond to the 16 personality measurements above.
Figure 4. Experimental setup and data map. a) Experimental environment and the custom-built sensing device. b) The corresponding gait data graph while walking.
Figure 5. Preprocessed data corresponding to 16 personality types; the upper solid line is the acceleration data, and the lower dotted line is the angular velocity data. Note: the symbol "a" represents acceleration, and "σ" represents angular velocity. The unit of acceleration g is 9.8 m s−2.
Figure 6. Distribution map of subjects' personality types. a) Distribution and proportion of personality types. b) Preference distribution of all subjects in four dimensions.
Figure 7. Gait stability analysis. Comparison of gait data after preprocessing: data of one gait cycle of subjects with 16 personality types before and after a year; the shaded area is the gait fluctuation range over one year. Note: the unit of acceleration g is 9.8 m s−2, and Dg represents the angular velocity.
Figure 8. Personality stability analysis. Graph of the relative stability of personality types. The vertical axis represents the number of people.
Figure 10. Comparison of gait differences in four dimensions of female personality.
Figure 11. Comparison of gait differences in four dimensions of male personality.
Figure 12. Comparison of classification results of personality preferences of different dimensions based on five machine learning models.
Figure 13. Confusion matrix for predicting the four dimensions and 16 personality types of MBTI from gait parameters. The color bar is shown on the right. Note: the darker the color, the higher the accuracy rate; parts without number marks have an accuracy rate of 0.
Table 1. Recent research work on the classification/recognition of various "human information" based on inertial measurement unit (IMU) sensors.
Table 2. Summary information of the test subjects.
Table 3. Gait parameters used in this study.
Table 4. Performance index evaluation of optimal results.
Table 5. Comparison with results from other current personality research methods.
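The classification scheme described in the conclusion, one binary classifier per MBTI dimension whose four outputs combine into one of the 16 types, can be sketched as follows. This is a minimal illustration with synthetic data standing in for the 170 gait features; it is not the authors' code, and the random-forest settings are assumptions.

```python
# Sketch of the per-dimension MBTI classification scheme: four binary
# random-forest classifiers (E-I, S-N, T-F, J-P) combined into a 16-type
# label. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 200, 170          # 170 gait parameters per subject
X = rng.normal(size=(n_samples, n_features))

DIMENSIONS = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# One random-forest model per dimension, trained on placeholder labels.
models = []
for i in range(len(DIMENSIONS)):
    y = (X[:, i] > 0).astype(int)         # synthetic binary labels
    models.append(
        RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    )

def predict_type(x):
    """Combine the four binary decisions into a four-letter MBTI type."""
    letters = []
    for (pos, neg), clf in zip(DIMENSIONS, models):
        letters.append(pos if clf.predict(x.reshape(1, -1))[0] == 1 else neg)
    return "".join(letters)

print(predict_type(X[0]))
```

In the paper's pipeline the per-dimension models are selected by accuracy and a boosting-based learner produces the final 16-type prediction; the direct letter-concatenation above is a simplification.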
8,927.8
2023-12-19T00:00:00.000
[ "Computer Science", "Psychology" ]
Redetermination of ethyl (3a-cis)-3a,8b-dihydroxy-2-methyl-4-oxo-3a,8b-dihydro-4H-indeno[1,2-b]furan-3-carboxylate monohydrate

The crystal structure of the title compound, C15H14O6·H2O, has been redetermined from single-crystal X-ray data. The structure was originally determined by Peet et al. [J. Heterocycl. Chem. (1995), 32, 33–41], but the atomic coordinates were not reported or deposited in the Cambridge Structural Database. The ethyl substituent is disordered over two sites with refined occupancies of 0.815 (6) and 0.185 (6). The indeno group is almost planar [maximum deviation 0.0922 (14) Å] and makes an angle of 68.81 (4)° with the furan ring. The fused-ring molecules are assembled in pairs by intermolecular O—H⋯O hydrogen bonds. The resulting dimers are also hydrogen bonded to the water molecules, forming double-stranded chains running along the a axis. The indeno moiety is almost planar, with atoms C11 and C12 deviating by −0.0574 (13) and 0.0922 (14) Å, respectively, from the indeno plane.
Experimental

A mixture of ninhydrin (1.78 g) and ethyl acetoacetate (1.27 ml) in a 1:1 molar ratio was refluxed in acetone for thirty minutes in the presence of Mg/HCl. The reaction mixture was filtered and dried at low pressure. The dried mass was crystallized from a diethyl ether/hexane solvent system to give transparent crystals (m.p. 373–376 K) of the title compound (2.68 g). The melting point was determined on a Kofler block melting point apparatus and is uncorrected.

Refinement

Hydrogen atoms not belonging to the water molecule were placed at calculated positions and refined as riding on their parent atoms, using SHELXL97 (Sheldrick, 2008) defaults [C—H = 0.93 Å, N—H = 0.86 Å and Uiso(H) = 1.2Ueq(C,N)]. The hydrogen atoms of the water molecule were included in the refinement riding on the O atom with Uiso = 1.5Ueq(O). We chose to model the positional disorder of the ethyl substituent with two groups. The ethyl C atoms were refined anisotropically with the Uij values restrained to behave isotropically with the ISOR instruction, and each C atom of one group was given the same displacement parameters as the corresponding atom of the other group with EADP instructions. The geometries of the two groups were made equivalent with SADI instructions.

Special details

Geometry. All e.s.

Refinement. Refinement of F2 was against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F2; conventional R-factors R are based on F, with F set to zero for negative F2. The threshold expression F2 > σ(F2) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
865.2
2009-10-03T00:00:00.000
[ "Chemistry" ]
KINETIC SPECTROPHOTOMETRIC DETERMINATION OF SOME FLUOROQUINOLONE ANTIBIOTICS IN BULK AND PHARMACEUTICAL PREPARATIONS

A simple and sensitive kinetic spectrophotometric method was developed for the determination of some fluoroquinolone antibiotics (gemifloxacin mesylate, moxifloxacin hydrochloride and gatifloxacin) in bulk and in pharmaceutical preparations. The method is based upon a kinetic investigation of the oxidation reaction of the drugs with alkaline potassium permanganate at room temperature for a fixed time of 20 min for gemifloxacin and 15 min for moxifloxacin or gatifloxacin. The absorbance of the coloured manganate ion was measured at 610 nm. The absorbance–concentration plots were rectilinear over the ranges of 2.0–20, 4.0–24 and 4.0–40 μg mL−1 for gemifloxacin, moxifloxacin and gatifloxacin, respectively. The concentrations of the studied drugs were calculated using the corresponding calibration equations for the fixed-time method. Determination of the studied drugs by the fixed-concentration and rate-constant methods was also feasible with the calibration equations obtained, but the fixed-time method was found to be more applicable. The different experimental parameters affecting the development and stability of the colours were carefully studied and optimized. The proposed method was applied to the determination of the studied drugs in pharmaceutical formulations.
INTRODUCTION

Fluoroquinolones are second-generation members of the quinolone antibiotics, fluorinated at position 6 and bearing a piperazinyl moiety at position. They are considered among the most effective agents against gram-positive and gram-negative pathogens, used to combat infections caused by microorganisms that are resistant to other antimicrobials, such as the tetracyclines. They also have some activity against mycobacteria, mycoplasmas, rickettsias, and the protozoan Plasmodium falciparum [1,2]. There is a substantial body of literature on both the mechanism of their action as DNA gyrase inhibitors and the influence of systematic structural modifications on their biological activity.

Kinetic spectrophotometric methods are attracting great interest in pharmaceutical analysis [43,45]. These methods offer some specific advantages [46], such as (1) simplicity, owing to the elimination of some experimental steps such as filtration and extraction prior to absorbance measurements; (2) high selectivity, due to measuring the increase or decrease of the absorbance as a function of reaction time instead of a single absorbance value; and (3) absence of interference from background sample colour and/or turbidity, and the possibility of avoiding interference from other active compounds present in the commercial product that may resist the established reaction conditions.
The literature still lacks kinetics-based analytical procedures for the determination of the investigated drugs (GMF, MOX and GTF) in commercial dosage forms. No attempts have yet been made to determine the studied drugs by any kinetic spectrophotometric method. The present study describes the development and validation of a selective and simple kinetic spectrophotometric method for the determination of GMF, MOX and GTF by measuring the absorbance at 610 nm after oxidation with alkaline KMnO4, in an attempt to evaluate the studied drugs in pure form as well as in dosage forms.

EXPERIMENTAL

Apparatus

All absorption spectra were recorded using a Kontron 930 (UV-Visible) spectrophotometer (Germany) with a scanning speed of 200 nm/min and a band width of 2.0 nm, equipped with 10 mm matched quartz cells.

Materials and reagents

All the chemicals were of analytical reagent grade and the solvents were of spectroscopic grade. Pharmaceutical-grade gemifloxacin mesylate (GMF) was supplied by El-Obour Modern Pharmaceutical Industries Co., El-Obour City, Kaliobeya, Egypt; its potency was 99.99 ± 0.39%. Moxifloxacin hydrochloride (MOX) reference standard was provided by Sabaa, Kahira Company, Cairo, Egypt; its purity was 100.01 ± 0.71%. Gatifloxacin sesquihydrate (GTF) reference standard was provided by EPCI, Egyptian Company for Pharmaceutical and Chemical Industries, S.A.E., Beni Suef, Egypt; its potency was 99.65 ± 0.74%.
Pharmaceutical preparations

The different pharmaceutical preparations were purchased from commercial sources in the local market. Factive tablets were obtained from Oscient Pharmaceuticals Corporation, USA; Flobiotic tablets were obtained from Hikma Pharmaceuticals PLC, Cairo, Egypt; and GemiQue tablets were obtained from El-Obour Modern Pharmaceutical Industries Co., El-Obour City, Kaliobeya, Egypt, labelled to contain 320 mg GMF per tablet. Avelox® tablets were obtained from Bayer, Germany; Moxiflox tablets were obtained from EVA Pharm, Cairo, Egypt; and Moxifloxacin tablets were obtained from Sabaa International Company for Pharmaceuticals and Chemical Industries, S.A.E., labelled to contain 400 mg MOX per tablet. Tequin® tablets were obtained from Bristol Myers Squibb Company, Egypt; Floxin tablets were obtained from Global Napi Co., Egypt; and Gatiflox tablets were obtained from EPCI, Egyptian Company for Pharmaceutical and Chemical Industries, S.A.E., Beni Suef, Egypt, labelled to contain 400 mg GTF per tablet.

Stock solutions

Stock standard solutions of GMF, MOX and GTF (100 µg mL−1) were prepared by dissolving an exact weight (10 mg) of each drug in 2.0 mL of 0.05 M NaOH and diluting to 100 mL with double-distilled water in a 100 mL measuring flask. These solutions were found to be stable for at least one week without alteration when kept in the refrigerator.

Reagents

Potassium permanganate (Merck, Germany): a 5.0 × 10−3 M aqueous solution was freshly prepared by dissolving 83.3 mg of pure KMnO4 in 100 mL of hot double-distilled water followed by filtration through sintered glass. Potassium permanganate solutions were freshly prepared and their molarity checked titrimetrically. Sodium hydroxide (El-Nasr Chemical Co., Cairo, Egypt): a 0.5 M aqueous solution was prepared by dissolving 2.0 g NaOH in 100 mL of double-distilled water.
Recommended general procedures

Initial-rate method. Aliquots of standard GMF, MOX or GTF solution (100 µg mL−1; 0.1–1.6 mL) were transferred into a series of 10 mL calibrated volumetric flasks. To each flask were added 1.0 or 1.5 mL of 0.5 M NaOH for (GMF or GTF) and MOX, respectively, followed by 1.0 or 1.5 mL of 5.0 × 10−3 M KMnO4 for (GMF or GTF) and MOX, respectively, and the volume was made up to the mark with double-distilled water at ambient temperature (25 ± 2 °C). After mixing, the contents of each flask were immediately transferred to the spectrophotometric cell and the increase in absorbance of the coloured manganate ion as a function of time was measured at 610 nm. The initial rate of the reaction (ν) at different concentrations was evaluated by measuring the slope of the tangent to the absorbance–time curve. Calibration graphs were constructed by plotting the logarithm of the initial rate of reaction (log ν) versus the logarithm of the molar concentration of the drug (log C). The amount of the drug was calculated either from the calibration graph or from the regression equation.

Fixed-time method. Fixed times of 9.0, 12 and 15 min for GMF, MOX and GTF, respectively, were selected for the fixed-time method. At each preselected fixed time, the absorbance of each drug solution was measured at 610 nm against a reagent blank prepared similarly but without the drug. The calibration curve was obtained by plotting the absorbance against the initial concentration of drug. The amount of the drug was computed either from the calibration curve or from the regression equation.

Procedure for the determination of the studied drugs in pharmaceutical formulations

A total of 20 tablets of each drug were crushed and finely powdered. An accurately weighed quantity of the mixed tablet contents, equivalent to 100 mg of the drug, was extracted into 50 mL of 0.005 M hydrochloric acid solution, stirred for 15 min and then filtered using Whatman No.
42 filter paper into a 100 mL volumetric flask to remove the insoluble excipients. The residue was washed with two 10 mL portions of 0.005 M hydrochloric acid solution, and the washings were added to the filtrate and diluted to volume with the same solvent. Aliquots of the tablet solutions were treated as under the recommended procedures above. The nominal content of the tablets was determined either from a previously plotted calibration graph or using the corresponding regression equation.

Determination of the molar ratio

Job's method of continuous variation [47] was employed. Stock equimolar solutions (5.0 × 10−4 M) of drug and reagent were prepared. Series of 10 mL portions of the stock solutions of the drugs and the analytical reagent were made up comprising different complementary ratios [0:10, 1:9, 9:1, 10:0, inclusive] in 10 mL calibrated flasks. The solutions were further manipulated as described under the general recommended procedures for each reagent and data treatment.

Absorption spectra

The absorption spectrum of aqueous potassium permanganate solution in alkaline medium exhibits an absorption band at 525 nm. The addition of GMF, MOX or GTF to this solution produced a new characteristic band at 610 nm (Figure 1) due to the formation of the green-coloured manganate ion, which resulted from the oxidation of the studied drugs by potassium permanganate (KMnO4) in alkaline medium (NaOH). The intensity of the colour increased with time, and therefore a kinetically based method was developed for the spectrophotometric determination of these drugs in pharmaceutical formulations. The absorbance of the oxidation product remains stable for at least 8.0 hours.
Optimization of the reaction conditions

The optimum conditions for the formation of the green-coloured manganate ion were studied and maintained throughout the experiment. The spectrophotometric properties of the coloured product, as well as the different experimental parameters affecting the colour development and its stability, were carefully studied and optimized. Each factor was changed individually while the others were kept constant. These factors include the concentrations of the reagents (KMnO4 and NaOH), the diluting solvent, and temperature.

Effect of KMnO4 concentration

The effect of the KMnO4 concentration on the initial rate of the reaction (ν) was studied using different volumes (0.25–3.0 mL) of 5.0 × 10−3 M KMnO4, i.e. over the range 1.25 × 10−4 to 1.5 × 10−3 M. The initial rate of reaction, and hence the maximum absorbance, increased with increasing KMnO4 concentration and became constant at 1.0 mL of 5.0 × 10−3 M KMnO4 (5.0 × 10−4 M in the final solution) for GMF or GTF and 1.5 mL of 5.0 × 10−3 M KMnO4 (7.5 × 10−4 M) for MOX. Thus, the adoption of 5.0 × 10−3 M KMnO4 stock proved adequate for the maximum concentration of GMF, MOX or GTF used in the determination process (Figure 2).

Effect of sodium hydroxide concentration

The influence of the NaOH concentration on the absorbance of the reaction product (MnO4 2−) and on the reaction rate was also examined by taking 20, 24 and 40 µg mL−1 of GMF, MOX and GTF, respectively, with 1.5 mL of 5.0 × 10−3 M KMnO4 solution and varying volumes (0.25–3.0 mL) of 0.5 M NaOH. The maximum absorbance was obtained with 1.0 and 1.5 mL of 0.5 M NaOH for (GMF or GTF) and MOX, respectively, after which further increases in the volume of NaOH caused no change in absorbance. Hence, these volumes of 0.5 M NaOH were found to be the most suitable for maximum absorbance (Figure 3).
Effect of temperature

The effect of temperature on the initial rate of reaction was studied. The reaction was carried out at room temperature (25 °C, 298 K) and at elevated temperatures (40–100 °C) using a thermostatically controlled water bath. It was observed that the studied drugs react faster with potassium permanganate and that the colour intensity slightly increased with temperature. At higher temperatures the reaction product decomposed and manganese dioxide (MnO2) was produced, resulting in poor reproducibility. Nevertheless, the subsequent experiments were carried out at room temperature to simplify the analytical procedure (i.e. to avoid extra equipment such as a water bath), at the expense of the limit of detection.

Effect of diluting solvent

The effect of the diluting solvent was also studied. Different solvents such as double-distilled water, ethanol, acetonitrile and dimethyl sulfoxide were tried. Double-distilled water was found to be the best solvent, as it gave the highest absorbance reading, which offered a distinct advantage over other methods.

Stoichiometry and reaction mechanism

The stoichiometry of the reaction of KMnO4 with each of the studied drugs was investigated by Job's method [47]. The results indicated that the ratio of drug to KMnO4 was 1:1. Based on this ratio, the reaction pathway was postulated to proceed as shown in Scheme 2. The reaction was first order with respect to the drug concentration.

Scheme 2. The reaction pathway of GTF with KMnO4 in the presence of NaOH (GTF → oxidized GTF).
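The continuous-variation (Job's) analysis used above to establish the 1:1 drug-to-KMnO4 ratio can be sketched numerically: absorbance is recorded for complementary mole-fraction mixtures, and the mole fraction at maximum absorbance gives the combining ratio. The absorbance values below are illustrative, not the paper's data.

```python
# Sketch of Job's method of continuous variation: for a 1:1 reaction the
# measured absorbance peaks at a drug mole fraction of 0.5, so the
# drug-to-oxidant ratio x/(1-x) evaluates to 1. Illustrative data only.
import numpy as np

mole_fraction_drug = np.linspace(0.0, 1.0, 11)      # 0:10, 1:9, ..., 10:0 series
# Idealized 1:1 response: absorbance proportional to x * (1 - x).
absorbance = mole_fraction_drug * (1.0 - mole_fraction_drug)

x_max = mole_fraction_drug[np.argmax(absorbance)]   # mole fraction at peak
ratio = x_max / (1.0 - x_max)
print(ratio)                                        # 1:1 stoichiometry
```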
Kinetics study of the reactions

The stoichiometry of the reaction was also studied by the limiting logarithmic method [48]. Two sets of experiments were performed. In the first set, the KMnO4 concentration was varied while the drug concentration was kept constant. In the second set, the KMnO4 concentration was kept constant while the drug concentration was varied. The logarithm of the absorbance was plotted against the logarithm of the [drug] or [KMnO4] concentration (Figure 4A and B), and the slope of each line was evaluated to determine the order of reaction with respect to the drug or to KMnO4. The slopes of the two straight lines were found to be close to unity in each case.

A plot of log absorbance versus log [KMnO4] gave slope values of 0.9125, 1.0063 and 0.8609 for GMF, MOX and GTF, respectively. A plot of log absorbance versus log [drug] gave straight lines with slopes of 0.9498, 0.8566 and 0.9776 for GMF, MOX and GTF, respectively. Thus, it was concluded that the combining molar ratio of drug to KMnO4 is 1:1, i.e. one mole of potassium permanganate is consumed per mole of drug.

Evaluation of the kinetic parameters

Under the optimized experimental conditions, the concentration of each drug was determined using an excess of KMnO4 and NaOH with respect to the initial drug concentration, so that pseudo-first-order conditions were obtained. The initial rate of the reaction was found to obey a rate law (Eq. 1) in the concentrations of drug, KMnO4 and NaOH; for C(KMnO4) ≥ 1.0 × 10−3 M and C(NaOH) ≥ 1.0 × 10−1 M, Eq. 1 reduces to Eq. 2.
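The limiting logarithmic method described above reduces to a straight-line fit of log(absorbance) against log(concentration), whose slope estimates the reaction order with respect to the varied species. A minimal sketch with illustrative first-order-consistent data (not the paper's measurements):

```python
# Limiting logarithmic method: fit log10(A) vs. log10(C); the slope is the
# apparent order of reaction in the varied species. Illustrative data in
# which absorbance is exactly proportional to concentration (order 1).
import numpy as np

conc = np.array([4.12e-6, 8.24e-6, 1.65e-5, 2.47e-5, 4.12e-5])  # mol/L
absorbance = 1.2e4 * conc                 # first-order response, A ∝ C

order, log_k = np.polyfit(np.log10(conc), np.log10(absorbance), 1)
print(round(order, 3))                    # slope near 1 => first order
```

In the paper the analogous slopes (0.86 to 1.01 for the two species and three drugs) are rounded to unity, giving the 1:1 stoichiometry.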
Rate of the reaction = ΔA/Δt = K′C^n (2), where K′ is the pseudo-first-order rate constant, C is the concentration of drug, and n is the order of the reaction. The rate of the reaction may be estimated by the variable-time method as ΔA/Δt, where A is the absorbance and t is the time in seconds. Taking logarithms of the rates and drug concentrations (Table 1), Eq. 2 is transformed to

log(rate) = log(ΔA/Δt) = log K′ + n log C. (3)

Linear regression by the method of least squares was used to evaluate the slope, intercept, correlation coefficient, pseudo-order rate constant and order of the reaction, which are given in Table 1.

The apparent rate constant and activation energy

Absorbance–time curves at different temperatures (25–100 °C) were generated using a fixed concentration of each drug and of KMnO4 (5.0 × 10−3 M). From these curves the apparent rate constants were calculated. The activation energy, defined as the minimum kinetic energy that a molecule must possess in order to undergo reaction, was determined using the Arrhenius equation [49],

log k = log A − Ea/2.303RT, (4)

where k is the apparent rate constant, A is the frequency factor, Ea is the activation energy, T is the absolute temperature (°C + 273), and R is the gas constant (1.987 cal K−1 mol−1). The values of log k were plotted as a function of 1/T. Straight lines with slopes (= −Ea/2.303R) of −258, −231 and −251 for GMF, MOX and GTF, respectively, were obtained (Figure 8). From these data, the activation energies were calculated to be 4.93, 4.43 and 4.81 kJ mol−1 for GMF, MOX and GTF, respectively. These low activation energies indicate that KMnO4 is a useful reagent for developing a sensitive spectrophotometric method for the determination of GMF, MOX and GTF.
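The activation energies quoted above follow directly from the reported Arrhenius slopes via Ea = −slope × 2.303R. Using the SI gas constant (the text quotes R in calories, but reports Ea in kJ mol−1) reproduces the stated values to within rounding:

```python
# Activation energy from the slope of log k vs. 1/T:
# slope = -Ea / (2.303 * R)  =>  Ea = -slope * 2.303 * R.
R = 8.314  # gas constant, J mol^-1 K^-1 (SI)

slopes = {"GMF": -258, "MOX": -231, "GTF": -251}   # from Figure 8
for drug, slope in slopes.items():
    Ea_kJ = -slope * 2.303 * R / 1000.0            # convert J to kJ per mol
    print(f"{drug}: Ea = {Ea_kJ:.2f} kJ/mol")      # 4.94, 4.42, 4.81 kJ/mol
```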
Appraisal of kinetic methods

The quantitation of the studied drugs GMF, MOX and GTF under the optimized experimental conditions described above is pseudo-first-order with respect to their concentrations, since the KMnO4 concentration was at least 121 times the initial concentration of GMF, at least 50 times that of GTF, and at least 164 times that of MOX. The rate of reaction is then directly proportional to the drug concentration, following the pseudo-first-order rate equation

rate = K′C, (5)

where K′ is the pseudo-first-order rate constant. Equation 5 was the basis for several experiments performed to obtain the drug concentration from the rate data. Initial-rate, rate-constant, fixed-concentration and fixed-time methods [50] were tested, and the most suitable analytical method was selected taking into account applicability, sensitivity (i.e. the slope of the calibration graph), correlation coefficient (r) and intercept (a).

Initial-rate method

In this method, graphs of the rate (at the beginning of the reaction) versus drug concentration were not easy to obtain, because the first step of the reaction was too fast to follow, so tangents to the curve at zero time could not easily be drawn. Therefore, this method could not be applied.
Rate-constant method

The best way to obtain an average K′ value for the reaction is to plot the logarithm of the concentration, or the logarithm of any related property, versus time. The slope of the line is −K′/2.303, from which the rate constant is obtained. If a straight line is obtained, the reaction is first order. Graphs of log(absorbance) versus time over the concentration ranges 4.12 × 10−6 to 4.12 × 10−5 M (2.0–20 µg mL−1) for GMF, 9.13 × 10−6 to 5.48 × 10−5 M (4.0–24 µg mL−1) for MOX and 9.94 × 10−6 to 9.94 × 10−5 M (4.0–40 µg mL−1) for GTF were plotted and all appeared rectilinear. Pseudo-first-order rate constants (K′) corresponding to different concentrations [C] of the investigated drugs were calculated from the slopes multiplied by −2.303 (Table 2).

Validation of the method

Linearity. After optimizing the reaction conditions, the fixed-time method was applied to the determination of the studied drugs in pure form over the concentration ranges 2.0–20, 4.0–24 and 4.0–40 µg mL−1 for GMF, MOX and GTF, respectively. Linear plots (n = 6) with good correlation coefficients were obtained. Table 5 presents the performance data for the proposed spectrophotometric method, including molar absorptivities, Sandell's sensitivities, linearity ranges and regression equations calculated from the calibration graphs. Other statistical parameters, such as the intercept (a), the slope (b), the relative standard deviation and the relative error percentages, are also given in Table 5. The high correlation coefficients of the regression equations indicate good linearity over the working concentration ranges. The % recoveries of the three studied drugs, compared with those obtained by the reported methods, are given in Table 5.
Statistical analysis [51] of the results obtained by the proposed and reference methods for GMF [25], MOX [33] and GTF [16], using Student's t-test and the variance-ratio F-test, revealed no significant difference between the two methods with regard to accuracy and precision.

Detection and quantitation limits

In accordance with the ICH recommendations (ICH, 2005) [52], the limit of detection was evaluated from the relationship LOD = 3.3σ/s, where σ is the standard deviation of replicate determinations of the blank and s is the slope of the calibration graph. The limit of quantitation is defined as LOQ = 10σ/s. The detection and quantitation limits of the studied fluoroquinolones using the proposed spectrophotometric procedure are presented in Table 5. The low LOD and LOQ values, as well as the low concentration ranges, reflect the high sensitivity offered by this technique.

Precision and accuracy

The accuracy and precision of the proposed method were assessed by six determinations at four different concentrations. The percentage relative standard deviation (RSD%, precision) and percentage relative error (RE%, accuracy) of the suggested method were calculated. Table 6 shows the relative standard deviations for different concentrations of the drugs determined from the calibration curves. These results show that the proposed method has good repeatability and reproducibility. The proposed method was also found to be selective for the estimation of GMF, MOX and GTF in the presence of various tablet excipients. For this purpose, a powder blend of typical tablet excipients was prepared along with the drug and then analyzed. The recoveries were not affected by the excipients, and the excipient blend did not show any absorption in the range of analysis.
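The ICH limits cited above are simple ratios of blank noise to calibration slope. A minimal sketch (the σ and slope values are illustrative, not taken from the paper's tables):

```python
# ICH-style detection and quantitation limits: LOD = 3.3*sigma/s and
# LOQ = 10*sigma/s, where sigma is the standard deviation of blank
# replicates and s the calibration slope. Inputs are illustrative.
def lod_loq(sigma_blank, slope):
    """Return (LOD, LOQ) in the concentration units of the calibration."""
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope

# Hypothetical blank noise of 0.004 AU and slope of 0.05 AU per (ug/mL):
lod, loq = lod_loq(sigma_blank=0.004, slope=0.05)
print(lod, loq)   # 0.264 and 0.8 ug/mL
```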
Robustness and ruggedness

The robustness of an analytical method refers to its capability to remain unaffected by small, deliberate variations in method parameters, and it provides an indication of the method's reliability in routine analysis. The robustness of the proposed method was tested with deliberate small changes in the volume of 0.005 M KMnO4 (± 0.1 mL), the volume of 0.5 M NaOH (± 0.1 mL) and the reaction time (± 2.0 min). Only one parameter was changed in each experiment, and for each deliberate small change five independent series containing a known concentration of each drug were analyzed. The results (Table 7) showed good recovery (97.45–101.5%) with low relative standard deviation (0.58–1.36%), indicating that small variations in any of the variables did not significantly affect the results. The proposed method is therefore robust to small changes in experimental conditions. Ruggedness was also tested by applying the method to the assay of the studied drugs using the same operational conditions on two different instruments in two different laboratories at different elapsed times. Results obtained from lab-to-lab and day-to-day variations were reproducible, with RSD ≤ 3.0%.
Applications to pharmaceutical preparations

The proposed kinetic (fixed-time) spectrophotometric method was applied to the determination of the studied drugs in their pharmaceutical formulations. Common tablet excipients did not interfere with the analysis. The fact that the proposed method enables the determination of GMF, MOX and GTF in their dosage forms (tablets) without any interference from the inactive ingredients clearly demonstrates its selectivity. Reference methods were adopted for the assay of the studied dosage forms: the reference spectrophotometric methods for GMF [25], MOX [33] and GTF [16] dosage forms. The results were compared with those obtained by the reference spectrophotometric methods by statistical analysis with respect to accuracy (Student's t-test) and precision (F-test). No significant differences were found between the calculated and theoretical values of the t- and F-tests at the 95% confidence level, proving similar accuracy and precision for the proposed and reference methods (Table 8).

CONCLUSION

The proposed method is simple, accurate, precise, sensitive, rapid, low cost and relatively selective compared with the reference methods. Furthermore, the proposed method does not require the elaborate procedures usually associated with chromatographic methods, nor does it use extraction or heating. It also does not require the sophisticated instrumentation and tedious procedures usually associated with the reference methods. In the light of all these merits, the proposed method can be considered useful and convenient for quality control and routine determination of the studied drugs in pharmaceutical dosage forms.

Figure 2. Effect of volume (mL) of KMnO4 (5.0 × 10−3 M) on the absorbance of the reaction products at the optimum wavelengths.

Figure 8.
Arrhenius plot for the reaction of KMnO4 (5.0 × 10−3 M) with the studied drugs in the presence of NaOH (0.5 M). T and k are the absolute temperature and the apparent rate constant, respectively.
Table 1. Logarithms of rates for different concentrations of GMF, MOX and GTF at room temperature and 610 nm.
Table 4. Regression equations for the studied drugs at fixed time.
Table 5. Experimental and analytical parameters for the kinetic spectrophotometric determination of the studied drugs using KMnO4. a A = a + bC, where C is the concentration in µg mL−1. b Theoretical values for t and F at the 95% confidence level.
Table 6. Inter-day and intra-day accuracy and precision for the determination of GMF and MOX in bulk powders by the proposed method (fixed time). a Average of six determinations.
Table 7. Results of the robustness test of the proposed method. a Average of three determinations. SD = standard deviation; RSD = relative standard deviation.
Table 8. Application of the proposed method to the determination of the studied drugs in pharmaceutical preparations. Theoretical values for t and F at five degrees of freedom and the 95% confidence limit are t = 2.776 and F = 6.26. a Five independent analyses.
Induction of release and up-regulated gene expression of interleukin (IL)-8 in A549 cells by serine proteinases

Background: Hypersecretion of cytokines and serine proteinases has been observed in asthma. Since protease-activated receptors (PARs) are receptors of several serine proteinases and airway epithelial cells are a major source of cytokines, the influence of serine proteinases and PARs on interleukin (IL)-8 secretion and gene expression in cultured A549 cells was examined.

Results: A549 cells express all four PARs at both the protein and mRNA level, as assessed by flow cytometry, immunofluorescence microscopy and reverse transcription polymerase chain reaction (RT-PCR). Thrombin, tryptase, elastase and trypsin induce up to 8-, 4.3-, 4.4- and 5.1-fold increases in IL-8 release from A549 cells, respectively, following a 16 h incubation period. The thrombin-, elastase- and trypsin-induced secretion of IL-8 can be abolished by their specific inhibitors. Agonist peptides of PAR-1, PAR-2 and PAR-4 stimulate up to 15.6-, 6.6- and 3.5-fold increases in IL-8 secretion, respectively. Real-time PCR shows that IL-8 mRNA is up-regulated by the serine proteinases tested and by agonist peptides of PAR-1 and PAR-2.

Conclusion: The proteinases, possibly through activation of PARs, can stimulate IL-8 release from A549 cells, suggesting that they are likely to contribute to IL-8-related airway inflammatory disorders in man.

Background

Respiratory epithelium is the first tissue to encounter inhaled pathogens and is capable of releasing inflammatory mediators and cytokines in response. Respiratory epithelial cells can synthesize and secrete a variety of proinflammatory cytokines, such as IL-8, IL-1, IL-6, granulocyte-macrophage colony stimulating factor (GM-CSF) [1] and RANTES [2], which regulate cell behavior including growth, secretion and migration under physiological and pathological conditions.
The importance of serine proteinases in the development of airway diseases has been emphasized in recent years. Of particular importance is the potential role of tryptase [3], thrombin [4] and elastase [5] in the development of asthma: these serine proteinases have not only been found to be over-secreted [4,6,7], but also to play a role in the induction of cytokine hypersecretion in airways [8,9]. However, the potential mechanism through which these serine proteinases carry out their actions in the respiratory tract remains unclear. Since increased levels of IL-8 in the airways have been reported to be closely correlated with asthma [10], in the present study we investigated the effect of tryptase, thrombin, trypsin and elastase on IL-8 secretion and gene expression in A549 cells, a type II alveolar epithelial cell line from a human adenocarcinoma. In recent years, PARs have been identified as receptors for serine proteinases. Among them, PAR-1 is a receptor of thrombin and trypsin [11]; PAR-2 is a receptor of trypsin, tryptase [12] and elastase [9]; and PAR-3 [13] and PAR-4 [14] are receptors of thrombin. Activation of PARs can profoundly alter the secretory ability of numerous cell types, for example histamine release from human mast cells [15], IL-6 release from airway epithelial cells [8], IL-1 release from fibroblasts [16], and IL-8 release from human oral epithelial cells [17]. We therefore also investigated the effect of agonists of all four types of PARs on IL-8 release from A549 cells. Since expression of PARs on A549 cells is crucial for understanding the actions of the serine proteinases tested, we additionally investigated the expression of PAR-1, PAR-2, PAR-3 and PAR-4 on A549 cells with immunocytochemical techniques and reverse transcription polymerase chain reaction (RT-PCR).
Induction of IL-8 release by serine proteinases

Thrombin at concentrations of 1-10 U/ml provokes a concentration-dependent release of IL-8 from A549 cells following a 16 h incubation period. An approximately 8-fold increase in IL-8 release is observed at 16 h following incubation with 10 U/ml thrombin (Figure 1A). The time-course study shows that the increased release of IL-8 induced by thrombin begins within 2 h and lasts at least until 16 h (Figure 1B). At concentrations from 1 to 300 ng/ml, trypsin stimulates a 'bell-shaped' release of IL-8 from A549 cells following a 16 h incubation period. The maximum release of 5.1-fold is observed when 3 ng/ml of trypsin is added to A549 cells. At 8 h, however, a dose-dependent release of IL-8 from A549 cells is achieved with 100 and 300 ng/ml trypsin. A small but nevertheless significant release of IL-8 is also observed with 300 ng/ml trypsin following a 2 h incubation (Figure 2). Also in Figure 2, it is clearly observed that the basal accumulated secretion of IL-8 from A549 cells is time dependent, with 2.7 ± 0.7, 173 ± 54 and 329 ± 91 pg/ml being secreted following 2, 8 and 16 h incubation periods, respectively. Trypsin at a concentration of 300 ng/ml fails to stimulate IL-10, IL-16, IL-17 and IL-18 secretion from A549 cells following an 8 h incubation period (data not shown). Tryptase at concentrations from 0.125 to 2 µg/ml induces a concentration-dependent IL-8 secretion from A549 cells. An approximately 4.3-fold increase in release of IL-8 is observed when 2 µg/ml of tryptase is incubated with cells for 16 h, and as little as 0.25 µg/ml tryptase provokes a significant release of IL-8 from A549 cells at 16 h following incubation (Figure 3A). The time-course study reveals that the increased release of IL-8 induced by tryptase begins within 2 h and lasts at least until 16 h (Figure 3B).
Elastase, however, elicits a significant release of IL-8 following a 16 h incubation period only at concentrations of 0.1 and 0.3 µg/ml, and the quantity of IL-8 released from A549 cells in response to 0.3 µg/ml elastase is similar to that induced by 2 µg/ml of tryptase (Figure 3A). The time-course study shows that elastase-induced release of IL-8 occurs after an 8 h incubation and is maintained at least until 16 h (Figure 3B).

Figure 1. Effect of thrombin on the release of IL-8 from A549 cells. Cells were incubated (A) with various concentrations of thrombin at 37°C for 16 h, or (B) with 10 U/ml of thrombin for 2 h, 8 h and 16 h. Values shown are mean ± SE for 5 separate experiments. *P < 0.05 compared with the response to medium alone control.

Inhibition of proteinase-induced IL-8 release by their inhibitors

Hirudin, a specific thrombin inhibitor, is able to inhibit thrombin-induced secretion of IL-8 at both 2 and 16 h following incubation. The maximum inhibition of approximately 89% is observed when 10 U/ml of hirudin is added to cells for 16 h (Table 2). Similarly, the specific trypsin inhibitors SBTI and α1-antitrypsin are able to completely abolish trypsin-induced secretion of IL-8 at both 8 and 16 h following incubation (Table 3). It is also observed that MSACK, an inhibitor of elastase, completely abrogates elastase-induced release of IL-8 (Table 4). In contrast, the tryptase inhibitors benzamidine and leupeptine are only able to inhibit tryptase-induced IL-8 secretion by 47.5% and 6.5%, respectively, following a 16 h incubation (Table 4). Hirudin (Table 2), SBTI and α1-antitrypsin (Table 3), and benzamidine, leupeptine and MSACK (Table 4) alone, at the concentrations tested, have little effect on IL-8 secretion from A549 cells.

Expression of PARs by A549 cells

FACS analysis shows that A549 cells express all four PARs regardless of whether or not they are permeabilized (Figure 4A).
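Inhibition figures such as the ~89% for hirudin are conventionally computed from the stimulated, inhibited and basal releases. A minimal sketch with invented ELISA readings (the pg/ml numbers below are illustrative assumptions, not the paper's data):

```python
def percent_inhibition(stimulated, with_inhibitor, basal):
    """Percent inhibition of the stimulus-induced (above-basal) IL-8 release."""
    induced = stimulated - basal          # release attributable to the stimulus
    remaining = with_inhibitor - basal    # above-basal release left with inhibitor
    return 100.0 * (1.0 - remaining / induced)

# Hypothetical ELISA readings in pg/ml (illustration only):
basal = 330.0             # medium alone, 16 h
thrombin = 2640.0         # 10 U/ml thrombin, 16 h
thrombin_hirudin = 580.0  # thrombin + 10 U/ml hirudin, 16 h

print(round(percent_inhibition(thrombin, thrombin_hirudin, basal), 1))  # 89.2
```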
Immunofluorescent cell staining shows that PAR-2 appears to be stained mainly on the membrane surface of A549 cells, whereas PAR-1, PAR-3 and particularly PAR-4 are predominantly stained in the cytoplasm (Figure 4B). Agarose gel electrophoresis revealed that A549 cells express mRNAs for all four PARs (Figure 4C). The amplified RT-PCR products of PAR-1, PAR-2, PAR-3 and PAR-4 mRNAs were sequenced, and they all correspond to the published sequences of the PAR genes (data not shown).

Figure 2. Effect of trypsin on the release of IL-8 from A549 cells.

Induction of IL-8 release by agonists of PARs

SFLLR-NH2, a specific PAR-1 agonist peptide, stimulates a concentration-dependent secretion of IL-8 from A549 cells following a 16 h incubation (Figure 5A), whereas its reverse peptide RLLFS-NH2 has no effect on IL-8 release. The maximum release of IL-8 is 15.6-fold, induced by 300 µM of SFLLR-NH2 following a 16 h incubation period (Figure 5A). However, TFRGAP-NH2, an agonist peptide of PAR-3, and its reverse peptide PAGRFT-NH2 at concentrations of 0.1, 1, 10 and 100 µM do not show any influence on IL-8 release from A549 cells following a 16 h incubation period (data not shown). The time-course study shows that SFLLR-NH2-induced release of IL-8 occurs after 8 h incubation and is sustained at least until 16 h (Figure 5B). While SLIGKV-NH2 and tc-LIGRLO-NH2, two specific agonists of PAR-2, induce concentration-dependent secretion of IL-8 from A549 cells following 8 and 16 h incubation periods (Figure 6B,6C), only tc-LIGRLO-NH2 is able to stimulate IL-8 release at 2 h (Figure 6A). The maximum release of IL-8 is approximately 79- and 6.6-fold over baseline, induced by 100 µM of tc-LIGRLO-NH2 at 2 h and 100 µM of SLIGKV-NH2 at 8 h, respectively. VKGILS-NH2 has little effect on IL-8 release, but tc-OLRGIL-NH2 appears to induce a significant release of IL-8 from A549 cells. However, the extent of release of IL-8 induced by tc-OLRGIL-NH2 is much less than that induced by tc-LIGRLO-NH2 (Figure 6).
SLIGKV-NH2 and tc-LIGRLO-NH2 at a concentration of 10 µM fail to stimulate IL-10, IL-16, IL-17 and IL-18 secretion from A549 cells following an 8 h incubation period (data not shown). At a concentration of 10 µM, GYPGQV-NH2, an agonist peptide of PAR-4, induces a 3.5-fold increase in IL-8 release from A549 cells. However, at a higher concentration (100 µM), it stimulates less IL-8 secretion (Figure 7A). VQGPYG-NH2, the reverse peptide of GYPGQV-NH2, does not show any influence on IL-8 release at the concentrations tested (Figure 7A). The time-course study shows that GYPGQV-NH2-induced release of IL-8 occurs after 8 h incubation and is sustained at least until 16 h (Figure 7B).

Effect of serine proteinases and agonists of PARs on expression of IL-8 mRNA in A549 cells

Thrombin, trypsin, tryptase and elastase stimulate an increase in the expression of IL-8 mRNA in A549 cells when incubated with the cells. However, the tryptase-induced up-regulation of IL-8 mRNA expression lasts for only 2 h, whereas thrombin-, trypsin- (declining after 8 h) and elastase-provoked expression of IL-8 mRNA continues until 16 h. Up to 6.8-, 22.3-, 9.9- and 7.8-fold increases in the expression of IL-8 mRNA are observed with thrombin, trypsin, tryptase and elastase, respectively, following incubation with A549 cells (Figure 8). Dramatically enhanced expression of IL-8 mRNA is found when SFLLR-NH2, SLIGKV-NH2 or tc-LIGRLO-NH2 is incubated with A549 cells for 2 h. At 8 and 16 h following incubation, however, the IL-8 mRNA expression induced by SFLLR-NH2, SLIGKV-NH2 or tc-LIGRLO-NH2 is greatly decreased (Figure 8). GYPGQV-NH2 and TFRGAP-NH2 at 100 µM have little influence on IL-8 mRNA expression in A549 cells (Figure 8). At a concentration of 100 µM, RLLFS-NH2, VKGILS-NH2, tc-OLRGIL-NH2, PAGRFT-NH2 and VQGPYG-NH2, the reverse peptides of the PAR agonists, have little effect on IL-8 mRNA expression in A549 cells (data not shown).
Discussion

It is demonstrated that human serine proteinases, including thrombin, tryptase, elastase and trypsin, are potent stimuli of IL-8 secretion from A549 cells, which suggests that they are likely to play a role in IL-8-related airway inflammatory disorders such as asthma, chronic obstructive pulmonary disease and cystic fibrosis. As little as 5.6 nM of thrombin is able to stimulate an approximately 2-fold increase in IL-8 secretion, and 56 nM of thrombin induces an 8-fold increase in IL-8 release, indicating that this proteinase is a potent secretagogue of IL-8 release from A549 cells. Human mast cell tryptase, an established mediator of inflammation [18], at a concentration as low as 3.7 nM induces more than twice the baseline IL-8 secretion, and trypsin, a potential mediator of airway inflammation [19], at a concentration of 0.042 nM provokes an approximately 3-fold increase in IL-8 secretion from A549 cells, suggesting that tryptic enzymes in airways may play a role in stimulating IL-8 hypersecretion from airway epithelium. Similarly, elastase, a well-established mediator of airway inflammation, at a concentration of 10.2 nM elicits a 4.4-fold increase in IL-8 release, indicating that it too is a potent secretagogue of IL-8 release from A549 cells. At a concentration of 345 nM, elastase was also found to induce IL-8 and MCP-1 secretion from human gingival fibroblasts [9]. However, at concentrations higher than 62.5 nM, elastase could disarm PAR-2 within 10 min of incubation with human lung epithelial cells [20]. These findings suggest that elastase at lower concentrations induces cytokine release from A549 cells, but at higher concentrations may inactivate PAR-2 on human lung epithelial cells, including A549 cells.
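The µg/ml-to-nM conversions used in this discussion (e.g. 0.3 µg/ml elastase ≈ 10.2 nM) follow directly from the enzymes' molar masses. A small sketch, where the ~29.4 kDa mass for elastase is back-calculated from the paper's own paired figures rather than taken from a datasheet:

```python
def ug_per_ml_to_nM(conc_ug_per_ml, mw_kDa):
    """Convert a mass concentration (µg/ml) to a molar one (nM).

    1 µg/ml equals 1e-3 g/L; dividing by the molar mass in g/mol
    (kDa * 1000) gives mol/L, and scaling by 1e9 gives nM.
    """
    return conc_ug_per_ml * 1e-3 / (mw_kDa * 1000.0) * 1e9

# Elastase molar mass (~29.4 kDa) back-calculated from the paper's
# statement that 0.6 ug/ml corresponds to 20.4 nM.
print(round(ug_per_ml_to_nM(0.3, 29.4), 1))  # 10.2, matching the text
```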
It was impossible for us to examine the effect of elastase at concentrations higher than 62.5 nM with our experimental system, as at a concentration of 0.6 µg/ml (20.4 nM) elastase was able to detach A549 cells from the plate after 8 h incubation, and the suspended cells died soon afterwards (assessed by trypan blue staining). This phenomenon may explain why elastase at the concentration of 0.6 µg/ml fails to enhance IL-8 release. A similar phenomenon is also observed with trypsin at concentrations higher than 1 µg/ml. These findings suggest that the detachment of bronchial epithelium observed in chronic airway inflammation may result from the hydrolytic activities of elastase and trypsin. The time-course study shows that IL-8 release induced by thrombin, tryptase and trypsin initiates within 2 h following incubation, whereas the response to elastase occurs only after an 8 h incubation period. This indicates that elastase and the other proteinases tested may adopt different mechanisms in the induction of IL-8 release from A549 cells. The concentrations of tryptase and elastase used in the present study should be achievable under pathological conditions, as the level of tryptase in asthmatic bronchoalveolar lavage fluid was 13.2 ng/ml [21] and the levels of elastase in asthmatic and cystic fibrosis sputum were 27 and 466 ng/ml, respectively [22]. While information on the levels of thrombin and trypsin in respiratory fluids is not available, a report that trypsin-like activity was 46.9 mU/ml in mucoid sputum from patients with asthma [23] may imply that the concentrations of thrombin and trypsin used in the current study are achievable in inflamed airways. A549 cells have been reported to secrete some 10 pg/ml of elafin and 3 ng/ml of secretory leukocyte protease inhibitor (SLPI) following 24 h incubation [24].
This concentration of elafin, an inhibitor of elastase, should not affect the action of elastase on A549 cells, but the concentration of SLPI, an inhibitor of trypsin and elastase, may reduce the stimulatory action of the lower concentrations of trypsin or elastase on A549 cells. Hirudin inhibits approximately 87% of thrombin-induced IL-8 secretion, SBTI and α1-antitrypsin completely abolish trypsin-induced IL-8 secretion, and MSACK eliminates 97% of elastase-induced IL-8 secretion, strongly suggesting that the actions of these proteinases on A549 cells depend upon their intact catalytic sites. Since the known substrates of these proteinases on cells are PARs, the expression of PARs on A549 cells was investigated in the present study. To our surprise, benzamidine and leupeptine at a concentration of 30 µg/ml (a quite high concentration for studies on cells, based on our previous work [15]) are only able to inhibit tryptase-induced IL-8 secretion by 47.5% and 6.5%, respectively, which suggests that IL-8 secretion induced by tryptase may not depend on its enzymatic activity, and that a receptor other than the PARs may be involved in the process. Similar findings on tryptase have been reported in other studies [25]. However, to our knowledge, this is the first work examining the effects of thrombin, trypsin, tryptase and elastase on IL-8 release from airway epithelial cells under the same conditions. It has been reported that a number of human cell types express more than one member of the PAR family. Thus, platelets express PAR-1 and PAR-4 genes [14,26]; endothelial cells express PAR-1, PAR-2 and possibly PAR-3 [13,27]; fibroblasts express PAR-1, PAR-2, PAR-3 and PAR-4 genes [28]; smooth muscle cells express PAR-1, PAR-2 and PAR-3 genes [29]; and respiratory epithelial cells express PAR-1, PAR-2, PAR-3 and PAR-4 genes and possibly proteins [8]. In the present study, we find that A549 cells express all four PARs at both the protein and mRNA level.
Since expression of the PARs was observed under both permeabilized and non-permeabilized conditions, it is most likely that all four PARs are located both in the cytoplasm and on the plasma membrane surface of the cells. SFLLR-NH2, tc-LIGRLO-NH2, SLIGKV-NH2 and GYPGQV-NH2 stimulate approximately 15.6-, 79-, 6.6- and 3.5-fold increases in the release of IL-8, respectively, implying that A549 cells possess appropriate mechanisms to carry out the IL-8 release process in response to PAR-1, PAR-2 and PAR-4 activation. However, A549 cells do not show any response (in terms of IL-8 release) to PAR-3 activation. Activation of A549 cells to release IL-8 by agonists of PARs indicates that the actions of thrombin, tryptase, elastase and trypsin on A549 cells are most likely carried out through hydrolytic cleavage of the N-termini of PARs. The time course shows that the influence of agonists of PAR-1 and PAR-2 on A549 cells initiates within 2 h following incubation, but the action of the agonist of PAR-4 on cells appears only after 8 h incubation. These observations suggest that the actions of thrombin on A549 cells are mainly (if not entirely) carried out through PAR-1, but not PAR-4, whereas the influence of trypsin on cells is most likely exerted through both PAR-1 and PAR-2. The slower response of the cells to elastase, and the at least partially enzymatic-activity-independent actions of tryptase on A549 cells, are hard to explain and obviously require further investigation. Using various concentrations of agonist peptides of PARs to stimulate A549 cells may better reflect the actions of these peptides on the cells, which reinforces the previous finding [8]. Up-regulation of IL-8 gene expression in A549 cells by thrombin, trypsin, tryptase, elastase, and PAR-1 and PAR-2 agonist peptides indicates that the IL-8 released from A549 cells in response to these stimuli is most likely newly generated, rather than pre-stored in the cells.
The observation that a relatively small quantity of IL-8 was released during the first 2 h of incubation in response to the above stimuli also supports our view. While the influence of tryptase and trypsin on IL-8 gene expression does not appear to have been studied previously, reports that elastase [36,37] and thrombin [38] up-regulated IL-8 gene expression in human epithelial cells support our current findings. To our knowledge, this is the first work examining IL-8 gene expression in response to several serine proteinases in epithelial cells under the same conditions. The parallel investigation of the actions of serine proteinases on A549 cells may contribute to an easier understanding of the role of these proteinases in the regulation of IL-8 gene expression. It is difficult to understand why GYPGQV-NH2 does not significantly up-regulate IL-8 gene expression but stimulates IL-8 release from A549 cells at 16 h following incubation. It could be that significantly increased IL-8 gene expression occurs between 8 and 16 h of incubation, but we did not examine this.

Figure 4. Analysis of expression of PARs on A549 cells by flow cytometry (A), immunofluorescence microscopy (B) and RT-PCR (C).

Figure 6. Effect of the PAR-2 agonist peptides tc-LIGRLO-NH2 and SLIGKV-NH2 and their reverse peptides, tc-OLRGIL-NH2 and VKGILS-NH2, on IL-8 release from A549 cells.

Figure 5. Effect of SFLLR-NH2, an agonist peptide of PAR-1, and its reverse peptide RLLFS-NH2 on IL-8 release from A549 cells. Values shown are mean ± SE for five separate experiments performed in duplicate. *P < 0.05 compared with the response to medium alone control; †P < 0.05 compared with the response to RLLFS-NH2 at the same concentration.

Induction of inflammatory mediator release from airway epithelial cells by agonists of PARs has been demonstrated previously.
Thus, an agonist of PAR-1 stimulated platelet-derived growth factor secretion from lung epithelial cells [39]; agonists of PAR-2 stimulated IL-8 secretion from 16HBE cells [40], GM-CSF and eotaxin release from human pulmonary epithelial cells [41], and matrix metalloproteinase-9 release from A549 and primary cultured small airway epithelial cells [42]; and an agonist of PAR-4 stimulated IL-8 secretion from human respiratory epithelial cells [8]. Our findings further strengthen the view that, through activation of PARs, serine proteinases are actively involved in the pathogenesis of airway inflammation. However, since A549 is not a normal airway epithelial cell line, it may not fully represent the events occurring in normal airway epithelial cells in response to the above stimuli in real life.

Conclusion

The serine proteinases tested are potent stimuli of IL-8 secretion from A549 cells, and their influence on airway epithelial cells is most likely exerted through activation of PARs. Induction of IL-8 secretion by these proteinases indicates that they are likely to contribute to the pathogenesis of airway inflammatory disorders. Development of proteinase inhibitor drugs may be valuable for the treatment of these diseases.

Identification of expression of mRNA of PARs

The expression of mRNA of PARs by A549 cells was investigated with the RT-PCR technique. Total RNA was isolated using TRIzol reagent according to the manufacturer's instructions. Briefly, cells were lysed directly by adding 1 ml of TRIzol Reagent to a 3.5 cm diameter dish (1 ml per 10 cm²). A total of 200 µl of chloroform was added, and the tubes were then centrifuged at 12,000 g for 15 min at 4°C, after which the aqueous phase was transferred to new tubes. RNA was precipitated by adding 0.5 ml of isopropyl alcohol, followed by centrifugation at 12,000 g for 10 min at 4°C. Finally, 1 ml of 75% (v/v) ethanol was added to the pelleted RNA, followed by centrifugation at 7,500 g for 5 min at 4°C.
Total RNA was quantified by measuring the absorbance ratio at 260/280 nm. cDNA was prepared by reverse transcriptase using a commercial RNA-PCR kit, and reactions were performed according to the manufacturer's instructions. For each reaction, 1 µg of total RNA was reverse-transcribed using oligo-d(T) and PAR-4 RT primers according to the protocol. The cDNA was amplified using forward and reverse primers specific for human PARs. β-actin was used as an internal control. Primers for PAR-1, PAR-2 and PAR-3 were designed based on PAR sequences in GenBank using Omiga software; the primer for PAR-4 was designed as described by Kahn et al. [26]. Primers were prepared by Invitrogen Biotechnology Co., Ltd. The primer sequences are summarized in Table 1. The conditions for amplification were as follows: for PAR-1, PAR-2 and PAR-3, the PCR mixture was heated at 94°C for 2 min, followed by 35 cycles of 94°C for 30 s, 67°C for 30 s and 72°C for 1 min, with a final extension at 72°C for 10 min; for PAR-4 and β-actin, the PCR mixture was heated at 94°C for 2 min, followed by 35 cycles of 94°C for 30 s, 55°C for 30 s and 72°C for 30 s, with a final extension at 72°C for 10 min. Electrophoresis was conducted in 1.5% agarose gels that were stained with SYBR Green I Nucleic Acid Gel Stain and photographed under UV light. PCR products were then sequenced.

Quantitative real-time PCR

IL-8 mRNA expression in A549 cells was determined by real-time PCR following the manufacturer's protocol. Briefly, total RNA was isolated from the stimulated A549 cells using TRIzol Reagent. cDNA was synthesized from 5 µg of total RNA using the Superscript first-strand synthesis system for RT-PCR and oligo-dT primers. A double-stranded DNA binding dye method was used for quantitative PCR/RT-PCR. Real-time PCR was performed in the ABI Prism 7700 Sequence Detection System (Perkin Elmer
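The paper does not spell out its quantification formula, but fold-changes like those reported for IL-8 mRNA are commonly derived from dye-based real-time PCR data with the 2^-ΔΔCt method. A sketch under that assumption, with invented Ct values and β-actin as the internal control (as in the RT-PCR above):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt normalizes the target gene to the reference gene within each
    sample; ΔΔCt compares treated vs. control. Assuming ~100% PCR
    efficiency, each cycle difference corresponds to a factor of 2.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Invented Ct values for illustration (IL-8 vs. beta-actin):
print(round(fold_change_ddct(22.0, 17.0, 25.0, 17.0), 1))  # 8.0-fold
```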
Thin Film Deposition of MoP, a Topological Semimetal

MoP is a topological semimetal which has drawn attention due to its unique electrical and optical properties resulting from massless electrons. In order to utilize these properties for practical applications, it is necessary to develop a technique to produce high-quality, large-scale thin films of this 2D material. We report below our initial results on the growth of MoP thin films using atomic layer deposition (ALD), in which the film grows layer by layer. These films were grown on 5 cm × 5 cm silicon-oxide-coated Si wafers. Resistivity-versus-temperature measurements show that these films are metallic and include a partial superconducting phase. The magnetoresistances of both the longitudinal and Hall currents, measured at 1.8 K, show a strong effect of the magnetic field on the resistivity. Density functional theory was employed to determine the lattice constants of the MoP crystal. These parameters were in good agreement with those obtained from a Rietveld fit to the XRD spectrum of the films.

Introduction

Conventional metals and insulators have been routinely used to fabricate electronic devices since their inception; however, the topological form of these materials has recently opened up a whole new frontier. These topological materials have surface states which are induced by the inversion of the bulk band structure, giving rise to spin-polarized electronic states with linear energy-momentum dispersion at their edge or surface [1,2]. In the case of topological insulators (TI), band inversion leads to metallic surface states where spin and momentum are fixed and perpendicular to each other [1][2][3]. Therefore, electrons carrying opposite spins propagate in opposite directions, which, in principle, should prevent the backscattering that would require the electrons to flip their spin.
Similar surface states also exist in topological semimetals (TS); however, unlike TIs, which may retain only a few residual bulk carriers, topological semimetals retain their bulk carriers, thereby maintaining their metallic character [1,3]. In order to utilize these properties in practical devices, the first step is to be able to produce films of this material over wafer-scale areas. In this communication, we describe our initial results with a technique used to grow uniform thin films of MoP, a topological semimetal, over large areas, and we present some of the properties of these thin films. TS are grouped as either Dirac or Weyl. MoP is a Weyl semimetal, which is characterized by topological Fermi arcs on the surface and chiral magnetic effects in the bulk [4][5][6]. Weyl semimetals with spontaneously broken time-reversal symmetry exhibit a large intrinsic anomalous Hall effect originating from the Berry curvature [7,8]. MoP is a 2D material with low resistivity and high carrier density; hence, it has attracted special attention as a replacement for Cu interconnects in semiconductor devices at the sub-7 nm technology node [5,[9][10][11]. Factors leading to low resistivity in this semimetal are the linear dispersion and spin-momentum locking at the topologically protected surface states, which could suppress the electron backscattering from grain boundaries and other imperfections that afflicts Cu damascene interconnects [5,9,12]. Moreover, it has been shown that, contrary to conventional conductors, the resistance of MoP nanowires decreases with decreasing nanowire diameter [13]. This new class of quantum materials is particularly suitable for nanoscale electronic devices, where the topological states can be enhanced because of their large surface-to-volume ratios. MoP films have also been shown to be effective catalysts for hydrogen evolution reactions [14,15].
Materials and Methods

Ideally, a technique for growing a new material for microelectronic devices should be compatible with the existing fabrication process. To date, a few methods have been reported for the growth of MoP thin films. The most common growth technique is to replace the chalcogenide of either MoO2 or MoS2 films with P by reacting them with PH3 [16,17]. There is also a report describing MoP growth using chemical vapor deposition on liquid metals [14]. For most electronic applications of MoP, it will be preferable to grow thin films of this material on large-area silicon substrates. To employ the large Hall current for room-temperature topo-spintronics applications, it is necessary to fabricate these materials as thin or ultrathin films. One possible technique that enables such fabrication is atomic layer deposition (ALD). The MoP films reported below were grown on SiO2-coated silicon substrates using ALD. ALD is a stepwise growth process in which the precursors are alternately injected into the growth area; following each reaction, excess species and by-products are removed with an inert gas (N2). As a result, uniform, high-quality films are grown over large areas by sequential self-limiting surface reactions [18]. This process enables precise control of the film composition and thickness, as the growth proceeds layer by layer. We previously utilized this technique for the growth of other 2D semiconductors, including Bi2Se3, a topological insulator [19]. All these films were grown in a Microchemistry F-120 reactor, which can hold two 5 cm × 5 cm substrates per run. The substrates for the MoP films consisted of n-type Si wafers coated with a 320-nm-thick layer of thermal silicon oxide. Since film growth occurs via van der Waals epitaxy, lattice-matched substrates are not required [20]. The precursors for MoP growth were MoCl5 and PH3 (5% PH3, balance N2).
A hot wire was used in the PH3 channel for its decomposition, which facilitated film growth at a lower temperature. The Mo source temperature was set at 98 °C, and the carrier gas was nitrogen. The pulse sequence per cycle was as follows: MoCl5 pulse width of 1.0 s; N2 purge of 1.2 s; PH3 pulse width of 5 s; followed by a 1.2 s N2 purge. Uniform films at a constant growth rate were obtained over a temperature range of 390 °C to 440 °C, which defined the ALD growth window. The films reported here were grown at 400 °C, where the growth rate was 0.2 nm per cycle. This growth technique does not require a "wetting" layer on the substrate, since the growth mechanism starts with the surface saturated with a chemisorbed layer of the reacting species [18]. The constant growth rate (growth/cycle) within the ALD window was obtained by optimizing process parameters such as the temperature, pulse widths, and flow rates. Energy-dispersive spectroscopy (EDS) analysis was used to determine the stoichiometry of these films. Films grown within the ALD temperature window had a Mo:P atomic ratio close to 1:1 (P atomic % = 48.34, Mo atomic % = 51.66).

DFT Studies

The open-source plane-wave density functional theory (DFT) program Quantum ESPRESSO was used to compute the density of states (DOS), band diagram, and lattice constants of hexagonal MoP. The electronic structure of MoP was calculated using fully relativistic Perdew-Burke-Ernzerhof (PBE) ultrasoft pseudopotentials [21]. A plane-wave cut-off of 48.95 Ry and a Fermi smearing of 0.001 Ry were used, with a 9 × 9 × 4 k-mesh for the electron calculations. The hexagonal MoP crystal structure produced from the simulation is shown in Figure 1. The lattice constants and angles determined from the simulation are a = b = 0.322 nm and c = 0.319 nm, with α, β, γ angles of 90, 90, and 120 degrees, respectively. These parameters are consistent with those reported in the literature [14].
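From the growth parameters given in the ALD recipe earlier (0.2 nm/cycle; pulse/purge sequence of 1.0 s, 1.2 s, 5 s, 1.2 s), one can estimate the recipe for a given film thickness. A minimal sketch; the 24 nm target matches the film characterized later in the paper, and the total time counts only the pulse sequence (heat-up and stabilization excluded):

```python
# ALD recipe parameters taken from the text
GROWTH_PER_CYCLE_NM = 0.2          # within the 390-440 C ALD window
CYCLE_S = 1.0 + 1.2 + 5.0 + 1.2    # MoCl5 / N2 purge / PH3 / N2 purge

def ald_recipe(target_nm):
    """Number of cycles and total pulse-sequence time (s) for a thickness."""
    cycles = round(target_nm / GROWTH_PER_CYCLE_NM)
    return cycles, cycles * CYCLE_S

cycles, seconds = ald_recipe(24.0)  # the 24-nm film used for XRD/transport
print(cycles, round(seconds / 60, 1))  # 120 cycles, ~16.8 min of pulsing
```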
Crystallographic Properties

The crystalline properties of these thin films were examined using grazing-incidence (0.5°) X-ray diffraction (XRD). The XRD pattern of a 24 nm thick MoP film is shown in Figure 2. The pattern shows peaks at 28.0°, 32.2°, 43.1°, 57.2°, 65°, and 67.4°, which have been reported previously for the hexagonal phase of MoP; the results also agree with the JCPDS (24-771) diffraction data [22,23]. In addition to the MoP peaks, two weaker peaks were recorded; these were found to belong to the Mo₈P₅ phase of molybdenum phosphide (JCPDS 26-1274). Formation of Mo₈P₅ under high pressure has been reported previously [24]; however, the ALD process is a non-equilibrium growth process (flowing only one source at a time), unlike conventional growth processes, so the presence of Mo₈P₅ in ALD-grown films is not surprising. The XRD pattern indicates that the MoP film is oriented along the c-axis. The broad peak at around 23° is from the underlying amorphous SiO₂ film of the substrate [25]. A Rietveld fit of this X-ray spectrum with the FullProf software was used to remove the background and extract the lattice parameters: a = b = 0.322 nm and c = 0.318 nm for the hexagonal phase, with α, β, γ angles of 90, 90, and 120 degrees, respectively. These values are in close agreement with the lattice constants and angles extracted from our DFT simulation, which were a = b = 0.323 nm and c = 0.321 nm. To rule out the formation of terraces and other growth defects, we examined a sample surface with a scanning electron microscope (SEM). The SEM analysis did not reveal any texture, indicating the absence of major defects in the grown films. The SEM image is provided in the Supplementary Materials.

Transport Properties

The transport properties of the carriers were examined using samples of approximately 1 cm × 1 cm that were cleaved from larger substrates.
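The consistency between the indexed peak positions and the Rietveld lattice constants can be checked directly with Bragg's law and the hexagonal d-spacing formula. A short sketch, assuming Cu Kα radiation (λ = 0.15406 nm), since the paper does not state the X-ray wavelength:

```python
# Predict 2-theta positions for hexagonal MoP (a = b = 0.322 nm, c = 0.318 nm,
# the Rietveld values) and compare with the low-angle measured peaks.
from math import asin, degrees, sqrt

A_NM, C_NM = 0.322, 0.318
WAVELENGTH_NM = 0.15406  # assumed Cu K-alpha; not specified in the paper

def d_spacing(h: int, k: int, l: int) -> float:
    """Interplanar spacing for a hexagonal lattice (hkl reflection)."""
    inv_d2 = (4.0 / 3.0) * (h * h + h * k + k * k) / A_NM**2 + l * l / C_NM**2
    return 1.0 / sqrt(inv_d2)

def two_theta(h: int, k: int, l: int) -> float:
    """Bragg angle 2-theta in degrees: 2*asin(lambda / 2d)."""
    return degrees(2 * asin(WAVELENGTH_NM / (2 * d_spacing(h, k, l))))

# The (001), (100), and (101) reflections land close to the measured
# 28.0, 32.2, and 43.1 degree peaks, respectively.
for hkl in [(0, 0, 1), (1, 0, 0), (1, 0, 1)]:
    print(hkl, round(two_theta(*hkl), 1))
```

A check like this is useful for separating the MoP reflections from the weaker Mo₈P₅ peaks, which do not index on the hexagonal MoP cell.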
The film thickness of these samples was 24 nm. Indium solder was used to connect Cu wires along the edges of these substrates in a Hall bridge configuration. First, the longitudinal resistivity (ρ_xx) of the film was measured versus temperature, as shown in Figure 3. As expected, the resistivity of the film decreased as the temperature was lowered, indicating a metallic film. It is interesting to note that at about 6 K there was a sharp drop in resistivity, although it did not reach zero. This feature was repeatable, and we believe it indicates the presence of a partial superconducting phase. This T_C is in agreement with the T_C value of 5.8 K reported for Mo₈P₅ in [24]. From the XRD data, it is clear that the Mo₈P₅ phase comprises only a small fraction of the sample; therefore, the residual resistance is attributed to non-superconducting MoP (T_C = 1 K) and to the indium (T_C = 3.4 K) used to contact the wires to the sample.

Magnetoresistance

Electron transport in topological semimetals is strongly affected by an external magnetic field (B); hence, the measurement of magnetoresistance (MR) is commonly used as a signature indicator. The magnetic field dependence of both the longitudinal and Hall resistivities was determined by applying the field perpendicular to the sample surface. The magnetoresistance for both transverse and longitudinal current transport was measured at a constant temperature of 1.8 K, where the magnetoresistance is defined as the change in resistivity due to the field divided by the resistivity at zero field, i.e., MR = (ρ_B − ρ_{B=0}) / ρ_{B=0}. The field dependence of the MR for longitudinal current transport is shown in Figure 4. At lower magnetic fields, there is a parabolic relationship between the applied magnetic field and the magnetoresistance. This parabolic profile of MR versus field is one of the characteristics of topological semimetals.
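The MR definition above is straightforward to apply to field-sweep data. A minimal sketch; the resistivity and mobility values below are illustrative placeholders, not measured data:

```python
# Magnetoresistance as defined in the text: MR = (rho_B - rho_0) / rho_0.
def magnetoresistance(rho_b: float, rho_0: float) -> float:
    """Fractional change in resistivity due to the applied field."""
    return (rho_b - rho_0) / rho_0

rho_0 = 1.0e-6   # zero-field resistivity in ohm*m (hypothetical)
mu = 0.5         # effective mobility in m^2/(V*s) (hypothetical)

# Parabolic low-field behavior, rho(B) = rho_0 * (1 + (mu*B)^2),
# gives MR growing as the square of the field.
for b_tesla in [0.0, 1.0, 2.0, 4.0]:
    rho_b = rho_0 * (1 + (mu * b_tesla) ** 2)
    print(b_tesla, magnetoresistance(rho_b, rho_0))
```

With these placeholder numbers the MR at 2 T is exactly (0.5 × 2)² = 1, i.e., a 100% resistivity increase, illustrating how quickly a quadratic MR grows with field.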
This can be shown by using the standard two-band expression for the resistivity with and without the field [26,27]:

ρ_xx(B) = (1/e) · [(nμ_n + pμ_p) + (nμ_p + pμ_n) μ_n μ_p B²] / [(nμ_n + pμ_p)² + (p − n)² μ_n² μ_p² B²],

where e is the electronic charge, μ_n and μ_p are the electron and hole mobilities, and n and p are the concentrations of electrons and holes. A common feature of all such systems is charge neutrality, where the concentrations and mobilities of the positively and negatively charged carriers are equal and the system is electrically neutral [27]. In the case of charge neutrality (charge compensation), this expression reduces to MR = (μB)². However, at higher fields, the MR in these films appears to approach saturation, probably due to the presence of the second phase. The Hall conductivity (σ_xy) and the longitudinal conductivity (σ_xx) at 1.8 K were extracted from the resistivities using the relationships σ_xx = ρ_xx / (ρ_xx² + ρ_xy²) and σ_xy = ρ_xy / (ρ_xx² + ρ_xy²) [9]. The experimental setup measured the magnitude of the Hall voltage at the contacts; thus, for the Hall conductivity, we first removed the contact-resistance background and then inverted the sign of the conductivity at the crossover (B = 0) field. The Hall and longitudinal conductivities are plotted against the magnetic field in Figure 5. In both cases, the largest change in the conductivities occurs at low fields, where the magnetoresistance is low.

Conclusions

We have demonstrated the successful thin-film growth of MoP, a topological semimetal, using atomic layer deposition. The resistivity versus temperature measurements show metallic behavior and the presence of a partial superconducting phase. With a magnetic field applied perpendicular to the current path, a parabolic relationship between the magnetic field and the resistivity appears. A DFT calculation of the crystal structure of MoP was performed to determine its lattice constants, and these were in good agreement with those extracted from the XRD spectrum.
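The conversion from the measured resistivities to the conductivities quoted in the text is a simple tensor inversion. A minimal sketch of that step:

```python
# Convert measured resistivities to conductivities using the relations in the
# text: sigma_xx = rho_xx / (rho_xx^2 + rho_xy^2),
#       sigma_xy = rho_xy / (rho_xx^2 + rho_xy^2).
def conductivities(rho_xx: float, rho_xy: float) -> tuple[float, float]:
    """Return (sigma_xx, sigma_xy) from the longitudinal and Hall resistivities."""
    denom = rho_xx**2 + rho_xy**2
    return rho_xx / denom, rho_xy / denom

# Sanity check: with no Hall signal, the relation reduces to sigma_xx = 1/rho_xx.
sxx, sxy = conductivities(2.0e-6, 0.0)
print(sxx, sxy)  # sigma_xx ~ 5e5 S/m, sigma_xy = 0
```

Note that when ρ_xy becomes comparable to ρ_xx, both conductivities are suppressed by the common denominator, which is why the largest changes in σ_xx and σ_xy occur at low fields.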
Clearly, our growth conditions are not yet optimal, since the films include a small amount of an unwanted phase. As we continue to refine the growth process, we are confident that we can improve the quality of these films by optimizing the conditions for single-phase growth. Additionally, although the sample size in our reactor is limited to 5 cm × 5 cm, the ALD technique is highly scalable, as evidenced by its use in semiconductor fabs. The maturing of this technology will facilitate the widespread use of these novel materials in applications ranging from optics to spintronics, including interconnects for the next generation of semiconductor devices.

Author Contributions: All three authors contributed equally to this work; all three authors reviewed and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Data Availability Statement: Data for the study are available upon reasonable request by contacting the corresponding author.