Anomalous mechanical materials squeezing three-dimensional volume compressibility into one dimension
Anomalous mechanical materials, with counterintuitive stress-strain responses, have emerged as a novel class of functional materials with greatly enhanced performance. Here we demonstrate that materials with coexisting negative, zero and positive linear compressibilities can squeeze the three-dimensional volume compressibility into one dimension, providing a general and effective way to precisely stabilize transmission processes under high pressure. We propose a "corrugated-graphite-like" structural model and identify lithium metaborate (LiBO2) as the first material exhibiting such mechanical behavior. The capability to maintain flux density stability under pressure in LiBO2 is at least two orders of magnitude higher than that in conventional materials. Our study opens a route to the design and discovery of ultrastable transmission materials under extreme conditions.
Section S1 Matching condition between linear compressibility and volume compressibility
If there exists a matching direction (θ, φ) along which the linear and volume compressibilities coincide in a material, then the following condition must be satisfied: α_l(θ, φ) = α_V = α_X + α_Y + α_Z (1), where α_l(θ, φ) is determined by the compressibility ellipsoid with principal-axis values α_X, α_Y, and α_Z, and varies in the range [minimum(α_X, α_Y, α_Z), maximum(α_X, α_Y, α_Z)]. Clearly, for a normal mechanical system in which all three principal axes exhibit positive linear compressibility (PLC), α_l(θ, φ) is always smaller than α_X + α_Y + α_Z. Equation (1) cannot be fulfilled and the matching direction cannot be found.
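As a minimal numerical sketch of this matching condition (not taken from the supplement), the linear compressibility along a direction can be written as the direction-cosine-weighted sum of the principal values; the principal compressibilities used below are illustrative only and chosen to show a case with coexisting negative, near-zero and positive values.

```python
import numpy as np

def linear_compressibility(alpha, theta, phi):
    """alpha = (alpha_X, alpha_Y, alpha_Z) in 1/TPa; theta, phi in radians.
    Second-rank tensor property along unit vector n: sum_i alpha_i * n_i**2."""
    aX, aY, aZ = alpha
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return aX * n[0]**2 + aY * n[1]**2 + aZ * n[2]**2

def find_matching_direction(alpha, n_grid=181):
    """Grid search for the direction where alpha_l(theta, phi) is closest to alpha_V (Eq. 1)."""
    alpha_V = sum(alpha)                      # volume compressibility
    best = None
    for t in np.linspace(0.0, np.pi, n_grid):
        for p in np.linspace(0.0, 2 * np.pi, n_grid):
            diff = abs(linear_compressibility(alpha, t, p) - alpha_V)
            if best is None or diff < best[0]:
                best = (diff, np.degrees(t), np.degrees(p))
    return best  # (residual, theta_deg, phi_deg)

# Illustrative (made-up) principal values with NLC, near-ZLC and PLC axes:
print(find_matching_direction((-5.0, 0.1, 8.0)))
```

With all three principal values positive, the residual never reaches zero, reproducing the statement that Equation (1) cannot be fulfilled in a normal mechanical system.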
Moreover, the matching conditions for the anomalous linear compressibilities are as follows: (i) if only a negative linear compressibility (NLC) axis is introduced into the normal mechanical system, say α_X < 0, then the range of α_l(θ, φ) becomes [α_X, maximum(α_Y, α_Z)], so that a direction (θ, φ) for which α_l(θ, φ) = α_V can be found provided α_V falls within this range. However, in the majority of NLC materials the absolute value of the NLC component is smaller than that of either PLC component, and thus this matching condition is seldom satisfied in practice. Although this matching condition could be achieved if NLC occurs in two dimensions, its practical application would be hindered by the weak angle tunability owing to the low anisotropy within the two-dimensional negative compressibility plane.
(ii) if only a zero linear compressibility (ZLC) axis is introduced into the normal mechanical system, say α_X = 0, Equation (1) cannot be fulfilled and the matching direction cannot exist. The matching condition also cannot be satisfied in zero area (or volume) compressibility materials, since no material has a compressibility coefficient exactly equal to zero; even diamond (0.75/TPa) and osmium (0.72/TPa), the most incompressible materials in nature, do not.
(iii) if a ZLC axis is independently introduced into the NLC system, then minimum(α_Y, α_Z) ≈ 0 and Equation (2) can always be satisfied. This means that the matching direction can always be found in a mechanical system in which NLC, ZLC and PLC coexist.

Figure caption (fragment): (a) Raman and (b) infrared spectra. The measured and calculated spectra are in good agreement, and almost all main peaks in the experimental spectra can be assigned in the simulated spectra, demonstrating that no defects or adsorbed species exist in the sample.

Figure caption (fragment): Positive and negative linear compressibilities are represented by red and blue surfaces, respectively, and volume compressibility is represented by a green sphere. Clearly, for all materials no intersecting line exists between the volume and linear compressibility surfaces, indicating that they cannot squeeze the three-dimensional volume compressibility into one dimension.
Table S5
Linear compressibilities (α_l) and volume compressibility (α_V) in LiBO2, Ag3Co(CN)6, diamond, graphite, copper, and quartz, as well as the relative fluctuation of flux density and the optimal transmission direction in these materials as they move from sea level to the Mariana Trench. The minimum relative fluctuation of flux density is defined as the product of the compressibility of the transmission cross-section and the pressure at the Mariana Trench (0.11 GPa) along the optimal direction among the integer angles (in degrees) closest to the exact matching direction between volume and linear compressibilities. All the linear compressibility values at the integer angles closest to the matching curve in LiBO2 are listed in Table S6. For the other materials the optimal transmission direction is along the largest PLC axis, since its compressibility value has the smallest difference from the volume compressibility.
"Materials Science"
] |
Metaheuristics Based Modeling and Simulation Analysis of New Integrated Mechanized Operation Solution and Position Servo System
College of Mechanical Engineering, Wuhan Institute of Shipbuilding Technology, Wuhan, Hubei 430050, China; Department of Computer Technical Engineering, The Islamic University, 54001 Najaf, Iraq; Faculty of Computing Sciences, Gulf College, Muscat, Oman; Department of Environmental and Safety Engineering, University of Mines and Technology, Tarkwa, Ghana; Department of Mechatronics Engineering, Kumaraguru College of Technology, Coimbatore, Tamilnadu, India; Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
Introduction
Position servo control is an important part of the motion control system. It requires that the output position quantity completely duplicates the input position quantity and its changing trend, accurately controls the coordinate position of moving parts, and realizes fast and accurate motion. In other words, sufficient position control accuracy, position tracking accuracy, and a fast enough tracking speed are taken as its main control objectives [1]. The regulation of a motor's velocity and position based on a feedback signal is known as servo control. The velocity loop is the most basic type of servo loop. The velocity loop creates a torque command to lessen the error between the velocity command and velocity feedback. The process of providing actual torque in response to servo control loop torque commands is known as motor control. Because tuning PID controllers for positional control systems takes time, considerable effort has been spent on analysing servo systems. Usually, servo systems require a controller in addition to speed control, which is frequently handled by cascading (series-linking) a position loop and a speed loop. A single PID position loop is occasionally used to provide position and speed control in the absence of an explicit velocity loop [2,3]. At present, general motion servo control still widely uses the classical PID control method, whose advantages are a simple algorithm and easy implementation. Pneumatic servo technology has been used more and more widely in the field of automated production because of its advantages such as low price, simple structure, fast response speed, high power-to-volume ratio, and no pollution. Among them, the most widely used is the position servo system, and the model is the key to studying the pneumatic servo system. According to servo system regulation theory, servo systems are usually divided into open-loop, semiclosed-loop, and closed-loop systems [4]. Setting servo gains allows you to fine-tune (or correct) servo loops for each application. Stronger servo gains improve performance, but they also increase the danger of system instability. Low-pass filters are typically used in series with a velocity loop to ease adsorption difficulties [5,6]. Open-loop systems have no measurement feedback. Semiclosed-loop and closed-loop systems have measurement feedback links; in the semiclosed-loop system only the measuring element installed on the rotating shaft of the motor can detect the physical quantity related to angular displacement. The regular derusting and spraying maintenance of ships is an important part of their daily maintenance, which is of great significance for prolonging the service life of ships and maintaining good operating conditions [7]. At present, manual derusting, physical derusting, and chemical derusting are mostly used. High-pressure water jet derusting and wall-climbing robot derusting have gradually been applied in recent years, but, overall, there are widespread problems such as low operating efficiency, poor environmental protection, and less-than-ideal process quality.
Pressure regulators are available for a wide range of fluid, gas, and air applications. They occur in a variety of shapes and sizes, but they all have three functional aspects in common: a pressure-lowering or restricting valve, a pressure-sensing component, and a temperature-reducing or restrictive control element. A typical pneumatic position control system consists of a double-acting pneumatic cylinder, one or more control valves, sensors, and controller hardware (also known as a pneumatic servo system). The most common type of valve is a proportional valve, sometimes known as an on/off solenoid valve. A servomotor is a rotary actuator used in operations that demand precise control of rotational velocity, angular position, and acceleration; it is linked to an encoder. The proportional-integral-derivative (PID) controller has been used for decades due to its ease and efficacy [8,9]. With the simplification of the model, a lot of important information about the system is lost, resulting in reduced simulation accuracy. Long, Z. proposed a mathematical model of the control relationship of the hydraulic motor tracking servo motor and adopted a combined control algorithm with PID parameter tuning. The driver of the tracking control strategy is simulated based on the mathematical model. The results show that the main synchronization performance parameters of the hydraulic motor tracking servo motor composite drive control system are in good condition [10]. PID controllers are widely used in industry today, making up around 85-90 percent of all controllers used in industry. Position control methods are highly unstable when utilised in a closed-loop configuration [11,12]. Tong, X. R. et al. proposed a design scheme for a new real-time electronic countermeasures simulation system. The modeling and realization methods of each part of the whole simulation system are described, and the real-time performance of the system is realized. An electronic countermeasure simulation system is a key part of personnel military training and can also give a realistic evaluation of the performance of modern equipment and technology [13].
A dc servo motor is just a conventional dc motor with a few small manufacturing variations. DC servo motors must meet two physical criteria: low inertia and strong starting torque. Low inertia is achieved by decreasing the armature diameter and, as a result, the rotor length until the desired power output is obtained [14,15]. A new comprehensive mechanization solution for hull derusting and spraying is proposed, and the key technology realization methods are discussed and analyzed, which provides a reference for a comprehensive mechanization solution for large hull derusting.
Basic Principle of Position Servo.
A servo motor is a motor that rotates with high precision. Servo motors sometimes have a control circuit that provides feedback on the current position of the motor shaft, allowing them to revolve with remarkable accuracy. When you want to rotate an object to a specific angle or distance, you use a servo motor. It is nothing more than a simple motor linked to a servo mechanism. According to servo system regulation theory, servo systems are usually divided into open-loop, semiclosed-loop, and closed-loop systems. Open-loop systems have no measurement feedback. Semiclosed-loop and closed-loop systems have measurement feedback links; in the semiclosed-loop system only the measuring elements installed on the rotating shaft of the motor can detect physical quantities related to angular displacement [16,17]. The open-loop system has no measurement feedback signal, and its accuracy is poor, while the semiclosed-loop and closed-loop systems can control the speed and position according to the comparison between the detected feedback signal and the instruction signal, which gives higher control accuracy. The precision of the semiclosed-loop system is lower than that of the closed-loop system, but it is simpler and easier to adjust. The system considered here is a three-loop motion servo control system with a semiclosed loop. The outer loop is a position loop, and the inner loop contains a speed loop and a current loop. The working principle of position servo control is as follows: the actual position information from the photoelectric encoder feedback signal processing circuit is compared with the theoretical position information transmitted by the computer to obtain the following error. According to the following error, the digital quantity of the feed speed instruction is calculated by the position controller. The digital quantity is converted by D/A and used as the input speed instruction of the speed loop of the servo drive unit. The servo unit drives the coordinate axis to move and realizes the position control.
Mathematical Model of Position Servo Control System.
According to servo system regulation theory, servo systems are usually divided into open-loop, semiclosed-loop, and closed-loop systems. Open-loop systems have no measurement feedback. Semiclosed-loop and closed-loop systems have measurement feedback links; in the semiclosed-loop system only the measuring element installed on the rotating shaft of the motor can detect the physical quantity related to angular displacement. The working principle of position servo control is as follows: the actual position information from the feedback signal processing circuit of the photoelectric encoder is compared with the theoretical position information transmitted by the computer to obtain the following error. According to the following error, the digital quantity of the feed speed instruction is calculated by the position controller [18]. The digital quantity is converted by D/A and used as the input speed instruction of the speed loop of the servo drive unit. The servo unit drives the coordinate axis to move and realizes the position control. The simplified position loop of the motion control system is shown in Figure 1.
Among them, T is the sampling period, T_S is the time constant of the motor, and K_m is the gain of the speed loop. Therefore, the open-loop transfer function is obtained, where K_V is the open-loop gain of the system, K_D is the digital-to-analog conversion gain, and K_A is the gain of the measuring device.
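A minimal sketch of this kind of position-loop model, assuming a commonly used simplified open-loop form G(s) = K_V / (s (T_S s + 1)); the composition K_V = K_D K_m K_A and all numerical values below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy import signal

# Hypothetical parameter values (not from the paper).
K_D, K_m, K_A = 0.01, 50.0, 2.0   # D/A gain, speed-loop gain, measurement gain
T_S = 0.05                        # motor time constant (s)
K_V = K_D * K_m * K_A             # assumed composition of the open-loop gain

# Assumed simplified open-loop transfer function: G(s) = K_V / (s * (T_S*s + 1))
G_open = signal.TransferFunction([K_V], [T_S, 1.0, 0.0])

# Closed-loop transfer function with unity feedback:
# G_cl(s) = G(s) / (1 + G(s)) = K_V / (T_S*s^2 + s + K_V)
G_closed = signal.TransferFunction([K_V], [T_S, 1.0, K_V])

t, y = signal.step(G_closed, T=np.linspace(0.0, 5.0, 500))
print(f"step response at t = 5 s: {y[-1]:.3f} (approaches 1.0, i.e. no steady-state error)")
```

The unity-feedback closed loop has a DC gain of 1, which matches the requirement that the output position duplicate the input position command.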
Analysis of Position Servo PID Control Algorithm.
PID control is one of the earliest developed control strategies. Because of its simple algorithm, convenient adjustment, good robustness, and high reliability, it is widely used in industrial control [19,20]. PID (proportional, integral, and differential) control is based on classical control theory and is the most extensively used control approach in continuous systems. The schematic diagram of the PID control system is shown in Figure 2. In Figure 2, u represents the output of the controller, K_p represents the proportional (scale) coefficient, T_i represents the integral time constant, and T_d represents the differential time constant.
The proportional gain K_p is introduced to reflect the deviation signal of the control system in a timely manner. When a system deviation occurs, the proportional adjustment link immediately produces an adjustment effect, which makes the system deviation rapidly tend to decrease. The integral action is introduced to eliminate the steady-state error of the system, improve the zero-error tracking capability of the system, and ensure static-error-free tracking of the set value. The function of the differential link is mainly to improve the response speed and stability of the control system.
Computer control is a kind of sampling control. It can only calculate the control quantity according to the deviation value at the sampling instants and carry out discrete control; it cannot continuously output the control quantity and realize continuous control like an analog controller [9,10]. Therefore, the integral and differential terms in the formula cannot be directly and accurately calculated in the computer and can only be approximated by numerical calculation. If T is the sampling period, the discrete sampling instant kT is used to represent the continuous time t, a sum replaces the integral, and an increment replaces the differential. If the sampling period T is small enough, the approximation is quite accurate, and the controlled process is very close to the continuous control process. In this way the digital PID control algorithm is obtained. This control algorithm provides the actuator position u_i at each output, so it can be called the position-type PID control algorithm. Each output of this algorithm is related to the whole past state, so it easily accumulates large errors in calculation. In practice, an incremental PID control algorithm is often adopted, and the control increment at the sampling instant can be derived accordingly. When the incremental algorithm is used, the control increment Δu(k) output by the computer corresponds to the increment of the actuator position. Incremental PID is only a slight improvement on the algorithm, and there is no essential difference from positional PID. However, compared with position-type PID, because it only outputs the control increment each time, that is, the change of the corresponding actuator position, the output change range is small, so when the processor fails the production process will not be seriously affected. In addition, since its maximum output at each time is bounded by the increment limit, switching the control from manual to automatic can be achieved without disturbance. The calculation workload is small, no additional calculation formula is needed, only the two historical values e(k−1) and e(k−2) are used, and a shift method is usually adopted to store these two historical values. According to the mathematical model of the position servo system, the PID control algorithm was programmed in MATLAB, and simulation experiments of the position servo controller for a typical ramp input response and a unit step input response were completed, respectively. For the control system, in the case where overshoot is not allowed, the best control effect can be obtained theoretically when the PID controller parameters are selected through experiments, as shown in Figure 3. In order to investigate the robustness of the PID control algorithm, the open-loop gain of the transfer function of the control object was changed while the system was running, adding a random component and taking K = (1 + rand × sin Δ). As can be seen from Figure 4, the PID control algorithm still maintains good dynamic characteristics.
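A minimal sketch of the incremental PID form described above, run against a simple first-order plant; the gains, sampling period, and plant time constant are illustrative placeholders, not the values tuned in the paper.

```python
class IncrementalPID:
    """Discrete PID in incremental form: only e(k), e(k-1), e(k-2) are stored."""
    def __init__(self, kp, ti, td, ts):
        self.kp, self.ti, self.td, self.ts = kp, ti, td, ts
        self.e1 = 0.0  # e(k-1)
        self.e2 = 0.0  # e(k-2)

    def step(self, error):
        # du(k) = Kp*[ (e(k)-e(k-1)) + T/Ti*e(k) + Td/T*(e(k)-2e(k-1)+e(k-2)) ]
        du = self.kp * ((error - self.e1)
                        + (self.ts / self.ti) * error
                        + (self.td / self.ts) * (error - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du


# Illustrative closed-loop run against a first-order plant (made-up parameters).
pid = IncrementalPID(kp=1.2, ti=0.5, td=0.05, ts=0.01)
u, y, setpoint = 0.0, 0.0, 1.0
for k in range(500):
    u += pid.step(setpoint - y)              # accumulate increments into u(k)
    y += (0.01 / 0.1) * (u - y)              # first-order plant, tau = 0.1 s
print(f"output after 5 s: {y:.3f} (target {setpoint})")
```

Because only the increment is applied at each step, a processor fault or a manual/automatic switch affects at most one increment, which is the practical advantage noted above.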
Through the analysis of the graph, it can be seen that the system has a small following error, which is particularly important in a position servo control system and directly determines the control accuracy of the control system.
Mathematical Modeling of Rust Removal and Spraying.
Dust removal followed by spraying is an important task for the proper functioning of the servo system. Dust can degrade motor efficiency and performance by overheating, polluting lubricants, hastening wear, and causing winding damage. Carbon dust is conductive, which means it might cause shorts if it gets into the wrong area. The control system of rust removal and spraying mainly includes three subsystems: pneumatic position servo control, pneumatic pressure servo control, and pneumatic angle control. Pneumatics enables rapid, high-force point-to-point motion. Servo pneumatics has the same speed and force capacities as standard pneumatics, but with the added benefit of greater positioning precision not just at the stroke endpoints but also in the middle. Servo pneumatics monitors and regulates air flow in addition to delivering position feedback, allowing for precise control of the force produced. Because the pneumatic angle control requires low precision, it only needs to be controlled by switching an ordinary solenoid valve on and off. The pneumatic pressure control and the pneumatic position control differ only in the sensor feedback, so the following establishes the mathematical model of the pneumatic position servo system; the model for the pneumatic pressure control is basically the same. The control technologies mainly include AC servo drive, pneumatic servo control, electronic parameter detection, and computer remote control. Table 1 shows the main drivers and associated control objects.
In Table 1, the horizontal and vertical movements are driven and controlled by servo (or stepping) drives, the cylinder support adopts proportional force control technology, and the position of the end-effector of the rust-removing cylinder's end manipulator adopts pneumatic position servo control. The cylinder stroke is set as 2L, the middle position of the cylinder is taken as the initial position, and the displacement y of this point is recorded as zero. Movement to the right is set as the positive direction, and movement to the left as the negative direction. The flow continuity equation follows, where q_m1 and q_m2 are the gas flows into cavities I and II, respectively, R is the gas constant, and A is the cross-sectional area of the cylinder. The force balance equation of the cylinder piston can be obtained from Newton's second law.
where F_m is the friction force (N), which is composed of Coulomb friction and static friction, and F_I is the external load force (N). Ignoring the friction force, the incremental form follows. The flow equation of the throttle port of the pneumatic servo valve is then given, where p_t is the pressure at the throttle (bar), T_s is the gas temperature at the valve inlet (K), and A_t is the throttle port area (mm²). MATLAB is an excellent numerical calculation program produced by the American company MathWorks, which contains authoritative and practical simulation tools in MATLAB/SIMULINK. The control simulation model was established in SIMULINK and includes four subsystems: control, large-cavity pressure, small-cavity pressure, and friction. The displacement and load can be set by step inputs, and the PID control subsystem can select the PID parameters according to the positive and negative errors. When the absolute error is less than 0.5 mm, the voltage is set at the center to limit the maximum output voltage, among other functions; the pressure subsystem can choose the calculation formula for the flow of each throttle according to the control voltage. The friction subsystem is built according to the analysis principle of the cylinder friction force in Section 2.5.
The subsystem can calculate the friction force according to the sign of the velocity and whether the velocity is zero, so as to achieve a more accurate simulation of the cylinder friction force. In the model, the atmospheric force acting on the piston rod is placed in the "load," so that the simulation model can easily obtain the required simulation data of displacement, two-cavity pressure, and control voltage and realize accurate simulation of the proportional directional flow valve-controlled cylinder pneumatic position servo system. The obtained simulation data were plotted with MATLAB. The simulated and experimental displacement response curves are similar. Since the maximum output voltage is limited, the middle of the curve is approximately straight, and the speed is uniform while the curve rises. When the response approaches the target value, the curve shows a transition under the action of the differential coefficient, indicating that the response velocity drops. The pressure simulation curve of the large cavity and the experimental (relative pressure) curve have the same variation trend; both show that the initial pressure rapidly rises to the maximum value and then decreases to the steady-state value at a faster speed. Due to the complexity of the experimental situation, the experimental curve fluctuates considerably. The pressure curve of the small-cavity experiment declines at the beginning and then rises. The pressure curve of the small-cavity simulation also has a downward trend at the beginning, then enters the rising stage, and is relatively stable after entering the steady state.
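A minimal sketch, under strong simplifying assumptions (isothermal chambers, a linearized valve flow proportional to the control voltage, Coulomb-plus-viscous friction), of the kind of valve-controlled cylinder model described above; every parameter value is an illustrative placeholder and not taken from the paper.

```python
import numpy as np

# Illustrative (made-up) parameters for a double-acting cylinder.
A = 1.26e-3          # piston area (m^2), roughly a 40 mm bore
L = 0.135            # half-stroke (m); piston starts at mid-position, y = 0
m = 2.0              # moving mass (kg)
R, T = 287.0, 293.0  # gas constant (J/kg/K) and temperature (K)
k_valve = 2.0e-4     # assumed linear valve gain: mass flow per volt (kg/s/V)
F_c, b = 15.0, 40.0  # Coulomb friction (N) and viscous coefficient (N.s/m)
p_atm = 1.0e5        # initial chamber pressure (Pa, absolute)

def step(state, u, dt=1e-4):
    """One explicit-Euler step of the simplified cylinder model.
    state = [y, v, p1, p2]; u = control voltage (V)."""
    y, v, p1, p2 = state
    V1, V2 = A * (L + y), A * (L - y)            # chamber volumes
    qm1, qm2 = k_valve * u, -k_valve * u         # linearized charge/discharge mass flows
    # Isothermal chamber dynamics: d(p V)/dt = R*T*qm  =>  dp/dt = (R*T*qm -/+ p*A*v)/V
    dp1 = (R * T * qm1 - p1 * A * v) / V1
    dp2 = (R * T * qm2 + p2 * A * v) / V2
    F_fric = F_c * np.sign(v) + b * v            # friction switches sign with velocity
    a = ((p1 - p2) * A - F_fric) / m             # Newton's second law on the piston
    return [y + v * dt, v + a * dt, p1 + dp1 * dt, p2 + dp2 * dt]

state = [0.0, 0.0, p_atm, p_atm]
for _ in range(20000):                            # 2 s of simulated motion
    state = step(state, u=0.4)
print(f"displacement after 2 s: {state[0] * 1000:.1f} mm")
```

The sketch reproduces the qualitative behavior described above: the driven chamber pressure first rises, the opposite chamber pressure first falls, and the piston then moves at a roughly uniform speed set by the valve flow.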
Experimental Analysis
The hardware platform of the pneumatic servo position tracking control system is mainly composed of a standard cylinder, proportional servo valve, displacement sensor, pressure sensor, data acquisition card, and industrial control computer. The cylinder is a DSMI-40-270 vane swinging cylinder produced by FESTO in Germany. The diameter of the swinging cylinder is 40 mm, the stroke is 270, the maximum effective cross section of the proportional flow valve is 8 mm², the pressure sensor is a Honeywell 4000PC, the displacement sensor is an MLO2POT26002TLF (accuracy 0.5 mm), and A/D and D/A conversion is handled by an Advantech PCL-812PG data acquisition card. The compressed air supply is 0.51 MPa. When the system works, the industrial computer sends out the control signal that needs to be tracked and drives the servo valve after D/A conversion and amplification. The displacement sensor detects the angle signal of the rotation of the shaft and feeds it back to the computer through A/D conversion, where it is compared with the specified input to obtain the deviation control quantity, so as to realize continuous trajectory control. The proportional flow valve-controlled swing cylinder system described in the first two sections is used for the simulation and experimental research.
Values of the parameters in the model: σ* = 0.528, Z = 70 × 10⁻⁶ kg·m², θ_10 = θ_20 = 10°, T = 293 K. Figure 5 shows the comparison of the simulation results and experimental results of open-loop control. The experimental conditions are as follows: air supply pressure p_s = 0.51 MPa, load moment of inertia J = 112.78 kg·cm², and control quantity μ = 0.4 V.
It is clear from Figure 5 that both curves are almost the same. This demonstrates the effectiveness of the proposed work: the model can well reflect the system characteristics and verifies the correctness of the mathematical model of the system. The variation trends of the simulation curve and the experimental curve are basically the same, which fully proves the effectiveness of the simulation model. The simulation model can reflect the characteristics of the actual system and can be used to analyze the system characteristics and study the control strategy.
Conclusions
Aiming at the position and pressure control problems of the pneumatic servo system involved in ship hull derusting and spraying operations, the mathematical model of the pneumatic position servo system was established. Based on the principle of the original position servo system, the mathematical model of the system was established and the PID controller was designed. The pneumatic position servo control system comprises the PID control link, the PID system experiment throughput, and the PID control simulation data comparison. The simulation results show that the system has a fast response, no overshoot, and no shock, and the error accuracy is less than ±20 mm, which meets the requirements of precision and stability in the process of ship derusting and spraying. The simulation model can well reflect the system characteristics and verifies the correctness of the mathematical model of the system. The simulation model can provide not only the displacement simulation curve but also the two-chamber pressure simulation curve of the cylinder, which accurately realizes the computer simulation of the pneumatic position servo system.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Engineering"
] |
Genome-Wide CRISPR-Cas9 Screen Reveals the Importance of the Heparan Sulfate Pathway and the Conserved Oligomeric Golgi Complex for Synthetic Double-Stranded RNA Uptake and Sindbis Virus Infection
When facing a viral infection, the organism has to put in place a number of defense mechanisms in order to clear the pathogen from the cell. At the early phase of this preparation for fighting against the invader, the innate immune response is triggered by the sensing of danger signals. Among those molecular cues, double-stranded RNA (dsRNA) is a very potent inducer of different reactions at the cellular level that can ultimately lead to cell death. Using a genome-wide screening approach, we set out to identify genes involved in dsRNA entry, sensing, and apoptosis induction in human cells. This allowed us to determine that the heparan sulfate pathway and the conserved oligomeric Golgi complex are key determinants allowing entry of both dsRNA and viral nucleic acid leading to cell death.
Cell-surface glycosaminoglycans such as heparan sulfate carry a negative charge that is able to interact electrostatically with the basic residues exposed by viral surface glycoproteins. This allows viruses to increase their concentration at the cell surface and thus the possibility of interacting with their specific entry receptor (2). For instance, alphaviruses, such as Semliki Forest virus (SFV) and Sindbis virus (SINV), are enveloped positive-strand RNA viruses that carry two glycoproteins in their envelope, namely, the proteins E1 and E2. E2 is involved in the interaction of the virus particle with the cell surface (3,4), while E1 serves in the fusion process (5).
Once the virus is inside the cell, the replication of the viral genome represents another critical step for triggering the antiviral immune response. Double-stranded RNA (dsRNA) is a ubiquitous pathogen-associated molecular pattern (PAMP) recognized by the cellular machinery, which can arise as a replication intermediate for viruses with an RNA genome or from convergent transcription for DNA viruses (6). In mammals, dsRNA recognition is driven by specific receptors, including the cytoplasmic RIG-like receptors (RLRs) and endosomal Toll-like receptors (TLRs) (7). Sensing of dsRNA by these receptors results in the activation of a complex signaling cascade leading to the production of type I interferon (IFN), which in turn triggers the expression of IFN-stimulated genes (ISGs) and the establishment of the antiviral state (8). The ultimate outcome of this vertebrate-specific antiviral response is translation arrest and cell death by apoptosis (9).
The revolution brought by the discovery of the CRISPR-Cas9 technology has provided biologists with an invaluable tool for editing the genome at will and easily performing individual gene knockout (KO) (10). This technique is perfectly suited to perform genome-wide screens in a relatively fast and easy-to-implement manner, especially when the readout is based on cell survival. For this reason, numerous CRISPR-Cas9 loss-of-function screens have been performed based on cell survival after infection with different viruses (11)(12)(13). These approaches allowed the identification of novel virus-specific as well as common factors involved in antiviral defense mechanisms or in cellular permissivity to virus infection.
Here, we chose to take advantage of the fact that dsRNA is almost always detected in virus-infected cells (6) and is a potent inducer of apoptosis to design a genome-wide screen aimed at identifying host genes that when edited resulted in increased cell survival to dsRNA and viral challenge. To this aim, we performed a CRISPR-Cas9 screen based on cell survival in HCT116 cells after either cationic lipid-based transfection of an in vitro-transcribed long dsRNA or infection with the model alphavirus SINV, which replicates via a dsRNA intermediate.
Our results indicate that genes involved in limiting attachment and therefore entry, be it of the synthetic dsRNA or SINV, are vastly overrepresented after selection. We validated two genes of the heparan sulfate pathway (namely, SLC35B2 and B4GALT7) as being required for dsRNA transfectability and SINV infectivity. We also identified and characterized COG4, a component of the conserved oligomeric Golgi (COG) complex, as a novel factor involved in susceptibility to dsRNA and viral-induced cell death linked to the heparan sulfate biogenesis pathway.
RESULTS
Genome-wide CRISPR-Cas9 screen based on cell survival upon dsRNA transfection identifies factors of the heparan sulfate pathway. In order to identify cellular genes that are involved in the cellular response to dsRNA, which culminates with cell death, we performed a CRISPR-Cas9 genome-wide loss-of-function screen in the human colon carcinoma cell line HCT116. This cell line is highly suitable for CRISPR-Cas9 genetic screening procedures (14) and can be easily infected with SINV with visible cytopathic effects at 24 and 48 hours postinfection (hpi) (see Fig. S1A in the supplemental material). Moreover, transfection of an in vitro-transcribed 231-bp-long dsRNA by a cationic lipid-based transfection reagent in HCT116 cells led to strong cell death at 24 and 48 hours posttreatment (hpt) (Fig. S1B).
We generated a Cas9-expressing HCT116 monoclonal cell line (Fig. S1C) that we stably transduced with the human genome-wide lentiviral Brunello library composed of 76,441 single guide RNAs (sgRNAs) targeting 19,114 genes, as well as about 1,000 nontargeting sgRNAs as controls (15). We then applied a positive selection by lipofection of 30 million transduced cells per replicate with the synthetic long dsRNA, and we collected the surviving cells 48 h later. In parallel, the same initial amount of stably transduced cells was left untreated as a control (input) for each replicate (Fig. 1A). DNA libraries from the input samples were generated, sequenced, and quality checked. In particular, we verified the sgRNA coverage by confirming the presence of 4 guides per gene for 18,960 genes (99.2% of the genes) and 3 sgRNAs per gene for the remaining 154 genes (0.8% of the genes) (see Data Set S1 in the supplemental material).
Using the MAGeCK software (16), we assessed the normalized read count distribution of the control and dsRNA-treated biological triplicates, which, despite a quite homogenous sgRNA distribution, showed the presence of a few outliers upon selection (Fig. 1B). We identified eight genes that were significantly enriched with a false discovery rate lower than 1% (FDR1%). Among those genes, four belonged to the heparan sulfate biosynthesis pathway (namely, SLC35B2, B4GALT7, EXT1, and EXT2) and three were components of the conserved oligomeric Golgi complex (namely, COG3, COG4, and COG8) (Fig. 1C; see Data Set S2 in the supplemental material). In particular, all four sgRNAs targeting each of the SLC35B2, B4GALT7, and COG4 genes were enriched upon dsRNA selection (Fig. S1D).
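A minimal sketch, not the actual MAGeCK pipeline, of the kind of per-sgRNA enrichment computation that underlies such an analysis; the count table, column names, and pseudocount below are illustrative assumptions (MAGeCK itself uses median-ratio normalization and a robust rank aggregation score rather than a simple median fold change).

```python
import numpy as np
import pandas as pd

# Hypothetical count table: one row per sgRNA, raw read counts per condition.
counts = pd.DataFrame({
    "sgRNA": ["SLC35B2_1", "SLC35B2_2", "B4GALT7_1", "CTRL_NT_1"],
    "gene":  ["SLC35B2", "SLC35B2", "B4GALT7", "control"],
    "input": [310, 275, 290, 305],
    "dsRNA": [4100, 3800, 2500, 280],
})

pseudo = 1.0
for col in ["input", "dsRNA"]:
    # Simple depth normalization to the column median (MAGeCK-like workflows
    # use a median-ratio scheme; this is a rough stand-in).
    counts[col + "_norm"] = counts[col] / counts[col].median()

counts["log2FC"] = np.log2(
    (counts["dsRNA_norm"] + pseudo) / (counts["input_norm"] + pseudo)
)

# Gene-level summary: median log2 fold change across the gene's sgRNAs.
gene_scores = counts.groupby("gene")["log2FC"].median().sort_values(ascending=False)
print(gene_scores)
```

Genes whose sgRNAs are consistently overrepresented in the surviving (dsRNA-selected) population, as SLC35B2 and B4GALT7 are here by construction, are the candidates carried forward for validation.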
Heparan sulfate is a linear polysaccharide that is covalently attached to core proteins in proteoglycans (PGs) on the cell surface (for review, see reference 17). Among many properties, HS plays a role in binding protein ligands and as a carrier for lipases, chemokines, and growth factors (17,18), but also as a viral receptor (19). HS biosynthesis takes place in the Golgi, where most of the biosynthetic enzymes are anchored to the Golgi membrane (20).
We first validated the resistance phenotype to dsRNA of SLC35B2 and B4GALT7, the two top hits identified in the screen (Fig. 1C, Fig. S1D), by generating two individual knockout clones for each gene by CRISPR-Cas9 editing in HCT116cas9 cells. The observed resistance to dsRNA in the mutants could occur at many different steps, namely, dsRNA liposome attachment and entry, recognition, induction of the IFN pathway, or apoptosis. To test whether the first step was affected, we employed a nucleic acid delivery method that was not based on cationic lipid transfection. In particular, we used nucleofection (an electroporation-based transfection method) to introduce long dsRNAs into HCT116 cells, and we showed that this approach restored cell death in SLC35B2 and B4GALT7 knockout cells (Fig. 1D, right part of the graph). In addition, we performed liposome-based transfection of an in vitro-transcribed Cy5-labeled dsRNA in SLC35B2 and B4GALT7 KO cells and assessed the Cy5 fluorescence at 48 h posttransfection by fluorescence-activated cell sorter (FACS) analysis (Fig. 1E and F). Although the number of Cy5-positive (Cy5+) cells was not significantly different in the B4GALT7 KO clones and was only slightly lower in the SLC35B2 KO cells than in wild-type (WT) cells (Fig. 1E), we observed a significant reduction of at least 80% of the median Cy5 fluorescence in both B4GALT7 and SLC35B2 KO cells relative to the control (Fig. 1F), thereby indicating a significant drop in the number of transfected Cy5-labeled RNA molecules per cell.
We also confirmed that liposome-based transfection of nucleic acids, such as plasmid DNA, was impaired in SLC35B2 and B4GALT7 KO cells by transfecting a green fluorescent protein (GFP)-expressing plasmid using Lipofectamine 2000 in wild-type or knockout cells (see Fig. S3A, left, and Fig. S3B in the supplemental material). Nonetheless, GFP expression could be restored in all cell lines upon nucleofection (Fig. S3A, right). To establish whether impairment of HS synthesis is directly linked to a defect in dsRNA entry and increased cell survival, we measured the extracellular HS levels in SLC35B2 and B4GALT7 KO cells. We observed a substantial reduction of the extracellular HS staining, as assessed by FACS measurement of two independent SLC35B2 and B4GALT7 KO clones compared with HCT116 wild-type cells (Fig. 1G). To confirm the importance of HS at the cell surface for liposome-based transfection, we mimicked the HS-defective phenotype by removing extracellular HS in parental HCT116cas9 cells either enzymatically (with heparinase) or chemically (with sodium chlorate [NaClO3]) (Fig. S3C and D). We tested the transfectability of a GFP-expressing plasmid by measuring either the relative number of GFP-positive cells or the relative median of GFP intensity of fluorescence by FACS analysis. Although the relative number of GFP-positive cells was not significantly reduced by heparinase treatment (Fig. S3C), the treatment caused a reduction in GFP intensity in HCT116cas9 cells, thereby recapitulating the GFP plasmid lipofection defect observed in SLC35B2 and B4GALT7 KO cells (Fig. S3D). Moreover, this effect correlated with the reduction of extracellular HS by enzymatic treatment quantified by FACS (Fig. S3E), which demonstrated that extracellular HS is crucial for transfection by lipofection. In the case of NaClO3 treatment, despite the reduction in both the relative number of GFP-positive cells (Fig. S3C) and the relative median of GFP intensity of fluorescence (Fig. S3D) compared with the control, we could not observe a correlation with a decrease in overall extracellular HS (Fig. S3E). This result could be due to the fact that while a mix of heparinase I and III removes every kind of extracellular heparan sulfate, NaClO3 impairs only the O-sulfation (21).
Taken together, our results show that knocking out SLC35B2 and B4GALT7 results in reduced levels of extracellular HS, which in turn impairs liposome-based transfectability of HCT116 cells. Moreover, the validation of these two top hits indicates that other candidates might be suitable for further analysis and may also have an impact on the dsRNA resistance phenotype.
COG4 is involved in dsRNA-induced cell death partly via the heparan sulfate pathway. Among the significant hits of our genome-wide screen were proteins related to the COG complex, namely, COG4, COG3, and COG8. The COG complex is a heterooctameric complex containing 8 subunits (COG1 to COG8) interacting with numerous proteins mainly involved in intra-Golgi membrane trafficking, such as vesicular coats, Rab proteins, and proteins involved in the SNARE complex (22,23). This interaction with the trafficking machinery is crucial for the proper functionality of the Golgi apparatus, and mutations in the COG complex result in severe cellular problems, such as glycosylation defects (24)(25)(26)(27), which are due to mislocalization of recycling Golgi enzymes (28,29).
Since we retrieved three out of the eight COG family members in our CRISPR-Cas9 screen, suggesting their importance in dsRNA-induced cell death, we tested the effect of their inactivation by CRISPR-Cas9. As COG4 is the most enriched COG gene in our screen, we generated a polyclonal COG4 KO HCT116cas9 cell line and validated its dsRNA resistance phenotype, resulting in increased survival in response to synthetic dsRNA transfection (see Fig. S4A in the supplemental material). We also observed a reduction in the relative number of Cy5-positive (Cy5+) cells and in the median Cy5 fluorescence in COG4 KO cells relative to the control by FACS analysis (Fig. S4B and C).

Figure 1 legend (fragment): […] cells expressing the S. pyogenes Cas9 protein were transduced with the lentiviral sgRNA library Brunello (MOI, 0.3). Thirty million transduced cells per replicate were selected with 1 µg/ml puromycin to obtain a mutant cell population covering the library at least 300×. Selective pressure via synthetic long dsRNA (1 µg/ml) was applied to induce cell death (in red). DNA libraries from input cells and cells surviving the dsRNA treatment as three independent biological replicates were sequenced on an Illumina HiSeq 4000 instrument. Comparisons of the relative sgRNA abundance under the input and dsRNA conditions were done using the MAGeCK standard pipeline. (B) Median normalized read count distribution of all sgRNAs for the input (in black) and dsRNA (in red) replicates. (C) Bubble plot of the candidate genes. Significance of the robust rank aggregation (RRA) score was calculated for each gene in the dsRNA condition compared with that of the input using the MAGeCK software. The number of enriched sgRNAs for each gene is represented by the bubble size. The gene ontology pathways associated with the significant top hits are indicated in orange and green. (D) Viability assay. Cells were transfected (80,000 cells; 1 µg/ml) or nucleofected (200,000 cells; 400 ng) with synthetic long dsRNA, and cell viability was quantified 24 h (nucleofection) or 48 h (transfection) posttreatment using PrestoBlue reagent. The average of at least three independent biological experiments ± SD is shown. One-way ANOVA analysis; *, P < 0.05. (E, F) Cy5-labeled dsRNA (80,000 cells; 1 µg/ml) was transfected into HCT116cas9, B4GALT7 #1 and #2, and SLC35B2 #1 and #2 cells, and Cy5 fluorescence was quantified using FACS (10,000 events). The relative number of Cy5-positive (Cy5+) cells (E) and the relative median of Cy5 intensity of fluorescence (F) compared to those of HCT116cas9 cells are shown. The average of three independent biological experiments ± SD is shown. Paired t test analysis; *, P < 0.05. (G) Quantification of extracellular heparan sulfates. FACS analysis of HCT116 control or KO clones stained with the HS-specific antibody 10E4 (in red) compared to unstained samples (in blue) (10,000 events). One representative experiment out of three is shown.
To further confirm the involvement of the COG complex, we also tested the effect of dsRNA transfection in previously generated HEK293T KO COG3, COG4, and COG8 cells (Fig. 2A) (30). Interestingly, while COG8 mutants did not display a significant survival phenotype in response to dsRNA lipofection, COG3 and COG4 KO HEK293T cells did. In addition, the survival phenotype could be complemented by stable expression of a COG4-GFP construct compared with COG4 KO cells (Fig. 2A). Moreover, although we could not detect a decrease in the relative number of Cy5-positive (Cy5+) cells in COG4 KO cells relative to the controls (Fig. 2B), the median Cy5 fluorescence in COG4 KO cells was significantly reduced compared with both HEK293T and COG4-rescued cells (Fig. 2C), thereby indicating a significant decrease in the number of transfected Cy5-labeled RNA molecules per cell.
In agreement, dsRNA accumulation appeared to be significantly reduced, but still present, in HEK293T COG4 KO cells compared with control cells, as determined by reverse transcriptase quantitative PCR (RT-qPCR) analysis of dsRNA isolated from cells 24 h after transfection (Fig. 2D), and this correlated with reduced IFN-beta accumulation in HEK293T KO COG4 cells compared with control cells (Fig. 2E).
These results indicated that dsRNA transfectability was strongly reduced but not completely abolished in the absence of COG4 and that enough dsRNA could still be detected in COG4 KO cells to activate the type I IFN response.
We confirmed the reduced internalization of dsRNA in COG4 mutant cells by transfecting rhodamine-labeled poly(I·C), a synthetic dsRNA analog, in HEK293T COG4 KO or WT cells (Fig. 2F) and by counting the number of poly(I·C) foci per cell at 6, 12, and 24 h posttransfection (Fig. 2G). We could observe a significant reduction in rhodamine-positive foci in HEK293T COG4 KO during the time course, suggesting a defect in dsRNA internalization, which could explain the increased survival phenotype.
In order to assess whether the COG4 KO survival phenotype was associated with a defect in the heparan sulfate pathway, we stained extracellular HS and measured the HS expression by FACS analysis. We observed a decrease of extracellular HS in KO COG4 cells compared with control cells (WT and rescued), which demonstrated that the COG complex is related to the HS biosynthesis pathway (Fig. S4D).
The reduction in extracellular HS could correlate with a decrease in transfectability and explain the survival phenotype in KO COG4 cells. Surprisingly, however, lipofection of a GFP-expressing plasmid indicated that HEK293T COG4 KO cells are still transfectable with a plasmid DNA compared with control cells, as observed by FACS analysis (Fig. S4E and F).
Altogether, these findings indicate that the COG complex is involved in HS biosynthesis and that removal of COG4 results in a lower accumulation of HS at the cell surface, which most likely translates to a reduced transfectability of dsRNA. However, as opposed to the observations in SLC35B2 or B4GALT7 KO cells, the cells are still transfectable with a plasmid DNA and, although to a lower extent, with dsRNA. Interestingly, the increased cell survival phenotype of COG4 KO cells upon dsRNA transfection does correlate with a reduced, but still measurable, IFN-β production.
Cell survival-based genome-wide CRISPR-Cas9 screen identifies COG4 as a permissivity factor to SINV. SINV is a small enveloped virus with a single-stranded RNA genome of positive polarity. The virus belongs to the Togaviridae family, Alphavirus genus, and is considered the model for other medically important viruses, such as chikungunya virus (CHIKV) and Semliki Forest virus (SFV). During its infectious cycle, SINV produces dsRNA as a replication intermediate and induces cytopathic effects in mammalian cells, leading to cell death within 24 to 48 h postinfection (31).
In order to identify host genes that are related to SINV-induced cell death and infection, we performed a CRISPR-Cas9 knockout screen in HCT116cas9 cells, which are susceptible to this virus (Fig. 3A and Fig. S1). After transduction with the CRISPR lentiviral genome-wide knockout library, puromycin-resistant HCT116 cells were infected with SINV-GFP at a multiplicity of infection (MOI) of 0.1 and selected for cell survival. Using the MAGeCK software (16), we assessed the normalized read count distribution of the control and SINV-infected biological triplicates, which, despite a quite homogenous sgRNA distribution, showed the presence of a few outliers upon selection (Fig. 3B). We identified two genes that were significantly enriched with a false discovery rate lower than 25% (FDR25%), notably SLC35B2 and B4GALT7 (see Data Set S3 in the supplemental material, Fig. 3C). Genes of the heparan sulfate pathway have been previously found in genome-wide CRISPR-Cas9 loss-of-function studies looking for factors involved in the accumulation of viruses, such as influenza, Zika, and chikungunya viruses (11,32,33). Interestingly, among the top-ranking hits, we retrieved COG4, which was not previously associated with SINV infection (Fig. 3C).

Figure 3 legend (fragment): […] KO, and rescued HEK293T cells were infected with SINV GFP for 24 and 48 h at an MOI of 1, and the supernatant was collected in order to measure viral production. The fold change in titer relative to HEK293T cells arbitrarily set to 1 is shown. The average of three independent biological experiments ± SD is shown. Paired t test analysis; *, P < 0.05.
To validate the involvement of COG4 during SINV infection, we infected HEK293T, COG4 KO, or COG4 KO-rescued HEK293T cells with SINV and measured cell viability at 24, 48, and 72 hpi. The cell viability assay revealed that the COG4 KO cells were less sensitive at early time points of SINV infection (24 and 48 hpi), but this tendency disappeared at 72 hpi (Fig. 3D). In agreement, the determination of viral titer by plaque assay showed that COG4 KO HEK293T cells produced significantly fewer infectious viral particles than HEK293T or COG4-rescued cells at 24 hpi but not at 48 hpi, underlining a possible delay in the infection and virus-induced cell death (Fig. 3E). We also observed that GFP accumulated to lower levels in COG4 KO than in WT cells both at 24 and 48 h postinfection (hpi) (Fig. S5A). Finally, we noticed that the reduced viral production in COG4 KO cells was associated with a reduced accumulation of viral dsRNA in the cytoplasm when we infected WT or COG4 KO cells with SINV-GFP and performed immunostaining with the anti-dsRNA J2 antibody at 24 hpi (Fig. S5B). Overall, our results indicate that COG4 expression is needed for an efficient SINV infection and that its absence can delay the infection, thereby increasing cell survival in COG4 KO cells.
DISCUSSION
Several CRISPR-Cas9 screens aimed at identifying factors required for infection by specific viruses have been described in the literature, but to our knowledge, none has been designed to look at the effect of the only common factor between all those viruses, i.e., dsRNA. Here, we used the Brunello sgRNA lentiviral library to screen for genes involved in HCT116 cell survival to synthetic dsRNA transfection and to SINV infection. This screen allowed us to identify components of the heparan sulfate biosynthesis pathway and of the COG complex that are critical host factors in the cellular response to both long dsRNA transfection and SINV infection challenges. It has been reported that cell survival-based CRISPR screens for viral host factors are biased toward genes linked to the initial steps of the infection and even more so to viral entry (11,34). Thus, in our case, HS is a well-known factor required for SINV entry due to virus adaptation to cell culture (35). We also retrieved genes of the HS pathway in our dsRNA-based screen, and we confirmed the importance of extracellular HS for dsRNA-induced toxicity. This is mostly due to a decrease of cell transfectability when HSs are missing, which is linked to the fact that the polyplexes used for transfection are positively charged and can interact electrostatically with glycosaminoglycans (36,37). Our work also addresses limitations of survival-based CRISPR-Cas9 screens. Thus, in this study, the selection pressure was too strong to allow the identification of genes intervening after the entry step, thereby making the screen less sensitive. Adjusting the selection procedure by reducing the concentration of dsRNA or increasing the duration of treatment may allow the identification of hits not solely implicated in the transfection process but also in the innate response of the cells to either dsRNA or the transfection agent. Alternatively, new strategies should be designed to overcome this problem, such as using fluorescence-based cell sorting in order to be less stringent in the selection.
In addition to the HS pathway, we identified members of the COG complex, and more specifically COG4, as factors involved in dsRNA transfection and SINV infection. Loss-of-function COG4 mutant cells show a dsRNA-resistant phenotype as well as a reduction in extracellular HS expression, which is similar to previously published reports for other COG proteins (33,38). Surprisingly, even if the removal of COG4 expression results in a defect in the HS pathway, we were still able to transfect the COG4 KO cell line either with a plasmid encoding GFP or, although to a lesser extent, with dsRNA. In addition, the dsRNA molecules that are able to enter COG4 KO HEK293T cells are still sufficient to induce IFN-β mRNA production, indicating that the innate immune response is still functional in this mutant background. Nonetheless, cell death induced by dsRNA appears to be lower in COG4 KO cells, most likely due to a delay in the infection.
Future work will be needed to assess whether this phenotype upon SINV infection is correlated only with a defect in HS biogenesis in COG4 mutants or with other functions of COG4. The increased cell survival to synthetic dsRNA transfection and viral infection of COG4 KO cells compared to WT cells opens several interesting perspectives. Indeed, since the COG complex is related to glycosylation and membrane trafficking (23,25,39,40), deficiency in one or more of its components could potentially lead to a glycosylation and/or subcellular localization defect of components of the innate immune response or of the apoptosis pathway, although this possibility remains to be formally proven. The difference of transfectability of plasmid DNA and dsRNA in COG4 KO cells is also intriguing and could indicate that different kinds of nucleic acids do not necessarily use the exact same routes to enter the cells upon liposome-based transfection. Finally, there could be other defects linked to COG deficiencies (39, 41) that could account for our observations, and elucidating those defects will require further work. It is particularly interesting that COG3 and COG4 knockout cells display a dsRNA-induced cell death resistance phenotype, while COG8 mutants do not. This finding implies that only part of the COG complex is involved in dsRNA uptake.
In conclusion, our work uncovered COG4 as a new player in HS production, which is required for both SINV infection and dsRNA transfection. These results also highlight that synthetic dsRNA is a powerful tool for identifying novel key pathways of the cellular response to RNA viruses.
SINV wild type (SINV WT) or SINV expressing the green fluorescent protein (SINV GFP) were produced as previously described (44) in BHK-21 cells. In SINV GFP, the promoter of the SINV subgenomic RNA was duplicated and inserted at the 3′ extremity of the viral genome, and the GFP sequence was then inserted after this new promoter. Cells were infected with either SINV WT or SINV GFP at an MOI of 10⁻¹, and samples were harvested at 24 or 48 hours postinfection (hpi).
Standard plaque assay. Ten-fold dilutions of the viral supernatant were prepared. Fifty-microliter aliquots were inoculated onto Vero R cell monolayers in 96-well plates for 1 hour. Afterward, the inoculum was removed, and cells were cultured in 2.5% carboxymethyl cellulose for 72 h at 37°C in a humidified atmosphere of 5% CO2. Plaques were counted manually under the microscope. For plaque visualization, the medium was removed and cells were fixed with 4% formaldehyde for 20 min and stained with 1× crystal violet solution (2% crystal violet [Sigma-Aldrich], 20% ethanol, and 4% formaldehyde).

J2 immunostaining. HEK293T or KO COG4 HEK293T cells were plated onto a Millicell EZ slide (Millipore) and were infected with SINV at an MOI of 0.1 for 24 h. Cells were fixed with 4% formaldehyde diluted in 1× phosphate-buffered saline (PBS) for 10 min at room temperature (RT), followed by incubation in blocking buffer (0.2% Tween X-100, 1× PBS, and 5% normal goat serum) for 1 h. J2 antibody (Scicons) diluted at 1:1,000 in blocking buffer was incubated overnight at 4°C. Between each step, cells were washed with 1× PBS-0.2% Tween. Secondary antibody goat anti-mouse Alexa 594 (ThermoFisher) diluted at 1:1,000 in 1× PBS-0.2% Tween was incubated for 1 h at room temperature. After 4′,6-diamidino-2-phenylindole (DAPI) staining (1:5,000 dilution in 1× PBS for 5 min), slides were mounted with a coverslip over antifading medium and observed by epifluorescence microscopy using a BX51 (Olympus) microscope with a 40× objective.
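As a small worked example (not from the paper) of how a titer is typically computed from the standard plaque assay described above, assuming the 50-µl inoculum mentioned there; the plaque count and dilution used here are hypothetical.

```python
def plaque_titer_pfu_per_ml(plaque_count, dilution_factor, inoculum_ml=0.05):
    """Titer (PFU/ml) = plaques / (dilution factor * inoculated volume in ml)."""
    return plaque_count / (dilution_factor * inoculum_ml)

# Hypothetical numbers: 42 plaques counted in the well inoculated with the
# 10^-5 dilution of a 50-ul aliquot.
print(f"{plaque_titer_pfu_per_ml(42, 1e-5):.2e} PFU/ml")  # -> 8.40e+07 PFU/ml
```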
Generation of HCT116cas9 line. The HCT116cas9 cells, expressing the human codon-optimized Streptococcus pyogenes Cas9 protein, were obtained by transducing the wild-type HCT116 colorectal carcinoma cell line (ATCC CCL-247) with a lentiCas9-BLAST lentiviral vector (no. 52962; Addgene). Briefly, wild-type HCT116 cells were cultured in standard DMEM (Gibco) medium supplemented with 10% fetal bovine serum (FBS; Gibco) and 100 U/ml of penicillin-streptomycin (Gibco) at 37°C in 5% CO2. The cells were transduced at 80% confluence in a 10-cm tissue culture plate, using 6 ml of lentiviral supernatant supplemented with 4 μg/ml of Polybrene (H9268; Sigma) for 6 hours. The transduction medium was replaced with fresh growing medium for 24 h before starting the selection. Transduced HCT116cas9 cells were selected for 10 days and maintained in growing medium supplemented with 10 μl/ml of blasticidin (Invivogen).
High-titer lentiviral sgRNA library production. The production of a high-titer human sgRNA Brunello lentiviral library, which contains 4 sgRNAs per gene (15) (no. 73178; Addgene), was performed by transfecting HEK293T cells in five 15-cm tissue culture plates using the polyethyleneimine (PEI) (linear; molecular weight [MW], 25,000; no. 23966-1-A; Polysciences) transfection method (45). Briefly, for each 15-cm plate containing 20 ml of medium, 10 μg of sgRNA library, 8 μg of psPAX2 (plasmid no. 12260; Addgene), and 2 μg of pVSV-G (plasmid no. 138479) diluted in 500 μl of 150 mM NaCl were combined with 40 μl of PEI (1.25 mg/ml) dissolved in 500 μl of 150 mM NaCl. The mix was incubated for 30 minutes at room temperature, and the formed complexes were added dropwise onto the cells. After 6 hours, the medium was replaced, and the viral supernatant was collected after 48 hours and after 72 hours. The supernatant was filtered through a 0.45-μm polyethersulfone (PES) filter, and the viral particles were concentrated 100 times using the Lenti-X concentrator (TaKaRa) before storage at −80°C. The viral titer was established by counting puromycin-resistant colonies formed after transducing HCT116 cells with serial dilutions of the viral stock. HCT116cas9 cells were transduced with the lentivirus-packaged Brunello sgRNA library at an MOI of 0.3. The lentiviral library was sequenced to verify that all lenti-sgRNAs were represented.
Genome-wide CRISPR-Cas9 knockout screens. For each replicate (n = 3), 5 million stably transduced cells per dish were seeded in six 15-cm plates in order to keep a 300× representativity of the sgRNA library. Untreated samples (input) were collected as controls. One day later, cells were either lipofected with 1 μg/ml dsRNA-citrine or infected with SINV at an MOI of 0.1 and cultured at 37°C and 5% CO2. Cells were washed with 1× PBS 48 hours posttreatment to remove dead cells, and fresh medium was added to surviving clones. Cells were expanded, and all cells were collected 6 days post-dsRNA transfection and 18 days post-SINV infection.
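As a rough sanity check on the 300× representativity figure above, the coverage can be recomputed from the stated cell numbers; the Brunello library size used below (~76,441 sgRNAs) is an assumption, since it is not given in the text.

```python
# Back-of-the-envelope coverage check for the screen described above.
CELLS_PER_PLATE = 5_000_000      # from the text
PLATES = 6                       # from the text
LIBRARY_SGRNAS = 76_441          # assumed Brunello library size (not stated in the text)

total_cells = CELLS_PER_PLATE * PLATES
coverage = total_cells / LIBRARY_SGRNAS
print(f"{total_cells:,} cells -> ~{coverage:.0f}x coverage")  # ~392x, i.e. above the 300x target
```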
Genomic DNA was isolated by resuspending the cell pellet in 5 ml of resuspension buffer (50 mM Tris-HCl [pH 8.0], 10 mM EDTA, and 100 μg/ml RNase A); 0.25 ml of 10% SDS was then added, mixed, and incubated for 10 min at RT. After incubation, the sample was sonicated and incubated for 30 min at RT with 10 μl of proteinase K (10 mg/ml). A total of 5 ml of a phenol/chloroform/isoamyl alcohol solution was added, followed by centrifugation for 60 min at 12,000 × g and 20°C. The upper phase was transferred into a new tube, 500 μl of 3 M NaAc and 5 ml of isopropanol were added, and the mixture was incubated overnight at RT, followed by centrifugation for 30 min at 20°C and 12,000 × g. The pellet was washed with EtOH and dissolved in H2O.
Illumina P5- and P7-barcoded adaptors were added by PCR on genomic DNA (gDNA) samples according to the GoTaq protocol (Promega). PCR amplicons were gel purified and sequenced on a HiSeq 4000 instrument (Illumina) to obtain about 30 million reads for each sample. The enrichment of sgRNAs was analyzed using MAGeCK with default parameters (16). The primers used to generate the PCR products are listed in Table S1 in the supplemental material. The results of the dsRNA and SINV screens are available in Data Sets S1 and S2, respectively.

Generation of monoclonal SLC35B2 and B4GALT7 and polyclonal COG4 knockout HCT116 cell lines. The sgRNA expression vectors targeting the SLC35B2, B4GALT7, or COG4 genes (the sgRNA sequences selected were the 2 most enriched sgRNAs from the Brunello library in the dsRNA screen) were produced by annealing the "sense" and "antisense" oligonucleotides (Table S1) at a concentration of 10 μM in 10 mM Tris-HCl (pH 8.0) and 50 mM MgCl2 in 100 μl. The mixture was incubated at 95°C for 5 minutes and then allowed to cool down to room temperature. The oligonucleotide duplex thus formed was cloned into the BbsI restriction site of the plasmid pKLV-U6gRNA (BbsI)-pGKpuro2ABFP (no. 62348; Addgene). The lentiviral supernatant from the single transfer vector was produced by transfecting HEK293T cells (ATCC CRL-3216) with the transfer vector, the psPAX2 packaging plasmid (no. 12260; Addgene), and the pVSV-G envelope plasmid (no. 8454; Addgene) in the proportion 5:4:1 using Lipofectamine 2000 (ThermoFisher) reagent according to the manufacturer's protocol. Standard DMEM (Gibco) supplemented with 10% fetal bovine serum (FBS; Gibco) and 100 U/ml of penicillin-streptomycin (Gibco) was used for growing HEK293T cells and for lentivirus production. One 10-cm plate of HEK293T cells at 70% to 80% confluence was used for the transfection. The medium was replaced 8 hours posttransfection. After 48 h, the medium containing viral particles was collected and filtered through a 0.45-μm PES filter. The supernatant was used directly for transduction or stored at −80°C. A 6-well plate of HCT116cas9 cells at 80% confluence was transduced using 600 μl of the lentiviral supernatant (300 μl of each lentivirus produced for each duplex) supplemented with 4 μg/ml Polybrene (Sigma) for 6 h. The transduction medium was then replaced with fresh DMEM for 24 hours, and the transduced cells were selected using DMEM containing 10 μg/ml blasticidin (Invivogen) and 1 μg/ml puromycin (Invivogen). Genomic DNA was isolated from individual colonies, and KO clones were screened by PCR (primers in Table S1). The expected WT band for SLC35B2 is 469 bp and the mutant band is 132 bp. For B4GALT7, the WT band is 341 bp and the mutant band is 180 bp. For laboratory purposes, the SLC35B2 clones were generated in HCT116cas9 cells expressing mCherry and citrine due to integration of miReporter-PGK (no. 82477; Addgene).
Nucleic acid delivery. Transfection using Lipofectamine 2000 (no. 11668019; Invitrogen) was performed following the manufacturer's instructions. For nucleofection, cells were nucleofected with Nucleofector SE solution and reagent in a Nucleocuvette using the 4D-Nucleofector system (Lonza), following the manufacturer's instructions. The cell number and nucleic acid amounts are indicated in each figure legend. pEGFP-N1 (plasmid no. 2491; Addgene) was used as a control in transfection and nucleofection experiments.
"Biology",
"Medicine"
] |
Accuracy of cup position following robot-assisted total hip arthroplasty may be associated with surgical approach and pelvic tilt
This study aimed to investigate the accuracy of cup placement and determine the predictive risk factors for inaccurate cup positioning in robot-assisted total hip arthroplasty (THA). We retrospectively analyzed 115 patients who underwent robot-assisted THA between August 2018 and November 2019. Acetabular cup alignment and three-dimensional (3D) position were measured using pre- or postoperative computed tomography (CT) data. Absolute differences in cup inclination, anteversion, and 3D position were assessed, and their relation to preoperative factors was evaluated. The average measurement of the absolute differences was 1.8° ± 2.0° (inclination) and 1.9° ± 2.3° (anteversion). The average absolute difference in the 3D cup position was 1.1 ± 1.2 mm (coronal plane) and 0.9 ± 1.0 mm (axial plane). Multivariate analysis revealed that a posterior pelvic tilt [odds ratio (OR), 1.1; 95% confidence interval (CI), 1.00–1.23] and anterior surgical approach (OR, 5.1; 95% CI, 1.69–15.38) were predictive factors for inaccurate cup positioning with robot-assisted THA. This is the first study to demonstrate the predictive risk factors (posterior pelvic tilt and anterior surgical approach) for inaccurate cup position in robot-assisted THA.
Preoperative plan and surgery. Preoperative CT scans from the level of the iliac wing to the femoral condyle were obtained. The slice thickness was 1 mm, and the CT data were transferred to the MAKO planning module. Subsequently, preoperative planning was performed to determine the optimal component size, angle, and position using the three-dimensional (3D) templating software of the MAKO robotic hip system before surgery. The target cup inclination angle was usually fixed at 40°, and the anteversion angle was determined preoperatively according to Widmer's combined anteversion theory 21 with respect to the functional pelvic plane 22 . Pelvic tilt was defined as the inclination angle of the anterior pelvic plane, determined by both the anterior superior iliac spines and the pubic tubercles, relative to the table plane. The patients underwent THA with a Trident hemispherical cup and Accolade II or Exeter v40 stems (Stryker Orthopaedics, Mahwah, NJ, USA).
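The combined-anteversion rule itself is not spelled out above; a commonly cited form of Widmer's relation (an assumption here, not a quotation from reference 21) is that cup anteversion plus roughly 0.7 times stem antetorsion should equal about 37°. A minimal sketch under that assumption:

```python
# Hedged sketch of a combined-anteversion target (assumed form:
# cup anteversion + 0.7 * stem antetorsion ~= 37 degrees); values are illustrative only.
def planned_cup_anteversion(stem_antetorsion_deg: float, target_combined_deg: float = 37.0) -> float:
    return target_combined_deg - 0.7 * stem_antetorsion_deg

print(planned_cup_anteversion(25.0))  # a 25-degree stem antetorsion gives ~19.5 degrees of cup anteversion
```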
The robot-assisted THA procedures were performed with the MAKO robotic hip system, which is a robot-assisted computer navigation system that uses the RIO robotic arm (MAKO Rio Robot) to ream the acetabulum and place the acetabular component. After placement of the acetabular cup, one or two screws were inserted, and the intraoperative cup alignment was confirmed by touching five points at the cup edge using the navigation pointer. Full weight-bearing was allowed 1 day after surgery in all patients.
Postoperative evaluation. Postoperative CT scans were obtained 1 week after surgery, and the data were transferred to the OrthoMap 3D Navigation System (Stryker Orthopaedics). The computer-aided design models of the implants were manually adjusted for postoperative multi-planar reconstruction of the CT images (Fig. 1). Cup inclination and anteversion angles were measured with respect to the functional pelvic plane. To analyze the accuracy of cup alignment, we compared the absolute differences in cup alignment between the postoperative measurements, navigation records, and preoperative plans. To assess the cup position in the axial axis, a normal vector line passing through the cup center was drawn in the axial views of the preoperative plans on the MAKO workstation ( Fig. 2) and the postoperative reconstruction image on the OrthoMap 3D workstation (Fig. 3). To assess the cup position in the coronal axis, a horizontal line passing through the cup center was drawn in the coronal views on the preoperative plans and postoperative images (Figs. 2 and 3). The distance between the outer edge of the cup and the medial edge of the acetabulum on the line was measured, and the absolute differences in distance between that assessed in the preoperative plan and the postoperative measurement were calculated in the axial and coronal views (Figs. 2 and 3).
Statistics. All data are expressed as mean ± standard deviation (SD) unless otherwise indicated. The differences in cup angles between the preoperative plans and the postoperative CT measurements were analyzed by paired t-test (Table 2). The average absolute differences in cup alignment in cases where surgeons used the posterior approach were analyzed using one-way analysis of variance. Outliers of acetabular cup placement were defined as an absolute difference in inclination or anteversion of more than 5°, or a difference in distance between the preoperative plan and the postoperative CT measurement of more than 3 mm, according to previous reports 18,23,24 . To identify predictive factors for outliers in postoperative cup position with robot-assisted THA, outliers and non-outliers were compared using Fisher's exact test for nominal variables (Tables 3 and 5) and the unpaired t-test for continuous variables (Tables 4 and 6). Additionally, we performed a multivariate analysis to test the association of acetabular shape, surgical approach, and pelvic tilt with outliers of cup placement (Table 7). Odds ratios (ORs) and 95% confidence intervals (95% CIs) were calculated using Fisher's exact test and multivariate analysis. The database was analyzed using SPSS version 16.0 (IBM Corp., Armonk, NY, USA), and p-values < 0.05 were considered statistically significant. For Fisher's exact test, a power calculation using G*Power 3 25 determined that a minimum total sample size of 114 patients would be sufficient to ascertain whether there was a significant difference, with a power of 0.8, a prespecified significance level of α < 0.05, and an allocation ratio (N1/N2) based on a preliminary study of 30 cases; an OR of 5 or more was set as a clinically meaningful difference. For the unpaired t-test, we calculated the effect size as Hedges' g from the means and SDs of each parameter, together with a 95% CI for the effect size 26 .

Ethics. This study complies with the Declaration of Helsinki; the study protocols were approved by the ethics committee of the Kobe University Graduate School of Medicine, and all participants provided informed consent for participation.
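Returning to the effect-size analysis described in the Statistics paragraph, the sketch below shows one standard way to compute Hedges' g with an approximate 95% CI from group means and SDs; the input values are made up for illustration and are not the study data.

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g with an approximate 95% CI (normal approximation)."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)             # small-sample correction factor
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

g, ci = hedges_g(m1=8.5, s1=6.0, n1=12, m2=3.2, s2=5.5, n2=103)  # illustrative values only
print(f"g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```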
Results
Robot-assisted THA accurately reproduced the preoperative plan of cup angles and positions. Table 2 provides the average angles of cup inclination and anteversion documented in the preoperative plans, intraoperative navigation records, and postoperative measurements. The average inclination angle in the preoperative plan was 40°, and the average anteversion angle was 19.2°. The average angles did not reveal any remarkable change between the preoperative plans and postoperative measurements (Table 2). The average absolute differences in cup alignment (postoperative CT measurement minus preoperative plan) were 1.8° ± 2.0° (inclination) and 1.9° ± 2.3° (anteversion) for all cases (Table 2). Additionally, we compared the reproducibility of cup angles using robot-assisted THA with changes in pelvic tilt. No notable differences were observed in the frequencies of cup inclination or anteversion outliers with respect to treated side, surgical approach, or acetabular shape (Table 3). Table 4 presents the mean values of BMI, age, and pelvic tilt angle of patients in the outlier and non-outlier groups. The average BMI and age of the patients revealed no significant differences for inclination and anteversion angles. However, the mean pelvic tilt angle was significantly different between the outlier and non-outlier groups for anteversion (p = 0.029) (Table 4).
Reproducibility of 3D cup positions using robot-assisted THA with variations in surgical approaches. The accuracy of 3D cup positioning was estimated, and the average absolute differences in distance (postoperative CT minus preoperative plan) were 1.1 ± 1.2 mm (coronal plane) and 0.9 ± 1.0 mm (axial plane). No significant differences were observed in the frequency of 3D cup position outliers with respect to treated side or acetabular shape. However, we observed a significant difference in the frequency of 3D cup position outliers between patients who underwent surgery via an anterior or posterior approach (p = 0.004) (Table 5). Table 6 presents the mean values of BMI, age, and pelvic tilt angle of patients in the outlier and non-outlier groups. The average BMI, age, and pelvic tilt were not significantly different between the outlier and non-outlier groups regarding the 3D cup position.
Predictive factors for outliers of postoperative cup placement with robot-assisted THA. We noted that two predictive factors (surgical approach and pelvic tilt) were related to postoperative cup malposition with robot-assisted THA based on unpaired t-test results for continuous variables and Fisher's exact test results for nominal observations. However, the predictive factors for outliers could be dependent on multiple confounders. Therefore, these two significant predictive factors were used as covariates for multivariate analysis. We discerned that outliers of postoperative cup anteversion with robot-assisted THA were significantly associated with posterior pelvic tilt (OR, 1.1; 95% CI, 1.00-1.23), and the outliers of cup position were associated with an anterior approach (OR, 5.1; 95% CI, 1.69-15.38) ( Table 7).
Discussion
We examined the reproducibility of cup placement with robot-assisted THA in this study. This is the first study to demonstrate that a posterior pelvic tilt and an anterior surgical approach were significantly associated with inaccurate postoperative cup positioning. A previous study examined the effect of BMI on the accuracy of cup placement 27 ; their results support our findings that BMI did not affect the accuracy of cup inclination, anteversion, or 3D cup position with robot-assisted THA.
Previous studies have demonstrated a higher complication rate comprising loosening of the acetabular component and postoperative dislocation after THA in DDH patients [28][29][30] . The reason for the higher complication rate is inaccurate acetabular cup positioning due to an inadequate acetabular roof, a double acetabular floor, the presence of osteophytes, and a difficulty in identifying the accurate orientation of the acetabulum [28][29][30] . However, several studies described an accurate cup placement with CT-based navigation in DDH cases 12,31,32 . Similarly, we demonstrated no differences in the accuracy of cup positioning between DDH and non-DDH patients after robot-assisted THA.
Several reports demonstrated that the posterior approach could achieve accurate 3D cup positions with navigation THA 33,34 . Nakahara et al. demonstrated in a case series using the posterior approach that the CT-based navigation accuracy of the implant position, which was defined as the difference between the postoperative measurements on CT images and the intraoperative records on the navigation system, was within a 2-mm difference 34 . However, we did not find any literature focused on 3D cup positions with the mini-anterior surgical approach in robot-assisted THA. In this study, we observed that a mini-anterior approach affected the 3D implant position with robot-assisted THA. Minimally invasive surgical approaches are currently used for THA. Mini-incision THA reduces postoperative pain and blood loss, leads to a faster recovery, and reduces hospital stay compared to THA performed using a standard approach 35 . However, some researchers are concerned that mini-incision THA may introduce new potential problems related to a reduced visual field during surgery, such as implant malposition, neurovascular injury, and poor implant fixation 36 . The visual field during manual registration of landmarks with a mini-anterior approach in the supine position was much smaller than that with a posterior approach, with difficulty in the registration of the anterior and posterior wall edges (Fig. 4). Sufficient visualization of the surgical field is essential for registration of the cup position, and strategies to improve accuracy regardless of the approach are needed for robot-assisted THA implantation.
We also determined that a posterior pelvic tilt affected the accuracy of postoperative cup anteversion with robot-assisted THA. Previous simulation studies suggest that changes in pelvic tilt would lead to changes in acetabular anteversion 37 . Yamada et al. reported that the accuracy of cup anteversion in CT-based navigation was lower in patients with a greater posterior pelvic tilt 15 . Hasegawa et al. also stated that posterior pelvic tilt affected the accuracy of the postoperative cup anteversion angle in a mini-anterolateral approach using an accelerometer-based portable navigation system 38 . We had previously reported that preoperative posterior pelvic tilt was significantly associated with greater pelvic motion on the axial axis during surgery 39 , and this greater motion could cause anteversion errors even with robot-assisted THA.
Our study had some limitations. First, the number of patients included in the study was too small to analyze all the aspects of robot-assisted THA. Secondly, this was not a randomized trial but a retrospective cohort study with inherent limitations. | 2,822.4 | 2021-04-07T00:00:00.000 | [
"Medicine",
"Engineering"
] |
An Inverse Problem for Quantum Trees with Delta-Prime Vertex Conditions
In this paper, we consider a non-standard dynamical inverse problem for the wave equation on a metric tree graph. We assume that the so-called delta-prime matching conditions are satisfied at the internal vertices of the graph. Another specific feature of our investigation is that we use only one boundary actuator and one boundary sensor, all other observations being internal. Using the Neumann-to-Dirichlet map (acting from one boundary vertex to one boundary and all internal vertices) we recover the topology and geometry of the graph together with the coefficients of the equations.
Introduction
This paper concerns inverse problems for differential equations on quantum graphs. Under quantum graphs or differential equation networks (DENs) we understand differential operators on geometric graphs coupled by certain vertex matching conditions. Network-like structures play a fundamental role in many problems of science and engineering. The range for the applications of DENs is enormous. Here is a list of a few.
-Structural Health Monitoring. DENs, classically, arise in the study of stability, health, and oscillations of flexible structures that are made of strings, beams, cables, and struts. Analysis of these networks involve DENs associated with heat, wave, or beam equations whose parameters inform the state of the structure, see, e.g., [1].
-Water, Electricity, Gas, and Traffic Networks. An important example of DENs is the Saint-Venant system of equations, which model hydraulic networks for water supply and irrigation, see, e.g., [2].
Other important examples of DENs include the telegrapher equation for modeling electric networks, see, e.g., [3], the isothermal Euler equations for describing the gas flow through pipelines, see, e.g., [4], and the Aw-Rascle equations for describing road traffic dynamics, see e.g., [5].
-Nanoelectronics and Quantum Computing. Mesoscopic quasi-one-dimensional structures such as quantum, atomic, and molecular wires are the subject of extensive experimental and theoretical studies, see, e.g., [6] and the collections of papers in [7][8][9]. The simplest model describing conduction in quantum wires is the Schrödinger operator on a planar graph. Similar models appear in nanoelectronics, high-temperature superconductors, quantum computing, and studies of quantum chaos, see, e.g., [10][11][12].
Here, T is an arbitrary positive number, q j ∈ C([a 2j−1 , a 2j ]) for all j, and f ∈ L 2 (0, T). The physical interpretation of conditions (3) and (4), and of some other matching conditions, was discussed in [20].
The well-posedness of this system is discussed in Section 2. In what follows, we refer to γ 0 as the root of Ω and f as the control.
We now pose our inverse problem. Assume an observer knows the boundary condition (5), that (6) holds at the other boundary vertices, and that the graph is a tree. The unknowns are the number of boundary vertices and interior vertices, the adjacency relations for this tree, i.e., for each pair of vertices, whether or not there is an edge joining them, the lengths {ℓ j }, and the function q. We wish to determine these quantities with a set of measurements that we describe now. We can suppose v N is the interior vertex adjacent to γ 0 , with e 1 the edge joining the two, see Figure 1. Our first measurement is then the response operator R 0,1 associated with the root γ 0 . We show that from the operator R 0,1 one can recover ℓ 1 and the degree Υ N of v N . By a well-known argument, see [21], one can then determine q 1 . Having established these quantities, in our second step we propose to place sensors on the edges incident to v N , and to use these measurements together with R 0,1 to determine the data associated to these edges. Note that the one control remains at γ 0 . The goal is to repeat these steps until all data associated to the graph have been determined. To define the interior measurements we require more notation. For each interior vertex v k we list the incident edges as {e k,j : j = 1, ..., Υ k }. Here e k,1 is chosen to be the edge lying on the unique path from γ 0 to v k , and the remaining edges are labeled arbitrarily, see Figure 2. The sensors then record the solution on the corresponding edges at v k , defining the operators R k,j (see (7) and (8)). We show that we do not need sensors at e k,1 or e k,Υ k . Thus the total number of sensors is 1 + ∑ N j=m+1 (Υ j − 2). It is easy to check that this number is equal to |Γ| − 1. We denote by R T the (|Γ| − 1)-tuple (R 0,1 , R N,2 , R N,3 , ...) acting on L 2 (0, T).
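The claim that the number of sensors equals |Γ| − 1 follows from a standard degree count on a tree; a short verification, assuming the internal vertices are exactly v m+1 , ..., v N and the leaves are exactly the boundary vertices Γ, runs as follows.

```latex
% Sketch: why 1 + \sum_{j=m+1}^{N}(\Upsilon_j - 2) = |\Gamma| - 1 on a tree.
% Let I = N - m be the number of internal vertices. A tree with |\Gamma| + I
% vertices has |\Gamma| + I - 1 edges, and each boundary vertex has degree 1, so
\begin{align*}
  |\Gamma| + \sum_{j=m+1}^{N} \Upsilon_j &= 2\bigl(|\Gamma| + I - 1\bigr)
    && \text{(handshake lemma)} \\
  \Longrightarrow\quad \sum_{j=m+1}^{N} \bigl(\Upsilon_j - 2\bigr) &= |\Gamma| - 2,
  \qquad\text{hence}\quad 1 + \sum_{j=m+1}^{N} \bigl(\Upsilon_j - 2\bigr) = |\Gamma| - 1 .
\end{align*}
```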
Results
Let ℓ be equal to the maximum distance between γ 0 and any other boundary vertex. Our main result is the following theorem.

Theorem 1. Assume q j ∈ C([a 2j−1 , a 2j ]) for all j. Suppose T > 2ℓ. Then from R T one can determine the number of interior and boundary vertices, the adjacency relations of the tree, q, and the lengths of the edges.
Discussion
We now compare this result to others in the literature. We are unaware of any works treating the inverse problem on general tree graphs with delta-prime conditions at the internal vertices. The most common conditions for internal vertices are continuity together with the Kirchhoff-Neumann condition ∑ j∈J(v k ) ∂u j (v k , t) = 0, and all references in this paragraph assume these conditions. In [21], the authors assume that controls and measurements take place at all boundary vertices but one. The authors use an iterative method called "leaf peeling", where the response operator on Ω is used first to determine the data on the edges adjacent to the boundary, and then to determine the response operator associated to a proper subgraph. In [21], the leaf peeling argument includes spectral methods that require knowing R T for all T. In [22], the methods of [21] are extended to the case where masses are placed at internal vertices, see also [23]; however, these methods still require knowledge of R T for all T. Also in [22], it is proven that for a single string of length ℓ with N attached masses and T > 2ℓ, R T 01 is sufficient to solve the inverse problem. In particular, [23] uses a spectral variant of the boundary control method, together with the relationship between the response operator and the connecting operator. In [24,25], a dynamical leaf peeling argument is developed for a tree with no masses and with response operators at all but one boundary point, allowing for the solution of the inverse problem for finite T sufficiently large. An important ingredient in their leaf peeling is determining the response operators associated with subtrees, called "reduced response operators", from the response operator associated to the original tree. In all of these papers, it is assumed that there are no interior measurements. In [26], the iterative methods from [24,25,27] are adapted to a tree with masses placed at internal vertices, with a single control at the root and measurements there and at internal vertices. For other works on quantum graphs, see [1,16,19,[28][29][30][31].
A special feature of the present paper is that we use only one control together with internal observations. This may be useful in physical settings where some or most boundary points are inaccessible. Another potential advantage of the method presented here is that we recover all parameters of the graph, including its topology, from the (|Γ| − 1)-tuple response operator acting on L 2 (0, T). In previous papers, the authors recovered the graph topology from a larger set of measurements: the (|Γ| − 1) × (|Γ| − 1) matrix (boundary) response operator or, equivalently, the (|Γ| − 1) × (|Γ| − 1) Titchmarsh-Weyl matrix function. In [32], the inverse problem on a star graph for the wave equation with general self-adjoint matching conditions was solved using the (|Γ| − 1) × (|Γ| − 1) matrix boundary response operator.
Preliminaries
In what follows, we use the spaces F n , defined in terms of the standard Sobolev spaces H n (R). We define the Heaviside function by H(t) = 1 for t > 0 and H(t) = 0 for t < 0. Then, we define H n ∈ F n as the unique solution to d n H n /dt n = H. Here δ(t) denotes the Dirac delta function supported at t = 0. In this section and those that follow, we drop the superscript T from R T when convenient. Consider a star-shaped graph with edges e 1 , ..., e N . For each j, we identify e j with the interval (0, ℓ j ) and the central vertex with x = 0, see Figure 3. Recall the notation q j = q| e j and u j ( * , t) = u( * , t)| e j . Thus, we consider the system (10)-(15). Let u f solve (10)-(15), and set g j (t) = u f j (0, t). For (11), it is standard that the waves have unit speed of propagation on each interval, so g j (t) = 0 for t < ℓ 1 and all j. It will be useful first to consider the vibrating string on an interval.
Representation of Solution on an Interval and Reduced Response Operator
We adapt a representation of u f (x, t) developed in [27], where only Dirichlet controls and boundary conditions were considered. Fix j ∈ {1, ..., N}. We extend q j to (0, ∞) as follows: first evenly with respect to x = ℓ j , and then periodically. Thus q j (2kℓ j ± x) = q j (x) for all positive integers k.
Define w j to be the solution to the Goursat-type problem associated with the potential q j ; a proof of the solvability of this problem can be found in [33]. Consider the IBVP (17)-(20) on the interval (0, ℓ j ). The solution to (17)-(20) on e j can then be written in the form (21), a sum of reflected-wave terms built from w j and the control. In what follows, we only consider t ≤ T for some finite T, so all sums will be finite.
Let us now change the condition (20) to ũ x (ℓ j , t) = 0. In this case, the solution takes the analogous form (22). To represent the solution of the wave equation on the edge e 1 in a star graph, we must account for the control at x = ℓ 1 . Thus it will also be useful to represent the solution of a wave equation on an interval when the control is on the right end; consider the corresponding IBVP, whose solution v f is given explicitly in (23). We now show that the system (10)-(15) is well-posed. Recall that F 1 was defined in (9), and g j (t) = u f j (0, t).
Theorem 2.
(a) If f ∈ L 2 (0, T), then there exists a unique solution u(x, t) of the system (10)-(15), and the map f ↦ u f is continuous in the corresponding norms.

Proof. On [0, ℓ j ] with j ≥ 2, the wave is generated by the "control" ∂(u f j )(0, t), whereas on [0, ℓ 1 ] the wave is generated by two controls: ∂(u f 1 )(0, t) and f at x = ℓ 1 . Here, p := ∂(u f j )(0, t) is independent of j by (12). We have that u f is given explicitly in terms of p and v f ; note that v f has already been explicitly determined in (23). Thus, by (21), we have an explicit solution for u f if we can solve for p. We now prove the existence, uniqueness, and regularity of P.
By (21) and (26), we obtain (27) for j ≥ 2; for j = 1, by (21), (24), and (26), we obtain (28). We remark that, at the moment, we have not yet solved for either P or g j for any j. Let α = min{ℓ j : j = 1, ..., N}.
We solve for P with an iterative argument using steps of length 2α. The iterations are necessary because the upper limits of the sums in (27) and (28) increase with time, due to reflections of the wave at the various vertices. In what follows, we label by G(t) various terms that we have already solved for, which, by (24), includes v f (0, t). For t ≤ ℓ 1 we have, by unit wave speed, that P(t) = 0. Suppose now t ∈ [ℓ 1 , ℓ 1 + 2α]. Then t − 2ℓ j ≤ ℓ 1 + 2α − 2ℓ j < ℓ 1 , and hence P(t − s) = 0 for s ≥ 2nℓ j , for all j with n ≥ 1. Using (13), and then (27) and (28), we obtain an equation for P on this interval. It is easy to show that this is a Volterra equation of the second kind (VESK), and so it admits a unique solution P with ‖P‖ L 2 (ℓ 1 , ℓ 1 +2α) ≤ ‖F‖ L 2 (0,2α) . Furthermore, by differentiating this equation we get ‖p‖ L 2 (ℓ 1 , ℓ 1 +2α) ≤ ‖f‖ L 2 (0,2α) .
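To illustrate how a Volterra equation of the second kind of this type can be handled numerically in the same step-by-step spirit, here is a minimal time-stepping sketch; the kernel and forcing below are placeholder functions and do not represent the actual terms arising from (27) and (28).

```python
import numpy as np

def solve_vesk(forcing, kernel, t_max=2.0, dt=1e-3):
    """Solve p(t) = F(t) + int_0^t K(t - s) p(s) ds by trapezoidal time stepping."""
    t = np.arange(0.0, t_max + dt, dt)
    p = np.zeros_like(t)
    p[0] = forcing(t[0])
    for n in range(1, len(t)):
        # trapezoid rule over the history already computed, then solve for p[n]
        conv = dt * (0.5 * kernel(t[n]) * p[0]
                     + np.sum(kernel(t[n] - t[1:n]) * p[1:n]))
        p[n] = (forcing(t[n]) + conv) / (1.0 - 0.5 * dt * kernel(0.0))
    return t, p

# Placeholder data: F(t) = t and K(r) = exp(-r) stand in for the true terms.
t, p = solve_vesk(lambda s: s, lambda r: np.exp(-r))
print(p[-1])
```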
Proposition 1.
From R 0,1 one can determine q 1 , ℓ 1 , and N.
Proof. Using (25), it is easy to see that ℓ 1 and N are determined. The proposition now follows easily from (21).
In what follows, we refer to R̃ 0,j (s) as the "reduced response function". For f (t) = δ(t), we denote the solution to the system (10)-(15) by u δ . We also use the following.
This result follows from the proof of Theorem 2, the unit speed of wave propagation, and the properties of wave reflections off x = ℓ j , see [33]. The details are left to the reader.
The following result follows from (22), (25), and Lemma 2. The details of the proof are left to the reader.
Solution of Inverse Problem
Here, we establish some notation. Recall that for v k we list the incident edges as {e k,j : j = 1, ..., Υ k }, where e k,1 is chosen to be the edge lying on the path from γ 0 to v k and the remaining edges are labeled arbitrarily. Now let k 0 be some fixed interior vertex, and let j 0 satisfy 1 < j 0 ≤ Υ k 0 . Denote by Ω j 0 k 0 the subtree growing from v k 0 along e k 0 ,j 0 , with associated boundary set Γ j 0 k 0 . We then define an associated reduced response operator, with associated response function R̃ k 0 ,j 0 (s). Suppose we have determined R̃ k 0 ,j 0 . It would follow from Proposition 1 that one could recover the following data: ℓ j 0 , q j 0 , and Υ k , where v k is the vertex adjacent to v k 0 in Ω j 0 k 0 . In this section we present an iterative method to determine the operator R̃ k 0 ,j 0 from the (|Γ| − 1)-tuple of operators R T , which we know by hypothesis for some T > 2ℓ. An important ingredient is the following generalization to a tree of Corollary 1.
Lemma 3.
Let T > 0, and let R T k,j be associated with (33)-(38) and defined by (7) and (8). The response function for R T k,j has the form (39), where r k,j ∈ F 1 and the sequence {γ n } is positive and strictly increasing. If T is finite, then the sums are finite.
Proof. The proof follows from the proof of Corollary 1, together with the transmission and reflection properties of waves at interior vertices, and reflection properties at boundary vertices.
Fix T > 2ℓ. The rest of this section shows how to recover R̃ k 0 ,j 0 from R T .
Step 1 For the first step, let v k 0 be the vertex adjacent to the root γ 0 , with the associated edge labeled e 1 . By Proposition 1, we can use R T 0,1 to recover Υ k 0 , ℓ 1 , and q 1 .
Step 2
Consider e k 0 ,2 . In Step 2, we show how to solve for R̃ k 0 ,2 , see Figure 5. Since v k 0 is the root of Ω 2 k 0 , the following equation is essentially a restatement of Lemma 1 for trees; the details of its proof are left to the reader.
Since we know ℓ 1 and q 1 , we can solve the wave equation on e 1 with known boundary data. We identify e 1 with the interval (0, ℓ 1 ), with v k 0 corresponding to x = 0. Then u f , restricted to e 1 , solves a Cauchy problem in which we view x as the "time" variable. Since the function R 0,1 (s) is known, we can thus uniquely determine u f (0, t) = u f (v k 0 , t), and p(t) = ∂u f 1 (v k 0 , t) is determined as well. We now show how p and u δ 2 (v k 0 , t) can be used to determine R̃ k 0 ,2 (s). The identity (40) follows from the definition of the response operators for any f ∈ L 2 . In what follows, it is convenient to extend f (t) ∈ L 2 (0, T) by zero for t < 0. By Lemma 1 and by an adaptation of Lemma 2 to general trees, we have the expansions (41) and (42). Here, r k 0 ,2 ∈ F 1 and a(s) ∈ F 1 , and {ξ k } and {β n } are positive and increasing. Clearly a(s), r k 0 ,2 (s), {ψ m }, {θ m }, {φ n }, and {γ n } can all be determined from R 0,1 and R k 0 ,2 , whereas for now r̃ and the sets {α m }, {ξ m } are unknown. Inserting (39), (41), and (42) into (40), we get (43), in which all sums have 1 as the lower limit of summation.

Proof. We mimic an iterative argument from [26]. Differentiating (43) and then matching the delta singularities, we get (44). Since the sequences {γ n }, {ζ l }, {ξ m } are all strictly increasing, clearly we have γ 1 = ζ 1 + ξ 1 , so that φ 1 = α 1 ψ 1 , and thus ξ 1 = γ 1 − ζ 1 and α 1 = φ 1 /ψ 1 . We record that the sets {φ 1 , γ 1 }, {ζ 1 , ψ 1 } determine the set {ξ 1 , α 1 }. We now match the term δ(t − γ 2 ) with its counterpart on the right-hand side of (44). There are three possible cases.
Repeating this procedure as necessary, say for a total of N 2 times, we solve for {ξ 2 , α 2 }, and we record this process in the same way. We must have N 2 finite by (44) and the finiteness of the graph. Iterating this procedure, suppose that for p ∈ N we have solved for {ξ n , α n } with n ≤ p, where N p is chosen to be minimal, and so γ N p = ζ 1 + ξ p . We wish to solve for {ξ p+1 , α p+1 }. We can again distinguish three cases. Case 1: γ (N p +1) ≠ ζ k + ξ j , ∀j ≤ p, ∀k. Note that we know {ξ j } p 1 and {ζ k }, so these inequalities are verifiable. In this case, we must have γ (N p +1) = ζ 1 + ξ p+1 and ψ 1 α p+1 = φ (N p +1) , so we have determined α p+1 and ξ p+1 .
Case 2:
There exist an integer Q and pairs {ζ i n , ξ j n } Q n=1 , with j n ≤ p, such that (45) holds. Note that all the numbers {ζ i n , ξ j n } have been determined, so these equations can all be verified. We can assume that all pairs {ζ i n , ξ j n } satisfying (45) with j n ≤ p are listed. In this case, we have either Case 2i: φ (N p +1) ≠ α j 1 ψ i 1 + ... + α j Q ψ i Q ; it then follows that γ (N p +1) = ζ 1 + ξ p+1 , and we thus solve for ξ p+1 and α p+1 . Or Case 2ii: φ (N p +1) = α j 1 ψ i 1 + ... + α j Q ψ i Q ; it then follows that γ (N p +1) ≠ ζ 1 + ξ p+1 , and we have to repeat this process with γ (N p +2) .
Hence, we can solve for {ξ p : p ≤ L}, {α p : p ≤ L} for any positive integer L given knowledge of R T 0,1 , R T k 0 ,2 for T = T(L) sufficiently large.
It remains to solve for r̃. In what follows, we set r̃(s) = 0 for s < 0. We use G(t) to denote various functions that we have already established to be determined by R 0,1 and R k 0 ,2 . Having already solved for {ξ n , α n }, we can eliminate the Heaviside functions from (43) to obtain, recalling ζ 1 = ℓ 1 , the equation (46). We solve this with an iterative argument. Let α = min l {ζ l+1 − ζ l }. For t < ζ 1 + α, we have for l > 1 that t − ζ l < 0, so r̃(t − ζ l ) = 0. Hence, letting s = t − ζ 1 , we obtain a VESK, which we solve to determine r̃(s) for s < α. Now for t < ζ 1 + 2α, we have for l > 1 that t − ζ l < α, and so those terms in (46) involving t − ζ l can be absorbed into G to again give a VESK, which we solve to determine r̃(s) for s < 2α. Iterating this procedure, we solve for r̃(s) for any finite s.
Step 3 Because R k 0 ,j are determined by assumption for j = 2, ..., Υ k 0 − 1, the functions u f j (v k 0 , t) are determined. In Step 2, we showed that u f 1 (v k 0 , t) is also determined. Hence by (4), u f Υ k 0 (v k 0 , t) is also determined. We can now carry out the argument in Step 2 on the remaining edges e k 0 ,3 , ..., e k 0 ,Υ k 0 incident on v k 0 to determine R̃ k 0 ,j for all j.
Step 4 For each j = 2, ..., Υ k 0 , we use Proposition 1 to find the associated ℓ j and q j , together with the valence of the vertex adjacent to v k 0 . A careful reading of Steps 2 and 3 shows that we can use R T 0,1 and R T k 0 ,j for any T > 2(ℓ 1 + ℓ j ).
Step 5 Let v k 1 , ... be the vertices adjacent to v k 0 , other than γ 0 . We now iterate Steps 2-4 for each of these vertices. Choose, for instance, v k 1 . If it were a boundary vertex, this fact would be determined in Step 4, and then the algorithm would go to the next vertex, which we, for convenience, still label v k 1 . We can thus assume v k 1 is an interior vertex. Let us label an incident edge (other than e 2 := e k 0 ,2 ) as e 3 := e k 1 ,3 , see Figure 6. We wish to determine R̃ k 1 ,3 . Mimicking Step 2, let u δ solve (1)-(6), and let b(t) = ∂u δ 3 (v k 1 , t). We have the following formula, which holds by the definition of the response operators: ∫ 0 t R̃ k 1 ,3 (s) b(t − s) ds = ∫ 0 t R k 1 ,3 (s) δ(t − s) ds.
Of course, R k 1 ,3 (s) is assumed to be known. We determine b as follows. We have, from Step 2, that p(t) = ∂u δ 1 (v k 0 , t) is known. We identify e 2 with the interval (0, ℓ 2 ), with v k 1 corresponding to x = 0.
Since q 2 , ℓ 2 , and R k 0 ,2 are all known, we can thus determine b(t) = y x (0, t). The rest of the argument here is a straightforward adaptation of Steps 2-4 above. The details are left to the reader.
Step 6 Arguing as in Step 5, we determineR k,j for all other vertices adjacent to v k 0 and their associated edges. The details are left to the reader.
Steps above 6 Clearly this procedure can be iterated until all edges of our finite graph have been covered.
Conclusions
In this paper, we applied the ideas of the boundary control and leaf peeling methods to solve an inverse problem on a tree featuring non-standard, delta-prime vertex conditions on the interior. Our method required using only one boundary actuator and one boundary sensor, all other observations being internal. Using the Neumann-to-Dirichlet map (acting from one boundary vertex to one boundary and all internal vertices) we recovered the topology and geometry of the graph together with the coefficients q j of the equations. It would be interesting to see a numerical implementation of our method. It would also be interesting to adapt our methods to quantum graphs with cycles.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 5,989.2 | 2020-11-17T00:00:00.000 | [
"Mathematics"
] |
Automated chart review utilizing natural language processing algorithm for asthma predictive index
Background Thus far, no algorithms have been developed to automatically extract patients who meet Asthma Predictive Index (API) criteria from electronic health records (EHR). Our objective is to develop and validate a natural language processing (NLP) algorithm to identify patients that meet API criteria. Methods This is a cross-sectional study nested in a birth cohort study in Olmsted County, MN. Asthma status ascertained by manual chart review based on API criteria served as the gold standard. NLP-API was developed on a training cohort (n = 87) and validated on a test cohort (n = 427). Criterion validity was measured by sensitivity, specificity, positive predictive value and negative predictive value of the NLP algorithm against manual chart review for asthma status. Construct validity was determined by associations of asthma status defined by NLP-API with known risk factors for asthma. Results Among the eligible 427 subjects of the test cohort, 48% were males and 74% were White. Median age was 5.3 years (interquartile range 3.6–6.8). 35 (8%) had a history of asthma by NLP-API vs. 36 (8%) by abstractor, with 31 by both approaches. NLP-API predicted asthma status with sensitivity 86%, specificity 98%, positive predictive value 88%, and negative predictive value 98%. Asthma status by both NLP and manual chart review was significantly associated with the known asthma risk factors, such as history of allergic rhinitis, eczema, family history of asthma, and maternal history of smoking during pregnancy (p value < 0.05). Maternal smoking [odds ratio: 4.4, 95% confidence interval 1.8–10.7] was associated with asthma status determined by both NLP-API and abstractor, and the effect sizes were similar between the two review methods (4.4 vs. 4.2, respectively). Conclusion NLP-API was able to ascertain asthma status in children by mining the EHR and has the potential to enhance asthma care and research through population management and large-scale studies by identifying children who meet API criteria.
Background
According to a report from the Agency for Healthcare Research and Quality, asthma is one of the five most burdensome diseases in the United States [1]. Asthma is the most common chronic illness in childhood, affecting 4-17% of children in the United States [2] and 2.8-37% of children worldwide depending on the country [3], with significant healthcare, social, and academic burden [4,5]. Despite the availability of evidence-based guidelines for asthma management and effective asthma therapies, there has been virtually no change between 1990 and 2010 in years lived with asthma-related morbidity in the United States [2].
One of the major challenges in the current asthma research is inconsistency in study results reported in genome-wide association studies [6], clinical trials [7,8], and studies addressing heterogeneity of asthma [9,10], making it difficult to apply these results to clinical practice and advancement of the field. Apart from the true biological heterogeneity of asthma, other important sources of variability in the above studies include inconsistent asthma criteria (physician diagnosis vs. subjective determination based on diverse asthma criteria) and ascertainment processes (chart review vs. surveys), which may obscure a better understanding of the biological heterogeneity of asthma. As an example, the literature showed that asthma status according to parental report was associated with significant misclassification bias [11], and studies including younger children (e.g., < 3 years old) for whom an asthma diagnosis rarely occurs may not be able to use physician diagnosis or International Classification of Diseases (ICD) codes for asthma. The latter is very important for many reasons. First, the burden of disease is greatest in preschoolers with a significantly higher proportion of emergency department visits, more hospitalizations, more sleep disturbances, and more limitation of family activities/play than older children [12,13]. Second, the irreversible impairment in lung function may occur during the preschool period, suggesting a window of opportunity to perhaps prevent irreversible damage [14]. It is possible that the repeated and cumulative lung injury caused by various respiratory infections that are frequent at this age may be causal or important intercurrent factors affecting lung growth and asthma persistence. Despite these limitations, these approaches are still frequently used for large-scale asthma studies. While laboratory tests can be considered, they can be impractical for studies based on large cohorts.
To date, manual chart review remains the most accurate method to identify asthma cases regardless of physician diagnosis of asthma, but it becomes a challenge for large-scale studies. In the era of electronic health records (EHR), there is an emerging need to develop a medical informatics approach, such as natural language processing (NLP), that processes free text and classifies asthma status at the patient level.
In the medical community, the Asthma Predictive Index (API) [15] is a validated criterion that can be a potential option for asthma studies and can help reduce the variability described above in future asthma research in children. We recently demonstrated the feasibility of using NLP algorithms for another existing asthma criterion, the Predetermined Asthma Criteria [16][17][18], which were originally developed by Yunginger et al. [19] and have been used extensively in asthma epidemiology research [20,21]. At present, given the potential suitability of API for a retrospective study [22,23], the recommended use of API by the National Asthma Education and Prevention Program guidelines [24], and the unavailability of an NLP algorithm for API, developing and validating an NLP-based API algorithm is worthwhile.
To date, there is no NLP algorithm enabling automated chart review for EHR to ascertain asthma status in children based on API. Therefore, the main aim of this study was to develop and validate an NLP algorithm to identify patients that meet API positive criteria by assessing criterion and construct validity in a retrospective study.
Methods
The study was approved by the institutional review boards at the Mayo Clinic and Olmsted Medical Center, located in Olmsted County, Minnesota.
Study setting
Demographic characteristics of the population of Rochester and Olmsted County were similar to those of the U.S. Caucasian population, with the exception of a higher proportion of the working population of this community being employed in the health care industry [19]. Olmsted County has a few important epidemiological advantages for conducting retrospective studies such as this because medical care is virtually self-contained within the community. In addition, research authorization for using medical records for research purposes is obtained from the patients the first time they ever register with a provider in the community. The rate of granting this authorization is about 95% in Olmsted County [25]. Once this permission is granted, each patient is assigned a unique identifier under the auspices of the Rochester Epidemiology Project, which has been continuously funded by the National Institute of Health (NIH) since 1966 [26]. Using this unique identifier, all clinical diagnoses and events, and detailed information from every interaction among the patients and providers are retrieved from detailed patient-based medical records [26]. As this resource has been electronically available since 1997 (i.e., the inception of the EHR at Mayo Clinic), it enables us to retrieve all asthma-related events and associated free-text information (e.g., symptoms, visits, and medications) electronically to ascertain asthma status based on API [15].
Study design
This is a cross-sectional study nested in a birth cohort study, which was designed to develop and validate an NLP algorithm for ascertaining asthma status by API (NLP-API) using convenience samples. The NLP algorithm was developed on the training cohort and evaluated on an independent test cohort for which asthma status by manual chart review (CW) was already available based on API. Criterion validity was assessed by determining concordance of asthma status by API between NLP-API and manual chart review. Construct validity of NLP-API was assessed by determining the association between asthma status ascertained by NLP algorithms and the known risk factors for asthma.
Study subjects
There were two cohorts enrolled in this study. The first cohort was used to develop the NLP algorithm (i.e., training cohort, n = 87) and the second one was used for validating the results (i.e., test cohort, n = 427). The training cohort was made up of subjects who were all born after the implementation of the EHR at the Mayo Clinic (i.e.1998-2002) [16]. Briefly, the training cohort were children who were enrolled in the Mayo Clinic sick child care program and their parents agreed to participate in a previous study assessing factors associated with parents' care-seeking behavior for mild acute illness of young children. Of the original 115 children, subjects were excluded due to the following reasons: 1) change of research authorization status (n = 3), 2) adopted children (n = 4; one of the major criteria of API is parental history of asthma), and 3) primary care at a non-Mayo site (n = 21; NLP was only available for Mayo EHR during the study period).
The validation part of the study utilized a random sample of the 2002-2006 population-based birth cohort who had been enrolled in a previous asthma study and had medical records mainly at Mayo Clinic, Rochester, Minnesota [27]. Briefly, the original study enrolled 579 subjects comprised of 282 late-preterm infants (34 0/7 to 36 6/7 weeks of gestation) and 297 gender-and birth year-matched term infants (37 0/7 to 40 6/7 weeks of gestation) randomly selected from the 2002-2006 birth cohort born in Olmsted County, Minnesota. In this study, a total of 152 subjects were excluded due to the following reasons: 1) change in research authorization status (n = 17), 2) adopted children (n = 3), and 3) primary care outside Mayo Clinic (n = 132), leaving 427 study subjects for this present study.
Asthma predictive index (API)
Asthma was considered present if frequent wheezing episodes (i.e., two or more wheezing episodes within one year) AND either one of the major criteria or two of the minor criteria were satisfied (Table 1) [15,22,23]; otherwise, study subjects were considered non-asthmatic. An index date of asthma was defined as the earliest date on which the API criteria were met. The operational procedures for applying API in retrospective studies were described in a previous study [23].
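A literal encoding of this decision rule (using criterion names only; the specific thresholds of Table 1, such as the eosinophil cut-off, are not reproduced here) could look like the following sketch.

```python
def meets_api(wheeze_episodes_past_year: int, major: set, minor: set) -> bool:
    """API rule as stated above: at least two wheezing episodes within one year
    AND (at least one major criterion OR at least two minor criteria)."""
    return wheeze_episodes_past_year >= 2 and (len(major) >= 1 or len(minor) >= 2)

# Illustrative calls; criterion names follow the items listed in the Methods.
print(meets_api(3, major={"parental_asthma"}, minor=set()))        # True
print(meets_api(3, major=set(), minor={"allergic_rhinitis"}))      # False (only one minor criterion)
```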
Development of NLP algorithm for API
The overall process for the NLP-API algorithm to ascertain asthma status is depicted in Fig. 1. Our NLP algorithm used three different sources of data, i.e., clinical notes, laboratory data, and patient-provided information. Parents' asthma information (one of the major criteria) was identified from both clinical notes (the family history section of the patient's chart) and patient-provided information, and an eosinophil value was extracted from the laboratory data if it had been tested for any reason. The other items of the API criteria (i.e., eczema, allergic rhinitis, and wheezing ± colds) were extracted from clinical notes using pattern-based rules, assertion status (e.g., non-negated, associated with the patient), and section constraints (e.g., diagnosis section). Then, we developed expert rules implementing the API criteria (Table 1). Our algorithm was built in the open-source NLP pipeline MedTagger (https://sourceforge.net/projects/ohnlp/files/MedTagger/) developed by Mayo Clinic [16]. In this pipeline, there are two basic conceptual blocks to identify API-positive asthmatics: 1) a text processing component, which finds evidence text in EHRs matching specific API criteria, and 2) a patient classification component, which decides the patient's API status based on the available evidence.
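MedTagger itself is a full UIMA-based pipeline, so the following is only a toy illustration of the two conceptual blocks described above (evidence finding with a crude assertion check, feeding a patient-level rule); the patterns and negation handling are simplistic placeholders, not the rules actually deployed in NLP-API.

```python
import re

# Toy evidence finder: keyword patterns plus a crude negation check (placeholders only).
PATTERNS = {
    "wheezing": re.compile(r"\bwheez\w*", re.I),
    "eczema": re.compile(r"\b(eczema|atopic dermatitis)\b", re.I),
    "allergic_rhinitis": re.compile(r"\b(allergic rhinitis|hay fever)\b", re.I),
}
NEGATION = re.compile(r"\b(no|denies|without|negative for)\b[^.]{0,30}$", re.I)

def find_evidence(note: str) -> set:
    """Return non-negated criterion mentions found in a clinical note."""
    found = set()
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(note):
            if not NEGATION.search(note[:match.start()]):  # skip mentions preceded by a negation cue
                found.add(label)
    return found

note = "Mother reports two wheezing episodes this year. No eczema. Allergic rhinitis noted."
print(find_evidence(note))  # contains 'wheezing' and 'allergic_rhinitis' but not 'eczema'
```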
Asthma risk factor variables
These variables were collected during the previous study [27] and utilized in this study to assess construct validity (i.e., the association of asthma risk factors with asthma status by NLP-API vs. manual chart review, for comparison purposes). These included birth weight, small for gestational age, mode of delivery (Cesarean section vs. vaginal delivery), gestational age, a family history of asthma and other atopic conditions such as allergic rhinitis or atopic dermatitis, a history of patient's allergic rhinitis or eczema, maternal smoking during pregnancy, passive smoking exposure after birth (up to the first 6 years of life), and breast-feeding history. As not all asthmatics fulfilled the same criteria, we also assessed the individual items included in API as risk factors for determining construct validity.
Statistical analysis
Performance of NLP-API was assessed for both criterion and construct validity. For criterion validity, we calculated the agreement rate, Kappa index, and validation indices (sensitivity, specificity, positive predictive value, and negative predictive value) for concordance in asthma status between NLP-API and manual chart review as the gold standard. Using logistic regression models, construct validity was assessed by determining the association of asthma status ascertained by NLP-API with the known risk factors for asthma, since NLP-API would be expected to correlate with the known risk factors if the NLP algorithm reflects true asthma. The associations were summarized by odds ratios and their corresponding 95% confidence intervals. All analyses were performed using the JMP statistical software package (Ver 10; SAS Institute, Inc., Cary, NC).
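For the odds ratios, a generic sketch of the logistic-regression computation with statsmodels is shown below on synthetic data (not the cohort); the OR and its 95% CI are obtained by exponentiating the fitted coefficient and its confidence bounds.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
maternal_smoking = rng.integers(0, 2, n)            # synthetic binary risk factor
true_logit = -2.5 + 1.4 * maternal_smoking          # assumed effect, for the toy data only
asthma = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(maternal_smoking.astype(float))
fit = sm.Logit(asthma, X).fit(disp=False)
odds_ratio = np.exp(fit.params[1])
or_ci = np.exp(fit.conf_int()[1])
print(f"OR = {odds_ratio:.2f}, 95% CI = ({or_ci[0]:.2f}, {or_ci[1]:.2f})")
```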
Study subjects
Characteristics of the test cohort are summarized in Table 2. 209 (48%) were male, and 315 (74%) were White. The median age (interquartile range) at the last follow-up date was 5.3 years (3.6, 6.7). 36 (8%) met the API by manual chart review, and 39 (9%) and 102 (24%) had a history of allergic rhinitis and eczema, respectively, by physician diagnosis during the study period.
Concordance in asthma status between NLP-API and manual chart review (criterion validity)
For the test cohort, the Kappa index and agreement rate for asthma status between the NLP algorithm and manual chart review were 0.86 and 97%, respectively. Sensitivity, specificity, positive predictive value, and negative predictive value for the NLP algorithm in predicting asthma status were 86%, 98%, 88%, and 98%, respectively. Overall, these results were similar with regard to gender, ethnicity, and gestational age (Table 3).
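Working backwards from the counts reported in the abstract and Results (427 subjects; 36 API-positive by chart review, 35 by NLP-API, 31 by both), the 2×2 table and the validation indices above can be reproduced to within rounding; the derived cell counts in the sketch below are an inference from those reported totals.

```python
# Reconstructed 2x2 table from the reported counts (inference, not published cell values).
tp = 31                 # positive by both NLP-API and chart review
fp = 35 - 31            # NLP-API positive, chart review negative
fn = 36 - 31            # chart review positive, NLP-API negative
tn = 427 - (tp + fp + fn)

sensitivity = tp / (tp + fn)        # ~0.861
specificity = tn / (tn + fp)        # ~0.990 (reported as 98%)
ppv = tp / (tp + fp)                # ~0.886
npv = tn / (tn + fn)                # ~0.987
observed = (tp + tn) / 427                                              # ~0.979 agreement
expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / 427**2     # chance agreement
kappa = (observed - expected) / (1 - expected)                          # ~0.86
print(sensitivity, specificity, ppv, npv, kappa)
```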
Association of asthma status by NLP-API with the known risk factors (construct validity)
The results for construct validity are summarized in Table 4. A correlation analysis of each of the risk factors with asthma status by NLP and by manual chart review was run on the test cohort independently. Asthma status by both NLP and manual chart review was significantly associated with the known asthma risk factors. For example, children with asthma, compared to those without asthma, had higher odds of having a history of allergic rhinitis, eczema, a family history of asthma, and a maternal history of smoking during pregnancy (p value < 0.05).
For the factors of family history of other atopic conditions, passive smoking exposure, gestational age, birth weight, childcare attendance, and breastfeeding history, the direction and effect sizes were comparable between manual chart review and NLP-API, although the associations were not statistically significant.
Discussion
Our study results suggest that developing an NLP algorithm for API mining from the EHR is feasible, as demonstrated by both criterion and construct validity. Our NLP-API algorithm has the potential to overcome the current challenges of asthma ascertainment in asthma care and research, enabling large-scale asthma studies by identifying children who meet the API, the current asthma guideline-recommended criteria [24]. Our study results show that the NLP-API asthma status was highly correlated with manual chart review, and this concordance was not affected by gender, ethnicity, or gestational age, suggesting criterion validity (Table 3). The study findings suggest an 88% positive predictive value and a 99% negative predictive value for ascertaining asthma. Discrepancies between NLP-API and manual chart review arose in part because 1) the abstractors reviewed parental records for parental history of asthma, whereas NLP used only the child's medical records (e.g., the family history section of the note), and 2) NLP often misinterpreted "cold" in "wheezing without cold", although both NLP and the human abstractor used the pre-defined definition of "cold" [22,23]. Also, the associations of known risk factors for asthma (e.g., maternal smoking during pregnancy) with NLP-API-determined asthma status were similar to those determined by manual chart review, suggesting construct validity. In contrast, the widely used method of ascertaining asthma status by ICD-9 codes showed poor sensitivity, as noted in our previous study: the sensitivity of ICD codes was 31%, whereas the NLP had 97% sensitivity, although different asthma criteria were used [16,28]. Studies based on self-reporting of asthma status, such as questionnaire and survey data, are subject to significant misclassification bias. For example, almost a quarter of parents whose children were admitted to the hospital for asthma did not report a history of asthma in their children [11]. There have been studies ascertaining asthma status with the help of lab tests (e.g., eosinophils) or biomarkers [29], but these tests are impractical when a large number of patients or large study cohorts are involved. Importantly, our study results suggest that the NLP algorithm for API not only ascertains asthma status but also identifies associated individual risk factors, such as a family history of asthma, allergic rhinitis, and atopic dermatitis, which are part of the API [30].
EHRs have been around since the 1960s, when they were introduced as a technique to guide and teach medical professionals about how to handle medical knowledge [31]. In the early twenty-first century, a need was brought forth for standardizing national data and transmitting health information across organizational and regional boundaries [32]. The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 directed the Office of the National Coordinator for Health Information Technology (ONC) to promote the adoption and meaningful use of electronic health records. By 2014, 3 out of 4 (76%) hospitals had adopted at least a basic EHR system [33], and there are growing trends of applying EHRs to clinical and translational research. For example, initiatives such as Electronic Medical Records and Genomics (eMERGE) are facilitating investigators in identifying the characteristics of patients and discovering phenotypes from EHRs [34]. NLP algorithms have been used for automated encoding of text data into structured data [35], extraction of molecular pathways from articles [36], and translation of information from chest radiographs [37], as well as identification of medical complications in postoperative patients from EHRs [38]. Thus, given the growing worldwide deployment of EHR systems and the usefulness of the free text embedded in medical records for capturing text-based events, NLP algorithms will be an important tool to overcome the current challenges of processing large-scale data in disease ascertainment and phenotypic characterization.
Indeed, in our previous studies [16][17][18], we established the application of NLP algorithms for asthma ascertainment using other existing asthma criteria. To our knowledge, this was the first exploration applying NLP algorithms to asthma ascertainment, as other NLP algorithms for asthma were designed to identify physician diagnoses of asthma or asthma guidelines, not to apply existing asthma criteria [39][40][41]. The current study extends our previous work by adding an NLP algorithm for the API. Given the lack of a gold standard to ascertain asthma status, the National Asthma Education and Prevention Program guidelines suggest using the API for asthma management. The API has been used for prospective as well as retrospective studies [22,42]. As EHRs are likely to continue to be used in the clinical research described above, an NLP algorithm for API can be beneficial in the future.
Our study results have several implications for clinical practice, research, and public health. In clinical practice, NLP-API enables clinicians and health care systems to apply population management strategies. Using NLP-API, health care systems may identify children who meet the API during early childhood (e.g., < 3 years old) on a regular basis, improving their access to preventive and therapeutic interventions for asthma, and temporal and geographic trends in the outcomes of these interventions may be assessed and monitored at a population level. The impact of a delay in the diagnosis of asthma on asthma outcomes can also be examined. Allocation of resources could then be guided by this surveillance system. In research, while NLP-API addresses the limitations of the current methods of asthma ascertainment, it is also an innovative approach enabling large-scale clinical studies, minimizing the methodological heterogeneity of asthma ascertainment due to human biases and mistakes in reviewing medical records, especially for a large volume of patients. There was a noteworthy finding in our error analysis examining discrepancies in patients' API status between NLP-API and manual chart review. An independent third reviewer (an allergist) assessed the discrepancies, and we found that the NLP-API was able to capture API-positive patients who were missed by manual chart review, as humans can miss or overlook asthma-related events during the review of large volumes of medical records. Thus, the NLP approach might potentially open an avenue to improve or correct human errors in processing large volumes of data or text review, and this needs to be studied further. Use of a computer-based algorithm for ascertaining asthma is also helpful for public health surveillance, as it allows health care systems to monitor the trends of asthma prevalence and incidence in real time and assess the impact of asthma on serious health outcomes (e.g., susceptibility to serious and common infections, including vaccine-preventable diseases) [21].
The main strength of our study is the epidemiological advantage of conducting retrospective studies in a study setting that is virtually a self-contained health care environment. In addition, under the auspices of the Rochester Epidemiology Project, we were able to capture all inpatient and outpatient asthma-related events for this study from birth to the last follow-up date [26]. Our NLP algorithm has the unique capability to determine the asthma index (inception) date, which helps researchers discern temporality [22]. An additional strength of this study is the unique aspect of the NLP algorithm incorporating free-text data (e.g., asthma symptoms), lab data (e.g., eosinophil count), and structured data (e.g., self-reported responses collected at clinic visits).
Limitations of our study include a retrospective study design with a relatively small sample size; thus, we were not able to fully address the associations between asthma status and certain risk factors such as second-hand smoking exposure and breastfeeding history. Although not statistically significant, NLP-API still showed strong associations in the expected direction for these factors. Another potential limitation is the portability of NLP-API to different EHR and health care systems. Appropriate adjustments to NLP algorithms may be necessary to address the intrinsic heterogeneity of EHRs in order to produce desirable performance of the algorithms at a different study setting. While our previous study demonstrated the portability of an NLP asthma ascertainment tool based on different asthma criteria using EHRs [18,43], the NLP-API results need to be validated in different clinical settings to ensure portability. In addition, there were difficulties in the semantic understanding of complex assertion status in clinical narratives (e.g., identifying asthma-related concepts that are negated, hypothetical, or associated with other family members), resulting in false negatives and false positives. An intrinsic limitation is EHR reliability: data collected from the sources may not represent the complete history of a patient's conditions (e.g., parents' asthma collected from the family history section and patient-provided information) and may thus affect the final asthma ascertainment. Our earlier work based on a prospective cohort study showed a close correlation between medical events (mild acute illnesses) in the EMR and those captured by prospective follow-up [44,45]. Lastly, our cohorts used for training (Mayo Clinic sick child care cohort) and testing (preterm-weighted cohort), drawn from a single center, may not represent the general pediatric population.
Conclusion
In conclusion, our NLP-API algorithm may prove valuable not only in the research realm, where it can aid large-scale clinical studies, but also in helping the clinician as a population management tool, as well as serving as a method of surveillance for the public health sector.
Funding
This work was supported by a National Institutes of Health (NIH)-funded R01 grant (R01 HL126667). This study also utilized the resources of the Rochester Epidemiology Project, which is supported by the National Institute on Aging of the National Institutes of Health under Award Number R01 AG034676.
Availability of data and materials
No additional data are available. Data will not be shared, in accordance with the institutional IRB policy under the current IRB approval for the study protocol.
Authors' contributions
YJ had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. HK: contributed to the acquisition, analysis, and interpretation of the data; drafting the manuscript and revising it for important intellectual content; and approving the final manuscript. SS: contributed to the study concept and design; acquisition, analysis, and interpretation of the data; drafting the manuscript and revising it for important intellectual content; and approving the final manuscript. CW: contributed to the study concept and design; acquisition, analysis, and interpretation of the data; revising the manuscript for important intellectual content; and approving the final version. ER: contributed to the study concept and design; acquisition, analysis, and interpretation of the data; revising the manuscript for important intellectual content; and approving the final version. MP: contributed to the study concept and design; revising the manuscript for important intellectual content; and approving the final version. KB: contributed to the study concept and design; revising the manuscript for important intellectual content; and approving the final version. HK: contributed to the study concept and design; revising the manuscript for important intellectual content; and approving the final version. IC: contributed to the study concept and design; revising the manuscript for important intellectual content; and approving the final version. JC: contributed to the study concept and design; revising the manuscript for important intellectual content; and approving the final version. GV: contributed to the acquisition of data; revising the manuscript for important intellectual content; and approving the final version. HL: contributed to the study concept and design; analysis and interpretation of the data; revising the manuscript for important intellectual content; and approving the final version. All authors read and approved the final manuscript.
Ethics approval and consent to participate
This study was approved by the institutional review boards at the Mayo Clinic (14-009934) and Olmsted Medical Center (008-OMC-15). General authorization to review medical records for research in accordance with Minnesota Statute was checked, and no medical records were reviewed for any child for whom the general research authorization had been refused. As some of the subjects have left the community, it is practically impossible to locate all subjects who are eligible for the study and obtain consent for this proposed study. Therefore, the need for consent was waived by the institutional review boards at the two institutions mentioned above.
Consent for publication
Not applicable | 6,094.2 | 2018-02-13T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Chemical Process Fault Diagnosis Method Based on Deep Learning
To address the problems that traditional fault diagnosis methods for chemical processes under big data rely too heavily on expert experience and that fault features are difficult to distinguish, a deep learning-based fault diagnosis method is proposed that combines a convolutional neural network (CNN), long short-term memory (LSTM) and an attention mechanism (AM). In this method, the spatial sequence features of the input signal are extracted adaptively by the CNN, while the LSTM extracts the time-series features of the signal. Finally, the model performance is enhanced by introducing the attention mechanism and using a SoftMax layer as the classifier for fault diagnosis, so that the model can attend to the important features of the faults under the interference of noise. Simulation validation of the method is performed using the TE chemical process data set, and it is demonstrated that the method can be used for chemical process fault diagnosis studies. Finally, compared with other fault diagnosis methods, the method is more accurate and shows a certain superiority.
Introduction
Since the era of big data, the use of artificial intelligence algorithms has become more and more popular in research. For most scholars, deep learning algorithms have become the key to solving problems in fault diagnosis research. Deep learning is an important direction of machine learning research. Machine learning relies on algorithms to train data and make decisions on tasks [1]; deep learning uses deep networks to represent the features of data. However, traditional machine learning methods for fault diagnosis of chemical processes rely on expert knowledge and experience. In contrast, deep learning excels at feature extraction and classification, as well as feature function mapping: it automatically learns and extracts features directly, layer by layer, making it more suitable for situations where the structural properties of the equipment are unclear, reducing the use of human resources, and allowing direct manipulation of equipment data without an empirical background. Secondly, deep learning can analyze the equipment state changes embedded in data changes; such complex correlations are more difficult for general algorithms to learn, especially when facing massive data with high dimensionality and multiple types. Therefore, deep learning has broad application prospects in fault diagnosis: not only can it provide a variety of more effective diagnostic methods, it can also greatly improve the accuracy of troubleshooting.
At present, there are many advanced research results for deep learning methods in the field of fault diagnosis. Combining the existing literature and comparing multiple deep learning models, the research object of this paper is best served by using a CNN as the basic fault diagnosis model and introducing a long short-term memory network and an attention module on this basis, so that the model can complete the fault diagnosis task more accurately [6].
Convolutional Neural Network (CNN)
CNNs are not radically different from traditional neural networks, since both are layered networks. The difference is that the function and form of the layers of a convolutional neural network have changed; it can be regarded as an improvement of the traditional neural network. CNNs can reduce the number of parameters of deep learning models by sharing weights, establishing local receptive fields, and adding pooling layers, thus reducing the memory occupied by the models and alleviating overfitting. Equation (1) is the calculation of the convolution layer:

$$x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big) \qquad (1)$$

where $x_j^l$ is the j-th channel in layer l, f is the activation function, $M_j$ is the set of input channels used to calculate the j-th channel in layer l, $x_i^{l-1}$ is the i-th channel in layer l-1, * is the convolution operator, $k_{ij}^l$ is the convolution kernel vector, and $b_j^l$ is the bias of the j-th channel in layer l [10].
The pooling layer, also called the convergence layer by some scholars, is essentially a downsampling process. With pooling layers, deep learning models are simplified, computation speed is increased, and the robustness of target features is enhanced. Therefore, the main role of the pooling layer is to extract target features and avoid overfitting by reducing parameters and computation. Equation (2) calculates maximum pooling and equation (3) calculates average pooling:

$$p_i^l(j) = \max_{(j-1)S < t \le jS} a_i^l(t) \qquad (2)$$

$$p_i^l(j) = \frac{1}{S} \sum_{t=(j-1)S+1}^{jS} a_i^l(t) \qquad (3)$$

where $a_i^l(t)$ is the value of the t-th neuron in the i-th channel of layer l, S is the size of the pooling kernel, and $p_i^l(j)$ is the j-th pooled output in the i-th channel of layer l.
The fully connected layer follows the convolutional and pooling layers, and its neurons are usually connected to all the neurons in the pooling layer, transforming all the feature matrices of the pooling layer into a one-dimensional feature vector.
Long short-term memory (LSTM)
LSTM can be considered an improved RNN because it solves the problem that RNNs cannot cope with long-range dependencies. LSTM controls the transmission state through special gating units, which decide which information in the input signal should be retained and which should be forgotten. Therefore, LSTM is a good choice for tasks that require "long-term memory", such as fault diagnosis [8].
There are three main gates within the LSTM. The forgetting gate selectively forgets the input coming from the previous node; the input gate controls the input at this stage and selectively "remembers" it; and the value of the current state output is determined by the output gate. The forgetting gate is calculated as equation (4), the input gate as equation (5), and the output gate as equation (6) [7]:

$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \qquad (4)$$

$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \qquad (5)$$

$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \qquad (6)$$

where $\sigma$ is the sigmoid activation function, $h_{t-1}$ is the hidden layer state vector at time t-1, $x_t$ is the input vector at time t, b is the bias of the corresponding gate, and W is the weight of the corresponding gate [5].
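As a small numerical illustration of equations (4)-(6), the NumPy fragment below evaluates one forget gate with made-up dimensions (hidden size 4, input size 3); the input and output gates have the same form with their own weights and biases.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
h_prev = rng.normal(size=4)             # hidden state h_{t-1}
x_t = rng.normal(size=3)                # input vector at time t
concat = np.concatenate([h_prev, x_t])  # [h_{t-1}, x_t]

W_f = rng.normal(size=(4, 7))           # forget gate weights (hypothetical)
b_f = np.zeros(4)                       # forget gate bias
f_t = sigmoid(W_f @ concat + b_f)       # equation (4)
```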
Attention Mechanism
The purpose of the attention mechanism is to highlight the features of interest by analyzing the raw data and discovering the correlations within it. When the attention mechanism is used for fault diagnosis tasks, it allows the model to focus on certain specific information, thus ignoring irrelevant noise and improving the performance of the model so it can better accomplish the task [2].
Fault diagnosis model
Traditional fault diagnosis methods have certain limitations, and results obtained by fault diagnosis using a CNN alone carry certain errors. Considering the superior performance of LSTM in handling fault diagnosis tasks and the good effect of the attention mechanism in emphasizing data features [3], the fault diagnosis model used in this paper is shown in Figure 1. The signal is first fed into a convolutional neural network; the spatial features of the signal are mined through the maximum pooling operation and the adaptive feature extraction of the convolutional network itself, and the result is fed into the LSTM network, which extracts the time-series features of the data [4]. The degree of correlation between different variables is then captured by the attention mechanism and used to distinguish noise from the features to be retained. In the end, fault diagnosis is performed by a softmax function. The fault diagnosis flow chart is shown in Figure 2.
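A compact Keras sketch of such a CNN-LSTM-AM pipeline is given below. It illustrates the described structure rather than reproducing the authors' exact network: the layer widths and the simple soft-attention pooling are assumptions, while the input shape of 20 time steps by 50 variables and the 7 classes follow the data description in the experimental section.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cnn_lstm_am(time_steps=20, n_vars=50, n_classes=7):
    inp = layers.Input(shape=(time_steps, n_vars))
    # CNN block: adaptive extraction of cross-variable (spatial) features
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(pool_size=2)(x)
    # LSTM block: time-series features, full sequence kept for attention
    x = layers.LSTM(64, return_sequences=True)(x)
    # Attention block: score each time step, pool by the learned weights
    scores = layers.Dense(1, activation="tanh")(x)   # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(
        lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
    out = layers.Dense(n_classes, activation="softmax")(context)  # classifier
    return Model(inp, out)

model = build_cnn_lstm_am()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```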
Experimental verification
Deep learning methods are differentiated based on whether labels are required. When sufficient data and labels are available, supervised learning-based methods are usually used, in which a model with sufficient performance can be trained to separate normal data from each fault's data to achieve fault detection and diagnosis. When data labels are insufficient or difficult to obtain, unsupervised learning-based methods are usually used, which can perform high-dimensional feature representation of data with unknown labels. For fault detection tasks, data operating under normal conditions are usually modeled to isolate deviations from the normal state. For fault diagnosis tasks, unsupervised learning is used for feature representation to represent data information in an unsupervised training manner, and the classification of faults is accomplished by supervised training. In this paper, we will use the TE process dataset for fault diagnosis using supervised learning methods.
Data set
Data is the basis of deep learning methods, and a realistic fault diagnosis model first needs actual, representative chemical process data. In practical research, it is difficult to obtain real chemical process data because it often takes a long time to collect an amount of industrial data sufficient for fault diagnosis, and the data sharing problem has not yet been solved. Therefore, scholars have used simulation to obtain chemical process data by adding different disturbing factors to simulate the types of faults that may occur in real industries. Considering the variety of industrial process datasets used in existing studies, the Tennessee Eastman (TE) chemical process dataset is chosen for this paper based on what is needed for the fault diagnosis task [9].
Simulation Verification
In this paper, the TE process simulation model is run under different fault states, and the data required by the experiment are collected. In order not to lose generality, six fault types are selected. The specific fault types, given in Table 1, are Faults 1, 2, 5, 9, 11 and 18, respectively. Faults 1, 2 and 5 are step faults, Faults 9 and 11 are random faults, and Fault 18 is unknown. Together with the normal operating condition, seven classes are used for TE process fault diagnosis research and verification.
During the TE process simulation run, each category of the training set was run for 45 hours, and 800 samples were collected for each category; each category of the test set was run for 24 hours, and 480 samples were collected per category. The time step was set to 20, 3 variables with constant values were removed, and the remaining 50 variables were selected to make the data set. The training set samples have the form 280×20×50, and the test set samples have the form 168×20×50. The recognition accuracy of the CNN-LSTM-AM neural network on the TE data set is shown in Figure 3, and the specific parameters of the network used for this deep learning model are given in Table 2. According to the experimental analysis, the recognition accuracy on the test set stabilized at 97.67% under the optimal parameter settings in Table 2, and the training time was 36.3 minutes. In order to show that this deep learning algorithm is superior to other fault diagnosis methods, its diagnosis results are compared with those of other methods in Figure 4. From Figure 4, it can be seen that the fault diagnosis method used in this paper has the highest accuracy rate.
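The exact slicing used to form these 20×50 windows is not specified, so the non-overlapping scheme below is an assumption; it shows one plausible way to shape a simulated TE run for the network.

```python
# Hypothetical sketch: slicing a TE process run (rows = time samples,
# columns = 53 recorded variables) into windows of 20 time steps over the
# 50 variables kept after removing 3 constant-valued ones.
import numpy as np

def make_windows(run: np.ndarray, keep_cols: list, steps: int = 20) -> np.ndarray:
    data = run[:, keep_cols]                 # drop constant-valued variables
    n = (data.shape[0] // steps) * steps     # trim to a multiple of the window
    return data[:n].reshape(-1, steps, len(keep_cols))

run = np.random.rand(800, 53)                # stand-in for one simulated category
keep = [c for c in range(53) if c not in (0, 1, 2)]  # assumed constant columns
windows = make_windows(run, keep)            # shape: (40, 20, 50)
```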
Table 1. Fault type

Fault number | Fault description | Type
1 | The content of component B remains constant while the feed flow ratio of component A to component C changes | Step
| 2,501.8 | 2023-11-01T00:00:00.000 | [
"Chemistry",
"Computer Science",
"Engineering"
] |
SOCIO-ECONOMIC DETERMINANTS OF HOME GARDENING PRACTICES AMONG HOUSEHOLDS IN UNIVERSITY OF NIGERIA COMMUNITY: HECKMAN DOUBLE STAGE SELECTION APPROACH
The study identified the different food crops, fruits and vegetables found around homes, the constraints on home garden practice, and the socio-economic factors influencing the contribution of home gardens to households' food consumption. Primary data were collected for the study. A two-stage random sampling procedure was used to select 80 respondents. The data collected were analyzed using descriptive statistics, such as percentages and means, and the Heckman sample selection model. The results show that the food crops, fruits and vegetables mainly planted by respondents include maize (82.5%), mango (50.0%) and fluted pumpkin leaf (81.2%), which are used for purposes such as food, medicine and ornament. The results of the Heckman two-stage analysis show that, in the first stage, marital status (-1.7912) and female household size (0.3748) are statistically significant at the 1% probability level, while income (4.6e-06) is statistically significant at the 5% probability level for home gardening practice. In the second stage, experience in home gardening (1.1089) is statistically significant at the 1% probability level for the contribution of home gardening to households' food consumption. The study revealed that home garden practice is constrained by factors such as the high cost of inputs, inadequate access to water, and pests and diseases. The government and concerned agencies such as NGOs should provide and subsidize inputs promptly for households as incentives to increase their home garden practice. The study also recommends the institutionalization of those socioeconomic factors that promote home gardening practice.
INTRODUCTION
There is a problem of hunger and malnutrition faced by people living in developing countries under substandard living conditions (Galhena et al., 2013). According to Galhena et al. (2013), over half a billion people in the world face food insecurity, and the global population expected to increase to 9 billion by 2050 poses a further serious danger to world food security. To meet the world food security proposition, food production is estimated to need to increase by 70% to meet the average caloric requirement of the world's population by the year 2050.
Hence, there will be a continuous need to increase food production. Thus, a number of different strategies to increase food production and food security are needed (Marsh, 1998; Idrisa et al., 2008), and home gardening is one of the suggested strategies.
Decades ago, small plots of land near houses were already used by families as home gardens, forming part of the household's food system (Okvat and Zautra, 2011; Baiyegunhi, 2015). A home garden can be described as owned, rented or borrowed land, either on the same property as the residence or on adjacent land such as a vacant lot (Gray et al., 2014; Taylor and Lovell, 2014). A prominent structural characteristic of the home garden is the great diversity of species, with many life forms varying from fruit crops (e.g., banana, plantain, mango, coconut, oil palm), to food crops (e.g., cassava, sweet potatoes, cocoyam, yam) and vegetables (Zick et al., 2013; Ibrahim et al., 2019). Besides, home gardening helps to increase food production. This is encapsulated in the year-round production of food and a wide range of other products such as fuel wood, fodder, spices, medicinal plants and ornamentals (Wang and MacMillan, 2013; Tamiru et al., 2016). It enhances food and nutritional security in many socioeconomic and political situations. It also improves family health and human capacity, empowers women, and preserves indigenous knowledge and culture (Hawkins et al., 2013; Scott et al., 2014). Owing to the role of the home garden in food security, households have continued to practice it year after year. Households find home gardens important for the provision of varied and nutritive foods that meet the household's food and nutrition security, the improvement of health (provision of medicinal plants), income generation as parts of the produce such as vegetables and fruits are offered for sale, shelter, climate regulation, and shade (Guitart et al., 2012).
Consequently, home gardening has been found to be an important complementary source of food apart from main farms, contributing to households' food security and livelihoods. It remains the food source closest to households and the most readily available (Reyes-García et al., 2012; Baiyegunhi, 2015). Reyes-García et al. (2012) argued that home gardening is efficient in cash and energy flow. Thus, home gardening helps dissipate concerns about hikes in food prices, the destructive impact of agricultural technologies on the environment, as well as the health consequences of pesticides on food (Poulsen et al., 2014; Lin et al., 2017).
In Nigeria, many societies have traditionally simulated forest conditions in their farms and gardens in order to obtain the beneficial effects of forest structures, particularly in urban cities. According to Reyes-García et al. (2012), home gardening focuses mainly on edible crops. It provides easy day-to-day access to an assortment of fresh and nutritious foods for the household; accordingly, such homes obtain more than 50% of their vegetables, fruits, and tubers from their garden (Tamiru et al., 2016). Galhena et al. (2013) reported that home gardening has been tested and proved locally to be a good, widely accepted strategy to help solve the problem of food insecurity, alleviate hunger and increase food production, particularly in many developing countries, Nigeria included.
Furthermore, some individuals who possess home gardens leave them unattended, feeling there is no need to practice home gardening because they lack information on its benefits (FAO, 2004; Reyes-García et al., 2014). FAO (2004) also reported that some individuals who practice home gardening do it to pass the time and do not give the garden the attention it requires, thus failing to obtain the maximum benefits from it. Hence, this study aims to fill these gaps in knowledge. Based on the foregoing, the study identified the different food crops, fruits, and vegetables cultivated in home gardens and also examined the socio-economic factors influencing home gardening practice.
Study Area
The study was carried out at the University of Nigeria, Nsukka, because home gardening is widely practiced by University staff, particularly those who live in the staff quarters, owing to the peculiarity of the household composition. There is enough space around the houses to practice home gardening; residents grow different kinds of food crops, vegetables, and fruit trees around the houses. The University is located on 871 ha of hilly savannah in the town of Nsukka, about 80 km north of Enugu, and enjoys a very pleasant and healthy climate (NPC, 2006). Additionally, 209 hectares of arable land are available for an experimental agricultural farm and 207 ha for staff housing development on the campus. The University has 530 house units for senior staff and 62 units for junior staff (University of Nigeria Housing Unit, 2019).
Sample and Sampling Procedure
Simple random sampling was used to select eight staff streets from the 18 staff streets within the campus which include Ezenweze, Cartwright, Ikejiani, Fulton Avenue, Eni-Njoku, Umukanka, Alvan Ikoku and Ezeala Streets. The streets have almost equal number of residential houses. Consequently, 10 households were randomly selected from each street to make 80 households for the study.
Data Collection
The instrument used for data collection was a structured questionnaire. Variables measured include: the different food crops and vegetables planted in the home garden, the household's perception of home garden contributions to household food consumption, and the constraints on home garden practice.
Also, information on socio-economic characteristics such as age, gender, marital status and household size was collected.
Data Analysis
Data were analyzed using both descriptive statistics (i.e., mean and percentages) and inferential statistics (i.e., Heckman selection model). The model for inferential statistics is specified below.
Model Specification

Heckman sample selection model
The Heckman model combines a probit model and a linear regression model in a single framework. The Heckman model accounts for households in the sample that do not practice home gardening: it truncates the households that do not practice home gardening, so that the contribution of home gardening to household food consumption is assessed over the households that practiced home gardening (the positive observations). Thus, the Heckman sample selection model is composed of the continuous component $Y_2 | u = 1$ and the discrete component $Y_1$. Suppose that the regression model of primary interest is

$$Y_{2i}^* = X_i'\beta + \mu_i \qquad (1)$$

However, due to a certain selection mechanism

$$u_i^* = Z_i'\gamma + v_i \qquad (2)$$

we observe only $N_1$ out of N observations, namely those for which $u_i^* > 0$:

$$u_i = \mathbb{1}(u_i^* > 0) \qquad (3)$$

$$Y_{2i} = Y_{2i}^* \, u_i \qquad (4)$$

The selection equation is estimated as a probit model, where Y1 is the binary outcome (1 if the household practices home gardening, 0 otherwise) and Y2 is the dependent variable (the household's proportion of food consumption from home gardening per month on a scale of 1 to 10). The respondents were asked to score the contribution of home garden output to their household food consumption on a scale of 1 to 10; the dependent variable was measured in percentage, so the scale was converted to percentage. Then X1, ..., X9 are the independent/explanatory variables capturing the socio-economic factors that affect home gardens' contribution to household food consumption: X1 is the age of the respondent (years), X2 is the sex of the respondent (male = 1, female = 0), X3 is male household size (number), X4 is female household size (number), X5 is income (Naira), X6 is marital status (married = 1, otherwise = 0), X7 is years of formal education (years), X8 is garden size (ha), X9 is years of practice of home gardening (years), and µ is the error term.

Table 1 shows that the common food crops grown in home gardens were maize, cassava, yam, potato, tomato, melon, cocoyam, cucumber, ginger, three-leaf yam, okro, garden egg, black beans, aloe vera and pepper (Adebisi et al., 2019). The majority (82.5%) of the respondents grew maize in their garden. The second major food crop grown by the respondents was cassava, indicated by 66.2% of the respondents. Also, cocoyam and yam were grown by 23.8% and 26.2% of the respondents, respectively. Okro was grown by 17.5% of the respondents. Tomato and garden egg were grown by 16.2% and 15.0% of the respondents, respectively. However, cucumber (1.2%), ginger (1.2%), three-leaf yam (3.8%) and aloe vera (1.2%) were the least grown food crops. All the crops grown are used for household food consumption except aloe vera, which is used for medicinal purposes.

Table 2 shows the different fruits present in home gardens. About 27.5% of the respondents had pawpaw in their garden, 50.0% had mango, 13.5% had cashew, and 30.0% had avocado, while plantain was grown by 35.0% of the respondents; all were used for food. Also, 33.8% of the respondents had orange. Fruits like soursop (7.5%), moringa (7.5%), coconut (1.2%) and African cherry (2.5%) were not common in home gardens. Most of the fruits were used for food, except a few like mango and avocado, which were used for both food and medicinal purposes. Moringa and coconut were used mainly for medicinal purposes.

Table 3 shows that the majority (81.2%) of the respondents plant fluted pumpkin, popularly known as "ugu", and they all used it for food; about 2.5% of the respondents also used it as medicine. The second vegetable most widely grown by respondents was green, grown by 32.5% and also used for food.
Bitter leaf was grown by 17.5% of the respondents and also used for food. Scent leaf was grown by 16.2% of the respondents, all of whom used it for food, although 2.5% of the respondents also used it as medicine. African rosewood leaf, popularly known as "oha", was grown by 6.2% of the respondents and used as food too. Jute leaves (ewedu), wild spinach (utazi), bush buck (ukazi) and lemon grass were grown by 2.5%, 1.2%, 1.2% and 1.2% of the respondents, respectively. These are all used for food except lemon grass, which was used as medicine.
Food Crops in Home Gardens and Their Uses
The results of the study show that different food crops such as cassava, cocoyam, sweet potatoes and yam, to mention a few, are planted in home gardens. This is supported by the finding of Zick et al. (2013) that edible food crops are the major component of home gardens. Likewise, the result is in accordance with the findings of Wang and MacMillan (2013) and Tamiru et al. (2016) that most produce from home gardens is used for food, medicine and ornament. Also, Reyes-García et al. (2012) support the finding that vegetables are part of the home garden composition.
Influence of Socioeconomic Factors on Home Gardening Practice
The Heckman sample selection model results presented in Table 4 capture both home gardening practice and the proportion of household food consumption coming from the home garden. The model is a good fit, with a significant chi2 = 28.89 (Prob > chi2 = 0.0007).
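A two-step estimation along these lines can be sketched with statsmodels; this is an illustration under assumed variable names and synthetic data, not the exact specification estimated for Table 4.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Synthetic stand-in for the survey data (hypothetical columns and values)
rng = np.random.default_rng(1)
n = 80
df = pd.DataFrame({
    "age": rng.integers(25, 65, n), "sex": rng.integers(0, 2, n),
    "male_hh_size": rng.integers(0, 5, n), "female_hh_size": rng.integers(0, 5, n),
    "income": rng.normal(50, 10, n),          # in thousands of Naira (assumed)
    "married": rng.integers(0, 2, n), "education": rng.integers(0, 16, n),
    "garden_size": rng.uniform(0.01, 0.5, n), "experience": rng.integers(0, 20, n),
})
df["practices_gardening"] = (rng.random(n) < 0.6).astype(int)
df["food_share"] = rng.uniform(0, 100, n)

# First stage: probit for the decision to practice home gardening
sel_X = sm.add_constant(df[["age", "sex", "male_hh_size", "female_hh_size",
                            "income", "married", "education"]])
probit = sm.Probit(df["practices_gardening"], sel_X).fit()

# Inverse Mills ratio from the first-stage linear index
xb = probit.fittedvalues
imr = (norm.pdf(xb) / norm.cdf(xb)).rename("imr")

# Second stage: OLS on gardening households only; IMR corrects selection bias
sel = df["practices_gardening"] == 1
out_X = sm.add_constant(pd.concat(
    [df.loc[sel, ["garden_size", "experience", "income"]], imr[sel]], axis=1))
ols = sm.OLS(df.loc[sel, "food_share"], out_X).fit()
print(ols.summary())
```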
First, marital status (-1.7912) is statistically significant and negatively related to home gardening practice at the 1% probability level. This is against the a-priori expectation and could be interpreted to mean that married households have less time, because major shares of their time are allocated to official jobs, child care and other home duties. Marital status of the garden caretaker was reported to be among the variables with a significant positive influence on home garden participation among some households in the Peruvian Amazon (Coomes and Perrault, 2008), which is contrary to our findings. Female household size (0.3748) significantly and positively influenced the household's home gardening practice at the 1% probability level. This means a household dominated by females is more likely to be involved in home gardening, because women are good home managers. The reason may be that they have more time to devote to home garden practice, as most females are not the breadwinners of the family. This is in line with the findings of Schreinemachers et al. (2016) that most women manage their home garden, and in accordance with the a-priori expectation that women find the resources from home gardening useful and supportive. Likewise, Zick et al. (2013) noted that women may find resources from home gardening interesting because it helps in improving household nutrition; indeed, women are households' nutrition managers (Wang and Glicksman, 2013). The income of the household head (4.60e-06) is statistically significant (P < 0.05) and positively influences home garden practice, showing that a household with high income will likely invest part of it in the home garden. This is consistent with the findings of Shupp et al. (2015) that income influences home gardening participation. Experience in home gardening (1.1089) is significant (P < 0.001) and positively related to the proportion of household food coming from home gardening. This is in accordance with the a-priori expectation that households with more experience will have a greater variety and diversity of foods coming from gardening. Note: there was no coefficient for "gender" under home gardening practice because it was removed to avoid imbalances in the variables of the two-stage model in order for it to run. ** variables significant at the 5% probability level; *** variables significant at the 1% probability level.

Table 5 shows that pest and disease attack (M = 4.26) was the most serious constraint on home garden practice. This may be attributed to the prevalence of pests and diseases of plants and animals in the study area. The next major constraint observed was the high cost of inputs (M = 3.74): inputs such as fertilizers, seedlings and implements are very expensive to purchase. Also, inadequate access to water (M = 3.65) posed a major constraint, as the results show that water was not adequate for cultivation. Besides the major constraints, other serious constraints limiting home garden practice in the study area were: limited availability of land for farming (3.10), poor access to information about climate change (3.18), postharvest losses (3.48), inadequate storage facilities (3.36), low prices of agricultural commodities (3.33), poor markets for agricultural commodities (3.25), shortage of family or hired labour (3.58), and poor soil fertility and soil erosion (3.05). These agree with the findings of Shupp et al. (2015) and Lin et al. (2017), who also reported similar factors as barriers to home garden practice. From the responses of interviewees, these factors are considered very serious, and if nothing is done about them, home garden practice will be limited and its benefits automatically reduced.
CONCLUSION/RECOMMENDATIONS
Although tropical home gardens, especially those in Africa, have not received enough attention from scientists and researchers, they continue to play a vital role in the livelihoods of many households. They are expected to become even more important and significant to marginal people as the population continues to rise. The study revealed the importance of home gardening to household food consumption. The study showed the different components commonly found in home gardens: vegetables, including fluted pumpkin, scent leaf, green, bitter leaf and water leaf; arable crops, consisting of maize, cassava, yam, tomato, cocoyam, okro and pepper; and fruits, consisting of pawpaw, mango, avocado pear, plantain, orange and oil palm, which are commonly grown by households for food, medicine and ornament. Likewise, important socioeconomic factors that promote home gardening were identified: marital status, female household size, experience in home gardening and income. In addition, the major factors constraining home gardening are the high cost of inputs, pest and disease attack, inadequate access to water, limited availability of land for farming, and poor soil fertility and soil erosion. Therefore, the study recommends the institutionalization of those socioeconomic factors that promote home gardening practice. As the study revealed that home garden practice is constrained by factors such as the high cost of inputs, inadequate access to water, and pests and diseases, the government and concerned agencies such as NGOs should provide and subsidize inputs promptly for households as incentives to increase their home garden practice. | 4,152.4 | 2020-07-22T00:00:00.000 | [
"Economics"
] |
Estimation of tertiary dentin thickness on pulp capping treatment with digital image processing technology
Received Oct 17, 2018; Revised Aug 17, 2019; Accepted Aug 30, 2019. Dentists usually observe tertiary dentin formation after pulp capping treatment by visually comparing periapical radiographs before and after treatment. However, many dentists find it difficult to observe tertiary dentin, and they cannot measure its thickness exactly. The aim of this study is to assist dentists in measuring the area of tertiary dentin and calculating dentin formation using b-spline image processing. Dental radiographs of 38 pulp capping patients at the Dental Hospital, Universitas Muhammadiyah Yogyakarta, Indonesia, were used. Each patient visited the dental hospital 3 times: first, when the pulp capping material was applied; second, after one week of treatment and temporary restoration; and third, after more than one month, with composite as the final restoration. A radiograph was taken at every visit. The dentist placed dots on the patient's radiograph; the dots were combined and processed with digital image processing, and the b-spline method turned the dots into one area. After the calculation, the dentist can see whether there was dentin formation, which is one of the indicators of treatment success. The dentist gets a better view for measuring dentin formation through the computed area value of the tertiary dentin thickness. We compared the program's calculation using the b-spline method with visual observation by the dentist. This study indicates that the thickness of tertiary dentin can be measured by this program with an accuracy of 94.2%. Therefore, dentists can make tertiary dentin thickness predictions from a patient's radiograph.
INTRODUCTION
According to the Research on Fundamental Health of the Ministry of Health, Republic of Indonesia, in 2007 the prevalence of caries was 72% of Indonesia's population, and 45.5% of these were active caries that had not been treated. In 2013, dental caries in children aged 5-9 years was 28.9%, an increase from 21.6% in 2007 [1,2]. Various ways of treating cavities have been developed, ranging from treatment and fillings to extraction of the tooth. Pulp capping is an endodontic treatment that maintains pulp vitality by applying pulp capping material to stimulate tertiary dentin formation. Much pulp capping research has been done to improve the pulp capping method [3][4][5][6]. Many of these studies evaluate pulp capping treatment manually by observing the periapical radiograph to assess the condition of the filling, whether the restoration is hermetic, and the thickness of tertiary dentin. To observe tertiary dentin thickness, the dentist evaluates visually by comparing radiographs before and after pulp capping treatment and concludes whether tertiary dentin has formed after treatment, which is one indication of successful pulp capping. In addition to this qualitative information, quantitative data on dentin thickness is also indispensable as supporting data for the next treatment. Some researchers have already done many types of research on pulp capping [7].
This fact becomes an opportunity for digital image processing technology to contribute and assist the dentist in estimating the thickness of tertiary dentin to obtain a quantitative value. Computer-aided techniques in dentistry have already been developed for many cases. Jose already used computation for a morphometric mandibular index [8]. Some researchers have proposed techniques to enhance radiograph image quality to ease diagnosis [9][10][11][12]. However, tertiary dentin thickness estimation has not yet been done.
Digital image processing technology has also been used in applications related to disease diagnosis and dental care. Caries detection using image processing has been performed with a deformable template technique [13]. This technique uses a polygon template that defines the general shape of a human tooth. After each tooth's form and location are found, caries is identified with a grayscale analysis method that detects the edges and segments caries from tooth surfaces. The caries section is recognizable because it appears dark in the radiographic image while the tooth is bright. In addition to detecting caries, other research has also attempted to classify the types of caries, i.e., primary and secondary caries, with a grayscale intensity-based processing method [14].
Segmentation, or separation between tooth objects and other parts, has received a lot of attention from researchers. Researchers have proposed methods based on congruency of the local structure of the image [15]; this method was unaffected by changes in image size, rotation, translation, changes of light, and noise. Another texture-based feature extraction method using the grey level co-occurrence matrix has also been proposed to separate each tooth in the radiograph [16]. This method is used by many researchers; however, it cannot obtain good results for tertiary dentin thickness calculation. Supervised learning using a Bayesian classifier was also developed [17].
Some researchers use the b-spline algorithm to calculate areas in medicine, and the b-spline method is widely used in the medical field. Lehmann [18] used b-splines of degree 2, 4, and 5 in image processing. Recently, Grove showed that b-splines are used in modelling medical images [19], some of them MRI and echocardiography images. Segmentation and tracking have been done by Pedrosa [20], covering 2D, 3D, and 4D images. From the results above, the b-spline is good enough to help construct curves for the medical domain, where radiographs contain many curves.
In dentistry, the use of b-splines has been demonstrated by Leung [21], who minimized bending mismatches in dental digital subtraction radiography with the b-spline method. Digital subtraction radiography can improve disease detection; however, bending and the grid can lead to mismatches in the radiograph, so b-splines can help to increase the accuracy of the radiograph's data. Digital image processing research related to dentistry is now widely developed. However, image processing technology has not yet been used to assist pulp capping treatment, particularly to estimate the thickness of tertiary dentin. That is an opportunity to contribute to the application of image processing technology in the field of dentistry. On the other hand, this research would help dentists to review the results of pulp capping efficiently and easily.
RESEARCH METHOD
Caries need to be restored to prevent a larger cavity, so that the tooth can be maintained and need not be extracted. Pulp capping treatment aims to protect the pulp from irritation and maintain its vitality. Figures 1(a)-(c) are tooth illustrations with caries, after temporary restoration, and after composite restoration, respectively. Observation of tertiary dentin thickness was done by comparing the thickness after treatment, as shown in Figures 1(b) and 1(c), with that prior to treatment, as shown in Figure 1(a). The thickness of tertiary dentin after treatment is one indication of the success of pulp capping treatment.
The ethical clearance standard of The Medical Research Ethics Committee of Universitas Muhammadiyah Yogyakarta was implemented in this study. The subjects were the periapical radiographs of 38 pulp capping treatment patients. Each patient visited the dental hospital 3 times: first, when the pulp capping material was applied; second, after one week of treatment and temporary restoration; and lastly, after more than one month, with composite as the final restoration. Informed consent was signed by the patients as the subject agreement. The research was carried out in three main parts: 1) collecting data, 2) the process of estimating tertiary dentin thickness, and 3) result validation, as shown in Figure 2. Each part is detailed below.
Collecting data
In this part, dental radiographic photos are obtained from the periapical radiograph data of patients who underwent pulp capping treatment at UMY Dental Hospital. The periapical radiographs should include photos before and after pulp treatment. Once collected, the data are classified by the dentist into three data sets according to image quality: high, medium and low. Classification of the data is done to facilitate the method-building and results-analysis parts. Next, the dentist determines the relevant points of the tertiary dentin; this is necessary because the points serve as a reference when the estimation process is performed. The output of this part is the availability of dental radiograph digital image data, where the success indicator is complete available data covering all the photographs required for the pulp capping treatment evaluation.
Process estimation tertiary dentin thickness
The following parts explain how the estimation is made:
Pre-process
The pre-processing part prepares the image to be processed in the later parts. It consists of improving image quality through increased contrast and noise reduction, aiming in particular to clarify the edge of the tertiary dentin area. Some photos show the same object but were shot at different distances, which affects the thickness estimation; therefore, the solution is to resize photographs taken at different distances. When the photos meet the criteria, they are loaded into the program to perform the calculation process. The image is resized to a 360×600 portrait image or an 800×480 landscape image. The dentist can also adjust the position of the image to fit their needs.
Define of point in dentin edges
Fully automatic dentin area calculation is very complicated because of poor image quality, and the edge of the dentin area is not very clearly visible. In this study, the calculation is done semi-automatically: the dentist needs to determine some starting points along the dentin contour. The points are determined by clicking on the desired dentin edge in the image, as shown in Figure 3, and the selected points are symbolized as 'x'. The dentist determines 6-8 dots. The number of points depends on the image to be estimated; if the tertiary dentin has varying curvature, the maximum number of points is required. The specified points strongly influence the calculation process.
Connecting points with b-spline method
The connection of the starting points is done automatically using the B-spline method. A B-spline is a generalization of the Bézier curve [22]; it overcomes some of the drawbacks of the Bézier curve and takes a different approach to representing piecewise polynomial curves. To connect the points, we need a plot function, which connects point to point linearly. Plotting a function with this command, we evaluate the function at x-values x1, x2, ..., xn and draw a curve passing through the points {(xk, yk)}, k = 1, ..., n, where yk = f(xk) [23]. The basic interval of f in B-form is the range that includes all the knots; since the first and the last knots have full multiplicity k, where k is the spline's order, f can be evaluated reliably at the ends of the basic interval. The curve representation is generated by 'fnplt'. This representation is fixed at the first and last points when f is a spline. Yet, since B-splines are zero outside their support, B-form functions are zero outside the basic interval of that form. This differs from a function in piecewise-polynomial form, whose value outside the basic interval is given by extending the leftmost, or rightmost, polynomial piece [24].
The cubic spline curve is applied after the plot function. The parameter values for the spline curve are chosen by Eugene Lee's centripetal scheme [25], in which the parameter increment between consecutive points is the square root of the chord length:

$$t_{j+1} - t_j = \sqrt{\| x(:, j+1) - x(:, j) \|_2}$$

A periodic cubic spline curve is built if there is no repeated point; repeated points, however, result in corners [26]. Using the cubic spline curve function changes the b-spline, as shown in Figure 4. The cubic spline forms the irregular shape of the tertiary dentin, as shown in Figure 5(a), whose area will then be calculated.

Figure 4. B-spline using cubic spline curve
Figure 5. (a) Image with the selected tertiary dentin area, (b) Conversion to binary image
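As an illustrative re-implementation in Python (the study itself uses MATLAB's spline functions), the sketch below fits a closed cubic spline through dentist-clicked edge points with a centripetal parameterization; the point coordinates are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical dentist-clicked edge points, closed by repeating the first one
pts = np.array([[120, 80], [140, 95], [165, 90], [180, 70],
                [170, 50], [145, 45], [125, 60], [120, 80]], dtype=float)

# Centripetal scheme: parameter increments = square roots of chord lengths
chords = np.linalg.norm(np.diff(pts, axis=0), axis=1)
t = np.concatenate([[0.0], np.cumsum(np.sqrt(chords))])

# Periodic cubic splines in x(t) and y(t) give a smooth closed boundary
sx = CubicSpline(t, pts[:, 0], bc_type="periodic")
sy = CubicSpline(t, pts[:, 1], bc_type="periodic")
tt = np.linspace(t[0], t[-1], 400)
boundary = np.column_stack([sx(tt), sy(tt)])   # dense curve for masking
```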
Calculate dentin thickness
Since the top and bottom edges of the tertiary dentin are not straight lines, the estimated tertiary dentin thickness is not calculated from the upper edge to the lower edge. Instead, the calculation counts the number of pixels in the tertiary dentin area. To enable this pixel count, the area is converted into a binary image, as in Figure 5(b). A binary image (an image of 0 and 1 symbols) is an array derived from a gray image; the image is composed of discrete values displayed in the array. The image area is marked with 1, whereas the background is marked with 0. Several geometric and topological features can be calculated from the projected object region [27]. We obtain the binary image using the poly2mask function, which converts a region of interest (ROI) polygon to a region mask [28]. This function separates the tertiary dentin area so that pixels in the tertiary dentin area have the value 1 whereas those beyond it have the value 0. Thus, the number of pixels is obtained by summing all the pixels in the image.
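Continuing the sketch above, the area computation can be mirrored with scikit-image, whose polygon() plays the role of poly2mask; the 0.0025 mm² per pixel factor is the one stated in the results section, while the image shape is an assumption matching the resized portrait images.

```python
import numpy as np
from skimage.draw import polygon

def dentin_area_mm2(boundary: np.ndarray, image_shape: tuple,
                    mm2_per_pixel: float = 0.0025) -> float:
    mask = np.zeros(image_shape, dtype=np.uint8)
    rr, cc = polygon(boundary[:, 1], boundary[:, 0], shape=image_shape)
    mask[rr, cc] = 1                           # tertiary dentin region set to 1
    return float(mask.sum()) * mm2_per_pixel   # pixel count -> mm^2

area = dentin_area_mm2(boundary, image_shape=(600, 360))  # boundary from above
```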
RESULTS AND ANALYSIS
From the radiographic photographs of patients examined at the UMY dental hospital, the procedure that the dentist would otherwise carry out by hand is computed by the program. The treatment is considered a success when the tertiary dentin after treatment is thicker than the untreated tertiary dentin before it. There are 38 patients with three photos each, taken at varying time intervals. The following sample radiographs are evaluated on the tertiary dentin area. From these 38 patients, our experiment uses 114 pictures covering the before- and after-treatment process. The validation is performed by comparing the results with the dentist's manual process.
The collected data went through a refinement process in which pictures with wrong naming or wrongly cropped dental radiographs were corrected. In total, 114 pictures were collected, and 135 pictures were tested to estimate the thickness of tertiary dentin. From the dental radiographs of patients who received tooth repair for different tooth types or caries, the dentist needs to identify, dot by dot, the tertiary dentin or pulp chamber. Figure 6 shows the images before, during and after treatment. In Figure 6, the dental image of the patient can be seen; eight points determined by the dentist are needed to reach the maximum accuracy. In some cases, the shooting position of the photos is not fixed or identical. As a result, the position of the tooth changes between pictures, which affects the shape of the tertiary dentin area. Therefore, a practical solution is to consult the dentist about each photo to be estimated, so that the accuracy of the calculation can be maximized.
The thickness of tertiary dentin is calculated by enumerating pixel values in the image. The image has already been converted to a binary image, and the pixels with binary value '1' are counted; the area inside the contour is the tertiary dentin, so that

tertiary dentin area (in pixels) = number of pixels with value 1.   (1)

From equation (1), the tertiary dentin value for a given case is obtained directly from the pixel count. Each pixel area is then multiplied by 0.0025 to obtain the area in square millimetres; this converts the pixel area of the dental radiograph into millimetres. In Figure 7, there are codes such as "In", "K1" and "K2". The "In" code means that the patient has not yet been treated with a pulp cap. The "K1" code denotes a tooth that has been treated with a pulp cap for several weeks, while "K2" denotes the thickness several months after treatment. Tertiary dentin thickness increases with the time elapsed after treatment, but it can grow only a few millimetres per month. This is the dentist's problem when deciding, with qualitative methods, whether a pulp-cap treatment is satisfactory. Using the quantitative method, the dentist will know faster whether the tertiary dentin is increasing or not. A rising value indicates that the dentist succeeded in performing the pulp-cap treatment on the patient. However, in some data the thickness of tertiary dentin after treatment decreased compared with the thickness before treatment; this can occur because erosion in caries or dead cells removes material. In Table 1, comparing the dentist's manual observation with the computerized inspection, there is almost no difference at all: only one patient's data differs from the computerized result out of 15 patients and 45 periapical radiograph images. Even for a patient whose dentin is decreasing, the manual and the digital results agree. However, there is one failed datum that the system can hardly interpret. This failure can happen because the angle of the dental radiography differs between the "In" image and the "K1" and "K2" images. A different perspective of the picture can lead the system to a different judgment of the tertiary dentin thickness. The dentist therefore needs to place the dots in the right place, and to do so in every radiograph image; otherwise there would be distortion in the calculation. For example, for P21 the expert judged that the thickness was rising, but the K2 radiograph was not good enough, which resulted in no measured improvement of the dentin thickness. The failed data therefore fall into two categories: first, the quality of the dental radiograph is not good; second, the tertiary dentin thickness estimation differs from the expert judgment.
Table 1 remarks for the illustrated cases (excerpt): the quality of the K2 radiograph is not good; the filling is good and hermetic; the filling is only slightly hermetic.
Table 2 presents the result of the calculation; there are 3 bad or distorted dental radiographs that can lead to a different calculation. Radiographs of bad quality or with distortion are omitted from the accuracy calculation. This can also help to improve the hospital's dental radiograph quality assurance before the images are processed by the program. From that table, we can conclude that there are 35 data with 2 failed data; these 2 data gave results different from the expert/dentist analysis. From the table, the accuracy is 94.2%.
CONCLUSION
Our experimental results indicate that the B-spline calculation performed by our program gives the same values as the visual evaluation method of the dentist acting as expert judge. The dentist no longer needs to compare photos: with the quantitative method the dentist can know how much the tertiary dentin thickness has increased compared with before. The quantitative method obtained from image processing lets the dentist define some points on the edge of the tertiary dentin, and the program then calculates that area in millimetres. The accuracy is 94.2%. In this research, the maximum number of points that can be marked on the edge is eight and the minimum is six.
Dental radiographs have low intensity and contrast, which is the biggest problem in this research; it could be alleviated if the quality of the dental radiographs were improved. For this reason, the calculation of the tertiary dentin thickness area uses a semi-automatic system. Its weakness is that the dentist must be very careful when determining the points on the edge. Our future research will extend the software to calculate the area with a fully automatic system, so that the dentist does not need to define the points on the edge of the tertiary dentin. Further on, the research will develop a mobile phone application that helps the dentist check their patients anytime and anywhere.
An Enhanced Secure Heuristic-Stochastic Routing Algorithm in MPLS Networks
To improve routing security in MPLS networks, we propose, on the basis of the stochastic routing algorithm, a proactive mechanism we call enhanced secure heuristic-stochastic routing (ESHSR), which brings to bear the Bayesian principle, explores the existence of multiple routes and forces packets to take alternate paths probabilistically. In this paper, we investigate game-theoretic techniques to develop routing policies that make interception and eavesdropping maximally difficult. Through simulations, we validate our theoretical results and show how the resulting routing algorithms perform in terms of security, delay and drop rate, and we contrast them with the existing mechanism, secure stochastic routing (SSR). We observe that our scheme makes routing more secure than traditional secure stochastic routing, as it makes use of information obtained by detecting the other side's behavior.
Introduction
The purpose of traffic engineering (TE) [1][2][3][4][5][6][7][8] is to improve network performance through the optimization of network resources. The emerging Multi-Protocol Label Switching (MPLS) technology has introduced an attractive solution to TE in IP networks. MPLS can efficiently support explicit routes set up through the use of Label Switched Paths (LSPs) between the ingress Label Switched Router (LSR) and the egress LSR. Hence it is possible to balance the traffic through the network, thus improving the network utilization and minimizing congestion. However, one of the most obvious attacks on a communication network is packet interception, which prevents data originating from one (or several) nodes from reaching the destination. Eavesdropping can be thought of as a "passive" form of interception, in which packets are "snooped" but not removed from the network. In "traditional" shortest-path routing protocols, the path over which a data packet travels is fairly predictable and easy to determine. Even if several paths with the same number of hops exist, routing algorithms typically select one of the possible options and utilize that same path for all packets. Indeed, a study by Zhang et al. [9] reveals that Internet routes are fairly persistent (e.g., often the same route between a source-destination pair persists for days; only 10% of the routes persist for a few hours or less). This makes IP networks vulnerable to packet interception and/or eavesdropping attacks. Notable exceptions to single-path routing schemes are Equal-Cost Multi-Path (ECMP) [10] and OSPF Optimized Multi-Path (OSPF-OMP) [11]. However, these algorithms were developed to increase throughput and not to make routing robust to attacks. In practice, they do not introduce unpredictability and therefore packet interception is fairly easy to achieve.
In this paper, we describe enhanced secure heuristic-stochastic routing, or ESHSR, whose main goal is to make packet interception maximally difficult. The algorithm explores the existence of multiple paths between two network nodes and routes packets so as to minimize predictability. Routers compute all possible paths between a source-destination pair and, according to a given probability distribution, assign some probability to each next hop. The net effect is that data packets traverse random paths on their way from the source to the destination. We should point out that, unlike secure stochastic routing (SSR) [12], we take a proactive and heuristic approach to making routing less vulnerable to attacks. In other words, based on partially detecting the attacker's behavior, packets are always sent along multiple paths according to some probability.
Enhanced Heuristic-Stochastic Routing
We consider an MPLS network where multiple parallel LSPs exist between any given ingress LSR and egress LSR pair. The main objective is to distribute the traffic at each ingress LSR among the multiple LSPs so as to balance the load through the network and thus improve the network performance. We model the routing problem as a game between the network designer, who specifies the routing algorithm, and an adversary, who attempts to intercept data in the network. We consider here a zero-sum game in which the designer wants to minimize the time it takes for a packet to be sent from node 1 to node n, and the adversary wants to maximize this time. To accomplish this, the adversary attempts to intercept the packet at particular links in the network. For short, we say that the adversary scans link l when she attempts to intercept the packet at that link.
We start by considering an on-line game in which the adversary selects a new link to be scanned every time the packet arrives at a new node and makes the selection knowing where the packet is, while the player determines a new path along which to forward the data and makes the selection knowing which link was scanned at the previous step. For generality, we take the probability of intercepting a packet to be link dependent and denote by q_l the probability of intercepting a packet traveling on link l, given that link l is being scanned by the adversary. The progress of the packet is then described by a Markov chain whose transition probabilities are determined by the player's forwarding distribution, the link scanned by the adversary, and the interception probabilities q_l; the state n is an absorbing state. The cost to be optimized is the average time it takes to send the packet from node 1 to node n, i.e., the expected absorption time of this chain. To optimize this cost, for each node the player who designs the routing chooses the distribution over next hops. The two-person zero-sum game just defined falls in the class of stochastic shortest-path games considered in [12], where it has been proved that the game admits a saddle-point solution. However, in [13] the player just selects the next hop stochastically; such a blind choice still leaves the data exposed to interception or eavesdropping by the adversary, and it can easily give rise to routing loops. In our scheme, ESHSR, we periodically update, for every link l, the belief that the adversary scans that link based on the Bayesian principle, and then adjust the routing strategy to make the transmission more secure. The Bayesian principle is used to modify the prior probability and constantly obtain a new posterior probability. Here we suppose the adversary has K types, a type meaning which link the adversary will attack, and H possible actions; p(k) represents the prior probability that the adversary belongs to type k. If we observe the adversary's action a_h, the posterior probability that the adversary belongs to type k is

p(k | a_h) = p(a_h | k) p(k) / sum over j = 1..K of p(a_h | j) p(j),

and the stationary Markov chain above can be re-written with these updated probabilities. The player continually detects the link that the adversary has attacked, adjusts his belief about the probability that the adversary attacks each link, and changes his routing strategy accordingly. It is very possible that the adversary adopts a similar strategy with respect to the player; the process in which the player and the adversary keep updating their beliefs about each other and their strategies constitutes the game between them.
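The following Python sketch is not the authors' formulation but illustrates the two ingredients just described: the Bayesian update of the belief about which link the adversary scans, and a probabilistic next-hop choice biased away from links believed to be scanned. The link names, interception probabilities q_l and detection accuracy are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

links = ["l1", "l2", "l3"]              # candidate next-hop links at a node
q = np.array([0.9, 0.8, 0.7])           # Pr[intercepted | link scanned], per link
belief = np.full(len(links), 1 / 3)     # prior Pr[adversary scans link k]

def bayes_update(belief, observed_scan, detection_accuracy=0.8):
    """Posterior over adversary types after an (imperfect) scan observation.

    likelihood[k] = Pr[observation | adversary scans link k]; Bayes' rule then
    gives posterior[k] = likelihood[k] * prior[k] / sum_j likelihood[j] * prior[j].
    """
    likelihood = np.where(np.arange(len(belief)) == observed_scan,
                          detection_accuracy,
                          (1 - detection_accuracy) / (len(belief) - 1))
    posterior = likelihood * belief
    return posterior / posterior.sum()

def next_hop(belief, q):
    """Route away from risk: weight each link by 1 - Pr[scanned] * Pr[caught]."""
    safety = 1.0 - belief * q
    probs = safety / safety.sum()
    return rng.choice(len(q), p=probs), probs

belief = bayes_update(belief, observed_scan=1)      # adversary seen scanning l2
link, probs = next_hop(belief, q)
print("posterior belief:", np.round(belief, 3), "forwarding probs:", np.round(probs, 3))
```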
Simulation Results
To evaluate the routing algorithm proposed in Section 3, we simulated the network in Figure 2, with data transmitted from the blue point to the red point, using the ns-2 network simulator [13]. In the simulations presented, all links have a propagation delay of 25 ms and a bandwidth of 2 Mbps. Each queue implements the drop-tail queuing discipline with a maximum queue size of 100 packets for the CBR simulations. All packets are 400 bytes long. The simulation time for each trial was 20 seconds. Experimental data were collected using CBR traffic over TCP connections.
Similar to SSR in [14], we were interested in determining the effect of ESHSR on security, drop rate, and packet transmission delay. We assumed here that the attacker chooses the set of links that maximizes the percentage of packets seen, i.e., the worst-case scenario. Based on the on-line game, we evaluate the routing algorithms SSR and ESHSR (the red line represents ESHSR, the blue line SSR). Figure 3 shows the simulated percentage of packets seen for each of them. As expected, ESHSR is more secure than SSR, since, based on the adversary's past behavior, packets are transmitted along more secure paths rather than simply along stochastic paths. Figure 4 shows the simulated average delay of SSR and ESHSR, and Figure 5 shows their simulated drop rates; because packets are more difficult to observe under ESHSR than under SSR, the average delay and drop rate under ESHSR are markedly smaller than under SSR.
TEACHING AVIATION ENGLISH: ENHANCING TRANSLATION SKILLS THROUGH THE CONTENT AND LANGUAGE INTEGRATED LEARNING METHOD
Teaching aviation translators involves the simultaneous provision of broad branch competence and narrow specialization. This study aims to test whether the application of the Content and Language Integrated Learning Method (CLIM) is effective for teaching Aviation English and enhancing translation skills at an aviation university. The experimental study was conducted within the framework of an elective course designed as an English language program with a professional focus, "Translation in the Aviation Industry", for future graduates majoring in Translation. The experiment involved 95 students, with one academic group designated as the experimental group (n = 44) and the second as the control group (n = 51). The study followed a pretest-posttest experimental design. At the pretest stage, the homogeneity of the CLIM and non-CLIM groups was statistically proved using Pearson's χ² criterion. The experimental group received the CLIM methodology, while the control group was taught with a traditional approach. The post-test took into account both linguistic and content-related learning outcomes aimed at improving aviation-oriented and translation-oriented English proficiency in the CLIM and non-CLIM research groups. The research findings demonstrated that the experimental group significantly outperformed the control group on the final test. Teaching Aviation English and enhancing translation skills through the application of the CLIM method, which encompassed aviation podcasts, subject-matter videos, professional literature overviews, stakeholders' involvement, and engagement in the aviation environment, therefore proved effective.
Introduction
The realities of globalization in the first decades of the 21st century are synchronous with updating the content of most educational phenomena, including foreign language for specific purposes (LSP). The special mission entrusted by humanity to foreign languages as the basis for intercultural interaction has actualized the integration of language and subject approaches in professional education and activities, in particular in the aviation industry. Indeed, aviation is a specialized, high-tech industry that covers a wide range of activities related to communication, the use of various terminologies and their correct interpretation, etc. (Ragan, 1996).
The problem of teaching English for Specific Purposes, in particular the aviation terminology, has been systematically studied from different perspectives (Gollin-Kies et al., 2015; Estival, 2016). Kovtun et al. (2021) studied aviation subject matter competence as the basis for the successful professional activity of a translator in aviation. Achieving this competence is realized through modelling a typology of activities for teaching subject matter in the course of aviation translation. Student motivation, practice-oriented learning methods, and communicative and informational learning tools have been recognized as the key components of educational activity modelling under this approach. The authors have proved the effectiveness of the synthesis of traditional and interactive exercises, illustrative audiovisual material, etc.
The strong civilizational development of the countries of the world and the strengthening of integration interactions in the field of aviation actualize the well-known thesis of Crawford (1993) about the growing need for both broad branch competence and narrow specialization. It is not the first time that the philosophical contradiction of the antonymous pair "broad-narrow" becomes the basis of promising scientific research in general, and educational methods in particular. This study is concerned with introducing effective methods into the educational process to achieve an integrated result required by employers in the aviation industry and by students. This method is based on the interconnection between English, which is the basis of broad professional translation competence, and aviation content, which is a component of narrow professional competence in the relevant field.
The urgency of the problem allowed us to put forward a hypothesis that teaching Aviation English and enhancing translation skills will be effectively implemented at the aviation university through the application of the Content and Language Integrated Learning Method (CLIL). The basis of the hypothesis test is the theoretical principles of teaching aviation translation to university students and the practical experience of ensuring the appropriate educational process, a specially developed experimental CLIM teaching method, which affects the effectiveness of learning Aviation English by translation students to form a terminological corpus and the content of professional activities in aviation.
The need to introduce the advanced CLIM technology into the educational process of universities is reinforced by the argumentation of the Krashen (1988) and Swan (1995a; 1995b) monitor model. They believe that language evolves when there is genuine conversation centered around communication skills rather than fixating on the precision of words and grammar. In addition, it is important to create a positive learning environment using this technology, to promote the awareness of learners that the language can be mastered only by having motivation and being an active subject of the learning process. Such a view can be taken as the basis for integrating both subject and language approaches (Dalton-Puffer & Nikula, 2006).
In the theoretical study of the Ukrainian scientist Potapenko (2014), the main definitions of the CLIM Method are analyzed, its key concepts are examined, and the advantages and difficulties of its implementation are distinguished. From the point of view of practical aspects of implementing the CLIM Method, the article by Karimi et al. (2019) is important. In this article, the scientists describe the experience of implementing such technology to improve the learning of aviation English by Iranian students of aviation specialties, as well as their attitude and motivation to learn highly specialized features of English for aviation. It is noted that the technology contributed to the deepening of the aviation content knowledge of students, provided that they integrated thinking and cognitive awareness during the learning process. Notably, the results of this study demonstrated how CLIM methodology significantly improved students' motivation. It also led to an improvement in language and subject learning among the students of aviation specialties (Karimi et al., 2019, p. 764). The conclusion that CLIM is an effective modern technology is also considered valid in terms of the methodology on which the educational phenomenon "English for Specific Purposes" is based. A study by Dib and Addou (n.d.) integrated methodological principles and practical implementation. The scientists conducted pre- and post-testing of one group of Zenata airport's air traffic controllers, which tested the effectiveness of their CLIM training. A significant contribution was made to the theory of the technology and its practical implementation in the training of specialists in the aviation industry by one of the "practical reviews," which was devoted to experiments, methods, and data analysis (Dib & Addou, n.d.).
The specified diversity of researched aspects of CLIM technology proves its relevance and determines the need for case studies of the method of complex (integrated) learning of content and language, and variations of its implementation in the educational process of translators and potential employees of the aviation industry.
The study aims to test the hypothesis that teaching Aviation English and enhancing the translation skills of aviation university students will be effective when CLIM and the experimental methodology are applied in a real university educational process.
Research tasks: to investigate the real state of the problem under study; to develop the research methodology and the procedure for its implementation; to select the content and to apply it in classes; to develop special accompanying exercises; to test the suggested structure.
The authors consider that Aviation Subject Matter Competence as a component of the translator's professional competence is the basis of successful professional activity in the aviation industry and is formed in the educational process of the university based on the application of CLIL; the English language is classified as a "language for specific purposes".
Methodology
The research was conducted at the National Aviation University in Kyiv (Ukraine) as part of the "Translation in the Aviation Industry" course for students majoring in Philology (Germanic Languages and Literature, including the "Translation" specialization). "Translation in the aviation industry" is the authors' elective academic discipline designed as an English for Specific Purposes (ESP) course for third-year Bachelor translation students. The course is aimed at mastering aviation English-Ukrainian translation.
Participants
A total of 95 students majoring in translation took part in the pedagogical experiment. One academic group was chosen as the experimental group (a total of 44 students) and the other one as the control group (a total of 51 students). As part of the experiment, the study included a control and a final evaluation. The experimental group (EG) followed the CLIM methodology. The control group (CG) practiced according to the traditional methodology.
Research Design
The researchers started to collect data after the participants began their "Translation in the aviation industry" course. In the first lesson of the course, the pretest was administered to the translation students to examine their general expertise in technical aviation discourse and translation. In the pretest stage, the homogeneity of the CLIM and non-CLIM groups was statistically proved using Pearson's χ² Correlation Coefficient (Social Science Statistics).
The next stage of the research was designing and piloting the experimental CLIM methodology for experimental studies. Both groups (EG and CG) underwent 5 intermediate tests designed to monitor the progress of acquiring Aviation English and mastering translation skills for aviation subject matter (ASM) texts selected according to the course syllabus. The two groups worked independently. While the EG students were constantly engaged in CLIL, the CG students only worked on traditional exercises from their textbooks.
In the last lesson of the "Translation in the aviation industry" course, the posttest with the same task as the pretest was administered to check aviation language and content learning outcomes. The teachers graded the students' performance on different aspects: vocabulary, grammar, semantics and pragmatics.
According to the translation performed at the pre- and post-task, three quality levels were determined (low, satisfactory and high). The results' reliability was checked by Pearson's χ² Correlation Coefficient (Social Science Statistics, n.d.).
The pedagogical experiment lasted one academic term (4 ECTS credits) within the framework of the course.
Data Collection and Analysis
In this study, 3 instruments were employed for data collection (pretest, intermediate tests, and posttest).
The first instrument, the pretest, was related to students' language and content knowledge of the ASM text and translation. It was a specialized text on avionics called "Cockpit" that students had to translate from English into Ukrainian.
The outcomes of students' translations were evaluated under 2 criteria: aviation language (terminology-oriented) and aviation content (subject matter-oriented).
Monitoring the ongoing process of the pedagogical experiment, researchers systematically collected and analyzed data through the second instrument, intermediate tests. The intermediate results were placed into the register books of both EG and CG.
The third instrument, post-testing, focused on language- and content-related learning results. These outcomes were related to the improvement of aviation-related English proficiency and interpretation with or without CLIM practice in the monitored groups. The test revealed participants' achievements in Aviation English and subject matter knowledge. Students were asked to translate the text "Flight deck", adapted to match the pretest text "Cockpit". This was done to assess the language and content knowledge before and after the experimental teaching. Additionally, on the same day, students underwent an oral examination where they were required to discuss topics related to aviation.
Pretest Evaluation
The authors systematized the data obtained during the preliminary testing.
Teachers evaluated students' English-Ukrainian translations of the aviation text by two criteria: aviation language (terminology-oriented) and aviation content (subject matter-oriented).As a result, they have defined three levels of quality.
A high-quality translation used correct aviation terminology (aviation terms, abbreviations and notions), rendered aviation realia and slang that assume extra-linguistic knowledge, and followed the grammar and stylistic rules of Aviation English.
At the satisfactory quality level, the translation contained some terminological inaccuracies due to a lack of experience in the aviation sector; nevertheless, its meaning remained clear to an aviation expert.
In the translation with a low-quality standard, the specialized terminology was inaccurately rendered, and subject matter information was not detected.
Therefore, the sense of professionally oriented situations was distorted.
The information presented in Figure 1 illustrates the advancement of both EG and CG students and the improvement of their translation performance in the three quality levels at the initial stage of testing. The results showed that both groups lacked a sufficient level of aviation-related English proficiency as well as knowledge in this discipline. The experimental CLIM methodology was introduced in the EG to improve their results.
Meanwhile, a conventional instructional approach was employed in the CG.
Homogenizing Groups on the Pretest
The uniformity of the samples was verified through an examination of Pearson's χ² Correlation Coefficient between the two empirical data indicators (Social Science Statistics, n.d.). Following the calculations, we obtained χ²_cr = 5.991 (at p ≤ 0.05) and χ²_cr = 9.21 (at p ≤ 0.01); the "axis of significance" results of the pre-test are shown in Figure 2. Pearson's χ² criterion was calculated using the standard formula χ² = Σ_i (O_i − E_i)² / E_i, where O_i and E_i are the observed and expected frequencies. We obtained χ²_emp = 0.197. Therefore, since χ²_cr > χ²_emp, the empirical value confirms the H0 hypothesis: the two variables do not differ significantly from a normal distribution, i.e., the EG and CG distributions are homogeneous. Table 1 displays the results of the χ²-criterion, which compares the empirical distributions in the experimental group (EG) and control group (CG); critical and empirical values are presented in Table 2.
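As an illustration of this homogeneity check, the following Python sketch runs Pearson's χ² test on hypothetical EG/CG counts over the three quality levels; the real per-level counts are shown only graphically in Figure 1, so the numbers here are assumptions that merely respect the group sizes n = 44 and n = 51.

```python
import numpy as np
from scipy.stats import chi2_contingency, chi2

counts = np.array([[20, 18, 6],     # EG: low, satisfactory, high (assumed counts)
                   [24, 20, 7]])    # CG: low, satisfactory, high (assumed counts)

chi2_emp, p_value, dof, expected = chi2_contingency(counts, correction=False)
chi2_cr_05 = chi2.ppf(0.95, dof)    # 5.991 for dof = 2
chi2_cr_01 = chi2.ppf(0.99, dof)    # 9.210 for dof = 2

print(f"chi2_emp = {chi2_emp:.3f}, dof = {dof}, p = {p_value:.3f}")
print(f"critical values: {chi2_cr_05:.3f} (p<=0.05), {chi2_cr_01:.3f} (p<=0.01)")
# chi2_emp below the critical value -> H0 (homogeneous groups) is retained.
```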
Intermediate tests
During the course, the enrolled CLIM and non-CLIM students underwent 5 intermediate tests. The tests were designed to monitor the progress of acquiring Aviation English and mastering translation skills in the EG and CG. Each test was done after finishing a separate subtopic within the course. Tests 1, 4, and 5 were designed to elicit aviation subject matter (to measure content learning). Tests 2 and 3 were designed to assess the quality of translation from aviation English into Ukrainian; their primary goals were to gauge language proficiency and evaluate translation skills. Test 1. Fundamental concepts in aviation (Task: listen to the speaker discuss various aircraft types and respond to the questions). Test 2. Aircraft structure (Task: translate the text on aircraft instruments and systems from English into Ukrainian). Test 3. Airport operation (Task: watch a video about airport layout and make an annotative interpretation from English). Test 4. Aviation crashes and accidents (Task: make a report about the human factor in aviation).
Test 5. Aviation safety and security (Task: discuss the various factors that impact aviation safety, including Controlled Flight Into Terrain (CFIT), bird strikes, volcanic ash, icing, wind shear, hijacking, etc.).
Figure 3 shows the mean values based on a five-point scale.
Figure 3 - Intermediate tests evaluation
As depicted in Figure 3, students in the experimental group (EG) exhibited greater current learning progress compared to those in the control group (CG), providing evidence for the effectiveness of the experimental CLIM methodology.
Reliability of Results
The independent Pearson's χ² test was employed to assess the results of the CLIM and non-CLIM groups at the post-test stage (Table 3). Result: χ²_emp = 12.656.
Differences between these distributions can be assumed to be valid if χ²_emp is equal to or higher than χ²_0.05 = 5.991. Such changes can be considered even more reliable if χ²_emp is equal to or higher than χ²_0.01 = 9.21. Therefore, χ²_emp exceeds the critical value, and the differences between the distributions are statistically significant. Thus the H0 hypothesis is rejected. The CLIM methodology statistically proves to be effective.
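A quick way to verify this decision rule, assuming SciPy, is to compare the reported χ²_emp = 12.656 with the critical values for two degrees of freedom (three quality levels):

```python
from scipy.stats import chi2

chi2_emp = 12.656
dof = 2
print(chi2.ppf(0.95, dof))   # 5.991  -> chi2_emp exceeds it
print(chi2.ppf(0.99, dof))   # 9.210  -> chi2_emp exceeds it as well
print("reject H0:", chi2_emp > chi2.ppf(0.99, dof))
```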
Discussion
The research outcomes showed that the CLIM students' performance in tests was higher compared to the other group, who took part in the traditional ESP and translation class. Following the CLIL methodology, EG students were more confident in communicating in English on aviation topics. They had professional skills in handling and translating aviation terms and concepts. In addition, EG students were able to translate ASM texts adequately in both linguistic and pragmatic contexts.
The research was based on such scientific categories as communicative, competence and integrative approaches; Aviation English; and the experimental CLIM methodology. The developed and tested methodology is reproducible; it develops translation students' listening, reading, speaking and translation skills aimed at professional activities in the aviation industry.
Communicative, competence and integrative approaches
We consider the interaction of communicative, competence and integrative approaches as a synergy that provides a total effect because when two or more factors interact, their effect significantly outweighs the effect of each component.
Since communication as a concept and action integrates traditional phenomena, i.e. messages, connection, and communication, and presents many directions of human interaction in the world, communicative and competence approaches in education are recognized as basic, as stated in numerous studies of international scientists (Brumfit & Johnson, 1979; Grognet & Crandall, 1982). We followed this conclusion when developing the CLIM experimental methodology.
An integrative method was recognized as a driving force for advancing education. Specifically, skills that can be transferred and are cultivated according to the criteria of academic standards are considered equivalent to crucial competencies within the higher education framework across Europe (Bennett et al., 1999). They encompass the abilities to locate and analyze information; communicate; interact; make presentations; plan and solve problems; develop socially; analyze; evaluate and find the right way out of a problem situation; see global perspectives; have an active civic position; and be aesthetically responsible (Fallows & Steven, 2000). In our view, the outlined list summarizes the professional characteristics of an aviation translator, and as a result, it was considered in the development of the CLIM experimental methodology.
During our research, we also relied on the thesis by Klein (2005). The author believes that the integrative approach serves as a comprehensive concept encompassing various elements, including structures, tactics, and undertakings that connect different gaps. These may include the transition from high school to college and the integration of general education with the major. This method bridges the gap between introductory and advanced levels, and links experiences within and outside the classroom. It also harmonizes theory with practice, and connects diverse disciplines and fields (p. 8).
Therefore, without analyzing many other scientific views on these approaches, we note that in the educational process of the university, communicative, competence, and integrative approaches form a kind of triangular umbrella, under the "protection" of which translation students master Aviation English as a professional language.
Aviation English
As a language of professional activity in the translation of aviation terminology, aviation English has been studied in various aspects. Notably, the majority of studies focus on the learning of aviation-related English by future students of aviation specialties as part of their compulsory disciplines. They are vital for the professional activities of future specialists in the aviation sphere. The practical results of these studies proved that the most widespread approaches to Aviation English teaching for aviation translation students are the competence, communicative and contextual approaches. The context of the English aviation language is being updated, in particular following the requirements of the International Civil Aviation Organization, which is discussed in articles by Kukovec (2008). The scholar underlines the importance of teaching English for aviation and wireless communication following the requirements for pilots' language proficiency that the International Civil Aviation Organization has recently established. In particular, this refers to new views on the role of specific professional expressions. After all, the ICAO standards are meant to cover many routine scenarios and include some potential emergency or non-standard situations. The authors emphasize that the prescribed phrases cannot cover all possible scenarios and reactions. Thus, it is necessary to have a language that goes beyond the narrow set of ICAO phraseology.
There is a need for aviation English based on a proficient knowledge of standard English (p.129).
After examining the linguistic and psycholinguistic aspects of wireless discourse and considering the factors contributing to communication errors in wireless communication, along with analyzing the psychophysiological aspects of pilots' in-flight activities (such as information overload, time constraints, and constant stress), Kovtun, Melnyk, Khaidarie, and Harmash (2019) have pinpointed specific exercises for aviation students. These exercises aim to fulfill language criteria for ensuring safety, clarity, and efficiency in communication within the sphere of civil aviation. In particular, they refer to exercises aimed at preparing students to switch from one language to another. It is well known that translation is a multifaceted phenomenon, where adequacy is considered the key factor. Aviation translation is based on Aviation Subject Matter Competence. A misunderstanding caused by an inadequate translation can lead to unpredictable consequences (Stitt, 2016). We have conducted several studies on improving the methodology of teaching a foreign language to students at the aviation university, tested the content-oriented structure of Aviation Subject Matter Competence of translation students within the framework of the English-Ukrainian language pair, and proved that audiovisual material is an important means of visualizing images, with the help of which the Aviation Subject Matter Competence of translation students is successfully formed. The positive results obtained in the course of previous studies became the basis for the development of the CLIM experimental methodology. The suggested methodology is based on the previously proven statement that the prerequisite for an adequate English-Ukrainian aviation translation in professional activity is a formed Aviation Subject Matter Competence, which integrates students' terminologically significant aviation knowledge. Thus, Aviation English is both a content and a means of training specialists in the aviation industry, including translators.
CLIL
The hypothesis that Aviation English knowledge and translation skills will be effectively formed in aviation university students through the application of CLIM is based on the results of previous studies: audiovisual material and special exercises contribute to the formation of the Aviation Subject Matter Competence of students, which is a prerequisite for their ability to perform adequate English-Ukrainian aviation translation in professional activity. The need for innovative changes in training students under the realities of life (the global pandemic, the war in Ukraine) led to the comprehensive mastering by university teachers of distance education resources, particularly Internet content available for educational purposes. According to our approach, such educational tools should be included in the experimental CLIM methodology as mandatory components: aviation podcasts, subject-related videos, professional literature overview, inviting stakeholders, and engaging in the aviation environment. The developed experimental technology takes into account the opinion of CLIM researchers who study various aspects of its implementation. Researchers from Columbia University, specifically Mcdougald and Pissarello (2020), released findings from their study titled "Content and Language Integrated Learning: In-Service Teachers' Knowledge and Perceptions Before and After a Professional Development Program." The article proves that the mixed-methods research has led to significant progress in the implementation of CLIM. The research investigated the views and understanding of in-service teachers regarding content and language-integrated learning (CLIL) and bilingual education.
The authors posit that effective teamwork and support from administrators are essential elements for the successful execution of CLIL.
We acknowledge the assertion that CLIM has been employed as an instructional method for teaching foreign languages, where language forms are acquired indirectly through non-linguistic material, and we see it as constructive (European Commission, 2006; Marsh, 2002). Karimi et al. (2019) conduct a thorough examination of the contributions made by different scholars to the evolution of CLIM theory and its practical application. The authors describe how to improve the learning of aviation English by pilots. Two theses were recognized as important for our research. Firstly, all academic aviation programs implemented worldwide offer English as the official and standardized language of aviation communication; in the first place, a significant portion of aircraft and airline manuals, along with pilots' documents, flight plans, and airport control procedures, are typically written in the English language. Secondly, the introduction of CLIM has led to a higher proficiency in foreign languages and enhanced comprehension of aviation-related information.
Furthermore, students' motivation has shown an increase compared to conventional non-CLIM methods, although measuring or verifying this motivation proved challenging during the CLIM implementation.
So, it was found that the scientific basis of CLIM is actively formed by both scientists and pedagogical practice. We conducted experimental testing to evaluate the effectiveness of the CLIM methodology on students' learning of aviation-related English. The modern content, developed through a combination of communicative, competency, and integrative approaches, was implemented through podcasting and video technologies such as aviation podcasts and YouTube videos. Additionally, we utilized a review of professional literature from aviation-related forums and magazines, engaged aircraft stakeholders by inviting experienced pilots and aviation specialists, and immersed students in the aviation environment through visits to exhibitions, museums, and factories. The primary focus of the study was to assess the impact of the CLIM methodology on improving students' comprehension of aviation-related English, including vocabulary, content understanding, and the quality of translation of English-Ukrainian ASM texts. To achieve this, a specially designed CLIM experimental teaching methodology was introduced and tested in the EG.
CLIM - Experimental teaching methodology
The CLIM techniques generated substantial material for educational purposes, facilitating the development of language skills such as listening, reading, speaking, and translation. Learners who embraced the CLIM approach demonstrated proficiency in mastering aviation English and gained the ability to translate aviation terminology and texts. The experimental CLIM method incorporated diverse elements, including aviation podcasts, thematic videos, a review of professional literature, and engaging students in activities related to the aviation field, which shaped the content of the methodology.
Podcasts for learning aviation terminology
The Experimental Group students were trained in listening to aviation experts talk about various professional situations and how to deal with them.
We used the most-viewed podcasts to help students master aviation vocabulary and topics through listening. To support students' listening, teachers made the subject-matter vocabulary explicit beforehand.
Future translators should take notes while listening. Afterwards, they discuss their listening in pairs or with an entire group.
Subject matter videos
Subject-related videos were used in CLIM to develop translation competence on aviation topics. The content-oriented audiovisual material with specially developed training activities enhanced students' skills in translating ASM texts.
Stakeholders involvement
Stakeholders play an essential role in the modern educational process.
Involving aviation stakeholders can greatly enhance students' learning motivation and improve subject knowledge. In our CLIM experiment, aviation pilots from Windrose Airlines attended special workshops, round tables, internet conferences and thematic webinars.
The decision-making workshop was a brainstorming activity aimed at building strong professional-academic cooperation between pilots and students who study Aviation English. The workshops aimed to search for effective ways to overcome linguistic challenges in the pilot's professional activity, such as passing the ICAO English Test.
Round tables, internet conferences and thematic webinars were organized to engage students in active discussion with pilots on aviation topics. Pilots shared their professional experiences and eagerly answered students' questions.
Engagement in the aviation environment
In the CLIM methodology, it is very important to acquire knowledge in a professional environment. The suggested hypothesis, that teaching Aviation English and enhancing translation skills are effectively implemented for aviation university students through applying the CLIM method, is confirmed. The essence of the tested experimental technique is summarized in Figure 5. In light of the findings, it is suggested that university program developers should consider the use of the CLIM methodology in their syllabi. In this way, university teachers are expected to apply it in their language and translation classes.
This type of strategy favors the development of meaningful content for meaningful learning and strengthens translation skills. Besides, the use of experimental teaching methods is recommended for future interpreters as well as for future pilots and others involved in the aviation professional community. Since the teaching experiment was conducted using a blended learning format due to the pandemic and the war, the proposed methodology can be applied both in traditional classrooms and within e-learning.
This study was limited to the framework of the elective academic discipline "Translation in the aviation industry", designed as an English for Specific Purposes (ESP) course for third-year Bachelor translation students and aimed at mastering aviation English-Ukrainian translation. The elective course is not viewed as final in every detail; it is going to be further improved based on teaching experience to meet the students' and employers' needs more adequately. In particular, the prospects of further research could include practical tips for designing CLIM activities, suggestions for integrating aviation-specific content into language instruction, and insights into how to effectively measure students' feedback on CLIL-based courses.
Figure 1 - The advancement of EG and CG students and the enhancement of their translation proficiency demonstrated in the pre-test
Figure 2 - «Axis of significance» on the pretest
Figure 4 displays the post-test results, indicating the aviation English proficiency and translation skills levels in both the experimental group (EG) and control group (CG).
Figure 4 - Distribution of posttest scores among EG (experimental group) and CG (control group) students at different levels
Figure 5 - Modelling of the CLIM experimental methodology
Table 1 - χ²-criterion calculation of the empirical distributions in the Experimental Group and Control Group after the pre-test
Table 2 - Critical and empirical values of χ² at ν = 2
Table 3 - χ²-criterion calculation of the empirical distributions in EG and CG on the posttest
Convergence Rates of Latent Topic Models Under Relaxed Identifiability Conditions
In this paper we study the frequentist convergence rate for the Latent Dirichlet Allocation (Blei et al., 2003) topic models. We show that the maximum likelihood estimator converges to one of the finitely many equivalent parameters in Wasserstein's distance metric at a rate of $n^{-1/4}$ without assuming separability or non-degeneracy of the underlying topics and/or the existence of more than three words per document, thus generalizing the previous works of Anandkumar et al. (2012, 2014) from an information-theoretical perspective. We also show that the $n^{-1/4}$ convergence rate is optimal in the worst case.
Introduction
We consider the classical Latent Dirichlet Allocation (LDA) model for topic modeling of a collection of unlabeled documents (Blei et al., 2003). Let V be the vocabulary size, K be the number of topics, and denote conveniently each of the V words in the vocabulary as 1, 2, ..., V. Let θ = (θ_1, ..., θ_K), where θ_k ∈ Δ^{V-1} = {π ∈ R^V : π ≥ 0, Σ_i π_i = 1}, be a collection of K fixed but unknown topic word distribution vectors that one wishes to estimate. The LDA then models the generation of a document X = (x_1, ..., x_m) of m words as follows:

h ~ ν_0;  conditioned on h, the words x_1, ..., x_m are i.i.d. with x_t | h ~ Categorical( Σ_{k=1}^{K} h_k θ_k ).   (1)

Here Categorical(π) is the categorical distribution over [V] parameterized by π ∈ Δ^{V-1}, meaning that p(x = j | π) = π_j for j ∈ [V], and ν_0 is a known distribution that generates the "mixing vector" h ∈ Δ^{K-1}. In the original LDA model (Blei et al., 2003) ν_0 is taken to be the Dirichlet distribution, while in this paper we allow ν_0 to belong to a much wider family of distributions. The objective of this paper is to study rates of convergence for estimating θ from a collection of independently sampled unlabeled documents X_1, ..., X_n. Each document is assumed to be of the same length m. The estimation error between the underlying true model θ and an estimator θ̂ is evaluated by their Wasserstein's distance

d_W(θ̂, θ) = min over permutations π of Σ_{k=1}^{K} ||θ̂_k − θ_{π(k)}||_1,   (2)

where π : [K] → [K] is a permutation on [K]. When K and V are fixed, the ℓ_1-norm in the definition of Eq. (2) is not important, as all vector ℓ_p norms are equivalent.
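For readers who want to experiment with the model, the following Python sketch samples documents according to Eq. (1) and evaluates the matching distance of Eq. (2); the vocabulary size, topic vectors and Dirichlet choice of ν_0 are illustrative and not taken from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
V, K, m = 5, 2, 3                                   # vocabulary, topics, words per document

def sample_document(theta, nu0_alpha):
    h = rng.dirichlet(nu0_alpha)                    # mixing vector h ~ nu_0
    topics = rng.choice(K, size=m, p=h)             # latent topic of each word
    return np.array([rng.choice(V, p=theta[k]) for k in topics])

def wasserstein_matching(theta, theta_hat):
    """d_W of Eq. (2): minimum over permutations of the summed l1 errors."""
    return min(sum(np.abs(theta[k] - theta_hat[p[k]]).sum() for k in range(K))
               for p in itertools.permutations(range(K)))

theta = np.array([[0.4, 0.3, 0.1, 0.1, 0.1],
                  [0.1, 0.1, 0.2, 0.3, 0.3]])
docs = np.array([sample_document(theta, nu0_alpha=np.ones(K)) for _ in range(4)])
print(docs)
print(wasserstein_matching(theta, theta[::-1]))     # 0.0: identical up to permutation
```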
When θ satisfies certain non-degenerate conditions, such as {θ_j}_{j=1}^K being linearly independent (Anandkumar et al., 2012, 2014) or satisfying the stronger "anchor word" (Arora et al., 2012) or "p-separability" conditions (Arora et al., 2013), computationally tractable estimators exist that recover θ at an n^{-1/2} rate measured in the Wasserstein's distance d_W(·, ·). The general case of θ being non-separable or degenerate, however, is much less understood. To the best of our knowledge, the only convergence result for the general θ case in the d_W(θ̂, θ) distance measure is due to Nguyen (2015), who established an n^{-1/2(K+α)} posterior contraction rate for hierarchical Dirichlet process models. We discuss in Sec. 1.1 several important differences between (Nguyen, 2015) and this paper.
We analyze the maximum likelihood estimation of the topic model in Eq. (1) and show that, with a relaxed "finite identifiability" definition, the ML estimator converges to one of the finitely many equivalent parameterizations (see Definition 2 and Theorem 1 for a rigorous statement) in Wasserstein's distance d_W(·, ·) at the rate of at least n^{-1/4}, even if {θ_j}_{j=1}^K are non-separable or degenerate. Such a rate is shown to be optimal by considering a simple "over-fitting" example. In addition, when {θ_j}_{j=1}^K are assumed to be linearly independent, we recover the n^{-1/2} parametric convergence rate established in (Anandkumar et al., 2012, 2014).
In terms of techniques, we adapt the classical analysis of rates of convergence for ML estimates in (Van der Vaart, 1998) to give convergence rates under finite identifiability settings. We also use Le Cam's method to prove corresponding local minimax lower bounds. At the core of our analysis is a binomial expansion of the total-variation (TV) distance between distributions induced by neighboring parameters, and careful calculations of the "level of degeneracy" in the TV-distance expansion of topic models, which subsequently determines the convergence rate.
Related work
In the non-degenerate case where {θ_j}_{j=1}^K are linearly independent, Anandkumar et al. (2012, 2014) and Arora et al. (2012) applied the method of moments with noisy tensor decomposition techniques to achieve the n^{-1/2} parametric rate for recovering the underlying topic vectors θ in Wasserstein's distance. Extensions and generalizations of such methods are many, including supervised topic models (Wang & Zhu, 2014), model selection (Cheng et al., 2015), computational efficiency (Wang et al., 2015) and online/streaming settings (Huang et al., 2015; Wang & Anandkumar, 2016). Under slightly stronger "anchor word" type assumptions, Arora et al. (2012) developed algorithms beyond spectral decomposition of empirical tensors, and Arora et al. (2013) demonstrated empirical success of the proposed algorithms.
Topic models are also intensively studied from a Bayesian perspective, with Dirichlet priors imposed on the underlying topic vectors θ. Early works considered variational inference (Blei et al., 2003) and Gibbs sampling (Griffiths & Steyvers, 2004) for generating samples or approximations of the posterior distribution of θ. Tang et al. (2014) and Nguyen (2015) considered the posterior contraction of the convex hull of topic vectors and derived an N^{-1/2} upper bound on the posterior contraction rate, where N = log n / n + log m / m + log m / n. Nguyen (2013, 2016) further considered the more difficult question of posterior contraction with respect to the Wasserstein's distance. Apart from the Bayesian treatments of posterior contraction that contrast with our frequentist point of view of worst-case convergence, one important aspect of the work of (Tang et al., 2014; Nguyen, 2013, 2015, 2016) is that the number of words per document m has to grow together with the number of documents n, and the posterior contraction rate becomes vacuous (i.e., constant level of error) for fixed-m settings. In contrast, in this paper we consider m being fixed as n increases to infinity.
Our work is also closely related to convergence analysis of singular finite-mixture models. In fact, our n^{-1/4} convergence rate can be viewed as a "discretized version" of the seminal result of Chen (1995), who showed that an n^{-1/4} rate is unavoidable to recover mean vectors in a degenerate Gaussian mixture model with respect to the Wasserstein's distance. Differences exist, however, as topic models have a K-dimensional mixing vector h for each observation and are therefore technically not finite mixture models. Ho & Nguyen (2016) proposed a general algebraic statistics framework for singular finite-mixture models, and showed that the optimal convergence rate for skewed-normal mixtures is n^{-1/12}. More generally, singular learning theory is studied in (Watanabe, 2009, 2013), and the algebraic structures of Gaussian mixture/graphical models and structural equation models are explored in (Leung et al., 2016; Drton et al., 2011; Drton, 2016).
Limitations and future directions
We state some limitations of this work and bring up important future directions. In this paper the vocabulary size V and the number of topics K are treated as fixed constants and their dependency in the asymptotic convergence rate is omitted. In practice, however, V and K could be large, and understanding the (optimal) dependency on these parameters is important. We consider this as a high-dimensional version of the topic modeling problem, whose convergence rate remains largely unexplored in the literature.
Our results, similar to the existing works of Anandkumar et al. (2012, 2014), are derived under a "fixed m" setting. In fact, the convergence rates remain nearly unchanged by uniformly sampling 2 or 3 words per document, and it is not clear how longer documents could help estimation of the underlying topic vectors under our framework. In contrast, the posterior contraction results in (Tang et al., 2014; Nguyen, 2015) are only valid under the "m increasing" setting. We conjecture that the actual behavior of the ML estimator should be a combination of both perspectives: m ≥ 2 and n → ∞ are sufficient for consistent estimation, and m growing with n should deliver faster convergence rates.
Finally, the ML estimator for the topic modeling problem is well-known to be computationally challenging, and computationally tractable alternatives such as tensor decomposition and/or nonnegative matrix factorization are usually employed. In light of this paper, it is an interesting question to design computationally efficient methods that attain the n −1/4 convergence rate without assuming separability or non-degeneracy conditions on the underlying topic distribution vectors.
Additional notations
For two distributions P and Q, we write d TV (P; Q) = (1/2) ∫ |dP − dQ| = sup A |P(A) − Q(A)| for the total variation distance between P and Q, and KL(P‖Q) = ∫ log(dP/dQ) dP for the Kullback-Leibler (KL) divergence between P and Q. For a sequence of random variables {A n }, we write A n = O P (a n ) if for any δ ∈ (0, 1), there exists a constant C > 0 such that lim sup n→∞ Pr[|A n | > C a n ] ≤ δ.
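As a quick illustration of these definitions on a finite vocabulary (the setting used throughout, with the counting measure), the following small Python sketch computes the total variation distance and the KL divergence for two discrete distributions; the function names and the example distributions are ours and are not part of the paper.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance 0.5 * sum |p - q| for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

def kl_divergence(p, q):
    """KL(p || q) = sum p * log(p / q) for strictly positive discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Example: two word distributions on a vocabulary of size V = 4.
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.35, 0.3, 0.25, 0.1])
print(tv_distance(p, q), kl_divergence(p, q))
```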
Main results

Assumptions and finite identifiability
We make the following regularity assumptions on θ and ν 0 : (A1) There exists a constant c 0 > 0 such that θ j (ℓ) > c 0 for all j ∈ [K] and ℓ ∈ [V]; (A2) ν 0 is exchangeable, meaning that ν 0 (A) = ν 0 (π(A)) for any permutation π of the K coordinates and any measurable set A. Condition (A1) assumes that all topic vectors {θ j } K j=1 in the underlying parameter θ lie in the interior of the V-dimensional probabilistic simplex ∆ V −1 , which was also assumed in previous work (Nguyen, 2015; Tang et al., 2014). We use Θ c 0 to denote all parameters θ that satisfy (A1). The assumption (A2) only concerns the mixing distribution ν 0 , which is known a priori, and is satisfied by "typical" priors on h, such as Dirichlet distributions and the "finite mixture" prior. Let p θ (X i ) = ∫ ∏ l p θ,h (X il ) dν 0 (h) be the likelihood of X i with respect to parameter θ, where p θ,h (x) = ∑ K j=1 h j θ j (x); alternatively, we also write p θ,m (X i ) for this likelihood to make the number of words m per document explicit. In the classical theory of statistical estimation, one necessary condition to consistently estimate θ from empirical observations {X i } n i=1 is the identifiability of θ, loosely meaning that different parameters in the parameter space give rise to different distributions on the observables.
In the context of mixture models, the classical notion of identifiability is usually too strong to hold. For example, in most cases θ 1 , . . . , θ K can only be estimated up to permutations, provided that ν 0 is exchangeable. This motivates us to consider a weaker notion of identifiability, which we term "finite identifiability". Finite identifiability is weaker than the classical/exact notion of identifiability in the sense that two different parameterizations θ, θ′ ∈ Θ are allowed to have the same observable distributions (almost everywhere), making them indistinguishable by any statistical procedure. On the other hand, finite identifiability is sufficiently strong that non-trivial convergence can be studied for any infinite parameter space Θ. Below we give a few examples of finitely identifiable or non-identifiable distribution classes.
Example 2. The LDA model (1) with K ≥ 2 topics and m = 1 word per document is not finitely identifiable, because any parameterization θ = (θ 1 , . . . , θ K ) with the same "average" word distribution θ̄ = (1/K) ∑ K k=1 θ k yields the same distribution of documents, and for any θ there are infinitely many θ′ that match exactly its average distribution θ̄.
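To make the non-identifiability in Example 2 concrete, the sketch below simulates documents from the topic model with p θ,h (x) = ∑ j h j θ j (x), taking ν 0 to be a symmetric Dirichlet prior (one exchangeable choice consistent with (A2)); the specific topic matrices and sample sizes are illustrative assumptions. With m = 1 and this prior, the word marginal depends on θ only through the average topic θ̄, so the two parameterizations below produce approximately the same empirical word frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_documents(theta, n_docs, m, alpha=1.0):
    """Sample n_docs documents of m words each from a topic model.

    theta: (K, V) array of topic-word distributions.
    The mixing vector h is drawn from a symmetric Dirichlet(alpha) prior
    (an assumed exchangeable nu_0); each word is drawn from sum_j h_j theta_j.
    """
    K, V = theta.shape
    docs = np.empty((n_docs, m), dtype=int)
    for i in range(n_docs):
        h = rng.dirichlet(alpha * np.ones(K))
        word_dist = h @ theta          # p_{theta,h}(x) = sum_j h_j theta_j(x)
        docs[i] = rng.choice(V, size=m, p=word_dist)
    return docs

# Two different parameterizations with the same average topic theta_bar:
theta_a = np.array([[0.7, 0.1, 0.1, 0.1],
                    [0.1, 0.7, 0.1, 0.1]])
theta_b = np.array([[0.6, 0.2, 0.1, 0.1],
                    [0.2, 0.6, 0.1, 0.1]])
# With m = 1 the word marginal is E[h] @ theta = theta_bar for the symmetric prior,
# so both parameterizations induce (approximately) the same word frequencies.
for theta in (theta_a, theta_b):
    docs = sample_documents(theta, n_docs=20000, m=1)
    print(np.bincount(docs.ravel(), minlength=4) / docs.size)
```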
MLE and its convergence rate
The maximum likelihood estimator θ ML n,m is defined as a maximizer of the likelihood, where p θ,h is the likelihood function defined in Eq. (3). To analyze the convergence rates of θ ML n,m , we introduce a notion of degeneracy as follows: Definition 3 (Order of degeneracy). Let X = [V] be the vocabulary set and µ be the counting measure on X. Let X m = [V] m be the product space of X and µ m be the product measure of µ.
Note that δ k does not need to be on the simplex ∆ V −1 .
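The following sketch shows one (inefficient) way to evaluate and crudely maximize the likelihood behind the ML estimator on a toy instance: the marginal document likelihood is approximated by Monte Carlo over h drawn from an assumed symmetric Dirichlet ν 0 , and the maximization is a naive random search. None of these computational choices come from the paper, which does not prescribe an optimization algorithm; they only illustrate the objective being maximized.

```python
import numpy as np

rng = np.random.default_rng(1)

def doc_log_likelihood(theta, docs, n_mc=2000, alpha=1.0):
    """Monte Carlo approximation of sum_i log p_{theta,m}(X_i).

    p_{theta,m}(X_i) = E_{h ~ nu_0}[ prod_l (h @ theta)[x_{il}] ],
    with nu_0 taken to be a symmetric Dirichlet(alpha) (an assumed exchangeable prior).
    docs: (n_docs, m) integer array of word indices.
    """
    H = rng.dirichlet(alpha * np.ones(theta.shape[0]), size=n_mc)  # (n_mc, K)
    word_dists = H @ theta                                         # (n_mc, V)
    # Per-document probability, averaged over Monte Carlo draws of h.
    per_doc = word_dists[:, docs].prod(axis=2).mean(axis=0)        # (n_docs,)
    return float(np.log(per_doc).sum())

def random_search_mle(docs, K, V, n_trials=300):
    """Crude maximum-likelihood search over topic matrices by random sampling."""
    best_theta, best_ll = None, -np.inf
    for _ in range(n_trials):
        theta = rng.dirichlet(np.ones(V), size=K)   # candidate (K, V) topic matrix
        ll = doc_log_likelihood(theta, docs, n_mc=200)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta, best_ll
```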
We are now ready to state the main convergence theorem for the ML estimator.
under p θ,m , where in O P (·) we hide dependency on ν 0 , m and θ; 2. (Local minimax rate). Suppose p(m) < ∞. Then there exists a constant r θ > 0, depending only on ν 0 , m and θ, such that the local minimax lower bound in Eq. (7) holds. Remark 1. Our proof for the lower bound part of Theorem 1 actually proves the stronger statement that, for any θ′ ∈ Θ n (θ), there exists a constant τ > 0 such that no procedure can distinguish θ and θ′ with success probability higher than 1 − τ, as n → ∞. Note that Eq. (7) is a direct corollary of this testing lower bound by Markov's inequality.
Remark 2. The lower bound in Theorem 1 does not necessarily match the upper bound, because p* need not equal p(m) in general. However, in two important special cases p* = p(m) and hence matching bounds are proved: if {θ j } K j=1 are linearly independent and m ≥ 3, in which case p* = p(m) = 1 and an n −1/2 convergence rate is optimal; or if θ j = θ k for some j ≠ k and m ≥ 2, in which case p* = p(m) = 2 and an n −1/4 convergence rate is optimal.
To fully understand the convergence rate of topic models using Theorem 1, it is important to understand the degeneracy structure d m,p (θ) for different parameter sub-classes. In the following sections we give some concrete results on the first-order and second-order degeneracy structures d m,1 and d m,2 . Throughout we assume K, m ≥ 2 and (A1), (A2) hold unless otherwise specified.
First-order degeneracy
We first state a sufficient condition for the topic model to be identifiable on the first order.
Note that with the conclusion of Lemma 1, we have that p* = p(m) = 1 if m ≥ 3, and hence the ML estimator has an optimal n −1/2 convergence rate. This essentially recovers the convergence result of (Anandkumar et al., 2012, 2014), albeit by a different estimator (MLE instead of the method of moments).
Lemma 1, as well as the results of Anandkumar et al. (2012, 2014), requires two conditions: that {θ j } K j=1 are linearly independent, and that m ≥ 3, meaning that there are at least 3 words per document. It is an interesting question whether both conditions are necessary to ensure first-order identifiability. We give partial answers to this question in the following two lemmas.
Lemma 2 shows that a certain degree of separability on θ is necessary to ensure first-order identifiability, and Lemma 3 shows that the m ≥ 3 condition is also necessary, unless there are only two topics present. The case where {θ j } K j=1 are distinct but linearly dependent, however, remains open.
Second-order degeneracy
The following lemma shows that topic models are generally second-order identifiable, without any separability or non-degeneracy conditions imposed on θ.
Lemma 4 shows that, for any underlying parameter θ, if there are at least 2 words per document then p* ≤ 2, and hence θ is (finitely) identifiable by the ML estimator with an n −1/4 convergence rate. This conclusion holds even for the "over-complete" setting K ≥ V, under which existing works require particularly strong prior knowledge on θ (e.g., {θ j } K j=1 being i.i.d. sampled uniformly from the V-dimensional probabilistic simplex) for (computationally tractable) consistent estimation (Anandkumar et al., 2017; Ma et al., 2016).
Proofs
In this section we prove the main results of this paper. To simplify presentation, we use C > 0 to denote any constant that only depends on V, K, m, ν 0 and c 0 . We also use C θ > 0 to denote constants that further depend on θ ∈ Θ c 0 , the underlying parameter that generates the observed documents. Neither C nor C θ will depend on the number of observations n.
Before proving the main theorem and subsequent results on concrete values of d m,p , we first prove a key lemma that connects the defined degeneracy criterion with the total-variation (TV) distance between measures corresponding to neighboring parameters. The finite identifiability of {p θ,m } can then be established as a corollary of Lemmas 4 and 5.
Proof. If p(m) = ∞ then the inequality automatically holds. Suppose p(m) = p and assume by way of contradiction that p(m′) = p′ > p for some m′ > m. Invoking Lemma 5 and the data processing inequality, we know that for sufficiently small ε > 0 the supremum bound in Eq. (10) holds. On the other hand, because p(m) = p, we know that Eq. (11) holds. Eqs. (10) and (11) clearly contradict each other by considering θ′ such that ε ≤ d W (θ, θ′) ≤ 2ε and letting ε → 0+. Thus, we conclude that p(m′) ≤ p(m).
Proof of Theorem 1
We use a multi-point variant of the classical analysis of maximum likelihood (Van der Vaart, 1998, Sec. 5.8) to establish the rate of convergence for MLE, and Le Cam's method to prove corresponding (local) minimax lower bounds.
Proof of upper bound. Let θ ∈ Θ c 0 be the underlying parameter that generates the data, and Θ c 0 (θ) be the (finite) set of its equivalent parameterizations. For ε > 0, define Θ c 0 ,ε (θ) accordingly. For any θ, θ′ ∈ Θ c 0 , and X 1 , . . . , X n ∈ X m i.i.d. sampled from the underlying distribution p θ,m , define the "empirical KL-divergence" KL n (p θ,m ‖ p θ′,m ) as the sample average of the log-likelihood ratios. By definition of the ML estimator, we know that Eq. (12) holds. Furthermore, we know that the "population" version of Eq. (12) must be correct: KL(p θ,m ‖ p θ′,m ) > 0 for all θ′ ∈ Θ c 0 ,ε (θ), because KL(P‖Q) = 0 implies d TV (P; Q) = 0, and all parameters in Θ c 0 whose induced distribution has zero TV distance to p θ,m are contained in Θ c 0 (θ) and thus excluded from Θ c 0 ,ε (θ) by definition. Therefore, to prove the convergence rate of the MLE it suffices to upper bound the perturbation between the empirical and population KL-divergence and to lower bound the population divergence for all θ′ ∈ Θ c 0 ,ε (θ).
We first consider the simpler task of bounding the perturbation between KL n (p θ,m ‖ p θ′,m ) and its population version KL(p θ,m ‖ p θ′,m ). Note that KL n (p θ,m ‖ p θ′,m ) is a sample average of i.i.d. random variables. Using classical empirical process theory, we have the following lemma that bounds the uniform convergence of KL n towards KL; its complete proof is given in the appendix. Lemma 6. There exists C θ > 0 depending only on θ, c 0 , m, ν 0 such that the stated uniform deviation bound holds. As a corollary, by Markov's inequality we know that for all δ ∈ (0, 1), with probability 1 − δ the corresponding deviation bound holds uniformly. We next establish a lower bound on KL(p θ,m ‖ p θ′,m ) for all θ′ ∈ Θ c 0 ,ε (θ). Let m′ ≤ m be the integer that gives rise to p* = p(m′). By Pinsker's inequality and the data processing inequality, we have that for any θ′ ∈ Θ c 0 ,ε (θ), KL(p θ,m ‖ p θ′,m ) is lower bounded in terms of d TV (p θ,m′ ; p θ′,m′ ). Subsequently, invoking Lemma 5, we have that for all 0 < ε ≤ ε 0 the corresponding infimum lower bound holds, where ε 0 > 0 is a constant defined in Lemma 5 that only depends on K, V, ν 0 , c 0 and m′. Furthermore, because Θ c 0 ,ε 0 (θ) is a subset of Θ c 0 that does not depend on ε, and because all θ′ ∈ Θ c 0 satisfying d TV (p θ′,m ; p θ,m ) = 0 are included in Θ c 0 (θ) and thus excluded from Θ c 0 ,ε 0 (θ), we have that the infimum of KL over Θ c 0 ,ε 0 (θ) is at least γ θ , where γ θ is a constant that does not depend on ε. Subsequently, for all sufficiently small ε > 0 the required lower bound holds. Combining Eqs. (12), (13) and (14) with ε ≍ n^(−1/(2p*)) we complete the proof of the convergence rate of the ML estimator.
Proof of lower bound. Invoking Lemma 5 we have that, for all θ′ ∈ Θ n (θ), the total variation distance d TV (p θ,m ; p θ′,m ) is bounded by a multiple (depending on C θ and r θ ) of n −1/2 , where C θ > 0 is the constant in Eq. (8) and r θ is the constant in the definition of Θ n (θ), both independent of n. In addition, for all θ, θ′ ∈ Θ c 0 the following proposition upper bounds their KL-divergence using the TV distance: Proposition 1. There exists a constant C > 0 depending only on V, K, ν 0 , c 0 and m such that, for all θ, θ′ ∈ Θ c 0 , KL(p θ,m ‖ p θ′,m ) ≤ C · d TV (p θ,m ; p θ′,m ) 2 . At a higher level, Proposition 1 can be viewed as an "exact" reverse of Pinsker's inequality with matching upper and lower bounds for the KL divergence. It is not generally valid for arbitrary distributions, but holds true for our particular model with θ, θ′ ∈ Θ c 0 because both p θ,m and p θ′,m are supported on a finite set and bounded away from zero. We give the complete proof of Proposition 1 in the appendix.
Let θ′ be an arbitrary parameterization in Θ n (θ), and let p ⊗n θ,m = p θ,m × · · · × p θ,m be the n-fold product measure of p θ,m . Using Eq. (15), Proposition 1 and the fact that the KL-divergence is additive for product measures, we obtain an upper bound on KL(p ⊗n θ,m ‖ p ⊗n θ′,m ) that does not grow with n. Subsequently, using Pinsker's inequality we obtain a corresponding upper bound on the total variation distance between the two product measures.
By choosing r θ to be sufficiently small we can upper bound the right-hand side of the above inequality by 1/2. Applying Le Cam's inequality we conclude that no statistical procedure can distinguish θ from θ′ using n observations with success probability higher than 3/4. The lower bound is thus proved by Markov's inequality.
Proof of Lemma 1
This lemma is essentially a consequence of (Anandkumar et al., 2014), who developed a √n-consistent estimator for linearly independent topics via the method of moments. More specifically, the main result of (Anandkumar et al., 2014) can be summarized by the following theorem: suppose θ ∈ Θ σ 0 ,c 0 , the set of parameters satisfying (A1) for which min ‖w‖ 2 =1 ‖ ∑ K j=1 w j θ j ‖ 2 ≥ σ 0 , where the left-hand side is the least singular value of the topic vectors and σ 0 > 0 is a positive constant. Then there exists a (computationally tractable) estimator θ n such that for all θ ∈ Θ σ 0 ,c 0 the stated n −1/2 error bound holds, where C σ 0 > 0 is a constant that only depends on V, K, ν 0 and σ 0 .
We remark that the original paper of (Anandkumar et al., 2014) only considered the case where ν 0 is the Dirichlet distribution. However, our assumption (A2) is sufficient for the success of their proposed algorithms and analysis.
Proof of Lemma 4
For any ℓ ∈ [V], the last identity in the corresponding expansion holds because ν 0 is exchangeable.
Proof of Proposition 1. We prove a more general statement: if P and Q are distributions uniformly lower bounded by a constant c > 0 on a finite domain D, then there exists a constant C > 0 depending only on c such that KL(P‖Q) ≤ C · d 2 TV (P; Q). This implies Proposition 1 because, for any θ ∈ Θ c 0 , p θ,m is uniformly lower bounded by c m 0 on X m . Let µ be the counting measure on D. Using the definition of the KL divergence and a second-order Taylor expansion of the logarithm, we obtain an upper bound on KL(P‖Q) of order ∫ D (P − Q) 2 dµ. On the other hand, d TV (P; Q) = ∫ D |P − Q| dµ ≥ ∫ D (P − Q) 2 dµ. Therefore, KL(P‖Q) ≤ (1/2c 2 + 1/c) · d 2 TV (P; Q).
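The inequality proved here is easy to sanity-check numerically; the randomized test below (our own, using the halved TV convention from the notations section and an arbitrary lower bound c) checks KL(P‖Q) ≤ (1/(2c²) + 1/c) · d_TV²(P; Q) on sampled pairs of distributions bounded below by c on a finite domain.

```python
import numpy as np

rng = np.random.default_rng(2)

def tv(p, q):
    return 0.5 * np.abs(p - q).sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

D, c = 6, 0.05                       # finite domain size and uniform lower bound
C = 1.0 / (2 * c ** 2) + 1.0 / c
for _ in range(10000):
    # Sample two distributions whose entries are all at least c and sum to 1.
    p = c + rng.dirichlet(np.ones(D)) * (1 - D * c)
    q = c + rng.dirichlet(np.ones(D)) * (1 - D * c)
    assert kl(p, q) <= C * tv(p, q) ** 2 + 1e-12
print("bound held on all sampled pairs")
```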
"Mathematics",
"Computer Science"
] |
The Fisher Effect in an Emerging Economy: The Case of India
The objective of this study is to test the relationship between short-term nominal interest rate and inflation in the context of the Indian financial market. To achieve this objective we perform Augmented Dickey-Fuller unit root test to check for stationarity and thereafter we test for co-integration using the Engle-Granger method and further corroborate the findings of this test with the Johansen-Juselius method. Lastly, we perform the Granger causality test. Monthly data of inflation and nominal short term interest rates for the period from April 1996 to August 2004 were used. We find that expected inflation and nominal short-term interest rates are co-integrated in the Indian context. Thus, the present study doesn’t reject the Fisher effect in the Indian financial market. This test shows that expected inflation is Granger caused by nominal short term interest rates. These findings are important in the context of financial market policies in emerging economies like India.
Introduction
The objective of this study is to test the relationship between the nominal short-term interest rate (NSTIR) and inflation in the context of the Indian financial market. To achieve this objective we perform the Augmented Dickey-Fuller (ADF) and the Phillips-Perron (PP) tests on these two time series to diagnose their stationarity. Where a unit root is found the series is first differenced. Thereafter we test for co-integration using the Engle-Granger method and further corroborate the findings of this test with the Johansen-Juselius method. We find that inflation and nominal short-term interest rates are co-integrated in the Indian context. Thus, the present study doesn't reject the Fisher effect in the Indian financial market. Lastly, we perform the Granger causality test. This test shows that inflation is Granger caused by NSTIR.
The motivation for the study comes from two perspectives. Firstly, India is an emerging economy and the findings of the study would help policy makers to take suitable policy initiatives in that economy. Secondly, all empirical studies concerning the Fisher hypothesis have primarily focused on US and European economies. No published study to our knowledge exists that has examined the relationship in the context of an emerging economy like India. The paper is organized as follows: the next section is a snapshot of the Indian economy, section 3 reviews literature on the Fisher hypothesis, section 4 is about data and methods, section 5 presents the findings of the study and section 6 concludes.
A snapshot of the Indian economy
The Indian economy achieved a growth rate of 8.2 per cent in the financial year 2004 and is considered to be the fastest growing free-market democracy in the world. It is one of the world's largest food producers, producing 600 million tonnes of food grains every year and holding a buffer stock of nearly 50 million tonnes of food grains (wheat and rice) in 2003-2004. It is also the second largest exporter of rice and the fifth largest exporter of wheat in the world; its agricultural exports account for nearly 14.2 percent of its total exports. The Indian services sector is growing consistently at a rate of 7 percent per annum and accounted for almost half of the country's GDP in the 2004 financial year. India's foreign exchange reserves stood at a record high of $120.78 billion in July 2004.
The financial markets and financial institutions in India are quite well developed. The financial institutions in India comprise deposit taking institutions like commercial banks and cooperative banks, long-term financial institutions like the Industrial Development Bank of India (IDBI), savings institutions like the Unit Trust of India (UTI), life and general insurance companies, superannuation funds and non-bank financing companies. Commercial banks dominate the financial sector in India. The assets of Indian commercial banks formed 64% of the total assets of the financial institutions in India. As at the end of June 2003, there were 295 commercial banks comprising 27 public sector banks, 32 private sector banks, 40 foreign banks and 196 regional rural banks. The cooperative banking sector consisted of 52 urban cooperative banks and 16 state cooperative banks (RBI-RTPB, 2003). The following financial indicators of Indian commercial banks may be of interest.
Even prior to India's political independence in August 1947, she had a well-developed stock market. India's major financial markets could be grouped under five broad categories: the money market, foreign exchange market, debt market (government securities market), equity market and the derivatives market.
Literature on the Fisher effect
Fisher's hypothesis is regarded as one of the most important hypotheses in macroeconomics. Fisher (1930) postulated that the nominal interest rate consists of an expected 'real' rate plus an expected inflation rate. He claimed a one-to-one relationship between inflation and interest rates. Real interest rates, he claimed, were unrelated to the expected rate of inflation and were determined entirely by the real factors in an economy, such as the productivity of capital and investor time preference.
It has implications in the context of the real purchasing power of money, asset valuation and capital market efficiency, and is important. Many studies in the United States and Europe have tested the Fisher hypothesis over the years. These studies have yielded mixed results. The studies of Fama (1975), Atkins (1989), Mishkin (1992) and Crowder and Hoffman (1996) found support for the Fisher hypothesis, but studies such as those by Mishkin (1981, 1984), Barthold and Dougan (1996) and Rose (1988) have shown contradictory results. Some other studies, like those of MacDonald and Murphy (1989), Wallace and Warner (1993) and Engsted (1996), found that findings varied with time periods and across countries. As already noted, these studies were in the context of the US and Europe. Almost all the studies have examined the relationship between NSTIR and inflation, and no strong evidence of the existence of the Fisher effect was noticed in these studies. In the context of emerging economies like India, Thomas Paul (1984) examined the Fisher effect. It has been more than two decades now and important changes have taken place in the Indian economy. When the Thomas Paul study was conducted, the economy was very much repressed. However, since 1991, the government has followed a policy of market liberalisation. The economic situation in India has changed dramatically. Many regulations have been removed and the economy is on a high growth rate path. Consequently, there is a need to examine the Fisher effect in the changed economic conditions. However, we have not come across any study that has done this in recent years, that is, after the Thomas Paul study. This study thus bridges a major gap in the literature by examining the Fisher hypothesis in an emerging economy and could help guide further research in this area.
Data and methods
The data required for the study was collected from the Handbook on Indian Statistics published by the Reserve Bank of India, which is available at their website. Monthly consumer price index values and monthly yield rates on Treasury bills of 90 days were used. The period covered was from April 1996 to August 2004 (101 months), as the data for these years is available at the Reserve Bank of India website. Monthly inflation rates were calculated as the first difference of the natural logarithm of the consumer price index. Almost all the previous studies on the Fisher hypothesis have examined the relationship between NSTIR and inflation. The question that this study addresses is similar to that of Engsted (1996): whether or not nominal short-term interest rates reflect expected inflation. The procedure that we follow to investigate this phenomenon is as follows. Firstly, we examine whether the two series under investigation are stationary. We do this by applying the ADF unit root test. Secondly, we examine if the first difference of these series is stationary and in that case perform the Engle-Granger test of co-integration. Thereafter we investigate the relationship between NSTIR and inflation by ordinary least squares regression. Such regressions carry 'the possibility of obtaining spurious or dubious results in the sense that superficially the results look good but on further probing they look suspect' (Gujarati, 1995, p. 724). This situation has also been described by Granger and Newbold (1974) and by Phillips (1986). Thirdly, we perform the Johansen-Juselius (1990) procedure to further confirm the results of the above Engle-Granger test of co-integration. Finally, we run the Granger causality test together with the Error Correction Model to examine whether the two series display any causal relationship. The present study is different from the Engsted (1996) study as it uses monthly data and represents a longer time series than does Engsted's study, which used quarterly data.
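A sketch of the stationarity step in Python with statsmodels is shown below; the file name and column names are placeholders (the paper only describes the RBI Handbook as the data source), and choices such as automatic lag selection by AIC are illustrative rather than the paper's exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical monthly data file; in the paper the series come from the RBI Handbook.
df = pd.read_csv("rbi_monthly.csv", parse_dates=["month"], index_col="month")
inflation = np.log(df["cpi"]).diff().dropna()      # monthly inflation from the CPI
tbill = df["tbill_90d_yield"].dropna()             # 90-day Treasury bill yield (NSTIR)

def adf_report(series, name):
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    print(f"{name}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")

# Levels first, then first differences: an I(1) series becomes stationary after differencing.
for name, series in [("inflation", inflation), ("NSTIR", tbill)]:
    adf_report(series, f"{name} (level)")
    adf_report(series.diff().dropna(), f"{name} (first difference)")
```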
Unit root test
Table 2 reports the results of the ADF unit root test. The results reveal that the null hypothesis of a unit root can't be rejected in levels for the inflation rate series and the short-term interest rate series. However, the results of the ADF test for both series at first difference show that the series are now stationary. Thus NSTIR and the inflation rate are both I(1) processes.
Co-integration Tests: The Engle-Granger Method
Given that both the processes are of the same order of integration, one can now proceed to test for co-integration. We estimate the long-term relationship in linear form by the ordinary least squares method and present the results in Table 3 below.
The model is not a good fit when the inflation series is regressed on NSTIR in levels. The values of R² and adjusted R² are insignificant. The coefficients are not significant either. We conclude that the standard regression interpretation of the coefficients is not valid. This leads us to the Engle-Granger test of the residuals from this regression. The ADF and the PP unit root tests were applied to the residuals. The results from these tests are presented in Table 4 below and suggest that the residuals are strongly stationary and the series are co-integrated.
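The Engle-Granger step can be reproduced in the same hypothetical setting as the previous sketch: regress inflation on NSTIR in levels and test the residuals for a unit root. Note that statsmodels' coint() bundles both steps with the appropriate Engle-Granger critical values, whereas a plain ADF test on the residuals (as reported in Table 4) uses standard Dickey-Fuller critical values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

# Same hypothetical data file as in the previous sketch.
df = pd.read_csv("rbi_monthly.csv", parse_dates=["month"], index_col="month")
inflation = np.log(df["cpi"]).diff().dropna()
tbill = df["tbill_90d_yield"].reindex(inflation.index)

# Step 1: long-run (cointegrating) regression of inflation on NSTIR in levels.
ols = sm.OLS(inflation, sm.add_constant(tbill)).fit()

# Step 2: unit-root test on the residuals; stationary residuals indicate cointegration.
adf_stat, adf_p, *_ = adfuller(ols.resid, autolag="AIC")
print(f"ADF on residuals: stat = {adf_stat:.3f}, p = {adf_p:.3f}")

# statsmodels also bundles both steps (with Engle-Granger critical values) in coint():
eg_stat, eg_p, _ = coint(inflation, tbill)
print(f"Engle-Granger test: stat = {eg_stat:.3f}, p = {eg_p:.3f}")
```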
Co-integration Tests: The Johansen-Juselius Method
For bi-variate time series, the Engle-Granger co-integration method described above should be adequate. However, to further corroborate the above results we apply a more general technique developed by Johansen (1988, 1991) and by Johansen and Juselius (1990). They proposed a maximum likelihood estimation procedure, which allows researchers to estimate simultaneously a system involving two or more variables.
To test the hypothesis of no co-integrating relations (r = 0) against the general alternative of r > 0, the trace test statistic has a calculated value of 30.044. The 10% critical value is 28.4 and so the null is rejected. To test the null that r = 0 against the alternative that r = 1, the maximum eigenvalue test statistic is reported as 22.506; again this is more than the 10% critical value of 19.0. The general conclusion is that there is evidence to support a co-integrating relation in the data series. These findings are similar to those of the Thomas Paul (1984) study.
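Trace and maximum-eigenvalue statistics of the kind quoted above can be computed with statsmodels' coint_johansen; the deterministic-term and lag choices below are illustrative assumptions rather than the paper's reported specification, and the data file remains the hypothetical one used earlier.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

df = pd.read_csv("rbi_monthly.csv", parse_dates=["month"], index_col="month")
inflation = np.log(df["cpi"]).diff().dropna()
tbill = df["tbill_90d_yield"].reindex(inflation.index)

data = np.column_stack([inflation.values, tbill.values])
# det_order=0: constant term; k_ar_diff=1: one lagged difference (a modelling choice).
result = coint_johansen(data, det_order=0, k_ar_diff=1)

print("trace statistics:", result.lr1)                  # H0: r = 0, r <= 1, ...
print("trace critical values (90/95/99%):", result.cvt)
print("max-eigenvalue statistics:", result.lr2)
print("max-eigenvalue critical values:", result.cvm)
```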
Granger causality
The fact that the two series are co-integrated doesn't mean that one causes the other. Whether NSTIR causes expected inflation needs to be checked. We deploy the test of Granger causality to check this. We perform two tests. In test 1 our null hypothesis is that NSTIR doesn't Granger cause expected inflation. For test 2 our null hypothesis is that expected inflation doesn't Granger cause NSTIR. We include the relevant error correction term. The results are as shown in Table 5.
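A minimal way to run the two Granger tests is sketched below. It uses plain VAR-based Granger causality tests and does not include the error correction term that the paper adds; the lag length is an arbitrary choice and the data file is the hypothetical one used in the earlier sketches.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("rbi_monthly.csv", parse_dates=["month"], index_col="month")
inflation = np.log(df["cpi"]).diff().dropna()
tbill = df["tbill_90d_yield"].reindex(inflation.index)
pair = np.column_stack([inflation.values, tbill.values])

# Test 1: does NSTIR (second column) Granger-cause inflation (first column)?
grangercausalitytests(pair, maxlag=4)

# Test 2: swap the columns to test whether inflation Granger-causes NSTIR.
grangercausalitytests(pair[:, ::-1], maxlag=4)
```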
In summary, Granger causality test results show that short-term nominal interest rates help in predicting future inflation.
Conclusion
This study examined the relevance of the Fisher effect in the context of an emerging economy like India. The data required for the study was available from the Handbook on Statistics of the Reserve Bank of India. The period covered was from April 1996 to August 2004 (101 monthly observations). The ADF unit root test showed that the series of expected inflation and nominal short-term interest rates are not stationary in levels but are both I(1) processes. Thereafter the co-integration test (Engle-Granger method) was used, which showed that the series are co-integrated. This finding was confirmed by the Johansen-Juselius method. Finally we used the Granger causality test with an error correction model to determine the direction of the relationship. The results showed that short-term nominal interest rates do help in predicting future inflation in the Indian context.
Table 1 .
Financial indicators of banks in India, as of June 2003 (@ as of September 2003). Sources: Reserve Bank of India (RBI), 2003, Report on Trend and Progress of Banking in India (RTPB), Reserve Bank of India, Mumbai; Indian Banks' Association, 2003, Performance Highlights of Banks: Public Sector Banks, Indian Banks' Association, Mumbai.
Table 4 .
Results of ADF and PP Tests on the Residuals from Long-run Regression
Table 5 .
Granger causality test with error correction model: Vector Error Correction estimates | 2,864.4 | 2009-02-09T00:00:00.000 | [
"Economics"
] |
Preface
Foreword The 23rd edition of Innovative Manufacturing Engineering & Energy International Conference (IManEE 2019) is the result of a collaboration between the Department of Manufacturing and Industrial Management of the University of Piteşti (Romania) and the Department of Manufacturing Engineering of the "Gheorghe Asachi" Technical University of Iaşi (Romania). The first conference was organized in May 1996 at Iaşi and, since 1999, the conferences have been organized alternately, in Iaşi and Chişinău, through the collaboration of the Machine Manufacturing Departments of the "Gheorghe Asachi" Technical University of Iaşi (Romania) and of the Technical University of Moldova from Chisinau (Republic of Moldova). Considering the new trends and requirements in the field of manufacturing engineering worldwide, starting from 2013 the name of the conference was changed to "Innovative Manufacturing Engineering" and later to "Innovative Manufacturing Engineering and Energy". Over the last few years, other departments from universities in Romania (Braşov, Bucureşti, Cluj, Galaţi and Piteşti) or other countries (Serbia, Greece) have been involved in organizing the conference. The list of the Honorary Committee, Scientific Committee, Organizing Committee, Keynote Speakers, Conference Chair, Section Chairs, Sponsors and Partners is available in this PDF.
Foreword
The 23 rd edition of Innovative Manufacturing Engineering & Energy International Conference (IManEE 2019) is the result of a collaboration between the Department of Manufacturing and Industrial Management of the University of Piteşti (Romania) and the Department of Manufacturing Engineering of the "Gheorghe Asachi" Technical University of Iaşi (Romania).
The first conference was organized in May 1996 at Iaşi and, since 1999, the conferences have been organized alternately, in Iaşi and Chişinău, through the collaboration of the Machine Manufacturing Departments of the "Gheorghe Asachi" Technical University of Iaşi (Romania) and of the Technical University of Moldova from Chisinau (Republic of Moldova). Considering the new trends and requirements in the field of manufacturing engineering worldwide, starting from 2013 the name of the conference was changed to "Innovative Manufacturing Engineering" and later to "Innovative Manufacturing Engineering and Energy". Over the last few years, other departments from universities in Romania (Braşov, Bucureşti, Cluj, Galaţi and Piteşti) or other countries (Serbia, Greece) have been involved in organizing the conference.
This is the first time the conference is hosted by the University of Piteşti, and the fifth time that the Manufacturing and Industrial Management Department has been involved in organizing this conference. The event takes place in the year when the Faculty of Mechanics and Technology of the University of Piteşti celebrates 50 years since its foundation, representing a recognition of its prestige. The prestige enjoyed by the Faculty of Mechanics and Technology is the result of the teaching and scientific research activities carried out by competent teachers who, together with the graduates, have managed to make known at national and international level the existence of a technical faculty in Piteşti, closely related to the automotive industry.
The process of submitting and reviewing the conference papers was managed using the EasyChair platform (https://easychair.org). The conference benefited from the support of 86 reviewers from 10 countries, each paper being reviewed by two or three reviewers. Finally, 140 papers were selected by the Scientific Committee from the over 210 papers proposed for presentation at the IManEE 2019 International Conference. The authors of the accepted papers come from universities and research institutes from 14 countries, namely Algeria, Bulgaria, France, Germany, Greece, Iraq, Israel, Moldova, Norway, Poland, Romania, Russia, Slovakia and Ukraine.
The IManEE 2019 International Conference is structured in ten sections: Advanced Machining and Surface Engineering; Assembling Technologies, Forming Technologies, Additive Manufacturing; Non-Conventional Technologies in Manufacturing, Welding Technologies; Advanced Materials; Design and Analysis, CAD/CAM/CAE/CAx Technologies; Flexible Manufacturing, Automation and Robotics in Technological Processes; Mechanical and Manufacturing Equipment, Devices and Instrumentation; Innovation, Creativity, Learning and Education in Engineering; Industrial and Product Management, Quality and Evaluation; Automotive & Transport Engineering; Smart Grids, Energy Efficiency in Buildings, Energy systems and energy management, Energy policies, Environmental technologies and studies, Renewable energy technologies and energy saving materials. These sections cover most of the manufacturing engineering domains, but also some areas specific to automobile and transport engineering and energy.
We hope the IManEE 2019 Conference will be a good opportunity to exchange information of mutual interest in the above-mentioned fields and to develop new collaborations among participants that will lead to joint research projects for the benefit of all.
The organizing committee wishes all the participants success and expresses its conviction that we will have the opportunity to meet again, at the University of Piteşti or at other future IManE&E conference editions.
"Materials Science"
] |
Vulnerability of Slums to Livelihood Security: A Case Study of 3 JJ Clusters, Delhi
Vulnerability has been defined as the characteristics of a person or a group of persons, i.e. in terms of their capacity to cope with, anticipate, resist and recover from the impacts of natural or man-made hazards or any external event. Vulnerability is also defined as the inability to withstand the effects of a hostile environment. Hostile environment refers to livelihood security in this research. The concept of vulnerability is described within five categories of livelihood security, which are economic, social, education, food and health. The parameters for assessing the vulnerability of slums at different locations fall within the five categories of livelihood security, that is, economic security, social security, education security, food security and health security.
Introduction
Livelihood is a means of making a living. It incorporates people's abilities, assets, income and actions required to secure the requirements of life. A livelihood is sustainable when it enables people to cope with and recover from shocks and stresses (such as natural disasters and economic or social disturbances) and improve their well-being and that of upcoming generations without undermining the natural environment or resource base. Livelihood becomes vulnerable when it fails to cope with or recover from such stresses and shocks. Vulnerability of a slum can be assessed on countless scales like the location of the slums and status of housing, availability and accessibility of basic services like water supply, drainage and toilets, nature of occupation/employment, social, physical and economic accessibility to health services, status of gender, education, social capital and the existence of development organizations and activities.
Research questions and Methodology
To assess the level of vulnerability to livelihood security for different slum locations, certain research questions need to be answered: 1. What are the parameters used to assess the vulnerability to livelihood security for slums? 2. Does the location of slums have any effect on the livelihood security of slums? 3. Which location of slums is highly vulnerable to livelihood security? 4. Which parameter of livelihood security is most important, i.e. affects vulnerability the most? The research began with an extensive literature review on concepts related to vulnerability and livelihood security. The research gap identified was helpful in framing the aim of the study, which is to assess the level of vulnerability to livelihood security for the chosen slum clusters. Parameters were developed for assessing the vulnerability to livelihood security, and a vulnerability index was calculated with respect to the various kinds of livelihood security, viz. food, education, economic, health and social security, and also with respect to the three chosen locations.
The main tools used for analysis are ArcGIS 10.1, MS Excel and SPSS. The outcome of the research is a vulnerability index and a livelihood security matrix drawn with reference to the different slum locations.
Study area: Delhi
The case area taken for this research is the capital city of India, i.e. New Delhi (Figure 1). It is bordered by Haryana on three sides and by Uttar Pradesh to the east. The specific case areas for this research are JJ clusters in New Delhi. Delhi, the capital of India, is home to about 2 million persons living in slums, and it is assessed that 45% of its population lives in unofficial colonies, Jhuggi Jhompri (JJ) clusters and urban villages. The Gazipur slum is connected by road to NH 24. The slum has rail connectivity to Anand Vihar railway station, Anand Vihar metro station and the Anand Vihar old railway station. The detailed land use of the slum is depicted in figure 3. Okhla slum is located in the South Delhi district, at the Delhi border. It is well connected by road and rail and is not far from the airport, as it is located in the centre of the National Capital Territory. Okhla is also connected to the Delhi Metro. The land use of Okhla slum is depicted in figure 4. Zhakira slum is located in the North West district of Delhi. The JJ cluster is well connected through road and public transport. Further, two Delhi Metro stations, namely Keshav Puram and Kanhiya Nagar, located in its proximity enhance its connectivity. The land use of Zhakira slum is depicted in figure 5.
Data
Primary data was collected through a household questionnaire survey, a key informant survey and also through an observation survey. The main themes included in the questionnaire are the different components of livelihood security, that is, economic security, health security, education security, food security and social security. The data was collected at 2 levels. The data on the health, education, economic, food and social scenario of the slum dwellers was collected at household level, and data on age, sex, education, occupation, income, education level, mode of transport and transportation cost was collected at individual level. The secondary data was collected from various government departments, such as the Delhi slum map that shows the location of different slum clusters, the list of notified slums and household sizes of JJ clusters with their locations, and the Slum Free City Plan of Action (SFCPoA) draft report from the Delhi Urban Shelter Improvement Board (DUSIB). Households were selected randomly from the 3 slums. The sample size was determined using the Cochran formula. Once the index values of the five major components for a slum were calculated, they were averaged to obtain the slum-level Vulnerability Index, which equals the weighted average of the five major components for slum s. The weight of each sub-component has been assigned based on expert opinion. A low vulnerability index ranges from 0.01 to 0.35, a medium vulnerability index from 0.35 to 0.70 and a high vulnerability index from 0.70 to 1.00.

Results and Discussion

Vulnerability based on Securities

Economic Security: Based on the economic security index, i.e. vulnerability in terms of economic security (figure 6), it can be inferred that the major parameter that increases the level of vulnerability is average family savings, which is directly linked to household income, thereby leading to vulnerability with regard to structure of house, occupation and access to workplace. The sub-components such as unnecessary expenditure and female-headed households are not vulnerable and are secure in the case of economic security. The major issue identified is that 25% of the total population does not perform any activity as a source of occupation. This population majorly includes old aged people, housewives, widows and a few young members. As a result there are no family savings in these households. Even for the families who do have some savings, the amount is less than Rs. 2000 per month. The majority of the income earned by the households is spent on their source of living, i.e. food.

Figure 6: Vulnerability to Economic Security based on sub-components

Health Security: Based on the vulnerability index for health security, it can be inferred that the major parameters that increase the level of vulnerability in terms of health security are availability of individual or community toilets, dependency on open areas for toilets, lack of availability of water supply pipelines, no provision for drainage and a lack of immunization (figure 7). The other sub-components, including electricity, morbidity and access to health care facilities, have a low level of vulnerability. Although a high rate of illness was observed in the slum areas, no proper immunization service has been observed there, which is an important factor that increases the level of vulnerability to health security. Lack of proper water supply, i.e.
in terms of quality and quantity, dependency on open ground for toilets, and open or absent drains in the study area are among the reasons that lead to a high level of illness among the slum households, which thereby leads to an increase in the level of vulnerability in terms of health security.
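For readers who want to reproduce the index construction described above, the sketch below implements Cochran's sample-size formula and the weighted-average slum-level Vulnerability Index; the component scores, weights and Cochran parameters shown are hypothetical placeholders, not the study's actual values.

```python
import math

def cochran_sample_size(population, z=1.96, p=0.5, e=0.05):
    """Cochran's formula with finite-population correction (parameter values are illustrative)."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def vulnerability_index(component_scores, weights):
    """Weighted average of (normalized, 0-1) major-component scores for one slum."""
    total_w = sum(weights.values())
    return sum(weights[c] * component_scores[c] for c in component_scores) / total_w

# Hypothetical component scores and expert-opinion weights for one slum (not the paper's data).
scores = {"economic": 0.62, "health": 0.71, "education": 0.28, "food": 0.22, "social": 0.55}
weights = {"economic": 1.0, "health": 1.2, "education": 0.8, "food": 0.8, "social": 1.0}

print("sample size for 1200 households:", cochran_sample_size(1200))
lvi = vulnerability_index(scores, weights)
band = "low" if lvi <= 0.35 else "medium" if lvi <= 0.70 else "high"
print(f"Vulnerability Index = {lvi:.2f} ({band})")
```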
Education Security:
In terms of education security, there is proper provision of education facilities, with a low level of vulnerability in terms of access to education facilities (figure 8). The major issue is illiteracy among adults. The children in the study area are found to be less vulnerable as compared to adults when education security is considered.

Social Security: After health security, social security is the component that affects livelihood security the most. The major sub-components that drive the level of vulnerability in terms of social security are participation by CBOs/NGOs, access to CBOs/NGOs, involvement in the planning process and social assistance (figure 10). Further, other sub-components, including girl children in their teenage years, household size and the presence of widows in households, have been observed to have a low level of vulnerability when social security is considered. The issues are that there is no NGO/CBO participation for the betterment of the slums. Further, social assistance is an issue, as slum dwellers have to depend on third-party loan providers, i.e. at an interest rate of 5-10%, which increases the vulnerability of slum households in terms of social security.

Vulnerability based on Location

Education Security: In the case of education security, Zhakira is highly vulnerable, i.e. because of a greater number of illiterates as compared to the other locations (figure 13). Zhakira is followed by Okhla, with Gazipur having the least level of vulnerability in terms of education security.
Livelihood Security, location and Vulnerability
The radar diagram (figure 18) depicts the different components of livelihood security and how they contribute to an increase or decrease in the level of vulnerability in terms of livelihood security. Gazipur is the most affected in terms of health security, which is because of its location along the drain. Further, economic security is also highly vulnerable in Gazipur. The least level of vulnerability has been observed in food security, i.e. in Zhakira. The reason for this is the availability and accessibility of food, due to less expenditure on food and the households having savings. Considering all the components of livelihood security, food and education security are the least vulnerable, while economic, health and social security are highly vulnerable.

Figure 18: Livelihood Security, location and Vulnerability
Major Issues
The level of vulnerability in terms of social security is highest in Okhla. The reason for this is greater dependency on third-party loan providers for social assistance to the slum dwellers. Gazipur is the location that is most vulnerable in terms of livelihood security, which is because of its location. The major issues identified with respect to health security are lack of availability of water supply in terms of quality and quantity, lack of availability of toilets, dependency on open land for toilets, no provision of a drain network for the slum dwellers (open or no drains) and no immunization service in the areas, which make them vulnerable to health security. In terms of education security, the major issue is the low literacy rate of adults, both males and females. A high rate of expenditure on food is the major problem that makes food security vulnerable. Lack of CBO/NGO participation, no involvement of slum dwellers in the planning process and dependency of slum dwellers on money lenders for social assistance are some of the factors deepening the vulnerability of the social security of slum dwellers. Economic security is vulnerable due to nil or very little family savings, accounting for less than Rs. 2000 per month, and the informal employment of slum dwellers as street vendors or construction workers.
CONCLUSION
As a result of this research, it was identified that health security affects livelihood security the most, followed by social and economic security. The study across 3 different slum locations helps us identify that the slum cluster located along an environmentally sensitive zone, i.e. along a naalah/drain, is highly vulnerable to livelihood security when compared with the other slum locations. Gazipur slum, i.e. the case area located along the drain, was the most vulnerable to livelihood security among the 3 slum locations. It is necessary to propose facilities in terms of infrastructure, social and economic provision etc. for the slum dwellers so as to make their livelihood secure. Education and food security were found to be the least vulnerable in comparison to health, social and economic vulnerability. All the three slums need intervention in terms of provision of infrastructure. Since Gazipur is located near an environmentally sensitive zone and is also observed to be highly vulnerable, it needs to be relocated. Considering Zhakira, where the housing structure has been observed to be pucca, slum infrastructure upgradation can be an option. Similarly, in the case of Okhla, housing upgradation, which includes upgradation of kutcha/semi-pucca structures into pucca ones and thereby provision of basic infrastructure components including toilets, water supply, drainage etc., can be thought of as one of the solutions to lessen the overall vulnerability of the slum. Also, social assistance programs need to be initiated for the elderly population and widows, along with provision of food at low cost.
This research will be helpful for further researchers, municipalities, development authorities etc. and will work as a tool/index to assess the level of vulnerability to livelihood security of slums. As this research was limited to 3 JJ clusters, future research can be done for city- or state-level vulnerability assessment. Future researchers can also amend the index by adding or deleting sub-components based on the study area and the issues of the beneficiaries.
"Computer Science"
] |
Non-directional radial intercalation dominates deep cell behavior during zebrafish epiboly
Summary Epiboly is the first coordinated cell movement in most vertebrates and marks the onset of gastrulation. During zebrafish epiboly, enveloping layer (EVL) and deep cells spread over the vegetal yolk mass with a concomitant thinning of the deep cell layer. A prevailing model suggests that deep cell radial intercalations directed towards the EVL would drive deep cell epiboly. To test this model, we have globally recorded 3D cell trajectories for zebrafish blastomeres between sphere and 50% epiboly stages, and developed an image analysis framework to determine intercalation events, intercalation directionality, and migration speed for cells at specific positions within the embryo. This framework uses Voronoi diagrams to compute cell-to-cell contact areas, defines a feature-based spatio-temporal model for intercalation events and fits an anatomical coordinate system to the recorded datasets. We further investigate whether epiboly defects in MZspg mutant embryos devoid of Pou5f1/Oct4 may be caused by changes in intercalation behavior. In wild-type and mutant embryos, intercalations orthogonal to the EVL occur with no directional bias towards or away from the EVL, suggesting that there are no directional cues that would direct intercalations towards the EVL. Further, we find that intercalation direction is independent of the previous intercalation history of individual deep cells, arguing against cues that would program specific intrinsic directed migration behaviors. Our data support a dynamic model in which deep cells during epiboly migrate into space opening between the EVL and the yolk syncytial layer. Genetic programs determining cell motility may control deep cell dynamic behavior and epiboly progress.
Introduction
Vertebrate gastrulation combines three principal coordinated cell movements to establish the three germ layers and extend the embryonic axes (Keller, 2005; Leptin, 2005; Solnica-Krezel, 2005). Epiboly represents the spreading of embryonic cells over a vegetal yolk mass, resulting in ectoderm covering the embryo. Emboly (ingression, involution, invagination) relocates the mesendodermal anlagen into the inner embryo. Convergence of vegetal and lateral cells to the dorsal axis and extension establish the long axis of the embryo. Epiboly, emboly and convergence each pose major challenges to the understanding of large-scale coordinated cell movements (Keller et al., 2008). Here, we investigate how blastoderm cells behave dynamically to achieve epiboly spreading over the yolk cell in the zebrafish embryo.
The zebrafish blastoderm is composed of the enveloping layer (EVL) and the deep cell layer (DCL). Beneath the blastoderm is the yolk syncytial layer (YSL), which contains yolk syncytial nuclei (YSN), and is continuous vegetalwards with the yolk cytoplasmic layer (YCL) (Warga and Kimmel, 1990; Solnica-Krezel and Driever, 1994; Kimmel et al., 1995). Epiboly of the zebrafish embryo is initiated by animalward doming of the yolk cell and vegetalward movement of the YSL and the vegetal EVL margin (Fig. 1A). Vegetalward spreading of the DCL results in thinning of the blastoderm layer at the animal pole and covering of the yolk mass. Several mechanisms have been shown to contribute to epiboly. In the yolk cell, long vegetal arrays of microtubules drive epiboly of the YSL (Strähle and Jesuthasan, 1993; Solnica-Krezel and Driever, 1994). It is unclear whether EVL cells actively spread or are passively drawn vegetally, although lamellipodia formation of EVL cells supports active migration behavior (Lachnit et al., 2008). DCL radial intercalation is regarded as the main driving force for deep cell epiboly by promoting thinning of the blastoderm layer (Keller, 1980; Warga and Kimmel, 1990). The DCL is comprised of six to eight levels of blastomeres during early gastrulation and becomes two to three levels thin by the end of gastrulation. As deep cells do not form strict cell layers but rearrange dynamically during migration (Fig. 1C,D; supplementary material Movies 1, 2), we use the term (depth) levels as opposed to cell layers inside the DCL to denote virtual layers of average cell diameter width.

A prevailing model suggests that differential adhesiveness due to differential radial expression of the adherens junction molecule E-cadherin (E-cad) between interior and exterior levels of epiblast cells (i.e. higher cdh1 expression in the exterior levels than in the interior levels) promotes unidirectional radial intercalation of epiblast cells from the interior to the exterior levels (Kane et al., 2005; Málaga-Trillo et al., 2009; Schepis et al., 2012). Indeed, loss-of-function phenotypes for E-cad show severe epiboly delay (Babb and Marrs, 2004; Kane et al., 2005; Shimizu et al., 2005). However, our recent study (Song et al., 2013) failed to identify a gradient of E-cad protein expression among blastomeres, and revealed that deep cell layer thinning is likely independent of E-cad expression gradients, but that E-cad mediated mechanisms controlling migration efficiency of blastomeres are crucial for deep cell epiboly.
Here, we analyze global deep cell migratory behavior in wild-type (WT) and MZspg mutant embryos. MZspg mutant embryos are deficient in the Pou5f1 (homolog of mammalian Oct4) transcription factor and develop a severe delay in epiboly, while emboly proceeds similarly to WT (Lunde et al., 2004; Reim and Brand, 2006; Lachnit et al., 2008). Quantification of radial and lateral intercalation dynamics of blastomeres reveals that radial intercalation is symmetric along the animal-vegetal axis of the embryo, which is not in line with the prevailing model of directed radial intercalation driving deep cell epiboly (Kane et al., 2005; Málaga-Trillo et al., 2009). Instead, the speed and migration efficiency of blastomeres appear to be crucial for deep cell epiboly.
Zebrafish gastrulation is initiated with symmetric radial intercalation of blastomeres
To investigate the intercalation mechanism during zebrafish early gastrulation, we analyzed the trajectories of blastoderm cell nuclei in embryos labeled with NLS-tomato (Tomato fluorescent protein with a nuclear localization signal) between sphere and 50% epiboly stage (Song et al., 2013). In this time window, epiboly leads to significant thinning of the animal cap blastoderm. Most nuclei were not continuously tracked throughout the complete time window; instead, trajectories capture nuclei motion only between two mitoses, as NLS-tomato is released into the cytoplasm during mitosis. These datasets allow extraction of the exact position of cell nuclei, but do not reveal cell boundaries. Putative individual cell regions and outer boundaries ("membranes") were estimated by image analysis to allow the study and visualization of cell intercalations (see Materials and Methods). We preferred an experimental nuclear label over a membrane label because knowledge of the exact position of the nucleus as the center of gravity of the cell facilitates analysis and classification of cell movements.
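As a rough illustration of how cell-cell neighborhoods can be estimated from nucleus positions alone, the sketch below treats Delaunay-adjacent nuclei as contacting cells; this is only a simplified stand-in for the Voronoi-based contact-area computation used in the actual framework, and the random points are placeholders for segmented nuclei.

```python
import numpy as np
from scipy.spatial import Delaunay

def contact_pairs(nuclei_xyz):
    """Estimate which cells touch, treating Delaunay-adjacent nuclei as neighbors.

    nuclei_xyz: (N, 3) array of tracked nucleus centroids for one time point.
    Returns a set of index pairs (i, j) with i < j. This is a simple proxy for
    the Voronoi-based contact areas used in the paper's analysis framework.
    """
    tri = Delaunay(nuclei_xyz)
    pairs = set()
    for simplex in tri.simplices:          # each 3D simplex is a tetrahedron of 4 nuclei
        for a in range(4):
            for b in range(a + 1, 4):
                i, j = sorted((simplex[a], simplex[b]))
                pairs.add((i, j))
    return pairs

# Example with random positions standing in for segmented nuclei.
rng = np.random.default_rng(3)
points = rng.uniform(0, 100, size=(200, 3))
print(len(contact_pairs(points)), "putative cell-cell contacts")
```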
The paradigm we used for cell intercalation analysis (see Materials and Methods) is shown in Fig. 1B. We determined upward (into a more exterior level), downward (into a more interior level), and lateralward (intra-level) intercalation events of blastomeres (Fig. 1C,D; supplementary material Movies 1-3). The workflow of the analysis is shown in Fig. 2, and the algorithms used in each step of the workflow are described in Materials and Methods. To obtain a quantitative understanding of cell behavior during epiboly, we analyzed the total number of intercalations in each WT embryo dataset (Fig. 3A). Surprisingly, the total numbers of upward and downward intercalations were in the same range, with slightly more downward intercalations. This observation does not support the prevailing model that asymmetric radial intercalation of epiblast cells, i.e. inserting predominantly from an interior level into a more exterior level, drives DCL flattening (Kane et al., 2005). We further analyzed whether the epiboly delay phenotype of MZspg embryos may correlate with different intercalation behavior. The total and relative numbers of upward and downward intercalations were significantly lower in MZspg embryos than in WT, while the ratios between the upward and downward intercalations of blastomeres were balanced both in WT and MZspg embryos (Fig. 3A-C). However, factors other than the total number of intercalations in a specific direction may affect epiboly progression, including directional bias in subsequent intercalations of individual cells, and dynamic aspects of cell movement. We further investigated both possibilities in detail.
Location of intercalations and intercalation history
We next examined the number of intercalations along depth levels. Upward and downward intercalations were mainly distributed between the first and third DCL level in both WT and MZspg embryos (Fig. 3D,E; supplementary material Fig. S1). However, in MZspg embryos, the distribution of intercalations was shifted towards deeper levels compared to WT. This may be mainly due to significantly reduced thinning of the DCL in MZspg embryos during the two-hour time window (supplementary material Movie 1). Therefore, in WT intercalations can only be detected in the first three levels at the end of the two-hour recording time, while in MZspg embryos the deep cell layer is still thicker and intercalations can be detected in all levels. Furthermore, the first DCL depth level shows the highest number of lateralward intercalations both in WT and MZspg embryos. This is reminiscent of previous reports that cells in the first DCL level are connected to the EVL by E-cad-mediated adherens junctions, suggesting that they are dragged by the EVL during epiboly (Shimizu et al., 2005). However, cells in the first DCL level frequently moved back into the deeper levels both in WT and MZspg embryos during the two-hour observations, suggesting that adherens junctions were dissociated. We next measured the number of blastomeres performing subsequent intercalation events towards the three different directions (up-, down-, and lateralward; Fig. 3F,G). During the first intercalation event WT blastomeres performed 28% upward, 41% lateralward, and 31% downward intercalations (Fig. 3F). In MZspg embryos, 25% upward, 49% lateralward, and 26% downward intercalations were measured (Fig. 3G). Following their first intercalation many blastomeres performed a second and third intercalation in any of those three directions. These data clearly indicate that most blastomeres have the ability to perform subsequent intercalations in any direction with a similar directionality distribution as in previous events. Supplementary material Fig. S2 shows the intercalation history of cells grouped by the depth level position of their first intercalation event, confirming that there is no directional bias. Similar observations were made for MZspg mutant embryos. In summary, cells have no directional intercalation bias based on previous intercalation events, which we interpret to exclude that some extrinsic signal may initiate irreversible cell-intrinsic processes that would determine directionality.
(Fig. 1 legend, panels B-D: (B) Pairwise distances (blue), enclosing angles (green) and contact areas (red). (C,D) Computational detection and classification of radial intercalations from 3D time-lapse recordings (supplementary material Movies 1, 2); embryo stages sphere to 50% epiboly. The rendering shows lateral views (animal pole at top) with raw nuclei fluorescence (grey), tracked nuclei positions (crosses) and calculated cell boundaries (cyan); arrows indicate the direction of cell migration. Upward (green), downward (red), and lateralward (blue) intercalations were detected along an 18 µm thick animal-vegetal oriented sheet transecting the embryo along its dorsoventral axis (shown as a y-projection representing 18 µm orthogonal to the z-stack). In the circled areas, a blastomere intercalates between two neighboring cells (yellow crosses) located in the adjacent more exterior level (C) or the adjacent more interior level (D); these two groups of cells were separately rendered in 3D (right). Scale bars: 100 µm.)
Quantification of intercalation directionality
To investigate differences in global radial intercalation rates, we compared average motion directionality and cell speed during intercalation events between WT and MZspg embryos. Blastomeres intercalate laterally in a rotationally symmetric distribution around the animal-vegetal axis, both in WT and MZspg embryos (Fig. 4). In contrast, radial intercalation directions are polarized in both the animal and the vegetal direction along the animal-vegetal axis in WT and MZspg embryos (Fig. 4B,C,G and Fig. 4E,F,H). The reduced number of total intercalation events in MZspg embryos may be affected by modulation of blastomere motility (Song et al., 2013), but may also be caused by a delay in the onset of epiboly.
(Fig. 3 legend, panels D-G: (D,E) Depth levels (shaded grey along the x-axis) were numbered and distance was measured starting from the EVL in the vegetal direction. To compare different depth levels, the absolute number of intercalations (summed over 6 embryos each for WT and MZspg; supplementary material Fig. S1) was normalized by the total number of cells observed for each distance. The x-axis is truncated at 4.0, where the number of measured intercalations becomes too small to provide meaningful results. (F,G) Summarized intercalation history of all individual cells (sum over six embryos for each genotype); the graph presents up to three successive intercalations of individual blastomeres, indicating upward, downward, or lateralward directions. The root node (leftmost) denotes all cells performing the first intercalation event. The absolute number and relative fraction of intercalations is given at each node. Errors are given by 95% confidence intervals assuming Poisson noise.)
Analysis of spatial and temporal patterns of intercalation
The time window of our analysis spans from sphere to 50% epiboly stages, a period during which different forces may contribute to epiboly. Doming of the yolk may affect cells in a different manner compared to the progress of epiboly between 30% and 50% epiboly. Therefore, we reanalyzed our data in three time windows covering the first 42 minutes, roughly equivalent to doming, the time from 42 to 84 minutes, equivalent to early epiboly stages, and from 84 to 126 minutes, equivalent to 30-50% epiboly. The precision in determining developmental stages is estimated to be in the range of ten minutes between different embryo recordings, which argued against analysis of even shorter time windows. We find that during doming, in WT there are significantly fewer total intercalations (Fig. 5A), but the ratio between lateral, up- and downward intercalations is not much different from the later two time windows (Fig. 5B). We also compared the total number of intercalations between WT and MZspg in each time window, and find that while there are significantly fewer intercalations in MZspg during time windows T1 and T2, the intercalation rate is similar between both genotypes in time window T3 (supplementary material Fig. S4B). The later onset of epiboly in MZspg thus contributes to the differences in intercalation behavior between the genotypes.
We also analyzed the depth distribution of radial intercalations in WT in each time window (supplementary material Fig. S4A), and found that the depth profile changes slightly from the first time window, when up, down and lateral intercalations appear at similar rates at depth levels from 1.5 to 3 cells distant from EVL, to the last time window, when the normalized number of intercalations in these three directions is higher in depth layers closer to the EVL than in deeper layers.
We further analyzed whether cells in the central portion of the blastoderm at the animal pole may behave differently from those located more towards the vegetal margin. We defined an inner sector S1 representing the central animal cells, and an outer sector S2 representing the more vegetally located and marginal cells (Fig. 5D). Given that S1 contained fewer cells than S2, we normalized the number of intercalations in each sector to the cell number. The inner sector S1 has higher up- and downward intercalation rates, with the downward intercalations nearly as strong as the lateral intercalations (Fig. 5E,F). In sector S2 the number of lateral intercalations significantly exceeds the downward intercalations. When analyzing the depth distribution of intercalation rates, inner sector S1 cells have a similar distribution of intercalation directions from layers 1.5 through 3.5, while in sector S2 the normalized number of intercalations per cell decreases in deeper layers (supplementary material Fig. S4C).
Quantification of radial intercalation dynamics
Given the importance of effective movement for intercalations, we investigated the influence of loss of Pou5f1 activity on the dynamics of cell behavior during radial intercalation by measuring the effective speed and average instantaneous speed of blastomeres undergoing intercalations during early gastrulation (Fig. 6A). Both the median effective speed and the median average instantaneous speed were significantly higher in WT embryos than in MZspg embryos (Fig. 6B,C), suggesting that Pou5f1-dependent mechanisms are important for control of the migration speed of intercalating blastomeres. Supplemental to the analysis of migration speed during intercalation, the measured absolute effective displacements and cell path lengths are given in supplementary material Fig. S5. We also determined the total number of intercalations, which was significantly higher in WT embryos than in MZspg (Fig. 6D). These data together indicate that Pou5f1 affects the total number of intercalations of blastomeres by controlling cell motility, especially the migration speed of cells.
Discussion
Gastrulation is an excellent model to study mechanisms controlling coordinated movements of large numbers of cells. However, even for the earliest gastrulation movement, epiboly, there is little understanding of the mechanisms that regulate this movement spatially and temporally throughout the embryo. Here, we used the zebrafish for a detailed analysis and description of intercalation cell behavior during the first two hours of zebrafish gastrulation, from sphere stage to doming of the yolk and epibolic spreading of cells up to 50% epiboly. We aimed to record most cell movements based on the position of their cell nuclei in one coherent data stack, which limited our analysis to about 50% epiboly stage, as we were not able to image throughout the embryo with a confocal laser scanning microscope at later stages. While other techniques have enabled whole embryo documentation (Keller et al., 2008) and analysis of surface movement of cells, such data have not been analyzed for cell behavior orthogonal to the surface, which is essential for analysis of radial cell intercalations. Our global cell intercalation study therefore focused on early epiboly stages, while previous analyses of small regions of the embryo investigated cell behavior at late epiboly stages from 70 to 90% epiboly (Kane et al., 2005).
(Fig. 5 legend, panels C-F: (C) Average motion directionality analyzed for WT embryos for each of the time windows T1 to T3; the occurrence probability for an intercalation with a certain migration direction and displacement is indicated by color, isocontours (white) denote lines of equal probability, and cross-sections of 3D directionality distributions are given for the y-z plane. (D) To analyze potential differences in intercalation behavior between an inner sector located centrally at the animal pole and an outer sector encompassing more marginal and vegetal cells, the 3D space of the image data stack was separated into an inner sector S1 (orange) and an outer sector S2 (green), visualized in lateral (left) and animal pole (right) views. (E) Number of lateralward, upward, and downward intercalations in WT for the sectors S1 and S2, normalized by the number of cells for each sector; the data are summed over 6 embryos each. (F) Relative number of upward or downward intercalations in each sector normalized to the number of lateralward intercalations.)
We used a mathematical three-point model to analyze intercalation cell behavior, which enabled us to apply image analysis algorithms to automatically detect and characterize cell intercalations throughout the 3D data volume and the two-hour time-lapse recording. The results showed that upward, downward and lateral intercalations occur throughout the deep cell layers, and surprisingly revealed similar rates of up- and downward intercalations with regard to the EVL surface, which argues against intercalations directed towards the EVL being the major force to shape spreading and thinning of deep cells during epiboly. Observing individual cells also revealed no long-term bias in intercalation directionality: following a first intercalation, cells that performed a second intercalation did not show any bias in up-, down-, or lateralward direction. Thus, cells during early epiboly do not appear to become intrinsically programmed to intercalate in a defined direction only.
We also investigated changes in cell behavior in three time windows for doming, early epiboly and 30-50% epiboly. We found that in WT embryos, the intercalation directionality is not very prominent during doming, but a clear animal-vegetal directional bias is established during early epiboly, with similar upward versus downward intercalation distribution along this axis. We also found that the depth distribution of intercalations changes as epiboly progresses: while during dome stage up-, downward and lateral intercalations appear at similar low frequencies in layers one to three cell diameters away from the EVL, during mid-epiboly a profile is established in which the frequency of intercalating cells is higher in the upper cell layers compared to deeper ones.
The cell intercalation data raise the question whether they are sufficient to explain blastoderm thinning and epibolic spreading towards the vegetal pole. First, it appears counter-intuitive that a high number of both up- and downward intercalations has been detected, because if they occur in the same layers, this would effectively eliminate any net effect on expanding the DCL. However, the depth profile of intercalation events normalized to cell numbers in each depth layer reveals that from dome stage, when the profile is even, a gradient of intercalation rates is established, with higher intercalation rates in deep cell layers one to two as compared to layers three to four. Together with the slightly higher propensity for downward intercalations, this may effect a net redistribution of cells by intercalation to promote thinning and spreading of the DCL. We attempted to evaluate quantitatively the number of cells exiting the inner sector S1 in comparison to the number of radial intercalations (supplementary material Fig. S6). To visualize temporal changes we performed the analysis in eight 15-minute time windows, and also determined the ratio of the number of cells leaving the sector S1 and the number of radial intercalation events. Supplementary material Fig. S6 reveals that the number of cells leaving sector S1 trails behind the number of radial intercalations until shortly before 50% epiboly, when exiting cells and radial intercalations occur at a ratio of approximately one. This analysis would be consistent with up- and downward intercalations partially compensating each other during doming and early epiboly, while intercalations may drive epiboly more effectively when the directionality (Fig. 5C) and the steeper radial profile of intercalations (supplementary material Fig. S4A) are established at 50% epiboly. However, it has been impossible for us to exactly quantify the contribution of intercalation events to epibolic spreading, because in the time window analyzed the number of cells approximately doubles as the asynchronous thirteenth cell cycle progresses, and towards the end of the recording each cell also has only about half of the volume it had at sphere stage.
Two models have been put forward about the forces that drive DCL epiboly (Keller et al., 2003). In one model, radial intercalation of deep cells is the driving force to spread the DCL. Here, directional cues would have to orient the intercalation behavior. Adhesion gradients, specifically of Ecad, have been proposed to direct radial intercalation during late epiboly to predominantly occur in the direction towards the EVL (Kane et al., 2005). Our analysis reveals that such a directional intercalation cannot be detected during early epiboly stages. In the second model (Keller et al., 2003), radial intercalation may be a more indirect effect of migrational spreading over yolk cell or EVL surfaces. For zebrafish, in this model epiboly of the YSL and EVL would open a space into which deep cells migrate. The prevalence of intercalation orthogonal to the EVL surface observed here may be caused by this type of intercalation effectively filling space opened up by EVL/YSL epiboly. Here, the dynamics and effectiveness of deep cell migration would be crucial for DCL epiboly progress, which is confirmed by our measurements. This is also consistent with the changes observed in MZspg mutant embryos, in which E-cad trafficking and adhesion is affected in a way to reduce effective cell movements (Song et al., 2013). Our study provides a new approach to investigate dynamic behavior and intercalations of individual cells within a tissue during embryo development. This method may also be exploited in other fields such as cancer research to quantify epithelial-mesenchymal transitions in vivo.
Materials and Methods
Zebrafish maintenance and image acquisition
The AB/TL strain was used as WT control. For embryos devoid of Pou5f1 function, maternal and zygotic spg m793 mutants (MZspg m793) were used. 3D time-lapse recording of global blastomere migration was performed by Song et al. (Song et al., 2013). All nuclei were labeled by microinjection of nls-tomato mRNA (50 pg) at the one-cell stage. The 3D time-lapse stacks were recorded using an LSM5 Live Duo confocal microscope (Zeiss, Jena) with a Zeiss LD LCI Plan-Apochromat 25×/0.8 objective lens. Laser wavelength: 532 nm, filter: BP 560-675, z-stack depth: 109.4 µm, duration: 126 min with 1.05 min intervals.
Image quantification
All analyses presented here depend on the position of the nuclei recorded in the primary data. However, virtual cell boundaries are used to define neighborhoods of cells and provide features for the detection of cell intercalations. The full image analysis pipeline is described in the following paragraphs and depicted in Fig. 2.
Infer cell boundaries
Putative individual cell regions and outer boundaries ("membranes") were estimated by a 3D Voronoi diagram using the nuclei positions as seeds. The maximal size of a cell was limited to a sphere with 20 µm radius to obtain reasonable cell regions at the borders. With this approach, the size of bordering cells (especially the flat EVL cells) is overestimated in the outward direction. However, inner cell boundaries are estimated well, and the analysis described in the following is not affected.
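A minimal sketch of this step (Python; the 20 µm cap is the value stated above, while the array layouts and the use of a k-d tree for the nearest-nucleus lookup are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_cell_regions(nuclei_xyz, query_xyz, max_radius_um=20.0):
    """Discrete 3D Voronoi partition: label every query point with the index
    of its nearest nucleus, or -1 if that nucleus is farther away than the
    radius cap used to keep bordering (e.g. flat EVL) cells from growing
    without bound."""
    tree = cKDTree(nuclei_xyz)            # nuclei positions act as Voronoi seeds
    dist, idx = tree.query(query_xyz)     # nearest-seed distance and index per query point
    return np.where(dist <= max_radius_um, idx, -1)
```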
Intercalation model
Cell intercalations were modeled based on the intercalation model depicted in Fig. 1B and Fig. 2. This model describes an intercalation in terms of the angles (w_i, w_j and w_k) and contact areas (a_ij, a_jk and a_ki) of three cells (i, j, and k). The three cells start in a triangular configuration (time point T1) and end in a linear configuration (time point T3). Time point T2 models the point when cells i and j lose their contact. The start and end values of the six features for an ideal intercalation were manually defined by inspecting several clear intercalation events in the data set. Furthermore, we assume a linear transition for all features from the start to the end values, except for a_ij, which drops to zero between T1 and T2 and stays zero between T2 and T3.
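A sketch of the ideal feature time courses implied by this model (Python; the particular start/end values and the relative position of T2 are placeholders, since the paper states that these were set manually from clear example events):

```python
import numpy as np

def ideal_feature_courses(start, end, n_steps, t2_fraction=0.5):
    """Ideal trajectories of the six intercalation features between T1 and T3.
    Every feature moves linearly from its start to its end value, except the
    contact area a_ij, which decays to zero by T2 and stays zero afterwards."""
    t = np.linspace(0.0, 1.0, n_steps)
    courses = {name: start[name] + (end[name] - start[name]) * t for name in start}
    courses["a_ij"] = np.where(t < t2_fraction,
                               start["a_ij"] * (1.0 - t / t2_fraction),
                               0.0)
    return courses

# Example with placeholder values (angles in degrees, areas in square micrometres):
template = ideal_feature_courses(
    start={"w_i": 60, "w_j": 60, "w_k": 60, "a_ij": 50, "a_jk": 50, "a_ki": 50},
    end={"w_i": 0, "w_j": 180, "w_k": 0, "a_ij": 0, "a_jk": 80, "a_ki": 80},
    n_steps=20)
```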
Detect intercalation events
For each cell the above-described intercalation model was fitted to the trajectories using every possible combination of start time, end time and neighboring cells. The mean squared errors from the fit and the deviation of the fitted parameters from the ideal parameters were linearly combined (using manually specified weights) into a single error d and converted to a score s in the range [0…1] by taking the exponential of the negated error, s = exp(−d). From these spatiotemporal scores, local maxima in time are selected as events and overlapping events are joined into a single event. Finally, detected intercalation events with low reliability (score < 0.85) were discarded. The exact value of this threshold is not critical for the further analysis, as there is a smooth transition between intercalations and non-intercalations, and only relative numbers are considered.
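The error-to-score mapping and the reliability cut can be sketched as follows (Python; the combination weights stand in for the manually specified values mentioned above):

```python
import numpy as np

def event_score(fit_mse, param_deviation, w_fit=1.0, w_dev=1.0):
    """Linearly combine the model-fit error and the deviation of the fitted
    parameters from the ideal parameters into a single error d, then map it
    to a score in [0, 1] via s = exp(-d)."""
    d = w_fit * fit_mse + w_dev * param_deviation
    return float(np.exp(-d))

def keep_reliable(candidate_events, threshold=0.85):
    """Discard candidate intercalation events whose reliability score falls
    below the threshold used in the paper."""
    return [ev for ev in candidate_events if ev["score"] >= threshold]
```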
Compute relative local motion
The motion of intercalating cells relative to the local tissue was computed by successive temporal registration of local groups of cells. This compensates for translational motion from both global motion and local growth motion of the tissue. The resulting relative raw cell path is depicted schematically in supplementary material Fig. S5A (dashed black). From this raw cell path we compute the main motion direction, the calculated effective displacement and the revised cell path (supplementary material Fig. S5A). The main motion direction is found by principal component analysis (PCA) and represents the line of best fit with respect to the raw cell path. The extremal points in this main direction are used to refine the start and end point and the time window of the intercalation event, resulting in the calculated effective displacement and the revised cell path.
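A sketch of the PCA step on a registered cell path (Python; the path is assumed to be an (n, 3) array of relative positions):

```python
import numpy as np

def main_motion_direction(path_xyz):
    """Return the main motion direction (first principal axis of the raw cell
    path), the calculated effective displacement (distance between the two
    extremal projections onto that axis) and the PCA eigenvalues, which are
    reused later for the directedness check."""
    centred = path_xyz - path_xyz.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(centred.T))
    order = np.argsort(eigval)[::-1]        # sort eigenvalues in descending order
    eigval, eigvec = eigval[order], eigvec[:, order]
    direction = eigvec[:, 0]                # line of best fit through the path
    proj = centred @ direction
    effective_displacement = proj.max() - proj.min()
    return direction, effective_displacement, eigval
```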
Perform event verification
To reduce false positive detections, a subsequent verification step discards events that are not likely to be intercalations. First, intercalating cells are likely to show an effective displacement in the range of a cell diameter (cf. Fig. 1B) and second, intercalating cells are likely to perform a rather directed motion through neighboring cells. Therefore, we discarded events with less than 6 µm absolute effective displacement and directedness r_dir less than 0.85, with r_dir being the ratio of the first eigenvalue to the sum of eigenvalues obtained from principal component analysis (PCA) of the revised cell path.
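Using the quantities from the previous step, the verification rule can be sketched as (Python):

```python
def is_plausible_intercalation(effective_displacement_um, pca_eigenvalues,
                               min_displacement_um=6.0, min_directedness=0.85):
    """Keep an event only if the absolute effective displacement is at least
    about 6 um (roughly a cell diameter is expected for a true intercalation)
    and the motion is sufficiently directed, where r_dir is the first PCA
    eigenvalue divided by the sum of all eigenvalues of the revised cell path."""
    r_dir = pca_eigenvalues[0] / sum(pca_eigenvalues)
    return (abs(effective_displacement_um) >= min_displacement_um
            and r_dir >= min_directedness)
```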
Compute basic statistics
From the verified intercalation events, some basic statistics are plotted. The calculated effective displacement is used for the absolute effective displacement (supplementary material Fig. S5B) and effective speed (Fig. 6B). The revised cell path is used for quantifying cell path length (supplementary material Fig. S5C) and average instantaneous speed (Fig. 6C).
Compute directionality distributions
3D directionality distributions were computed by density estimation, i.e. by accumulating measurements into a single 3D distribution with the starting point of the event centered at the origin (Fig. 4A-F). High density peaks are obtained when many events show a similar direction. To describe isotropy and polarity, the 3D directionality distributions were projected onto the unit sphere and modeled by spherical harmonics (Rose, 1995) basis functions Y_lm (supplementary material Fig. S3A). The resulting expansion coefficients c_lm with zero order (m = 0) in every band l (i.e. c_l0) describe the distribution of the signal from North to South pole (averaged along the latitudes). For the present application, especially the second coefficient c_20 is important (supplementary material Fig. S3A,B). It is positive if the signal is located at the poles, negative if the signal is located at the equator, and zero if the signal is homogeneous.
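A sketch of this polarity summary via the c_20 coefficient (Python; it evaluates only the single band-2, order-0 real spherical harmonic on the observed unit directions, with the animal-vegetal axis assumed to lie along z, rather than performing the full band-wise expansion):

```python
import numpy as np

def c20_coefficient(directions_xyz):
    """Average of the real spherical harmonic Y_20 over the observed
    intercalation directions (projected onto the unit sphere):
    Y_20(theta) = sqrt(5 / (16*pi)) * (3*cos(theta)**2 - 1).
    Up to the overall normalisation of the density, c_20 > 0 means the signal
    sits at the poles (radial bias), c_20 < 0 at the equator (lateral bias),
    and c_20 close to 0 means an isotropic distribution."""
    u = directions_xyz / np.linalg.norm(directions_xyz, axis=1, keepdims=True)
    cos_theta = u[:, 2]          # polar angle measured from the z (animal-vegetal) axis
    y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * cos_theta**2 - 1.0)
    return float(y20.mean())
```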
Fit EVL surface
An anatomical embryo coordinate system was defined by fitting a smooth surface to EVL cell nuclei for each time point.
Assign intercalation location and direction
The fitted EVL surface is used as anatomical reference. The direction of the intercalation and the directions of successive intercalations were obtained by measuring the angle of the calculated effective displacement (Fig. 6A, blue) to the surface normal. The locations of the intercalations were obtained by measuring the distance to this surface. For representation the distances are discretized into cell diameters (Fig. 3D,E; supplementary material Fig. S1 and Fig. S4A,C). The reference cell diameter was measured by the statistics of nearest neighbor distances for all cells and time steps in all datasets. We used the median value d = 16.1095 µm as an estimate for the reference cell diameter. Directions were classified as "lateral" if the displacement component in the direction of the EVL was within ±(√3/4)·d (≈7 µm), and as "upward" or "downward" otherwise.
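The direction classification can be sketched as follows (Python; the reference cell diameter is the median value quoted above, and the outward orientation of the EVL normal is an assumption of this sketch):

```python
import numpy as np

REFERENCE_CELL_DIAMETER_UM = 16.1095   # median nearest-neighbour distance

def classify_intercalation(displacement_xyz, evl_normal, d=REFERENCE_CELL_DIAMETER_UM):
    """Classify an intercalation from the component of its effective
    displacement along the EVL surface normal: within +/- sqrt(3)/4 * d
    (about 7 um) it is lateral, otherwise it is upward (towards the EVL)
    or downward (away from it)."""
    radial = float(np.dot(displacement_xyz, evl_normal))
    if abs(radial) <= np.sqrt(3.0) / 4.0 * d:
        return "lateral"
    return "upward" if radial > 0 else "downward"
```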
Statistical analysis
Statistical significance in terms of directionality of intercalation in WT and MZspg (Fig. 4) was evaluated using the non-parametric Wilcoxon rank sum test. The test was based on the expansion coefficient c_20 describing isotropy and polarity (supplementary material Fig. S3, n = 6 samples per class). For the description of WT and MZspg, measurements were averaged (Fig. 4, Fig. 5C) or summed over 6 datasets per class (Fig. 3, Fig. 5A,B, Fig. 5E,F, Fig. 6D; supplementary material Figs S1, S2, S4). Standard MATLAB boxplots were used to plot class distributions (Fig. 6B,C; supplementary material Fig. S5B-D): the central mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered outliers, and outliers are plotted individually (red crosses). The medians are significantly different at the 5% significance level if their comparison intervals (notches) do not overlap. In Fig. 3A-E, Fig. 5A,B, Fig. 5E,F, Fig. 6D, supplementary material Figs S1 and S4 the error is given by the 95% confidence intervals assuming Poisson noise when counting intercalation events. All computations and statistical evaluations were performed in MATLAB (The MathWorks Inc.) or C++. | 7,302.4 | 2013-06-03T00:00:00.000 | [
"Biology"
] |
Helicobacter pylori plasticity region genes are associated with the gastroduodenal diseases manifestation in India
Background Almost all Helicobacter pylori-infected persons develop gastritis, and severe gastritis is thought to be the common denominator of peptic ulcer diseases, which may lead to gastric cancer. However, it is still an enigma why a few strains are associated with ulcer formation, while others are not related to any disease outcome. Although a number of putative virulence factors have been reported for H. pylori, there are contradictory results regarding their association with disease. Recently, there has been significant attention to strain-specific genes outside the cag pathogenicity island, especially genes within the plasticity regions. Studies demonstrated that certain genes in this region may play important roles in the pathogenesis of H. pylori-associated diseases. The aim of this study was to assess the role of selected genes (jhp0940, jhp0945, jhp0947 and jhp0949) in the plasticity region in relation to the risk of H. pylori-related diseases in the Indian population. Methods A total of 113 H. pylori strains isolated from duodenal ulcer (DU) (n = 61) and non-ulcer dyspepsia (NUD) (n = 52) subjects were screened by PCR and dot blot to determine the presence of these genes. Comparative studies of IL-8 production and apoptosis were also done by co-culturing AGS cells with H. pylori strains of different genotypes. Results PCR and dot blot results indicated that the prevalence rates of jhp0940, jhp0945, jhp0947 and jhp0949 were 9.8, 47.5, 50.8 and 40.9 % in strains isolated from DU and 17.3, 28.8, 26.9 and 19.2 % in strains isolated from NUD, respectively. IL-8 production and apoptotic cell death were significantly higher with H. pylori strains containing jhp0945, jhp0947 and jhp0949 than with strains lacking those genes. The results indicated that the prevalence of jhp0945, jhp0947 and jhp0949 is associated with an increased risk of severe disease in India. Conclusion Our study showed that the presence of jhp0945, jhp0947 and jhp0949 was significantly associated with symptomatic expression, along with increased virulence in vitro, whereas jhp0940 seems to be negatively associated with the disease. These results suggest that jhp0945, jhp0947 and jhp0949 could be useful prognostic markers for the development of duodenal ulcer in India.
Background
Helicobacter pylori is a Gram-negative microaerophilic bacterium that infects more than 50 % of the world population by selectively colonizing the human stomach [1]. Although most infections are asymptomatic, 10-15 % of H. pylori-infected individuals develop chronic inflammation leading to atrophic gastritis, peptic ulcer, gastric adenocarcinoma and gastric mucosa-associated lymphoid tissue (MALT) lymphoma [2][3][4]. It may also contribute to childhood malnutrition and increase the risk or severity of infection by other gastrointestinal
pathogens such as Vibrio cholerae, especially in developing countries. In India, around 65-70 % of the population is infected with H. pylori [5,6]. The conundrum of H. pylori research is that infection remains latent in the majority of infected patients, while only approximately 15-20 % of infected individuals become symptomatic for peptic ulcer (duodenal or gastric) as a long-term consequence of infection. Infection usually starts in early childhood and the bacteria have a unique capacity to live in the gastric milieu lifelong unless eradicated by specific antibiotic treatment. It is still unclear what determines the outcome of an infection, and this apparent paradox suggests that the mere presence of H. pylori in the stomach is insufficient to cause gastric disease; additional conditions are required. The outcome is thought to involve an interplay between the virulence of the infecting strain, host genetics and environmental factors. Experience with other bacterial pathogens suggests that H. pylori-specific factors may exist that influence the pathogenicity of H. pylori.
H. pylori bears an arsenal of specific virulence factors. Among them, the cytotoxin-associated gene pathogenicity island (cag-PAI), vacuolating cytotoxin gene A (vacA), outer inflammatory protein A (oipA), blood group antigen binding adhesin (babA), lipases and lipopolysaccharides (LPS) are potentially toxigenic, can initiate the process of inflammation in the host gastric tissues, and have been studied in great detail to understand their association with gastroduodenal diseases. The gene that encodes CagA is part of a ~40 kb horizontally acquired DNA segment in the H. pylori genome known as the cag-PAI [7]. cagA was the first reported gene in H. pylori strains that was considered a marker for the presence of the cag-PAI, which includes a number of other genes associated with increased virulence [8]. The cag-PAI also contains genes encoding a type IV secretion system, which ensures efficient translocation of the CagA protein into the host epithelium. One potential confounding factor that has complicated the identification of certain disease-specific H. pylori virulence factors is the substantial geographic diversity in the prevalence of H. pylori virulence factors. Although the presence of cagA is significantly associated with disease status in Western countries, in Asian countries (including Japan, China and India) this correlation was not observed, as the majority of H. pylori strains in this region carry the cagA gene [7,9,10].
Several studies reported the unusual genetic heterogeneity of H. pylori in terms of allelic diversity, which has established it as a species with a very high population recombination rate, and also enabled to identify the strains from various populations of different geographic regions [11]. Comparative analysis of the full genome sequences of two H. pylori strains (26695 and J99) indicated several regions whose G + C content was lower than that of the rest of H. pylori genome, indicating horizontal DNA transfer from other species. H. pylori contains an open pan-genome, in which each individual is found to possess distinct set of non-core, or strain-specific, genes [11]. On the basis of comparative analysis of the first sequenced H. pylori genomes, it can be said that these strain-specific genes are mostly found in genomic regions that had previously been coined as plasticity zones, a designation initially used to describe a particular genetic locus with high variation between the first two H. pylori genome sequences [11]. The availability of more sequencing data and more complete H. pylori genome sequences makes it clear that parts of the plasticity zones are generally organized as genomic islands that may be incorporated in one of quite a few different genetic loci. Approximately half of the strain-specific genes of H. pylori are positioned in the plasticity region [12]. For example, this plasticity region in strain J99 is continuous and 45 kb long whereas it is 68 kb discontinuous in strain 26695. Among the 38 open reading frames (ORFs) of the plasticity zone (jhp0914-jhp0951) in strain J99, only six are present in strain 26695 [13][14][15][16][17]. Although various representative genes of these plasticity regions have been recommended as disease markers, e.g. dupA for duodenal ulcer [18,19], or jhp950 for marginal zone B cell MALT lymphoma [20], the functions of the plasticity zones are still not clear yet. The different combinations of genes within plasticity regions are directly related to the variability of the gene content of H. pylori [21].
It is not clearly understood whether strain-specific genes, or combinations of strain-specific genes, influence the severity of gastric mucosal inflammation and the risk of various H. pylori-mediated diseases. Additionally, the functional importance of the majority of open reading frames (ORFs) in the plasticity region remains unknown. Recently it has been reported that jhp0940, jhp0945, jhp0947, and jhp0949 of Western H. pylori strains showed an association with an increased chance of gastroduodenal disease and an increase in inflammatory cytokines [15,17,22]. However, in other studies, the role of selected genes in the plasticity region in relation to the risk of H. pylori-related disease and the severity of gastric mucosal damage was debatable and uncertain. Furthermore, the reported associations need to be confirmed in other geographic regions, since geographic differences with regard to virulence genes of H. pylori have been demonstrated [16,23,24].
Indian H. pylori strains are genetically distinct from East Asian and Western strains [24]. Moreover, our recent study showed that strains with an intact cag-PAI were found more frequently in Kolkata than in Southern India, indicating regional variation in the H. pylori gene pools [9]. In addition, India constitutes about one fifth of the world's population, and there are no reports regarding the distribution of the different plasticity region genes and their correlation with disease from India, except for the dupA gene. These observations and our continuing interest in the dynamics of genetic traits associated with H. pylori infection and disease association motivated us to perform the present study, examining the prevalence of jhp0940, jhp0945, jhp0947 and jhp0949 of H. pylori and their relation to H. pylori-related disease in the Indian population, along with their role in in vitro studies.
Results
Among the 171 enrolled subjects suffering from gastroduodenal problems, a total of 113 H. pylori strains were isolated by culture. Based on the visual examination of the stomach and duodenum during endoscopy, subjects were divided into two groups: non-ulcer dyspepsia (NUD) and duodenal ulcer (DU) patients. All the strains were isolated from these two groups: (1) 61 DU patients and (2) 52 NUD subjects. The mean age of the 61 DU cases was 47.3 ± 9.8 years and that of the 52 NUD subjects was 31.6 ± 9.9 years. The genomic DNA from these 113 strains was used for further PCR-based analysis.
Distribution of jhp0940, jhp0945, jhp0947 and jhp0949 and their disease association
The prevalence of these selected genes in the plasticity region of H. pylori among NUD and DU patients in the Indian population was screened by PCR and dot blot hybridization (Fig. 1a, b). All the strains yielded a 480-bp product corresponding to the ureB gene, confirming the identity of H. pylori DNA. All samples considered negative by PCR were confirmed as negative by dot blot hybridization. The prevalence rates of jhp0940, jhp0945, jhp0947, and jhp0949 in patients with H. pylori were 13.3 % (15/113), 38.9 % (44/113), 39.8 % (45/113) and 31 % (35/113), respectively (Fig. 2). The frequency of the jhp0940 gene was low in the Indian population: 17.3 % and 9.8 % of strains from NUD and DU patients, respectively, were positive (Table 1). The prevalence of jhp0940 was thus almost two times higher in NUD than in DU, even though the difference was not significant.
The jhp0945 gene was found in 28.8 and 47.5 % of the strains isolated from NUD and DU patients, respectively, indicating that the jhp0945 gene was significantly associated with DU compared to NUD (P = 0.042; OR 2.23; 95 % CI 1.02-4.88) (Table 1). Fourteen (26.9 %) of the 52 patients with NUD and 31 (50.8 %) of the 61 patients with DU were colonized by a jhp0947-positive strain. In the univariate analysis, when patients with NUD and DU were compared, the presence of the jhp0947 gene was positively associated with DU (P = 0.009; OR 2.80; 95 % CI 1.27-6.19).
Similarly, the jhp0949 gene was detected in 19.2 and 40.9 % of the strains isolated from NUD and DU patients, respectively. The results demonstrated a significant association of jhp0949 with DU compared to NUD (P = 0.013; OR 2.92; 95 % CI 1.23-6.87). Among the 35 jhp0949-positive strains, 33 were positive for jhp0947 and 25 strains were also positive for all three elements (jhp0945, jhp0947 and jhp0949). Thus, the presence of jhp0949 was almost completely linked with that of jhp0947 in the Indian population and was roughly associated with that of jhp0945 (Fig. 3). In our study, only one strain containing all four ORFs (jhp0940, jhp0945, jhp0947, and jhp0949) was detected.
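These univariate odds ratios can be reproduced from simple 2 × 2 tables; a sketch (Python; the counts are back-calculated from the percentages reported above, and the Wald-type log-odds confidence interval is one standard choice, not necessarily the exact procedure used by the authors):

```python
import math

def odds_ratio_ci(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls, z=1.96):
    """Odds ratio and Wald 95 % confidence interval for a 2 x 2 table
    (cases = DU, controls = NUD; exposed = gene-positive)."""
    a, b, c, d = exposed_cases, unexposed_cases, exposed_controls, unexposed_controls
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# jhp0947: 31 of 61 DU strains positive vs 14 of 52 NUD strains positive
print(odds_ratio_ci(31, 30, 14, 38))   # approximately (2.80, 1.27, 6.19), as reported
```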
The cagA and vacA status was determined using primers and protocols described earlier [23,25]. cagA was present in 87.6 % (99/113) of the tested strains from this region. 68 % (77/113) of the strains had the vacA s1m1 allele; the other two alleles of vacA, s1m2 and s2m2, were present in 18.6 % (21/113) and 13.3 % (15/113), respectively (Fig. 4). The status of the cagA and vacA genes did not show any correlation with the presence of the plasticity region genes, indicating that these two virulence-associated genes occur independently of the plasticity region genes.
(Fig. 1a legend: Genotyping of plasticity region genes in Indian H. pylori isolates. The images shown are from a representative gel electrophoresis of PCR amplification products of plasticity region genes from Indian isolates and the J99 control strain with (1) jhp0940F and jhp0940R primers [22], (2) jhp0945F and jhp0945R primers [22], (3) jhp0947F and jhp0947R primers [22], and (4) jhp0949F and jhp0949R primers [22].)
Three ORFs positive strains trigger more apoptosis in AGS cells
The cell cycle analysis with propidium iodide reveals distribution of cells in three major phases of the cell cycle (G1, S and G2/M) and makes it possible to detect unhealthy cells with fractional DNA content. The cells in the sub-G0 phase represent apoptotic cells. After
Strains harboring three ORFs cause more induction of caspase-3 in AGS cells
We also assessed the combined effects of these three ORFs on apoptosis via the level of caspase-3 activity. Cleavage of caspase-3 is a common final step in caspase-mediated cell death, whatever the initiating agent. Activation of caspase-3 by H. pylori was determined by measuring cleavage of the colorimetric substrate DEVD-pNA as described in the methods. We measured the degree of caspase-3 activity during H. pylori infection using biochemical assays of lysates from cells infected with the aforesaid groups of strains having particular genotypes (n = 3 in each group); the results are shown in Fig. 7.
Discussion
It has been suggested that the H. pylori plasticity zones reported earlier should in fact be considered as mobile genetic elements with conserved gene content, rather than regions of genome plasticity. It has also been suggested that the high prevalence and wide distribution of these regions throughout all H. pylori populations might provide an as yet unknown fitness benefit to their hosts. Studies indicated that the novel protein antigen JHP0940 from the plasticity region of H. pylori elicited strong and significant levels of tumor necrosis factor alpha and interleukin-8 in human macrophages [26]. Moreover, according to Kim et al. [27], JHP0940 is a catalytically active protein kinase that translocates into cultured human cells, and its kinase activity is capable of indirect upregulation of NF-κB p65 phosphorylation at Ser276. These observations might suggest a putative role of jhp0940 in chronic gastric inflammation and, possibly, the various other outcomes of H. pylori infection, including gastric cancer.
The prevalence of jhp0940 in Western and East Asian isolates has been reported as 17.2 and 23.5 %, respectively. A study of Brazilian patients indicated that the jhp0940 gene was found in only three of 200 H. pylori strains tested [28]. Other studies demonstrated that 62 % (71 of 114 strains) and 53.1 % of the H. pylori strains isolated from Pakistan and Mexico, respectively, were positive for jhp0940 [15,29]. Our study, in comparison, showed that only 13.3 % of the H. pylori isolates from India were positive for jhp0940. According to Occhialini et al. [17], in a Costa Rican population about 41.2 % of isolates from gastric cancer patients were jhp0940 positive, whereas all the isolates from gastritis patients were negative (P < 0.0006). A study from Pakistan also demonstrated that gastric ulcer (GU) was more significantly associated with jhp0940 (17 patients, 77 %; P = 0.003) than gastritis was (14 patients, 39 %) [15]. In contrast, Sugimoto et al. [22] reported that jhp0940-positive Western isolates were significantly associated with the absence of gastric ulcer or duodenal ulcer [OR 0.21 (0.05-0.94) and 0.31 (0.12-0.78), respectively]. Our study is partially in accord with this Western finding, as jhp0940 was almost two times more prevalent in NUD than in DU in the Indian population, although the difference was not significant. Our results might suggest that jhp0940 has a protective effect against gastroduodenal diseases, although findings from various research groups from different parts of the world have produced contradictory reports regarding its prevalence as well as its effect. The absence of jhp0940 could be explained by deletion, because there are reports showing deletion of some genes from evolved H. pylori strains isolated from advanced stages of gastric disease, when chronic atrophic gastritis progressed to gastric cancer in the same patient over a period of four years [30]. Thus, it is possible that, due to the high rate of evolution of the bacteria, jhp0940 might be deleted from H. pylori strains during the progression of gastritis to duodenal ulcer. However, there are no reports on how the bacteria modulate these types of deletions within a single host during disease progression, or on what would justify such rapid evolution.
Studies from Turkey, Costa Rica and the Netherlands reported that the prevalence of jhp0945 was similar between H. pylori from patients with peptic ulcer and that from patients with gastritis, but the sample sizes in those studies were small [17]. Another comprehensive study by Sugimoto et al. [22], among 300 H. pylori isolates from a Western population, presented a significant association between jhp0945-positive isolates and gastric ulcer, duodenal ulcer, and gastric cancer. The same study also reported that jhp0945 status was associated with an increased risk of gastric ulcer (odds ratio (OR) 2.58, 95 % CI 1.06-6.27) in East Asia using univariate analysis. Our results are consistent with this finding and showed that jhp0945 status is significantly associated with DU, which supports the finding of Sugimoto et al. [22].
The prevalence of jhp0947 in East Asian isolates was only 5.5 % [22]. In 2000, Occhialini et al. [17] reported a more frequent distribution of jhp0947 in gastric cancer isolates (64.7 %) than in those from gastritis patients (34.6 %). Moreover, Santos et al. [28] described that the presence of jhp0947 remained associated with gastric cancer (OR 2.94, 95 % CI 1.86-4.64) and with duodenal ulcer disease (4.84, 2.13-10.96) using multivariate analysis. Yakoob et al. [15] found a significant association of jhp0947 with chronic active inflammation, and multivariate analysis demonstrated that the ORF was associated with DU in Pakistan; our study also showed that jhp0947 status was associated with a significantly increased risk of duodenal ulcer.
One study in a Dutch population established that the presence of jhp0949 was significantly associated with duodenal ulcer compared to gastritis [31]. Another study reported that 83.7 % of the H. pylori strains isolated from Mexican children were positive for jhp0949 [29]. Sugimoto et al. [22] reported that there was no significant association between the development of gastroduodenal disease and the status of jhp0949 in East Asian and Western populations. In contrast, the present study in the Indian population demonstrated a significant difference in the prevalence of jhp0949 between patients with NUD and those with H. pylori-related duodenal ulcer. These differences may well reflect the geographic variation of the H. pylori isolates used in the various studies. Further, this study also showed that IL-8 production and apoptotic cell death were significantly higher with H. pylori strains containing jhp0945, jhp0947 and jhp0949 than with strains lacking those genes. Together, these results emphasize that the presence of jhp0945, jhp0947 and jhp0949 was significantly associated with symptomatic expression, whereas jhp0940 seems to be negatively associated with disease status.
Conclusions
In conclusion, this was the first study in India to assess the relationship between plasticity region genes and clinical manifestations. The findings of the study suggest that jhp0945, jhp0947 and jhp0949 could be useful prognostic markers for the development of peptic ulcer in the Indian population, whereas jhp0940 seems to be negatively associated with the disease.
Methods
The institutional ethical committee had approved the study. The record regarding the patient information was kept blind during the experimental procedures, and the disease status was decoded during the data analysis. Two biopsies, one from the antrum and the other from the corpus of the stomach, were taken during endoscopy from each individual. Biopsies obtained in 0.6 ml of Brucella broth (Difco Laboratories, Detroit, MI) with 15 % glycerol were transported to the laboratory in ice-cold condition and were stored at −70 °C until culture.
H. Pylori culture
In the laboratory, Brucella broth containing the specimen was vortexed for 2 min and 200 µl of the mixture was streaked on Petri plates containing brain heart infusion (BHI) agar (Difco Laboratories) supplemented with 7 % sheep blood, 0.4 % IsoVitaleX, amphotericin B (8 µg/ml) (Sigma Chemicals Co., St. Louis, MO), trimethoprim (5 µg/ml), vancomycin (6 µg/ml) (Sigma Chemicals) and nalidixic acid (8 µg/ml) (all from Sigma). Plates were incubated for 3-6 days at 37 °C in a double gas incubator (Heraeus Instrument, Germany), which maintains an atmosphere of 85 % N2, 10 % CO2, and 5 % O2 [32]. H. pylori colonies were identified by their typical colony morphology, appearance on Gram staining and positive reactions in urease, catalase and oxidase tests, along with the urease PCR. Bacteria were sub-cultured at 37 °C on the above medium under the same microaerophilic conditions.
Extraction of genomic DNA
Cells were harvested from the culture plates and washed with phosphate-buffered saline (pH 8.0), followed by centrifugation at 3000 rpm for 1 min. The pelleted cells were resuspended in 540 µl of TE buffer (10 mM Tris-HCl, 1 mM EDTA), 60 µl of 10 % sodium dodecyl sulfate (SDS) (Sigma) and 9 µl of proteinase K (20 mg/ml) (Invitrogen, Carlsbad, CA); the mixture was incubated at 50 °C for 1 h, followed by the addition of 100 µl of 5 M NaCl and 80 µl of 10 % CTAB solution, and then incubated at 65 °C for 10 min. The DNA was extracted according to the standard phenol-chloroform method [33].
PCR amplification
PCR amplification was performed in a final volume of 20 µl containing template DNA (2-20 ng), 2 µl of 10× buffer (Roche, Germany), 2.5 mM dNTPs (Roche) and 10 pmol of the corresponding primers (Table 2) in the presence of 1 U of Taq DNA polymerase (Roche). The cycling program had the following conditions: initial denaturation at 95 °C for 3 min followed by 30 cycles of denaturation at 94 °C for 1 min, annealing at 55 °C for 1 min and extension at 72 °C for 1 min, with a final extension at 72 °C for 7 min. Genomic DNA from strains J99 and 26695 was included as positive and negative control, respectively. The PCR products were analyzed on 1.5 % agarose gels (containing 0.5 µg of ethidium bromide per ml) in 1X TAE buffer. Gels were scanned under UV light and analyzed with Quantity One software (Bio-Rad, Hercules, CA). The size of the products was confirmed using a molecular weight marker.
IL-8 assay
All the bacterial strains were cultured on BHIA plates containing 7 % serum for 24 h at 37 °C under microaerophilic conditions. To measure in vitro IL-8 secretion from gastric epithelial cells, AGS (human gastric adenocarcinoma cell line) cells were plated (2.5 × 10^5 cells/ml) into 24-well plates and cultured for 24 h. H. pylori (multiplicity of infection (MOI) of 100) were added to the cultured cells. After 8 h of infection, IL-8 levels in the supernatant were assayed in duplicate three times using a commercially available specific ELISA kit (Genetix, India) following the manufacturer's protocols.
Cell cycle analysis
AGS cells (1 × 10^6 cells/ml in each well) were infected with exponentially growing H. pylori culture. After 24 h of infection cells were fixed in 70 % chilled ethanol and were kept at 4 °C for further analysis. Prior to analysis cells were washed in 2 % fetal bovine serum (FBS) containing PBS (pH 7.4) and the cell pellets were stained with propidium iodide (50 µg/ml) containing DNase-free RNase (0.1 mg/ml). Cells were then acquired on a flow cytometer and the data were analyzed in FACS Diva (Becton-Dickinson, USA) software.
In vitro caspase-3 activity assay
The AGS cells were plated (2.5 × 10^6 cells/ml per plate) in Petri plates (60 mm diameter) for this experiment and cultured for 24 h. The cells were then infected with a one-day-old H. pylori culture (multiplicity of infection [MOI] of 100). After 24 h of infection the AGS cells were collected by centrifugation at 1000×g for 10 min at room temperature and then washed twice with PBS. A suspension of these cells was then prepared in lysis buffer at a density of 10^7 cells/ml and kept on ice for 10 min. The cell debris was discarded by centrifugation at 16,000×g for 5 min at 4 °C, and the supernatant was used for the colorimetric assay of caspase-3 activity using a commercially available kit (Abcam, Cambridge, UK). Protein concentrations were measured using the Bio-Rad protein assay according to the manufacturer's protocols.
Statistical analysis
Each experiment was performed at least thrice in duplicate and results are expressed as mean ± standard error of the mean (SEM). Statistical analysis was done by t test and ANOVA (wherever applicable). Univariate analysis was done to determine odds ratios (OR) and confidence intervals (CI). Calculations were done using GraphPad Prism software (version 5, GraphPad Software Inc., USA) and P values <0.05 were considered significant. | 5,790.4 | 2016-03-22T00:00:00.000 | [
"Biology",
"Medicine"
] |
Transcriptional network structure assessment via the Data Processing Inequality
Whole genome transcriptional regulation involves an enormous number of physicochemical processes responsible for phenotypic variability and organismal function. The actual mechanisms of regulation are only partially understood. In this sense, an extremely important conundrum is related to the probabilistic inference of gene regulatory networks. A plethora of different methods and algorithms exists. Many of these algorithms are inspired by statistical mechanics and rely on information theoretical grounds. However, an important shortcoming of most of these methods, when it comes to deconvolving the actual, functional structure of gene regulatory networks, lies in the presence of indirect interactions. We present a proposal to discover and assess such indirect interactions within the framework of information theory by means of the data processing inequality. We also present some actual examples of the applicability of the method in several instances in the field of functional genomics.
Introduction
The DPI is thus useful for efficiently quantifying the dependencies among a large number of genes, because it eliminates those statistical dependencies that might be of an indirect nature, such as those between two genes that are separated by intermediate steps in a transcriptional cascade. We will outline an algorithmic implementation of the DPI within the framework of GRN inference and structure assessment.
Outline
• Introduction
• Motivation
• The gene network inference problem
• The joint probability distribution approach (Guilt by association)
• Information theoretical measures and the data processing inequality (DPI)
• Applications
• Conclusions and perspectives
Most common pathologies are not caused by the mutation of a single gene, rather they are complex diseases that arise due to the dynamic interaction of many genes and environmental factors. To construct dynamic maps of gene interactions (i.e. GRNs) we need to understand the interplay between thousands of genes.
One important problem in contemporary computational biology is thus that of reconstructing the best possible set of regulatory interactions between genes (a so-called gene regulatory network, GRN) from partial knowledge, as given for example by gene expression analysis experiments.
Several issues arise in the analysis of experimental data related to gene function:
• The nature of measurement processes generates highly noisy signals.
• There are far more variables involved (number of genes and interactions among them) than experimental samples.
• Another source of complexity is the highly nonlinear character of the underlying biochemical dynamics.
The gene network inference problem
Information theory (IT) has provided a powerful theoretical foundation for developing algorithms and computational techniques to deal with network inference problems applied to real data. There are, however, goals and challenges involved in the application of IT to genomic analysis.
The applied algorithms should return intelligible models (i.e. they must be understandable), rely on little a priori knowledge, deal with thousands of variables and detect non-linear dependencies, all of this starting from tens (or at most a few hundred) of highly noisy samples.
The gene network inference problem
There are several ways to accomplish this task. In our opinion, the best benchmarking options for the GRN inference scenario are the use of sequential search algorithms (as opposed to stochastic search) and performance measures based on IT, since these make feature selection fast and efficient and also provide an easy means of communicating the results to non-specialists (e.g. molecular biologists, geneticists and physicians).
The deconvolution of a GRN based on optimization of the joint probability distribution of gene-gene interactions, as given by gene expression experimental data, could be implemented as follows:
The joint probability distribution approach
Here N is the number of genes, the Φ_i are interactions (i.e. correlation measures) and Z is a normalization factor (called a partition function). The functional H is termed a Hamiltonian, in analogy with statistical physics.

The joint probability distribution approach

Estimating MI between gene expression profiles under the high-throughput experimental setups typical of today's research in the field is a computational and theoretical challenge of considerable magnitude. One possible approximation is the use of estimators; under a Gaussian kernel approximation, the JPD of a 2-way measurement can be obtained in closed form. Mutual information allows us to distinguish different kinds of 2-way interactions (one particular case of interest is that of triplets).

The Data Processing Inequality (DPI) for Markov chains

Definition: Three random variables X, Y and Z are said to form a Markov chain (in that order), denoted X → Y → Z, if the conditional distribution of Z depends only on Y and is independent of X. That is, if we know Y, knowing X does not tell us any more about Z than knowing only Y.
If X, Y and Z form a Markov chain, then the JPD can be written:
P(X,Y,Z) = P(X) P(Y|X) P(Z|Y)
The Data Processing Inequality Theorem: If X, Y and Z form a Markov chain X → Y → Z, then I(X;Y) ≥ I(X;Z). | 1,241.8 | 2012-08-31T00:00:00.000 | [
"Computer Science"
] |
Quadrotor UAV flight control via a novel saturation integral backstepping controller
ABSTRACT In this paper, in order to reduce the influence of different external disturbances on quadrotor flight, a novel nonlinear robust controller is designed and applied to the quadrotor system. First, a nonlinear dynamic model of the quadrotor is formulated mathematically. Then, a quadrotor flight controller is designed with the classical backstepping control (CBC) method, and the nonlinear system using this controller is proved to be asymptotically stable by Lyapunov stability theory when there is no external disturbance. Finally, a new nonlinear robust controller, established by introducing both a saturation function and the integral of the error into CBC, is designed and named saturation integral backstepping control (SIBC). The boundedness of the nonlinear system under external disturbances is verified by the uniform ultimate boundedness theorem for nonvanishing perturbations. Numerical simulations of hovering and trajectory tracking are carried out in MATLAB/SIMULINK with the external disturbances taken into consideration. In addition, a series of outdoor flight experiments were completed on actual quadrotor UAV hardware under time-varying wind disturbance. According to the simulation and flight experiment results, the proposed SIBC strategy shows superior robustness compared with the CBC and integral backstepping control (IBC) strategies.
Introduction
As a new kind of small unmanned aerial vehicle (UAV), the quadrotor aircraft has attracted wide attention and is used in military surveillance, rescue, disaster monitoring, photography and agricultural mapping due to its many advantages, such as high maneuverability and agility, hovering, and vertical take-off and landing [1][2][3]. Despite the advantages of the quadrotor over the helicopter in terms of efficiency, dimensional flexibility, smaller spatial requirements and safety, its application has been greatly hindered by the complicated flight control design of the quadrotor, since it is an underactuated system with six outputs and only four control inputs [4,5]. In addition, the quadrotor system is highly nonlinear, strongly coupled, multivariable and time-varying, and can easily be affected by external disturbances. Therefore, a control strategy with excellent disturbance-restraining capability is urgently required to achieve autonomous flight such as hovering, trajectory tracking, take-off and landing [6].
As a recursive design method, backstepping control is commonly used among nonlinear control methods because of its advantages, such as better design flexibility and higher stability compared with other control methods. Based on Lyapunov stability theory, backstepping control combines controller design with the selection of Lyapunov functions, so that the controller design process coincides with the stability proof. Thus, the system can be kept asymptotically stable by choosing a Lyapunov function reasonably. However, since the classical backstepping control (CBC) method can hardly resist external disturbances, many improvements have been made to enhance the disturbance-rejection capability of backstepping control. For example, to solve the trajectory tracking problem, an adaptive control algorithm was derived based on backstepping [17]. A new approach for the attitude control of a quadrotor aircraft was proposed by combining the backstepping technique and a nonlinear robust proportional-integral (PI) controller [18]. An enhanced backstepping controller based on proportional-derivative (PD) control was obtained, in which a particle swarm optimization (PSO) algorithm was utilized to determine the controller parameters [19].
Although the disturbance-restraining capability of backstepping control has been shown in previous reports to improve to a certain extent under a single kind of external disturbance, the different effects of various external disturbances on quadrotor flight have not been studied, and the proof of uniform ultimate boundedness of the quadrotor control system under nonvanishing perturbations has largely been ignored. Therefore, in this work, three kinds of external disturbances (constant, periodic and random) are considered separately during the flight control of the quadrotor. In order to reduce the influence of these disturbances on quadrotor flight (such as hovering and trajectory tracking), a novel saturation integral backstepping control (SIBC) is proposed by combining a saturation function and the integral of the error with CBC. In addition, the boundedness of the nonlinear system is verified by the uniform ultimate boundedness theorem for nonvanishing perturbations. Simulation and flight experiment results indicate that, compared with CBC and integral backstepping control (IBC), the SIBC strategy shows higher anti-disturbance capacity against the three disturbances in hovering and trajectory tracking. This paper is organized as follows: a detailed dynamic model of the quadrotor is presented in Section 2; classical backstepping control is described in Section 3; the saturation integral backstepping controller and the proof of uniform ultimate boundedness of the system are proposed in Section 4; the simulation results of two cases (hovering and trajectory tracking) are presented in Section 5; the outdoor flight experiments of the quadrotor are given in Section 6; and conclusions are presented in Section 7.
Control principle
The quadrotor is an underactuated system because it has six degrees of freedom but only four inputs, each input corresponding to one rotor that generates propeller force. As shown in Figure 1, the four rotors are denoted rotors 1, 2, 3 and 4. By varying the speeds of the four motors, the quadrotor can produce three attitude motions, namely pitch, roll and yaw. The altitude of the vehicle is changed by varying the four rotor speeds by the same amount. In order to keep balance or produce yaw motion, the two pairs of rotors (rotors 1, 3 and rotors 2, 4) should rotate in opposite directions. Roll motion is produced when the speed of rotor 2 differs from that of rotor 4; similarly, pitch motion is produced when the speed of rotor 1 differs from that of rotor 3. The flight mechanisms of the quadrotor are shown in Figure 2.
Dynamic model
An earth-fixed frame E(x_e, y_e, z_e) and a body-fixed frame B(x_b, y_b, z_b) are used to study the motion of the quadrotor. The absolute position of the vehicle and the three Euler angles (roll, pitch and yaw) are described by ξ = [x, y, z]^T and η = [φ, θ, ψ]^T, respectively, in the earth-fixed frame E. The roll angle (φ) is a rotation around the x_b-axis, the pitch angle (θ) a rotation around the y_b-axis and the yaw angle (ψ) a rotation around the z_b-axis, as shown in Figure 1. The linear velocity is denoted V = [u, v, w]^T and the angular velocity of the airframe Ω = [p, q, r]^T in the body-fixed frame B [22,23]. The relation between the body-fixed-frame and earth-fixed-frame velocities can be written as ξ̇ = R V and η̇ = N Ω [19,20], where R and N are the translation and rotation matrices and the abbreviations S(·), C(·) and T(·) denote sin(·), cos(·) and tan(·), respectively. Using the Newton-Euler approach, the translational and rotational dynamic equations of motion can then be written, with F_i = b ω_i^2 the thrust force and ω_i the speed of rotor i, b the thrust factor, F_d = diag(−k_dx, −k_dy, −k_dz) ξ̇ the aerodynamic drag force and F_g = [0, 0, −mg]^T the gravitational force. Ω × IΩ is the gyroscopic effect due to rigid-body rotation, M_f is the torque produced by the propeller system and [−k_dmx p, −k_dmy q, −k_dmz r]^T is the aerodynamic torque; k_dx, k_dy, k_dz, k_dmx, k_dmy and k_dmz are drag coefficients, and l and d are the distance from the rotors to the centre of mass and the drag factor, respectively.
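For concreteness, the kinematic relations above can be sketched in code. This is a minimal illustration assuming the common ZYX (roll-pitch-yaw) Euler convention; the exact entries and sign conventions of the paper's R and N matrices are not reproduced in the text, so the matrices below are standard textbook forms rather than the authors' definitions.

```python
# Minimal sketch of the kinematics xi_dot = R @ V and eta_dot = N @ Omega
# (assumed ZYX Euler convention; signs may differ from the paper's matrices).
import numpy as np

def rotation_matrix(phi, theta, psi):
    """R maps body-frame linear velocity V to earth-frame velocity xi_dot = R @ V."""
    S, C = np.sin, np.cos
    return np.array([
        [C(psi)*C(theta), C(psi)*S(theta)*S(phi) - S(psi)*C(phi), C(psi)*S(theta)*C(phi) + S(psi)*S(phi)],
        [S(psi)*C(theta), S(psi)*S(theta)*S(phi) + C(psi)*C(phi), S(psi)*S(theta)*C(phi) - C(psi)*S(phi)],
        [-S(theta),       C(theta)*S(phi),                        C(theta)*C(phi)],
    ])

def euler_rate_matrix(phi, theta):
    """N maps body angular rates [p, q, r] to Euler angle rates eta_dot = N @ Omega."""
    S, C, T = np.sin, np.cos, np.tan
    return np.array([
        [1.0, S(phi)*T(theta),  C(phi)*T(theta)],
        [0.0, C(phi),          -S(phi)],
        [0.0, S(phi)/C(theta),  C(phi)/C(theta)],
    ])
```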
Then, the final dynamic model of the quadrotor can be formulated accordingly, where J_r is the rotor inertia and ω_r = ω_2 + ω_4 − ω_1 − ω_3. The control inputs U_1, U_2, U_3 and U_4 are defined in terms of the rotor speeds and the thrust and drag factors.
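The mapping from rotor speeds to the control inputs can likewise be sketched. The sign assignment below follows one common convention consistent with rotors 1, 3 and 2, 4 counter-rotating; the paper's exact definitions of U_1 to U_4 are not reproduced in the text, so this is an assumed illustration rather than the authors' equations.

```python
# Illustrative mapping from rotor speeds to control inputs (assumed sign convention).
import numpy as np

def control_inputs(omega, b, d, l):
    """omega: rotor speeds [w1, w2, w3, w4]; b thrust factor, d drag factor, l arm length."""
    w1, w2, w3, w4 = np.asarray(omega) ** 2
    U1 = b * (w1 + w2 + w3 + w4)      # total thrust -> altitude
    U2 = b * l * (w4 - w2)            # roll torque
    U3 = b * l * (w3 - w1)            # pitch torque
    U4 = d * (w2 + w4 - w1 - w3)      # yaw torque
    return U1, U2, U3, U4
```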
Classical backstepping control for the quadrotor UAV
Because the quadrotor UAV flies at low speed, in this section the aerodynamic drag force F_d and the aerodynamic torque M_d are ignored, and the external disturbance is not taken into account either. The nonlinear dynamic equation is described as in [19], with the state vector and input vector defined accordingly, two virtual control inputs u_x and u_y introduced, and the dynamics model (6) rewritten in the state-space form (8) with the corresponding abbreviations. The state trajectory of the quadrotor can track a desired reference trajectory in the absence of disturbance by using a suitable control law obtained with the CBC method. Taking the control input U_1 as an example, the design of CBC is given step by step as follows. Step 1. Introduce the first tracking error e_1 = x_1d − x_1. The first Lyapunov function is chosen as V_1 = (1/2)e_1^2, whose derivative with respect to time is V̇_1 = e_1ė_1. For the purpose of stabilizing e_1, a stabilizing function is designed as α_1 = ẋ_1d + k_1e_1, where the parameter k_1 is a positive constant; substituting ẋ_1 by Equation (17), Equation (16) can then be rewritten accordingly.
Step 2. The deviation of α_1 from ẋ_1 is defined as the second tracking error e_2 = α_1 − ẋ_1, whose time derivative is ė_2 = α̇_1 − ẍ_1. The second Lyapunov function is selected as V_2 = V_1 + (1/2)e_2^2, with time derivative V̇_2 = V̇_1 + e_2ė_2. Step 3. For the purpose of stabilizing e_2, the control law U_1 is chosen as in Equation (23), where the parameter k_2 is a positive constant; C_φ > 0 and C_θ > 0 according to Assumption 4 and m > 0, so g(x_11) is nonzero. Substituting (23) into (22), the derivative of V_2 can be rewritten such that V̇_2(e_1, e_2) is negative semi-definite. According to Lyapunov stability theory, the nonlinear quadrotor system (8) is asymptotically stabilized by the control law (23). The designs of the other control inputs are similar to that of U_1.
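Since the explicit expression of the control law (23) is not reproduced in the text above, the following self-contained sketch illustrates the same two-step CBC design pattern on a simple double integrator ẍ = u rather than the full quadrotor dynamics; with the choice of u below, the composite Lyapunov derivative becomes V̇_2 = −k_1e_1² − k_2e_2², mirroring the argument above. The reference trajectory and gains are illustrative assumptions.

```python
# Two-step backstepping on a double integrator x_ddot = u, illustrating the CBC pattern:
# e1 = x_d - x, alpha1 = x_d_dot + k1*e1, e2 = alpha1 - x_dot,
# u  = x_d_ddot + k1*e1_dot + e1 + k2*e2   gives   V2_dot = -k1*e1^2 - k2*e2^2.
import numpy as np

def simulate(k1=2.0, k2=2.0, dt=1e-3, T=10.0):
    x, x_dot = 0.0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        x_d, x_d_dot, x_d_ddot = np.sin(t), np.cos(t), -np.sin(t)   # reference
        e1 = x_d - x
        e1_dot = x_d_dot - x_dot
        alpha1 = x_d_dot + k1 * e1
        e2 = alpha1 - x_dot
        u = x_d_ddot + k1 * e1_dot + e1 + k2 * e2                   # control law
        x_dot += u * dt                                             # integrate dynamics
        x += x_dot * dt
    return abs(x - np.sin(T))                                       # final tracking error

print(simulate())  # small value, confirming asymptotic tracking
```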
Saturation integral backstepping control for the quadrotor UAV
The quadrotor UAV is affected by unpredictable external disturbances in actual applications. Three different kinds of disturbances (constant, periodic and random) are taken into account as the key disturbances in this section. The CBC method can hardly resist these external disturbances; therefore, some effective auxiliary control is necessary to eliminate their effect on the quadrotor flight. In this section, a saturation function and the integral of the error are introduced into CBC to enhance its robustness. With the external disturbance taken into consideration, the nonlinear dynamic Equation (8) is rewritten with δ = [δ_1, δ_2, δ_3, δ_4, δ_5, δ_6]^T defined as the external disturbance vector, whose bound satisfies |δ_i| ≤ β (i = 1, …, 6), where β is a given positive constant.
Taking U_1 as an example again, and following the CBC design process and reference [24], the design of SIBC is given step by step as follows. Step 1. Introduce the first tracking error e_1 = x_1d − x_1 and define a new error variable p_1 as the integral of e_1. The first Lyapunov function is chosen as V_1 = (1/2)e_1^2 + (1/2)λ_1p_1^2, where λ_1 is the integral parameter, and its derivative with respect to time is V̇_1 = e_1ė_1 + λ_1p_1ṗ_1. A new stabilizing function is designed as α_1 = ẋ_1d + k_1e_1 + λ_1p_1, where the parameter k_1 is a positive constant.
Step 2. The second tracking error is defined as e_2 = α_1 − ẋ_1, whose derivative is ė_2 = α̇_1 − ẍ_1. Substituting (31) into (29), V̇_1(p_1, e_1) = −k_1e_1^2 + e_1e_2. The second Lyapunov function is selected as V_2 = V_1 + (1/2)e_2^2, and its derivative with respect to time is V̇_2(p_1, e_1, e_2) = V̇_1(p_1, e_1) + e_2ė_2 = −k_1e_1^2 + e_2(e_1 + ė_2) = −k_1e_1^2 + e_2(e_1 + ẍ_1d − ẍ_1 + k_1ė_1 + λ_1ṗ_1). In order to restrain the uncertain disturbance and stabilize the system, the control law U_1 is designed accordingly, where the parameter k_2 is a positive constant, ε_1 is a design parameter, and sat(e_2/μ_1) is a saturation function.
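The explicit definition of sat(e_2/μ_1) is not reproduced in the text above; the following sketch shows a standard unit saturation, which is a common choice for such robust terms, with μ_1 acting as an assumed boundary-layer width and ε_1 scaling the resulting robust term.

```python
# A standard unit saturation (assumed form; the paper's exact definition is not
# reproduced in the text): linear inside |s| <= 1, clipped to +/-1 outside.
import numpy as np

def sat(s):
    return np.clip(s, -1.0, 1.0)

# Robust term of an SIBC-style law (epsilon_1, mu_1 are design parameters):
# u_robust = epsilon_1 * sat(e2 / mu_1)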
Remark 4.1:
The design process and the proof of uniform ultimate boundedness for the attitude controllers U_2, U_3, U_4 and the virtual control inputs u_x, u_y are similar to those of U_1. The virtual inputs u_x and u_y, derived from Equation (45), are substituted into Equation (11) to obtain the desired values φ_d and θ_d of the roll angle φ and the pitch angle θ. These desired values φ_d and θ_d are then used as the reference inputs for U_2 and U_3. The quadrotor control scheme is shown in Figure 3.
Case 1: hovering problem
In the hovering simulation, the desired values of position and attitude are given as fixed reference values. The simulation results show that the quadrotor can keep hovering with the CBC, IBC and SIBC strategies from 0 to 6 s, as shown in Figures 4 and 5. However, after the constant disturbance is added from the 6th second on, the quadrotor could not keep hovering at the original position with the CBC strategy. The IBC or SIBC strategy was able to restore the quadrotor to hovering at the original position after a few seconds, although some vibration occurred as the constant disturbance was added into the system. In addition, the amplitude and the recovery time using SIBC are much smaller than those using IBC, as shown in Figure 4(a-d). Figure 4(e) and (f) show the change of roll and pitch using the three control strategies. When the periodic disturbance was added at the 6th second, the position and yaw of the UAV showed a certain vibration relative to the initial values. However, the amplitude using SIBC is much smaller than those using CBC and IBC, which indicates that SIBC can keep the quadrotor hovering more stably than either of the other two control strategies, as shown in Figure 5(a-d).
Therefore, the SIBC strategy shows better disturbance-restraint performance in the hovering condition than the other two control strategies. Figures 6 and 7 show the control inputs of the three control strategies under the two different external disturbances, respectively.
Case 2: trajectory tracking problem
In the trajectory tracking simulation, the control objective is to ensure that the quadrotor can track the desired trajectory, which is chosen as a helical trajectory. The initial values are given as ψ = 0, z = 0.5, x = 0, y = 1. The simulation is conducted based on the 4th-order Runge-Kutta method with the sampling time fixed at t = 0.01 s, and the simulation time is 30 s. Three different external disturbances are applied from the 0th second onward, and the tracking errors under the constant, periodic and random disturbance situations are examined respectively. These results demonstrate that the CBC and IBC strategies exhibit weak disturbance restraint in trajectory tracking under the above three disturbances. In contrast, using the SIBC strategy the errors for the three disturbances converge rapidly and stay within a small region after a transitory fluctuation, as shown in Figures 8(c), 9(c) and 10(c). The results indicate that the addition of the saturation function and the integral of the tracking error into the original control laws can remarkably restrain the above disturbances with relatively high tracking accuracy.
From Figures 11-13, the tracking errors of x, y, z and ψ are much smaller using SIBC under the three disturbances.
Experimental results
In order to illustrate the effectiveness of the proposed SIBC strategy, a series of trajectory tracking experiments of quadrotor UAV under the random disturbance from wind outdoors are presented in this section.
Experimental equipments
The actual experimental equipment of the quadrotor UAV is shown in Figure 17. The main processor is an STM32F407 (1 M FLASH, 192 K RAM, operating at 168 MHz), which is employed to control the propulsion system via PWM signals. A 12-channel 2.4 GHz remote control system, RadioLink-AT10, is used to communicate with the main processor. The sensors (including an ICM-20602, a barometric pressure sensor SPL06 and a 3-axis electronic compass AK8975) are utilized to measure the accelerations and angular rates in three directions, the altitude and the course angle, respectively. The relative height and the horizontal position of the quadrotor UAV are measured by a TFmini laser radar and Ultra-Wideband (UWB), respectively. The quadrotor UAV, powered by a 2600 mAh Li-Po battery, can keep flying for 10-15 min.
Flight experimental results and analysis
The trajectory tracking experiment of the quadrotor UAV outdoors is shown in Figure 18. The outdoor temperature was 2°C, and the wind speed was around 1-2 m/s with uncertain wind direction. The desired trajectory on the x-y plane is described as x_d − y_d = 0, the desired value on the z-axis is z_d = 1.5 m, and the initial values are x(0) = y(0) = 1 m, z(0) = 1.5 m. The flight time is 10 s.
As shown in Figure 18, the UWB distance detection operates in a tri-anchor mode, in which the three anchors are placed vertically with the distances between anchors 0-1 and 0-2 both being 3 m. The experimental results using CBC, IBC and SIBC are shown in Figure 19. As shown in Figure 19(a) and (b), the trajectories on the x-y plane and the z-axis using SIBC agree with the desired trajectories with a higher goodness of fit than those using CBC and IBC. In addition, the root mean square (RMS) errors of the trajectory using CBC, IBC and SIBC are 1.14, 0.47 and 0.24 m, respectively, which indicates that the RMS error using SIBC is obviously smaller than those using CBC and IBC, a reduction of about 50-80%.
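As a side note, an RMS trajectory-error metric of the kind quoted above can be computed as sketched below; whether the paper uses a planar or per-axis error is not stated in the text, so the planar form here is an assumption.

```python
# Sketch of an RMS trajectory-error metric for comparing CBC, IBC and SIBC.
import numpy as np

def rms_error(actual_xy, desired_xy):
    """actual_xy, desired_xy: (N, 2) arrays of planar positions sampled over the flight."""
    err = np.linalg.norm(actual_xy - desired_xy, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```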
The flight experiment results are in accordance with the simulation results in Section 5, which further confirms that the SIBC strategy can restrain uncertain external disturbances more effectively than the other two control strategies, and that it will be a valuable control method in the field of quadrotor UAV control.
Conclusions
In order to reduce the effect of different external disturbances on the quadrotor, which is a highly unstable nonlinear system in actual applications, a novel nonlinear robust controller, SIBC, is presented in this work. By introducing a saturation function and the integral of the error into CBC, the SIBC strategy can remarkably reduce the interference of external disturbances such as constant, periodic and random disturbances with the quadrotor system. The boundedness of the nonlinear system has been proved by the uniform ultimate boundedness theorem for nonvanishing perturbations. The results of the hovering and trajectory tracking simulations and experiments show that the anti-disturbance capacity of SIBC is much better than that of CBC and IBC, which means that SIBC, with its excellent robustness, has potential application as a novel control strategy in actual quadrotor flight.
Disclosure statement
No potential conflict of interest was reported by the authors. | 4,506.6 | 2019-04-03T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Mixture Model Clustering Using Variable Data Segmentation and Model Selection: A Case Study of Genetic Algorithm
A genetic algorithm for mixture model clustering using variable data segmentation and model selection is proposed in this study. The principle of the method is demonstrated on mixture model clustering of the Ruspini data set. The segment numbers of the variables in the data set were determined and the variables were converted into categorical variables. It is shown that variable data segmentation determines the number and structure of cluster centers in the data. A genetic algorithm is used to determine the number of finite mixture models. The number of total mixture models, and of the possible candidate mixture models among them, is calculated using the cluster centers formed by variable data segmentation of the data set. A mixture of normal distributions is used in the mixture model clustering. Maximum likelihood, AIC and BIC values were obtained using the parameters estimated from the data for each candidate mixture model. Candidate mixture models are established, to determine the number and structure of the clusters, using sample means and variance-covariance matrices for the data set. The best mixture model for model-based clustering of the data is selected among the possible candidate mixture models according to the information criteria. The number of components in the best mixture model corresponds to the number of clusters, and the components of the best mixture model correspond to the structure of the clusters in the data.
Introduction
Analysis of clusters by means of mixture distributions is called mixture model cluster analysis [1]. Mixture model based clustering is one of the clustering methods for partitioning p-dimensional multivariate data into meaningful subgroups [2]. Each component in the mixture model of multivariate normal densities corresponds to a cluster in the multivariate data. The number of components in the mixture model determines the number of clusters, and the structure of the components forms the structure of the clusters in the multivariate data [3].
The mixture model of multivariate normal densities is defined as f(x) = Σ_{i=1}^{k} π_i f_i(x; μ_i, Σ_i), where the π_i are mixing proportions (π_i ≥ 0, Σ_i π_i = 1) and each f_i is a multivariate normal density with mean vector μ_i and variance-covariance matrix Σ_i. Bozdogan [4] proposed a method for choosing the number of clusters, subset selection of variables, and outlier detection in standard mixture model cluster analysis. Bozdogan [5] developed a method for mixture model cluster analysis using model selection criteria and defined a new informational measure of complexity. Soffritti [6] identified multiple cluster structures in a data matrix. Bozdogan [7] proposed a computationally feasible intelligent data mining and knowledge discovery technique that addresses the potentially daunting statistical and combinatorial problems presented by subset regression models. McLachlan and Chang [8] studied mixture modelling for cluster analysis. In their approach to clustering, the data can be partitioned into a specified number of clusters k by first fitting a mixture model with k components.
Galimberti and Soffritti [9] used model based clustering methods to identify multiple cluster structures in a multivariate data set. Durio and Isaia [10] developed a method for model selection in mixture of normal densities. Scrucca [11] used information on the dimension reduction subspace obtained from the variation on group means and, depending on the estimated mixture model, on the variation on group covariances. His method aims at reducing the dimensionality by identifying a set of linear combinations, ordered by importance as quantified by the associated eigenvalues, of the original features which capture most of the cluster structure contained in the data.
Seo and Kim [12] developed a root selection method for identifying the underlying group structure in data using finite mixtures of normal densities. Fraley et al. [13] studied a method of normal mixture modeling for model-based clustering, classification and density estimation. A model selection algorithm for mixture model clustering was defined by Erol [14]. Huang et al. [15] studied model selection for Gaussian mixture models; their method is statistically consistent in determining the number of components. They used a modified EM algorithm [16], applied to simultaneously select the number of components and estimate the mixing weights.
Galimberti and Soffritti [17] studied conditional independence for parsimonious model-based Gaussian clustering. They assumed that the variables can be partitioned into groups that are conditionally independent within components, thus producing component-specific variance matrices with a block-diagonal structure. McLachlan and Rathnayake [18] studied the number of components in terms of density estimation. Wei and McNicholas [19] used mixture model averaging for clustering. Model-based clustering of high-dimensional data was studied by Bouveyron and Brunet-Saumard [20].
A new data mining method, based on a new genetic algorithm using variable data segmentation and model selection for mixture model clustering of multivariate data, is proposed in this study. The genetic algorithm has 6 steps: (i) variable data segmentation, (ii) determining the total number of cluster centers, (iii) computing the total number of mixture models and of candidate models, (iv) obtaining candidate mixture models as binary string representations, (v) calculating parameter estimates of the possible (candidate) mixture models from the sample, and (vi) selecting the best model among the candidate mixture models. The proposed mixture model clustering based on variable data segmentation and model selection will be explained on a data set known as the Ruspini data set [21]. Akogul and Erisoglu [28] proposed a new approach for determining the number of clusters in a model-based clustering analysis. Akogul and Erisoglu [29] used information criteria for determining the number of clusters in the model correctly and effectively. Celeux et al. [30] proposed an approach to determining the number G of components in a mixture distribution in model-based clustering. Gogebakan and Erol [31] used model-based clustering of normal mixture distributions in the semi-supervised classification of clusters in the mixture model. The multivariate data set consists of two real-valued (numeric) variables, each containing four partitions, so both variables are heterogeneous.
The Method
The proposed data mining clustering method, with a genetic algorithm for mixture model clustering of multivariate data based on model selection using variable data segmentation, will be explained on the Ruspini data set [21] in the following sections.
Determination of Heterogeneous Variables in Multivariate Data for Variable Data Segmentation
A heterogeneous variable is a variable whose values contain at least two subgroups; otherwise it is considered homogeneous. Each of the two variables X_1 and X_2 in the Ruspini data set [21] is heterogeneous, with four segments each. Variable data segmentation is the first step of the genetic algorithm for the proposed mixture model clustering based on model selection. The number of partitions of each variable can be obtained by fitting a mixture of univariate normal distributions to each variable in the data set. The mixture of univariate normal distributions is of the form f(x) = Σ_{i=1}^{k} π_i φ(x; μ_i, σ_i^2), where μ_i and σ_i denote the mean and standard deviation of the component probability density functions, respectively. In order to reveal the partitions in each variable, the log-likelihood, Akaike Information Criterion (AIC) [22] and Bayesian Information Criterion (BIC) [23] values are examined for the mixtures of univariate normal distributions. The number of components in the best univariate mixture model for each variable corresponds to the number of partitions of that variable. Evaluating the results in Table 1 and Table 2, one can see that the optimal number of components is 4 for the mixture models of both X_1 and X_2. Let k_i be the number of partitions in X_i. Graphical methods such as histograms and cumulative distribution plots should also be used in determining the segmentations of each variable [24]. Probability plots and histograms showing the variable data partitions for X_1 and X_2 are illustrated in Figure 1 and Figure 2. According to the results in Table 1 and Table 2, these partitions form sixteen cluster centers in the Ruspini data set [21]; the cluster centers formed by the segmentations of the variable data are shown in Figure 3.
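As an illustration (not the authors' code), the per-variable segmentation step can be reproduced with an off-the-shelf univariate Gaussian mixture fit, selecting the component count by BIC; the candidate range k_max is an illustrative choice.

```python
# Sketch: choose the number of segments of one variable by univariate GMM + BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

def n_segments(x, k_max=6, random_state=0):
    """x: 1-D array of one variable's data. Returns the BIC-optimal number of components."""
    x = np.asarray(x).reshape(-1, 1)
    bics = []
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=random_state).fit(x)
        bics.append(gm.bic(x))
    return int(np.argmin(bics)) + 1

# For the Ruspini data this procedure is expected to return 4 for both X1 and X2.
```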
Computations for Total Number of Cluster Centers
The assumption of the proposed method is that each column and each row in Figure 3 must contain at least one cluster center. The method proposed by Servi and Erol [24] can be used to compute the minimum and maximum numbers of cluster centers, C_min = max(k_1, …, k_p) and C_max = k_1 × … × k_p, where p denotes the number of variables and k_s the number of partitions of each variable. The n × p data matrix X collects the observations, and the partition of X_1 groups its n elements into the corresponding segments. Since k_1 = k_2 = 4 for the Ruspini data set [21], the minimum number of cluster centers is 4 and the maximum number of cluster centers is 16. The partitions of the variables and the cluster centers are illustrated in Figure 3.
Observations can be assigned to the variable partitions using clustering algorithms such as the k-means algorithm [25]. Thus, variable data segmentations are obtained from both graphical methods (probability plots and histograms) and computational methods (mixtures of univariate normal distributions and k-means). The variable data partitions and their sizes for X_1 and X_2 in the Ruspini data set [21] are given in Table 3. Mean vectors and variance-covariance matrices of the candidate cluster centers are obtained for the construction of mixture models using the variable data segmentations of the Ruspini data set [21]. The mean vector of the component probability density function (here a bivariate normal density) corresponding to each candidate cluster center is formed from the corresponding segment means of X_1 and X_2. These mean vectors and variance-covariance matrices are used in the construction of the mixture models for mixture model clustering using variable data segmentation and model selection.
Computations for the Total and Possible Numbers of Mixture Models Using Cluster Centers
The total number of mixture models that can be formed from the cluster centers obtained by variable data segmentation, denoted M_Total, can be computed by the relation proposed by Erol [14] as M_Total = 2^(C_max) − 1, where C_max is as in (6); the minus-one term eliminates the case with no cluster center. For the Ruspini data set [21], M_Total = 2^16 − 1 = 65535. The number of cluster centers, the number of total mixture models, the number of possible mixture models and the number of free parameters in the mixture models are given in Table 4. Some mixture models do not satisfy the assumption that each column and each row contains at least one cluster center, so they are eliminated; the remaining mixture models are called candidate (possible) mixture models. The number of possible or candidate mixture models can be computed using the relation proposed by Cheballah et al. [26], where n and m correspond to the numbers of partitions in variables X_1 and X_2 respectively, the indices i and j run over the numbers of cluster centers, and k denotes the cases for the number of cluster centers in the mixture models. Table 4. The number of cluster centers, the number of total mixture models, the number of possible mixture models and the number of free parameters.
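The two counts quoted above (65535 total models and 41503 possible models) can be verified by direct enumeration over the 4 × 4 grid of candidate cluster centers, keeping only selections in which every row and every column contains at least one center; the short sketch below is illustrative, not the authors' implementation.

```python
# Sketch verifying the model counts for a 4x4 grid of candidate cluster centers.
from itertools import product

n_rows, n_cols = 4, 4
total = 2 ** (n_rows * n_cols) - 1          # all non-empty subsets of centers
possible = 0
for mask in product((0, 1), repeat=n_rows * n_cols):
    grid = [mask[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]
    if all(any(row) for row in grid) and all(any(col) for col in zip(*grid)):
        possible += 1
print(total, possible)   # -> 65535 41503
```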
Binary String Representation Of Possible Mixture Models Using Cluster Centers
Mixture model clustering using variable data segmentation and model selection uses a genetic algorithm, which is employed to calculate the information criteria of each candidate mixture model. The string representation of each candidate model consists of 1 and 0 digits; in Table 5, the ones and zeros indicate whether or not each cluster center is used in constructing the mixture model. Binary string representations of the possible mixture models, each corresponding to one of the 41503 possible models, are given in Table 4. For instance, the binary string representation of the saturated mixture model that uses all cluster centers is given in Table 5.
List Of Possible Mixture Models Using Cluster Centers
Each binary string representation of a candidate mixture model corresponds to one of the 41503 possible mixture models.
The general form of a mixture model with k (4 ≤ k ≤ 16) components having a binary string representation is f(x) = Σ_{i=1}^{k} π_i f_i(x; μ_i, Σ_i), where each component density function f_i is the probability density function of a bivariate normal distribution with mean vector μ_i and variance-covariance matrix Σ_i.
There are 24 possible mixture models with four components (k = 4) of the form in (11); the parameters in these mixture models are given by Equations (12), (13) and (14). Similarly, there are 7480 possible mixture models with ten components (k = 10) of the form in (11), with parameters of the same form.
Estimation of Parameters for Possible Mixture Models Using Cluster Centers
Mixture model clustering using variable data segmentation and model selection, as proposed in this study, is a data mining method with its own genetic algorithm, explained in the previous sections. Since variable data segmentation is applied to each variable in the data set, the mean vectors, variance-covariance matrices and mixing proportions for each component of the possible mixture models can be estimated directly from the sample, so the complexity of the proposed approach is lower than that of other clustering methods. Each binary string representation, as in Table 5, corresponds to one of the 41503 possible mixture models of the form (11), where k denotes the number of components in the possible mixture model. The mixing proportions are estimated from the relative sizes of the segments, the mean vectors of the component density functions are estimated by the corresponding sample means, and the variance-covariance matrices are estimated by the corresponding sample variance-covariance matrices, as in Equations (15), (16) and (17).
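A minimal sketch of these sample-based estimates, assuming observations have already been assigned to the selected cluster centers (e.g. by k-means), is given below.

```python
# Sketch of the sample-based parameter estimates for a candidate mixture model.
import numpy as np

def estimate_component_parameters(X, labels):
    """X: (n, p) data matrix; labels: component index of each observation."""
    X = np.asarray(X)
    labels = np.asarray(labels)
    n = len(X)
    params = {}
    for k in np.unique(labels):
        Xk = X[labels == k]
        params[k] = {
            "pi": len(Xk) / n,                   # mixing proportion
            "mu": Xk.mean(axis=0),               # mean vector
            "sigma": np.cov(Xk, rowvar=False),   # variance-covariance matrix
        }
    return params
```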
Computation of Information Criteria for Possible Mixture Models
The likelihood function for the mixture of multivariate normal densities is defined as L(π, μ, Σ) = Π_{j=1}^{n} Σ_{i=1}^{k} π_i f_i(x_j; μ_i, Σ_i), and the log-likelihood function is computed as log L(π, μ, Σ) = Σ_{j=1}^{n} log( Σ_{i=1}^{k} π_i f_i(x_j; μ_i, Σ_i) ). The maximum likelihood estimation method is used in mixture distributions to obtain the parameters from the data set [27]. Log-likelihood values for the possible mixtures of bivariate normal densities are computed using the estimated parameter values for the Ruspini data set [21].
Akaike's information criterion (AIC) can be computed as AIC = −2 log L(π, μ, Σ) + 2d, and the Bayesian information criterion (BIC) as BIC = −2 log L(π, μ, Σ) + d log(n), where log L(π, μ, Σ) is the value of the log-likelihood function for the possible mixture of multivariate normal densities, d is the number of free parameters in the mixture, and n is the number of observations. The number of free parameters can be computed as d = (k − 1) + kp + kp(p + 1)/2, where k is the number of components and p is the number of variables (the dimension) in the mixture model [5]. Log-likelihood, AIC and BIC values are computed from the partitions of the variables using the mean vectors and variance-covariance matrices, and are used as criteria for selecting the best mixture model of bivariate normal densities. All calculations were performed using MATLAB.
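For concreteness, the criteria can be computed as sketched below; the free-parameter count d = (k − 1) + kp + kp(p + 1)/2 is the standard count for an unrestricted k-component, p-variate Gaussian mixture and is assumed here to match the formula referenced above.

```python
# Sketch of the information criteria for a candidate mixture of multivariate normals.
import numpy as np
from scipy.stats import multivariate_normal

def information_criteria(X, pis, mus, sigmas):
    """X: (n, p) data; pis, mus, sigmas: lists of component parameters."""
    n, p = X.shape
    k = len(pis)
    dens = sum(pi * multivariate_normal.pdf(X, mean=mu, cov=sig)
               for pi, mu, sig in zip(pis, mus, sigmas))
    loglik = float(np.sum(np.log(dens)))
    d = (k - 1) + k * p + k * p * (p + 1) // 2   # free parameters
    aic = -2 * loglik + 2 * d
    bic = -2 * loglik + d * np.log(n)
    return loglik, aic, bic
```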
Selection of The Best Model In a Set of Possible Mixture Models
Selection of the best mixture model among the possible mixtures of bivariate normal densities for the Ruspini data set [21] is performed according to the information criteria, using the values of the log-likelihood function, AIC and BIC. The mixture model having the maximum log-likelihood value and the minimum AIC and BIC values is selected as the best mixture model among the 41503 possible mixture models. The string representation of the best mixture model is given in Table 6, and its number of components, log-likelihood, AIC and BIC values are given in Table 7. The best mixture model for the Ruspini data set [21] is the mixture of four bivariate normal components; it is the 12th mixture model among the 41503 possible mixture models. The scatter plot and the surface plot of the best mixture model are illustrated in Figure 4.
Conclusions
In this study, a new data mining method using a genetic algorithm for mixture model clustering based on variable data segmentation and model selection was developed and applied to the Ruspini data set. In the developed genetic algorithm, we calculated the number of candidate cluster centers and their structures resulting from the segmentation of the heterogeneous variables. All mixture models that can be formed from these candidate cluster centers, and the number of possible mixture models that satisfy the assumption, were calculated. Possible mixture models corresponding to candidate cluster centers were generated using the genetic algorithm. In order to compute the possible mixture models, a string representation of each possible mixture model was obtained. The unknown parameters of the possible mixtures of bivariate normal distributions were calculated from the sample for use in the computations. The information complexity of the proposed mixture model clustering is lower than that of other clustering methods, which is why algorithms such as Expectation-Maximization (EM) are not used in the parameter estimation computations. According to the calculated log-likelihood, AIC and BIC values, the best mixture model matching the clustering structure of the Ruspini data set was determined.
It can be heuristically stated that the partitions in the heterogeneous variable data affect and determine the number and structure of the clusters in the data set, regardless of the number of variables. The clustering method proposed in this study is developed especially for model-based clustering of big data.
As future work, the proposed method will be applied to human brain studies. The study will cover the number of human brain function centers, the magnitude of these centers, the correlations between them, and the construction of mixture models for these brain function centers of human behaviours and activity movements. Furthermore, the method can be applied to robotics, artificial intelligence and logical circuit design for decision-making applications.
"Computer Science"
] |
Tumor microenvironment-responsive nanoparticles for cancer theragnostic applications
Background Cancer is one of the deadliest threats to human health. Abnormal physicochemical conditions and dysregulated biosynthetic intermediates in the tumor microenvironment (TME) play a significant role in enabling cancer cells to evade or resist conventional anti-cancer therapies such as surgery, chemotherapy and radiotherapy. One of the most important challenges in the development of anti-tumor therapy is the successful delivery of therapeutic and imaging agents specifically to solid tumors. Main body The recent progress in the development of TME-responsive nanoparticles offers promising strategies for combating cancer by making use of common attributes of tumors, such as their acidic and hypoxic microenvironments. In this review, we discuss the prominent strategies utilized in the development of tumor microenvironment-responsive nanoparticles and the modes of release of their therapeutic cargo. Conclusion Tumor microenvironment-responsive nanoparticles offer a universal approach for anti-cancer therapy.
Background
Cancer is one of the leading causes of mortality worldwide. Chemotherapy is one of the clinically practiced treatments for cancer. Over the past few decades, efforts have been made to deliver small-molecule anticancer drugs to solid tumors; however, the therapeutic efficacy of these drugs is limited by many factors, including low bioavailability, poor water solubility and poor targeting to the tumor region [1]. The introduction of nanotechnology for cancer treatment has prompted the development of various nanomedicines, which are more effective and safer than conventional cancer therapies [2]. In spite of extensive research on developing tumor-targeted nanomedicines, many tumors are still characterized by poor diagnosis and high mortality [3].
A major challenge faced by these cancer nanomedicines is their efficient delivery to the target solid tumors [4]. The systemic delivery of nanoparticles to the tumor site used in nanomedicine is mainly based on "active" and "passive" mechanisms [5]. Nanoparticles with long systemic circulation properties tend to accumulate in the tumor interstitial space through a passive mechanism, where selective accumulation is mainly achieved by an enhanced permeability and retention (EPR) effect and is highly dependent on the leaky vasculature and impaired lymphatics intrinsic in fast-growing tumors. In active mode, the periphery of the nanoparticles is conjugated or decorated with molecular ligands such as antibodies, peptides, biological proteins and cell-specific ligands, which may enhance the cellular uptake of nanoparticles through receptor-mediated endocytosis [6]. The active targeting of nanoparticles with targeting ligands leads to increased drug accumulation at the target tumor site, but the actual effect is limited by various tumor microenvironmental factors such as tumor heterogeneity, hypoxia and endosomal escape [7].
In recent decades, various stimuli-responsive polymers and nanoparticles that can exhibit a dramatic change in physicochemical properties in response to environmental factors, such as pH, temperature, light, reduction/oxidation, enzymes, have been designed and are now often utilized for targeted drug delivery technology. In addition to enhanced accumulation in the tumor sites mediated by active and passive targeting mechanisms, stimuli-responsive nanoparticles can facilitate augmented drug release, efficient and uniform distribution of therapeutic drug throughout the tumor and enhanced cellular uptake in response to the tumor microenvironment (TME) [6].
Compared to normal tissue, the TME possesses several unique characteristics, such as acidic pH [8][9][10][11][12], hypoxia [6,[13][14][15][16], and higher levels of certain enzymes [17][18][19][20]. Compared to traditional nanoparticles that rely on active and passive mechanisms for tumor targeting, TME-responsive nanoparticles have several advantages. Active targeting depends on the specific interaction of a targeting moiety and/or ligands with surface receptors present on the cancer cells. The distribution and density of these receptors vary among cancer cell populations, which limits the broader applicability of such nanoparticles. TME-responsive nanoparticles depend on general physiological features found in all tumors, thus offering a universal approach for anti-cancer therapy, such as the site-specific release of anti-cancer drugs via TME-associated abnormal pH, hypoxia, enzymes, the redox environment and reactive oxygen species (ROS). This review describes the current status of TME-responsive nanoparticles and their functional mechanisms as exploited for targeted cancer therapy. It begins with a brief description of the common attributes of the TME, followed by nanoparticles activated by the TME. Representative examples of TME-activatable nanoparticles developed with enhanced tumor specificity and therapeutic efficacy by exploiting the unique physiological characteristics of the TME (Scheme 1) are summarized.
Targeting of common attributes of the TME

As briefly mentioned above, the TME possesses a variety of unique characteristics that can be utilized for the development of TME-targeted nanoparticles (Table 1). The extracellular pH in the TME is usually more acidic (pH 6.5 to 6.9) than the physiological pH of normal tissue (7.2 to 7.5) [21]. This acidic TME is due to the higher glycolysis rate of cancer cells, which obtain the energy required for survival by converting glucose into lactic acid [6]. The pH variation in tumor cells may play an important role in designing pH-responsive cancer-targeting systems. Another unique characteristic is hypoxia, wherein cells residing deep in the tumor mass are deprived of oxygen [22] due to the irregular vasculature networks inside the solid tumor [23,24]. The cells in these hypoxic regions proliferate more slowly than well-oxygenated cells, and these slow-growing cells are less susceptible to conventional anti-proliferative drugs. In addition to pH and hypoxia, the tumor microenvironment also shows altered expression of certain enzymes within tumors, which can be utilized for the TME-specific release of therapeutics [6]. Most enzymes overexpressed in the TME are from the protease family, such as the matrix metalloproteinases (MMP), or from the lipase family, such as phospholipase A2 [25][26][27]. The specificity of enzymes for their substrates has led to the development of enzyme-responsive nanomaterials with potential application to targeted delivery. Tumor cells in the TME experience increased oxidative stress due to elevated levels of superoxide anion radicals, hydroxyl radicals and hydrogen peroxide [28]. To overcome this oxidative stress, tumor cells usually upregulate their reduction potential by expressing redox species such as superoxide dismutase (SOD) and glutathione (GSH). Due to the upregulated redox level in tumors, the overall potential (oxidative/reductive) in the TME is high. This dysregulation of oxidation and reduction potentials in the TME makes tumors excellent candidates for designing TME-targeted nanoparticles. In addition, cancer cells possess elevated levels of reactive oxygen species (ROS) compared to normal cells because of the aerobic metabolism caused by oncogenic transformation [29]. All these endogenous TME stimuli offer a great opportunity for the development of TME-activatable nanoparticles. Scheme 1 summarizes the unique characteristics of the TME used to develop TME-responsive nanoparticles.
Nanoparticle activation by TME-associated abnormal pH A variety of pH-sensitive nanoparticles have been designed in recent decades and have characteristic functionalities in the molecular structure, where pKa values are close to the tumor interstitial pH. When these nanoparticles reach tumors where the microenvironmental pH is slightly acidic, a pH-dependent structural transformation occurs. The acidic environment at the tumor site triggers the protonation of pH-sensitive moieties, thereby disrupting the hydrophilic-hydrophobic equilibrium within the nanoparticle, in turn causing structural transformation and the release of therapeutic cargo loaded inside (Fig. 1). Generally, pH-responsive nanoparticles are fabricated either using acid-sensitive linkers or ionizable groups [30].
Fig. 1 Schematic illustration of pH activation of nanoparticles by the tumor microenvironment

pH-sensitive drug delivery systems

The pH-dependent property of pHis is due to the presence of lone-pair electrons on the unsaturated nitrogen in the imidazole group of pHis. Our group previously reported a variety of pHis-based polymeric micelles for the delivery of doxorubicin (DOX) [30][31][32][33]. Poly(ethylene glycol) methyl ether acrylate-block-poly(L-lysine)-block-poly(L-histidine) triblock co-polypeptides were synthesized for pH-responsive drug delivery. The nanoparticles were found to be stable at physiological pH (7.4) but were dramatically destabilized at acidic pH due to the presence of the pHis blocks [33]. The pH-induced destabilization of the nanoparticles enables the controlled release of DOX, followed by a dose-dependent cytotoxicity in murine cancer cells. Nanoparticles have also been designed to exhibit a pH-dependent change in surface charge. One of the most commonly investigated systems is based on zwitterionic polymers, as they have cationic and anionic groups that control surface charge in response to pH. At acidic pH these zwitterionic polymers carry a positive charge, and at basic pH a negative charge. At neutral pH they are overall neutral, with balanced populations of positive and negative components, and they become more hydrophobic. Upon entering tumor cells, however, the balance between positive and negative charges is broken, causing conformational changes that facilitate drug release in tumor cells. Kang et al. [34] have reported the fabrication of a tumor microenvironment-responsive theragnostic agent with a pH-dependent fluorescence on/off property. The nanoparticles were constructed by encapsulating a photothermal dye (IR 825) in a carbonized zwitterionic polymer. Before accumulating at the tumor site, these nanoparticles display quenched fluorescence due to hydrophobic interactions at neutral pH and π-π stacking. The slight change in pH in the TME alters the charge of the nanoparticles, leading to the release of IR 825 and recovery of the fluorescence. These types of nanoparticles can be used simultaneously for diagnosis and photothermal therapy.
pH-responsive nanoparticles have also been developed by conjugating nanocarriers with acid-labile linkages such as hydrazone [35,36], orthoester [37,38], imine [39,40] and phosphoramidate [41], whose hydrolysis ensures rapid release of the drug. Liao et al. [42] reported the synthesis of tumor-targeting, pH-responsive nanoparticles for the enhanced delivery of DOX. The nanoparticles were prepared through covalent bonding of DOX to a hyaluronic acid (HA) backbone via a hydrazone linkage. In aqueous solution, the hyaluronic acid-hydrazone-doxorubicin conjugate (HA-hyd-DOX) could self-assemble into nanoparticles. Active targeting of the nanoparticles was achieved through receptor-mediated binding of HA to CD44, which is overexpressed in most cancer cells. These types of polymeric prodrugs can selectively release the drug in response to changes in pH. One major drawback of pH-responsive nanoparticles is their non-responsiveness in the perivascular region, because the acidic pH needed for responsiveness is found in regions far from the blood vessels. Moreover, the difference in pH between normal and tumor tissues is not always large enough to generate the responsiveness.
Nanoparticle activation by hypoxia
Due to the central role of hypoxia in enhancing tumor angiogenesis, metastasis, epithelial-to-mesenchymal transition, tumor invasiveness and suppression of immune reactivity [23], there has been great interest in the development of nanoparticles that can target the hypoxic regions within the tumor. For example, He et al. [43] reported the fabrication of dual-sensitive nanoparticles with hypoxia- and photo-triggered release of an anticancer drug (Fig. 2). The authors developed dual-stimuli nanoparticles through the self-assembly of polyethyleneimine-nitroimidazole (PEI-NI) micelles, further co-assembled with Ce6-linked hyaluronic acid (HC). Hypoxia-mediated activation was achieved by the incorporation of nitroimidazole (NI), a hypoxia-responsive electron acceptor. The hydrophobic NI segments are converted to hydrophilic 2-aminoimidazole under hypoxic conditions, thereby aiding in the release of the anticancer drug (doxorubicin, DOX) loaded inside the nanoparticles.
Another hypoxia-sensitive moiety is the azobenzene group. The azobenzene (AZO) group was introduced between the polyethylene glycol (PEG) and PEI for the construction of nanocarrier for the delivery of siRNA [44]. When these particles entered into hypoxic TME, the azobenzene bond was cleaved to trigger de-shielding of the PEG coating and the subsequent release of PEI/ siRNA nanoparticles. The exposed positive charge on the particles further facilitated the enhanced cellular uptake of PEI/siRNA nanoparticles. Xie et al. [45] reported the development of hypoxia responsive nanoparticles for the codelivery of siRNA and DOX. In this study, polyamidoamine (PAMAM) dendrimer was conjugated to PEG using AZO, which is a hypoxia-sensitive linker to form PAMAM-AZO-PEG (PAP). DOX was loaded into the hydrophobic core of PAMAM, and hypoxia-inducible factor 1a (HIF-1a) siRNA was electrostatically loaded onto the surface of PAMAM through ionic interactions between the anionic siRNA and amine groups of PAMAM. The PEG in PAP would prevent the nanoparticles from opsonization and prolong their circulation time in the blood. Upon reaching the tumor and exposure to hypoxic TME, PEG groups would be detached from the PAMAM surface due to the breakage of the AZO group to amino aromatics, causing the exposure of positively charged PAMAM. Once PAMAM has been taken up by tumor cells, PAMAM escapes from endosomes through the proton pump effect and releases the DOX and HIF-1a siRNA.
Yang et al. [46] reported a one-pot synthesis of hollow silica nanoparticles encapsulating catalase (CAT), with Ce6 doped into the silica lattice. CAT is a water-soluble H2O2-decomposing enzyme that triggers the decomposition of H2O2 to H2O and O2. The nanoparticles were further modified with a mitochondria-targeting moiety ((3-carboxypropyl)triphenylphosphonium bromide, CTPP) and a pH-responsive charge-convertible polymer through electrostatic interaction. Upon reaching the acidic tumor microenvironment, the polymeric coating undergoes charge conversion from negative to positive, thereby enhancing cellular internalization. The mitochondria-targeting moiety helps enhance photodynamic-therapy-induced cell death, and the encapsulated catalase decomposes the tumor's endogenous H2O2, thereby overcoming the hypoxic environment in the tumor and enhancing the photodynamic therapy of solid tumors. These types of smart nanoparticles can overcome the limitations of conventional photodynamic therapy. Despite the advances in the development of hypoxia-responsive nanoparticles, getting these nanoparticles into hypoxic regions is quite challenging, as these regions are typically located deep inside the tumor with little vasculature, where mass transport occurs through diffusion. For most nanoparticle systems the diffusion rate would be insufficient within solid tumors, and hence nanocarriers exploiting the higher diffusion rates of small molecules would be a better option for carrying and releasing hypoxia-activated prodrugs within the TME.
Nanoparticle activation by enzymes
The TME also has upregulated levels of enzymes such as matrix metalloproteinases (MMPs), which are predominantly overexpressed in tumor tissue [19]. The upregulated levels of MMP enzymes in the TME make them the most common target for enzyme-responsive TME nanoparticles. Sun et al. [47] reported the development of MMP-2-activatable nanoprobes, which can be used for selective and specific intracellular imaging of the tumor (Fig. 3). The nanoprobe was constructed through the self-assembly of hexahistidine-tagged (His-tagged) fluorescent protein and nickel ferrite nanoparticles. The nickel ferrite nanoparticles functioned as protein binders of the His-tagged fluorescent protein and as fluorescence quenchers. The nanoprobe was reported to be turned on by the presence of MMP-2, leading to enhanced cellular uptake and the restoration of fluorescence, thereby enabling the visualization of nanoparticles within tumor tissue. Ma et al. [48] reported the fabrication of a polymeric conjugate for mitochondria-targeted paclitaxel (PTX) delivery. The polymeric conjugate consists of a PAMAM-based dendrimer core to which triphenylphosphine and PTX were conjugated through an amide bond and disulfide bonds, respectively. To enhance the circulation time of the polymeric conjugate in the blood, PEG was conjugated via the MMP-2-sensitive peptide (GPLGIAGQ). The conjugates accumulate in tumor tissue through the EPR effect. Once the conjugate enters tumor cells, the PEG layer is detached from PAMAM by cleavage of the MMP-2-sensitive peptide through the action of MMP-2. The conjugate then targets the mitochondria via triphenylphosphine, and PTX is released in the cytoplasm.
Ansari et al. [49] reported the synthesis of theranostic nanoparticles that offer enzyme-specific drug release and in vivo magnetic resonance imaging (MRI). The nanoparticles were synthesized by conjugating ferumoxytol (FDA-approved iron oxide nanoparticles) to an MMP-14-activatable peptide conjugated to azademethylcolchicine (ICT), forming CLIO-ICT. Upon reaching the tumor, CLIO-ICT is converted from a non-toxic to a toxic form by the action of MMP-14, thereby releasing the potent ICT. This type of nanoparticle also enables real-time monitoring of the accumulation and localization of the drug at the tumor site through MRI.
Another enzyme whose levels are known to be upregulated in various cancer subtypes is β-galactosidase (β-gal) [50]. Sharma et al. [50] developed a theranostic prodrug for the treatment of colon cancer using receptor-mediated targeting and enzyme-responsive activation. In this study, β-gal was used both for targeting asialoglycoprotein (ASGP) receptors and for activation of the prodrug. When delivered, these nanoparticles would be preferentially taken up by colon cancer cells through receptor-mediated endocytosis, and the anticancer drug would be released by enzymatic activation. One of the major concerns in enzyme-responsive therapy is the heterogeneous expression of the target enzyme in different types of cancer and the difference in the level of the target enzyme at different stages of cancer. To develop more effective and precise enzyme-responsive delivery vehicles, a better understanding of the spatial and temporal patterns of the enzyme at the target site is needed.
Nanoparticle activation by the redox environment
The intracellular GSH levels in the TME are in the range of 0.5-10 × 10⁻³ M, which is about four times higher than the GSH levels in normal tissues [28]. Intracellular compartments such as the cytosol, mitochondria, and cell nucleus are known to contain a much higher concentration of GSH than extracellular fluids. Such drastic differences in GSH level between the TME and normal tissue can be exploited as a promising platform to design nanoparticles that selectively release therapeutic drugs in a triggered fashion after delivery to tumor cells [32,51]. The introduction of bio-reducible disulfide bonds has attracted much interest in the design of redox-responsive nanoparticles that can release their payloads efficiently in intracellular reductive environments.
Sun et al. [52] reported the synthesis of a redox-sensitive drug delivery system for the treatment of laryngopharyngeal carcinoma (Fig. 4). The redox-sensitive amphiphilic polymer was synthesized by conjugating heparosan with deoxycholic acid through disulfide bonding. The polymer formed self-assembled nanoparticles that can disassemble via reductive cleavage of the disulfide bonds and trigger drug release in the intracellular environment. Our group has also reported the synthesis of zwitterionic polymer-based hybrid nanoparticles with glutathione and endosomal pH responsiveness [31,32]. GSH-responsive drug delivery systems can selectively deliver the drug in the TME and enhance the antitumor efficacy of the nanoparticle.
Fig. 4 Schematic illustration of the self-assembled micelle and GSH-triggered release of DOX. Reproduced with permission [52]. Copyright © 2018, Elsevier.
Zhou et al. [53] reported the synthesis of a redox-sensitive drug delivery system based on dextran and indomethacin. The redox-responsive polymer (DEX-SS-IND) was fabricated by introducing a disulfide bridge (cystamine) between dextran and indomethacin. The anticancer drug DOX was encapsulated inside the core-shell micelles formed by self-assembly of DEX-SS-IND. In a reducing environment, DEX-SS-IND depolymerizes and releases DOX. The in vivo antitumor efficacy of DOX-loaded DEX-SS-IND micelles was higher than that of DOX loaded in a non-redox-responsive polymer. Xia et al. [54] reported the synthesis of polycarbonate-based core-crosslinked redox-responsive nanoparticles (CC-RRNs) for the targeted delivery of DOX. CC-RRNs were synthesized by the click reaction between PEG-b-poly(MPC)n (PMPC), α-lipoic acid, and 6-bromohexanoic acid. The disulfide-crosslinked core is formed by the addition of a catalytic amount of dithiothreitol (DTT). CC-RRNs demonstrated controlled release of DOX under redox conditions. Such multifunctional responsive systems hold the key for future developments in TME-assisted nanomedicine. However, it should be noted that the exact intracellular fate of redox-sensitive nanoparticles is not clearly understood. Studies have reported that cell-surface thiols can affect the internalization of disulfide-conjugated peptides [55]. Hence, a better understanding of the intracellular trafficking of these nanoparticles is required for the development of nanoparticles activated by the redox environment.
Reactive oxygen species (ROS) responsive nanoparticles
In cancer cells, the level of ROS is higher than in normal cells due to the constant production of ROS as byproducts of the aerobic metabolism driven by oncogenic transformation [29]. This higher level of ROS in tumors can be utilized for the development of ROS-responsive nanoparticles, which could enhance site-specific drug release. The most commonly used characteristic groups employed for the development of ROS-responsive systems are boronic ester [56], thioketal [57], and sulfide [58] groups. Such ROS-responsive systems can lead to the development of drug carriers for the efficient delivery of chemotherapy.
Sun et al. [57] developed ROS-responsive micelles for enhanced drug delivery applications. A ROS-sensitive thioketal linker with a π-conjugated structure was incorporated into methoxy poly(ethylene glycol)-thioketal-poly(ε-caprolactone) (mPEG-TK-PCL) micelles. The micelles were formed through the self-assembly of mPEG-TK-PCL, and DOX was then loaded through physical encapsulation. The DOX-loaded mPEG-TK-PCL micelles demonstrated enhanced anticancer activity owing to the rapid cleavage of the thioketal linker in the presence of the increased ROS levels in cancer cells, thereby accelerating drug release and augmenting cancer cell inhibition. Xu et al. [1] developed a ROS-responsive prodrug through the thioketal linkage of PEG and DOX (Fig. 5). The prodrug was then used as a drug carrier to further encapsulate DOX and form DOX-loaded prodrug micelles. The DOX-loaded prodrug micelles demonstrated superior anti-tumor efficacy over non-responsive DOX-loaded poly(ethylene glycol)-block-polycaprolactone (PEG2k-PCL5k) micelles.
Yu et al. [59] reported the synthesis of chalcogen-containing polycarbonates for ROS-responsive PDT. The ROS-responsive polycarbonate was prepared by the ring-opening polymerization of cyclic carbonate monomers bearing ethyl selenide, phenyl selenide, or ethyl telluride groups. PEG was employed as a macro-initiator to prepare amphiphilic block copolymers, which form spherical nanoparticles of less than 100 nm. These nanoparticles completely dissociate in the presence of ROS while remaining stable in neutral phosphate buffer. To test the ROS-responsive drug release potential of these nanoparticles, DOX and Ce6 were loaded. Upon laser irradiation, Ce6 generates singlet oxygen (¹O₂), which triggers the degradation of the nanoparticle, resulting in faster release of DOX. Even though numerous ROS-responsive nanoparticles have been reported for biomedical applications, several challenges still need to be addressed, such as the biocompatibility of the ROS-sensitive linker used and the stability of the linker during circulation and in normal cells. Since ROS levels change with variations in patients and disease conditions, the selection of linkers and carriers should be considered carefully for personalized applications.
Multi-stimuli responsive nanoparticles
To obtain greater specificity and efficacy, the various stimuli-responsive drug delivery systems discussed above are often used in combination. Xiong et al. [60] reported the synthesis of pH/redox-sensitive micelles for the delivery of DOX and gold nanoparticles (GNPs). The micelles comprise an amphiphilic copolymer of poly(ε-caprolactone)-ss-poly(2-(dimethylamino)ethyl methacrylate) (PCL-SS-PDMAEMA). The PDMAEMA protonates under acidic conditions, thereby enhancing the hydrophilicity and swelling of the micellar shell, while the disulfide bond is cleaved when exposed to abundant GSH, causing disassembly of the micellar structure. DOX was loaded in the hydrophobic PCL core and GNPs in the hydrophilic PDMAEMA region. The GNPs work as a contrast agent for tumor imaging and diagnosis through computed tomography (CT). The core-shell micelles showed better drug release in tumor cells by pH-triggered swelling and GSH-triggered disassembly.
Chen et al. [61] demonstrated pH/H2O2-responsive nanoparticles to modulate tumor hypoxia. In this study, human serum albumin (HSA) was pre-modified with either the photosensitizer chlorin e6 (Ce6) or a prodrug of cisplatin, and the HSA was then used as a template for the formation of manganese dioxide (MnO2). Under acidic conditions, MnO2 decomposes and reacts with H2O2 to produce O2, which helps overcome the tumor hypoxia-associated resistance to PDT. Upon intravenous injection, the nanoparticles accumulate in the tumor region through the EPR effect and then degrade into smaller HSA complexes, which possess better intra-tumor penetration ability.
Conclusion and Perspectives
In recent decades, various tumor-targeting technologies have been developed as a compromise between efficacy and safety. Efforts to design nanoparticles that selectively accumulate at tumor sites by passive and active targeting mechanisms have improved cancer treatment, but only with limited success. Recent advances in the development of TME-targeted nanoparticle-based therapy have been summarized in this review. TME-targeted nanoparticle-based therapies exploit the unique characteristics of the TME, such as acidic pH, hypoxia, redox species, and upregulated levels of enzymes and reactive oxygen species. To further develop nanoparticles with higher theragnostic performance and minimal harmful side effects from anti-cancer therapy, combining TME-targeted nanoparticles with immunotherapy would be beneficial. | 5,541.2 | 2018-08-23T00:00:00.000 | [ "Biology", "Medicine" ] |
Preparation and Reflectance Spectrum Modulation of Cr2O3 Green Pigment by Solution Combustion Synthesis.
An amorphous precursor of Cr2O3 pigment was prepared via solution combustion synthesis. After calcination at 1000 °C for 1.0 h, the precursor was converted into well-crystallized submicron Cr2O3 crystals with uniform particle distribution and low aggregation. Furthermore, Ti, Co, and Fe were doped into the lattice of Cr2O3, and the effects of these dopants on reflectance spectrum modulation as well as on chromatic properties were investigated in detail. As a result, a series of Cr2O3 pigment samples sharing similar spectra with green plants within the wavelength range from 400 to 1600 nm could be successfully fabricated.
Introduction
In recent years, chromic oxide (Cr2O3) green pigment has been extensively utilized owing to its high stability, pronounced tinting strength, good migration resistance, and low cost [1][2][3]. Notably, sharing a similar color with green plants, it has long been used to manufacture camouflage coatings [4]. However, an analysis of diffuse reflectance spectra confirms that the spectral reflectance characteristics of chromic oxide green pigment are clearly different from those of natural plants. Figure 1 shows the approximate reflectance spectrum of green plants. For most natural plants, the intrinsic green color is ascribed to the characteristic reflection peak at 550 nm on the reflectance spectrum. The peak value depends strongly on the plant species, the leaf age, and the chlorophyll content, with fluctuations of 10% to 20% or even more. Due to the characteristic absorption of chlorophyll, two valleys (around 450 nm and 680 nm) are distinguished in the visible bands of the reflectance spectrum. By marked contrast, an upsurge of reflectance is found when the wavelength increases from 680 to 800 nm, after which the reflectance stays at a high level until the wavelength reaches 1300 nm, so that a near-infrared (NIR) platform is formed. In reality, the average reflectance of the NIR platform varies from 40% to 70% depending on the species of plant. Besides, moisture in leaves can selectively absorb light in the region of 1300-1600 nm, so that a valley is formed in this region.
Recently, it has been reported that the reflectance spectrum of Cr2O3 can be modulated by doping Al, Mo, La, Pr, V, Ti, Fe, etc. [6][7][8]. The effects of correlative factors, including particle size, uniformity, crystal boundaries, and crystal defects, on NIR reflectance have been studied at the same time. In summary, crystal boundaries and defects prove to be non-negligible factors that can absorb near-infrared light and decrease NIR reflectance [8]. Particle size and distribution affect diffuse reflection following the Fresnel formula, since the particle sizes of most samples prepared in these works are of the same order of magnitude as the wavelength of visible and near-infrared light [9].
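The landmark features just described (the 550 nm peak, chlorophyll valleys at 450 nm and 680 nm, an NIR platform up to 1300 nm, and a moisture valley beyond it) can be turned into a simple numerical target spectrum for comparing candidate pigments. The sketch below is a minimal illustration of this idea; the breakpoint reflectance values are assumptions chosen within the ranges quoted above, not measured data.

```python
import numpy as np

# Piecewise-linear stand-in for a green-leaf reflectance spectrum built from the
# landmark features described above. Breakpoint values are illustrative assumptions.
landmarks_nm = [400, 450, 550, 680, 800, 1300, 1600]
reflectance  = [0.08, 0.05, 0.15, 0.04, 0.50, 0.50, 0.15]

wavelengths = np.arange(400, 1601, 5)                       # 5 nm grid, 400-1600 nm
target = np.interp(wavelengths, landmarks_nm, reflectance)  # interpolated target curve

# A band-averaged value is a convenient scalar for comparing a pigment with the target.
nir_band = (wavelengths >= 800) & (wavelengths <= 1300)
print(f"mean reflectance on the NIR platform: {target[nir_band].mean():.2f}")
```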
However, most previous studies mainly focused on promoting the NIR reflectance of Cr2O3 in order to provide a highly NIR reflective cool pigment. The comprehensive influence of these dopants on visible NIR diffuse reflectance spectra was neglected. Besides, the difficulties in controlling particle size as well as in the elimination of crystal defects reduced the experimental accuracy of previous studies.
In this study, an ideal chromic oxide green pigment consisting of submicron Cr2O3 crystals was introduced at first. To be more precise, the submicron Cr2O3 crystals are expected to be separated and fully crystallized with low aggregation. Under such conditions, adverse effects of the crystal boundary or defects on NIR reflectance could be ignored. Therefore, the preparation of submicron Cr2O3 crystals could offer a foundation for the further research of reflectance spectrum modulation. After that, Cr2O3 pigments were prepared via an improved solution combustion synthesis method [10,11]. Ti, Co, and Fe were doped into the lattice of Cr2O3, and the effects of these dopants on reflectance spectroscopy modulation as well as chromatic properties were investigated. Finally, a series of Cr2O3 pigments sharing similar spectra within the wavelength from 400 to 1600 nm with green plants were prepared.
Taking the synthesis of 0.1 mol Cr2O3 as an example, 80 mL of deionized water was heated to 60 °C, and the initial solution was acquired after 7.0 g of citric acid and 10 g of urea were dissolved. A certain amount of tetrabutyl titanate was dispersed in 12.0 g of PEG 200, and the mixture was added dropwise to the above solution. Different amounts of Co(NO3)2·6H2O, Fe(NO3)3·9H2O, and 0.2 mol Cr(NO3)3·9H2O were then dissolved in the solution in order. The molar ratios of Cr, Ti, Co, and Fe for each sample are listed in Table 1; from these, the required amounts of tetrabutyl titanate, cobaltous nitrate, and ferric nitrate can be calculated. During the whole process, stirring was applied to promote dissolution, and the heating temperature was kept at 60 °C. The mixed solution was then evaporated and concentrated until half of the volume was left. Solution combustion synthesis was conducted with the aid of a self-designed device called a self-propagating combustion furnace. Figure 2a shows the diagrammatic sketch of this furnace, in which the flame nozzle is the core component. Equipped with a corundum tube that is highlighted in the schematic, the details of the flame nozzle are displayed in Figure 2b. Cr2O3 powders were synthesized as follows. Firstly, the corundum tube was heated to 500 ± 1 °C, and then the concentrated solution stored in the reservoir was pumped into the silicone tube at a constant velocity (60-120 mL/min) using a peristaltic pump. A quartz tube inserted in the hole of a corundum plug acted as the bridge between the silicone tube and the corundum plug, protecting the silicone tube from thermal damage. No spraying equipment was involved in this research. Once the solution entered the tube, continuous combustion synthesis was ignited due to the high inflammability of the mixture of nitrates and organics. With the use of the flame nozzle, powders generated in the flame are pushed out of the corundum tube (shown in Figure 2b) immediately, so that the combustion synthesis process can be continued without blocking issues. The powders collected in the recovery tank are described as the precursors of Cr2O3 because subsequent heat treatment is still required to ensure complete reaction and thereby improve the crystallinity. In this research, the precursor of S1 was divided into two batches and then calcined at 900 °C and 1000 °C for 1.0 h, respectively, with a heating rate of 10 °C/min. The calcination temperatures of the remaining samples were fixed at 1000 °C.
The phase identification of the precursor and Cr2O3 samples was carried out by powder X-ray diffraction (XRD, Rigaku, RINT-2000, Osaka, Japan) using Cu-Kα radiation. The morphology of the samples was investigated using a scanning electron microscope (Merlin, Carl Zeiss, Oberkochen, Germany). The visible-NIR diffuse reflectance spectra (400-2500 nm) of the Cr2O3 samples were measured by a UV-vis-NIR spectrophotometer (LAMBDA750, PerkinElmer, Waltham, MA, USA). The diffuse reflectance spectra (400-700 nm) and colorimetric values (reported in the CIE L*a*b* colorimetric system) of the Cr2O3 samples as well as natural plant leaves were measured on an automatic differential colorimeter (SC800, CHN Spec, Hangzhou, China). All the optical photographs presented in this paper were taken with a digital camera (D5600, Nikon, Tokyo, Japan).
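As a small illustration of the reagent bookkeeping described in the synthesis procedure above, the sketch below computes the masses of the metal sources for one 0.1 mol Cr2O3 batch from a set of dopant molar fractions. The dopant fractions are hypothetical placeholders (Table 1 is not reproduced here), and the molar masses are nominal handbook values.

```python
# Reagent masses for one 0.1 mol Cr2O3 batch (0.2 mol metal cations in total).
# Dopant fractions are hypothetical placeholders for a Table 1 entry; molar
# masses are nominal handbook values in g/mol.
molar_mass = {
    "Cr(NO3)3.9H2O": 400.15,
    "Co(NO3)2.6H2O": 291.03,
    "Fe(NO3)3.9H2O": 404.00,
    "Ti(OC4H9)4":    340.32,   # tetrabutyl titanate
}
source = {"Cr": "Cr(NO3)3.9H2O", "Co": "Co(NO3)2.6H2O",
          "Fe": "Fe(NO3)3.9H2O", "Ti": "Ti(OC4H9)4"}

total_cation_mol = 0.2                             # 0.1 mol Cr2O3 -> 0.2 mol cations
fractions = {"Ti": 0.04, "Co": 0.04, "Fe": 0.02}   # hypothetical molar fractions
fractions["Cr"] = 1.0 - sum(fractions.values())    # balance with chromium

for element, x in fractions.items():
    grams = x * total_cation_mol * molar_mass[source[element]]
    print(f"{source[element]:>14s}: {grams:6.2f} g")
```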
Powder X-Ray Diffraction Analysis
The XRD patterns of the Cr2O3 samples calcined at 900/1000 °C and the precursor without calcination are depicted in Figure 3. The absence of peaks in the precursor indicates a typical amorphous structure, which is different from the results in similar studies of solution combustion synthesis [10,11]. In this work, the additions of citric acid, urea, and PEG 200 were optimized by multiple pre-experiments, and then stable combustion at lower temperatures was created. With the utilization of a flame nozzle in the combustion furnace, the precursor generated can stay in the high-temperature area of the flame for merely seconds before it is pushed out of the nozzle. The lower combustion temperature and the short time exposed to the flame cannot guarantee the crystallization of Cr2O3; thus, the amorphous structure is kept in the precursor.
Figure 3. XRD patterns of Cr2O3 samples (S1-900, S1-1000, S3, S6, and S12) and the precursor (S1), with an emphasized view of the shift of the reflection peaks.
After calcination at 900/1000 °C for 1.0 h, the XRD patterns of the undoped Cr2O3 samples (S1-900, S1-1000) are in precise agreement with Cr2O3 (eskolaite, PDF reference pattern: 01-072-1207). Figure 3 also illustrates the XRD patterns of the doped Cr2O3 samples (S3, S6, S12) calcined at 1000 °C, where all the prominent diffraction peaks were indexed as Cr2O3 (eskolaite, PDF reference pattern: 01-072-1207), indicating that the addition of small amounts of Ti/Co/Fe does not significantly change the phase composition of Cr2O3. Besides, an emphasized view of the patterns from 32.5 to 37.5 degrees displays a distinct shift of the peaks toward lower angles after Ti/Co/Fe doping. From the periodic table of elements, it is easy to see that the atomic radii of Ti, Cr, Fe, and Co decrease sequentially. The common shift trend of the diffraction peaks for the S3, S6, and S12 samples is probably because the 4 mol% Ti among the dopants plays a major role during the doping process. Based on the Bragg equation, it can be inferred that the interplanar spacing increased when Cr was substituted by Ti, resulting in the shift of the diffraction peaks to lower angles.
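The argument from the Bragg equation can be made concrete with a short calculation. In the sketch below, the peak position of the undoped sample and the size of the shift are illustrative assumptions (a reflection near 33.6° 2θ for Cu Kα radiation, shifted by 0.1° to lower angle); the point is only that a shift to lower angle implies a larger interplanar spacing.

```python
import math

# Bragg's law: lambda = 2 d sin(theta). A reflection that moves to lower 2-theta
# therefore corresponds to a larger interplanar spacing d (lattice expansion).
wavelength = 1.5406                                # Cu K-alpha, in angstrom

def d_spacing(two_theta_deg):
    return wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

d_undoped = d_spacing(33.60)                       # assumed undoped peak position
d_doped   = d_spacing(33.50)                       # assumed 0.10 deg shift to lower angle
print(f"d(undoped) = {d_undoped:.4f} A, d(doped) = {d_doped:.4f} A, "
      f"expansion = {100.0 * (d_doped / d_undoped - 1.0):.2f} %")
```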
Morphological Analysis
Figure 4 gives information about the morphology of both the precursor and the calcined powders. Figure 4a shows that the precursor has an amorphous structure, which matches well with the XRD result. A series of pores are distributed randomly, and a plate-like structure is more likely to be formed at this stage. Figure 4b,c display the morphology of S1 calcined at different temperatures. Well-defined sub-micron crystalline grains with slight aggregation are observed in both samples, while the grain size of S1-1000 is larger than that of S1-900, indicating that a higher calcination temperature is beneficial in accelerating the growth of Cr2O3 grains. Compared with Cr2O3 samples prepared by hydrogen reduction of chromite ore [1] and by thermal decomposition of chromium hydroxide [12,13], the serious aggregation and broad grain size distribution are efficiently improved. From Figure 4d, we can see that the morphology of S12 is similar to that of S1-1000, with the grain size estimated to be about 0.4 μm, implying that Fe/Co/Ti doping has no negative effect during this process. Therefore, solution combustion synthesis proves to be a good approach to obtain submicron Cr2O3 crystals with better crystallinity and uniform distribution.
Figure 4. Scanning electron microscopy photographs of samples: (a) precursor of S1, (b) S1-900, (c) S1-1000, and (d) S12.
Visible Near-Infrared Diffuse Reflectance Spectra Analysis
The diffuse reflectance spectra of the Cr2O3 samples calcined at various temperatures and of the samples with Ti/Co/Fe doping were measured, and the analysis results are shown below. According to Figure 5, in the NIR range (780-2500 nm) of the pure Cr2O3 sample, the reflectance of S1-1000 is larger than that of S1-900, while there is no significant difference between the two spectra in the range from 400 to 780 nm. It can be concluded that calcination at 1000 °C creates a better condition for improving the NIR reflectance of the Cr2O3 samples. Therefore, the calcination temperatures of the remaining samples (S2 to S12) were fixed at 1000 °C. Compared with S1, significantly larger NIR reflectance is achieved in the Ti-doped Cr2O3 samples S2 and S3, which show promising potential to be used as cool pigments. Meanwhile, a small rise of the reflection peak at about 540 nm can be found after Ti doping.
Although the reflectance in the region of 800-1300 nm is improved, the diffuse reflectance spectra of Ti-doped Cr2O3 are still different from those of green plants. Further work is needed to lower the reflection peak at around 550 nm and to simulate the valley in the region of 1300-1500 nm caused by moisture absorption.
Figure 5. Influence of calcination temperature and Ti doping on the diffuse reflectance spectra of Cr2O3 samples.
As shown in Figure 6, the NIR reflectance in the range of 1200-1700 nm for the Co-containing Cr2O3 sample (S4) is significantly lower than that of S1-1000. This change can be attributed to the characteristic absorption of Co2+ according to previous studies [8]. Although the characteristic absorption of Co2+ (1200-1700 nm) cannot perfectly match the first valley (1300-1600 nm) in the spectra of green plants, the gap in reflectance between Cr2O3 pigments and green plants is significantly narrowed, which is beneficial for reducing their distinguishability. Indeed, Co has already been used by the Shepherd Color Company to produce Cr2O3 pigments for camouflage [14], as the absorption band of Co2+ is still the best choice to simulate the valley (1300-1600 nm) in the spectra of green plants until now. Besides, Co doping decreases the reflection peak at 540 nm from 30% (S1-1000) to 22% (S4). For samples S5 and S6, in which the Co content is fixed at 4 at.%, the NIR reflectance can still be improved by Ti doping when compared with S4, and the overall shapes of the absorption bands are quite similar. Hence, Ti and Co can modulate the spectra of Cr2O3 even when the amounts of additives are tiny. This is probably because Ti and Co are uniformly doped into the Cr2O3 crystal lattice rather than forming grain boundary segregation and secondary phases. Furthermore, NIR platforms emerge on the reflectance spectra of the Co-containing Cr2O3 samples (S4, S5, S6), and the height of the platform can be adjusted flexibly by changing the amount of Ti.
The spectra of the Fe and Ti co-doped samples (S8, S9, S10) are shown in Figure 7. It is clear that the reflection peak at around 550 nm decreases gradually with the addition of Fe. Although Fe doping reduces the NIR reflectance in the range of 800-1300 nm, this is tolerable considering that the reflectance is still higher than that of most green plants. Therefore, Fe can be introduced into the Ti and Co co-doped samples to further lower the reflection peak at around 550 nm.
As Figure 8 shows, the reflectance at around 550 nm of the Ti, Co, and Fe co-doped samples is between 10% and 20%, so it matches that of most green plants. Each spectrum of these samples has an apparent NIR platform and an absorption band of Co2+. In particular, the reflection spectra (400-1600 nm) of S10, S11, and S12 are in good accordance with that of the green plants given in Figure 1. In conclusion, the proposed doping strategy is shown to be effective for fabricating camouflage coatings.
Unfortunately, the diffuse spectra analysis shows that all the samples prepared fail to reproduce the characteristics of the 1600-2500 nm waveband of the reflectance spectrum of green plants. In this research, no helpful element was found to simulate the water absorption features at 1900 nm or 2500 nm. However, these pigments still have potential for use in camouflage coatings for several objective reasons. First, only a small portion of the light in the region of 1600-2500 nm can reach the ground, due to the radiation characteristics of the sun and the moisture absorption of the atmosphere [15]. Moisture absorption occurs again before the light reflected by green plants and camouflage coatings is detected by visible-light and near-infrared sensors. It therefore becomes extremely difficult to distinguish between natural green plants and artificial camouflage coatings when the analysis is conducted in the waveband from 1600 to 2500 nm. For this reason, the waveband from 1600 to 2500 nm is rarely used as an operating band by most military detection equipment. Similar studies also feature discussions on this waveband [16,17]. All in all, the defects of the Ti, Co, and Fe co-doped samples can be acceptable for practical applications. However, the authors still believe that some elements not mentioned in this article may be useful to create a perfect match, and many more studies still need to be done via the solution combustion synthesis method that we improved.
Chromatic Properties Analysis
Further study was conducted to evaluate the chromatic properties of selected samples, to determine whether they can be used to simulate the color of natural green plants. Ficus microcarpa, a widely distributed evergreen tree of southern China as well as South and Southeast Asia, was chosen as a representative of natural green plants for its variable green color. The diffuse reflectance spectra in the visible range (400-700 nm) of these samples as well as of several Ficus microcarpa leaves (A to F) are shown in Figure 9. Meanwhile, the colors generated by the colorimeter for each sample and leaf are listed after the serial numbers. Figure 9 shows that every Ficus microcarpa leaf tested has a reflection peak around 550 nm, and the peak value ranges from 8% to about 25%. In contrast, S1 and S3 have reflection peaks around 540 nm, and their peak values are more than 30%. Obviously, Co doping lowers the peak values, and the spectrum of S6 is then close to the spectra of leaves A and B. As for the Ti, Co, and Fe co-doped samples (S10, S11, S12), the reflection peaks become even lower, with peak values ranging from nearly 16% to about 10%. Meanwhile, the positions of the reflection peaks shift from 540 nm (S1, S3, S6) to 550 nm (S10, S11) and 560 nm (S12) as the addition of Fe increases. Figure 9 also shows a reflection peak around 410 nm, which is noticeable on the spectrum of the pure Cr2O3 sample (S1) and is gradually smoothed out by the doping of Ti/Co/Fe. It can be found that the spectra of sample S10 and leaf D are a good match in the range of 400-600 nm. Generally, samples S6, S10, S11, and S12 all have the potential to make green camouflage coatings, considering that the average diffuse reflectance at 550 nm of most green plants is between 10% and 20%. However, it also needs to be pointed out that the doping of Ti/Co/Fe fails to simulate the valley around 680 nm on the reflectance spectrum of green plants, and further work still needs to be done.
Figure 9. Diffuse reflectance spectra (400-700 nm) of Cr2O3 samples (S1 to S12) and Ficus microcarpa leaves (A-F).
The L*, a*, and b* values of the Cr2O3 samples as well as of the Ficus microcarpa leaves (A-F) are listed in Table 2. For the Ficus microcarpa leaves, the values of L* and b* decrease quickly, but the value of a* changes little as the color turns from yellowish-green to deep green. It can be found that the a* value of the undoped Cr2O3 sample S1 is much lower than those of the Ficus microcarpa leaves tested in this study, and Co and Fe doping is effective in increasing the value of a*. Thus, compared with the undoped Cr2O3 sample (S1), the Ti, Co co-doped sample (S6) and the Ti, Co, Fe co-doped sample (S11) have a* values that are much closer to those of the leaves. Besides, we also find that the a* values of S11 and S12 are already too high, indicating that the addition of Fe needs to be reduced in further studies. The color variation of the doped Cr2O3 samples can also be seen in the photographs of the Cr2O3 tablets prepared for the color test, shown in Figure 10.
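Closeness in the CIE L*a*b* space can be quantified with a single color-difference number. The sketch below computes the CIE76 color difference ΔE*ab between a pigment and a leaf; the L*a*b* triplets are hypothetical placeholders, since Table 2 is not reproduced here.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference between two CIELAB triplets."""
    return math.dist(lab1, lab2)

# Hypothetical CIELAB coordinates (Table 2 is not reproduced here); a smaller
# delta-E*ab means the pigment colour is closer to that of the leaf.
leaf_D     = (38.0, -12.0, 16.0)
sample_S1  = (43.0, -22.0, 14.0)   # e.g. undoped Cr2O3
sample_S11 = (37.0, -10.0, 15.0)   # e.g. Ti/Co/Fe co-doped

for name, lab in [("S1", sample_S1), ("S11", sample_S11)]:
    print(f"{name}: dE*ab vs leaf D = {delta_e76(lab, leaf_D):.1f}")
```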
Conclusions
Cr2O3 green pigments with submicron scale, fine crystallization, and low aggregation were successfully prepared by solution combustion synthesis. The study also proves that Ti, Co, and Fe all have an obvious effect on modulating the reflectance spectra of Cr2O3 pigments. Doping with Ti can significantly improve the NIR reflectance of Cr2O3 and also slightly raise the reflection peak around 540 nm. The characteristic absorption band of Co2+ appears in the range of 1200-1700 nm after Co doping, and this band can be used to simulate the moisture absorption in the spectra of natural green plants. The addition of Fe can lower the reflection peak around 550 nm while keeping the NIR reflectance at a high level. A series of Cr2O3 pigments with reflection peaks ranging from 12% to 22%, an NIR platform, and an absorption band of Co2+ were fabricated by Ti, Co and Ti, Co, Fe co-doping. These pigments share similar diffuse reflectance spectra with natural green plants within the wavelength range from 400 to 1600 nm. The chromatic coordinates L*, a*, and b* of several Cr2O3 samples are much closer to those of green plant leaves when compared to undoped Cr2O3.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,583 | 2020-03-27T00:00:00.000 | [ "Materials Science" ] |
Aerodynamic-driven topology optimization of compliant airfoils
A strategy for density-based topology optimization of fluid-structure interaction problems is proposed that deals with some shortcomings associated with non-stiffness-based design. The goal is to improve the passive aerodynamic shape adaptation of highly compliant airfoils at multiple operating points. A two-step solution process is proposed that decouples global aeroelastic performance goals from the search for a solid-void topology on the structure. In the first step, a reference fully coupled fluid-structure problem is solved without explicitly penalizing non-discreteness in the resulting topology. A regularization step is then performed that solves an inverse design problem, akin to those in compliant mechanism design, which produces a discrete-topology structure with the same response to the fluid loads. Simulations are carried out with the multi-physics suite SU2, which includes Reynolds-averaged Navier-Stokes modeling of the fluid and hyper-elastic material behavior of the geometrically nonlinear structure. Gradient-based optimization is used with the exterior penalty method and a large-scale quasi-Newton unconstrained optimizer. Coupled aerostructural sensitivities are obtained via an algorithmic-differentiation-based coupled discrete adjoint solver. Numerical examples on a compliant airfoil with performance objectives at two Mach numbers are presented.
Introduction
Topology optimization represents a radical departure from conventional sizing methods as it allows an optimum material distribution to be identified. It has been applied to aircraft structures for over two decades (Balabanov and Haftka 1996). In most applications, the technique is applied locally, e.g., to single ribs (Krog et al. 2004), so that the resulting structure can still be manufactured by traditional methods. Another important practical challenge of topology optimization, especially in a fluid-structure interaction (FSI) context, is the computational cost. While sometimes the analysis can be simplified by assuming "frozen" fluid loads (see Zhu et al. 2016), this assumption can lead to inaccurate designs. A further challenge is specific to the density-based approach, e.g., using the SIMP (solid isotropic material with penalization, Bendsøe 1989) or RAMP (rational approximation of material properties, Stolpe and Svanberg 2001) interpolation schemes, as an optimum solid-void topology is contingent on how important the local stiffness-to-weight ratio of the ersatz material is to the objectives. The material interpolation is such that intermediate-density areas have lower stiffness-to-weight ratios than those of solid or void areas (the latter have some residual stiffness); therefore, such areas are undesirable in an optimally stiff structure. In general, aerodynamic objectives do not have this property, and so a careful formulation that recovers it may be required; explicit penalization of non-discreteness has also been proposed (Stanford and Ifju 2009). Alternative topology optimization methods such as the level-set approach (e.g., Dunning et al. 2015) inherently produce a solid-void topology. However, the numerical challenges of the density-based approach are better understood (e.g., control of feature sizes), and its natural ability to introduce new holes and to be driven by a general optimizer (which easily allows other variables to be included in the optimization) is not shared by all level-set approaches.
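To illustrate the point about the ersatz material, the sketch below evaluates a SIMP-type stiffness interpolation and the resulting stiffness-to-mass ratio across pseudo-densities; the modulus values and penalization exponent are illustrative assumptions, not parameters from this work.

```python
import numpy as np

def simp_modulus(rho, E0=1.0, Emin=1e-6, p=3.0):
    """SIMP interpolation of the ersatz-material stiffness vs. pseudo-density."""
    return Emin + rho**p * (E0 - Emin)

rho = np.linspace(0.2, 1.0, 5)
stiffness_to_mass = simp_modulus(rho) / rho        # mass scales linearly with rho
print(np.round(stiffness_to_mass, 3))
# With p > 1 intermediate densities have a poor stiffness-to-mass ratio, so a
# compliance-driven optimizer is pushed toward 0/1 designs; a purely aerodynamic
# objective carries no such built-in incentive.
```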
In this work, we present a method to apply density-based topology optimization to a highly compliant airfoil, with the objective of improving its aerodynamic characteristics at multiple operating points. This differs from most previous topology optimization research in that the objectives are mostly aerodynamic, which for the reasons above requires high-fidelity modeling of the fluid. Furthermore, hyper-elasticity is considered on the structural side as strains are large, and so accounting for nonlinear behavior (including buckling) may be necessary to accurately model the structural response. Due to the nature of the objectives and the large number of design variables, gradient-based optimization is used. The computational platform used is the multi-physics suite SU2, which has been verified for a range of aeronautical applications, e.g., in Palacios et al. (2014). The FSI primal and adjoint solution methodology is presented in Section 2 and the optimization strategy in Section 3. The numerical results obtained with the proposed strategy are presented in Section 4; finally, in Section 5, they are compared with results obtained using common ways to encourage solid-void solutions in density-based methods, and we discuss why those failed due to the characteristics of the example problem.
Primal and adjoint coupled solution methods
A three-field partitioned formulation is adopted for the FSI problems (primal and adjoint). Coupled sensitivities are obtained with the discrete adjoint method. This is outlined next together with some details of the computational implementation in SU2.
Fluid dynamics
In the fluid domain, the flow is governed by the continuity, Navier-Stokes, and energy conservation equations, which in the ALE formulation may be written in the compact conservation form (1), where w = (ρ, ρv, ρE) is the vector of conservative variables. The convective fluxes F^c, the diffusive fluxes F^v, and the volumetric sources Q are defined in terms of the following quantities: ρ is the density, v and ż the flow and grid velocities in a Cartesian coordinate system, respectively (z are the grid displacements), p the pressure, E the total energy per unit mass, τ the stress tensor, κ the thermal conductivity, and T the temperature. All material properties refer to the fluid. For the viscous stress tensor τ, a Newtonian fluid is assumed and bulk viscosity effects are ignored; furthermore, under the Boussinesq hypothesis, turbulence is modeled as increased viscosity. Menter's Shear Stress Transport turbulence model (Menter 1993) is used. Pressure and temperature are related to the conservative variables via the ideal gas equation of state.
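The displayed form of (1) did not survive extraction. A reconstruction consistent with the definitions above (the standard compressible ALE Navier-Stokes equations in conservation form; the exact notation in the paper may differ) is:

$$\frac{\partial w}{\partial t} + \nabla\cdot F^{c}(w,\dot z) - \nabla\cdot F^{v}(w) = Q, \qquad w = (\rho,\ \rho v,\ \rho E)^{T} \qquad (1)$$

$$F^{c} = \begin{pmatrix}\rho\,(v-\dot z)\\ \rho\, v\otimes(v-\dot z) + p\,\mathbf{I}\\ \rho E\,(v-\dot z) + p\,v\end{pmatrix},\qquad F^{v} = \begin{pmatrix}0\\ \tau\\ \tau\cdot v + \kappa\,\nabla T\end{pmatrix}.$$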
Equation (1) is integrated in space using the finite volume method (FVM) on a dual grid with control volumes constructed using a median-dual vertex-based scheme, which results in the semi-discrete equation (5) for each volume i, where the residual R_i results from summing the discretized fluxes for all faces of the control volume and integrating the volumetric sources; we use the JST convective scheme for its robustness, knowing that the values of drag coefficient will be overestimated due to artificial dissipation. To obtain a steady-state solution, (5) is marched implicitly in pseudo time (τ), that is, the new solution w* is obtained by solving (6), where the continuous temporal derivative has been replaced by a backward-Euler approximation and the tilde indicates that the linearization of the residual is approximate. For example, more weight is given to the Jacobian of the dissipation term to mitigate the non-diagonal dominance that results from central schemes.
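The displayed equations (5) and (6) were likewise lost in extraction; a plausible reconstruction, assuming |Ω_i| denotes the measure of control volume i, is:

$$|\Omega_i|\,\frac{\mathrm{d}w_i}{\mathrm{d}t} + R_i(w) = 0 \qquad (5)$$

$$\left(\frac{|\Omega_i|}{\Delta\tau}\,\mathbf{I} + \frac{\partial \tilde R_i}{\partial w}\right)\Delta w = -R_i(w^{n}),\qquad w^{*} = w^{n} + \Delta w \qquad (6)$$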
We represent the fluid solution process, involving at each iteration the computation of the residual and the solution of the linear system in (6), by the fixed-point iteration w = F(w, z) (7). The turbulence model equations are solved in the same manner but they are lagged; nevertheless, for the purposes of the fixed-point representation, the turbulence variables can be considered part of w.
Solid mechanics
The deformed state of a solid domain is governed by the point-wise equilibrium of linear momentum and of tractions on its surface, that is (8), where ρ is the density, ü the acceleration with respect to an inertial frame, f the inertial body forces, σ the Cauchy stress tensor, n the outward surface unit normal, and λ the external tractions. To solve (8) for the structural displacements via the finite element method (FEM), its weak form is established by applying the principle of virtual work (see Bonet and Wood 2008), where δE is the variation of the Green-Lagrange strain tensor with respect to δu, S is the second Piola-Kirchhoff stress tensor, the subscript r refers to the reference (undeformed) configuration of the structure, and the subscripts c to the current configuration of the structure and its surface, respectively. For hyper-elastic materials, the relation between stress and strain is given by the strain energy density function Ψ, as S = ∂Ψ/∂E. In particular, for a neo-Hookean material with Lamé constants μ and λ, the energy is a function of the deformation invariants and of J, the determinant of the deformation gradient (a common form is sketched below).
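The strain energy expression was lost in extraction. A common compressible neo-Hookean form (e.g., Bonet and Wood 2008), stated here only as an assumption about what the paper uses, is:

$$\Psi = \frac{\mu}{2}\left(\operatorname{tr} C - 3\right) - \mu \ln J + \frac{\lambda}{2}\left(\ln J\right)^{2},\qquad S = \frac{\partial \Psi}{\partial E} = \mu\left(\mathbf{I} - C^{-1}\right) + \lambda\,(\ln J)\,C^{-1},$$

where C = 2E + I is the right Cauchy-Green tensor.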
Equation (10) is linearized around the current state of deformation before being discretized; linear isoparametric elements are used in SU2 (Sanchez et al. 2016). The solution is then found iteratively via the Newton-Raphson method, which again can be formulated as a fixed-point iteration, u = S(u, λ(w, z)) (12).
Mesh deformation
The structural displacements at the fluid-structure interface (u_Γ) are transferred to the interior nodes of the fluid domain using a linear elasticity analogy (Sanchez et al. 2016). This can be stated as the explicit relation z = M(u) (13), which represents the third field of the FSI problem.
Load and displacement transfer
In general, the fluid and structural meshes do not match at their interface, and an interpolation scheme is required to transfer displacements from the solid to the fluid side, and tractions in the opposite direction. A radial basis function (RBF) approach (see Beckert and Wendland 2001) is adopted in this work to generate the required interpolation matrices (H) relating the interface degrees of freedom, i.e., u_f = H u_s, where subscripts s and f stand for solid and fluid, respectively, and the fluid tractions are given by λ_f = (τ − pI) n_f, with n_f the inward (with respect to the fluid domain) surface unit normal vector.
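As an illustration of the transfer step, the sketch below assembles an RBF interpolation matrix and applies it to interface data. The Wendland C2 kernel, the support radius, the omission of the polynomial augmentation, and the conservative transposed load transfer are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def wendland_c2(r, radius):
    # Compactly supported Wendland C2 kernel (assumed; the paper only cites Beckert & Wendland 2001)
    x = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - x) ** 4 * (4.0 * x + 1.0)

def rbf_transfer_matrix(solid_pts, fluid_pts, radius):
    """Build H such that u_fluid ≈ H @ u_solid on the FSI interface."""
    # Kernel matrix between solid interface points (n_s x n_s)
    d_ss = np.linalg.norm(solid_pts[:, None, :] - solid_pts[None, :, :], axis=-1)
    phi_ss = wendland_c2(d_ss, radius)
    # Evaluation matrix from solid points to fluid points (n_f x n_s)
    d_fs = np.linalg.norm(fluid_pts[:, None, :] - solid_pts[None, :, :], axis=-1)
    phi_fs = wendland_c2(d_fs, radius)
    # H = Phi_fs @ Phi_ss^{-1}; solve a linear system instead of forming the inverse
    return np.linalg.solve(phi_ss.T, phi_fs.T).T

# Usage: displacements solid -> fluid, tractions fluid -> solid (transposed transfer is an assumption)
solid_pts = np.random.rand(50, 2)
fluid_pts = np.random.rand(80, 2)
H = rbf_transfer_matrix(solid_pts, fluid_pts, radius=0.3)
u_solid = np.random.rand(50, 2)
u_fluid = H @ u_solid          # displacement transfer
lam_fluid = np.random.rand(80, 2)
lam_solid = H.T @ lam_fluid    # load transfer
```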
Partitioned coupling method
Under a suitable level of convergence of the fixed-point iterations, the fluid solver can be considered the Dirichlet-to-Neumann operator F that maps displacements to tractions, and the structural solver the Neumann-to-Dirichlet operator S that does the opposite, i.e., λ_f = F(u_Γ) and u_Γ = S(λ_f), where for simplicity of notation the transfer and mesh deformation operations have been lumped with the solvers. These operators can be applied sequentially to produce a fixed-point iteration for the interface displacements, which naturally results in a block-Gauss-Seidel (BGS) approach to solve the interface problem. However, this often requires significant under-relaxation for strongly coupled problems. Thus, the interface quasi-Newton method IQN-ILS (Degroote et al. 2009) was also implemented in SU2 for this work, to accelerate convergence and improve the robustness of the coupled solver. The interface residual is given by r = û_Γ − u_Γ, where û_Γ = S(F(u_Γ)), and the new interface position by u*_Γ = û_Γ + W c, with the quasi-Newton correction W c defined below. Note that u*_Γ ≠ û_Γ, even for BGS if some relaxation is applied. The product of the (least-squares) approximate inverse Jacobian of the residual with the residual is obtained based on (up to) N previous values of r and û_Γ: each column of W stores the difference between consecutive values of û_Γ, W = [û_N − û_{N−1}, ..., û_2 − û_1], and the coefficients c are obtained by solving the linear least-squares problem min_c ||V c + r_N||₂, where, analogously to W, V = [r_N − r_{N−1}, ..., r_2 − r_1], and the yet unknown residual at the next coupling iteration (r_{N+1}) is desired to vanish and is therefore assumed to be 0. We have observed that, for the problems in this work, IQN-ILS is 1.5 times faster than BGS.
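A compact sketch of the IQN-ILS update described above, following the construction in Degroote et al. (2009), is given below; the toy fixed-point operator, tolerances, and column-reuse limit are illustrative assumptions.

```python
import numpy as np

def iqn_ils(fixed_point, u0, max_iter=50, tol=1e-10, n_reuse=20):
    """Interface quasi-Newton with least-squares Jacobian approximation (IQN-ILS).

    fixed_point maps interface displacements u to u_hat = S(F(u)).
    Returns the converged interface displacements.
    """
    u = u0.copy()
    V_cols, W_cols = [], []            # differences of residuals / of u_hat
    r_prev = u_hat_prev = None
    for _ in range(max_iter):
        u_hat = fixed_point(u)
        r = u_hat - u                  # interface residual
        if np.linalg.norm(r) < tol:
            break
        if r_prev is not None:
            V_cols.insert(0, r - r_prev)
            W_cols.insert(0, u_hat - u_hat_prev)
            V_cols, W_cols = V_cols[:n_reuse], W_cols[:n_reuse]
        r_prev, u_hat_prev = r, u_hat
        if V_cols:
            V = np.column_stack(V_cols)
            W = np.column_stack(W_cols)
            # want the next residual to vanish: V c ≈ -r in the least-squares sense
            c, *_ = np.linalg.lstsq(V, -r, rcond=None)
            u = u_hat + W @ c          # quasi-Newton update of the interface position
        else:
            u = u_hat                  # first iteration: plain Gauss-Seidel step
    return u

# Toy usage: a linear "coupled solver" with a known fixed point
A = np.array([[0.6, 0.3], [0.2, 0.7]])
b = np.array([1.0, -0.5])
u_star = iqn_ils(lambda u: A @ u + b, np.zeros(2))
```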
Coupled sensitivities
The coupled adjoint solution algorithm follows the work of Albring et al. (2016) and Sanchez et al. (2018). Let G represent the fixed-point iterator for the coupled problem, consisting of the concatenation of (7), (12), and (13), let x = (w, u, z) represent the state of the coupled problem, and let α be the parameters with respect to which some functional J is to be minimized. Introducing the adjoint variables x̄ = (w̄, ū, z̄), a Lagrangian is defined by adding to J the adjoint-weighted fixed-point equations. Differentiating it with respect to the parameters, grouping terms, and defining the adjoint variables such that the term multiplying dx/dα vanishes results in the adjoint fixed-point iterator, which, considering the three-field nature of the problem and the direct dependencies of each operator, couples the adjoint fields through the transposed Jacobians of F, S, and M (subscripts indicate differentiation with respect to the given variable, e.g., F_w ≡ ∂_w F; a reconstruction of these relations is sketched below). Analogous to what was done for the primal problem, iterating on each adjoint variable with the remaining ones fixed allows one to define adjoint solvers for each field. Noting that, by the construction of M, M_u = 0 for all u not on the interface, the adjoint interface displacements ū_Γ are identified, which allows defining the adjoint interface residual. The IQN-ILS method can then be applied to determine ū_Γ and therefore x̄, which upon substitution into (25) yields the sensitivities. In SU2, the linearization of the fixed-point operators around the converged state, to compute matrix-free Jacobian-transposed products, is done using the algorithmic differentiation (AD) tool CoDiPack (Sagebaum et al. 2019).
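The displayed relations were lost; a reconstruction consistent with the description above and with the cited discrete-adjoint literature (stated as an assumption, not verbatim from the paper) is:

$$\mathcal{L}(\alpha,x,\bar x) = J(x,\alpha) + \bar x^{T}\big(G(x,\alpha) - x\big),$$

$$\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\alpha} = J_{\alpha} + \bar x^{T} G_{\alpha} + \big(J_{x} + \bar x^{T} G_{x} - \bar x^{T}\big)\,\frac{\mathrm{d}x}{\mathrm{d}\alpha} \qquad (25)$$

$$\bar x = G_{x}^{T}\,\bar x + J_{x}^{T}
\;\;\Longrightarrow\;\;
\begin{pmatrix}\bar w\\ \bar u\\ \bar z\end{pmatrix}
=
\begin{pmatrix} F_{w}^{T} & S_{w}^{T} & 0\\ 0 & S_{u}^{T} & M_{u}^{T}\\ F_{z}^{T} & S_{z}^{T} & 0\end{pmatrix}
\begin{pmatrix}\bar w\\ \bar u\\ \bar z\end{pmatrix}
+
\begin{pmatrix}J_{w}^{T}\\ J_{u}^{T}\\ J_{z}^{T}\end{pmatrix},$$

where the block structure of the last relation reflects the direct dependencies F(w, z), S(u, λ(w, z)), and M(u) stated in the text.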
Optimization strategy
As mentioned in the introduction, aerodynamic optimization objectives generally do not benefit from an optimum stiffness-to-mass ratio, and therefore intermediate densities may not be avoided using the SIMP formulation. This is especially true when seeking passive load alleviation, as deformation is necessary, and in 2D, where an increase in lift due to higher mass does not penalize drag as severely as it does in 3D through induced drag. Therefore, an explicit mass objective or constraint may be required. However, we found this to make the optimization process less robust, as a strong-enough incentive to remove material often leads to critically stable structures (for which it is difficult to fully converge the primal and adjoint problems). Furthermore, we also observed these strategies to increase the sensitivity to initial conditions. For example, starting from an initial configuration that produces more lift than required will cause material to be quickly removed from high-strain-energy areas. Moreover, mass is not a primary objective for the purposes of this study, as we wish to assess the potential of manipulating aerodynamic performance via a material distribution. Therefore, it is not trivial to determine to what value mass should be constrained or how it should be weighted before combining it with the aerodynamic objective function; numerical examples of this are included in Section 5. To avoid the aforementioned issues, we investigate a two-step process that reduces the interference between the goals of improving aerodynamic performance and producing a realizable topology.
Density-based topology optimization
The well-established density approach with continuous variables consists of specifying a design density at discrete locations of the solid domain, typically the element centroids, and making the local elasticity modulus a function of it. In this work, the modified SIMP formulation (31), E = E_min + ρ^p (E_0 − E_min), is used to relate the elasticity modulus to the design variable, where E_min is introduced to avoid a singular stiffness matrix and is typically at least three orders of magnitude smaller than the reference value for solid material (E_0). Despite this relaxation, a direct sparse solver (Hénon et al. 2002) is required due to the non-Cartesian grids used in the numerical examples. For values of the penalization exponent p greater than 1, intermediate-density areas have an unfavorable stiffness-to-weight ratio and so, under the right objectives, constraints, and penalization, will tend to be eliminated. However, for intermediate densities to be realizable (via a two-phase micro-structured material), their properties must respect the Hashin-Shtrikman bounds for two-phase materials (Bendsøe and Sigmund 2004), which in 2D requires p ≥ 3 for a Poisson ratio of 1/3. Two well-known numerical difficulties associated with this approach are its lack of grid convergence (as the discretization is refined, more holes can be introduced) and the checker-boarding that may result from the overestimation of the stiffness of corner contacts. To avoid these issues, a discrete filtering operation (Bruns and Tortorelli 2001) is introduced between the design density variables exposed to the optimizer and the physical densities considered for each finite element, i.e., ρ̃_i = Σ_{j∈N(i)} w_ij ρ_j / Σ_{j∈N(i)} w_ij, where w_ij = R − ||x_i − x_j|| and N(i) is the set of elements within radius R of element i. This conical filter kernel invariably results in intermediate-density halo regions between solid and void regions. Filters built by combining the morphological operators (Sigmund 2007) dilate and its counterpart erode (ρ̃_i = 1 − f_i(1 − ρ)) were able to produce discrete topologies (at β ≈ 200) for the canonical problems used to verify the implementation. However, numerical investigations for the FSI problem in this work showed they complicate the convergence of the optimization due to the non-linearity they introduce. In either case, derivatives with respect to the design densities are obtained in the adjoint post-processing step (25) using AD (α ≡ ρ); thus, analytical expressions for the Jacobian matrices G_α and J_α are not required.
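The following is a minimal sketch of the modified SIMP interpolation and the conic density filter described above; the mesh, E_0, E_min, the exponent, and the filter radius are placeholder values, not the paper's settings.

```python
import numpy as np

def conic_filter(x, centroids, radius):
    """Conic (linear hat) density filter: rho_i = sum_j w_ij x_j / sum_j w_ij."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    w = np.maximum(radius - d, 0.0)          # w_ij = R - ||x_i - x_j|| within radius R
    return (w @ x) / w.sum(axis=1)

def simp_modulus(rho, E0=100e6, Emin=100e3, p=3.0):
    """Modified SIMP: E = Emin + rho^p (E0 - Emin), avoiding a singular stiffness matrix."""
    return Emin + rho ** p * (E0 - Emin)

# Usage on a toy 10x10 grid of element centroids
xx, yy = np.meshgrid(np.arange(10) + 0.5, np.arange(10) + 0.5)
centroids = np.column_stack([xx.ravel(), yy.ravel()])
design = np.random.rand(100)                         # design densities exposed to the optimizer
rho = conic_filter(design, centroids, radius=2.0)    # physical densities per element
E = simp_modulus(rho)                                # element-wise elasticity modulus
```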
Gradient-based optimization method
The numerical optimization examples considered in this work are characterized by a large number of design variables and relatively few constraints (excluding simple bound constraints) that, from the physics point of view, do not need to be imposed strictly. Therefore, we solve the optimization problems with the exterior penalty method, in which scaled quadratic penalties on the constraint violations are added to the objective, with h⁺ = max(0, h) used for inequality constraints. The L-BFGS-B implementation available with SciPy (Jones et al. 2001) is used as the unconstrained (but bounded) optimizer. The penalty parameters (a_i and b_j) need to be gradually increased (usually by multiplying the previous value by a fixed factor r) until a predetermined small constraint tolerance is met. This creates the need for outer iterations, as updating the parameters within L-BFGS-B (inner) iterations leads to bad approximations of the Hessian matrix. Although these outer iterations force an undesired reversion to steepest descent, they are also needed (and commonly used) to update topology optimization parameters. Our continuation strategy for these parameters, which we have found to be adequate for both structural and FSI problems, is given in algorithm 1.
Note that penalty parameters are increased geometrically, whereas problem parameters are simply a sequence of values through which the algorithm advances if all constraints are currently satisfied. Before the target values of the parameters are reached, loose convergence criteria are used for L-BFGS-B (e.g., 40 inner iterations). The objective function is shifted and scaled by a representative minimum value and range, respectively; the constraints are shifted by their bounds and scaled by a reference value (the reciprocal of the bound unless otherwise specified). Doing so allows the same constraint tolerance (≈ 0.01) to be used, the penalty parameters to be initialized equally (a_i^0, b_j^0 ∈ [1, 10]), and also updated with the same factor (r ∈ [1.4, 4]).
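Below is a schematic of the exterior-penalty continuation loop around SciPy's L-BFGS-B, in the spirit of algorithm 1; the quadratic penalty form, the toy objective and constraint, and the parameter values are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty_loop(f, g_list, x0, bounds, b0=4.0, r=2.0, tol=0.01,
                          inner_maxiter=40, outer_maxiter=20):
    """Minimize f subject to g_i(x) <= 0 via quadratic exterior penalties and L-BFGS-B."""
    x, b = x0.copy(), np.full(len(g_list), b0)
    for _ in range(outer_maxiter):
        def penalized(x):
            viol = np.array([max(0.0, g(x)) for g in g_list])   # h+ = max(0, h)
            return f(x) + np.sum(b * viol ** 2)
        res = minimize(penalized, x, method="L-BFGS-B", bounds=bounds,
                       options={"maxiter": inner_maxiter})
        x = res.x
        viol = np.array([max(0.0, g(x)) for g in g_list])
        if np.all(viol <= tol):
            break            # constraints met: problem parameters would be advanced here
        b *= r               # geometric increase of penalty parameters
    return x

# Toy usage: minimize sum(x^2) subject to x0 + x1 >= 1, with simple bounds
x_opt = exterior_penalty_loop(
    f=lambda x: np.sum(x ** 2),
    g_list=[lambda x: 1.0 - x[0] - x[1]],
    x0=np.zeros(2),
    bounds=[(0.0, 1.0)] * 2,
)
```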
Strategy for discrete topologies
The proposed two-step strategy consists of first solving the natural aerostructural optimization problem, for example minimizing drag subject to a lift constraint, without additional penalties or manipulation of the objectives. This first step is conducted with fully coupled FSI modeling (the so-called multidisciplinary feasible approach) and generally results in a non-discrete topology, but one defined by feasible FSI solutions and realizable since the SIMP exponent is still set according to the two-phase material bounds. The second step then aims to produce a discrete topology that replicates the response of the former, i.e., one that under the fluid loads known from the coupled FSI simulation results in the same deformed surface. This inverse design step is formulated similarly to a compliant mechanism design problem, for example the force inverter (see Bendsøe and Sigmund 2004 or Sigmund 2007), but instead of focusing on the response of a single node, an error metric (35) is defined for the entire interface and used as a constraint in a mass minimization problem.
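The definition of the error metric (35) was lost in extraction; one plausible form, consistent with a constraint on the entire deformed interface, would be a normalized discrepancy such as

$$\varepsilon = \frac{\sum_{i\in\Gamma}\lVert u_i - u_i^{*}\rVert^{2}}{\sum_{i\in\Gamma}\lVert u_i^{*}\rVert^{2}},$$

where the u_i* are the target interface displacements from the fully coupled step; the paper's actual expression may differ.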
We solve the problems of both steps with algorithm 1. While it would be possible to perform the inverse design step simulating only the structure, we found that doing so can easily result in an unstable structure that buckles under the small variation of FSI loads caused by the small discrepancy between the target response (obtained from the fully coupled step) and the response of the discrete-topology structure. To mitigate this issue, a one-way FSI simulation is used instead; then, based on the variation of the fluid loads due to the discrepancy between target and current surface coordinates, we define a stability metric (36) based on points where the variation of work is positive. The one-way simulation is relatively inexpensive as good initial conditions for the flow field are available from the coupled FSI simulation, especially if the initial topology is almost feasible in the error metric (35) (the topology obtained in the fully coupled step is not used as the starting point to avoid convergence to a poor local optimum, i.e., one that is not discrete). The upper bounds for constraints based on (35) and (36) are set based on a reference value obtained by applying a small (≈ 1%) perturbation to the elasticity modulus of the solid material. For clarity, it is worth considering a different view of the fully coupled step, which could be stated as finding surface displacements u* that minimize the objective function, subject to the optimization constraints and to the additional one that ∃ρ : u* = S(ρ, λ*). In other words, the outputs of the fully coupled step are (feasible) displacements and tractions (not the material distribution), which then become inputs of the inverse design step, tasked with producing a completely new material distribution that is discrete. An iterative procedure alternating between the two steps could be proposed, in which case explicit penalization of non-discreteness would likely be required to maintain the discreteness achieved by the inverse design step. We emphasize that at no point do we decouple the multidisciplinary and multi-point nature of the problem: in the inverse design step the FSI coupling conditions become optimization constraints, whereas in the fully coupled step they are inherently satisfied.
The optimization problem can alternatively be posed in a way that avoids the discreteness and stability issues, for example, by converting it to a compliance minimization problem with aerodynamic constraints while prescribing the topology of certain key areas, like the trailing edge. Here, we deliberately keep the design space as unconstrained as possible, and deal with the aforementioned issues in a non-intrusive way to fully assess the capability of topology optimization to improve aerodynamic performance across multiple operating points.
Results
We consider two numerical examples: first, a benchmark problem to demonstrate that the optimization approach of Sections 3.1 and 3.2 is adequate for geometrically nonlinear solid mechanics problems; then, a passive load adaptation FSI problem that motivated the strategy presented in Section 3.3.
Verification of the topology optimization implementation
We verify the optimization methodology chosen, and the implementation of the SIMP scheme, by reproducing published results for the classic tip-loaded cantilever problem, shown in Fig. 1. We take as reference the nonlinear topology optimization results of Buhl et al. (2000), where a material with an elasticity modulus of 3 GPa and a Poisson ratio of 0.4 was considered. For this test, we use the morphology-based filters proposed by Sigmund (2007) to obtain solid-void topologies, namely the close (erode ∘ dilate) and open (dilate ∘ erode) strategies. The objective is to minimize end compliance (W_tip = P δ_y) subject to an equivalent mass constraint of 0.5, where δ_y is the vertical displacement of the point where the load is applied, and geometric non-linearities are considered. Referring to algorithm 1, the objective is normalized by its initial value, b_1^0 = 8 and r = 2, and the constraint tolerance is 0.005. The value sequence for the filter parameter β (33) is {0.01, 1, 4, 16, 64, 200}. The initial small value makes both morphology filters equivalent to the simpler conical filter. The loose convergence criteria for L-BFGS-B are 40 iterations or a variation of the objective function value of less than 10^-5. For the tight criteria, the latter is reduced to 10^-7 and the number of iterations is unlimited. The domain is discretized with 10,000 square isoparametric elements; the filter radius (defining N(i) in (33)) is twice the element size, but we note that the close and open filters are applied in two stages, which effectively doubles the radius of the neighborhood. We have not found ramping the SIMP exponent to be advantageous for these simple cases; a constant value of 3 is used. As the loads are large, it is not practical to start the optimization with a uniform density distribution that also respects the mass constraint; therefore, we use the optimal density distribution for a SIMP exponent of 1 and linear elasticity (which is obtained with the same process described above but without ever increasing the filter parameter). Figures 2 and 3 show the topologies obtained for P = 60 kN with the open filter and for P = 240 kN with the close filter, respectively. Figure 4 shows the convergence history for both cases, and both the obtained topologies and compliance values (W_tip(60 kN) = 4.36 kJ and W_tip(240 kN) = 67.0 kJ) compare favorably with the reference results (4.65 kJ and 66.5 kJ, respectively). The optimization process requires on average 350 function evaluations, which again compares well with other sources (e.g., Sigmund 2007), and the obtained topologies after filtering are nearly perfectly discrete.
Baseline compliant airfoil from aerostructural shape optimization
The second example consists of a flexible airfoil operating at two distinct free-stream Mach numbers, 0.25 and 0.5, but at the same angle of attack (AOA) of 2° with respect to the undeformed shape. Henceforth, quantities at the lower or higher speed will be superscripted l or h, respectively. At low speed, C_l^l = 0.5 must be generated and the deformation of the airfoil kept below a limit; at high speed, we wish to minimize drag.
The ideal configuration for minimum drag is a symmetric airfoil operating at zero AOA. As the airfoil deforms passively and the AOA is fixed, this configuration could only be achieved by a structure with no stiffness, which would then be unable to produce the required lift at low speed. Therefore, constraining the deformation at low speed leads to higher drag at high speed, unless the internal structure of the airfoil responds in a nonlinear way. As we intend to exploit this nonlinear structural behavior to improve passive load alleviation, a baseline airfoil for which the low-speed deformation limit significantly affects performance was first designed by shape optimization with different allowed values of trailing edge displacement (y_max). For a review of the FSI shape optimization capabilities of SU2, and verification of the relevant derivatives, see Venkatesan-Crome et al. (2019). The starting point for the optimization is a NACA0012 profile with 0.5 m chord, parameterized through the free-form deformation (FFD) box shown in Fig. 5, of which 17 points are allowed to move in the vertical direction; the bottom-left point is fixed to avoid translation of the airfoil. A further constraint is added to enforce that the final area be greater than or equal to the initial one, where α are the control point vertical displacements. For the RANS simulations, the fluid grid is an O-grid with 77,924 nodes and a wall cell size sufficient for y⁺ ≈ 1, and the radius of the circular farfield boundary is 30 chords. The fluid is considered to be ideal and standard sea-level properties are used for the free-stream state. The solid domain is discretized with 76,800 4-node quadrilateral elements resulting in 77,875 nodes (this level of refinement is to ensure sufficient resolution for topology optimization); the inside of the hollow region close to the leading edge is clamped, and the vertical section of this region is located at 5% chord. The elasticity modulus considered is 50 MPa and the Poisson ratio 0.35. Again referring to algorithm 1, a_1^0 = b_1^0 = b_2^0 = 8 and r = √2, and the constraint tolerance used was 0.01. Convergence criteria for the optimizer are as described for the benchmark topology problem. Table 1 shows the optimized drag and lift for different values of the deformation limit, and Figs. 6 and 7 show the undeformed and the deformed (at Mach 0.5, in red) airfoils for the 10 mm (0.02c) and 6 mm (0.012c) deformation limits, respectively.
The optimum drag increases by 3.7% when the deformation limit is reduced from 10 to 6 mm. We note that the deformation constraint is not met entirely by an increase in structural stiffness (which could be achieved by increasing area) but mostly by the reduction in pitching moment that results from the reflexed camber line (compare Figs. 6 and 7).
Fully coupled optimization step
We consider the 3.7% increase in drag a significant-enough trade-off between stiffness and performance. Consequently, the airfoil optimized for a y_max of 6 mm at Mach 0.25 in the previous investigation is taken as the starting point for topology optimization. The trailing edge displacement constraint previously used is replaced by a compliance constraint with upper bound equal to the compliance of the initial design. The change of constraint function is necessary since, with the trailing edge region not being forced to be solid, it could be possible for a much more flexible airfoil to respect the local constraint by employing large amounts of camber near the trailing edge in the deformed configuration, thereby producing the required lift at the expense of increased drag. With the area constraint no longer required, the fully coupled step is stated as problem (39), minimizing drag at Mach 0.5 subject to the lift and compliance constraints at Mach 0.25, where W_ref^l is the compliance (W = ∫_Γ u · λ dΓ) of the shape-optimized structure at Mach 0.25 (1.141 J) and ρ are the design densities (before filtering) introduced in Section 3.1. The elasticity modulus is 100 MPa, double the value used in the baseline shape optimization, to allow material to be removed while maintaining compliance. The outermost 3 layers of elements (3.6% of the local thickness) are prescribed to be solid; elsewhere, the initial density was 0.8; the density filter radius is 2 mm (approximately 3 times the largest element size). For this case, the SIMP exponent was gradually increased, taking the values 2, 2.5, and 3 following the strategy of algorithm 1. Figure 8 shows the history (inner iterations of L-BFGS-B) of drag and lift coefficients, compliance, and penalized objective function (f); the large spikes correspond to the increases of the SIMP exponent. Figure 9 shows the final topology and Mach number contours at the Mach 0.5 condition. As expected, the material distribution is not discrete; up to around 30% chord the topology is mostly solid, while around the center section the topology confers little bending stiffness, acting mostly as support for the wetted surfaces. Notably, the trailing edge region has compliant hinges which allow it to work as a mechanism, and the importance of this will be explained below.
The drag coefficient is reduced to 0.008812 (a 3.1% reduction), which is only 0.47% higher than what was obtained with the shape optimization for a y_max of 10 mm; but, as expected, the topology obtained in this step is not discrete.
Inverse design step
The inverse design step is stated as a mass minimization problem with constraints on the target deformed shape of the airfoil (35) and on the stability metric (36). The results for this step are shown in Fig. 10, where the black line shows the target deformed surface, and the red line shows the verified deformed surface obtained from the coupled FSI analysis loop for the topology obtained in this step. To reduce the computational cost of the inverse design step, the stability constraints based on (36) were only enabled after the design became feasible with respect to the geometric constraints (35). Table 2 summarizes the two-step topology optimization process, listing the aerodynamic coefficients and the equivalent mass (for a SIMP exponent of 3) at the major checkpoints. With the inverse design step, some of the improvement obtained in the fully coupled step is lost, as the drag coefficient increases by 0.39%; however, the equivalent mass is reduced by 37%. The reduced performance is mostly due to strong coupling effects that amplify the effect of the small discrepancy between target and obtained shapes. In the inverse design step, the surface error metric is constrained to 1%; however, when the performance of the resulting topology is verified, this metric increases to 2% at Mach 0.5. One of the responsible physical mechanisms is that regions of unsupported airfoil skin tend to form bumps as the airfoil flexes, and these bumps cause a reduction of the pressure (due to a local acceleration of the flow), which in turn increases the bump size. This is the main effect the stability function helps to mitigate; Fig. 11 shows how this function leads to material being added to the skin to increase its bending stiffness. Coupling a bump-based shape parameterization method (e.g., Hicks-Henne) with the topology variables could potentially reduce the importance of this positive feedback mechanism. That was not deemed necessary in this work due to the subsonic speeds; however, the effect would be more significant at transonic speeds as bumps can produce shocks.
The seemingly insignificant increase in the surface error metric (from 1 to 2%) results in a 9% increase in lift (but no significant changes to the flow field). We hypothesize that this sensitivity of the design is not due entirely to our proposed method but also to the performance metric we sought to optimize, which results in a system with strong nonlinear characteristics, as its apparent stiffness changes significantly between low and high speeds. Finally, we note that topology optimization results often require some form of post-processing, to which the results may likewise be sensitive (for example, to remove vestigial features).
Comparison with conventional approaches to encourage solid-void topologies
Two common ways to encourage solid-void solutions, in problems that do not necessarily benefit from them, are to penalize intermediate densities more severely (e.g., using higher values of SIMP exponent) and to manipulate the problem formulation such that overall stiffness becomes important (e.g., embedding mass reduction into the optimization goals). The results of both these approaches are presented and discussed in this section.
Weighted objective function
First note that the low-speed lift and compliance constraints require the airfoil to have a minimum stiffness; recall that the AOA is set relative to the undeformed configuration, and so an airfoil that is too flexible will not produce the target lift. Therefore, with a weighted objective function of mass and drag, a stiffness-based problem can be recovered by giving no importance to drag and focusing solely on mass minimization (which would not be desirable). To test the weighted objective approach, we used weights of 0.8 for drag and 0.2 for mass (after scaling the functions), which, based on the drag-mass trade-off from the fully coupled step to the inverse design step (see Table 2), should be sufficient. Moreover, numerical experiments with higher weighting of mass were less successful due to poor stability of the optimization.
Fig. 13 Weighted average approach results, filtered structural density and Mach number contours, deformed configuration from fully coupled step in red for reference
The optimization process is as described for the fully coupled step, except for the change of objective function and the SIMP exponent, which was not ramped but fixed at 3 from the start (ramping it made little difference in the results of Section 4). The convergence history is shown in Fig. 12; the optimization stalls relatively early (the last 8 iterations required 66 function and gradient evaluations), resulting in the topology of Fig. 13, where the deformed configuration from the fully coupled step is also shown for comparison.
The response of the structure is similar and the drag is lower but within one count of what was obtained with the two-step approach. The equivalent mass is 0.485, lower than in the fully coupled step, as expected, but higher than in the inverse design step, due to the less discrete material distribution (see Fig. 13).
Higher SIMP penalization
It is known that challenging topology optimization applications may require a SIMP exponent higher than that suggested by the Hashin-Shtrikman bounds to converge to solid-void solutions. To test this approach, the weighted objective optimization was continued with the SIMP exponent increased from 3 to 4 in steps of 0.25 every outer iteration of the exterior penalty method. The resulting material distribution, shown in Fig. 14, is almost indistinguishable from the previous one, with no significant topological changes. The drag coefficient was not improved and the equivalent mass increased to 0.542 as a result of the stiffness reduction in intermediate-density areas. The lack of change could be due to the solution for a SIMP exponent of 3 being a local optimum. However, the general features of the solution, a dense truss-like structure at mid chord and large voids towards the trailing edge, develop very early in the optimization (note the quick decrease in mass in Fig. 12). As material is removed mostly from low-strain-energy areas, we hypothesize that these solution features are inherent to the presence of the mass objective, and to how quickly and easily it can be targeted by the optimizer (the mass function is linear).
Gradual mass minimization
One would then expect that gradually introducing the mass objective could avoid rapid convergence to the poor local optima described above; this was attempted, starting from the results of the fully coupled step, in two stages. First, a constraint was added to the optimization problem (39) to gradually lower the equivalent mass to 0.5; the initial penalty function parameters were selected such that the initial penalization was equivalent to two drag counts. Then, the problem was switched to minimizing mass while constraining drag below 0.00885; this is required since, once both the mass and compliance constraints are satisfied, there is no longer an incentive to improve discreteness (one of the two must be a scarce resource). Figure 15 shows the convergence history, illustrating the more gradual decrease in mass, and Fig. 16 the obtained topology.
Although mass is reduced without significantly increasing drag, and both values are lower than those obtained after the inverse design step, the resulting topology is still far from discrete. We note that the optimization does not converge fully, as the gradient of the penalized objective function (see Fig. 17) is not zero. Instead, the line searches fail due to the much larger gradients at the weakly connected regions around 80% chord, resulting in a poor search direction.
Discussion
The trailing edge region is not highly stressed, as bending moments are low; therefore, from a purely structural perspective, it does not require large regions of solid material. However, from the aerodynamic standpoint, the trailing edge is responsible for most of the load alleviation. While the topology features close to the leading edge mostly confer stiffness to the airfoil, the trailing edge behaves like a mechanism, one that under passive actuation notably increases the camber near the trailing edge at the higher speed. This localized camber leads to a more aft-loaded pressure distribution (see the light-colored line in Fig. 18) that counteracts the effect of the reflexed camberline of the undeformed airfoil.
It is therefore plausible that this interference between aerodynamic and structural objectives leads to an intermediate design and search direction that stall the optimization. Although the problem is posed such that it benefits globally from a high stiffness-to-mass ratio, locally (near the trailing edge) that is not what minimum drag requires. Different strategies can potentially be used to mitigate the interference between objectives without decoupling them; we note, however, that a strategy akin to the gradual mass minimization approach above will nearly double the computational cost, whereas the inverse design only adds 15% to the cost of the fully coupled step.
Fig. 17 Gradual mass minimization approach, contours of the derivative of the penalized objective function with respect to density
Fig. 18 Pressure coefficient distribution at Mach 0.5 for the baseline airfoil (6 mm constraint) and for the fully coupled step result
Concluding remarks
We have demonstrated how density-based topology optimization can be used to design the internal structure of a compliant airfoil with the objective of improving load alleviation. Better results were obtained than with shape optimization alone, as the nonlinear structural behavior introduced by the topology allowed the airfoil to better adapt to the different fluid loads at different speeds.
The proposed two-step methodology has addressed some shortcomings found when applying density-based topology optimization to non-stiffness-based designs. In the first, fully coupled step, coupled FSI simulations are considered but mass is not explicitly included as an objective or constraint. We have found this to improve the convergence of the optimization with only a minor impact on performance, as it prevents the optimizer from making rapid adjustments that can cause the structure to buckle, and avoids early stopping due to convergence to poor local minima or stalling due to poor search directions.
As mass is not considered in the fully coupled step, the resulting topology will, in general, not be discrete. Therefore, we have introduced a second, inverse design step where a discrete topology is sought for which the airfoil response is the same. The inverse design step is computationally cheaper as it does not rely on coupled FSI simulations, and to avoid critically stable structures, we used a stability metric (36) whose computation requires only one FSI step (for which good initial conditions are available). Obtaining completely discrete topologies requires elaborate filtering strategies; these complicate the convergence of the optimization as the total number of iterations increases due to the need to ramp filter parameters. Moreover, as the best filtering strategy is application-dependent (even for simple problems), the optimization may have to be repeated for different settings. The two-step process greatly reduces the computational cost of testing different density filters at the expense of some performance, since in general only a very refined discrete topology could perfectly replicate the response of the optimum structure obtained in the fully coupled step. An iterative process, alternating between both steps, can also be considered for more complex problems but was not found necessary here. The methodology was compared with three more conventional approaches to encouraging solid-void topologies, all of which failed to provide a discrete material distribution for the example load adaptation problem.
We observed that, due to the alleviation objective, the aerodynamic performance of the structure obtained in the inverse design step is sensitive to slight perturbations of the external shape. Therefore, it is likely that the performance would also be sensitive to approximations such as converting the not perfectly solid-void topology (due to the filter properties) to one that is completely discrete, and to eventual manufacturing inaccuracies. While repeating the inverse design step using different filter methodologies is not computationally expensive, thorough analysis of the robustness of the design with respect to any inaccuracies would still be required. Finally, as nonlinear modeling of the structure is considered and strains are large, load-path analysis would have to be conducted for the complete operating range.
Overall, the proposed approach offers a route for RANS-based topology optimization of FSI systems with a focus on aerodynamic performance rather than on the realizability of the resulting topologies.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Replication of results
The source code developed for this work, the data, and the scripts required to run the optimizations are available at https://github.com/pcarruscag/SU2 under the tags SMO full coupled and SMO inv design.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Engineering",
"Physics"
] |
Preservation of small extracellular vesicles for functional analysis and therapeutic applications: a comparative evaluation of storage conditions
Abstract Extracellular vesicles (EVs) are nanovesicles involved in multiple biological functions. Small EVs (sEVs) are emerging as therapeutics and drug delivery systems because of their contents, natural carrier properties, and nanoscale size. Despite their potential for various clinical applications, little is known about the effects of storage conditions on sEVs for functional analysis and therapeutic use. In this study, we evaluated the stability of sEVs stored at 4 °C, −20 °C, and −80 °C for up to 28 days and compared them to fresh sEVs. The effect of freeze-thaw cycles on the quantity of sEVs was also assessed. We found that different storage temperatures, along with shelf life, impact the stability of sEVs when compared to freshly isolated sEVs. Storage changes the size distribution, decreases quantity and contents, and impacts cellular uptake and biodistribution of sEVs. For functional studies, isolated sEVs should be analyzed fresh or stored at 4 °C or −20 °C for short-term preservation depending on the study design, whereas the −80 °C condition is preferable for long-term preservation of sEVs for therapeutic applications.
Introduction
Extracellular vesicles (EVs) are cell-derived, lipid bilayer-enclosed nanoscale vesicles. EVs are known for being able to work as natural vehicles to deliver components such as proteins and RNA from donor cells to recipient cells, so that cells can communicate with their neighboring and distant cells (Tkach & Thery, 2016).
EVs have been emerging as attractive therapeutic tools for the content molecules generated from their parent cells and have gained great interest as delivery platforms due to their natural carrying ability (Garcia-Manrique et al., 2018). Exosomes, a subtype of EVs, are particularly attractive drug delivery vehicles for their relatively small size and properties such as crossing biological barriers, circulation stability, and inherent targeting (Elsharkasy et al., 2020). While strategies are being developed to isolate different types of EVs, differential ultracentrifugation remains the most commonly used method for exosome separation and concentration (Thery et al., 2006). Because of the lack of specific markers of EV subtypes, it is suggested to describe EVs separated by ultracentrifugation (around 100,000 × g) as small EVs (sEVs) (Thery et al., 2018).
For therapeutic applications, sEVs are often obtained from cell culture. The collection of cell culture supernatant and the differential ultracentrifugation-based exosome isolation process are time-consuming (Yang et al., 2020). However, little is known regarding how to store EVs before analyzing their contents, studying their functions, or using them therapeutically. Generally, EVs are recommended to be stored at −80 °C (Jeyaram & Jay, 2017), but how storage conditions affect the characteristics of EVs has not been fully elucidated and there is a lack of comparative evaluation of different storage conditions. Hence, toward successful clinical translation of sEVs, here we isolated bEnd.3 cell-derived sEVs by differential ultracentrifugation and tested the effects of storage conditions on the size, quantity, protein/RNA content, and properties related to therapeutic applications of sEVs.
Our data indicate that storage temperature affects the size, quantity, RNA/protein content, cellular uptake, and biodistribution of sEVs.
Isolation and characterization of sEVs
sEVs were prepared by differential centrifugation. The medium of bEnd.3 cells grown to 60% confluency (6 × 10^7 cells) was replaced with EVs-depleted medium. After 48 h of incubation, the supernatant was collected and then centrifuged at 300 × g for 10 min, 2000 × g for 10 min, and 10,000 × g for 30 min, and then filtered through a 0.2-µm filter. Afterward, the sEVs were pelleted by ultracentrifugation at 110,000 × g for 70 min, washed with phosphate-buffered saline (PBS) at 110,000 × g for 70 min, and then resuspended in PBS. All centrifugation steps were performed at 4 °C within a day to obtain freshly isolated sEVs. Images of sEVs, fresh and after storage at different conditions for a week, were observed by transmission electron microscopy (TEM). sEVs suspended in PBS were dropped onto a carbon film-coated copper grid and stained with 2% phosphotungstic acid. Images were captured using a Tecnai G2 Spirit TWIN electron microscope (FEI, Holland). The presence of the protein markers CD63 (ab216130, Abcam, UK), TSG101 (ab125011, Abcam, UK), and Alix (sc53540, Santa Cruz Biotechnology, USA) on sEVs was detected via western blotting. Cell lysate and isolated sEVs were separated on an SDS-PAGE gel, transferred onto a PVDF membrane, and analyzed using an Amersham Imager 600 imaging system.
Nanoparticle tracking analysis (NTA)
Since there is a consensus that a low-temperature environment may be more suitable for storing sEVs, we focused on three common storage conditions: 4 °C, −20 °C, and −80 °C. Freshly isolated sEVs were aliquoted and separately stored at 4 °C, −20 °C, or −80 °C. Size distribution and concentration of sEVs, fresh or after storage (3, 5, 7, 14, and 28 days), were analyzed using NTA (Nanosight NS300, Malvern, UK). Also, the effect of freeze-thaw cycles (1-5 times) from −20 °C, −80 °C, or liquid nitrogen to 4 °C on the quantity of sEVs was assessed and compared via NTA. Before performing NTA, samples were diluted 20-fold and resuspended in PBS. Samples were injected into the Nanosight NS300 using a continuous syringe pump at an infusion rate of 20. The movement of nanoparticles under the camera was recorded and captured for 3 × 20 s. The detection threshold for nanoparticles was fixed at 3 for all tests.
Evaluation of contents in isolated sEVs
For contents in sEVs, we focused on two major components: protein and RNA. The change in total protein level of sEVs after preservation was determined using a BCA Protein Assay Kit (MultiSciences Biotech Co., China). The change in the level of the tetraspanin CD63 in sEVs after preservation was evaluated using an enzyme-linked immunosorbent assay (ELISA) kit (CUSABIO Biotech Co. Ltd., China) according to the manufacturer's instructions. RNA in sEVs after preservation was extracted using a Total Exosome RNA and Protein Isolation Kit (Invitrogen, USA) according to the manufacturer's instructions. The change in total RNA level of sEVs after preservation was evaluated using a Nanodrop 2000 spectrophotometer (Thermo Fisher Scientific, USA).
Cellular uptake study
To track sEVs in vitro, sEVs were labeled with PKH67 (green, Sigma-Aldrich, USA) as previously described (Li et al., 2020). For the cellular uptake study, bEnd.3 cells were treated with PKH67-labeled autologous sEVs for 6 h, followed by fixation with paraformaldehyde (PFA) and staining with DAPI (Beyotime, China). Cellular uptake of PKH67-labeled sEVs in U87MG cells was observed using a confocal microscope (Leica TCS SP8 X, Leica, Germany).
Biodistribution study
To study the effect of storage conditions on the biodistribution of sEVs, healthy male BALB/c nude mice were employed as animal models. Isolated sEVs were labeled with the carbocyanine dye DiR (Yeasen Biotechnology, China) for in vivo visualization (Li et al., 2020). Briefly, 10 µg of DiR was added to isolated sEVs, fresh or after storage. After 20 min of incubation, unbound DiR dye was removed by ultracentrifugation. DiR-labeled sEVs were resuspended in PBS. 100 µL of DiR-labeled sEVs were administered to mice through tail vein injection, and fluorescence was recorded using an AniView100 multimodal imaging system (Biolight Biotechnology Co., Ltd., China) at different time points. Ex vivo biodistribution was inspected after in vivo biodistribution monitoring. The animal study was carried out using Institutional Animal Care and Use Committee (IACUC)-approved procedures. Animals were purchased from SJA Laboratory Animal Co., Ltd (Hunan, China) and housed according to the regulations of the IACUC.
Statistical analysis
Data were presented as mean values ± SD. Student's t-test was performed at the significance level of α = 0.05 to evaluate differences between groups.
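As an illustration of the stated analysis (a two-sided Student's t-test at α = 0.05), the snippet below compares two hypothetical groups; the numbers are placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical particle counts (particles/mL) for fresh vs. stored sEVs
fresh  = np.array([9.8e10, 1.02e11, 9.5e10, 1.00e11])
stored = np.array([6.1e10, 5.8e10, 6.4e10, 6.0e10])

t_stat, p_value = stats.ttest_ind(fresh, stored)   # two-sample Student's t-test (equal variances)
significant = p_value < 0.05                        # significance level alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {significant}")
```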
Characterization of sEVs
Isolated sEVs were characterized by size distribution, TEM, and protein markers. The enrichment of the sEV markers CD63, TSG101, and Alix was identified via western blotting (Figure S1). The size distribution of fresh sEVs by NTA (Figure 1(A)) matched that observed under TEM (Figure 1(B)). Images of sEVs under TEM after storage at different temperatures all showed the presence of sEVs, but 1 week of storage caused significant aggregation (Figure 1(B)).
NTA of sEVs
sEVs, fresh or stored up to 28 days at different temperatures, all exhibited acceptable size distributions by NTA (Figure 1(A)).
However, further analysis of particle quantity showed that the number of sEVs decreased quickly after storage under all conditions. Storage at −20 °C and −80 °C slowed the rate of decrease in nanoparticle numbers, but there was still a more than 40% loss of sEV particles after 28 days of storage (Figure 2(A)). Freeze-thawing also significantly reduced the number of sEVs. Freeze-thawing in liquid nitrogen seriously damaged sEVs, and repeated freeze-thaw cycles between −20 °C/−80 °C and 4 °C also contributed significantly to the loss of sEV particles (Figure 2(B)). Analysis of the cumulative size distribution showed that the size range from D10 to D90 of sEVs widened for all storage conditions (Figure 3), with −20 °C enlarging the size most markedly (Figure 3(D)). Similarly, a decreasing trend over time in the percentage of small particles (30-150 nm) in isolated sEVs was observed for all storage conditions, along with an increased percentage of large particles (150-500 nm) due to the loss of small particles (Figure 4).
Contents in sEVs
The BCA test of total protein level showed that storage of sEVs at 4 °C resulted in decreased protein levels after a week; however, storage of sEVs at −80 °C showed no significant decrease in protein level during 28 days of preservation (Figure 5(A)). Consistent with the total protein level, there was a sharp decrease of CD63 in sEVs at 4 °C, but not at −80 °C, during 28 days of preservation. In addition, a significantly slower decreasing trend of CD63 was observed at −20 °C (Figure 5(B)). RNA in sEVs was more stable than protein during preservation. There was no significant decrease in total RNA in sEVs at the 4 °C condition within a week. We did not observe a loss of total RNA in sEVs at −20 °C or −80 °C during 28 days of preservation (Figure 5(C)).
Cellular uptake
Storage conditions influenced cellular uptake of sEVs along with shelf life. Storage of sEVs at 4 °C led to significantly decreased autologous cellular uptake efficiency; however, the uptake efficiency remained as high as that of fresh sEVs for those preserved at −80 °C within three weeks. There was also a decreasing trend of uptake efficiency for sEVs preserved at −20 °C, but the difference became significant only after 14 days of storage (Figure 6).
Biodistribution
Healthy mice were administered DiR-labeled sEVs, fresh or after storage, through tail vein injection and imaged at different time points to monitor biodistribution. Consistent with our results for sEV contents and cellular uptake, fresh sEVs showed strong fluorescence signals in the whole body, ex vivo organs, and brain (Figure 7). For storage at 4 °C or −20 °C, we observed significantly decreased fluorescence signals of sEVs with shelf life (Figure 7(A)). The fluorescence signals could hardly be detected in gastrointestinal (GI) tracts (Figure 7(B,D)) or in brains (Figure 7(C,E)). For storage at −80 °C, we observed stable fluorescence signals in mice and in ex vivo organs within 28 days of storage (Figure 7(A,B)). However, fluorescence signals in brains were significantly decreased after 14 days of storage (Figure 7(C,E)).
Discussion
EVs have tremendous potential for therapeutic applications. For clinical application of EV-based biopsy or therapy, storage conditions should have minimal impact on EV integrity, contents, and functions. In this study, we investigated the effects of storage temperature and shelf life on properties of sEVs related to therapeutic use. We found that freshly isolated sEVs showed the best results in all tests. Different storage temperatures, along with shelf life, affect the stability of sEVs and their functions to varying degrees.
The International Society for Extracellular Vesicles (ISEV) recommends storage of isolated EVs in phosphate-buffered saline at −80 °C (Witwer et al., 2013), but more data are required to support this consensus. There have been several studies exploring the favorable temperature for EV storage, with inconsistent results. Sokolova et al. reported that storage of exosomes derived from three different cell types (HEK 293T, ECFC, MSC) at −20 °C and freeze-thaw cycles up to 10 times have minimal effect on size by NTA (Sokolova et al., 2011). In contrast, Lee et al. reported that −70 °C was the favorable condition for long-term storage of HEK293 cell-derived exosomes isolated using the ExoQuick kit for basic research, as there was less significant loss of exosomal protein and RNA compared to room temperature and 4 °C after 10 days of storage (Lee et al., 2016). In another study, Maroto et al. investigated the effects of storage temperature on the stability of airway exosomes; they found that 4 °C and −80 °C storage for four days both affected the proteomic content of exosomes, and suggested immediate analysis of exosomes for diagnostic and functional studies (Maroto et al., 2017). Similar to their results, Cheng et al. isolated HEK 293T cell-derived exosomes using the ExtraPEG method and investigated the effect of storage conditions on the quantity and cellular uptake of exosomes (Cheng et al., 2019). They reported that storage at 4 °C gave the highest exosome concentration and exosomal protein levels for short-term storage (24 h); however, for long-term storage (over a week), exosomes showed the best stability when stored at −80 °C.
Our data revealed that the optimal storage condition for sEVs may vary depending on the study purpose. NTA produced acceptable size distribution graphs of sEVs during 28 days of storage for all storage temperatures (Figure 1(A)). However, TEM demonstrated aggregation of sEVs after a week of storage at all temperatures (Figure 1(B)). Further analysis revealed that storage temperature, along with shelf life, significantly decreased the quantity of sEVs (Figure 2). Freeze-thawing should be avoided as it severely damaged sEVs. Storage of sEVs increased the cumulative size distribution, especially at −20 °C (Figure 3), and there was a notable loss of 30-150 nm particles for sEVs stored at 4 °C (Figure 4). Therefore, for integrity and quantity, sEVs should be stored at −80 °C avoiding freeze-thawing, but short-term storage (within a week) at 4 °C is also acceptable. For contents in sEVs, total protein and CD63 levels decreased sharply at 4 °C, and the difference became significant after a week of storage. In contrast, the total RNA level in sEVs only started to decrease after 14 days of storage at 4 °C. It is likely that the 4 °C environment maintained the integrity of sEVs during the first week and thus protected the RNA content. We observed no significant decrease in RNA level for sEVs stored at −20 °C or −80 °C (Figure 5). Therefore, for studies focusing on contents and functions, sEVs may be more suitably stored at −20 °C or −80 °C.
An important application of sEVs is as therapeutics or as engineered drug delivery systems. For therapeutic use, storage seems inevitable. A previous study reported that storage temperature did not influence the cellular uptake of exosomes once the loss in quantities was taken into account, but the shelf life (not reported) may have been too short to observe differences (Cheng et al., 2019); overall, there has been a lack of standardized criteria for sEV preservation conditions, and little is known about the impact of storage temperature and shelf life on the properties of sEVs as delivery vehicles in vitro or in vivo. In our study, we found that cellular uptake of sEVs decreased significantly soon after storage at 4 °C, whereas cellular uptake of sEVs stored at −20 °C or −80 °C was relatively stable within 14 days. Given that storage at −20 °C or −80 °C significantly decreased the number of sEVs but not total protein or CD63 levels, it is possible that the decreased cellular uptake of sEVs stored at 4 °C resulted from the loss of their protein contents.
It has been reported that bEnd.3 cell-derived exosomes can cross the blood-brain barrier and enter the brain (Yang et al., 2015, 2017). The intensity of the fluorescence signal in the brain may therefore serve as an indicator of their stability at different storage temperatures for therapeutic applications. Mice receiving fresh sEVs showed the strongest fluorescence signals in the brain (Figure 7). Of all the storage temperatures and shelf lives tested, only sEVs stored at −80 °C for a week showed high fluorescence intensity in the brain (Figure 7(C)), suggesting that storage significantly influences the brain-targeting ability of bEnd.3 cell-derived sEVs. Hence, for therapeutic applications, sEVs should be used as soon as possible or stored at −80 °C for short-term preservation.
Inconsistency in EV isolation, characterization, and analysis limits the comparability between studies investigating storage effects on EVs. Isolation methods affect the feasibility, yield, and purity of EVs (Shtam et al., 2018). Studies using commercial EV isolation kits without technical details may have limited reproducibility for future studies, and the materials used may affect downstream profiling or functional analysis of EVs. In this regard, we used the most common method, differential ultracentrifugation, to isolate sEVs and thus provide a practical reference for future studies. Besides, it has been reported that the detection method influences the results of EV characterization (Almizraq et al., 2017). Methods in previous studies may be inconsistent in the characterization and analysis of EVs, reducing comparability. Future studies following standardized methods, such as those recommended by ISEV (Witwer et al., 2013; Thery et al., 2018), would aid progress in the field.
Aside from preserving isolated sEVs directly after resuspension in PBS, lyophilization, a common method for the preservation of thermolabile materials (Assegehegn et al., 2020), has been used to preserve EVs for analysis (Stamer et al., 2011; Lydic et al., 2015) and to produce EV formulations (Bari et al., 2019). Lyophilization can extend the shelf life of EVs, and freeze-dried EVs may be stored directly at room temperature (Charoenviriyakul et al., 2018) to reduce cost. However, those studies are preliminary and lack standard protocols (Kusuma et al., 2018). The choice of an appropriate cryoprotectant for sEV preservation also requires further investigation.
Conclusion
In conclusion, our study provided relatively comprehensive information on the effects of storage conditions on sEVs with regard to their further functional analysis and therapeutic applications. To accelerate the clinical translation of sEVs, detailed storage protocols are warranted. Furthermore, the development of novel preservation methods is encouraged to increase the commercial availability of sEVs in the future. | 4,296.6 | 2021-01-01T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Early Development of UVM based Verification Environment of Image Signal Processing Designs using TLM Reference Model of RTL
With the semiconductor industry's trend toward ever-smaller geometries, taking an idea to a final product while broadening the product portfolio and remaining competitive and profitable puts growing pressure on CAD flows, process management and the project execution cycle. Project schedules are tight, and achieving first-silicon success is key. This necessitates quicker verification with better coverage metrics. Quicker verification requires early development of the verification environment, with wider test vectors, without waiting for the RTL to become available. In this paper, we present a novel approach for the early development of a reusable multi-language verification flow by addressing four major verification activities: early creation of the executable specification, early creation of the verification environment, early development of test vectors, and better and increased re-use of blocks. Although this paper focuses on the early development of a UVM based verification environment for image signal processing designs using a TLM reference model of the RTL, the same concept can be extended to non-image-signal-processing designs. Main keywords are SystemVerilog, SystemC, Transaction Level Modeling, Universal Verification Methodology (UVM), Processor model, Universal Verification Component (UVC), Reference Model.
INTRODUCTION
Image signal processors (ISP) address different markets, including high-end smartphones, security/surveillance, gaming, automotive and medical applications. The use of industry-standard interfaces and a rich set of APIs simplifies their integration. Image signal processing algorithms are developed and evaluated using C/Python models before RTL implementation. Once the algorithm is finalized, the C/Python models are used as the golden reference model for IP development. To maximize re-use of design effort, common bus protocols are defined for internal register and data transfers. A combination of such configurable image signal processing IP modules is integrated to satisfy a wide range of complex image signal processing SoCs [1].
In the verification environment for an image signal processing design, shown in Figure 1, the host interface path is used to program the configurable blocks using SystemVerilog UVM based test cases. The UVM_REG register and memory model [20] is used to model the registers and memories of the DUT. DUT registers are written and read via the control bus (here an AXI3 bus) UVC: the RTL control bus interface acts as the target and the control bus UVC acts as the initiator driving it. After register programming is done, image data (random or user-defined) is driven to the data bus interface by the data bus UVC, and the same data is also driven to the reference model. The output of the ISP RTL is received by the receiver/monitor of the data bus UVC. The scoreboard compares the outputs of the RTL and the reference model and reports whether they match.
'C' test cases are used to program the RTL registers/memories via the CPU interface. The C test cases control the SystemVerilog data bus UVC using the Virtual Register Interface (VRI) [15], [18]. The VRI layer is a virtual layer over the verification components that makes them controllable from embedded software. It gives verification environment users the flexibility to use the verification IPs without knowing SystemVerilog.
Generally, development of the verification environment starts only after the RTL becomes available. Significant time is then spent setting up and debugging the verification environment after the RTL release, which delays the start and completion of design verification. Ways are needed to start developing the verification environment well before the arrival of the RTL, so that when the RTL is available the verification environment is essentially plug-and-play and verification of the design can start quickly. Using a TLM reference model of the RTL to develop the verification environment well before the RTL arrives proves to be a good solution to this problem. This paper focuses on the early development of a UVM based verification environment for image signal processing designs using a TLM reference model of the RTL, before the RTL is available. The early development of the verification environment for image signal processing designs is described in detail in Section II.
A. Modeling of ISP designs
A loosely timed, high-level model of the ISP block is generated at the algorithmic/functional level using C/C++/SystemC with a TLM-2 interface. The purpose of this model is to serve as a reference model. It may be regarded as a "golden reference model" or "executable functional specification" of the ISP design. From a functional and structural perspective, this model can be divided into two major spaces.
The first space, the algorithmic computational part, is mainly responsible for image processing, applying the various algorithms involved in manipulating the incoming image stream data.
The second space, a TLM interface, is responsible for all communication with external IPs and other system blocks.
The register interface of this model is generated using IP-XACT tools, and the algorithmic part is implemented manually.
B. Testing of Executable Spec only
To test the TLM ISP model, an environment is developed using Python (an open-source scripting language) and the Synopsys Pa-Virtualizer tool chain.
The test environment has the following major components. A suitable TLM sub-system is designed; it consists of various models, namely the ISP functional model, an AXI BFM, configurable clock generator models, a configurable reset generator model, a memory model, a configurable interconnect, etc. All of these are pure SystemC models. The AXI BFM is provided to interact with the rest of the environment.
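As an illustration of how a Python test in such an environment might drive the TLM sub-system, the following minimal sketch programs the ISP model through an AXI BFM and streams one frame through it. All imported names, classes and register offsets (tlm_subsystem, AxiMaster, IspModel, REG_*) are hypothetical placeholders, not the actual Synopsys or project API, and the register map is assumed.

```python
# Minimal sketch of a Python test for the TLM ISP sub-system.
# All imported names and register offsets are hypothetical placeholders.
import numpy as np
from tlm_subsystem import AxiMaster, IspModel          # hypothetical Python bindings
from isp_algorithm import reference_isp                # pre-RTL C/Python algorithm model (hypothetical name)

REG_CTRL, REG_WIDTH, REG_HEIGHT = 0x00, 0x04, 0x08     # assumed register map

def run_isp_test(width=64, height=48, seed=1):
    axi = AxiMaster()                                  # AXI BFM acting as initiator
    isp = IspModel()                                   # loosely timed TLM ISP model

    # Program the configurable blocks via the register interface.
    axi.write32(REG_WIDTH, width)
    axi.write32(REG_HEIGHT, height)
    axi.write32(REG_CTRL, 0x1)                         # enable processing

    # Drive a random input frame through the TLM model.
    rng = np.random.default_rng(seed)
    frame = rng.integers(0, 256, size=(height, width), dtype=np.uint8)
    out = isp.process_frame(frame)

    # Check the model against the original algorithmic (C/Python) model,
    # so that the TLM model qualifies as the executable specification.
    assert np.array_equal(out, reference_isp(frame)), "TLM model deviates from the algorithm model"

if __name__ == "__main__":
    run_isp_test()
```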
The ISP RTL block needs exhaustive verification, which is possible only when the RTL is ready. However, RTL development takes time, which means that verification of the RTL design cannot start before it becomes available. To shorten this sequential dependency, the functional model of the ISP is used to prepare the verification environment early.
A SystemVerilog test bench wrapper is created over the SystemC/TLM ISP sub-system. This SystemVerilog test bench interfaces with the RTL verification environment.
D. Virtual Platform Sub-system
When all components of the platform are in TLM/C, i.e. C/C++ is used as the modeling language, we call it a pure virtual platform. In a typical verification environment, the verification components are generally not all TLM based and are written in different verification languages, making it a multi-language heterogeneous simulation environment. For developing the early verification environment, a TLM based sub-system is developed in which every block is in TLM/C. This TLM based sub-system is a model of the RTL.
In the above-mentioned RTL verification environment, a processor model is used, which enables early development of 'C' test cases for programming the RTL registers/memories via the CPU interface. The challenge is to keep the verification environment independent of the 'C' test cases: we do not wish to recompile the environment every time the application code changes. To achieve this, a sub-system is designed that consists of models of bus interfaces (such as an AXI BFM), a "generic" processor model, a memory model, etc. An independent 'C' program/test case is written to do all the programming and configuration, which in turn runs on the processor model of this sub-system. This sub-system is an active element during the programming phase, but becomes passive once the programming is complete.
The virtual platform sub-system can be represented by the following block diagram.
E. Virtual Register Interface (VRI)
Today, most embedded test infrastructure uses ad hoc mechanisms such as shared memory or other synchronization mechanisms to control simple bus functional models (BFMs) from embedded software.
In order to give the 'C' test developer full control over these verification components, a virtual register interface layer is created over the verification environment. It exposes the sequences of the verification environment to the embedded software, enabling it to configure and control the verification environment and thus to achieve the same exhaustive verification at the SoC level. This approach addresses several aspects of verification at the SoC level.
F. Flow used for Design Verification
Well before the arrival of the RTL, a C/Python model of the image signal processor design is developed for algorithm evaluation. Then a TLM/SystemC model of the design is created from the C/Python model. After thorough validation of this model with the required test vectors, the model qualifies as an executable golden model or executable specification, i.e. a 'living' benchmark of the design specification. Using the TLM model as the DUT expedites the development and better proofing of the verification environment with wider test vectors, without waiting for the RTL to be available.
Standard interfaces are used to enable the reuse of verification components. In addition to the standard method of bus-interface or signal-level connectivity, the UVM Multi-Language Open Architecture is used to connect SystemVerilog TLM ports directly to SystemC TLM ports, which gives better simulation speed and a better development/debug cycle, in addition to clean and easy connectivity/integration of blocks. The presence of TLM components gives the flexibility to make direct backdoor accesses to the DUT registers and memories. In both of the above cases, control/data flows across both TLM and bus interface boundaries. This method increases the chances of re-using different already existing blocks in the flow. IP-XACT based tools are also used to automatically configure the environment for various designs.
By the time the RTL arrives, the complete verification environment and test vectors are ready and well proven, thus reducing the number of verification environment issues that arise when actual RTL verification is started. When the RTL arrives, the TLM/SystemC model is simply replaced with the RTL block, reusing the maximum of the other verification components. This enables rapid regression testing of the design immediately. The same C test cases can also be run on the actual core. | 2,217.6 | 2014-08-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Study of the µ opioid receptor in cutaneous ulcers of leishmaniasis and sporotrichosis according to the complaints of local pain
Patients with cutaneous leishmaniasis or sporotrichosis with ulcerated lesions may present similar epidemiological and clinical characteristics. Local pain is often reported in sporotrichosis lesions, but not in cutaneous leishmaniasis. The µ opioid receptor (MOR) is indirectly associated with the production of cytokines and is related to epidermal proliferation.
Sporotrichosis in the state of Rio de Janeiro is mainly caused by Sporothrix brasiliensis [6]. In sporotrichosis, the dermis presents, at an early stage, inflammation with infiltration of neutrophils, plasma cells, and lymphocytes that may or may not be intense. Gradually, the usual ulcerated cutaneous lesion typically exhibits a granulomatous dermatitis surrounding a suppurative abscess, with a central zone composed of neutrophils and a few eosinophils and an outer zone of lymphocytes and plasma cells. At a later stage, granulomas mainly consist of epithelioid cells. Small abscesses may be seen within granulomas. The histopathological findings are generally nonspecific and variable in different stages of the disease. The histopathological pattern is usually a combination of pyogenic and granulomatous reaction and may display epidermal hyperplasia, papillomatous acanthosis, hyperkeratosis, and intraepidermal microabscesses [7,8].
Experimental immunologic studies in the cutaneous lesions of leishmaniasis and sporotrichosis also show similar in situ profiles, with high levels of activated type 2 macrophages and production of IL-4 and IL-10 [9].
Inflammatory mediators are released, and tissue acidification activates nociceptive primary afferent neurons that stimulate the sensation of pain, causing hyperalgesia [10].
Immunocytes are recruited and release interleukins, performing their functions in the process of healing the cutaneous lesion in an orchestrated way. The cytokine cascade results in the activation of COX-2-dependent prostanoids and in the release of catecholamines from sympathetic fibers [11]. Cytokines such as IL-1, IL-6, TNF-α and IL-8 are related to the pain threshold.
On the other hand, opioid peptides render nociceptors less sensitive to excitation and thus inhibit the action of multiple excitatory mediators. Opioid peptides do not bind exclusively to one unique opioid receptor, but instead exhibit affinity for various opioid receptors, including μ-, δ- and κ-opioid receptors [10,12]. Endogenous opioids such as endorphins and enkephalins act primarily on μ- and δ-opioid receptors. They are synthesized in vivo to modulate pain mechanisms and inflammatory pathways, and they mediate analgesia in response to painful stimuli by binding to opioid receptors on sensitive cutaneous nerves. Opioids produced by cells of the immune system and keratinocytes are capable of exerting additional effects, such as immunomodulation in cutaneous inflammation. β-endorphin is present in macrophages, monocytes, granulocytes, and lymphocytes, in secretory granules arranged at the cell periphery, ready for exocytosis. During the early stages of inflammation, as the leukocytes migrate to the site of infection, they (along with the resident cells) secrete various chemokines such as IL-1, IL-6 and IL-8, which lead to hyperalgesia. In the late inflammation stage, macrophages and lymphocytes secrete IL-4, IL-10 and IL-13, inhibiting the hyperalgesic pathways and leading to hypoalgesia [13]. Pain perception depends upon the activation of specialized peripheral neurons called primary afferent nociceptors (PANs) from primary afferent fibers (Aβ-, Aδ-, and C-fibers) [14].
The peripheral anti-nociceptive action of μ-opioid receptor (MOR) agonists is greatly increased in inflamed tissues. This is in part due to stimulation of MOR synthesis in the dorsal root ganglion (DRG) induced by cytokines, especially nerve growth factor (NGF), and its transport to peripheral terminals. In summary, MOR analgesia depends upon a set of widely distributed neural targets that include PANs, ascending pain projection neurons and a top-down pain modulatory circuit [14].
Some experimental studies on nociception in leishmaniasis have been performed in order to better understand the profile of cytokines related to pain, without a satisfactory conclusion [15,16]. The purpose of this study was to determine the profile of MOR staining in well-established cutaneous ulcerated lesions of leishmaniasis and sporotrichosis in patients from Rio de Janeiro, Brazil, and to associate the MOR staining profile with the presence or absence of pain in the cutaneous lesions of both diseases.
The reaction was detected with an alkaline phosphatase kit (GBI Labs, Bothell, Washington, USA) and revealed with permanent red [17]. The sections were counterstained with hematoxylin.
Compact granulomas (mature epithelioid cells in clusters) were present in 50% of the leishmaniasis cutaneous lesions. There was no association between the clinical complaint of pain in the lesions and the intensity of the MOR staining (Table 2).
Discussion
The keratinocytes are surrounded by the inflammatory infiltrate, being stimulated and releasing cytokines and growth factors, including NGF and opioids. Probably the high intensity [...] These data suggest that epidermal cytokine expression may be [...] [14].
A weakness of this study is its retrospective design.
Further studies with a larger number of patients and with gradation of the local pain sensation would allow a better understanding of local pain in granulomatous infectious diseases of the skin.
In conclusion, the general notion that leishmaniasis lesions are mainly painless whereas sporotrichosis lesions are painful could not be directly demonstrated in this study. There was no association between MOR staining and the presence or absence of local pain in the cutaneous lesions of either disease. There must certainly be pain modulation by local opioids, but the main modulation seems to originate from the peripheral and central nervous systems.
Financial Support
Coordenação de Aperfeiçoamento de Pessoal de Ensino Superior (CAPES) through code 001. | 1,288 | 2019-12-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
The squeezable nanojunction as a tunable light-matter interface for studying photoluminescence of 2D materials
We study photoluminescence (PL) of MoS2 monolayers in optical cavities that can be tuned in operando. Technically, we use the recently developed squeezable nanojunction (SNJ), a versatile mechanical setup that has been used to study thermoelectric effects at electronic tunneling distances. Here, we focus on a cavity with 0–3 micrometer plate distance and optical access. Owing to the tunable cavity, we observe strong distortions of the PL spectra. By analyzing the ensemble of spectra, we identify a normalization protocol that disentangles the contributions from excitation, gating and emission. The systematic evolution of the data reconfirms the drastic influence of the local electromagnetic mode budget on the spectral properties. The experiment further underscores the broadband application range of the SNJ technique, which is able to combine (nano-)electronic functionality with optical access and a tunable light-matter interface.
Introduction
The scanning tunneling microscope (STM), with its atomic resolution and picometer distance control, has revolutionized nanoscience. For the investigation of two-dimensional (2D) layered materials, an analogous technique would be desirable that provides picometer distance control and, different from the STM, unobscured optical access. The squeezable nanojunction (SNJ) technique provides many of these properties (it does so, however, without lateral atomic control). After a first appearance decades ago [1][2][3][4][5], it has recently been rediscovered for studying thermoelectric effects in single-molecule-like contacts [6,7]. Figure 1(a) displays a schematic of the SNJ: two chips (silicon carbide or fused silica) are mounted face to face with a spacer that impedes initial contact. By pressing via a spring from underneath, the distance between the chip surfaces can be finely controlled. This setup is ultrastable and transparent [6,7]. In contrast to our previous experiments, we add a scanning confocal microscope for optical microscopy and spectroscopy. As a 2D luminescent sample, we place a monolayer MoS2 flake on the surface of the upper chip; the lower chip is chosen to be either a mirror (see figures 1(b) and (c)) or a transparent dielectric.
Nearly any optical investigation of 2D materials involves an electromagnetic environment. Often, SiO2 on silicon, fused silica or similar dielectrics are used as substrates. Any such stack brings in (multiple) reflections, which provide built-in Fabry-Perot (F-P)-like phenomena. The importance of the associated patterns of the electromagnetic field in which the 2D material is located has been recognized [8][9][10]. The method we present is suited to explore the interaction between a continuously varied F-P stack and the 2D material systematically on the very same sample spot. We see strongly distorted spectra and trace their evolution. The data raise awareness of potential artifacts and allow for a refined insight into PL in 2D materials. [Figure 1(d), (e) caption: experimental photoluminescence (PL) spectra excited at 532 nm with varied distance between MoS2 and a gold mirror. When the MoS2 position crosses a node in the excitation mode ((b) and (d)), the overall PL intensity is strongly varied. When the MoS2 position crosses nodes in the emission modes ((c) and (e)), however, the PL spectra can be strongly distorted; here, a dip is induced that moves from low to high wavelengths when increasing the distance.]
attached to the upper chip (figure 2(a)). Fully reflecting mirrors are absent in this section. A first experimental necessity is the determination of the distance d between the two plates. For this purpose, we use essentially white-light interferometry within the same optical setup.
The calibration curve thus obtained is displayed in figure 2(b) with nanometer resolution, see SI (available online at stacks.iop.org/2DM/8/045034/mmedia). From here on, we present all data as a function of d. Figure 2(c) shows regular F-P lines which have a common origin at d = 0.
We now turn to the PL measurement of the MoS2 flake (figure 2(d)). For excitation, we use a green laser (λlaser = 532 nm). The well-known spectral feature at 680 nm, which is composed of the A exciton and the A− trion [11][12][13][14], is dominant. In this experiment, we focus on its distance dependence, which is only at first glance periodic in d. Our setup allows, however, for a more detailed view of the interaction of the flake's optical response with the given electromagnetic environment. In this representation, one can identify variations in peak position (on the 10 nm scale), peak height and peak width. Altogether, the geometry introduces a significant distortion of the spectral features. Our extensive data set allows for a further decomposition of the contributing effects. Normally, one is interested in the spectral features of the flake and considers the electromagnetic environment as a weak perturbation. Here, we study in detail the deviations caused by the environment. For this purpose, we divide the data by a distance-averaged spectrum. This leads to figure 2(e), where the enhancement/suppression factor is singled out. One can identify a regular pattern of peaks, the positions of which are located on horizontal and tilted lines.
We first address the horizontal lines, which appear equidistant in d. Their origin is the well-defined intensity variation of the excitation field. By tuning d, the interference conditions in the F-P stack are altered. As a result, the excitation field intensity is oscillatory in d with a periodicity of λlaser/2. In linear approximation, this affects the entire emission spectrum equally. If this were the only effect, we would expect only vertical periodicity, but not the obvious variations of spectral weight within the horizontal lines in figure 2(e).
The F-P stack, however, also influences the emission coupling strength. As the PL feature of MoS2 has a significant spectral width, it inherits in part the tilted lines of the white-light spectrum (cf. figure 2(c)). The result is a wavelength-dependent coupling function, which is oscillatory in both λ and d. We can separate it from the data by again normalizing the data set, now along the λ-axis (at least for those data where every horizontal line includes sufficient λ-averaging). This procedure approximately averages out the influence of the excitation. The result is displayed in figure 2(f): we indeed find the diagonal lines that are reminiscent of the white-light data (cf. figure 2(c)).
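A minimal numpy sketch of this two-step normalization, assuming the PL data are stored as a 2D array indexed by distance and wavelength (array layout and variable names are ours, not taken from the original analysis code):

```python
import numpy as np

def extract_coupling_maps(pl):
    """pl: 2D array of PL counts with shape (n_distances, n_wavelengths)."""
    # Step 1: divide every spectrum by the distance-averaged spectrum <PL>_d.
    # This singles out the enhancement/suppression factor (cf. figure 2(e)).
    coupling = pl / pl.mean(axis=0, keepdims=True)

    # Step 2: normalize each row along the wavelength axis to average out
    # the modulation of the excitation, leaving the emission coupling with
    # its tilted lines (cf. figure 2(f)).
    emission_coupling = coupling / coupling.mean(axis=1, keepdims=True)
    return coupling, emission_coupling
```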
Hence, when the light-matter coupling is simultaneously strong for both excitation and emission, i.e. at the crossing of horizontal and diagonal lines, a peak results in figure 2(e). [Figure 2(d)-(h) caption: (d) PL spectra; upon variation of the geometry, the excitonic peak is warped by F-P effects, and the distance-averaged spectrum ⟨PL⟩d is drawn as a red line. (e) After normalization of the PL data from (d), the coupling factor c due to F-P is extracted. (f) Subsequent to the latter normalization, a second normalization compensates the F-P modulation of the excitation; data at λ < 600 nm are affected by artifacts due to the uncompensated influence of Raman features of the SiC substrate. (g) Coupling factor simulated within a transfer matrix model for normal incidence. (h) Coupling factor simulated for incidence angles up to 22° with respect to the surface normal, with excellent agreement with the experimental data of (e).] Note that the data in figures 2(e) and (f) are purely based on experiments; only their interpretation relies on the F-P model. For comparison, we performed transfer-matrix method simulations [15], which compute the light-matter enhancement factor, but not the spectral properties of the sample flake. The result is displayed in figure 2(g) for normal incidence, which reproduces qualitatively the experimental data analysis of figure 2(e), however with subtle deviations in peak shape, intensity and position. The agreement between experiment and simulation is more accurate when, in addition, finite emission angles are included (θmax = 22° for best fit to the data, see figure 2(h), corresponding to an effective numerical aperture of NA = 0.37, which is slightly below the objective characterization of NA = 0.42). The main deviations occur at the left side of the presented spectra, because there the signal is weak due to its suppression by the dichroic mirror.
In this regime of weak quality factors, we can approximately consider the excitation, the internal effects in the MoS2 flake and the emission as separate phenomena. This can be mapped onto a model where the corresponding rates factorize. As a consequence, we could successfully implement a protocol that reliably separates the spectral properties of the flake and the spectral distortions of the electromagnetic (F-P) environment without involving further model assumptions. When the quality factor of the resonators is further enhanced, we expect a more complex interplay of coupling strength and internal processes of MoS2 [16].
Photoluminescence of MoS 2 in front of a mirror
Conceptually, the experiment is simplified when replacing one dielectric chip by a mirror (see figure 3(a)). The physics then essentially boils down to the 1D textbook picture of standing electromagnetic waves in front of a mirror (cf. figures 1(b) and (c)). The SNJ technique, with its high spatial accuracy, is able to move a MoS2 flake continuously within the standing wave pattern. In particular, we can address the spatial region of the first and second node/antinode in front of the mirror. Obviously, the different wavelengths involved (excitation at 532 nm, PL signature from 600 to 800 nm) will have nodes at different positions, as graphically displayed in figures 1(b) and (c). The sample flake can be moved through this pattern. In a distance range of 0-700 nm, the respective nodes are well ordered and separated. The electromagnetic environment induces strong spectral distortions of the PL spectra, both in intensity and shape. Figure 1(d) displays the evolution of the PL when the MoS2 flake crosses a node in the standing wave of the excitation laser. Its influence affects the peak height by a factor of 10, but the shape of the PL feature is essentially unchanged. For the emission, the situation is more complex (figure 1(e)). In the selected window, there is always one wavelength with blocked emission, because the flake sits in the respective node. By changing the distance, the resulting dip is shifted through the spectrum. This also influences the position of the PL peak and can even result in a split double peak. At first sight, one would expect both the excitation and the emission modulation to follow sin²(2πd/λ). A closer analysis that takes the dielectric contrast between the vacuum and SiC into account results in an even sharper modulation. The total body of data is depicted in figure 3(b), where the PL intensity is color-coded as a function of λ and d. In order to distill out the electromagnetic coupling factor, we normalize the measured spectra by dividing by the PL at the lowest distance, where spectral distortions are weakest. The result is displayed color-coded in figure 3(c). Red displays parameter regions of PL amplification, whereas blue indicates PL suppression. In this representation, one can clearly identify equidistant horizontal lines that originate from the excitation nodes. One can also see tilted lines originating from the emission node structure. Remarkably, the tilted lines converge for vanishing wavelength approximately in one single point. Its virtual distance d is slightly negative (d ≈ −20 nm), which is due to the reflection phase at the gold mirror surface. Also, the slope of the tilted lines deviates slightly from nλ/2 due to the inclusion of emission angles ≠ 0° to the surface normal (NA > 0). We conclude that the MoS2 PL feature, with its significant spectral width, experiences strong deformation by the electromagnetic environment. While the effect itself is not unknown, experiments with the SNJ technique can quasi-continuously vary the electromagnetic environment and thus make the drastic influence of both the excitation and emission mode structure accessible.
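The textbook standing-wave picture behind figures 1(b)-(e) can be reproduced in a few lines of numpy. The sketch below uses the idealized sin² modulation mentioned above for both excitation and emission and neglects the dielectric contrast of the SiC chip as well as the reflection phase at the gold mirror (both of which sharpen and shift the real pattern):

```python
import numpy as np

def mirror_coupling_factor(d_nm, lam_nm, lam_laser_nm=532.0):
    """Idealized coupling factor for a thin emitter at distance d in front of a
    perfect mirror: standing-wave intensity of the excitation (period lam_laser/2)
    times the emission mode intensity at the emitter position (period lam/2)."""
    d = np.asarray(d_nm, dtype=float)[:, None]         # distances as column
    lam = np.asarray(lam_nm, dtype=float)[None, :]     # emission wavelengths as row
    c_exc = np.sin(2 * np.pi * d / lam_laser_nm) ** 2  # horizontal lines in (d, lam)
    c_emit = np.sin(2 * np.pi * d / lam) ** 2          # tilted lines converging at d = 0
    return c_exc * c_emit

# Map over roughly the window of figure 3: peaks appear where both factors are large.
c = mirror_coupling_factor(np.linspace(0, 700, 701), np.linspace(600, 800, 201))
```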
Photoluminescence of gated MoS 2 in an optical cavity
In many previous PL experiments, the influence of electrostatic gating on exciton/trion formation has been investigated thoroughly (within a static F-P stack) [11,12]. In the previous sections, we have demonstrated the strong impact of the tunable electromagnetic mode pattern. It is desirable to combine both parametric variations within a single experiment. For this purpose, we have defined a two-mirror cavity (cf. figure 4(a)). One mirror is a 50 nm gold layer on the bottom chip; the other is a partially transparent Al layer (5 nm thickness) on the upper chip, which is then overcoated with a transparent sputtered SiO2 dielectric. The MoS2 monolayer is deposited on the latter, such that approximately λ/4 conditions are met for the emission wavelength. The flake is electrically contacted by a gold electrode from the side. In contrast to the previous section with a single mirror, the second metal sheet not only provides an optical cavity with an enhanced quality factor, it simultaneously allows the charge density within the flake to be controlled independently via a gate voltage VG applied between the Al gate and the flake. Figures 4(b) and (c) display the PL spectra for gate voltages VG = 0 V (b) and VG = −16 V (c), respectively. The overall shape of both patterns is similar. The main difference appears in intensity; further subtle differences in peak shape can be recognized. The optical coupling factor (figure 4(d)) is determined in full analogy to figure 2(e), i.e. by normalizing with an ensemble-averaged spectrum. A comparison to the previous single-mirror case shows two main differences: (a) the peak features are sharpened due to the improved cavity and (b) the peak features are asymmetric due to the asymmetric placement of the MoS2 flake in the cavity.
We contrast this analysis with another one that is intended to remove the influence of the coupling factor and to capture mainly the electrical response: figure 4(e) plots the 'gateability' g, which we define as g(λ, d) = PL(λ, d; VG = −16 V) / PL(λ, d; VG = 0 V), i.e. the data plotted in figure 4(c) divided by those in figure 4(b). At first sight, one may expect that the influence of the optical cavity should be cancelled by this normalization, resulting in d-independent behavior of g (which would be the case if the MoS2 PL quantum efficiency were only a function of λ and VG). The peak feature at 660 nm arises because the (uncharged) exciton and (charged) trion statistics are differently affected by the electrochemical potential [12]: if free electrons are present, excitons will combine with electrons and thus form trions, the latter of which decay mainly nonradiatively. Therefore, by applying a negative gate voltage, free electrons are removed from the MoS2 and the exciton PL features are enhanced.
The clear d-dependent patterns in figure 4(e) reveal, however, an interplay between the optical and electrical parametrization. This modulation is periodic with half the excitation laser wavelength. We attribute this main effect to the dependence of the gateability on the excitation intensity/generation rate (i.e. the quantum efficiency is also a function of the excitation amplitude [12]). Its modulation with d stems from the modulation of the laser intensity at the flake's position (with a period of λlaser/2). Remote from the peak region, a remaining modulation with d and λ can be observed, reminiscent of the diagonal lines of the coupling factor (figure 4(d)). One may argue that these lines stem from wavelength-dependent variations of the radiative decay times of both excitons and trions [17], which interact with their gate-dependent decay statistics. However, the weak signals remote from the peak can also be affected by incomplete background subtraction, so that artifacts cannot reliably be excluded. More information may be gained with time-resolved PL measurements [17][18][19]. Overall, the nontrivial structure of the gateability underscores that the mode structure of the cavity is required to understand the gate response of the PL in detail.
Conclusions and outlook
We have presented PL measurements of a monolayer MoS2 flake. Due to a continuous variation of the electromagnetic environment/cavity, its interplay with the PL becomes particularly clear. We observe variations of the PL intensity which reach the order of 100 in our experiments but are theoretically not limited. Variations of the excitation mode at the position of the 2D material cause predominantly intensity variations, whereas variations of the (wavelength-dependent) emission modes may strongly shift or distort the spectral features. This is certainly more than a playful variation of artifacts: our experiments elucidate the strong impact of a purely dielectric F-P stack or, even more drastically, the impact of reflecting surfaces. Both are omnipresent in PL investigations of 2D materials. Finally, we have demonstrated experimentally that the interplay of electrostatic gating and geometry variations is not trivial. The method used, the SNJ, has proven to be valuable for optical investigations of 2D materials. Because it operates with a small mode volume, it can be further optimized towards the strong coupling regime [16,20].
MoS 2 preparation:
The MoS2 flakes were prepared by mechanical exfoliation from a bulk MoS2 crystal using blue tape. After mechanical exfoliation, the thin MoS2 flakes were transferred onto a polydimethylsiloxane film (5 × 5 mm²). Monolayer (1L) MoS2 flakes were identified by optical contrast and PL measurements. Then an individual 1L MoS2 flake was transferred onto a chip using an all-dry transfer method [21]. Finally, the successful transfer of the 1L MoS2 flake onto the SiC chip was confirmed by PL measurements.
SNJ, chip preparation: 8 × 4 mm chips were cut from semi-insulating 4H-SiC wafer material and cleaned wet-chemically by a standard RCA procedure. By means of optical lithography and CF4 plasma etching, 1 µm of the surface was removed, excluding the mechanical contact points and the sockets on which MoS2 was subsequently placed (this helps to keep surface impurities from preventing the touching regime). Electrical leads were fabricated by means of optical lithography and sputtering (Al, SiO2) or e-beam evaporation (Au).
SNJ, mechanical setup: Chip holders including micrometer screws for lateral positioning of the two chips were used, allowing for a lateral relative positioning accuracy of a few µm. The bending force is exerted on the bottom chip via a mandrel that is tensioned via a piezo-spring-lever mechanism. The spring is pretensioned with a motorized screw to allow for a larger range. The mechanical parts are mounted in a Cryo-Vac vacuum vessel. The whole mechanical setup can be scanned inside the vacuum vessel with a motorized x-y stage. The measurements of figures 1-3 were performed under vacuum conditions and the measurements of figure 4 under ambient conditions. All experiments were carried out at room temperature. A closer technical description is given in the SI.
Optical setup: A custom-built confocal microscope was used, with excitation capabilities including a 532 nm continuous wave laser and a tungsten incandescent light (for orientation and white-light interferometry). Excitation light is directed to the sample via a dichroic mirror and focused via an LCPLFLN20xLCD Olympus objective specified with NA = 0.45 and characterized at NA = 0.42. Laser spot sizes were on the order of 2-3 µm, and therefore all measured properties are restricted to this area (for white-light interferometry due to the pinhole). The variation of d within the focus area is orders of magnitude smaller than the wavelength, see SI. PL and reflected light are collected with the same objective, pass the dichroic mirror and are filtered with a long pass filter before detection. An Andor Shamrock 500i spectrometer with an Andor Newton 920 camera was used for spectroscopy. Motorized flip mounts were used for fully automated switching between light sources as well as between imaging and spectroscopy modes, allowing for over-night measurements.
Transfer matrix method (TMM) calculations: As a starting point, we used the Python TMM package [15]. A drastic numerical speed-up could be achieved by the following steps: (a) directly substitute all fixed parameters with numerical values; (b) evaluate the transfer matrix multiplications symbolically using the SymPy package [22]; (c) create numpy [23] functions (using numpy.frompyfunc); (d) evaluate the functions for all desired parameter combinations.
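To illustrate steps (a)-(d) on a toy example, the snippet below builds the normal-incidence reflectance of a single layer symbolically (for one layer the transfer-matrix result reduces to the standard Airy formula) and converts it into a vectorized numerical function, here via sympy.lambdify for brevity rather than numpy.frompyfunc as in the original flow. The refractive indices are placeholder values, not those of the actual material stack:

```python
import sympy as sp
import numpy as np

lam, d = sp.symbols('lam d', positive=True)

# (a) substitute fixed parameters directly: toy stack SiC / vacuum gap / SiC
n0, n1, n2 = 2.6, 1.0, 2.6                       # placeholder refractive indices

# (b) symbolic expression; for a single layer the matrix algebra collapses
#     to the Airy formula for the reflection amplitude
r01 = (n0 - n1) / (n0 + n1)
r12 = (n1 - n2) / (n1 + n2)
beta = 2 * sp.pi * n1 * d / lam
r = (r01 + r12 * sp.exp(2 * sp.I * beta)) / (1 + r01 * r12 * sp.exp(2 * sp.I * beta))
R_sym = sp.Abs(r) ** 2

# (c) create a fast numerical function
R = sp.lambdify((lam, d), R_sym, modules='numpy')

# (d) evaluate on the full (lam, d) grid
lam_grid = np.linspace(500e-9, 800e-9, 400)
d_grid = np.linspace(0.0, 3e-6, 400)
R_map = R(lam_grid[None, :], d_grid[:, None])    # F-P fringes with common origin at d = 0
```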
Normalization of reflectance spectra and distance calibration: Reflectance spectra were recorded at the same spot as the PL spectra but with tungsten lamp illumination. Normalized reflectance spectra for distance calibration were produced with the following routine: (a) reflectance spectra for a wide range of (unknown) plate distances were recorded (by sweeping the piezo voltage VPiezo); (b) for every λ, the spectra were averaged along VPiezo over an integer number of oscillations to remove the d-dependence, yielding a calibration spectrum; (c) all (d-dependent) raw spectra were divided by the calibration spectrum. Finally, d was calculated by parameter optimization of the reflectivity within a TMM model of the material stack under investigation.
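A compact numpy/scipy sketch of this routine, assuming the raw reflectance spectra are stacked into a 2D array over piezo steps and that a model reflectivity function (e.g. from a TMM calculation of the actual stack) is supplied by the caller; function and variable names are ours:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def normalize_reflectance(raw):
    """raw: 2D array of reflectance spectra, shape (n_piezo_steps, n_wavelengths).
    Steps (b) and (c): divide every spectrum by the piezo-averaged calibration
    spectrum (in practice the average is taken over an integer number of
    Fabry-Perot oscillations)."""
    calibration = raw.mean(axis=0, keepdims=True)
    return raw / calibration

def fit_distance(lam, r_norm, model_reflectivity, d_bounds=(0.0, 3e-6)):
    """Least-squares fit of the plate distance d for one normalized spectrum
    r_norm(lam) against model_reflectivity(lam, d), e.g. a TMM model."""
    cost = lambda d: np.sum((r_norm - model_reflectivity(lam, d)) ** 2)
    result = minimize_scalar(cost, bounds=d_bounds, method='bounded')
    return result.x
```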
Simulation of coupling factors: Coupling factors were calculated as c = c_exc × c_emit, with the excitation coupling factor c_exc = c_tmm(λlaser, d) and the emission coupling factor c_emit = c_tmm(λ, d). Herein, c_tmm(λ, d) is the intensity at the position of the MoS2 relative to the intensity of the incident light of wavelength λ, obtained with a TMM model of the material stack under investigation. For the inclusion of finite emission angles θ, angle-dependent emission coupling factors c_tmm(λ, d, θ) were calculated and averaged over θ after weighting by sin(θ).
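The composition and angle averaging can be written generically as below; the intensity function c_tmm is assumed to be supplied by the user (for instance computed with a transfer-matrix model of the stack at hand), so no specific stack geometry is hard-coded here:

```python
import numpy as np

def coupling_factor(c_tmm, lam, d, lam_laser=532e-9,
                    theta_max=np.deg2rad(22), n_theta=32):
    """c = c_exc * c_emit for an emitter embedded in the stack.

    c_tmm(lam, d, theta): relative intensity at the MoS2 position for incidence
    angle theta, from a transfer-matrix model of the stack (user-supplied).
    The emission factor is averaged over collection angles, weighted by sin(theta)."""
    c_exc = c_tmm(lam_laser, d, 0.0)                 # excitation at normal incidence
    thetas = np.linspace(0.0, theta_max, n_theta)
    weights = np.sin(thetas)
    c_emit = np.average([c_tmm(lam, d, t) for t in thetas],
                        axis=0, weights=weights)
    return c_exc * c_emit
```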
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors. | 4,936.4 | 2021-01-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Maximizing value of genetic sequence data requires an enabling environment and urgency
Severe price spikes of the major grain commodities and rapid expansion of cultivated area in the past two decades are symptoms of a severely stressed global food supply. Scientific discovery and improved agricultural productivity are needed and are enabled by unencumbered access to, and use of, genetic sequence data. In the same way the world witnessed rapid development of vaccines for COVID-19, genetic sequence data afford enormous opportunities to improve crop production. In addition to an enabling regulatory environment that allowed for the sharing of genetic sequence data, robust funding fostered the rapid development of coronavirus diagnostics and COVID-19 vaccines. A similar level of commitment, collaboration, and cooperation is needed for agriculture.
Introduction
Access to, and use of, genetic sequence data (GSD) is a valuable public good that accelerates discovery, builds scientific capacity, and creates opportunities for increased agricultural productivity. Future access to GSD for all users must be ensured to realize its full potential. In July 2020, a global group of public and private sector authors explained the value of GSD and why open access and use of GSD is critical to the multiple demands of sustainable agriculture (Gaffney et al., 2020). The authors described the potential of GSD utilization for: • improving crop productivity and sustainability; • conservation of biodiversity and crop wild relatives; • building capacity in the global scientific community; • ensuring a level playing field among scientists regardless of location or organization.
Since the Gaffney et al. (2020) article, the value of open access and utilization of GSD has been clearly demonstrated through its role in the rapid deployment of coronavirus diagnostic technologies (Kituyi, M., 2020), the development of effective COVID-19 vaccines, and the publication of the Moderna and Pfizer vaccine sequences in open access databases (Winter, 2021). Investment in the generation, sharing, and use of GSD available through open access has likewise allowed scientific discoveries in crop plants to move quickly, generating progress and value for the four species described in the review article: sorghum (Sorghum bicolor), cassava (Manihot esculenta), pearl millet (Pennisetum glaucum), and tef (Eragrostis tef). Vast differences exist between funding for the COVID response and that of agricultural research and development, even as hunger and malnutrition claim more lives than COVID. This letter is an update on how utilization of GSD is helping meet the multiple demands of food security, especially through creative collaborations. It provides a comparison of funding committed to the COVID response and investment in agricultural research and development (R&D) and requests that international bodies and individual countries work to maintain open access to and use of GSD so that it can be accessed and utilized by all scientists in all countries to enable agricultural advances.
Hunger, malnutrition, and related illnesses kill an estimated 2 million children under the age of five annually (Alston et al., 2021).
Global food security is "on a razor's edge of sufficiency" (Cassman and Grassini, 2020). After approximately 100 years of stable, or falling, commodity prices paid to farmers (Sumner, 2009; Zulauf, 2016), three price spikes have occurred since 2000 (Cassman and Grassini, 2020). In 2021, a fourth price spike was underway, with severe implications for regions in which food accounts for 40% or more of household spending (World Economic Forum, 2016). Science-based decision-making is critical to addressing the challenges of food security. The scientific community in the public and private sectors must combine forces to optimize a globally collaborative research environment. Urgency is needed from policy-makers and regulatory bodies to ensure an even playing field, enabling all scientists, farmers and actors in the food chain to have access to the latest technologies. While agricultural R&D will never receive funding at the urgency and level of the COVID response, access to and sharing of technology will allow the agricultural research community to "punch above its weight". Updates on recent developments and use of GSD in the four crops featured in the original publication offer examples of the cooperation, partnering, and capacity building needed for near-term agricultural productivity and food security.
Examples of genetic sequence data value in neglected and under-utilized crops
Sorghum. Recently published research in sorghum demonstrates how GSD continues to improve productivity in a crop with a high level of diversity (Fig. 1). Tao et al. (2021) analyzed and assembled 13 genomes of cultivated and wild relatives of sorghum and combined them with three additional, publicly available genomes. These integrated data were used to create a pan-genome of 44,079 gene families, with 222.6 Mb of new sequence identified, enabling whole-genome comparisons across the most diverse genetics ever assembled. Genes responsible for grain shatter, seed dormancy, grain size, and a host of biotic and abiotic stressors were identified. This work demonstrates the value and need for broad and inclusive sequencing of crop species and their wild relatives, and for providing access and utilization of this data. The GSD has been made publicly available (China National GeneBank database, 2021), offering plant breeders, biotechnologists, and eventually farmers, more options for improving sorghum productivity under rapidly changing growing conditions. Because much of the genome is conserved across species, Tao et al. have also provided guide posts for how to proceed with sequencing and analysis efforts in other crops. Muleta et al. (2022) provide a further example of the value of genetic diversity and GSD in sorghum. The emergence of an aggressive biotype of sugarcane aphid (SCA) (Melanaphis sacchari) as a global pest has required greater use of synthetic insecticides in U.S. sorghum production. In Haiti, sorghum production had become near impossible, with crop losses reaching 30-70%. Over 50 years of globally shared plant material and knowledge among breeders, molecular biologists, and agronomists culminated in the identification of alleles conferring SCA resistance to sorghum. Resistant varieties are now widely available, and sorghum production is nearing pre-SCA resistant levels in Haiti. Importantly, this work also represents a salvaging of valuable genetics and conservation of biodiversity via "evolutionary rescue" (Alexander et al., 2014), only made possible by shared use of germplasm and GSD.
Cassava. Perhaps no crop can better benefit from using GSD than the staple tropical root crop cassava. Breeding cassava to enhance quality and productivity traits is challenging due to the crop's high degree of heterozygosity and inbreeding depression. Cassava improvement has suffered from underinvestment, meaning that well-targeted investments can bring significant improvements in traits important to cassava farmers, processors, and consumers. Despite recent investments, the cassava research and development community remains relatively small, and a critical mass of intellectual capacity can only be reached if GSD and other resources are freely accessed by all. Recent and important advances have been made towards these goals. The first publicly available cassava reference genome assembly, released in 2009, has benefited from continuous improvement. Version 7 has been available since 2019, with Version 8 presently under assembly. At each stage, the reference genome has become a more powerful tool for researchers to access and query as they seek to discover genes and gene pathways responsible for traits such as disease resistance, storage root bulking, flowering and post-harvest physiology. This resource has remained available in the public domain, housed within Phytozome, the Plant Comparative Genomics portal of the US Department of Energy's Joint Genome Institute (https://phytozome.jgi.doe.gov/pz/portal.html).
GSD is only valuable if it can be used by researchers and breeders to develop enhanced varieties and to uncover new biological information about the crop. Cassava researchers have established open access resources to achieve this. These include the International Cassava Genetic Map Consortium, which has developed a reference map using more than 22,400 genetic markers (Bredeson et al., 2016) and placed these on the AM560 reference genome. A second example is Cassavabase. Cassavabase brings together 30 years of breeders' field data, allowing this wealth of information to be queried against GSD using publicly available digital tools (https://www.cassavabase.org/).
Cassava is highly heterozygous, meaning that significant differences in GSD exist for the same genes inherited from the two parents. Important traits, such as disease resistance, are often coded by only one of the parental copies (allele). In such cases it is necessary to assemble GSD for each parental copy (haplotype) separately to provide the resolution needed to identify a gene or genes responsible for a specific trait. Although technically challenging, recent advances in bioinformatics have allowed this to be achieved, including haplotype-resolved genomes for select varieties (Kuon et al. (2019) and Mansfield et al. (2021); https://www.biorxiv.org/content/10.1101/2021.06.25.450005v1). These publicly available resources were critical to ongoing investigation into gene(s) responsible for resistance to cassava mosaic disease and cassava brown streak disease, two virus diseases that threaten food and economic security for cassava farmers in sub-Saharan Africa. They are also critical for enhancing gene editing capacities for the crop. An example is genome editing to understand and control flowering (Fig. 2), where multiple genes controlling flowering in cassava can be modified to synchronize flower production, offering breeders potential to perform sexual crosses of elite varieties at frequencies not previously possible.
Pearl Millet. Pearl millet is an important crop in low-input farming systems in some of the hottest and driest agro-ecologies in the world. Harnessing GSD and other crop improvement tools can accelerate genetic gain and productivity in these drought-prone regions (Kumar et al., 2021). Genetic sequencing and knowledge development from diverse cultivars, landraces, and mapping populations of pearl millet breeding lines from 27 countries (Sehgal et al., 2015) have been brought together within the "Pearl Millet inbred Germplasm Association Panel", and provide valuable material for genome sequencing (Varshney et al., 2017). By accessing these publicly available resources (cegresources.icrisat.org), Kumar et al. (2021) identified gene groupings for plant height, flowering time, panicle length, and grain weight and quality useful for accelerating yield gain. This analysis will be useful for genomic-assisted breeding and crop development, an essential breeding strategy to improve production and enhance the adaptability of pearl millet in low-input farming systems.
Building further on these efforts, pearl millet inbred lines were recently sequenced as part of a collaboration between the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and Corteva Agriscience. The new sequencing data will be made publicly available to assist breeding programs and pearl millet growers everywhere. The references and the annotation currently in progress will be the foundation of collaboration, which also includes the sharing of CRISPR-Cas gene editing technology and expertise.
Tef. Tef is arguably one of the most neglected and under-utilized, semi-domesticated cereal species. It is, however, the most important crop in Ethiopia, with research steadily growing in recent years. The national tef improvement program at the Ethiopian Institute of Agricultural Research (EIAR), in collaboration with international partners, is developing high density linkage maps, an association panel, and genomic fingerprints for 49 recently released varieties. These research activities provide valuable, cross-referenced varietal genomic information for the genomics and phenotyping centers at Holeta and Debre-Zeit, Ethiopia, to generate basic information on the core tef collection. To establish the genetic and molecular basis underlying adaptive traits in tef, a panel of 382 tef accessions has been re-sequenced at Colorado State University, USA, with the help of international public and private sector partners. Thirty years of climate data have been integrated with the passport data of some of these accessions, and both the GSD and background information will be shared in a public database and publication, respectively, representing another example of how open access to and use of GSD creates value.
The International Tef Research Consortium (ITRC) was established in 2019 to create a collaborative platform for tef researchers. Through this initiative, high-quality genomic sequence data for the improved tef cultivar 'Tsedey' (DZ-Cr-37) have been generated through a collaboration between the EIAR and Corteva Agriscience. A high-quality reference genome sequence was completed utilizing the latest genomics technology and provides near-gapless contiguity. This GSD is available to all ITRC members and will soon be deposited in a public database. With the availability of this full reference GSD, genes controlling plant stature and herbicide tolerance have been identified in tef via comparative genomic approaches.
Applied in conjunction with genome editing tools, this GSD offers new possibilities for rapid productivity and quality gains greater than possible via traditional breeding alone. A recent example is the ongoing effort to produce semi-dwarf tef through gene editing of the "Green Revolution" genes to enhance resistance to lodging, thereby delivering a trait long sought by tef breeders.
Collaboration for generating GSD across tef accessions is also taking place between Bahir Dar University, Ethiopia, and the National Institute of Agricultural Botany (NIAB), UK (Matthew Milner, personal communication). This work is focused on understanding natural variation among tef accessions for nutritional traits such as zinc and protein content. Findings from this work will increase understanding of genotypic diversity in tef, have value in increasing tef productivity across different agroecological zones, and allow researchers to identify and improve traits in a manner similar to Tao et al. (2021).
Open access to GSD facilitates productive partnerships
In the four crops profiled above, we highlight investment and capacity building in neglected and underutilized species, and describe productive collaborations between the public and private sectors and between the global North and South. New ways of conserving biodiversity have been identified through use of GSD and via evolutionary rescue of plant genetic resources. New opportunities are being identified for understanding biotic and abiotic stress resistance (Massel et al., 2021), for improving nutrition, and for lowering the environmental impact of crop production (Bate et al., 2021; Eshed and Lippman, 2019). In each example, the value created from the investments of time, money, and human capital can only occur through access to and utilization of unregulated and non-monetized GSD. For this to continue, a globally harmonized regulatory and enabling environment is needed to ensure that all countries, regions, scientists, farmers, and consumers benefit from good science and evidence-based decision making.
Investment in R&D is critically important. The value of investment and open access to GSD has never been more evident than during the ongoing COVID-19 pandemic, in which GSD continues to be widely shared. As of June 2020, nearly $22 trillion of funding had been committed to the COVID response (Cornish, 2021), with over $50 billion devoted to vaccine R&D (Knowledgeportalia.org, 2021). Funding differences between the COVID response and agricultural R&D, however, are stark. Global public funding for agricultural R&D declined after the global financial crisis of 2008-09, "the first sustained drop in over 50 years" (Heisey and Fuglie, 2018), and in 2015 was estimated at $46.8 billion annually (Alston et al., 2021). The latest estimate of private sector agricultural R&D investment was $15.6 billion annually (Fuglie, 2016). While funding in the trillions of dollars for agricultural research is unlikely to materialize, similar levels of collaboration, cooperation, global capacity building, and urgency observed in response to COVID-19 are needed for agriculture and should be expected. Free access to and utilization of crop GSD is a critical component of such an effort. In his book "Hunger" (2015), Martín Caparrós documents the number of lives lost to hunger-related illness at 25,000 per day globally and asks the question: "How do we manage to live with ourselves knowing that these things are happening?" The scientific community continues to develop the tools needed to improve agricultural productivity, reduce hunger, and provide solutions for human health. For COVID, limited investment in public health services has prevented more effective control of the disease. For agriculture, delivery of impact is impaired due to long-term neglect of extension services and policy that limits the deployment of good science. A general misunderstanding and distrust of science is slowing delivery of results to those most in need for both agricultural R&D and the COVID response. Policy makers, and society, must now focus on creating an environment in which science thrives and is enabled to leverage and deliver the full potential of scientific discovery.
Fig. 1.Access to and use of genetic sequence data from highly diverse germplasm creates opportunities for conservation of biodiversity and innovation for biotic and abiotic stress management.
Fig. 2. Flowering in cassava plants, shown in this photo, is inconsistent, and male and female flowering is often not synchronized, thus preventing breeders from making crosses. Sharing of genetic sequence data has helped the global cassava research community identify control mechanisms which improve synchrony of flowering, giving breeders greater opportunities for yield and quality improvements. | 3,687.6 | 2022-03-07T00:00:00.000 | [
"Biology"
] |
Effect of nano-silica on Portland cement matrix Efeito da nanossílica na matriz de cimento Portland
Abstract
Introduction
Portland cement is widely used as a building material, with a production of more than 4.3 billion tons in 2014 [1], albeit with increasing environmental impacts, mainly through the emission of CO2 and the consumption of non-renewable raw materials. The use of nanoparticles is a current trend, which may play an important role in the efficient use of this binder. Comparing mixtures with and without nano-silica, many studies have demonstrated a gain in the compressive strength of mixtures formulated with silica nano-particles [2][3][4][5][6][7][8].
The improvement in the mechanical properties of these materials is mainly related to the packing effect and the pozzolanic reaction of nano-silica. Ghafari et al. [3] reported a reduction in porosity and a pore refinement for mixtures containing nano-silica, both determined by Mercury Intrusion Porosimetry (MIP). Further, Givi et al. [4] and Haruehansapong et al. [5] published similar results for concrete containing nano-silica or nano-titanium. These results are similar to those reported by Zhang and Li [6] for water absorption. Yu et al. [7] evaluated the pozzolanic reaction of nano-silica, considering the higher mass loss from hydrated products (CSH/CAH) and the consumption of calcium hydroxide Ca(OH)2. These hydration products were measured by thermogravimetry. Rong et al. [8] reported similar results, investigating the reduction of the calcium hydroxide peak measured by X-ray diffraction. The rheological behavior of suspensions was also modified by the incorporation of silica nano-particles, presenting a considerable reduction in workability and fluidity, according to Zapata et al. [9] and Berra et al. [10]. A consequence of the increase in apparent yield stress and viscosity [11,12] was the higher water demand of mixtures containing nano-silica, as demonstrated by Quercia et al. [13]. The aim of this research is to evaluate the effect of nano-silica on the Portland cement matrix.
Materials and experimental program
A Portland cement CPV-ARI Votorantim (CPV) was employed as binder, and the nano-silicas CEMBINDER 8 (nS 1), CEMBINDER 30 (nS 2) and CEMBINDER 50 (nS 3) from Akzo Nobel were also employed as raw materials. The chemical composition of the raw materials was measured on molten samples using a PANalytical Axios Advanced fluorescence spectrometer. The Specific Surface Area (SSA) was measured by gas adsorption (B.E.T.) using BELSORP MAX equipment. The real density was determined by liquid pycnometry. Table [1] presents the physical and chemical properties of the raw materials. The densities of the nano-silicas nS 1, nS 2 and nS 3 were calculated from the density and mass concentration of the suspensions: 1.4 g/cm3 and 50%, 1.10 g/cm3 and 30%, 1.05 g/cm3 and 10%, respectively, resulting in 2.33, 2.25 and 2.54 g/cm3. A polycarboxylic acid (PC) BASF ADVA 505 was employed as dispersant additive.
(See Figure [1] and Ref. [14].) The particle size distribution of the Portland cement was determined with a Malvern 2200 laser granulometer. The particle size distributions follow Eq. (1):

CPFT = 100 (D_P^q - D_S^q) / (D_L^q - D_S^q)   (1)

where: CPFT is the cumulative percent finer than (%); D_P is the particle diameter (μm); D_S is the diameter of the smallest particle (μm); D_L is the diameter of the largest particle (μm); and q is the distribution coefficient. The water to solids ratio (w/s) and the dispersant (PC) content were adjusted in order to obtain a rheological behavior compatible with molding by casting. Table [3] presents all studied formulations. The inter-particle separation (IPS) was calculated from these results and is reported in Table [3]. The volumetric concentration of the suspensions (Vs) was calculated from the water and solids contents; the volumetric surface area (VSA) was calculated as the product of the specific surface area (SSA) and the density of the compositions, following Funk and Dinger [15]. The initial porosity (P 0) was estimated by applying the linear packing model developed by Yu and Standish [16] and Yu et al. [17]. The suspensions of nano-silica and the dispersant were previously diluted with deionized water. The mixing of the paste was conducted in a laboratory mixer applying the following process: (i) the dry powder was added to the recipient and mixed at 60 rpm during 60 s; (ii) 2/3 of the suspension (water + dispersant + nano-silica) was added and mixed at 60 rpm during 120 s; (iii) 1/3 of the suspension (water + dispersant + nano-silica) was added and mixed at 60 rpm during 120 s. The rheogram was measured 30 seconds after mixing, using a concentric cylinders geometry. The tests were carried out on 10 g of paste and the rotational speed varied between 10 and 100 rpm. All tests were done at 23 °C. The Bingham model was applied to calculate the apparent yield stress and viscosity of the suspensions. Cylindrical samples (2:5 cm, diameter:height) were molded and manually compacted in order to avoid molding defects. Samples were kept at room temperature.
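As a concrete illustration of Eq. (1), the short sketch below evaluates the Dinger-Funk (Alfred) curve for a few candidate distribution coefficients q; the diameters D_S, D_L and the q values used are illustrative assumptions, not the actual mix-design parameters of this study.

```python
# Dinger-Funk (Alfred) packing model, Eq. (1):
#   CPFT = 100 * (D_P^q - D_S^q) / (D_L^q - D_S^q)
# Illustrative sketch only; d_s, d_l and q below are assumed values,
# not the mixture parameters reported in the paper.

def cpft(d_p, d_s, d_l, q):
    """Cumulative percent finer than (CPFT, %) for particle diameter d_p (um)."""
    return 100.0 * (d_p**q - d_s**q) / (d_l**q - d_s**q)

if __name__ == "__main__":
    d_s, d_l = 0.02, 50.0                  # assumed smallest/largest diameters (um)
    for q in (0.21, 0.30, 0.37):           # assumed distribution coefficients
        curve = [(d, round(cpft(d, d_s, d_l, q), 1)) for d in (0.1, 1.0, 10.0, 50.0)]
        print(f"q = {q}: {curve}")
```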
Results and discussions
Figure [3] shows the rheological properties of pastes formulated with Portland cement CPV and nano-silica. The suspension yield stress presents a direct relation with the content of nano-silica or, equivalently, with the volumetric surface area (VSA), as seen in Table [3]. The addition of nano-silica reduces the inter-particle separation (IPS), and consequently an increasing tendency in suspension yield stress and viscosity was observed.
These results are similar to those reported by Senff et al. [11] and Hou et al. [12] for mixtures containing silica nano-particles, considering the same rheological properties. Flatt and Bowen [18] modeled the yield stress of ceramic suspensions considering only van der Waals forces, obtaining a yield stress inversely proportional to the square of the inter-particle separation (1/IPS 2). Pileggi et al. [19] presented the concept of the particle crowding index (PCI), which relates the inter-particle separation (IPS) to the diameter of the particles. This index presents a direct relationship with the viscosity of ceramic suspensions.
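The argument above can be made concrete with a small numerical sketch. It uses the inter-particle separation expression commonly attributed to Funk and Dinger, IPS = (2/VSA)(1/Vs - 1/(1 - Pof)), and scales the relative yield stress as 1/IPS^2 following Flatt and Bowen; all numerical inputs (VSA, Vs, Pof) are assumed for illustration and are not the values reported in Table [3].

```python
# Hedged sketch: inter-particle separation (IPS) and the 1/IPS^2 yield-stress
# scaling discussed in the text. The IPS expression is the form commonly
# attributed to Funk and Dinger; the numeric inputs are assumptions, not the
# paper's Table [3] values.

def ips_um(vsa, vs, pof=0.25):
    """Inter-particle separation (um).
    vsa : volumetric surface area in m^2/cm^3 of solids (numerically equal to 1/um)
    vs  : solids volume fraction of the suspension
    pof : porosity of the fully packed particle bed (assumed 0.25 here)
    """
    return (2.0 / vsa) * (1.0 / vs - 1.0 / (1.0 - pof))

def relative_yield_stress(ips_ref, ips_new):
    """Yield stress relative to a reference paste, assuming tau_0 ~ 1/IPS^2."""
    return (ips_ref / ips_new) ** 2

if __name__ == "__main__":
    base = ips_um(vsa=6.0, vs=0.45)        # assumed plain-cement paste
    with_ns = ips_um(vsa=9.0, vs=0.45)     # assumed paste with nano-silica (higher VSA)
    print(f"IPS without nS: {base:.4f} um, with nS: {with_ns:.4f} um")
    print(f"relative yield stress increase: x{relative_yield_stress(base, with_ns):.2f}")
```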
Figure [4] shows the compressive strength of mixtures containing Portland cement CPV and nano-silica; the fitted surface was obtained by linear interpolation. The plotted effect of the water/solids ratio and the content of nano-silica shows that the optimum nano-silica content varies according to the water to solids ratio. Yazdanbakhsh and Grasley [20] suggested that the theoretical maximum achievable dispersion of nano-inclusions varies according to the water content in cement pastes. Isfahani et al. [21] presented results that confirm this hypothesis for mixtures formulated with water to binder ratios of 0.5, 0.55 and 0.65.
Published results by Mendes et al. [2] show the same value for the optimum content of nano-silica for two different mixtures, containing 10 and 20 wt.% of silica fume, both formulated with a water/powder ratio of 0.27. The limit of solubility of the nano-silica in the cementitious matrix also depends on the particle size of the nano-dispersions.
Figure [5] shows the pore size distribution of mixtures formulated with Portland cement CPV and nano-silica; the open porosity (P 0) presents a decreasing tendency as the water/powder ratio and the content of nano-silica decrease. The capillary pores (0.01-1 μm) and the pores of air-entrapped bubbles (10-1000 μm) are observable for all studied formulations. All mixtures presented a gapped pore size distribution. For the nanometric pores (< 100 nm or 0.1 μm), the mixture containing 3 wt.% of nano-silica shows the finest nanometric porosity, and the size distribution of nano-pores varied according to the content of nano-silica. A modification of the pore structure of the Portland cement matrix at the nanometric scale was thus achieved. This pore refinement for mixtures containing nano-silica indicates the combined effect of particle packing and hydration products (CSH/CAH/CH) on the microstructure, mainly for pores smaller than 10 nm or 0.01 μm.
Figure [6] shows the X-ray diffraction results of cement pastes containing nano-silica. For the mixture containing 3 wt.% of nano-silica, the alite phase (C 3 S) showed the minimum peak intensity at 2θ = 29.5° [14], owing to the hydration reaction [7,8]; this is confirmed by the increase in pores smaller than 10 nm. The calcium hydroxide (CH) peak at 2θ = 18.1° [14] also presented a considerable intensity when compared to the initial diffractogram of Portland cement CPV, which contains no portlandite [14], as a consequence of the nucleation and the pozzolanic reaction of the silica nano-particles. The calcium/silica ratio (Ca/Si) of these compositions varies from 2.0 for the mixture containing 1.7 wt.% of nano-silica to 1.4 for the composition containing 11 wt.% of silica nano-particles. This variation modifies the stoichiometry of the calcium silicate hydrates (C-S-H), as demonstrated by Hara and Inoue [22] for colloidal silica and calcium hydroxide suspensions, together with the consumption of calcium hydroxide by the pozzolanic reaction.

Figure 4: Effect of water/solids ratio and content of nano-silica on compressive strength.
Conclusion
The main effect of nano-silica on the rheological behavior of the Portland cement matrix is to reduce the inter-particle separation, increasing the apparent yield stress and viscosity of the suspensions, due to the high specific surface area of the nano-particles. As a consequence, an increase in the water demand and in the consumption of dispersant was needed.
Considering the compressive strength of the Portland cement matrix, the optimum content of nano-silica varies according to the water/solids ratio. For large amounts of nano-silica, the increase in water demand leads to a reduction in compressive strength.
The effects of nano-silica on the microstructure of the Portland cement matrix are an enhancement of the hydration reaction and a pore refinement, due to the pozzolanic reaction and the packing effect of the nano-particles, allowing the microstructure to be modified at the nanometric scale.
Acknowledgements
The authors acknowledge the Araucária Foundation and the Coordination for the Improvement of Higher Education Personnel | 2,193.8 | 2019-05-29T00:00:00.000 | [
"Materials Science"
] |
Quasinormal modes in the field of a dyon-like dilatonic black hole
Quasinormal modes of massless test scalar field in the background of gravitational field for a non-extremal dilatonic dyonic black hole are explored. The dyon-like black hole solution is considered in the gravitational $4d$ model involving two scalar fields and two 2-forms. It is governed by two 2-dimensional dilatonic coupling vectors $\vec{\lambda}_i$ obeying $\vec{\lambda}_i (\vec{\lambda}_1 + \vec{\lambda}_2)>0$, $i =1,2$. The first law of black hole thermodynamics is given and the Smarr relation is verified. Quasinormal modes for a massless scalar (test) field in the eikonal approximation are obtained and analysed. These modes depend upon a dimensionless parameter $a$ ($0<a \leq 2$) which is a function of $\vec{\lambda}_i$. For limiting strong ($a = +0$) and weak ($a = 2$) coupling cases, they coincide with the well-known results for the Schwarzschild and Reissner-Nordstr\"om solutions. It is shown that the Hod conjecture, connecting the damping rate and the Hawking temperature, is satisfied for $0<a \leq 1$ and all allowed values of parameters.
Introduction
The recent discovery/detection of gravitational waves [1][2][3] has strengthened a long-standing interest in quasinormal modes (QNMs) [4][5][6][7][8][9][10][11][12][13], predicted by Vishveshwara in 1970. The detected gravitational waves were emitted during the final (ringdown) stage of binary black hole mergers. The frequencies of these waves were governed by certain superpositions of decaying oscillations, i.e. QNMs. The careful analysis of these experiments may be rather important since it can shed some light on the nature of gravity in the strong field regime. From the mathematical point of view, the quasinormal mode (QNM) problem can be reduced to studying the solutions of a wave equation for a scalar function Φ(t, x) chosen in the following form, where Φ * = Φ * (x) obeys a Schrödinger-type equation defined on a certain domain of the real line R = (−∞, +∞), and where ε > 0 is some parameter, e.g. ε = 1; for reviews see [10][11][12][13]. For asymptotically flat black-hole solutions the functions Φ * (x) are defined on R. In this case x is chosen as a so-called tortoise coordinate (in the body of the paper denoted as R *), and (at least) for certain known spherically symmetric solutions (e.g. the Schwarzschild and Reissner-Nordström ones) the potential V (x) is a positively defined smooth function, having a sufficiently fast fall-off (to zero) in approaching either the horizon (x → −∞) or spatial infinity (x → +∞). Usually QNM frequencies ω are defined as complex numbers obeying Re ω > 0 and Im ω < 0, such that the wave functions (1) are exponentially damped in time (t → +∞), corresponding to asymptotically stable perturbations. The QNM frequencies appear for the solutions of equation (2) which behave as outgoing waves at spatial infinity, Φ * (x) ∼ e^{iωx/ε} for x → ∞ (Re ω > 0), and as ingoing ones at the horizon, Φ * (x) ∼ e^{−iωx/ε} for x → −∞, with exponential growth (in |x|) of |Φ * (x)| as |x| → ∞ (due to Im ω < 0).
For the calculation of QNMs [12,13] there exists a (most popular) method, introduced in Refs. [7][8][9], which may be called the analytical continuation method. The most transparent version of this method was recently proposed (and verified) in Ref. [14]. Here we consider for simplicity the case of a potential defined on the whole real axis R = (−∞, +∞) to avoid boundary problems. (For the more involved and subtle case when the Schrödinger operator and effective potential are defined on (0, +∞) and a proper boundary condition should be specified, see Ref. [15].) The prescription is as follows: one should start with the Schrödinger equation for a non-relativistic quantum particle (of mass 1/2) moving in the inverted potential −V (x), where Ψ = Ψ (x) is the wave function. For a potential under consideration the inverted potential −V (x) may have certain bound states described by discrete energy levels. The prescription of Ref. [14] tells us that QNM frequencies for the potential V (x) may be obtained from the bound states for the inverted potential −V (x) by formally putting ħ = iε in (2). Hence, due to this prescription we get the QNM frequencies, where n = 0, 1, . . . is called the (QNM) overtone number. It should be noted that the method suggested in Ref. [14] for the computation of quasinormal frequencies of spherically symmetric black holes relates them to bound state energies of anharmonic oscillators by using the analytic continuation in ħ. It was stated in Ref. [14] that the known WKB results are easily reproduced by this method and, moreover, "the perturbative WKB series of the quasinormal frequencies turn out to be Borel summable divergent series both for the Schwarzschild and for the Reissner-Nordström black holes".
Here we study the QNM spectrum in the eikonal approximation for a special dyon-like dilatonic black hole solution from Ref. [30].
The relation between the 2-forms and the color charges is given below, where τ 2 = vol[S 2 ] is the magnetic 2-form, which is the volume form on the 2-dimensional sphere, and τ 1 is an "electric" 2-form. We note that in the case of one scalar field ϕ and two coupling constants λ 1 , λ 2 the dyon-like ansatz was considered recently in Refs. [17,18,28,29]. For λ 1 = λ 2 = λ our solutions from Ref. [17] reduce to a trivial non-composite generalization of the dilatonic dyon black hole solutions in the model with one 2-form and one scalar field which was considered in Ref. [16]; see also Refs. [24][25][26][27] and references therein.
The solutions with one scalar field from Refs. [17,18] may be embedded into the solutions under examination by considering the case of collinear dilatonic coupling vectors, where e 2 = 1 and λ 1 + λ 2 ≠ 0. The paper is organised as follows: in section 2 we review the main properties of the black hole dyon solution from Ref. [30]. In section 3 we consider the physical parameters and particular cases of the dyonic black hole solutions. In section 4 we analytically derive the eikonal approximation for the frequencies of QNMs corresponding to a massless test scalar field in the background metric of our dyon-like black hole solution and study their features. In section 5 we consider two limiting cases, a = +0 and a = 2, corresponding to the Schwarzschild and Reissner-Nordström black hole solutions. In section 6 we test the validity of the Hod conjecture [40] for the solution under consideration when 0 < a ≤ 2. Finally, we summarize our conclusions in section 7.
Black hole dyon solutions
The action of the model, containing two scalar fields, two 2-forms and dilatonic coupling vectors, which was considered in Ref. [30], reads as follows.
Here and in what follows we put c = 1 (where c is the speed of light in vacuum). We consider a dyon-like black hole solution of the field equations corresponding to the action (7), which has the following form [30], where Q 1 and Q 2 are the (color) electric and magnetic charges, respectively, µ > 0 is the extremality parameter, dΩ 2 = dθ 2 + sin 2 θ dφ 2 is the canonical metric on the unit sphere S 2 (0 < θ < π, 0 < φ < 2π), τ = sin θ dθ ∧ dφ is the standard volume form on S 2 , and P > 0 obeys the relation given below. All the remaining parameters of the solution are defined as follows for i = 1, 2. Here the following additional restrictions on the dilatonic coupling vectors are imposed. Correspondingly, we note that this holds for λ 1 ≠ − λ 2 . Due to relations (18) and (19) the quantities Q s 2 are well-defined. Note that the restrictions (18) imply λ s ≠ 0, s = 1, 2, and (8).
Indeed, in this case we have the sum of two non-negative terms in (16): (λ 1 + λ 2 ) 2 > 0 and a second term which is non-negative due to the Cauchy-Schwarz inequality. Moreover, C = 0 if and only if the vectors λ 1 and λ 2 are collinear. Relation (20) implies that for non-collinear vectors λ 1 and λ 2 we get 0 < a < 2, while a = 2 for collinear ones. This solution may be verified just by a straightforward substitution into the equations of motion.
The calculation of scalar curvature for the metric ds 2 = g µν dx µ dx ν in (9) yields
Particular cases and physical parameters
Here we analyze certain cases and physical parameters corresponding to the solutions under consideration.
Non-collinear and collinear cases
Non-collinear case. For non-collinear vectors λ 1 and λ 2 (0 < a < 2) we obtain a divergent scalar curvature as R → +0, and hence we have a black hole with a horizon at R = 2µ and a singularity at R = +0. Collinear case. For collinear vectors λ 1 , λ 2 from (6) obeying λ 1 + λ 2 ≠ 0 we obtain ν i = 0, a = 2, and a metric with λ 1 λ 2 > 0. By changing the radial variable, R = r − P, we get a little extension of the solution from Ref. [17]. The metric in these variables coincides with the well-known Reissner-Nordström metric governed by two parameters: GM > 0 and Q 2 < 2(GM) 2 . We have two horizons in this case. The electric and magnetic charges are not independent but obey Eqs. (24). Note that, to be consistent with the literature, the net charge here is related to the charge of the Reissner-Nordström black hole as follows: Q 2 = 2Q 2 RN .
Gravitational mass and scalar charges
The definition of the ADM gravitational mass is obtained from Eq. (9) in the weak field regime by comparison with the standard asymptotic expansion. In turn, the scalar charge vector Q ϕ = (Q 1 ϕ , Q 2 ϕ ) is derived from (10) in the weak field regime using the following definition, where ν is given by Eq. (16). By combining relations (27) and (28) we obtain an identity that does not contain the vectors λ s . The identity (29) may be verified by using (14), (17) and one further relation. For further analysis it is convenient to introduce the following dimensionless parameters. We then obtain relation (32); the function f * (p, a) appearing there is monotonically increasing in p on (0, +∞) for any a ∈ (0, 2]. Due to this monotonicity and lim p→+∞ f * (p, a) = 4/a 2 , relation (32) defines a one-to-one correspondence between p ∈ (0, +∞) and q ∈ (0, 2√2/a) for any (fixed) a ∈ (0, 2]. Thus, the inverse map p(q) = p(q, a) is defined for any a ∈ (0, 2] as follows.
Black hole thermodynamics
In this subsection we consider black hole thermodynamics by calculating the Hawking temperature and entropy, checking the first law (of black hole thermodynamics) and testing the Smarr relation.
To this end, for simplicity, here we put ħ = c = k B = 1. The Bekenstein-Hawking (area) entropy S = A/(4G), associated with the black hole solution (12) and its horizon at R = 2µ, where A is the horizon area, is given by (36), while the related Hawking temperature is given by (37). It may be verified that relations (27), (36) and (37) imply the first law of black hole thermodynamics dM = T dS + ΦdQ (38) as well as the Smarr formula (39). Relations (38), (39) may be presented in the following form, i = 1, 2. In the derivation of relations (41), (42) the following identity is used. Let us clarify the physical sense of the potentials (43). The first relation in (11), for F (1) = dA (1) , has a special solution for the 1-form, and thus we get that Φ 1 coincides, up to a factor 1/(2G), with the value of the zero component of the first Abelian gauge field A (1) , or electric potential (in the chosen gauge), for the field of the electric charge at the horizon. Now let us consider the magnetic term in (11). The calculation of the Hodge dual gives us * F (2) and hence (see (10)) the S-dual 2-form, for which we can choose a corresponding 1-form. Hence Φ 2 coincides, up to a factor 1/(2G), with the value of the zero component of the dual Abelian gauge field Ã (2) , or dual electric potential (in the chosen gauge), at the horizon, which corresponds to the field of the magnetic charge modulated by the scalar fields.
Quasinormal modes
In this section we derive quasinormal modes (in the eikonal approximation) for our static and spherically symmetric solution with the metric given (initially) in the following general form, where A(u), B(u), C(u) > 0 and dΩ 2 = dθ 2 + sin 2 θ dφ 2 . Note that in this section and below we use Planck units, i.e. we put ħ = G = c = 1.
We consider a test massless scalar field defined in the background given by the metric (54). The equation of motion is the covariant Klein-Fock-Gordon equation, |g|^{-1/2} ∂_µ(|g|^{1/2} g^{µν} ∂_ν Ψ) = 0, where µ, ν = 0, 1, 2, 3. In order to solve this equation we separate the variables in the function Ψ, where Y lm are the spherical harmonics. Equation (55), after using (56), yields the equation describing the radial function Ψ * (u), which has a Schrödinger-like form with γ ′ = dγ/du, where l is the multipole quantum number, l = 0, 1, . . . . Taking into account the above expressions one can examine the dyon-like black hole solution, which has the following form, where f (R) and C(R), according to Eq. (10), can be written in terms of the moduli function H(R) = 1 + P/R, with µ, P > 0 and 0 < a ≤ 2 as shown earlier. After using the "tortoise" coordinate transformation the metric takes the corresponding form. For the choice of the tortoise coordinate as a radial one (u = R *) we have A = B = f. Thus, the Klein-Fock-Gordon equation becomes a Schrödinger-type equation, where ω is the (cyclic) frequency of the quasinormal mode and V = V (R) = V (R(R *)) is the effective potential; its eikonal part is considered below. Here and below we denote F ′ = dF/dR * = f dF/dR. In what follows we consider the so-called eikonal approximation, when l ≫ 1.
The maximum of the eikonal part of the effective potential is found from the condition of a vanishing first derivative or, equivalently, from a quadratic equation, which yields the corresponding radius R 0 . The inequality (72), which is valid for 0 < a ≤ 2, is a trivial one.
It may be readily verified, by using the quadratic equation for Z = R − 2µ and D > 0, that the root R 0 lies outside the horizon for all 0 < a ≤ 2. Here R 0,− is the other root of the quadratic equation (70), which corresponds to a "location" under the horizon and is irrelevant for our consideration. The maximum of the eikonal part of the effective potential then follows. In Fig. 1 we plot the reduced eikonal part of the effective potential V /(l(l + 1)) (l ≠ 0) as a function of the radial coordinate R (left panel) and of the tortoise coordinate R * (right panel). As can be seen from the examples presented in the figure for fixed values of P and µ, the maximum of the effective potential is largest for the a = 0 case and smallest for the a = 2 case. The case a = 1 is in the middle. At large distances the effective potential tends to zero, as expected.
The second derivative with respect to the tortoise coordinate is given by an expression in which D is defined in (72). The square of the cyclic frequency in the eikonal approximation then reads as follows [12,13], where l ≫ 1 and l ≫ n. Here n = 0, 1, . . . is the overtone number. By choosing an appropriate sign for ω we get the asymptotic relations (as l → +∞) for the real and imaginary parts of the complex ω in the eikonal approximation, where H 0 = 1 + P/R 0 , F 0 = 1 − 2µ/R 0 and R 0 , D are given by (71), (72), respectively.
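Since the closed-form expressions (71)-(79) are not reproduced above, the following sketch estimates the eikonal frequencies numerically. It assumes metric functions f(R) = 1 − 2µ/R and H(R) = 1 + P/R with eikonal potential V = l(l+1) f/(H^{2a} R^2), a form chosen so that a → +0 and a = 2 reproduce the quoted Schwarzschild and Reissner-Nordström photon-sphere limits, and it applies the standard first-order (eikonal) WKB relation ω ≈ √V0 − i(n + 1/2)√(−V0''/(2V0)), with derivatives taken with respect to the tortoise coordinate; this is a hedged sketch, not the paper's exact formulas.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged numerical sketch of the eikonal QNM frequencies.
# Assumptions (not copied from the paper's elided formulas):
#   f(R) = 1 - 2*mu/R,  H(R) = 1 + P/R,
#   eikonal potential V(R) = l*(l+1) * f(R) / (H(R)**(2*a) * R**2),
# chosen so that a -> +0 and a = 2 reproduce the Schwarzschild and
# Reissner-Nordstrom photon-sphere results quoted in the text.

def V_eik(R, mu, P, a, l):
    f = 1.0 - 2.0 * mu / R
    H = 1.0 + P / R
    return l * (l + 1) * f / (H ** (2 * a) * R ** 2)

def eikonal_qnm(mu, P, a, l=100, n=0):
    # locate the potential maximum R0 outside the horizon R = 2*mu
    res = minimize_scalar(lambda R: -V_eik(R, mu, P, a, l),
                          bounds=(2.0 * mu * 1.0001, 50.0 * mu), method="bounded")
    R0 = res.x
    V0 = V_eik(R0, mu, P, a, l)
    # second derivative with respect to the tortoise coordinate R*:
    # d/dR* = f(R) d/dR, so V''(R*) = f * d/dR ( f * dV/dR )
    dR = 1e-4 * mu
    f = lambda R: 1.0 - 2.0 * mu / R
    dVdR = lambda R: (V_eik(R + dR, mu, P, a, l) - V_eik(R - dR, mu, P, a, l)) / (2 * dR)
    g = lambda R: f(R) * dVdR(R)
    Vpp = f(R0) * (g(R0 + dR) - g(R0 - dR)) / (2 * dR)
    re = np.sqrt(V0)                                    # Re(omega), eikonal
    im = -(n + 0.5) * np.sqrt(-Vpp / (2.0 * V0))        # Im(omega), eikonal
    return R0, re, im

if __name__ == "__main__":
    mu, P = 0.5, 0.3                       # assumed illustrative parameters
    for a in (1e-6, 1.0, 2.0):
        R0, re, im = eikonal_qnm(mu, P, a)
        print(f"a = {a:g}: R0 = {R0:.4f}, Re(omega) ~ {re:.4f}, Im(omega) ~ {im:.4f}")
```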
In Fig. 2 we plot the real part of the QNM frequency, Re(ω), as a function of the parameter P for different values of a and µ = 1/2. In the right panel we show a three-dimensional plot of Re(ω) versus P and a. One can notice that in the limiting case a = 0 we recover a constant Re(ω), identical to the case of the Schwarzschild black hole.
In Fig. 3 we plot the imaginary part of the QNM frequency with negative sign, -Im(ω), as a function of the parameter P for different values of a and µ = 1/2, in analogy to Fig. 2. At first sight Figs. 2 and 3 seem similar. However, according to Eqs. (78) and (79) this is not the case.
Remark. It was shown in Ref. [41] that parameters of the unstable circular null geodesics around stationary spherically symmetric and asymptotically flat black holes are in correspondence with the eikonal part of quasinormal modes of these black holes. See also [42,43] and references therein. But as it was pointed out in Ref. [44] this correspondence is valid if: (a) perturbations are described by a "good" effective potential, ( b) "one is limited by perturbations of test fields only, and not of the gravitational field itself or other fields, which are non-minimally coupled to gravity." Here we do not consider this correspondence for our solution, postponing this to future publication.
Limiting cases corresponding to the Schwarzschild and Reissner-Nordström black holes
In this section we consider two limiting cases, a = +0 and a = 2, corresponding to the Schwarzschild and Reissner-Nordström metrics, respectively. a) Let us first consider the case a = +0. This limit may be obtained in a strong coupling regime, where e 1 2 = e 2 2 = 1. In this case the relations (78) and (79) for the QNMs in the eikonal approximation read as follows, where r 0 = R 0 = 3M corresponds to the position where the black-hole effective potential attains its maximum. Note that r 0 = 3M is the radius of the photon sphere for the Schwarzschild black hole. These results were obtained in Ref. [7] and our outcomes are consistent with them. b) Now let us consider the case a = 2. As was mentioned above, this takes place for collinear vectors λ 1 , λ 2 . One can also obtain the limit a = 2 in the weak coupling regime when the dilatonic coupling vectors obey (80). In this case the eikonal QNMs (see (78) and (79)) read as follows, where r 0 = 3M/2 + (1/2)√(9M 2 − 4Q 2 ) corresponds to the position of the unstable circular photon orbit in the Reissner-Nordström spacetime. These results were obtained in Ref. [45] (for n = 0) and our outcomes are compatible with them when the relation (in our notation) Q 2 = 2Q 2 RN is applied.
Hod conjecture
Here we test the conjecture formulated by Hod [40] on the existence of quasinormal modes obeying the inequality |Im ω| ≤ π T H , where T H is the Hawking temperature.
Recently the Hod conjecture has been tested in theories with higher curvature corrections such as the Einstein-Dilaton-Gauss-Bonnet and Einstein-Weyl for the Dirac field [46]. It has been shown that in both theories the Dirac field obeys the Hod conjecture for the whole range of black-hole parameters [46].
Here we test this conjecture by using the eikonal relation (79) for Im(ω) and the relation (37) for the Hawking temperature. For our purpose it is sufficient to check the validity of the inequality y(p, a) < 1 for all p = P/µ > 0, where y is the dimensionless ratio defined in (88). In (88) we use the limiting "eikonal value" given by the first term in (79) for the lowest overtone number n = 0.
In Table 1 we present the results of the numerical testing of the Hod bound, using the obtained relations for the eikonal QNMs in the ground state n = 0. It turned out that the Hod bound is valid (in the eikonal regime) in the range 0 < a ≤ 1. There are maximum values y max = y max (a) and limiting values y lim = y lim (a) of the function y(p, a) for different values of the parameter a in the considered range. It may be verified that this holds for 0 < a < 1 and that y lim (1) = 0. The relation for y lim (a) follows directly for 0 < a < 1.
We denote the value of p corresponding to y max (a) as p 0 = p 0 (a). For increasing a, the values of p 0 (a) and y max (a) increase while y lim (a) decreases. We obtain y max (1) ≈ 0.847 and y lim (+0) = 0. For decreasing a, both y max (a) and y lim (a) approach a finite value, corresponding to the Schwarzschild case 4/(3√3) ≈ 0.7698, when a → 0. In Fig. 4 we illustrate y = y(p, a) as a function of p. In the left panel we show a two-dimensional plot for a = 0.2, 0.4, 0.6, 0.8, 1.0, and in the right panel we show a three-dimensional plot for the range 0 < a ≤ 1, where the Hod conjecture holds. Thus, we are led to the following proposition.
Proposition. The dimensionless parameter y = y(p, a) from (88) obeys the inequality: y(p, a) < 1 for all p > 0 and a ∈ (0, 1]. For 0 < a < 1 this proposition is proved analytically in Appendix. For a = 1 it is justified by our numerical bound y < y max (1) ≈ 0.847.
Here the limit p = +∞ corresponds to extremal (black hole) case which is not considered here.
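The numerical test of the bound can be reproduced along the following lines. In addition to the eikonal assumptions used in the previous sketch, the snippet below assumes the Hawking temperature T_H = (8πµ)^{-1} (1 + P/(2µ))^{-a}, which matches the Schwarzschild and Reissner-Nordström limits quoted in the text, and scans the ratio y = |Im ω|/(π T_H) over p = P/µ for the n = 0 eikonal mode; under these assumptions the a → +0 value reproduces 4/(3√3) ≈ 0.7698 quoted above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged sketch of the Hod-bound test y(p, a) = |Im(omega)| / (pi * T_H) < 1.
# Assumptions (stand-ins for the paper's elided Eqs. (37), (79), (88)):
#   f(R) = 1 - 2*mu/R,  H(R) = 1 + P/R,
#   eikonal potential V = l(l+1) f / (H^(2a) R^2),
#   T_H = 1/(8*pi*mu) * (1 + P/(2*mu))**(-a)   (matches the a=0 and a=2 limits),
#   |Im(omega)| taken as the n = 0 eikonal value 0.5*sqrt(-V0''/(2*V0)).

def im_omega_eikonal(mu, P, a, l=200):
    V = lambda R: l * (l + 1) * (1 - 2 * mu / R) / ((1 + P / R) ** (2 * a) * R ** 2)
    f = lambda R: 1 - 2 * mu / R
    res = minimize_scalar(lambda R: -V(R), bounds=(2.0001 * mu, 100 * mu), method="bounded")
    R0, dR = res.x, 1e-4 * mu
    dVdR = lambda R: (V(R + dR) - V(R - dR)) / (2 * dR)
    g = lambda R: f(R) * dVdR(R)
    Vpp = f(R0) * (g(R0 + dR) - g(R0 - dR)) / (2 * dR)   # d^2V/dR*^2 at the peak
    return 0.5 * np.sqrt(-Vpp / (2 * V(R0)))

def y(p, a, mu=0.5):
    P = p * mu
    T_H = 1.0 / (8 * np.pi * mu) * (1 + P / (2 * mu)) ** (-a)
    return im_omega_eikonal(mu, P, a) / (np.pi * T_H)

if __name__ == "__main__":
    for a in (0.2, 0.6, 1.0, 1.5, 2.0):
        ys = [y(p, a) for p in np.linspace(0.01, 50, 200)]
        print(f"a = {a}: max y over scanned p = {max(ys):.3f}")
```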
Remark. Recently, in Ref. [47] some example of the violation of the Hod conjecture has been found for certain (scalar gravitational) perturbations around D = 5 Gauss-Bonnet-de Sitter black hole solution.
In Table 2 we present some critical values p crit , which are obtained through the condition y(p crit ) = 1 (with y calculated for the ground state n=0) and q crit corresponding to p crit according to Eq. (94) for various values of the model parameter a obeying 1 < a ≤ 2.
Here we use the following relation (see Eq. (32)), where q ext corresponds to the extremal case (here G = 1). In Fig. 5 we illustrate y = y(p, a) as a function of p. In the left panel we show a two-dimensional plot for a = 1.2, 1.3, 1.4, 1.6, 2.0, and in the right panel we show a three-dimensional plot for the range 1 < a ≤ 2. In this case the Hod inequality (88) holds in the range p ∈ (0, p crit (a)), while for p ∈ (p crit (a), +∞) it is violated. Remark. In Ref. [48], the eikonal QNM frequencies for a charged scalar field in the space-time of a charged Reissner-Nordström black hole were obtained analytically in the regime l 2 ≥ Qq * ≥ l, where Q is the electric charge of the black hole and q * is the electric charge of the field. In this regime the obtained fundamental frequencies were shown to saturate the Hod bound. It should be noted that this result cannot be applied to our analysis for a = 2, since we deal here with the q * = 0 case.
Conclusions
We have examined a non-extremal black hole dyon-like solution in a 4-dimensional gravitational model with two scalar fields and two Abelian vector fields proposed in Ref.
[30]. The model contains two dilatonic coupling vectors λ s ≠ 0, s = 1, 2, obeying λ 1 ≠ − λ 2 and the additional relations (18). We have also presented some physical parameters of the solutions: the gravitational mass M, the scalar charges Q i ϕ , the Hawking temperature, and the black hole area entropy. In addition, we considered the first law of black hole thermodynamics and checked the validity of the Smarr relation for our model.
In fact this is a special solution with dependent electric and magnetic charges, see (17). In the case of non-collinear vectors λ 1 , λ 2 the metric of the solution describes a black hole with one (external horizon) and singularity hidden by it. For collinear vectors λ 1 , λ 2 the metric coincides with the Reissner-Nordström metric possessing two horizons and hidden singularity.
We have studied the solutions of the massless (covariant) Klein-Fock-Gordon equation in the background of our static and spherically symmetric metric by using the method of separation of variables. The Klein-Fock-Gordon equation is simplified in the tortoise coordinate, leading to a radial equation governed by an effective potential. This potential contains the parameters of the solution, P > 0 and µ > 0, and the dimensionless parameter a ∈ (0, 2] depending upon the coupling vectors λ s , which are the initial parameters of the model. The physical quantities, such as the mass, the color charges and the scalar charges, contain some of these parameters. Here we mainly focused on the eikonal part of the effective potential and calculated the value of the radial coordinate (radius) R 0 corresponding to the maximum of this part of the effective potential. Knowing the maximum of the eikonal part of the effective potential and the corresponding radius, we have calculated the cyclic frequencies of the quasinormal modes in the eikonal approximation. We have also considered two limiting cases, reducing to the Schwarzschild and Reissner-Nordström solutions, when the parameter a of the solution takes the two values a = +0 and a = 2, respectively. Thus, we have made sure that our outcomes are consistent with previous results in the literature.
We have also tested the validity of the Hod conjecture for our solution by using the QNM frequencies in the eikonal approximation with the lowest value of the overtone number, n = 0. It turned out that the Hod assumption holds in the range 0 < a ≤ 1. The conjecture is valid in this range since it is supported by examples of states with large enough values of the angular number l. For 1 < a ≤ 2 we have found that the Hod bound is satisfied for n = 0, small enough values of the charge Q (Q/M < q crit (a)) and big enough values of l (l ≫ 1).
It would be interesting to explore in detail QNM frequencies in the vicinity of a = 0 and a = 2, by using the treatment of Ref. [49], e.g. extending the results for the Schwarzschild and Reissner-Nordström solutions by using expansion in a small parameter (a or a − 2). Another issue of interest is the numerical calculation of QNM frequencies by using higher-order WKB formula for certain lower levels (labelled by l and n), see Ref. [50], e.g. verifying the Hod conjecture for 1 < a < 2, calculating grey-body factors etc. All these issues may be addressed in our future studies. | 6,368.6 | 2021-03-19T00:00:00.000 | [
"Physics"
] |
Content-specific broadcast cellular networks based on user demand prediction: A revenue perspective
The Long Term Evolution (LTE) broadcast is a promising solution to cope with exponentially increasing user traffic by broadcasting common user requests over the same frequency channels. In this paper, we propose a novel network framework provisioning broadcast and unicast services simultaneously. For each file served to users, a cellular base station determines whether to broadcast or unicast the file based on user demand prediction, examining the file's content-specific characteristics: file size, delay tolerance, and price sensitivity. From a network operator's revenue-maximization perspective, while not inflicting any user payoff degradation, we jointly optimize resource allocation, pricing, and file scheduling. In accordance with the state-of-the-art LTE specifications, the proposed network demonstrates up to a 32% increase in revenue for a single cell and more than a 7-fold increase for a 7 cell coordinated LTE broadcast network, compared to the conventional unicast cellular networks.
I. INTRODUCTION
Explosive user traffic increase in spite of scarce wireless frequency-time resources is one of the most challenging issues for future cellular system design [1]. LTE broadcast, also known as evolved Multimedia Multicast Broadcast Service (eMBMS) in the Third Generation Partnership Project (3GPP) standards [2], is one promising way to resolve the problem by broadcasting common requests among users so that it can save frequency-time resources [3]. The common user requests can be easily found in, for example, popular multimedia content or software updates in smart devices. By harnessing these overlapping requests of users, LTE broadcast enhances the total resource amount per cell. This plays a complementary role to the prominent small cell deployment approach, which provides more resource amount per user by means of reducing cell sizes [4].
To implement this technique in practice, it is important to validate the existence of a sufficiently large number of common requests. According to the investigation in [5], discovering a meaningful amount of common requests is viable even in YouTube, despite the huge number of video files it provides. That is because most users request popular files; for instance, 80% of user traffic may come from the top 10 popular files. On the basis of this, AT&T and Verizon Wireless are planning to launch LTE broadcast in early 2014 to broadcast sports events to their subscribers [6].
The number of available common requests and the resultant amount of resources saved in cellular networks are investigated in [7], but that work focuses on the broadcast (BC) service while neglecting the effect of the incumbent unicast (UC) service. Joint optimization of the resource allocations to BC and UC is covered in [8], [9] from the perspectives of average throughput and spectral efficiency. The authors, however, restrict their scenarios to streaming multimedia services where data are packetized, which cannot capture the content of the data or the corresponding user demand for the files.
Building on the preceding works, we propose a BC network framework that is specifically aware of content and able to transmit generic files via either the BC or the UC service. The selection of the service depends on the following content characteristics: 1) file size, 2) delay tolerance, and 3) price discount on BC compared to UC. These characteristics are able to represent a content-specified file in practice. For easier understanding, let us consider a movie file as an example. It is likely to be large in file size, delay tolerant (if the initial playback buffer is saturated), and sensitive to the per-bit price of BC under usage-based pricing [10] owing to its large file size. An update file of a user's favorite application in smart devices is a different example, being likely to be small in file size, delay sensitive, and less price sensitive.
Furthermore, this study devises a policy in which a base station (BS) solely carries out the BC/UC service selection based on user demand prediction. Under this policy, we maximize the network operator's revenue without user payoff degradation by jointly optimizing the BC resource allocation, file scheduling, and pricing. To be more specific, the following summarizes the novelty of the proposed network framework.
• BC/UC selection policy: a novel BC/UC selection policy is proposed in which the BS solely assigns one of the services to each user by comparing the user's expected payoffs under BC and UC, without degrading the user payoff.
• BC resource allocation: the optimal BC frequency allocation amount is derived in a closed form, showing that the allocation increases linearly with the number of users in a cell and is inversely proportional to the UC price.
• BC pricing: the optimal BC price is derived in a closed form, proving that the price is set proportionally to the number of users until the BC frequency allocation uses up the entire resources.
• BC file scheduling: the optimal BC file order is derived in an operation-applicable form, as well as a closed form for a suboptimal rule suggesting that smaller and/or more delay-tolerant files should be prioritized for BC.
As a consequence, we are able not only to estimate the revenue in a closed form, but also to verify that the revenue from the proposed network keeps increasing with the number of users, unlike the conventional UC-only network, where the revenue saturates after the entire frequency resources are exhausted. Considering the 3GPP Release 11 standards, we foresee up to a 32% increase in revenue for a single-cell LTE broadcast scenario and more than a 7-fold increase for a multi-cell scenario.
II. SYSTEM MODEL
A single cellular BS simultaneously supports downlink UC and BC services with frequency bandwidth W, where BC files are slotted in a single queue. The BS serves N mobile users who are uniformly distributed over the cell region. Let the subscript k indicate the k-th user for k ∈ {1, 2, • • • , N }, and define φ k as the locations of the users. User locations are assumed to be fixed during T time slots, but change at intervals of T independently of their previous locations. Let the subscripts u and b represent UC and BC hereafter, and let P u and P b respectively denote the UC and BC usage prices per bit. In order to promote BC use, the network offers a price discount on BC so that it can compensate for the longer delay of BC.
A. User Request Pattern
Each user independently requests a single file at the same moment, once per interval of T time slots. Let the subscript i represent the i-th most popular file for i ∈ {1, 2, • • • , M }, where M denotes the number of all possible requests in a given region. Assume the user request pattern follows Zipf's law (a truncated discrete power law), as in YouTube traffic [5]. It implies that the probability of requesting file i is p i = i^{-γ}/H, where H = Σ_{j=1}^{M} j^{-γ}, for γ > 0. Note that a larger γ indicates that user requests are more concentrated around a set of popular files.
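For a quick feel of how concentrated such demand is, the snippet below computes the Zipf probabilities p_i = i^{-γ}/H and the fraction of requests captured by the most popular files; M = 2,000 and γ = 1 follow the simulation settings of Section IV, while the top-10 cut is only an illustration.

```python
# Zipf's-law request probabilities p_i = i^(-gamma) / H, with H = sum_j j^(-gamma).
# M = 2000 and gamma = 1 follow the simulation settings in Section IV;
# the "top-10" cut below is only an illustration.

def zipf_probs(M, gamma):
    weights = [i ** (-gamma) for i in range(1, M + 1)]
    H = sum(weights)
    return [w / H for w in weights]

if __name__ == "__main__":
    p = zipf_probs(M=2000, gamma=1.0)
    top10 = sum(p[:10])
    print(f"top-10 files capture {100 * top10:.1f}% of requests")
```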
B. Network Operation
The following example sequentially describes the BS's operation to serve a typical user k requesting file i.
1) Common request examination: by inspecting user requests, the BS becomes aware of file i's size f i as well as the number of file i requests n i . By inspecting f i , n i , and θ ik , the BS allocates the BC frequency amount W b and sets the BC price P b , as well as optimizing the BC file scheduling in a revenue-maximizing order. 4) BC/UC selection: meanwhile in 3), the BS assigns either BC or UC to user k in order to maximize revenue without inflicting a payoff loss on the user. Note that the pricing scheme we consider is similar to time-dependent pricing [10] with respect to its traffic-flattening effect achieved by adjusting P b over time. The traffic targeted for offloading by the pricing is, however, novel, since the conventional scheme aims at the entire user traffic whereas the proposed one targets the content-specific traffic captured by n i .
C. Resource Allocation
The BS allocates the amount W b of BC frequency for handling all BC-assigned requests. In compliance with 3GPP Release 11 [2], the earmarked amount cannot be reallocated to UC requests during T, as Fig. 1 visualizes. For each UC request, the BS allocates a normalized unit frequency resource, to be addressed with a realistic unit in Section IV.
D. User Payoff
Let U ik denote the payoff of user k when downloading file i via UC. Consider a payoff with the following characteristics: logarithmically increasing with f i ; logarithmically decreasing with the downloading completion delay after exceeding θ ik [11]; and linearly decreasing with cost under usage-based pricing [10]. Define r u k as the spectral efficiency when user k is served by UC. Consider delay-sensitive UC users such that UC downloading completion delays always make them experience QoE-degrading delays, i.e. f i /r u k > (θ ik + 1). Additionally, we neglect any queueing delays on UC. The payoff U ik can then be represented as follows.
Note that U ik > 0 as we are only interested in the users willing to pay for at least UC service.
In a similar manner, let B ik indicate the payoff of user k when downloading file i via BC. Let r b k denote the BC spectral efficiency of user k. We further define s i as the total size of the files broadcast until the BC downloading of file i completes. This captures the effect of BC file scheduling. The payoff B ik can be represented as below.
To maximize revenue while guaranteeing at least UC payoff amount, BS compares U ik and B ik , and assigns either UC or BC service, to be further elaborated in Section III-A.
E. Wireless Channel
We consider distance attenuation from different user locations φ k , and adaptive modulation and coding (AMC), which changes the modulation and coding scheme (MCS) depending on the wireless channel quality [12]. While UC can adaptively adjust the MCS based on its serving user's channel quality, the MCS for BC must target the worst-channel-quality user because BC has to apply an identical MCS to all its users. The BC average spectral efficiency is therefore not greater than the UC's.
To be more specific, as Fig. 2 illustrates, we consider a cell region A divided into A h and A l . The BS can provide a high spectral efficiency r h to A h , but a low spectral efficiency r l to A l , with r l ≤ r h . Let |A| denote the area of a region A. The probability that user k is located within A h is |A h |/|A| [13]. Define r u as the UC average spectral efficiency of user k, represented as in (4). Similarly, the average BC spectral efficiency r b is given as in (5), where N b denotes the number of BC users. Note that (5) holds because N b is an increasing function of N.
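A minimal sketch of the two averages just described: with users uniform over A, the UC efficiency is the area-weighted mean of r_h and r_l, while the BC efficiency is pulled toward r_l as the number of BC users grows; the explicit expectation used for the BC case below is an illustrative model, not the paper's Eq. (5).

```python
# Sketch of the average spectral efficiencies described in Section II-E.
# UC: area-weighted mean over A_h and A_l (users uniform over the cell).
# BC: the MCS targets the worst-channel user, so the average efficiency tends
#     to r_l as the number of BC users N_b grows; the expectation below is an
#     illustrative model, not the paper's Eq. (5).

def r_unicast(r_h, r_l, frac_h):
    return frac_h * r_h + (1.0 - frac_h) * r_l

def r_broadcast(r_h, r_l, frac_h, n_b):
    # all N_b users fall in A_h with probability frac_h**n_b; otherwise r_l applies
    return r_l + (r_h - r_l) * frac_h ** n_b

if __name__ == "__main__":
    r_h, r_l = 2.4, 2.4 * 0.55              # Section IV: r_l is 45% degraded from r_h
    frac_h = 1.0 / 10.0                     # |A_l| = 9 |A_h|  ->  |A_h| / |A| = 0.1
    print(f"UC average efficiency : {r_unicast(r_h, r_l, frac_h):.3f} bps/Hz")
    for n_b in (1, 3, 10):
        print(f"BC average efficiency (N_b={n_b:2d}): {r_broadcast(r_h, r_l, frac_h, n_b):.3f} bps/Hz")
```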
III. REVENUE MAXIMIZING BC NETWORK MANAGEMENT
In order to maximize revenue, we optimize BC frequency bandwidth W b , price P b , and file scheduling.For more brevity, assume sufficiently large N such that BC average spectral efficiency is approximated as r l as in (5).
A. BC/UC Selection Policy and Problem Formulation
We first propose a BC/UC selection policy guaranteeing an allowable user payoff, and then formulate the average revenue maximization problem under the policy. Assume that users expect to be served by UC by default, and hence the BS should guarantee at least the amount of the UC payoff for every service selection. For user k, the revenue-maximizing service selection policy is described for the following two user payoff cases: 1) If B ik ≥ U ik , the BS first assigns UC as much as possible until the UC resource allocation reaches (W − W b )T, because P u ≥ P b . After using up the entire UC resources, the BS then assigns BC; 2) If B ik < U ik , the BS resorts to assigning UC in order to avoid a payoff loss. Note that this policy not only maximizes revenue but also enhances (albeit does not maximize) the user payoff.
For simplicity and without loss of generality, assume the required resource amount for the UC user demand exceeds the entire UC resources, (W − W b )T. As there is no more available UC resource, P u is set to its maximum value, since there is no motivation for a price discount on UC. As a result, the revenue from UC is fixed at P u (W − W b )T. By contrast, the revenue from BC can still be increased if B ik ≥ U ik holds. As a consequence, the average revenue in a cell region A is represented as follows.
The left and right halves of L 0 respectively indicate the average revenues from BC and UC, and 1 {•} is an indicator function which becomes 1 if the condition inside the function is satisfied and 0 otherwise. Unfortunately, L 0 is an analytically intractable nonlinear function due to 1 {B ik ≥ U ik }. In order to circumvent this problem, consider the following lemma. Lemma 1. For (P u − P b )f i < 1, the inequality L 0 ≥ L holds, where L is defined below. Proof: See Appendix. Note that θ i indicates the aggregate delay tolerance of file i among users for given f i and r u k . Additionally, the assumption (P u − P b )f i < 1 does not imply small-sized files, since f i is a normalized value. Applying L, the lower bound of L 0 from Lemma 1, yields the corresponding problem formulation P1. The last inequality constraint means BC files are slotted in a single queue while the BS transmits each file only once. With respect to L in P1, the following sections sequentially derive the optimal BC network components W * b , P * b , and s * i .
B. BC Frequency Allocation
Define F := Σ_{i=1}^{M} f i p i , the average requested file size per user, which is a given value independent of our network design. Considering small f i and a sufficiently large N, as assumed at the beginning of Section III, we can derive a closed-form solution for the optimal BC frequency allocation in the following proposition. Proof: See Appendix. The proposition shows that the optimal BC frequency allocation is determined regardless of the BC spectral efficiency r b and the price P b . Moreover, it provides the network design principle that the BC frequency amount is proportional to N and inversely proportional to the UC price P u . The latter is because it becomes necessary to enhance the BC downloading rate by allocating a larger amount of frequency to BC when the BC service becomes less price competitive (smaller P u ).
C. BC Pricing
We can derive the optimal BC price in a closed form in the following Proposition.
Proposition 2. Optimal BC price is given as follows.
Proof: See Appendix. The result shows that P * b is strictly increasing with N within the range from P u /2 to P u . It implies that a price increase is more effective for enhancing revenue than a price discount, although the discount may promote more BC use. This result plays a key role in designing a BC file scheduler that circumvents a recursion problem in Section III-D. In addition, it is worth mentioning that the BC file scheduler affects P * b by adjusting S * , since s * i therein varies with the order of BC files, as further elaborated in the following section.
D. BC File Scheduler
Each file i is tagged with a weighting factor w i by the BS. The BS determines the scheduling priorities of the files by comparing the w i 's. The file scheduling affects s i defined in Section II-D, so we maximize L in terms of s i as follows.
Proposition 3. (Optimal Scheduler) Broadcasting files in a descending order of w * i is the optimal scheduling rule maximizing L in P1, where w * i is defined below. Proof: For a given P * b , consider the subproblem P2 of P1. Applying Smith's indexing rule [14] and Proposition 2 yields the result stated in Proposition 3. Note that w * i is recursive, since S * in w * i is a function of s * i , which is in turn a function of w * i . This cannot be solved analytically, and we therefore resort to deriving the value by simulation in Section IV. In order to provide a more fundamental and intuitive understanding, we consider the following suboptimal but closed-form solution.
Corollary 1. (Suboptimal Scheduler) Broadcasting files in a descending order of w * i is a suboptimal scheduling rule enhancing L in P1 where Proof: Exploiting the boundary values of P * b in Proposition 2 at Proposition 3 enables to bypass the recursion problem, completing the proof.
Although the proposed scheduler is suboptimal, it still shows close-to-optimal behavior, as verified by Fig. 3 in Section IV. The suboptimal scheduler provides the following network design principle: more delay-tolerant (larger θ i ), more popular (larger p i ), and/or smaller (smaller f i ) files should be prioritized for BC if f i is sufficiently small such that P u f i /2 < 1.
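A compact sketch of the scheduling principle stated above: files are sorted by a weight that grows with delay tolerance and popularity and shrinks with file size. Because the closed-form w_i* of Corollary 1 is not reproduced here, the weight θ_i p_i / f_i used below is an illustrative stand-in consistent with that principle, not the paper's exact expression.

```python
# Hedged sketch of the BC file-scheduling principle: prioritize files that are
# more delay tolerant (larger theta_i), more popular (larger p_i), and smaller
# (smaller f_i). The weight theta_i * p_i / f_i is an illustrative stand-in for
# the closed-form w_i* of Corollary 1, which is not reproduced here.

from dataclasses import dataclass

@dataclass
class BCFile:
    name: str
    size: float        # f_i (normalized file size)
    popularity: float  # p_i (Zipf request probability)
    delay_tol: float   # aggregate delay tolerance theta_i

def schedule(files):
    return sorted(files, key=lambda x: x.delay_tol * x.popularity / x.size, reverse=True)

if __name__ == "__main__":
    queue = [
        BCFile("movie",   size=5.0, popularity=0.20, delay_tol=6.0),
        BCFile("app_upd", size=0.2, popularity=0.05, delay_tol=0.6),
        BCFile("series",  size=3.0, popularity=0.10, delay_tol=4.0),
    ]
    print([f.name for f in schedule(queue)])
```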
E. Revenue Gain
From a revenue perspective, we compare the proposed BC/UC network with conventional cellular networks where only UC operates. As a performance metric, we consider the revenue gain R, defined as the revenue of the proposed BC/UC network divided by that of the UC-only network. By combining Propositions 1-3, our proposed network framework achieves the following revenue gain. Proposition 4. The revenue gain R is given as follows.
where G := (0.5 + Σ_{i=1}^{M} s * i θ i p i /S * ). Proof: Applying the results of Propositions 1-3 to L yields the following maximized revenue of the proposed network: N F (P * b − G/2) + P u W T. Dividing it by the UC-only network's revenue P u W T while applying Proposition 2 concludes the proof.
Interestingly, the proposed network always achieves a positive revenue gain for sufficiently large files such that P_u > G, where G, defined in Proposition 4, is a decreasing function of f_i (recall that S* in G, and s_i* therein, is an increasing function of f_i by the definition in Section II-D). For those files, the revenue gain R increases on the order of N^2, converging to the order of N for large N when P_b* = P_u as the effect of N diminishes. It is worth mentioning that R grows even when frequency-time resources become scarce (smaller W T), thanks to the thrifty use of frequency by BC. In addition, the result captures that the design of the BC file scheduler affects revenue by adjusting S* (and G, a function of S*).
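As a quick numerical illustration of the proof sketch above, the snippet below (hypothetical values, not the authors' code) evaluates R as the maximized revenue N F (P_b* − G/2) + P_u W T divided by the UC-only revenue P_u W T.

```python
def revenue_gain(N, F, P_b_star, G, P_u, W, T):
    """R = (N*F*(P_b* - G/2) + P_u*W*T) / (P_u*W*T), following the proof sketch above."""
    uc_only_revenue = P_u * W * T
    proposed_revenue = N * F * (P_b_star - G / 2.0) + uc_only_revenue
    return proposed_revenue / uc_only_revenue

# Hypothetical placeholder values; units are left abstract as in the text.
print(revenue_gain(N=200, F=400.0, P_b_star=2.0, G=1.5, P_u=2.6, W=10.0, T=2.0))
```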
IV. NUMERICAL RESULTS
We consider two different LTE broadcast network scenarios in accordance with 3GPP Release 11 standards [2].
A. Single Cell LTE Broadcast
The first scenario is a typical single cell operating LTE BC, with the number of users N up to 200 and the entire frequency amount W given as 10 MHz. For BC, the BS is able to allocate up to 60% of W. For UC, the BS allocates on average 2.5 MHz to a single UC user until the downloading completes. At A_h, the average spectral efficiency r_h is given as 2.4 bps/Hz, whereas r_l at A_l is degraded by 45% from r_h, where |A_l| = 9|A_h|. These correspond to MCS index 19 with 64QAM and index 12 with 16QAM, respectively [12]. The number of possible requesting files M in the cell is fixed at 2,000, and the Zipf's law exponent γ is set to 1 by default. File sizes are uniformly distributed from 160 to 634 MBytes, which may correspond to 4.8- to 19-minute-long 1080p video content. The user delay threshold θ_ik is uniformly distributed from 0.6 to 6 seconds. Furthermore, T is set as 2 minutes and P_u as 2.6, a normalized value with no unit. Fig. 3(a) shows up to a 32% gain in revenue for a single cell LTE broadcast network, including the 4.7% increment contributed by the suboptimal scheduler proposed in Section III-D. Moreover, scheduler design becomes more important as N increases due to its growing effect on the revenue gain. In addition, the result shows that the revenue gain depends strongly on the user request concentration γ (Zipf's law exponent) as well as on the number of possible requesting files M in a cell. Specifically, doubling γ from 0.5 decreases the revenue gain by up to 12.7%, and doubling M from 2,000 decreases it by 16.3%.
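As a small sanity check of the stated correspondence between file sizes and video lengths, the snippet below shows that 160 MBytes over 4.8 minutes and 634 MBytes over 19 minutes both imply roughly a 4.4 Mbit/s stream, a plausible 1080p bitrate; this is only a back-of-the-envelope check, not part of the simulation.

```python
def implied_bitrate_mbps(size_mbytes, minutes):
    """Average bitrate in Mbit/s implied by a file size and playback duration."""
    return size_mbytes * 8 / (minutes * 60)

print(round(implied_bitrate_mbps(160, 4.8), 2))   # ~4.44 Mbit/s
print(round(implied_bitrate_mbps(634, 19.0), 2))  # ~4.45 Mbit/s
```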
B. 7 Cell Coordinated LTE Broadcast
The second scenario we consider is a Multicast Broadcast Single Frequency Network (MBSFN) [12] where 7 neighboring cells are synchronized and operate LTE broadcast like a single cell. Neglecting inter-cell interference, all the simulation settings are the same as in the single cell case except that the entire frequency amount W is increased to 70 MHz and the number of users N to up to 1,400. As a result, Fig. 3 shows that the proposed network with the suboptimal scheduler achieves a revenue gain of up to 720%. The result also verifies that the growth rate of the revenue gain with respect to N converges to a linear scaling law when P_b* = P_u (see Fig. 5 at N ≥ 770), as expected in Section III-E. The gain increment provided by the scheduler grows with N for small N, as anticipated from the single cell case. This tendency, however, no longer holds once N exceeds 770, where the suboptimal scheduler yields its maximum revenue increment of 70.6%, and the effect of the scheduler diminishes as N increases further. The reason is that no more BC frequency is available beyond this point, so revenue cannot be increased by any BS operation other than through the growing number of common requests due to N. This behavior is further supported by Figs. 4 and 5, which respectively show the linear growth of W_b* and P_b* with increasing N, as well as their convergence to the maximum values for N ≥ 770.
V. CONCLUSION
In this paper, we propose a BC network framework adaptively assigning BC or UC based on user demand prediction by examining content specific information such as file size, delay tolerance, and price sensitivity.For the purpose of the network operator's revenue maximization, the proposed framework jointly optimizes resource allocation, pricing, and file scheduling under a novel BC/UC selection policy.
Although the BS solely assigns BC or UC service without informing users of the possible selections, the proposed policy does not degrade but can even enhance user payoff. In addition, this study provides closed-form solutions that enable us to understand the fundamental behavior of the proposed framework and give meaningful network design insights; for instance, the revenue gain scaling order becomes N, rather than N^2, as N increases. We consequently observe up to a 32% increase in revenue for a single cell and more than a 7-fold increase for 7 cell coordinated LTE broadcast networks compared to conventional networks.
Future work will extend the proposed framework to more general multi-cell scenarios that rigorously incorporate inter-cell interference modeling.
APPENDIX
Proof of Lemma 1: Let X_k denote 1{B_ik > U_ik}. Since the X_k's are independent of n_i, we can apply Wald's identity [15]. The lower bound of X_k is then derived as follows.
Combining these results completes the proof.
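For reference, the identity invoked in the proof of Lemma 1 is Wald's identity for a sum with a random number of terms; since the intermediate display is omitted in this excerpt, the standard statement is recalled here in the proof's notation.

```latex
% Wald's identity: if the X_k are i.i.d. and n_i is independent of them, then
\mathbb{E}\!\left[\sum_{k=1}^{n_i} X_k\right] = \mathbb{E}[n_i]\,\mathbb{E}[X_1].
```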
Proofs of Propositions 1 and 2: The lower bound of the average revenue gain L is a concave function with respect to P_b as well as W_b. We can therefore find the unique optimal point (P_b*, W_b*) via convex programming. Let P_b be fixed and consider L in terms of W_b, yielding the solution given as: Similarly, for a fixed W_b, the optimal BC price is given as follows.
Combining (9) and (10) proves Proposition 1. For Proposition 2, N/S* increases with N since s_i* < N due to f_i < 1, where s_i* in S* is a function of N only. This proves that P_b* is an increasing function of N, completing the proof.
Fig. 1 .
Fig. 1.Time-frequency resource allocation for unicast and broadcast services where W b amount of frequency is allocated for broadcast while unity is allocated for unicast during T time slots
Fig. 2. Wireless channel model where a cellular base station provides average rate r_h for region A_h and r_l for A_l.
2) Delay tolerance examination: user k marks his requesting priority for file i as in conventional peer-to-peer (P2P) services (e.g., high/low). Assuming the BS has full knowledge of users' quality-of-experience (QoE) patterns, this priority information corresponds to the delay threshold θ_ik, the allowable delay without degrading QoE.
3) BC frequency allocation, pricing, and file scheduling: by inspecting f_i, n_i, and θ_ik, the BS allocates the BC frequency amount W_b and sets the BC price P_b, as well as optimizing the BC file scheduling in a revenue-maximizing order.
4) BC/UC selection: meanwhile in 3), the BS assigns either BC or UC to user k in order to maximize revenue without inflicting a payoff loss on the user. Note that the pricing scheme we consider is similar to time-dependent pricing [10] in that it flattens user traffic by adjusting P_b over time. The traffic targeted for offloading by the pricing is, however, novel, since the conventional scheme addresses the entire user traffic whereas the proposed scheme addresses content-specific traffic captured by n_i.
Proposition 1 .
Optimal BC frequency allocation W * b is given as follows.
Fig. 3. Revenue gains of (a) a single cell and (b) 7 cell coordinated LTE broadcast networks under the following environments: with the optimal/suboptimal scheduler, without scheduler, lower popular file concentration γ of user requests, and larger number of possible requesting files M.
Fig. 4. Optimal broadcast frequency allocation of the 7 cell coordinated LTE broadcast network with the proposed suboptimal scheduler for an increasing number of users when γ = 1 and M = 2,000.
Fig. 5.
ACKNOWLEDGEMENT
This research was supported by the Ministry of Science, ICT and Future Planning, Korea, under the Communications Policy Research Center support program supervised by the Korea Communications Agency (KCA-2013-001).
"Computer Science"
] |
A New Image Processing Procedure Integrating PCI-RPC and ArcGIS-Spline Tools to Improve the Orthorectification Accuracy of High-Resolution Satellite Imagery
Given the low accuracy of the traditional remote sensing image processing software when orthorectifying satellite images that cover mountainous areas, and in order to make a full use of mutually compatible and complementary characteristics of the remote sensing image processing software PCI-RPC (Rational Polynomial Coefficients) and ArcGIS-Spline, this study puts forward a new operational and effective image processing procedure to improve the accuracy of image orthorectification. The new procedure first processes raw image data into an orthorectified image using PCI with RPC model (PCI-RPC), and then the orthorectified image is further processed using ArcGIS with the Spline tool (ArcGIS-Spline). We used the high-resolution CBERS-02C satellite images (HR1 and HR2 scenes with a pixel size of 2 m) acquired from Yangyuan County in Hebei Province of China to test the procedure. In this study, when separately using PCI-RPC and ArcGIS-Spline tools directly to process the HR1/HR2 raw images, the orthorectification accuracies (root mean square errors, RMSEs) for HR1/HR2 images were 2.94 m/2.81 m and 4.65 m/4.41 m, respectively. However, when using our newly proposed procedure, the corresponding RMSEs could be reduced to 1.10 m/1.07 m. The experimental results demonstrated that the new image processing procedure which integrates PCI-RPC and ArcGIS-Spline tools could significantly improve image orthorectification accuracy. Therefore, in terms of practice, the new procedure has the potential to use existing software products to easily improve image orthorectification accuracy.
Introduction
With the development of remote sensing technology, the spatial resolution and spectral resolution of remote sensing images have been greatly improved, and thus application fields of remote sensing technology are expanding [1][2][3].High resolution satellite remote sensing imagery is a basic spatial data source to construct the digital Earth and can be widely applied in multiple subjects and areas, such as geology, vegetation, agriculture, forestry, and oceanography, etc. [4][5][6], especially in disaster emergency monitoring, real-time monitoring of land cover, ocean monitoring, and Earth-crust displacement and ground settlement monitoring [1,2,7,8].
Given the fact that high resolution satellite images possess the characteristics of timeliness and authenticity, rapidly acquiring information, relatively low cost, no geographic restrictions, and abundant spatial information and texture information, etc., developing high resolution satellite image processing techniques and conducting various application studies have continuously received attention [7][8][9].However, it is still challenging for us to develop an effective and accurate image processing method in order to rapidly, automatically identify and extract useful information for various application purposes from processed remote sensing images [7,10].In the processing of high resolution satellite images, investigating orthorectification models of satellite imaging is not only their core content but also a basis for high accurate orthorectification of remote sensing images [11].This is because only if a mathematical model of satellite imaging is established can the mathematical relationship between three-dimensional spatial coordinates of ground control points (GCPs) and corresponding pixel coordinates of image points be reflected [12,13].
The mathematical models of satellite imaging are generally divided into two categories.One is a strict imaging geometry model based on imaging properties of a sensor, commonly referred to as a physical model, which is established according to a strict geometrical relationship among GCPs of imaging, the center of the sensor lens and the corresponding image points in a straight line [2].The other is the general imaging geometry model only based on a simple mathematical function unrelated to the specific sensor, commonly referred to as a rational function model, which is established according to a relationship between GCPs and the corresponding image points [14,15].Similar to the general imaging geometry model or called a rational function model, the homography transformation is also one of the most popular and efficient geometric transformation models, and is frequently used in alignment of images and related fields in a single coordinate system to acquire 3D information, to detect/measure geometric difference, or to increase the field of view or signal-to-noise ratio [16].Typically, homographies are estimated between images by finding feature correspondences in those images.Homographic transformation of an image can be implemented by multiplying the image coordinate with the homography transform matrix [17].Since the physical model is applicable for a collinear equation model, satellite orbit ephemeris parameters and sensor parameters are necessary.However, it is either very difficult or impossible for most users to obtain such parameters [18].Therefore, nowadays rational function models are adopted by most users for an orthorectification of remote sensing images, and commonly used models include the polynomial correction model and the rational polynomial coefficients (RPC) correction model [2,15].
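To make the homography step concrete, the following sketch (illustrative only; the matrix values are hypothetical) multiplies homogeneous image coordinates by a 3x3 homography matrix and de-homogenizes the result, as described above.

```python
import numpy as np

def apply_homography(H, points_xy):
    """Map Nx2 pixel coordinates through the 3x3 homography matrix H."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])   # to homogeneous coordinates
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                         # de-homogenize

H = np.array([[1.02, 0.01,  5.0],
              [0.00, 0.98, -3.0],
              [1e-6, 0.00,  1.0]])
print(apply_homography(H, np.array([[100.0, 200.0], [250.0, 80.0]])))
```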
The polynomial correction model is more suitable for a plain area with a relatively smooth terrain because it does not need to consider the spatial geometry relation of imaging process.Rather, the RPC correction model is widely used in mountainous regions with a greater topographic relief due to the relief factor.Furthermore, the rectification accuracy of the RPC model is close to the collinear equation model [15,19].However, there are some limitations for the RPC model.First, the RPC correction model is in the form of a fraction, so it may fail to work when the denominator is equal to zero [13,20].Second, the RPC correction model may only correct errors of GCP but cannot eliminate the image distortion, so the accuracy of corrected images would be still affected by the accuracy of DEM [15].Third, because the RPC correction model is established based on specific points, theoretically, it is not strictly applicable to other points [19].Additionally, since the accuracy of the RPC model is dependent upon the GCP accuracy, distribution and quantity, it is necessary for users to have a set of high-quality GCPs running the PRC model [13,18].Therefore, at present, for a high resolution satellite image covering a mountainous area with a greater relief, it is essential for users to increase the image orthorectification accuracy.
In this study, to improve the orthorectification result for high resolution satellite images covering mountainous areas, a new operational and accurate processing procedure was proposed and tested, and the proposed procedure made a full use of mutual complementary satellite image processing software functions of PCI-RPC and ArcGIS-Spline [21,22].To do so, the raw image data were firstly processed into an orthorectified image using PCI with a RPC correction model, and then the orthorectified image was further processed using an ArcGIS-Spline tool for a local geometric correction [23,24].Local geometric correction is one of the key technologies to improve the accuracy of image orthorectification.A correlation coefficient method is frequently adopted in the search area to obtain the most relevant Remote Sens. 2016, 8, 827 3 of 16 points (i.e., GCPs) because of the good consistency and pixel-level accuracy of the orthorectified image.Therefore, the spline function is established for these points respectively to calculate the correct value and obtain local geometric corrected coordinates point-by-point, so as to improve the accuracy of image orthorectification [19].Essentially, local geometric correction is used to transform physical coordinates to user coordinates so that the spatial information of ArcGIS database has a practical significance [2,25].Finally, based on the tested results, the proposed procedure was evaluated and relevant issues were discussed.
Study Area
The experimental area is located in Yangyuan County (39°53′N-40°22′N,113°54′E-114°48′E), which is in the transition zone of Loess Plateau, Inner Mongolia Plateau and the North China Plain, Figure 1a shows the location of the experimental region [7].It is mountainous in the northern and southern portions of the county and Sanggan River runs across the whole county from west to east.The geomorphic types include mountains, hills, plains, and rivers, etc.The terrain of the experimental area is high in southwest, low in northeast, high in southern mountains and low in northern mountains.Because of the special topographical characteristics, it is difficult to get high-accuracy orthorectification results for high resolution satellite images covering the study area [18].
Data Sets
In this study, the experimental data, including base image, high resolution warp image, and DEM (Digital Elevation Models) data, were collected and used as follows.
• Base image: An image of the second national land survey of China was collected to be used as a base image (Ministry of Land and Resources of People's Republic of China), which is shown in Figure 1b.
• Warp images: The CBERS-02C high-resolution HR1/HR2 images (pixel size of 2 m) were used as warp images, which are shown in Figure 1d.
• DEM data: DEM SRTM data with 30 m spatial resolution were collected, which are shown in Figure 1c [18]. So far, these DEM data are the best (in resolution) we could use in the study area.
Rectification Models
Because the rational function models can run without satellite orbit and sensor parameters, they are widely used for orthorectification.Polynomial and RPC models are typical rational function models.
Polynomial Rectification Model
The polynomial rectification model is a mathematical model based on an image without regard to the spatial geometry of the imaging process [27]. The quadratic polynomial model is frequently adopted in the modeling; it is established based on a relationship between the ground coordinates X, Y and the image coordinates x, y of control points, as follows [14,28]: Based on the principle of least squares, the coefficients of the quadratic polynomial (a_00, a_10, a_01, ...) in Equation (1) can be solved. Then the new image coordinates can be calculated according to the transformation coefficients [14]. This method is usually adopted for plains with relatively smooth terrain and is not recommended for mountainous regions. Therefore, it is not used in this study, as the experimental area has a greater topographical relief [18].
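For illustration, the sketch below (hypothetical GCP values, not the study's data) estimates the quadratic polynomial coefficients by least squares from GCP correspondences; the six-term quadratic basis shown is one common form consistent with the coefficient names a_00, a_10, a_01 used above.

```python
import numpy as np

def design_matrix(X, Y):
    """Quadratic terms [1, X, Y, X^2, X*Y, Y^2] for each GCP."""
    return np.column_stack([np.ones_like(X), X, Y, X**2, X*Y, Y**2])

# Ground coordinates (X, Y) of GCPs and the corresponding image coordinate x.
X = np.array([0.0, 100.0, 200.0, 50.0, 150.0, 250.0, 80.0])
Y = np.array([0.0, 50.0, 100.0, 200.0, 150.0, 20.0, 220.0])
x_img = np.array([10.2, 60.1, 110.5, 35.8, 85.9, 135.0, 50.7])

A = design_matrix(X, Y)
coeffs, *_ = np.linalg.lstsq(A, x_img, rcond=None)   # a00, a10, a01, a20, a11, a02
print(coeffs)   # the y-coordinate is fitted analogously with the same design matrix
```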
RPC Rectification Model
(1) Expression of the RPC Model
The RPC (Rational Polynomial Coefficients) model is based on ground control points (GCPs) and DEM data to orthorectify an image [14]. It is in fact a broader and better expression of the sensor model, in which the row and column coordinates of an image are obtained as ratios of two polynomial functions, both of which are functions of the ground coordinates. The RPC model is therefore a further generalization of the polynomial model and the linear transform model, and it is suitable for different types of sensors [8,12]. In the RPC rectification model, the image coordinates are the ratio of two polynomials in which the three-dimensional coordinates of the GCPs are set as independent variables, as in Equation (2), i.e., r_n = P_1(X_n, Y_n, Z_n)/P_2(X_n, Y_n, Z_n) and c_n = P_3(X_n, Y_n, Z_n)/P_4(X_n, Y_n, Z_n) [2,15,20,25,27,29,30], where (r_n, c_n) and (X_n, Y_n, Z_n) are the image coordinates (r, c) and ground coordinates (X, Y, Z) normalized by translating and scaling in the RPC model [20,29,31]. Generally, ground coordinates and image coordinates are translated and scaled into values between (−1, 1) to enhance the stability of the parameter solutions, to reduce the computational errors caused by a large data magnitude, and to avoid an ill-conditioned matrix. The conversion relationship is shown in Equation (3) [19,20,29], where (X_0, Y_0, Z_0, r_0, c_0) are the translating parameters of the standardization, which are also the coordinates of the origin of the RPC model in the mapping coordinate system, and (X_s, Y_s, Z_s, r_s, c_s) are the proportional (scaling) parameters of the standardization. In the polynomials P_i(X, Y, Z) (i = 1, 2, 3, 4), neither the maximum power of each coordinate component nor the sum of the powers is greater than three [29]. According to Boccardo et al. [2], Hu et al. [15], Tao et al. [20], Li et al. [29], and Aguilar et al. [30], the polynomial representation is given by Equation (4), where the polynomial coefficients a_0, a_1, . . ., a_19 are designated as the coefficients of the rational polynomial function.
(2) Orthorectification Principle of the RPC Model
Substituting Equation (4) into Equation (3), the model can be written as Equation (5), which can then be rewritten as Equation (6). Equation (6) may be linearized using the Taylor formula to give Equation (7). The error equation of Equation (7) is written as Equation (8), which can be rewritten as Equation (9) and finally expressed in vector and matrix form as Equation (10). Based on Tao et al. [20] and Fraser et al. [13], the least squares solution of the coordinate corrections can be obtained from Equation (10) as Equation (11). Substituting Equation (11) into Equation (7), the coordinates of the orthorectified image can be obtained. Because Equation (7) is a linearized model, iteration is performed to obtain an optimal solution for the image coordinates.
All of the above is the basic principle of the orthorectification model based on the RPC model, in which the projection distortion caused by the topographical relief can be rectified by combining the RPC model with DEM data [15]. Owing to the consideration of the relief factor, the rectification accuracy of the RPC model is only lower than that of a collinear equation model [13]. Given that the experimental area covers a mostly mountainous area with a greater topographical relief, this study adopted the RPC model to orthorectify both the HR1 and HR2 images [13,20]. Although the RPC model can produce a higher accuracy of image orthorectification than the other models, due to the properties of the model itself, the RPC model still has some limitations, as addressed in the introduction section.
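A minimal sketch of the RPC forward evaluation described above is given below: ground coordinates are normalized by offset and scale parameters, the four 20-term cubic polynomials are evaluated, and the normalized row/column are obtained as ratios. The ordering of the 20 polynomial terms differs between providers, so the ordering used here is illustrative rather than a specific vendor convention, and all numeric values are placeholders.

```python
import numpy as np

def normalize(v, offset, scale):
    return (v - offset) / scale

def poly20(c, X, Y, Z):
    """Generic 20-term cubic polynomial in (X, Y, Z); illustrative term order."""
    terms = [1, X, Y, Z, X*Y, X*Z, Y*Z, X*X, Y*Y, Z*Z,
             X*Y*Z, X**3, X*Y*Y, X*Z*Z, X*X*Y, Y**3, Y*Z*Z, X*X*Z, Y*Y*Z, Z**3]
    return float(np.dot(c, terms))

def rpc_project(coeffs, lon, lat, h, offsets, scales):
    """Return normalized (row, column) as ratios of the four polynomials."""
    X = normalize(lon, offsets["X"], scales["X"])
    Y = normalize(lat, offsets["Y"], scales["Y"])
    Z = normalize(h,   offsets["Z"], scales["Z"])
    r_n = poly20(coeffs["LINE_NUM"], X, Y, Z) / poly20(coeffs["LINE_DEN"], X, Y, Z)
    c_n = poly20(coeffs["SAMP_NUM"], X, Y, Z) / poly20(coeffs["SAMP_DEN"], X, Y, Z)
    return r_n, c_n

# Placeholder coefficients and normalization parameters, purely for illustration.
coeffs = {"LINE_NUM": [0.0] * 20, "LINE_DEN": [1.0] + [0.0] * 19,
          "SAMP_NUM": [0.0] * 20, "SAMP_DEN": [1.0] + [0.0] * 19}
offsets = {"X": 114.3, "Y": 40.1, "Z": 900.0}
scales  = {"X": 0.45,  "Y": 0.25, "Z": 500.0}
print(rpc_project(coeffs, lon=114.5, lat=40.0, h=950.0, offsets=offsets, scales=scales))
```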
Spline Function Model
A spline function is selected for local geometric correction because it can overcome the trouble of unstable and slow speed from high order interpolation [32].The properties of minimum modulus and best approximation of the spline function can explain the resolution of the variational problem.Geometrically, the spline function describes a smooth-curved "thin beam" which "clamps" both end points through each interpolation point [33].The cubic spline function is widely used because of its simple calculation, good stability, high precision, and certain smoothness.
The cubic spline function has two types of expression, based on the first derivative and the second derivative, both of which are continuous and have a unique solution. Suppose the spline function S_{3,i}(x) is defined in a small interval (x_{i−1}, x_i) (i = 1, 2, . . ., n); it can be obtained from the interpolation condition given in Equation (12) [34][35][36][37]. Equation (12) can then be re-written by Hermite's interpolation formula [34][35][36] as Equation (14). According to the definition of the cubic spline function, the second derivative is continuous, as expressed by Equation (15). Calculating the derivative of Equation (14) and substituting it into Equation (15) gives Equation (18), where μ = h_i/(h_i + h_{i+1}) (1 ≤ i ≤ n − 1). Equation (18) contains n − 1 equations with n + 1 unknown variables, so it is necessary to add two more equations, i.e., two boundary conditions, to obtain a unique solution. This model may adopt the first boundary condition [37]. In this case, Equation (16) loses two unknown variables so that a unique solution can be obtained, and the solution values of m_i (i = 1, 2, . . ., n) can then be substituted into Equation (14) to establish the primary spline function.
In the procedure of orthorectification, the spline function relationship of x, y can be established by Equation (14) according to the coordinates of the same GCPs identified from both base image and warp image [33,37].Actually, the values of m 0 , m n+1 can be replaced by the reciprocal values of the slopes of the straight lines connected by the first two points and the last two points, respectively, and the values of h i depend on the situation.Finally, the correction values of x, y can be calculated to correct coordinates of each GCP in a local area of an image [33].
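The following is a simplified, one-dimensional sketch (not the ArcGIS implementation) of spline-based local correction: a cubic spline is fit to the differences between warp-image and base-image GCP coordinates and then evaluated at arbitrary points. The actual tool applies the same idea to both coordinates, and the GCP values below are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# GCP x-coordinates in the warp image and the corresponding base-image values.
warp_x = np.array([100.0, 400.0, 900.0, 1500.0, 2100.0])
base_x = np.array([102.1, 401.8, 898.7, 1501.2, 2099.4])

# One-dimensional correction along x; an analogous spline handles the y-coordinate.
correction = CubicSpline(warp_x, base_x - warp_x)

def correct_x(x):
    """Return the locally corrected x-coordinate for a point in the warp image."""
    return x + float(correction(x))

print(correct_x(650.0))
```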
A Procedure of Orthorectification
Figure 2 presents a flowchart to show the procedure of conducting orthorectification of images in this study [28,38,39].In this study, firstly, we set the base image as a reference image, the CBERS-02C HR images as warp images and overlaid DEM data over both base and warp images.Secondly, RPC rectification model was adopted and satisfied GCPs were identified and collected from the both base image and warp images.Thirdly, the conversion model was calculated and the warp images were re-sampled.Finally, the warp images were orthorectified and outputted with an acceptable accuracy [40,41].To do so, commercial software products PCI with RPC (Rational Polynomial Coefficients) model (PCI-RPC) and ArcGIS with Spline model (ArcGIS-Spline) were adopted to perform the proposed procedure of orthorectification of images.In order to improve the accuracy of orthorectifying satellite images that cover mountainous areas, we proposed a new operational and accurate processing procedure by making full use of the mutually compatible and complementary characteristics of the remote sensing image preprocessing software products PCI-RPC and ArcGIS-Spline tools [23,24].
Three operational steps were carried out: (1) orthorectifying the warp images with the PCI-RPC tool alone; (2) geometrically correcting them with the ArcGIS-Spline tool alone; and (3) executing both the PCI-RPC and ArcGIS-Spline tools. The consistent ground control points (GCPs) used for processing images with the three steps were identified and collected for comparison, analysis and verification of the orthorectified images created by the three operational steps. In mountainous and hilly areas, the number of GCPs for calibration should be larger than that required in plain areas [18,42]. When selecting GCPs from both the base image and the warp images, the locations of the GCPs should meet the following requirements:
• All points should be evenly distributed to represent the entire experimental area [42,43];
• Selected feature points should be clear enough to distinguish, such as road intersections, bridge corners, stadium corners, building corners, and wall angle positions, etc. [18,25];
• It is necessary to select a certain number of GCPs at the mountainside and mountaintop [18,44];
• The mean residual errors (i.e., RMSEs) of GCPs should be controlled within one pixel in the plains and hills, and within two pixels in the mountains. In this study, because the re-sampling pixel size is 2.0 m and the study area features mountainous areas mixed with a portion of plain area, the RMSEs of GCPs should be controlled from 2.0 m to 4.0 m [18,45].
Experiment and Results
Given the mountainous areas in the study area, the same locations of the 27 GCPs were identified and selected from the base image and the warp images (or the orthorectified images created using PCI-RPC model) for all three operation steps for calibrating orthorectification models and tools.The spatial distribution of the 27 GCPs located on the base image was shown in Figure 3. Three sets of orthorectified images processed by the PCI-RPC tool, ArcGIS-Spline tool and integrating PCI-RPC and ArcGIS-Spline tools, respectively, were obtained.Their rectified accuracies were verified, analyzed and compared.
Orthorectification by PCI-RPC Tool
At the first operation step, the satellite CBERS-02C HR images HR1 and HR2 were orthorectified by using the PCI-RPC model with the base image and DEM data.After re-sampling to 2.0 m pixel size by cubic convolution and calculating RMSEs [13], the orthorectified images a1 and a2 (corresponding to HR1 and HR2) were produced with the ArcGIS format of "img".Table 1 reports the residual errors of GCPs of orthorectified images a1 and a2.
In Table 1, the RMSE of the calibration GCPs was 1.86 m for image a1 and 1.79 m for image a2. According to Cheng et al. [42], Wolniewicz [46], and Aguilar et al. [18], for mountainous areas the maximum residual error should be controlled within 4 pixels, i.e., 8 m. Thus, the orthorectification accuracies of the calibration GCPs were controlled within the allowable error range.
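For reference, the accuracy metric used throughout this section can be computed as sketched below from per-GCP residuals (hypothetical values); whether the combined RMSE is reported as the planimetric value shown here or as another combination of the X and Y components may differ from the software's exact convention.

```python
import numpy as np

def gcp_rmse(dx, dy):
    """Return (RMSE_X, RMSE_Y, combined RMSE) from per-GCP residuals in metres."""
    dx, dy = np.asarray(dx), np.asarray(dy)
    rmse_x = np.sqrt(np.mean(dx ** 2))
    rmse_y = np.sqrt(np.mean(dy ** 2))
    rmse = np.sqrt(np.mean(dx ** 2 + dy ** 2))   # planimetric combination
    return rmse_x, rmse_y, rmse

dx = [1.2, -0.8, 2.1, -1.5, 0.6]
dy = [-1.0, 1.7, -0.4, 2.0, -1.1]
print(gcp_rmse(dx, dy))
```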
Geometric Correction by the ArcGIS-Spline Tool
At the second operation step, the HR1/HR2 images were geometrically corrected based on the base image and using ArcGIS-Spline tool.In this study, since HR1/HR2 images were only given the setting of image projection parameter, the number of GCPs directly affects the geometric correction accuracy [23,47,48].After calculating the RMSEs automatically (Table 2), the corresponding rectified images b1/b2 were created.In the table, the RMSEs of GCPs were 0.4199 m/0.2001 m for images b1/b2, respectively, which were controlled in the allowable error range [46].
Integrating the PCI-RPC and ArcGIS-Spline Tools
At the third operation step, the orthorectified images (a1/a2) created by using the PCI-RPC model were further processed by using ArcGIS-Spline tool with the same locations of calibration GCPs used for the rectification of images separately using the PCI-RPC model and ArcGIS-Spline tool, which could significantly improve the geometric corrected accuracy of the orthorectified images completed at the first operational step.The orthorectified images ab1/ab2 produced by using both PCI-RPC and ArcGIS-Spline tools were presented in Figure 4.The corresponding RMSEs of GCPs for rectified images ab1/ab2 were also listed in Table 2.In the table, it is clear that the RMSEs of GCPs were 0.0907 m/0.0507 m for images ab1/ab2, respectively, which indicates that errors were controlled in the allowable error range and also much lower than those created at the second operation step.
* "RMSE X" and "RMSE Y" represent the residual error values of GCPs in the directions of X and Y, respectively; "RMSE" represents the mean residual error values of GCPs.
Comparison of Image Rectification Approaches
A total of 15 validation GCPs were identified and selected from the base image and all three sets of rectified images created at the corresponding three operation steps to validate the accuracy of three sets of corrected images a1/a2, b1/b2, and ab1/ab2 (Figure 5).After automatically calculating the RMSEs of the 15 validation GCPs, their results were listed in Table 3.In the table, the RMSEs of the validation GCPs for the three sets of geometrically rectified images a1/a2, b1/b2, and ab1/ab2 were 2.94 m/2.81 m, 4.65 m/4.41 m, and 1.10 m/1.07 m, respectively.
Through the comparative analysis of the verification results (Table 3) among the three sets of rectified images a1/a2, b1/b2, and ab1/ab2, the experimental results demonstrate that:
• The accuracy of validation GCPs was consistently the highest for images ab1/ab2 among the three sets of geometrically corrected images, which indicates that the orthorectification accuracy has been significantly improved by using the new image processing procedure of integrating PCI software with the RPC orthorectification model and the ArcGIS-Spline tool [49];
• The accuracy of validation GCPs was consistently the lowest for images b1/b2, which means that although running the ArcGIS-Spline tool could lead to high geometrical correction accuracy for the calibration GCPs in Table 2, for the other areas in the corrected images the geometrical correction accuracy is actually very low, as the Spline model only works well for local geometric correction around GCPs without considering the topographic relief in the study area [20,42,43];
• The accuracy of validation GCPs was secondary for images a1/a2 among the three sets of corrected images, which means that when conducting image geometric correction, incorporating DEM data (thus called orthorectification) will help improve the image geometric correction accuracy compared with the case without using DEM data in image geometric rectification.
Discussion
The test results show that the new image processing procedure which integrates PCI-RPC and ArcGIS-Spline Tools is operational and effective for improving the accuracy of image orthorectification by local geometric correction.The basic principle of the RPC model is to orthorectify images by using GCP and DEM data, and all the polynomials are the function of ground coordinates (longitude, latitude and elevation).For the image orthorectification, the projection distortion caused by the elevation difference can be rectified by the PCI-RPC with DEM data [8,12,14].In the RPC model, the errors caused by an optical projection system can be expressed by a function of a rational polynomial.The errors caused by the Earth's curvature, atmospheric refraction and lens distortion can be modeled by a quadratic rational polynomial.Some other unknown error with high order components, such as camera shake, can be expressed by a cubic rational polynomial [15,20].
Without sensor imaging parameters, the RPC model can not only guarantee strict positioning accuracy with evenly distributed errors, but also allows different geographic reference coordinate systems. Therefore, it has great application potential in the field of high-resolution satellite imagery [26,28]. However, the accuracy of the orthorectified images is still affected by the DEM accuracy. Because the RPC correction model is in the form of a fraction, the denominator changes markedly when the control points used to calculate the RPC parameters are non-uniformly distributed or when the model is over-parameterized. It is therefore easy to obtain an ill-conditioned normal equation from the modeled function, which affects the stability of the model and decreases the accuracy of the orthorectified image [19,20,30,31]. The results obtained by orthorectifying images with the PCI-RPC model showed that the errors were obvious in both the line and column directions, while the newly proposed image processing procedure integrating the PCI-RPC and ArcGIS-Spline models can correct the position errors of the RPC model. The ArcGIS-Spline model can improve the accuracy of the orthorectified image and realize error compensation through the correction values of points obtained with the spline function.
The RPC model has 90 coefficients in total, including 10 normalization coefficients and 80 rational function coefficients. Therefore, it is necessary to calculate the RPC coefficients using a large number of control points [20,30]. In this study, the experimental CBERS-02C HR data were provided with corresponding RPC coefficients, so the warp images could be orthorectified by the RPC parameters together with DEM data. The RPC coefficients usually come in two formats, the RPC text format and the RPB format. The CBERS-02C HR remote sensing images provided the second format, which included 4 groups of parameters, i.e., LINE_NUM_COEF, LINE_DEN_COEF, SAMP_NUM_COEF, SAMP_DEN_COEF, corresponding to the four polynomials of Equation (2), respectively. Each group includes 20 values, corresponding to the 20 coefficients of Equation (4). Consequently, the warp images could be orthorectified by the RPC model.
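The structure described above can be illustrated as follows (placeholder numbers, not real RPCs): the four RPB groups of 20 coefficients each, plus ten normalization parameters, give the 90 coefficients in total; the normalization key names used here are illustrative rather than the exact RPB field names.

```python
# Four coefficient groups of the RPB format, 20 values each (placeholders).
rpb = {
    "LINE_NUM_COEF": [0.0] * 20,            # numerator of the row (line) polynomial
    "LINE_DEN_COEF": [1.0] + [0.0] * 19,    # denominator of the row polynomial
    "SAMP_NUM_COEF": [0.0] * 20,            # numerator of the column (sample) polynomial
    "SAMP_DEN_COEF": [1.0] + [0.0] * 19,    # denominator of the column polynomial
}

# Ten normalization (offset/scale) parameters complete the 90-coefficient set.
normalization = {
    "LAT_OFF": 40.1, "LAT_SCALE": 0.25,
    "LONG_OFF": 114.3, "LONG_SCALE": 0.45,
    "HEIGHT_OFF": 900.0, "HEIGHT_SCALE": 500.0,
    "LINE_OFF": 6000.0, "LINE_SCALE": 6000.0,
    "SAMP_OFF": 6000.0, "SAMP_SCALE": 6000.0,
}

assert all(len(v) == 20 for v in rpb.values())
assert len(normalization) == 10     # 4*20 + 10 = 90 coefficients in total
```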
Comparing the accuracy of orthorectified images obtained by using only the RPC parameters provided by satellite data providers with the base image, the orthorectified results by the RPC model showed significant displacement of control points.For example, the RMSEs of GCPs for orthorectified image HR1 in X and Y directions were 1.78 m and 2.35 m, respectively.The RMSEs of GCPs for orthorectified image HR2 in X and Y directions were 2.06 m and 1.91 m, respectively.A large number of studies and data analyses show that the main cause of errors in the RPC model is the re-parameterization of thephysical sensor model.The errors of both interior and exterior orientation elements may cause the errors in RPC parameters, so it is necessary and urgent to develop new methods to improve the accuracy of image orthorectification [13,15,20].
The spline function is widely used because of its simple calculation, good stability, high accuracy, certain smoothness, and the significant accuracy improvement.However, with the increase of the number of control points, the number of unknown variables is also increased.It is difficult to calculate the unknown variables with a lot of control points [42,43].In order to solve these problems, based on the theory and method of spline function, this study adopted a cubic spline function with derivative to correct control points, which could convert the dual function into unary function to simplify the complexities of the problems, and to correct the coordinates of arbitrary points in a wide range to improve the accuracy of orthorectified images by using a local geometric correction.Compared with the base image, the results of the RPC and Spline (cubic) tools produced a dramatic accuracy improvement, and the orthorectification accuracies (RMSEs) of images HR1 and HR2 in X and Y directions were 0.72 m/0.83 m, 0.80 m/0.73 m, respectively.
In this study, SRTM data with a 30 m spatial resolution were collected as DEM data, up to now, which are the best DEM data we could use in the study area.If the higher spatial resolution DEM data are available for orthorectifying images, a better result for image orthorectification may be expected.However, this does not influence the performance of the proposed procedure for improving image orthorectification.
In this study, the experimental results fully demonstrate that the new procedure has a potential to improve the accuracy of rectifying images by using PCI and ArcGIS existing software products.The new image processing procedure integrating the PCI-RPC model and ArcGIS-Spline tool has resulted in the best image orthorectification result.The improvement of image geometric rectification may be explained by the following three points: (1) The software PCI with RPC orthorectification model considers three dimensional factors (X, Y, and Z coordinates) and the whole image scene for an image orthorectification.Given the great topographic relief in our study area (a mountainous area), if a geometric correction model only considers correcting X-Y two dimensional distortion for a warp image, it may be work well in a plain region.However, such a model for a mountainous area might be expected to work poorly due to not considering image distortion caused by the elevation variation.Thus, compared to the geometric correction result created by using ArcGIS-Spline which just considers X-Y two dimensional distortion of a warp image, the PCI-RPC model outperformed the ArcGIS-Spline (Table 3).
(2) The ArcGIS-Spline tool only considers two dimensions (X, and Y coordinates) and local areas around GCPs (i.e., local geometric correction).The Spline function can work well in a plain area in considering image two-dimensional distortion and may result in high local geometrical correction accuracy around GCPs [46].Consequently, the ArcGIS-Spline could create a very good geometrical correction result for the set of calibration GCPs (Table 2), but it worked very poorly for the validation GCPs (see its worst result in Table 3).
(3) The new image processing procedure that integrates PCI with the RPC orthorectification model and ArcGIS-Spline tool has a synergic advantage from the RPC model (performing a three-dimensional correction over the whole scene) and Spline tool (performing a two-dimensional correction working very well over a local area around GCPs).In this study, the orthorectified images (a1/a2) created by the PCI-RPC tool has been corrected for most three dimensional distortions at the first operation step, especially correcting the distortion caused by the elevation variation in a mountainous area.Therefore, continuously correcting the corrected images (a1/a2) with the ArcGIS-Spline tool (i.e., further correcting remained distortion at the X-Y dimension) might be expected to further improve the image orthorectification accuracy.The lowest RMSEs of validation GCPs for the corrected images (ab1/ab2) in Table 3 supported this expectation.
Conclusions
In this study, we analyzed various reasons for geometric distortions, identified the differences in typical models of image geometric rectification, and discussed the definitions and rectification theories of the Rational Polynomial Coefficients (RPC) model and Spline function.Relevant disadvantages and difficulties were discussed for executing the RPC model and Spline function using commercial software products for orthorectifying high-resolution satellite images.A new processing procedure was proposed by integrating PCI software with the RPC model (PCI-RPC) and ArcGIS with the Spline tool (ArcGIS-Spline) to improve the accuracy of image orthorectification.The new image processing procedure was tested using two scenes of high-resolution satellite images that were acquired from a mountainous area.The experimental results demonstrated that the newly proposed procedure could significantly improve the image orthorectification accuracy by comparing with the traditional procedures such as using either the PCI-RPC model or ArcGIS-Spline function.They suggest that the new procedure would have a broad potential application, and thus it is worthy and valuable to research and develop.
With the widening application of high-resolution satellite imagery, using existing commercial image processing packages, the development of operational and efficient satellite image processing procedures such as high accurate image orthorectification will benefit those users who have a limited knowledge of remote sensing image processing.
Figure 1. The location of the experimental area and presentation of experimentally used image data: (a) the location of the experimental area; (b) base image; (c) DEM data; and (d) warp images: HR1/HR2.
Figure 2. The procedure used for orthorectifying images in this study.
Figure 3. The spatial distribution of the rectification GCPs located on the base image: (a) the rectification GCPs were used for rectifying image HR1; (b) the rectification GCPs were used for rectifying image HR2.
Figure 5. The spatial distribution of the validation GCPs located on the base image: (a) the validation GCPs were used for validating images a1/b1/ab1; (b) the validation GCPs were used for validating a2/b2/ab2.
Pressureless glass crystallization of transparent yttrium aluminum garnet-based nanoceramics
Transparent crystalline yttrium aluminum garnet (YAG; Y3Al5O12) is a dominant host material used in phosphors, scintillators, and solid state lasers. However, YAG single crystals and transparent ceramics face several technological limitations including complex, time-consuming, and costly synthetic approaches. Here we report facile elaboration of transparent YAG-based ceramics by pressureless nano-crystallization of Y2O3–Al2O3 bulk glasses. The resulting ceramics present a nanostructuration composed of YAG nanocrystals (77 wt%) separated by small Al2O3 crystalline domains (23 wt%). The hardness of these YAG-Al2O3 nanoceramics is 10% higher than that of YAG single crystals. When doped by Ce3+, the YAG-Al2O3 ceramics show a 87.5% quantum efficiency. The combination of these mechanical and optical properties, coupled with their simple, economical, and innovative preparation method, could drive the development of technologically relevant materials with potential applications in wide optical fields such as scintillators, lenses, gem stones, and phosphor converters in high-power white-light LED and laser diode.
T ransparent crystalline yttrium aluminum garnet (YAG; Y 3 Al 5 O 12 ) is particularly noteworthy due to its importance as a host material in solid state lasers [1][2][3][4] , phosphors [5][6][7][8] , and scintillators 9 . Commercial YAG materials are usually single crystals, which are grown by directional solidification from melts and demonstrate outstanding optical performances 10 . However, such single crystals are limited in size and shape, maximal chemical doping level, and crystal growth rate which imply high production costs. Transparent YAG ceramics have recently proved their ability to compete with these single crystals in domains including optics, electronics, and scintillating devices 1,[11][12][13] . Transparency in these polycrystalline materials is ensured by the absence of light scattering sites (pores, secondary phases) to avoid energy dissipation within the material 13 . Compared to single-crystal technology, transparent ceramics attract significant attention due to geometric versatility, relatively swift scalable manufacturing, and doping flexibility 1, [13][14][15] . Diverse sintering synthetic approaches have been employed for their elaboration, including vacuum sintering, hot isostatic pressing or spark plasma sintering with specific nanometer-scale raw powders 1 . There has been only very few reports of highly transparent nanocrystalline ceramic until the recent work on silicate garnets fabricated by direct conversion from bulk glass starting material in mutianvil high-pressure apparatus 16 . Nevertheless, all these processes require high-pressure and high-temperature sintering conditions, and it remains challenging to reach industrial production due to complex processes and reproducibility problems [16][17][18] . Ikesue et al. 19 reported the possibility to prepare transparent YAG ceramics by pressureless slip casting and vacuum sintering at about 1800°C but with large grain size (40-60 μm). However, even though fully densified ceramics with nanometer-scale grain sizes are promised to unprecedented optical, mechanical and electrical properties with applications in lasers, phosphors, and electrical devices 1,13,20,21 , pressureless fabrication of transparent nanoceramics has never been reported up to date.
To overcome the drawbacks of both single crystals and powder sintered ceramics, full crystallization from glass is regarded as a 24 , have been successfully prepared using such a bulk glass crystallization route. The reports demonstrate that near-zero volume contraction during glass crystallization is required to avoid crack formation and therefore to obtain a fully dense microstructure 23 . Unfortunately, in addition to the complex elaboration of stoichiometric Y 3 Al 5 O 12 glass bulk 29,30 , the large density difference between the glass (∼4.08 g/cm 3 ) 31 and the crystalline (4.55 g/cm3) YAG phases prevents transparency to be retained during crystallization using a full and congruent crystallization from glass approach 32 . This is the reason why Tanabe et al. developed transparent YAG glass-ceramics 33,34 .
Here we demonstrate the possibility to elaborate transparent YAG-based crystalline materials at room pressure via complete nanocrystallization of a 74 mol% Al 2 O 3 -26 mol% Y 2 O 3 (AY26) parent bulk glass. The resulting YAG-Al 2 O 3 composite ceramics present a fully dense microstructure composed of YAG (77 wt%) and Al 2 O 3 (23 wt%) nanocrystals (Fig. 1). These biphasic ceramics demonstrate transparency from the visible up to the near infrared ranges (6 μm) and improved mechanical properties, especially higher hardness, compared to YAG single crystal and transparent ceramics. When doped by Ce 3+ , the YAG-Al 2 O 3 nanoceramic synthesized at 1100°C shows a 87.5% quantum efficiency, which is comparable with commercial YAG:Ce 3+ fluorescent powders (75-90%) 35,36 . The combination of the highperformance mechanical and optical properties with the thermal stability of this YAG-Al 2 O 3 ceramic material, coupled to its facile room pressure preparation, opens great potential applications in wide optical fields such as jewellery, lenses, scintillators and phosphor converters for high-power white-light LED, and laser diodes, which require limited light scattering in the materials 37 .
Results
Material synthesis and sample preparation. Transparent YAG-Al2O3 ceramics were elaborated through full crystallization from a transparent yttria-alumina glass whose composition deviates from stoichiometric YAG. A controlled Al2O3 excess was indeed introduced, and several Al2O3-Y2O3 compositions were compared (Supplementary Fig. 2). As the crystallization of the AY26 glass composition led to materials with the highest transparency and YAG content, the work presented here focuses on this composition. Transparent bulk glass precursors (Supplementary Fig. 3) were thus synthesized from a 74 mol% Al2O3-26 mol% Y2O3 composition using an aerodynamic levitation system equipped with a CO2 laser 40,41 . This contactless method enables high-temperature melting (at around 2000°C) and free cooling when the lasers are switched off (~300°C/s). It is expected that scaled, commercial production of larger glass samples could be attained using an industrial electric-arc high-temperature melting process. The amorphous nature of the AY26 glass was confirmed by both X-ray and electron diffraction (Supplementary Figs. 4 and 5). As demonstrated by scanning transmission electron microscopy, the glass appears homogeneous down to the nanometer scale (Supplementary Fig. 5). Therefore, the 74 mol% Al2O3-26 mol% Y2O3 glass does not show any evidence of phase separation, as reported for various Al2O3-Y2O3 melt compositions, which could otherwise have explained the nanometer-scale crystallization 31,42,43 .
Differential scanning calorimetry (DSC) measurement on the AY26 glass collected as a function of temperature clearly shows a glass transition at 887 ± 1°C followed by a strong exothermic peak at 931 ± 1°C. As illustrated by in situ high-temperature X-ray powder diffraction (HT-XRD), the latter corresponds to the concomitant crystallization of the glass into Y3Al5O12 and δ-Al2O3 phases (Fig. 2). Two small and broad DSC exothermic peaks can also be observed at higher temperatures, corresponding to the δ- to θ-Al2O3 (Tonset = 1061°C; Tpeak = 1164°C) and θ- to α-Al2O3 (Tonset = 1292°C; Tpeak = 1329°C) phase transitions, respectively. These results are in agreement with previous studies on transition alumina-phase transformations from boehmite 44 . The enhanced thermal stability of the transition Al2O3 phases in the AY26 ceramics compared to previous works may be attributed to the large specific interfacial areas of the Al2O3 nanodomains (∼145 m2/g), which can have a significant influence on the transformation thermodynamics 45,46 . Following these observations, YAG-Al2O3 biphasic nanoceramics were then simply prepared by crystallization of the AY26 glass via a single thermal treatment of 2 h at temperatures ranging from 950 to 1100°C. The resulting YAG-Al2O3 composite ceramics show transparency, even though limited light scattering can be observed in the visible range as the thermal treatment temperature increases (Supplementary Fig. 6).
Material microstructure. The room temperature X-ray powder diffraction pattern of the AY26 glass sample crystallized at 1100°C for 2 h can be indexed with two crystalline phases, Y3Al5O12 and a transition Al2O3 (the determination of the nature of the Al2O3 polymorph is not straightforward given that the γ-Al2O3 and δ-Al2O3 phases exhibit close diffraction patterns; however, Rietveld refinement showed better agreement factors using the γ-Al2O3 structural model). Quantitative phase analysis performed by Rietveld refinement led to 79 ± 2 wt% YAG and 21 ± 2 wt% Al2O3, in good agreement with the nominal formula (77 wt% YAG and 23 wt% Al2O3). Moreover, using the fundamental parameters approach 47 , the crystallite size could first be estimated as 35 ± 2 nm. However, a strain effect correction had to be taken into account, which markedly improved the fit and increased the crystallite size to 51 ± 2 nm. The AY26 material heat treated up to 1400°C shows continuous grain growth upon heating, as demonstrated by in situ HT-XRD (Fig. 2b). Analysis of these XRD data shows that full crystallization of the YAG-Al2O3 glass composition is achieved from 900°C (no evolution of the phase quantification above this temperature (Fig. 2b, c)). One can note that the full crystallization from glass process enables YAG-based crystalline materials with small nanocrystals to be elaborated using an appropriate single heat treatment. Moreover, the growth rate remains relatively slow at high temperature, most probably because of the presence of Al2O3 barriers, implying that a self-limited growth mechanism must be taking place 48,49 . The bright field transmission electron microscope (TEM) micrograph presented in Fig. 1a and the embedded selected area electron diffraction (SAED) pattern clearly indicate that strong crystallization has occurred in the material. No amorphous area could be detected, supporting a full crystallization from glass process. Moreover, the nanometer scale of the YAG crystals suggested by XRD is confirmed, and the size distribution appears relatively narrow (average size of 26 nm, σ = 6.7 nm). The presence of strain in the sample will be further confirmed by high-resolution scanning transmission electron microscopy (HRSTEM)-high angle annular dark field (HAADF) imaging, and the agreement between the crystal sizes determined by XRD and TEM will be discussed at that point. Even though no phase separation could be detected in the AY26 glass, the nanometer-scale structuration of the related YAG-Al2O3 composite ceramic (Fig. 1) is typical of a strong volume crystallization mechanism. The ceramic microstructure appears quite similar to the one observed in glass-ceramics elaborated from a nanometer-scale phase-separated glass (spinodal decomposition) 50 . We postulate that the high cooling rate from the melt prevents phase separation from taking place during glass forming, but we cannot exclude that nanoscale phase separation may occur upon the crystallization heat treatment, which would explain the high nucleation rate and the observed nanostructure.
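The interplay between size and strain broadening described above can be illustrated with a simple Williamson-Hall separation, which is related to, but simpler than, the fundamental-parameters approach actually used in this work. The sketch below uses hypothetical peak positions and breadths (not the measured data) and only shows how ignoring the strain term yields a smaller apparent crystallite size, qualitatively mirroring the 35 nm versus 51 nm values quoted above.

```python
import numpy as np

# Williamson-Hall separation of size and strain broadening:
#   beta * cos(theta) = K * lam / D + 4 * eps * sin(theta)
# beta: peak breadth (rad), theta: Bragg angle, K: shape factor (~0.9),
# lam: X-ray wavelength, D: crystallite size, eps: microstrain.
# Peak positions and breadths below are illustrative placeholders, not measured values.
K, lam = 0.9, 1.5406                                            # Cu K-alpha wavelength (angstrom)
two_theta = np.radians([18.1, 27.8, 29.7, 33.3, 36.6, 41.1])    # hypothetical reflections (deg -> rad)
beta      = np.radians([0.22, 0.24, 0.25, 0.27, 0.29, 0.31])    # hypothetical breadths (deg -> rad)

theta = two_theta / 2
x = 4 * np.sin(theta)                 # strain axis
y = beta * np.cos(theta)              # broadening axis

slope, intercept = np.polyfit(x, y, 1)        # slope = eps, intercept = K*lam/D
D_with_strain = K * lam / intercept / 10      # angstrom -> nm
D_no_strain   = K * lam / np.mean(y) / 10     # forcing eps = 0 underestimates the size
print(f"size with strain term ~ {D_with_strain:.0f} nm, microstrain ~ {slope:.4f}")
print(f"size without strain term ~ {D_no_strain:.0f} nm")
```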
To better characterize the nanoscale microstructure of the transparent YAG-Al2O3 ceramics, high-resolution transmission electron microscopy (HRTEM) and STEM-HAADF imaging were performed on thin foils of a ceramic sample crystallized at 1100°C for 2 h (Fig. 3). Once again the samples appear fully dense, as no porosity could be detected. The HRTEM image clearly shows the presence of two different crystalline phases (Fig. 1c; Supplementary Fig. 7). The main one, for which FFT (fast Fourier transform) patterns can be indexed with a garnet structure assigned to YAG, shows dark crystallites with a uniform size distribution of about 30 nm (Fig. 1a). Unfortunately, the small size of the residual bright phase located between the YAG grains and the poor related FFT pattern do not allow an unambiguous assignment to Al2O3 (Supplementary Fig. 7). However, as the average atomic numbers (obtained from the sum of the atomic numbers (Z) of all atoms composing a phase divided by the number of atoms) of Al2O3 (ZAl2O3 = 10) and YAG (ZYAG = 14) differ significantly, STEM-HAADF imaging (also called Z-contrast imaging) appears as an efficient imaging mode to distinguish both phases. Indeed, as presented in Fig. 3a, STEM-HAADF images exhibit two different phases with a strong contrast difference. STEM-EDX elemental maps and the cationic composition profile (Fig. 3c, d) both demonstrate that the dark phase can be unambiguously assigned to pure Al2O3 and the bright phase to YAG. The presence of a pure Al2O3 crystalline phase after glass crystallization is obviously related to the excess Al2O3 in the AY26 glass compared to the YAG formula. This result demonstrates that all the Y2O3 present in the glass has reacted to form YAG nanocrystals, and thus confirms the 77 wt% YAG crystalline fraction in the YAG-Al2O3 ceramic previously determined by X-ray powder diffraction, in agreement with a full glass crystallization process into YAG and Al2O3.
The homogeneous distribution of thin Al2O3 areas around the YAG nanograins plays a great role in limiting their coarsening upon heating. XRD data recorded versus temperature show a slow and progressive increase of the YAG nanocrystal size whereas the YAG content remains constant. Moreover, the HRTEM and STEM-HAADF images (Figs. 1c and 3a, respectively) clearly show interconnected YAG nanocrystals sharing grain boundaries and forming a 3-D network. This microstructure is typical of a coalescence growth mechanism 51,52 . This coalescence effect is clearly illustrated in Fig. 3b, where the misfit between the crystallographic orientations of the two YAG grains is very small (about 6° between the (21-1) planes). As this coalescence mechanism takes place, it induces strain at the grain boundaries (coalescence necks), in good agreement with the XRD Rietveld refinement presented in Fig. 1b, which required a strain correction. The small crystallite size determined by TEM (about 30 nm) is consistent with the average size determined by Rietveld refinement without the use of the strain constraint. Now considering the grains as merged, the size of the coalesced domain is much larger, here again in good agreement with the size determined by Rietveld refinement with the use of the strain constraint (about 50 nm).
The density of the AY26 glass material increases significantly (from 3.80 to 4.25 g/cm3) during crystallization, implying that a large volume shrinkage (11.8%) occurs (Supplementary Fig. 8). Nevertheless, the AY26 ceramic remains fully dense without any porosity or cracks. It is reasonable to infer that the substantial shrinkage stress is effectively released by structural relaxation of the very small transition Al2O3 domains.
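The 11.8% figure can be checked directly from the two densities, assuming mass conservation during crystallization; the quick sketch below shows that it corresponds to the volume change referenced to the final (ceramic) volume.

```python
# For a fixed mass m, V = m / rho, so the relative volume change is
#   (V_glass - V_ceramic) / V_ceramic = (rho_ceramic - rho_glass) / rho_glass
# Densities taken from the text (g/cm^3).
rho_glass, rho_ceramic = 3.80, 4.25

shrinkage_vs_ceramic = (rho_ceramic - rho_glass) / rho_glass   # referenced to the final volume
shrinkage_vs_glass   = 1 - rho_glass / rho_ceramic             # referenced to the initial volume
print(f"{shrinkage_vs_ceramic:.1%} (vs. ceramic volume)")      # ~11.8%, the value quoted above
print(f"{shrinkage_vs_glass:.1%} (vs. glass volume)")          # ~10.6%
```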
Although the refractive index of Al2O3 (1.60) does not match that of YAG (1.82) 53 , the sample nanostructure enables high transparency to be retained during glass crystallization (the nanosize of both the YAG and Al2O3 grains minimizes light scattering according to the Rayleigh-Gans-Debye theory) 28 . The transmittance spectrum recorded in the UV-VIS-NIR and MIR regions is presented in Fig. 1d. The photograph of the sample placed 2 cm above the text demonstrates the good transparency of the YAG-Al2O3 composite ceramic. The transmittance of the YAG-Al2O3 ceramic is also compared to that of the AY26 parent bulk glass, the YAG single crystal and the commercial YAG transparent ceramic (Supplementary Fig. 9). The transparency covers a wide wavelength range from the visible up to the infrared region (6 μm), similarly to YAG single crystals. The absorption band located around 3 μm is attributed to the absorption of the free hydroxyl (OH) group, which is commonly observed in oxides 1,4,5,37 . Luminescence properties were measured on a Ce3+-doped YAG-Al2O3 composite ceramic (AY26 glass crystallized for 2 h at 1100°C). No segregation of Ce at the grain boundaries could be observed by EDS-STEM experiments, even though this point cannot be totally excluded given the low amount of Ce doping. As the ionic radius of Ce3+ is very similar to that of Y3+, Ce3+ is expected to dope the YAG crystals, which provide a good crystal field environment for luminescence. The photoluminescence spectrum presented in Fig. 4b shows a typical Ce3+: 5d → 4f broadband emission. The internal quantum efficiency (QE) of the transparent 2% Ce3+-substituted YAG-Al2O3 nanoceramic (2% of the Y3+ ions are substituted by Ce3+) reaches 87.5%, similar to commercial YAG:Ce3+ materials, which is promising for further applications in the field of white-light LEDs. The original nanostructure of the biphasic YAG-Al2O3 ceramics may explain the high quantum efficiency by strong confinement effects which induce high luminous efficiency 57,58 . The QE is much superior to that of conventional SiO2-based YAG transparent glass-ceramics (about 30%) 34 . The high QE of the material can also be linked to the high crystalline quality of the YAG nanocrystals in the YAG-Al2O3 composite ceramic 59 . The QE is also higher than that of YAG nanoparticles synthesized by soft chemistry 58 (about 54%), which may contain surface defects preventing a high QE. The color coordinates of the Ce3+-doped YAG-Al2O3 nanoceramic emission under a 465 nm LED excitation show a linear relationship with the thickness and the Ce3+ doping concentration (Fig. 4c). Interestingly, the color coordinates for a 0.1% doping concentration and a 1.1 mm thickness are located at (x = 0.30 and y = 0.34), which is close to the white-light region. The efficiency, color temperature, and color rendering index of this white-light LED under a driving current of 60 mA are 108 lm/W, 7040 K, and 57.8, respectively. In comparison with commercial fluorescent YAG:Ce3+ materials, the present YAG-Al2O3 transparent ceramics show great advantages such as low optical loss and high thermal stability.
The thermal conductivity of the small YAG-Al2O3 composite ceramic disks (4 mm in diameter, ~0.5-1 mm thickness) was successfully measured at different temperatures, as well as that of YAG single crystal and YAG transparent ceramic reference samples with similar size and shape (Supplementary Fig. 11). The measured thermal conductivity at room temperature of the YAG-Al2O3 composite ceramic is 4.2 W/m/K. For applications such as phosphor converters for high-power white-light LEDs and laser diodes, this thermal conductivity is higher than that of currently used polymer and glass-ceramic materials (Supplementary Fig. 11) 60,61 . In comparison with the commercial YAG single crystal and YAG transparent ceramic materials (9.8 W/m/K and 9.6 W/m/K, respectively), the thermal conductivity of the YAG-Al2O3 composite nanoceramic is lower owing to the presence of numerous nano-boundaries inducing heavy phonon scattering, as commonly observed in biphasic and nanostructured thermoelectric ceramics 62,63 . Nevertheless, the YAG-Al2O3 nanocomposite ceramic shows a slow thermal conductivity decrease versus temperature, contrary to the rapid decrease observed in the YAG single crystal and YAG ceramic materials (Supplementary Fig. 11). The temperature dependence of the thermal conductivities of the YAG single crystal and YAG ceramic materials roughly obeys the 1/T law, as a result of the dominant phonon-phonon scattering effect. In the YAG-Al2O3 nanocomposite, phonon scattering by the numerous nano-boundaries becomes much stronger and the consequent thermal conductivity becomes much less temperature dependent (the interfacial thermal resistance is almost temperature independent above room temperature 64 ). Even though the interfacial thermal resistance at the nano-boundaries of YAG-Al2O3 decreases the thermal conductivity, this effect can be considerably compensated by the presence of the secondary Al2O3 phase, which has a much higher thermal conductivity (e.g., 10 W/m/K at 500°C) than YAG 65 . Moreover, it should be noted that at 500°C, the YAG-Al2O3 nanoceramic even presents a similar thermal conductivity (3.7 W/m/K) to the YAG ceramic (3.6 W/m/K), and remains only slightly lower than the YAG single crystal value (4.7 W/m/K). The thermal conductivity of the YAG-Al2O3 nanoceramic at high temperature could be beneficial for future high-power applications, which may be foreseen given the high thermal stability of the YAG-Al2O3 ceramic beads. Therefore, we anticipate that the YAG-Al2O3 ceramics will drive the development of technologically relevant optical and photonic materials.
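A simple way to rationalize both the lower magnitude and the weaker temperature dependence of the nanoceramic conductivity is a series-resistance estimate that combines a 1/T bulk term with a temperature-independent boundary (Kapitza) resistance. The sketch below uses illustrative values only: the bulk prefactor is anchored to the single-crystal room-temperature value quoted above, while the boundary resistance and grain size are assumed numbers, not fitted parameters.

```python
# Series-resistance estimate for a nanograined solid:
#   1/k_eff(T) = 1/k_bulk(T) + R_int / d
# k_bulk(T) ~ A/T models phonon-phonon scattering; R_int is an assumed, roughly
# temperature-independent boundary resistance; d is the grain size.
A     = 9.8 * 300.0    # W/m, so that k_bulk(300 K) ~ 9.8 W/m/K (single-crystal value in the text)
R_int = 4.0e-9         # m^2 K / W, assumed boundary (Kapitza) resistance
d     = 30e-9          # m, grain size of the order of the TEM value

for T in (300, 500, 800):                      # temperatures in kelvin
    k_bulk = A / T
    k_eff = 1.0 / (1.0 / k_bulk + R_int / d)
    print(T, round(k_bulk, 1), round(k_eff, 1))
# Between 300 K and 800 K the bulk term drops by ~2.7x while k_eff drops by only ~1.7x,
# reproducing the flatter temperature dependence described above.
```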
In conclusion, the synthesis of fully dense YAG-Al2O3 composite nanoceramics was performed by pressureless and complete crystallization from a 74 mol% Al2O3-26 mol% Y2O3 glass. The size of both the YAG and γ-Al2O3 nanocrystals is quite homogeneous and can be tailored as a function of the temperature and duration of the single crystallization heat treatment. At high temperature, both in situ XRD and HRSTEM-HAADF evidence crystal growth via a coalescence effect. As a result of the nanometer-scale microstructure, these YAG-Al2O3 ceramics present transparency from the visible up to the infrared (6 μm) region. The nanostructure of the YAG-Al2O3 composite ceramics also induces strong mechanical properties (281 GPa elastic modulus and 23.6 GPa hardness). Moreover, under Ce3+ doping, the internal quantum efficiency, 87.5%, reaches the level of commercial YAG:Ce3+ single crystals. Taking into account the reported mechanical and optical properties, the thermal stability of the material and the simple fabrication process, these YAG-Al2O3 ceramics are believed to be promising candidates for wide optical applications such as gemstones, lenses, scintillators and phosphor converters for high-power white-light LEDs and laser diodes.
Methods
Glass synthesis and crystallization. Commercial oxide powders (Al2O3, Y2O3 and CeO2, Sinopharm Chemical Reagent Co. Ltd, 99.99% purity) were used as raw materials. Prior to synthesis, all precursors were heated at 800°C in a muffle furnace for at least 2 h to remove adsorbed water. After weighing, the 74 mol% Al2O3-26 mol% Y2O3 (AY26) powder mixture was homogeneously mixed using wet ball milling in ethanol, and pressed into pellets. Bulk samples of ~60-200 mg were then levitated using an O2 flow and melted by a CO2 laser at ~2000°C 40,41,66 . The sample was kept in the molten state for about 10-20 s to ensure homogeneity. Turning off the laser then induced rapid cooling (~300°C/s) and led to glass beads with a diameter of ~2-5 mm (Supplementary Fig. 3). The glass beads were subsequently polished into disks (~1 mm thickness) and fully crystallized into transparent ceramics by a single crystallization heat treatment in an open-air atmosphere muffle furnace using a temperature between 950 and 1100°C. Al2O3-Y2O3 glasses with various compositions ranging from 24 to 37.5 mol% of Y2O3 (37.5% Y2O3 corresponds to stoichiometric YAG) were also synthesized by aerodynamic levitation coupled to laser heating, and further investigated during this work. The 74 mol% Al2O3-26 mol% Y2O3 glass composition clearly leads to ceramics with the highest transparency. Appropriate glass crystallization temperatures were determined from differential scanning calorimetry (Setaram MULTI HTC 1600 instrument) measurements performed at a heating rate of 10 K/min, using argon as a purging gas and with alumina pans as sample holders.
Phase identification and microstructure observation. Laboratory X-ray powder diffraction (XRD) measurements were performed using a Bragg-Brentano D8 Advance Bruker laboratory diffractometer (Cu Kα radiation) equipped with a LynxEye XE detector. Data were collected from 15 to 130° (2θ) at room temperature with a 0.02° step size and an acquisition time of 10 s per step. In situ X-ray powder diffraction measurements were performed on a similar diffractometer equipped with a linear Vantec detector. The AY26 glass powder was then placed on a platinum ribbon in an HTK16 Anton Paar chamber. Diffractograms were collected between 20 and 60° (2θ) with a 0.024° step size from room temperature up to 1400°C. Transmission electron microscopy (TEM) was used to observe the nanostructure of the glass and ceramic materials. HRTEM, STEM-HAADF imaging, and EDS analysis were performed on a JEOL ARM200F FEG microscope fitted with an Oxford SDD X-Max 100 TLE 100 mm2 EDS system and equipped with a spherical aberration corrector on the probe. Both the glass and ceramic (synthesized from glass at 1100°C for 2 h) samples were prepared by mechanical polishing using a tripod and inlaid diamond discs down to a 50 µm thickness. Observable foils were obtained by subsequent argon ion milling (PIPS). The ceramic sample was specifically prepared for STEM observations: during the mechanical polishing step realized with a tripod and inlaid diamond disc, a tilt was imposed to form a bevel on one side of the sample. Very thin areas were thus obtained, which made it possible to minimize the final argon ion milling (PIPS) step.
Mechanical, optical, and thermal properties measurements. The transmittance of the glass and ceramic samples was recorded using a UV-VIS-NIR spectrophotometer (PERSEE TU-1901, Beijing, China) in the 190-2500 nm wavelength range. In the 2500-8000 nm range, the transmittance of the samples was determined by a Fourier-transform infrared spectrometer (Shimadzu FTIR 8200, Kyoto, Japan). The hardness and Young's modulus of the samples were measured using a nanoindenter (MTS-XP) with a maximum force and displacement of 600 mN and 1400 nm, respectively. At least eight indentations per sample were carried out using a Berkovich-type diamond indenter. The indentation morphologies were observed with an atomic force microscope (AFM) (Nanoscope III, Digital Instruments, Woodbury, NY). The indenter load and displacement were continuously and simultaneously recorded during loading and unloading in the indentation process. The hardness and Young's modulus were determined from the data acquired during loading based on the Oliver-Pharr model 67 . Emission and excitation spectra (280-800 nm) were recorded on a Hitachi F-7000 spectrofluorometer equipped with a Hitachi U-4100 spectrophotometer. Internal quantum efficiency values were measured with a Photal QE-2100 spectrofluorometer. On the basis of this setup, the internal QE was calculated by the following equation: η = (number of photons emitted) / (number of photons absorbed).
| 5,947.2 | 2018-03-21T00:00:00.000 | [ "Materials Science" ] |
Quantitative identification of dynamical transitions in a semiconductor laser with optical feedback
Identifying transitions to complex dynamical regimes is a fundamental open problem with many practical applications. Semiconductor lasers with optical feedback are excellent testbeds for studying such transitions, as they can generate a rich variety of output signals. Here we apply three analysis tools to quantify various aspects of the dynamical transitions that occur as the laser pump current increases. These tools allow us to quantitatively detect the onset of two different regimes, low-frequency fluctuations and coherence collapse, and can be used for identifying the operating conditions that result in specific dynamical properties of the laser output. These tools can also be valuable for analyzing regime transitions in other complex systems.
Video
In the video included as supplementary material we observe, as the pump current is ramped, the gradual transitions studied in the main text: from noisy intensity fluctuations, to a regime where occasionally rare intensity dropouts occur, which then become more regular and frequent in the low-frequency fluctuations region, and finally, with a further increase of the laser current, the regular and well-defined intensity dropouts transform into fast and highly irregular intensity fluctuations. As discussed in the main text, in the LFF regime the intensity dropouts give the intensity probability density function (pdf) a pronounced tail at low intensities, whereas in the CC regime (I/Ith = 1.2) the intensity pdf has a well-defined cutoff. While at low current the pdf is Gaussian, in the CC regime the pdf is not Gaussian. We note that for I/Ith = 1.02 the pdf displays a nontrivial structure which is due to the step-like recovery that occurs after a dropout, as shown in Fig. 3. These observations are consistent with previous findings [1,2,3,4,5].
Second set of experimental observations
Here we present experiments performed with a different laser and different feedback conditions compared to those in the main text, and we find qualitatively very similar results. The laser is a 685 nm semiconductor laser (Opnext HL6750MG) with a solitary threshold current of Ith = 28.29 mA. The feedback-induced threshold reduction and the feedback delay time are 15.42% and 5.3 ns respectively. Figure 4 displays the standard deviation, σ, of the intensity time-series vs. the laser pump current, for an oscilloscope sampling frequency of 5 GSa/s, and a very good agreement is seen with Fig. 3 in the main text. Figure 5(a) displays the number of events vs. the detection threshold, and here again a good qualitative agreement is found with Fig. 4(a) of the main text. It is worthwhile to note that the plateau also exists with a different detection method, as shown in Fig. 5(c): instead of normalizing the time series to a standard deviation equal to one, we normalize such that the maximum and minimum are equal to one and zero respectively. We note that these two methods differ in the sense that with the second method any threshold value within (0,1) will detect a certain number of events, while with the first method (used in the main text), the interval of detection thresholds depends on the pump current [as shown in Fig. 5(b) and Fig. 4(b) of the main text]. Nevertheless, with this alternative normalization one can also observe the existence of the plateau. Figure 6 displays the six OP probabilities vs. the pump current. We note a variation very similar to that shown in Fig. 5(a) in the main text.
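For readers who want to reproduce the event counting, the following is a minimal sketch of threshold-crossing dropout detection with the two normalizations discussed above (unit standard deviation versus min-max rescaling). It is a toy re-implementation run on synthetic data, not the analysis code used for the figures.

```python
import numpy as np

def count_dropouts(x, thresholds, mode="std"):
    """Count dropout events as downward crossings of a detection threshold.
    mode="std": zero mean, unit standard deviation (main-text method);
    mode="minmax": rescale so that min = 0 and max = 1 (alternative method of Fig. 5(c))."""
    x = np.asarray(x, dtype=float)
    if mode == "std":
        x = (x - x.mean()) / x.std()
    else:
        x = (x - x.min()) / (x.max() - x.min())
    counts = []
    for th in thresholds:
        below = x < th
        counts.append(int(np.sum(~below[:-1] & below[1:])))  # above -> below transitions
    return counts

# Example: a noisy trace with three artificial dropouts inserted by hand.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 50_000)
for start in (10_000, 25_000, 40_000):
    trace[start:start + 50] -= 1.0
print(count_dropouts(trace, thresholds=[-1.5, -2.5, -3.5], mode="std"))  # -> [3, 3, 3]
```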
MODEL
In order to further demonstrate the robustness of the experimental findings presented in the main text, we performed simulations of the Lang and Kobayashi (LK) rate equations [6] for the slowly varying complex electric field E and the carrier density N. The model equations are:
$$\frac{dE}{dt} = \frac{1+i\alpha}{2\tau_p}\,(G-1)\,E(t) + \eta\,E(t-\tau)\,e^{-i\omega_0\tau} + \sqrt{\beta_{sp}}\,\xi(t),$$
$$\frac{dN}{dt} = \frac{1}{\tau_N}\left(\mu - N - G\,|E|^2\right),$$
where α is the linewidth enhancement factor, τp and τN are the photon and carrier lifetimes respectively, G = N/(1 + ε|E|²) is the optical gain (with ε a saturation coefficient), µ is the pump current parameter (which is equal to the experimental control parameter, the normalized pump current, only at the solitary threshold [12], where both are equal to 1), η is the feedback coupling coefficient, τ is the feedback delay time, ω0 is the solitary laser frequency, ω0τ is the feedback phase, βsp is the noise strength, representing spontaneous emission, and ξ is a Gaussian-distributed noise term with zero mean and unit variance. The model equations were simulated with typical parameters as in [11] [τp = 0.00167 ns, τN = 1 ns, α = 4.0, ε = 0.01, η = 10 ns−1, τ = 5 ns, and βsp = 5 × 10−5 ns−1].
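A minimal numerical sketch of the LK equations in the form given above is shown below, using a simple Euler-Maruyama scheme. The integration step, initial conditions, feedback phase and the exact way the spontaneous-emission noise enters are assumptions of this sketch, not details taken from the original simulations.

```python
import numpy as np

# Parameters follow the text; omega0*tau is an assumed feedback phase.
alpha, tau_p, tau_N = 4.0, 0.00167, 1.0      # linewidth factor, photon/carrier lifetimes (ns)
eps, eta, tau       = 0.01, 10.0, 5.0        # gain saturation, feedback rate (1/ns), delay (ns)
beta_sp, mu         = 5e-5, 1.02             # noise strength (1/ns), pump parameter
omega0_tau          = 0.0                    # feedback phase (assumption)

dt, T = 1e-3, 50.0                           # time step and total time (ns)
n, nd = int(T / dt), int(tau / dt)           # number of steps, delay in steps
kappa = 1.0 / (2.0 * tau_p)                  # field decay rate implied by the equation above

E = np.zeros(n, dtype=complex); E[:nd] = 1e-3   # small seed field as history
N = np.ones(n)
rng = np.random.default_rng(1)

for i in range(nd, n - 1):
    G = N[i] / (1.0 + eps * abs(E[i])**2)
    noise = np.sqrt(beta_sp * dt) * (rng.normal() + 1j * rng.normal())
    E[i+1] = E[i] + dt * (kappa * (1 + 1j * alpha) * (G - 1.0) * E[i]
                          + eta * E[i - nd] * np.exp(-1j * omega0_tau)) + noise
    N[i+1] = N[i] + dt * (mu - N[i] - G * abs(E[i])**2) / tau_N

intensity = np.abs(E)**2   # filter with a ~5 ns moving average before comparing with experiments
```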
Numerical results
In the framework of the LK model, it has been shown that the LFF intensity dropouts can be either transient or sustained [7,8], with the probability of observing sustained LFFs or stable emission depending on the relative widths of the windows where these regimes occur. For typical parameters, however, the LFFs are a transient dynamics whose duration increases with the pump current parameter [9,10]. Typical intensity time-series are shown in Fig. 7.
To compare with the experimental observations we need to generate a sufficiently large number of dropouts; therefore, for each value of the pump current parameter, 20 trajectories of 50 µs were generated from random initial conditions.
In Fig. 8 we show that, taken together, the results of the analysis of the simulated data are in very good qualitative agreement with the experimental observations: the variations of the standard deviation, Fig. 8(a), of the number of threshold-crossing events, Fig. 8(b), and of the OP probabilities, Fig. 8(c), with the pump current parameter are very similar to those encountered in the experimental data. The comparison between the shapes of the experimental and simulated σ curves, shown in Fig. 9, allowed us to determine the five values of the pump current parameter that correspond to the experimental pump currents analyzed in the main text. For those values, as shown in Fig. 8(b), the variation of the number of events is very similar to that seen in the experiments.
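For completeness, the ordinal-pattern (OP) probabilities can be computed with a few lines of code following the Bandt-Pompe construction; with word length D = 3 there are 3! = 6 patterns, matching the six OP probabilities discussed here. Whether the patterns are built from the raw intensity samples or from the sequence of inter-dropout intervals follows the main text; the sketch below only illustrates the mechanics on white noise, for which all six probabilities should be close to 1/6.

```python
import numpy as np
from itertools import permutations

def ordinal_pattern_probs(x, D=3):
    """Probabilities of the D! ordinal (Bandt-Pompe) patterns of a 1-D series.
    Overlapping windows with unit lag are an assumption of this sketch."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(D))}
    for i in range(len(x) - D + 1):
        counts[tuple(np.argsort(x[i:i + D]))] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

rng = np.random.default_rng(2)
probs = ordinal_pattern_probs(rng.normal(size=100_000), D=3)
for pattern, prob in sorted(probs.items()):
    print(pattern, round(prob, 3))   # each close to 1/6 ~ 0.167 for uncorrelated noise
```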
However, it is worthwhile to note that the agreement is only qualitative: the simulated dropouts are less deep than the experimental ones [in Fig. 8(b) the lowest detection threshold is -4σ]. A second discrepancy is seen in Fig. 9, where the experimental and simulated σ curves agree qualitatively well only if the horizontal axes are shifted (i.e., µ = 1 is shifted with respect to I/Ith = 1) and the vertical axes are re-scaled. The origin of these discrepancies could be the fact that in the simulations the LFFs are transient; the simple filtering used (a moving average over a 5 ns time window) might also play a role. We remark that our goal here is only to demonstrate the robustness of our findings through a comparison with model simulations.
To conclude this comparison, in Fig. 10 we present the equivalent of Fig. 1, computed from the simulated time-series. Here again we observe a good qualitative agreement between the model simulations and the experimental observations. Figure 10: As Fig. 1, but computed from simulated data.
| 1,666 | 2016-11-18T00:00:00.000 | [ "Physics" ] |
Virtual Students Mobility and Exchange Programs: Case Study of Malaysia
This paper reports on a case study to comprehend some of the issues and challenges faced when organising virtual student mobility and exchange programs at university level. The main objectives are to understand what the expected issues and challenges are when students and lecturers organise and participate in virtual student exchange programs. The reason why university mobility and exchange programs need to be conducted virtually was initially due to the global pandemic where lockdown measures were imposed in many countries, causing strict control of movements of citizens. But, as the world moved out of the global pandemic stage, the benefits of virtual student exchange programs are now becoming more apparent. Four key elements were investigated in this case study including technological constraints, students and lecturers’ readiness, language barriers and also cultural differences. A qualitative approach was used to carry out this case study. Four respondents were intensively interviewed in focus group sessions to discuss the four elements above. In sum, the research participants do not agree that virtual student exchange and mobility programs will be as effective as physical ones.
INTRODUCTION
The COVID-19 global pandemic was a threatening health crisis around the world that began at the end of 2019. Our daily lives have undergone significant changes following this global pandemic and the higher education system had to be readjusted to comply with new norms. This global pandemic had indeed brought a huge influence on the economy as well as the fields of education around the world (Akil & Adnan, 2022, 2023; Prawoto et al., 2020). At that exact time, Malaysia followed suit and decided to close schools, colleges, and universities to prevent the spread of the virus. Consequently, student mobility came to a standstill and active global exchanges were severely affected. Even at this present moment in time, mobility and exchange programs are picking up slowly, at least in Malaysia.
All the same, in recent years, there has been an increase in student mobility in higher education. This is due to a number of factors, including the increasing cost of tuition, the desire for students to gain international experience, and the wider availability of online courses (Adnan, 2018). The increasing cost of tuition is one of the main reasons why students are choosing to study abroad. In the United States, for example, tuition prices have been rising steadily for years. This has made it difficult for many students to afford a college education. As a result, many students are looking for ways to reduce the cost of their education (Rahmat et al., 2019). One way to do this is by studying abroad. By studying in another country where higher education is cheaper, for instance, university and college students can take advantage of lower tuition rates.
The desire for students to gain international experience is another factor that has contributed to the increase in student mobility. In today's global economy, employers are looking for employees who have experience working in different cultures (Adnan et al., 2021). Studying abroad is a great way for students to gain this type of experience. Additionally, by studying abroad, students can learn about new cultures and customs. Finally, the availability of online courses has made it easier for students to gain study abroad 'experiences', albeit virtually. In the past, students who wanted to study abroad had to take courses at a local university. However, with the advent of online learning, students can now take courses from anywhere in the world. This has made it easier for students to get the education they need while still being able to save money on the expensive fees associated with physical attendance. Nevertheless, partly due to the global pandemic, according to Krishnamurthy (2020) new norms require new and unique solutions. Face-to-face learning activities, including student exchange programs, cannot be easily and openly implemented even after this global pandemic has been put under control. As such, learning activities have had to be switched from physical to online mode using platforms such as Microsoft Teams, Google Meet, Zoom, Google Classroom, Telegram, WhatsApp, and others, so that students will not drop out and can still follow the teaching and learning process (Adnan, 2020a; Karim et al., 2020). The online teaching environment continued to receive community attention after its implementation in the context of education in Malaysia, when the Ministry of Education Malaysia (MOE) announced the closure of schools in early 2020.
The rise of 'virtual' student mobility and education exchange programs
Higher education is currently undergoing significant transformation as a result of the world's rapid pace of change, and virtual student mobility and exchange programs are at the forefront of this change. These forward-thinking educational strategies have the potential to not only shape the future of higher education in Malaysia but also in countries all over the world. The idea of mobility and exchange among students has always been appealing. Travel has traditionally been required in order to fulfil the goals of experiencing new cultures, expanding one's understanding of the world, and establishing relationships with people from other countries. Nevertheless, the landscape of higher education is undergoing shifts as a result of developments in technology and the arrival of the digital era. Numerous benefits can be gained from participating in virtual student mobility and exchange programs, which are made possible by high-speed internet and sophisticated online learning platforms.
This shift is being driven in large part by accessibility. Not all students in Malaysia, like those in many other countries, have the financial resources or the opportunities to travel outside of the country for the purpose of furthering their education. These geographical barriers are overcome by virtual programs, which give access to a wider variety of students regardless of where they are located or what their socioeconomic background is. Another essential component is value for money. Traditional exchange programs can be quite pricey because they require participants to pay for their travel, lodging, and other living expenses while they are away from home. Virtual programs, on the other hand, significantly lessen these financial burdens, thereby enabling a greater number of people to pursue international education opportunities.
The rise in popularity of virtual programs is directly attributable to the development of various cutting-edge technologies. The use of high-quality video conferencing, digital laboratories, and interactive online platforms recreates the atmosphere of a traditional classroom setting. They may not be able to fully replicate the allure of face-to-face contact, but they come remarkably close to doing so, ensuring that the educational experience is rich and fulfilling. Resilience can also be demonstrated by virtual programs. They have shown themselves to be a reliable alternative during times of crisis, such as the COVID-19 pandemic. Virtual education can continue uninterrupted, whereas in-person interactions might be interrupted, jeopardising the continuity of the learning process. In addition, the positive effects that virtual programs have on the environment cannot be overlooked. Reduced travel results in a smaller carbon footprint, which is consistent with the objectives of sustainability, a concern that is shared by institutions of higher learning all over the world.
Figure 1: Benefits of virtual students' mobility and exchange programs (accessibility, cost-effectiveness, flexibility, diversity, resilience).
Our present study makes a significant contribution to a better understanding of the rapidly developing field of virtual student mobility and exchange programs, which are at the vanguard of reshaping the future of higher education in Malaysia as well as all over the world. Accessibility, cost-effectiveness, flexibility, diversity, and resilience are some of the benefits that they offer (see Figure 1 above). At present, they may not be able to completely replace in-person interactions, but they provide a compelling alternative that helps make international education more accessible and environmentally responsible. Students will be better prepared to thrive in a globally interconnected world if they participate in virtual student mobility programs as the digital age is embraced.
As these programs are poised to play a pivotal role in shaping the future of higher education, an empirical research project was carried out to explore these programs with the following objectives: first, to recognise the issues and challenges faced by lecturers, tutors, and instructors in overseeing virtual student exchanges; and second, to find out the issues and challenges faced by students throughout their involvement in virtual student exchanges. Based on the preceding paragraphs and these objectives, two research questions were operationalised as below.
RQ1. What are some of the issues and challenges faced by lecturers, tutors, and instructors in conducting online / virtual student exchange programs?
RQ2. What are some of the issues and challenges faced by students during their participation in online / virtual student exchange programs?
LITERATURE REVIEW
In this section, relevant research literature is presented and reviewed to better contextualise the topic under study and to frame this empirical case study in a more concrete manner.
Internationalisation of higher education
The views regarding the internationalization of higher education encompass two perspectives that differ but generally complement each other. From a macro perspective, university internationalization is a vital agenda of the Malaysian Ministry of Higher Education's Strategic Plan in an effort to realize the mission and vision of the national education system towards internationalization, where the diversity of university management practices in this country is in line with the National Higher Education Strategic Plan (PSPTN) (Fia et al., 2022). Therefore, the National Higher Education Strategic Plan is seen as the foundation of Malaysia's national higher education transformation, containing various ideas, initiatives, improvements, and international policies for university management. In that context, each local university is responsible for supporting this higher education mission towards internationalization through building strong relationships at the global level. At the same time, each institution of higher learning needs to fully support the international activities of academics, researchers, and students, create a conducive and international learning environment for students, and also increase the promotion and recruitment of world-class academic scholars.
However, a problem quickly arises. Is the country's commitment to the internationalization of higher education and the ultimate goal of achieving the target of being recognized as a centre of excellence in knowledge on the world stage really achievable in the first place (Adnan & Smith, 2001)? From a micro perspective, the past decade has seen university 'citizens' heralded to play a more prominent role to effectively utilize the strength of physical resources and human capital as well as create ownership of intellectual property. This is in line with the policy of internationalization of the country's higher education system, which places the strategic, comprehensive, and integrated implementation by various parties in an effort to highlight the visibility of Malaysia as a hub of academic excellence (Gunn & Mintrom, 2022). Every level of a Malaysian university or college is made to undergo a transformation of thinking and working practices in accordance with the work culture of universities with international status. But is this initiative supported by all members of the university community who are capable of propelling the country's higher education sector towards international credibility? There are other hurdles too, for instance the failure to successfully obtain external funds and research grants, and the failure to spark the culture of research innovation that will produce international publications. Therefore, 'capability' must be understood as an effort to ensure the capacity of Malaysian academics who are not only qualified but also experienced to fill scholarly positions and underpin the human resource development of a university (Adnan, 2020b).
Student mobility trends during the past pandemic and present endemic periods
During the outbreak of the global pandemic, many countries implemented lockdown measures to curb and stop the spread of the COVID-19 virus (Atalan, 2020). In Malaysia, a similar measure was implemented, and it is known as the Movement Control Order or MCO. Back then, such measures were necessary because large proportions of the population around the world were still in the process of being vaccinated. Therefore, with such lockdowns in place, student mobility effectively came to a standstill (Caballini et al., 2021). Even the core of physical classroom learning was moved online. As a consequence, non-core and non-critical activities such as student exchange programs, which are also deemed non-essential, were either totally stopped or moved online altogether. On the other hand, with the global-scale vaccination efforts by multiple organizations or brands such as Pfizer BioNTech, AstraZeneca, Johnson & Johnson, Sinovac, Moderna and many more, cases slowly went down afterwards, and the pandemic was brought under control. Tight control measures were loosened up and the crossing of boundaries was allowed again as we marched towards the 'endemic stage' of the virus in Malaysia.
At this present time, student mobility is gaining traction once again, as it did before the pandemic happened, with some extra precautions and hygienic practices like wearing masks and doing self-tests, especially when travelling on public transport to international destinations.
Virtual mobility / exchange programs
Virtual mobility or exchanges are technology-based, classroom-to-classroom programs that connect students located in different geographical locations to develop intercultural understanding and for students to engage in project-based learning (Mangione & Cannella, 2021). Many of these exchanges are designed and facilitated by course instructors for students to establish dialogue and collaborate on various tasks or projects. Virtual exchanges vary in length, as some last for a few weeks and others for a semester or longer. A typical student exchange program, in which students are supposed to visit the host university in a different country, has been severely impacted by the global pandemic. But even though it dampens physical travelling, this actually allows for better opportunities in terms of time and cost saving in distance education. Still, if a student exchange program is moved and organised online, will this program bring the same positive effects intended, especially on the learning experience and experiential outcome? As we know, learning and cultural exchange do not happen only through formal education sessions. Most of it occurs during informal conversations between participants from different countries.
There are, in fact, many ways to learn about other cultures and to exchange one's own culture with others. One way is to travel and live in another country for an extended period of time, which is what typically happens during a 'gap year' or semester. This allows for a more immersive experience, and one can learn a great deal about the culture and the people. As previously mentioned, another good way to learn about other cultures is to take part in educational mobility or exchange programs. These can be either formal or informal. Formal programs are often sponsored by organizations or universities, and they usually involve structured activities and learning opportunities. Informal programs, on the other hand, are more likely to be unplanned and spontaneous. They often involve simply interacting with people from other cultures on a day-to-day basis.
Of course, both formal and informal mobility and cultural exchange experiences have their advantages and disadvantages. Formal programs can be very beneficial because they provide a structured environment in which to learn about another culture. However, they can also be quite expensive, and they may not always be available to everyone. Informal exchanges and cultural mobility experiences, on the other hand, are generally more affordable and accessible. They can also be more authentic since they often involve simply living in and interacting with another culture on a daily basis. However, they can also be more chaotic and less organized than formal programs. What is most important is that a student must take advantage of any opportunity that presents itself to gain new knowledge and understanding. Whether she or he chooses to participate in a formal mobility or exchange program or simply to interact with people from other cultures on a daily basis, the important thing is to always keep an open mind together with a willingness to learn.
Issues and challenges faced by lecturers and students
Without a shadow of a doubt, there are many challenges and issues if student mobility or exchange programs are held online. For example, the study by Amiruddin et al. (2021) found that students have a high desire for actual classroom-based learning. Yet the online learning experience is affected as a result of various obstacles (Abd Karim et al., 2020). Among the obstacles are problems with device usage, poor Internet connections and on-task learning time. Another study by Baber (2021) showed that students are generally ready and have good motivation to accept learning using online modes. However, the challenges and limitations faced during implementation should be addressed to increase the effectiveness of the learning process. These challenges and limitations include students who are still unable to adapt to the notion of online learning as a whole, confusion in terms of delivery, and a lack of learning facilities.
With specific reference to mobility and/or exchange programs, due to the different nationalities and languages used, there will be a certain degree of language barrier between the different parties. Hand gestures and facial expressions are often the best methods when language is not helpful; however, these non-verbal cues in communication might not be as effective on the computer screen as compared to natural, real-life exchanges that happen physically. This might also inevitably lead to the possibility of miscommunication, especially when there are cultural differences across different nationalities in conducting work, such as punctuality, active learning, as well as whether the learning process is student-centred or teacher-centred (Zamari & Adnan, 2011).
METHODOLOGY
This empirical research project explored the topic of student mobility and exchange programs, in particular the opportunities and challenges linked to the past global pandemic situation. A qualitative approach was employed in the research to examine and understand the perspectives of students and lecturers towards the organisation of virtual exchange programs. Focus group sessions were held with the selected research participants. A total of four research participants were selected, two university students and two university lecturers respectively. The criterion for the selection of students was that they had joined a short-term international students' exchange program from the start to the middle of 2022, while the criterion for lecturers was that they had acted as international facilitators or tutors for a short-term virtual international students' exchange program in the last few years before and after the global pandemic gripped the whole world.
The elements and constructs that were discussed during the focus group discussion sessions include the perspective of technological constraints, which encompasses the tools used for virtual meetings, internet connection stability and readiness in terms of mobile hardware. Other elements explored were the students' and lecturers' readiness to face their virtual students' exchange programs, expected language barriers, the effectiveness of online communication, and finally the cultural differences between all the parties involved in the virtual mobility or exchange program.
To organize the data collected from the field, descriptive notes were used and filed under different themes and sub-themes by reading them carefully. The thematic classification of data was done on the computer directly or on a broadsheet paper as per convenience (Khirfan et al., 2020). Pratt et al. (2020) deliberated the issue of qualitative coding for thematic analysis in qualitative research. They shared that qualitative coding comprises all the techniques for reliably classifying the social data on which very little or no order has been previously imposed by researchers. When data are classified by using existing theoretical models or pre-determined categories, the problems of analysis are mainly mechanical. However, when the social data have to be classified as per the concepts or categories or themes or sub-themes that emerged in the process of investigation, the problems are very complex. Therefore, we must be careful in classifying qualitative data, as it is necessary to develop an explicit set of instructions for ordering the data to derive meaningful generalizations. The main steps in qualitative coding, as described by the authors, are as follows.
First, clarify what it is that is desired from the materials (as per the purpose of answering the research questions). Next, study the completed schedules or notes of interviews or participant observations very carefully. Then, work out the classes or possible groupings (using concepts, categories, and themes) and the indicators of the classes or groupings. And finally, fit the classes or groupings to the data, and code all the answers. Braun and Clarke (2021) argue that qualitative analysis is really the search for patterns in data and ideas that help explain the existence of those patterns. It starts even before a researcher goes to the field and continues throughout the research efforts. Last but not least, the researcher has to employ the emic perspective and document folk analyses, but she or he must also equally retain the etic perspective in qualitative data collection and analyses.
DATA PRESENTATION AND ANALYSIS
As mentioned in the last section, a total of four elements and constructs were explored and studied during the focus group discussion sessions. The four elements are: (1) Technological constraints (tools, internet connection, software, and hardware, etc.); (2) Tutors' and students' readiness in using virtual platforms for the virtual mobility/exchange program; (3) Language barriers and problems related to language competency; (4) Cultural differences and personality differences which were apparent before, during and after the program. The data collected are presented and analysed below as per typical qualitative data organisation and management. Data analysis means the categorizing, ordering, manipulating, and summarizing of data to obtain answers to research questions (Esteva et al., 2021). The purpose of analysis is to reduce data to an intelligible and interpretable form so that the relations of research problems can be studied and tested. Lastly, the interpretation process takes the results of analysis, makes inferences pertinent to the research relations studied, and draws conclusions about these relations.
Technological constraints
Based on the interview sessions done with the participants, for the first element, which is technological constraints, all of them voiced that "there will definitely be a technical glitch" occurring during the program. This is because such errors are very common when online learning is conducted over the internet too. This is even worse when the communication is done from two geographical locations that are far away from each other and do not share the same network provider. A slow and unstable internet connection will be a challenge. Frustrations might happen and the session will be delayed and no longer enjoyable if such technical errors keep on happening. One lecturer shared her justification for this from when she was teaching an online class as part of a virtual mobility or virtual exchange program: students tend to lose focus and there is a lack of two-way communication during online teaching. She adds, "The engagement is not there between me and the students during the session. Sometimes, I'm not sure if the students are present in front of the computer and still listening and focusing or not." Hence, if future student exchange programs were held virtually, for many it might turn out to be just an online forum or online lecture class only, which is not much different compared to watching YouTube or a boring online conference.
Indeed, technological problems are a common occurrence during virtual student exchange programs. While most students are able to overcome these issues with the help of their peers and program administrators, some students may find themselves struggling to keep up with the pace of the program or feeling isolated from the other participants. Another common technological problem faced by students during virtual exchange programs is a lack of access to reliable internet. This can be a particular issue for students who are based in rural areas or who do not have access to a stable broadband connection. While many program administrators will provide participants with a list of recommended internet providers, it is ultimately up to the student to ensure that they have a reliable connection. If a student is struggling to access the internet, they may need to consider alternative ways of participating, such as using their mobile data allowance or finding a public Wi-Fi hotspot. In some cases, it may also be possible to arrange for a temporary broadband connection to be installed at the student's accommodation.
Another common technological challenge is compatibility issues between devices and software programs. For example, some students may find that they are unable to join video calls using their laptop because they do not have the correct software installed. In other cases, students may be using an older version of a software program which is not compatible with the latest version used by the other participants. If students struggle to resolve these issues, they should contact the program administrator for assistance, although these issues might also be difficult to resolve quickly. Finally, another common technological difficulty faced by students during virtual exchange programs is a lack of understanding about how to use certain technologies. For example, some students may be unfamiliar with video conferencing software and therefore struggle to join or participate in online meetings. In other cases, students may be unsure how to share files or documents electronically. This being said, the research participants believe that these types of problems can sometimes be resolved with the help of online tutorials or by asking other participants for assistance.
Tutors' and students' readiness
In terms of readiness to be part of virtual students' exchange and mobility, the research participants generally responded that the 'readiness' is not there either. Other than the tools and hardware issues, the participants believed that they sometimes do not know how to effectively organise and be part of a student exchange program virtually. This is because such programs might require actual hands-on activities, especially when it comes to cultural sharing. However, if such programs are held online, they might amount to just a series of slide presentations or a "dry introduction" to the culture without hands-on activities. The learning and cultural exchange activities will be severely limited to listening and asking questions only.
In truth, virtual or online student exchange programs are a relatively new concept and one that is still evolving. There are a number of reasons why students may not be ready for this type of program, including the fact that they are not used to being in an online environment, they are not used to working with people from other cultures, and they may not have the necessary language skills. According to the participants, one of the main reasons for the lack of readiness during online student exchange programs is that students are not really used to being in an online environment. This can be a problem because they are not used to interacting with people from other cultures and may not be able to communicate effectively. In addition, they may find it difficult to navigate the online environment and may not be able to find the resources they need.
The lack of readiness of students during online student exchange programs can have a number of implications; most of them are related to psychological and emotional issues that cannot be easily ameliorated. First, it can lead to a feeling of isolation and loneliness for the students. They may feel isolated and lonely because they are not used to being in an online environment. Second, it can lead to a feeling of frustration and anger towards the program and the host institution. They may feel frustrated and angry because they are not used to working with people from other cultures. Third, it can lead to a feeling of disappointment and disillusionment with the program and the host institution. This might happen especially when they do not have the necessary language skills. Finally, it can lead to a feeling of anxiety and actual stress for the students. Again, being in an unfamiliar online environment might lead to these negative feelings. For these reasons, the lack of readiness of students during online student exchange programs can have a negative impact on them, the program, and both the home and host institutions.
Language and proficiency barriers
The research participants believe that language barriers and language-related problems are also serious issues that "caused many hiccoughs" during virtual student exchange programs. This is especially so because the language barriers might be worsened by the limited capability to share hand gestures and facial expressions through smartphones, computer tablets, or computer screens. Gadget-mediated communication can make it difficult to pick up on nonverbal cues, such as body language and tone of voice, which can be essential for communication. Beyond the lack of face-to-face interaction, other potential language problems can arise during online and virtual student exchange programs.
Additionally, written communication through short texts and 'instant' messages can be easily misinterpreted, as there is no way to gauge the tone or intention behind the words. This will surely lead to misunderstandings and conflicts in the long run. Another common problem is the use of slang or colloquialisms; these can be difficult to understand for those who are not familiar with them and can often lead to confusion or offence. Additionally, they can change rapidly, making it hard to keep up with the latest trends. Another potential issue is the different levels of proficiency among students. Some may be fluent in the local 'version' of the language being used, while others may only have a basic understanding. This can make it difficult to communicate effectively, as those with a limited understanding may struggle to keep up with conversations or may be unable to express themselves fully. Additionally, those who are more proficient may feel frustrated at having to simplify their speech or slow down their conversation at all times for the benefit of those who are less proficient. Finally, there is the issue of cultural differences. What is considered polite or appropriate in one culture may not be in another. This can lead to misunderstandings or offence, as well as a feeling of isolation or exclusion.
Despite these challenges, there are a number of ways to overcome them. One is to make use of online resources, such as dictionaries or translation tools. These can be helpful for understanding unfamiliar words or phrases. Additionally, there are a number of online forums and chatrooms where students can practice their language skills with others. Finally, it is important to remember that everyone is learning and that mistakes are part of the process. By being patient and tolerant of mistakes, we can create an inclusive and supportive environment where everyone can feel comfortable communicating in a foreign language. That being said, these strategies are clearly not part of the virtual student exchange or mobility programs themselves, and they might only add to the hassle of organising and joining such programs from the outset.
Cultural and personality differences
Another readiness issue that cropped up during online student exchange programs is that students are not used to working with people from other cultures. This can be a problem because they may not be able to understand the culture of the host institution and that of their fellow students in that institution. As a result, they might not be able to work effectively with their counterparts to ensure that the aims and objectives of the virtual student exchange and mobility program are all achieved. Some of the concerns related to cultural and personality differences are as follows.
First and foremost, different cultures have distinct communication styles, which can include varying levels of directness, formality, and non-verbal communication. Understanding and respecting these differences is crucial to effective virtual communication. At the personal level, individuals also have different communication preferences based on their personality traits. Some university students may prefer written communication, while others may favour verbal discussion. Virtual programs should accommodate these variations, though this is easier said than done. Take the issue of dealing with time, for example. Some cultures place a strong emphasis on punctuality and adherence to schedules, while others have a more relaxed approach to time. This can lead to misunderstandings and conflicts in virtual collaborations. In addition, personality traits like conscientiousness and time-management skills can also influence an individual's ability to meet deadlines and engage in virtual activities effectively.
From a different perspective, cultural norms can affect how groups function and make decisions. In some cultures, hierarchy and authority play a significant role, while others emphasize consensus and egalitarianism. As such, in virtual student exchange and mobility programs, personality traits like extraversion and agreeableness can influence an individual's role within a virtual group. Extroverts may be more vocal, while introverts may contribute in quieter ways or not contribute at all. Cultural adaptability can vary, and students from some backgrounds may find it easier to adjust to virtual exchanges. Others may face more significant challenges in adapting to new technologies and virtual environments. That being said, certain personality traits, for instance openness to experience and emotional stability, can positively influence an individual's ability to adapt to virtual programs and cope with challenges.
On a final note, the respondents mentioned that cultural differences might not be such a big issue during virtual exchanges compared to other issues and problems. This is because both parties can align their expectations early, and misunderstandings will be unlikely to happen, especially when the tolerance level is heightened when dealing with delegates of different nationalities. This shows that in virtual student exchange and mobility programs, it is essential to provide cross-cultural training and support from the start to help participants navigate their personal differences effectively. This can include cultural sensitivity training, clear communication guidelines, and strategies for building trust and rapport in virtual settings. Additionally, fostering a supportive and inclusive virtual community can help participants bridge cultural and personality gaps to achieve successful outcomes.
CONCLUSION
Unprecedented events brought by the global pandemic have caused many inconveniences to communities and societies around the world. Many activities, including online learning and even student exchange programs, have had to be pushed to the cloud and carried out online. It is understandable that, in future, lockdown measures might need to be reinstituted when the whole world faces another global health emergency. However, when it comes to virtual experiences, including virtual student exchange and virtual student mobility, the world is more than ready to ensure their success. Indeed, in recent years there has been a growing trend of students participating in online or virtual student exchange programs (Adnan, 2018). These programs provide an opportunity for students to study abroad without having to leave their home country. This type of exchange was becoming increasingly popular among higher education institutions even prior to the global pandemic rearing its ugly head, as it offers a number of benefits for both students and universities.
As the empirical data we collected show, one of the main advantages of virtual student exchange is that it is more affordable than traditional study abroad programs. This is because students do not need to pay for travel expenses or accommodation. Additionally, many universities offer scholarships and financial aid for students who participate in these programs. Another benefit of virtual student exchange is that it is more flexible than traditional programs. Students can choose when and for how long they want to participate in the program. This means that they can study abroad during their break, or even take a semester off from their university to participate in a program. Finally, virtual student exchange programs provide an opportunity for students to gain international experience without having to leave their home country. This is especially beneficial for students who may not be able to travel abroad due to financial or personal reasons. Additionally, these programs allow students to meet and interact with people from all over the world, which can help them develop a global perspective. Overall, virtual student exchange programs offer a number of benefits for both students and universities. These programs are more affordable and flexible than traditional study abroad programs, and they provide an opportunity for students to gain international experience without having to leave their home country.
At the opposite end of the spectrum, the empirical data we collected also suggest that 'actual' student exchange programs can help to create exposure and opportunities for students to leave their home country, enjoy the travelling journey, and experience the world. However, the question being posed is: will online or virtual student exchange programs provide the same experience to students in future? There are some foreseeable challenges, including the language barrier, lack of participation, lack of actual interest, difficulty in cultural exchange, and even technological issues, that need to be ameliorated from the outset. Furthermore, there will also be a number of teething technological constraints causing annoying miscommunication or misunderstanding, but these can be ironed out as the experience progresses. In terms of readiness, although the participants, the lecturers in particular, question the efficiency of having virtual student exchange programs, they understand that virtual experience will inevitably figure more prominently in university learning spaces. Although online instruction is not as effective due to the lack of two-way communication between students and lecturers, in virtual exchange and virtual mobility programs the aims and objectives of the experience are perhaps not as rigid as actual program or course learning outcomes.
Of course, some of these unsolved issues and problems can continue to creep into virtual student exchange and virtual mobility programs, whereby the participants might not enjoy the same levels of interaction compared to a physical session; the language barrier will also continue to be an issue according to the research participants, because students already find it challenging to communicate among themselves. The use of hand gestures and facial expressions, limited as it is by overseas video conferencing, will make this process even more difficult. Cultural and personality differences, too, will continue to make virtual environments difficult to navigate in virtual student exchange and virtual student mobility programs. But, as the participants in our empirical research effort observed, sometimes the only way to meet the challenge of navigating online and virtual spaces is to continue making strides into the virtual universe. Only by extending the boundaries of teaching and learning, instructing and training beyond the traditional classroom will we be able to become more proficient and efficient at bridging the physical new normal with the virtual universe. | 8,619.8 | 2023-09-20T00:00:00.000 | [
"Education",
"Computer Science"
] |
Bioactive compounds of Punica granatum L. wastes by high performance liquid chromatography analysis
Abstract The massive pomaces of Punica granatum L. pose a challenging waste problem for the processing industries. The present study aimed to investigate the bioactive compounds of pomace extracts in order to introduce them to different industries, such as the pharmaceutical, food, medicinal, and agricultural industries, for optimum use. Four different extracts were prepared, and the phenolic compounds were quantified using HPLC-DAD. Different amounts of phenolic compounds were detected in the samples, including gallic acid, catechin, ellagic acid, rosmarinic acid, hesperidin, p-coumaric acid, and chlorogenic acid. Gallic acid was the major compound in all studied extracts of the pomaces, with the maximum amount belonging to the water extract (at 60 °C). The average amount of gallic acid detected in the water extract (at 60 °C) of Punica granatum L. was 11.25 mg g−1 dry weight, while it was 3.24, 3.02, and 1.09 mg g−1 dry weight for the extracts obtained with distilled water, methanol, and methanol 80%, respectively. Graphical Abstract
Introduction
Pomegranate (Punica granatum L.), a member of the family Lythraceae, subfamily Punicoideae, has been used as an edible fruit since antiquity. It is a native plant widely distributed in the south of Iran. Pomegranate is a source of antioxidants because of the presence of phenolic and tannin compounds (Loren et al. 2005). It is used in medicinal, food, and cosmetic formulations (Finkel and Holbrook 2000) and can be considered a good source of antioxidants (Singh et al. 2001). From ancient to recent times, different parts of the pomegranate have been used for different purposes, such as in diets (e.g. juices, jams, jellies, dressings, marinating, and wine), as religious symbolism (e.g. righteousness, fullness, fertility, abundance), or for their medicinal value. A high nutrient content, including oxalic acid, potassium, folate, and vitamins E, C, B6, and A, is well demonstrated in pomegranate peels (Al Rawahi et al. 2014). Generally, low aromatic intensity is the main characteristic of the pomegranate fruit (Wang et al. 2013). A hydroquinone pyridinium alkaloid from the leaves (Schmidt et al. 2005); punigratane, a pyrrolidine alkaloid with efflux inhibition activity, from the rind (Rafiq et al. 2016); and tricetin 4′-O-β-glucopyranoside, a flavone glucoside, together with four ellagitannins and flavones (tricetin, luteolin, ellagic acid, and granatin B), from the flowers (Wu and Tian, 2019) of pomegranate have been isolated. Moreover, the antioxidant, antimicrobial, antidiabetic, and antiparasitic activities, as well as the α-glucosidase and maltase inhibitory effects, of pomegranate leaf, rind, and flower extracts have been illustrated (El Dine et al. 2014; Rahmani et al. 2017; El Deeb et al. 2021). Previous studies have dealt with different parts of the healthy fruit after harvest, without the fruit having been mechanically pressed by the juicer. To our knowledge, no study has examined the rest of the plant, which is abandoned and discarded in factories after the dewatering step. As most parts of the fruit, including the exocarp, endocarp, pulp, stems, and seeds, are not edible, it is not as popular as other family members. These parts are discarded as wastes in the environment. On the other hand, the value of these wastes is not well known. Therefore, our aim was to investigate the phenolic compounds of pomegranate pomace by HPLC-DAD, in order to introduce it to various industries for wider use and application.
HPLC validation
The R² values from the calibration curves of the standard phenolic compounds were in the range 0.985 to 0.999, which confirmed the linearity of the method. The RSD values for the accuracy studies were below 2.0%. The HPLC method was therefore precise for the quantitative analysis of phenolic compounds.
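For readers who wish to reproduce this kind of validation, the sketch below shows how the calibration linearity (R²) and injection precision (RSD) can be computed. All concentration and peak-area values are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical calibration data for one standard (e.g., gallic acid):
# injected concentrations (ug/mL) and measured HPLC-DAD peak areas.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([152.0, 298.0, 760.0, 1495.0, 3010.0])

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination R^2 (linearity check; the paper reports 0.985-0.999)
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Precision: relative standard deviation of replicate injections (paper: <2.0%)
replicates = np.array([761.0, 755.0, 768.0])  # hypothetical repeated peak areas
rsd_percent = 100.0 * replicates.std(ddof=1) / replicates.mean()

print(f"R^2 = {r2:.4f}, RSD = {rsd_percent:.2f}%")
```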
Phenolic composition
The variance analysis illustrated significant differences in phenolic compounds among the different extracts of P. granatum (P < 0.01; Table S1). Our findings illustrated that gallic acid was the most abundant phenolic compound in all studied pomegranate pomace extracts (PPEs), in agreement with previous studies. According to previous reports, gallic acid is the main phenolic compound in the peel extract of pomegranate; the pomaces studied here also included the peels. Gallic acid and ellagic acid may be the compounds responsible for the anti-inflammatory effect of P. granatum. Hydrolyzable tannins have a polyhydric alcohol at their core, the hydroxyl groups of which are partially, or fully, esterified with either gallic acid or ellagic acid. They may have long chains of gallic acid coming from the central glucose core. On hydrolysis with acid or enzymes, the hydrolyzable tannins break down into their constituent phenolic acids and carbohydrates. Accordingly, Al Rawahi et al. (2014) reported the major phenolic compounds in the peel extract of P. granatum cultivated in Oman to be gallic acid, ellagic acid, punicalin, and punicalagin. Phenolic acids such as ellagic acid, gallic acid, chlorogenic acid, caffeic acid, vanillic acid, ferulic acid, trans-2-hydroxycinnamic acid, and quercetin have been identified in pomegranate (Bassiri-Jahromi and Doostkam, 2019). In our study, the greatest content of gallic acid was determined in the water extract at 60 °C in the water bath. Phenolic compounds have a considerable structural diversity, characterized by the hydroxyl groups on aromatic rings. According to the number of phenol rings and the structural elements that bind the rings to one another, such compounds are grouped and classified as simple phenols, phenolic acids, flavonoids, xanthones, stilbenes, and lignans. The phenolic compounds in P. granatum pomace in our study included two hydroxybenzoic acids that are also hydrolyzable-tannin constituents (gallic acid and ellagic acid), three hydroxycinnamic acids (rosmarinic, p-coumaric, and chlorogenic acids), one flavanone glycoside (hesperidin), and one flavan-3-ol (catechin). During the industrial extraction process, the tannins pass into the juice, and the high antioxidant capacity of pomegranate is attributed mainly to these compounds. In all four PPEs in the present study, gallic acid and ellagic acid were detected and identified. Caffeic acid was not found in our study. Caffeic acid is biosynthesized by hydroxylation of the coumaroyl ester of quinic acid (esterified through a side-chain alcohol). This hydroxylation produces the caffeic acid ester of shikimic acid, which converts to chlorogenic acid. We did not find caffeic acid, but its esters, rosmarinic acid and chlorogenic acid, were identified in all samples. This can also depend on the type of sample: the cultivar, genotype, extraction method, etc., strongly influence the phenolic content. Different pomegranate cultivars have different polyphenol compositions, associated with many factors such as cultivar type, growing region, maturity, cultivation, climate, edaphic conditions, and storage conditions. Rosmarinic acid exhibits antioxidant and anti-inflammatory effects and has recently been shown to protect neurons in vitro against oxygen-glucose deprivation.
Experimental
See supplementary material.
Conclusions
Different amounts of phenolic compounds were detected in the samples, including gallic acid, catechin, ellagic acid, rosmarinic acid, hesperidin, p-coumaric acid, and chlorogenic acid. Gallic acid was the major compound in all studied extracts of the pomaces, with the maximum amount belonging to the water extract at 60 °C in the water bath. According to the findings of this study, the pomaces of P. granatum are natural sources of phenolic compounds. The verified P. granatum pomace and its related bioactive components, such as flavonoid and phenolic compounds, have strong potential as a novel tool for inhibiting different human diseases and as a chemopreventive agent. Several beneficial effects are reported for these phenolic compounds, including antioxidant, anti-inflammatory, and antineoplastic properties. These compounds have been reported to have therapeutic activities in gastrointestinal, neuropsychological, metabolic, and cardiovascular disorders (Lin et al. 2018). It is estimated that around 3 million tons of pomegranate are produced in the world annually, of which Iran produces approximately 28%; the recorded annual production of pomegranate has been given as 10,866,300 tons. After pressing of the fruits for juice or oil, the solid remains are the pomace, which includes the stems, seeds, pulp, and skins of the fruit. During Punica juice processing, about 40 to 50 percent of the product is retained as pomace; the waste from these crops has been estimated at hundreds of thousands of tons (Animal Science Research Institute of Iran (ASRI), 2015). Since the proper utilization of agricultural and food wastes such as the pomaces of P. granatum will reduce the costs and environmental hazards that result from their disposal and persistence in the environment, our studies are continuing to investigate and introduce the pomaces of P. granatum as strong natural resources, and we are going to use these pomaces for other goals. | 1,936.2 | 2022-02-17T00:00:00.000 | [
"Environmental Science",
"Chemistry",
"Medicine"
] |
Defect activation and annihilation in CIGS solar cells: an operando x-ray microscopy study
The efficiency of thin-film solar cells with a Cu(In1−xGax)Se2 absorber is limited by nanoscopic inhomogeneities and defects. Traditional characterization methods are challenged by the multi-scale evaluation of the performance at defects that are buried in the device structures. Multi-modal x-ray microscopy offers a unique tool-set to probe the performance in fully assembled solar cells, and to correlate the performance with composition down to the micro- and nanoscale. We applied this approach to the mapping of temperature-dependent recombination for Cu(In1−xGax)Se2 solar cells with different absorber grain sizes, evaluating the same areas from room temperature to 100 °C. It was found that poor performing areas in the large-grain sample are correlated with a Cu-deficient phase, whereas defects in the small-grain sample are not correlated with the distribution of Cu. In both samples, classes of recombination sites were identified, where defects were activated or annihilated by temperature. More generally, the methodology of combined operando and in situ x-ray microscopy was established at the physical limit of spatial resolution given by the device itself. As proof-of-principle, the measurement of nanoscopic current generation in a solar cell is demonstrated with applied bias voltage and bias light.
Introduction
In the last several years, the photovoltaic industry has grown dramatically, and large factories have achieved economies of scale capable of producing modules with power conversion efficiencies in the range of 17%-24.4%. These devices are based on silicon (Si), cadmium telluride (CdTe), and copper indium gallium diselenide (Cu(In1−xGax)Se2, CIGS).
Experimental
The typical CIGS grain size was ~1.6 μm and ~0.8 μm for the LG and SG samples, respectively, as determined by electron back-scattered diffraction imaging and in agreement with x-ray nanodiffraction measurements [29]. The solar cells were grown by MiaSolé using growth processes detailed in [44,45]. The 1.7–1.8 μm thick absorber layer of CIGS is grown in an industrial roll-to-roll process on a stainless steel substrate coated with 700 nm of Na-doped Mo that serves as the back electrode. The front contact consists of 50 nm CdS and 200 nm ZnO, as shown in figure 1.
For the temperature control during this experiment, a heating stage [46] developed for the in situ growth of CIGS [47,48] was adapted for electrical XBIC & XBIV measurements in combination with XRF measurements. The experiments were conducted in N2 atmosphere. The sample temperature was controlled to within 0.1 °C of the nominal temperature via four thermocouples in the heating stage that were calibrated using a pyrometer. After changing the nominal temperature, 20 min were taken for stabilization and homogenization of the temperature distribution.
The scanning x-ray microscopy measurements were performed at the beamline 2-ID-D [49] of the Advanced Photon Source (APS) at Argonne National Laboratory (ANL). Figure 1 shows schematically the key components of the experimental setup. The energy of the incident x-ray photons was set to 10.5 keV, just above the Ga K edge for maximum sensitivity to the absorber elements Ga and Cu. Note that at this incident energy the sensitivity to In is fairly reduced and there is negligible sensitivity to Se, Na, or K. The x-ray beam was modulated at a frequency of 318 Hz by an optical chopper upstream of the focusing optics and detectors [21]. The chopper consists of 300 μm spring steel that enables a modulation ratio of >10¹² between x-ray ON/OFF periods. Being located >20 cm upstream of the beam/sample interaction point, the chopper did not lead to any XRF signal beyond the background of the experimental setup.
After the chopper, the modulated x-ray beam passed an ion chamber for beam-intensity reference and was focused by a zone plate with central beam stop onto the sample. The numerical aperture of the optics was 1.2 mrad, and the probe size on the order of 140 nm. At the microprobe instrument of beamline 2-ID-D, the diffraction-limited resolution of about 50 nm cannot be achieved due to (a) beam incoherence and (b) vibrations between the x-ray optics and the sample.
Between the zone plate and the sample front contact, an order sorting aperture (OSA) removed higher orders, and a polyimide foil separated the heating stage environment from the experimental hutch. The sample surface was perpendicular to the incident x-ray beam, and the emitted fluorescence x-ray photons were collected by an energy-dispersive single-element Si detector at a central angle of 43° to the incident beam.
For the 2D measurements, the heating stage including the sample was scanned across the probe beam. Optimized for high-speed measurements with limited beam damage and high throughput, we used the continuous, or fly-scan, mode [50] for fast XBIV measurements; the horizontal motion was the continuous one (inner loop), the vertical motion was not continuous (outer loop). The dwell time was 100 ms with a step size of 200 nm × 200 nm. Only for the overview map of the SG sample (right panel of figure 2), a dwell time of 10 ms was used. For the overview map of the LG sample (left panel of figure 2), a step size of 1 μm × 1 μm was used. To collect high-sensitivity XRF measurements combined with an XBIC measurement such as in the case of figure 9, we used the step-scan mode with 1 s dwell time and a 200 nm × 200 nm step size. From the XRF spectra of the sample and thin-film references, the elemental distributions of Cu and Ga were determined using the analysis software MAPS [23,51,52]. The XRF data were corrected for self-absorption effects, as discussed in [24], with a sensitivity analysis included as well. Note that misalignment during the sample change from the LG to the SG sample caused ∼2× lower XRF counts in the SG sample, which appears as lower concentrations in figure 4 (part of the XRF detector was shielded by the heating stage). In addition, the XRF reference was measured under slightly different conditions, which causes an underestimation of the true Cu and Ga area densities that should, in fact, be similar for the LG and SG samples. The experimental uncertainty in the absolute quantification of area densities by XRF is not uncommon, despite the detection limit for Cu and Ga being on the order of 10⁻⁶ (relative atomic concentration) under the given experimental conditions. Apart from the misalignment, errors in the absolute quantification arise from limited applicability of the thin-film limit, incomplete self-absorption correction, and errors of the concentrations in the reference. Note, however, that these systematic errors only affect the absolute concentration level, not the relative changes within maps, from which all conclusions will be drawn.
The temperature-dependent XBIV scans were performed over 20 μm × 20 μm, hence measuring 100 px × 100 px. Due to imperfect compensation of the thermal drift, we used image registration [53] to align the maps at different temperatures. For comparability of the maps, we cropped the maps to show only the areas that were measured at all temperatures in the case of the LG sample. In the case of the SG sample, stick-slip motion of the sample below 40 °C and above 80 °C did not allow for the measurement of the same area throughout the entire temperature range. Therefore, the comparable temperature range of the SG sample is limited to 40 °C–80 °C. For the highest signal-to-noise ratio of the XBIC and XBIV measurements, we utilized lock-in amplification as introduced in detail earlier [21,22,28,39]. The front contact being exposed to the incident x-rays was grounded to suppress contributions of replacement currents for ejected electrons to the measured XBIV & XBIC signal, which should include only the signal from the absorber layer [17]. For the XBIV measurements (which can be thought of as measurements of the open-circuit voltage V_oc with highly localized electron-hole pair generation), the solar cell was operated under open-circuit conditions, and the signal was directly sent to the voltage input of the MFLI lock-in amplifier from Zurich Instruments, where a demodulator and a low-pass filter (10 Hz cut-off frequency (−3 dB bandwidth), 48 dB/oct slope) extracted the signal above the noise level. For the XBIC measurement (in most cases a measurement of the short-circuit current I_sc with highly localized electron-hole pair generation, although any point of the current-voltage curve I(V) = XBIC(V) can be evaluated by applying a bias voltage, as we demonstrate in figure 9), the solar cell was operated with 100 mV forward bias voltage applied by a current pre-amplifier (SR570 from Stanford Research Systems), whose output was sent to the lock-in amplifier with identical demodulator and low-pass filter settings as for the XBIV measurements.
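To illustrate the lock-in principle described above, the following sketch demodulates a simulated chopper-modulated signal in software. Only the 318 Hz modulation frequency and the 10 Hz low-pass cut-off are taken from the text; the sampling rate, noise, drift, and signal levels are hypothetical stand-ins, and the actual measurements used a hardware lock-in amplifier rather than this offline scheme.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000.0                 # hypothetical sampling rate of the digitized signal (Hz)
f_chop = 318.0                # chopper / modulation frequency from the text (Hz)
t = np.arange(0, 1.0, 1 / fs) # 1 s of data

# Simulated detector signal: small ON/OFF-modulated XBIC on top of noise and drift
xbic_on = 2.0e-9              # hypothetical modulated amplitude (A)
signal = (xbic_on * 0.5 * (1 + np.sign(np.sin(2 * np.pi * f_chop * t)))  # square wave
          + 5e-9 * np.random.randn(t.size)                               # white noise
          + 1e-8 * t)                                                    # slow drift

# Dual-phase demodulation: multiply by quadrature references at the chopper frequency
ref_i = np.sin(2 * np.pi * f_chop * t)
ref_q = np.cos(2 * np.pi * f_chop * t)
b, a = butter(4, 10.0 / (fs / 2))        # 10 Hz low-pass, as quoted in the text
x = filtfilt(b, a, signal * ref_i)
y = filtfilt(b, a, signal * ref_q)

# Magnitude, insensitive to phase; for a 0/A square wave this recovers the
# fundamental Fourier component, 2A/pi of the ON level.
amplitude = 2.0 * np.hypot(x, y).mean()
print(f"recovered modulation amplitude ~ {amplitude:.2e} A")
```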
Microscopic defects
Prior to the investigation of the temperature dependence of recombination at nanoscopic defects, we evaluated the pristine state of the solar cells at a larger scale. Figure 2 shows the resulting XBIV measurements, representing typical areas of the LG and SG samples, several 100 μm away from the cell edges to avoid border effects. Throughout the manuscript, the color scales always show the entire data range, except for the differential voltage maps in figure 3, where the scale is the same for all maps for comparison.
When comparing the overview maps of the two samples, we note first the different voltage scale. The SG sample outperforms the LG sample, and relative variations in the SG sample are smaller. Based on the fact that the poorest areas limit the overall device performance, this suggests that the LG sample is limited by the large defect areas, where the voltage drops to <20% of the median. (Figure 2 caption, fragment: measured at … and 18 °C, respectively, with the samples being in pristine condition prior to extended x-ray exposure and heating; the white rectangles indicate the areas of high-resolution imaging in this study.)
Second, we note that the structure of high and low performing areas is different: in the LG sample, we can distinguish large islands, many μm² in area, with dramatically poorer performance compared to the large areas of relatively homogeneous high performance. In contrast, the SG sample does not show large under-performing islands, but the performance variations are spatially at a smaller scale.
The temperature dependence of the large poor-performing areas in the LG sample has been studied previously [39]. The reason for the poor performance of these defects is still under investigation, but no correlation was found with the area density of Zn, Ga, and Cu, which excludes pinholes in the absorber or poor ZnO coverage as reasons. Most likely, the low performance of these areas is related to junction effects, but the limited sensitivity to the Cd or S distribution prevented the ability to test correlations of the electrical performance with the CdS thickness.
Nanoscopic defects
In this study, we focus on the nanoscopic performance of electronic defects. For this purpose, we have performed high-resolution measurements of selected areas from figure 2 that are marked with white rectangles. The areas have been selected because they are not dominated by the larger defects discussed above.
Starting at room temperature, the solar cell temperature was increased in steps of 20 °C up to 100 °C, and the XBIV signal was measured in precisely the same area. The middle part of figure 3 shows the resulting maps. For direct comparison of the spatially resolved voltage variation that is induced by each temperature step, the XBIV measurements V 1 (x, y) and V 2 (x, y) at coordinates (x, y) were normalized to the median value at each temperature, and the voltage ratio between subsequent temperature steps 1 and 2 was calculated as ΔV(x, y) = [V 2 (x, y)/median(V 2 )] / [V 1 (x, y)/median(V 1 )]. ΔV > 1 indicates a relative voltage increase compared to the median voltage, which is indicated in green, and ΔV < 1 indicates a relative voltage decrease, which is indicated in red. This normalization is required for a direct nanoscopic comparison, as the spatial variations would otherwise be dominated by the overall trends that are expected with increasing temperature. Specifically, the voltage drops as the temperature increases due to an increase in recombination and bandgap narrowing [54,55], and the annealing of CIGS solar cells leads to a substantial voltage increase [56]. Here, we observe the former effect as a consistent voltage decrease from 16 °C to 100 °C in the LG sample, and from 40 °C to 80 °C in the SG sample. The latter trend manifests clearly in the LG sample as a five-fold voltage increase between the initial measurement at 16 °C and the final LG measurement after the full heating cycle and cool-down at 18 °C. Note that these effects dominate over any beam damage: only between 16 °C and 40 °C did we observe significant beam damage (see figure 2(f) of [39]; the LG area investigated here is at coordinates ). Above 40 °C, the scanned area showed barely any performance difference compared to the non-damaged areas, and no difference can be observed in the annealed state after cooling down (see figures 5(c)-(d) in [39]). In the following, we discuss characteristic signatures of temperature dependence highlighted by white shapes in figure 3.
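A minimal sketch of the median normalization described above, assuming two registered XBIV maps are available as plain NumPy arrays (the map size and voltage levels are hypothetical placeholders):

```python
import numpy as np

def delta_v(v1, v2):
    """Median-normalized voltage ratio between two XBIV maps of the same area.

    v1, v2 : 2D arrays of XBIV values at temperatures T1 and T2 (same shape,
    already registered to each other). Values > 1 mark pixels whose relative
    performance improved with the temperature step; values < 1 mark a relative loss.
    """
    return (v2 / np.median(v2)) / (v1 / np.median(v1))

# Hypothetical 100 px x 100 px maps
rng = np.random.default_rng(0)
v_40c = 10e-6 * (1 + 0.1 * rng.standard_normal((100, 100)))  # ~10 uV median
v_60c = 7e-6 * (1 + 0.1 * rng.standard_normal((100, 100)))   # global voltage drop
ratio = delta_v(v_40c, v_60c)
print(ratio.mean())  # ~1: the global temperature trend is divided out
```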
In the LG sample, A 1 denotes the defect area with both the lowest performance and the largest footprint. Defect areas A 2 and A 3 are smaller and their impact is not as strong, but they share with A 1 the key signature: they all improve significantly from 40 °C to 100 °C relative to neighboring areas. Partially, this effect is reversible. Upon cooling from 100 °C to 18 °C, the relative performance of areas A 1–A 3 decreases; however, the absolute voltage remains significantly higher than in the pristine state (∼12 μV versus ∼2.5 μV), which indicates an irreversible annealing effect. Figure 4 shows the area density of the absorber elements Cu and Ga measured at 60 °C (at other temperatures, XRF measurements were taken with lower sensitivity, but no temperature-induced changes in the Cu and Ga distribution were observed). In all three areas A 1–A 3, the low performance is accompanied by a substantially lower Cu concentration. This pattern is true for many other spots of low performance. At the same time, the Ga distribution does not follow the same trend in general. The fact that only the Cu concentration changes suggests that voids or pinholes in the absorber layer are not the cause for the performance decrease (which would show up as a reduced concentration of all absorber elements). Rather, a deficiency of Cu relative to Ga (and eventually In and Se, to which these measurements are not sensitive) appears to be responsible for the locally low voltage. The statistical correlation of the nanoscopic electrical performance with the absorber element distribution across the entire map further corroborates this interpretation. Figure 5 shows a clear positive correlation between the voltage and the Cu concentration (relative standard deviation of the slope: only 2.7%), whereas no correlation is observed between the voltage and the Ga concentration (relative standard deviation of the slope: 42%). These results of locally poor performance at areas of low Cu area density are in line with the observation of amorphous secondary phases in the CIGS absorber that were observed by electron microscopy and found to be Cu-poor and Na-rich [43]. Figures 3–5 provide direct evidence for the detrimental electrical impact of these phases.
Area B denotes a unique defect characteristic that has not been observed elsewhere. During the first temperature increase to 40 C, this defect is created and manifests as the strongest relative voltage decrease. Upon a subsequent temperature increase, this defect is nearly completely annealed out. This area neither stands out in the elemental distribution of the absorber layer, nor of Zn (not shown here), which excludes pinholes or voids in the absorber layer or top electrode as cause. Furthermore, we can exclude local beam damage as cause for this electronic defect, as this area has not absorbed a higher dose than any other area of these high-resolution maps. Thus, the reason for the creation and annihilation of this defect remains unclear. We may speculate that crystallographic or chemical rearrangement beyond the sensitivity of these measurements could have caused the defect in area B.
In the SG sample, there are more areas with high spatial performance gradients than in the LG sample. At 40 °C, we can distinguish between poor and high performing areas showing up in blue and orange, respectively. Note that the distribution of the poor performing areas does not match the grain boundary distribution, which is at a smaller scale (compare the ~0.8 μm typical grain size versus well-performing areas sized several μm²). Hence, grain boundaries are not generally under-performing, but defective areas may still be among the grain boundaries.
Within the defective areas, C 1 and C 2 stand out. There, the performance subsequently increases with temperature until the defects are annihilated to a large extent at 80 °C. As a consequence of this defect annihilation, areas C 1 and C 2 outperform other areas at 80 °C that showed similar or even better performance at 40 °C. Area D shows the opposite behavior. Being one of the best-performing areas at 40 °C, this area suffered the strongest voltage drop as a result of the temperature increase. Neither the performance changes of areas C nor D in the SG sample can be explained by compositional particularities, which would show up as features in the area density maps in figure 4 or in the statistical analysis in figure 5. Therefore, we can exclude topological defects such as pinholes or voids as the cause. We emphasize that the extent to which the performance changes and electronic defects are activated and annihilated at the nanoscale, even at moderate operation temperatures, is remarkable.
Statistical analysis
For a quantitative analysis of the temperature-dependent performance variations, all pixels of the maps in figure 3 are added up in histogram-like violin plots in figure 6.
First, we note the above-mentioned trends of decreasing voltage at higher temperatures, and a voltage boost after annealing. However, the slope of the voltage decrease (a convolution of the negative temperature coefficient and the positive annealing effect) is noteworthy: while the LG sample starts at lower voltage than the SG sample in their pristine states, the temperature-induced voltage decrease is less pronounced, such that the median performance is comparable at 60 °C, and the LG sample outperforms the SG sample at higher temperatures. As the temperature coefficient is not expected to depend dramatically on the grain size, this suggests that the annealing effect is more pronounced in the LG sample.
A closer look at the high- and low-performing tails provides further insights: in the LG sample, the voltage of the low-performing areas remains fairly constant throughout the entire temperature range, which means that temperature coefficient and annealing compensate each other there. As a result, the distribution gets narrower towards higher temperatures. This corroborates the argument that low-performance areas profit more from the annealing than the already-good areas. In the SG sample, this effect is far less pronounced. We may speculate that the detrimental effect of secondary Cu-poor phases can be annealed out to a large extent, whereas the dominant defect type in the SG sample persists. Figure 7 shows the histograms of the normalized x-ray beam induced voltage. They support the key observations of figure 6: upon heating, the distributions get narrower, even though the signal decreases in absolute values (note that the normalized distribution would get wider for constant voltage spread and decreasing signal); this trend is more pronounced in the LG sample. After cooling down, the narrow distribution is maintained to a large extent. Most importantly, the low-voltage tail is entirely cut off as a result of the annealing: the area with minimum performance afterwards generates more than twice the voltage of the area with maximum performance before annealing (see figure 6). The histograms clearly show that not only is the narrowing effect stronger for the LG sample, but the voltage distribution is significantly narrower already in the pristine state. The wider XBIV distribution of the SG sample is indicative of the high degree of local inhomogeneity that is directly related to the small grain size and per se reduces the efficiency [11].
Spatial resolution
Generally, the spatial resolution of XBIC and XBIV measurements can be limited by any of the four following parameters: (A) step size of the scan; (B) x-ray probe size in the sample; (C) beam-sample interaction radius including the effects of secondary electrons and photons; (D) charge-collection length (approximated by the diffusion length of the minority charge carriers) in the absorber layer.
The beam-sample interaction radius is defined as the radius of a cylinder in the absorber layer, within which 68% (1-sigma) of the total dose in the absorber layer is deposited after considering all secondary photon-electron interactions such as scattering or secondary fluorescence. For this purpose, the dose density in the layer stack has been simulated using a modified version of the Monte-Carlo code Penelope [57]. Then, the interaction radius has been determined from the three-dimensional dose density using the same procedure as for perovskite solar cells in [32]. For the CIGS samples and measurement geometry of this study, the simulations yielded an interaction radius of 160 nm.
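The 1-sigma radius can be extracted from a simulated dose distribution along the lines sketched below. The radial profile used here is a hypothetical stand-in for the Penelope output, not the actual simulation result of this study.

```python
import numpy as np

def interaction_radius(r, dose_density, fraction=0.68):
    """Radius of the cylinder enclosing `fraction` of the absorbed dose.

    r            : radial bin centers (nm), ascending
    dose_density : dose per unit volume at each radius, already integrated
                   over the absorber depth
    The dose in an annulus scales as dose_density * 2*pi*r*dr.
    """
    dr = np.gradient(r)
    annulus_dose = dose_density * 2 * np.pi * r * dr
    cumulative = np.cumsum(annulus_dose) / annulus_dose.sum()
    return np.interp(fraction, cumulative, r)

# Hypothetical radial profile: Gaussian-like core plus a weak scattering tail
r = np.linspace(1, 1000, 1000)                        # nm
profile = np.exp(-(r / 120) ** 2) + 0.01 * np.exp(-r / 400)
print(f"1-sigma interaction radius ~ {interaction_radius(r, profile):.0f} nm")
```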
Precise values of the minority-carrier diffusion length are not known for the studied devices. However, we can estimate from the performance that the diffusion length should be on the order of the absorber thickness under standard measurement conditions, i.e. 1 to 2 μm at 25 °C. Hence, the diffusion length limits the measurement resolution in these measurements, being significantly larger than the step size, probe size, and interaction radius.
In the temperature series of figure 3, all scan parameters were constant. Most importantly, the step size was constant (200 nm), and the temperature-dependence of the probe size (140 nm) and the interaction radius (160 nm) is negligible, as the involved x-ray energies are more than five orders of magnitude higher than the thermal energy. In contrast, the diffusion length in CIGS decreases substantially at higher temperatures due to the increased recombination rate, and becomes comparable to the step size. This leads to a temperature-dependence of the spatial resolution of XBIC and XBIV measurements.
Experimentally, the impact of temperature on the spatial resolution is visible in the XBIV maps in figure 3. The careful reader may have noticed that the maps seem to get sharper with increasing temperature, with the measurement taken at 100 °C being the sharpest. Most clearly, the effect of reduced diffusion length and enhanced resolution at higher temperature can be observed in the following instances of the LG sample: (i) high performing areas such as left of A 2 and between A 2 and A 3 appear to be smeared out at low temperatures, but show substructures at high temperatures; (ii) the contrast increases with temperature in areas with small-range performance variations such as between A 2 and B; (iii) recombination-active areas such as A 1 appear larger at low temperatures and get locally more constrained as the diffusion length becomes smaller at higher temperatures.
For the quantification of the apparent 'sharpness' variation, we have performed a line-wise fast Fourier transform (FFT) of the XBIV signal from the uncropped maps shown in figure 3 after their stretching to their respective min-max values. Figure 8 shows the result of the FFT, averaged over all lines in the maps. The key outcome of this FFT analysis is the amplitude increase with temperature at high spatial frequencies, which is a measure of the pixel-to-pixel variation and as such of the spatial resolution of the measurement.
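A possible implementation of this line-wise FFT analysis, assuming the XBIV map is available as a 2D array; the min-max stretching mirrors the preprocessing described in the text, while the array size and step size below are hypothetical:

```python
import numpy as np

def linewise_fft_amplitude(xbiv_map):
    """Line-wise FFT amplitude spectrum of a 2D XBIV map, averaged over rows.

    The map is first stretched to its min-max range so that spectra taken at
    different temperatures (different absolute voltages) are comparable.
    Higher amplitude at high spatial frequencies indicates stronger
    pixel-to-pixel variation, i.e. a 'sharper' map.
    """
    m = (xbiv_map - xbiv_map.min()) / (xbiv_map.max() - xbiv_map.min())
    spectra = np.abs(np.fft.rfft(m, axis=1))  # FFT of each horizontal line
    return spectra.mean(axis=0)               # average over all lines

# Hypothetical spatial-frequency axis for a 100-column map with 200 nm steps
step_nm, n_cols = 200.0, 100
freqs = np.fft.rfftfreq(n_cols, d=step_nm)    # cycles per nm
```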
This analysis provides experimental evidence for the spatial resolution of XBIC, XBIV, and XEOL not necessarily being limited by the probe size, and demonstrates that fundamentally higher spatial resolution can be achieved in devices with shorter diffusion length.
Outlook
The left panel of figure 9 shows the XBIC signal of the same area of the SG sample as in figure 3. The right panel shows the correlation of the XBIC and the XBIV signal as a 2D histogram and a fit of the corresponding scatter plot. Note that the standard deviation of the slope is only 2.5%. Following the argumentation of [22,29], this indicates that the XBIC and XBIV variations are caused by recombination (or absorber thickness variations, which we can exclude here by the XRF measurements), and excludes bandgap variations as the cause. Therefore, the recombination-sensitive XBIC and XBIV signals may be interpreted in this study as a measure of the charge-collection efficiency.
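The quoted slope and its relative standard deviation can be obtained from a straightforward least-squares fit of the flattened maps, for example along these lines; the synthetic arrays below are hypothetical stand-ins for the real, registered XBIC/XBIV data:

```python
import numpy as np

def fit_slope_with_uncertainty(xbiv, xbic):
    """Least-squares slope of XBIC vs XBIV (flattened maps) and its relative
    standard deviation, analogous to the ~2.5% quoted for figure 9."""
    x, y = xbiv.ravel(), xbic.ravel()
    (slope, intercept), cov = np.polyfit(x, y, 1, cov=True)
    slope_sd = np.sqrt(cov[0, 0])
    return slope, 100.0 * slope_sd / abs(slope)

# Hypothetical registered data with an underlying linear relation plus noise
rng = np.random.default_rng(1)
xbiv = 1e-5 * (1 + 0.2 * rng.standard_normal(10_000))
xbic = 3.0 * xbiv + 1e-6 * rng.standard_normal(10_000)
slope, rel_sd = fit_slope_with_uncertainty(xbiv, xbic)
print(f"slope = {slope:.2f}, relative SD = {rel_sd:.1f}%")
```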
For the map in figure 9, we have evaluated the XBIC signal under conditions that involve all elements of outdoor operation, combining a typical operating temperature (60 °C) with the application of forward bias voltage (100 mV) and bias light (on the order of 0.1 W m⁻²). Under actual outdoor operating conditions, the forward voltage and the light intensity are higher; for instance, at the maximum power point (MPP), the voltage is on the order of 500 mV, and the bias light should add up to 1000 W m⁻² with the spectrum AM1.5g. Therefore, under outdoor conditions, the direct current in the solar cells is substantially higher than under the conditions of figure 9, such that the small modulated XBIC signal on top of it would vanish in the noise.
This combined in situ and operando experiment exemplifies the flux limitation at state-of-the-art synchrotron beamlines today. However, several 4th generation synchrotrons that are based on a multi-bend achromat lattice will see first light within the next few years [58-61]. Compared to 3rd generation synchrotrons, the brilliance will be boosted by ∼2 orders of magnitude, which directly translates into increased focused photon flux at nanoprobe endstations. This will enable entirely new experiments at time and length scales that have not been accessible so far. For example, such studies can cover larger areas and larger numbers of samples in the future, and the nanoscopic charge-collection efficiency can be evaluated by XBIC under actual operating conditions. Beyond that, the mapping of full current-voltage curves under different conditions is within reach, which will yield parameters such as the saturation current density (J 0) or the ideality factor (n) at unparalleled spatial resolution.
Conclusions
Based on multi-modal scanning x-ray microscopy, we have studied the performance variations in two types of CIGS solar cells with different grain sizes upon annealing. For both types, we have observed a lower performance at higher temperatures, which is in agreement with the generally expected higher recombination activity. Also, the observed performance increase induced by the first annealing after synthesis is widely known.
For the first time, however, we have directly observed the creation and annihilation of defects in industrial CIGS solar cells at the intrinsic resolution limit of the diffusion length. Hereby, we have identified a type of highly recombination-active electronic defect that is related to low Cu concentration. Both on the level of individual defects as well as by statistical means, poor performance has been shown to be correlated with low Cu concentration. Note that we have not found any correlation between the nanoscopic performance and the Ga concentration despite the greater sensitivity of these measurements to Ga, which excludes topological variation or voids as the cause. Rather, this suggests a Cu-poor phase being responsible for the recombination activity in large-grain CIGS grown at elevated temperature.
By demonstrating the combination of operando with in situ measurements of the XBIC signal with applied bias voltage and bias light at outdoor operating temperatures, we have laid out a path towards nanoscopic performance mapping under actual operating conditions. Such measurements will be enabled by nanoprobe endstations at 4th generation synchrotrons. … beamtime. This material is based on work partially supported by the National Science Foundation and the Department of Energy under NSF CA No. EEC-1041895 and contracts DE-EE-0005948 and DE-EE-0008163. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation or the Department of Energy. Work at the Advanced Photon Source was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. The research leading to these results has received funding from Deutsches Elektronen-Synchrotron DESY. | 6,620.6 | 2020-02-07T00:00:00.000 | [
"Physics",
"Materials Science",
"Engineering"
] |
Manganese Luminescent Centers of Different Valence in Yttrium Aluminum Borate Crystals
We present an extensive study of the luminescence characteristics of Mn impurity ions in a YAl3(BO3)4:Mn crystal, in combination with X-ray fluorescence analysis and determination of the valence state of Mn by XANES (X-ray absorption near-edge structure) spectroscopy. The valences of manganese Mn2+(d5) and Mn3+(d4) were determined by the XANES and high-resolution optical spectroscopy methods shown to be complementary. We observe the R1 and R2 luminescence and absorption lines characteristic of the 2E ↔ 4A2 transitions in d3 ions (such as Mn4+ and Cr3+) and show that they arise due to uncontrolled admixture of Cr3+ ions. A broad luminescent band in the green part of the spectrum is attributed to transitions in Mn2+. Narrow zero-phonon infrared luminescence lines near 1060 nm (9400 cm−1) and 760 nm (13,160 cm−1) are associated with spin-forbidden transitions in Mn3+: 1T2 → 3T1 (between excited triplets) and 1T2 → 5E (to the ground state). Spin-allowed 5T2 → 5E Mn3+ transitions show up as a broad band in the orange region of the spectrum. Using the data of optical spectroscopy and Tanabe–Sugano diagrams we estimated the crystal-field parameter Dq and Racah parameter B for Mn3+ in YAB:Mn as Dq = 1785 cm−1 and B = 800 cm−1. Our work can serve as a basis for further study of YAB:Mn for the purposes of luminescent thermometry, as well as other applications.
Introduction
Crystals of yttrium-aluminum borate YAl3(BO3)4 (YAB) have the structure of the mineral huntite CaMg3(CO3)4 with the non-centrosymmetric space group R32 of the trigonal system [1]. Figure 1 shows different projections of the YAB unit cell. The crystal structure is formed by layers that are perpendicular to the crystallographic c axis and consist of distorted YO6 prisms, AlO6 octahedra, and BO3 groups of two types (B1O3 and B2O3). Y3+ ions in YO6 prisms are surrounded by six oxygen atoms of one type and occupy sites with the D3 point symmetry group. The point group of AlO6 octahedra is C2. AlO6 octahedra linked together by their edges form spiral chains running along the c axis. The Y3+ ions are situated between three such chains and link the chains together. YO6 prisms are isolated from each other, having no oxygen atoms in common, which, in the case of a substitution of the Y3+ ions by rare-earth or transition metal ions, results in low luminescence quenching [2]. This property, together with high optical nonlinearity and excellent physical characteristics and chemical stability, makes YAB extremely interesting for many applications. Doped with various rare-earth and transition metal ions, YAB crystals are well-known phosphors, promising for use as materials for display panels, lasers, scintillators, LEDs, luminescent thermometers, and in medical imaging [3-18]. YAB crystals doped with Nd3+ [8,13], Yb3+ [11,12,14], Er3+/Yb3+ [10], and Yb3+/Tm3+ [16] are well-known media for self-frequency doubling, self-frequency summing, and up-conversion lasers. Tunable anti-Stokes ultraviolet-blue light generation was demonstrated using a random laser based on Nd0.10Y0.90Al3(BO3)4 [3]. YAB:Eu3+/Tb3+ phosphors were proposed for eye-friendly white LEDs [6]. In addition, YAB:Cr is being investigated as a material for LEDs [17], in particular as a phosphor for plant-growth LEDs, with excellent thermal stability and high luminescent yield [5]. Recently, impressive applications of YAB:Pr3+/Gd3+ and YAB:Cr3+ in luminescent thermometry were reported [4,15]. In Ref. [15], it was proposed to use several excited levels of the Gd3+ ion in YAB doped with Pr3+ and Gd3+ ions in the UV region of the spectrum to implement a Boltzmann thermometer operating from 30 to 800 K. The UV region allowed detuning from background thermal radiation even at the highest temperatures. In this case, excitation was carried out at a wavelength of 450 nm using an inexpensive commercial LED into the absorption band of the Pr3+ ion, followed by the up-conversion energy transfer Pr3+ → Gd3+. In Ref. [4], a combination of optical heating and luminescent thermometry in YAB:Cr3+ was realized. Here, the temperature-dependent ratio of emission intensities for the 4T2 → 4A2 and 2E → 4A2 transitions of Cr3+ was used to measure the temperature. We note that the Mn4+ ion has the same valence electron shell as the Cr3+ ion (d3) and is also used for luminescent thermometry [19]. Compounds with Mn3+ (d4) exhibit broadband, extremely temperature-sensitive luminescence in the near-IR and visible spectral ranges [20,21], due to which compounds with Mn3+ are also topical materials for thermoluminescent sensors.
Cryogenic luminescence ratiometric thermometry based on the diverse thermal quenching behaviors of Mn3+ and Mn4+ in manganese-doped garnet-type Ca3Ga2Ge3O12 single crystals was explored [22]. Tb3+ and Mn3+ co-doped La2Zr2O7 nanoparticles were recently suggested as a promising material for dual-activator ratiometric optical thermometry [23]. Mn2+ (d5)-containing phosphors exhibit bright broadband luminescence with a maximum from the red to green region of the spectrum, depending on the particular matrix [24][25][26]. In light of all of the above, it is of interest to study the luminescent properties of YAB doped with manganese.
We are aware of only one work on YAB:Mn spectroscopy [27]. Only the room-temperature spectra were measured in Ref. [27]. Three lines characteristic of the 2E → 4A2 emission of ions with a d3 electronic configuration were detected in the YAB:Mn room-temperature luminescence spectra [27]. The authors assigned these lines to Mn4+ (d3). The results of electron paramagnetic resonance (EPR) showed that Mn introduced into YAB at low concentrations predominantly occupied the yttrium-ion sites in the crystal structure, its valence in this case being 2+ [28]. Two broad bands peaking at 544 and 637 nm were observed in the room-temperature luminescence spectrum of YAB:Mn and assigned to the transition from the 4T1 state of the Mn2+ ion, split by a low-symmetry component of the crystal field, to the ground state 6A1 [27]. Since the Mn2+ and Mn4+ ions presumably replace the trivalent Y3+ and Al3+ cations, respectively, the question of charge-compensation arises. The formation of charge-compensating Mn2+-Mn4+ dimers was suggested in [27]. In this work, we continue the study of the valence states of manganese in YAB:Mn using XANES spectroscopy and high-resolution broadband temperature-dependent optical spectroscopy, and obtain extensive data on the luminescence of Mn impurity centers of various valences in YAl3(BO3)4.
Materials and Methods
YAl3(BO3)4:Mn crystals were obtained by the flux method of crystal growth in the laboratory of L.N. Bezmaternykh at the Kirensky Institute of Physics of the Siberian Branch of the Russian Academy of Sciences in Krasnoyarsk. They were grown on seeds in platinum crucibles with a volume of 50 mL. The composition of the system during the flux crystal growth was 85 wt.% (Bi2Mo3O12 + 2B2O3 + 0.5Li2MoO4) + 15 wt.% YAl3(BO3)4 with the addition of Mn2O3. The temperature regime consisted of heating the solution-melt to 1100 °C and then slowly cooling at a rate of 0.5 °C/h for 48 h. Note that manganese oxide Mn2O3 decomposes in air at temperatures above 800 °C to form Mn3O4 (Mn2+Mn3+2O4) [29]. High-purity reagents were used in flux crystal growth. Cr (0.001%) and Pb (0.0005%) impurities in Al2O3, as well as Nd2O3 and Sm2O3 (<0.0001%) in Y2O3, have been reported on certificates and are of interest for further discussion.
Powder X-ray diffraction on the grown crystals at room temperature was performed on a Thermo Fisher Scientific ARL X'tra diffractometer (Basel, Switzerland) equipped with a Dectris MYTHEN2 R 1D detector (Cu Kα1,2 radiation). The operational voltage and current were 40 kV and 40 mA, respectively. Powder diffraction patterns were obtained in continuous mode at a rate of 2°/min in Bragg-Brentano geometry over an angle range of 10° ≤ 2θ ≤ 90°. The unit cell parameters of YAl3(BO3)4:Mn were refined by the Le Bail method using the JANA2006 program [30]. All parameters were refined by the least-squares method. The pseudo-Voigt function was used as the peak profile function. The structural data for YAl3(BO3)4 (sp. gr. R32, a = 9.295(3) Å, c = 7.243(2) Å, α = β = 90°, γ = 120°) were used as the initial structural parameters [31].
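For orientation, the pseudo-Voigt profile used in such Le Bail refinements is a weighted sum of a Lorentzian and a Gaussian of the same FWHM. The following is a minimal illustrative sketch, not the JANA2006 implementation; the peak position, width, and mixing parameter are hypothetical values chosen only for demonstration.

```python
import numpy as np

def pseudo_voigt(two_theta, center, fwhm, eta, amplitude=1.0):
    """Pseudo-Voigt peak: eta * Lorentzian + (1 - eta) * Gaussian, both with the same FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # Gaussian sigma from FWHM
    gamma = fwhm / 2.0                                   # Lorentzian half-width at half-maximum
    gauss = np.exp(-0.5 * ((two_theta - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((two_theta - center) / gamma) ** 2)
    return amplitude * (eta * lorentz + (1.0 - eta) * gauss)

# Hypothetical reflection near 2-theta = 20.5 deg with 0.08 deg FWHM and 50/50 mixing
two_theta = np.linspace(10.0, 90.0, 8001)
profile = pseudo_voigt(two_theta, center=20.5, fwhm=0.08, eta=0.5)
```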
X-ray fluorescence analysis was carried out on a Bruker M4 Tornado analyzer. Absorption and luminescence spectra in the near-IR and visible ranges (5000-16,000 cm−1) with a spectral resolution up to 0.2 cm−1 were recorded on a Bruker IFS 125HR spectrometer (Bruker Optik GmbH, Ettlingen, Germany). Luminescence spectra in the visible and UV ranges (9000-20,500 cm−1) with a spectral resolution up to 3 cm−1 were registered using an Ocean Insight HDX spectrometer. The sample was cooled down to 5 K using a Cryomech ST403 closed-cycle helium cryostat (Syracuse, NY, USA). X-ray absorption spectra near the manganese K-edge were measured at the "Structural Materials Science" beamline at the Kurchatov Synchrotron Radiation Source [32] by X-ray fluorescence yield. Luminescence excitation spectra were recorded at liquid-nitrogen temperature (77 K) on a Fluorolog®-3 spectrofluorometer at the Institute of Photonic Technologies of the Federal Research Center "Crystallography and Photonics" of the Russian Academy of Sciences.
Results and Discussion
X-ray Diffraction (XRD) Analysis
XRD was used for the fingerprint characterization and investigation of the structural phases in the crystalline state. XRD patterns were analyzed by the Le Bail method in order to extract the parameters of the unit cell. The refined unit cell parameters were a = 9.274(7) Å, c = 7.223(3) Å, α = β = 90°, and γ = 120°. The convergence of the Le Bail approximation is shown in Figure 2. It can be seen that the diffraction pattern is well described, as indicated by the low R-factor values and small difference between the calculated and experimental diffraction patterns. The figure shows additional reflections of the Al2O3 phase. Their presence is explained by the fact that a corundum mortar was used in the preparation of the powder samples.
X-ray Fluorescence Analysis
The concentration of manganese ions was determined by X-ray fluorescence analysis to be 0.87 at.%. In addition, the presence of 1.18 at.% Bi was found, which is explained by its presence in the composition of the solvent. Insignificant amounts of potassium, calcium, titanium, and iron impurities were also found (see Table 1).
X-ray Absorption Spectroscopy
To address Mn ion oxidation state and position in the crystal structure, the fine structure of the X-ray absorption spectrum at the K-edge of manganese was measured. The EXAFS (Extended X-ray Absorption Fine Structure) spectrum was processed and analyzed using the software package IFEFFIT, version 1.2.11c [33,34]. The measured XAFS data were first processed by the ATHENA program of this package to merge four independently measured spectra, normalize the spectrum to a unity-height jump, and obtain the oscillating part of the spectrum. The fine structure of the X-ray absorption spectrum obtained in this manner after the K-jump was then used for the structural analysis ( Figure 3). The local structure of manganese ions in the crystal was analyzed by fitting the EXAFS spectra at the K-edge of Mn to the model of the local structure based on the crystal structure of YAB [31]. Two distinct models were used for the fitting. The first model includes a manganese atom in the yttrium position. The second model takes into account the partial occupation of aluminum positions by manganese atoms. Since the positions of yttrium and aluminum differ significantly in metal-oxygen distances in the first coordination sphere-2.3 and 2.0 Å, respectively-this was taken into account by introducing an additional Mn-O scattering path of shorter length. To estimate the occupancy of the aluminum position, the coordination numbers for the two nearest oxygen coordination spheres were chosen so that their sum was fixed equal to six ( Table 2). The distance for this shorter path was set to 2.052 Å to obtain a stable fit. Other parameters determined by fitting the EXAFS spectra are the distances between the absorbing and neighboring atoms R j and the Debye-Waller factors σ j 2 common to atoms of the same type. The errors for the Debye-Waller factors are quite large, since we can only use the spectrum up to k = 10 Å −1 due to the relatively high noise levels at large k. This leads to a significant correlation of the Debye-Waller factors with the overall amplitude of the EXAFS oscillations and to high uncertainty values. The refinement also included the Fermi energy shift ∆E 0 and the attenuation coefficient of the signal amplitude S 0 2 . The fitting ranges in k space and in R space were 2-10 Å −1 and 1-4 Å, respectively. The quality of the fit is characterized by the factor R f , which indicates the percentage mismatch between the data and the model. Table 2 shows that the two models do not differ in R f , i.e., manganese in aluminum positions does not contribute much to the EXAFS signal. Thus, one can conclude that the occupation of aluminum sites by manganese atoms is rather small. To estimate this occupation, the coordination numbers for the split coordination sphere of oxygen can be used. For a shorter distance, it was determined to be 0.7, so occupancy can be estimated as no more than 10%. It should be noted that this estimate shows the sensitivity of the EXAFS method for this quantity, since the error bars are also of the same order. The distances determined by the EXAFS fit correspond to the local structure of the yttrium site. The distance to oxygen in the first coordination sphere was determined to be 2.26 ± 0.03 Å, which is slightly smaller than the Y-O distance in the YAB structure (2.313 Å) [31].
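As a rough illustration of the quantities refined in such a fit (S0², coordination number N, distance R, Debye-Waller factor σ², ΔE0), the standard single-scattering EXAFS contribution of one coordination shell can be sketched as below. This is a simplified plane-wave textbook form, not the FEFF/IFEFFIT machinery actually used; the backscattering amplitude and phase are placeholders, and the mean-free-path damping term is omitted.

```python
import numpy as np

def exafs_shell(k, N, R, sigma2, S02=0.9, f_k=1.0, phase=0.0):
    """Single-shell EXAFS oscillation chi(k) in the plane-wave approximation:
    chi(k) = S0^2 * N * f(k) / (k * R^2) * exp(-2 k^2 sigma^2) * sin(2 k R + phase).
    f_k and phase stand in for the tabulated backscattering amplitude and phase shift."""
    k = np.asarray(k, dtype=float)
    return S02 * N * f_k / (k * R**2) * np.exp(-2.0 * k**2 * sigma2) * np.sin(2.0 * k * R + phase)

# Hypothetical Mn-O shell resembling the fitted values: N = 6, R = 2.26 A
k = np.linspace(2.0, 10.0, 400)   # the k-range used for fitting in the text, 2-10 1/A
chi = exafs_shell(k, N=6, R=2.26, sigma2=0.008)
```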
The XANES part of the spectrum also provides valuable information. The position of the K-edge can be used to obtain the oxidation state of Mn [35]. Comparing the spectrum with manganese references Mn(BO2)2, Mn2O3, and MnO2 with the oxidation states Mn2+, Mn3+, and Mn4+, respectively, measured on the same beamline, we can see that the edge position coincides with the Mn2+ reference (Figure 4a), which means that most manganese atoms are in the Mn2+ state. We cannot decompose the spectrum into a linear combination of references, since they are irrelevant to the local structure of the YAB specimen; the admixture of manganese in higher oxidation states can be roughly estimated as 10%. In addition, we calculated the XANES spectrum using the FDMNES code [36] with two structural models, corresponding to the manganese atom at the Y and Al sites in the YAB crystal structure, respectively (Figure 4b). The experimental data are reproduced only for Mn at the Y position, which confirms the conclusions of the EXAFS data analysis and is consistent with the EPR data [28]. From this point of view, a smaller Y-O distance than in YAB can be explained by the smaller ionic radius of Mn2+ (0.83 Å) as compared to Y3+ (0.90 Å) [37].
Optical Spectroscopy
Figure 5 shows the photoluminescence (PL) spectrum of YAB:Mn in a broad spectral range. The near-IR luminescence was recorded with the Bruker IFS 125HR Fourier spectrometer, while for the visible part of the PL spectrum an Ocean Insight HDX spectrometer was used. Relative intensities of these two parts cannot be compared. A strong, relatively narrow peak at about 685 nm is observed in the room-temperature low-resolution PL spectrum. Previously, three narrow peaks with maxima at 682, 684, and 686 nm were reported in the room-temperature luminescence spectrum of YAB:Mn, and two of them were attributed to the R1 and R2 lines of Mn4+ [27]. The Mn4+ ion has the same valence electron shell structure as Cr3+ (d3). Narrow R lines in the spectra of d3 ions arise due to spin-forbidden transitions from the excited orbital doublet 2E to the ground orbital singlet 4A2. In a low-symmetry crystal field, the 2E level, which is doubly degenerate in the cubic crystal field approximation, splits into two components, so that the R1 and R2 lines can be observed. We were able to observe peaks at the same wavelengths as in [27], both in the luminescence and absorption room-temperature spectra. However, a more detailed study of the temperature-dependent absorption, PL, and PL excitation spectra led us to the conclusion that those are R lines of an uncontrolled Cr3+ impurity. Figures 6-8 display these spectra. Figure 6 shows the absorption and luminescence spectra of YAB:Mn at low temperature (T = 5 K) in the region of the R lines. The spectra have the form of narrow zero-phonon lines (ZPLs) and broad adjacent bands of electron-phonon (vibronic) transitions. Figure 7a demonstrates the evolution of the R absorption lines with temperature. The wavelengths of the R1 and R2 lines at room temperature, 684 nm and 682 nm, respectively, coincide with those reported for YAB:Cr3+ [38,39].
Figure 7b shows very weak lines of a spin-forbidden transition from the ground state 4A2 to the next excited (after the 2E doublet) level 2T1 in the absorption spectrum of YAB:Mn at 5 K. The excitation spectra of the R lines are presented in Figure 8. All these experimental data allowed us to determine the energies of the 2E, 2T1, 4T2, and 4T1 levels; they are provided in Table 3. The values in Table 3, within the precision of measurements, coincide with those reported for Cr3+ in YAB [38][39][40]. It is a well-known empirical fact that the strength of the crystal field as well as the covalency increases with increased ionic charge [41]. For example, Mn4+ in corundum Al2O3 demonstrates blue shifts of 364, 413, and 2300 cm−1 for the R1, R2, and 4A2-4T2 transitions, respectively, as compared to Cr3+ in Al2O3 (ruby) [41,42]. Both ions substitute for Al3+. We tried to find the R lines of Mn4+ in the spectra of YAB:Mn but failed. It is worth noting that Mn4+ in Al2O3 was introduced together with charge-compensating Mg2+. A broad line at the low-frequency side of the R1 and R2 lines of the uncontrolled Cr3+ impurity (denoted "N" in Figure 7a) noticeably narrows with decreasing temperature and, at low temperatures (T < 100 K), it exceeds in amplitude the R1 and R2 lines. At the temperature T = 5 K, its frequency is 14,571 cm−1. The N line apparently refers to a transition in exchange-coupled Cr3+-containing pairs. A very similar pattern was observed, for example, in the luminescence spectra of isostructural GdAl3(BO3)4 crystals doped with 1% Cr3+ (GAB:Cr3+) [17]. The authors attribute the corresponding transition to the emission from the 2E state of the Cr3+-Cr3+ pairs. In our case, the formation of Cr-Mn pairs could also be possible.
The inset of Figure 7a shows the absorption spectra at the lowest measured temperature (T = 5 K) for two directions of incident light polarization, E||c and E⊥c. The ratio of the amplitudes of the R 1 and R 2 lines is in agreement with the corresponding ratio for YAB:Cr 3+ [38] (namely, I(R 1 )/I(R 2 ) = 1 for E||c, I(R 1 )/I(R 2 ) = 2 for E⊥c), which once again confirms the origin of the observed R lines as stemming from the uncontrolled Cr 3+ impurity. It is also worth noting that we found the same R lines of approximately the same intensity in "pure" YAB crystals grown from the same chemicals in the same laboratory as the YAB:Mn crystals under study. The rest of the spectrum observed for YAB:Mn is absent in YAB, so it is obviously associated with manganese.
EPR measurements revealed Mn2+ ions occupying yttrium-ion sites in YAB:Mn [28]. Although Mn3+ was introduced into the melt solution in the form of Mn2O3, it must be kept in mind that Mn2O3 decomposes in air at T > 800 °C, losing part of the oxygen (6(Mn3+)2O3 = 4Mn2+(Mn3+)2O4 + O2), so that Mn2+ ions appear. The charge-compensation can be realized by uncontrolled impurities such as Ti4+ (see Table 1). Optical spectra of Mn2+ in oxide crystals consist, as a rule, of a single broad band corresponding to the 4T1 → 6A1 transition, which for Mn2+ in the Y3+ position is in the green region of the spectrum [43]. We attribute a broad band peaking at 531 nm (T = 10 K, see Figure 5) to the 4T1 → 6A1 transition of Mn2+ in YAB:Mn.
Mn 3+ was not found in the EPR studies of YAB:Mn [28]. Note, however, that Mn 3+ is a non-Kramers ion and can be studied in some cases only by a special high-frequency EPR technique. Such studies on SrTiO 3 :Mn have shown that Mn 3+ substitutes for the octahedrally coordinated Ti 4+ and forms three distinct types of Jahn-Teller centers that differ by charge-compensation mode [44]. The Mn 3+ ion in octahedral coordination replacing Al 3+ was found in Al 2 O 3 (corundum) [45] and Y 3 Al 5 O 12 (YAG) [20,21]. Below, we discuss the features observed in our spectra of the YAl 3 (BO 3 ) 4 :Mn crystal, which we attribute to the transitions in the octahedrally coordinated Mn 3+ at the Al 3+ site.
Low-temperature luminescence of YAB:Mn in the IR range (9500-6500 cm−1, or about 1050-1540 nm; see Figure 5) consists of relatively narrow (<10 cm−1) ZPLs at 9371, 9388, 9430, and 9435 cm−1 and an adjacent vibronic band. In addition, narrow lines of uncontrolled impurities of Nd and Sm ions known from the YAB:Nd [3] and YAB:Sm [46] spectra are observed in the spectrum. A similar spectral pattern with narrow ZPLs at frequencies of about 9400 cm−1 and a phonon sideband was observed in a number of Mn3+-doped garnets and was associated by the authors with 1T2 → 3T1 transitions between excited triplets [20][21][22]. According to the Tanabe-Sugano diagrams [47], the 1T2 and 3T1 levels have the same dependence on the crystal field, so the energy position of the corresponding transition band is practically independent of the strength of the crystal field. The multiple ZPLs observed in this region of the spectrum are most likely due to both the spin-orbit splitting of the 3T1 level and the orbital splitting of the excited triplets caused by the low-symmetry component of the crystal field.
One more relatively narrow (~80 cm−1) line associated with manganese is observed in the red part of the low-temperature spectrum at 13,160 cm−1 (759 nm) (see Figure 5). It is accompanied by a Stokes vibronic sideband which grows in intensity with rising temperature; simultaneously, an anti-Stokes part appears (see, e.g., [48]). We tentatively assign this line to a transition from the excited orbital triplet 1T2 to the ground Jahn-Teller-split doublet 5E′, 5E″ in Mn3+ [23]. A similar transition (though not as rich in structure) with a peak at 13,700 cm−1 was observed in the low-temperature emission spectrum of Y3Al5O12 (YAG) doped with Mn3+ [20]. The excitation spectrum of the 759 nm PL line is presented in Figure 8c. It shows four bands peaking at 17,450, 22,000, 26,750, and 34,326 cm−1. The bands at 17,450 and 22,000 cm−1 can be related to the spin-allowed transition from the 5E ground state to the excited 5T2 triplet of Mn3+, split by the low-symmetry crystal field, whereas the bands at 26,750 and 34,326 cm−1 are apparently associated with the Mn3+ transitions to the higher-lying states (3E, 3T1) [23].
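The correspondence between the quoted wavenumbers and wavelengths follows from λ [nm] = 10^7 / ν [cm−1]; a quick check of the values above (a small illustrative snippet, not part of the original analysis):

```python
def nm_from_wavenumber(wavenumber_cm):
    """Convert a wavenumber in cm^-1 to the corresponding wavelength in nm."""
    return 1.0e7 / wavenumber_cm

print(nm_from_wavenumber(13160))  # ~760 nm, the line assigned to 1T2 -> 5E
print(nm_from_wavenumber(9400))   # ~1064 nm, the 1T2 -> 3T1 region
```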
The strongest PL band of Mn 3+ -doped crystals has the maximum in the region of wavelengths 620-670 nm [20][21][22][23] and is associated with the spin-allowed transition 5 T 2 → 5 E. We assign a broad strong emission band peaked at 15,853 cm −1 (631 nm) to the 5 T 2 → 5 E transition of Mn 3+ . Taking into account positions of the corresponding PLE bands, we find the mean value of 17,725 cm −1 as the energy of the 5 T 2 state.
Based on the experimental values 17,725 cm−1 (5T2) and 13,160 cm−1 (1T2), as well as the Tanabe-Sugano diagram for the d4 configuration [47], we estimate the crystal-field parameter Dq and Racah parameter B for Mn3+ in YAB:Mn as Dq = 1785 cm−1 and B = 800 cm−1. The energy difference of ~9400 cm−1 between the 1T2 and 3T1 triplets, found from the IR spectra of the 1T2 → 3T1 transition, agrees with these estimates in the framework of the Tanabe-Sugano diagram, which provides additional verification. The value Dq/B = 2.23 is very close to Dq/B = 2.25 found for Mn3+ in garnet-type Ca3Ga2Ge3O12 single crystals [22].
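A sketch of the bookkeeping behind these estimates is given below; it reproduces only the final ratio quoted in the text, not the Tanabe-Sugano matrix diagonalization itself.

```python
# Crystal-field and Racah parameters estimated for Mn3+ in YAB:Mn (values from the text)
Dq = 1785.0   # cm^-1
B = 800.0     # cm^-1

# 10*Dq = 17,850 cm^-1, close to the 5T2 energy of 17,725 cm^-1 deduced from the PLE bands
print(f"Dq/B = {Dq / B:.2f}")   # 2.23, close to the 2.25 reported for Mn3+ in Ca3Ga2Ge3O12 [22]
```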
Conclusions
Using XANES and high-resolution optical spectroscopy, the valence composition of Mn ions in YAB:Mn was determined. According to the EXAFS data, manganese is contained in the crystal mainly in the divalent state Mn2+ (d5), and substitutes for Y3+. This conclusion is in agreement with the EPR results [28]. Luminescence of the Mn2+ ions at the 4T1 → 6A1 transition (a broad band in the green region, near 530 nm) was detected. For charge-compensation reasons, it would be natural to assume that Mn4+ is present in the neighborhood of Mn2+ [27,49]. It was previously shown for a number of aluminates that Mn4+ replaces octahedrally coordinated Al3+ [41,49], which is consistent with the proximity of their ionic radii (0.535 Å for Al3+ and 0.53 Å for Mn4+ [37]). We show that the R lines characteristic of the d3 configuration (Mn4+, Cr3+), observed both in the absorption spectra (4A2 → 2E) and in the luminescence spectra (2E → 4A2) of YAB:Mn, arise not from Mn4+ but from the uncontrolled Cr3+ impurity. We failed to find the spectra of Mn4+.
During crystal growth, Mn3+ was introduced in the form of Mn2O3, so the presence of the Mn3+ ions could be anticipated. In the IR range of the luminescence spectra of YAB:Mn at low temperatures, the spin-forbidden transitions 1T2 → 3T1 and 1T2 → 5E′, 5E″ of Mn3+ (d4) were observed. A broad emission band in the orange spectral range (near 630 nm) is associated with the spin-allowed 5T2 → 5E transition of Mn3+. Using the experimental spectroscopic data and the Tanabe-Sugano diagram for the d4 configuration, we estimated the crystal-field parameter Dq and Racah parameter B for Mn3+ in YAB:Mn.
Further studies are needed to evaluate the application potential of YAB singly doped with manganese or co-doped with chromium. Our work can serve as a basis for these studies.
"Materials Science",
"Physics"
] |
Experimental investigation on stability of an elastically mounted circular tube under cross flow in normal triangular arrangement
Experimental investigations of flow perturbations coupled with tube vibrations along the interstitial flow path are presented. A normal triangular tube array with a pitch ratio of 1.85 was studied at different operating conditions. Interstitial flow perturbation measurements along the flow path were recorded by means of a hot-wire probe while monitoring the tube vibration in the streamwise and cross-flow directions. A single flexible tube located in the centre of a rigid array was equipped with pressure transducers to observe the surface pressure deviation. The amplitude of the flow perturbation and its phase with respect to the tube vibration were acquired at a number of positions along the flow path in the array. The effects of tube vibration amplitude, mean gap velocity, vibration frequency, and measurement position of the hot-wire probe on the amplitude of the flow perturbation and the relative phase were examined. It is observed that the perturbations of the fluid flow are primarily evident at the position of separation of the fluid flow from the test tube and decay swiftly with distance from this position. It shows that the time delay between tube vibration and perturbation of the flow is associated with separation of the flow and enhanced vortices resulting from the tube vibration.
Introduction
The fluid flow over circular cylindrical structures has been widely investigated by many researchers, especially for tubes employed in heat exchangers, wind turbines, etc., with the aim of controlling the vortex shedding and thus reducing the flow-induced vibration, which remains a challenging problem in the field of fluid dynamics. Tube bundles are generally employed in lateral heat exchangers, and the design parameters of heat exchangers still depend on experimental correlations for heat transfer and pressure drop, which have limitations and are of uncertain accuracy. The shell-and-tube heat exchanger is the most adaptable type of cross-flow exchanger [1] and one of the most widely employed types in the power and processing industries. Indeed, it accounts for more than 85 % of newly supplied heat exchangers to the chemical, petrochemical, oil refining, and power companies in leading European nations [2]. Hence, better design and well-organized operation of such equipment can lead to significant energy savings. Aside from the heat transfer, the design should also account for vortex-induced vibrations. Apart from the instability of the shear layers, the large-scale vortex shedding and the different wake interaction mechanisms are significant factors for larger-amplitude vibrations or structural resonance in arrangements of structures, causing severe structural failures, which lead to loss of production and higher repair costs. As a result, trustworthy numerical methods for evaluating existing designs (or developing new ones) are urgently required in order to avoid expensive and wide-ranging experimentation. Structures subjected to cross flow in different configurations, which experience vibration excitation and noise generation during operation depending mainly on their design aspects, were investigated and briefly described in [3,4]. Refs. [5,6] investigated the excitation behavior of circular cylinders arranged in isolated, tandem, and staggered arrangements, whereas [7] experimentally investigated curved cylinders arranged in convex and concave configurations. The normal triangular arrangement [8], inline square arrangement [9][10][11], and parallel triangular arrangement [9,11,12] have been investigated for the characteristics of vortex shedding behavior inside several standard triangular arrangements; three vortex shedding frequencies were identified for an array with pitch-to-diameter ratio P/D = 2.08 over a range of Reynolds numbers (22,200 ≤ Re ≤ 45,000). The high frequency was coupled with the cylinders placed in the first rows, and the lower frequency was associated with the cylinders in the downstream rows, while the third frequency was the consequence of the nonlinear interaction between the above two and was precisely equal to their difference. They also verified that the multiple-frequency nature of vortex shedding observed for the first few cylinder rows depends mostly on the Reynolds number (Re). Ref. [8] performed experiments in normal triangular tube arrays for Reynolds numbers in the range 760 ≤ Re ≤ 49,000 and identified that, for pitch-to-diameter ratios P/D > 2.0, a lower-frequency vortex shedding is observed from the second row than from the first row.
Ref. [11] performed wind tunnel experiments on inline square arrays with pitch-to-diameter ratios of 1.21 < P/D < 2.83 and, with the exception of the smallest pitch ratio array, also identified two different vortex shedding frequencies in the first and second rows of the tubes. Vortex shedding, acoustic resonance, fluidelastic instability, and multi-phase buffeting are the four mechanisms recognized to cause extreme vibration excitation of tube bundles employed in high-pressure applications such as heat exchangers. Among these critical vibration excitation mechanisms, fluidelastic instability is considered to have the most significant impact on the induced vibration excitation of tube bundles. Fluidelastic instability mainly occurs when the flow of fluid across the circular tube bundles exceeds the critical flow velocity; the transfer of energy from the fluid to the tube bundles then drives increasingly high vibration amplitudes [13,14]. Such high vibration amplitudes can lead to sudden failures that are extremely costly and significantly hazardous, particularly in the steam generators of nuclear power plants [15]. In the present study, an elastodynamic model is employed to understand the nature of the response of a heat exchanger tube bundle made of the same material with similar dimensions. Studies of an elastic tube in a bundle have proven to be relevant not only academically but also for design. Ref. [16] assessed that upstream cylinders had a greater influence on the amplitude response than the cylinders downstream of the test cylinder. Ref. [17] showed that adjacent tubes have a significant effect on the stability of the flexible tube, even though they are rigid. Additionally, based on experimental verification and their theoretical model, Refs. [18] and [19] show that a single flexible tube becomes unstable at nearly the same velocity as in a fully flexible array. In a comprehensive review, this observation appears to be consistent, especially for the parallel triangular array configuration. The tube located in the middle of the third row is the most prone to instability. Moreover, the parallel triangular array becomes unstable at lower reduced velocities than other arrays [20]. Hence, its stability threshold is a conservative value for all the other bundles. Further, optimized outcomes related to the stability of the structures can be analyzed by employing swarm and genetic optimization techniques [21].
In the present study, a single flexible tube located in the centre of square-shaped tubes arranged in a normal triangular pattern is equipped with pressure transducers to observe the surface pressure deviation. The amplitude of the flow perturbation and its phase with respect to the tube vibration were acquired at a number of positions along the flow path in the array. The effects of tube vibration amplitude, mean gap velocity, vibration frequency, and measurement position of the hot-wire probe on the amplitude of the flow perturbation and the relative phase were examined.
Experimental technique
The experimental investigations were carried out in an open-type, low-turbulence (0.1 % turbulence intensity) wind tunnel at the Fluid Flow Laboratory of the Indian School of Mines, Karthik Selva Kumar and Kumaraswamidhas (2015). The wind tunnel has a test section of 0.3 m × 0.3 m × 1.0 m, with a maximum wind speed of up to 75 m/s. The temperature of the fluid flow was 22 °C. Upstream of the 9:1 contraction, a layer of honeycomb structures and several screens were installed to decrease the turbulence. A Pitot tube was fitted to monitor the flow velocity of the fluid at the inlet. The flow velocity distribution in the test section area, excluding the boundary layer, is found to be uniform within 1.1 %.
Fig. 1. Wind tunnel test experimental setup
The elastically mounted tube is made of copper with diameter D = 18 mm and is surrounded by square-shaped tubes having a span of 150 mm used as adjacent tubes; the ratio of the tube centre-to-centre spacing to the tube diameter (P/D) is 1.85. Apart from the elastically mounted circular tube at the centre, the adjacent tubes were mounted rigidly in the normal triangular pattern. To visualize the flow characteristics in the region of the test tube surface, a smoke generator with a small orifice was positioned half a tube diameter upstream. The orifice of the smoke generator was positioned at the mid-distance between the tubes and at about the height of the tube axis. As shown in Fig. 2, unlike the adjacent tube bundles, the test tube located at the centre is outfitted with two accelerometers.
Data acquisition and analysis
The instrumentation arrangement consists of two accelerometers manufactured by B&K with a sensitivity of 100 mV/sec and a frequency response of 0.3-8 kHz. The two accelerometers were mounted at each edge of the test tube, which is elastically mounted in the centre of the array.
The orientation of the accelerometers was chosen so that they are sensitive to the vibration excitation of the tube in the streamwise (x) and cross-flow (y) directions. The output from each accelerometer was evaluated using a B&K dynamic signal analyzer. The obtained streamwise (x) and cross-flow (y) displacement signals were digitized simultaneously at a sampling rate of 192K samples/sec by a 24-bit ADC, and the B&K signal analyzer was attached to a computer to store the data for further processing and to display the amplitude spectra of the test tube throughout the experimental investigation. Prior to measuring the vibration excitation of the test tube, the voltage response from the data acquisition and signal analyzer system was calibrated against the vibration amplitudes. During calibration, the test cylindrical copper tube with the two accelerometers at each edge was given a spherical motion. A typical time series of displacements in the streamwise (x) and cross-flow (y) directions is shown in Fig. 3(a). The amplitudes of the sinusoidal output voltages for the streamwise (x) and cross-flow (y) directions were recorded and then plotted against the vibration amplitude. As shown in Fig. 3(b), the calibration records for both the streamwise (x) and cross-flow (y) directions fit a straight line A = aV + b, where A represents the vibration amplitude and V represents the voltage output. Curve-fitting gives the values of the coefficients a = 0.11 and b = 0.008. It is thus shown that the voltage outputs represent the vibration amplitudes linearly over the investigated range. To characterize the magnitude of the test tube vibration, the root mean square value of the vibration amplitude is expressed in dimensionless form by normalizing with the tube diameter D, where the rms values of the tube displacement in the x and y directions are denoted x_rms and y_rms, respectively. The tube displacements were obtained by digitizing the output voltages from the measurement system, which converts the tube displacement into a voltage signal. The finite precision of the vibration measurement and data acquisition system is a source of uncertainty in the measurement of the tube displacement in the x and y directions. The basic parameters for the experimental analysis are summarized in Table 1.
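The displacement calibration described above is an ordinary linear least-squares fit of amplitude against output voltage; a minimal sketch with hypothetical calibration points placed on the reported line (coefficients a = 0.11 and b = 0.008 are taken from the text, the voltage values are illustrative):

```python
import numpy as np

# Hypothetical calibration pairs: output voltage (V) vs. known vibration amplitude
voltage = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
amplitude = 0.11 * voltage + 0.008           # synthetic data lying on the reported calibration line

a, b = np.polyfit(voltage, amplitude, 1)     # straight-line fit A = a*V + b
print(f"A = {a:.3f} * V + {b:.3f}")

def displacement_to_dimensionless(x_rms_mm, d_mm=18.0):
    """Normalize an rms displacement (mm) by the tube diameter D = 18 mm."""
    return x_rms_mm / d_mm
```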
Vibration amplitude
The focus of the present study is to investigate the response of an elastically mounted test tube with respect to the adjacent tubes placed in a normal triangular arrangement and the natural frequency of the test tube with respect to its vibration excitation. This section presents the measurements of the perturbation amplitude of the test tube all along the flow channel in the tube array. The investigation is performed using a single flexible tube located in the centre of a rigid arrangement of tubes. In the wind tunnel, the fluid flow velocity was steadily increased to the desired value and kept undisturbed for about 25 s to attain a steady state. Subsequently, the hot-wire probe was moved through four different locations (a, b, c and d) to gather the measurements. The fluid flow velocity was increased in ascending order and the process was repeated until the test tube finally attained the state of fluidelastic instability. The rms amplitude response of the test tube is obtained from its motion in the x and y directions at different flow conditions, as illustrated in Fig. 4. It is observed from the amplitude response of the test tube in each case that the vibration responses in the streamwise and cross-flow directions are characterized by two curves of similar pattern with respect to the increase in the flow velocity of the fluid. A steady rise in the response of the test tube in each case is observed to be mainly coupled with the increase in the mean gap velocity up to about 22.5 m/s, where the amplitude of the test tube is about 1 % of the tube diameter (D). A noteworthy feature in each case is that the amplitude response of the elastically mounted test tube becomes fluidelastically unstable at a velocity of 24 m/s. Due to the onset of fluidelastic instability with respect to the fluid flow velocity, a steep increase in the vibration excitation of the test tube is observed. Besides that, the minimal vibration amplitude observed below this point can be attributed to the presence of turbulence, given that the monitored section is in the wake of the upstream cylinders. The response of the test tube at the critical flow velocity can be summarized as follows: the fluid transfers substantial energy to the tube when the flow velocity is high enough to excite the test tube to a certain amplitude at its natural frequency. As a result of the gradual increase in the flow velocity, the vibration amplitude is observed to grow and consequently leads to a sustained excitation of the tube. At 24 m/s, the vibration excitation is observed to be fluidelastically unstable (rms amplitude of about 0.245 of the tube diameter), as shown in Fig. 4(d). A significant enhancement in the rms amplitude/vibration excitation is observed for the elastically mounted circular tube in the normal triangular arrangement.
Spectral response
Further, the spectral measurements showed a slight increase in the peak frequency with increasing fluid flow velocity, as shown in Fig. 5. The related Strouhal number based on the mean gap velocity is St = 0.24, which agrees reasonably well with previous research on circular tubes/cylinders. The hot-wire probe was employed to observe the mean flow velocity and turbulence level at each location index (a, b, c and d). Fig. 5 shows the frequency spectrum resulting from 50 sample averages for both cases. The velocity perturbation amplitude is taken to be the magnitude of the peak in the frequency spectrum at the natural frequency of the tube. Besides that, similar power spectra were observed regardless of the inlet fluid flow velocity once the velocity was high enough. The observed velocity spectra do exhibit multiple peaks, of which the highest develops only near the natural frequency of the test tube (89 Hz), which signifies that the fluid flow fluctuates at the natural frequency of the tube. Owing to the difference between the natural frequency and the vortex shedding frequency, volatile vortices develop, which further results in a higher excitation of the test tube. Accordingly, the vibration excitation in the elastically mounted test tube is a consequence of the existence of fluidelastic instability. From the behavior of the vibration excitation of the test tube shown in Fig. 4, it is observed that the vibration amplitude increases steadily with the rise in the fluid flow velocity. According to Blevins, vibration occurring at sub- or super-harmonics of the vortex shedding frequency is normally identified by the presence of a single peak or of multiple peaks. Further, the flow and the test tube oscillate close to the natural frequency, roughly independent of the inlet fluid flow velocity. The resonance of the test tubes throughout the mixing of the fluid makes it evident that all the tubes have identical natural frequencies.
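The quoted Strouhal number ties the spectral peak to the mean gap velocity through f_s = St·U_g/D; a small check with the array's tube diameter is shown below (the velocity value is chosen only for illustration, not quoted in this section):

```python
St = 0.24      # Strouhal number based on the mean gap velocity (from the spectra)
D = 0.018      # tube diameter, m
U_g = 6.6      # m/s, an illustrative mean gap velocity

f_shedding = St * U_g / D
print(f"Expected vortex-shedding frequency: {f_shedding:.0f} Hz")  # ~88 Hz, near the 89 Hz tube natural frequency
```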
Response with respect to hot wire probe positioned at multiple points in the flow regime
The amplitudes of the flow perturbation coupled with the vibration of the test tube along the fluid flow path are shown in Fig. 6(a) for a tube vibration amplitude of 1 % D. The position of the measurements is described using curvilinear coordinates which follow the edge of the flow path along the streamwise direction and take a value of zero at the cross-flow contour through the centre of the vibrating tube. Positive coordinates represent the downstream direction, while negative coordinates represent upstream. The position of the hot-wire probe, indexed as s*, i.e., the measurement position scaled with the test tube diameter (D), is shown in Fig. 6(b). The observations were taken as near to the vibrating tube as possible, consistent with ensuring that the tube could never collide with the hot-wire probe. It is evident in Fig. 6 that there is a sharp peak in the rms perturbation velocity amplitude at position c, which coincides with the position at which flow separation occurs on the test tube. This is the position at which vortices are generated in the boundary layer of the test tube before being shed into the wake. There is in addition a much smaller peak close to point b, which corresponds to the position of flow attachment on the tube. The rms velocity perturbation amplitudes decay very swiftly upstream and downstream of position c, reaching very small values at about s* = -1.0. The downstream disturbance appears to stabilize at higher values near s* = 1.0. These observations imply that the flow disturbances are associated with the development of vortices along the vibrating tube, which supports the hypothesis of Granger and Paidoussis (1996). The disturbances decay swiftly away from their point of origin, which agrees with the experimental observation of Tanaka and Takahara (1981) that the flow disturbances were essentially confined within the rows of the vibrating tube. It is important to recognize that these outcomes do not establish a direct causal relationship between the assumed vortex mechanism and the time delay. However, there are two good reasons to consider this the most plausible mechanism among the existing candidates, as discussed in the introduction. First, the disturbances related to the vibration of the test tube, measured through the array of tubes, are clearly strongest at the position of separation of the fluid flow from the test tube and decay with distance from that position, i.e., the position at which vortices generated in the boundary layer of the tube are shed into the tube wake at the frequency of the vibrating test tube. Second, the vortex mechanism rests on a testable physical hypothesis, which is not the case for the flow-inertia mechanism. The final two mechanisms were based on heuristic arguments which appeal to the desire for physical insight but lack what is regarded as an indispensable foundation in the theory.
Response with respect to vibration excitation of the test tube
Having determined that the vibration-induced flow disturbances are largest in the vicinity of the position of flow separation from the tube and decay swiftly away from that position, it is of interest to determine the effect of the tube vibration amplitude on this behavior. The experiment described above was therefore repeated for three different rms amplitudes of the test tube: 0.5 % D, 1.0 % D, and 5 % D. To ensure that the hot-wire probe was not damaged, the measurements were taken along the centerline of the fluid flow lane past the vibrating test tube. The outcomes of the trials are illustrated in Fig. 10 for the normal triangular arrangement, where the flow disturbance velocity has been normalized with the velocity of the test tube. It is notable that the peak in the flow disturbance velocity has moved slightly downstream, reflecting the fact that this measurement position is at the middle of the flow path, well beyond the vibrating tube. The general behavior of the disturbance velocity ratio as a function of distance from the vibrating tube is the same as that observed in Fig. 7. The collapse of the data for the three different vibration amplitudes of the test tube is striking, particularly in the upstream region. In view of the fact that the tube vibration frequency was identical in all experiments, this implies that the flow disturbance velocity amplitudes scale linearly with the amplitude of the vibrating tube, at least over the range from 0.5 % D to 5 % D. The measurements of the phase lag are discussed in the subsequent sections.
Quantification of turbulence
The experimental investigation is performed in a wind tunnel, where the fluid flow velocity was increased incrementally to the desired value and left for about 30 s to reach a steady state. Then, the hot-wire probe was traversed through all locations to collect measurements. The flow velocity was increased incrementally and the procedure was repeated until the tube became fluidelastically unstable. The hot-wire time signal was used to obtain the mean flow velocity and turbulence level at each location, as shown in Fig. 8(a), while the averaged frequency spectrum of the hot-wire measurements was used to measure the flow perturbation amplitude, as shown in Fig. 8(b). The spectra used were the results of 50 sample averages, and the velocity perturbation component is the rms amplitude of such a spectrum at the tube natural frequency. The upstream turbulence level in the wind tunnel is less than 1 %, which means that flow disturbances caused by tube motion in the very early tube rows should be detectable above the turbulence. The present arrangement supports such an investigation. Further, over a range of mean gap velocity of 2-5 m/s, the rms response of the test tube is small, typically less than 0.21 % of the tube diameter. The excitation mechanism in this region is turbulent buffeting, and the very small amplitudes can be attributed to the low turbulence level. At about 6.6 m/s, the rms vibration amplitude in the transverse direction suddenly increased to about 3.5 % of the tube diameter and the tube is considered to have become fluidelastically unstable. At about 6.5 m/s, the rms response of the test tube fell slightly, despite the fact that violent vibrations of the adjacent tubes in the array were observed, and the experiment was terminated at 12.4 m/s.
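The turbulence level quoted from the hot-wire time signal is the usual ratio of the rms velocity fluctuation to the mean velocity; a minimal sketch with a synthetic signal (the mean velocity, fluctuation level, and sample count are assumptions for illustration):

```python
import numpy as np

def turbulence_intensity(u):
    """Turbulence level Tu = u_rms / U_mean from a hot-wire velocity time series."""
    u = np.asarray(u, dtype=float)
    u_mean = u.mean()
    u_rms = np.sqrt(np.mean((u - u_mean) ** 2))
    return u_rms / u_mean

# Synthetic example: 6.6 m/s mean flow with ~1 % random fluctuations
rng = np.random.default_rng(0)
u = 6.6 + 0.066 * rng.standard_normal(50_000)
print(f"Tu = {100 * turbulence_intensity(u):.2f} %")
```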
Response with respect to turbulence level
It is anticipated that identifying the coherent flow structure of the fluid will become more difficult as the flow turbulence rises; as a consequence, the effect of turbulence on the coherence achieved at any particular measurement position in the array was examined.
Recall that the coherence represents the dependency between two signals, and varies from 0 for two uncorrelated signals to 1 for two fully correlated signals. In the current study, the term refers to the coherence at the frequency of the tube vibration, and phase measurements at this frequency are considered only for a coherence level > 0.5. The hot-wire probe was moved in the cross-wise direction through the measurement positions shown in Fig. 6; at each measurement position, the vibration response of the test tube, the flow disturbance velocity, and the turbulence level were measured, and the phase between the tube vibration and the disturbance velocity as well as the coherence between the measurements were evaluated. The flow velocity was increased steadily, and the measurement process was repeated until the tube became fluidelastically unstable. It was established that the ratio of the flow velocity perturbation amplitude to the turbulence level, as demonstrated in Fig. 5, strongly affects the coherence between the vibration of the tube and the velocity perturbation, as shown in Fig. 9. As this ratio increased from 0.1 to 0.2, a dramatic enhancement in the coherence level was identified. It was established that, in order to achieve consistent phase measurements, the flow velocity perturbation amplitude should be higher than about 15.5 % of the turbulence level, which corresponds to a coherence greater than about 0.6.
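The coherence criterion used here can be evaluated with a standard spectral estimate; a sketch using scipy is given below. The sampling rate is taken from the ADC rate quoted earlier, while the segment length, signal length, phase, and noise level are assumptions made only so the snippet runs, not values from the experiments.

```python
import numpy as np
from scipy.signal import coherence

fs = 192_000          # assumed sampling rate, samples/s (the ADC rate quoted above)
f_tube = 89.0         # tube natural frequency, Hz

# Placeholder signals: tube displacement and hot-wire velocity sharing a component at f_tube
t = np.arange(0, 5.0, 1.0 / fs)
y_tube = np.sin(2 * np.pi * f_tube * t)
u_hotwire = (0.1 * np.sin(2 * np.pi * f_tube * t + 0.7)
             + 0.05 * np.random.default_rng(1).standard_normal(t.size))

f, Cxy = coherence(y_tube, u_hotwire, fs=fs, nperseg=2**16)
coh_at_tube_freq = Cxy[np.argmin(np.abs(f - f_tube))]
print(f"Coherence at {f_tube} Hz: {coh_at_tube_freq:.2f}  (phase kept only if > 0.5)")
```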
Response with respect to tube vibration frequency
The effect of the tube vibration frequency was examined by determining the relative phase between the tube vibration velocity and the flow perturbation at a fixed position for several tube natural frequencies, while keeping the vibration amplitude of the tube at a constant value. Maintaining the response amplitude of the tube at a constant value of 0.8 % D was accomplished by adjusting the mean flow velocity. It was established that increasing the natural frequency of the tube requires a higher mean gap velocity to attain the same rms amplitude response, as shown in Fig. 10. The correlation between velocity and frequency for a fixed rms amplitude response of the tube is linear. Besides that, when the mean gap velocities were scaled with the corresponding tube frequencies, the reduced velocities (U/fD) were found to be almost constant. With the rms amplitude response of the tube kept constant, the effect of varying the natural frequency of the tube on the phase angle between the tube vibration and the associated flow perturbations was then examined, as shown in Fig. 10.
It is noted that the relative phase was nearly constant for all the tested tube vibration frequencies. To further examine the effect of the tube natural frequency, the time delay between tube vibration and flow perturbation was calculated. This time delay is related to the relative phase by τ = (φ/360°)T = φ/(360° f), where τ is the time delay between tube vibration and flow perturbation, φ is the relative phase angle, T is the signal period, and f is the tube natural frequency. When the phase measurements are converted to time delays in milliseconds (ms), it is found that the time delay decreased as the vibration frequency increased. The time delay can be normalized by means of the mean gap velocity U and the tube diameter D, and the resulting dimensionless time scale is τU/D. The effect of the tube vibration frequency on this time scale is shown in Fig. 12. It is observed that the scaled delay time τU/D increased linearly with the tube vibration frequency, with a goodness of fit of 0.9586 for the normal triangular arrangement. This behavior is associated with the increase in mean gap velocity required to attain a constant vibration amplitude at all frequencies. It is also noted that, as the tube frequency increased, the coherence level decreased. This is associated with the increase in the turbulence level with increasing mean gap velocity.
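The conversion from measured phase to a dimensionless delay described above is a two-step calculation; a small sketch with an illustrative phase value follows (the phase angle, gap velocity, and diameter used here are example inputs, not measured values from the paper other than the 89 Hz frequency and 18 mm diameter):

```python
def delay_from_phase(phase_deg, f_tube_hz):
    """Time delay (s) corresponding to a relative phase angle at the tube frequency:
    tau = (phi / 360) * T = phi / (360 * f)."""
    return phase_deg / (360.0 * f_tube_hz)

def dimensionless_delay(tau_s, U_g, D):
    """Scale the delay with the mean gap velocity and tube diameter: tau * U_g / D."""
    return tau_s * U_g / D

# Illustrative numbers only: 45 deg phase lag at the 89 Hz tube frequency, U_g = 6.6 m/s, D = 18 mm
tau = delay_from_phase(45.0, 89.0)
print(f"tau = {1e3 * tau:.2f} ms, tau*U/D = {dimensionless_delay(tau, 6.6, 0.018):.2f}")
```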
Response with respect to phase measurements along the flow channel
To assist the examination of the disturbance propagation associated with the tube vibrations, the relative phase was measured all along the flow channel illustrated in Fig. 10 using two experimental procedures. The first method is referred to as "fixed position", in which the hot-wire probe was set at one position and the flow velocity was increased progressively until the tube became fluidelastically unstable. Then, the probe was moved to the subsequent measurement position and the process was repeated. The second procedure is referred to as "steady velocity", in which the velocity of the fluid flow was set to a constant value while the hot-wire probe was traversed through all positions to accumulate the measurements.
The velocity was then increased gradually and the hot-wire probe was traversed again through all positions. As anticipated, both procedures produced indistinguishable results within the experimental uncertainty. The outcomes of these tests are summarized in Fig. 12, where the normalized time delay is plotted as a function of position along the centre of the flow channel for four different vibration amplitudes of the test tube. Several features are apparent. First, the results for different flow velocities and tube vibration amplitudes collapse very well, except at the upstream positions where the distances from the tube are larger than the test tube diameter. This exception is most likely due to the very small perturbation velocity amplitudes observed there, as shown in Fig. 10, which are associated with lower coherence and increased experimental uncertainty. Second, the time delay shows a nearly linear increase, both upstream and downstream, from a value close to zero at point c, where the peak in the velocity perturbation amplitude is associated with flow separation caused by the tube vibration. This supports the proposition that the source of the velocity perturbations at the tube natural frequency is the generation of vortices by the vibrating tube. The slope of this curve determines the rate at which the flow perturbations propagate away from the perturbation source. With the normalized delay written as tau U_g / D and the normalized position as x/D, this slope S is S = d(tau U_g / D) / d(x/D) = U_g (d tau / dx), which can be written as S = U_g / U_p, the ratio of the mean gap flow velocity to the propagation velocity of the perturbation. Examining the perturbation propagation downstream in Fig. 13, it appears that the propagation time (the slope of the curve) is linear from the source at point c to point d at the minimum gap between the tubes downstream. The flow visualization of vortex shedding in a rigid tube array by Polak and Weaver (1995) demonstrated that the growth of coherent vortex structures in the early tube rows of an array was complex and strongly dependent on the reduced flow velocity.
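As an illustration of how the propagation velocity can be extracted from the normalized delay data, the sketch below fits a straight line to hypothetical tau U_g / D versus x/D points and inverts the slope using S = U_g / U_p. It is a schematic of the procedure described above, with made-up data, not the reported measurements.

```python
import numpy as np

# Hypothetical normalized measurement positions x/D and normalized delays tau*Ug/D
x_over_d = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
tau_scaled = np.array([0.02, 0.55, 1.08, 1.61, 2.10, 2.66])

# Least-squares slope S = d(tau*Ug/D)/d(x/D); since S = Ug/Up,
# the perturbation propagation velocity is Up = Ug / S.
slope, intercept = np.polyfit(x_over_d, tau_scaled, 1)
u_gap = 1.5                      # hypothetical mean gap velocity, m/s
u_prop = u_gap / slope

print(f"slope S = {slope:.3f}, propagation velocity Up = {u_prop:.3f} m/s")
```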
Conclusions
The main observations drawn from this investigation are as follows. The highest amplitude of the flow perturbations at the tube natural frequency occurred close to the position of flow separation from the test tube, suggesting that the perturbations arise from the disturbance of the separating flow and the development of vortex shedding along the boundary layer of the test tube. The amplitude of the flow perturbations associated with the vibration excitation of the tube was observed to decay rapidly upstream and downstream of the flow separation point. The perturbation propagation velocity away from the flow separation position, as determined from the phase measurements within the array, scales approximately with the interstitial gap flow velocity over the range of gap velocities considered, and it appears to be somewhat lower in the upstream direction. The coherence between the tube vibration and the flow perturbations at the tube natural frequency falls sharply when the measured average peak perturbation velocity drops below about 20.98 % of the root-mean-square turbulence velocity. In addition, the amplitude of the flow perturbation velocity is observed to scale directly with the vibration amplitude of the test tube.
Fig. 2. A brief sketch of the data acquisition, signal measurement and analysis setup
Fig. 3. a) Displacement in the streamwise and cross-flow directions, b) output voltage versus vibration amplitude in the streamwise and cross-flow directions for a single flexible tube. Karthik Selva Kumar and Kumaraswamidhas (2015)
Fig. 4. Vibration excitation of the elastically mounted circular tube in the normal triangular arrangement
Fig. 5. Spectral response with respect to the test tube vibration frequency for different flow velocities at 1 % of position index (c) for the normal triangular array
Fig. 6. a) Flow velocity perturbation amplitude at different positions for the normal triangular arrangement, b) measurement positions of the hot-wire probe
Fig. 7. Scaled flow velocity perturbation amplitude measured at various positions along the flow channel
Fig. 8. Hot-wire measurement signal, tube vibration amplitude 1 %: a) hot-wire time signal representing the turbulence level, b) corresponding frequency spectrum
Fig. 9. Effect of turbulence level on the coherence between the fluid flow perturbation index and the tube vibration for the normal triangular arrangement
Fig. 10. Variation in mean gap velocity with respect to the natural frequency of the test tube, required to maintain a stable vibration amplitude, for the normal triangular arrangement
Fig. 11. Phase lag between the fluid flow perturbation index and the tube vibration (tube vibration amplitude 0.8 %) for the normal triangular arrangement
Fig. 13. Normalized time delay as a function of measurement position at 89 Hz, for various vibration amplitudes, for the normal triangular arrangement | 7,588 | 2016-05-15 | ["Physics", "Engineering"] |
Dynamic and Heterogeneity of Urban Heat Island: A Theoretical Framework in the Context of Urban Ecology
The dynamics and heterogeneity of the urban heat island (UHI) are the result of the interactions between biotic, physical, social, and built components. Urban ecology, as a transdisciplinary science, can provide a context to understand complex social–biophysical issues such as the thermal environment in cities. This study aimed at developing a theoretical framework to elucidate the interactions between the social–biophysical patterns and processes mediating UHI. To this end, we conducted a theoretical review to delineate UHI complexity using the concept of dynamic heterogeneity of pattern, process, and function in the UHI phenomenon. Furthermore, a hypothetical heterogeneity spiral (i.e., driver-outcome spiral) related to the UHI was conceived as a model template. The adopted theoretical framework can provide a holistic vision of the UHI, contributing to a better understanding of UHI’s spatial variations in long-term studies. Through the developed framework, we can devise appropriate methodological approaches (i.e., statistic-based techniques) to develop prediction models of UHI’s spatial heterogeneity.
Introduction
Urban regions consist of human and natural components that constantly change due to complex interactions within and between biophysical and social systems [1][2][3]. Therefore, these changes lead to the formation of unique landscapes, characterized by an extraordinary variety of land uses [4,5], which affect the surface-atmosphere energy balance and urban thermal environment (UTE) [6,7]. Dense urban settings tend to be significantly warmer than nearby rural areas, a phenomenon known as the urban heat island (UHI) [8,9]. The UHI phenomenon exerts impacts on human heat-related health and comfort, particularly during heat waves [10,11]; moreover, it affects energy consumption, water quality, carbon dioxide emissions, and air pollution [12][13][14][15][16][17]. Due to health and environmental concerns, the UHI effect has aroused widespread attention in recent decades, leading to a cumulative body of research aiming to explore its drivers, formation, and consequences [18][19][20].
The spatial pattern of UHI is commonly retrieved from the thermal data of satellite images, such as the Landsat 8 thermal infrared sensor (TIRS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data, in the form of land surface temperature (LST) [21]. To capture the LST of heterogeneous surfaces (i.e., various land uses) within an urban landscape, unmanned aerial vehicles (drones) have been introduced to retrieve LST at sub-meter spatial resolution. Their high spatial and temporal resolutions are highly advantageous for evaluating the variability of LST at fine spatial and temporal scales in a heterogeneous urban system [22,23].
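For readers unfamiliar with the retrieval step, the following is a deliberately simplified sketch of how a temperature proxy and a vegetation index are typically computed from Landsat 8 data: digital numbers from the thermal band are converted to top-of-atmosphere brightness temperature, and NDVI is computed from the red and near-infrared bands. It omits the emissivity and atmospheric corrections required for true LST, and the calibration constants shown are placeholders that in practice are read from the scene's MTL metadata.

```python
import numpy as np

# Placeholder radiometric calibration constants; real values are read from
# the Landsat 8 scene's MTL metadata file.
ML, AL = 3.342e-4, 0.1           # band-10 radiance rescaling gain/offset
K1, K2 = 774.8853, 1321.0789     # band-10 thermal conversion constants

def brightness_temperature(dn_band10: np.ndarray) -> np.ndarray:
    """TOA brightness temperature (K) from band-10 digital numbers."""
    radiance = ML * dn_band10 + AL
    return K2 / np.log(K1 / radiance + 1.0)

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-9)

# Tiny synthetic arrays standing in for image tiles
dn10 = np.array([[22000, 25000], [27000, 30000]], dtype=float)
red = np.array([[0.08, 0.12], [0.20, 0.25]])
nir = np.array([[0.45, 0.40], [0.30, 0.28]])

print(brightness_temperature(dn10).round(1))
print(ndvi(red, nir).round(2))
```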
Due to a wide diversity of socio-economic and biophysical intertwining drivers and outcomes, the UHI is a complex issue to study [13,[24][25][26][27][28]. In addition to the complex interactions, Cadenasso et al. [29] argued that the extreme complexity of urban issues arises from the spatial attribute (i.e., configuration and composition) of urban mosaic patches and their temporal changes. Similarly, the configuration and composition attributes of urban patches (i.e., green and built-up patches) affect the thermal environment [30][31][32][33][34].
The science of urban ecology deals with the complex social-biophysical issues of cities [35,36] and investigates the interactions between complex biological systems, built structures, and human actions [37][38][39]. Considering the principle of urban ecology that ecological (biophysical) patterns and processes affect ecosystem services [39], it gives an insight into how urban ecology would be beneficial in investigating the UHI. Urban ecology as a transdisciplinary science assists society in moving toward sustainability and resilience [40][41][42]. It focuses on spatial-temporal patterns of urbanization and how they affect social-ecological processes and functions, ecosystem services, human wellbeing, and urban sustainability [40,43,44]. McPhearson et al. [35] argued that urban ecology provides a robust and holistic approach to the study of cities, helping the decision-makers to understand the complex relationships among social, ecological, economic, and technological systems. Therefore, developing theoretical and empirical studies related to the different issues in the context of urban ecology is essential [35]. In urban ecology, the pattern of an urban area is considered to be spatially heterogeneous and to have an influence on ecological processes [45]. These processes can be investigated by considering three broad realms: the flow of material and energy, biotic performance, and human actions [46]. UHI, as a result of social-biophysical interactions, is a spatially heterogeneous and temporally dynamic phenomenon. Then, urban ecology can give a new insight into investigating UHI.
Landscape ecology, as a holistic transdisciplinary science [47,48], explicitly emphasizes spatial composition and configuration, and its consequences on biophysical processes like biogeochemical fluxes and socio-economic processes [49][50][51]. Recently, urban landscape ecology [50,52] as the invention of landscape ecology and urban ecology [52], provides an appropriate context for understanding the formation, the effects of spatial and dynamical heterogeneity, and the relationship between landscape patterns (i.e., land cover/land use composition and configuration) and biophysical and socio-economic processes in multiple scales of time and space [52]. According to the urban landscape ecology, the compositional and configurational attributes like connectivity, distance from green area, shape characteristics, density, and degree of aggregation of patches exert impacts on thermal processes and land surface temperature [32,[53][54][55].
Integrating social and ecological knowledge and data is critical to promoting the modeling of an urban ecosystem [56][57][58][59][60]. Therefore, to study the integrated, complex infrastructural-social-ecological issues in urban ecology, different approaches and frameworks have been developed in recent years, such as the human ecosystem framework, the Metacity, the ecological feedback model, pattern-process-function, patch dynamics, dynamic heterogeneity, the urban-rural gradient, the ecosystem service framework, and ecosystem service integrity [46,56,[61][62][63]].
Approaches such as the human ecosystem, pattern-process-function, and dynamic heterogeneity [46,64,65] basically build on similar concepts. The concept of spatial heterogeneity can be applied to urban planning and management: social-biophysical processes mediate urban functions and sustainability [66]. In other words, urban ecology as transdisciplinary research integrates human actions, perceptions, and policymaking with biophysical components [35,42,58,[67][68][69][70][71][72][73]]. For instance, residents may respond to heat in different ways, such as tree planting or using air conditioners [68], meaning that decision-making and human actions affect biophysical processes that change the UHI. When exploring the mechanisms behind complex urban phenomena, the process-based 'dynamic heterogeneity' approach can help clarify the interactions between human and biophysical components [35,68].
A framework in urban ecology refers to intertwined mechanisms or processes which can be tested using various hypotheses [40,74]. Synthesizing conceptual frameworks is essential in advancing urban ecology towards a strong science of cities [35]. The magnitude of interactions is regulated by policy, governance, culture, and individual behavior within the urban system [75]. Integrating human actions into the urban ecosystem is widely perceived at the conceptual level, but developing effective and integrative theories and their applications in urban system studies still remains a challenge [76,77].
A dynamic heterogeneity approach is a useful tool for enhancing ecological integration and exploring the interactions between social and biophysical patterns and processes in urban ecosystems [46]. Understanding the complex interactions between processes, identifying their driving factors, and, ultimately, predicting the behaviour of environmental systems are among the main objectives of environmental research [78]. It can be inferred that the concept of dynamic heterogeneity can be applied to long-term research, facilitating the integration of ecosystem components and the development of predictive models [46].
Although there are a large number of systematic reviews related to the different aspects of the UHI phenomenon [79][80][81], it has not been investigated through a "theoretical review" grounded in urban ecology. For instance, Deilami et al. [79] organized a synthesis review to identify the spatio-temporal factors and their causal mechanisms or processes that mitigate the UHI effect. Considering all the above, we aimed to develop a theoretical framework for a better understanding of the social-biophysical mechanisms behind UHIs' heterogeneity through time. In this article, we conducted a theoretical review to illuminate how the social-biological-physical processes contribute to forming the UHI in an urban ecosystem and consequently cause the dynamics and heterogeneity of UHI. To achieve this goal, we made two main contributions: (1) the dynamic heterogeneity approach was adjusted to the UHIs' dynamics and heterogeneity and (2) a template model of the driver-outcome spiral (i.e., heterogeneity spiral) was conceived for the UHI phenomenon. The proposed conceptual framework offers a comprehensive perspective of the UHI phenomenon in the context of urban ecology, supporting the analysis of UHIs' spatial heterogeneity in long-term studies.
Method
In this article, we conducted a theoretical review [82,83] in regard to the concept of "dynamic heterogeneity" lying in the urban ecology context. A theoretical review consists of concepts, together with their definitions, and existing theories that were used for UHI study. A theoretical review is drawn based on the existing conceptual and empirical studies to provide a higher level of understanding of various concepts and relationships in the studied topic [83]. In this review, we attempted to demonstrate an understanding of theories and concepts that are relevant to the UHI. In fact, in this research, we saw the UHI phenomenon from the aspect of the dynamic heterogeneity framework which itself is an inclusive framework and includes many interrelated concepts.
Since the basic idea of the research originated from the "dynamic heterogeneity" approach, firstly, it was necessary to define the concept of dynamic and heterogeneity in urban ecosystems. In the first section of the paper, we reviewed the principles of urban ecology in order to elucidate the UHI. The second section reviewed papers that primarily consisted of specific variables related to the dynamics and heterogeneity of the UHI. The authors organized the papers according to whether they focused on the social, biological, or physical attributes that affect the dynamics and heterogeneity of the UHI. In the final section, a hypothetical spiral of dynamic heterogeneity of UHI was created based on empirical evidence of UHI studies. Following an initial search, the abstract and the content of the articles identified by the search engines were reviewed. The number of articles containing the keywords was extremely broad, and in many cases, the concepts that we sought were not recognizable only through keywords and titles. Therefore, the articles were screened and those which did not match our goals were excluded.
The literature review was based on searching peer-reviewed articles in the search engines of ISI Web of Science and Scopus. To synthesize the literature, we used a broad range of keywords from diverse disciplines to identify papers related to our questions: What are the reciprocal interactions between physical and biological processes in the UHI phenomenon? What are the reciprocal interactions between physical and human processes in the UHI phenomenon? What are the reciprocal interactions between human and biological processes in the UHI phenomenon? For each process realm, we identified several variables; for instance, for the social process, we used human health, human comfort, energy consumption, household income, decision-making, and mitigation policy. These terms were chosen based on our knowledge acquired from the basic literature. The keywords represented in Table 1 were related to the concepts of urban ecology and to the biological, physical, social, economic, and built variables that exert an influence on the dynamics and heterogeneity of the UHI. Table 1. The basic query for paper selection by keywords in concepts of urban ecology and UHI.
Concepts of "Urban Ecology": Urban ecosystem; Social-biophysical (ecological) interaction; Pattern-process; Dynamic and heterogeneity; Spatial heterogeneity; Social-ecological dynamic; Biophysical dynamic; Complexity; Cause and effect.
Keywords in UHI literature: "Urbanization" OR "Urban development" OR "UHI" OR "cold spot and hot spot" OR "land surface temperature" OR "Spatial-temporal change" OR "Human intervention" OR "mitigation policy" OR "tree protection policy" OR "climate regulation" OR "cooling effect" OR "decision-making" OR "artificial heat production" OR "human health" OR "energy consumption" OR "anthropogenic heat sources" OR "land architecture" OR "tree diversity" OR "tree attribute" OR "urban forest" OR "energy and water flow" OR "heat wave" OR "wind direction" OR "urban green space"
Dynamics and Heterogeneity in Urban Ecosystems
In studying urban phenomena, understanding the causes and consequences of the spatial heterogeneity of patterns, processes, and functions is considered a critical issue [46,84]. Pickett et al. [46] developed the dynamic heterogeneity approach as an inclusive theory, which provides a framework to explore the mechanisms, outcomes, and drivers of spatial variability over time. In the urban scientific literature, the term 'dynamic' indicates how a patch or patch mosaic changes structurally and functionally through time [85], while 'heterogeneity' refers to the spatial variation of a property of interest across a landscape [86]. In particular, 'spatial heterogeneity' refers to the causal structure and spatial variability of a specified object [40,74].
However, Pickett et al. [46] argued that 'heterogeneity' is not just about the patterns, but also the social-biophysical processes which are spatially heterogeneous. It means that heterogeneity is an outcome of past social and biophysical processes, and can act as a driver of future social and biophysical processes (i.e., heterogeneity observed at a certain time is the result of prior conditions). Therefore, by analyzing heterogeneity within different time intervals, it is possible to conduct long-term research in the urban ecosystem [46,87].
The urban dynamic heterogeneity approach could help recognize the interactions between social and biophysical components. Human ecosystems consist of heterogeneous biological, physical, social, and infrastructural components-the heterogeneous layers interact with each other at different scales. Over time, these interactions create a new type of heterogeneity. Since there are potential interactions between all the components, the aim of the research determines which interactions should be investigated at a particular scale. The concept of the human ecosystem emphasizes how heterogeneity of human interventions influences heterogeneity of buildings and infrastructures; moreover, social and biophysical attributes and fluxes outside urban boundaries have been found to affect heterogeneity over time [46]. According to landscape ecology, patterns are defined as spatial attributes of a landscape: they encompass both the composition and configuration of patches and influence biophysical processes [84]. Therefore, pattern heterogeneity can be explained by both compositional and configurational heterogeneities [88]. In an urban ecosystem, the process refers to the transferring of energy, material, or organisms, flux, and cycling of elements within a city [65], which are inherently heterogeneous in space and occur in particular places on a landscape [89,90]. In an urban ecosystem, patterns and processes interact reciprocally and are theoretically inseparable (i.e., there is a coupling of patterns and processes) [65,79,91,92]. The function is the interaction between pattern and processes that supports delivering ecosystem services like climate regulation in urban areas [65]. In a time frame, pattern heterogeneity leads to process and functional heterogeneity [46]. Functional heterogeneity is defined as the spatial variation of the urban ecosystem's capacity to provide services [65].
Urban ecologists hypothesized that the interaction between social-biophysical patterns and processes can be observed in the form of surface cover or land use/land cover (LULC). LULC is regarded as an ecological indicator in urban studies. It affects ecological patterns and processes, causing broad environmental phenomena like the UHI. New biophysical conditions such as the UHI affect human attitudes, which may lead to the establishment of new policies. These policies themselves change the LULC over time [2,93].
Pickett et al. [46] outlined the existence of three interactive processes that lead to the hybridization of biophysical, social, and built components of the human ecosystem. These processes include (1) flows of material and energy (e.g., heat fluxes); (2) biological potentials or biotic performances (e.g., spatial arrangement of organisms, their traits, and community dynamics); (3) human actions and interventions and decision-making in an urban ecosystem.
The vast realm of material and energy flow in the urban ecosystem refers to the transforming and transferring of food, goods, and fuel. In other words, these flows can be defined by the pathways of the input and output of water, food, air, fuel, and heat [94][95][96]. The resources that stream into cities shape and modify the structure of the urban biological system, empower and drive urban capacities with an impact on the common biological forms of cities, and in the long run create yields that remain inside the boundary or are sent out beyond it [97]. Biotic differentiation (biota differentiation) is defined as the variety of biodiversity (fauna and flora) and species richness within an ecosystem [98,99]. Regarding the social or human-made process, it involves socio-economic attributes like zoning regulation, lifestyle and livelihood arrangement, economic and political policy, neighborhood identity, housing price, the pattern of investment, access to roads and green areas, house density, population distribution, the market economy, general patterns of income, and access to services, which create socio-economic heterogeneity across the city [100][101][102][103][104]. Table 2 represents the main attributes of the urban ecosystem that illuminate its dynamics and heterogeneity. Table 2. The main attributes associated with the dynamic heterogeneity approach to elucidate UHI in an urban ecosystem (adapted from Pickett et al. [46]).
• Urban systems are extraordinarily heterogeneous.
• Heterogeneity encompasses space and time, patterns, and processes.
• There are different layers of biophysical, social, and infrastructural heterogeneity.
• The layers of heterogeneity interact with each other at different scales.
• Heterogeneity acts both as driver and outcome, so it mediates between the social and biophysical components in the urban system.
• The interactions of different heterogeneous layers create new heterogeneities.
• Social and biophysical fluxes outside the urban boundary affect heterogeneity through time.
• Heterogeneity affects urban functions that lead to ecosystem services delivery, human wellbeing, and sustainability.
• Human beings' feedbacks amplify dynamic heterogeneity in urban systems.
Dynamics and Heterogeneity of UHI
In urban ecology, the human ecosystem consists of interacting biotic, physical, social, and built components that are temporally dynamic and spatially heterogeneous [46]. In association with UHI investigation, there are manifold types of biotic, physical, social, and built heterogeneities that mediate the spatial variation of UHI (Figure 1). There is a multitude of variables with which to study the biotic, physical, social, and built components that contribute to the UHI spatial heterogeneity; in Figure 1, the arrows show the potential interactions between the heterogeneous components, and the interactions to be examined can be determined by the aim of a particular study. Biotic heterogeneity can be defined as the heterogeneous distribution of natural and semi-natural patches (including forests, woodland, shrubs, green areas, and wetlands) across a city, which differentially affect the land surface temperature [55,[105][106][107][108][109]]. In particular, heterogeneity in vegetation distribution, abundance, and tree species can affect the temperature in various ways, such as providing shade, modifying the landscape's thermal properties (i.e., albedo and emissivity), altering air movement and heat exchange (i.e., wind flow), and evapotranspiration [13,[109][110][111][112][113][114][115][116][117]]. The effect of biological differentiation on the thermal environment and the UHI phenomenon can be assessed using vegetation indices, like the greenness index and the normalized difference vegetation index (NDVI) [118][119][120].
Physical heterogeneity can be ascertained from topographic features (i.e., physical layers) like elevation, aspect, and slope. These features affect the thermal environment and control the UHI phenomenon [25,121,122]. Heterogeneous patterns of topographic attributes in an urban region alter the potential radiation and thermal loads (i.e., they alter the energy flow process) [121].
In terms of the built component in the context of an urban ecosystem, it refers to a man-made built-up area characterized by infrastructural and technological components, changing through time due to human decision-making [46]. Notably, the characteristics of the built complex influence urban temperature and the formation of UHI. The height of buildings and their variability, as well as the spacing between buildings, affect air circulation, wind flow, and thermal energy absorption [18,24,[123][124][125]. More importantly, the material properties of roofs and walls significantly affect both albedo and emissivity, leading to temperature alterations [13,126]. The sky view factor (SVF) is a parameter related to urban building and measures sky visibility. A reduction of the SVF leads to an increase in solar radiation absorption and a lowering of wind speed, ultimately amplifying the UHI effect [110,123,124,[127][128][129][130]. Additionally, the normalized difference built-up index (NDBI), which reflects the amount of urban built-up areas, can be used to investigate the effect of the built-up surface on the intensity of the UHI phenomenon [119].
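As a minimal illustration of how the NDBI mentioned above can be related to surface temperature, the sketch below computes NDBI from SWIR and NIR reflectance, builds a crude built-up mask, and compares mean LST over built-up and vegetated pixels. The arrays and the zero threshold are hypothetical and stand in for co-registered satellite products.

```python
import numpy as np

def ndbi(swir: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference built-up index: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / (swir + nir + 1e-9)

# Hypothetical co-registered reflectance and LST tiles (toy 3x3 arrays)
swir = np.array([[0.30, 0.28, 0.12], [0.33, 0.26, 0.10], [0.35, 0.25, 0.11]])
nir  = np.array([[0.20, 0.22, 0.40], [0.19, 0.24, 0.42], [0.18, 0.23, 0.41]])
lst  = np.array([[310.1, 308.4, 301.2], [311.0, 307.5, 300.8], [312.3, 306.9, 301.5]])

built_up = ndbi(swir, nir) > 0.0            # crude built-up mask
delta = lst[built_up].mean() - lst[~built_up].mean()
print(f"Mean LST difference, built-up minus vegetated: {delta:.1f} K")
```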
Social-economic heterogeneous patterns affect urban temperatures and support the occurrence of the UHI phenomenon [25,131,132]. For instance, heterogeneities in population density and household income influence the intensity of this phenomenon [25]. Furthermore, urban anthropogenic heat emission, derived from household energy consumption and vehicular traffic, is significantly related to socio-economic activities and is considered a key factor contributing to the formation of UHI [133][134][135]. In this context, human perception is considered an important process capable of altering the intensity of the UHI phenomenon. For instance, there can be a tendency to plant certain species (e.g., trees that provide more shading) in neighborhoods [136][137][138]; moreover, people living in the hot area usually apply strategies (e.g., using air conditioning or altering the neighborhood's biophysical structure through tree planting) to mitigate the UHI effect [68]. At the same time, policymaking outcomes (e.g., increasing vegetation, constructing living (green) roofs, and promoting light-coloured surfaces) effectively influence variations of the UHIs over time [139]. The application of policies targeting the alteration of urban structures (e.g., the placement and orientation of buildings) and the residents' lifestyles can also explain temperature variations across a city [140].
The ultimate result of the reciprocal biotic-physical-social-built interactions described above is a spatially heterogeneous mosaic of UHI (Figure 1). This mosaic affects the biophysical-social processes (i.e., evapotranspiration, heat exchange, and decision-making processes) in urban areas (Figure 2). The reciprocal interaction between the processes of 'energy and material flows', 'biotic differentiation', and 'human action' (adapted from Pickett et al. [46]) contributes to the formation of UHI; for each process realm, several variables have been outlined. Each of these three processes contributing to the UHI heterogeneity is itself a large topic, and researchers can focus on any one of them in relation to the others and study the feedbacks and interactions among them: for instance, how the decision-making process changes the vegetation surface, or how energy and material flows affect human perception.
Integrating Figures 1 and 2, we can adjust the dynamic heterogeneity approach for the case of UHI. As shown in Figure 3, the interactions among a multitude of heterogeneous built-social-biophysical layers drive social-biophysical processes, and the process feedbacks change the pattern heterogeneity. The coupled interaction between heterogeneous patterns and processes over time can hence give birth to a new heterogeneous UHI pattern, which is called the "dynamic heterogeneity of UHI".
Driver-Outcome Spiral of UHI: Building a Model Template
The UHI is affected by numerous social-biophysical factors, as well as by the spatial arrangement of the LULC [25,141] and affects energy consumption, human health, water quality, and air pollution [14,142,143]. A template of heterogeneity or a spiral of dynamic heterogeneity [46] is a model template that indicates how a set of factors associated with a problem are potentially linked to each other. In addition, it represents the mechanisms, causes, effects, and interactions for a specific subject in a social-ecological system [68]. The above template, which was adopted from biological theories [64], follows the 'conditional statement' or 'if-then statements' (i.e., if A happens, then B is predicted: if a condition or relationship is verified, then certain results can be expected) [46,64].
In creating a driver-outcome spiral, due to the extremely wide diversity of the components, variables, and driver-outcome interactions involved, a myriad of templates can be developed to illustrate the causes and effects of the UHI. The choice of which template to build depends on the specific analytical goal: a large number of mechanistic spirals can be proposed by considering the various drivers and outcomes of the UHI. Figure 4 describes a simplified hypothetical driver-outcome spiral (i.e., a model template of the UHI dynamic heterogeneity), which was created based on a literature review. Here, heterogeneity is temporally dynamic and influences social-biophysical processes: physical, biological, and social-economic heterogeneities result from past interactions and are the drivers of future changes [46]. In this figure, the heterogeneous patterns of vegetation and impervious surfaces alter the land surface temperature pattern through biophysical processes (e.g., evapotranspiration and heat exchange) between time 1 and time 2. These, in turn, affect human comfort and health (between time 2 and time 3). Environmental concerns lead to the establishment of new policies for the mitigation of urban temperature. The decision-making policy process is expected to cause changes in land cover over time.
Notably, the occurrence of pulse events (i.e., regional events outside the urban boundary) at a given time may affect heterogeneity at a subsequent time. Note: the starting point of the driver-outcome spiral, which encompasses intrinsic physical attributes (e.g., topography and climatic zone) and corresponds to time 0, is not shown in this figure.
The assumptive spiral starts with a heterogeneous pattern of impervious and green patches, which is linked to changes in biophysical processes (e.g., evapotranspiration, shade, and heat exchange) through time (heterogeneity at time 1). This heterogeneity leads to a heterogeneous land surface temperature pattern (heterogeneity at time 2); in turn, temperature variations typically affect human thermal comfort and health (heterogeneity at time 3) [144][145][146][147][148][149]. In addition, high temperatures can trigger specific atmospheric chemistry processes (e.g., increased ozone production and higher hydrocarbon, PM 10 , and VOC concentrations), which lead to a worsening of air pollution [143,150]. Health and environmental issues deriving from high temperatures and air pollution may lead to changes in policies (heterogeneity at time 4), which would ultimately result in the alleviation of UHIs' effects [151]. Hence, policymaking processes would be the drivers of new land cover heterogeneities, starting a new turn of the spiral, which would continue to repeat through time [93]. Moreover, disturbances or pulsed events (e.g., heatwaves) occurring outside urban regions (i.e., at a regional scale) [43,46] are expected to affect the UHI [152][153][154], giving rise to a new heterogeneity of the UHI at a subsequent time.
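Purely as a schematic, the driver-outcome spiral sketched above can be encoded as an ordered sequence of turns, each recording the heterogeneity observed at a time step, the process it drives, and the outcome that becomes the driver of the next turn. The entries below follow the hypothetical spiral described above and are illustrative only.

```python
from dataclasses import dataclass
from typing import List

# Schematic encoding of the hypothetical driver-outcome spiral: each turn
# records the heterogeneity observed at a time step, the process it drives,
# and the outcome that becomes the driver at the next step.

@dataclass
class SpiralTurn:
    time_step: int
    heterogeneity: str   # pattern observed at this time
    process: str         # social-biophysical process it drives
    outcome: str         # resulting heterogeneity (driver of the next turn)

spiral: List[SpiralTurn] = [
    SpiralTurn(1, "impervious vs. green patch mosaic",
               "evapotranspiration, shading, heat exchange",
               "heterogeneous land surface temperature"),
    SpiralTurn(2, "heterogeneous land surface temperature",
               "heat exposure, air-quality chemistry",
               "uneven thermal comfort and health risk"),
    SpiralTurn(3, "uneven thermal comfort and health risk",
               "public concern and decision-making",
               "new mitigation policies"),
    SpiralTurn(4, "new mitigation policies",
               "tree planting, green roofs, albedo changes",
               "modified land-cover mosaic (next turn of the spiral)"),
]

for turn in spiral:
    print(f"t{turn.time_step}: {turn.heterogeneity} -> {turn.process} -> {turn.outcome}")
```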
Quantifying and Modeling the Interactions and Feedback among the Processes Mediating UHI
Due to the complexity and the lack of direct measurement of the different social-biological-physical processes and interactions in the context of urban ecology, various approaches and statistical models have been developed to quantify them for specific goals [46]. The researcher can investigate the following interactions in UHI: how biotic differentiation (e.g., forest and woodland change) may influence physical processes like solar energy flux or wind flow; how physical processes like heat flux affect biotic performance; how the decision-making process and human perceptions can affect biotic differentiation; how human preferences and attitudes towards particular types of plants may affect the biodiversity of the urban area and consequently change UHI intensity; and how biotic attributes can change human activities, or how green space and plant diversity may influence human perceptions, leading to alteration of the urban temperature. Mechanistic models can describe a complex system by bringing the components together, providing a method to test hypotheses in holistic ways; they can also describe a phenomenon through a hypothesized or assumed mechanism/process [155,156]. This type of model can be applied to the complex issue of UHI. In addition, a Bayesian network model is a useful tool to deal with various social-ecological processes in a specific phenomenon [157,158] and can be used to evaluate probable outcomes in complex ecological systems [159]. This approach allows for a combination of different types of data, such as quantitative data, expert or local knowledge, and outputs from scenario building, and can deal with missing data; hence, it is useful in areas such as ecology or social science [157]. To analyze the relationship between unobservable variables and observed measurements, the state-space model can also be used [160] as a flexible approach [161]. In the case of UHI, for instance, there are unobservable variables like human perception; in this case, the researcher's knowledge of the past is needed to estimate the future change of each variable [162].
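To illustrate the state-space idea mentioned above, the sketch below filters a short series of hypothetical LST anomalies with a minimal local-level Kalman filter, in which the latent random-walk state stands in for an unobserved, slowly varying driver (e.g., mitigation pressure or perception). The data and noise variances are invented for the example and carry no empirical meaning.

```python
import numpy as np

# Minimal local-level state-space model (random-walk latent state observed
# with noise), filtered with a scalar Kalman filter. The "observations" are
# hypothetical yearly mean LST anomalies; variances are illustrative.

y = np.array([0.2, 0.5, 0.4, 0.9, 1.1, 1.0, 1.4, 1.3])  # observed anomalies
q, r = 0.05, 0.20          # process and observation noise variances

x_hat, p = 0.0, 1.0        # initial state estimate and variance
filtered = []
for obs in y:
    # Predict: random-walk state transition
    p_pred = p + q
    # Update with the new observation
    k = p_pred / (p_pred + r)          # Kalman gain
    x_hat = x_hat + k * (obs - x_hat)
    p = (1.0 - k) * p_pred
    filtered.append(x_hat)

print(np.round(filtered, 3))
```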
Implications of the Theoretical Framework for Urban Planning toward UHI Mitigation
Urban ecology defines cities as complex mosaics [66], engaging numerous social, ecological, and economic issues and strategies. In addition, landscape ecology, as a science for the study of dynamics and heterogeneity, focuses on spatial patchiness [163]. A city can be planned in a way that mitigates the UHI based on the transdisciplinary sciences of urban ecology and landscape ecology. In urban ecology, urban heterogeneity comprises spatial variation within the physical, natural, and technological structures [40,46]. Urban planners consider how heterogeneity changes over time as a fundamental aspect of an urban ecosystem [66]. In addition, compositional and configurational heterogeneity also affects the UHI intensity.
Therefore, the mitigation measures arising from this theoretical framework not only focus on the biotic components but also consider a hybrid of social-biological-physical-built components. Further, the framework emphasizes the pattern-process-function relationship, considering how the composition and configuration of different patches within an urban landscape change the processes and functions and, consequently, moderate urban temperature.
Conclusions
Urban ecosystems are considered thermally heterogeneous because they typically comprise many small hot and cold spots which form a spatially heterogeneous pattern [164]. When dealing with this complexity, it is hence essential to recognize the mechanisms, components, and interactions between the social-biophysical components that contribute to the creation of UHI. In this context, the holistic science of urban ecology can be appropriate for investigating urban complex issues. Urban ecology studies are generally based on custodial frameworks, which enable the integration of biophysical and social components [68]. The concepts and tools introduced by transdisciplinary urban ecology have opened new pathways to tackle urban environmental concerns and ultimately improve related planning and management activities [66].
In this study, conceptualization and delineation of the causes and effects of spatial heterogeneity are considered essential in urban development [46]. In the case of UHI, the literature review indicated that pattern-process-function is heterogeneous and dynamic within an urban landscape (see the previous sections). By applying dynamic heterogeneity as an underlying approach in urban ecology, we developed a theoretical framework to understand the mechanisms behind the formation of UHI. In other words, the concept of dynamic heterogeneity was adapted to UHI: the interaction between social-biophysical patterns and processes over time leads to a new heterogeneous thermal environment. Furthermore, a hypothetical 'driver-outcome' spiral (i.e., heterogeneity spiral) was set up to better understand the UHI. In creating a driver-outcome spiral, due to the excessive diversity of components, variables, and driver-outcome interactions, a myriad of templates can be developed to illustrate a spiral of heterogeneity; building a template depends on the specific analytical goal. Pickett and colleagues outlined that an "if-then" or "conditional" statement (i.e., if A happens then B is predicted) can support setting up a driver-outcome hypothesis. The synthesis of the literature in this research demonstrated that UHI, as a specific subject that lies within a human ecosystem, can be defined through the dynamic heterogeneity approach, which enables us to integrate the biophysical and social processes and patterns contributing to the UHI.
However, there are limitations to answering all the questions related to the interactions between social-biophysical processes and their impact on UHI. Many variables and their effects are not directly observable, which means that the social-ecological feedbacks are not yet well understood. Computer programs, simulation models, and dedicated statistical models therefore facilitate the quantitative analysis of long-term data for sustainable urban planning. Overall, the theoretical framework presented in this paper allowed the examination of UHI from an ecological point of view, demonstrating that the concept of dynamic heterogeneity can describe UHI complexity. The conceptual framework can be insightful for the heterogeneity management of an urban system in a way that achieves temperature mitigation and an increase in climate regulation services. In line with transdisciplinary urban ecology, in future studies, ecologists and landscape architects are urged to collaborate with city residents to mitigate UHI effects. Moreover, the developed framework can potentially provide insight into the complexity of other social-biophysical phenomena, such as air pollution, water flow and pollution, and soil pollution, toward urban sustainability.
Conflicts of Interest:
The authors declare no conflict of interest. | 8,542.8 | 2022-07-26 | ["Engineering"] |
Using Learning Analytics to Predict Students Performance in Moodle LMS
Today, it is almost impossible to implement teaching processes without using information and communication technologies (ICT), especially in higher education. Education institutions often use learning management systems (LMS), such as Moodle, Edmodo, Canvas, Schoology, Blackboard Learn, and others. When accessing these systems with their personal account, each student’s activity is recorded in a log file. The Moodle system allows not only information saving; the plugins of this LMS also provide a fast and accurate analysis of training statistics. Within the study, the capabilities of several Moodle plugins providing the assessment of students' activity and success are reviewed. The research is aimed at discovering possibilities to improve the learning process and reduce the number of underperforming students. The activity logs of 124 participants are analyzed to identify the relations between the number of logs during the e-course and the final grades. In the study, a correlation analysis is performed to determine the impact of students' educational activity in the Moodle system on the final assessment. The results reveal that gender affiliation correlates with the overall performance but does not affect the selection of training materials. Furthermore, it is shown that students who got the highest grades performed at least 210 logs during the course. It is noted that the prevailing part of students prefers to complete the tasks before the deadline. The study concludes that LMSs can be used to predict students' success and stimulate better results during the study. The findings are proposed to be used in higher education institutions for early detection of students experiencing difficulties in a course. Keywords—Learning management systems, Moodle, electronic journal file, Moodle plugin, student success, student behavior.
Introduction
Since the advent of personal computers and the Internet, almost every aspect of life, including the educational system, has changed [1]. E-learning systems, in particular Edmodo, Canvas, Schoology, Blackboard Learn, Sakai, Moodle, ATutor, and Chisimba [2], are actively introduced into higher education institutions [3]. Thus, the matter of e-learning has attracted the attention of many researchers. Currently, various high-quality publications that encourage online learning methods exist [4]. The importance of e-learning technology is growing steadily. Many universities use it as a key tool in educational programs [5]. The learning management system (LMS) is usually focused on organizing courses by teachers and includes managing learners in a modern online learning environment [6]. To succeed in higher education, as well as in future life and career, students should have a number of so-called "21st century skills" (for example, critical thinking, creativity, communication). Therefore, to enhance these skills, online learning systems should be developed. E-learning has several significant advantages over traditional classes in the classroom, primarily due to accessibility and flexibility [7]. It is easier to search for information in online resources for learning anytime, anywhere [8].
Initially, LMSs are simple and can be compared to regular web pages with information about disciplines, lecturers, teaching, and monitoring methods [9]. Gradually, LMSs became more complex and multi-tasking [10]. These days, LMSs provide the opportunity not only to exchange educational materials but also to interact with teachers and other students. Besides, LMSs allow objective assessment and statistical interpretation of the results of mastering the course by students.
Modern e-learning systems have advanced technologies and functions to support all forms of educational activity, including face-to-face interaction during the study [11]. Some training programs allow applicants to become students and get a university diploma without even leaving home. LMS is used to create study groups, conduct lectures, seminars, practical classes, and pass exams [12].
Over the past decade, many universities have acquired or developed LMSs for curriculum management, the creation of training materials, and student assessment. Since 2012, global spending on LMSs grew by 52% (21% in 2014 alone), totaling more than $2.5 billion per year. Nine out of ten US schools use one of the top five LMS providers. In the US, Blackboard holds the largest market share with 42%. The value of using educational analytics is that it changes the quality of administration, research, teaching, and learning, and allows countries such as the UAE to implement the best modern practices in education and expand the presence of the world's largest universities in the country [13].
Educational institutions spend thousands of dollars to implement LMSs, seeking to improve the quality of education, as well as increase the number of students through distance and blended learning. The impact of this system on improving students' performance has been a popular subject of research in recent years. Studies have been relying on data from users' opinions and subjective interpretation through surveys to determine the effectiveness of LMS usage on students' learning performance [14].
Even though the full distance learning course remains an advanced approach to education, additional opportunities that expand the use of the e-learning also exist [15]. Leading educational institutions are actively introducing learning analytics for monitoring and evaluating educational activities. Literature is replete with research on the benefits of e-learning and its impact on students' academic performance. For example, a significant positive correlation is found between the use of online tools and student exam results. It is discovered that using technology can improve academic performance and helps students with special needs to learn the course [16].
In Russia, distance learning is offered by more than 30 universities. The relevance of introducing e-learning approach is due to the growing tendency to reduce in-class learning and increase the share of students' independent educational activity [17]. LMS ensures the integrity of the learning process, its optimization, and objective control. Furthermore, students consider online learning useful since the availability of access to educational files, participation in the content formation, the possibility of real-time monitoring, and knowledge self-control [18].
In 2000, the Ministry of Education in the People's Republic of China (PRC) approved over 60 educational institutions for distance learning [19]. Today, additional Moodle plugins that expand the analytical capabilities of the system are actively introduced into the educational process [20].
Most universities in the UAE offer distance education. This form of training is widely developed due to the presence in the country of branches of such famous foreign universities as the University of New York, Paris Sorbonne University, and others. Most of these distance learning courses rely on Moodle and its modules in their work [21]. This choice is also due to the easier implementation of learning analytics in Moodle. The use of educational analytics in recent years has been one of the areas of constant interest for Arab researchers because it opens up the possibility of predicting the quality of student learning. Based on such predictions, the shortcomings of training courses can be eliminated, or support can be targeted at those groups of students who are at higher risk of not completing the course or of lower academic performance [22].
However, the opportunities offered by educational analytics to predict students' performance require additional examination. The current study is focused on LMS Moodle. Thus, this LMS is described in detail, paying particular attention to plugins responsible for educational analytics. A visual analysis of student behavior and the results of the examination of log files are also presented. The paper intends to analyze the data obtained using the LMS Moodle, improve the learning process, and reduce the number of underperforming students.
Materials and Methods
LMS components offer various opportunities for improving student learning performance and can influence their final grades [23]. A Moodle log consists of the time and date it was accessed, the Internet Protocol (IP) address from which it was accessed, the name of the student, each action completed (i.e., view, add, update, or delete), the activities performed in different modules (e.g., the forum, resources, or assignment sections), and additional information about the action [24]. All these stored data are beneficial and can be used with data mining algorithms. Preidys and Sakalauskas [25] notice a trend toward the combined use of data mining techniques for the analysis of activity data. Having the data in an LMS provides various opportunities for the use of data mining methods to examine them. Data mining can be useful to explore, visualize, and analyze data with the aim of identifying useful patterns in order to understand students' learning behavior and feedback [26]. Data mining remains a promising field for the exploration of data from educational settings. A number of institutions have also developed bespoke systems for learning analytics [27]. For example, such developments include commercial business intelligence tools that predict students at risk based on indicator variables within the system. Besides, a popular area of development is applications and plugins that combine the analysis of the training modules' databases and the identification of students [28]. These applications and plugins combine data from various sources and allow teachers to communicate with students electronically.
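As a hedged illustration of how such log data might be prepared for mining, the sketch below aggregates an exported activity log into per-student features with pandas. The file name and the column names ("time", "student", "action", "module") are assumptions made for the example, not the exact schema of any Moodle export or plugin.

```python
import pandas as pd

# Sketch: aggregate an exported Moodle activity log into per-student features.
# The CSV path and column names are illustrative assumptions.
logs = pd.read_csv("moodle_logs.csv", parse_dates=["time"])

features = (
    logs.groupby("student")
        .agg(total_logs=("action", "size"),
             file_views=("module", lambda m: (m == "resource").sum()),
             forum_posts=("module", lambda m: (m == "forum").sum()),
             active_days=("time", lambda t: t.dt.date.nunique()))
        .reset_index()
)
print(features.head())
```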
To improve the prediction of students' performance, it is proposed to apply the clustering rules of a purpose-built Moodle module to the mined data [29]. The bulk of the data is accumulated according to the quantitative, qualitative, and social activities of students. As a result, it becomes possible to determine how the student sample and the use of various classification algorithms affect the accuracy and interpretability of predictions of academic performance.
This work presents developments in the field of setting up a highly accessible LMS Moodle for the technical implementation of fully automated virtual lessons. Students' knowledge is assessed automatically based on compulsory tests [30]. In addition, the study proposes to conduct the process of mining e-learning data step by step, with guidelines on how to use data mining techniques for mining Moodle data [31]. This is considered relevant since the authors report highly accurate predictions using the Bayesian classifier and the support vector machine.
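A hedged sketch of such a prediction pipeline is given below: a naive Bayes classifier and a support vector machine are trained on per-student activity counts to predict pass/fail. The feature names and the data file are hypothetical; only the general technique (Bayesian classifier and SVM, as in [30,31]) is taken from the text.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

data = pd.read_csv("student_features.csv")      # hypothetical feature table
X = data[["file_views", "forum_posts", "quiz_attempts", "assignments_uploaded"]]
y = data["passed"]                               # 1 = passed, 0 = failed

for name, model in [("Naive Bayes", GaussianNB()),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")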
Within the research, it is also proposed to apply a classification model that predicts a student's ability to achieve excellent results during the course of study [32]. The model is based on the data described in [32].
Results
Moodle (Modular Object-Oriented Dynamic Learning Environment) is an open-source online learning management platform known for its ease of use, intuitive controls, and the many features it offers [33]. Moodle is widely used by numerous schools, universities, and companies wishing to offer distance education to their employees and clients [34]. Moodle provides educational and communicative functions to create an online learning environment: it is an application for creating interactive courses built on a network of interactions between educators, learners, and learning resources. Moodle presents multiple advantages; therefore, it has become a staple platform [35].
The most frequently used Moodle plugins provided with a basic installation are Assignment, Attendance, Choice, Lesson, Page, Quiz, Link, Seminar, Folder, File, Glossary, SCORM Package, Feedback, and Database. The extended version of Moodle 3.4 also includes the Inspire Analytics plugin. One of the distinguishing features of this plugin is a model that predicts students at risk of failure to complete a course based on low engagement in the study process. Besides, it is possible to develop a new plugin or customize an existing one. Figure 1 illustrates a model where elements and connections supported in the educational process via the mechanisms of the distance learning system are highlighted. Based on the use of LMS Moodle components, the study of the logs in training groups is presented.
Fig. 1. The Components of the LMS Moodle
The study is aimed at identifying the extent to which individual data obtained from activity logs is a reliable parameter of students' academic success. Moreover, it is of particular interest to determine how the gender of students registered in the disciplines of the Department of Physical Education correlates with the final results. Figure 2 shows the distribution of logs by the overall grade students achieved in the course. The highest number of logs is achieved by the students with the highest grades, 4 and 5. To investigate if there is any correlation between specific activities on the LMS and student grades, we have performed correlation analysis.
Fig. 2. The Frequency of logs by grade [36]
Table 1 examines correlations between achievement over the span of the course (measured by grades) and effort in files, forums, and link usage, as well as the assignments uploaded. The results indicate a statistically significant correlation between students' grades and the opening of files. The correlation is positive, which indicates that students with a higher frequency of file openings have higher grades. There is no association between grades and the other logged activities in the course. File opening is correlated with activity on the forum, demonstrating that students who are active in forum discussions opened files more often.
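The following is a minimal sketch of the kind of test summarized in Table 1, assuming a per-student table with activity counts and the final grade (2-5). Spearman's rank correlation is used here because grades are ordinal; the original analysis may have used a different coefficient, and the file and column names are placeholders.

import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("per_student_activity.csv")   # hypothetical aggregated table
for col in ["file_openings", "forum_posts", "link_clicks", "assignments_uploaded"]:
    rho, p = spearmanr(df[col], df["grade"])   # rank correlation with grade
    print(f"{col:22s} rho = {rho:+.2f}, p = {p:.3f}")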
In order to analyze the dependence of activity on gender, three groups of students (Russian, Chinese, and Arab) are formed. The examination reveals that in all groups female students have a higher number of logs than their male colleagues. A difference between genders is also visible in the average grade received: female students have a higher average grade than male students (Figures 3 and 4).
Fig. 4. Analysis by gender (Group 3) [36]
The experience of the teachers of the training course demonstrates that Russian students are much more likely to complete their assignments at the last moment before the deadline. In turn, Chinese students mostly complete the assignments on time and seldom hand in their work at the last moment. Arab students also predominantly completed their assignments in advance but submitted the work as close to the deadline as possible. However, in general, the increase in the number of logs as the deadline approaches is confirmed across the groups. For this reason, it was decided to combine the sample. Figure 5 presents the data on the frequency of students' logs before the deadline day. The diagram covers the logs from the first to the twenty-first academic week. The midterm and final assessments take place during the eighth and sixteenth weeks of the academic semester. The weekly analysis shows that the highest number of logs appears on the day before test days. Figure 6 presents course logs related to the days before test days. Page views occurred mostly between 16:00 and 20:00 on the day before the midterm assessment and after 14:00 on the day before the final test. In both cases, there is a significant number of logs in the late hours of the day. During these times, most students downloaded test materials and started to study.
Fig. 6. Distribution of logs before the assessment [35]
Similarly, the time-focused analysis results for the whole course period are presented in Figure 7. It shows the hours of the day during which students were logged into the course; activity is concentrated from 11:30 onward. A number of logs persist during the afternoon, but this decreases in the evening. In the period after midnight, there is no activity on the e-course. However, this does not mean that students were not active at all; some may have performed an offline assignment after downloading it earlier. Figure 8 presents an analysis of students' activities with respect to the grade they achieved and the day of the week. Surprisingly, for the students with the highest grade, most of the course activity was done on the day before lectures, seminars, and tests.
Fig. 8. Distribution of logs per day in the week with respect to achieved grade [33]
Discussion
Nowadays, research on e-learning is becoming increasingly large-scale, with the tools and methods used to explain student behavior in an LMS as its fundamental components. Online learning systems are essential for improving thinking skills and introducing innovative forms of mastering courses by students of higher educational institutions. However, at the same time, it is crucial to implement platforms that support mobile technologies so that students can build their learning plans with minimal time and space restrictions [37]. This will positively affect academic performance.
Reusable and flexible learning materials will also improve students' academic performance. For this reason, it is proposed to focus on intuitive and visually appealing tasks. For better tracking and forecasting of student performance, it is recommended to use special JavaScript-based applications that significantly save time when processing large amounts of data on the midterm and final assessments [38].
Nowadays, the classical approach of teaching with a steady learning rhythm has lost its relevance. Thus, it is necessary to create a progress-oriented training course that can stimulate students' interest not only in grades but also in increasing their personal potential [39]. Teachers should assign practical tasks via a virtual learning platform to develop student competencies [40]. Because e-learning systems can accumulate a large amount of information, analyze student behavior, and help the teacher to identify possible student errors during the training course, the popularity of e-learning continues to grow exponentially.
Even though no correlation between gender and preferences in the selection of educational tools is found, the role of gender and the influence of online tools on performance remain an interesting question for further research [41]. Since the analyzed sample is characterized by a predominance of female students, this matter can be investigated in groups with the same gender composition. Nevertheless, in terms of student performance, the current examination is consistent with existing research in determining factors that correlate with activity and overall success [42]. More and more research is now being conducted on the empirical results of applying various data mining algorithms aimed at studying learning processes and predicting academic performance. These studies show that the large number of attributes that can in one way or another influence student performance tend to cluster into a small number of defining categories. Among UAE students, such categories were, for example, demographic data, information about previous student performance indicators, information about courses and teachers, and general student information [43]. These factors can be considered independent of the personality of the student. Equally important are the factors of personal motivation.
In an e-learning environment, student motivation largely influences the quality of the educational process. Data mining techniques provide valuable information for assessing student motivation and the improvement of the learning process [44]. One of the drivers of student motivation is participation in joint projects. It provides team training and enhances the interest in completing complex tasks [45], therefore encouraging students to communicate within the framework of the LMS. Future research on student performance will consider a wider range of factors and review an increased sample of students engaged in various courses.
Conclusion
The results of the study prove that learning management systems enable producing new information about student behavior based on their digital profile. First of all, this information provides an opportunity to explore the successes of students in order to generate and implement new types of activities that stimulate positive results. For example, an examination revealed that students who have taken full advantage of the Moodle platform achieve higher grades. The achieved results are potentially beneficial in the early detection of students experiencing difficulties in a course. Both teachers and students benefit from this kind of research, as teachers can identify excellent students for collaboration and students find out how to give greater effort to obtain good results.
In the conducted research, the female students were more active and successful in the course than the male students. There is a correlation between the number of logs in the e-course and the final grades. The students were most active in the test weeks and, specifically, on the day before the tests. Students can be characterized as "last-minute" students, as they fulfill their obligations as late as possible relative to the deadline and are active in the late hours. However, this cannot be generalized because the research was conducted in only one course. Also, the research covered only informatics students. In future research, the analysis will be performed across several courses. Additionally, students from other disciplines will be included.
"Computer Science",
"Education"
] |
kappa-deformed Spacetime From Twist
We twist the Hopf algebra of igl(n,R) to obtain the kappa-deformed spacetime coordinates. Coproducts of the twisted Hopf algebras are explicitly given. The kappa-deformed spacetime obtained this way satisfies the same commutation relation as the conventional kappa-Minkowski spacetime, but its Hopf algebra structure is different from the well-known kappa-deformed Poincare algebra in that it has a larger symmetry algebra than the kappa-Minkowski case. There are some physical models which consider this symmetry. Incidentally, we obtain the canonical (theta-deformed) non-commutative spacetime from the canonically twisted igl(n,R) Hopf algebra.
I. INTRODUCTION
There have been extensive efforts to understand gravity and quantum physics from a unified viewpoint. These have led to developments in many new directions of research in theoretical physics and mathematics. To accommodate the quantum aspect of spacetime, spacetime non-commutativity has been studied intensively [1,2,3,4,5]. The majority of the research in this direction has focused on two types of noncommutative spacetimes [6,7]: the canonical noncommutative spacetime satisfying
[x^μ, x^ν] = i θ^μν,  (1)
where θ^μν is a constant antisymmetric matrix, and the κ-Minkowski spacetime with the commutation relation
[x^μ, x^ν] = (i/κ)(a^μ x^ν − a^ν x^μ),  (2)
where κ is a parameter of mass dimension. We call this κ-non-commutativity time-like if a_μ a^μ < 0, space-like if a_μ a^μ > 0, and light-cone if a_μ a^μ = 0. In this paper, we consider time-like noncommutativity (i.e., a^μ ≡ (1, 0, 0, 0)).
The quantum field theories on canonical non-commutative spacetime have many interesting features. By the Weyl-Moyal correspondence, the theory can be thought of as a theory on commutative spacetime with the Moyal product of the field variables [8]. These theories break the classical symmetries (for example, Poincaré symmetry), and may not satisfy unitarity, locality, and some other properties of the corresponding commutative quantum field theory, depending on the structure of θ^μν. Attempts to cure these pathologies are still in progress [9,10,11,12,13].
Extensive studies have been devoted to the quantum group theory developed in the course of constructing solutions of the quantum Yang-Baxter equations. A branch of quantum group theory, deformation theory, was led by Drinfeld [14] and Jimbo [15]. In particular, Drinfeld [14] discovered a method of finding one-parameter solutions of the quantum Yang-Baxter equation from a simple Lie algebra g; that is, he found a one-parameter family of Hopf algebras U_q(g) deformed from the Hopf algebra of the universal enveloping algebra U(g) [16].
Recently, following Oeckl [17], Chaichian et al. [18] and Wess [19] proposed a new kind of symmetry group deformed from the classical Poincaré group. In particular, Chaichian et al. [18] used the twist deformation of quantum group theory to interpret the symmetry of the canonical noncommutative field theory as twisted Poincaré symmetry. There have been some attempts to apply the idea to field theories with different symmetries, for example, to theories with Θ-Poincaré symmetry [20], conformal symmetry [21], [22], superconformal symmetry [23], supersymmetry [24,25], Galilean symmetry [26], Galileo Schrödinger symmetry [27], translational symmetry of R^d [17], gauge symmetry [28], [29], diffeomorphism symmetry [30,31,32], and fuzzy diffeomorphisms [33]. The virtue of these twists is that the irreducible representations of the twisted group do not change from those of the original untwisted group, and moreover, the Casimir operators remain the same. There have also been some studies on constructing a consistent quantization formalism for field theory with this twisted symmetry group [34,35,36,37].
In the same spirit, if one finds a twist that gives the κ-deformed commutation relation (especially the time-like κ-deformed noncommutativity) between coordinates from the Poincaré Hopf algebra, it would be very useful in constructing quantum field theory in κ-deformed spacetime, since one could use the irreducible representations of the Poincaré algebra for the κ-noncommutative quantum field theory. There have been some attempts to obtain the κ-commutation relation of the coordinate system, Eq. (2), from twisting the Poincaré Hopf algebra [38], [39], [40]. In [38], Lukierski et al. argued that one can only get a light-cone κ-deformation from the Poincaré Hopf algebra. Hence, as far as we know, there is no twist of the Poincaré algebra which gives a time-like κ-deformed coordinate commutation relation. This is our motivation to seek a twist that gives the κ-deformed commutation relation from a different Hopf algebra which is larger than the Poincaré algebra. The paper is organized as follows. In section II, we briefly review the Hopf algebra and twist deformation. In section III, we present two types of abelian twist elements, leading to two twisted spacetime coordinate systems (κ-deformed and θ-deformed spacetime). We show that the affine Hopf algebra igl(n, R) gives the correct commutation relation upon twisting. In section IV, we discuss some aspects of the symmetry algebra and the κ-deformed spacetime induced from twisting igl(n, R), with some physical examples.
II. BRIEF SUMMARY OF TWISTING HOPF ALGEBRA
For any Lie algebra g, we have a unique universal enveloping algebra U(g) which preserves the central property of the Lie algebra (the Lie commutator relations) in terms of a unital associative algebra [41]. U(g) becomes a Hopf algebra if it is endowed with a co-algebra structure. For the Lie algebra generators Y ∈ g ⊂ U(g), U(g) becomes a Hopf algebra if we define
∆(Y) = Y ⊗ 1 + 1 ⊗ Y,  ε(Y) = 0,  S(Y) = −Y,  (3)
where ∆(Y) is the coproduct of Y, ε(Y) is the counit, and S(Y) is the coinverse (antipode) of Y. In other words, the set {U(g), ·, ∆, ε, S} constitutes a Hopf algebra. Y acts on the module algebra A and on the tensor algebra of A, and the action satisfies the relation (hereafter we use Sweedler's notation)
Y ⊲ (φ · ψ) = · [∆(Y) ⊲ (φ ⊗ ψ)] = (Y_(1) ⊲ φ) · (Y_(2) ⊲ ψ),  (4)
where φ, ψ ∈ A, the symbol · is the multiplication in the algebra A, and the symbol ⊲ denotes the action of the Lie generators Y ∈ U(g) on the module algebra A.
We obtain a new (twisted) Hopf algebra {U_F(g), ·, ∆_F, ε_F, S_F} from the original {U(g), ·, ∆, ε, S} if there exists a twist element F ∈ U(g) ⊗ U(g) which satisfies the relations
(F ⊗ 1)(∆ ⊗ id)F = (1 ⊗ F)(id ⊗ ∆)F,  (ε ⊗ id)F = 1 = (id ⊗ ε)F.  (5)
These relations are called the counital 2-cocycle condition. The relation between the two Hopf algebras, U_F(g) and U(g), is
∆_F(Y) = F ∆(Y) F^(−1),  ε_F = ε,  (6)
S_F(Y) = U S(Y) U^(−1),  with U = F_(1) S(F_(2)).  (7)
If A is an algebra on which U(g) acts covariantly in the sense of Eq. (4), then for all φ, ψ ∈ A,
φ ⋆ ψ = · [F^(−1) ⊲ (φ ⊗ ψ)]  (8)
defines a new associative algebra A_F. In constructing the new Hopf algebra and obtaining the twisted module algebra, Eq. (5) is crucial for the associativity of the twisted module algebra. This construction of the twisted Hopf algebra has great advantages when it is applied to physical problems whose symmetry group and irreducible representations are known, since we can use the same irreducible representations and Casimir operators in the twisted theory.
III. κ-DEFORMED COMMUTATION RELATIONS FROM TWISTING igl(n, R)
In this paper we focus on twisting the Hopf algebra of igl(n, R) as the symmetry algebra. This section presents two abelian twists which result in the two non-commutative coordinate systems, Eq. (1) and Eq. (2).
We use the commutation relations of the Lie algebra g = igl(n, R), Eq. (9), where P_μ can be interpreted as the generators of translations in the x^μ-direction and M^μ_ν as the generators of rotations, dilations, and contractions.
In coordinate space, the generators are represented as in Eq. (10) (note that the generators M^μ_ν are different from those of the Poincaré algebra).
A. κ-deformed non-commutativity
The κ-deformation is generated by the twist element F_κ of Eq. (11), and one can confirm that F_κ is an abelian twist element. With this twist element, Eq. (11), we twist the Hopf algebra U(g ≃ igl(n, R)) to get U_κ(g) as in section II. From the fact that [E ⊗ D, D ⊗ E] = 0, we can rewrite F_κ in a form which greatly simplifies the calculation of the coproducts of the twisted Hopf algebra. With the commutation relations (9), we obtain the explicit forms of the coproducts ∆(Y). In this calculation, we use the well-known operator relation Ad_{e^B} C = Σ_{n≥0} (1/n!) (Ad B)^n C, with (Ad B)C = [B, C], together with the relations satisfied by the operator Ω (in particular Ω³ = Ω). The algebra acts on the spacetime coordinates x^μ with the commutative multiplication. When twisting U(P), one has to redefine the multiplication as in Eq. (8), while retaining the action of the generators of the Hopf algebra on the coordinates as in (10). Since P_α = −i∂_α in this representation, the commutation relations between the spacetime coordinates are deduced from this ⋆-product:
[x^0, x^j]_⋆ = (i/κ) x^j,  [x^i, x^j]_⋆ = 0,
which corresponds to the time-like κ-deformed spacetime of Eq. (2) with a^μ = (1, 0, 0, 0). The case of the tachyonic (a_μ a^μ = 1) and light-cone (a_μ a^μ = −1) κ-deformation is obtained in the work of Lukierski et al. [39]. It should be noted that the twisted Hopf algebra in this section is different from that of the conventional κ-Minkowski algebra, which is a deformed Poincaré algebra, in that it has a different co-algebra structure arising from a bigger symmetry.
B. θ-deformed non-commutativity
Since the affine algebra contains the Poincaré algebra, there is also a twist of the same form as in the canonical non-commutativity case [18]. We use the same twist element,
F_θ = exp( (i/2) θ^μν P_μ ⊗ P_ν ).
This twist element satisfies the 2-cocycle condition, Eq. (5). Following the same procedure as in subsection III A, we obtain the twisted Hopf algebra given by its coproducts. Here also note that the generators M^μ_ν are different from those of the Poincaré algebra. As in the κ-deformed case, analogous operator identities are used for n ≥ 1. When φ, ψ ∈ A_θ are functions of the same spacetime coordinates x^μ, the product ⋆ becomes the well-known Moyal product. Since P_α = −i∂_α in this representation, the commutation relations between the spacetime coordinates follow from the ⋆-product,
[x^μ, x^ν]_⋆ = i θ^μν,
which is the same commutation relation of coordinates as in the canonical noncommutative spacetime, Eq. (1). This twist differs from the conventional twist [18] in that the relevant group is different. The twist in the work of Wess [19] and Chaichian et al. [18] is that of the Poincaré Hopf algebra. Since we use a bigger symmetry algebra, igl(n, R), than the Poincaré algebra iso(n, R), only the antisymmetric part of our twisted coproduct of the generators M^μ_ν corresponds to those of [19] and [18]. We have additional components (the coproducts of the symmetric part of the generators M^μ_ν) in the coproduct sector.
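As a quick symbolic check (a sketch for illustration, not the authors' derivation), the Moyal star product that arises in this θ-deformed case reproduces [x^μ, x^ν]_⋆ = i θ^μν; for functions linear in the coordinates the star-product series terminates at first order, so the check is exact. The snippet below does this for two coordinates with θ^{xy} = −θ^{yx} = θ.

import sympy as sp

x, y, theta = sp.symbols("x y theta", real=True)
I = sp.I

def moyal(f, g):
    """First-order Moyal product with theta^{xy} = -theta^{yx} = theta."""
    return sp.expand(
        f * g
        + (I / 2) * theta * (sp.diff(f, x) * sp.diff(g, y)
                             - sp.diff(f, y) * sp.diff(g, x))
    )

commutator = sp.simplify(moyal(x, y) - moyal(y, x))
print(commutator)   # -> I*theta, i.e. [x, y]_* = i*theta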
IV. DISCUSSION
In this paper we obtained the time-like κ-deformed commutation relation by twisting the Hopf algebra of igl(n, R), not by twisting the Poincaré Hopf algebra. To understand why it is difficult to obtain the time-like κ-deformed commutation relation by twisting the Poincaré Hopf algebra U(P), it is instructive to try an element F ∈ U(P) ⊗ U(P) as a twist element.
Write F = 1 + r + O(r²), with r = r₁ ⊗ r₂ for r₁, r₂ ∈ U(P). From the action of r₁ on the coordinates, we infer an ansatz for the classical r-matrix, built from P_ρ and L_μν with constant coefficients r^ρμν, in order to obtain the first-order part of the κ-deformed commutation relation, Eq. (2). In order for this element F to be a twist, the above classical r-matrix has to satisfy the classical Yang-Baxter equation. One can show that this form of the classical r-matrix, Eq. (32), cannot satisfy the classical Yang-Baxter equation in general, except for a very special combination of P_ρ and L_μν, i.e., [P_ρ, L_μν] = 0, which is the crucial condition for the twist to satisfy the 2-cocycle condition. For that special case, Lukierski and Woronowicz give the 'abelian' twist element from the classical r-matrix [39]. Although there have been many studies on field theories in κ-Minkowski spacetime, attempts to twist the Poincaré group to obtain the κ-deformed coordinate spacetime have succeeded only for the light-cone κ-deformation [38], [39].
Hence, in order to obtain a twist which gives the time-like κ-deformation, we need a bigger symmetry algebra than the Poincaré algebra. We have obtained the κ-deformed non-commutativity at the cost of a bigger symmetry. Our choice is the Hopf algebra of the affine group IGL(n, R), and we have successfully twisted U(igl(n, R)) in two different ways, corresponding to the κ-deformed and the θ-deformed non-commutativity.
Incidentally, since there are two abelian subalgebras in igl(n, R), we derived the two non-commutativities corresponding to the κ-deformed and the θ-deformed cases in section III. Is there a twist which transforms the θ-noncommutativity into the κ-noncommutativity? Since the inverses of F_θ and F_κ also satisfy the counital 2-cocycle condition, one might think of the maps F_θ ∘ F_κ^(−1) and F_κ ∘ F_θ^(−1) as mappings between the two different non-commutative spaces. However, F_θ ∘ F_κ^(−1) and F_κ ∘ F_θ^(−1) do not satisfy the counital 2-cocycle condition in general. Hence their composition cannot be a twist. We cannot regard this relation between the two twists as a kind of symmetry (while it can be a kind of symmetry in the context of quasi-Hopf algebras).
The algebra igl(n, R) we twisted in this paper can be interpreted as a physical symmetry algebra in two ways. One interpretation is to regard GL(4) as the generalization of O(1, 3) [42,43,44]. In those works, the authors consider the generalization of the Einstein-Hilbert action, that is, passing from the theory with local O(1, 3) symmetry to the theory with local GL(4, R) symmetry. The present method will be useful when these kinds of non-commutativity are applied to such theories with translational symmetry. Another possible interpretation is to regard igl(n, R) as the algebra of a subgroup of the diffeomorphism group. Since Minkowski spacetime is a solution of the Einstein-Hilbert action, we obtain by twisting the κ-Minkowski spacetime and the canonical non-commutative spacetime, using F_κ and F_θ, respectively. These relations may be summarized as follows: twisting with F_κ or F_θ takes the commutative Minkowski solution to the κ-Minkowski or the canonical non-commutative spacetime, respectively. In this sense, we are working in the same direction as Aschieri et al. [31,32]. From this relation, we realize that the canonical twist in this interpretation has a different origin from the twists in earlier studies [18], [19]. We twist a subgroup of Diff(M), while in earlier studies (for example, Chaichian et al. [18]) one twists the Hopf algebra of iso(1, 3), i.e., the algebra of the Poincaré group, or of a more general symmetry group (for example, conformal symmetry by Matlock [21], etc.). The twisted κ-deformation gives the same coordinate non-commutativity, but is distinguished from the κ-deformed Minkowski algebra in that the co-algebra structures are different. Black holes in κ-deformed spacetime can be regarded as an example of the application of the result of this paper. Since field theories in κ-deformed spacetime are the same as field theories with a κ-Moyal product, as in Eq. (23), in commutative spacetime, the deformed Einstein equations will be the ones in which the products between the metric and its derivatives are replaced by the κ-Moyal product. Among the solutions of the Einstein equations in commutative spacetime, a static solution is also a solution in κ-deformed spacetime. The Einstein equations take the form: R_μν(g(x)) = 0 → R^⋆_μν(g(x)) = 0.
Since a static solution of the left-hand side of Eq. (38) has no time dependence, the κ-Moyal products on the right-hand side reduce to ordinary products, so such a solution automatically satisfies the deformed Einstein equations. The κ-deformed spacetime here denotes the module space of the twist-induced algebra; it should be distinguished from the κ-deformed spacetime which is the module space of the well-known κ-deformed algebra. Though a static solution is also a solution in κ-deformed spacetime, it has different dynamics: the time dependence changes the dynamics through the κ-Moyal products. That is, the perturbation equation of the static solution will be different: δR^⋆_μν(g(x)) = 0 ≇ δR_μν(g(x)) = 0.
Hence, to determine the stability of a static solution in κ-deformed spacetime, a careful analysis of the deformed perturbation equation, Eq. (39), is required. The stability analysis of these static solutions in κ-deformed spacetime is under investigation.
"Physics"
] |
Effect of Dispersoids on the Microstructure Evolution in Al–Mg–Si Alloys
An Al–Mg–Si alloy with a high level of Cr is investigated via Electron Probe Micro Analysis (EPMA) and Scanning Transmission Electron Microscopy (STEM). EPMA is conducted on the same area of a sample after numerous heat treatments in a vacuum furnace to study the evolution of Mg, Si, Cr, and Fe from the segregated structure formed on casting. Mg and Si are found to segregate toward the grain boundaries and remained segregated up to 550 °C. Cr segregates away from the grain boundaries. Regions of lower Cr separated from high Cr regions by sharp transitions are observed. To investigate the effect of segregation on dispersoid precipitation, samples are heated to a number of different temperatures and examined using STEM. The evolution of dispersoid area fraction and effective diameter is measured as a function of position within a grain. The dispersoid area fraction decreases, while the size initially decreases and then increases toward the grain center. Both α‐Al(FeCr)Si and α′‐AlCrSi dispersoids exist with a variety of morphologies. The α′‐AlCrSi dispersoids are found to have a larger effective diameter. The change in dispersoids fraction, size, and morphology with position has important implications for the pinning effectiveness of the dispersoids against recrystallization.
Introduction
Al-Mg-Si (6xxx series) based alloys are increasingly being used in the automotive industry. The primary reason is to reduce the weight of vehicles, resulting in a reduction of carbon dioxide (CO2) emissions and increased fuel efficiency. The demand for 6xxx series aluminum alloys is predicted to increase over the coming decade. Due to the cost and CO2 emissions associated with primary extraction of aluminum, closed-loop recycling of these alloys becomes an increasingly important process. However, recycling of automotive alloys can be difficult due to the pick-up of impurity elements such as Mn, Cr, and Fe. The effect of systematically changing the concentration of these minor alloying additions is not yet understood. In particular, Mn and Cr form fine dispersoid particles and Fe forms large constituent particles. Both types of particle have a strong influence on the recrystallization behavior and texture evolution during thermomechanical processing. The purpose of this study was to understand how high Cr levels in particular can influence dispersoid precipitation and distribution.
Three different types of dispersoid have been identified in 6xxx alloys when Cr and Mn are present. These include the α-Al(CrMnFe)Si, α′-AlCrSi, and θ-AlCr phases. [1][2][3][4] The α′-Al13Cr4Si4 phase has a face-centered cubic (FCC) structure, while θ-Al7Cr has a monoclinic structure. [1,2] When a high (Mn,Cr):Fe ratio is present, the α-Al15(MnCrFe)3Si2 phase is observed to adopt a simple cubic (SC) structure. [3] It has been found that a phase change occurs from SC Al15(MnFe)3Si2 to body-centered cubic (BCC) Al12(MnFe)3Si with increased homogenization time and temperature as further Fe diffuses to the dispersoids. [4,5] When Mn or Cr is absent, the dispersoid phase remains α, with a composition that omits the missing element.
Lodgaard and Ryum studied the precipitation of dispersoids containing Mn, Cr, and Mn + Cr. [3] They found that Mn and Mn + Cr dispersoids precipitate on a semi-coherent phase termed the "u-phase" from approximately 400 °C. A number of dispersoids precipitate from each u-phase particle, resulting in a string of dispersoids. The u-phase precipitates at approximately 350 °C on β′-Mg2Si particles, which themselves completely dissolve. In a separate study, the precipitation of Cr dispersoids was investigated, but the precipitation sequence was not determined. [6] More recent studies have also shown the requirement of β′-Mg2Si precursor particles as nucleation sites for Mn dispersoids. A number of studies reported that Mn dispersoids precipitated in strings along the <100> Al direction. [7,8] As this coincided with the traces of the β′-Mg2Si phase, it was suggested that the Si contained in the β′ phase could act as a preferential nucleation site for the dispersoids. [7] Hu showed TEM evidence of Mn dispersoids precipitating directly on the β′-Mg2Si phase. [8] In both studies, there was no indication of any other intermediate phase such as the u-phase, so whether such a phase is essential to dispersoid nucleation remains a topic for debate.
A number of studies have also conducted microprobe measurements to determine the elemental segregation in the cast microstructure. Mg and Si were found to segregate with a wavelength equivalent to the secondary dendrite arm spacing while Hu showed qualitative EPMA maps of Mg and Si segregated around the grain boundaries. [3,6,8] Cr was relatively uniformly distributed with a slight segregation toward the grain centers. As the solid solubility of Fe in aluminum is low (<0.05 wt%), it was beyond the resolution of the microprobe measurements. [3,6] However the segregation profile of Fe has been demonstrated through Scheil calculations. [9] It has been predicted that Fe segregates between the secondary dendrite arms. In addition, due to the slow diffusion of Fe in FCC aluminum, the profile is not expected to change significantly during homogenization.
In this study, an alloy with a high level of Cr (>0.4 wt%) was used in order to study the precipitation of dispersoids via STEM. An investigation of the elemental segregation through heating to homogenization temperature was also conducted.
Experimental Section
The 6xxx series ingot received was direct chill cast by the Novelis R&T center, Sierre, Switzerland. The alloy composition was 0.63Mg-0.65Si-0.21Fe-0.41Cr (wt%) with Al balance. The segregation of elements through heating was studied via EPMA. Measurements were conducted on a JEOL JXA-8530F EPMA equipped with 4 wavelength-dispersive spectrometers (WDS) containing TAP, LiF(L), PET(L), and LDE crystals. A step size of 1 µm was used for the quantitative line scans. Qualitative maps were taken at 15 kV and 43 nA beam current. The dwell time was 30 ms per point with a field of view of 512 µm. Specimens for EPMA were prepared using standard metallographic techniques. The as-cast sample was ground down to 2500 grit SiC. Polishing was conducted with 3 µm grit, with a final OP-S polish performed for 30 s. After the initial as-cast EPMA scan, the sample was heated at 3 °C min⁻¹ to 100 °C in a vacuum furnace to protect the polished surface. To prevent oxidation of the surface, the sample was cooled in the non-heated segment of the vacuum tube. Once cooled, the sample was given a 10 s clean using an OP-S pad with water to remove any carbon deposition from the previous scan. The sample was then scanned again on the same area. This process was repeated, heating from room temperature to 200-550 °C in increments of 50 °C.
Heat treatments for STEM analysis were conducted at two different temperatures to investigate the u-phase + dispersoids (450 °C) and the general dispersoid structure (550 °C). In order to study the dispersoid size and area fraction, the sample heated to 550 °C was held at this temperature for 20 h. An air-circulating furnace with temperature control within ±5 °C was used. 10 × 10 × 10 mm as-cast samples were heated at a rate of 50 °C h⁻¹ from room temperature to the temperatures given above, followed by a water quench. The samples were ground to 100 µm thickness, with the final thinning achieved using a Struers TenuPol electropolisher. The electrolyte was a 90:10 methanol:perchloric acid solution, and electropolishing was conducted at −35 °C and 21 V. STEM imaging was performed on an FEI Talos F200A microscope equipped with Super-X EDXS detectors.
Elemental Segregation through Heating
In order to represent the evolution of segregation, quantitative line scans and qualitative maps were conducted on the same sample and area after subsequent heat treatments. Figure 1 shows an example backscattered electron (BSE) micrograph with corresponding qualitative maps of the area that was scanned. Note that each qualitative map consists of a raw intensity count; maps cannot be compared with one another. The position of the quantitative line scan is also shown in the BSE micrograph of Figure 1. Figure 2 shows the quantitative line scan data at 100, 400, 450, 500, and 550 °C. The line scan was centered on an Fe-containing grain boundary (GB) particle identified in Figure 2 at 100 °C. Thermodynamic simulations were also conducted in JMatPro, predicting the atomic% of each element in Al against fraction solid using the Scheil-Gulliver assumption during solidification of the alloy. In this assumption, no back-diffusion is allowed in the solid, and the prediction of the reactions toward the end of solidification must be treated with caution. Nevertheless, the predicted segregation direction with respect to the fraction solid is consistent with the measured segregation direction with respect to the position within a dendrite (assuming the fraction solid correlates directly with position in the dendrite from center to edge). For example, the simulations shown in Figure 3 indicate similar segregation of Mg and Si to that measured in the current study. Fe has a very low solubility limit in Al (<0.05 wt%); however, there is an indication of segregation away from the dendrite centers adjacent to the grain boundary particle located in Figure 2 at all temperatures. The majority of Fe is located in the eutectic particles on the grain boundaries, as shown in the Fe map of Figure 1. The Fe segregation profile agrees with the thermodynamic simulation in Figure 3, in addition to other simulated results in the literature. [9] Cr is found to segregate toward the grain center, as seen in Figures 1 and 2, which is in agreement with the thermodynamic simulation shown in Figure 3. The spikes in concentration of some elements toward the end of solidification correspond to invariant reactions forming the primary intermetallic particles.
From the as-cast state to 300 °C, there is no significant change in segregation or overall matrix concentrations for Mg, Si, Fe, or Cr. However, there is evidence of some Mg + Si containing particles present in the microstructure from solidification, as can be seen in Figure 2 at 100 °C. At 400 °C the Mg and Si levels in the matrix are significantly lower than the levels at 100 °C. An average of the Mg and Si levels in the matrix was calculated from the EPMA line scans shown in Figure 2, excluding the composition spikes due to precipitates. The decrease in Mg and Si in the matrix from 100 to 400 °C is 44% and 33%, respectively, giving a Mg/Si ratio for the decrease of 1.47. The decrease in Mg and Si contained in the matrix is due to the precipitation of Mg + Si containing particles, indicated by the Mg and Si peaks in Figure 2 at 400 °C. From 400 to 500 °C the Mg and Si matrix compositions increase by 64% and 37%, respectively, giving a Mg/Si ratio of 1.73. Thermodynamic simulations (conducted in JMatPro) were used to predict the composition of metastable and equilibrium phases in this alloy.
The β″ and β′ phases were both predicted to have Mg/Si ratios close to 1, while the equilibrium β phase has a 2:1 composition. The β″ phase exists at temperatures around 100-200 °C. Between 300 and 500 °C, a number of additional Mg/Si-containing phases exist alongside the β′ and β phases, depending on the Mg/Si ratio and heat treatment. These include the U1-Al2MgSi2, U2-AlMgSi, and B′-Al3Mg9Si7 phases, which have Mg/Si ratios close to 1. [10] The level of Mg and Si released into the matrix will depend on the composition of the phases present. Assuming that only β and β′ exist, in equal fractions, with Mg/Si ratios of 2:1 and 1:1, respectively, the Mg/Si ratio released between 400 and 500 °C should be around 1.5. This shows that less Si dissolves into the matrix than expected, which can be attributed to Si being contained in the precipitating dispersoids.
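The following is an illustrative sketch of the matrix-composition averaging described above: line-scan points whose concentration lies far above the local background (precipitate spikes) are clipped before averaging. The clipping threshold, number of passes, and file format are assumptions, not the actual processing parameters used in this work.

import numpy as np

def matrix_average(conc, n_sigma=3.0, passes=3):
    """Mean concentration after iteratively clipping spike points."""
    c = np.asarray(conc, dtype=float)
    for _ in range(passes):
        mu, sd = c.mean(), c.std()
        c = c[c < mu + n_sigma * sd]
    return c.mean()

# Hypothetical two-column (Mg, Si) line-scan files at 100 and 400 C.
mg_100, si_100 = np.loadtxt("scan_100C.txt", unpack=True)
mg_400, si_400 = np.loadtxt("scan_400C.txt", unpack=True)

d_mg = matrix_average(mg_100) - matrix_average(mg_400)
d_si = matrix_average(si_100) - matrix_average(si_400)
print(f"Mg/Si ratio of the solute removed between 100 and 400 C: {d_mg / d_si:.2f}")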
At 450 °C, as shown in Figure 2, there is clear alignment of the Fe, Cr, and Si peaks, indicating the presence of α-Al(CrFe)Si dispersoids in the microstructure. Cr is found to segregate toward the grain centers at all temperatures. The Cr segregation profile is similar from the as-cast state to 550 °C, with no significant change. This is due to the slow diffusion rate of Cr in FCC aluminum. There are also regions of significantly lower Cr in the vicinity of grain boundaries, as seen in the Cr map of Figure 1. These areas are of particular importance, as a lower density of dispersoids will form there, and thus the grain boundary pinning effect of the dispersoids will be locally greatly reduced, as discussed in the next section.
At 500-550 °C, it is expected that the β/β′ particles should become completely dissolved. However, there is still evidence of β/β′ particles at 500 °C and 550 °C. These are represented by the spikes in Mg shown in Figure 2 at 550 °C. Since no β/β′ particles were observed at the same temperature by electron microscopy, where the sample had been quenched before examination, it is suggested that the β/β′ composition spikes in Figure 2 at 550 °C are the result of reprecipitation of these particles during the slow air cool, which was imposed by the need to use a vacuum furnace to prevent oxidation. Furthermore, the Mg matrix concentration decreases from 500 to 550 °C, further indicating precipitation of β/β′ particles. The spikes in composition measured at the higher temperatures can be attributed to the beam interacting with these precipitates. At 450 °C, dispersoids were observed in association with β′-Mg2Si particles that had formed at lower temperatures. [3,7] This suggests that Cr-containing dispersoids follow a similar precipitation sequence to Mn and Mn + Cr dispersoids. All the dispersoids studied via EDS were of the α-Al(FeCr)Si type, and no evidence of the α′-AlCrSi dispersoid has been found at this temperature. Figure 5a and b show STEM micrographs of the general dispersoid morphologies after continuous heating to 550 °C and holding for 20 h. Figure 5a shows the microstructure at positions between 0 and 7.5 µm away from the grain boundary, with larger eutectic particles present on the left of the micrograph indicating the grain boundary position. Figure 5b covers a distance range further toward the grain center. A number of micrographs were taken consecutively in a line from the grain boundary toward the grain center after the 550 °C treatment, with the analysis conducted using ImageJ. Figure 6 shows the evolution of dispersoid area fraction and effective diameter from the grain boundary (located at 0-7.5 µm). The precipitates studied consisted of two types of dispersoids, the α-Al(FeCr)Si and α′-AlCrSi dispersoids. The analysis shown in Figure 6 does not distinguish between the two types of dispersoid. From 0 to 15 µm from the grain boundary there is no significant change in size or area fraction of the dispersoids (the precipitate-free region and the eutectic particles displayed on the left of Figure 5a were not included in the analysis). Between 15 and 37.5 µm the dispersoid size and area fraction steadily decrease. From 37.5 to 60 µm the dispersoid size increases whilst the area fraction remains similar, indicating a lower number density of dispersoids toward the grain center.
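As a small sketch of the particle statistics reported in Figure 6, the area fraction and effective (equivalent circular) diameter can be computed from a list of dispersoid areas, for example as exported from an ImageJ "Analyze Particles" run. The field size and particle areas below are hypothetical placeholders, not measured values.

import numpy as np

def particle_stats(areas_um2, field_area_um2):
    """Area fraction and mean equivalent circular diameter of particles."""
    areas = np.asarray(areas_um2, dtype=float)
    area_fraction = areas.sum() / field_area_um2
    d_eff = np.sqrt(4.0 * areas / np.pi)      # equivalent circular diameter
    return area_fraction, d_eff.mean()

# Example: one 7.5 um x 7.5 um field with measured dispersoid areas in um^2.
af, d_mean = particle_stats([0.012, 0.020, 0.008, 0.015], field_area_um2=7.5 * 7.5)
print(f"area fraction = {af:.4f}, mean effective diameter = {d_mean:.3f} um")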
Precipitation of Dispersoids
The dispersoids play an important role during thermomechanical processing, exerting a pressure on grain and subgrain boundaries known as the Zener pinning pressure P_Z. This pressure is in direct competition with the driving pressure for recrystallization, P_D. Recrystallization will occur when P_Z < P_D, which can be represented as [11]
P_Z = 3 V_f γ_H / (2r) < P_D ≈ 3 γ_S / D,
where V_f is the particle volume fraction, r is the radius of the particle, and γ_H is the high-angle grain boundary energy, which for Al is 0.324 J m⁻². The driving pressure P_D in a well-recovered structure depends on the subgrain boundary energy γ_S and the mean subgrain diameter D. Assuming a uniform stored energy in the grain, γ_S can be estimated as ≈0.2 of the high-angle grain boundary energy. [11] Typical subgrain diameters for Al-Mg-Si alloys after cold deformation are around 0.2-1.0 µm. [13,14] This gives a driving pressure of 0.19-0.96 MPa. Figure 6 shows the evolution of the Zener pinning pressure from grain boundary to grain center. As can be seen, the pinning pressure decreases toward the grain center and, assuming an average subgrain diameter of 0.6 µm (resulting in a driving pressure P_D ≈ 0.32 MPa), drops below the driving pressure at around 35 µm. Even though the current sample has not been deformed, a similar trend in dispersoid pinning pressure will exist after deformation. This calculation suggests that, at the start of the annealing process after cold rolling, there are regions toward the grain center where the pinning effect of dispersoids is insufficient to prevent boundary migration. The observation that recrystallization is not usually initiated in such regions can, therefore, be attributed to the lack of suitable nucleation sites for new grains.
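The comparison above can be reproduced numerically with a short sketch using the standard Zener estimate P_Z = 3 V_f γ_H / (2r) and P_D ≈ 3 γ_S / D; only γ_H, the γ_S estimate, and the 0.6 µm subgrain diameter follow the text, while the V_f and r values per position are illustrative placeholders rather than the measured profiles of Figure 6.

GAMMA_H = 0.324           # high-angle boundary energy for Al, J/m^2 (from text)
GAMMA_S = 0.2 * GAMMA_H   # subgrain boundary energy estimate, J/m^2

def zener_pressure(v_f, r_m):
    """Zener pinning pressure in Pa."""
    return 3.0 * v_f * GAMMA_H / (2.0 * r_m)

def driving_pressure(d_subgrain_m):
    """Driving pressure of a recovered subgrain structure in Pa."""
    return 3.0 * GAMMA_S / d_subgrain_m

p_d = driving_pressure(0.6e-6)   # 0.6 um subgrain -> ~0.32 MPa

# Placeholder (distance in um, V_f, particle radius in um) triples.
for x_um, v_f, r_um in [(10, 0.040, 0.050), (35, 0.020, 0.045), (55, 0.018, 0.080)]:
    p_z = zener_pressure(v_f, r_um * 1e-6)
    status = "pinned" if p_z > p_d else "boundary can migrate"
    print(f"x = {x_um:3d} um: P_Z = {p_z/1e6:.2f} MPa vs P_D = {p_d/1e6:.2f} MPa -> {status}")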
The decrease in number of dispersoids toward the grain center could be attributed to the segregation of Mg and Si through heating, as shown in Figure 2. Other studies have shown for Mn and Mn þ Cr dispersoid systems, that the dispersoids precipitate at lower temperatures on Mg and Si containing particles. [3] In these systems, the segregation of Mg and Si results in an increase in density of β 0 particles toward the grain boundary. This results in a higher density of dispersoids toward the grain boundary during heating. This observation emphasizes the complexity of the influence of the segregation in the cast structure and through heating on dispersoids precipitation, since it is not only the distribution of the dispersoid forming elements (e.g., Mn and Cr) that is important, but also the segregation of the major alloying additions that form the pre-cursor phases (Mg þ Si). Dispersoids will only form in regions where there is sufficient Cr, but once this critical concentration is exceeded, the local Mg and Si concentrations become important by controlling the potential to precipitate the pre-cursor phases. The optimum conditions to precipitate the maximum fraction of dispersoids require both sufficient Cr and sufficient Mg and Si. Since the local Cr concentration is low toward the dendrite edges, and the local Mg www.advancedsciencenews.com www.aem-journal.com and Si are low toward the centers, this optimum condition corresponds to positions between these extremes. The increase in dispersoid size toward the grain center is most likely due to a combination of a higher level of Cr in the surrounding matrix toward the grain center (as indicated in Figure 2), and a lower density of dispersoids. The lower density of dispersoids toward the grain center allows for less competition of the remaining Cr in the matrix between neighboring dispersoids, enhancing dispersoid growth and coarsening. Figure 7 shows dispersoids after continuous heating to 550 C with corresponding Electron Dispersive Spectroscopy (EDS) maps. Figure 5 indicates the presence of two types of dispersoids, the α-Al(FeCr)Si and α 0 -AlCrSi dispersoids, distinguished by the exclusion of Fe in the α 0 dispersoids. 13 of each type of disperoid were studied. The average effective diameters of the two types of dispersoids are shown in Table 1 along with the average aspect ratio and form factor. Aspect ratio and form factor were chosen to represent the dispersoid morphology, as they are dependent on two different particle attributes, elongation, and ruggedness, respectively. [15] The aspect ratio and form factor indicate that both types of dispersoids have varied morphologies. This too will influence their ability to pin migrating grain boundaries.
Conclusions
EPMA quantitative line scans were conducted on the same area at increasing temperatures up to 550 °C. In the as-cast condition, Mg and Si segregated away from the dendrite centers, whereas Cr segregated toward the grain center. The segregation of Cr was retained during heating up to 550 °C due to its slow diffusivity in FCC Al, and the regions of significantly low Cr found in the vicinity of grain boundaries were also retained. β/β′ particles that precipitate between 300 and 450 °C significantly reduced the matrix concentration of Mg and Si. Once the β/β′ particles had begun to dissolve, segregation of Mg and Si toward the grain boundary was still present. Quantitative line scans indicated the existence of α-Al(FeCr)Si dispersoids from 450 °C and above. The dispersoid area fraction decreased toward the grain center, while the size decreased up to around 35 µm away from the boundary and then increased toward the grain center. This results in the Zener pinning pressure decreasing toward the grain center, and it is estimated that there are regions toward the grain center where the pinning pressure will be exceeded by the driving pressure for high-angle boundary migration. α′-AlCrSi dispersoids were found to be larger than the α-Al(FeCr)Si dispersoids. Both types of dispersoid had varied morphologies, including spherical, ellipsoidal, and plate-like structures. There was no evidence of α′-AlCrSi dispersoids at 450 °C or lower. In addition, no evidence of the θ-AlCr phase co-existing with the α-Al(FeCr)Si and α′-AlCrSi dispersoids has yet been found in this alloy. The combination of the segregation of Mg, Si, and Cr with the decrease in area fraction toward the grain center highlights the dependence of dispersoid precipitation on the distribution of the Mg + Si containing precursor phase during heating to the homogenization temperature.
"Materials Science"
] |
Economic and Mathematical Modeling for the Process Management of the Company's Financial Flows
This paper presents an analysis of existing methods and models designed to solve the problem of planning the distribution of financial flows in the operational management cycle of the enterprise; it also offers tools for process management of enterprise financial flows based on the method of dynamic programming, which allows for determining the optimal combination of factors affecting the financial flow of the enterprise, taking into account existing restrictions on changes in the influencing parameters of the model. The current study develops an innovative model that maximizes the economic efficiency of investment in the sale of food products through retail chains and the practical implementation of the developed model based on the data from the financial reports of LLC "Kraft Heinz Vostok". The theoretical and methodological basis of the research includes the works of Russian and foreign experts in the fields of methodology of economic and mathematical modeling and decision-making, dynamic programming, system analysis, information approach to the analysis of systems, process management of enterprise financial flows, and human resource management. The author's methodology makes it possible to increase the company's profitability in key clients and categories in the range of 4 to 6 million dollars and to increase the return on investment by 10–17%. The scientifically innovative aim is to develop a toolkit for process management of enterprise financial flows, characterized by a systematic combination of methods of dynamic programming, social financial technologies, and economic evaluation of investments, which allows for the creation of mechanisms for managing the development of enterprises of all organizational and legal forms and the development of model projects of decision support systems with the prospects of their incorporation into existing information and analytical systems.
decisions based on economic and mathematical methods and models. The problems of low efficiency in the processes of managing the financial resources of enterprises are related to the fragmentation and imperfection of the mathematical apparatus and tools used in practice, as well as methods of work incentives [7][8][9][10]. Attempts to create a full-fledged comprehensive system of management of financial flows, including a progressive system of incentives for employees and payments to the development fund of the enterprise, as well as effective mechanisms for managing financial resources based on the developed methodology of economic and mathematical modeling of management processes for developing medical organizations, are present in the following works [11,12]. Yet all of them have a drawback, which is that the issues of managing investments in the development of organizations are not fully discussed, and the criteria for making informed managerial decisions aimed at organizing the interaction of patients with polyclinics and hospitals according to the categories of medical care provided are not specified. Ambitious goals require advanced technologies and models of process management of the financial flows of enterprises and prospective systems of financing their activities. Nonlinear processes and computational methods in the management tasks of such systems are becoming increasingly relevant as the most effective tools for making management decisions and providing scientifically valid algorithms and models for the development of management objects. When managing complex systems with many interrelated parameters, it is necessary to apply algorithms and methods of nonlinear programming to achieve the best result from the set of possible values of the dependent variable within a limited range of changing influencing factors. As a rule, both the target function and each inequality of the constraint system of the optimization problem in most modern models of real process control are non-linear functions; this fact implies additional restrictions on control objects and requires special mathematical models and instrumental methods. Thus, in solving the problems of financial flow management, dynamic programming methods have proven effective [13,14].
Thus, among the most important scientific and practical problems of the national economy are the economic and mathematical modeling of process management of enterprise financial flows, sound policy on existing and prospective customers and categories of goods, products, work, and services, the instrumental base of enterprise development management, taking into account promising and effective technologies for financing and evaluating the effectiveness of investment in their development, structural system analysis, internal and external factors, and scientifically substantiated personnel policy and motivation systems for executives and administrative and managerial staff.
The main goal of the study is to develop an economic and mathematical model that optimizes financial flows using the methods of dynamic programming and process management, and to implement the developed model in practice based on data from the financial statements of LLC "Kraft Heinz Vostok". This paper studies the method of managing the cash flows of an enterprise for given volumes of investment and historical returns. It focuses on modeling the financial flows of LLC "Kraft Heinz Vostok" using methods of dynamic programming and process management.
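As a minimal sketch of the kind of discrete dynamic-programming allocation the paper builds on (not the authors' actual model), a fixed investment budget can be split across retail chains or product categories so that total expected return is maximized. The return tables below are hypothetical placeholders, not data from LLC "Kraft Heinz Vostok".

def allocate(budget_units, returns):
    """returns[i][k] = expected return if k budget units go to channel i."""
    n = len(returns)
    best = [[0.0] * (budget_units + 1) for _ in range(n + 1)]
    choice = [[0] * (budget_units + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget_units + 1):
            for k in range(0, min(b, len(returns[i - 1]) - 1) + 1):
                value = best[i - 1][b - k] + returns[i - 1][k]
                if value > best[i][b]:
                    best[i][b], choice[i][b] = value, k
    # Recover the optimal split by walking back through the stored choices.
    plan, b = [], budget_units
    for i in range(n, 0, -1):
        k = choice[i][b]
        plan.append(k)
        b -= k
    return best[n][budget_units], plan[::-1]

# Hypothetical returns per retail chain (index = budget units invested).
returns = [[0, 1.2, 2.1, 2.6], [0, 0.9, 1.9, 2.8], [0, 1.5, 2.2, 2.5]]
total, plan = allocate(5, returns)
print(f"max return = {total}, allocation = {plan}")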
2-Literature Review
We analyzed 87 academic articles by Russian and foreign researchers and experts that share the topic of this study; they can be divided into the following groups: Cash Flow Management (15 papers), Investment Efficiency (17 papers), Goods Safety (11 papers), Welfare and Wage Growth (13 papers), Enterprise Development (6 papers), Dynamic Programming (11 papers), and Business Process Management (14 papers). The relative weights of each group are shown in Figure 1.
Analysis of the data presented in Figure 1 shows that the largest share of the papers falls in the Investment Efficiency group (17 articles, or almost one fifth of all papers), followed by the problems related to Cash Flow Management, with a total of 15 articles, which is 17% of the literature review. The group of articles devoted to Business Process Management (14 papers, or 16% of the articles) closes the top three. The smallest group of works is the Enterprise Development section, with a total of 6 articles, which is 7% of the list of analyzed articles.
The extent of the research problem development and its analysis are presented in detail in Table 1.
Figure 1. Distribution of articles by research topics
A comparative analysis of the research results obtained by the authors and the results of other scientists and experts dealing with certain aspects of the problems raised in the article is presented in Table 2.

Welfare and Wage Growth

A developed economic and mathematical model of process management of company financial flows, the target function of which is to maximize the economic efficiency of investment (return on investment) in the sale of food products through retail chains.
In contrast to the process management models used in practice [20,25], the basis of this economic and mathematical model is a progressive system of incentives for employees of the company, which allows increasing financial incentives for employees depending on the increase in the volume of sales of food products through retail chains, taking into account the optimal distribution of investments obtained by solving the dynamic programming problem, as well as deductions for developing the company, which allows the entire team to participate in the process of management.
Enterprise Development
A developed comprehensive system of process management of the company's financial flows, which allows linking profits from the sale of food products through retail chains with additional remuneration of staff, investments in key partners and categories of food products, as well as allocation for developing the enterprise.
In contrast to established models of process management of companies [28][29][30], this approach makes it possible to consider simultaneously the effectiveness of investments in key customers and in food categories, based on the results obtained using dynamic programming and process management techniques: a comprehensive, evidence-based decision is made whose synergistic effect exceeds the effectiveness of investments in key customers and in food categories taken separately.
Investment Efficiency
An approach to the economic evaluation of investment feasibility that consists of making management decisions based on estimates of the return on investment in key customers and categories of food products, taking into account the capacity of the market.
This approach differs from other methods of investment evaluation [32,35,42,89] in that the management decision is made in a three-dimensional system: return on investment in key customers and in categories of food products, considering the optimal distribution of investments based on the results of the dynamic programming problem, and market capacity, understood as the difference between the target value of sales of categories of food products for each retail chain and their current state. Additionally, a distinctive feature of the authors' approach is that additional profits from increased sales are directed to investments, which makes the model closed, complex, and dynamic.
Goods Safety
An approach to food and commodity security, the essence of which is that the authors propose to load the spare capacity of retail networks and to increase the sales of food products through those retail chains that have the highest return on invested capital and the maximum market capacity, understood as the difference between the target sales of a food product category for each retail chain and their current state; this increases the chains' financial sustainability and competitiveness in the food market.
In contrast to a number of studies [45,51], the proposed approach makes it possible to increase the investment attractiveness of the market of food products sold through retail chains, owing to the practical application of the progressive system of staff incentives and of the optimal distribution of investments between retail chains and categories of food products developed by the authors on the basis of solving the dynamic programming problem and using process management of the company's financial flows.
Cash Flow Management
A comprehensive model for managing the cash flow of an enterprise, its revenues and expenses from the sale of products, goods, works, and services.
The proposed model differs from the financial flow management models in use [57,59,62,68] in that it gives decision-makers the opportunity to coordinate investment programs and plans depending on the prices of the final products sold through retail chains and on the volume and cost of these products, which contributes to the economic efficiency (profitability) of investment, material incentives for staff, and deductions to the development fund of the company.

Dynamic Programming

A new formulation of, and approach to solving, the dynamic programming problem of allocating investments between key customers and categories of food products.
In contrast to the works of prominent scientists devoted to solving the problem of investment distribution [70][71][72][75], the authors' approach uses the company budget as the source of funding, which is formed at the expense of deductions to the development fund and depends on the performance of each employee. This approach allows the most effective redistribution of financial flows between key customers and categories of food products, and it provides sources of funding for equipping employees' workplaces and for staff development, involving the entire staff in the management of the company.
Business Process Management
A developed economic and mathematical model of process management of a company's financial and cash flows, characterized by the system integration of methods of dynamic programming, economic and mathematical modelling, and process management, which makes it possible to create tools for managing the cash flows of organizations of all organizational and legal forms and to develop standard projects of support systems for managerial investment decisions, with the prospect of embedding them into companies' existing and prospective information systems.
The developed economic and mathematical model of process management makes it possible to find the best science-based management solution by breaking down the complex problem of allocating the investment budget between key customers and categories of food products into smaller subtasks: the distribution of investments between key customers and the distribution of investments between categories of food products, each of which is solved by familiar methods of dynamic programming. The results are then integrated into a comprehensive system of management decision-making at the stage of practical implementation by using additional criteria: market capacity, the share of profits allocated to increasing the material incentives for personnel, development, and investment, etc.

Thus, the literature review presented in Tables 1 and 2 shows the lack of scientific research aimed at the development and practical implementation of economic and mathematical models of financial flow management using dynamic programming and mathematical optimization that would allow the decision-maker to link the financial performance of enterprises (revenues, profits, costs of goods, works, and services produced and sold) with a progressive system of material incentives for the citizens working in those enterprises.
3-The Proposed Methods
The company's cash flow management method is a way of tracking, analyzing, and changing the way the company's financial assets circulate. The proper use of enterprise cash plays an important role in any commercial enterprise: it makes it possible to abandon unpromising areas of business development and to develop the most profitable areas for investing free cash. To effectively manage the cash flows of an enterprise, it is necessary to adhere to the following principles [19]:

1) Reliability of Information: The reports used to collect the information must contain correct data; there must be no ambiguity in the information.
2) Balance: Cash flows should be relevant in terms of volume, time, and other characteristics to maintain the correctness of estimation.
3) Efficiency: Each asset should be used in the most rational way to maximize the financial results of the enterprise.
Let's consider the main methods of company cash flow management.
3-1-Direct Method
The analysis is carried out with the data obtained by accounting for the current cash flow of the company. The main basis for calculations is the total revenue from the sales of goods and services. This method has the following features: it shows the areas of use of resources and the sources of their origin; it determines the level of solvency of the enterprise; it shows the correlation of sales and profits for the reporting period; it determines the key items of expenses and profits; it helps forecast cash flows for future periods; and it serves as a way to control negative and positive cash flows.
In this method, the analysis is performed "top-down", with the help of the statement of financial results. The key disadvantage of the direct method is the difficulty of assessing the relationship between cash flows and the financial results of the company [86].
3-2-Indirect Method
This method involves the analysis of the system of financial movement by type of activity based on reports. The basis of this method is the study of net profit from a certain type of activity. The main features of the method are: it shows the relationship between profit and cash flow; it determines the correlation between working capital and profit; it shows the problem areas in the work of the company; it helps to determine the amount of incoming cash, its sources, and methods of use; it determines cash reserves; it analyzes the solvency of the company; and it helps compare the profit target with the obtained result [88]. In this study, we used the direct method, as it was the most informative and allowed us to analyze the efficiency of the use of financial resources by the company.
For practical implementation of the direct method of a company's cash flow management, we will use the methodology of constructing dynamic programming tasks, namely, the task of distributing resources between companies (responsibility centers within the company).
Six stages (n = 6) are present. At stage k, the volume of investment is optimized not over all responsibility centers of the enterprise, but only over centers k through n. The amount of investment available at this stage, B_k, serves as the state variable of the system and acts as the investment budget for investments in centers k through n. At stage k, the sum I_k, representing the amount invested in the k-th responsibility center, acts as the control variable; this sum goes to the development of the k-th responsibility center (in this study, to investment in the k-th key customer or the k-th category of food products). In accordance with Bellman's optimality principle, at stage k the highest potential profit obtainable by investing in responsibility centers k through n should be selected, given that the sum B_k is available for these investments. Investing the sum I_k in the k-th responsibility center yields a profit profit_k(I_k), determined by the return on investment in that center. At stage k+1, the state equation B_{k+1} = B_k - I_k characterizes the investment budget remaining for distribution among the remaining companies (key customers, categories of food products, etc.) from k+1 to n; after this step, the funds B_{k+1} remain available [69]. And so on, until the funds are invested in all responsibility centers of the company, i.e., until the entire investment budget is spent. Thus, at the first stage of the recurrence relation, when k = n, the efficiency indicator represents the profit of the company when it invests in the development of the n-th responsibility center, and the entire remaining amount of funds can be allocated to it; accordingly, to obtain the greatest profit when investing in the development of all enterprise responsibility centers, the entire amount of financial resources must be directed to them [3]. In the subsequent stages of the process, the recurrence is computed using the indicators obtained at the previous stages: the largest profit obtainable from the responsibility centers of the enterprise from k through n is Profit_k(B_k) = max over I_k of [ profit_k(I_k) + Profit_{k+1}(B_k - I_k) ] [44]. The maximum value in this formula is obtained under the managerial influence I_k. In the same way, it is possible to calculate the Bellman functions sequentially for all states of the system from k = n to k = 1 (an iterative process based on Bellman's recurrence relations).
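A minimal Python sketch of this backward (Bellman) recursion is given below. The profit table values are hypothetical placeholders, since the confidential figures of Table 3 are not reproduced here; the structure (state B_k, control I_k, state equation B_{k+1} = B_k - I_k) follows the description above.

```python
from functools import lru_cache

# Hypothetical profit table: profit[k][i] is the profit (million USD) earned by
# investing i million USD in responsibility center k. Six centers (n = 6), integer
# investments from 0 up to the total budget of 5 million USD.
profit = [
    [0, 3, 4, 5, 6, 7],
    [0, 1, 2, 3, 4, 5],
    [0, 2, 3, 4, 5, 6],
    [0, 2, 5, 6, 7, 8],
    [0, 3, 4, 5, 6, 7],
    [0, 1, 2, 3, 4, 5],
]
N, BUDGET = len(profit), 5

@lru_cache(maxsize=None)
def best(k, b):
    """Bellman function: maximum profit obtainable from centers k..N-1 with a
    remaining budget b, together with the optimal plan (state eq.: b' = b - i)."""
    if k == N:
        return 0, ()
    candidates = []
    for i in range(b + 1):                    # control variable I_k
        value, plan = best(k + 1, b - i)      # profit of the remaining centers
        candidates.append((profit[k][i] + value, (i,) + plan))
    return max(candidates)

total, allocation = best(0, BUDGET)
print("maximum total profit:", total, "million USD")
print("optimal allocation  :", allocation)
```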
The target function in Bellman's optimality principle, Profit_1(I_1), for the problem under consideration is the maximum total profit from all responsibility centers of the organization at the optimal distribution of investments at each step (for each state of the system); the value of I_k that delivers this maximum is the optimal amount of investment allocated to the k-th responsibility center of the company [69,85]. For all subsequent stages, the indicator characterizing the balance of funds available for allocation among the remaining responsibility centers of the enterprise is determined in accordance with the equation of states B_{k+1} = B_k - I_k [5,87].

Thus, taking the above into account, the economic and mathematical model of process management of the company's financial flows using methods of dynamic programming and social financial technologies [27], maximizing the economic efficiency of investment (return on investment) in the sale of food products through retail chains, consists of the target function (4) and the limitations (5)-(12). The model (4)-(12) uses the following designations:

R - economic efficiency of investment (return on investment) in the sale of all categories of food products through all retail chains, %;
m - the number of categories of food products considered by the management of the enterprise as objects of free cash investments;
n - the number of retail chains considered by the management of the company as objects of free cash investments;
Profit_jk - profit of the enterprise from investments in the sale of the j-th type of food products through the k-th trading network, USD;
I_jk - investments in the sale of the j-th type of food products through the k-th trading network, USD;
Target_jk - target value of sales of the j-th category of food products through the k-th trading network, USD;
Current_jk - the current value of sales of the j-th category of food products through the k-th trading network, USD;
Profitability_jk - profitability of investment in the sale of the j-th type of food products through the k-th trading network, %;
B_jk - the investment budget available for distributing investments between the k-th to n-th retail chains and the j-th to m-th food product categories, USD;
B - the total investment budget of the enterprise, available for distribution among all n trade networks and all m categories of food products, USD;
φ_jk - the share of the company development fund allocated to invest in the sale of the j-th type of food products through the k-th trade network;
F_jk - company development fund, formed at the expense of revenues from the sale of the j-th kind of food products through the k-th trade network, USD;
Profitb_jk - profit of the company from the sale of the j-th kind of food products through the k-th trading network before the introduction of the developed economic and mathematical model into the activities of the company, taken as a basis for comparison, USD;
ξ_jk - coefficient of redistribution of profit growth between the company personnel involved in the sale of the j-th type of food products through the k-th trade network and the development fund of the company;
Taxinc_jk - corporate income tax rate on the sale of the j-th type of food products through the k-th trading network;
Inc_jk - income of the company from the sale of the j-th type of food products through the k-th trading network, USD;
V_jk - sales volume of the j-th type of food products through the k-th trading network;
Cvar_jk - specific variable costs of the j-th type of food products sold through the k-th trade network, i.e., variable costs per unit of production, USD;
Cfix_jk - the fixed costs of the k-th trading network associated with the sale of the j-th type of food products, USD;
ωvar_jk - share of variable costs in the structure of the cost of sales of the j-th kind of food products through the k-th trading network;
ωfix_jk - share of fixed costs in the structure of the cost of sales of the j-th kind of food products through the k-th trading network;
Sal_ijk - the salary of the i-th employee from the sale of the j-th type of food products through the k-th trading network, USD;
Inc_ijk - income of the i-th employee from the sale of the j-th type of food products through the k-th trading network, USD;
θb_ijk - base percentage of income from the sale of the j-th type of food products through the k-th trading network allocated to stimulate the labor of the i-th employee;
ξ_ijk - coefficient of redistribution of profit growth between the i-th employee engaged in the sale of the j-th type of food products through the k-th trade network and the company's development fund;
Profit_ijk - profit of the i-th employee from the sale of the j-th type of food products through the k-th trading network, USD;
Profitb_ijk - the profit of the i-th employee from the sale of the j-th type of food products through the k-th retail network before the implementation of the developed economic and mathematical model in the activities of the company, taken as a basis for comparison, USD.
Algorithm of an integrated process control system for company financial flows:

Step 1: Analysis of the initial data provided by the company. At this stage, the quality, completeness, and reliability of the information provided on the historical return on investment of the company in key categories of food products and customers are evaluated. If the data received from the company meet the criteria, we move on to the next step of the algorithm; otherwise, we repeat Step 1 until the criteria are met.
Step 2: Setting up a dynamic programming problem for distributing investments among key clients. At this stage, it is necessary to solve the dynamic programming problem of distributing investments among the key clients of the company and to check the fulfillment of the basic assumptions and conditions of the model: the financial result from investments in one company does not depend on the financial result from investments in another company, and the system in question is closed, i.e., no additional financing is provided for the whole period of investment. If the conditions are met, we proceed to the next step of the algorithm; otherwise, we repeat Step 2 until the criteria are satisfied.
Step 3: Solving the dynamic programming problem of distributing investments among the key clients in the LibreOffice Calc software environment (an analog of MS Excel in Linux). The problem is solved using the economic and mathematical model (4)-(12) and the integrated "Solver" procedure in LibreOffice Calc or "Solution Search" in MS Excel.
Step 4: Setting a dynamic programming problem for distributing investments between categories of food products.
Step 5: Solving the dynamic programming problem of investment distribution among food product categories in LibreOffice Calc (an analog of MS Excel in Linux). The actions performed at Steps 4 and 5 are similar to those performed at Steps 2 and 3, respectively.
Step 6: Solving the problem of the optimal distribution of food product sales through retail networks based on the analysis of relative sales growth and the results obtained in Steps 3 and 5. At this step, it is necessary to determine the most effective distribution of funds between key customers and categories of food products by evaluating the relative increase in sales of food products through retail networks, subject to the maximization of return on investment according to the target function (4) and to meeting the limitations of the proposed economic and mathematical model (4)-(12). The results obtained at Steps 3 and 5 of this algorithm for the complex system of process control of enterprise financial flows should be taken into account.
Step 7: Evaluating the obtained result according to the criterion of maximization of the target function (4) under the constraints (5)-(12). If this condition (maximum of the objective function (4)) is met, it is necessary to analyze the obtained results, draw conclusions, and finish the algorithm of the complex process control system of company financial flows; otherwise, return to Step 6 of the algorithm.
Figure 2 shows a block diagram of an integrated process control system for company financial flows, which reflects in detail the main aspects and criteria for key management decisions on the investment of funds.
4-Results
The results of this research consist in solving the problem of distributing investments among the key customers of LLC "Kraft Heinz Vostok", namely JSC Tander, LLC Auchan, LLC OK, LLC METRO Cash & Carry, LLC IKS5 Retail Group, and LLC Lenta, in order to increase the capital return on the invested budget.
Problem statement: It is necessary to distribute an investment budget of 5 million dollars among the six above-listed companies for 2023. The data on historical profitability at different volumes of investment are shown in Table 3.
The assumptions of the model: a) The financial result from investments in one company does not depend on the financial result from investments in another company; b) The system under consideration is a closed system, i.e., no additional financing is provided for the whole period of investment.
Table 3 provides the initial information about the return on investment in key customers and categories of food products, provided by LLC "Kraft Heinz Vostok". The problem is solved using the LibreOffice Calc software: the above-considered algorithm for solving problem (4)-(12) is implemented using the LibreOffice Calc software package (an analog of MS Excel in Linux) with the "Solver" service, since, under the sanctions imposed on Russian individuals and legal entities, MS Excel software is not available on all computers.
To determine the maximum value of the target function, we used the built-in LibreOffice Calc subprogram "Solver"; the dialog window is shown in Figure 3.
Figure 3. Dialog window of the "Solver" subprogram of LibreOffice Calc
Consider the sequence of operations in the LibreOffice Calc software to solve the problem. Create a working field on the LibreOffice Calc sheet of the same dimensionality as the original data (see Table 3). Fill out the workspace (matrix) with arbitrary values. In cells P4-P9 and K10-O10, to the right of and below the table, enter the formulas SUM(K4:O4) through SUM(K9:O9) and SUM(K4:K9) through SUM(O4:O9), which are the sums of the elements in each row and column of the working matrix, respectively. The values from 1 to 5 in cells K11-O11 characterize the multiples of investment in the companies. The target function is the sum of the products of the corresponding elements of the two arrays; in LibreOffice Calc, the entry looks like this: cell B12 = SUMPRODUCT(C4:H9; J4:O9).
In the corresponding field of the dialog box, set the target cell $B$12 to be maximized (the task of maximizing the total income from investments in the enterprises). The cells $K$4:$O$9 are the editable cells in the investment distribution problem in question, so specify them in the appropriate field of the dialog box. In the "Limitations" field, enter the existing restrictions on the modifiable parameters (see Figure 3). The following restrictions are used in the given task: 1) the values of the cells to be changed are binary, 0 or 1; 2) the sum of investments in all companies should not exceed the initial size of the investment, equal to 5 million dollars; thus, the value in cell $P$11 is equal to the sum of the products of cells K10 and K11, L10 and L11, M10 and M11, N10 and N11, O10 and O11 ($P$11 = K10*K11 + L10*L11 + M10*M11 + N10*N11 + O10*O11); 3) each company may either receive an investment or not, which is expressed by the inequality $P$4:$P$9 <= 1.
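Outside of a spreadsheet, the same binary-selection formulation can be set up as a small integer program, for example with SciPy's milp routine (available in SciPy 1.9 and later). The sketch below uses our own variable names and hypothetical profit figures in place of the confidential Table 3 data; it is an illustration of the formulation, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Binary-selection formulation mirroring the LibreOffice "Solver" setup:
# x[j, l] = 1 if company j receives an investment of (l+1) million USD.
# Profit figures below are hypothetical placeholders.
profit = np.array([
    [3, 4, 5, 6, 7],    # JSC Tander
    [1, 2, 3, 4, 5],    # LLC Auchan
    [2, 3, 4, 5, 6],    # LLC OK
    [2, 5, 6, 7, 8],    # LLC METRO Cash & Carry
    [3, 4, 5, 6, 7],    # LLC IKS5 Retail Group
    [1, 2, 3, 4, 5],    # LLC Lenta
], dtype=float)
n_companies, n_levels = profit.shape
budget = 5.0
levels = np.arange(1, n_levels + 1, dtype=float)    # 1..5 million USD

c = -profit.ravel()                                  # milp minimizes, so negate profits

# Each company receives at most one investment level (row sums <= 1).
row_sum = np.kron(np.eye(n_companies), np.ones(n_levels))
# The total invested money must not exceed the budget.
money = np.tile(levels, n_companies).reshape(1, -1)

constraints = [
    LinearConstraint(row_sum, -np.inf, 1),
    LinearConstraint(money, -np.inf, budget),
]
res = milp(c=c, constraints=constraints,
           integrality=np.ones_like(c), bounds=Bounds(0, 1))

x = res.x.reshape(n_companies, n_levels).round().astype(int)
print("maximum total profit:", -res.fun, "million USD")
print("selection matrix:\n", x)
```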
In the Options tab of the Solver, select "LibreOffice Linear Solver" from the drop-down list "Solver Mechanism", check the boxes "Limit Depth of Branches and Bounds" and "Accept Variables as Integer" (Figure 4), press OK, and then press "Solve".

The analysis of the data presented in Figure 5 shows that the maximum value of income from the invested funds, amounting to 5 million USD across all companies, is equal to 11 million USD. To maximize profits from all companies in 2023, the following managerial decisions should be taken: 1 million USD should be invested in JSC Tander; 1 million USD in LLC OK; 2 million USD in LLC METRO Cash & Carry; and 1 million USD in LLC IKS5 Retail Group. The results of the modeling are presented in Table 4 (the profitability of the companies at the optimal size of investment is highlighted in color).

Problem statement: It is necessary to distribute a production budget of $5 million between six categories of food products: ketchup, mayonnaise, sauces, canned vegetables, baby food, and milk porridge, which will be sold through the above-mentioned trade networks. The data on the historical returns at different volumes of investment in these categories of food products are presented in Table 5. The basic assumptions of the model are the same as in the solution to the problem of investment allocation among the key customers of LLC "Kraft Heinz Vostok" (see above). The solution of this problem is analogous to the solution of the problem of allocating investments among the key clients of LLC "Kraft Heinz Vostok". Table 5 shows the profitability of food product sales through the retail networks of partner companies of "Kraft Heinz Vostok" with the optimal distribution of investments (the table cells corresponding to the investments that provide the maximum total return are highlighted in color).
Table 5. Solving the problem of allocating investments among the key categories of food products to increase the capital return on the invested budget
The analysis of the data presented in Table 5 shows that the maximum value of income from investments in the sale of food products through the retail chains of the partner companies of LLC "Kraft Heinz Vostok", at the optimal distribution of investments, is 14 million US dollars. At the same time, to maximize profits from the sale of all categories of food products through the retail chains of the partner companies of LLC "Kraft Heinz Vostok" in 2023, it is necessary to take the following managerial decisions: 1 million USD should be invested in the sale of ketchup through the retail chains of the partner companies of LLC "Kraft Heinz Vostok"; 2 million USD in the sale of sauces; 1 million USD in canned vegetables; and 1 million USD in the sale of milk porridge through the trading networks of the partner companies of LLC "Kraft Heinz Vostok". The practical implementation of the obtained results is based on the data on the sales of food products (ketchup, mayonnaise, sauces, canned vegetables, baby food, and milk porridge) through the trading networks of the partner companies of LLC "Kraft Heinz Vostok", namely, all the above-mentioned companies.
Figure 6 shows the sales volume of food products through retail chains as a percentage, corresponding to the ratio of the sales volume through a given retail chain to the total sales volume of that food product through all retail chains considered in the paper. The percentage indicated in Figure 6 thus corresponds to the share of sales of the given category of food products through the given point of sale in the total volume of sales of this category through all outlets presented in Figure 6. For example, for the category of ketchup sold through the retail network of JSC Tander, the value of 12% indicates that 12% of all the ketchup sold through all the above-mentioned companies together is sold through this outlet. The remaining values presented in Figure 6 are read similarly.
Figure 6. The volume of sales of food products through retail chains
Category sales volumes and target values for each network are given in Table 6; the cells highlighted in gray show the planned indicators of sales of categories of goods in the retail networks. The analysis of Figure 6 allows us to conclude that the ketchup category is the least represented in the trading network of JSC Tander, as only 12% of ketchup is sold in this trading network. Table 6 shows that the total sales volume of ketchup was 50 million USD. Therefore, the volume of sales in this category at JSC Tander is 50 × 0.12 = 6 million USD, which is indicated in row 9, column 3 of Table 6. In addition, from the data presented in Table 6, we know the target value for each store in the ketchup category. This means that sales in this food category must be increased by 6 million USD (12 million USD, the sales target in row 2, column 3 of Table 6, minus 6 million USD, the current sales in row 9, column 3 of Table 6), which is indicated in row 15, column 3 of Table 6; the remaining columns of row 15 of Table 6 are computed similarly. Thus, the solution of the dynamic programming problem, presented in Tables 4 and 5, together with formula (5), suggests that an investment of 1 million USD to increase the volume of ketchup sales through the trading network of JSC Tander would bring an additional 6 million USD. The corresponding figures for the other retail chains and categories of food products are presented in Table 6.
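The market-capacity arithmetic used in this example can be written compactly as follows; this is a small illustration using the ketchup / JSC Tander figures quoted above, and the variable names are ours.

```python
# Market-capacity check for one (category, network) pair, following the worked
# example in the text (ketchup sold through JSC Tander).
total_category_sales = 50.0   # total ketchup sales across all networks, million USD
share_of_network     = 0.12   # Tander's share of ketchup sales (Figure 6)
target_sales         = 12.0   # target value for ketchup at Tander (Table 6), million USD

current_sales   = total_category_sales * share_of_network   # 50 * 0.12 = 6 million USD
market_capacity = target_sales - current_sales              # 12 - 6   = 6 million USD

print(f"current sales:   {current_sales} million USD")
print(f"market capacity: {market_capacity} million USD")
```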
Table 6. Volume of sales of food products through retail chains, the main indicators and economic efficiency (profitability) of investment in the sale of food products through retail chains (The table uses points instead of commas in decimal numbers)
The target value of sales volumes serves as a reference point for those retail chains that have a lower indicator and for those food products in which investment is expedient and gives the maximum value of the target function of total return (see Table 6). The analysis of Figure 6 and Table 6 suggests the need to invest in the following food products sold through retail chains: 1 million USD to increase the volume of ketchup sales by 12% through the retail network of JSC Tander, and 2 million USD to increase the volume of ketchup sales through the JSC Tander retail network; 2 million USD to increase sauce sales by 16% through the LLC METRO Cash & Carry retail network, 1 million USD to increase the sales of ketchup and sauces through the JSC Tander retail network, and 2 million USD to increase the sales of sauces through the LLC METRO Cash & Carry retail network; 1 million USD to increase the sales of canned vegetables by 11% through the retail network of LLC IKS5 Retail Group, and 1 million USD to increase the sales of canned vegetables through the retail network of LLC METRO Cash & Carry; and 1 million USD to increase the sales of milk porridge by 10% through the trading network of LLC OK.
The total investment budget of LLC "Kraft Heinz Vostok" available for distribution among all trading networks and all categories of food products was 10 million USD. A check of the condition that the total investment budget of the enterprise is not exceeded, formula (6) of the economic and mathematical model (4)-(12), showed that no budget overrun occurred in solving the problem of process management of the enterprise's financial flows, i.e., condition (6) is satisfied.
Let's determine the size of the development fund (F) of the company, formed by the revenues from the sale of the j-th type of food products through the k-th retail network, using the example of JSC Tander and the "Ketchup" category of food products. According to formula (8) of the economic and mathematical model (4)-(12), we have F = 6 + (1 − 0.06) · (9 − 6) · (1 − 0.2) = 8.26 million USD, which is indicated in line 24, column 3 of Table 7. Here, 6 million USD is the company's profit from selling ketchup through JSC Tander before the implementation of the developed economic and mathematical model in the company's activity, taken as a basis for comparison; 0.06 is the coefficient of redistribution of profit growth between the enterprise personnel involved in selling ketchup through JSC Tander and the company development fund (parameter ξ_jk in formula (8)); 9 million USD is the company's profit from investments in selling ketchup through JSC Tander; and 0.2 is the profit tax rate. The other columns of line 24 of Table 6 are computed similarly. The enterprise profit presented in row 25, column 3 of Table 6 is determined by subtracting the total production cost (row 20, column 3 of Table 6) from the sales target (row 2, column 3 of Table 6), i.e., 9 million USD = 12 million USD − 3 million USD (see formula (9) of the economic and mathematical model (4)-(12)).
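The display formulas themselves are not reproduced in the extracted text; read off from the worked example above, formulas (8) and (9) take the following form (a sketch in our notation, where Cost_jk is our shorthand for the total production cost, i.e., variable plus fixed costs):

```latex
F_{jk} \;=\; \mathrm{Profitb}_{jk}
  \;+\; \bigl(1-\xi_{jk}\bigr)\,\bigl(\mathrm{Profit}_{jk}-\mathrm{Profitb}_{jk}\bigr)\,\bigl(1-\mathrm{Taxinc}_{jk}\bigr),
\qquad
\mathrm{Profit}_{jk} \;=\; \mathrm{Target}_{jk} - \mathrm{Cost}_{jk}.
```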
The share of conditionally variable costs in the structure of the cost of sales of a category of food products through a trade network, presented in row 26 of Table 6, is determined by formula (10). Thus, for row 26, column 3 of Table 6: ω_var = 2/(2 + 1) = 0.67. The other columns of row 26 of Table 6 are computed similarly. The share of conditionally fixed costs in the structure of the cost of sales of a category of food products through a trade network, presented in row 27 of Table 6, is determined by formula (11). Thus, for row 27, column 3 of Table 6: ω_fix = 1/(2 + 1) = 0.33. The other columns of row 27 of Table 6 are computed similarly.
The wages of employees from the sale of food products through retail chains are determined by formula (12) of the economic and mathematical model (4)-(12). For the "Ketchup" category of food products and the trading network of JSC Tander it follows that Sal = 12 · 0.06 + 0.06 · (9 − 6) = 0.9 million USD. The values for other categories and retail chains are calculated similarly (see row 28 of Table 6).
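The two worked examples can be checked with the short Python sketch below. The formula structure is read off from the numbers quoted in the text (the original display formulas are not reproduced), and the variable names are ours.

```python
# Worked check of the development-fund and salary formulas (formulas (8) and (12)
# of the model), using the ketchup / JSC Tander figures quoted in the text.
profit_before = 6.0   # profit before the model was introduced, million USD
profit_after  = 9.0   # profit with the optimal investment, million USD
xi            = 0.06  # profit-growth redistribution coefficient (personnel vs. fund)
tax_rate      = 0.20  # profit tax rate
income        = 12.0  # sales target used as the income base, million USD
theta_base    = 0.06  # base percentage of income allocated to labor incentives

# Formula (8): development fund = base profit plus the fund's share of the
# after-tax profit growth.
fund = profit_before + (1 - xi) * (profit_after - profit_before) * (1 - tax_rate)

# Formula (12): employee remuneration = base share of income plus the personnel
# share of the profit growth.
salary = income * theta_base + xi * (profit_after - profit_before)

print(f"development fund: {fund:.2f} million USD")    # 8.26
print(f"salary fund:      {salary:.2f} million USD")  # 0.90
```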
Let's also note the methodology for calculating the economic efficiency of investment (return on investment) in the sale of the "Ketchup" category of food products through the trading network of JSC Tander (see formula (4) and row 29, column 3 of Table 6); the sum over all columns of the last row of Table 6 is the target function of the economic and mathematical model (4)-(12).
5-Conclusions
When solving the problem of distributing investments among the company's key clients to increase the return on capital from the invested budget (see Table 6), the following results were obtained: the maximum profit from the invested funds will be 11 million USD. To maximize profits for the next year (2023), it is necessary to invest 1 million USD in JSC Tander, 1 million USD in LLC OK, 2 million USD in LLC METRO Cash & Carry, and 1 million USD in LLC IKS5 Retail Group.
When solving the problem of distributing investments among the key categories of food products to increase the return on capital from the invested budget (see Table 6), the following results were obtained: the maximum profit from the invested funds will be 14 million USD; to maximize profits for the next year, it is necessary to invest 1 million USD in the production of ketchup, 2 million USD in the production of sauces, 1 million USD in canned vegetables, and 1 million USD in milk porridge.
A comprehensive analysis of the data presented in Table 6 allows us to draw conclusions about the optimal distribution of investments: 1 million USD should be invested in the ketchup category at JSC Tander, 2 million USD in the sauces category at LLC METRO Cash & Carry, 1 million USD in the canned vegetables category at LLC IKS5 Retail Group, and 1 million USD in the milk porridge category at LLC OK.
Analysis of line 22 of Table 6 shows that the investments calculated on the basis of the developed economic and mathematical model (4)-(12) make it possible to increase the wages of LLC "Kraft Heinz Vostok" employees by 600 thousand USD for the "Ketchup" category of food products, by 700 thousand USD for the "Sauces" category, by 750 thousand USD for the "Canned vegetables" category, and by 520 thousand USD for the "Milk porridge" category, which totals 2,570 thousand USD (2.57 million USD).
A comparison with other studies and the scientific increment of knowledge is presented in Table 2.
This study is a logical extension of a series of papers aimed at the development and practical implementation of the methodology of mathematical modeling of management processes in the development of enterprises and organizations of all forms of ownership and economic activity in Russia in general. It includes progressive methods and tools to encourage the work of executives and administrative and managerial staff, and social financial technologies as a tool to increase the wages of employees and to develop enterprises and the economy of the country in general [11,12,44,90]. At the same time, the major difference of this work from the previous research is the combined application of: the results of solving the dynamic programming problem of allocating investments between the key categories of food products and customers based on the criterion of economic efficiency (profitability) of investment; social financial technologies aimed at increasing the material incentives for executives and administrative and managerial staff (see formula (12) of the economic and mathematical model (4)-(12)); a progressive system of labor incentives; and sovereign emission as an inexpensive source of investment in the country's food security and in the growth of the well-being of its citizens.
Analysis of the data presented in the last line of Table 6 gives reason to believe that direct investments are most effective in the food category "canned vegetables", sold through the trading network of LLC IKS5 Retail Group: the return on investment in this category of goods is estimated at 398% (see the last row, column 6, of Table 6). Next comes the category "sauces", sold through the trading network of LLC METRO Cash & Carry, with a return on investment of 367%, indicated in the last row of column 5 of Table 6; in third place are investments in the food category "ketchup", sold through the trading network of JSC Tander, with a return on investment of 300%; and in last place is the category "milk porridge", sold through the trading network of LLC OK (see the last row, column 8, of Table 6). Investments in this category are 150% economically efficient (see the last row, column 8, of Table 7).
The main users of the results are the Board of Directors of the "Kraft Heinz Vostok" company. The results obtained using the economic and mathematical model (4)-(12) developed by the authors may be used to implement the company's development strategy in investment, marketing, and other related areas of activity. The developed progressive system of personnel labor stimulation (formula (12) of the economic and mathematical model (4)-(12)) is intended for the HR department of the "Kraft Heinz Vostok" company. Its purpose is the practical implementation of a bonus system of remuneration for employees depending on key performance indicators, namely the company's revenue, the return on investment in the categories of food products sold through retail chains, and the coefficient of labor involvement of the employee.
5-1-The Proposed Model's Advantages
The economic and mathematical model designed here, having a progressive system of stimulating the work of the company's employees as its basis, with the practical implementation of an integrated system for the process management of financial flows of LLC "Kraft Heinz Vostok", makes it possible for the managerial decision-maker to: Increase the material incentives for employees, depending on the increase in the volume of sales of food products through retail chains, by 38%, taking into account the optimal distribution of investments obtained by solving the dynamic programming problem (compare rows 28 and 22 of Table 6).
Increase allocations for developing the company by USD 28.31 million (the sum of all columns of line 24 of Table 7), which makes it possible for the entire team to participate in the management of the company and to direct funds to its further development and to investment in the most promising customers and categories of food products, to equipping the workplaces of employees, and to improving their skills.
5-2-The Proposed Model's Limitations
The financial results from the investments in one company do not depend on the financial results from investments in another company; the system under consideration is closed, i.e., during the whole period no additional financing is provided; the results depend on the quality, comprehensiveness, and reliability of the data on the company's historical profitability; and the amount of investment in key customers and categories of food products is limited and determined by the company's investment budget.
The economic and mathematical model of process management of the financial flows of companies designed here may be used to improve the accuracy, efficiency, and appropriateness of management decisions in the interests of the company's development, increasing the profitability of its activities, the growth of employee wages, and contributions to the development fund.
The results of the development of the scientific and methodological apparatus and the implementation of practical tools in this study allow us to conclude that the goal of the study has been achieved. The accomplished research provides decision-makers with effective tools for the process management of companies' financial flows. Further research on these problems may include: introducing a progressive system of employee labor stimulation in other spheres of activity, for example, in the provision of educational services, to motivate the highly effective work of scientific and pedagogical workers and to increase their qualification and professional level; extending the technology of setting and solving the dynamic programming problem to prospective investors and searching for optimal (from the perspective of the weighted average price) sources of financial resources to implement company investments; adapting the economic and mathematical model developed in the study to companies in all sectors of the economy; and including the developed economic and mathematical tools in a unified information-analytical system of company financial flow management, including its interaction with widely used applied software products, among others.
Figure 2. Block diagram of an integrated process control system for company financial flows
Figure 4. Options dialog window of the "Solver" subprogram

Figure 5 shows a fragment of the LibreOffice Calc workspace with the modeling results.

Figure 5. A fragment of the working field with the results of modeling in the LibreOffice Calc software. The values for the other categories and trade networks are calculated similarly; the data are shown in the last row of Table 6.
Table fragment: sales of food product categories (Ketchup, Mayonnaise, Sauces, Canned Vegetables, Baby Food) through the trade networks, million USD (the highlighted cells of the table correspond to the categories of food products and networks in which it is planned to increase the volume of sales). | 11,156.6 | 2023-05-10T00:00:00.000 | [
"Economics",
"Mathematics"
] |
Swampland Bounds on Dark Sectors
We use Swampland principles to theoretically disfavor regions of the parameter space of dark matter and other darkly charged particles that may exist. The Festina Lente bound, the analogue of the Weak-Gravity conjecture in de Sitter, places constraints on the mass and charge of dark particles, which here we show cover regions in parameter space that are currently allowed by observations. As a consequence, a broad set of new ultra-light particles are in the Swampland, independently of their cosmic abundance, showing the complementarity of Quantum Gravity limits with laboratory and astrophysical studies. In parallel, a Swampland bound on the UV cutoff associated to the axion giving a Stückelberg photon its longitudinal mode translates to a new constraint on the kinetic mixings and masses of dark photons. This covers part of the parameter space targeted by upcoming dark-photon direct-detection experiments. Moreover, it puts astrophysically interesting models in the Swampland, including freeze-in dark matter through an ultra-light dark photon, as well as radio models invoked to explain the 21-cm EDGES anomaly.
Introduction
The particle content of our universe appears to contain a dark sector, which so far has eluded all direct probes of its nature. This sector is composed of at least dark matter and dark energy, and thus requires new physics to be added to the otherwise highly successful Standard Model. Moreover, different arguments ranging from data anomalies (e.g., [1][2][3]) to theoretical ones, such as the hierarchy problem and neutrino masses, call for new low-energy physics to exist in our universe. As such, the properties and nature of new dark sectors beyond the Standard Model are among the key open questions in particle physics and cosmology. The experimental program to detect such new physics is highly diverse, given the broad spectrum of possibilities. For instance, dark-matter candidates range from ultra-light bosons (with masses as low as m ∼ 10^{-22} eV [4]) to super-Planckian objects, such as primordial black holes [5], including the more traditional candidates at the weak scale [6]. Particle-physics experiments have been able to robustly test new physics up to energy scales Λ ∼ TeV, if the couplings are large enough for the new states to be produced in colliders. Direct-detection experiments, such as XENON [7] and LUX [8], are placing significant constraints on the WIMP paradigm [6], which has encouraged the community to focus on lighter DM candidates not directly related to electroweak physics. In line with this, cosmological and astrophysical observations are sensitive to new degrees of freedom at lower masses and energies, and can in principle reach much smaller couplings [9][10][11].

In parallel, there have been recent advances in our understanding of the low-energy implications of quantum gravity (QG). Naively, it seems difficult to connect the "low-energy" (i.e., sub-Planckian) world to QG. For instance, it would appear that any effective quantum field theory (EFT) could be coupled to dynamical gravity. However, there are self-consistent EFTs that can never arise as the low-energy limit of a quantum theory of gravity. These EFTs are said to lie in "the Swampland" [12] (as opposed to "the Landscape"; see e.g. [13][14][15] for reviews). Theories that can be placed robustly in the Swampland are, therefore, theoretically disfavored, as they cannot be consistently UV-completed when including gravity.

In this note, we open the study of the implications of this Swampland program for the dark sectors of our universe, and show its complementarity to both cosmological and experimental probes of new physics. We will take advantage of recent progress in the Swampland literature, including the Festina Lente (FL) bound proposed in [16] (an extension of the Weak-Gravity conjecture [17] to de Sitter space) as well as the photon-mass bounds from [18], to place new constraints on the existence of new dark sectors. We consider vector-portal models (i.e., sectors with dark photons that may kinetically mix with ours), and study the cases of i) millicharged particles, ii) a secluded dark sector (with negligible kinetic mixing), and iii) new massive (but light) dark photons.
In the first two cases we employ the FL bound to constrain new very light, charged particles, showing that part of the parameter space that will be probed by new experiments is in the Swampland. In the dark-photon case we will use the conjectures in [18], as well as the generic mixing from [19], to constrain dark-photon masses as well as their interactions. One of the Swampland insights into phenomenology is that Stückelberg masses require the existence of a radial mode. We show that this radial mode σ can be produced in astrophysical environments, rendering the Stückelberg case similar to the Higgs one. In addition, the angular mode of a Stückelberg photon is an axion, and a Swampland bound on the UV cutoff of an axion EFT can be applied to models of the dark photon as well. This allows us to place a portion of the light dark-photon parameter space (roughly those with masses m_{A'} ≲ 20 eV, given a kinetic mixing ε) in the Swampland. We will additionally briefly study how other models are in tension with the Swampland bounds, and how these bounds can interface with physics during inflation. As we will show, the Swampland program can reach a broad set of models targeted by both dark-matter experiments and astrophysical observations.

We caution the reader that so far there are no universal proofs of these Swampland constraints (see e.g. [20][21][22][23] for recent efforts), as we lack a framework to prove general statements in QG. Nevertheless, these constraints are supported by several different lines of evidence, coming from general arguments based on black-hole physics, unitarity, or String Theory (which provides a concrete model of quantum gravity in which we can test Swampland constraints). The following is a table of the Swampland principles that we use in this paper, together with a one-line explanation of which expected property of quantum gravity "would break" if new physics were found in violation of each bound.
Constraint; Statement; What goes wrong if untrue?; Reference:

Weak Gravity (WGC); m ≲ g M_Pl; charged black holes cannot evaporate while remaining sub-extremal; [17,24].
Festina Lente (FL); m² ≳ g M_Pl H; horizon-sized charged black holes in dS evaporate to pathological space-times; [16,25].
Magnetic WGC; Λ_UV ≲ (f M_Pl)^{1/2}; the EFT cutoff comes from the tension of WGC strings; [17,[26][27][28].
Species scale; the scale at which loops of WGC states make the local EFT break down; [29][30][31][32].

The statements above will be further detailed in the main text below whenever relevant. In this work we will take an agnostic approach, and explore the phenomenological consequences of the Swampland bounds we consider (as well as possible model-building approaches to evading the bounds, whenever possible).

The paper is organized as follows. In Section 2, we study the implications of the FL bound for the case of new charged particles, briefly introducing the FL bound and its extension to multiple U(1)'s. Then, in Section 3 we study the related massive dark photon case. In Section 4 we review the implications of the FL bound for models with non-Abelian fields, and in Appendix A we make some general comments on the compatibility of the bound with inflation. Appendix B contains some general comments regarding the application of these ideas to the scenario of cosmological relaxation. We conclude in Section 5. Throughout the text we will briefly review the relevant Swampland bounds for the astroparticle reader, as well as the phenomenology of the models for the formal reader, and will work in natural units.
New Charged Particles
We begin by studying the case of particles charged under a new U(1), i.e., millicharged and darkly charged particles.
Formalism
As is well known, a new (dark) photon can kinetically mix with its Standard Model counterpart. The kinetic-mixing operator, being dimension 4, is not suppressed by a high-energy scale and can therefore leave an imprint at low energies despite its UV origin [33], which makes it phenomenologically appealing (for a treatment of irrelevant portal interactions, see e.g. [34]). For this reason, darkly charged particles are an excellent dark-matter candidate (e.g. [35]), and a plethora of experimental efforts are directed towards detecting them. Here we will appeal to Festina Lente arguments (see the Table in the Introduction) to understand which regions of parameter space are in the Swampland and which others are favored experimental targets.
In order to set our notation, we start by considering a theory with an Einstein-Maxwell Lagrangian coupled to multiple U(1) fields [33,36]. Here, R and Λ are the Einstein-Hilbert term and the cosmological constant respectively, ε is the kinetic-mixing parameter, and the index i runs over the two U(1)'s, which in this Section will be assumed to be massless (we will lift this restriction in Sec. 3). In that case, we can diagonalize the kinetic term by defining two new gauge bosons, which will be the regular photon (A) and the dark photon (A'). We choose the matrix M to provide a diagonal kinetic term. In the absence of charged particles, this diagonalization is largely irrelevant. However, these Lagrangians alone are not consistent with Swampland principles. In particular, the Completeness Principle [37,38] requires that there be physical states with all possible charges. The Weak Gravity Conjecture, and its extension to multiple U(1) fields, dubbed the "Convex Hull Condition" [39], can be regarded as stronger versions of the Completeness Principle that put additional constraints on the kinematic properties of particles (masses and charges); this relationship can be made precise [40]. We can satisfy this condition by including charged particles, in particular the electron of the Standard Model and an additional particle, a dark electron χ, minimally coupled to the A_2 gauge field. For the range of parameters of interest to this work, the regime ε ∼ O(1) is experimentally ruled out (see discussion below and in Figure 2); we will therefore work in the ε ≪ 1 approximation from this point onward. In our normalization, a charged particle i on a worldline W couples to the gauge fields A = (A, A') in (2.3) (i.e., the ones with diagonal kinetic terms) via the worldline coupling (2.4). Here, the vector Q_i belongs to some charge lattice Λ_Q. We do not know what the full charge lattice is, but it must include at least a vector Q_e for the electron. By choosing an appropriate basis (via an orthogonal transformation that preserves (2.3)), we have ensured that it takes the form Q_e = (e, 0). We also assume the model contains a millicharged particle χ, with a charge vector parametrized as in (2.5) in the same basis, where g' is the gauge coupling of the χ particle to the dark photon A' in (2.3). As a consequence, χ acquires a millicharge under the SM photon [33], of value q_χ = ε g'/e (in units of the electron charge, as customary).
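The display equations referenced above are not reproduced in the extracted text. For reference, a standard form of the kinetically mixed two-U(1) action, the worldline coupling, and the charge vectors, consistent with the definitions given here, reads as follows (a sketch in our own normalization, not necessarily identical to the paper's equations (2.3)-(2.5)):

```latex
S \;=\; \int d^4x\,\sqrt{-g}\,\Big[\tfrac{M_{\rm Pl}^2}{2}\,(R-2\Lambda)
   \;-\;\tfrac14\,F_{1\,\mu\nu}F_1^{\mu\nu}\;-\;\tfrac14\,F_{2\,\mu\nu}F_2^{\mu\nu}
   \;-\;\tfrac{\epsilon}{2}\,F_{1\,\mu\nu}F_2^{\mu\nu}\Big],
\qquad
S_{\rm worldline} \;\supset\; \vec{Q}_i\cdot\int_W \vec{A},
\qquad
\vec{Q}_e=(e,\,0),\quad \vec{Q}_\chi\simeq(\epsilon\,g',\,g'),
```

so that the induced millicharge of χ is q_χ ≃ ε g'/e in units of the electron charge.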
The Festina Lente Bound
With our notation set, we move on to apply the Festina Lente bound from Ref. [16]. Reference [16] studied the evaporation of charged black holes in a de Sitter background and, by applying the rationale behind the successful Weak Gravity Conjecture (WGC) [17], proposed a constraint on the spectrum of charged particles, which applies to any U(1) field in de Sitter space (with expansion rate H). This bound, the Festina Lente (FL) bound, demands that for a minimally coupled U(1) the spectrum of charged states satisfies m² ≳ √6 g M_Pl H (2.6), where g is the U(1) charge and m its mass. Crucially, (2.6) must be satisfied by all charged states in the theory. This bound is satisfied in the Standard Model today, where the quantity on the right-hand side of (2.6) is roughly M_Pl H_0 ∼ O(meV²), or around the neutrino mass scale, whereas the lightest charged particle is the electron. The bound (2.6) can be derived by studying the decay of Reissner-Nordstrom-de Sitter black holes. Unlike in flat space, where one can have black holes of arbitrary mass and charge provided that Q ≤ M in Planck units, in de Sitter space there is a limit to both the mass and the charge that a black hole can have. This maximally charged black hole, the Nariai black hole [41,42], is the largest black hole that fits within the cosmological horizon, as illustrated in Figure 1.
Much like their flat-space counterparts, black holes in de Sitter evaporate by slowly emitting charged particles, which slowly discharges the black hole. This is a quantum-mechanical

Figure 1. Charge versus mass plot for Reissner-Nordstrom-de Sitter black holes. Sub-extremal solutions (those without naked singularities) only exist inside the gray-shaded "shark-fin"-shaped region. Unlike in flat space, there is a maximal value of the mass for a given value of the charge (the right-side edge of the shaded region); this corresponds to the so-called Nariai black hole, for which the cosmological and black-hole horizons coincide. A Nariai black hole with some charge Q and mass M can decay following the dashed line if there exist particles in the spectrum that violate the FL bound, becoming super-extremal and thus producing a pathological spacetime that does not evaporate back to empty de Sitter space. The FL bound is the condition that these pathological decays do not occur.
process, which corresponds to Schwinger pair production of electrically charged particles in the near-horizon region of the black hole, where the electric field is strongest [43][44][45]. The Schwinger current has, schematically, a suppression factor where m is the mass of the charged particle being emitted, and E is the near-horizon value of the black-hole electric field. If m 2 qE, the black hole will evaporate quickly; otherwise, the decay process is very slow.
If the Schwinger current is not suppressed, the black hole will evaporate to a singular spacetime, instead of empty de Sitter space. Intuitively, if the current is unsuppressed, m² ≪ qE, then a black hole will lose charge much faster than it loses mass (see vertical line in Fig. 1), effectively becoming overextremal, and leading to a singular spacetime; the black hole does not evaporate to empty de Sitter space. This is in tension with the principle, suggested by Weak Gravity, that every charged black hole should be able to evaporate back to empty space. This has been verified in every string compactification known to date (see e.g. the review [24]), and there is some evidence in holography [38] and from analyticity and causality in flat space [21,29]. The electric field of the Nariai black hole is E ∼ √6 g M_Pl H; imposing that the Schwinger current is suppressed, m² ≳ qE, leads to (2.6).
In order to study the consequences of FL for new darkly charged particles we must first generalize (2.6) to a setup with multiple U(1) gauge fields. This was done in [25], but since it plays a central role in our current work, we review its derivation here briefly. The FL bound (2.6) can also be written as m² > gE, where E is the electric field of the Nariai black hole. At the Nariai limit, the electrostatic energy in the black hole is comparable with the vacuum energy itself, M_Pl² H². When there is more than one U(1) field, this energy density can be distributed between the different components of the electric field. The corresponding generalization is then simply, schematically, m² ≳ √6 (Q · u) M_Pl H for a unit vector u. Because this relation has to hold for any unit vector u, by taking u ∝ Q we get m² ≳ √6 |Q| M_Pl H. This expression is the FL counterpart of the "convex hull condition" of [39] for the WGC. We emphasize that, unlike multi-field generalizations of the WGC, which can be satisfied by a finite number of particles satisfying the WGC, the FL bound must be satisfied by every state with the appropriate charges in the theory, for otherwise we could find a black hole that discharges too rapidly, becoming superextremal. Applying the above expression to the dark electron (with mass m_χ and a charge vector given as in Eq. (2.5)) we obtain m_χ² ≳ √6 g′ M_Pl H (2.10) to leading order in ε. This has direct consequences for very light charged particles, which we now explore.
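The step "the bound must hold for every unit vector u, so the strongest condition comes from u ∝ Q" can be illustrated numerically; the toy charge vector below is ours, not a value from the paper:

```python
# Illustration that max over unit vectors u of Q.u equals |Q| (numpy sketch).
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([1e-10, 1e-3])                      # toy (eps*g', g') charge vector, eps << 1
us = rng.normal(size=(200_000, 2))
us /= np.linalg.norm(us, axis=1, keepdims=True)  # random unit vectors
print((us @ Q).max(), "vs |Q| =", np.linalg.norm(Q))
# With eps << 1 we have |Q| ~ g', so the multi-field bound reduces to the
# "leading order in eps" statement m_chi^2 >~ g' M_Pl H quoted in the text.
```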
Case I: Millicharges
First we focus on the case of millicharges. The kinetic mixing between the two photons induces a millicharge on the new χ particles of size q_χ = εg′/e < g′/e. As a consequence, the upper limit in Eq. (2.10) can be conservatively applied to q_χ e as well, resulting in the limits that we show in Fig. 2 (labeled as conservative, as we have only required ε ≤ 1). This constraint places Milli-Charged Particles (MCPs) with charges and masses in the region of Eq. (2.11) in the Swampland, which, as is clear from its mass and charge dependence, will be most relevant for very light and weakly charged particles χ.
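For orientation, the boundary of the conservative region can be estimated by applying the schematic bound m_χ² ≳ √6 (q_χ e) M_Pl H_0 directly to the induced charge; the O(1) factors and input values below are our assumptions:

```python
# Conservative FL exclusion threshold for millicharges (schematic O(1) factors).
import math
M_PL, H0, e = 2.435e27, 1.5e-33, 0.303        # eV, eV, dimensionless (assumed)

def q_excluded_above(m_chi_eV):
    """Millicharges above this value would violate the (schematic) FL bound."""
    return m_chi_eV**2 / (math.sqrt(6.0) * e * M_PL * H0)

for m in (1e-6, 1e-10):                        # ~micro-eV and ~0.1 neV
    print(f"m_chi = {m:.0e} eV  ->  q_chi >~ {q_excluded_above(m):.1e} disfavored")
# At m_chi ~ 0.1 neV the threshold drops below the ~2e-14 stellar-cooling level quoted
# later in the text, consistent with the statement that FL overtakes those limits there.
```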
Figure 2. MCPs in the black-shaded region, from Eq. (2.11), would violate the Festina-Lente bound, as they make black holes evaporate to pathological spacetimes in dS (see Fig. 1). They are thus "in the Swampland", and theoretically disallowed. The region above the green line, given by Eq. (2.13), is disfavored when further assuming that the kinetic mixing is ε = e g′/(16π²), rather than just ε ≤ 1. The area above the red line is constrained by astrophysical observations of the tip of the red giant branch [48] (though this can be circumvented [36]), whereas above the purple dashed line there are limits related to its stability as DM [49]. The dotted brown line is the forecasted sensitivity of a future laboratory experiment proposed in Ref. [36].

In deriving the previous bound, we were agnostic about the size of the kinetic mixing parameter ε, as long as ε < 1. A commonly considered value is the one-loop estimate ε = e g′/(16π²) of Eq. (2.12) [46,47]. This estimate for the kinetic mixing may seem unwarranted, since we do not know anything about the spectrum of massive states of the theory; in fact, the common situation in string theory is that one has infinite towers of states [50], with increasing values of the charges. On top of this, there could (theoretically) just be a bare kinetic mixing of order 1.
In spite of these caveats, it turns out that (2.12) is a good estimate for the magnitude of the kinetic mixing in a large class of perturbative string-theory models, as described in [19]. This is because one-loop contributions of higher states in the tower cancel out. It is also in line with the emergence proposal of [28,51], which would naturally yield a kinetic mixing suppressed by the A and A′ gauge couplings, and thus parametrically identical. This gives us the more aggressive result that q_χ = εg′/e ≥ (m_χ/10 meV)⁴ (2.13) is disallowed by FL, which we also show in Fig. 2. While this aggressive constraint can be circumvented if there was an ε ∼ 1 kinetic mixing in the Lagrangian, or in the (unnatural) case that the unit charge is q_χ ≪ 1 where there is no dark photon, the conservative constraint in Eq. (2.11) is harder to evade. We emphasize that there are O(1) unknown factors in front of these constraints, and as such they ought to be taken as guidance rather than strict no-go theorems.

We now compare the region covered by the FL arguments with other probes of millicharged particles. While accelerators are sensitive to new particles up to the ∼ TeV scale [52][53][54], for the extremely weak charges we are interested in there are other, more precise laboratory probes. In particular, PVLAS rules out new light particles with charges large enough to change the vacuum polarization of light [46,47]. We show this limit in Fig. 2. That Figure shows that the FL bound improves upon the laboratory limits of PVLAS for m_χ ≲ µeV in the conservative (ε < 1) case, and for m_χ ≲ meV in the aggressive (ε ∝ g′) case, and continues to strengthen for lower masses. In addition to laboratory bounds, there are astrophysical arguments, related to cooling of red-giant, horizontal-branch and globular-cluster stars, which can constrain MCPs. The strongest of these limits is at the q_χ ≥ 2 × 10⁻¹⁴ level [55]. These limits, however, are indirect, and can be circumvented through model building [36]. This prompted the proposal of future laboratory experiments to search for MCPs more sensitively in this mass range, and we show the projected reach of the superconductive-cavity experiment from Ref. [36] in Fig. 2. We note that our constraints, albeit theoretical in nature, can already disfavor a portion of the parameter space to be probed by these future experiments. Moreover, our conservative limit can improve upon even the astrophysical constraints for m_χ ≲ 0.1 neV, or m_χ ≲ µeV for our aggressive limit from Eq. (2.13). The constraints we obtain are both phenomenologically relevant and fairly independent of the details of the model.

None of these results require the MCPs to be the cosmological DM. There are additional limits if the MCPs compose the entirety of the DM, from plasma instabilities [56,57], magnetic-field effects [58,59], and coherent effects that de-stabilize DM against annihilations or decays [49] (shown in Fig. 2). For MCPs that compose a small fraction of the DM, the effects are more subtle, and include cooling of hydrogen in the early universe [60], as well as alterations of the dispersion relation of radio emission from pulsars [61]. We mention, in passing, that the extremely light MCPs we consider here would require non-thermal production to be the cosmological DM, such as in [59,[62][63][64][65].
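The 10 meV scale appearing in Eq. (2.13) can be recovered by combining the schematic FL bound with the one-loop mixing; the sketch below uses our own O(1) conventions (the √6 from the Nariai field) and standard values for M_Pl and H_0:

```python
# Where the ~10 meV scale of Eq. (2.13) comes from:
# FL (schematic):  m_chi^2 >~ sqrt(6) g' M_Pl H_0   ->   g' >~ m_chi^2/(sqrt(6) M_Pl H_0)
# one-loop mixing: q_chi = eps g'/e = g'^2/(16 pi^2)
# together:        q_chi >~ (m_chi / m_star)^4
import math
M_PL, H0 = 2.435e27, 1.5e-33                   # eV (assumed values)
m_star = (6 * 16 * math.pi**2) ** 0.25 * math.sqrt(M_PL * H0)
print(f"m_star ~ {1e3 * m_star:.0f} meV")      # ~10 meV, matching Eq. (2.13)
```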
Case II: Secluded Dark Sectors
We can additionally apply the FL equation directly to the dark-sector coupling g′, regardless of whether the dark U(1) is significantly mixed with our sector. Such a "secluded" dark sector is motivated to be the cosmological DM, as its self-interactions can potentially alleviate tensions in structure formation [66]. Assuming a darkly charged particle (DCP), which does not interact with the visible sector other than gravitationally, we find the limits shown in Fig. 3 for different DCP masses m_χ. This is, to our knowledge, the first time that Swampland limits are applied to dark-sector particles independently of their coupling to our sector. Additionally, the DM self-interactions would affect the shape of galaxies [35], as well as alter the famous "bullet" cluster collision, which leads to the limit shown in Fig. 3. We have additionally shown in Fig. 3 a limit on DM charges from the magnetic field of our galaxy [59], assuming that for any dark charge g′ there is a kinetic mixing between the dark and regular photon of size ε = eg′/(16π²), as in the previous subsection. All these limits, while strong, only apply if a majority of the DM is self-interacting DCPs, whereas our constraint should be present as long as MCPs are in the spectrum of the theory, and the dark photon is lighter than H_0. In the spirit of studying not only the parameter space covered by QG arguments, but also possible model-building to evade them, we note that the arguments underlying the FL bound (2.6) apply only to massless photons (or massive photons but with mass below the Hubble scale H_0) [16]. Thus, strictly speaking, our constraints may be evaded if the dark photon is made sufficiently massive. In the next subsection, however, we will use a different Swampland principle to place constraints on dark-photon masses. While a rigorous argument is lacking, WGC bounds are often true for massive vector bosons as well, so the same may be true for the FL bound. We do not attempt to generalize these bounds here, but instead mention the massive-dark-photon case as a possible loophole to our constraints.

Figure 3. Parameter space of darkly charged particles (DCPs), given a coupling g′ to a dark photon (assumed massless). As in Fig. 2, the Festina-Lente bound places the black shaded region in the Swampland. Different dashed lines show constraints for DCPs if they compose the entirety of the DM, and come from self-interactions (blue [35]), stability arguments (purple [49]), and from magnetic fields (red [59], where we assume an induced millicharge q_χ given by the standard 1-loop value for the kinetic mixing ε = eg′/(16π²)).
The Mass Term
The mass m_V for a photon V_µ is described by the corresponding mass term in the Lagrangian (3.1). It is often convenient to perform the "Stückelberg trick" by replacing V_µ → A_µ − ∂_µθ/m_A and renaming m_V → m_A. This allows us to separately describe the longitudinal and transverse polarizations of the massive photon via the equivalent Lagrangian (3.2), where now F_µν = ∂_µA_ν − ∂_νA_µ, A_µ describes the propagation of two degrees of freedom by virtue of the gauge invariance shown below, and θ is a periodic scalar. The Lagrangian L is invariant under the gauge transformation (3.3). Gauge fixing θ = 0 brings us back to unitary gauge and reintroduces a longitudinal polarization into A_µ, and we recover the original Lagrangian (3.1).

What we have described so far is a perfectly valid theory of a free massive photon. Although completely consistent as a QFT at any energy scale, this theory ought to be UV completed within quantum gravity, and as such the Swampland can shed light on the parameters of this model. In particular, one must ask about the dynamical origin, in the UV, of the mass term for the photon. One possibility is that the Lagrangian (3.2) is describing the low-energy EFT after a charged Higgs field Φ picks up a vacuum expectation value (VEV). Using the Lagrangian for a charged field coupled to a massless photon, writing Φ = h e^{iϕ} (3.4), and assuming the U(1) theory has gauge coupling g′ and the Higgs field picks up a VEV v, one recovers at low energies the Lagrangian (3.2), with m_A = g′v (3.5). In this scenario, there is a massive scalar, the Higgs field, which has a mass naturally of the same order as m_A. Furthermore this scalar couples to A′, since the mass term comes from a coupling of the form g′² v h A′_µA′^µ (3.6), where we substituted one h for its VEV (a minimal symbolic sketch of this expansion is given below). We will call this scenario, where the theory at energy scales of order m_A becomes that of a massless photon coupled to a charged scalar field, the "Higgs" scenario, and will call a mass term arising as above a "Higgs mass". The coupling (3.6) is very important for the phenomenology of a Higgs massive dark photon, since it implies the dark photon A′ can scatter with the Higgs field h (or at low energies, that the radial mode of A′ can be excited), as we will review below.

Within effective field theory, the only other possibility for a massive photon is simply that the Stückelberg Lagrangian (3.1) remains valid even at energy scales beyond m_A, all the way to some UV cutoff scale where quantum gravitational (or stringy) effects become strong, possibly even the Planck scale. We will call this scenario the "Stückelberg mass" case. This is in contrast to the Higgs case in which (3.1) is not valid above m_A, where the dynamics of the Higgs also have to be taken into account.

At first sight, the phenomenology of "Stückelberg" and Higgs massive dark photons seems very different, owing to the presence or absence of a radial Higgs mode. This also means that different limits can be set on the two scenarios, as outlined in Ref. [81]. Broadly speaking, the constraints on the Higgs scenario are stronger due to the presence of a radial mode, which introduces extra interactions that must also be suppressed to be below detection limits. This is especially important for ultra-light vectors, as otherwise their mixing with the SM is suppressed by the plasma mass of the regular photon. We show the parameter space of dark photons, given their mass m_A′ and kinetic mixing ε, in Fig.
5, where it is clear that current astrophysical, laboratory, and DM constraints leave a significant gap for low m A with weak mixing [81]. This region is phenomenologically interesting, as it can possibly account for ultra-light DM, as well as explain the 21-cm excess reported by EDGES [73]. Here we will study this parameter space under the light of the Swampland, focusing on one guiding principle: that Stückelberg photons get their mass by eating an axion. This then leads to two outcomes: the presence of a radial mode (the saxion) and a UV cutoff (set by the tension of axion strings). More details are provided in the following two Subsections.
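As a minimal symbolic check of the "Higgs mass" statements above (our own sketch, with gp standing for g′ and Ap for A′; only the schematic angular part of |D_µΦ|² with Φ = h e^{iϕ} is kept):

```python
# Expanding h^2 (d phi - g' A')^2 around h = v + dh reproduces a mass term ~ g'^2 v^2 A'^2
# and the radial-mode coupling ~ g'^2 v dh A'^2 discussed around Eqs. (3.5)-(3.6).
import sympy as sp

gp, v, dh, Ap, dphi = sp.symbols("gp v dh Ap dphi", real=True)
h = v + dh                                     # radial mode expanded about its VEV
angular_piece = h**2 * (dphi - gp * Ap)**2     # schematic |h e^{i phi}| gauge-covariant part
coeff_A2 = sp.expand(angular_piece).coeff(Ap, 2)
print(coeff_A2)   # contains gp**2*v**2 (mass term) and 2*gp**2*v*dh (radial-mode coupling)
```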
Constraints From Physics of the Radial Mode
A key recent insight is that "pure Stückelberg masses" are in the Swampland. In more detail, the argument put forth in Ref. [18] is that whenever one finds an axion a in a consistent quantum theory of gravity, it is always accompanied by a corresponding "radial mode". This radial mode is often called a saxion, because together with the axion they constitute the two real scalars of an N = 1 chiral multiplet, but the statement is supposed to hold even when SUSY is broken. The claim was first introduced in one of the original Swampland papers [50], where it was mapped to the geometrical property that there is no closed curve of minimum length in the moduli space. This statement has significant phenomenological implications; for instance, under some mild assumptions, it allows one to conclude that the Standard Model photon must be exactly massless [18].
This radial mode, which we will call σ, has couplings very similar to those of the Higgs. In particular, a coupling like Eq. (3.6) but with h → σ, always exists as a consequence of the fact that the radial mode allows for the shrinking of the closed curve in scalar space parametrized by the axion. The basic conclusion is thus that the existence of the radial mode-and its coupling-render the Stückelberg case somewhat similar to the Higgs case. This can give us an extra handle to disfavor new regions of parameter space, as we can import constraints on dark photons from the Higgs to the Stückelberg scenarios. There is an important caveat to this reasoning that we outline below. An important feature of the Higgs scenario is that there is a precise prediction for the mass of the Higgs mode and, barring tuning, it is of the same order as the mass of the vector boson.
In known stringy examples, this is also true of the mass of the radial mode. Phrased in terms of the dark-photon mass, this upper limit on the radial-mode mass is given in Eq. (3.7), where m_A′ is the mass of the dark photon and g′ is the dark gauge coupling. One can give a heuristic argument for (3.7) roughly as follows. The dual field to the axion couples to strings, which must satisfy a version of the WGC for axions [27]. This sets a cutoff Λ_UV for the effective field theory, Eq. (3.8). According to the Swampland Distance Conjecture [50], we expect the effective field theory to be valid for saxion field displacements ∆σ ∼ O(M_Pl). In such a variation, the potential energy increases by ∆V ∼ m_σ² M_Pl² (3.9). Imposing that this variation is describable within the EFT leads to ∆V ≤ Λ_UV⁴, which, when rearranged, leads to (3.7).

Taking (3.7) into account, the phenomenology of the Higgs and Stückelberg scenarios is similar due to Swampland constraints. For instance, in the Higgs scenario, there is a coupling (m_A′/g′) h (∂θ)² (3.10) which is relevant for stellar-cooling constraints. We now show that a similar coupling is present in the Stückelberg case. The universal asymptotic structure for an N = 1 kinetic term for the saxion-axion system is given in Eq. (3.11). This structure is not only present in all 4d N = 1 limits in known string theory compactifications; it is also true in non-supersymmetric setups like the O(16) × O(16) string [82], and it is indeed part of the motivation behind the radial-mode conjecture of [18]. Working around a particular expectation value s_0 = ⟨s⟩ for the saxion, we can introduce the physical axion decay constant f (Eq. (3.12)), which allows us to rewrite (3.11) as Eq. (3.13). Canonically normalizing the field σ as σ̂ = f σ, we obtain a coupling (3.14) in the Lagrangian. This has the same structure as the coupling (3.10), with an additional suppression by a factor of α ≡ √2 m_A′/(g′ M_*) (3.15).
This is the caveat we previously mentioned: the two cases are very similar up to the parametrics of the coupling between the radial mode and the massive photon. The fact that the light radial mode (Eq. (3.7)) couples to the Stückelberg photon (as in Eq. (3.6) with h → ασ) means that we have similar interactions in the Stückelberg case as in the Higgs case. To estimate the constraints on massive Stückelberg photons, we consider the implications of these couplings for stellar-cooling effects. In particular, the Higgs-strahlung process in Fig. 4 that is known to dominate the production of light dark Higgses and photons in stellar plasmas [83][84][85] also exists for the Stückelberg photon (Fig. 4). It is immediately clear from inspecting the cross term in (3.6) that the amplitude for the process A → A′_T → h + θ is proportional to the product αg′ = √2 m_A′/M_*. This process is important as long as there is enough energy to allow for the production of the final state. In practice, this means that the masses of A′ and σ should be lower than the plasma mass of the photon at the Sun, which is O(100 eV). For these plasma decay processes not to significantly alter stellar evolution, the amplitude of interactions such as those shown in Fig. 4 should be small. This criterion has been used to constrain the importance of similar processes in the case of Higgsed massive photons [85] and millicharged particles [55,86]. In our case, the limits from [55] imply the bound (3.16), which is shown in Figure 5 for two representative values of M_*. Because of the smallness of m_A′, the only relevant constraint is when M_* is also small. In string models, M_* is typically the string scale and experimental constraints would prevent us from setting it too low. However, if there are models where M_* is effectively replaced by a low scale, then our bound above can become more constraining.

Figure 4. The 'Higgs-strahlung' process for the Stückelberg photon. In this case, the emitted scalar particle is σ, the radial mode required by the radial-mode conjecture (see Section 3.2). The amplitude for this process is proportional to the product ε·αg′, where the first factor comes from the kinetic mixing and the second from the vertex.

We finally note that the bound (3.7) can be avoided by making the Higgs h (or radial mode σ for the Stückelberg case) heavy enough, so it is not produced in stellar environments, at the cost of fine tuning. For the Higgs case, one can attempt to arrange the hierarchy of Eq. (3.17), where λ is the Higgs quartic coupling, and g′ the dark gauge coupling as before. Perturbative unitarity places an upper limit λ ≲ 8π². The above hierarchy then has to be arranged by choosing small g′. For example, to have dark photons in the m_A′ ∼ 10⁻¹⁴ eV mass range, but m_h ≳ keV (so as to avoid the stellar constraints), one would need g′/√λ ≲ 10⁻¹⁷, i.e., extremely feebly interacting dark charges. The question of whether potentials with a large hierarchy such as (3.17) are in the Landscape or in the Swampland is very interesting (indeed, it is tantamount to the Electroweak hierarchy problem), but beyond the scope of this paper. The Stückelberg scenario follows identically, as the bound for m_σ in Eq. (3.7) (from Ref. [18]) comes from a similar argument. In this case, we can phrase the tuning in terms of requiring a kinetic mixing several orders of magnitude larger than implied by the formula ε ∼ eg′/(16π²).
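The size of the tuning quoted above follows from the mass relations alone; a one-line check (using only the numbers quoted in the text and the identifications m_A′ = g′v, m_h ∼ √λ v up to O(1) factors):

```python
# With m_A' = g' v and m_h ~ sqrt(lambda) v, the ratio g'/sqrt(lambda) ~ m_A'/m_h.
m_A_prime = 1e-14    # eV, dark-photon mass quoted in the text
m_h       = 1e3      # eV, ~keV radial-mode mass needed to evade stellar production
print(f"g'/sqrt(lambda) ~ m_A'/m_h ~ {m_A_prime / m_h:.0e}")   # ~1e-17, as in the text
```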
Constraints from the Axion String Species Scale
There is another constraint that follows from the arguments in [18], which applies only in the Stückelberg case. In a Stückelberg theory, the axion θ is "fundamental", in the sense that it is not replaced by any other degree of freedom before the EFT breaks down. Consequently, one can consider metastable axion strings to which one may apply the corresponding version of the WGC. Doing this, one can put a bound on the string tension, and a corresponding string scale, which sets an upper bound Λ_UV for the cutoff of the local effective field theory, given in Eq. (3.18). We remind the reader that the physical interpretation of the cutoff (3.18), which also appeared in (3.8), is that it corresponds to the energy scale of the strings that couple magnetically to the axion. At this scale, therefore, a fundamental string becomes light, and the local effective field theory description breaks down (indeed, Λ_UV corresponds to the "species scale" in the case of a perturbative string limit). Since quantum field theory remains applicable at energy scales probed by the LHC, one must require this cutoff to be above ≈ 10 TeV. This gives the bound (3.19). Assuming that the magnitude of the kinetic mixing is given by Eq. (2.12), we obtain a corresponding bound on ε. This is also shown in Fig. 5.
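To get a feeling for the numbers, one can evaluate the cutoff requirement for illustrative parameters. The scaling used below, Λ_UV ∼ √(2π f M_Pl) with f = m_A′/g′, is a common form of the axionic-string scale and is our assumption; the paper's Eq. (3.18) may differ by O(1) factors, and the dark-photon mass chosen is purely illustrative:

```python
# Maximum dark gauge coupling compatible with Lambda_UV > ~10 TeV, assuming
# Lambda_UV ~ sqrt(2*pi*(m_A'/g')*M_Pl) (O(1) factors not tracked).
import math
M_PL = 2.435e27                                # eV (assumed reduced Planck mass)

def g_max(m_Ap_eV, Lambda_min_eV=1e13):        # 10 TeV = 1e13 eV
    return 2 * math.pi * m_Ap_eV * M_PL / Lambda_min_eV**2

m_Ap = 1e-12                                   # eV, illustrative value only
print(f"m_A' = {m_Ap:.0e} eV  ->  g' <~ {g_max(m_Ap):.1e} for Lambda_UV > 10 TeV")
```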
Phenomenological Implications
The Swampland-disfavored regions that we show in Fig. 5 have important phenomenological implications, which we now explore. First, there has been considerable attention to the low-frequency tail of the cosmic microwave background (CMB) as a possible avenue for new physics, motivated by the 21-cm detection during cosmic dawn by the EDGES collaboration [73]. A class of models that attempt to explain the signal introduce a dark photon that oscillates into the SM photon, thereby increasing the number of CMB photons in the Rayleigh-Jeans tail of the distribution, e.g. [74,87,88].
The increased photon number acts as an extra radio background, and thus deepens the 21-cm absorption trough, as claimed by EDGES. The dark-photon parameters proposed in Ref. [74] are indicated by the yellow star in Fig. 5. In order for this model to avoid our constraint from Eq. (3.19) one would need g′ ≲ 10⁻¹⁰ to give rise to Λ_UV > 10 TeV. This demands a fair amount of fine tuning, or a new mechanism to give rise to small kinetic mixing other than the usual 1-loop term from Ref. [33] (as ε ∝ g′ would be far too large with O(1) couplings).

Second, the dark-photon portal is one of the most popular avenues to a renormalizable theory of dark matter that interacts with our sector [89]. In particular, freeze-in DM [90,91], the case in which the DM is slowly produced from interactions with our sector over cosmic history but never thermalizes, is tantalizingly close to the reach of direct-detection experiments. In this scenario the DM coupling to our sector is through a very light mediator, and it requires tiny couplings (∼ 10⁻¹², e.g. [92]). Our results imply that freeze-in through a kinetically mixed dark photon is disfavored for masses m_A′ < 10⁻¹⁰ eV (assuming the standard ε ∝ g′ mixing), given the ε × g′ < 10⁻¹⁴ requirement for freeze-in.

Finally, ultralight bosons are an attractive dark-matter candidate, and dark photons in particular can produce the correct DM abundance for masses as low as m_A′ ∼ 10⁻²⁰ eV [65,69].
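For the freeze-in estimate above, the required couplings can be made explicit by treating the quoted product as an equality and using the one-loop mixing; this is a rough illustration only:

```python
# Solve eps*g' = 1e-14 together with eps = e*g'/(16*pi^2) (one-loop mixing).
import math
e = 0.303
target = 1e-14                                     # eps * g' needed for freeze-in (from the text)
g_prime = math.sqrt(target * 16 * math.pi**2 / e)  # since eps*g' = e*g'^2/(16*pi^2)
eps = e * g_prime / (16 * math.pi**2)
print(f"g' ~ {g_prime:.1e},  eps ~ {eps:.1e},  eps*g' = {eps * g_prime:.1e}")
```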
A plethora of experimental efforts have been developed to test A′ DM, and we show the reach of some of those "direct-detection" experiments in Fig. 5. Our work shows that part of the parameter space targeted by these experiments for low m_A′ is in the Swampland. Therefore, a detection of a dark photon within this region is not expected, and it would test our knowledge of quantum gravity.

Figure 5. Parameter space of a Stückelberg dark photon. The red region is constrained by experiments that measure photon-to-dark-photon transitions, such as COBE/FIRAS [93] and light-shining-through-walls experiments [94][95][96][97][98][99]. The green region is constrained by bounds on stellar cooling from the Sun, Horizontal Branch stars and Red Giants [100]. The gray shaded region is ruled out due to the UV cutoff from Eq. (3.19). The blue region is ruled out following the existence of the radial mode from Eq. (3.16), for which we show two values of M_*. The orange star shows the model of [74], which is an attempt to explain the 21 cm EDGES anomaly and is disfavored by our constraints. We note that there are further constraints from the Solar basin around eV masses [101], as well as from superradiance for lower masses [102].
Non-Abelian Dark Matter
In this Section we constrain Dark Matter models that make use of non-Abelian gauge fields. The idea is simple and relies on the Festina Lente bound described above. As noted in [16] and reviewed in Section 2.2, the existence of a lower bound on the mass of charged particles in (quasi-)de Sitter space means that unconfined and un-Higgsed non-Abelian gauge fields are forbidden since their gluons are massless charged particles. These gluons catalyze the decay of Nariai black holes leading to pathological spacetimes with naked singularities. While a few applications have been pointed out in [16,25], we take this opportunity to apply this bound to a specific model and comment on potential general lessons that we can learn. We attempt to describe the model being discussed in a self-contained manner to bring out the features related to the non-Abelian gauge fields and show their appeal for model-building. In this section, we will only discuss models that describe dark-matter physics, leaving comments and related ideas about inflationary and dark-energy models to Appendix A and Appendix B, respectively. The specific non-Abelian Dark Matter (NADM) model we study was proposed in [103]. The DM candidate in the NADM model is a WIMP (Weakly Interacting Massive Particle) that is thermally produced in the early universe. In addition, it transforms in the fundamental representation of a dark SU (N ) d gauge group. The gluons of the dark non-Abelian symmetry are weakly coupled today and constitute a dark radiation (DR) component that interacts with the DM. The lack of observation of DM self-interactions places an upper bound on the dark gauge coupling which gives g d < 10 −3 . The presence of the interacting DM-DR system as well as the multiplicity of the DM lead to the distinguishing features of the NADM model. For instance, its effect on N eff presents an opportunity to relieve the Hubble tension (see e.g. [104]) by altering the CMB prediction of the Hubble expansion rate H 0 . In addition, the DM multiplicity decreases the cross-section relevant for indirect-detection experiments since the DM color degrees of freedom must match for successful annihilation (see also [105] for example). By contrast, the cross-section seen in collider experiments will be enhanced since colliders can produce any of the N DM particles in the final state. Finally, DM-DR interactions can potentially play a role in the resolution of the σ 8 tension [106] since they can lead to a smooth suppression of the matter power spectrum rather than a sharp cut-off at small scales.
Since the dark gluons are weakly coupled at galactic scales, the confinement scale Λ_conf ∝ e^{−1/g_d²} is much below Hubble, and the model is incompatible with FL. In particular, in this theory, one can start with a Nariai black hole that has a charge along a Cartan direction of the non-Abelian gauge group, and the massless gluons would immediately screen this charge, causing the black hole solution to leave the extremality region as shown in Fig. 1. We emphasize that this conclusion does not require a relic abundance of non-Abelian dark radiation, as is assumed in [103], and is therefore phenomenologically stronger than an experimental exclusion since the latter can potentially be avoided by reducing the abundance of non-Abelian gauge particles. We briefly mention that we do not rule out the features of this model but only this particular realization. For example, one can have a bath of interacting relativistic components to serve as interacting radiation in lieu of the gluon bath and the FL bound would not necessarily apply in such settings. Other features may be more difficult to reproduce. An example is the DM multiplicity. This leads to correlated enhancement/suppression of the cross-section seen in various DM experiments and this signal might be difficult to come by without the presence of a symmetry. In the NADM model, the SU(N)_d gauge group ensures the spectrum has this symmetry. Instead of a gauge symmetry, one may attempt to use a global symmetry which is broken at a high scale (cf. [107]). The absence of gauge bosons, however, will change the interaction pattern between DM particles leading to very different phenomenology that deserves an independent study.
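The statement that the dark gluons are effectively massless on all relevant scales is easy to quantify with the schematic scaling quoted above (beta-function O(1) factors are deliberately omitted):

```python
# Confinement scale suppression for g_d = 1e-3, using Lambda_conf ~ mu * exp(-1/g_d^2).
import math
g_d = 1e-3
log10_suppression = -1.0 / (g_d**2 * math.log(10.0))
print(f"Lambda_conf / mu ~ 10^({log10_suppression:.1e})")
# ~10^(-4e5): even for mu at the Planck scale this is unimaginably far below the
# Hubble scale H_0 ~ 1e-33 eV, so the dark SU(N) is unconfined on cosmological scales.
```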
Conclusions
New dark sectors are ubiquitous in extensions of the Standard Model of particle physics, and in fact are necessary to explain the existence of dark matter and dark energy in our universe. These dark sectors, for instance composed of darkly charged particles or massive dark photons, are well-motivated dark-matter candidates, as well as targets for new-physics searches. As such, any new insight on their particle content can become invaluable. Dark sectors can leave distinct signatures in both particle-physics experiments and cosmological observables. This has led to a very active research field aimed at covering the vast range of possible models. In this note we have added to this rich experimental landscape by studying which parts of their parameter space contradict our current understanding of Quantum Gravity (QG). For that, we have used recent advances in the Swampland program. While the usual point of view is that QG is far beyond experimental reach, given the remoteness of the Planck scale, the Swampland program uses insights from unitarity and properties of black holes to constrain them. Using these principles, it is possible to place interesting constraints on low-energy effective field theories, which can then be checked against a plethora of String Theory constructions (so that String Theory here acts as a "laboratory" to check proposed Swampland constraints), and applied to phenomenologically interesting models.

In this paper, we have done exactly that, using the principles of [18] and the Festina Lente (FL) bound of [16] to constrain models of dark matter, as well as dark energy and inflation. Some of these models (most notably non-Abelian dark matter) are in the Swampland according to these principles. As a consequence, the phenomenology they predict, if observed, must be due to different physics. We have also been able to significantly constrain the parameter space of charged dark matter, both in the case of secluded hidden sectors as well as "millicharged" DM, where the FL bound covers previously allowed regions of parameter space for low DM masses m_χ ≲ µeV. Moreover, we have used the existence of the radial mode to place constraints on the Stückelberg mass and kinetic mixing of dark photons. This, together with previous constraints on Higgsed dark photons [85,108,109], disfavors the entire region of m_A′ ≲ 20 eV (under the assumption of a standard ε ∝ g′ kinetic mixing). These bounds on the dark sector are a novel application of Swampland principles, and can shed light on the long-standing puzzle of the nature of DM.

Given the strength of the FL bound, it is natural to expect that the results in this paper can be extended, resulting in more general and far-reaching constraints. One very interesting avenue is studying the implication of the FL bound on inflation, as we have only scratched the surface in this work. See also [110,111] for recent work in this direction. As we have shown here, "theoretical" probes from the Swampland are highly complementary to the observational program already underway to detect the dark sector of our Universe. It is interesting that while both quantum gravity and the dark sector of our cosmos are open questions in Physics, we can make progress by combining our limited understanding of both these areas.
Though the Swampland is a very active topic on the formal side, there has so far been little exploration of the phenomenological implications of existing Swampland bounds, as is evident from the scarcity of Swampland literature on such an important topic as dark matter phenomenology (see, however, [28,[112][113][114]], and especially [115], which bounds the parameter space of dark photons from positivity arguments). Our hope is that this paper will entice more phenomenology and astrophysics experts to uncover the consequences of Quantum Gravity for our universe.
FL inequality must be greater than 3H². Since particles are expected to get a mass of order Hubble during inflation, FL can potentially be marginally satisfied. The QCD sector could also be Higgsed during inflation, albeit with a different field from the SM one. This might seem tuned at first but could be realized along the lines discussed in [117]. Another possibility is that the non-Abelian sector (both SU(3) and electroweak SU(2) fields) is confined during inflation. This can happen naturally in the context of some Grand Unified Theories (GUTs). If their coupling takes the value α_0 at an energy scale Λ_0, the confinement scale of a supersymmetric GUT can be determined via the NSVZ formula [118], in which the coefficients T_Adj are group-theoretic. For E_6 grand unification, setting α_0 to its grand-unification value at the scale Λ_0 ∼ 10¹⁵ GeV gives Λ ∼ 10¹³ GeV, so that gauge fields would be confined during inflation (for H_inf above that scale). In this case, FL would apply automatically, but one would still be pressed to explain reheating in this model (which would be a strongly coupled process), and the setup could also have a monopole problem. We just present it as an illustration of the fact that, while the SM+inflation scenario is certainly incompatible with FL, the ways out are manifold. More generally, any inflationary scenario that requires the use of non-Abelian gauge fields is incompatible with FL if the bound can be applied. As we have seen above, FL cannot be avoided by tuning the gauge coupling. That said, one sure way of avoiding FL is to have the lifetime of dS be shorter than the decay time of the Nariai black hole. If this condition is satisfied then FL cannot be applied. In fact, if one starts with an initial condition that is the dS Nariai black hole, then the cosmological constant would change considerably before the black hole has a chance to decay into the superextremal region. The conclusion of [16] cannot then be applied directly to this case and a more careful analysis is required.

Inflationary models where the FL bound may apply include chromo-natural inflation [120] and gauge-flation [116], for example. In this appendix, we focus on the former. Chromo-natural inflation was proposed as an extension of natural inflation [121,122] in order to circumvent the need for a super-Planckian axion decay constant. In order to match CMB observations [123,124], the axion decay constant has to be super-Planckian, f ≳ 5M_Pl, something that has been argued extensively to be in the Swampland [125][126][127][128][129][130][131][132][133][134][135][136][137][138][139]. Chromo-natural inflation avoids this problem by coupling the axion to a non-Abelian gauge field, allowing for inflation on a steeper potential, i.e. with f < M_Pl, by letting the axion's kinetic energy be dissipated into a bath of non-Abelian gauge fields. In this model, with f < M_Pl, accelerated expansion without the non-Abelian background lasts for a relatively short time. To check the applicability of FL, we must then compare this time-scale to the black hole lifetime. For an axion near the top of a cosine potential, the condition for FL to apply, t_dS ≳ t_bh, translates to the inequality (A.3). For the parameter values given in the original paper [120], this condition is not satisfied and thus it seems like the naïve application of FL does not go through.
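Returning to the GUT confinement estimate quoted earlier in this appendix, the Λ ∼ 10¹³ GeV figure can be reproduced at leading (one-loop) order; the GUT coupling value and the use of the pure super-Yang-Mills coefficient b = 3C₂(adj) are our assumptions:

```python
# One-loop estimate of the E6 confinement scale (leading behavior of the NSVZ formula):
# Lambda ~ Lambda_0 * exp(-2*pi / (b * alpha_0)), with b = 3*C2(adj) and C2(E6) = 12.
import math
alpha_0  = 1 / 25          # assumed GUT-scale coupling
Lambda_0 = 1e15            # GeV
b = 3 * 12                 # pure super-Yang-Mills one-loop coefficient for E6
Lambda_conf = Lambda_0 * math.exp(-2 * math.pi / (b * alpha_0))
print(f"Lambda_conf ~ {Lambda_conf:.1e} GeV")   # ~1e13 GeV, as quoted in the text
```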
That said, the inequality derived above (A.3) … Finally, the production of chiral gravitational-wave signals relying on non-Abelian gauge fields, such as in [140], also has to be considered carefully in light of the FL bound. This should motivate the search for other means of generating parity-violating gravitational-wave signals, such as those studied in [141].
It is precisely these non-Abelian fields that cause the model to run afoul of the FL bound during the inflationary phase, as they ought to be unconfined. It is conceivable that another form of radiation that can thermalize during inflation can act as a substitute for the dark gluon bath, but care must be taken to ensure that the masses of the constituent particles and the gauge interactions between them do not contradict the FL bound. It is likely that the features of this model which make it incompatible with FL are not essential. A possible way out of our constraints would be to replace the non-Abelian gauge fields in the model by Abelian ones or, if strong interactions are necessary, by coupling to a Conformal Field Theory. The Swampland viewpoint often encourages exploration of phenomenological scenarios that do not fit in the standard notion of naturalness for a low-energy observer. | 12,457.2 | 2022-07-19T00:00:00.000 | [
"Physics"
] |
Automated building occupancy authorization using BIM and UAV-based spatial information: photogrammetric reverse engineering
ABSTRACT At present, due to the use of manual rather than automated measurements, the inspection for building occupancy authorization lacks objectivity. Seeking to improve this situation, in this study we used an unmanned aerial vehicle (UAV) for automated inspection for building occupancy authorization. Theoretical considerations about building occupancy authorization and trends in UAV technology were reviewed. Furthermore, reverse engineering, including digital photography, network RTK-VRS surveying, and data post-processing, was conducted. The obtained spatial information was used for building occupancy authorization inspection in a BIM platform, and the effectiveness and applicability of UAV-based inspection were analyzed. The results of this study demonstrate that UAV-based automated inspection can reduce the error rate of on-site measurement, enable a complete enumeration survey of the entire building, and collect data that ensure the objectivity of the inspection owing to their accuracy and effectiveness.
Introduction
In the current building occupancy authorization system in Korea, due to the use of manual, rather than automated measurements, the inspection for building occupancy authorization lacks objectivity (Choi 2014b). Connivance at illegal buildings has frequently led to numerous incidents (Kim 2010). This contradicts the original intent of the building occupancy authorization system to protect people's property and safety, and, therefore, there is an urgent need to improve the system through automated inspection based on spatial information. Today, the world is witnessing the Fourth Industrial Revolution characterized by the convergence of manufacturing and ICT. Accordingly, extensive research has been conducted on the convergence of construction and ICT, including BIM (Building Information Modeling), GIS (Geographic Information System), AR/VR (Augmented Reality/Virtual Reality), and UAV (Unmanned Aerial Vehicle) throughout the world. In particular, UAV enables a rapid and accurate investigation across a wide area through aerial digital photogrammetry. In the area of inspection for building occupancy authorization, the spatial information generated through a series of UAV-based reverse engineering is expected to be highly valuable as reliable digital evidence.
Hence, this paper focuses on the UAV-based automated inspection for building occupancy authorization.
Our aim is to enhance the objectivity of the inspection, thereby contributing to the original intent of the system, ie protecting public safety and property. Furthermore, as part of the research on applying ICT to AEC, this paper aims to establish whether inspection for building occupancy authorization can be automated by reverse engineering approaches, including digital photography, network RTK-VRS surveying, data post-processing, establishing BIM-UAV-based spatial information, and conducting building occupancy authorization inspection in a BIM platform using the spatial information. Figure 1 shows the flow of the research.
Consideration for automated inspection for occupancy authorization
Buildings must be designed, constructed, and supervised in compliance with the procedures and codes stipulated in the Building Act. Figure 2 explains the entire process of building occupancy authorization in Korea. On-site inspection for building occupancy authorization is a manual measurement method where an operator moves around the building carrying equipment. Accordingly, measurement errors are inevitable, and there is a limit to how much of the building can be examined while moving around it. The lack of tangible evidence to prove objectivity and legality in inspection results leads to rampant occupancy authorization of illegal structures. Other negative consequences include the loss of architectural ethics due to connivance at illegal matters caused by corruption in the process of obtaining an approval of use, safety accidents in illegal structures, and ill-gotten profits for particular individuals.
Currently, UAVs and 3D scanners are widely used in the AEC industry for automated measurement and on-site surveying. Table 1 outlines the pros and cons of using UAV-based photogrammetry and 3D laser scanning technologies. As outlined in Table 1, compared to manual measurement and 3D scanning, UAV-based photogrammetry is more suitable for the experiment on the automated inspection for building occupancy authorization, particularly in terms of cost-effectiveness and efficiency. According to previous research, the advantages of UAV are as follows. First, unlike photos taken by manned aircraft or satellites, UAV-based aerial photogrammetry accelerates data gathering and monitoring tasks (Unger, Reich, and Heipke 2014;Kim et al. 2014;Kim, Yu, Park, and Ha 2008). Second, UAV-based aerial photogrammetry has advantages over manned aircraft, artificial satellites, and 3D scanners in terms of convenience and cost-effectiveness (Colomina et al. 2008;Choi 2014a;Lee, Choi, and Joh 2015). Third, UAV features low-altitude photogrammetry and gathers high-resolution image information and spatial information, securing position accuracy and spatial resolution (Haala, Cramer, and Rothermel 2013;Cho et al. 2014;Lee, Hong, and Lee 2016). In summary, compared to the established surveying method, UAV is fast, convenient, accurate, and interoperable.
Based on the aforementioned theoretical considerations, in this study, 9 outdoor inspection items were derived for automated inspection for occupancy authorization using UAV-based photogrammetry; these items were expected to exert the greatest effects on the improvement of the on-site inspection. In addition, two further items, ie the building coverage ratio (the ratio of the building area to the lot area) and the open space ratio (the ratio of the total open space to the lot area), were also included. The enforcement ordinance of the Building Act was analyzed to determine the methods of inspecting each of the 11 items (see Table 2). The specifics listed in Table 2 pertain to the enforcement ordinance of the Building Act and Daegu City's Building Ordinance.
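The two derived items are simple area ratios, and their check against ordinance limits can be scripted directly; all numbers below are hypothetical and serve only to illustrate the calculation:

```python
# Building coverage ratio and open space ratio (hypothetical figures, not from the paper).
def ratio_percent(part_area_m2, lot_area_m2):
    return 100.0 * part_area_m2 / lot_area_m2

lot_area = 500.0                                   # m^2, hypothetical lot
coverage = ratio_percent(122.0, lot_area)          # hypothetical 122 m^2 building footprint
open_space = ratio_percent(300.0, lot_area)        # hypothetical open space
print(f"building coverage ratio: {coverage:.1f}%  (assumed ordinance cap, e.g. 60%)")
print(f"open space ratio:        {open_space:.1f}%  (assumed ordinance minimum, e.g. 30%)")
```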
UAV application process for automated inspection for occupancy authorization
A series of experiments were conducted to determine the viability of the UAV-based automated inspection for occupancy authorization. The experiment followed this procedure: 1. Lot Selection & Condition Setting; 2. UAV-based Spatial Information Generation; 3. Inspection Using UAV-based Spatial Information; 4. Analysis of Results, Comparative Verification, & Effectiveness Analysis. To control the variables, the UAV-based automated inspection for occupancy authorization was postulated to be reliable based on the literature. In addition, the permitted drawings and documents were set as the control group for the experiment, while their objectivity was posited for the comparative analysis. Based on Table 2, random expected values were set for inapplicable regulations based on building sizes and ordinances in order to determine if the inspection would be viable; relevant criteria were set by modifying the specifics to implement a comparable inspection. The building selected for the purpose of this study was a 122-m² sports facility completed in 2016 at K National University in Korea. As the selected target site was located in an educational and research institution, it had no separate cadastral line, such as the reference building line, the lot boundary, and the road lines. Therefore, a new cadastral boundary was set by referencing the natural boundary points, the permitted site plan, and the 1:1000 GRS80 ortho-image.
To create the spatial information of the building, its high-resolution images were first captured by UAV-based low-altitude photogrammetry. The UAV used in the research was a 'DJI Inspire 1 v2' fitted with a 12.4-MP digital camera for the low-altitude photogrammetry. The UAV was set to fly at heights of 10 m and 15 m above the target building at a speed of 2 m/s. The overlap and sidelap were 80% each. Next, the position accuracy was secured by calibrating the ground reference point using the Network VRS-RTK GPS measurement. ITRF 2000 Korea East Belt TM was used as the coordinate system. Finally, the images gathered by the UAV were post-processed to obtain spatial information that ensured a certain spatial resolution (Ground Sampling Distance, GSD). Pix4D mapper was used for the entire post-processing. Based on the data with the position accuracy secured, the point cloud and 3D mesh model were created. Then, the data were used to create the spatial information, including the ortho-image and DSM (Digital Surface Model). Figure 3 shows the process of generating UAV-based spatial information (Ryu 2016).
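The reported sub-centimeter GSD is consistent with a simple photogrammetric estimate, GSD = flight height × pixel pitch / focal length; the camera parameters below are assumed, plausible values for a small 12-MP UAV camera and are not specifications taken from the paper:

```python
# Ground sampling distance (GSD) estimate for low-altitude UAV photogrammetry.
def gsd_cm(height_m, pixel_pitch_um=1.6, focal_length_mm=3.6):
    return height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

for h in (10, 15):                                  # the two flight heights used
    print(f"height {h} m -> GSD ~ {gsd_cm(h):.2f} cm/pixel")
# ~0.4-0.7 cm/pixel, i.e. below the 1-cm GSD reported for the generated data.
```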
The GSD of the spatial information, including the post-processed data, the ortho-image, the point cloud data, and the DSM, was less than 1 cm, meaning that a 1-cm object was identifiable per unit pixel. The error rate of 1% per meter met the allowable margin of error (Article 20, Building Act in Korea). Also, the image overlap was over 99%, indicating that the lens distortion was sufficiently reliable.

Autodesk's Revit Architecture 2016 (Revit), a BIM platform, was used for the inspection for occupancy authorization based on spatial information. The first reason for conducting the inspection in the BIM environment was that BIM, an integrated information platform, can put all architectural information on an object-oriented model. Second, it makes it possible to easily collaborate with multiple decision makers by exchanging information about buildings through interoperability. Third, such an information model is very useful for the future, eg building management & maintenance, evacuation simulation, and environmental evaluation. Revit is compatible with CAD files, PDF files, GIF images, and point cloud data. Therefore, the extracted point cloud data, the ortho-images, and the DSM were imported into the Revit environment and overlaid on the CAD drawings. Top and cross-sectional views were used to measure the building's height, length, and area, the length of the building lot, and the widths and slopes of streets and roads. Figure 4 shows the process of inspection using UAV-based spatial information.

Comparative verification of inspection results and analysis of effectiveness

Table 3 shows the inspection methods and records for occupancy authorization based on the data gained from the previous experiment. The inspection using the data relative to each item and the comparison with the permitted drawings yielded the following results. As for the building line items, compared with the drawings, the measured distance from the road line to the building showed an 8.1-mm error. The error rate was 0.2%, which was within the allowable margin of error (3%) for the recession distance of the building line (Article 20, Building Act). As for the maximum height per street zone and road, the measured height showed an error of 300 mm.
The error rate was 7.1%, which was beyond the allowable margin of error for building heights (Article 20, Building Act). Accordingly, an on-site inspection or surveying, as well as reviews of data and drawings, was required for a closer inspection to analyze the cause of the error. The area of the building was calculated based on the outermost outline of the exterior finishing, not the central axis. Compared with the drawings, the lot area showed an error rate of 0.5%, which was within the margin of error allowed for the building coverage ratio (Article 20, Building Act), whereas the building area showed an error rate of 3.1%, which warranted a closer inspection. Overall, among the 11 inspection items, the UAV-based spatial information was applicable to 9 items. As for the safety measures for the lot, this item requires expert help. Since the inspection results for the building and for the height, length, and area of the lot differed between the spatial information and the drawings, there is a need to verify which information is closer to the on-site measurement results. Table 4 (see corresponding items' depiction in Figure 5) analytically compares the errors in the spatial information between the drawings, UAV, and on-site inspection or surveying results. Table 4 shows that the dimensions based on the UAV data are greater than those specified in the drawings. Also, the on-site measurement results are greater than those in the drawings. These findings can be attributed to construction errors.
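The error rates discussed here follow from a single formula, error rate = |measured − reference| / reference × 100, compared with the relevant allowable margin; the absolute dimensions below are hypothetical, chosen only so that the resulting rates match the ones reported:

```python
# Error-rate check against the allowable margin of error (Article 20, Building Act).
def error_rate_percent(measured, reference):
    return abs(measured - reference) / reference * 100.0

# Hypothetical setback distance: 8.1-mm difference on an assumed 4.05-m drawing value
print(f"setback: {error_rate_percent(4.0581, 4.0500):.1f}%  (allowed: 3%)")
# Hypothetical building height: 300-mm difference on an assumed 4.2-m drawing value
print(f"height:  {error_rate_percent(4.500, 4.200):.1f}%  -> beyond margin, closer inspection")
```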
The on-site measurements were closer to the results based on the UAV-based spatial data than those based on the drawings. Therefore, the inspection results based on the spatial information are closer to the dimensions of the actual building. The error difference between the UAV-based spatial information and the on-site measurement results was compared with the allowable margin of error (Article 20, Building Act in Korea). The UAV-based data for automated inspection for the authorization showed the error rates of 3%, 0.5%, and 1% in the distance, building coverage ratio, and floor area ratio, respectively, falling within the statutory allowable margin of error criteria. In addition, both the building height and the length of the flat surface showed an error rate exceeding 2%, meeting the criteria. That is, the exterior information of the building measured based on the spatial information generated by the UAV met the allowable margin of error, which underscores the reliability and accuracy of the information.
Next, Table 5 shows the amount of time spent per construction type for the analysis of effectiveness (Yun et al. 2016). Imaging, image processing, and analysis of results took 2.5 hours, 3.5~9 hours, and 1.5 hours, respectively. As for photogrammetry, the surveying at ground reference points and the aerial photography should take longer in proportion to the size of a building. As for image processing, the time increases with the number of photos, which, in turn, increases with the size of the building, and varies with the quality of the results. To obtain outputs of moderate quality at half the original image size, approximately 3.5 hours were spent on 100 photos; processing the images at their original sizes took 3~4 times longer.
Except for the different sizes, the spatial resolution and the position accuracy remained identical, which ensured the reliability of the inspection with small-sized outputs. As for the analysis of results, the time spent should not be affected by the increase in the size of a building unless the number of inspection items increases. The analysis of effectiveness indicated it would take 7.5 hours to inspect the building for occupancy authorization based on the UAV-based spatial information; therefore, the output of moderate quality was sufficient for the inspection. According to previous research on the time for inspection agents to conduct the inspection on behalf of municipal offices, the municipal ordinances prescribed 3-5 days for the inspection, which seemed sufficient for inspecting the outdoor parts of the building and for securing objective data by generating spatial information.
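The overall time budget quoted above can be tallied directly from the per-task figures:

```python
# Total inspection time for moderate-quality outputs (figures quoted from Table 5).
imaging, processing, analysis = 2.5, 3.5, 1.5      # hours
total_hours = imaging + processing + analysis
print(f"total ~ {total_hours} hours, well within the 3-5 days allowed by municipal ordinance")
```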
Conclusions
This study aimed to improve the objectivity of the inspection for occupancy authorization through automation and to explore the applicability of UAV technology. To this end, we analyzed the viability of automated on-site inspection for occupancy authorization. We assumed this approach to be easier, faster, and more reliable than the conventional method, and our results supported this expectation. Specifically, the findings of this study can be summarized as follows.
First, a method of UAV-based automated on-site inspection for building occupancy authorization was proposed as a series of reverse engineering approaches, including UAV-based low-altitude photogrammetry, network RTK-VRS surveying, data post-processing, establishing BIM-UAV-based spatial information, and building occupancy authorization inspection in a BIM platform using the spatial information. Accordingly, in this study, manual inspection was replaced by photogrammetry, which enabled outdoor inspection of the entire building.
Second, the method used in this study proved to be more reliable than the conventional method because of the position reference point and the spatial resolution secured. The intuitive comparison based on the visible and objective spatial information model of the building successfully measured the distance, height, and area. Therefore, this experiment proved that the 11 items for outdoor inspection among the 30 items for the inspection for occupancy authorization are viable for automation. However, this paper is limited to outdoor information for the inspection for occupancy authorization, which warrants further research on the automated inspection of indoor information using other devices including 3D scanners.
Third, as for the efficiency of the UAV-based automated inspection, the spatial information of a building could be acquired simultaneously with the inspection for occupancy authorization. As demonstrated by our results, the proposed method shortens the time to occupancy authorization and meets the allowable margin of error in Korea (Article 20, Building Act), which suggests that the proposed method is reliable and effective. Overall, the UAV-based inspection for occupancy authorization of a building was proved to be effective in practice. The data gained in the process of UAV-based automated on-site inspection for occupancy authorization could be shared for monitoring in the public sector and could serve as objective reference data for purposes such as cadastral resurvey, national 3D geospatial information generation, and ortho-photograph production. The proposed method could address further challenges, including moral hazards and safety accidents resulting from the tolerance for unlawful buildings, and create a sound ethos in the building industry. The information reported in this study seems applicable to the cadastral resurvey or the 3D national spatial information implementation in connection with government policies. In future research, it would be necessary to determine the applicability of the proposed method to a wider range of buildings, eg large buildings of complex geometry, high-rise buildings, and even mega structures. This would require classification of buildings based on their size and functions.
"Engineering",
"Environmental Science"
] |
Precise Identification of Chromosome Constitution and Rearrangements in Wheat–Thinopyrum intermedium Derivatives by ND-FISH and Oligo-FISH Painting
Thinopyrum intermedium possesses many desirable agronomic traits that make it a valuable genetic source for wheat improvement. The precise identification of individual chromosomes of allohexaploid Th. intermedium is a challenge due to its three sub-genomic constitutions with complex evolutionary ancestries. The non-denaturing fluorescent in situ hybridization (ND-FISH) using tandem-repeat oligos, including Oligo-B11 and Oligo-pDb12H, effectively distinguished the St, J and JS genomes, while Oligo-FISH painting, based on seven oligonucleotide pools derived from collinear regions between barley (Hordeum vulgare L.) and wheat (Triticum aestivum L.), was able to identify each linkage group of the Th. intermedium chromosomes. We subsequently established the first karyotype of Th. intermedium with individual chromosome recognition using sequential ND-FISH and Oligo-FISH painting. The chromosome constitutions of 14 wheat–Th. intermedium partial amphiploids and addition lines were characterized. Distinct intergenomic chromosome rearrangements were revealed among Th. intermedium chromosomes in these amphiploids and addition lines. The precisely defined karyotypes of these wheat–Th. intermedium derived lines may be helpful for further study on chromosome evolution, chromatin introgression and wheat breeding programs.
Introduction
As a perennial grass with a native distribution throughout the Mediterranean and Eastern Europe [1], Thinopyrum intermedium (Host) Barkworth & D.R. Dewey has undergone the most direct selection for hay and pasture grass improvement [2]. As an allohexaploid (2n = 6x = 42), Th. intermedium chromosomes have previously been identified and allocated into three tentatively designated sub-genomes, J, JS and St, where St is closely related to the genome of Pseudoroegneria strigosa and J is closely related to that of Th. bessarabicum; the JS genome is a modified version of the J and St genomes [3,4]. Th. intermedium consists of two subspecies, the intermediate wheatgrass ssp. intermedium and a pubescent variant, ssp. trichophorum, which carries novel and high levels of resistance to several wheat fungal diseases, such as rusts and powdery mildew, and many useful agronomic traits, including novel glutenins, and thus contributes to the tertiary gene pool for wheat improvement [4,5]. The production and identification of amphiploids or partial amphiploids between wheat and Th. intermedium is an important intermediate step for gene transfer [5]. To date, a number of wheat-Th. intermedium amphiploids, Otrastsyuskaya (OT), TAF46, Zhong1 to Zhong5, 78829, TAI7044, TE-3, TE253-1, TE257, TE267, TE346, SX12-787, SX12-1150 and SX12-1269, have been obtained [4,[6][7][8][9][10]. The precise identification of individual Th. intermedium chromosomes is essential for comparative molecular and cytogenetic analysis of Th. intermedium and for the introgression of diverse Th. intermedium chromatin for wheat breeding purposes. Owing to the complexity of the genomic composition of Th. intermedium, the precise recognition of individual Th. intermedium chromosomes in the wild grass itself and in wheat-Th. intermedium partial amphiploids is rather difficult with current molecular cytogenetic methods, including C-banding, genomic in situ hybridization (GISH) and fluorescent in situ hybridization (FISH) with the reported probes [4,5,[10][11][12]. Recently, we developed seven bulked pools for an oligo painting system, which enabled the assignment of chromosomes to the Triticeae linkage groups and the characterization of wheat-alien chromosome derivatives [13]. The powerful Oligo-FISH painting methods [14], in combination with non-denaturing fluorescent in situ hybridization (ND-FISH) using multiple oligo probes [15][16][17][18][19][20], have clearly improved the efficiency and accuracy of distinguishing Th. intermedium chromosome segments and their introgression in a common wheat background.
In the present study, we established a standard karyotype of individual Th. intermedium chromosomes, which show the genomic heterogeneity of wheatgrass. Comprehensive FISH and Oligo-FISH painting enabled the precise identification of individual chromosomes and chromosome rearrangements in a wheat-Th. intermedium partial amphiploid and derived lines.
Identification of Individual Thinopyrum intermedium Chromosomes
All 21 pairs of Th. intermedium chromosomes have been previously identified by ND-FISH using the repetitive sequences Oligo-pSc119.2 and Oligo-pTa535 as probes. Subsequently, ND-FISH using the probes Oligo-B11 and Oligo-pDb12H was performed to further characterize the mitotic metaphase chromosomes of the Th. intermedium accession PI440043. Oligo-pDb12H clearly marked the 14 JS chromosomes of Th. intermedium, while abundant Oligo-B11 signals revealed the St chromosomes and mostly telomeric and sub-telomeric Oligo-B11 signals marked the J chromosomes [17,18]. As shown in Figure 1a, all 42 Th. intermedium chromosomes were clearly assigned by ND-FISH to three subgenomes, designated J, St and JS, each with 14 chromosomes. The sequential ND-FISH patterns of Oligo-pSc119.2 and Oligo-pTa535, combined with the FISH patterns of Oligo-B11 and Oligo-pDb12H, distinguished the individual Th. intermedium chromosomes within the J, St and JS sub-genomes (Figure 1b).
The wheat-barley bulked oligo pools Synt1 to Synt7 were then used in sequential FISH to validate the signal distribution and specificity on the chromosomes of Th. intermedium PI440043 (Figure 1). Each chromosome plate was first analyzed by ND-FISH with the probe combinations Oligo-B11 + Oligo-pDb12H (Figure 1a) and then Oligo-pSc119.2 + Oligo-pTa535 (Figure 1b,e,g,i). For example, the probe Synt5 produced strong green hybridization signals along the entire lengths of three homoeologous chromosome pairs, i.e., a group of six chromosomes (Figure 1c). According to the ND-FISH results, these three homoeologous chromosome pairs belong to the St, J and JS subgenomes, respectively, and were thus designated 5St, 5J and 5JS (Figure 1c). In the same metaphase cell, the probe Synt7 generated distinct red signals on the three chromosome pairs 7St, 7J and 7JS (Figure 1c), and Synt3 identified another three chromosome pairs, 3St, 3J and 3JS (Figure 1d). Similarly, the bulked probes Synt1, Synt2, Synt4 and Synt6 also hybridized to mitotic chromosomes of Th. intermedium, and each produced distinct signals on its three homoeologous chromosome pairs (Figure 1). Therefore, all seven painting probes yielded distinct hybridization signals covering the chromosome arms along their entire lengths for each linkage group, suggesting that bulked oligo-based FISH painting can be used to identify each of the Th. intermedium chromosomes or chromosome segments of particular homoeologous groups. The 21 chromosome pairs were finally assigned to seven homoeologous groups within the St, J and JS subgenomes (Figure 1k).
Based on the hybridization patterns of Oligo-pSc119.2 and Oligo-pTa535 on the chromosomes of PI440043, some homologous chromosome pairs, including 2St, 6St and 3JS, showed different Oligo-pSc119.2 signal patterns, with the presence or absence of signals at their terminal regions. Furthermore, the chromosome pairs 7JS and 5J displayed different Oligo-pTa535 signals in their pericentromeric regions (Figure 1). Importantly, the different ND-FISH patterns were visible on only one member of each homologous pair, indicating structural chromosome heterozygosity in the Th. intermedium plants.
Chromosome Identification of Wheat-Th. intermedium Amphiploids
The wheat-Th. intermedium partial amphiploid TAF46 was selected to characterize its Thinopyrum chromosome composition. Sequential ND-FISH using Oligo-pSc119.2 + Oligo-pTa535 and FISH using the probes Oligo-B11 and Oligo-pDb12H showed the absence of Oligo-pDb12H hybridization signals, indicating that TAF46 did not possess JS chromosomes. A total of six St and eight J chromosomes were observed based on the hybridization patterns of Oligo-B11 (Figure 2a,b), consistent with previous reports based on St-genome GISH [3]. Oligo-FISH painting with the seven probes gave rise to distinct hybridization signals covering the wheat and Th. intermedium chromosome arms along their entire lengths for each homoeologous group. The probe Synt1 produced strong green signals on the chromosome pairs 1A, 1B and 1D (Figure 2c), in addition to a pair of Th. intermedium group 1 chromosomes, which was defined as 1J by comparison to the standard karyotype (Figure 2b,d). Similarly, the probe Synt3 generated distinct red signals on eight chromosomes, including 3A, 3B and 3D previously identified by Oligo-pSc119.2 + Oligo-pTa535, and a pair of Th. intermedium 3J chromosomes (Figure 2c). The bulked probes Synt2 + Synt5 (Figure 3e) and Synt4 + Synt6 (Figure 2g) likewise hybridized specifically to chromosomes of TAF46, as compared with the sequential ND-FISH of Oligo-pSc119.2 + Oligo-pTa535 (Figure 2f,h). These results show that the Th. intermedium chromosomes of TAF46 consisted of J-genome chromosomes of homoeologous groups 1, 3, 5 and 7 and St-genome chromosomes of groups 2, 4 and 6 (Figure 2). The karyotype of the individual Th. intermedium chromosomes of TAF46, with their subgenome and linkage group assignments, is shown in Figure 2j. Our results are consistent with the chromosome assignment of Th. intermedium addition lines by GISH and with the molecular analysis of the TAF46-derived wheat-Thinopyrum addition lines L1 to L7 [21].
We used a similar approach to precisely identify the Th. intermedium chromosomes in the partial amphiploid 78829 (2n = 56). ND-FISH using Oligo-pSc119.2 and Oligo-pTa535 revealed that 78829 carries a complete wheat chromosome complement, with 14 each of the A-, B- and D-genome chromosomes. Sequential ND-FISH with the probes Oligo-B11 and Oligo-pDb12H showed that the Thinopyrum chromosomes comprised six St, four St-JS, two J-JS and two JS chromosomes (Figures 1 and 3b). Interestingly, an Oligo-B11 signal appeared in the terminal region of chromosome 4D, indicating a small translocation between 4D and a Th. intermedium chromosome (Figure 3a). By comparing the ND-FISH patterns of Oligo-pSc119.2 + Oligo-pTa535 with the bulked pool probes Synt1 + Synt7 (Figure 3c and Figure S1a), we constructed the karyotype of the 14 Th. intermedium chromosomes of 78829 as linkage groups 3St, 4St, 5St, 6JS-J, 7JS, 1St-JS and 2St-JS (Figure 3j). Therefore, the bulked oligo-based FISH painting was effective in identifying each of the wheat chromosomes or chromosome segments of particular linkage groups in the wheat-Th. intermedium amphiploid.
Figure 3. Sequential FISH with the probes Oligo-B11 + Oligo-pDb12H (a), Oligo-pSc119.2 + Oligo-pTa535 (b,d,f,h) and oligo painting using the specific bulked oligo probes Synt1 + Synt7 (c), Synt3 + Synt6 (e) and Synt2 + Synt5 (g) for the wheat-Th. intermedium partial amphiploid 78829. Karyotypes of the wheat (i) and Th. intermedium (j) chromosomes are shown. Chromosomes were counterstained with DAPI (blue). Arrows indicate the small wheat-Thinopyrum translocation chromosomes.
Revisiting the Karyotype of Wheat-Th. intermedium Additions Z3
The wheat-Th. intermedium chromosome addition line Z3 contained a pair of short satellited chromosomes with clear nucleolus regions. ND-FISH using the probes Oligo-B11 + Oligo-pDb12H showed that the added Th. intermedium chromosomes were a J-St-JS translocation, and sequential ND-FISH using the probes Oligo-pSc119.2 + Oligo-pTa535 showed that these chromosomes carried only weak signals (Figure 5a). Oligo-FISH painting with the probes Synt1 and Synt5 was used to screen metaphase spreads of Z3. The probe Synt1 produced strong green signals on chromosomes 1A, 1B and 1D, and the probe Synt5 generated red signals on chromosomes 5A, 5B and 5D, while the added Th. intermedium chromosomes showed both green and red signals, one in each arm (Figure 5b). Therefore, the Oligo-FISH painting results indicated that the added Thinopyrum chromosomes in Z3 were a pair of translocated chromosomes involving the short arms of group 5 and the long arms of group 1. Sequential ND-FISH with the probes Oligo-5SrDNA, Oligo-pTa71, Oligo-3A1 and Oligo-pSt122 was performed to confirm the karyotype of the added Th. intermedium chromosomes of Z3. Signals from Oligo-5SrDNA and Oligo-pTa71 were observed in the interstitial regions of the satellite regions of the Th. intermedium chromosomes in Z3 (Figure S3). The alien chromosome in Z3 showed strong Oligo-3A1 hybridization signals on the short arm close to the centromere, and Oligo-pSt122 signals at the telomeric ends of the long arms. In addition, PCR analysis with molecular markers of group 5S (CINAU1462, CINAU1463, CINAU1472, CINAU1489) and group 1L (TNAC1021, TNAC1042, TNAC1076, TNAC1088) produced Th. intermedium-specific amplification in Z3 (Figure 6). The evidence provided by the molecular markers and the standard Th. intermedium karyotype (Figure 1) indicates that the Th. intermedium chromosome in Z3 is 5JS.1St-JSL. Therefore, Oligo-FISH painting combined with molecular markers provides an opportunity for the precise identification of complex chromosome rearrangements.
Discussion
Assigning a chromosome to a specific linkage group in polyploid Triticeae species with extremely large genomes has traditionally relied on aneuploid analysis combined with individual chromosome recognition [22]. However, such aneuploid stocks are difficult to maintain and complete sets of aneuploids are not available for all Triticeae species [23]. Chromosome-banding techniques have long been considered a fast, reliable and economical means for the identification of chromosomes in wheat and related species; however, chromosome banding reveals largely uninformative heterochromatin blocks [24][25][26][27][28][29][30][31][32][33][34][35]. Previous studies revealed that Th. intermedium possesses a large amount of cytogenetic polymorphism and structural modification of chromosomes, as shown by C-banding [24]. Moreover, conventional FISH for chromosome identification has been based principally on dispersed or tandem repetitive DNA sequences, which are mostly not linkage-group specific [16,20,23,26]. Synthesizing short single-copy oligonucleotide pools has provided a new and affordable method for developing chromosome-specific painting probes for FISH [27,28]. We first applied the seven bulked oligo pools Synt1 to Synt7 as permanent resources, which enabled us to identify the homoeologous groups of chromosomes from Triticeae species quickly, at low cost and with high efficiency [13,14]. In the present study, we conducted sequential ND-FISH probed with Oligo-B11 and Oligo-pDb12H to distinguish each of the three genomes St, J and JS in Th. intermedium. After that, Oligo-FISH painting with the probes Synt1 to Synt7 was used to identify the seven individual linkage groups. In this way, we established, for the first time, all 21 pairs of Th. intermedium chromosomes with their precise subgenome and linkage group assignments, based on the Oligo-FISH painting probes and ND-FISH with subgenome-related probes (Figure 1). Our protocols can be used to karyotype different accessions of Th. intermedium. The heterozygosity of each linkage group of Th. intermedium was also revealed (Figure 2). Heterozygous FISH patterns were observed in seven chromosome pairs, indicating the complexity of the Th. intermedium karyotype, which was difficult to resolve with previous cytogenetic methods. Recombination among inter-genomic and non-homologous chromosome pairs was not found in the present Th. intermedium accession. The appropriate multiplex probes could be applied to study chromosome constitution and chromosome variation in a large number of wild wheatgrass genetic resources.
Cauderon [31] and Cauderon et al. [32] reported the production of the wheat-Th. intermedium partial amphiploid TAF46 with 56 chromosomes, which was first shown by C-banding to carry seven pairs of alien chromosomes and a wheat 5B-7B reciprocal translocation [24]. Using GISH with genomic DNA from Ps. strigosa, Chen et al. [33] determined that TAF46 has six St and eight J genome chromosomes in a wheat background, and also found that chromosome 6A was missing in an aneuploid of TAF46. In the present study, we precisely identified the individual St and J chromosomes in TAF46 and confirmed that four 6D chromosomes substituted for 6A of wheat (Figure 3). Zhong 5 has a Thinopyrum chromosome composition that includes four St, two JS and eight St-JS or St-J translocation chromosomes [33]. The present study defined the linkage group of each individual Thinopyrum chromosome in Zhong 5, consistent with previous ND-FISH results from Zhong 5-derived addition lines [24]. Robertsonian translocations between 1St and 1JS and between 2JS and 2St, as well as two small translocations between 4D and Th. intermedium chromosomes, occurred in 78829 (Figure 3). In the present study, the Oligo-FISH painting probes facilitated the study of inter-species chromosome homologous relationships and visualized non-homologous chromosomal rearrangements in some wheat-Thinopyrum derivatives. Our results provide a precise recognition of the individual Th. intermedium chromosomes and their rearrangements in the 13 wheat-Th. intermedium partial amphiploids, which will be helpful for the subsequent transfer of Thinopyrum chromatin from amphiploids to wheat. The high transmission rate of chromosome 7JS in the identified amphiploids may be partially due to the presence of novel disease- and insect-resistance genes [4,34,35].
The wheat-Th. intermedium partial amphiploids have stable karyotypes, since the homoeologous groups of the three subgenome chromosomes are highly complementary [4]. However, their chromosome structures are easily rearranged following hybridization with wheat [5]. Wheat-Th. intermedium introgression lines have been developed to transfer novel gene(s) from the diversified gene pool of Thinopyrum to wheat [5,8]. Tang et al. [34] showed by GISH that the addition line Z3, derived from Zhong 5, contains a pair of Th. intermedium-derived small satellite chromosomes. Hu et al. [36] showed that the C-banding patterns and the pTa71 FISH signals of the 1St#2 chromosomes in AS1677 differ from those in Z3. In the present study, ND-FISH with multiple probes and Oligo-FISH painting indicated that the added Th. intermedium chromosome in Z3 is a 5JS.1St-JSL translocation (Figure 5). The Th. intermedium chromosomes in Z3 are derived from the chromosome set of the parent Zhong 5 (Figure 4). Our previous study identified that the line Hy37, which also originated from Zhong 5, contains a 5JS.3StS chromosome [24], indicating frequent chromosome modification after the wheat-Th. intermedium amphiploid was crossed to wheat. Therefore, oligo-painting probe-based FISH proved to be a useful tool for visualizing chromosome rearrangements arising in early and later generations of wheat-alien transfer following wide hybridization and chromosome engineering.
High molecular weight glutenin subunits (HMW-GSs) are encoded by the Glu-1 loci located on the long arms of the group-1 chromosomes of Triticeae species [37,38]. Several HMW-GS genes have been identified in the subgenomes of Th. intermedium [39]. Niu et al. [40] systematically characterized the HMW-GS composition of several wheat-Th. intermedium partial amphiploids and derivatives, and found that TAF46 and Zhong 2 express Thinopyrum-specific HMW-GSs. Hu et al. [36] confirmed that the Th. intermedium ssp. trichophorum-derived partial amphiploid and substitution line AS1677 expresses the 1St-specific HMW-GS. The present study found that both TAF46 and Zhong 2 carry the 1J chromosome, while other amphiploids contain the rearranged 1StS.1JSL chromosome (Figure 4). The line Z3 with 5JS.1JSL did not express Thinopyrum-specific HMW glutenin subunits [40]. It is worthwhile to investigate HMW-GS expression in newly developed Th. intermedium 1JS substitution lines [41,42], to reveal the structure of the 1JSL arm in Z3 and to determine why HMW-GS expression is absent. HMW-GS gene expansion and expression will be studied in detail to reveal the mechanisms of the changes that occur during allo-polyploidization and introgression [43]. The ongoing chromosome-level genome sequencing of Th. intermedium and the re-sequencing of wheat-Th. intermedium derivatives will be essential to dissect the complex evolutionary history of Triticeae species [43][44][45][46].
Probe Preparation
The seven oligo-FISH pools Synt1 to Synt7 corresponded to the seven Triticeae homoeologous groups, respectively [13]. These oligo probes were selected from single-copy sequences derived from barley chromosomes 1H to 7H with 96% sequence homology to wheat linkage groups 1 to 7, which enabled us to distinguish the chromosomes of specific linkage groups. The bulked oligo libraries Synt1 to Synt7 were synthesized by MYcroarray (Ann Arbor, MI, USA). Probe preparation from the synthesized oligo libraries was performed as described by Han et al. [29]. The tandem repeat-based oligonucleotide probes used for ND-FISH are listed in Table S1. Labeled oligonucleotide probes were synthesized by Shanghai Invitrogen Biotechnology Co. Ltd. (Shanghai, China). The labeling and preparation of the bulk painting oligos were conducted as described by Li and Yang [14].
Fluorescence In Situ Hybridization
Root tips from germinated seeds were collected and treated with nitrous oxide followed by enzyme digestion, following the procedure of Han et al. [47]. The synthetic oligonucleotides were 5′ end-labelled with either 6-carboxyfluorescein (6-FAM) for green or 6-carboxytetramethylrhodamine (Tamra) for red signals. The protocol of non-denaturing FISH (ND-FISH) with the synthesized probes was described by Fu et al. [19]. After the oligo-based FISH, sequential FISH with the bulk painting oligos was conducted as described by Li and Yang [14]. Photomicrographs of FISH chromosomes were taken with an Olympus BX-53 microscope equipped with a DP-70 CCD camera.
Molecular Marker Analysis
The PLUG markers [48] and CINAU primers [49] located on specific chromosomes were obtained by searching the Wheat Genome Assembly database (ref. v1.0). The PCR protocol and the 8% PAGE gel separation were as described by Hu et al. [36].
Conclusions
The present oligo-bulked pool FISH visualized homoeologous regions directly on Triticeae chromosomes in a simple and fast experimental procedure. Our present comparative genomic-based oligo-painting FISH studies provide new insights into the evolution of the Th. intermedium genomes. The protocol of FISH with multiple types of oligos has great potential for the high-throughput karyotyping of wheat-Th. intermedium introgression lines for effective wheat breeding by chromosome manipulation.
"Biology",
"Agricultural And Food Sciences"
] |
A Motor-Gradient and Clustering Model of the Centripetal Motility of MTOCs in Meiosis I of Mouse Oocytes
Asters nucleated by microtubule (MT) organizing centers (MTOCs) converge on chromosomes during spindle assembly in mouse oocytes undergoing meiosis I. Time-lapse imaging suggests that this centripetal motion is driven by a biased 'search-and-capture' mechanism. Here, we develop a model of a random walk in a drift field to test the nature of the bias and the spatio-temporal dynamics of the search process. The model is used to optimize the spatial field of drift in simulations, by comparison to experimental motility statistics. In a second step, this optimized gradient is used to determine the location of immobilized dynein motors and the MT polymerization parameters, since these are hypothesized to generate the gradient of forces needed to move MTOCs. We compare these scenarios to self-organized mechanisms by which asters have been hypothesized to find the cell center: MT pushing at the cell boundary and clustering motor complexes. By minimizing the error between simulation outputs and experiments, we find that a model of 'pulling' by a gradient of dynein motors alone can drive the centripetal motility. Interestingly, models of passive MT-based 'pushing' at the cortex, clustering by cross-linking motors and MT dynamic-instability gradients, by themselves, do not result in the observed motility. The model predicts sensitivity of the results to motor density and stall force, but not to the number of MTs per aster. A hybrid model combining a chromatin-centered gradient of immobilized dynein, diffusible minus-end-directed clustering motors and pushing at the cell cortex is required to comprehensively explain the available data. The model makes experimentally testable predictions about the spatial bias and the self-organized mechanisms by which MT asters can find the center of a large cell.
Introduction
Spindle assembly in higher eukaryotic cells involves the self-organization of microtubules (MT) into a bipolar structure. During mitosis in animal cells, spindle poles are defined by a pair of centrosomes. However bipolar structures emerge even in the absence of centrosomes during meiosis in vertebrates as well as mitosis in plants. In such acentrosomal spindles, the poles self-organize by the dynamic interactions of MTs with molecular motors, regulatory factors and chromatin. While multiple components of this cellular-scale pattern forming system have been identified, the precise nature of the interactions between the components are still not completely understood.
The meiotic maturation of mouse oocytes is a well studied example of such an acentrosomal spindle assembly system. The first meiotic division is characterized by germinal vesicle breakdown (GVBD) [1], before and after which small aster-like fibrillar structures or microtubule organizing centers (MTOCs) are observed [2]. MTOCs, which are nucleated in both the cytoplasmic and peri-nuclear spaces, aggregate at the center to form a spindle by prometaphase I [3]. Such a convergence of radial MT arrays or asters was reported previously in Xenopus meiosis II oocytes [4]. Using cell-free Xenopus oocyte extracts, this convergence was shown to result from asymmetric centrosomal MT growth due to a gradient of RanGTP [5][6][7], referred to as biased 'search-and-capture'. However, during meiosis I in mouse oocytes, experimental perturbation of RanGTP levels does not significantly affect spindle assembly [8,9]. If RanGTP does not act as a guidance cue as reported previously [10], the nature of the directional cue and of the force generation remains to be understood.
The force required for MTOC convergence to the nuclear region is thought to originate from a combination of MTs, motors and anchorage points. Multiple mechanisms have been reported to drive the transport of radial MT arrays in cells: (a) polymerization-dependent pushing forces, as seen during the centering of asters in vitro [11,12]; (b) cortical force-generator based pulling [13]; (c) cortical motors which both depolymerize and pull [14]; (d) cytoplasmic minus-end-directed motors which pull asters in a length-dependent manner [15]; (e) cytoplasmic streaming by cargo transport driving aster movement [16,17]; and (f) acto-myosin contractility, as seen in starfish oocytes [18]. Contact with the cell cortex can move asters when the relative MT lengths are comparable to the cell radius [19]. Both active and passive mechanisms drive the movement of centrosome-nucleated asters. However, most of the cortical pushing and pulling models are unlikely to affect long-range movement of MTOCs, whose MT lengths (~3 μm) are small compared to the cell radius (~40 μm). Transport of asters by cytoplasmic streaming based on cargo transport by one large aster [17] is also unlikely to drive mouse meiosis I oocyte MTOCs, given their small size and large number (~80 to 100), which would prevent a coherent and directed flow. Inhibition of acto-myosin contractility has also been shown to have no effect on the centripetal movement of mouse MTOCs [9]. While centrally anchored MTOCs and cross-linking motors have also been proposed by Schuh et al. [9] to drive the MTOC motility, the movement continues even after nuclear envelope breakdown (NEBD). Thus, for a complete theoretical understanding of the mechanism by which MTOCs converge during spindle assembly, a mathematical model of the process is necessary to test the multiple hypotheses that have been proposed.
Theoretical models have been used to probe the interactions of microtubules and motor complexes and are capable of reproducing in vitro self-organized patterns [20][21][22]. These simulations have been extended to understand the role of multiple components in spindle assembly such as antiparallel interactions [23], pole focussing by minus-end directed motors [24], gradients of stabilization [25] and intra-spindle nucleation and dynamic instability regulation [26]. In recent work, we have demonstrated the centripetal movement of centrosomal MT asters towards surface immobilized chromatin in Xenopus egg extracts can be modeled by a gradient of polymerization dynamics and uniform motor distribution [27]. This is comparable to a model of length-dependent pulling by motors to translocate MT asters during C. elegans embryogenesis [15]. However, neither of these models take into account the relatively shorter MTs seen in MTOC asters, and lack details specific to meiosis I. In search of common design principles in spindle assembly, theoretical modeling of the centripetal motility of MTOC arrays can be used to test the generality of previous results.
Here, we quantify the spatial trends in MTOC motility and find the random and directional components of motility depend on how far the MTOCs are from the cell center. The detailed quantitative analysis allows us to develop and test theoretical models of random walk with drift. Only a spatial gradient of drift can reproduce the experimental data. Such an optimized gradient is further used to model MT dynamic instability and motor distributions, to test the combination of mechanisms that can reproduce the experimental statistics of centripetal MTOC motility.
Models
We have developed two kinds of models, phenomenological and mechanistic, to address both the general principles and the specific mechanisms of the centripetal convergence of small MT asters (MTOCs). The models are:
1. a 2D random walk with drift (RWD) model, and
2. a 2D MT-motor model.
The outputs of both models are compared to 2D experimental motility measures of MTOCs from mouse oocyte meiosis I reported previously by Schuh and Ellenberg [9]. The RWD model is used to optimize the functional form of the spatial drift field by comparison to experiment, while the MT-motor model is used to test molecular mechanisms which could generate the drift field. Mechanisms that have been previously hypothesized to drive asters to the center of a cell involve MT pushing and pulling [28,29]. A combination of pulling and pushing mechanisms has been experimentally tested in sand-dollar eggs [30] and C. elegans embryos [16,31], pulling alone in Xenopus egg extracts [5,27], and pushing in fibroblasts [32]. The MT-motor model is used to test whether the reported self-organized clustering of MTOCs in mouse oocytes [9] is sufficient to result in MTOC convergence, or whether additional mechanisms are necessary. These mechanisms are:
1. Self-organized mechanisms, not requiring explicit spatial localization:
   a. cortical pushing alone;
   b. clustering motors and cortical pushing.
2. Spatial gradients, requiring explicit spatial localization:
   a. MT dynamic instability gradient;
   b. dynein motor gradient.
3. Hybrid model: a combination of self-organized and gradient mechanisms.
An optimization routine has been developed to compare the outputs of these 'scenarios', i.e., combinations of these mechanisms, against previous experimental reports of mouse meiotic MTOC motility and distribution [9,33,34].
The choice of a 2D model for a 3D spherical oocyte is determined by the need to compare simulated motility outputs with the only available experimental time-series dataset of MTOC motility in mouse oocyte meiosis I, which is 2D over time [9] (Fig 1A). This compatibility of dimensions is essential since some of the RWD model input parameters are obtained from fits to experiment and MT-motor model mechanisms are optimized based on their ability to reproduce experimental statistics. Additionally, the choice of dimensionality of spatial models is considered to be determined by the balance between the need to capture the behavior of the system sufficiently and the clarity of the model [35].
Random walk in a drift field
The mouse oocyte is modeled in a 2D circular geometry of radius r cell , with concentric circular chromatin of radius r chr . The outer cytoplasmic region has a radius r cyto and r cell = r chr + r cyto (Fig 1B). MTOC asters are modeled as point particles, nucleated uniformly in the cytoplasmic space of the oocyte. The motion of the simulated particles is a mixture of random Brownian and directed centripetal motion, depending on the position of the MTOC in the oocyte ( Fig 1B), based on the previously observed 'stop and go' nature of the motility [9]. In this model, MTOCs are transported to the cell center and 'captured' by the chromatin once they reach the central chromatin mass. The process resembles models of biased 'search-and-capture' used to describe spindle assembly [7,36,37]. Here, we use the model to define the spatial properties of the bias of attraction by comparing the simulation outputs to spatial trends in MTOC motility seen in experiment.
Velocity. The Brownian component of the motion results from the radial MTs in an aster interacting with motors in all directions, producing uniform forces that fluctuate due to thermal noise and microtubule dynamics [27]. The equation of motion of the particle is given by its velocity (Ẋ) and net angle (θ_net). The magnitude of the velocity combines a diffusive and a directed term (Eq 1), where D_eff is the effective diffusion coefficient, v_eff is the effective velocity and t is time. The values of D_eff and v_eff are determined by fits to experimental data, as described in the section on data analysis, and are listed in Table 1.
Table 1. Model geometry and RWD model parameters (partial): oocyte radius (r_cell) 40 μm [9]; chromatin radius (r_chr) 10 μm [9]; total simulation time (T) 8000 s [8,9]; step time (δt) 0.1 s, optimized for numerical accuracy; number of MTOCs (N_p) 100 [8,9]; effective diffusion coefficient (D_eff) 0.006 μm²/s, fit to experiment (S1A and S1B Fig).

Direction. The net angle (θ_net) of motion, which governs the direction of motion, is determined by the diffusive angle θ_df and the directed angle θ_dr (Eq 2). The angle θ_df takes random values uniformly distributed between 0 and 2π, while θ_dr is determined by the angle of the vector XC between the particle (X) and the center (C) of the cell (Fig 1B).

Drift field. Sub-cellular transport has been successfully modeled in previous work by combining Brownian motion and directed transport [38]. In our model, the Brownian and directed components are spatially determined. Two radially symmetric fields of drift, one repulsive and one attractive, were modeled to represent the effective forces acting on the MTOCs. The attractive field responsible for centrosomal aster convergence towards chromatin in Xenopus oocytes [5] was modeled by a sigmoid gradient [7,27], and a similar function was tested for the model of mouse oocytes. Since the attractive (ϕ_a(X)) and repulsive (ϕ_r(X)) fields determine the nature of the motion depending on the position of the particle in space, the net velocity of a particle (Ẋ_net(x, y)) is given by a weighted combination of the attractive drift, the repulsive drift and the Brownian component (Eqs 3 and 4), where Ẋ_c(x, y) is the velocity at the cell cortex, determined by the weighted average of repulsion and Brownian motion. In this model, attraction originates from the central chromatin, since removal of the nucleus in previous experiments resulted in MTOCs losing their directionality of motion [9,39]. The repulsion from the cell cortex is based on models of MT-aster pushing at the cell cortex as a mechanism of centering in multiple cell types [28,29]. MTOCs undergo Brownian motion when neither attraction nor repulsion acts on them. The net angle θ_net(X) is the circular weighted average [40] of θ_df and θ_dr(XC), weighted similarly to the velocity magnitude (Eqs 3 and 4). The fields ϕ_a(X) and ϕ_r(X) are modeled in 2D as radially symmetric functions dependent solely on the distance r from chromatin. A modified sigmoid gradient was tested, based on previous experimental measurements of long-range gradients acting on MTs around chromosomes in Xenopus spindle assembly [5][6][7] and on theoretical models testing their form [25,26] and their role in centrosome directional motility [27].
The field ϕ, of either attraction (ϕ_a) or repulsion (ϕ_r), is a sigmoid function of the distance r (Eq 5), where for ϕ_a, r = 0 at the chromatin edge, and for ϕ_r, r = 0 at the cell boundary; r_1/2 is the distance at which the function takes its half-maximal value and 1/s is the steepness factor. While exponential gradients were also tested, they failed to reproduce the spatial trend in the experimental data and are not shown here. Representative gradients of ϕ_a and ϕ_r are plotted as scaled values between 0 and 1 (Fig 1C).
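As an illustration, the following minimal Python sketch builds the two drift fields and performs random-walk-with-drift updates for a point MTOC. The logistic form of the sigmoid, the value of v_eff and the way the attractive and repulsive weights are combined with the Brownian step are assumptions for illustration, not the published Eqs 1-5.

```python
import numpy as np

# Geometry and fitted parameters (Table 1); V_EFF is a placeholder, not a published value
R_CELL, R_CHR = 40.0, 10.0   # oocyte and chromatin radii (um)
D_EFF, V_EFF = 0.006, 0.01   # um^2/s and um/s
DT = 0.1                     # step time (s)

def sigmoid_field(r, r_half, s):
    """Assumed logistic gradient: ~1 at r = 0, decaying to 0 with half-maximum at r_half."""
    return 1.0 / (1.0 + np.exp((r - r_half) / s))

def rwd_step(pos, rng, r_half_a=15.0, s_a=1.0, r_half_r=3.0, s_r=1.0):
    """One update of a point MTOC at 2D position `pos` (um), combining diffusion and drift."""
    d = np.linalg.norm(pos)
    phi_a = sigmoid_field(max(d - R_CHR, 0.0), r_half_a, s_a)   # attraction toward chromatin
    phi_r = sigmoid_field(max(R_CELL - d, 0.0), r_half_r, s_r)  # repulsion from the cortex
    brownian = rng.normal(0.0, np.sqrt(2.0 * D_EFF * DT), size=2)
    to_center = -pos / d                                        # unit vector toward the center
    weight = min(phi_a + phi_r, 1.0)                            # assumed combination of the fields
    return pos + brownian + V_EFF * DT * weight * to_center

rng = np.random.default_rng(0)
pos = np.array([0.0, 30.0])
for step in range(100000):
    pos = rwd_step(pos, rng)
    if np.linalg.norm(pos) <= R_CHR:   # 'capture' once the MTOC reaches the chromatin edge
        break
```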
Mechano-chemical model of MT-motor interactions
A mechanistically detailed model was developed using Cytosim [41], a C++ Langevin dynamics simulation engine, by building on previously developed models of MT-mechanics [14,42], polymerization kinetics [7,25] and motor interactions [20-24, 26, 27, 43]. To test what minimal components will produce the observed centripetal motility of MTOC asters, a model of an MT stabilization gradient described previously [27] was developed by mapping the gradient shape optimized in the RWD model. This scenario was compared with scenarios where the gradient consisted of immobilized minus-ended molecular motors. These biased 'search-and-capture' scenarios were contrasted with self-organized scenarios which lacked any directional bias, i.e. diffusible motor-complexes and MTOC pushing from the cell boundary. The model is implemented in an oocyte cell geometry with MT polymerization dynamics and mechanics as well as discrete stochastic molecular motors that are either immobilized or diffusible. Cell geometry. As before the oocyte was modeled in a 2D circular geometry with a concentric chromatin region. The chromatin region here is not treated as an absorber in this model, unlike in the RWD model.
MT polymerization dynamics and mechanics. MTs were modeled as discrete polymers undergoing dynamic instability (stochastic switching between growth and shrinkage phases) based on the four-parameter model [44,45]. Since dynamic instability parameters for meiosis I mouse oocytes have not been reported, values were taken from measurements on centrosomal MTs in mitotic pig kidney cells, which have an average MT length of 3.2 μm [46], very similar to the average length of MTs in mouse meiotic MTOCs (<L> ~3 μm) [2,3,33]. The flexibility of the simulated MTs is defined by a combination of the bending modulus (κ), a typical cytoplasmic viscosity and thermal energy. The values of these parameters were taken from the literature (Table 2), from mouse oocytes or from related systems when mouse data were not available.
MTOCs. MTOCs were modeled as hollow circular structures of radius 0.2 μm, initialized randomly throughout the cytoplasmic region. Each MTOC had N MT number of MTs uniformly distributed radially around the centrosome, forming an aster. MTOCs can move throughout the cell interior.
Motor mechanics. Molecular motors were modeled as described previously [27,42] as discrete particles with properties taken from minus-end-directed, dynein-like motors moving at a velocity v_m. Motors bind stochastically to MTs at an attachment rate r_attach only if the distance between them is less than a threshold distance of attachment (d_attach). Detachment occurs at a rate r_detach. Motors bound to MTs behave like Hookean springs with spring constant k_mot, experiencing an extension force (f_ex) parallel to the filament if attached to a motile filament. The rate of detachment increases with extension as r_detach = r_detach^0 · e^(|f_ex|/f_0) (based on Kramers theory [47]), where r_detach^0 is the constant basal detachment rate and f_0 the stall force. A separate rate, r_detach^end, accounts for the rate at which motors detach from the end of a filament; here it is set to a value similar to r_detach^0. As f_ex increases towards f_0, motor step sizes are expected to change, referred to as the 'gear-like' behavior of dynein [48]. This is modeled by a piece-wise approximation, as previously described [27,49]. The parameters of motor mechanics, when available, are taken from experimental measurements of dynein (Table 2); in the absence of reported values, estimates were used. Immobilized motors bind to microtubules and generate forces on the MTs, causing their movement. Conversely, diffusible motor complexes with diffusion coefficient D_c are modeled as crosslinkers [23] which can bind two different microtubules. Since asters have their minus-ends at the center, cross-linking minus-end-directed motility of the motors results in coalescence of the MTOCs.
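The load dependence described above can be sketched numerically as follows. The exponential (Kramers/Bell-type) detachment law is taken from the text; the linear force-velocity fall-off used to mimic the 'gear-like' slowing near stall and the basal detachment rate are illustrative assumptions.

```python
import numpy as np

R_DETACH_0 = 1.0   # basal detachment rate (s^-1); assumed, not recoverable from the text
F_STALL    = 7.0   # stall force f_0 (pN), one of the two values explored in the study
V_MOTOR    = 2.0   # unloaded dynein speed (um/s), Table 2
K_MOT      = 0.1   # motor stiffness (pN/nm), Table 2

def detachment_rate(f_ex):
    """Kramers/Bell-type load-dependent detachment: r = r0 * exp(|f_ex| / f_0)."""
    return R_DETACH_0 * np.exp(abs(f_ex) / F_STALL)

def motor_speed(f_ex):
    """Crude piece-wise 'gear-like' approximation: speed falls linearly to zero at stall."""
    return V_MOTOR * max(0.0, 1.0 - abs(f_ex) / F_STALL)

def extension_force(stretch_nm):
    """Hookean restoring force for a motor stretched by `stretch_nm` nanometres."""
    return K_MOT * stretch_nm

# Example: a motor stretched by 30 nm experiences 3 pN and detaches ~1.5x faster
f = extension_force(30.0)
print(f, detachment_rate(f), motor_speed(f))
```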
Aster motility. MTOC asters move under a net force calculated by resolving the multiple forces acting on the MTs (using a finite difference scheme to solve the Langevin equation of motion in Cytosim [42]). These are: (a) f_bend, arising from the bending of growing MT filaments that contact the rigid cell boundary, scaled by the bending modulus of MTs (κ) [11,12,50]; (b) f_ex^i, the extension force generated when immobilized motors bound to MTs walk along the filament and are stretched by a length x, experiencing a restoring force f_ex^i = k_mot·x that results in filament motion [42]; (c) f_ex^c, the force from diffusible tetrameric motor complexes, which have two binding sites separated by a spring of stiffness k_mot; complexes simultaneously bound to two MTs are stretched, and the restoring spring force brings the two MTs closer, resulting in clustering [20,22,23]; (d) f_diff, the diffusive force, a random, normally distributed force [42]; and (e) f_drag, the drag force acting on the MTs, resulting from the translational and rotational drag on the individual points representing the filament [42], based on the cytoplasmic viscosity (η). Parameters are reported in Table 2.
Methods
Data analysis. Experimental 2D trajectories of multiple MTOCs in mouse oocyte meiosis I, described previously by Schuh and Ellenberg [9] (with Δt ≈ 3 to 4 minutes) and ranging between pre- and post-NEBD stages (data kindly shared by M. Schuh), were normalized by shifting the origin to the respective end-points, so that multiple experimental trajectories could be compared. For comparison with experiment, simulated trajectories were down-sampled to a time interval comparable to experiment (Δt = 3.5 minutes), and the following measures of motility were evaluated from both experimental and simulated data.

Motility statistics: the instantaneous velocity v = δL/δt and the directionality or tortuosity χ = d_net/L were estimated for each trajectory; a value of χ = 1 indicates directional motion, while χ ~ 0 indicates a random (tortuous) path. The time taken by an MTOC from its nucleation to its arrival at the edge of the central chromatin region is used to calculate a 'capture time' in both experiments and simulations. The mean square displacement (msd) of experimental and simulated trajectories was estimated as msd(δt) = ⟨|r(t + δt) − r(t)|²⟩, where r is the position, t is the time point and δt is the time increment. A sliding window of δt from the smallest simulated time step to 3/4 of the trajectory length was used [27]; this empirically determined cut-off reduces artifacts arising from the fact that the number of steps longer than this threshold is extremely small [58,59].

Table 2. MT-motor model parameters. The mechano-chemical parameters of motors were based on reported values for dynein, while motor densities were estimated. MT polymerization dynamics and mechanics are taken from the literature, and cell geometry parameters are identical to Table 1. Motor parameters (partial): motor stiffness (k_mot) 0.1 pN/nm [53]; motor (dynein) speed (v_m) 2 μm/s [54][55][56]; attachment rate (r_attach) 12 s⁻¹ [27]; basal detachment rate (r_detach^0).
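A short sketch of these per-trajectory motility statistics is given below, assuming a trajectory stored as an (N, 2) array of positions (in μm, relative to the cell center) sampled at a fixed interval; the function and variable names are illustrative.

```python
import numpy as np

def motility_stats(traj, dt, r_chr=10.0):
    """Mean speed, tortuosity and capture time for one MTOC trajectory (um, s)."""
    steps = np.diff(traj, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    v = step_len / dt                                   # instantaneous velocities
    path_len = step_len.sum()
    d_net = np.linalg.norm(traj[-1] - traj[0])
    chi = d_net / path_len if path_len > 0 else 0.0     # tortuosity: 1 = directed, ~0 = random
    inside = np.linalg.norm(traj, axis=1) <= r_chr      # distances measured from the cell center
    t_capture = dt * np.argmax(inside) if inside.any() else np.nan
    return v.mean(), chi, t_capture
```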
In order to estimate the input parameters for the RWD model (Eq 1), the msd profiles of the experimentally measured MTOCs were fit to a model of diffusion and directed transport (Eq 7). The msd output from the motor-MT interaction model (Models section), on the other hand, was quantified by fitting to an 'anomalous diffusion' model (Eq 8) to estimate the apparent diffusion coefficient (D0) and the anomaly parameter (α), as described previously [27] (S1 Text). All data fitting was performed using the trust-region-reflective least squares algorithm implemented in the Optimization Toolbox of MATLAB (Mathworks Inc., USA). Simulations. The RWD simulation time step was optimized for numerical stability by simulating a normal random walk (α = 1) with no boundary conditions and minimizing the error between the input and fitted diffusion coefficient (D). Simulations with typically 100 particles were run for 8·10³ seconds and took ~20 minutes. The explicit MT-motor model (Cytosim) with 80 MTOCs was run for 1.2·10³ seconds, typically requiring ~8 hours on a 12-core Intel Xeon machine with 16 GB RAM.
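The two msd fits can be sketched as follows; the functional forms msd = 4Dt + (vt)² and msd = 4D₀t^α are the standard 2D expressions for diffusion-plus-transport and anomalous diffusion, and are assumed here to correspond to Eqs 7 and 8, which are not reproduced in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def msd(traj, dt):
    """Mean square displacement of an (N, 2) trajectory for lags up to 3/4 of its length."""
    n = len(traj)
    lags = np.arange(1, int(0.75 * n))
    out = np.array([np.mean(np.sum((traj[k:] - traj[:-k]) ** 2, axis=1)) for k in lags])
    return lags * dt, out

def msd_drift(t, D, v):          # assumed form of Eq 7 (2D diffusion + transport)
    return 4.0 * D * t + (v * t) ** 2

def msd_anomalous(t, D0, alpha): # assumed form of Eq 8 (2D anomalous diffusion)
    return 4.0 * D0 * t ** alpha

# Hypothetical trajectory sampled every 3.5 min, positions in um
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(0, 0.3, size=(60, 2)), axis=0)
t, m = msd(traj, dt=210.0)

(D_eff, v_eff), _ = curve_fit(msd_drift, t, m, p0=[0.005, 1e-4])
(D0, alpha), _ = curve_fit(msd_anomalous, t, m, p0=[0.005, 1.0])
print(D_eff, v_eff, D0, alpha)
```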
Model optimization to experiment. The parameters of the model were optimized by a rank minimization method based on a modified, weighted root mean square error (ε) of simulations compared to experiment. Only one published dataset, by Schuh and Ellenberg [9], has reported detailed XY-over-time trajectories of MTOC centering motility during mouse oocyte maturation, so quantitative model optimization was performed on this dataset (kindly provided by M. Schuh). The error ε(k) for a parameter set k was estimated as a weighted root mean square deviation between the i-th experimentally measured (e_i) and simulated (s_i) values. The weight w_i was assigned depending on whether the simulated value was within one σ of the average experimental measure or outside that range, with w_m = 2, n_i the number of experimental data points for the i-th value and n_max the maximal number of data points in experiment. This weighting scheme allows us to account for the increased uncertainty of experimental measurements with fewer data points. The choice of w_m was set empirically to 2 to ensure a high penalty (since n_i/n_max ≤ 1). For each parameter set, the errors ε of multiple variables v were ranked (R_v(k)); the directionality (χ) distribution with distance (r) and the frequency distribution of capture times (t_c) were used to calculate the errors. The sum of ranks, R_s = Σ_v R_v(k), was minimized to obtain the parameter set referred to as optimal.
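The optimization loop can be sketched as below. The exact weighted-error formula is not fully recoverable from the text, so the weight assignment (1 inside one σ, a penalty proportional to w_m·n_max/n_i outside) is an assumption consistent with the description; the rank-sum step follows the text directly.

```python
import numpy as np

W_M = 2.0  # penalty factor from the source; the exact weight formula is assumed here

def weighted_rmse(exp_vals, exp_sigma, sim_vals, n_points, n_max):
    """Weighted RMS error between experimental and simulated values (assumed weighting)."""
    e, s = np.asarray(exp_vals), np.asarray(sim_vals)
    inside = np.abs(e - s) <= np.asarray(exp_sigma)
    w = np.where(inside, 1.0, W_M * n_max / np.asarray(n_points))  # heavier penalty outside 1 sigma
    return np.sqrt(np.sum(w * (e - s) ** 2) / len(e))

def rank_minimize(param_sets, errors_per_variable):
    """Sum the per-variable ranks of each parameter set and return the best (lowest R_s)."""
    errs = np.asarray(errors_per_variable)          # shape: (n_param_sets, n_variables)
    ranks = errs.argsort(axis=0).argsort(axis=0)    # rank of each parameter set per variable
    r_sum = ranks.sum(axis=1)
    return param_sets[int(r_sum.argmin())], r_sum
```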
RWD simulated MTOC motility compared to experiment
The experimental trajectories of MTOCs show a distinct centripetal motion, as seen in the time-projected trajectories (Fig 1A). The input parameters for diffusive and directed motion in the RWD model (Fig 1B) were obtained by fitting Eq 7 to the experimental msd profiles (S1A and S1B Fig). The outputs of the optimized gradient result in XY trajectories which are directed inwards at the cell boundary, random in the mid-zone and directed again closer to chromatin (Fig 1D), qualitatively comparable to experiment (Fig 1A). Quantitative comparison of the experimental and simulated χ profiles reflects this trend in motility: particles at the cell boundary and near chromatin are more directed than those in the mid-zone (Fig 1E). The simulated capture time distribution also matches the experiment (Fig 1F). In order to understand the mechanism underlying the observed experimental motility, the trajectories were further analyzed for their time dependence.
Pushing and pulling profiles in MTOCs transport
The experimentally measured MTOCs have heterogeneous distance-time profiles, with some MTOCs moving rapidly inward in < 40 min, while others undergo a delayed (> 40 min) inward movement (Fig 2A). The optimized RWD model profiles qualitatively match those from experiment. In previous work, distance-time plots with a sigmoid profile have been interpreted to mean that pulling forces are at work, while a parabolic profile has been interpreted to mean that pushing is at play [15]. Here, we use a fit function with three parameters: n, a measure of the shape of the profile (n > 1: sigmoid; n ≤ 1: parabolic); T_half, the time at which the distance travelled is half-maximal; and d_max, the maximal distance travelled. The experimental and RWD simulation distance-time plots were fit to obtain d_max, T_half and n.
A value of n > 1 is taken to indicate pulling, while n ≤ 1 is taken to indicate pushing. Representative fits to data from experiment (Fig 2C) and simulation (Fig 2E) are shown. The n value from simulations is higher in the mid-cell region than near chromatin or at the cell boundary. This can be understood in terms of the sharp transition of the attractive gradient of drift (ϕ_a) at r ≈ 15 μm, resembling a 'pulling' process. The phenomenological model does not allow an interpretation of the origin of the pushing or pulling forces. We therefore proceeded to add detailed molecular-motor and MT polymerization dynamics to the model, to make experimentally testable predictions about the system.
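The three-parameter fit used to classify pulling (n > 1) versus pushing (n ≤ 1) can be sketched with a Hill-type function d(t) = d_max·t^n/(t^n + T_half^n); this functional form is an assumption consistent with the stated parameter definitions, not necessarily the exact equation used in the source.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_distance(t, d_max, t_half, n):
    """Assumed sigmoidal/parabolic distance-time profile with half-maximal time t_half."""
    return d_max * t ** n / (t ** n + t_half ** n)

def classify_profile(t, d):
    """Fit a distance-time profile and label it as 'pulling' (n > 1) or 'pushing' (n <= 1)."""
    (d_max, t_half, n), _ = curve_fit(hill_distance, t, d,
                                      p0=[d.max(), t.mean(), 1.0], maxfev=10000)
    return ("pulling" if n > 1.0 else "pushing"), d_max, t_half, n

# Hypothetical example: a sigmoid-like inward run over ~40 minutes
t = np.linspace(1, 40, 40)   # minutes
d = hill_distance(t, 20.0, 15.0, 3.0) + np.random.default_rng(2).normal(0, 0.3, t.size)
print(classify_profile(t, d))
```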
Mechanistic models of MTOC centering: Gradient and Self-Organized Models
The RWD model predicts that two drift fields, attractive from the center and repulsive from the cell boundary, are required to reproduce the experimental statistics. Here, we proceed to test molecular mechanisms which combine molecular motors, MT dynamics and the mechanics of MT-membrane interactions, with the aim of developing a mechano-chemical understanding of the effective drift fields. We categorize these mechanisms into two types:
1. self-organized, and
2. gradient-based.
We then systematically evaluate the plausible mechanisms for their ability to result in MTOCs finding the center of the oocyte within the experimentally observed time-scale (~20 min).
(i) Self-organized mechanisms. (a) Cortical pushing. In somatic cells, MT asters find the center of the cell entirely through pushing interactions at the cell cortex [32]. We tested whether pushing from the cell boundary alone could result in MTOC centering, while maintaining uniform MT polymerization and immobilized dynein-like motors (Fig 3A). MTOCs at the start of the simulation are localized randomly throughout the oocyte (Fig 3B). However, even after 20 minutes, the asters did not move centripetally (Fig 3C, S1 Video).
(b) Clustering motors and cortical pushing. Based on the self-organized centripetal motility of MTOCs observed experimentally in mouse oocytes, clustering motor complexes had been proposed to be the major driver of MTOC motility to the center of the cell [9]. We therefore modeled diffusible minus-end-directed (dynein-like) motor complexes distributed uniformly in the cytoplasmic region (Fig 3D and 3E). These complexes couple two nearby MT filaments and, as a result of their minus-end-directed motility, cause clustering of the asters (since asters have their minus-ends at the MTOC center). In this scenario, the dynamic instability parameters were uniform and no immobilized motors were modeled. The complexes diffuse through the whole cell, including the chromatin region, within ≈30 s, mimicking the start of NEBD (S2 Video). However, even after 20 min of simulation, only a small fraction of the simulated MTOCs are inside the chromatin region (Fig 3F, Table 3).
(ii) Gradient-based mechanisms. (a) MT dynamic instability gradient. Based on evidence of a gradient of dynamic instability in multiple cell types [5,7,10,36], we tested a gradient of f cat and f res as a mechanism for centering of asters, while maintaining a uniform surface-immobilized motor distribution ( Fig 3G) and random MTOC nucleation (Fig 3H). However, this too did not result in a perceptible increase in accumulation of MTOCs at the chromatin center at the end of 20 minutes (Fig 3I, Table 3, S3 Video).
(b) Dynein motor gradient. It was only when motors were localized in a chromatin-centered sigmoid gradient and dynamic instability parameters were homogeneous (Fig 3J), did the randomly nucleated MTOCs (Fig 3K) dramatically converge to the center (Fig 3L, S4 Video).
Thus, a minimal model of asymmetric pulling forces resulting from a gradient of immobilized motors can move MTOCs to the center of the cell in a time-scale comparable to experiments (*20 min). However, a quantitative comparison of the simulation statistics with experiment is necessary to understand the critical parameters in this model.
Model sensitivity to stall-force and density of motors and MTs per aster
In the previous section we saw that, qualitatively, self-organized mechanisms of MTOC centering could not drive centripetal motility. The spatially binned directionality (χ) of simulated MTOCs in the absence of any gradient further quantifies this (Fig 4A). Neither tetrameric motor complexes nor uniform surface-immobilized motors with pushing at the cell boundary produce a trend in χ comparable to experiment. Cross-linking by motor complexes of different stall forces (f_0 = 2 and 7 pN) and densities (N_m^c = 10³ and 10⁴ motors/oocyte) was tested, and higher stall forces with high densities result in high values of χ throughout the cell (> 0.5). A directional bias in the form of a field of f_cat and f_res (based on Eq 5), resulting in asymmetric MT lengths, also fails to reproduce the directionality trends (Fig 4B). Confirming our qualitative observations from the simulation visualization, only a gradient of motors (f_0 = 7 pN, N_m^i = 10³) can reproduce most of the trend in directionality as a function of distance (Fig 4C and 4D).
In the absence of experimental estimates of the number of motors and their stall forces in meiotic mouse oocytes, we explore two extreme values of f_0 (2 and 7 pN) reported in the literature for dynein and scan N^i_m over three orders of magnitude. We find that a high density (N^i_m = 10^4 motors/oocyte) of weak motors (f_0 = 2 pN) (Fig 4C) and a lower density (N^i_m = 10^3 motors/oocyte) of strong motors (f_0 = 7 pN) (Fig 4D) can both reproduce experimental profiles of directionality. The proportion of MTOCs captured at the chromatin boundary was evaluated by following the distance of MTOC centers as they entered the chromatin mass. Strikingly, the proportion of MTOCs captured in the first 20 minutes of simulation with a gradient of immobilized motors with f_0 = 7 pN and N^i_m = 10^3 most closely matched experiment (Table 3). However, the mean velocity values from simulations were insensitive to f_0 (2 or 7 pN) and to motor densities over the range N_m = 10^2 to 10^4 motors/oocyte (Table 4).
Additionally, to test how the motility was affected by the total number of MTs per aster in the motor-gradient scenario, we examined the χ and <v> of simulated MTOCs initialized at a distance of 15 μm from the chromatin edge. We expect these asters to experience the maximal force asymmetry, as they are at the lower end of the motor gradient. We find that directionality χ continues to increase with increasing motor density (N^i_m), but increasing N_MT per aster appears to rapidly saturate the χ value for any given N^i_m, for both f_0 = 2 and 7 pN (Fig 5A and 5B). Increasing N^i_m leads to a marginal increase in the mean velocity (<v>) for a fixed N_MT value. However, increasing the number of MTs per aster, to our surprise, does not affect <v> (Fig 5C and 5D). We interpret this to be the result of the uniform radial distribution of MTs in the aster and the tug-of-war arising from it.
(Table 3 column headings: Model, Motor type, Localization, f_0 (pN), N_m, % captured.)
In order to further understand the role of motor-density changes and their effect on MTOC motility, we evaluate the random walk statistics of the motility and compare them to experiment.
A hybrid model of a motor-gradient and self-organized clustering enhances MTOC centering
Simulations of multiple gradient forms, motor types and densities in this work suggest a motor gradient as a minimal model for the centering motion of MTOCs in mouse oocytes, although it does not capture the full distance dependence of directionality (Fig 4A), due to MTOC aggregation. We hypothesized that a hybrid mechanism combining an immobilized motor gradient with diffusible clustering motor complexes might reproduce the complete experimental distance-dependent directionality profile. Visually, the MTOCs appear to find the center more efficiently (Fig 6A and 6B). Here, 10^4 clustering motors per oocyte with stall force 2 pN were combined with 10^4 immobilized motors per oocyte, also with stall force 2 pN. The effect of increasing tetrameric complex density, while keeping the density of immobilized motors constant, was systematically screened in terms of the directionality measure. The radial distance profile of χ increases in the mid-range when the clustering complex density is increased from 10^3 to 10^5 motors/oocyte (Fig 6C). Either 'weak' diffusible dynein-like complexes (f_0 = 2 pN) at a high density (N^c_m ~ 10^5) or 'strong' complexes at a lower density improve the agreement (Fig 6C and 6D). This result suggests that while clustering alone cannot center MTOCs in oocytes, it improves the fit to experiment.
Thus we believe our model reproduces both the qualitative and quantitative nature of the MTOC centering motility and provides novel insights into the sensitivity of this model. It demonstrates how a combination of directional cues and self-organized clustering can center small MT asters in a large cell such as an oocyte.
Discussion
Meiotic spindle assembly in mammalian cells in the absence of centrosomes involves the nucleation of MTOCs in the cytoplasm and their coalescence and sorting around chromosomes, resulting in bipolar spindle assembly. The nucleation of MTOCs in the cytoplasm and the centripetal motility of small radial MT asters have been previously observed during the first meiotic division in mouse oocytes [3]. Similar convergence was also observed in Drosophila oocytes [60], and quantitative analysis and model calculations were used to infer that directed transport of the MTOCs was essential for their coalescence [61]. In Drosophila, the unconventional kinesin Ncd (minus-end directed) was implicated in this inward motility [62]. In mouse oocyte MTOC coalescence, cytoplasmic dynein or comparable minus-end directed motors have been implicated in force generation for the centripetal movement [9]. Here, we analyze experimental data from the early stages of the meiotic maturation of mouse oocytes. The experimental analysis is used to constrain a field of spatially inhomogeneous drift in a random walk. The optimized model of drift is then used to model gradients of the motor dynein and of MT dynamic instability. By comparing the simulation outputs with experiments, we arrive at a minimal model, a gradient of motors, essential to reproduce the experimentally observed statistics.
The experimental statistics of MTOC motility from mouse oocytes are mostly representative of post-NEBD dynamics, allowing us to make the simplifying assumption that the nuclear envelope plays no explicit role in the process. The movement of these radial MT arrays appears visually to have both an effectively diffusive and a transport component (Fig 1A). The frequency distribution of velocity from experiment is long-tailed and fit by a lognormal function (S4 Fig), suggesting anomalous super-diffusive transport. While the motility had previously been described as 'stop-and-go' [9], we find little evidence of 'stop' or pause events. Our quantification of MTOC motility in cells demonstrates that the motility is qualitatively comparable to previous estimates of centrosomal aster movement observed in C. elegans fertilization [15], MTOCs in Drosophila oocyte meiosis I [61] and centrosomal asters in Xenopus meiotic extracts [5,27]. This suggests a common theme underlying the transport of radial MT arrays in meiotic spindle assembly.
The distance travelled, or displacement from the start-point plotted over time, of MT arrays has been used previously [15] to distinguish between "pulling" and "pushing" modes of motility of centrosomal MT arrays during pronuclear migration. In the case of mouse meiotic MTOCs, these plots also separate the trajectories into two sub-populations: those which show an initial rapid rise followed by capture (< 40 min), and those which do not move much for a long time (> 40 min) and are captured at chromatin only after this delay (Fig 2A). We further improve on the work of Kimura et al. (2005) [15] by fitting the data with a saturation model with cooperativity (Eq 11), and use it quantitatively to distinguish between pushing (n ~ 1) and pulling (n > 1) mechanisms (Fig 2C). Every trajectory from experiment was fit to obtain a profile shape (sigmoid or parabolic) term n (S3 Fig), and comparing n from simulation and experiment shows a pushing mode of transport (n ~ 1) close to chromatin and the cell boundary, while MTOCs in the mid-range are pulled (n > 1) (Fig 2E). The distance travelled plots from the mechanistic motor-gradient model, with MT asters sorted by nucleation distance, show an increase after a short delay near chromatin (d_n = 0 to 10 μm) due to pulling (S5A Fig). Those in the intermediate range (d_n = 10 to 20 μm) have a longer delay and then appear to be pulled (S5B Fig). Close to the cell boundary (d_n > 20 μm) they increase rapidly and then saturate, due to MT pushing at the membrane (S5C Fig). Thus, a model of a gradient of chromatin-centered motors and a rigid cortex can reproduce the qualitative differences observed in the distance-time profiles of MTOC transport. However, clear 'pulling' (n > 1) near chromatin might have been missed in these experiments, since the MTOC identity is lost as it nears the chromatin. While cell cortex-based pushing mechanisms have been demonstrated to generate centripetal movement [12,50], the nature and localization of minus-end directed pulling motors in the oocyte remain to be determined. Recent evidence from mouse oocytes during MTOC fragmentation [63] and meiotic maturation [34] suggests that dynein anchored at the nuclear envelope might influence both processes. A careful study of dynein localization dynamics in this and related systems could be used to test our model prediction.
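Eq 11 itself is not reproduced in this excerpt; assuming a Hill-type saturation d(t) = d_max t^n / (t_1/2^n + t^n), the per-trajectory cooperativity n used to separate pushing-like (n ≈ 1) from pulling-like (n > 1) profiles could be extracted as in the following sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_saturation(t, d_max, t_half, n):
    """Assumed form of the cooperative saturation model (Eq 11 not shown here):
    distance travelled d(t) rises and saturates at d_max; n > 1 gives a sigmoid."""
    return d_max * t**n / (t_half**n + t**n)

def classify_trajectory(t, d):
    """Fit one distance-travelled trajectory and return the cooperativity n."""
    p0 = (d.max(), np.median(t), 1.0)
    popt, _ = curve_fit(hill_saturation, t, d, p0=p0, maxfev=10000)
    return popt[2]   # n ~ 1: pushing-like, n > 1: pulling-like
```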
A 'tug-of-war' in the transport of anti-parallel MTs moving on a surface coated with motors arises from the action of the same species of motor acting against each other, with small asymmetries in length amplifying the velocity of transport [64]. Two-fold length asymmetries (10 to 20 μm) have previously been observed in centrosomal asters [5,7], and simulations of such asymmetric asters on sheets of dynein motors resulted in aster transport towards chromosomes [27]. However, the mouse MTOC radius is in the range of 2 to 3 μm and no appreciable asymmetry of lengths has been reported [9]. In this work, neither homogeneous motor distributions (S1 Video), nor tetrameric motor complexes (S2 Video), nor a dynamic instability gradient (S3 Video) result in convergence to chromatin on the ~20 min time scale seen in experiment. Taken together, a gradient of motors is necessary in a minimal model with a mouse oocyte geometry for MTOCs to converge to the chromatin center (S4 and S5 Videos). This suggests that aster motility in meiosis I of mouse oocytes differs from that in meiosis II of Xenopus oocytes. In the latter, length asymmetry can result in directional transport; however, in mouse oocytes the MTOC asters are ~4-fold shorter in MT length, resulting in smaller forces being generated. The motor numbers would then be insufficient to resolve the tug-of-war in mouse oocytes through simple ~2-fold changes in MT length, as seen in Xenopus oocytes. In contrast, the gradient of immobilized motors results in a comparable spatial distribution of velocity for minus-end directed motors with f_0 = 2 pN (S6A Fig) and f_0 = 7 pN (S6B Fig). Spatial gradients of RanGTP [6,65] are thought to direct centrosomal MT asters to chromatin during spindle assembly in meiotic extracts [5][6][7]. However, the outcome of our simulations predicts that in meiotic MTOC motility, motor gradients are necessary. Such a gradient could arise from self-organized diffusion and attachment of minus-end directed motors on MTs nucleated at the chromatin periphery. In addition, the motors could be immobilized on intracellular organelles, as shown for centrosomal aster centration in C. elegans [16]. It remains to be seen which of these specific mechanisms results in the chromatin-centered gradient of anchored motors. Some evidence from experiments in mouse oocytes supports our motor gradient model: pre-NEBD maturing oocytes showed a high concentration of dynein motors around the nucleus [66]. More recently, evidence of dynein localized at the nuclear periphery fragmenting MTOCs [63] suggests a validation of our model of a chromatin-centered gradient of dynein-like motors. Further testing is, however, required to examine its dynamics.
The role of chromosomes in the centering of MTOCs in maturing mouse oocytes during meiosis I spindle assembly had been tested by experimentally removing the nucleus [9,39]. The MTOCs of such oocytes have a scattered appearance and fail to assemble bipolar spindles. While chromosomes are considered essential for meiotic spindle assembly [67], we hypothesize that they also serve as the primary guidance cue for centripetal MTOC motility, in a manner comparable to other aster-centering systems [5,27,36]. To test this, the spatial organization of MTOCs in enucleated oocytes was measured using published experimental data reported by Schuh and Ellenberg [9]. The image analysis of MTOC positions from experiment (S7A and S7B Fig) yielded a radial density distribution around the cell center (S7C Fig). This distribution is comparable to a simulation of randomly localized MTOCs, suggesting that the chromatin serves as an important guidance cue for the centripetal motility of MTOCs, and in its absence the directional motility is lost. The molecular mechanism which converts the positional information of the chromatin into a gradient of molecular motors still remains to be understood.
The MSD profiles of MTOCs from simulations that were close to chromatin transition from super-diffusive to sub-diffusive motility (S1 Text, S8 Fig), as estimated by α, the measure of anomalous diffusion (S8F Fig). This transition results purely from changing the single-dynein stall force from that reported for bovine brain cytoplasmic dynein (f_0 = 2 pN) [48] to the yeast cytoplasmic dynein value (f_0 = 7 pN) [56]. The MTOC convergence in the 2 pN stall force calculations is slow (S4 Video) compared to when the motors have a higher stall force (7 pN) (S5 Video) at the same motor density (N^i_m = 10^3 motors/oocyte). Once the MTOCs reach the center of the motor gradient, they undergo effectively sub-diffusive movement, unable to generate enough force asymmetry to escape. With an additional centrosomal fluorescence label in mouse MTOC motility experiments, the intra-nuclear motility and reorganization of MTOCs could be studied in future. This will also help to better understand the subsequent bipolar spindle formation and its connection with chromosome biorientation in meiosis I of mouse oocytes [68,69].
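A minimal sketch of how the anomalous-diffusion exponent α can be estimated from a track, assuming the usual power-law fit MSD(τ) ∝ τ^α (the paper's exact estimator may differ):

```python
import numpy as np

def msd_alpha(track, dt):
    """Estimate the anomalous-diffusion exponent alpha from a single 2D track
    (N, 2) by fitting MSD(tau) ~ tau**alpha on a log-log scale. alpha > 1
    indicates super-diffusive, alpha < 1 sub-diffusive motion."""
    n = len(track)
    lags = np.arange(1, n // 4)            # restrict to short lags
    msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                    for lag in lags])
    slope, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return slope
```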
The sensitivity analysis of the gradient model demonstrates that the MT number per aster does not affect the velocity and directionality of the asters. On the other hand, the model is sensitive to motor density. A 2D area density between N^i_m = 10^3 and 10^4 motors/oocyte corresponds to a physical density of ~0.2 to 2 motors/μm^2 for an oocyte of radius 40 μm. The addition of motor complexes, which cross-link MTs and walk to the minus-ends of the respective MTs, also changes the dynamics of MTOC transport. At high densities of motor complexes, the asters coalesce into a few (~10) clusters and centripetal convergence results. Yet the directionality profiles are qualitatively different from the experimental measures (Fig 6C). While the densities of both immobilized and diffusible motors tested are mean area densities, over-expression of dynein motor proteins could serve as an experimental test of our model predictions.
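The conversion from motors per oocyte to area density quoted above can be checked directly (the conversion below is ours, using the 40 μm radius stated in the text):

```python
import numpy as np

R = 40.0                                   # oocyte radius, um
for n_motors in (1e3, 1e4):
    print(n_motors, n_motors / (np.pi * R**2), "motors/um^2")
# ~0.2 and ~2 motors/um^2, matching the 0.2-2 motors/um^2 range in the text
```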
A limitation of the models described here is that both the phenomenological and mechanistic models assume a 2D circular geometry, while the mouse oocyte is a 3D sphere. Asters finding the cell center in 3D is expected to take longer due to the role of dimensionality in biological search problems [70], and further parameter optimization would be required. Our effort here serves to reduce the search for 'scenarios', i.e. combinations of mechanisms such as motors, MT dynamic instability, gradients and clustering motors, by optimizing simulations to experimental data. In future, a full 3D model would then only require parameter optimization to a 3D experimental dataset. At the other extreme, a 1D model could have further simplified the geometry based on the radial symmetry of the system, as has been assumed in the 'slide and cluster' model of linear MT filaments in spindle assembly [71]. However, this would ignore the orthogonal interactions between MTs of multiple asters, which are only possible in an explicit 2D geometry. Thus, our choice of spatial dimensions is driven by an attempt to capture the important qualitative behavior of the system, while keeping the model simple enough for clarity and calculation speed. While 3D simulations of cellular processes are more 'complete', it has been suggested that the choice of spatial dimensions should simplify the system while sufficiently capturing the spatial dynamics [35,72]. A further limitation of both the phenomenological RWD and the MT-motor model is the availability of only one time-series dataset. In future, additional experiments with mouse oocytes could help test the model predictions. While this model has been developed to test the generality of mechanisms of mouse MTOC motility, it would be useful in future to explore its relevance to other aster-centering systems such as C. elegans [15], Xenopus [27], Drosophila [73] and sea urchin [74]. Additional mechanisms, such as a contractile actin network, have been shown to drive spindle assembly in meiosis I of starfish oocytes [18], but are ignored in this model since, in experiments, they do not affect the process [9]. Additionally, the potential role of MT-dependent MT nucleation [75] remains to be explored as a self-organized mechanism. Testing alternative cellular geometries and additional mechanisms could in future result in a more complete exploration of MT aster centripetal transport and its physical constraints. This in turn might help us better understand the physical basis of the evolutionary diversity of aster-centering mechanisms.
The models of MTOC centripetal motility explored in this study provide insights into several aspects of the early stages of self-organized spindle assembly. For instance, the RWD model demonstrates that for a cell of diameter ~80 μm, a simple random-walk 'search-and-capture' strategy of MTOC particles at the chromosomes is insufficient within the time-scale of spindle assembly in meiosis (20-30 min) [9], and a long-range bias in motility is essential. The mechanistic model demonstrates that a gradient of motors is minimally capable of reproducing the dynamics. An increase in MT mass, whether through increasing MT lengths (stabilization) [5,6] or number (nucleation) [75], appears insufficient to affect the dynamics of motility. We also find that dynein-like clustering motor complexes, which diffuse and cross-link MTs, at high density result in an aggregation of MTOC asters in the cell interior. Such a self-organized mechanism, however, is inefficient, producing MTOC capture percentages below the experimental values (Table 3) in comparable time-scales (~20 min). Additionally, MT density per aster does not affect MTOC centripetal motion, but the density of immobilized motors (N^i_m) localized in a gradient does change the dynamics. Such parameters are likely to be adjusted by cells, depending on the specific geometry and time constraints. The results of this study could be used to predict the nature of MTOC centripetal transport when the ratio of aster to cell size is comparable to that of the mouse oocyte. A more complete picture of the evolutionary constraints on mechanisms driving radial MT arrays to find the center of a cell will, however, require further quantitative studies of acentrosomal spindle assembly in more organisms, as well as calculations that explore a greater parameter space.
The model presented here predicts the functional form of a drift field in agreement with experimental data of MTOC motility. A mechanistic model of the gradient with immobilized minus-end directed motors minimally reproduces the experimental dynamics and allows us to test the effect of single-molecule characteristics of motors on collective transport statistics. A hybrid model with a gradient of immobilized motors and diffusible clustering dynein-like complexes, combined with cortical pushing (Fig 7), satisfies all experimental measures available. Measuring the localization, density and mobility of dyneins in the mouse oocyte during meiosis I would be a useful test of the model predictions. While the model is fit to spindle assembly during meiosis I in mouse oocytes, it also predicts the design constraints, in terms of MT lengths and the localization, mobility and density of molecular motors, when MT radial arrays are required to search space and undergo capture at the cell center.
(Supplementary figure legend, RWD gradient optimization.) All parameter sets (k) which fell within the top 15% of the sum-of-scores ranking are plotted with the component rank for capture time (R_t(k)) (grey) and directionality (R_χ(k)) (blue). The parameter sets (k) of the top-ten ranks are listed in S1 Table with the values of r_1/2 and s for ϕ_a and ϕ_r. For a representative subset of the optimization scheme, the error (ε) in (B) χ and (C) t_c was evaluated keeping the attractive gradient constant (r^a_1/2 = 10 μm and s^a = 1) and varying the repulsive gradient parameters r^r_1/2 (y-axis) and s^r (x-axis). The colorbar indicates the value of ε.
(S2 Video legend.) MTOCs (grey) were simulated in the cytoplasmic space of the 2D oocyte geometry marked by the outer cell boundary (outer blue circle) and inner chromatin region (blue circle), in the presence of minus-end directed motor complexes (purple dots) initialized in the cytoplasm with density N^c_m = 10^3 motors/oocyte. These diffusible motors with stall force f_0 = 7 pN can cross-link MTs and cause clustering by walking towards the minus-ends of the neighboring asters that they cross-link. (MP4)
S3 Video. MTOC motility in a gradient of MT dynamic instability. 80 MTOCs (grey) were simulated in the 2D oocyte geometry with the outer cell boundary (outer blue circle) and N^i_m = 10^3 uniformly distributed surface-immobilized motors with f_0 = 7 pN. The f_cat and f_res parameters were distributed in a sigmoid gradient (Fig 3G) originating from the center of the chromatin region (inner blue circle). (MP4)
S4 Video. MTOC motility in a gradient of weak motors. 80 MTOCs (grey) were simulated in the 2D oocyte geometry with the outer cell boundary (outer blue circle) and N^i_m = 10^3 surface-immobilized motors with f_0 = 2 pN distributed in a sigmoid gradient (Fig 3J) originating from the center of the chromatin region (inner blue circle), and homogeneous dynamic instability. (MP4)
S5 Video. MTOC motility in a gradient of strong motors. 80 MTOCs (grey) were simulated in the 2D oocyte geometry with the outer cell boundary (outer blue circle) and N^i_m = 10^3 surface-immobilized motors with f_0 = 7 pN, distributed in a sigmoid gradient (Fig 3J) originating from the center of the chromatin region (inner blue circle). (MP4)
S6 Video. MTOC motility in a gradient of immobilized motors and diffusible motor-complexes.
80 MTOCs (grey) were simulated in the 2D oocyte geometry with the outer cell boundary (outer blue circle) and motors immobilized at a density of N^i_m = 10^3 motors/oocyte (green dots) in a sigmoid gradient originating from the center of the chromatin region (inner blue circle). The diffusible minus-end directed motor complexes, with N^c_m = 10^4 motors/oocyte (purple dots), bind to 2 MTs and walk simultaneously on them, generating a clustering force on the MTOC asters. For both kinds of motors, f_0 = 7 pN. (MP4) S1 Table. Optimized RWD gradient parameters. The top ten sum ranks are listed in ascending order, based on the ranks from the directionality and capture time error (ε(k)), with the corresponding parameters of the attractive (r^a_1/2 and s^a) and repulsive (r^r_1/2 and s^r) gradients (see | 12,269.6 | 2016-10-01T00:00:00.000 | [
"Biology",
"Physics"
] |
Analysis of the η′ photoproduction off the proton and preliminary beam asymmetry results at the GRAAL experiment
Differential and total cross section measurements of η′ photoproduction were published by the CLAS Collaboration (M. Dugger et al., Phys. Rev. Lett. 96, 062001 (2006) and M. Williams et al., Phys. Rev. C 80, 045213 (2009)) for center-of-mass energies from near threshold up to 2.84 GeV, and by the CB-ELSA/TAPS Collaboration (V. Crede et al., Phys. Rev. C 80, 055202 (2009)) up to 2.36 GeV, including a precise threshold scan of the differential cross section in the 1446−1527.4 MeV γ beam energy range. However, cross section data alone are not sufficient to understand the role of the resonances involved in the process. Different theoretical works stressed the importance of also having polarization observables in order to resolve ambiguities in the choice of the parameters used in their models. We present the analysis of η′ photoproduction off the proton, identifying the meson via the γγ, π0π0η, and π+π−η decay modes using the GRAAL apparatus, and we show the preliminary GRAAL results on the beam asymmetry Σ from threshold (1.446 GeV) up to 1.5 GeV.
Introduction
The investigation of meson photoproduction off the nucleon with polarized photons is a powerful tool to identify the broad and widely overlapping baryon resonances not easily visible with differential cross section measurements only. Pseudo-scalar meson photoproduction is described in the models by 4 complex helicity amplitudes. Polarization observables (Σ, T, P) are sensitive, via interference between the complex helicity amplitudes, to small resonance contributions which remain hidden under dominant contributions in the differential cross section [1][2][3][4]. The detailed description of the photon-nucleon interaction requires a complete data set containing, at least, eight independent observables: the cross section, the three single polarization observables, and four double polarization observables. Existing analyses of these data indicate: contributions from a resonance near 1720 MeV and from the D13(1520) [7]; g_η′NN = 1.3−1.5, a value consistent with existing theoretical estimates [7]; above 2 GeV, where the process is dominated by ρ and ω exchange, dynamics of η′ photoproduction similar to those of η photoproduction [9]; and flat differential cross sections near the photoproduction threshold [9], suggesting the dominant role of the s-wave in this energy range. Different theoretical approaches developed to describe these data [10,11] stressed the importance of also having polarization observables, such as the beam asymmetry, to resolve the ambiguity in the parameters used in their models.
The GRAAL collaboration, thanks to the stable and high polarization degree of its γ beam and its 4π detector, has published precise single polarization observables for different meson photoproduction channels off the free and quasi-free proton [12][13][14][15][16][17].
In this work, we present the preliminary beam asymmetry of η′ photoproduction off the proton just above threshold at GRAAL.
GRAAL experimental set-up
The GRAAL experiment operated at the European Synchrotron Radiation Facility (ESRF) in Grenoble (France) until the end of 2008. The experiment consisted of a polarized γ-ray beam, an unpolarized liquid Hydrogen or Deuterium target and the 4π solid angle detector LAGRANγE (Large Acceptance GRaal-beam Apparatus for Nuclear γ Experiments).
The GRAAL γ-ray beam is produced by the backward scattering of laser photons on the relativistic electrons of 6.03 GeV energy circulating in the storage ring. This method was used for the first time on a storage ring for the Ladon beam at the Adone ring in Frascati [18], and it is able to produce polarized and tagged γ-ray beams with a high polarization degree and good energy resolution.
The beam polarization depends on the beam energy; it is very close to that of the laser photons (linear or circular) [19], and can be easily rotated or changed with conventional optical components by changing the polarization of the laser light. The correlation between photon energy and polarization is calculated with QED [20], and the polarization is about 96% in the investigated energy range above the η′ photoproduction threshold (1.446 GeV). With the 351 nm line of an Argon-ion laser, the maximum γ-ray energy obtainable is about 1500 MeV (with the multi-line far-UV it is about 1550 MeV) and the spectrum is almost flat over the whole range. The energy resolution of the tagged beam is 16 MeV (FWHM) over the entire spectrum. The energy of the γ-rays is provided by the tagging set-up, which is located inside the ESRF shielding, attached to the ESRF vacuum system. The electrons that scatter on laser photons producing the γ-rays lose a significant fraction of their energy, escape from the equilibrium orbit of the stored electrons, and are finally detected by the tagging system. The tagging system therefore measures the electron displacement from the equilibrium orbit. This displacement measures their energy loss in the scattering process and therefore provides a measure of the energy of the gamma-ray produced. The tagging system [12] consists of 10 plastic scintillators and a 128-channel solid state microstrip detector with a pitch of 300 μm. The detector is located inside a shielding box positioned in a modified section of the ring vacuum chamber. The shielding box is positioned at 10 mm from the circulating electron beam.
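For orientation, the quoted maximum γ-ray energy follows from standard inverse-Compton kinematics for head-on laser-electron collisions; the numbers below are a back-of-the-envelope check, not GRAAL software:

```python
# E_max = 4*gamma^2*E_laser / (1 + 4*gamma*E_laser/(m_e c^2))
m_e = 0.511e6            # electron mass, eV
E_e = 6.03e9             # ESRF electron energy, eV
E_laser = 1239.84 / 351  # eV, 351 nm Ar-ion line (h*c = 1239.84 eV*nm)
gamma = E_e / m_e
E_max = 4 * gamma**2 * E_laser / (1 + 4 * gamma * E_laser / m_e)
print(E_max / 1e9)       # ~1.5 GeV, consistent with the quoted maximum
```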
The detector covers the entire solid angle and is divided into three polar angle regions: central part (25° < θ ≤ 155°), forward (θ ≤ 25°), and backward (θ > 155°). In Fig. 1, we report the cross view of the central detectors and the target nose structure as drawn and used in our simulation.
A cylindrical liquid Hydrogen (or Deuterium) target is located on the beam and is coaxial with it. In Fig. 1, a detailed view of the target nose geometry and of the cell containing the liquid Hydrogen (or Deuterium) is shown.
The central part, 25° < θ ≤ 155°, is covered by two cylindrical multi-wire proportional chambers (MWPC), a Barrel made of 32 plastic scintillators and a BGO crystal ball made of 480 crystals, which is well suited for the detection of γ-rays of energy below 1.5 GeV. The two MWPCs, due to their length, can cover polar angles down to 16° (see Fig. 1). The chambers, the Barrel and the BGO are all coaxial with the beam and the target (see Fig. 1). The wire chambers detect and measure the positions and angles of the charged particles emitted from the target, while the scintillating Barrel measures their energy loss. The BGO ball detects charged and neutral particles, measuring their deposited energy. For neutral particles it provides a measurement of their angles (480 crystals: 15 in the θ direction and 32 in the φ direction). The BGO and Barrel information allow protons and charged pions to be distinguished by a cut on the ΔE_Barrel vs. E_BGO distribution.
The solid angle cone for θ ≤ 25° is covered by two planar MWPCs and two scintillator walls, a hodoscope and a shower detector, placed at about 3 and 3.3 metres from the target, respectively. The planar MWPCs are used to track the charged particles, measuring their polar and azimuthal angles. At the end of the beamline, two flux monitoring detectors are used.
Data analysis and results
The identification of η′ meson photoproduction via its γγ, π0π0η and π+π−η decay modes was performed thanks to the large number of quantities measured with the LAGRANγE (Large Acceptance GRaal-beam Apparatus for Nuclear γ Experiments) detector. We were able to identify protons and charged pions everywhere in the apparatus, and photons and neutrons in the forward direction θ < 25°; to measure the angles and energy (hodoscope wall) of protons, the angles and energy of photons in the BGO calorimeter, and the angles of charged pions with the MWPCs.
The preliminary event selection was realized by using the following conditions: i) at least two neutral particles detected in the BGO, in order to be able to reconstruct the invariant mass, and a proton identified in the forward direction θ < 25°; ii) the energy of the beam must exceed the η′ photoproduction threshold (E_γ ≥ 1.446 GeV); iii) the measured beam energy E_γ and proton polar angle θ_p of selected events must lie in the acceptance region shown in Fig. 2.
The distribution of E_γ vs. θ_p was produced with the event generator described in Ref. [23]. The distribution reflects the relation between the maximum polar angle of the proton and the energy available to the reaction in the case of two bodies in the final state. This cut essentially rejects all events in which the mass of the particle (or the sum of the particle masses) photoproduced off the proton is lower than the η′ mass (see Fig. 3).
The study of the kinematics of η′ photoproduction off the proton at 50 MeV above threshold shows that, in this energy interval, the maximum values of the polar angle and of the momentum-to-energy ratio of the proton do not exceed 16 degrees and 0.4, respectively. Therefore, we expect to detect in our investigation a relatively slow (non-relativistic) proton in the forward detectors. In such a case, the hodoscope wall is able to measure the energy of the proton, from time-of-flight information, with a resolution of about 1%, while the planar MWPCs measure the polar and azimuthal proton angles with accuracies of 1.5° and 2°, respectively. The resolution of the proton momentum estimated with simulation in the case of η′ photoproduction is about 2.5% for photon beam energies up to 1.6 GeV. The combination of the good measurement resolution of the proton four-momentum in the forward direction and of the incident photon beam, and the small width of the η′ meson (full width Γ = 0.199 MeV), makes the missing mass from the proton (γp → pX), evaluated in the hypothesis of a two-body reaction in the final state, the best quantity to identify the events coming from the γp → η′p reaction. The missing mass from the detected proton was successfully used also in the recent analysis of ω meson photoproduction performed by the same collaboration (see Ref. [22]). Moreover, the analysis of simulated samples shows the possibility to measure the missing mass of the proton with a narrow distribution around the η′ mass (RMS of about 5−6 MeV) by using the GRAAL apparatus for beam energies up to 1.55 GeV. In Fig. 3, panels a) and b), we present the missing mass of the detected proton, in the hypothesis of two particles in the reaction final state, obtained by applying condition i) (dash-dotted line), conditions i) and ii) (dashed line), and all conditions i)-iii) (solid line). The figure clearly shows the identification of the η and ω mesons, while the η′ contribution starts to appear in the spectrum only after the application of conditions i) and ii) (see dashed line in the figure). Finally, the solid line in Fig. 3(b) shows the η′ signal contribution (η′ mass = 0.95778 GeV) dominant over a small and flat background.
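A minimal sketch of the two-body missing-mass computation described above (not the GRAAL analysis code), assuming the target proton at rest and the beam along z:

```python
import numpy as np

def missing_mass(E_gamma, p_proton, theta_p, phi_p, m_p=0.93827):
    """Missing mass M_X of gamma p -> p X (GeV), from the beam energy and the
    measured proton momentum (GeV/c) and angles; the target proton is at rest."""
    E_p = np.sqrt(p_proton**2 + m_p**2)
    px = p_proton * np.sin(theta_p) * np.cos(phi_p)
    py = p_proton * np.sin(theta_p) * np.sin(phi_p)
    pz = p_proton * np.cos(theta_p)
    E_X = E_gamma + m_p - E_p                 # energy of the unobserved system
    qx, qy, qz = -px, -py, E_gamma - pz       # its three-momentum
    return np.sqrt(np.maximum(E_X**2 - (qx**2 + qy**2 + qz**2), 0.0))
```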
The simplest and cleanest channel for the LAGRANγE detector is the η′ decaying into γγ. In this case, an analysis similar to that already used and published by the GRAAL collaboration for π0 and η photoproduction off the free and quasi-free proton was applied (see Ref. [12,14,15,17]). Unfortunately, the relatively low branching ratio does not allow us to collect enough events to extract a stable beam asymmetry. For this reason, a new analysis devoted to the identification of the η′ via its decays into π0π0η and π+π−η was implemented. The proton missing mass of the events selected in the γγ analysis is reported in Fig. 4 a).
The π0π0η channel analysis was performed by requiring six neutral particles detected in the BGO, one proton in the forward direction and no other charged or neutral particle elsewhere. We reconstruct the invariant mass of the η′ from the six photons, and the invariant masses of the η and the two π0 from their decay products. The search for the decay products was performed by first looking for the photon pair best reconstructing the invariant mass of the η, and then using the remaining four photons for the reconstruction of the invariant masses of the two neutral pions. The proton missing mass for this selection is shown in Fig. 4 b).
The charged decay channel was treated by requiring a good invariant mass reconstruction of the η from two photons detected in the BGO calorimeter, together with the identification of two charged pions anywhere in the detector (see Fig. 4 c)).
As one can see in Fig. 4, the proton missing mass estimated for the three analyzed decay modes is very similar and very close to the η′ mass. By simulation we estimate that the residual background for the three considered cases is lower than 4%.
The complete reconstruction of the η′ is not possible in the case of its decay into π+π−η and has poor resolution in the case of the π0π0η decay mode, while we are able to measure with good accuracy all quantities of the proton in the final state. For this reason, we derive all η′ quantities in the center-of-mass frame from the well-determined ones of the detected proton.
The beam asymmetry Σ was extracted by fitting the azimuthal distribution defined by the following ratio:

(N_V/F_V) / (N_V/F_V + N_H/F_H) = 1/2 [1 + P(E_γ) Σ cos(2φ)]   (1)

where N_V (N_H) and F_V (F_H) are the number of events and the total flux for vertical (horizontal) beam polarization, while P(E_γ) is the polarization degree of the beam calculated by QED [20]. The systematic error due to the efficiency cancels in the ratio. The stability of the present results was verified by different checks (as in Ref. [15]). We fitted the azimuthal distributions by using different parametrizations of eq. (1). Parametrizing eq. (1) as A[1 + B cos(Cφ + D)], we obtain that the quantities A − 0.5, C − 2 and D are consistent with zero within the fitting error bars. In Fig. 5 b) we report the distribution of the ratio (N_V/F_V)/(N_V/F_V + N_H/F_H) for ⟨E_γ⟩ = 1.475 GeV and ⟨θ_η′ c.m.⟩ = 68.73° (full circles), together with the fit of the distribution using P(E_γ)Σ as the only free parameter (dotted line) and with the general parametrization described above (solid line). The P(E_γ)Σ parameters extracted in the two fits are in agreement within the error bars. We also extracted the beam asymmetry from a simulated sample generated with a flat asymmetry of 0.5. As one can see in Fig. 6 a), the extracted asymmetries are compatible with the simulated ones within one standard deviation. Fig. 6 b) shows the comparison between the beam asymmetry evaluated in the data set obtained from the π+π−η decay and the one obtained from the sum of the γγ and π0π0η decay channels. The two estimates are in agreement within the error bars.
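A minimal sketch of the asymmetry extraction, assuming the reconstructed form of eq. (1) above and a fixed polarization P from the QED calculation (illustrative, not the collaboration's fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(phi, sigma, P=0.96):
    """Eq. (1)-type parametrization: (N_V/F_V)/(N_V/F_V + N_H/F_H)
    = 0.5*(1 + P*Sigma*cos(2*phi)); P is held fixed at its default here."""
    return 0.5 * (1.0 + P * sigma * np.cos(2.0 * phi))

def extract_beam_asymmetry(phi_bins, ratio, ratio_err):
    # only the first parameter (Sigma) is fitted because p0 has length 1
    popt, pcov = curve_fit(ratio_model, phi_bins, ratio,
                           sigma=ratio_err, absolute_sigma=True, p0=(0.0,))
    return popt[0], np.sqrt(pcov[0, 0])
```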
The trend of the beam asymmetry is on average zero, confirming the dominant role of the s-wave [8,9], but the presence of values clearly above zero for θ_η′ c.m. below 90° hints at a possible interference of different resonances.
Figure 1. Cross view of the 3D rendering of the BGO ball crystals, Barrel, MWPCs and target nose.
Figure 2. Beam energy versus proton polar angle distribution of events for a simulated γp → η′p reaction.
Figure 3. Panels a) and b) experimental proton missing mass with the cuts defined in condition i) (dash-dotted line), i) and ii) (dashel line) and i)-iii) (solid line).
Figure 5. Preliminary η′ beam asymmetries off the proton at the average beam energy of 1.475 GeV (panel a). Panel b): azimuthal experimental distribution of the ratio (N_V/F_V)/(N_V/F_V + N_H/F_H) (full circles), the fit of the ratio by eq. (1) using PΣ as the only free parameter (dashed line), and the fit using the general parametrization of eq. (1) (solid line).
Fig. 5 shows the preliminary beam asymmetry of η′ photoproduction off the proton. In particular, we report in the figure seven bins in θ_η′ c.m. estimated for the averaged en- | 3,785.6 | 2014-05-01T00:00:00.000 | [
"Physics"
] |
Signal Model for Coherent Processing of Uncoupled and Low Frequency Coupled MIMO Radar Networks
MIMO radar networks consisting of multiple independent radar sensors offer the possibility to create large virtual apertures and therefore provide high angular resolution for automotive radar systems. In order to increase the angular resolution, the network must be able to process all data phase coherently. Establishing phase coherency, without distributing the transmitted RF signal to all sensors, poses a significant challenge in the automotive frequency range of 76 GHz to 81 GHz. This paper presents a signal model for uncoupled and low frequency coupled radar networks. The requirements for phase coherent processing for uncoupled radar sensors are systematically derived from the signal model. The proposed signal processing methods, which establish coherency, are sub-aperture based. Both the signal model and the proposed signal processing methods are verified by measurements with radar sensor networks composed of 2 and 3 radar sensors, providing 768 and 1728 virtual channels respectively. Measurements verify that phase noise is insignificant in the process of establishing coherency in uncoupled and low frequency coupled radar networks.
I. INTRODUCTION
Driver assistance systems' demands on the imaging capabilities of radar systems are ever increasing. In light of these demands, higher Direction of Arrival (DoA) resolution is required [1]. As the requirements rise, it becomes increasingly challenging to address these demands with a single-sensor radar system [2]. Multiple-Input-Multiple-Output (MIMO) radar networks, consisting of multiple individual MIMO sensors, offer the possibility to place the sensors further apart and thereby increase the aperture of the virtual antenna array. Several MIMO radar networks have already been realized and analyzed [3], [4], [5], [6], [7], [8]. The different implementations are divided into three network topologies: Radio Frequency (RF) coupled networks, Low Frequency (LF) coupled networks and uncoupled networks. The three topologies are depicted in Fig. 1. The simplest realization is the RF coupled topology, whose unique feature is the distribution of the RF signal to each radar sensor. RF coupled networks preserve the coherency of Phase Noise (PN) and therefore benefit from the range-correlation effect [9].
In [10], a Chirp-Sequence Frequency Modulated Continuous Wave (CS-FMCW) radar network according to Fig. 1(a) is presented, which distributes the RF signal with semi-rigid cables at a fourth of the transmit signal frequency. Due to the CS-FMCW modulation scheme, each sensor simply uses a frequency multiplier to create the required 77 GHz signal. The network consists of 2 sensors with a total of 768 Virtual (Vx) channels. This approach eases the requirements on the signal distribution, but semi-rigid cables at 20 GHz still significantly attenuate the signal and are sensitive to phase variations under mechanical stress.
The LF coupled network topology, Fig. 1(b), distributes an LF clock derived from a single reference oscillator instead of the RF signal. From the reference clock, the multiple times higher RF signal is synthesized. This still requires a signal distribution, but since the distributed signal has a significantly lower frequency than the actual transmit signal, complexity and costs are drastically lower than for the RF coupled topology.
In [5], a network with CS-FMCW modulation and Frequency Division Multiplexing (FDM) is presented, in which the reference frequency is 80 MHz, while the synthesized transmit signal frequency is 77 GHz. The network realizes 32 Vx channels with 2 radar sensors.
A different realization was shown in [7], in which a CS-FMCW modulation-based network with Time Domain Multiplexing (TDM) was used. The distributed reference frequency is 100 MHz, while the synthesized transmit signal frequency is 154 GHz. The network realizes 64 Vx channels with 2 radar sensors.
The most flexible, but also most challenging, network topology is the uncoupled network topology, in which each radar sensor synthesizes the RF signal from independent reference oscillators, Fig. 1(c).
In [6] and [11], uncoupled radar networks with CS-FMCW and FDM are presented, operating at 122 GHz, derived from uncoupled 100 MHz oscillators. The networks are able to coherently detect the range [6] and velocity [11] of a target with a total of 6 Vx channels with 3 radar sensors.
In [7] and [8], radar networks operating at 154 GHz, synthesized from reference frequencies of 100 MHz and 10 GHz respectively, are presented. Both networks realize 64 Vx channels with 2 radar sensors. These networks require a calibration target with known position to establish coherency.
The topologies investigated in this work are LF coupled and uncoupled networks; the individual sensors are triggered by an independent central trigger source. In contrast to previous works, the presented signal model incorporates the fact that not only the RF signal but also all intra-frame and intra-chirp timings are derived from the reference oscillator.
A signal model including the coupling-induced errors is derived, which is used to calculate boundaries in terms of frequency deviation for the reference oscillators.The boundaries heavily depend on the transmit frequency, the reference oscillator's frequency, and the radar waveform parameters.Signal processing methods are presented to estimate and correct the coupling-induced errors on the radar signals to achieve fully coherent processing of the complete network in the range, velocity, and angular domain.None of the presented estimation and correction methods require targets with known positions or require the presence of single scatter targets.
To verify the signal model and the derived boundaries, networks consisting of 2 and 3 radar sensors with 768 and 1728 Vx channels, respectively, are used.The reference clocks are provided by Temperature-Compensated Voltage-Controlled Crystal Oscillator (TCVCXO) and the radar sensors are realized by multiple state-of-the-art automotive graded MIMO radar chips.
The structure of the paper is as follows: Section II describes the structure of the individual parts of the radar network. In Section III, the influence of the reference oscillator's frequency deviation on the timing generation and the RF signal is described. Section IV introduces the used notation and investigates the spectrum of a chirp. The signal model and boundaries for the frequency deviation of the reference clocks are discussed in Section V. In Section VI, the necessary estimation and correction methods to coherently process the complete network are presented. Finally, in Section VII, measurements for different radar network topologies and varying sizes in an anechoic chamber are presented and compared to the theoretical signal model. The section ends with outdoor measurements of a moving extended target using the uncoupled 2-sensor radar network.
II. SYSTEM DESCRIPTION
First, the realized radar network is described to better understand the presented signal model.The derived signal model is universally applicable to all radar networks in which the single sensors derive the timings and the RF signal from a common reference oscillator.The realized network consists of multiple individual MIMO sensors, which in turn consist of multiple state-of-the-art MIMO radar chips, specified for automotive applications.
A. CHIP DESCRIPTION
A single radar chip provides 3 Transmit (Tx) and 4 Receive (Rx) channels. The block diagram of a single chip is depicted on the left in Fig. 2. The chip requires only a single reference frequency, f_c = 40 MHz, to operate. From this reference frequency, all timings, clocks, and RF signals are derived.
The first stage for the reference clock is a clean-up Phase-Locked Loop (PLL) with a narrow loop bandwidth for removing noise and interference picked up by the reference clock signal [12]. The consequence of this narrow loop bandwidth is that the PN of the clock driving the synthesizer and timing generator is predominantly determined by the internal PN of the PLL itself and of the Voltage Controlled Oscillator (VCO) [12]. Therefore, the PN of the reference oscillator is insignificant.
The timing generator creates the intra chirp and the intra frame timings, while the RF synthesizer creates the chirps, which are transmitted and used for downmixing.
B. SENSOR DESCRIPTION
A single sensor is composed of 4 radar chips, which are jointly triggered and provided with the same reference clock. The reference clock on the sensor is distributed to all chips via a 1-to-4 clock buffer. Every chip independently derives the intra-frame and intra-chirp timings from the reference clock. The block diagram of a single sensor is depicted on the right in Fig. 2.
The 4 MIMO radar chips provide each sensor with P_S = 12 Tx channels and Q_S = 16 Rx channels. The chips are divided into a primary chip, namely chip 1, and secondary chips, chips 2, 3, and 4. Chip 1 uses its RF synthesizer to create the chirps; the RF synthesizers of the secondary chips are disabled. The chirps of chip 1 are distributed to the secondary chips, which transmit them and feed the mixers of the receive channels. The daisy-chain chirp distribution scheme introduces additional time delays, which add to the time-of-flight τ. These additional time delays are ignored, since they are time invariant and are estimated and corrected during the calibration process.
For the uncoupled network topology, all sensors of the network are provided with reference clocks from independent oscillators.In contrast, in the LF network topology, all sensors of the network are provided with the clock from a single reference oscillator.
C. VIRTUAL NETWORK ANTENNA APERTURE AND SUB-APERTURES
A network is composed of V sensors, in which each sensor is equipped with an individual antenna array containing P S Tx and Q S Rx antenna elements.
Due to the coherent processing of the network, all antenna elements form a joint virtual network aperture. Throughout the derivation of the signal model, it is necessary to divide the complete virtual aperture into multiple sub-apertures, since a joint DoA estimation is only possible after multiple correction steps on all channels included in a sub-aperture.
The joint network virtual aperture is split into mono- and bistatic sub-apertures. In this context, the terms mono- and bistatic apply at the sensor level. The position of a virtual element is described by

x_Vx,m,n,q,p = x_Tx,m,p + x_Rx,n,q,

where x_Vx,m,n,q,p describes the virtual antenna position created when the p-th, p ∈ [0 . . . P_S − 1], Tx antenna located on the m-th, m ∈ [0 . . . V − 1], sensor transmits and the q-th, q ∈ [0 . . . Q_S − 1], Rx antenna located on the n-th, n ∈ [0 . . . V − 1], sensor receives. A sub-aperture is described by the set of positions generated by all Q_S Rx antennas of the m-th sensor and all P_S Tx antennas of the n-th sensor and is denoted as X_n,m. X, in turn, describes the set of all positions in the network. The number of sub-apertures in a network consisting of V sensors is V². Of these V² sub-apertures, V are monostatic, m = n, and V² − V are bistatic, m ≠ n.
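A minimal sketch of how the virtual network aperture and its sub-apertures can be assembled from the physical element positions, assuming the virtual position is the sum of the transmitting and receiving element positions as stated above:

```python
import numpy as np

def virtual_aperture(tx_positions, rx_positions):
    """tx_positions[m] and rx_positions[n] are (P_S, 2) / (Q_S, 2) arrays of
    physical antenna positions of sensors m and n. Returns a dict mapping the
    sub-aperture index (n, m) -> (P_S*Q_S, 2) array of virtual positions
    x_Vx = x_Tx + x_Rx, with Tx on sensor n and Rx on sensor m."""
    apertures = {}
    V = len(tx_positions)
    for n in range(V):            # transmitting sensor
        for m in range(V):        # receiving sensor
            apertures[(n, m)] = (tx_positions[n][:, None, :] +
                                 rx_positions[m][None, :, :]).reshape(-1, 2)
    return apertures
```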
A bistatic pair is formed by 2 sets of Vx antennas generated by 2 sensors. One set of the pair is formed by the m-th sensor's Tx antennas and the n-th sensor's Rx antennas, X_m,n. The other half of the pair is generated by the n-th sensor's Tx antennas and the m-th sensor's Rx antennas, X_n,m.
Redundant Vx antenna elements play a crucial role in the process of establishing coherency between sub-apertures and are created if multiple unique pairs of Rx and Tx antenna positions exist which result in the same Vx position vector.In Fig. 3, the virtual network aperture for the 2-sensor network and its sub-apertures is depicted.
The complete virtual network aperture of the 2-sensor radar network consists of 4 sub-apertures: 2 monostatic apertures and 2 bistatic apertures. The 2 bistatic sub-apertures form a bistatic pair. The virtual aperture of the 2-sensor network contains redundant elements on each transition from sub-aperture to sub-aperture. The properties of the virtual antenna aperture and its sub-apertures are listed in Table 1. A detailed description of the antenna array can be found in [10].
The large size of the network aperture leads to a violation of the far-field condition in the measurement range of the radar network [13], [14]. The focus of this paper is on the network coupling effects. As the problem of far-field condition violation is independent of the coupling mechanism of the network, it is neglected throughout the derivation of the signal model.
III. OSCILLATORS & TIMINGS
In the case of an uncoupled topology, each sensor is provided with its own independent clock with frequency f_c,n. For the signal model, the absolute frequencies of the oscillators are less important, whereas the pairwise relative frequency deviation from oscillator to oscillator is significant. Therefore, one of the two oscillators of a pair is arbitrarily defined as the oscillator with the nominal frequency f_c = f_c,m, and the other is defined with a frequency deviation:

f_c,n = f_c,m + Δf_n,m = f_c + Δf_n,m.

For the LF coupled topology, all sensors are provided with the same clock derived from a single oscillator with frequency f_c,0. The frequency difference, therefore, is always Δf = 0 Hz. The used oscillators are TCVCXOs, which are frequency tunable by a potentiometer and equipped with a clock buffer providing multiple phase coherent rectangular-shaped outputs. The signal model assumes that the frequencies of the individual oscillators are fixed for the duration of a frame but can change from frame to frame. In the following, the random-variable behavior of the frequency is dropped, but the frequency deviation must always be lower than the derived limits.
In Fig. 4, the RF signal and the timings created by the synthesizer and the timing generator are depicted. At the point in time t_0, the chip receives an external frame trigger signal; the remaining intra-chirp and intra-frame timings are created by the timing generator and are therefore only dependent on the reference clock. The RF synthesizer is triggered by the timing generator to generate chirps with slope S for the duration T_Up. The slope is expressed in terms of the starting frequency f_0, end frequency f_1 and ramp-up time T_Up as

S = (f_1 − f_0) / T_Up.

The Analog-To-Digital Converter (ADC) starts to sample the received Intermediate Frequency (IF) signal after the time duration T_S, which is defined relative to the start of a chirp. The time T_S is largely determined by the settling effects of the High Pass Filter (HPF) and the time-of-flight τ. The ADC receives the start trigger from the timing generator and samples N_s samples at the frequency f_s, which determines the sampling time T_ADC = N_s / f_s. Well-designed waveforms choose a ramp time which is longer than the sum of T_S and T_ADC, due to the group delay of the digital filter chain. At the end of the chirp, the timing generator triggers the RF synthesizer to ramp its VCO down. T_Down is determined by the chirp bandwidth; the larger the chirp bandwidth, the longer T_Down must be chosen for a clean signal. The chirp repetition period T_R is the sum of T_Up and T_Down and determines when the next chirp starts.
A. TIMINGS
Timings are created by counting clock cycles; a counter must count N_CLK = T · f_c cycles in order to create a timing signal of duration T. The number of cycles is calculated based on the nominal frequency at which the counter is driven. Since the counters of the radar sensors are driven by oscillators with different frequencies, the same counter value leads to different time durations. The time duration T_m therefore becomes sensor specific, but since oscillators are always treated pairwise, one of the two oscillators is chosen as the nominal one, T = T_m. This is written as

T_m = N_CLK / f_c,m = N_CLK / f_c = T,

where the last part follows from the fact that the m-th oscillator was chosen to be the nominal one. The partner oscillator counts the same number of cycles and creates a different time duration:

T_n = N_CLK / f_c,n = N_CLK / (f_c + Δf_n,m).

The time difference from the point of view of oscillator f_c,m, occurring while both oscillators count the same number of cycles N_CLK, is then

ΔT_n,m = T_n − T_m = T ( f_c / (f_c + Δf_n,m) − 1 ) ≈ −T · Δf_n,m / f_c.

From the perspective of oscillator f_c,n, the timing difference is the same but with opposing sign:

ΔT_m,n = −ΔT_n,m.

If the frequency f_c,n is higher than f_c,m, Δf_n,m ≥ 0, the timings of f_c,m are always too late, whereas from the perspective of oscillator f_c,m, the timings of f_c,n are too early.
It is important to note that the relative time difference ΔT_n,m scales proportionally with T. For the radar use case, this means that with the progression of the frame, the timing difference between the sensors increases, because at the beginning of the frame only a frame trigger is issued and the intra-frame timings are created by the sensor-specific timing generators in reference to the frame trigger. The time duration is always with respect to the current frame trigger.
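To illustrate the order of magnitude, the drift accumulated since the frame trigger can be evaluated for assumed waveform parameters (the values below are hypothetical, not those of the measured network):

```python
f_c     = 40e6      # Hz, nominal reference frequency
delta_f = 40.0      # Hz, relative oscillator offset (1 ppm of 40 MHz, assumed)
T_R     = 30e-6     # s, chirp repetition period (assumed)
N_chirp = 256       # chirps per frame (assumed)

for k in (1, N_chirp):
    dT = k * T_R * delta_f / f_c     # drift accumulated since the frame trigger
    print(f"chirp {k:3d}: timing offset ~ {dT*1e9:.2f} ns")
```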
For LF coupled networks, the timing errors are always 0, since the frequency deviation is Δf_n,m = 0.
B. FREQUENCY SYNTHESIS
The RF chirp signal is created by a synthesizer, which acts as a time-variant frequency multiplier. From the perspective of the oscillator f_c,m, the start frequency of the chirp, f_0, is created by multiplying the reference frequency by the factor N_0:

f_0 = N_0 · f_c,m.

The final chirp frequency is achieved with the final frequency multiplication factor N_1:

f_1 = N_1 · f_c,m.

The frequency of oscillator f_c,n is multiplied by the same factors N_0 and N_1, which leads to a relative frequency deviation between the radar sensors.
The RF signals created by the synthesizer driven by f c,n is expressed in terms of f c,m and f n,m .This is used to get the frequency difference of the RF start frequency, f 0,m,n , end frequency f 1,m,n , and bandwidth BW n,m in relation to the reference frequencies.
As for the timing, the point of view is crucial.From the point of view of the oscillator which runs at the lower frequency, the other oscillator starts and stops at frequencies which are too high.This is reversed from the perspective of the oscillator with the higher frequency; the other clock starts and stops at too low a frequency.The frequency and slope deviations are of the same magnitude but with reversed signs, depending of the point of view.
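A short sketch of how a reference deviation propagates through the frequency multiplication into RF start-frequency, bandwidth, and slope deviations; all numerical values are illustrative placeholders and not taken from the realized hardware.

```python
# Start/stop frequencies are multiples of the reference; applying the same
# multiplication factors to two slightly different references gives slightly
# different RF frequencies and slopes.
f_ref_nominal = 40e6        # nominal reference frequency (illustrative)
delta_f_ref = 50.0          # reference deviation of the partner sensor (made up)

N0 = 76e9 / f_ref_nominal   # multiplication factor for the start frequency
N1 = 77e9 / f_ref_nominal   # multiplication factor for the end frequency
t_up = 60e-6                # ramp-up time (illustrative)

f0_m = N0 * f_ref_nominal                   # start frequency of sensor m
f0_n = N0 * (f_ref_nominal + delta_f_ref)   # start frequency of sensor n

delta_f0 = f0_n - f0_m                      # RF start-frequency deviation
delta_bw = (N1 - N0) * delta_f_ref          # bandwidth deviation
delta_slope = delta_bw / t_up               # chirp slope deviation

print(delta_f0, delta_bw, delta_slope)
```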
IV. SIGNAL THEORY
This section introduces and derives the mathematical expressions that are employed repeatedly in the derivation of the signal model.
A. MATHEMATICAL NOTATION
The fundamental signal of modern CS-FMCW radar systems is the time limited sampled sinusoid, described as x(n) = A·e^{j2π f_0 n T_s} for n = 0, …, N−1, in which |A| is the magnitude and ϕ_0 the starting phase, which together form the complex amplitude A = |A|e^{jϕ_0}; f_0 is the frequency of the signal, f_s = 1/T_s the sampling rate in Hertz, and N the observation interval in number of samples. The signal model and the estimation and correction of the coupling-induced errors are described in the frequency domain. The spectrum of (10) is given in [15] as (11). The frequency resolution δf is defined as the frequency difference between the maximum of (11) and its first zero, which is given by δf = f_s/N. The Fast Fourier Transform (FFT) is used to compute the spectra of equidistantly sampled data and is the frequency-discrete and scaled version of the Time Discrete Fourier Transform (TDFT) [15]. This means that in situations in which no analytical spectrum is available, the FFT can be used to evaluate the TDFT at discrete frequencies. Measurement data are always processed by the FFT with additional windowing [16]. Throughout this paper, the signal of the radar is derived in continuous time, and the time discretization is implicit. This means that the spectra of the signals are described by the TDFT. The windowing of the signal is omitted for the sake of easier notation, and a shorthand notation for the spectrum is used in which the windowed spectrum is written as the convolution of the signal spectrum with X_Window(f), where the (*) operator indicates convolution and X_Window(f) is the Time Continuous Fourier Transform (TCFT) of the window. Including windowing, this allows (11) to be written more compactly as (14). For the sake of simplicity, the explicit dependency on the sampling rate is dropped, since the context makes it self-explanatory which sampling frequency must be used. The TDFT describes the spectra in all radar dimensions. In order to distinguish between the dimensions, the frequency variables f of the spectra are denoted with a subscript indicating the dimension. The range is associated with the fast time frequency; its frequency variable is denoted f_R and its frequency resolution δf_R. The velocity is associated with the slow time frequency; its frequency variable is denoted f_v and its frequency resolution δf_v.
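The following snippet evaluates the TDFT of the time-limited sinusoid on a fine grid via a zero-padded FFT and reads off the frequency resolution δf = f_s/N and the starting phase at the spectral maximum. The parameters follow the Fig. 5 caption, except that a nonzero starting phase is added here purely to illustrate the phase read-out.

```python
import numpy as np

# Time-limited complex sinusoid x[n] = A * exp(j*2*pi*f0*n*Ts), n = 0..N-1
A = 1.0 * np.exp(1j * 0.3)      # complex amplitude |A| e^{j phi0}, phi0 = 0.3 rad
f0, fs, N = 2.5e6, 10e6, 1024   # parameters as in the Fig. 5 caption

n = np.arange(N)
x = A * np.exp(2j * np.pi * f0 * n / fs)

# A zero-padded FFT evaluates the TDFT on a fine frequency grid.
pad = 16
X = np.fft.fft(x, pad * N) / N
f = np.fft.fftfreq(pad * N, d=1 / fs)

# Frequency resolution: spacing between the main-lobe maximum and its first zero.
delta_f = fs / N
k_max = np.argmax(np.abs(X))
print(f"peak at {f[k_max] / 1e6:.4f} MHz, delta_f = {delta_f / 1e3:.2f} kHz")
print(f"phase at peak = {np.angle(X[k_max]):.3f} rad (phi0 = 0.3 rad)")
```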
B. SPECTRUM OF A CHIRP
During the investigation of the coupling influence on the CS-FMCW signal structure, a time limited sinusoidal oscillation with a residual chirp bandwidth occurs. This is mathematically described in continuous time by (15), in which S describes the chirp rate in Hz/s. The corresponding time discrete version is described by (16). For a more general and easier investigation of (16), the chirp rate is defined as the ratio of the bandwidth B covered by the chirp to the observation interval T = N·T_s. Furthermore, the bandwidth B is expressed in multiples of the frequency resolution δf, which results in S = α·δf/T, in which the parameter α describes the amount of bandwidth covered by the chirp, in terms of frequency bins, over the course of the time duration T. Due to the lack of a closed form expression for the spectrum of (16), a numerical investigation via the FFT is carried out. In Fig. 5, FFT-based power spectra of (16) are shown for values of α greater than one, i.e., for bandwidths larger than a frequency bin. From Fig. 5 it is deduced that for a chirp with a bandwidth larger than a frequency bin width, the spectrum is spread significantly over multiple frequency bins, which leads to a loss of frequency resolution. In Fig. 6, the FFT-based spectra of (16) are shown for chirps whose bandwidth is significantly smaller than a frequency bin width. The red circles highlight the frequencies f_max at which the power spectra of the signals have their maxima. It can be seen that the dominant influence of the chirp bandwidth α is a frequency shift of the maximum in the frequency domain. A quantitative evaluation of this shift shows that a residual chirp covering the bandwidth α introduces an additional frequency shift of α·δf/2; the loss of resolution is negligible. The frequency maximum of the spectrum of (16) is therefore described by f_max = f_0 + α·δf/2, where the shift α·δf/2 is condensed to Δf(α) for clearer notation. The loss of power in the spectrum, evaluated at the maximum as a function of the chirp bandwidth, is less than 0.2 dB and therefore negligible for |α| < 0.1.
For a pure sinusoidal signal, α = 0, evaluating the phase of the spectrum at its maximum yields the correct phase ϕ_0 of the signal. In Fig. 6 it is clearly visible that this is not true for signals with a residual chirp, α ≠ 0. Evaluating the phase of the FFT-based spectra of (16) at their maxima shows that an additional phase error Δϕ(α) is introduced in comparison to a pure sinusoid. Furthermore, it can be concluded from Fig. 6 that the introduced phase error Δϕ(α) depends linearly on the chirp bandwidth α. In order to quantify the error, a linear regression of Δϕ(α) was carried out, which is also shown in Fig. 6. The linear regression results in the following approximation of the phase shift as a function of the chirp bandwidth α.
The spectrum of (16) for chirp bandwidths of |α| < 0.1 is therefore approximated by a sinusoid spectrum shifted in frequency by α·δf/2 and in phase by Δϕ(α).
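The numerical investigation of the residual chirp can be reproduced along the following lines: for small α the spectral maximum moves by roughly α·δf/2 and the phase at the maximum acquires an error that grows approximately linearly with α. This is a sketch of the kind of FFT experiment described above, not the exact code used for Figs. 5 and 6.

```python
import numpy as np

def residual_chirp_peak(alpha, f0=2.5e6, fs=10e6, N=1024, pad=1024):
    """FFT-based peak frequency and phase of a time-limited residual chirp.

    The chirp rate is S = alpha * delta_f / T, i.e. the signal sweeps
    alpha frequency bins over the observation interval T = N / fs.
    """
    t = np.arange(N) / fs
    T = N / fs
    delta_f = fs / N
    S = alpha * delta_f / T
    x = np.exp(2j * np.pi * (f0 * t + 0.5 * S * t ** 2))
    X = np.fft.fft(x, pad * N) / N           # zero-padded FFT = fine-grid TDFT
    f = np.fft.fftfreq(pad * N, d=1 / fs)
    k = np.argmax(np.abs(X))
    return f[k], np.angle(X[k])

delta_f = 10e6 / 1024
for alpha in (0.02, 0.05, 0.1):
    f_max, phi = residual_chirp_peak(alpha)
    shift = f_max - 2.5e6
    print(f"alpha={alpha:4.2f}: shift={shift:8.1f} Hz "
          f"(alpha*delta_f/2={alpha * delta_f / 2:8.1f} Hz), "
          f"phase error={phi:+.4f} rad")
# The peak shift follows alpha*delta_f/2 and the phase error at the peak grows
# roughly linearly with alpha, in line with the regression discussed above.
```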
V. SIGNAL MODEL
The signal model is derived for single Vx channels of a network consisting of 2 sensors with uncoupled oscillators, Δf_n,m ≠ 0 Hz. The individual Vx channels are representative of all channels of the 4 sub-apertures. The derived coupling-induced errors for the 4 Vx channels affect all Vx channels on the same sub-aperture equally. For a general network composed of V sensors, the formulas must be evaluated for all pairs of oscillators, and the derived conditions must be met by all pairs.
For the sake of simplicity, path loss and noise are neglected during the derivation of the signal model to focus on the phases and frequencies.The conditions regarding the reference oscillators are chosen such that there is no loss of resolution in range, velocity, or angular resolution.The conditions are general and applicable to every CS-FMCW radar sensor network.For better understanding, numerical results for the waveform parameters used in the 2-sensor setup are given.The used waveform parameters are listed in Table 2.
A. SINGLE CHIRP MODEL
All sensors are triggered at the mutual frame trigger time t_0, but each sensor introduces a frame trigger jitter t_j. The frame trigger jitter is the time duration between the actual trigger time t_0 and the point in time at which the synthesizer of the sensor actually starts creating the first chirp of the frame. The phase of the first chirp of a frame created by the m-th and by the n-th sensor follows from the synthesized chirp frequencies and the respective trigger times. The signal after the mixer at the m-th sensor for a static target in the monostatic case is described by (23). The phase difference ϕ_0,m,m(t) for a monostatic channel is calculated as (24). For automotive radar systems, the term depending on the square of the time-of-flight, τ², and the term depending on the product of the time-of-flight and the frame trigger jitter, τ·t_j,m, can be neglected, since they are small in comparison to the phase term ϕ_τ, which depends linearly on the time-of-flight. The same calculation carried out for a monostatic channel of the n-th sensor, already neglecting the term depending on τ², yields (25). The phase difference of monostatic channels consists of a constant phase term ϕ_τ, which only depends on the time-of-flight τ and is independent of the frame trigger t_0 and the frame trigger jitter t_j. Furthermore, neither the frame trigger nor the frame trigger jitter has an influence on the beat frequency f_τ, as described by the phase term depending linearly on time t.
For the bistatic channels it must be taken into account that the start frequency f_0, the slope S, and the frame trigger jitter t_j are sensor specific. The bistatic frequency progressions are visualized in Fig. 7. Carrying out the calculation of (23) for a bistatic channel, and neglecting all terms which depend on t_0², yields (26). Exploiting the fact that the reference frequencies can be expressed in terms of each other via (2), the start frequency and the slope can likewise be expressed in terms of each other, as in (27) and (28). Using (27) and (28) in (26) and rearranging the terms yields (29). Since all terms depending on (t_j,n² − (t_j,m + τ)²) or on t_j,n² are insignificant in comparison to the phase terms depending linearly on them, (29) is approximated by (30). Comparing (29) with the monostatic signals (24) and (25) shows that additional constant phase terms, additional frequency shifts, and a residual chirp are present.
Both the phase and frequency shifts are composed of two error sources: the first is the relative frame trigger jitter, Δt_j = (t_j,m − t_j,n), and the second is the frequency difference Δf_0,m,n caused by the frequency deviation of the reference oscillators. The additional constant phase and frequency terms do not influence the range resolution of the network but cause only constant shifts. However, as derived in Section IV-B, the residual chirp ΔS_m,n/2 can lead to a loss of resolution if it is not limited.
Calculating (26) with reversed roles, which means the m-th sensor is now the receiving sensor while the n-th sensor is the transmitting sensor, shows that the additional phase, the additional frequency shifts, and the residual chirp have the same magnitude but opposite sign, as given in (31). Comparing (31) with (29) shows that the influence of the errors is symmetric around the fast time frequency f_τ and the constant phase ϕ_τ caused by the time-of-flight τ.
1) CONDITION 1
In order to preserve the range resolution of the bistatic channels, the bandwidth covered by the residual chirp during the sampling interval must be lower than a tenth of the frequency resolution. The condition is derived from the analysis in Section IV-B and is stated mathematically in (32). The chirp-rate difference ΔS_m,n is expressed in terms of the reference frequency in (33), where ΔN is the difference between the frequency multiplication factors at the start and at the end of the chirp, as defined in (9). If the timing error ΔT_Up,m,n over an interval of T_Up is smaller than 1/1000 of T_Up, the different T_Up durations of the sensors can be neglected, T_Up,m ≈ T_Up,n = T_Up. This is the case when (34) is fulfilled. If condition (34) is met, the timing difference during an interval of T_Up can be neglected, which also means that the timing error during T_ADC is negligible, since T_ADC ≤ T_Up. (32) is then rewritten as (35), where the second line is the result of algebraic manipulation of the first line. For the waveform parameters listed in Table 2, (34) requires that the frequency difference of the reference oscillators fulfill |Δf_n,m| ≤ 39.96 kHz, and (35) requires a frequency deviation smaller than |Δf_n,m| ≤ 83.35 Hz. If the conditions are fulfilled, the sampling frequencies of the sensors can be assumed to be equal, f_s,m = f_s,n ≈ f_s. This is due to the fact that the nominal sampling frequency is in the MHz region; a sampling frequency difference in the 100 Hz region is therefore negligible. Moreover, it can be assumed that the fast time frequencies of the monostatic sub-apertures are the same, since the slopes of the sensors are only marginally off, S_n ≈ S_m ≈ S. This also means that the linear frequency shifts caused by the slope difference and the frame trigger jitter, ΔS_m,n·t_j,m·t and ΔS_m,n·t_j,n·t, are insignificant and can be neglected. The slope difference ΔS_m,n for the bistatic channels over the course of the entire sampling interval must, however, still be accounted for, since the error accumulates over the duration of the sampling window.
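Under the assumption that the fast time frequency resolution is δf_R = 1/T_ADC, the two bounds of Condition 1 can be evaluated as below. The waveform numbers are placeholders, not the Table 2 parameters, so the resulting bounds only illustrate the order of magnitude of the quoted 39.96 kHz and 83.35 Hz values.

```python
def condition1_max_ref_deviation(f_ref, bw, t_up, n_s, f_s):
    """Upper bounds on the reference-frequency deviation from Condition 1.

    Illustrative parameters only; the fast time frequency resolution is
    assumed to be delta_f_R = 1 / T_ADC.
    """
    t_adc = n_s / f_s                 # ADC sampling window T_ADC
    delta_f_r = 1.0 / t_adc           # assumed fast time frequency resolution
    delta_n = bw / f_ref              # difference of the multiplication factors

    # (34): relative timing error within T_Up below 1/1000
    bound_timing = f_ref / 1000.0

    # (35): residual-chirp bandwidth during T_ADC below delta_f_R / 10,
    #       |dS| * T_ADC = delta_n * |df| * T_ADC / T_Up < 0.1 * delta_f_R
    bound_chirp = 0.1 * delta_f_r * t_up / (delta_n * t_adc)
    return bound_timing, bound_chirp

# Made-up waveform: 40 MHz reference, 1 GHz bandwidth, 60 us ramp, 512 samples.
print(condition1_max_ref_deviation(f_ref=40e6, bw=1e9, t_up=60e-6,
                                   n_s=512, f_s=10e6))
```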
The fast time spectra of the monostatic channels after digitization and windowing are then described by (36), following the form of (14). Due to the residual chirp term ΔS_m,n, the spectra of the bistatic channels are additionally frequency and phase shifted; since the residual chirp terms appear with opposite signs, the symmetry between the bistatic channels is preserved. The fast time spectra of the bistatic channels are described analogously, using the chirp-spectrum approximation (20). It is important to note that for LF coupled networks, all terms originating from the frequency differences of the reference oscillators vanish, Δf_0,m,n = 0 Hz, including the residual chirp term ΔS_m,n. Nevertheless, the relative frame trigger jitter Δt_j still causes a symmetric frequency and phase shift.
B. MULTI CHIRP MODEL
So far only a single chirp of a frame has been considered. The chirp described by (24) repeats K times, and in a TDM radar network the repetition period per Vx channel of the m-th sensor is N_Tx·T_R,m. Evaluating the phase difference as a function of the chirp number k for a monostatic channel shows that the chirp start time cancels out, so the spectrum of each chirp of a frame is described by (36), regardless of the chirp number k.
Applying the TDFT along the slow time, with the inverse of the chirp repetition period per Vx channel as the sampling frequency and f_v as the slow time frequency variable, yields the velocity spectrum of the monostatic channels. This is different for the bistatic channels: the timing error of the chirp repetition period, ΔT_R,m,n, must be taken into account.
The phase difference for a bistatic channel then shows that a static target with no velocity appears as a moving target, since the timing difference of the chirp repetition period for the bistatic channels increases over the duration of a frame. Incorporating this in (31) confirms that a static target appears as a moving target in a bistatic channel, where, for the sake of simplicity, the influence of the residual chirp term is absorbed into the static phase ϕ_n,m and the static frequency f_b,n,m. As for an actual moving target, the fast time frequency and the phase become dependent on the chirp number. The resulting spectrum is given in (43), which gives rise to two conditions. First, the change of the fast time frequency over the course of a frame must be smaller than half of a slow time frequency bin in order not to lose significant velocity resolution. Second, the phase progression induced by the frequency difference between the reference oscillators should stay within the unambiguous velocity of the network, so that the velocity can be estimated correctly and the TDM-induced phase shifts from channel to channel can be corrected [17].
1) CONDITION 2
The total migration of the fast time frequency over the course of a complete frame must be limited to avoid a significant loss of velocity resolution. This loss results from the fast time frequency spectrum not being evaluated at its maximum for each consecutive chirp. The effect is unavoidable, but it is assumed to be negligible if the frequency migration during a frame is smaller than half of the fast time frequency resolution, as stated in (44). For the waveform parameters listed in Table 2, the frequency difference must fulfill |Δf_m,n| ≤ 109.7 mHz. The condition stated by (44) is greatly eased if the number of Tx elements N_Tx or the number of chirps K is reduced. Keeping either N_Tx constant while reducing K, or vice versa, is equivalent to reducing the measurement time of a frame. In the end, (44) either limits the measurement time of a frame for a given bandwidth or imposes an increasingly strict condition on the frequency difference as the measurement time increases. If (44) is fulfilled, the dependency of the fast time frequency on the chirp number is negligible, f_b,n,m + Δf_b·k ≈ f_b,n,m.
2) CONDITION 3
The last condition on the reference frequency difference stems from the fact that the phase progression induced in the slow time frequency over the course of a frame must stay within the unambiguous velocity range of the waveform; this condition is stated in (45). For the waveform parameters listed in Table 2, the frequency difference must satisfy |Δf_m,n| ≤ 220.4 mHz.
(45) becomes stricter as the unambiguous velocity decreases.This is especially disadvantageous for large TDM radar networks, since every additional Tx channel reduces the unambiguous velocity range.If the conditions of (44) and (45) are met by the reference oscillators, the start frequencies of the individual sensors can be approximated by the nominal one, f 0,m = f 0,n ≈ f 0 .
The spectra of the bistatic channels then follow accordingly. The coupling-induced fast time frequency shift has the same magnitude for both bistatic channels, but with opposite signs. Since LF coupled networks do not suffer from timing errors, the slow time frequency shift is not present for this network topology.
VI. ESTIMATION & CORRECTION
In order to coherently process all data, the additional phase, fast time frequency, and slow time frequency shifts caused by the frame trigger jitter and by the frequency deviation of the reference oscillators must be estimated and corrected for each frame. The slow time frequency shift is not present for LF coupled networks and therefore need not be estimated or corrected. The signal processing chain required to estimate and correct the data of the bistatic sub-apertures is depicted in Fig. 8. The derivation of the spectra in Section V was only carried out for single channels of a sub-aperture, but all channels contained on the same bistatic sub-aperture are shifted by the same amount in phase, fast time frequency, and slow time frequency as the single channels.
The estimation of the network coupling-induced shifts in fast time frequency and slow time frequency is carried out in the 2D frequency domain, where the coupling-induced errors appear as a simple shift. All power spectra of the channels contained in a sub-aperture set are summed to form the sub-aperture power spectrum P_m,n(f_R, f_v). The 2D deterministic cross correlation between two 2D functions is defined first, and the shifts of the bistatic channels with respect to the monostatic channels are then estimated by (50). Due to the symmetry between the channels of the two bistatic sub-apertures, both 2D power spectra are shifted in opposite directions in both domains. The shifts estimated by (50) must therefore be divided by a factor of 2 in order to align the bistatic channels with the monostatic channels and enable coherent processing. This becomes clear if the shift between the two bistatic 2D power spectra in the fast time frequency domain is calculated explicitly; the same calculation can be carried out for the phase and the slow time frequency.
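A compact sketch of the shift estimation: the 2D cross correlation of the summed power spectra is evaluated via FFTs and the location of its maximum gives the fast and slow time frequency lags. Function and variable names are ours; the halving that exploits the bistatic symmetry is indicated in the trailing comment.

```python
import numpy as np

def estimate_2d_shift(p_bi, p_ref, d_fr, d_fv):
    """Estimate how far the bistatic 2D power spectrum p_bi is shifted with
    respect to a reference spectrum p_ref (e.g., another bistatic or a
    monostatic one).

    d_fr, d_fv: bin widths of the fast and slow time frequency axes.
    A circular FFT-based cross correlation is used, which is adequate as
    long as the shifts are small compared to the spectrum extent.
    """
    corr = np.fft.ifft2(np.fft.fft2(p_bi) * np.conj(np.fft.fft2(p_ref))).real
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    n_r, n_v = corr.shape
    lag_r = i - n_r if i > n_r // 2 else i   # signed lag in fast time bins
    lag_v = j - n_v if j > n_v // 2 else j   # signed lag in slow time bins
    return lag_r * d_fr, lag_v * d_fv

# The two bistatic sub-apertures are shifted symmetrically, so half of the
# shift estimated between them aligns each with the monostatic channels, e.g.
#   shift_r, shift_v = estimate_2d_shift(P_01, P_10, d_fr, d_fv)
#   gamma_R, gamma_v = 0.5 * shift_r, 0.5 * shift_v
```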
The correction can be carried out either in the time domain, by multiplying the time signals with a correction signal built from the estimated shifts, or in the frequency domain, by shifting and resampling all of the 2D spectra associated with bistatic channels. In the time domain description of the correction, the TDM-induced phase correction terms that are also required are omitted here.
For LF coupled radar networks, the slow time frequency shift is not present.The 2D-cross correlation can then only be carried out over the fast time frequency f R and the correction term for the slow time domain is set to γ v = 0.
To coherently estimate the DoA across the whole network, it is necessary to estimate the phase difference between the sub-apertures.This is done with the aid of the redundant Vx antenna elements.After the fast and slow time frequency correction, all channels of the network can be evaluated at a fixed fast and slow time frequency to evaluate the phase terms ϕ m,n and ϕ n,m .During the derivation, the fact that the time-of-flight τ is channel specific τ i was omitted since the time difference between individual channels is neglectable in fast and slow time frequency processing.
Nevertheless, for redundant Vx elements the times-of-flight are equal, which is exploited to estimate the phase shifts between the sub-apertures (m,n) and (n,m).
A Constant False Alarm Rate (CFAR) algorithm is applied to the corrected 2D power spectrum P(f_R, f_v) of the complete network to identify possible targets [18]. The cell with the highest peak prominence is used to estimate the phase difference between the sub-apertures. The phase difference is estimated by calculating the phase difference of all redundant channel pairs in the set of redundant bistatic elements, summing over all phase differences, and dividing by a factor of 2, as expressed in (54), where X_i,n,m is the 2D spectrum of the i-th channel evaluated at the cell with the highest peak prominence. Because the phase difference between the two bistatic sub-apertures can be larger than |2π|, the estimate γ_ϕ is ambiguous by ±π. To resolve this ambiguity, the sets of redundant Vx channels between the monostatic and the bistatic sub-apertures must be used in addition. The phase difference between a monostatic and a bistatic sub-aperture is calculated by (55). If the magnitude of the difference between (54) and (55) is larger than π/2, the estimate γ_ϕ must be corrected by ±π. This procedure is also applied to the redundant elements linking the monostatic sub-aperture (m,m) and the bistatic sub-aperture (n,m), to reduce the probability of a wrong decision regarding the additional addend of ±π.
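One possible implementation of the redundant-channel phase estimate is sketched below: the pairwise phase differences are averaged through a complex sum, halved, and the ±π ambiguity is resolved against the monostatic/bistatic estimate, following the decision rule described above. The names and the exact averaging are our choices, not necessarily the precise form of (54) and (55).

```python
import numpy as np

def estimate_subaperture_phase(x_mn, x_nm, x_mm=None):
    """Sketch of the phase-offset estimate between the two bistatic
    sub-apertures from redundant Vx channel pairs.

    x_mn, x_nm: complex spectrum values of the redundant channels of the two
        bistatic sub-apertures at the CFAR-selected cell (same pair ordering).
    x_mm: optional redundant monostatic channel values used to resolve the
        +/- pi ambiguity that remains after the division by two.
    """
    x_mn = np.asarray(x_mn)
    x_nm = np.asarray(x_nm)
    # angle of the summed complex products = robust average phase difference
    gamma = 0.5 * np.angle(np.sum(x_mn * np.conj(x_nm)))
    if x_mm is not None:
        gamma_mono = np.angle(np.sum(np.asarray(x_mm) * np.conj(x_nm)))
        # pick gamma, gamma + pi, or gamma - pi, whichever is closest to the
        # monostatic/bistatic estimate (wrapped comparison)
        candidates = np.angle(np.exp(1j * np.array([gamma,
                                                    gamma + np.pi,
                                                    gamma - np.pi])))
        diffs = np.abs(np.angle(np.exp(1j * (gamma_mono - candidates))))
        gamma = float(candidates[np.argmin(diffs)])
    return gamma
```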
VII. MEASUREMENTS
To verify the signal model and the efficiency of the estimation and correction of the coupling-induced errors, a target consisting of 2 corner reflectors is placed 5 m in front of the radar network. The 2 corner reflectors, with a side length of 6 cm, are spaced 6 cm apart from each other. This means that only the complete network aperture is able to separate the targets in the angular domain. The measurement setup is depicted in Fig. 9.
The reference oscillators are tuned once before the measurements and are free running from that point on.For the LF coupled network measurements, all sensors are connected to the same reference oscillator.
A. UNCOUPLED NETWORK: 2-SENSOR SETUP
The waveform parameters are listed in Table 2.In Fig. 10, the uncorrected 2D-power spectrum of the complete network for a single measurement is depicted.As the theory states, it is clearly visible that the static target around 5 m is split into multiple targets, symmetrically shifted around the correct velocity and range.In order to verify that the symmetrically shifted power spectra originate from the bistatic sub-apertures, the complete network power spectrum P( f R , f v ) consisting of 768 Vx channels is split into its sub-aperture power spectra P m,n ( f R , f v ) of which each one consists of 192 Vx channels.The sub-aperture power spectra, with the same color scaling as in Fig. 10, are depicted in Fig. 11.During the derivation of the signal model, the frame trigger jitter and the frequency deviation between the reference oscillators were treated as constants.To demonstrate the fact that they are frame-to-frame random variables, 90 consecutive frames are evaluated and analyzed.For each frame, the arguments of the maximums of the bistatic 2D-power spectra are evaluated, estimating the target's fast time frequency f b,0,1 as well as its slow time frequency f d,0,1 : f b,0,1 , f d,0,1 are the estimated target frequencies of the bistatic sub-aperture 0,1 .This is also carried out for the sub-aperture 1,0 .Furthermore, the phase of a redundant Vx channel for each sub-aperture at the estimated frequencies of the target is evaluated, where ϕ 0,1 is the estimate of the phase.In Fig. 12 on the left, the time series over 90 frames for the three estimates of the two bistatic sub-apertures 1,0 and 0,1 are displayed.In Fig. 12 on the right, the mean-free scatter plots of the estimates for the two bistatic sub-apertures in addition to their estimated correlation coefficient ρ are shown [19].
The time series representation shows that the influence of the frame timing jitter and the frequency deviation of the reference clocks on the bistatic sub-apertures are symmetric, with the same magnitude but opposite signs.The correlation coefficient in all dimensions exceeds the magnitude of |ρ| = 0.99.The mean in f R is the target-induced fast time frequency f τ while the mean in f v represents the residual mean frequency difference between the reference oscillators due to imperfect tuning.
The outliers in the scatter representation of the phase are due to uncorrelated noise, which causes a wrap around of ±2π , if the phase errors induced by the frame timing jitter and the frequency difference are in sum close to |π |.Since this is a representation flaw, the outliers are disregarded in the computation of ρ.
With the basic signal theory proven, the next step is the estimation and correction of the coupling-induced errors.In Fig. 13, the 2D-power spectrum of the complete network is shown after carrying out the estimation and correction described in Section VI.The DoA estimates of the individual sub-apertures, the phase-uncorrected complete network aperture, and the phase corrected complete network aperture are depicted in Fig. 14.The individual sub-apertures are unable to resolve the two corner reflectors, and the uncorrected complete network aperture suffers from high ripple, which makes it impossible to distinguish the side-lobes from the targets.Only with the phase correction, the complete network aperture is able to resolve the two corner reflectors, which means the full resolution is available in the angular domain.
B. LF COUPLED NETWORK: 2-SENSOR SETUP
The measurements of Section VII-A were repeated with LF coupling of the network. In Fig. 15, the uncorrected 2D power spectrum of the complete network is depicted. As expected, no shifts in the velocity dimension are present for the LF coupled network, since the same reference oscillator is provided to both sensors and therefore no intra-frame timing errors exist.
But the frame trigger jitter is present and leads to a symmetric shift in the fast time frequency and a symmetric phase shift for the bistatic sub-apertures.The influence of the frame trigger jitter becomes clearly visible in the correlative analysis.In Fig. 16, the results of the correlative analysis are depicted, where the dimension f v is omitted since it is not influenced in a LF coupled network.
The distribution in the fast time frequency differs greatly from that of the uncoupled network, but the influence remains symmetric. The reason for the difference is that in the uncoupled network the influences of the frame trigger jitter and of the frequency deviation of the reference clocks overlap, whereas in an LF coupled network the distribution is caused exclusively by the frame trigger jitter. The distribution of the frame trigger jitter is device specific and depends on the implementation of frame triggering by the radar chip manufacturer.
Whether correction of the frame trigger jitter is necessary depends solely on the distribution of the frame trigger jitter, which makes establishing a general condition very difficult. The strictest and most general condition is derived from the phase error Δϕ_tj induced by the frame trigger jitter. If Δϕ_tj ≪ 1 and therefore negligible, no correction is necessary, neither in the fast time frequency domain nor in the angular domain. When this condition is fulfilled, the redundant Vx antenna elements are not required. For the radar chips used here, this condition is clearly not fulfilled, and corrections in the fast time frequency domain and in the angular domain are required. The DoA estimates of the individual sub-apertures, the phase-uncorrected complete network aperture, and the phase-corrected complete network aperture are depicted in Fig. 17. Just as for the uncoupled network, it is only after the phase correction that the full resolution of the network is available and the two corner reflectors are separable.
C. COMPARISON BETWEEN RF COUPLED, LF COUPLED AND UNCOUPLED NETWORKS
In order to show that the LF coupled and uncoupled networks do not suffer any loss in range and velocity resolution in comparison to an RF coupled network, the power spectrum of a monostatic sub-aperture is compared to the power spectra of a bistatic sub-aperture for an LF coupled network and for an uncoupled network. The monostatic sub-apertures are always RF coupled, since the Rx and Tx signals originate from the same sensor and therefore from the same RF synthesizer. The power spectra of the sub-apertures are extracted after the correction. In Fig. 18, the range power spectra P(f_R, f_v = 0) around the target position are depicted. The sharp peak around 5 m is the target, while the broad peak around 6.2 m is the wall of the anechoic measurement chamber. It can be clearly seen that the target peak has the same width down to around 20 dB below the peak, regardless of whether the sub-aperture is RF coupled, LF coupled, or uncoupled. This shows that LF coupled and uncoupled networks do not suffer any degradation in range resolution in comparison to RF coupled networks.
In Fig. 19 the velocity power spectra, P( f R = f R,Target , f v ), around the target position are depicted.On the one hand, it can be concluded from Fig. 19 that LF networks and uncoupled networks provide the same velocity resolution in comparison to RF coupled networks.On the other hand, it becomes clear that uncoupled radar networks suffer from an increased side lobe level.The reason for this are the timing errors introduced by the fundamental oscillators' frequency deviations.Nevertheless, the side lobe level is only about 5 dB higher than for RF coupled networks and LF coupled networks.
D. UNCOUPLED NETWORK: 3-SENSOR SETUP
To show that the analysis carried out for a single pair of oscillators holds true for all pairs of oscillators in a network with more than two sensors, a third identical radar sensor with a randomly distributed antenna array was added to the radar network.The radar network now consists of V = 3 radar sensors with N Tx = 36 Tx channels and N Rx = 48 Rx channels, resulting in a total of 1728 Vx channels.Instead of two closely spaced corner reflectors, a single corner reflector was placed at around 4 m in front of the radar network in the anechoic measurement chamber.The intra chirp timing parameters, the number of chirps per Vx channel, and the bandwidth remain the same, which leads to an increase of the total measurement time to T F = 228 ms and a decrease of the unambiguous velocity, due to the extra Tx channels.
The uncorrected 2D-power spectrum of the complete network is depicted in Fig. 20. It is clearly visible that the target is split into 7 targets, shifted in f_v and f_R. In Fig. 21, the 2D power spectra of each sub-aperture, each consisting of 192 Vx channels, are depicted. With V = 3 sensors in the network, 3 pairs of sub-apertures must be taken into account. In Fig. 21, each pair of sub-apertures is highlighted by a colored frame. In order to correct the bistatic sub-apertures and enable coherent processing, the correction steps described in Section VI must be carried out for each pair individually. The corrected 2D-power spectrum of the complete network is depicted in Fig. 22.
TABLE 3. Outdoor Waveform Parameters
E. UNCOUPLED NETWORK: 2-SENSOR SETUP OUTDOOR
To prove that the network is able to operate in the absence of a strong single scatter target, an outdoor measurement with a car driving towards the radar network was conducted.The measurement setup is depicted in Fig. 23.The waveform parameters were changed such that a higher unambiguous velocity range and a shorter measurement time are realized in order to keep the range migration effect to a minimum.An added benefit of the greatly reduced chirp repetition time T R , which in combination with the reduced number of chirps per Vx channel K leads to a reduced total measurement time T F , is that the requirements on the frequency difference of the reference oscillators | f n,m | are eased.
The waveform parameters for the outdoor measurements are listed in Table 3. In Fig. 23, the uncorrected and corrected 2D-power spectra of the complete network, as well as the spatial 2D representation of all moving targets, are depicted. The power spectra of the bistatic sub-apertures are shifted in f_R and f_v, but the proposed correction method is able to estimate and correct the shifts even for a moving extended target. The phase shifts between the sub-apertures are estimated and corrected, and the front of the car is clearly visible in the spatial representation.
VIII. CONCLUSION
In this paper, a signal model for MIMO radar networks of different topologies has been derived.The model makes it possible to systematically derive conditions regarding the frequency deviation which the reference oscillators must fulfill in order to estimate and correct the coupling-induced errors and enable coherent processing of all radar data in the range, the velocity, and the angular domain.The conditions are universally applicable to all uncoupled radar networks, regardless of the frequency of the reference oscillators, the RF frequency at which the radar operates, or the waveform parameters, since all of those parameters have been accounted for.
For LF coupled radar networks, the conditions in terms of frequency deviation of the reference oscillators are always fulfilled, but the frame trigger jitter induces similar errors in the range and angular domain.The frame trigger jitter distribution is device specific and depends on the implementation of frame triggering by the radar chip manufacturer.The derived model accounts for the frame trigger jitter.Conditions imposed on the frame trigger jitter, which allow for the omission of the estimation and correction of the induced errors, have been discussed.
A radar network consisting of up to 3 radar sensors has been realized, and the measurements proved the signal model for both the uncoupled and LF coupled radar networks.It was shown that the influence of the coupling-induced errors on the radar signal is strictly symmetrical, which allows for simple but yet effective methods of estimation and correction in order to enable coherent radar processing for uncoupled and LF coupled radar networks.The effectiveness of the proposed methods to estimate and correct the coupling-induced errors even for moving extended targets was verified by measurements.Measurements demonstrated that phase noise has insignificant influence on establishing coherency between the sensors.
Even though the derived requirements for the realized radar network, in which the reference oscillator frequency is just 40 MHz and the operating RF frequency is around 77 GHz, are very stringent, it was shown that the requirements can easily be fulfilled by TCVCXO if they are manually tuned.Since the frequency deviation of the reference oscillators is directly measurable by evaluating the 2D-power spectra of sub-apertures, automatic tuning of the reference oscillators is theoretically possible.
FIGURE 2 .
FIGURE 2. Left: block diagram of a single radar chip.Right: block diagram of a single radar sensor.
FIGURE 3 .
FIGURE 3. Virtual aperture and sub-apertures of the 2-sensor radar network.The color of the ring indicates the sensor affiliation of the Tx antenna, the color of the circle indicates the sensor affiliation of the Rx antenna.Yellow elements are redundant Vx antenna elements.
FIGURE 4 .
FIGURE 4. Chirp signal and timings created by the synthesizer and the timing generator.
FIGURE 5 .FIGURE 6 .
FIGURE 5. FFT-based evaluation of the power spectrum for the signal defined by (16) for varying α and fixed parameter of N = 1024, f s = 10 MHz, f 0 = 2.5 MHz, and A = 1.
FIGURE 8 .
FIGURE 8. Block diagram of the processing chain to estimate and correct the bistatic sub-apertures.
FIGURE 18 .
FIGURE 18. Range power spectrum comparison between a monostatic sub-aperture (RF coupled), a LF coupled sub-aperture and an uncoupled sub-aperture.
FIGURE 19 .
FIGURE 19. Velocity power spectrum comparison between a monostatic sub-aperture (RF coupled), a LF coupled sub-aperture and an uncoupled sub-aperture.
FIGURE 23 .
FIGURE 23. 2-Sensor uncoupled network outdoor measurement. (a) Measurement setup and target. (b) Uncorrected 2D-power spectrum of the complete network in dB. (c) Corrected 2D-power spectrum of the complete network in dB. (d) x-y plot in dB.
| 12,495 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
GNSS Scintillations in the Cusp, and the Role of Precipitating Particle Energy Fluxes
Using a large data set of ground‐based GNSS scintillation observations coupled with in situ particle detector data, we perform a statistical analysis of both the input energy flux from precipitating particles and the observed occurrence of density irregularities in the northern hemisphere cusp. By examining trends in the two data sets relating to geomagnetic activity, we conclude that observations of irregularities in the cusp become increasingly likely during storm‐time, whereas the precipitating particle energy flux does not. We thus find a weak or nonexistent statistical link between geomagnetic activity and precipitating particle energy flux in the cusp. This is a result of a previously documented tendency for the cusp energy flux to maximize during northward IMF, when density irregularities tend not to be widespread, as we demonstrate. At any rate, even though ionization and subsequent density gradients directly caused by soft electron precipitation in the cusp are not to be ignored as a trigger of irregularities, our results point to the need to scrutinize additional physical processes for the creation of irregularities causing scintillations in and around the cusp. While numerous phenomena known to cause density irregularities have been identified and described, there is a need for a systematic evaluation of the conditions under which the various destabilizing mechanisms become important and how they sculpt the observed ionospheric “irregularity landscape.” As such, we call for a quantitative assessment of the role of particle precipitation in the cusp, given that other factors contribute to the production of irregularities in a major way.
Throughout this manuscript, we will only use phase scintillations, as they are more common at high latitudes, and we shall use GNSS scintillations as a proxy for ionospheric irregularities (e.g., Meziane et al., 2023).
Scintillations are recorded when the GNSS radio signal passes through turbulent plasma structures in the ionosphere (Yeh & Liu, 1982).Since the GNSS scintillation receivers are fixed on the ground, scintillation observations will be sensitive to low-amplitude fast-growing irregularities and high-amplitude slow-growing irregularities alike, regardless of how long-lived those irregularities are, as long as the small-scale fluctuations (on the order of ∼ a few kilometers) induced in the plasma are sufficiently intense.Absolute density fluctuations (as opposed to relative fluctuations) are most important (Aarons et al., 1981;Jin et al., 2018).The scintillation-inducing fluctuations in plasma density are thought to be caused by turbulent redistribution of irregularity power (Hamza et al., 2023).As such, scintillations are in general triggered by ionospheric irregularities over a wide range of spatial scales (Jin et al., 2014;Kintner et al., 2007;van der Meeren et al., 2014), though specific irregularity scale sizes become an important factor in determining whether amplitude and/or phase scintillations are triggered (e.g., Song et al., 2023).
There are two distinct main scenarios that have been considered regarding the formation of ionospheric irregularities in the cusp ionosphere (Jin et al., 2017).One is during relatively quiet times, when no classical polar cap patches (or Tongue Of Ionization, TOI) are created in the cusp region.The other is invoked for more disturbed conditions, when the expanded ionospheric convection brings in high density plasma from the sunlit sub-auroral region to form polar cap patches (Carlson, 2012;Lockwood & Carlson Jr, 1992).For the first scenario, Kelley et al. (1982) proposed that the soft electron precipitation is an important source of large-scale (>10 km) ionospheric structures in the cusp region.Sharp density gradients on the edges of such large structures then feature in plasma instability processes, such as the Gradient Drift Instability (GDI, Tsunoda, 1988), to create smaller scale ionospheric irregularities (Moen et al., 2002).This basic process is thought to explain why soft electron precipitation should be important for the production of irregularities in the cusp.Case studies using in situ measurements by sounding rockets and satellites in low-Earth-orbit later confirmed that soft electron precipitation is indeed a source of ionospheric irregularities in the cusp ionosphere (Goodwin et al., 2015;Jin et al., 2019;Moen et al., 2012;Spicher et al., 2015).These case studies were conducted during relatively quiet times, typically during deep winter when the solar terminator is significantly equatorward of the high-latitude convection throat, where classical high-density polar cap patches do not form.We note that although some events meet the criteria that electron density inside a plasma patch be at least two times higher than the background density (Crowley, 1996), the absolute density in these cases can be relatively low (1-2 × 10 11 m −3 ).Such low-density patches are termed "baby" patches by Hosokawa et al. (2016), since they are created by auroral structures such as Poleward Moving Auroral Forms (PMAF, Sandholt et al., 1986Sandholt et al., , 1998)).
In a more recent study, Jin et al. (2017) directly compared the ionospheric irregularities for the two scenarios with and without classical polar cap patches in the cusp region.The authors demonstrated that while soft electron precipitation can create weak to moderate GNSS scintillations, occurrence rates for the latter are significantly enhanced in the cusp ionosphere when classical polar cap patches are present.The differing results depending on whether there are patches present in the cusp were explained by the combined effect of polar cap patches and cusp dynamics: while polar cap patches provide the main body of high-density plasma, cusp dynamics act to structure the patches on smaller scales.In this respect, flow shears (Basu et al., 1990;Spicher et al., 2020), intense small-scale FACs (Lühr et al., 2004), and auroral precipitation (Moen et al., 2012;Oksavik et al., 2015) have all been shown to play significant roles in generating ionospheric irregularities.In other words, the menagerie of processes and mechanisms capable of producing cusp irregularities contain many specimens, of which soft electron precipitation is often highlighted as an important source of both ionization and free energy.However, there is a need to assess the relative importance and separate contribution of each source of free energy and under which geomagnetic conditions a particular mechanism prevails or even dominates.
On top of the need to identify the relative importance of shears, FACs and precipitation for the triggering of plasma instabilities, there is a need to address another question that is likely related to the interplay between these destabilizing factors, namely, the stark contrast reported in the literature between the seasonal variations in the cusp between soft electron precipitation and the occurrence of scintillation.For one thing, the dayside number flux of precipitating electrons and ions largely maximizes during local summer (Newell & Meng, 1988b;Newell et al., 2010), and during geomagnetically quiet conditions (Newell et al., 2009).This seasonal effect is some-times explained by the impact of dayside Pedersen conductance, which strongly depends on the incident sunlight (Brekke & Moen, 1993;Vickrey et al., 1981), whereas the preference for geomagnetic quiet conditions can be explained by a preference for northward IMF on the dayside (Newell et al., 2009).On the other hand, in opposition to the inferred cusp precipitation trend, climatological studies of GNSS scintillations show that scintillation occurrences in the cusp are higher during local winter and during geomagnetically active conditions (Alfonsi et al., 2011;Jin et al., 2015;Prikryl et al., 2015).
In order to add more substance to the cusp irregularity generation question and to shed light on what appears to be opposite seasonal trends, we have put together a statistical analysis of two large data sets of both in situ observations of particle precipitation by Defense Meteorological Satellite Program (DMSP) satellites and ground-based GNSS scintillation data in the northern hemisphere. From the DMSP satellites' particle detector instrument we collected data from 52,000 crossings over the high-latitude northern hemisphere made during 3 years near the peak of the 24th solar cycle (2014-2016). For the same time period, we also collected continuously recorded GNSS scintillation indices from three GNSS stations located in Svalbard, Norway. Through a statistical aggregation, and through direct in situ detection of the cusp, we demonstrate that the energy flux of precipitating particles decreases in the cusp during local winter and actually tends to decrease as geomagnetic activity increases, though with a very large spread around that decrease. At the same time, we demonstrate that the scintillation occurrence rate increases drastically with increasing geomagnetic activity. The lack of statistical association between irregularities and particle precipitation in the cusp reinforces earlier suggestions that processes/sources other than soft electron precipitation are playing a key role in creating the more intense scintillation that is observed in the cusp during geomagnetically active times.
Instrumentation and Methodology
There are two aspects to the methodology used in this study.First is a database of precipitating electron and ion data from the SSJ instrument on the F16, F17, F18, and F19 satellites of the DMSP.The DMSP satellites are in helio-synchronous dawn-dusk polar orbits at an altitude of around 840 km, covering most of the region of interest to the present paper, the dayside high-latitude ionosphere in the northern hemisphere.The SSJ instrument uses particle detectors to measure the energy flux of precipitating electrons and ions through 19 energy channels from 30 eV to 30 keV, with a cadence of 1 s (Redmon et al., 2017).We characterize soft electron precipitation by integrating over energy channels from 30 to 650 eV, following the method outlined in Redmon et al. (2017).We classify each precipitating particle spectrum whenever we find it to be directly sampled in the cusp, following a widely used definition of the cusp given by Newell and Meng (1988a).This means that a cusp datapoint is defined as having an average electron energy lower than 220 eV, and an average ion energy higher than 300 eV and lower than 3,000 eV.In addition, the electron energy flux through channels 2 and 5 keV should be lower than 10 7 keV cm −2 s −1 ster −1 , and the total integrated ion energy flux should exceed 2 × 10 9 keV cm −2 s −1 ster −1 .The different satellites exhibit slightly different energy fluxes statistically, which is likely due to instrument calibration.However, after testing, we have concluded that the slight measurement variations do not influence the results in any systematic way.Note that "total integrated energy flux" refers to differential energy flux integrated across energy channels and is denoted jetot in the figures.Figure 1 shows an example of a pass through the cusp by DMSP F19, where all the mentioned criteria are met.The data were obtained around 06:45 UT on 6 December 2014.Panel (a) shows the orbit, and panels (b) and (c) show electron and ion energy flux respectively, with the cusp precipitation "patch" indicated by a black square.In this case, the cusp datapoint stretches over an orbital stretch of 85 s, corresponding to 646 km of distance.This is double the median size of a typical cusp crossing in the data set, which is around 40 s of data per pass (excluding passes where the cusp was not detected at all).Data such as that shown in Figure 1 are used in the analysis to come, but first we need to introduce the scintillations data set used in the present study.
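For reference, the cusp selection quoted above can be written as a simple predicate applied to each 1-s SSJ spectrum; the function and argument names are our own, and the flux arguments are assumed to be supplied in the units stated in the text.

```python
def is_cusp(avg_e_energy_ev, avg_i_energy_ev,
            e_flux_2kev, e_flux_5kev, total_i_eflux):
    """Cusp classification of a single SSJ spectrum, following the criteria
    quoted above (after Newell and Meng, 1988a).

    avg_e_energy_ev / avg_i_energy_ev: average electron / ion energy in eV.
    e_flux_2kev, e_flux_5kev: electron energy flux in the 2 keV and 5 keV
        channels, in keV cm^-2 s^-1 sr^-1.
    total_i_eflux: ion energy flux integrated over all channels,
        in keV cm^-2 s^-1 sr^-1.
    """
    return (avg_e_energy_ev < 220.0
            and 300.0 < avg_i_energy_ev < 3000.0
            and e_flux_2kev < 1e7
            and e_flux_5kev < 1e7
            and total_i_eflux > 2e9)
```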
The scintillation database comes from ground-based observations of the σ ϕ radio index, using vertical phase scintillation calculations (Jin et al., 2018;Spogli et al., 2013).We perform these calculations on data from three GNSS receivers on Svalbard, Norway (Oksavik, 2020), located in Ny Ålesund (78.9°N, 11.9°E), Kjell Henriksen Observatory (78.1°N, 16°E), and Bjørnøya (74.5°N, 19°E).We selected a 30° elevation cut-off and an ionospheric piercing point altitude of 350 km, and used satellites from the GPS and Galileo systems.The total time period for the two data sets in the present study stretches from 2014 through 2016, and roughly captures the 24th solar cycle peak.We consider northern hemisphere observations collected in all seasons, where we define a season as a 90-days period centered on a solstice in the case of summer or winter, with the rest classified as equinox.
We collected and stored the quantities covered above, and also extracted the value of several geomagnetic indices and solar wind-magnetosphere-ionosphere coupling functions, with the goal of quantifying the ebb and flow of solar wind-energy being injected into the ionosphere.To start with, we used the SME-index, which provides a global assessment of the intensity of Hall currents from several hundred ground-based stations in the auroral electrojet, and is therefore able to provide a global view of the geomagnetic activity resulting from the coupling with the solar wind which starts at the cusp (Cowley, 2000).The SME-index has indeed been shown to accurately quantify the total auroral energy input into the nightside aurora (Gjerloev, 2012;Newell & Gjerloev, 2011).We also considered the Sym-H index, which measures the storm-time ring current (Wanliss & Showalter, 2006) and is widely used to characterize magnetic storms.However, the SME-index is useful not just for storms but also for magnetospheric substorms that need not be part of clearly identifiable storm.From space, we collected observations of the interplanetary magnetic field (IMF) and solar wind, using 1-min omni data timeshifted to the bowshock (Papitashvili & King, 2020).Based on the latter, we calculated the so-called Newell coupling function, namely, the rate at which magnetic flux is opened at the magnetopause (dΦ/dt, Newell et al., 2007).We also computed the so-called Kan-Lee electric field, which quantifies "the power delivered by the solar wind dynamo to the open magnetosphere" (Kan & Lee, 1979, p. 577).
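Both coupling quantities can be computed directly from the time-shifted 1-min OMNI data. The sketch below uses the standard functional forms of the Newell coupling function and the Kan-Lee electric field; dΦ/dt is returned in the conventional mixed units (v in km/s, B in nT), that is, up to the customary constant of proportionality.

```python
import numpy as np

def coupling_functions(v_sw_kms, by_nt, bz_nt):
    """Newell coupling function dPhi/dt and Kan-Lee electric field from
    solar wind speed (km/s) and IMF By, Bz (nT, GSM).

    dPhi/dt is returned in the usual mixed units (v in km/s, B in nT),
    i.e. up to the conventional constant; E_KL is returned in mV/m.
    """
    b_t = np.hypot(by_nt, bz_nt)            # transverse IMF magnitude
    theta_c = np.arctan2(by_nt, bz_nt)      # IMF clock angle
    s = np.abs(np.sin(theta_c / 2.0))
    dphi_dt = v_sw_kms ** (4.0 / 3.0) * b_t ** (2.0 / 3.0) * s ** (8.0 / 3.0)
    # E_KL = v * B_T * sin^2(theta_c / 2); km/s * nT converted to mV/m
    e_kl = v_sw_kms * 1e3 * b_t * 1e-9 * s ** 2 * 1e3
    return dphi_dt, e_kl

# Example with typical solar wind values:
print(coupling_functions(450.0, 3.0, -5.0))
```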
Results
First, we aggregated DMSP data along with scintillation indices from Svalbard.This resulted in Figure 2, which displays the entire data set in terms of 18 climatological maps of the high-latitude dayside ionosphere.Here, all data are plotted using magnetic local time (MLT) and magnetic latitude (MLAT) as coordinates (Baker & Wing, 1989).In each spatial bin, we took the occurrence rate of σ ϕ > 0.15 rad events, and the median soft electron energy flux obtained from an integration over channels lower than 1 keV.Panels (a-i) show the GNSS scintillation occurrence rate and panels (j-r) the median integrated soft electron flux.Each row represents a local season and each column shows geomagnetic disturbance binned by the SME-index.Each map shows data binned in MLT and MLAT (>65°), with noon pointing upwards and dawn-side to the right.GNSS scintillation occurrence rates are calculated by taking the proportion of σ ϕ index values greater than 0.15 in each bin.A color scale is used to identify intensity levels, with gray to signify a lack of data.
The three columns in Figure 2 indicate the following different geomagnetic disturbance levels; an SME-index value lower than 103 nT, indicative of quiet observations; between 103 and 234 nT indicative of disturbed conditions; and a value greater than 234 nT to characterize extreme situations.These are the 33rd and 67th percentile value of the SME-index for the years 2014-2016, and so the three categories constitute a third of the total data set each, and the extreme category features a median SME-index value of 400 nT.Note that further discussion concerning the usefulness of the SME-index is provided in an appendix to this paper.Suffice to say that an analysis that bins the data using any of the indices mentioned so far does produce similar results (we refer also to Figure 4 later).Lastly, note that in all panels, a series of black contour lines indicates the distribution of DMSP datapoints having a cusp-occurrence rate greater than 10%, 20%, and so forth, until 50%.The 10%-line is always the outermost contour.As most bins have in fact less than 50% cusp datapoints, the median conditions are unlikely to reflect the cusp.The precipitating particle data presented in Figure 2 thus shows a dayside or noon-sector climatology.
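The aggregation behind these maps can be sketched as follows: observations are split into the three SME activity levels and binned in MLT/MLAT, and the σϕ > 0.15 rad occurrence rate is computed per bin. Array names and the spatial bin widths are illustrative choices, not necessarily the exact grid used for Figure 2.

```python
import numpy as np

def occurrence_map(mlt, mlat, sigma_phi, sme, sme_edges=(103.0, 234.0),
                   threshold=0.15, mlat_min=65.0, n_mlt=24, n_mlat=10):
    """Occurrence rate of sigma_phi > threshold per MLT/MLAT bin, split into
    the three SME activity levels (quiet, disturbed, extreme)."""
    mlt_edges = np.linspace(0, 24, n_mlt + 1)
    mlat_edges = np.linspace(mlat_min, 90, n_mlat + 1)
    activity = np.digitize(sme, sme_edges)   # 0: quiet, 1: disturbed, 2: extreme
    maps = np.full((3, n_mlt, n_mlat), np.nan)
    for level in range(3):
        sel = activity == level
        hit = sel & (sigma_phi > threshold)
        total, _, _ = np.histogram2d(mlt[sel], mlat[sel],
                                     bins=[mlt_edges, mlat_edges])
        hits, _, _ = np.histogram2d(mlt[hit], mlat[hit],
                                    bins=[mlt_edges, mlat_edges])
        with np.errstate(invalid="ignore", divide="ignore"):
            maps[level] = np.where(total > 0, hits / total, np.nan)
    return maps
```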
Quiet time observations of the dayside (first column) are characterized by an overall low occurrence of GNSS scintillations and a high flux of soft electrons, especially for the equinoxes.During disturbed conditions (second column), strong GNSS scintillations occur more frequently at MLATs exceeding 75°, while the flux of soft electrons seems to diminish slightly compared to quiet times.Finally, during extremely active conditions (third column), GNSS scintillation occurrence reaches a clear peak in each season, at which point the dayside soft electrons seem to have reached a clear minimum.Indeed, panel (i) of Figure 2 contains fully one third of all σ ϕ > 0.15 rad events in our database, despite containing a clear minimum in the dayside soft electron energy flux.
However, as mentioned, occurrence rates for direct observations of the cusp are relatively low, and so to investigate conditions inside the cusp we will now show the results from performing a statistical analysis on all 1 million datapoints that were determined to be inside the cusp proper, using the Newell and Meng (1988a) definition described above.We start by binning the data set by the IMF B Z (Figure 3), followed by binning the data set in all five geomagnetic indices in turn (Figure 4).In the figures to come, we only show winter cusp-detections, as scintillations maximize during this season.Later, in the discussion section (Figure 5), we shall show an analysis of the seasonal trends behind cusp-electron energy flux and scintillation occurrence.
Returning to the task at hand, in Figure 3 we present an analysis where we now bin winter cusp detections by IMF B Z (taking the 30-min median value to account for distance traveled from the bowshock).In the first column, we plot the prevalence of density irregularities, represented by the occurrence rate of σ ϕ > 0.15 rad events occurring within 2° MLAT of the average latitudinal cusp locations.Each panel in the first column corresponds to one of seven B Z bins, where the first and the last bins contain 15% of the data set on both ends of the distribution, with the remaining bins linearly spaced between those two extreme bins.This way, each bin contains roughly the same number of observations.We integrate over MLAT and plotting the data as functions of MLT (x-axis).The next two columns show the precipitating electron (second column) and ion (third column) energy flux, with energy channel along the y-axes.We plot the median energy flux through each energy channel for each local time.The "severe northward IMF"-bin (B Z > 2.7 nT) is on the top of the page, while the extreme opposite bin (B Z < −2.7 nT) is located on the bottom, with the center bin corresponding to |B Z | < 0.6 nT.
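The bin edges described here (15% tails with linearly spaced interior edges) can be constructed as in the following sketch, with placeholder variable names.

```python
import numpy as np

def tail_and_linear_bins(x, n_bins=7, tail_frac=0.15):
    """Bin edges with the first and last bins each holding `tail_frac` of the
    data and the remaining bins linearly spaced in between, as used for the
    IMF Bz binning described above."""
    lo, hi = np.nanpercentile(x, [100 * tail_frac, 100 * (1 - tail_frac)])
    inner = np.linspace(lo, hi, n_bins - 1)        # edges of the interior bins
    return np.concatenate(([np.nanmin(x)], inner, [np.nanmax(x)]))

# Example: bz holds the 30-min median IMF Bz matched to the cusp detections
# (placeholder name).
# edges = tail_and_linear_bins(bz)
# bin_index = np.digitize(bz, edges[1:-1])         # bin labels 0 .. 6
```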
From the fourth bin (Figure 3j) and downwards, there is a clear and systematic increase in GNSS scintillations.At the same time, for both ions and electrons, the magnitude of the precipitating particle energy flux is decreasing monotonically from the top-most bin to the bottom.The same is true for the number flux (which we show in Figure S1).This clearly indicates that scintillation occurrence and particle precipitation follow opposite trends in terms of the IMF B Z : the more southward the IMF is, the greater the scintillation occurrence whereas the same IMF changes mark a steady decrease in energy fluxes of both electrons and ions.
For a more in-depth exploration of this result, we applied the foregoing analysis to five geomagnetic indices or coupling functions.Figure 4 summarizes the results using the SME-index, the Sym-H-index, the Newell coupling function, the Kan-Lee electric field, and the IMF B Z .Similarily to Figure 3, we used seven bins for each index, with the first and last bins containing 15% of the data set on both ends of the distribution, while the remaining bins were chosen to be linearly spaced between those two extremes.We considered a data subset that contained winter DMSP cusp data and did not include other data devoid of satellite cusp crossings.Panels (a)-(e) show the resulting probability distributions that we obtained for the data subset.Panels (f)-(j) show, for each of the seven bins, the distribution found in the total integrated electron energy flux using only spectra that were inferred to strictly originate from the cusp.Panels (k)-(o) likewise present the binned occurrence rate of σ ϕ > 0.15 rad obtained between 10.5 and 13.5 hr MLT.Note that each panel has what amounts to the same limits along the x-axis: we show all data between the 0.5th percentile value of each index to the left and the 99.5th percentile value on the right.
Figure 4 shows that, owing to the smaller scatter (vertical errorbars) about the median values, the best of the five indices/coupling functions for parameterizing precipitation in the cusp is actually the IMF B Z . We also notice that when the SYM-H indicates a magnetic storm (values less than −20 nT), the energy flux in the cusp is at its minimum. However, this should correspond to larger SME values, which do a better job of relating to the cusp electron energy flux. This being stated, the SME does a better job than the IMF B Z at predicting scintillations, while it remains an adequate predictor of energy deposition by particles in the cusp when it exceeds 100 nT. Interestingly, the indices most directly related to the cusp, namely dΦ/dt and E KL , are extremely good statistical predictors of the scintillation activity, but the quiet-most bins in panels (m) and (n) are higher than the quiet-most bin in panel (k), meaning that the SME-index is best at separating the scintillations database. Like the other indices (except for B Z ), they predict that on average the cusp precipitation energy goes down as they take more extreme values, but, as in the SME case, the scatter about the median remains considerable when it comes to precipitating energy flux.
One important fact remains clear from Figures 3 and 4: no matter what is used to characterize magnetic activity in relation to cusp dynamics, whenever the intensity of scintillations in the cusp goes up, the energy flux at the cusp does not increase and in fact goes down on average, except for the slight tendency for scintillations to increase during severely positive B Z (Figure 4, panel o). In addition, the best controlling factor for energetic cusp particles is the IMF B Z : when the IMF is increasingly northward, the energy deposited by particles in the cusp keeps increasing.
Discussion
In this study, we have parameterized GNSS scintillations and cusp precipitation energy fluxes by several measures of geomagnetic activity. The σ ϕ > 0.15 rad occurrence changes dramatically (from a rate of ∼1% to a rate of ∼15%) following an increase in geomagnetic activity (Figure 4k through n). Conversely, the median energy flux of precipitating particles does not increase statistically with increased geomagnetic activity or with strong activity in the cusp, and in fact shows a slight tendency to decrease (Figure 4f through i).
We have shown that while these facts are particularly evident for the winter cusp, similar trends exist for the whole dayside region and across seasons (Figure 2). That the trends in energy and number fluxes appear, if anything, to be decreasing rather than increasing during storm-time strongly suggests that soft precipitation is not driving the increased scintillation occurrence rates during increasingly disturbed conditions. Certainly, other sources play a major role in causing ionospheric scintillations during storm-time, and some of these are not associated with particle precipitation at all. There is in fact a striking connection between dΦ/dt or E KL and the scintillations, suggesting that we should look for parameters linked to the dynamics of the cusp.
Convection and Polar Cap Patches
There is no doubt that dayside scintillation mostly occurs near the cusp region, as has been shown by many previous studies (Alfonsi et al., 2011; De Franceschi et al., 2019; Jin et al., 2015; Moen et al., 2013; Prikryl et al., 2015). By combining a collocated GNSS scintillation receiver and all-sky imager, Jin et al. (2015) demonstrated that the dayside scintillation region is closely collocated with the active cusp auroral region for all solar wind and IMF conditions. However, the plasma processes are highly complicated in the cusp due to the complex solar wind-magnetosphere-ionosphere coupling. This is a region where soft particles from the magnetosheath directly enter the ionosphere and cause impact ionization. Transient reconnection on the dayside magnetosphere will also impact this region through flux transfer events (FTEs, Southwood et al., 1988). The ionospheric signatures of FTEs include enhanced ionospheric flow and/or flow shears, field-aligned currents, and auroral particle precipitation (Carlson, 2012; Southwood et al., 1988). Moreover, Jin et al. (2015) showed that GNSS phase scintillations tend to occur during positive IMF B Y . This has been explained by the intake of higher-density plasma in the afternoon sector.
In the context of a lack of change or of a decrease in energy deposition by energetic particles, there is a need to explain the enhancement in scintillations seen when the interaction with the solar wind is felt more forcibly near its entry point at the cusp. We can think of at least two inter-related processes that can contribute to the increased scintillation activity during disturbed times: enhanced ionospheric flow and drastic density enhancements brought about by the TOI or by polar cap patches. Upon inspection of Figure 4o, we see that scintillation events tend to occur during severely southward IMF. During such geomagnetically disturbed conditions, the area covered by the high-latitude ionospheric convection pattern expands and the flow intensifies. The expanded convection can transport high-density plasma from lower latitudes to form the TOI/polar cap patches (Clausen & Moen, 2015). Compared to the density enhancements produced by soft precipitation, the density of the TOI/polar cap patches is considerably higher in the topside F region (Carlson, 2012; Clausen & Moen, 2015). Owing to the greater densities at F-region altitudes, density structures in the TOI/polar cap patches have a much longer lifetime than precipitation-induced structures, if and when the latter are created lower down, where dissipation is quicker owing to chemical recombination (Ivarsen et al., 2021a; Schunk & Sojka, 1987).
Convecting high-density plasma associated with the TOI/polar cap patches provides an excellent breeding ground for plasma instabilities. For example, the growth rate of the GDI depends both on plasma drift velocities and on the steepness of plasma density gradients (Makarevich, 2017; Tsunoda, 1988). During particularly disturbed conditions, the increased flow velocity is therefore expected to create more irregularities through the GDI. In addition, flow shears related to reversed flow events can activate the shear-driven Kelvin-Helmholtz instability (KHI) (Keskinen et al., 1988; Spicher et al., 2016). The KHI is thought to be more efficient in generating large- and intermediate-scale plasma gradients (Carlson, 2012; Carlson et al., 2008). In turn, the GDI works to break these newly created intermediate-scale structures into smaller-scale ones (Carlson et al., 2007). Lastly, intensified AC electric fields can induce turbulent mixing, but this effect is largely unexplored due to insufficient observations (Burston et al., 2016). We are investigating ion drift speeds in relation to observed density irregularities in a related publication.
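For orientation, the simplest fluid estimate of the GDI growth rate captures the dependence just described: faster plasma drifts and steeper density gradients (shorter gradient scale lengths) both increase the growth rate. The expression below is illustrative only and is not the specific formulation used in the cited studies:

```latex
% Illustrative order-of-magnitude estimate (assumed form, not quoted from the text):
% gamma_GDI ~ drift speed divided by the density-gradient scale length L_n.
\gamma_{\mathrm{GDI}} \sim \frac{V_{E\times B}}{L_n},
\qquad L_n \equiv \left| \frac{1}{n}\frac{\partial n}{\partial x} \right|^{-1}
```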
To summarize, various localized and transient energetic dayside phenomena other than soft electron precipitation offer ways for particle precipitation near the cusp to influence irregularity production. PMAFs occur during dayside reconnection (Hosokawa et al., 2016) and are associated with plasma structuring (Oksavik et al., 2015). The energy transfer associated with Alfvén waves maximizes during southward IMF and on the dayside (Billett et al., 2022; Ivarsen et al., 2020). FACs, associated with precipitating particles or Alfvén waves, can trigger the current convective instability (Ossakow & Chaturvedi, 1979). In fact, bursts of intense kilometer-scale FACs frequently occur on the dayside during elevated geomagnetic activity (Rother et al., 2007) and are associated observationally with the cusp (Lühr et al., 2004). These are some of the topics that must be investigated in future studies of cusp-associated dynamics.
Long-Term Trends in Scintillation Occurrence: A Case for Irregularity Dissipation
There is another tantalizing mechanism by which a reduction in particle precipitation will in fact facilitate the occurrence of plasma irregularities. For the cusp region, it involves ion precipitation rather than soft electron precipitation. Although the energy flux of precipitating electrons in the cusp can be orders of magnitude higher than that of ions, the entire cusp-ion energy flux will end up ionizing the E-region (Fang et al., 2013). Its effect on the Pedersen conductance will therefore be much greater than that of the soft electron flux, which typically ionizes F-region altitudes (Fang et al., 2010). In the relative absence of solar EUV photoionization (such as during local winter), the statistical decrease in ion energy flux seen in Figure 3 when the IMF B Z becomes southward will then cause a decrease in the expected Pedersen conductivity. Since the Pedersen conductivity peaks in the E-region, a decrease in conductivity will translate into a decrease in the ratio of E- to F-region conductance, a ratio that is proportional to irregularity dissipation rates (Ivarsen et al., 2021b). This will in turn affect irregularity occurrence (Kane et al., 2012; Lamarche et al., 2020; Vickrey & Kelley, 1982).
The fact that high-latitude dissipation rates vary cyclically between solstices might be an important contributor to the general seasonal trends observed in high-latitude irregularities. Illustrating this, we show in Figure 5 an analysis of how cusp-associated scintillation occurrence evolves on long timescales. First, we bin the scintillation data set by Carrington rotations, 27-day periods in which the Sun makes a full rotation (Carroll & Ostlie, 1996). We then calculate the occurrence rate of σ ϕ > 0.15 rad events over Svalbard between 10.5 and 13.5 hr MLT within each Carrington rotation. As geomagnetic activity depends to some extent on Carrington rotations, each bin will be impacted by a different dominant solar wind condition, changing from bin to bin. Some bins will have strong cusp interactions while some will not. Decay in irregularities inside each Carrington rotation, as measured from one rotation to the next, will largely reflect long-term changes from rotation to rotation in both solar EUV photoionization and geomagnetic activity.
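A minimal sketch of the Carrington-rotation binning described above follows; this is not the authors' code, and the variable names, the reference epoch t0, and the use of the mean synodic rotation period of ~27.28 days are assumptions made for illustration:

```python
# Minimal sketch (assumed, not the authors' code) of binning observations by
# Carrington rotations (~27-day solar rotation periods) and computing the
# sigma_phi > 0.15 rad occurrence rate in the 10.5-13.5 hr MLT sector.
import numpy as np
import pandas as pd

CARRINGTON_DAYS = 27.2753  # mean synodic solar rotation period (assumed value)

def carrington_occurrence(times, sigma_phi, mlt, t0, threshold=0.15):
    """times: datetimes; t0: start of the first rotation in the data set."""
    days = (pd.to_datetime(times) - pd.Timestamp(t0)) / pd.Timedelta(days=1)
    rotation = np.floor(np.asarray(days) / CARRINGTON_DAYS).astype(int)
    in_cusp_mlt = (np.asarray(mlt) >= 10.5) & (np.asarray(mlt) <= 13.5)
    df = pd.DataFrame({"rotation": rotation[in_cusp_mlt],
                       "event": np.asarray(sigma_phi)[in_cusp_mlt] > threshold})
    return df.groupby("rotation")["event"].mean()  # occurrence rate per rotation
```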
Figure 5 plots each Carrington rotation in sequence, with the scintillation occurrence rate on the y-axis, for the 3-year period under consideration. As a solid red line we plot an empirical model that reflects both changes in geomagnetic activity and changes in solar EUV photoionization as the 24th solar cycle approached its minimum. Appendix B derives this model (in particular, Equation B3). The declining solar cycle ensures an overall decrease in the winter occurrence rates. This decline is associated with changes in the F 10.7 solar flux, which we show as yellow hexagrams.
Ionospheric conductance is affected both by the solar cycle and by the turning of the seasons, and conductance will in turn affect bulk plasma flow through basic relations between conductivity and the ionospheric electric field. The long-term effects associated with conductivity changes are thus manifold, and it is difficult at present to untangle the exact role played by irregularity dissipation rates from other conductance-related governing mechanisms. Nevertheless, Figure 5 shows clearly that whatever is driving cusp irregularity occurrence is highly dependent on season and solar cycle.
How do the seasonal changes in the cusp-associated precipitating energy flux compare? Figure 6 shows the distributions of summer (blue) and winter (orange) total integrated electron energy fluxes in the cusp-identified DMSP measurements. The figure demonstrates that the distributions are markedly similar, with only a slight tendency toward higher energy flux during summer. It is therefore safe to say that the cusp-associated energy flux does not vary much with changing season. Nevertheless, the right-side tails of the distributions show a relatively clear seasonal contrast, with the extreme (98th percentile) values being separated appreciably. In other words, the cusp-associated energy flux maximizes during summer, opposite to that of the scintillations (Figure 5). In Appendix B, we present an analysis of the seasonal trends in the 98th percentile energy flux.
Particle Precipitation and Geomagnetic Activity
We have presented the case for a quantitative evaluation of the role of cusp particle precipitation, based on the concurrent observation of increased irregularity occurrence together with a persistent non-increase in particle precipitation. This prompted the discussion of convection and polar cap patches in Section 4.1. This being stated, the variation in cusp-associated precipitating particle energy flux with changing geomagnetic activity is of interest in and of itself. Why do both the ion and the electron energy flux appear to decrease with increasingly southward IMF (Figure 3)? To address this question we produced in Figure 7 a plot based on the present DMSP cusp analysis together with measurements collected in the dawn sector (between 2 and 7 hr MLT). The intent here is to compare cusp precipitation to that of the early morning aurora. We therefore limited the comparison to dawn-side DMSP observations with a total integrated energy flux exceeding 10⁹ keV cm⁻² s⁻¹ ster⁻¹, which is a reasonable floor based on the data. We binned the resulting 6.2 million precipitating energy spectra by geomagnetic indices, as we had done in Figure 4. To facilitate a clear comparison between the cusp and dawn sectors, we now show the percentage change in energy flux, where 0% marks the quiet-most bin. In all five panels, the slight decrease in the cusp-associated energy flux is accompanied by an increase in the dawn-side energy flux (the exception being positive IMF B Z , during which conditions both energy fluxes increase). In other words, an opposite trend appears between the energy flux in the cusp and in the dawn-side aurora. The present paper is, however, not the first study to point out this opposite relationship. Figures 9 and 10 in Newell et al. (2009) show that the number fluxes of the "diffuse electron aurora" and of ions maximize in the cusp during quiet conditions. The same two figures show unambiguously that both fluxes maximize on the nightside during disturbed geomagnetic conditions. Panel (e) of Figure 7 thus supports the findings in Newell et al. (2009). The authors of that paper offered an explanation for the observations of smaller precipitating fluxes for southward IMF: the low-latitude boundary layer (LLBL) is thicker during northward as opposed to southward IMF, and the LLBL is associated with particle precipitation (Ogawa et al., 2003; Yamamoto et al., 2003). Newell et al. (2009) pointed out that the rate of field-line merging at the sunward-facing magnetosphere increases during southward IMF, and this merging involves relatively cold particles. The same mechanism allows hotter particles from the magnetotail to precipitate into the nightside diffuse aurora during southward IMF, as shown in Figure 9 of Newell et al. (2009) and in Figure 7j in the present paper. This goes far in explaining the opposing trends observed between precipitation and IMF B Z in Figures 3 and 4, which could in turn provide a rudimentary explanation for all the trends we observe in the present paper: the southward IMF causes reconnection events, spurring first nightside particle precipitation and then a drastic increase in cusp irregularities. The latter could come through various transient phenomena associated with reconnection events, which maximize during southward IMF. That the cusp-precipitation response to the changing direction of the IMF is different from, and in part opposite to, that of the irregularities may be a key insight when unraveling what is really causing irregularity growth in the cusp ionosphere.
Extreme Events
As they frequently appear in case studies, we now briefly address the prevalence of extreme events in our data set. Figure 8 bins the data akin to Figure 3, only now binning by the SME-index, using the same seven bins as in Figure 4. However, we now plot the distributions of each quantity. Here, we calculate the probability density function for each distribution, as given by pdf = c/(Nw), where c is the number of elements in each bin, N is the total number of elements, and w is the width of the bin (we omit y-axis information about the pdf values in order to focus only on the distribution shapes). In each panel of Figure 8 we indicate the 98th percentile value with a solid red line. We observe that as geomagnetic activity increases, the right-most tails of the scintillation distributions grow increasingly longer, and the 98th percentile value of the σ ϕ phase scintillation index doubles. At the same time, the energy flux tails widen only slightly (on both sides) throughout the SME interval. In other words, there is no clear tendency for more extreme precipitation events in the cusp with rising geomagnetic activity, as opposed to a clear tendency for more extreme scintillation events.
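For reference, the quoted estimate pdf = c/(Nw) and the 98th percentile marker can be reproduced with a few lines of code; this is an illustrative sketch only, with placeholder variable names (numpy's density=True histogram option computes the same quantity):

```python
# Illustrative sketch of the probability density estimate pdf = c / (N * w)
# and the 98th percentile marker drawn in each panel of Figure 8.
import numpy as np

def empirical_pdf(values, n_bins=50):
    values = np.asarray(values)
    counts, edges = np.histogram(values, bins=n_bins)  # c and the bin edges
    widths = np.diff(edges)                            # w
    pdf = counts / (values.size * widths)              # c / (N * w)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf                                # equivalent to density=True

# p98 = np.percentile(values, 98)   # the solid red line in each panel
```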
Conclusion
We analyzed a large data set of ground-based GNSS scintillation observations along with in situ precipitating particle observations. Based on a comprehensive statistical analysis of the broader dayside region (Figure 2) and the cusp (Figures 3 and 4), we have demonstrated that the cusp-associated precipitating particle energy flux decreases or stays the same during active conditions. By contrast, ionospheric irregularities in the cusp increase significantly with increasing geomagnetic activity.
Although apparently surprising, our results are broadly supported in the literature, where the seasonal and geomagnetic activity trends in precipitating energy flux and scintillations have been known to be opposite (e.g., Figure 2 in Newell et al., 2010; and Figures 2 and 4 in Prikryl et al., 2015). The result is that indices such as the SME-index, which uniquely measure the magnitude of the electrojet's Hall currents, do remarkably well in separating quiet from active conditions in the scintillations database, while not managing to parameterize the cusp energy flux in any meaningful way. (In Appendix A we show that the SME-index manages to simultaneously parameterize a southward turning of the IMF and an increase in solar wind dynamic pressure.) The clearly observed increase in cusp-associated plasma turbulence during geomagnetically active times (Figures 4f-4j) can be said to ultimately result from an injection of free energy, followed by an accelerated return to equilibrium, a process which is broadly responsible for the observed abundance of plasma irregularities in the cusp. If particle precipitation in itself were the dominant driver of irregularities during storm-time, the energy flux carried by precipitating particles would in large part be responsible for this energy injection. However, the results shown strongly suggest that the increased GNSS phase scintillation occurrence during storm-time is not driven by soft electron precipitation, and the energy pent up in the highly turbulent cusp plasma during storm-time likely has different origins.

Some Carrington-rotation bins occur during deep winter and feature exceedingly low plasma densities (Jin et al., 2018), to the extent that irregularity amplitudes are simply too low to excite scintillations. For clarity, these bins are removed from Figure 5.
Finally, we are in a position to write out the composite empirical model (solid red line in Figure 5); that is, Equation B1 + Equation B2, which together form Equation B3. What follows is a justification and a description of this composite model, where we also refer to the discussion in Section 4.2. First, the solar cycle term (Equation B1) ensures a steady decrease in occurrence rate during the declining phase of the 24th solar cycle. But the data also favor a decrease in annual variation. The linear solar zenith angle model (Equation B2) represents an expected direct relation between solar illumination (solar zenith angle) and dissipation rates and effective growth rates (Ivarsen et al., 2019). Since the zenith angle is a geometric quantity, its variation is perfectly cyclical with season, and so it must be damped to reflect the observed decreasing annual variation. With all three factors considered, Equation B3 captures the competing effects of a declining winter occurrence rate and a slightly rising summer occurrence rate. Except for the deep-winter outliers (yellow hexagrams in Figure B1), the composite model, Equation B3, fits the irregularity occurrence data well, both in terms of seasonal fluctuations and solar cycle trend. We thus see tentative evidence that the discussion of irregularity dissipation in Section 4.2 accurately describes long-term trends in cusp-associated plasma irregularities.
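Since Equations B1-B3 are not reproduced in the text, the following sketch only illustrates the ingredients described above: a slowly decaying solar-cycle baseline plus an annual, zenith-angle-driven term whose amplitude is damped with an e-folding time τ 2 of about 2.5 years. The functional forms and parameter names are assumptions, not the authors' model:

```python
# Heavily hedged sketch: Equations B1-B3 are not reproduced in the text, so the
# functional forms below are assumptions that only illustrate the described
# ingredients (a slowly decaying solar-cycle baseline plus a damped, annually
# varying solar-zenith-angle term). Parameter names are placeholders.
import numpy as np

def composite_occurrence(t_years, a0, a1, tau1, tau2, zenith_term):
    """
    t_years     : time since the solar-cycle peak, in years
    a0, a1      : fitted amplitudes of the baseline and annual terms
    tau1, tau2  : decay times of the baseline and of the annual variation
                  (the text quotes tau2 ~ 2.5 yr for the annual damping)
    zenith_term : callable giving the seasonal (zenith-angle-driven) variation
    """
    baseline = a0 * np.exp(-t_years / tau1)                        # "B1"-like
    annual = a1 * np.exp(-t_years / tau2) * zenith_term(t_years)   # "B2"-like
    return baseline + annual                                       # "B3"-like

# A purely seasonal stand-in for the zenith-angle term could be, e.g.:
# zenith_term = lambda t: np.cos(2.0 * np.pi * t)
```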
Lastly, we must briefly discuss the significance of the decay rate τ 2 in Equation B2, the long-term model used as a fit to the cusp-region scintillations in Figure 5. There, a decay rate of 2.5 years, coupled with the slowly decaying baseline trend (Equation B1), adequately describes the data. The former implies that the variation in cusp irregularity occurrence rates would experience an e-fold decrease every 2.5 years after the solar cycle peak. Together with the decreasing baseline (the solar cycle term), the two timescales quantify the decay in the expected maximum scintillation occurrence rate in the cusp during any given Carrington rotation period. This involves considering the damping term τ 2 in Equation B2 as a characteristic decay parameter, and Equation B3 as a novel way to consider plasma irregularity "lifetimes" on ultra-long timescales. Figure B2 shows the maximum (red) and minimum (blue) permitted annual occurrence rate within the model Equation B3, obtained by plotting that equation with the maximum and minimum possible annual variation, respectively. We validate the solar cycle trends with the occurrence rates for an extended time period, including data up until 2018. The long-term decay present in the red line, which is supported by the extended data set, shows a characteristic lifetime and documents how the landscape of northern hemisphere cusp plasma irregularities tended to decrease in severity as solar cycle 24 progressed toward its minimum. The decrease is strong: the winter occurrence rates decline from around 15% during the solar cycle peak to around 5% near the minimum. This decay, or characteristic lifetime, finds support in a recent study by Lovati et al. (2023), where the authors discuss this decline in relation to the F10.7 solar flux (see Figure 6 in that paper, and Figure 5 in the present paper). Note that the time period analyzed here is short, and so we cannot draw conclusions on general solar cycle trends. The results are primarily valid for the descending phase of Solar Cycle 24.
B1. Application to the Cusp Energy Flux
Published climatologies document seasonal trends in dayside precipitation (Newell et al., 2010). However, we are not aware of analyses of the seasonal trends in precipitation that is directly associated with the cusp, and so we present such an analysis here by applying the above empirical model to the 98th percentile cusp-associated energy flux, a quantity in which there is an appreciable seasonal contrast (see Figure 6). The relevance of the 98th percentile energy flux is heightened by Figure 8, which is concerned with extreme events in our two databases. We can then address the question of whether extreme precipitation events are more common during local winter, when scintillation events tend to occur.
Figure B3 shows an analysis similar to that of Figures 5 and B1: we bin the DMSP cusp-associated energy flux by Carrington rotations, taking the 98th percentile energy flux for each rotation. Low and high vertical errorbars now denote the 97th and 99th percentile flux, respectively. As geomagnetic activity is often somewhat cyclical in Carrington rotations, the 98th percentile energy flux is a good measure of the extreme flux events in each consecutive solar rotation. In Figure B3a, we subtract a solar cycle trend, where Φ denotes the 98th percentile total energy flux. τ 1 is unchanged from Equation B1, but the standard deviation σ is now halved. We then calculate a linear fit, but now with an intensifying term as the solar cycle progresses, τ 3 = 4 years, and the exponent is positive, meaning that the variation in the 98th percentile energy flux undergoes an e-fold increase 4 years into the declining phase of solar cycle 24. The dashed red line in Figure B3a shows the fit evaluated at the end of 2016, when the intensifying term has reached the value 2 (a doubling). Finally, panel (b) shows the Carrington rotation bins in sequence, with the composite fit (Equation B4 + Equation B5) as a solid red line, and the total annual variation as a function of solar cycle as a shaded light-blue area.
First, we note that there is considerable spread. The distributions in both panels are almost consistent with the solid red lines being flat, as is hinted at in Figure 6, where the distributions are markedly similar. Nevertheless, the tendency for a seasonal dependency is there: extreme precipitation events go through a maximum in energy flux during summer. On top of that, as is shown by the dashed red line in panel (a) and the shaded blue region in panel (b), extreme flux events in the cusp exhibit larger annual variability toward the solar cycle minimum. The seasonal and solar cycle 24 trends in cusp-associated electron energy flux are thus opposite to those of the scintillation occurrence rates (Figure 5), and in line with the "dayside diffuse electrons" (Newell et al., 2009, 2010).
Figure 1. Panel (a): a DMSP F19 pass through the cusp on 6 December 2014. Red markings show cusp detections. Panels (b) and (c): electron and ion energy flux with particle energy along the y-axes, and two x-axes showing MLAT (top) and MLT (bottom).
Figure 2. A northern hemisphere climatology of GNSS scintillation occurrence (panels a-i) and median integrated soft electron flux (panels j-r). Each row represents a local season (e.g., a-c show summer while g-i show winter), and each column represents geomagnetic activity in three SME-index bins with equal population counts. Black lines show where cusp datapoints were encountered, with occurrence rates from 10%, 20%, and so forth, until 50%, with the 10% line always being the outermost contour.
Figure 3. Local time slices through seven IMF bins for the cusp. Each IMF B Z bin aggregates a roughly equal number of orbital winter passes through the northern hemisphere cusp. The first column shows median GNSS scintillations, the second the median contents of the various electron energy flux channels, and the third shows the same, but for the ion energy flux.
Figure 4. Panels (a) through (e): probability distributions of five different indices or coupling functions as measured during the time period selected for this study. Panels (f) through (j): median DMSP electron energy flux recorded as a function of the changes in the various indices; errorbars denote upper/lower quartile distributions. Panels (k) through (o): change with the various index values of the proportion of events for which the phase scintillation index σ ϕ exceeded 0.15 rad, with errorbars based on the underlying σ ϕ deviation.
Figure 5. The occurrence rate of scintillation events in the cusp region, binned by Carrington rotations (27-day periods of solar rotation), for 10.5 hr < MLT < 13.5 hr over Svalbard. A composite model (solar cycle variation plus a damped solar zenith angle term, Equation B3) is shown as a solid red line, with the annual variation during the solar cycle declining phase as a shaded light blue area. Deep-winter outliers (see Appendix B) are removed from the long-term scintillation occurrence data. Yellow hexagrams show the median F 10.7 solar flux for each Carrington rotation, in solar flux units divided by 10.
Figure 6. The distribution of cusp-associated total electron flux for summer (blue) and winter (orange) observations, where season is again defined as a 90-day period centered on the respective solstice. The median (dashed line) and 98th percentile (solid line) values for both distributions are indicated in the corresponding color.
Figure 7. A similar analysis to that presented in Figure 4, comparing cusp observations (blue) to 6.2 million observations from the early morning aurora (2 hr < MLT < 7 hr, orange). The y-axes show the % change in each quantity from the quiet-most bin (e.g., IMF B Z = 0 nT corresponds to 0% change in panel e).
Figure 8. Distributions of phase scintillations in the cusp region (first column), the total cusp electron energy flux (second column), and the total cusp ion energy flux (third column), with separate SME-index bins for each row. The 98th percentile value is indicated in each panel with a red line. Note the sharp cutoff in the right column, which is due to the cusp definition in Newell and Meng (1988a).
Figure B2. All Carrington rotations for the extended period 2014-2018 plotted in sequence (dark gray circles). The red and blue lines show Equation B3 with the maximum and minimum solar zenith angle variation inserted in lieu of the z-dependent term, respectively. The solid line shows the model validity, while dashed lines make a prediction for the years 2017 and 2018.
Figure B3. The 98th percentile total energy flux in the cusp-measured DMSP datapoints, binned by Carrington rotations. Panel (a) shows the 98th percentile energy flux in each 27-day solar rotation period, with low and high errorbars showing the location of the 97th and 99th percentile flux, respectively. A solar zenith angle deconstruction model (Equation B5) is shown as a solid red line, but now with an intensification (dashed red line) halfway to solar minimum. Panel (b) shows the energy flux bins in sequence, with the composite model (Equation B4 + Equation B5) as a solid red line, and with the annual variation during the solar cycle declining phase as a shaded light blue area. | 11,095 | 2023-10-01T00:00:00.000 | ["Environmental Science", "Geology", "Physics"] |
Heat-shock Protein 90 Is Essential for Stabilization of the Hepatitis C Virus Nonstructural Protein NS3*
The hepatitis C virus (HCV) is a major cause of chronic liver disease. Here, we report a new and effective strategy for inhibiting HCV replication using 17-allylaminogeldanamycin (17-AAG), an inhibitor of heat-shock protein 90 (Hsp90). Hsp90 is a molecular chaperone with a key role in stabilizing the conformation of many oncogenic signaling proteins. We examined the inhibitory effects of 17-AAG on HCV replication in an HCV replicon cell culture system. In HCV replicon cells treated with 17-AAG, we found that HCV RNA replication was suppressed in a dose-dependent manner, and interestingly, the only HCV protein degraded in these cells was NS3 (nonstructural protein 3). Immunoprecipitation experiments showed that NS3 directly interacted with Hsp90, as did proteins expressed from ΔNS3 protease expression vectors. These results suggest that the suppression of HCV RNA replication is due to the destabilization of NS3 upon disruption of the Hsp90 chaperone complex by 17-AAG.
Infection by the hepatitis C virus (HCV) is a major public health problem, with 170 million chronically infected people worldwide (1,2). The current treatment by combined interferon-ribavirin therapy fails to cure the infection in 30-50% of cases (3,4), particularly those with HCV genotypes 1 and 2. Chronic infection with HCV results in liver cirrhosis and can lead to hepatocellular carcinoma (5,6). Although an effective combined interferon-α-ribavirin therapy is available for about 50% of the patients with HCV, better therapies are needed, and preventative vaccines have not yet been developed.
HCV is a member of the Flaviviridae family and has a positive-strand RNA genome (7,8) that encodes a large precursor polyprotein, which is cleaved by host and viral proteases to generate at least 10 functional viral proteins: core, E1 (envelope 1), E2, p7, NS2 (nonstructural protein 2), NS3, NS4A, NS4B, NS5A, and NS5B (9,10). NS2 and the amino terminus of NS3 comprise the NS2-3 protease responsible for cleavage between NS2 and NS3 (9,11), whereas NS3 is a multifunctional protein consisting of an amino-terminal protease domain required for processing NS3 to NS5B (12,13). NS4A is a cofactor that activates the NS3 protease function by forming a heterodimer (14-17), and the hydrophobic protein NS4B induces the formation of a cytoplasmic vesicular structure, designated the membranous web, which is likely to contain the replication complex of HCV (18,19). NS5A is a phosphoprotein that appears to play an important role in viral replication (20-23), and NS5B is the RNA-dependent RNA polymerase of HCV (24,25). The 3′-untranslated region consists of a short variable sequence, a poly(U)-poly(UC) tract, and a highly conserved X region and is critical for HCV RNA replication and HCV infection (26-29).
Hsp90 (heat-shock protein 90) is a molecular chaperone that plays a key role in the conformational maturation of many cellular proteins. Hsp90 normally functions in association with other co-chaperone proteins, which together play an important role in folding newly synthesized proteins and stabilizing and refolding denatured proteins in cells subjected to stress (30-34). Its expression is induced by cellular stress and is also associated with many types of tumor. Hsp90 inhibitors are currently showing great promise as novel pharmacological agents for anticancer therapy.
Hsp90 inhibitors have two major modes of action: promoting the preferential degradation of client proteins or inducing Hsp70. The benzoquinone ansamycin antibiotic geldanamycin and its less toxic analogue 17-allylamino-17-demethoxygeldanamycin (17-AAG) directly bind to the ATP/ADP binding pocket of Hsp90 (34-36) and thus prevent ATP binding and the completion of client protein refolding. Recently, Waxman et al. (37) demonstrated a role for Hsp90 in promoting the cleavage of the HCV NS2/3 protease, using NS2/3 translated in rabbit reticulocyte lysate. Nakagawa et al. (38) also reported that inhibition of Hsp90 is highly effective in suppressing HCV genome replication. Hsp90 may directly or indirectly interact with any of the proteins NS3 through NS5B to regulate replication of the HCV replicon. More recently, Okamoto et al. (39) reported that Hsp90 could bind to FKBP8 (FK506-binding protein 8) and form a complex with NS5A. The interaction with FKBP8 has also been shown to be the mechanism by which Hsp90 regulates HCV RNA replication, a process in which Hsp90 clearly plays an important role.
In this study, we have demonstrated that NS3 also forms a complex with Hsp90, which is critical for HCV replication. On the basis of the findings that treating HCV replicon cells with the Hsp90 inhibitor, 17-AAG, suppressed HCV RNA replication, and that the only HCV protein degraded in these cells was NS3, we suggest a crucial role for Hsp90-NS3 protein complexes in the HCV life cycle.
Western Blotting and Immunoprecipitation Analyses-Cells were lysed in 1× CAT enzyme-linked immunosorbent assay buffer (Roche Applied Sciences). Cell lysates were separated by SDS-PAGE and transferred to nitrocellulose membranes, and these were blocked with 5% skimmed milk. The primary antibodies used were monoclonal or polyclonal antibodies against FLAG-M5 (Sigma), Hsp70 (Sigma), Hsp90 (Cell Signaling Technologies, Danvers, MA), Hsp90α (Calbiochem), Hsp90β (Calbiochem), and Hsf-1 (Calbiochem). Antibodies against core, NS4A, and NS4B were a gift from Dr. M. Kohara (Tokyo Metropolitan Institute of Medical Science). Antibodies against E1, E2, NS3, NS5A, and NS5B were a gift from Prof. Y. Matsuura (Osaka University, Japan). Immunoprecipitation from cell lysates was carried out using anti-FLAG M5 antibody (Sigma) and the Protein G immunoprecipitation kit (Sigma), according to the manufacturer's instructions, and the immunoprecipitates were analyzed by Western blotting.

Plasmids and Transfection-The pFLAG-CMV-NS3 vector was constructed by subcloning a DNA fragment encoding full-length NS3, Δhelicase, Δprotease, ΔPH 1, ΔPH 2, and ΔH 1 into the EcoRI and XbaI sites of the pFLAG-CMV™-2 expression vector (Sigma), so that the amino-terminal FLAG epitope was fused in frame with NS3. The core expression vector was a gift from Dr. M. Kohara. The vector was transfected into 293T cells using the FuGENE 6 transfection reagent (Roche Applied Science) according to the manufacturer's instructions.
Long Term Suppression of HCV RNA Replication-We next examined the effect of 17-AAG on HCV replication over time. When NNC#2 cells were cultured with 50 nM 17-AAG only on day 0 (white squares), the level of HCV RNA was reduced by 2 log on day 3 but had increased to control levels by day 12 (Fig. 3B). However, when 50 nM 17-AAG was added to the cells at 3-day intervals for 15 days (black squares), the observed significant reduction in HCV RNA (by 3 log) was sustained from day 3 to day 15. We used trypan blue staining to check that long-term treatment with 17-AAG did not induce cellular toxicity (Fig. 3A). Our results suggest that 17-AAG has the potential to safely induce long-term suppression of HCV replication.
Reduced Expression of NS3 Protein in 17-AAG-treated HCV Replicon Cells-To investigate the mechanism by which 17-AAG inhibited HCV replication, we analyzed the expression of the HCV core, E1, E2, NS3, NS4A, NS4B, NS5A, and NS5B proteins by Western blotting. NNC#2 cells treated with increasing doses of 17-AAG showed a marked reduction in the expression of NS3 (Fig. 4A) after 3 days, paralleling the level of HCV RNA (Fig. 2A). However, levels of the other proteins were unchanged. This dose-dependent inhibition suggested that NS3 was more sensitive to 17-AAG than the other proteins. Similar effects on NS3 expression and RNA replication were seen in #50-1 cells treated with 17-AAG (Fig. 4A).
Another effect of 17-AAG treatment seen in these cells was an increase in Hsp70 expression and a slight increase in Hsp90 expression (Fig. 4B). The induction of Hsp70 expression suggested that Hsp90 inhibition by 17-AAG strongly activated HSF-1 (heat-shock transcription factor 1) (43). We also examined the levels of HCV core and NS5B protein expression in NNC#2 cells treated with 50 nM 17-AAG. Reduced levels of these proteins were seen in NNC#2 cells on day 6, and both HCV core and NS5B protein were undetectable on day 9 (Fig. 4C). To determine whether 17-AAG promoted the degradation of NS3, we next looked at the effect of 17-AAG on #50-1 cells in which proteasomal degradation was also inhibited. Although 17-AAG treatment still induced a reduction in the NS3 protein level in #50-1 cells (Fig. 4D), the degradation of NS3 was completely blocked in the presence of the proteasome inhibitor, MG132. This suggested that the pharmacological effect of 17-AAG was dependent on the proteasome system (44,45).
Protein Folding in Hsp90-NS3 Interaction-To investigate the role of Hsp90 in HCV NS3 activation, the FLAG-NS3 protein was transfected into 293T cells, with or without 17-AAG, and the cell lysates were analyzed by Western blotting. The expression of NS3 from FLAG-NS3 was reduced in the presence of 17-AAG (Fig. 5A), suggesting that Hsp90 is involved in HCV NS3 degradation, possibly through a physical interaction. We confirmed this specific interaction by immunoprecipitating 293T cell lysates with anti-FLAG antibody. This clearly showed that FLAG and Hsp90 co-precipitated, suggesting that NS3 was bound to the chaperone complex formed with Hsp90 (Fig. 5B). NS3 mutants lacking the protease and helicase regions were generated in order to identify the region responsible for the interaction with Hsp90 (Fig. 5C). FLAG-NS3, FLAG-NS3-Δhelicase, or FLAG-NS3-Δprotease were transfected into 293T cells, and anti-FLAG antibody immunoprecipitates were analyzed by Western blotting (Fig. 5D). Although FLAG-NS3-Δprotease was clearly co-immunoprecipitated with Hsp90, no protein band corresponding to FLAG-NS3-Δhelicase was detected (Fig. 5D), suggesting that the NS3 helicase region mediates binding to Hsp90. To confirm this finding, plasmids expressing different NS3 helicase mutants fused with FLAG (ΔPH 1, ΔPH 2, and ΔH 1) were constructed (Fig. 5E). Expressing these NS3 helicase mutants in 293T cells and analyzing their immunoprecipitates with anti-FLAG antibody by Western blotting showed that, although all of the NS3 helicase mutant proteins were immunoprecipitated by anti-FLAG antibody, no Hsp90 was co-precipitated (Fig. 5F).
We also confirmed that the NS3 helicase region mediated the specific interaction with Hsp90 by transfecting FLAG-NS3 and FLAG-NS3 deletion mutants into 293T cells pretreated with 17-AAG (Fig. 5G). The proteins expressed by FLAG-NS3 and FLAG-NS3-Δprotease were degraded in cells pretreated with 17-AAG, whereas no degradation of the ΔPH 2 and ΔH 1 NS3 mutants lacking helicase regions was seen (Fig. 5G). Further, when pEF-core was expressed in 293T cells, core was unable to co-immunoprecipitate Hsp90, and no degradation of core protein was observed (Fig. 5G). Our data demonstrate that 17-AAG destabilizes Hsp90-binding proteins (NS3 and NS3-Δprotease) but not proteins that do not bind Hsp90 (the ΔPH 2 and ΔH 1 NS3 mutants lacking helicase regions, and core). In previous reports (46), similar effects were observed when wild-type and mutated p53 were translated in the presence of geldanamycin. These results further support the hypothesis that Hsp90 has a role in folding the NS3 helicase domain and that this has an important role in stabilizing the full-length NS3 protein. A protein complex that includes NS3 and Hsp90 is therefore implicated in the control of HCV replication.
DISCUSSION
The Hsp90 inhibitor, 17-AAG, is known to have highly selective effects on tumor cells that are a result of its high affinity for Hsp90 client oncoproteins, which are incorporated into the Hsp90-dependent multichaperone complex, thereby increasing their binding affinity for 17-AAG more than 100-fold (47). This high selectivity effectively minimizes the toxic side effects of 17-AAG so that it is a good candidate for clinical application, especially in treating neurodegenerative diseases. In this study, we observed the inhibitory effects of 17-AAG on the replication of an HCV subgenomic replicon that lacked NS2. On the other hand, Waxman et al. (37) demonstrated a role for Hsp90 in promoting the cleavage of HCV NS2/3 protein using NS2/3 translated in rabbit reticulocyte lysate and expressed in Jurkat cells. Because the replicon cells used in our study genetically lacked NS2, our results suggest that Hsp90 may directly interact with the NS3 protein in the HCV replicon.
In cell lines in which 17-AAG was a potent inhibitor of HCV replication, with IC50 values of 3-10 nM, we also found strong evidence that the association between Hsp90 and HCV NS3, but not other NS proteins, was the essential mechanism controlling the preferential degradation of NS3 after 17-AAG treatment. Furthermore, we showed that NS3 interacted with Hsp90 through the NS3 helicase domain. It was also clear that the expression of NS3 protein with helicase activity in 293T cells pretreated with 17-AAG was reduced, but the expression of NS3 mutants lacking the helicase regions (ΔPH 2 and ΔH 1) was not. The role of Hsp90 in folding and/or stabilizing the NS3 protein was suggested by the fact that only 17-AAG bound to Hsp90 was capable of affecting NS3. The use of Hsp90 inhibitors represents a novel strategy for the development of anti-HCV therapies. | 3,138.8 | 2009-03-13T00:00:00.000 | ["Biology", "Medicine"] |
Strengthening local volcano observatories through global collaborations
We consider the future of volcano observatories in a world where new satellite technologies and global data initiatives have greatly expanded over the last two decades. Observatories remain the critical tie between the decision-making authorities and monitoring data. In the coming decade, the global scientific community needs to continue to collaborate in a manner that will strengthen volcano observatories while building those databases and scientific models that allow us to improve forecasts of eruptions and mitigate their impacts. Observatories in turn need to contribute data to allow these international collaborations to prosper.
Introduction
Volcano observatories 1 represent the intersection between volcano science and public safety. Though many of the key advances in volcanology emerge from universities and government research laboratories, the impact on society occurs when observatories use those advances to improve forecasts and to protect citizens. Moreover, the most critical advances occur during unrest and eruptions, when observations made with ever more and diverse in situ and remote techniques can be used to develop better models, test hypotheses, and improve forecasts. Observatories play a critical role in making those observations, while also serving as conduits for timely science information to communities near volcanoes as well as to a curious and concerned public (Lowenstern and Ewert 2020).
Over the past 20 years, most observatories have changed how they collect and analyze data, and report information to the public. Specifically, digital data and telemetry (including cellular) have replaced the earlier (and cheaper) analog radios. Satellites increasingly collect critical datasets. Social media provides a less formal but direct means for messaging about hazards, with the challenge that other groups and individuals with diverse motivations compete for the ear of the public. In the future, it is clear that techniques utilized during responses to volcanic activity will evolve, as surveillance by satellites becomes ever more widespread, and databases accumulate global volcano information that can inform local forecasts. As global data and remote monitoring expand, how will this progress affect the contract between volcano observatories and the communities they serve? This brief paper considers the history of volcano observatories, their current status, and their future role in disaster-risk reduction.
A very short history of volcano observatories
At the present time, forty-two countries with active volcanoes have one or more volcano observatories conducting various types of systematic observations (Table 1). The first official observatory was created at Vesuvius in 1841 in response to the ~9-year-long eruption that began that year. Geological surveys, university professors, and professional societies added to volcano science throughout the nineteenth century, mostly by making detailed observations of volcanic phenomena at volcano hotspots in Italy, Indonesia, Japan, and the Caribbean. Over an approximately 6-month period in 1902, the eruptions of Mont Pelée, Martinique, Soufrière St. Vincent, and Santa Maria, Guatemala, captured the world's attention with sudden and horrifically large loss of human life by mechanisms mysterious and misunderstood by the scientific community and the public. These eruptions, and others in the first 20 years of the twentieth century, spurred the development of long-term, place-based systematic observation and research at volcano observatories and geological mapping of young, but non-erupting, volcanic systems (Tilling et al. 2014). The first observatories at Vesuvius, Kīlauea, Asama, and Mont Pelée were founded by academic institutions, and nearly all scientific effort in volcanology was devoted to observing, analyzing, characterizing, and cataloging volcanic phenomena, with the greatest emphasis given to developing the ability to accurately predict eruptions and develop engineered mitigation strategies.
The large and deadly eruption of Kelut volcano, Java, Indonesia, in 1919 (over 5000 fatalities) was the catalyst to form the Netherlands East Indies Volcanological Survey, which later became the Volcanological Survey of Indonesia (VSI). VSI was the first national-scale volcanological service by which all volcanoes in a country were studied and kept under observation. VSI's mission was to study how the population near volcanoes could be protected from eruptions. This would be accomplished by: "(1) studying the type of the volcano (Classification); (2) finding out a possibility to predict an eruption (Forecasting); (3) investigating the menaced regions (Hazard and risk mapping); (4) developing a system to warn and evacuate the population of these regions (Public warning and communication); and (5) trying to reduce the effect of an eruption (Engineering mitigation measures)." (Neumann van Padang 1983). In time, many more volcano observatories would be established, often in response to deadly eruptions, in other countries. The mission profile developed by VSI endures as the modus operandi of most current volcano observatories.
Diverse administration and organization of volcano observatories
Today, most volcano observatories are established as governmental institutions, often collaboratively with academic institutions, but with the majority of operational support coming from national governments. Some have an entirely operational mission, whereas others are tasked also with research and scientific understanding. It is a challenge to count the number of existing volcano observatories because many are tied to national research centers. For example, there are 77 observatories in Indonesia alone, bolstered by a large roving staff based in Bandung at CVGHM headquarters (institution abbreviations spelled out in Table 1). Only a few employees may work at the volcano observatory full-time. PHIVOLCS (Philippines) has a similar setup, but for only six volcanoes. Italy monitors about a dozen volcanoes from two primary observatories (Vesuviano and Etneo), with additional support from other INGV offices. The U.S. Geological Survey (USGS) operates 5 volcano observatories, though one of them is entirely virtual and without an onsite office (the Yellowstone Volcano Observatory). Japan's volcano warnings are issued by JMA, the meteorological service, but at least a dozen government organizations and universities contribute to monitoring, assessment, and research. The observatories in Ecuador, Costa Rica, and the Caribbean are run by universities that adhere to agreements with civil-defense agencies. Table 1 lists our compilation of the observatories for the world's volcanically active countries.
Priorities and capabilities of volcano observatories
As with the initial five goals of the VSI from 100 years ago, volcano observatories still focus on a wide range of activities. In those countries that are most volcanically active, workflow often revolves around VSI's item #4: monitoring systems and public warnings, combined with #2: forecasting. Figure 1 illustrates how the observatory collects data from disparate sources and distributes value-added information and assessments to stakeholders such as municipal and national authorities, the private sector, the aviation community, the media, and the public. Messaging may occur via official alerts such as VANs (Volcano Activity Notice) and VONAs (Volcano Observatory Notice for Aviation), as well as via websites, social media, apps, press conferences, interviews, and many other means. Different observatories have vastly different capabilities, however, not only in terms of their monitoring networks, but also in their ability to put out messages and in their readiness to incorporate outside data and collaborations. Some observatories in 2021 have no Internet presence, no official social media accounts, and only minimal infrastructure to collect and distribute data. Nevertheless, they may still have governmental responsibility, active liaisons with civil defense, and considerable local experience and credibility responding to volcanic unrest. In some cases, observatories have advanced infrastructure, but their capabilities are challenged by administrative turnover, loss of staff to opportunities elsewhere, poor funding, or changing responsibilities due to competition with other governmental agencies.
A changing landscape of eruption forecasting and volcano response
Potential external collaborators have two increasingly useful tools to help local observatories with volcano crises: remote sensing and global datasets. In addition to these assets, observatories can sometimes call upon assistance from other countries near and far.
Remote monitoring and assessments
Over the past two decades, an exciting yet daunting reality for all observatories is the advancement of technology and accumulation of knowledge, which make it simultaneously easier and harder to manage an eruption response. The most obvious change is the (literally) skyrocketing increase in satellite data that can be used to track deformation, gas output, thermal flux, atmospheric ash dispersion, and surface change (Pritchard and Simons 2002; Reath et al. 2019; Poland et al. 2020). These satellite-collected data can provide rapid insights that often cannot be obtained from ground-based measurements due to clouds, hazards, expense, and other challenges. Satellite images combined with seismic data were critical to effecting evacuations that are credited with saving thousands of lives in 2010 at Merapi (Pallister et al. 2013). In 2020, low-latency, freely accessible Sentinel-1 InSAR data provided important insights to PHIVOLCS during a dike-fed eruption at Taal volcano (Bato et al. 2021), which were particularly valuable given that some ground-based monitoring was compromised by ash accumulation on solar panels. Many volcanoes still have no nearby instrumentation, such that eruptions may be detected by infrasound or lightning networks hundreds or thousands of kilometers distant from the volcano, or by weather satellites, before an alert has been issued by the pertinent volcano observatory.
The remote-monitoring equipment may be owned by other countries, and the data may be reduced and analyzed by workers half-way around the world. This complicates any workflow envisioned in Fig. 1. In particular, the public may receive information and analysis directly from a satellite working group unassociated with the volcano observatory. National space agencies may publicize images or data and disseminate simple assessments that might run counter to local knowledge and experience. Foreign scientists may be interviewed by news media based on their interpretation of the data. In our experience, most volcanologists defer to the local volcano observatory, and attempt to follow established guidelines (IAVCEI Subcommittee for Crisis Protocols et al. 1999, IAVCEI Task Group on Crisis Protocols et al. 2016, Pallister et al. 2019); however, if the volcano observatory has not released alerts or information statements, it becomes challenging for the volcanological community to mirror any local assessment.
Global datasets
In the above section, we briefly outlined how critical datasets are increasingly collected by groups that may or may not have a close connection to the observatory. Another key area requiring collaboration is the compilation, archiving, and distribution of global volcano data (e.g., WOVODAT: Costa et al. 2019; and the Global Volcanism Program 2013) or regional data (e.g., GeoDIVA; DGGS Staff and Cameron 2004). These data are an important source of information that can be used by volcano observatories to aid in eruption forecasts. Crucial questions can be addressed, particularly when a volcano without historical activity becomes restless. The growth and utility of these databases in turn rely on the integration of ground-based observational, geophysical, and geochemical data originating from observatories around the world. Thus, ideally, all observatories would collaborate with the database compilers, and would offer timely, relevant data that have been vetted and compiled in an accessible format. Clearly, this is a huge reach both for small observatories and for the databases that are operated by small staffs with non-permanent sources of funding. Some observatories with research staff may seek to protect their opportunity to publish the results of their work and will be unwilling to distribute the data to others. Others may want to avoid being judged for the quality of the data or the accuracy of their assessments or may not fully appreciate how observations from their volcanoes fit into a global context. For these reasons, there can be resistance to growth of these global resources. As an example, we have encountered volcano observatories that do not share data with the IRIS seismic database.

Fig. 1 A schematic for the complex manner by which information is distributed to the public and other stakeholders during volcanic unrest. Bold arrows represent preferred means of information flow, but it is apparent and inevitable that many stakeholders will receive information directly from external data sources or the global volcano community without vetting or interpretation by the observatory (dashed arrows). The blue two-way arrow represents the communication and collaboration between the observatory and other volcanologists.
Regional support
Another recent change is the number of regional and international groups that exist to foster collaboration, communication, and research. In Europe, the FUTUREVOLC program was a 26-partner consortium focused on geologically active areas in Europe. In Latin America, ALVO supports countries by organizing meetings and facilitating workshops in Spanish. The INVOLC group seeks to support volcano observatories and volcano science in resource-constrained nations, whereas the NSF-supported CONVERSE in the USA seeks enhanced cooperation between the academic community and volcano observatories during eruptions.
All of these groups aim to increase the amount of international collaboration between scientists and observatories, with the goal of broadening the capabilities and resources available to all. And overall, we think they hint at a future where volcano responses increasingly take advantage of expertise, models, computer systems, and data provided from sources beyond the local volcano observatory. Nevertheless, the observatory should and must remain the focal point for the response, given its ties, credibility, and indeed obligations to the local and national governments and populations, as well as its knowledge of landscape, volcano behavior, history, and culture.
Communicating risk to decision-makers
Ultimately, local and national authorities (together with scientists) must make critical decisions regarding evacuation and mobilization of resources. To do so, they need information stripped of scientific jargon, with clear scenarios and relative likelihoods (Mileti and Sorenson 1990). Creating scenarios and estimating probabilities (even if highly uncertain) requires collaboration among volcano scientists and communications experts (often social scientists). Moreover, the information must be provided within the context of local political, societal, and geographical realities. One critical practice for CVGHM during the 2017 Mount Agung crisis was the series of briefings with Hindu priests (the key local influencers) as part of socialization (Syahbana et al. 2019).
The role of social media has grown rapidly for the distribution of hazards information during crises. Increasingly, observatories use social media at the expense of more permanent and authoritative statements posted to observatory archives. Moreover, the public may not always recognize the authoritative source (or origin) of hazards observations within a blizzard of tweets and posts. This places more pressure on the global community to support the local observatory in prioritizing accurate information. Yet, it also requires that the global community know and trust the local observatory.
Looking ahead and strengthening local volcano observatories through global collaboration
Whether we consider the natural environment or society, the only thing that is constant is change. We are in a time of rapid change: a changing climate and changing societal norms will require observatories to adapt. Climate change may eventually decrease the likelihood of primary lahars on now-glaciated volcanoes, but prolong non-eruptive lahar hazards owing to more intense rainfall events. Social media may continue to produce polarizing effects on societies and erode confidence in scientific authority. But increasingly affordable access to broadband communications, cloud computing, and data storage may enable observatories to develop more and closer ties with colleagues who are not co-located, both to aid in responding to volcano crises and to pursue collaborative research projects.
Volcano observatories, regardless of scale or experience, must consider how to provide the most useful information to decision makers. Often, this requires careful planning in anticipation of volcanic unrest (Pallister et al. 2019; Newhall et al. 2021; Lowenstern et al. 2021). It is important that external scientists and scientific groups contributing to a volcano crisis recognize the limitations of their understanding of local conditions, social structures, and priorities. They should be aware that their scientific viewpoint, data, and models may sometimes have limited applicability toward making timely decisions for public safety. In our view, the global volcano community needs to clarify how best to assist observatories during volcanic crises. At the same time, local observatories need to recognize that part of their success will depend upon their ability to integrate knowledge coming from afar. They need to plan not only for their interactions with local authorities, but also for how they can collaborate with and benefit from their colleagues abroad.
| 3,698.8 | 2021-12-21T00:00:00.000 | ["Environmental Science", "Geology"] |
Bose-Einstein Correlations of π 0 Pairs from Hadronic Z 0 Decays
The OPAL Collaboration
We observe Bose-Einstein correlations in π 0 pairs produced in Z 0 hadronic decays using the data sample collected by the OPAL detector at LEP 1 from 1991 to 1995. Using a static Gaussian picture for the pion emitter source, we obtain the chaoticity parameter λ = 0.55 ± 0.10 ± 0.10 and the source radius R = (0.59 ± 0.08 ± 0.05) fm. According to the JETSET and HERWIG Monte Carlo models, the Bose-Einstein correlations in our data sample largely connect π 0 s originating from the decays of different hadrons. Prompt pions formed at string break-ups or cluster decays only form a small fraction of the sample.
Introduction
The Bose-Einstein correlations (BEC) effect has a quantum-mechanical origin. It arises from the requirement to symmetrise the wave function of a system of two or more identical bosons. It was introduced into particle reactions leading to multi-hadron final states as the GGLP effect [1] in the study of the π + π + and π − π − systems. The distributions of the opening angle between the momenta in pairs of like-sign pions were shifted towards smaller values compared to the corresponding distributions for unlike-sign pairs. A related effect was exploited earlier in astronomy [2] to measure the radii of stars.
In high energy physics, for example e + e − collisions at LEP, a quantitative understanding of the BEC effect allows tests of the parton fragmentation and hadronisation models. This would in turn help in achieving a more precise measurement of the W boson mass and better knowledge of several Standard Model (SM) observables [3]. The fragmentation models presently used are those of strings and clusters implemented, respectively, in the JETSET [4] and HERWIG [5] Monte Carlo generators.
Numerous studies of BEC in pairs of identical bosons already exist, see for example [6]. Due to the experimental difficulties in photon and π 0 reconstruction, only very few studies [7] exist for BEC in π 0 pairs, even though they offer the advantage of being free of final state Coulomb corrections.
The string model predicts a larger BEC strength or chaoticity and a smaller effective radius of the emitting source for π 0 pairs compared to π ± pairs, while the cluster fragmentation model predicts the same source strength and size [8,9]. However, neither model of primary hadron production has a mechanism to allow BEC between π 0 s produced in different strong decays. The string model prediction is a consequence of electric charge conservation in the local area where the string breaks up. Similar expectations can be derived if the probabilities in the string break-up mechanism are interpreted as the squares of quantum mechanical amplitudes [8,10]. A small difference between π ± pairs and π 0 pairs is also expected from a pure quantum statistical approach to Bose-Einstein symmetry [11]. In addition, based on isospin invariance, suggestions exist on how to relate BEC in the pion-pair systems, i.e. π 0 π 0 , π ± π ± , and π + π − , and how to extend it to π ± π 0 [12]. The L3 collaboration has recently reported [7] that the radius of the neutral-pion source may be smaller than that of charged pions, R(π ± π ± ) − R(π 0 π 0 ) = (0.150 ± 0.075(stat.) ± 0.068(syst.)) fm, in qualitative agreement with the string fragmentation prediction. This paper presents a study of BEC in π 0 pairs using the full hadronic event sample collected at centre-of-mass energies at and near the Z 0 peak by the OPAL detector at LEP from 1991 to 1995. This corresponds to about four million hadronic Z 0 decays. A highly pure sample of π 0 mesons is reconstructed using the lead-glass electromagnetic calorimeter. The correlation function is obtained after accounting for purity and resonant background. It is parametrised with a static picture of a Gaussian emitting source [1,2].
Selection of hadronic Z 0 decays
A full description of the OPAL detector can be found in [13]. The sub-detectors relevant to the present analysis are the central tracking detector and the electromagnetic calorimeter. The central tracking detector consists of a silicon micro-vertex detector, close to the beam pipe, and three drift chamber devices: the vertex detector, a large jet chamber and surrounding z-chambers. In combination, the three drift chambers sitting inside a solenoidal magnetic field of 0.435 T yield a momentum resolution of σ_pt/p_t ≈ √(0.02² + (0.0015 · p_t)²) for |cos(θ)| < 0.7, where p_t (in GeV) is the transverse momentum with respect to the beam axis. The electromagnetic calorimeter detects and measures the energies and positions of electrons, positrons and photons for energies above 0.1 GeV. It is a totally absorbing calorimeter, and is mounted between the coil and the iron yoke of the magnet. It consists of 11704 lead-glass blocks arranged in three large assemblies (the barrel that surrounds the magnet coil, and two endcaps) which together cover 98% of the solid angle. The intrinsic energy resolution is σ_E/E ≃ 5%/√E, where E is the electromagnetic energy in GeV.
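As a rough illustration of how these resolution formulas behave (assuming, as reconstructed above, that the two tracking terms add in quadrature), the following Python sketch evaluates them at representative momenta and energies; the numerical inputs are examples only, not OPAL results.

```python
import math

def track_pt_resolution(pt_gev: float) -> float:
    """Relative transverse-momentum resolution sigma_pt/pt for |cos(theta)| < 0.7,
    assuming the two terms quoted in the text add in quadrature."""
    return math.sqrt(0.02 ** 2 + (0.0015 * pt_gev) ** 2)

def em_energy_resolution(e_gev: float) -> float:
    """Relative lead-glass energy resolution sigma_E/E ~ 5% / sqrt(E), E in GeV."""
    return 0.05 / math.sqrt(e_gev)

# Example: a 10 GeV track and a 1 GeV photon (illustrative values)
print(f"sigma_pt/pt at 10 GeV: {track_pt_resolution(10.0):.3f}")
print(f"sigma_E/E  at  1 GeV: {em_energy_resolution(1.0):.3f}")
```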
Standard OPAL selection criteria are applied to tracks and electromagnetic clusters [14]. Tracks are required to have at least 20 measured points in the jet chamber, a measured momentum greater than 0.1 GeV, an impact parameter |d 0 | in the r − φ plane smaller than 2 cm, a z position at the point of closest approach to the origin in the r − φ plane within 25 cm of the interaction point, and a measured polar angle with respect to the beam axis greater than 20 • . Electromagnetic clusters are required to have an energy greater than 0.1 GeV if they are in the barrel part of the detector (i.e. | cos θ | ≤ 0.82) or greater than 0.3 GeV if they are in the endcap parts. Hadronic Z 0 decays are selected by requiring for each event more than 7 measured tracks, a visible energy larger than 60 GeV and an angle larger than 25 • and smaller than 155 • between the calculated event thrust [15] axis and the beam axis. The visible energy is the energy sum of all detected tracks, electromagnetic clusters not associated to tracks and electromagnetic clusters associated to tracks after correcting for double counting. A sample of 3.1 million Z 0 hadronic decays is selected for which the total background, consisting mainly of τ pairs, is less than 1% and is neglected throughout the analysis.
Detector effects and detection efficiencies for the spectra of π 0 pairs are evaluated using eight million Monte Carlo hadronic Z 0 decays. Events are generated using the JETSET 7.4 program, tuned to reproduce the global features of hadronic events as measured with the OPAL detector [14], with the BEC effect explicitly switched off. Samples generated with the HERWIG 5.9 program without the BEC effect are used for comparison. The generated events were passed through a full simulation of the OPAL detector [16] and were analysed using the same reconstruction and selection programs as were applied to the data.
Reconstruction of π 0 mesons
For the selected event sample, neutral pions are reconstructed from photon pairs. Photon reconstruction is performed in the barrel part of the electromagnetic calorimeter where both the photon reconstruction efficiency and the energy resolution are good. The procedure of [17] which resolves photon candidates in measured electromagnetic clusters is used. It employs a parametrisation of the expected lateral energy distribution of electromagnetic showers. It is optimised to resolve as many photon candidates as possible from the overlapping energy deposits in the electromagnetic calorimeter in a dense environment of hadronic jets. The purity of the photon candidate sample is further increased using a likelihood-type function [17] that associates to each photon candidate a weight w for being a true photon. Photon candidates with higher w are more likely to be true photons.
All possible pairs of photon candidates are then considered. Each pair was assigned a probability P for both candidates being correctly reconstructed as photons. This probability is simply the product of the w -weights associated with the two candidates. The combinatorial background consists of a mixture of three components: (i) wrong pairing of two correctly reconstructed photons, (ii) pairing of two fake photons and (iii) pairing of one correctly reconstructed photon with a fake one. Choosing only photon pairs with high values of P leaves combinatorial background mostly from component (i).
The π 0 reconstruction efficiency and purity are illustrated in Figure 1 for different cuts on P. The efficiency is defined as the ratio of the number of correctly reconstructed π 0 s over the number of generated π 0 s , and the π 0 purity is defined as the ratio of signal over total entries in a photon-pair mass window between 100 and 170 MeV.
Selection of π 0 Pairs
The average number of π 0 s produced in Z 0 decays has been measured [18] to be 9.76±0.26, which is reproduced by our Monte Carlo simulations. This leads to about 45 possible π 0 pairings per event. Considering only π 0 candidates with P > 0.1 (i.e. 17% efficiency and 36% purity), we reconstruct at the detector level 4.7 π 0 candidates on average per event. This leads to about 8 pairings among which only 1 pair on average is really formed by true π 0 s . Here, the detector level means that detector response, geometrical acceptance and photon reconstruction efficiency are taken into account. Therefore, the π 0 pair sample is background dominated and the study of π 0 pair correlations or invariant mass spectra is subject to very large background subtraction. Monte Carlo must be used to predict both the shape and amount of background to be subtracted, leading to large systematic errors in the measurements of the BEC source parameters.
To avoid this, the π 0 selection criteria are tightened. We select π 0 s which have a momentum above 1 GeV. This cut reduces the fraction of fake π 0 s . In addition, it removes π 0 s produced by hadronic interactions in the detector material for which the Monte Carlo simulation is not adequate. The probability P associated to each π 0 candidate is required to be greater than 0.6. In the case where a photon can be combined in more than one pair, only the pair with the highest probability is considered as a π 0 candidate. Among the events with four or more reconstructed photon candidates, only those leading to a possible π 0 pair with four distinct photon candidates are retained for further analysis. Events with six or more photon candidates leading to more than two π 0 candidates are rejected. They represent about 10% of the retained sample and would increase the sensitivity to unwanted resonance signals if they were not rejected. Figure 2 shows the photon pair mass, M 2γ , for the selected events. The average purity of the π 0 sample is 79% in the mass window between 100 and 170 MeV. The background is estimated directly from data by a second order polynomial fit to the side bands of the peak and by Monte Carlo simulation. The two background estimations yield compatible results and the Monte Carlo reproduces correctly the data. The superimposed curves are not the result of a fit to the data, but smoothed histograms of the Monte Carlo expectations for signal and background normalised to the total number of selected hadronic Z 0 decays.
A clear π 0 pair signal is obtained as shown in Figure 3 where the two values of M 2γ are shown for the retained events. A π 0 pair is considered as a signal candidate if both values of M 2γ are within the mass window between 100 and 170 MeV. The average π 0 pair signal purity is 60% and the Monte Carlo simulation describes the data well. Kinematic fits were made, constraining the mass of pairs of photon candidates to the π 0 mass, with the assumption that the photons come from the primary interaction vertex. Monte Carlo studies showed that this gives a 26% improvement in the resolution of the π 0 momentum.
The BEC Function
The correlation function is defined as the ratio

C(Q) = ρ(Q) / ρ 0 (Q),   (1)

where Q is a Lorentz-invariant variable expressed in terms of the two π 0 four-momenta p 1 and p 2 via Q 2 = −(p 1 − p 2 ) 2 , ρ(Q) = (1/N)dN/dQ is the measured Q distribution of the two π 0 s and ρ 0 (Q) is a reference distribution which should, in principle, contain all the correlations included in ρ(Q) except the BEC (a short sketch of computing Q from reconstructed four-momenta is given after the list of reference-sample methods below). For the measurement of ρ 0 (Q), we consider the two commonly used methods [6]:
• Event Mixing: Mixed π 0 pairs are formed from π 0 s belonging to different Z 0 decay events in the data. To remove the ambiguity on how to mix events, we select two-jet events having a thrust value T > 0.9, i.e. well defined back-to-back two-jet events. The thrust axes of the two events are required to be in the same direction within (∆ cos θ × ∆φ) = (0.05 × 10°). Mixing is then performed by swapping a π 0 from one event with a π 0 from another event. To avoid detection efficiency problems arising from different detector regions, swapping of two pions is performed only if they point to the same region of the electromagnetic barrel detector within (∆ cos θ × ∆φ) = (0.05 × 10°). With this procedure, we start with two hadronic Z 0 events each having two π 0 candidates and can end up with between zero and four pairs of mixed π 0 candidates. The Q variable is then calculated for each of the mixed pairs. If the contributions from background are removed or suppressed, this method offers the advantage of being independent of Monte Carlo simulations, since C(Q) can be obtained from data alone.
• Monte Carlo Reference Sample: The ρ 0 distribution is constructed from Monte Carlo simulation without BEC. The Monte Carlo is assumed to reproduce correctly all the other correlations present in the data, mainly those corresponding to energy-momentum conservation and those due to known hadron decays. In order to be consistent with the first method, the cut T > 0.9 is also applied for both data and Monte Carlo.
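To make the definition of Q concrete, here is a minimal Python sketch (not taken from the OPAL analysis) that builds π 0 four-momenta from assumed momentum components and the PDG π 0 mass, and evaluates Q 2 = −(p 1 − p 2 ) 2 . The helper names and example momenta are hypothetical.

```python
import math

PI0_MASS = 0.1349768  # GeV, PDG value

def four_momentum(px, py, pz, mass=PI0_MASS):
    """Build (E, px, py, pz) in GeV for a particle of given mass."""
    e = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return (e, px, py, pz)

def q_invariant(p1, p2):
    """Lorentz-invariant Q for a pion pair, Q^2 = -(p1 - p2)^2.
    With metric (+,-,-,-): (p1 - p2)^2 = dE^2 - |dp|^2, so Q^2 = |dp|^2 - dE^2."""
    dE = p1[0] - p2[0]
    dx, dy, dz = (p1[i] - p2[i] for i in (1, 2, 3))
    q2 = (dx * dx + dy * dy + dz * dz) - dE * dE
    return math.sqrt(max(q2, 0.0))

# Example: two hypothetical reconstructed pi0 candidates
p1 = four_momentum(1.2, 0.3, -0.1)
p2 = four_momentum(1.0, 0.5, 0.2)
print(f"Q = {q_invariant(p1, p2) * 1000:.0f} MeV")
```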
In the following, the distributions ρ(Q) and ρ 0 (Q) are measured from the same sample of selected events. The mixing technique is used as the main analysis method and the Monte Carlo reference technique is applied only for comparison.
The Measured BEC Function and Background Contribution
The correlation function, C(Q), corresponds experimentally to the average number of π 0 pairs, corrected for background, in the data sample divided by the corresponding corrected average number in the reference sample. Thus, we can write

C(Q) = (ρ m (Q) − ρ b (Q)) / (ρ m 0 (Q) − ρ b 0 (Q)),   (2)

where ρ m and ρ m 0 are the measured values, and ρ b and ρ b 0 are the corresponding corrections for background contributions. For both the numerator and denominator, the background consists mainly of π 0 pairs in which one or both π 0 candidates are fake.
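A minimal sketch of this background correction, assuming the distributions are available as Q-binned histograms (NumPy arrays) and that Eq. (2) is the bin-by-bin subtraction and ratio reconstructed above:

```python
import numpy as np

def corrected_correlation(rho_m, rho_b, rho_m0, rho_b0):
    """Background-corrected correlation function per Q bin:
    C(Q) = (rho_m - rho_b) / (rho_m0 - rho_b0).
    All inputs are histograms (arrays) with identical Q binning."""
    num = np.asarray(rho_m, dtype=float) - np.asarray(rho_b, dtype=float)
    den = np.asarray(rho_m0, dtype=float) - np.asarray(rho_b0, dtype=float)
    # Bins with an empty corrected reference sample are returned as NaN
    return np.divide(num, den, out=np.full_like(num, np.nan), where=den != 0)
```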
The background distributions ρ b and ρ b 0 are obtained from the Monte Carlo information. These background distributions can also be obtained from data using a side band fit to the projected spectra of the two-dimensional M 2γ distributions (see Figure 3) in each 400 MeV interval of the measured Q variable. The resulting background distributions are correctly reproduced by Monte Carlo. However, for the smaller Q intervals as used in this analysis, i.e. 100 MeV, the side band fit is subject to large statistical fluctuations, so the Monte Carlo distributions have to be used.
In the region of interest where the BEC effect is observed, Q < 700 MeV, pion pairs from particle (resonance) decays could mimic the effect. The relevant decays are: K 0 s → π 0 π 0 , f 0 (980) → π 0 π 0 , and η → π 0 π 0 π 0 with branching ratios of 39%, 33% and 32% respectively. Pion pairs from η decay contribute only to the region Q < 315 MeV. According to Monte Carlo studies, the number of reconstructed K 0 s in the 2 π 0 channel is very small. Furthermore, the hypothesis that each π 0 originates from the primary vertex, as used in the kinematic fits (Section 4), does not apply. This is an advantage for this analysis since the K 0 s peak is flattened, making its effect on the Q distribution negligible. The Monte Carlo estimates of these particle decay backgrounds are included in the distribution ρ b (Q), adjusting the rate of individual hadrons to the LEP average [18] where necessary.
For our analysis we select π 0 candidates with momentum greater than 1 GeV. This is dictated by the observation of correlations at small Q even for Monte Carlo events generated without any BEC effect. Indeed, as shown in Figure 4, a clear BEC-type effect is visible in the correlation function obtained from Monte Carlo events without BEC for different low cuts on π 0 momentum. Using Monte Carlo information, we find that these correlations are mainly caused by π 0 s originating from secondary interactions with the detector material. They would constitute an irreducible background to the BEC effect if low momentum π 0 s are considered in the analysis. This effect vanishes for π 0 momenta greater than 1 GeV.
We rely on Monte Carlo simulation only to define the appropriate momentum cut (i.e. 1 GeV) which completely suppresses the effect of soft pions produced in the detector material, rather than relying on its prediction for the exact shape and size of this effect. The reason is that, in contrast to charged pions where the measured track information can be used to suppress products of secondary interactions in the detector material, the neutral pions have to be assumed to originate from the main interaction vertex. Furthermore, with this assumption the kinematic fits (Section 4) bias the energy of soft pions emitted in the detector material towards larger values since the real opening angle between the photons is larger (vertex closer to the calorimeter) than the assumed one.
With the above selection criteria, the composition of the selected π 0 pair sample is studied using Monte Carlo simulations. According to the string fragmentation model implemented in JETSET, the selected sample consists of about 97.9% of mixed pion-pairs from different hadron decays, 2% of pairs belonging to the decay products of the same hadron and only 0.1% prompt pairs from the string break-ups. Similarly, using the cluster fragmentation model implemented in HERWIG, the selected sample consists of 97% of pairs from different hadron decays, 2.3% belonging to the decay products of the same hadron and only 0.7% originating directly from cluster decays. It is worth mentioning that even if the direct pion pairs from string break-up (JETSET) or cluster decays (HERWIG) were all detected and accepted by the analysis procedure, they would be diluted in combination with other pions and would constitute only a marginal fraction (< 1% ) of the total number of reconstructed π 0 pairs. Thus, our analysis has no sensitivity to direct pion pairs originating from string break-up or cluster decay.
Results
The correlation distribution C(Q) (Eq. (2)) is parametrised using the Fourier transform of the expression for a static sphere of emitters with a Gaussian density (see e.g. [19]):

C(Q) = N (1 + λ exp(−Q 2 R 2 )) (1 + δQ + εQ 2 ).   (3)

Here λ is the chaoticity of the correlation [which equals zero for a fully coherent (non-chaotic) source and one for a chaotic source], R is the radius of the source, and N a normalisation factor. The empirical term (1 + δQ + εQ 2 ) accounts for the behaviour of the correlation function at high Q due to any remaining long-range correlations. The C(Q) distribution for data is shown in Figure 5 as the points with corresponding statistical errors, and the smooth curve is the fitted correlation function in the Q range between 0 and 2.5 GeV. A clear BEC enhancement is observed in the low Q region of the distribution. The parameters are determined to be: λ = 0.55 ± 0.10, R = (0.59 ± 0.08) fm, N = 1.10 ± 0.08, where the quoted errors are statistical only and the χ 2 /ndf of the fit is 14.7/19.
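The least-squares fit of this parametrisation can be sketched as follows; this is an illustration, not the OPAL fitting code. It assumes the Gaussian-source form reconstructed in Eq. (3), uses ħc ≈ 0.1973 GeV·fm to express R in femtometres, and fits hypothetical binned values standing in for the measured C(Q).

```python
import numpy as np
from scipy.optimize import curve_fit

HBARC = 0.1973  # GeV * fm, converts R from fm to 1/GeV

def bec_model(q, n, lam, r_fm, delta, eps):
    """C(Q) = N * (1 + lambda * exp(-Q^2 R^2)) * (1 + delta*Q + eps*Q^2),
    with Q in GeV and R given in fm (converted to 1/GeV internally)."""
    r = r_fm / HBARC
    return n * (1.0 + lam * np.exp(-(q * r) ** 2)) * (1.0 + delta * q + eps * q * q)

# Hypothetical binned data: bin centres (GeV), stand-in C(Q) values and errors
q_bins = np.arange(0.05, 2.5, 0.1)
c_meas = bec_model(q_bins, 1.1, 0.55, 0.59, -0.05, 0.01)  # placeholder, not real data
c_err = np.full_like(c_meas, 0.05)

popt, pcov = curve_fit(bec_model, q_bins, c_meas, sigma=c_err,
                       p0=[1.0, 0.5, 0.6, 0.0, 0.0], absolute_sigma=True)
n, lam, r_fm, delta, eps = popt
print(f"lambda = {lam:.2f}, R = {r_fm:.2f} fm")
```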
The distribution C(Q) obtained for Monte Carlo events generated with no BEC is shown as a histogram in the same figure. It shows that there is no residual correlation at low Q and indicates that the observed enhancement is present in the data only. The dashed-line histogram of Figure 5 represents the correlation function obtained from data but before the subtraction, using the Monte Carlo estimates, of pairs from the decay products of the same hadron, indicating that these contributions have only a minor influence on the measured parameters. In addition, the correlation function constructed with background π 0 pairs does not show any enhancement at low Q (not shown). Here, background π 0 pairs are defined as pairs for which one or both of the π 0 s are outside the mass window 100-170 MeV, i.e. these are likely to be fake π 0 candidates. The second method, which uses the MC reference sample, yields the following results: λ = 0.50 ± 0.10, R = (0.46 ± 0.08)fm.
These results are quoted for comparison only. We choose to quote the results obtained with the event mixing method since they are much less dependent on details of the Monte Carlo modelling.
The string model predicts a smaller source radius and a larger chaoticity in the BEC effect for π 0 pairs than for π ± pairs, while the cluster model predicts no difference. These predictions hold only for prompt boson pairs produced directly from the string or cluster decays. According to our Monte Carlo simulations, we have no sensitivity to these pairs.
Systematic Uncertainties
Potential sources of systematic error are investigated. In each case the effect on the parameters R and λ and their deviations with respect to the standard analysis are estimated. The results are summarised in Table 1.
• Bin width resolution: After the kinematic fits (Section 4), the resolution on the invariant mass of two pions, or on the variable Q, is approximately 60 MeV. We have chosen a bin width of 100 MeV for the fit to the measured C(Q) distribution. This bin width is varied from 100 MeV to 80 MeV and to 120 MeV.
• Fit range: The low end of the fit range is set to start at Q = 350 MeV (fourth bin). The high end of the fit range is changed to stop at Q = 2 GeV.
• Effect of hadron decays: To estimate the effect of the π 0 pairs from the same resonance decay on the measured BEC parameters, the estimated contribution is varied by ±10% which represents the typical error on the measured individual hadron rates [18]. In order to investigate the dependence of the measured parameters R and λ on the π 0 momentum cut, the analysis is repeated for π 0 momenta larger than 1.2 GeV.
• Analysis procedure: The analysis is repeated for several variations of the selection criteria.
-3) The thrust value for two-jet events is changed from 0.9 to 0.85 and to 0.92 (changes the overall event sample size by ±5%).
-4) The factor 1 + δQ + εQ 2 is replaced by 1 + δQ.
Figure caption: The correlation distribution C(Q) as measured for OPAL data. The smooth curve is the fitted correlation function and the dotted histogram is the correlation distribution obtained for JETSET Monte Carlo events generated without BEC. The dashed histogram represents the measured correlation function before the subtraction of the contributions from known hadron decays.
| 5,649 | 2003-05-01T00:00:00.000 | ["Physics"] |
In Staphylococcus aureus the regulation of pyruvate kinase activity by serine/threonine protein kinase favors biofilm formation
Staphylococcus aureus, a natural inhabitant of the nasopharyngeal tract, survives mainly as biofilms. Previously we observed that S. aureus ATCC 12600 grown under anaerobic conditions exhibited a high rate of biofilm formation and high L-lactate dehydrogenase activity. Thus the concentration of pyruvate, whose formation is primarily catalyzed by pyruvate kinase (PK), plays a critical role in S. aureus. Analysis of the PK gene sequence (JN645815) revealed the presence of a PknB site, indicating that phosphorylation may influence the functioning of PK. To test this hypothesis, the pure enzymes of S. aureus ATCC 12600 were obtained by expressing these genes in the PK 1 and PV 1 (JN695616) clones and passing the cytosolic fractions through a nickel metal chelate column. The molecular weights of pure recombinant PK and PknB are 63 and 73 kDa, respectively. The enzyme kinetics of pure PK showed a K M of 0.69 ± 0.02 µM, while the K M of PknB was 0.720 ± 0.08 mM for the stpks substrate (stpks = NLCNIPCSALLSSDITASVNCAK) and 0.380 ± 0.07 mM for autophosphorylation. The phosphorylated PK exhibited 40 % reduced activity (PK = 0.2 ± 0.015 μM NADH/min/ml to P-PK = 0.12 ± 0.01 μM NADH/min/ml). Elevated synthesis of pyruvate kinase was observed in S. aureus ATCC 12600 grown under anaerobic conditions, suggesting that the pyruvate formed is used more in the synthesis phase, supporting an increased rate of biofilm formation. Electronic supplementary material The online version of this article (doi:10.1007/s13205-014-0248-3) contains supplementary material, which is available to authorized users.
Introduction
Staphylococcus aureus mostly derives energy from glucose catabolism through glycolysis and the Krebs cycle. The end product of glycolysis, pyruvate, enters the TCA cycle and regulates the energy levels linked to the pathogenicity of the organism (Venkatesh et al. 2012). Pyruvate kinase (PK) belongs to a group of transferases that couples the free energy of PEP hydrolysis, using K⁺ and Mg²⁺ as co-factors, to generate ATP and pyruvate (Nowak and Suelter 1981). S. aureus generates two molecules of pyruvate for every molecule of glucose consumed, which ultimately reduces two molecules of NAD⁺ to NADH, creating a redox imbalance which facilitates biofilm formation (Shimizu 2014; Ravcheev et al. 2012; Zhu et al. 2007).
PK is one of the three regulatory enzymes in glycolysis; it exhibits homotropic positive co-operativity for PEP, but not for ADP. It controls the entire glycolytic pathway by regulating the flux from fructose-1,6-bisphosphate (FBP) to pyruvate (Muñoz and Ponce 2003). The most common form of allosteric regulation for PK is its upregulation by FBP, which increases the affinity and reduces the co-operativity of substrate binding, and which also depends on divalent cations bound in the active site; here, bound substrate and metal ions increase the affinity of FBP for the allosteric site (Bond et al. 2000; Zoraghi et al. 2010, 2011a; Kumar et al. 2014). ATP, alanine, and phenylalanine are negative allosteric inhibitors of PK, which serves as a switch between the glycolytic and gluconeogenic pathways. This regulation of flux by PK directly affects the concentrations of glycolytic intermediates, biosynthetic precursors, and nucleoside triphosphates in the cell. Thus PK controls the consumption of metabolic carbon for biosynthesis and the utilization of pyruvate for energy production (Shimizu 2014). Therefore, it appears that pyruvate levels are vital for biofilm formation and for maintaining reductive conditions in the organism (Cramton et al. 1999; Gotz 2002; Yeswanth et al. 2013). In view of the importance of pyruvate in this pathogen, it has been suggested that pyruvate kinase could be a potential drug target, and various studies using alkaloids as PK inhibitors have proposed such compounds as effective drugs against S. aureus infections (Zoraghi et al. 2010, 2011a); however, the essential role of pyruvate in biofilm formation continues to pose questions about the efficacy of such compounds under in vivo conditions. The expression of enzymes involved in cell wall biosynthesis, virulence factors, toxin production, and purine biosynthesis for both energy production and growth is controlled through phosphorylation by PknB (Beltramini et al. 2009; Débarbouille et al. 2009; Donat et al. 2009; Tamber et al. 2010; Miller et al. 2010). The gene sequence of PK (JN645815) showed the presence of a PknB site; therefore, as in Bacillus anthracis, where a fall in PK activity was observed on phosphorylation with PknB (Arora et al. 2012), we predicted that phosphorylation of PK might control its function in S. aureus; hence, the present study is aimed at understanding the effect of phosphorylation by PknB on PK function.
Materials and methods
In the present study chemicals were obtained from Sisco Research Laboratories Pvt. Ltd., India, Hi-Media Laboratories Pvt. Ltd., India, Sigma-Aldrich, USA, New England Biolabs, USA, and QIAGEN Inc., Valencia, CA, USA.
Bacterial strains and conditions
Staphylococcus aureus ATCC 12600 and Escherichia coli DH5α were obtained from Bangalore Genei Pvt Ltd. S. aureus was grown on modified Baird Parker agar at 37 °C. After overnight incubation, a single black shiny colony with a distinct zone was picked, inoculated in Brain heart infusion (BHI) broth and incubated at 37 °C overnight. The S. aureus ATCC 12600 culture thus grown was used for the isolation of chromosomal DNA (Hari Prasad et al. 2012).
Pyruvate kinase enzyme assay
The pyruvate kinase enzyme assay was performed using crude and pure pyk (Venkatesh et al. 2012) and the kinetic parameters V max , K M , and K cat were calculated from Hanes-Woolf plot [S] vs ([S]/V). In all the experiments protein concentration was determined by the method of Bradford (1976).
Serine/Threonine protein kinase (PknB) assay
PknB activity was determined at 30 °C using a novel non-radiolabeled protein kinase spectrophotometric assay, with a synthetic peptide acting as substrate, on a Cyber lab spectrophotometer, USA. The PknB assay mixture contained 0.1 M Tris-HCl pH 7.5, 0.1 M ATP, and 11.8 µM (30 µg/µl) peptide (stpks = NLCNIPCSALLSSDITASVNCAK). 1 µg/µl enzyme fraction (pure His-tag PknB) was mixed in and incubated at 30 °C for 10 min. The phosphorylated peptide was purified by passing it through a Sephadex G-25 column (1 cm × 15 cm); the fractions were eluted with 0.1 M Tris-HCl pH 7.5 and 150 mM NaCl. The enzyme fraction appeared in the void volume, and the phosphorylated peptide was obtained in the elution volume. The phosphate covalently bound to the proteins was estimated by adding freshly prepared reagent A (3.4 mM ammonium molybdate dissolved in 0.5 mM H 2 SO 4 , 10 % SDS, and 0.6 M L-ascorbic acid mixed in a 6:1:1 (v/v/v) ratio), incubating at 30 °C for 15 min, and recording the absorbance at 820 nm against a blank (0.1 M Tris-HCl pH 7.5, 150 mM NaCl, and reagent A) (Clore et al. 2000). The enzyme activity was measured as the amount of phosphorus added per microgram of peptide per minute per ml at 30 °C. For this, a calibration curve was developed using standard KH 2 PO 4 for the estimation of inorganic phosphate, and free phosphate was determined by adding reagent A (Fiske and Subbarow 1925). Phosphorylation of the peptide was further demonstrated by fractionating the eluted peptide on 15 % SDS-PAGE and staining the gel with reagent A; the bluish green-colored band that appeared in the gel indicated that the peptide was phosphorylated by the enzyme fraction. Similarly, the autophosphorylation property of PknB was also determined; for this, the reaction mixture composition was the same except that the peptide was not added. The enzyme activity was measured as the amount of phosphorus added per microgram of enzyme per minute per ml at 30 °C. Substrate-level phosphorylation was performed by taking different substrate concentrations of 10-120 µM of synthetic peptide, keeping the ATP concentration constant; the corresponding velocities were calculated, a Hanes-Woolf graph of [S] vs [S]/V was plotted, and K M and V max were determined from the graph. For the autophosphorylation activity of PknB the same enzyme assay was carried out except that the peptide was not added; similarly, the K M and V max for autophosphorylation of PknB were determined from a Hanes-Woolf plot.
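As an illustration of the Hanes-Woolf estimation described above, the following Python sketch fits the linearised form [S]/v = [S]/V_max + K_M/V_max and recovers K_M and V_max; the substrate concentrations and velocities used here are hypothetical, not the measured values from this study.

```python
import numpy as np

def hanes_woolf(substrate, velocity):
    """Estimate K_M and V_max from a Hanes-Woolf plot ([S] vs [S]/v):
    [S]/v = [S]/V_max + K_M/V_max, so slope = 1/V_max and intercept = K_M/V_max."""
    s = np.asarray(substrate, dtype=float)
    v = np.asarray(velocity, dtype=float)
    slope, intercept = np.polyfit(s, s / v, 1)
    v_max = 1.0 / slope
    k_m = intercept * v_max
    return k_m, v_max

# Hypothetical peptide (stpks) concentrations in uM and measured velocities
s = [10, 20, 40, 60, 80, 120]
v = [0.12, 0.20, 0.30, 0.36, 0.40, 0.44]
k_m, v_max = hanes_woolf(s, v)
print(f"K_M ~ {k_m:.1f} uM, V_max ~ {v_max:.2f} (same units as v)")
```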
Cloning of PknB and PK genes
PknB and PK genes were PCR amplified from chromosomal DNA of S. aureus ATCC 12600 and sequenced (Table 1); the amplified products were purified with the NP-PCR Purification kit (Taurus Scientific, USA) and sequenced by the dye-terminator method at MWG Biotech India Ltd, Bangalore, India, and Xcelris Pvt. Ltd, Ahmedabad, India. The gene sequences thus obtained were deposited in GenBank (www.ncbi.nlm.nih.gov/genbank/submt.html) (Table 1) (Ohta et al. 2004). The PCR products were made into proper blunt ends using Klenow fragment (New England Biolabs, USA), cloned into the Sma I site of the pQE30 vector and transformed into E. coli DH5α, and the generated clones were named PV 1 and PK 1. The genes in clones PV 1 and PK 1 were overexpressed with 0.75 mM IPTG and 1 mM IPTG for 2 and 5 h, respectively. The pure enzymes were obtained by passing the cytosolic fraction of each clone through a nickel metal chelate column. The expressed proteins were fractionated on 10 % SDS-PAGE and transferred onto a nitrocellulose membrane (NCM) (Towbin et al. 1979). The NCM was blocked with 2 % gelatin and treated with an anti-His-tag monoclonal antibody kit (Qiagen) following the manufacturer's method.
Multiple sequence alignments were carried out to understand the sequence similarities and dissimilarities. The PK sequences were retrieved from GenBank for Staphylococcus aureus, Escherichia coli, Pyrobaculum aerophilum, Aeropyrum pernix, Streptococcus mutans, Bacillus licheniformis, Salmonella typhimurium, and the Homo sapiens R/L isoform. All these sequences were aligned with the CLUSTAL W software and the results were recorded (Thompson et al. 1997).
In vitro regulation of PK
PK was phosphorylated with PknB; the assay mixture contained 0.1 M Tris-HCl pH 7.5, 0.1 M ATP, 1 µg/ml pure enzyme (PK), and 3 µg/ml pure PknB, which were mixed and incubated at 30 °C for 10 min. The phosphorylated enzyme (PK) was purified by passing it through a Sephadex G-25 column (1 cm × 15 cm); the fractions were eluted with 0.1 M Tris-HCl pH 7.5 and 150 mM NaCl. The enzyme fractions appeared in the void volume. The bound phosphorus was estimated by adding freshly prepared reagent A, incubating at 30 °C for 15 min, and recording the absorbance at 820 nm against a blank (0.1 M Tris-HCl pH 7.5, 150 mM NaCl, and reagent A). The phosphorylated enzymes were used to carry out the enzyme assay as described earlier. These phosphorylated proteins were fractionated in 10 % SDS-PAGE and the bound phosphate was detected by immersing the gel in reagent A, followed by staining with Coomassie Brilliant Blue R-250.
In all the experiments protein concentration was determined by the method of Bradford (1976).
Cloning, expression, and characterization of PK gene
In the present study, the PK gene (1.7 kb) was PCR amplified from the chromosomal DNA of S. aureus ATCC 12600 and cloned into the Sma I site of the pQE30 vector (Fig. 1a) in the -1 frame, and the clone was named PK 1. To verify the insert, the PK gene was sequenced (Supplementary Fig. 1) using the same primers; after confirming the correct sequence (accession number JN645815), the enzyme was expressed in E. coli DH5α with 1 mM IPTG, and the expressed PK was purified by passage through a nickel metal chelate agarose column. The pure recombinant PK exhibited a single band in SDS-PAGE with a molecular weight of 63 kDa corresponding to the monomeric form of the enzyme (Fig. 1b), and the expression was validated using an anti-His tag antibody (Fig. 1d). The PK sequence showed complete homology with all the PK gene sequences reported for other strains of S. aureus, indicating the presence of only one PK in this pathogen (Fig. 2). The rPK kinetics were close to those of the native PK (Table 2). The formation of pyruvate was higher in S. aureus grown in BHI broth compared to LB broth (Table 3). On scanning the annotated protein sequence of S. aureus PK in PROSITE (Altschul et al. 1997), the following exclusive sites were observed: casein kinase phosphorylation, N-myristoylation, N-glycosylation, protein kinase C phosphorylation, cAMP- and cGMP-dependent protein kinase phosphorylation, cell attachment sequence, and amidation sites, which all indicate that the PK of S. aureus is a unique enzyme. The multiple sequence alignment results showed very low amino acid sequence identity with other bacterial PKs and human PK (Fig. 2).
Cloning, expression, and characterization of PknB gene
The gene encoding PknB (2.0 kb) was amplified from S. aureus ATCC 12600 chromosomal DNA and sequenced; the sequence (JN695616) showed complete homology with PknB of several S. aureus strains in the reported databases. Sequence analysis of the PknB enzyme showed the presence of a catalytic domain between residues 10 and 267, which contains 12 specific Hanks motifs and both ATP and substrate binding sites, similar to eukaryotic protein kinases. The catalytic domain is followed by a single transmembrane segment and three duplicated PASTA domains: PASTA1 spans residues 377-440, PASTA2 residues 445-508, and PASTA3 residues 514-577 of the annotated protein sequence of the PknB gene. These unique characters are the features exhibited by several PknB enzymes expressed in different strains of S. aureus.
The PknB gene was cloned in the pQE30 vector and the clone was named PV 1. The PknB gene in this clone was expressed with 0.75 mM IPTG, which resulted in successful expression of the gene, and the enzyme was purified by passage through a nickel metal chelate agarose column. The molecular weight of the rPknB was found to be 73 kDa, which corresponds to the cloned insert and is equivalent to the monomeric form of the PknB protein (Fig. 3a, b). The kinetics of PknB are summarized in Table 4. The rPknB exhibited both substrate-level phosphorylation and autophosphorylation; the phosphorylated molecules were separated on SDS-PAGE, and the appearance of blue colored bands on staining with reagent A confirms the presence of phosphate in the enzyme and the substrate stpks (Fig. 3c).
In vitro phosphorylation of PK
The presence of a PknB site in the PK gene sequence encouraged us to carry out phosphorylation of PK. On phosphorylation with PknB, the bound phosphate was identified spectrophotometrically by its ability to react with reagent A. On fractionating these phosphorylated enzymes in 10 % SDS-PAGE and immersing the gel in reagent A, the emergence of blue colored bands indicated that PK was phosphorylated (Fig. 1c). The mobility of phosphorylated PK was higher than that of the native pure PK (Fig. 1c). The phosphorylated PK (P-PK) exhibited reduced activity (40 %) compared to the native PK (PK = 0.2 ± 0.015 µM NADH/min/ml to P-PK = 0.12 ± 0.01 µM NADH/min/ml) (Table 5).
Fig. 1 (caption, panels b-d): Cytosolic fraction of clone PK 1. Lane 2: cytosolic fraction of clone PK 1 induced with IPTG. Lane 3: purified PK obtained by passing the cytosolic fraction of the IPTG-induced PK 1 clone through a nickel metal chelate agarose column. (c) In vitro phosphorylation assay: the SDS-PAGE gel was first stained with reagent A, followed by Coomassie Brilliant Blue R250 staining. Lane M: high-molecular-weight marker from Merek Biosciences Pvt Ltd. Lane L1: phosphorylated PK obtained from the Sephadex G-25 column. Lane L2: pure PK obtained from the nickel metal chelate agarose column. (d) Western blot using an anti-His tag antibody: Lane L1: phosphorylated PK obtained from the Sephadex G-25 column. Lane L2: pure PK obtained from the nickel metal chelate agarose column.
Fig. 2 (caption): Multiple sequence alignment of pyruvate kinase: amino acid sequences of pyruvate kinase from various organisms were compared with the amino acid sequence of human pyruvate kinase using CLUSTAL X.
Discussion
Staphylococcus aureus is a death-defying pathogen of both animals and humans that can cause anything from minor skin infections to major life-threatening diseases. Phosphorylation of host proteins by the secreted PknB has been implicated in its ability to grow in any anatomical organ of the human host (Lowy 1998; Miller et al. 2010). The expression of PknB in S. aureus is involved in controlling metabolic stress and regulating a plethora of metabolic pathways; accordingly, we found 40 % decreased PK activity on phosphorylation with PknB (Table 5). This response is upregulated under anaerobic growth conditions, where the redox-sensing repressor Rex is reported to be involved in the regulation of anaerobic respiration in response to NADH/NAD⁺ levels; in these conditions the association between pyruvate and the TCA cycle is reported to be very weak, thus leading to more biosynthesis of toxins, virulence factors, and PIA, which is highly favorable for biofilm formation (Cramton et al. 1999; Beltramini et al. 2009; Débarbouille et al. 2009; Donat et al. 2009; Tamber et al. 2010; Liu et al. 2011; Strasters and Winkler 1963; Zhu et al. 2007; Pagels et al. 2010; Ravcheev et al. 2012).
In S. aureus, a shift of growth conditions from aerobic to anaerobic increased the expression of glycolytic enzymes such as GapA, Eno, Pgk, and Pyk (Fuchs et al. 2007); in our previous studies we also showed elevated biofilm units and lactate dehydrogenase activity when S. aureus was grown in BHI broth with increased concentrations of glucose. This enhanced glycolysis suppresses the Krebs cycle, resulting in the accumulation of lactate, acetate, formate, and acetoin, suggesting that glucose is catabolized to pyruvate, which is further catabolized via the lactate dehydrogenase, pyruvate formate-lyase, and butanediol pathways, leading to biofilm formation (Zhu et al. 2007). In congruence with these observations, in the present study we observed elevated PK activity in S. aureus grown in BHI broth compared to LB broth (Table 3). All these results conclusively indicate that pyruvate formation under anaerobic conditions favors synthesis over energy generation, contributing to the formation of biofilms, which is one of the key pathogenic factors.
Conclusion
Staphylococcus aureus has the unique feature of colonizing any anatomical locale of the human body. This trait allows the organism to spread its pathogenesis at a rapid rate. In this context, we found that PK catalyzes the irreversible conversion of phosphoenolpyruvate to pyruvate, regulating a metabolic flux that is controlled by the expression of PknB. PknB regulates the functioning of PK and thus controls the levels of pyruvate in the organism. Therefore, the high pyruvate formation under anaerobic conditions does not contribute to energy generation, but favors upregulation of the biosynthetic pathways involved in biofilm formation, which is one of the key pathogenic factors.
| 4,244.8 | 2014-09-12T00:00:00.000 | ["Biology", "Medicine"] |
Real-Time Monitoring in Home-Based Cardiac Rehabilitation Using Wrist-Worn Heart Rate Devices
Cardiac rehabilitation is a key program which significantly reduces mortality in at-risk patients with ischemic heart disease; however, there is a lack of accessibility to these programs in health centers. To resolve this issue, home-based programs for cardiac rehabilitation have arisen as a potential solution. In this work, we present an approach based on a new generation of wrist-worn devices whose heart rate sensors and applications have improved in quality. Real-time monitoring of rehabilitation sessions based on high-quality clinical guidelines is embedded in a wearable application. For this, a fuzzy temporal linguistic approach models the clinical protocol. A case-based evaluation is carried out by a cardiac rehabilitation team.
Introduction
Home-based e-health programs are being increasingly used due to the proliferation of wearable devices and portable medical sensors which are seamlessly integrated into the daily lives of users to monitor vital signs and physical activity [1]. In this way, wearable devices, together with connectivity and ubiquitous computing in mobile applications [2], have provided a solution for monitoring a greater number of patients under prevention and rehabilitation programs in a personalized manner [3].
Moreover, wearable devices have been demonstrated to favor strategies for changes to healthy habits and the promotion of healthy physical activity [4]. To achieve this, a key aspect is to adapt high-quality clinical guidelines and protocols from health centers to home-based solutions [5] in order to provide real-time activity monitoring by means of wearable devices [6].
Motivated by these recent advances, in this work a cardiac rehabilitation program is embedded in a wrist-worn device with a heart rate sensor, which provides real-time monitoring of physical activity during sessions in a safe and effective way. For this, a linguistic approach based on fuzzy logic [7] is proposed in order to model the cardiac rehabilitation protocol and the expert knowledge from the cardiac rehabilitation team. Fuzzy logic has provided successful results in developing intelligent systems from sensor data streams [8][9][10][11][12], and more specifically, it has been described as an effective modeling tool in cardiac rehabilitation [13].
The remainder of the paper is structured as follows: in Section 1.1, the principles and motivation of cardiac rehabilitation together with previous related works are presented; in Section 2, we detail a standardized protocol for cardiac rehabilitation, and based on it, a fuzzy model is proposed for real-time monitoring the heart rate of patients. In Section 3, the developed architecture based on wrist-worn wearable and mobile applications for patients and a cloud web application for the cardiac rehabilitation team is presented; in Section 4, an evaluation of fuzzy modifiers and temporal windows from heart rate sessions is provided by the cardiac rehabilitation team in order to adjust the real-time monitoring of the fuzzy model in practice; and finally, in Section 5, conclusions and suggestions for future works are presented.
Home-Based Cardiac Rehabilitation
Cardiovascular diseases represent a major health problem in developed countries according to the World Health Organization (WHO) [14]. Around 17 million people die annually from cardiovascular pathologies [15]. Fortunately, the prognosis has been improved by primary prevention, drug treatment, secondary prevention, and cardiac rehabilitation, the latter of which has been shown to be the most effective tool [16]. Multiple studies have shown cardiac rehabilitation to be effective in reducing morbidity and mortality by around 20-30% after acute myocardial infarction [17]. Cardiac rehabilitation (CR) is defined as the sum of the activities required to favorably influence the underlying cause of heart disease, as well as ensuring the best physical, social and mental conditions, thus enabling patients to occupy by their own means a normal place in society [18]. For these reasons, in recent years, secondary prevention programs and cardiac rehabilitation units (CRUs) have been developed in several countries [17,19,20]. However, there is a lack of accessibility due to several factors, such as lack of time, comorbidities, geographical area, and access to health services [18,20,21]. Minimizing these limitations by means of home-based programs and wearable devices is the motivation of this work, which contributes to the ongoing development of CR at the primary-care and home-care level in order to increase the number of patients, fundamentally low-risk patients [16], who benefit from these programs.
Related Works
In the literature, we highlight recent works and reviews in which the effectiveness of smart health monitoring systems is described and summarized [15,22,23]. In recent years, several works have been carried out in which information and communication technologies and/or wearable devices have enabled the telemonitoring of these patients. The following are the most representative works.
In [24], a home-based CR program with telemonitoring guidance is evaluated. It includes individual coaching by telephone weekly after uploading training data. In [25], a home-based walking training program is presented. The approach includes a health device with four electrodes. At the end of the sessions, the data are transmitted using a mobile device to a monitoring center, which provides indicators of adherence and evaluation. In [26], a combination of e-textiles, wireless sensor networks, and a transmission board provides monitoring of several physiological parameters, such as the electrocardiogram (ECG), heart rate, and body temperature for future healthcare environments. In [13], cardiac and aortic data are collected by wearable t-shirts with embedded electrodes. They are then processed by a mobile device to acquire biosignals. In addition, fuzzy logic is presented as an effective modeling tool, with monitoring of the vital signs by means of fuzzy rules. In [27], a mobile application uploads the sessions from a wearable device to enable coaching by health personnel. The wearable device is presented as a data collector without providing real-time feedback on sessions. In [28], Fitbit wearable sensor devices and personalized coaching via SMS are proposed; likewise, no ad hoc application was embedded in the wearable device. Finally, in [29], the heart rate was measured by placing the index finger on a built-in camera for one minute at each exercise stage in order to evaluate the quality of the session.
In this way, previous works have foregrounded the relevance and efficacy of integrating cardiac rehabilitation programs (CRPs) into home-based solutions, but with the limitations of non-programmable heart rate sensors or burdensome devices in the early stages of implementation. Recently, however, a new generation of smart-watches and wrist-worn devices has improved the quality of heart rate (HR) measurements, achieving a median error below 5% in laboratory-based activities [30]. Moreover, smart-watches and wrist-worn devices are expected to be a boon to mHealth technologies in physical activity sensing thanks to the recent tools and operating systems which enable application development [31].
Based on this context, in this work, we describe real-time monitoring and evaluation of cardiac rehabilitation sessions (CRSs) at home using wearable wrist-worn devices with heart rate sensors. The highlights of this approach are:
• Wear-and-play devices. Wrist-worn devices are noninvasive because the heart rate sensor is embedded. In addition, they do not require the placement of electrodes on the body prior to physical activity. Moreover, they are worn as a watch in an everyday manner.
• Modeling a theoretical high-quality CRP. A standardized CRP, developed by the CRU of the Hospital Complex of Jaen (Spain), is introduced into the home-based approach for each patient's care in a personalized way. A linguistic approach based on fuzzy logic [7] is included to model the CRP and the expert knowledge from the cardiac rehabilitation team.
• Real-time smart monitoring embedded in the wrist-worn device in order to: (1) show patients their adherence to the CRP during physical activity; and (2) prevent unsuitable and inadequate HR ranges.
• Practical methods are described for applying the theoretical model to wearable computing.
Methodology
In Section 2, we detail the standardized protocol for the CRP on which this work is focused and, based on it, propose a fuzzy model for real-time monitoring of the heart rate of patients by means of wrist-worn wearable devices.
Setting a Cardiac Rehabilitation Program
In this section, we describe a standardized CRP for patients with ischemic heart disease, a condition in which the blood supply to the tissues is restricted. In the literature, several models for handling CRPs have been proposed and analyzed in many countries [32]. In this work, we propose the use of a general model for cardiac rehabilitation [33] based on the heart rate, which is focused on determining the values of the heart rate training zones in the CRS. This model was developed at the CRU of the Hospital Complex of Jaen, Spain, where this work is centered.
As a previous step before starting the CRP, a first evaluation of each new CR patient is required in the health center. In this initial evaluation, the patients are connected to an ECG and undergo a controlled cardiac stress test, which is evaluated by a cardiologist in terms of symptoms and blood pressure response. From this test, the cardiologist determines the following thresholds for each patient [34]:
•
The maximal or peak heart rate (HR max ), that is, the number of contractions of the heart per minute (bpm) when it is working at its maximum capacity without severe problems.
•
The basal or resting heart rate (HR rest ), that is, the bpm when the patient is awake, relaxed, and has not recently exercised.
•
The first ventilatory threshold (VT 1 ), that is, the bpm which represents a level of intensity when blood lactate accumulates faster than it can be cleared, this being related to the aerobic threshold.
•
The second ventilatory threshold (VT 2 ), that is, the bpm which represents the point where lactate is rapidly increasing with an intensity that generates hyperventilation; this being related to the anaerobic threshold.
Once patient thresholds are defined in the health center, a set of sessions is designed by the cardiac rehabilitation team to configure the CRP, defining:
•
The optimal heart rate training zones (OHRTZs). These are defined by the clinical protocol in each session, as percentage ranges [p * + , p * − ] from HR max and HR rest . The methodology of Karvonen [33] allows translating the percentage range to absolute bpm [r * + , r * − ], which is defined by r * = HR rest + p * ( HR max − HR rest ).
The middle point between [r * + , r * − ] is known as the target heart rate HR tar , which is the ideal heart rate to maintain in the session (an illustrative computational sketch is given after this list).
• Duration of the progressive stage (d w ). The progression of HR from the basal state requires a linear increase, which starts from the resting point until the OHRTZ is reached. The duration of this progressive stage is defined in minutes.
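A minimal Python sketch (not the authors' code) of how these session parameters follow from the clinical thresholds via the formula above; the threshold values and percentage range used here are hypothetical examples rather than values from the CRU protocol.

def karvonen(hr_rest, hr_max, p):
    # Map an intensity percentage p (0..1) to an absolute heart rate in bpm.
    return hr_rest + p * (hr_max - hr_rest)

hr_rest, hr_max = 62, 148        # hypothetical patient thresholds from the stress test (bpm)
p_low, p_high = 0.60, 0.75       # hypothetical percentage range [p*-, p*+] for one session

r_low = karvonen(hr_rest, hr_max, p_low)     # lower limit of the OHRTZ in bpm
r_high = karvonen(hr_rest, hr_max, p_high)   # upper limit of the OHRTZ in bpm
hr_tar = (r_low + r_high) / 2                # target heart rate: midpoint of the zone

print(f"OHRTZ = [{r_low:.0f}, {r_high:.0f}] bpm, target = {hr_tar:.0f} bpm")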
A Fuzzy Model for Real-Time Monitoring and Evaluation of Cardiac Rehabilitation Sessions
In this section, we describe a fuzzy model for real-time monitoring and evaluation of the heart rate of patients in accordance with the cardiac rehabilitation program described in Section 2.1.
This fuzzy model is proposed to describe, in real time, the heart rate stream, which is composed of the measured values and the time-stamps at which they are collected by the heart rate sensor. In Section 2.2.1, we focus on the fuzzification of the measures from the heart rate sensor. In Section 2.2.3, we describe a fuzzy aggregation of the terms using temporal windows. Moreover, in order to model the progressive stage, a fuzzy transformation from the progressive to the maintenance stage is detailed in Section 2.2.2. Finally, in Section 2.2.4, we detail an interpretable evaluation based on the previous steps to describe the whole heart rate stream at the end of the rehabilitation session.
Fuzzification of Heart Rate Measures by Optimal Heart Rate Training Zones
In this section, we describe a linguistic approach based on fuzzy logic for the OHRTZ. In the fuzzy logic methodology, a variable can be defined by means of terms, which are described by means of fuzzy sets. Each fuzzy set is defined in terms of a membership function, which is a mapping from the universal set to a membership degree between 0 and 1.
Based on the fuzzy logic methodology, we proceed to describe the HR under a linguistic representation defined by the parameters from the CRP detailed in Section 2.1. Specifically, we propose three intuitive terms {low, adequate, and high}, which are defined by fuzzy sets, for describing the variable heart rate. Each measure is represented as a 2-tuple (hr i , t i ), where hr i is a given value in the heart rate stream and t i its time-stamp. Hence, the heart rate stream is composed of a set of measures S hr = {(hr 0 , t 0 ), . . . , (hr i , t i ), . . . , (hr n , t n )} collected by the heart rate sensor.
In this section, we focus on the fuzzification of an individual heart rate measure (hr i , t i ). On the one hand, because of prior definitions in cardiac rehabilitation, which are (1) the OHRTZs as values of HR within the range [r * − , r * + ]; and (2) the ventilatory thresholds [VT 1 , VT 2 ] as the efficient and safe range of aerobic physical activity, we define the term adequate. This term is described by a fuzzy set characterized by a membership function whose shape corresponds to a trapezoidal function. The well-known trapezoidal membership functions are defined by a lower limit l 1 , an upper limit l 4 , a lower support limit l 2 , and an upper support limit l 3 (see Equation (1)). For the term adequate, the fuzzy set is characterized by the trapezoidal membership function defined by Equation (2). On the other hand, with VT 2 being the threshold from aerobic to anaerobic activity and r * + the upper limit of the OHRTZ, we define the term high, which is described by a fuzzy set characterized by the trapezoidal membership function defined by Equation (3). In a similar way, with VT 1 being the lower threshold of aerobic activity and r * − the lower limit of the OHRTZ, we define the term low, which is described by a fuzzy set characterized by the trapezoidal membership function defined by Equation (4). The relation between the thresholds from the cardiac rehabilitation program and the membership functions is shown in Figure 1.
Moreover, thanks to the use of linguistic modifiers, in fuzzy logic we can model different semantics over the membership functions describing the linguistic terms [36]. To represent the impact of a linguistic modifier m over a linguistic term v, such as great or fair, a straightforward power operation of the membership function is proposed [37].
Figure 1. Example of membership functions for the terms low, normal, and high. In the example, the optimal heart rate training zones (OHRTZs) of the sessions are for trained patients, which are closer to VT 2 than to VT 1 . In the example of modifiers, the impacts of the weak modifier (short-dashed lines) and the strong modifier (long-dashed lines) are shown.
If α m < 1, we obtain a weak modifier, such as fair, and a strong modifier is obtained with α m > 1, such as great. In Figure 1, we describe the impact of the linguistic modifiers, and in Section 4, we describe the comparative results provided by the cardiac rehabilitation team.
At this point, based on the current value of the heart rate hr i and the thresholds for the session [VT 1 , VT 2 ] and [r * + , r * − ], we are able to calculate the degree of the fuzzy terms {low, adequate, and high} in order to advise the patient in real-time with respect to the adequacy of the sessions.
The degrees of membership of the HR to the fuzzy sets {low, adequate, and high} can provide an intuitive evaluation for the real-time monitoring of sessions in wrist-worn wearable devices. For example, in this work, gradually changing colors in the evaluation of the HR are used to paint the screen of the wearable device and to evaluate the session using a 4-star scale, as described in Section 3.
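The following Python sketch illustrates the fuzzification step and the power-based linguistic modifier described above; the threshold values are hypothetical, and the exact parameterisation of Equations (1)-(4) in the original protocol may differ in detail.

def trapezoid(x, l1, l2, l3, l4):
    # Trapezoidal membership function with support [l1, l4] and core [l2, l3].
    if x <= l1 or x >= l4:
        return 0.0
    if l2 <= x <= l3:
        return 1.0
    return (x - l1) / (l2 - l1) if x < l2 else (l4 - x) / (l4 - l3)

def modify(degree, alpha):
    # Linguistic modifier as a power of the membership degree (alpha = 1 is neutral).
    return degree ** alpha

# Hypothetical session thresholds in bpm: ventilatory thresholds and OHRTZ limits.
VT1, VT2 = 95, 135
r_minus, r_plus = 110, 125

def low(hr):      return trapezoid(hr, 0, 1, VT1, r_minus)          # fully low below VT1
def adequate(hr): return trapezoid(hr, VT1, r_minus, r_plus, VT2)   # fully adequate inside the OHRTZ
def high(hr):     return trapezoid(hr, r_plus, VT2, 250, 251)       # fully high above VT2

hr_i = 130  # a single heart rate measure between the OHRTZ and VT2
print({t.__name__: round(t(hr_i), 2) for t in (low, adequate, high)})
print("adequate with modifier alpha = 2:", round(modify(adequate(hr_i), 2.0), 2))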
However, in practice, it is necessary to handle additional issues in order to provide real-time monitoring during the rehabilitation sessions: the monitoring in the progressive stage and the temporal evaluation of heart rate streams.
Fuzzy Transformation from the Progressive to Maintenance Stage
In the literature, there is a lack of proposals for modeling the progressive stage in cardiac rehabilitation. This is related to the fact that it does not contain critical HRs. To resolve this issue, we propose a straightforward method that translates the model of OHRTZs from the aerobic state in order to define the initial basal state. In this way, the basal state is described by the following parameters:
•
Basal optimal range, where the HRs of the patient are adequate in order to start the session.
•
Lower basal threshold in bpm (VT 0 1 ). This represents a minimal value of HR not recommended before starting the session.
•
Upper basal threshold in bpm (VT 0 2 ). This represents a maximal value of HR not recommended before starting the session.
Next, for calculating the time evolution in real-time within the progressive stage, we define a weight progression w = ∆t 0 /d w , w ∈ [0, 1], where ∆t 0 is the elapsed time of the session at the current time t 0 and d w is the total duration of the progressive stage defined by the cardiac rehabilitation team.
Based on the temporal evolution of the weight progression as well as the initial and final values of each threshold, we can define the threshold in the progressive stage for each current time frame using a linear progression as shown in Equation (5).
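A minimal Python sketch of this linear progression follows; it captures the spirit of Equation (5), with hypothetical basal and maintenance threshold values.

def progressive_threshold(initial, final, elapsed_min, d_w):
    # Linearly move a threshold from its basal value to its maintenance value.
    w = min(max(elapsed_min / d_w, 0.0), 1.0)   # weight progression w in [0, 1]
    return initial + w * (final - initial)

d_w = 10.0                        # duration of the progressive stage in minutes (hypothetical)
VT1_basal, VT1_maint = 70, 95     # lower threshold: basal -> maintenance (bpm, hypothetical)
r_plus_basal, r_plus_maint = 90, 125

for t in (0, 5, 10):              # minutes elapsed since the start of the session
    vt1_t = progressive_threshold(VT1_basal, VT1_maint, t, d_w)
    rp_t = progressive_threshold(r_plus_basal, r_plus_maint, t, d_w)
    print(f"t = {t:2d} min: VT1 = {vt1_t:5.1f} bpm, r+ = {rp_t:5.1f} bpm")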
In Figure 2, we show an example of a CRS, where the linear progression of thresholds from progressive to maintenance stages is plotted.
Fuzzy Temporal Aggregation of the Heart Rate Stream
In the practice of developing sensor-based systems, the temporal component in the data streams is a critical aspect to analyze [38]. For example, at a given current time when we evaluate the heart rate sensor stream, we can take into account the last single sample of HR or calculate an average within a sliding window.
In this work, we propose fuzzy temporal aggregation [8], which provides a model to: (1) weight linguistic terms based on temporal membership functions; (2) define progressive and interpretable temporal linguistic terms; and (3) give flexibility in the presence of occasional signal loss or variance in the sampling rate.
Based on previous works [8,39], we have integrated a fuzzy aggregation of the terms in the heart rate sensor stream using fuzzy temporal windows, which are straightforwardly described as a function of the distance from each sample time-stamp ts = {t 0 , . . . , t n } to the current time, ∆t i = t i − t 0 .
First, the degrees of a fuzzy term, in our case V = {low, adequate, high}, are weighted by the degree of their time-stamps evaluated by a fuzzy temporal window T k defined by Equation (6).
Secondly, the degrees of membership over the fuzzy temporal window are aggregated using the t-conorm operator in order to obtain a single degree of both fuzzy sets V r ∩ T k by Equation (7).
We note that several fuzzy operators can be applied to implement the aggregation. However, in this paper, we propose a fuzzy weighted average [40], as recommended in the case of high sample rates from wearable sensors [8]. The aggregation process is defined by Equation (8).
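The following Python sketch illustrates the idea of Equations (6)-(8) under simplifying assumptions: the temporal window fully weights recent samples and linearly down-weights older ones, and the term degrees are aggregated with a fuzzy weighted average; the parameterisation and sample data are hypothetical.

def temporal_window(dt, full=3.0, zero=5.0):
    # Weight of a sample given its distance dt (seconds) to the current time:
    # 1 for dt <= full, decreasing linearly to 0 at dt = zero.
    if dt <= full:
        return 1.0
    if dt >= zero:
        return 0.0
    return (zero - dt) / (zero - full)

def adequate(hr, vt1=95, r_minus=110, r_plus=125, vt2=135):
    # Trapezoidal membership of the term `adequate` (same shape as in the fuzzification step).
    if hr <= vt1 or hr >= vt2:
        return 0.0
    if r_minus <= hr <= r_plus:
        return 1.0
    return (hr - vt1) / (r_minus - vt1) if hr < r_minus else (vt2 - hr) / (vt2 - r_plus)

def aggregate(samples, term):
    # Fuzzy weighted average of a term over the samples falling inside the temporal window.
    weights = [temporal_window(dt) for _, dt in samples]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * term(hr) for (hr, _), w in zip(samples, weights)) / total

# Hypothetical stream fragment: (bpm, seconds before the current time).
stream = [(128, 0.5), (131, 1.5), (119, 2.5), (142, 4.0), (118, 6.0)]
print("adequate over the window:", round(aggregate(stream, adequate), 2))
print("adequate, last sample only:", round(adequate(stream[0][0]), 2))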
The definition and adequacy of several temporal windows, which model the evolution of linguistic terms in the heart rate stream, are discussed in Section 4.
Evaluating the Cardiac Rehabilitation Sessions
The previous sections describe the real-time monitoring of the heart rate stream within a wearable device based on fuzzy logic. Once the rehabilitation session has been finished by the patient, evaluating the whole session at the end in order to provide feedback is fairly intuitive.
Based on the degree of a fuzzy term V r and its temporal window T r , we compute an accumulative degree in the complete data stream S hr by Equation (9).
Under this approach, the accumulative degrees are calculated as the average degree of the terms over the whole heart rate stream, providing straightforward and interpretable analytical data for the session. For example, the accumulative value of the term adequate has been used to fill a 4-star scale in the mobile application of this work in order to provide an evaluation of the rehabilitation session for the patient.
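A simplified Python reading of this end-of-session evaluation is sketched below; the per-sample degrees are hypothetical, and the mapping to the 4-star scale is an illustrative choice rather than the exact rule of the application.

# Hypothetical per-sample degrees of the term `adequate` over one session.
adequate_degrees = [0.2, 0.6, 1.0, 1.0, 0.9, 1.0, 0.7, 1.0, 1.0, 0.8]

accumulative = sum(adequate_degrees) / len(adequate_degrees)  # average degree of `adequate`
stars = round(4 * accumulative, 1)                            # filled portion of the 4-star scale

print(f"accumulative adequate degree = {accumulative:.2f} -> {stars} of 4 stars")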
Development as a Wearable Mobile Cloud Platform
In this section, we describe the technical development of the proposed approach to be deployed in wearable wrist-worn devices, mobile devices, and a cloud web platform. The proposed architecture is inspired by current advances in wearable and mobile development tools [41], which provide real-time monitoring in wearable devices and data synchronization between mobile and web applications.
For the client, we have implemented two applications using the Android Platform [42], both in wearable wrist-worn and mobile devices. On the server side, we have implemented a web server under Java Tomcat, whose web services orchestrate and synchronize the data flow between the cardiac rehabilitation team and patients. In Figure 3, we show the architecture and data flow of the components. Hence, the approach includes three applications: a wearable application, a mobile application, and a web application, whose use cases are: • A mobile application for patients in order to show the sessions and communicate the data between the web server and the wearable devices. It has been developed for Android and includes the following use cases: -Synchronization of the parameters of the next sessions from the CRP, which are defined by the cardiac rehabilitation team and are collected in the web server, in the mobile device using a web service under wireless network technology (3G/4G or WiFi).
-Synchronization of the session data from the wrist-worn wearable device into the mobile device using an ad-hoc Bluetooth connection.
-
Uploading of the session data from the mobile device into the web server using a web service under wireless network technology (3G/4G or WiFi).
-
Showing and evaluating the CRSs. In the same way as the cardiac rehabilitation team, patients can observe the following information in their mobile devices for each session: (1) the raw data from the HR of sessions in a timeline; (2) the real-time monitoring provided using gradually changing colors blue, green, red; and (3) a summarized indicator of the session using a 4-star scale.
• A wearable application for patients in order to perform the sessions with regard to the CRP. This has been developed for Android and includes the following use cases: -Updating the parameters of sessions from the mobile device into the wrist-worn wearable device using an ad hoc Bluetooth connection. -Synchronizing the data of sessions, which contain the monitoring and raw HR, from the wrist-worn wearable device into the mobile device using an ad hoc Bluetooth connection. An illustrative example of the kind of session record exchanged between these components is sketched below.
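Purely as an illustration of the data synchronised between the web server, the mobile application, and the wearable, the Python snippet below builds a hypothetical session record; the field names and values are assumptions for the sketch, not the actual schema of the platform.

import json

# Hypothetical session parameters defined by the cardiac rehabilitation team on the web server,
# downloaded by the mobile application (WiFi/3G/4G) and relayed to the wearable via Bluetooth.
session_parameters = {
    "patient_id": "anonymised-001",
    "session_date": "2017-10-05",
    "hr_rest": 62,
    "hr_max": 148,
    "vt1": 95,
    "vt2": 135,
    "ohrtz": {"r_minus": 110, "r_plus": 125},
    "progressive_stage_min": 10,
}

payload = json.dumps(session_parameters)
print(payload)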
Images from the wearable, mobile, and web applications are shown in Figure 4, where we detail the evaluation and real-time monitoring developed under the methodology described in Section 2 by means of the technological components of the approach. In the mobile application, the VT 1 , VT 2 , r * + and r * − thresholds and the 4-star evaluations are described. In the web application, the cardiac rehabilitation team has access to the heart rate streams and the VT 1 , VT 2 , r * + and r * − thresholds of patients with zoom and scale options. The Polar M600 (https://www.polar.com/us-en/products/sport/M600-GPS-smartwatch) was chosen as the Android Wear device due to its high-quality optical heart rate monitor. The strength specifications of the Polar M600 include: (1) optical heart rate measurement with six LEDs; (2) its waterproof nature (IPX8 10 m); (3) low weight (63 g); (4) reduced dimensions (45 × 36 × 13 mm); and (5) long-life battery (500 mAh Li-pol for a 2-day average uptime per charge or 8 h of training).
Based on the evaluation in [43], the Polar M600 is highly accurate. The HR value is within ±5 bpm of the ECG HR value during periods of steady-state sports (cycling, walking, jogging, and running), which are the focus of cardiac rehabilitation. However, the accuracy was reduced during some intensity change exercises. No statistically significant difference was found in this sample on the basis of sex, body mass index, VO2max, skin type, or wrist size.
Results
In this section, we present an evaluation of the fuzzy model for real-time monitoring of cardiac rehabilitation sessions (CRSs) at home using wearable wrist-worn devices with heart rate sensors. As we discussed previously in Section 2, the theoretical aspects of cardiac rehabilitation are well defined in the literature. However, we can model different semantics over the membership functions describing the linguistic terms by means of linguistic modifiers and temporal windows. In the next sections, we discuss their impact and adequacy based on the expert knowledge of a cardiac rehabilitation team.
Impact of Modifiers over Linguistic Terms
In this section, we describe an evaluation of the linguistic modifiers for the terms defined in Section 2.2.1. As we detailed previously, the OHRTZs define the ranges where the values of HR are totally adequate, VT 1 defines the basal-aerobic threshold below which values of HR are totally low, and VT 2 the aerobic-anaerobic threshold above which values of HR are totally high. However, the values of HR between these optimal zones need to change gradually. This progression between optimal zones has been modeled and evaluated using several modifiers.
In this work, we have evaluated three models using different modifiers to adjust the progression in the trapezoidal membership functions of the terms low, adequate, and high with regards to Section 2.2.1.
First, we have evaluated low-adequate values of the heart rate by means of three models: (A) a severe model, where the low term is strong and the adequate term is weak; (B) a neutral model, where neutral modifiers are applied to both terms; and (C) a yielding model, where the adequate term is stronger than the low term, which is weaker. The strong, neutral, and weak properties have been defined by the parameters α = 0.5, α = 1, and α = 2.0 of the modifier, respectively. In Figure 5, we show a representation of the impact of the modifiers on the degree of the linguistic terms in a HR stream.
To evaluate the impact of fuzzy modifiers, we have included a survey of 10 cases with key fragments of low values from the heart rate of sessions, which were colored with blue and green based on the degree of the terms low and adequate, respectively. In Figure 5, we show an example of a survey case. In a clinical session, the cardiac rehabilitation team evaluated them using a 5-point Likert scale: {value -2, value -1, value 0, value +1, value +2}; the results are detailed in Table 1.
Second, in a similar way, high-adequate values of heart rate have been evaluated by means of three models: (A) a severe model; (B) a neutral model; and (C) a yielding model. A second survey, which contains 10 cases with key fragments of high values from the heart rate of sessions, was evaluated by the cardiac rehabilitation team using the 5-point Likert scale. Results are detailed in Table 1 and two examples of cases from the surveys are presented in Figure 5.
Figure 5. Impact of the fuzzy modifiers on heart rate streams. Heart rate is plotted using gradually changing colors blue, green, red based on the degree of the terms {low, adequate, high}, respectively. Green dotted lines determine the OHRTZs of the patient. Blue and red dotted lines determine the aerobic thresholds VT 1 , VT 2 of the patient, respectively. The impact of the models A, B, and C for a case of high-adequate HRs (right); and the impact of the models A, B, and C for a case of low-adequate HRs (left).
Impact of Temporal Window over Linguistic Terms
In this section, we describe an evaluation of the fuzzy temporal windows over the linguistic terms which describe the heart rate stream during rehabilitation sessions. As we detailed previously, the theoretical thresholds of the heart rate zones from the CRP are defined without considering the temporal permanence in the OHRTZs. In some critical situations, when patients perform the CRP, the evolution of the heart rate between OHRTZs is prompt and inconstant. In those cases, the adherence and adequacy cannot be defined just by the current value of HR.
In order to analyze the impact of the temporal windows, a survey with 15 key fragments of prompt and inconstant heart rate streams from the CRSs was designed to evaluate three temporal windows. The cardiac rehabilitation team from the Hospital Complex of Jaen (Spain) analyzed the impact of temporal windows for each term {low, adequate, high} based on their expert knowledge.
The temporal windows to evaluate are (t 1 ): the last single sample; (t 2 ): a 3-5 s window; and (t 3 ): a 5-10 s window. In the last two cases (t 2 and t 3 ), we have defined the fuzzy temporal windows µ t2 (∆t i ) = TS(3s, 3s, 3s, 5s) and µ t3 (∆t i ) = TS(5s, 5s, 5s, 10s) based on the temporal fuzzification described in Section 2.2.3. In Figure 6, we detail an example of the semantics and impact of the three temporal windows on a heart rate stream.
In Table 2, we show the results of the evaluation described by a 5-point Likert scale: {value −2, value −1, value 0, value +1, value +2}. We can observe that the short-term temporal window t 1 is more recommendable when evaluating the term high because it corresponds to critical values of HR, which require an immediate response from the patient to decrease the heart rate, whereas the long-term window t 3 is strongly not recommended. On the other hand, the longer temporal window t 2 is more properly related to the temporal term adequate because correct adherence requires the HR stream to stabilize temporarily within the OHRTZs. Finally, the term low is more appropriate with the temporal window t 2 , and is adequate with the other windows too.
Figure 6. Impact of temporal windows on a case of prompt heart rate streams within the zones described by the linguistic terms {low, adequate, high}. Based on expert evaluation: Model A (short-term temporal window) suits the high zone, detecting critical HRs immediately; Model B (middle-term temporal window) suits the adequate zone, requiring a minimal permanence within it; and Model B also suits the low zone without critical differences with regard to the other models.
Discussion
On the one hand, from the results presented in Section 4.1, where the impact of the modifiers is evaluated, we observe a preference for the neutral model, where the neutral modifier α m = 1.0 is applied to the linguistic terms. This indicates that no linguistic term predominates over another when defining the transition zones between the OHRTZs and the aerobic thresholds VT 1 , VT 2 . We note that in the case of high values of HR, which are more sensitive for patients, the yielding model is strongly not recommended by the experts.
On the other hand, based on the expert evaluation presented in Section 4.2, where the impact of temporal windows is evaluated, we note that the short-term temporal window suits the high zone, detecting critical heart rates immediately. Model B (the middle-term temporal window) suits the adequate zone, requiring a minimal permanence within it, and also suits the low zone without critical differences with regard to the other models.
In this way, we note the adequacy of the clinical protocol for real-time monitoring of the CRSs in wrist-worn devices. In addition, the use of a fuzzy model including modifiers and temporal windows has provided a methodology to obtain more accurate terms. This methodology can be extended to model other health contexts based on data stream processing.
Although previous works have mainly focused on ECG sensors [13,26], the use of a wrist-worn device with the new generation of heart rate sensors provides high accuracy with respect to ECGs [43] for low-risk patients performing low- and medium-intensity exercise. The proposed approach has been implemented on the Polar M600 with Android Wear. Moreover, this wrist-worn device is noninvasive, light, and comfortable, yet has powerful computing capacity.
When translating the approach to other devices and health contexts, we advise that the quality and precision of heart rate measurement is critical to ensure patient safety. High-risk patients and those with other pathologies could require more accurate devices such as ECGs. The proposed wrist-worn device just provides a measurement of HR; ECG devices could provide further signal processing of HR, where heart rate variability or the QRS complex could be described by means of the here-proposed linguistic terms and fuzzy temporal windows, due to the expanding importance of short-term beat windows in patient analysis [44].
Finally, we note the light processing required to compute our methodology, which is based on fuzzy logic, enabling low-cost wrist-worn devices to incorporate it without a computational burden. Other approaches based on similar devices, such as the Fitbit [28] or Garmin [45], could be extended to develop embedded applications providing real-time monitoring during rehabilitation sessions.
Conclusions and Future Work
The main motivation of this work is enabling the high-quality real-time monitoring of CRPs at the homes of patients, designed and supervised remotely by the cardiac rehabilitation team. For this, we have proposed: (1) integrating a high-quality protocol based on clinical guidelines for monitoring the HR of the patient in a personalized way; (2) providing a real-time monitoring during sessions using a wearable wrist-worn device with heart rate sensor; and (3) using a wearable-mobile-cloud platform for collecting and synchronizing data between patients and the cardiac rehabilitation team.
The methodology of this work has focused on modeling the theoretical approaches for developing a wearable application for real-time monitoring using wrist-worn devices. In order to address this challenge, first, a fuzzy model is proposed to describe the heart rate stream under a linguistic approach by means of three representative terms: low, adequate, and high. Fuzzy modifiers and fuzzy temporal windows are included in the methodology. The fuzzy approach provides a flexible evaluation of the HR stream: (1) enabling intuitive real-time monitoring in wrist-worn wearable devices during sessions; and (2) providing visual and gradual advice, whose intensity is related to the degree of the terms.
On the other hand, an evaluation of fuzzy modifiers and fuzzy temporal windows is included to generate more accurate and flexible terms. In Section 4, the impact of fuzzy modifiers and temporal windows over the linguistic terms is analyzed by means of case-based surveys. They have been evaluated by the cardiac rehabilitation team from the Hospital Complex of Jaen (Spain), indicating the most appropriate semantics for each linguistic term.
In future works, the approach will be extended to generate linguistic recommendations and summaries of the completed sessions for the patients and the cardiac rehabilitation team. In this work, we have introduced the aggregation as a straightforward indicator on a 4-star scale, but a further analysis of the heart rate stream will provide intelligent and automatic feedback for cardiologists to detect weak points in the sessions of patients.
Acknowledgments: This contribution has been supported by the project PI-0203-2016 from the Council of Health for the Andalucian Health Service, Spain, together with the research project TIN2015-66524-P from the Spanish government.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 8,395.8 | 2017-12-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Pushing the limits of EUV mask repair: addressing sub-10 nm defects with the next generation e-beam-based mask repair tool
Abstract. Mask repair is an essential step in the manufacturing process of extreme ultraviolet (EUV) masks. Its key challenge is to continuously improve resolution and control to enable the repair of the ever-shrinking feature sizes on mask along the EUV roadmap. The state-of-the-art mask repair method is gas-assisted electron-beam (e-beam) lithography also referred to as focused electron-beam induced processing (FEBIP). We discuss the principles of the FEBIP repair process, along with the criteria to evaluate the repairs, and identify the major contributions determining the achievable resolution. As key results, we present several high-end repairs on EUV masks including a sub-10-nm extrusion achieved with the latest generation of e-beam-based mask repair tools, the MeRiT® LE. Furthermore, we demonstrate the corresponding repair verification using at-wavelength (actinic) measurements.
Introduction
Extreme ultraviolet (EUV) lithography is today deployed in high-volume chip manufacturing. [1][2][3] Several leading chip manufacturers have already announced the delivery of first products created with EUV lithography. The key advantage of EUV lithography is its extendibility. The extremely short EUV wavelength of 13.5 nm enables the continuation of Moore's law for many technology nodes ahead, 4,5 which is of tremendous value for the entire semiconductor industry. To realize this, significant technology challenges must be mastered. In particular, the EUV light properties drive significant changes in core elements and tools of the chip manufacturing process. 1-3 An illustrative example of such an innovation step is the photomask. Its entire working principle and design had to be revised. Firstly, EUV masks need to be operated in reflection and not in transmission as conventional deep ultraviolet (DUV) masks. The fact that EUV light is strongly absorbed by all materials not only requires that all manufacturing tools using EUV light operate in vacuum, it also implies that the incorporated optical elements are reflective to maximize system transmission.
Second, complex multi-layers had to be developed to coat the mask. 6 For lithography process productivity reasons, it is essential to maximize the reflectivity of each optical element, including the mask. Creating such a high-reflectance surface, however, is again limited by the physical properties of EUV light. The refractive index of all materials at 13.5 nm is close to 1, and differences between materials are small. 7 Thus, creating a high-reflectance EUV mask operated under near normal incidence angles can only be achieved by multilayer coatings, making EUV masks highly complex structures. Third, the mask topography affects the aerial image. The height of the absorbers structuring the mask is significantly larger than the EUV wavelength. Consequently, electro-magnetic interaction of the EUV light with the mask topography becomes important for the imaging result on wafer. Understanding and optimizing this interaction is a field of ongoing research. 8,9 Finally, the optimization of the mask absorber material is a very active field of research toward the next technology nodes. Some developments target to reduce the absorber thickness via so-called high-k materials, i.e., materials with a large absorption, to reduce the mask 3D effects. Further activities target to improve imaging by phase shifting masks using low-n materials. 10 All activities focus on improvements to increase the overall mask reflectivity and imaging contrast, and hence to enable larger process windows.
Consequently, EUV masks are significantly more complex than their predecessors in DUV lithography. Their manufacturing complexity and value is increasing accordingly. This also means that the economic pressure not to lose masks during the manufacturing process is soaring. Thus, the ability to repair EUV masks is becoming technologically and economically increasingly important.
Even though the fabrication of EUV masks is extremely advanced, it is not perfect, i.e., it yields certain amounts of defects. As explained above, the repair of such defects is practically a prerequisite for the profitable application of EUV masks. Whereas several technologies exist for the repair of mature technology photomasks, such as, e.g., nanomachining or laser-based repair techniques, all these technologies run into physical limitations at shrinking feature sizes. 11,12 Therefore, the repair of the most advanced features is currently solely based on gas-assisted electron-beam (e-beam) lithography methods. [13][14][15] These can be subsumed as focused electron-beam-induced processing (FEBIP) techniques. [16][17][18][19] The basic principle of FEBIP is the very local alteration of adsorbed precursor molecules or/and the substrate by the impact of electrons. 17,20 More specifically, focused electron-beam induced etching (FEBIE) 15,16 is used as a subtractive and focused electron-beam induced deposition (FEBID) 18,19,21 as an additive technique. To do so, the focused e-beam is applied in the presence of certain precursor molecules, thus triggering the local removal (FEBIE) or deposition (FEBID) of material. In a vivid picture, one might think of the focused e-beam as a pen and the precursor molecules as ink to draw on very small scales.
In this paper, we review the repair of EUV masks using FEBIP and demonstrate high-volume manufacturing ready solutions for EUV mask repair with the required high accuracies. In Sec. 2, we discuss the physical mechanisms determining the repair accuracy of EUV masks and we describe the procedures and metrics to control the quality and success of mask repairs. In Sec. 3, we introduce the latest mask repair tool, the MeRiT ® LE, available since 2021 to provide solutions for future EUV mask repairs. Furthermore, we present EUV mask repair results achieved with this tool and the corresponding verification results using the Aerial Image Measurement System (AIMS ® ) EUV tool. In particular, we demonstrate the repair of line extrusions smaller than 10 nm meeting the specifications of the next technology nodes. Finally, we summarize our results and present an outlook on future developments in EUV mask repair.
EPE and Minimum Repair Size-What is Limiting Resolution
A major challenge of the FEBIP-based repair process is to address smaller defects with decreasing critical dimensions (CD) on EUV masks. However, the underlying physics and chemistry of the repair process is considerably complex. On a fundamental level, we identify three main contributions: the size of the electron affected area of the mask, 22-26 the properties of the precursor molecules in general 20 and particularly of the mask material, 22,23,25 and finally the instrumental setup. Furthermore, each of the listed contributors can be broken down into several subtopics, all influencing the achievable resolution.
For example, the size of the electron affected area is obviously determined by the size and shape of the e-beam impinging on the surface. However, even more importantly, the electron affected area strongly depends on secondary processes such as scattering events and the release of secondary electrons (SE). 19,22,[25][26][27][28] The spatial range of these effects drastically decreases with decreasing primary energy of the e-beam. Thus, the electron affected area depends on the shape and size of the e-beam, the energy and current density of the primary e-beam and the material properties of the mask, such as density, elemental composition, and morphology.
To initiate the repair process, the electrons must interact with an adsorbed precursor molecule. Usually, the repair process relies on electron induced dissociation of the precursor. This again is a rather complex process. For example, the cross section for electron induced dissociation depends on the actual electron energy and can be triggered by different processes like dissociative electron attachment or dissociative ionization, to name just two. Conventional wisdom holds that the cross section is low for higher electron energies and higher for low electron energies, which indicates that indeed SE play a decisive role in FEBIP.
In the next step, the properties of the dissociation products determine the course of the FEBIP process. In addition, also the physicochemical properties of the intact precursor, such as sticking coefficient or mobility on the mask surface, are to be considered for a full understanding of the repair process.
The last major contribution is the instrumental setup, which is probably the best accessible of the discussed contributions. Thereby, tool stability aspects such as beam position accuracy or inherent vibrations contribute to the repair performance. To deal with the complexity of the mask repair process, we follow a semiempirical approach in which the minimum repair size (MRS) is used as the quantity to describe the repair precision. As discussed above, a major contribution to the MRS is the size of the mask area that is exposed to primary, backscattered, and SE, which in turn trigger the electron induced surface chemistry. As mentioned above, the size of the SE distribution depends on the mask material, the energy of the impinging electrons, their spatial distribution given by the properties of the e-beam, and the relative movement between mask and electron source, i.e., the so-called jitter, which is a purely instrumental contribution. To address the main goal to decrease the MRS, a reduced jitter, a smaller SE distribution, and a smaller spot size of the electron source are obviously beneficial. While reducing the jitter is a matter of engineering excellence, the SE distribution and the e-beam spot size are fundamentally linked. Since the mask material is preset, an effective way to reduce the SE exit area is by minimizing the energy of the impinging primary electrons. 29 A reduced electron energy, however, leads to an enlarged e-beam spot size in conventional scanning electron microscopes (SEM). We have quantified these trade-off behaviors with Monte Carlo simulations of the SE distribution via the CASINO software 30,31 and with calculations of the e-beam spot size for different column types. 27 Figure 1 summarizes the results, showing that an increased repair resolution may occur at lower electron voltages if the reduction of the SE cloud overcompensates the loss in imaging resolution of conventional SEMs. However, at small voltages the increasing imaging errors of the SEM may still dominate. Thus, SEM columns providing optimized e-beam resolution even at low landing energies are key modules for further improving the MRS. Therefore, the next generation mask repair tool MeRiT ® LE features an improved column with respect to its predecessor, the MeRiT ® neXT. Future column developments may even further extend the MRS roadmap to even lower electron voltages with aberration corrected e-beam columns.
Fig. 1 The e-beam spot size for two different columns (light and dark blue) and relative spatial extension of the secondary electron distribution (gray) as a function of electron voltage. In addition, the working points of the MeRiT ® neXT tool using a conventional column at 600 V, and the MeRiT ® LE tool using an optimized column at 400 V are indicated. Both, primary electron spot size, and secondary electron spot size have been reduced for the MeRiT ® LE as compared to the MeRiT ® neXT tool, thus enabling significantly smaller repair sizes.
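To make the trade-off concrete, the Python toy model below combines an assumed probe-size law (growing at low voltage) with an assumed secondary-electron range law (growing at high voltage) in quadrature; the scaling laws and prefactors are purely illustrative assumptions and do not reproduce the CASINO simulations or the column data behind Fig. 1.

import math

def spot_size_nm(voltage_v, k_spot=6000.0):
    # Assumed growth of the primary-beam spot size toward low voltage (illustrative only).
    return k_spot / voltage_v

def se_range_nm(voltage_v, k_se=0.02):
    # Assumed growth of the secondary-electron exit area with voltage (illustrative only).
    return k_se * voltage_v

def effective_resolution_nm(voltage_v):
    # Combine both contributions in quadrature -- a simplifying assumption for this sketch.
    return math.hypot(spot_size_nm(voltage_v), se_range_nm(voltage_v))

for v in (200, 400, 600, 800, 1000):
    print(f"{v:4d} V: spot = {spot_size_nm(v):5.1f} nm, "
          f"SE range = {se_range_nm(v):5.1f} nm, "
          f"effective = {effective_resolution_nm(v):5.1f} nm")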
The edge placement error (EPE) is another important criterion to describe how accurately the repair can be performed. The EPE is directly coupled to local CD errors 32 that the repair process shall correct for. Hence, a repair process is successful when the positioning of the repaired edge relative to the target position stays within a well-defined range around the target edge. The EPE consists of an edge placement offset and noise on the repaired site, which is a result of the process chemistry. The edge placement offset is dominated by tool contributions, such as the exact determination of the defect location and shape, which defines the positioning of the e-beam during repair. Another important factor is the spatiotemporal alteration of this area in the SEM image during the repair, e.g., by charging or drift effects. Accordingly, the EPE reductions required by the semiconductor roadmap need continuous improvements of both, the tool design, and the repair processes.
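As a small illustration of how an EPE can be split into an offset and a noise term, the Python sketch below evaluates a set of measured edge positions against a target edge; the numbers are hypothetical and the decomposition is a generic one, not the output of any specific metrology software.

import statistics

target_edge_nm = 0.0
# Hypothetical repaired-edge positions relative to the design edge (nm).
measured_edges_nm = [0.6, 1.1, 0.4, 0.9, 0.7, 1.2, 0.5, 0.8]

offsets = [e - target_edge_nm for e in measured_edges_nm]
edge_placement_offset = statistics.mean(offsets)     # systematic part of the EPE
edge_noise_3sigma = 3 * statistics.stdev(offsets)    # stochastic part (3-sigma)

print(f"offset = {edge_placement_offset:.2f} nm, 3-sigma noise = {edge_noise_3sigma:.2f} nm")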
Repair and Verification
The accuracy of the repair and hence the EPE can be determined by the position of the repaired edge relative to the target position as measured with the ZEISS AIMS ® tools, an industry standard for defect disposition and repair verification of DUV as well as EUV masks. 33 Within a mask shop, the repair process and the subsequent printability verification with an actinic imaging tool such as the AIMS ® are closely connected, since a mask repair is only deemed successful if the aerial image of the defective site on the mask is within specification. 34 AIMS ® is an at-wavelength (actinic) metrology system specifically designed to emulate the aerial image formation as performed on exposure tools (e.g., ASML scanners). Figure 2 shows the core functionalities of the AIMS ® EUV system in comparison to a wafer scanner. The illumination of the mask within the AIMS ® system is generated in an equivalent way to that of a scanner tool. While the optical design and components of the AIMS ® tools may differ from the modules building up a scanner, they must be able to reproduce the same angular and spatial light intensity distribution locally within a limited light spot on the mask surface, assuring that the features on the mask experience (locally) the same light distribution they would experience on the scanner. After being collected by the numerical aperture (NA) on the mask side, the light is propagated through a projector lens (or mirror optics in the case of EUV) and the aerial image is magnified to be recorded with a CCD camera for post-evaluation. Not only must the illumination of the mask be emulated to achieve a complete and fully reliable printability statement, but the collection of diffraction orders transmitted (DUV) or reflected (EUV) off the mask must also be equivalent to that on the scanner, to reproduce all relevant imaging effects which are significant for a thorough mask qualification, e.g., in the qualification of a repair. Thanks to this scanner-equivalent aerial image generation process and the ability to measure through focus, AIMS ® offers capabilities that go well beyond a printability statement, allowing the components of an EPE to be broken down and the different contributions deriving from the mask to be qualified. This is one major advantage of such a powerful metrology system, i.e., the capability to provide a full mask qualification, and therefore to gain tight control over an extensive metrology budget, without the need to print a single wafer.
One of the challenges still open for a seamless EUV production line is an inherent property of EUV light: each photon transports a considerable amount of energy when compared to the 193-nm photons of DUV. As a consequence, the number of photons needed to develop a resist is much lower than for DUV lithography, giving rise to so-called stochastic failures on wafer, which to date are still a matter of concern in the community. In normal measurement mode, the AIMS ® EUV system itself uses an absolute intensity of light, much larger than the one in the scanner, to locally illuminate a mask feature. However, being able to tune this light to match the flux seen by the single mask features locally on the mask, the AIMS ® EUV is also able to capture the aerial image of a mask feature emulating the same amount of photon noise as produced in the scanner aerial image, before this gets absorbed into the resist and developed. The novel AIMS ® EUV scanner stochastics emulation mode can be further employed to study the statistical impact of mask error sources contributing to defectivity and/or EPE, being able to qualify defects or defect repairs from a statistical point of view, providing more insights into the effects such defects and different repairs have on the final wafer product. 33 AIMS ® Auto Analysis (AAA) 35 is a powerful tool that automatically evaluates these light intensity maps and subsequently assesses the success of the repair with little need of user input.
Overview Photomask Repair Tool
As detailed in a previous section, scaling trends in the semiconductor industry towards smaller technology nodes and feature sizes are continuing, and first consumer products manufactured with EUV technology are already on the market. These developments lead to stricter technological requirements, especially for the corresponding EUV photomasks in terms of repair accuracy, i.e., in particular MRS and EPE (see the previous section). The current industry standard for high-end photomask repair tools is the MeRiT ® neXT system. The next generation repair tool targeting the high-end photomasks of the upcoming technology nodes is the MeRiT ® LE system. The MeRiT ® LE offers a 35% improved repair resolution compared to its predecessor and features an MRS of 10 nm. It is designed to repair photomasks of the 5-nm technology node and beyond. 36 With the MeRiT ® LE, transparent and opaque defects of many different geometries on DUV and EUV photomasks can be repaired successfully. The tool uses FEBIP technology and provides the highest repair resolution. The MeRiT ® LE makes use of a new e-beam column, which provides a decreased e-beam spot size at a given voltage compared to the e-beam columns employed in predecessor MeRiT ® tools. Furthermore, it is operated at a lower e-beam voltage of 400 V, which reduces the area exposed to SE, and shows a reduced jitter due to improved tool damping. All these points together enable the repair of photomasks for future technology nodes with shrinking feature sizes (see the previous section). In Fig. 3, an image of the tool is depicted.
High-End EUV Photomask Repair Results
In this section, we present EUV mask repairs of transparent and opaque defects on a programmed defect mask (PDM) to verify the repair capabilities of the MeRiT ® LE. The PDM is based on an industry standard EUV blank, i.e., using the same Mo-Si multi-layer and capping as EUV production masks and a binary Ta based absorber of 60 nm thickness. Please note that, unless stated otherwise, all dimensions are in mask dimensions, and not in wafer dimensions (Factor 4x for current EUV scanners). To demonstrate repair of opaque pattern defects, compact extrusions and bridge defects have been chosen. Broken-line defect types have been selected for clear defect repairs. All repairs have been verified by using AIMS ® EUV actinic measurements, which provide a full emulation of the scanner imaging conditions. To quantify the measurement results, the AAA software has been used. Simply speaking, AAA automatically extracts edges from the recorded aerial images through focus thus allowing for quantitative Edge Placement and CD evaluations. Selected pattern sizes and defect types on the mask reflect decreased device structures on high-end photomasks and increased complexity of pattern defects of the upcoming 5 nm technology node and beyond: Defects on EUV masks with feature sizes down to 60-nm half-pitch and extrusion widths of 9 nm in size have been successfully repaired.
For repair demonstration and verification, SEM Pre-Repair, SEM Post-Repair, and corresponding AIMS ® EUV aerial images have been recorded. For all shown AIMS ® EUV measurements, NXE:3300 scanner Dipole aperture settings have been used. All results have been analyzed at best focus and through focus for various focal planes (FP). Here, an FP of FP = −1 describes a focal plane with a shift of −1 μm compared to the best focus plane (FP = 0).
To illustrate what effect even small defects can have on the wafer print of high-end photomasks with shrinking feature sizes, a small extrusion type defect with 500 nm in length, 9 nm in width, and a half-pitch size of 88 nm on mask has been analyzed. Figure 4(a) shows the SEM image of the programmed defect and Fig. 4(b) the corresponding aerial image, acquired by AIMS ® EUV.
From the widened blue line in the center of Fig. 4(b), it becomes clear that the aerial image is significantly affected even by this tiny extrusion defect. To repair such small defects, the repair tool needs to provide an excellent MRS capability of better than 10 nm and a highly accurate repair edge placement. Subsequently, it is shown how this kind of small defects can be addressed by the MeRiT ® LE system.
The first investigated defect type on the EUV PDM is the same as discussed in the previous section, i.e., a small extrusion type defect with 500 nm in length, 9 nm in width, and a half-pitch size of 88 nm on mask. The repair is thus subtractive. Figures 5(a) and 5(b) show the SEM image of the defect before and after the repair, respectively, measured with the MeRiT ® LE system.
In Figs. 5(c) and 5(d), the post-repair aerial image acquired by AIMS ® EUV and a corresponding detailed analysis of the CD after the repair, carried out with the software AAA at best focus, are shown. The black line in Fig. 5(d) represents the CD along the center of the defect site, i.e., in "horizontal" direction indicated by the coordinate "y." The gray lines represent the corresponding CDs along the "unrepaired" neighboring reference "spaces," i.e., along the clear regions. From Fig. 5(d), it becomes evident that the CD variations along the center of the defect (black line) are below the CD variations of the reference, i.e., unrepaired regions (gray lines). A detailed analysis of the aerial image revealed for the repair a maximum CD deviation of ΔCD MAX = 3.3 nm, whereas for the unrepaired reference regions a standard variation, representing 3σ-values, of ΔCD REF = 3.6 nm relative to the average CD reference value of CD REF = 88 nm has been determined. Depicted CD values represent average values and corresponding deviations (3σ-values). From Figs. 5(e) and 5(f), it is evident that also for the other analyzed FP the CD deviations of the extrusion repair are about the variations of the reference CDs. This demonstrates that the MRS of the repair tool is better than 10 nm. Furthermore, the average CD in the reference site and the repaired site are both well within 88 nm +/− 1 nm, demonstrating the edge placement capability of the repair tool. In total, these data verify the successful repair of the 9-nm extrusion with the MeRiT ® LE.
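The kind of CD evaluation described above can be illustrated with a short Python sketch that compares the CD profile along the repaired defect with the unrepaired reference spaces; the profile values below are hypothetical, and the calculation is a generic re-implementation of the idea, not the AAA software itself.

import statistics

cd_target_nm = 88.0
# Hypothetical CD profiles (nm) extracted from an aerial image at best focus.
cd_repair = [88.4, 87.1, 89.2, 86.9, 88.8, 87.5, 88.2]     # along the repaired defect center
cd_reference = [88.3, 87.0, 89.1, 87.4, 88.6, 88.9, 87.2]  # along an unrepaired reference space

delta_cd_max = max(abs(cd - cd_target_nm) for cd in cd_repair)  # maximum CD deviation of the repair
cd_ref_mean = statistics.mean(cd_reference)
delta_cd_ref_3sigma = 3 * statistics.stdev(cd_reference)        # 3-sigma spread of the reference CD

print(f"repair: max |dCD| = {delta_cd_max:.1f} nm; "
      f"reference: CD = {cd_ref_mean:.1f} nm +/- {delta_cd_ref_3sigma:.1f} nm (3 sigma)")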
The second investigated opaque defect type on the EUV PDM is a bridge type defect with 500 nm in length and a half-pitch size of 60 nm on mask. The repair is thus subtractive. Figures 6(a) and 6(b) show the SEM image of the defect before the repair and the defect site after repair with the MeRiT ® LE system. In Figs. 6(c) and 6(d) the post-repair aerial image, acquired by AIMS ® EUV and a corresponding detailed analysis of the CD after the repair with the software AAA at best focus are depicted. The black line in Fig. 6(d) represents the CD along the center of the defect site, i.e., in horizontal direction indicated by the coordinate y. The gray lines represent the corresponding CDs along the unrepaired neighboring reference "lines," i.e., along the clear regions.
In addition to the extrusion and the bridge defect types, we have investigated clear defects on the EUV PDM using a broken-line type defect with 500 nm in length and a half-pitch size of 60 nm on mask, i.e., the repair in this case is additive. Figures 7(a) and 7(b) show the SEM image of the defect before the repair and a corresponding SEM image of the defect site after the repair with the MeRiT ® LE system. In Figs. 7(c) and 7(d), the corresponding post-repair aerial image and the detailed analysis of the CD after repair carried out with AAA at best focus are shown. The black line in Fig. 7(d) shows the CD along the center of the defect site, i.e., in horizontal direction indicated by the coordinate y. The gray lines represent the corresponding CDs along the unrepaired neighboring reference lines, i.e., along the opaque regions.
In Fig. 7(d), the CD variations along the center of the defect (black line) are about the CD variations of the reference regions (gray lines).
Since the repair processes are still under optimization on the new tool, the edges of the deposited line do not yet fully close up on the mask feature line [Fig. 7(b)]. However, since the gaps are very small and the imaging process of the scanner acts as a low-pass filter, the post-repair AIMS ® EUV aerial image is only marginally affected [Fig. 7(c)]. Nevertheless, at the edges of the repair site (around y = ±250 nm) slightly higher CD deviations for the repair occur. Consequently, the maximum extracted CD deviation in that case is a bit higher compared to the opaque defect types. A detailed analysis of the aerial image revealed a maximum CD deviation of ΔCD MAX = 3.8 nm for the repair, whereas for the unrepaired reference regions a standard variation, representing 3σ-values, of ΔCD REF = 1.9 nm relative to the average CD reference value of CD REF = 58.5 nm was determined. Figures 7(e) and 7(f) show the results of the through-focus analysis of the CD of the post-repair aerial image at various FP. From Figs. 7(e) and 7(f), it appears that for the analyzed FP, the CD deviations of the repair are even lower than at best focus. Overall, the absolute 3σ-values of the repair CD are for all FP at a very low level, with a highest 3σ-value of only ΔCD 3σ = 2.4 nm. It is worth mentioning that these CD deviations are, to our knowledge, the lowest values reported so far for a deposition repair at a half-pitch size of 60 nm on mask.
Summary and Outlook
In the work at hand, we report significant advances in EUV mask repair and verification equipment. Using the new MeRiT ® LE system, EUV mask repairs of extrusions as small as 9 nm and of 60 nm L&S patterns on mask have been demonstrated and successfully verified using the AIMS ® EUV tool and AAA software. These results demonstrate that the MeRiT ® LE represents a valid EUV mask repair solution for the next technology nodes of the semiconductor roadmap.
Along with this development, two major trends are becoming apparent. First, the shrink of feature sizes keeps continuing and needs to be realized for the repair of EUV masks as well. This will be facilitated by the scaling of the repair mechanisms towards smaller repair sizes and better resolution described in this paper. The most important developments towards a reduction of the MRS are the reduction of the primary energy of the e-beam following a low-voltage roadmap, innovations in process chemistry and a higher stability of the tool itself. Secondly, the introduction of new EUV mask materials significantly improving the entire lithography process is imminent, and developments are ongoing throughout the industry. Overall, this is a very interesting new field of innovations with a high value potential. We speculate that we may see a similar development as for 193-nm lithography, where many different mask material options emerged over time. The MeRiT ® focused e-beam technology inherently has a great flexibility to repair different material types including both DUV and EUV mask materials and is therefore well prepared to also support this trend. In conclusion, the MeRiT ® LE provides the next generation EUV mask repair solution for the upcoming technology nodes further establishing mask repair as a cornerstone in the industry wide activities to support scaling and the continuation of Moore's law. | 6,311.2 | 2021-07-01T00:00:00.000 | [
"Physics"
] |
Integrative DNA Methylation and Gene Expression Analyses Identify DNA Packaging and Epigenetic Regulatory Genes Associated with Low Motility Sperm
Background In previous studies using candidate gene approaches, low sperm count (oligospermia) has been associated with altered sperm mRNA content and DNA methylation in both imprinted and non-imprinted genes. We performed a genome-wide analysis of sperm DNA methylation and mRNA content to test for associations with sperm function. Methods and Results Sperm DNA and mRNA were isolated from 21 men with a range of semen parameters presenting to a tertiary male reproductive health clinic. DNA methylation was measured with the Illumina Infinium array at 27,578 CpG loci. Unsupervised clustering of methylation data differentiated the 21 sperm samples by their motility values. Recursively partitioned mixture modeling (RPMM) of methylation data resulted in four distinct methylation profiles that were significantly associated with sperm motility (P = 0.01). Linear models of microarray analysis (LIMMA) was performed based on motility and identified 9,189 CpG loci with significantly altered methylation (Q<0.05) in the low motility samples. In addition, the majority of these disrupted CpG loci (80%) were hypomethylated. Of the aberrantly methylated CpGs, 194 were associated with imprinted genes and were almost equally distributed into hypermethylated (predominantly paternally expressed) and hypomethylated (predominantly maternally expressed) groups. Sperm mRNA was measured with the Human Gene 1.0 ST Affymetrix GeneChip Array. LIMMA analysis identified 20 candidate transcripts as differentially present in low motility sperm, including HDAC1 (NCBI 3065), SIRT3 (NCBI 23410), and DNMT3A (NCBI 1788). There was a trend among altered expression of these epigenetic regulatory genes and RPMM DNA methylation class. Conclusions Using integrative genome-wide approaches we identified CpG methylation profiles and mRNA alterations associated with low sperm motility.
Introduction
Traditional semen analysis measures sperm concentration, motility, morphology, and semen volume, and is acknowledged to be a poor predictor of fertility, demonstrating remarkable intra- and inter-individual variability [1,2]. Because of these limitations, effort has been devoted to developing sperm molecular biomarkers that may better and more stably reflect sperm function.
DNA methylation is the stable, covalent addition of a methyl group to cytosine that can represent response to environmental cues or exposures that may modify gene expression. Both human and animal studies indicate that abnormal sperm DNA methylation patterns are associated with subfertility, including aberrant methylation of both imprinted [3][4][5][6][7][8][9][10][11] and non-imprinted genes [4,12,13] in oligospermic men.
In the present study, we utilized high-density array techniques to investigate the hypothesis that alterations to the pattern of sperm DNA methylation or mRNA content are associated with sperm function.
Ethics Statement
The Committee on the Protection of Human Subjects: Rhode Island Hospital Institutional Review Board 2 (Committee #403908) approved the study and written informed consent was obtained from all participants. Clinical investigation was conducted according to the principles expressed in the Declaration of Helsinki.
Microarray DataSets
The microarray data discussed in this publication is MIAME compliant and the raw data has been deposited in NCBI's Gene Expression Omnibus (Edgar et al., 2002) as detailed in the MGED Society website http://www.mged.org/Workgroups/MIAME/miame.html. This data is accessible through GEO Series accession number GSE26982 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE26982).
Patient Population, Semen Analysis, and Sperm Isolation
Study subjects presented for semen evaluation at Rhode Island Hospital's tertiary male reproductive health clinic. Samples were collected from 21 men with unknown fertility status and a range of semen characteristics (Table 1). During the semen analysis, morphology was scored using Kruger strict criteria and total motility was calculated as described in the WHO laboratory manual (2010) [31].
After clinical analysis the samples were divided into one quarter and three quarter aliquots for DNA and RNA isolations, respectively. Each group was processed through an optimized Percoll (GE Healthcare, Uppsala, Sweden) gradient to eliminate debris, non-sperm cells, and dead sperm [32]. Briefly, 1 ml of the fresh semen was applied to a monolayer of 50% Percoll. After centrifugation, the upper and interface layers containing the dead sperm and other somatic contaminants were aspirated off, leaving the sperm enriched fraction. The sperm fraction was washed with phosphate buffered saline and the purified sperm samples were processed immediately for mRNA and DNA isolation.
Prior to processing the 21 samples, sperm purity was confirmed by the absence of somatic cell contaminants using bright phase microscopy and by the absence of 18/28S ribosomal RNA peaks by RNA gel electrophoresis (data not shown) [19,21].
Imprinted Genes
A list of 187 imprinted genes in the human genome was compiled based on information from three sources: (1) experimentally determined imprinted genes listed in two databases (http://www.geneimprint.com/databases/ and http://igc.otago.ac.nz/home.html) (n = 62); (2) imprinted genes identified using the ChIP-SNP method (n = 27) [40]; and (3) protein-coding genes from the 156 putatively imprinted sequences that correspond to known genes listed by NCBI (n = 106) [41]. Taken together, a final list of 187 imprinted genes was compiled from these three sources (Table S1).
mRNA Isolation and Affymetrix GeneChip Human Gene 1.0 ST Array
Sperm mRNA was extracted from 18 of the 21 men using a modified Stat 60 (IsoTex Diagnostics, Inc., Friendswood TX, USA) protocol in addition to components of Qiagen's RNeasy kit (Qiagen Sciences, Germantown, MD, USA). Using the Brown Genomics Core Facility, the isolated sperm mRNA was processed and hybridized to Affymetrix GeneChip Human Gene 1.0 ST Arrays (Affymetrix, Santa Clara, CA, USA), providing whole-transcript coverage of 28,869 genes by ~26 probes spread across the length of each gene. The probe cell intensity data from the Affymetrix GeneChips were normalized and annotated using Affymetrix Expression Console as recommended by the manufacturer. The application uses the RMA-Sketch workflow analysis as the default to create CHP files. The CHP log2 expression files were then merged in Expression Console with the annotation file.
Statistical Analyses
Aside from array normalization procedures, the R software environment (R Foundation for Statistical Computing, Vienna, Austria) was used for all statistical analysis.
Recursively Partitioned Mixture Modeling
Recursively partitioned mixture modeling (RPMM) profiles were fit to the entire Infinium array using previously described methods [42]. This method builds classes of samples based upon the similarity of methylation profiles by recursively splitting samples into parsimoniously differentiated classes. The classes are identified by their pattern of branching into right (R) or left (L) arms. Permutation tests were used to assess the association between the RPMM methylation classes and the semen parameters listed in Table 1. Our test statistic was the maximum of the Kruskal-Wallis (KW) test statistic, and the null distribution for this test statistic was obtained by permutation. Semen parameters were considered significantly associated with RPMM profiles when P < 0.02, after Bonferroni correction for multiple comparisons.
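A minimal Python sketch of this permutation scheme is given below. It is illustrative only: the data frame, column names, and permutation count are hypothetical placeholders, and scipy's Kruskal-Wallis test stands in for the implementation actually used by the authors in R.

import numpy as np
from scipy.stats import kruskal

def max_kw_statistic(df, class_col, param_cols):
    # Maximum Kruskal-Wallis H statistic over the tested semen parameters;
    # df is expected to be a pandas DataFrame with one row per man.
    stats = []
    for col in param_cols:
        groups = [g[col].dropna().values for _, g in df.groupby(class_col)]
        stats.append(kruskal(*groups).statistic)
    return max(stats)

def permutation_p_value(df, class_col, param_cols, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    observed = max_kw_statistic(df, class_col, param_cols)
    null = np.empty(n_perm)
    shuffled = df.copy()
    for i in range(n_perm):
        # Shuffling the class labels breaks any class-parameter association,
        # giving the null distribution of the maximum KW statistic.
        shuffled[class_col] = rng.permutation(df[class_col].values)
        null[i] = max_kw_statistic(shuffled, class_col, param_cols)
    return (np.sum(null >= observed) + 1) / (n_perm + 1)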
Quantitative Analysis of the DNA Methylation Status of All CpGs
The LIMMA procedure [43] (R package limma) utilized a matrix design containing the 21 samples and their corresponding percent motility values listed in Table 1 to fit a simple linear regression model for each CpG dinucleotide. This univariately tests each CpG for association between methylation and sperm motility. LIMMA results provided estimates of strength and direction of association between CpG methylation and sperm motility and were adjusted for multiple comparisons with the qvalue package in R [44]. CpGs with positive slopes were interpreted as hypomethylated in low motility sperm and CpGs with negative slopes were interpreted as hypermethylated in low motility sperm.
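The per-CpG regression can be sketched as follows; this is a simplified Python stand-in for the LIMMA moderated statistics, fitting ordinary least squares per CpG and using Benjamini-Hochberg FDR in place of the qvalue package. The methylation matrix `beta` (CpG loci x samples) and the motility vector are hypothetical inputs.

import numpy as np
from scipy.stats import linregress
from statsmodels.stats.multitest import multipletests

def methylation_vs_motility(beta, motility, fdr=0.05):
    slopes, pvals = [], []
    for cpg_values in beta:                      # one row of beta values per CpG locus
        fit = linregress(motility, cpg_values)   # methylation ~ motility
        slopes.append(fit.slope)
        pvals.append(fit.pvalue)
    reject, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    # Positive slope: lower methylation in low motility samples (hypomethylated);
    # negative slope: higher methylation in low motility samples (hypermethylated).
    direction = np.where(np.array(slopes) > 0,
                         "hypomethylated_in_low_motility",
                         "hypermethylated_in_low_motility")
    return np.array(slopes), qvals, reject, direction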
mRNA Content Analysis of Candidate Transcripts
The transcript presence of the 276 candidate genes was tested using the same statistical strategy as the CpG analysis except here the design matrix was limited to the 18 samples with array data and the slopes were transformed into fold change values. The Affymetrix platform yielded a dataset with ~28,000 transcripts to assess. However, sperm contain a limited transcriptome (~5,000 transcripts) with few (~400) consistently expressed in sperm [22]. Therefore, we assessed 276 genes where an a priori hypothesis for association with subfertility existed based on previous reports. The analysis included 177 imprinted genes (10 of the 187 potential imprinted genes were not present on the Affymetrix array) as well as 99 candidate genes with biallelic expression (Table S1 and Table S2) [10,11,13,24,26,29,[45][46][47][48][49].
Statistical Analysis Comparing Associations Among RPMM Classes and Candidate Genes
Associations among the RPMM classes and the normalized gene expression values for candidate transcripts were calculated with the KW test statistic utilizing the strategy employed previously. Messenger RNAs were considered significantly associated with RPMM class when P < 0.02, after adjusting for multiple comparisons using the Bonferroni correction.
Sperm DNA Methylation Profiles Cluster by Motility
Unsupervised clustering of sperm DNA methylation data for the 1,000 most variable CpG loci on the array highlights the methylation differences among the 21 individual men ( Figure 1). As shown in the column annotation track, the clustering differentiated men based upon the motility of their sperm, with high motility samples (dark purple) clustering together and low motility samples (dark orange) clustering together, with intermediate shades between. The DNA methylation of CpGs within imprinted genes is established during spermatogenesis and maintained in mature spermatozoa. In addition, several laboratories have shown alterations at imprinted loci to occur more frequently in men with sperm abnormalities [3][4][5][6][7][8]10,11]. Thus, we hypothesized that imprinted loci may be specifically targeted for aberrant methylation in low motility sperm and separately clustered the 616 CpG loci associated with the 187 imprinted genes present on the array. We observed the same overall trend, with high motility samples clustering together and low motility samples clustering together ( Figure 2).
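A Python sketch of this clustering step is shown below; the beta-value matrix is a hypothetical input, and Euclidean distance with average linkage is an assumption rather than the authors' stated choice.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

def cluster_samples_by_variable_cpgs(beta, sample_ids, n_top=1000):
    # Select the most variable CpG loci across samples.
    top = np.argsort(beta.var(axis=1))[-n_top:]
    sub = beta[top, :]
    # Cluster the samples (columns) on the selected loci.
    tree = linkage(pdist(sub.T, metric="euclidean"), method="average")
    order = dendrogram(tree, labels=list(sample_ids), no_plot=True)["ivl"]
    return order   # sample ordering for the heatmap columns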
Sperm DNA Methylation Profiles are Significantly Associated with Motility
Recursively partitioned mixture modeling (RPMM) was performed on raw methylation data to organize the sperm samples into methylation classes based on similarity. The algorithm first separated the 21 sperm profiles into two different branches left (L) and right (R) and then further subdivided each branch into right and left branches resulting in 4 total classes: left left (LL), left right (LR), right left (RL) and right right (RR) (Figure 3, A). In Figure 3 (B) we plotted methylation class-specific sperm motility values: samples in methylation class RR had the lowest median motility, and methylation class was significantly associated with motility after adjusting for multiple comparisons (P = 0.01). The association between RPMM methylation class and sperm morphology approached statistical significance (P = 0.09), though methylation class was not associated with sperm count (P = 0.29).
Thousands of CpG Loci are Significantly Altered in Low Motility Sperm
Linear models of microarray analysis (LIMMA) was used to univariately test each CpG for association with motility. 9,189 of 27,578 CpGs (34%) had significantly altered methylation associated with motility after adjusting for multiple comparisons (Q < 0.05) (Table S3). Of these, 1,827 CpGs (20%) were hypermethylated in the low motility samples, whereas 7,362 CpGs (80%) were hypomethylated.
Because establishing proper methylation marks within imprinted genes during spermatogenesis is critical, we next restricted our analysis to CpGs associated with imprinted genes. Of the 616 CpGs associated with imprinted genes, 194 CpGs (31.5%) had significant associations with motility, similar to the distribution of the array overall. Amongst these loci, 47% (n = 92) were hypermethylated in the low motility samples, whereas 53% (n = 102) were hypomethylated. The majority of hypomethylated CpGs were on maternally expressed genes (45%), followed by paternally expressed (33%) and those with undetermined parent of expression (22%). Conversely, the majority of hypermethylated CpGs were associated with paternally expressed genes (70%), with the remainder maternally expressed (26%), and of undetermined parental expression (4%). The 194 loci corresponded to 92 genes, with 11 genes showing both hyper-and hypomethylated loci ( Table 2).
mRNA Content is Altered in Low Motility Sperm
Focusing on imprinted mRNAs and candidate biallelic mRNAs, LIMMA analysis was performed to identify differentially expressed transcripts, conditioning on motility. Twenty genes were identified as significant after adjusting for false discovery rate (Q < 0.05) (Table S4).
Integration of Epigenetic and Transcript Data
It is known that major modifications in chromatin organization occur in spermatid nuclei during spermatogenesis, leading to the high degree of packaging in the sperm head. Chromatin compaction ensues when the histones surrounding the DNA are replaced by protamines, and this occurs in parallel with transcriptional arrest [45]. Therefore, nuclear packaging and transcript content are interrelated. To determine whether altered expression of epigenetic regulatory genes was associated with methylation profiles we plotted the methylation class-specific gene expression values for the three epigenetic regulatory genes (HDAC1, SIRT3, and DNMT3A) with significantly altered expression in low motility sperm (Figure 4). Among methylation classes, expression values for HDAC1, SIRT3, and DNMT3A were most altered in class RR, the class with lowest motility sperm (increased expression for HDAC1 and DNMT3A, and decreased expression for SIRT3). For all three genes, the association between mRNA expression level and methylation class membership approached significance after adjusting for multiple comparisons (HDAC1, P = 0.03; SIRT3, P = 0.06; and DNMT3A, P = 0.07).
Discussion
Due to the unreliable nature of classifying men into abnormal and normal groups during a semen analysis, we used a data-driven approach to first qualitatively assess associations among sperm DNA methylation and our patient population. Unsupervised clustering indicated that there was an association between DNA methylation and motility status. This was true both for all of the CpGs on the array and for the imprint-only subset.
RPMM separated the 21 men into four classes based on similarity of DNA methylation array data. The median motility values were calculated for each class and the results suggested that the methylation profiles were associated with motility. Comparing the DNA methylation heatmap to the class versus motility boxplot indicates that the low motility class has the most aberrantly methylated CpGs. Overall, these data suggest that low motility sperm have increased hypomethylation relative to high motility sperm. We used LIMMA to identify the significantly altered CpGs conditioned on changes in motility for all CpGs on the array: over one-third of the CpGs (and almost half of the genes represented on the array) were significantly differentially methylated in the low motility samples and the majority of these were hypomethylated. The high prevalence of aberrantly methylated CpGs suggests a genome-wide DNA methylation defect in the low motility sperm. It has been previously hypothesized that the aberrant sperm DNA methylation could be due to abnormal chromatin compaction, inefficient DNA methyltransferases, and/or failure to maintain or acquire the correct methylation marks during spermatogenesis and our results are consistent with this literature [11][12][13]29].
To further clarify the potential functional alterations to imprinted genes and critical epigenetic regulatory genes, we evaluated sperm mRNA content of 177 imprinted genes and 99 other transcripts where an a priori hypothesis for association with male subfertility or epigenetic regulation exists. Twenty genes were identified as demonstrating significantly altered transcript levels in low motility sperm. All of the mRNAs except HDAC1, DNMT3A, LBD1, and FAS were present in decreased amounts in low motility sperm, and we did not observe altered mRNA content for BRDT, which was previously reported to have increased expression in subfertile patients [29].
Integration of epigenetic and expression data revealed a relationship between transcript content of three epigenetic regulatory genes (HDAC1, SIRT3, and DNMT3A) and methylation class. HDAC1 is the predominant histone deacetylase (HDAC) during spermatogenesis. Histone hyperacetylation is required for the histone-to-protamine exchange and is facilitated by the degradation of HDAC1 in elongated spermatids [51]. If HDAC1 is in excess, one could hypothesize that the histones are not being replaced by protamines, leading to an "immature" sperm chromatin structure, with less compact DNA. Therefore, incomplete or incorrect nuclear compaction may influence overall sperm maturation and be reflected in the physiological endpoint of motility.
Table 3. Genes Associated with Spermatogenesis and Epigenetic Regulation with Aberrant DNA Methylation. Note: # = number of significantly altered loci in low motility samples; MS = methylation status of the loci; − = loci hypomethylated in low motility samples; + = loci hypermethylated in low motility samples; −/+ = more than two loci were altered, some hypomethylated and some hypermethylated. Genes marked with * have been previously reported as differentially methylated in sperm. doi:10.1371/journal.pone.0020280.t003
SIRT3 is a class III histone deacetylase and this HDAC family is similar to the yeast Sir2 protein which has been associated with chromatin silencing and also plays roles in cellular metabolism and aging [46]. In mammals, however, SIRT3 is targeted to the mitochondria and functions to induce the expression of the antioxidant MnSOD to eliminate reactive oxygen species (ROS) generated during oxidative phosphorylation [52]. Recent studies have found that increased ROS in sperm have deleterious effects on sperm motility parameters which ultimately have adverse effects on fertility [53]. Therefore, the decrease in SIRT3 mRNA in the low motility sperm may reflect reduced MnSOD and increased intracellular ROS during spermatogenesis, leading to a diminished fertility potential.
The literature also suggests that oxidative stress itself can impede the process of DNA methylation, resulting in a hypomethylated phenotype [54]. Interestingly, we observed global hypomethylation in the low motility sperm even though we saw increased DNMT3A transcript presence in the low motility sperm. Because DNMT3A is the DNA methyltransferase responsible for de novo methylation, our data suggests a failure of the low motility sperm to acquire the proper methylation patterns.
Although we were limited by sample size, we used a powerful integrative approach to simultaneously examine sperm DNA methylation and mRNA content utilizing two high density array techniques. We found that: (1) low motility sperm have genome-wide DNA hypomethylation that may be due to a failure of the sperm to complete chromatin compaction properly because of increased HDAC1 presence; (2) low motility sperm have reduced SIRT3 mRNA content which might be related to increased subcellular ROS during spermatogenesis leading to the abnormal motility phenotype; and (3) this oxidative stress may be impeding the ability of DNMT3A to set the correct methylation marks which would also contribute to the hypomethylated phenotype. Our results suggest that additional integrative studies including larger sample sizes as well as prospective studies of fertility following these integrated molecular assessments have great potential to advance our understanding of the molecular features of sperm associated with fertility status. | 4,156.6 | 2011-06-02T00:00:00.000 | [
"Biology",
"Medicine"
] |
Genome-wide analysis of HSP70 gene superfamily in Pyropia yezoensis (Bangiales, Rhodophyta): identification, characterization and expression profiles in response to dehydration stress
Background Heat shock proteins (HSPs) perform a fundamental role in protecting plants against abiotic stresses. Individual family members have been analyzed in previous studies, but there has not yet been a comprehensive analysis of the HSP70 gene family in Pyropia yezoensis. Results We investigated 15 putative HSP70 genes in Py. yezoensis. These genes were classified into two sub-families, denoted as DnaK and Hsp110. In each sub-family, there was relative conservation of the gene structure and motif. Synteny-based analysis indicated that seven and three PyyHSP70 genes were orthologous to HSP70 genes in Pyropia haitanensis and Porphyra umbilicalis, respectively. Most PyyHSP70s showed up-regulated expression under different degrees of dehydration stress. PyyHSP70-1 and PyyHSP70-3 were expressed at higher levels than the other PyyHSP70s in dehydration treatments, and their expression then decreased somewhat in the rehydration treatment. Subcellular localization showed PyyHSP70-1-GFP and PyyHSP70-3-GFP were in the cytoplasm and nucleus/cytoplasm, respectively. Similar expression patterns of paired orthologs in Py. yezoensis and Py. haitanensis suggest important roles for HSP70s in intertidal environmental adaptation during evolution. Conclusions These findings provide insight into the evolution and modification of the PyyHSP70 gene family and will help to determine the functions of the HSP70 genes in Py. yezoensis growth and development. Supplementary Information The online version contains supplementary material available at 10.1186/s12870-021-03213-0.
Background
Heat shock proteins (HSPs) are found in almost all organisms, from bacteria to humans [1]. In plants, members of the family of HSPs act in cell protection through the folding and translocation of nascent proteins and the refolding of denatured proteins under both stress and non-stress conditions [2,3]. HSPs can be divided into five families based on molecular weight: the HSP100 family; the HSP90 family; HSP70, the 70-kDa heat shock proteins; HSP60, the chaperonin family; and sHSP, the small heat shock proteins.
Of these, HSP70 is widely conserved and has been shown to play roles in development and defense mechanisms under various stresses.
The HSP70 gene family contains three highly conserved domains: a C-terminal domain about 10 kDa in size that can bind substrate, an intermediate domain 15 kDa in size, and an N-terminal domain (NBD) 44 kDa in size that binds ATP [4]. Plant HSP70 genes have been localized to four locations: the cell nucleus/cytoplasm, endoplasmic reticulum (ER), plastids, and mitochondria, with different functions in different locations [5,6]. Deficiency of some cytosolic HSP70s led to severe growth retardation, and heat treatment of plants deficient in HSP70 genes dramatically increases mortality, indicating that cytosolic HSP70s play an essential role during normal growth and in the heat response by promoting the proper folding of cytosolic proteins [7,8].
Ectopic expression of a cytosolic CaHSP70-2 gene resulted in altered expression of stress-related genes and increased thermotolerance in transgenic Arabidopsis [9]. Cytosolic HSP70A in Chlamydomonas regulates the stability of cytoplasmic microtubules [10,11]. Transgenic tobacco plants that over-expressed nuclear-localized NtHSP70-1 exhibited decreased fragmentation and degradation of nuclear DNA during heat-/drought-stress [6,12]. Knockout experiments indicate that the import of stromal HSP70s into the chloroplast stroma is essential for plant development and important for the thermotolerance of germinating seeds [13]. Transgenic tobacco plants constitutively expressing elevated levels of BIP (an ER-localized HSP70 homologue) exhibited tolerance to water deficit by preventing endogenous oxidative stress [14]. In rice, the BIP1/OsBIP3 gene, encoding HSP70 in the ER, regulates the stability of XA21 protein to interfere with XA21-mediated immunity [15]. Mitochondrial HSP70 can suppress programmed cell death in rice protoplasts by maintaining mitochondrial membrane potential and inhibiting the amplification of reactive oxygen species (ROS) [16]. However, the biological functions of most HSP70s in nori have not yet been elucidated, partly due to a lack of information about coding genes or other genomic information. Pyropia yezoensis (Bangiales, Rhodophyta) is an economically important seaweed that is cultivated in intertidal zones along the coastlines of China [17]. The production and quality of the cultivated Py. yezoensis thalli are significantly influenced by intertidal environmental stress. Tidal exposure imposes considerable environmental stress on intertidal seaweeds due to altered irradiance levels [18], temperature changes [19], and direct effects from desiccation [20,21].
In this study, all of the non-redundant members of the HSP70 gene family in Py. yezoensis were screened from the available, high-quality, chromosome-level genome. We characterized the PyyHSP70 genes based on their physicochemical properties, genomic locations, conserved motifs, and promoters, and analyzed the phylogenetic relationships of these genes. In addition, the expression levels of the PyyHSP70 genes were analyzed under dehydration and rehydration conditions. Finally, highly expressed PyyHSP70 proteins were localized in Arabidopsis protoplasts. Our findings will be useful resources for future studies of the functions of HSP70 genes in algae, which will help us understand the evolution of HSP70 genes in different species.
Genome-wide identification of PyyHSP70 genes in Py. yezoensis
After verification, the sequence information was obtained from the Py. yezoensis genome for 15 putative PyyHSP70s. The basic information of PyyHSP70 genes (including genomic position, gene length, intron number, amino acid number, isoelectric point (pI), molecular weight, CDS, subcellular localization, and instability index) is listed in Table 1. The predicted PyyHSP70 protein sequences ranged from 276 amino acids to 934 amino acids, and the molecular weights ranged from 29.59 to 96.07 kDa. Analysis with the Expasy online tool revealed instability index values of PyyHSP70s that ranged from 21.76 to 48.25, with a single PyyHSP70 member (PyyHSP70-8) having an instability index greater than 40, indicating an unstable protein. Of the 15 PyyHSP70 proteins, 11 members are predicted to localize to the nucleus/cytoplasm, one to the ER, one to the mitochondria, and two to the chloroplasts. The genes either had no introns or one intron, with eight and seven members respectively (Fig. 1A). The 15 members of PyyHSP70 were distributed on all three chromosomes, with an uneven distribution in the genome. Chromosome 1 had the highest density of PyyHSP70 genes, with nine members (Fig. 1B).
Conserved motifs and phylogenetic analysis of PyyHSP70s
To better understand the structural characteristics of PyyHSP70 proteins, a multiple sequence alignment was performed of the HSP70 domains of all 15 PyyHSP70 proteins and the EcDNAK protein, as shown in Figure S1. The two functional domains (ATPase domain and peptide-binding domain) were present in all PyyHSP70s. The ATPase domains of PyyHSP70-6, PyyHSP70-7, and PyyHSP70-15 were shorter, and lacked three signature motifs that are characteristic of the ATPase domain of HSP70 family members (Table S1). Additionally, the peptide-binding domain of PyyHSP70-2 was shorter, and much shorter C-terminal sub-domains were present in PyyHSP70-2 and PyyHSP70-7 (Table S1).
Twelve consensus motifs were found in PyyHSP70 proteins using the MEME motif search tool (Fig. 2, Table S2). Motifs 1, 2, 5, 6, 7, 9, 10, 11, and 12 were identified in the ATPase domain, and motifs 3, 4, and 8 were identified in the peptide-binding domain. Only motifs 3, 4, 6, and 7 were detected in all PyyHSP70 members of the DnaK subfamily, and only motif 2 was detected in all PyyHSP70 members of the Hsp110 subfamily. An unrooted phylogenetic tree was constructed to visualize the evolutionary relationships between HSP70 members, using 76 HSP70 protein sequences from nine species (Table S3). As shown in Fig. 2, these HSP70s were classified into two subfamilies (the DnaK subfamily and the Hsp110 subfamily). The DnaK subfamily was further divided into four groups based on localization (cytoplasm, ER, mitochondria, and plastid). The HSP70 proteins from different species were more closely related to those in the same subfamily than to others in the same species. For example, cytosolic PyyHSP70-4 was more closely to PyhHSP70-5 than PyyHSP70-11.
For the PyyHSP70 family members, orthologs from Py. yezoensis and Py. haitanensis (seven pairs) or Porphyra umbilicalis (three pairs) were identified, indicating there may have been common ancestral genes of the HSP70 family before differentiation of the three species (Fig. 3). In addition, a subclade of six genes (PyyHSP70-2, PyyHSP70-5, PyyHSP70-6, PyyHSP70-10, PyyHSP70-14 and PyyHSP70-15) in the cytoplasm group implied the proximity of these sequences and potential paralogous relationships by duplication events after the divergence of the two Pyropia species. The Ka/Ks ratios of the paralog pairs in the subclade were calculated and ranged from 0.6327 to 1.3487 (Table 2). The Ka/Ks ratios for five pairs were less than but close to one, indicating slightly negative selection; the Ka/Ks ratios of the other two pairs were greater than one, suggesting positive selection. All seven orthologous pairs of Pyropia exhibited Ka/Ks ratios far less than 1 (Table S4).
Cis-regulatory element analysis of the PyyHSP70 gene family
The regulatory roles of the identified PyyHSP70 genes were further studied by analysis of the 2000 bp region upstream of these genes. We searched the promoter sequences using the PlantCARE tool for seven regulatory elements previously found to be involved in various stresses: ABRE, CGTCA-motif, TGACG-motif, TCA-element, MYB-binding sites (MBS), LTR, and DRE. Besides, we also searched for three types of heat shock elements (HSEs) in these promoter sequences: perfect type (nTTCnnGAAnnTTCn), gap type (nTTCnnGAAnnnnnnnTTCn) and step (S) type (nTTCnnnnnnnTTCnnnnnnnTTCn) [22]. Only one S-type HSE was detected in PyyHSP70-3. The detection of these abiotic response elements suggests that the PyyHSP70 genes may be extensively involved in stress responses, thereby increasing the range of mechanisms that organisms could employ to escape or better cope with adverse environmental effects.
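A simple way to scan promoter sequences for these HSE motifs is a regular-expression search, as in the sketch below; it is illustrative only (single strand, with "n" treated as any nucleotide), and the function name is hypothetical.

import re

HSE_PATTERNS = {
    "perfect": re.compile(r"[ACGT]TTC[ACGT]{2}GAA[ACGT]{2}TTC[ACGT]"),
    "gap":     re.compile(r"[ACGT]TTC[ACGT]{2}GAA[ACGT]{7}TTC[ACGT]"),
    "step":    re.compile(r"[ACGT]TTC[ACGT]{7}TTC[ACGT]{7}TTC[ACGT]"),
}

def find_hse(promoter_seq):
    # Return (HSE type, start position) for every match in a promoter sequence.
    seq = promoter_seq.upper()
    hits = []
    for hse_type, pattern in HSE_PATTERNS.items():
        for match in pattern.finditer(seq):
            hits.append((hse_type, match.start()))
    return hits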
Expression patterns of PyyHSP70 genes under dehydration treatments
To further clarify the potential ability of the PyyHSP70 genes to respond to dehydration stress, RNA-Seq data were analyzed. Expression analysis of PyyHSP70 under dehydration stress revealed low (< 0.3) or no expression from seven genes in all treatments, but the other eight PyyHSP70 genes exhibited higher expression (Fig. 4). The expression of PyyHSP70-1 and PyyHSP70-3 gradually increased with increased dehydration stress, and the expression level slightly decreased with subsequent rehydration treatment. The expression levels of PyyHSP70-11 increased with increased water loss and continued to increase during rehydration. The expression levels of PyyHSP70-8, PyyHSP70-4, and PyyHSP70-13 first increased and then decreased as the degree of dehydration deepened. The expression levels of PyyHSP70-8 and PyyHSP70-13 increased during rehydration, and PyyHSP70-4 experienced an increase of expression for one dehydration condition (AWC20) and then decreased during rehydration. These RNA-seq expression patterns were verified by detecting the expression patterns of the PyyHSP70 genes by qRT-PCR (Fig. 5). The measured expression levels of most genes were highly consistent with the levels determined by RNA-seq, except for PyyHSP70-13.
The subcellular localization of PyyHSP70-1 and PyyHSP70-3 proteins
PyyHSP70-1 and PyyHSP70-3 showed the biggest expression changes in response to dehydration stress, so we next determined the subcellular localization of these two proteins. PyyHSP70-1 was localized to the cytoplasm, and PyyHSP70-3 localized to the nucleus/cytoplasm (Fig. 6), basically consistent with the predicted results ( Table 1, Fig. 6).
Discussion
Daily changes in tide height cause air exposure to seaweed, triggering rapidly-changing physical stresses such as dehydration, high temperature, and different irradiance levels [23]. Because they live in the challenging habitat of the intertidal zone, intertidal macroalgae have adapted a set of protective mechanisms to survive [24]. Some intertidal seaweeds are highly tolerant to desiccation. Species of the genera Pyropia and Porphyra (Bangiales, Rhodophyta) inhabit the upper intertidal zone and can lose up to 95% of cellular water content during maximum low tide [18]. HSP70 is a superfamily of molecular chaperones widely distributed in eukaryotic cells. These proteins play important roles under abiotic stress by participating in many protein folding processes. However, the HSP70 superfamily of Py. yezoensis was not previously characterized. In this work, we comprehensively analyzed the characteristics, expression patterns under dehydration stress, and the subcellular localization of PyyHSP70s.
Evolution analysis of HSP70 genes
In this study, we identified 15 HSP70 domain-containing genes in the Py. yezoensis genome that constitute the HSP70 superfamily, including 11 DnaK subfamily genes and 4 Hsp110 subfamily genes. We also analyzed the genomes of five other red algae and identified 36 HSP70 genes. We found no direct relationship between genome size and the number of HSP70 genes in red algae. For example, we identified eight HSP70 genes in Py. haitanensis (genome size: 53 Mb), eight genes in Galdieria sulphuraria (genome size: 14 Mb), and eight genes in Chondrus crispus (genome size: 105 Mb). This diversity in the number of red algal HSP70 genes indicated that the HSP70 gene family has followed different evolutionary strategies in different species. The PyyHSP70 proteins were divided into two sub-families, similar to those reported by previous analyses of HSP70s in A. thaliana and yeast [5,25]. The DnaK subfamily was further divided into four groups based on localization. The number of HSP70 genes from the six red algae was basically the same in each group of the DnaK subfamily, except the Cytoplasm group, which contained more members of the PyyHSP70 gene family due to paralogous duplication events. Paralogous duplication events were not evident in the other five red algae, further implying that PyyHSP70s expanded in a species-specific manner during evolution. We found no expression for these paralogous PyyHSP70 genes with intact gene structures in dehydration treatments, and no expression of these genes was detected in response to other abiotic/biotic stresses of Py. yezoensis [20,21,26]. HSP genes in other species were previously identified that also did not appear to be expressed under tested conditions, but the reason has not yet been determined [27,28]. Interestingly, two pairs of PyyHSP70 paralogs showed positive selection, suggesting new functions that should be verified by further experiments.
Fig. 4 Heatmap of the expression patterns of PyyHSP70 genes under dehydration and rehydration treatments: absolute water content 100% (AWC100, control), absolute water content 70% (AWC70), absolute water content 50% (AWC50), absolute water content 20% (AWC20), rehydrated 30 min after 20% of water loss (AWC20_30min). The color bar (right) represents log2 expression levels (FPKM). The tree (left) represents the clustering of the PyyHSP70 expression patterns.
HSP70 genes play essential roles in response to dehydration stress
Previous studies have found abundant HSEs in the promoter regions of HSP70 genes that become active in response to heat shock and other temperature treatments in higher plants [2,29]. However, we found few HSEs and LTRs in the promoter regions of PyyHSP70 genes. Py. yezoensis lives on intertidal rocks, where it experiences repeated cycles of dehydration and rehydration. Cis-regulatory element analysis showed that most PyyHSP70 gene promoter sequences contained cis-elements associated with dehydration stress, for example, the ABRE motif conserved in drought response genes [30,31], MYB binding sites (MBS) involved in drought-inducibility, and CRT/DRE elements associated with dehydration and salt stresses [32,33]. These results suggest that PyyHSP70s might be significantly related to the dehydration response. Intermittent desiccation stress caused by tidal changes is a significant abiotic factor affecting intertidal seaweed species.
This stress can affect the physiology of organisms, mainly through oxidative stress causing destabilization of proteins, leading to loss of membrane integrity [34][35][36]. Desiccation results in increased expression of tolerance genes, such as genes encoding HSPs and related transcriptional factors [37,38]. These mechanisms may also function in intertidal seaweed to tolerate desiccation. Several HSP70s have also been found to help protect against desiccation damage by assisting protein-folding processes involved in stress and affecting the proteolytic degradation of unstable proteins [39]. HSP70s have also received attention in marine organisms as a kind of biomarker of stress, because their expression is highly variable in the presence or absence of stimuli [40][41][42]. Zhou et al. (2011) suggested that analysis of HSP70 genes could be utilized to evaluate algal tolerance to stresses and monitor coastal environmental changes [43]. Tang et al. (2016) found that moss plants overexpressing PpcpHSP70-2, which is highly induced by dehydration treatment, showed dehydration tolerance [44]. We found that more than half of PyyHSP70 genes exhibited increased transcription levels with increasing degree of dehydration. We also found significantly increased expression of some PyyHSP70 genes, especially PyyHSP70-1 and PyyHSP70-3, upon reaching a water content of 20%, with down-regulated expression after rehydration. This finding was consistent with that of a previous study that showed that HSP70s played important roles only in the response to extreme desiccation stress [41].
Like Py. yezoensis, Py. haitanensis also lives in the intertidal zone and experiences repeated dehydration and rehydration (though at a different temperature). These two species are evolutionarily very close, both belonging to Pyropia. Paired orthologs between Py. yezoensis and Py. haitanensis showed strong purifying selection and similar trends in expression (Table S4, Figure S2), suggesting that these HSP70 orthologs play important roles in dehydration treatments of laver. Therefore, it is important to study the HSP70 genes involved in the dehydration-induced response of Py. yezoensis to further explain the stress resistance and environmental adaptation of intertidal algae.
Conclusions
The Py. yezoensis genome contains 15 members of the HSP70 gene family, and these genes are unevenly distributed on three chromosomes. The gene structures and phylogenetic analysis suggest a complex evolutionary history of this gene family in Py. yezoensis. The analysis reveals that the PyyHSP70 family has experienced gene duplication events after species divergence relative to other red algae. Most HSP70s showed up-regulated expression under different degrees of dehydration stress, especially PyyHSP70-1 and PyyHSP70-3, which showed much higher expression levels in dehydration treatments and slightly decreased expression after rehydration treatment. Similar expression trends of orthologs of Py. yezoensis and Py. haitanensis in dehydration treatments demonstrate the important roles of these proteins in intertidal environmental adaptation during evolution. PyyHSP70-1-GFP and PyyHSP70-3-GFP were localized in the cytoplasm and the nucleus/cytoplasm, respectively. This overview of the gene family should facilitate further studies of the HSP70 gene family, particularly with regard to its evolutionary history and biological functions.
Genome-wide identification of HSP70 proteins in Py. yezoensis
The Py. yezoensis genome and protein sequences were deposited to DDBJ/ENA/GenBank as accession WMLA00000000 [17]. To identify candidate Py. yezoensis HSP70 protein sequences, the Hidden Markov model (HMM) profile of the HSP70 domain was downloaded from the Pfam (http://www.sanger.ac.uk/Software/Pfam/) database (Pfam:PF00012) and then submitted as a query in a HMMER (e-value < 1e−5) search (https://www.ebi.ac.uk/Tools/hmmer/) of the Py. yezoensis protein database. The obtained protein sequences were screened and verified for the presence of the HSP70 domain using the SMART (http://smart.embl-heidelberg.de/) tool [45], CDD (http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) and InterProScan (http://www.ebi.ac.uk/interpro/result/InterProScan/). The same process was used to obtain the HSP70 family genes of the other four red algae from their genome databases [46][47][48][49]. For the PyyHSP70 genes, we determined the chromosomal locations, genomic sequences, full coding sequences, protein sequences, and the sequence of the 2000 nucleotides upstream of the translation initiation codon. The molecular weight (Da) and isoelectric point (pI) were calculated for each gene using the Compute pI/Mw tool from ExPASy (http://www.expasy.org/tools/) [50]. The subcellular localization of proteins was determined by analysis with the WoLF PSORT, Predotar, PSORT, SherLoc2, CELLO, and Softberry databases, and decided based on consensus localization from two or more algorithms. Schematic images of the chromosomal locations of the PyyHSP70 genes were generated using MapGene2Chrom software (http://mg2c.iask.in/mg2c_v2.1/), according to the chromosomal position information in the NCBI database.
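For the sequence-derived properties, a Biopython-based sketch is given below; it is not the authors' pipeline (they used the ExPASy web tool and several localization servers), and the localization votes are assumed to be collected separately and passed in as a dictionary.

from collections import Counter
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def characterize_hsp70(protein_seq, localization_votes):
    analysis = ProteinAnalysis(protein_seq)
    mw = analysis.molecular_weight()             # molecular weight in Da
    pi = analysis.isoelectric_point()            # theoretical pI
    instability = analysis.instability_index()   # > 40 suggests an unstable protein
    # Consensus localization: accept a compartment predicted by two or more tools.
    compartment, votes = Counter(localization_votes.values()).most_common(1)[0]
    consensus = compartment if votes >= 2 else "undetermined"
    return {"MW_Da": mw, "pI": pi, "instability_index": instability,
            "localization": consensus}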
Gene structure analysis and identification of conserved motifs
To investigate the diversity and structure of members of the PyyHSP70 gene family, we compared the exon/intron organization of the cDNA sequences and the corresponding genomic DNA sequences of HSP70 using EVOLVIEW (https://evolgenius.info//evolview-v2/). In addition, the amino acid sequences were subjected to domain and motif prediction online with MEME (http://meme-suite.org) [51]. The parameters were as follows: number of repetitions, any; maximum number of motifs, 12; and optimum motif widths, 2 to 300 amino acid residues.
Multiple alignment and phylogenetic analysis
We constructed two phylogenetic trees, one with only PyyHSP70 protein sequences and the other including 76 HSP70 protein sequences from different species. The gene and protein sequences of Arabidopsis thaliana, Escherichia coli and yeast were acquired from previous studies [2,25,52,53] and accession GCA_008690995.1 (NCBI). Multiple sequence alignment of full-length predicted HSP70 protein sequences was performed with MUSCLE in Molecular Evolutionary Genetics Analysis (MEGA) 7.0 software using default parameters [54]. Sequence alignments were performed with ClustalX software [55]. Phylogenetic trees were constructed using MEGA 7.0 with the Neighbor-Joining (NJ) method, and a bootstrap analysis was conducted using 1000 replicates with pairwise gap deletion mode.
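The trees were built with MEGA 7.0 and ClustalX; purely as an illustration, a neighbor-joining tree can also be computed from an existing alignment with Biopython, as sketched below (the file name and the simple 'identity' distance model are placeholder assumptions, and no bootstrap step is included).

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

def nj_tree_from_alignment(alignment_file="hsp70_alignment.fasta"):
    alignment = AlignIO.read(alignment_file, "fasta")
    # Pairwise distances from the alignment, then neighbor-joining reconstruction.
    distances = DistanceCalculator("identity").get_distance(alignment)
    tree = DistanceTreeConstructor().nj(distances)
    Phylo.draw_ascii(tree)   # quick text rendering of the topology
    return tree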
Gene duplication and Ka/Ks analysis
The microsynteny between Py. yezoensis, Py. haitanensis, and Po. umbilicalis was analyzed by MCScanX with the default parameters [56]. The criteria used to analyze potential gene duplications included: (1) the length of the sequence alignment covered ≥ 70% of the longer gene; and (2) the similarity of the aligned gene regions ≥ 70% [57]. Non-synonymous (Ka) substitution and synonymous (Ks) substitution were calculated for each duplicated PyyHsp70 gene using KaKs_Calculator [58].
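The two duplication criteria and the usual reading of the Ka/Ks ratio can be captured in a few lines; the sketch below is a generic helper with hypothetical inputs (aligned length, gene lengths, identity, Ka, Ks), not the MCScanX or KaKs_Calculator code itself.

def is_duplicated_pair(aligned_len, len_gene1, len_gene2, identity):
    # Criterion 1: alignment covers >= 70% of the longer gene.
    # Criterion 2: aligned regions share >= 70% similarity.
    coverage = aligned_len / max(len_gene1, len_gene2)
    return coverage >= 0.70 and identity >= 0.70

def selection_mode(ka, ks):
    if ks == 0:
        return "undefined (Ks = 0)"
    ratio = ka / ks
    if ratio > 1:
        return "positive selection"
    if ratio < 1:
        return "purifying (negative) selection"
    return "neutral evolution"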
RNA-seq atlas analysis
To investigate the expression patterns of PyyHSP70 genes in response to dehydration/rehydration treatments, the related RNA-sequencing (seq) data of Py. yezoensis were downloaded from NCBI under accession number PRJNA401507 [21]. The RNA-seq data of Py.haitanensis in dehydration/rehydration treatments were used to obtain expression patterns of PyhHSP70 genes [20]. Expression heatmaps were constructed using R software and based on the FPKM values of gene expression in different treatments.
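The published heatmaps were produced in R; the sketch below is a rough Python equivalent, assuming a hypothetical pandas DataFrame `fpkm` of FPKM values (genes x treatments). Adding a pseudo-count of 1 before the log2 transform is an assumption, not necessarily the authors' exact scaling.

import numpy as np
import seaborn as sns

def expression_heatmap(fpkm, out_png="pyyhsp70_heatmap.png"):
    log_expr = np.log2(fpkm + 1)     # log2-transformed FPKM with a pseudo-count
    # Cluster genes (rows) only, keeping the treatment order fixed.
    grid = sns.clustermap(log_expr, col_cluster=False, cmap="viridis")
    grid.savefig(out_png)
    return grid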
RNA isolation and qRT-PCR analysis
By weighing the fresh weight and the dry weight of thalli, the absolute water content (AWC) of the thallus was calculated according to the methods described by Kim et al. (2009) [59]. Thalli produced under normal growth conditions were harvested as the control group (AWC100). Before dehydration, the surface water of the thalli was removed with paper towels, and then the selected thalli were naturally dehydrated under 50 μmol photons m−2·s−1 at 8 ± 1 °C. The thalli samples were collected when the total water content had decreased by 30% (AWC70), 50% (AWC50), and 80% (AWC20). After losing 80% water content, the samples were recovered in normal seawater for 30 min (AWC20_REH) [20,21]. Three biological replicates were performed for each treatment. Samples were harvested and placed in liquid nitrogen before processing for gene expression analysis. Total RNA was extracted using the RNeasy Plant Mini Kit (OMEGA) according to the manufacturer's instructions. Next, 1 μg total RNA was used to synthesize the first-strand cDNA using a HiScript® III RT SuperMix for qPCR (+ gDNA wiper) Kit (Vazyme Biotech). The qRT-PCR analysis was performed as described previously [60]. The expression levels of the ubiquitin-conjugating enzyme (UBC) and cystathionine gamma-synthase 1 (CGS1) genes were used as references, and the 2^(−ΔΔCt) method was used to calculate relative gene expression values. The sequences of the primers used are listed in Supplementary Table S5.
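The relative-expression calculation can be written out explicitly as below; using the average of the two reference-gene Ct values (equivalent to a geometric mean of their expression) is a common convention and is an assumption here, not necessarily the authors' exact normalization scheme.

import numpy as np

def relative_expression(ct_target, ct_ubc, ct_cgs1,
                        ct_target_ctrl, ct_ubc_ctrl, ct_cgs1_ctrl):
    # Delta Ct against the mean of the UBC and CGS1 reference Ct values.
    delta_ct = ct_target - np.mean([ct_ubc, ct_cgs1])                       # treated sample
    delta_ct_ctrl = ct_target_ctrl - np.mean([ct_ubc_ctrl, ct_cgs1_ctrl])   # control (AWC100)
    ddct = delta_ct - delta_ct_ctrl
    return 2.0 ** (-ddct)   # 2^(-delta-delta-Ct) fold change relative to the control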
Subcellular localization analysis of PyyHsp70s
To validate the prediction of subcellular localization, transient expression analyses were performed using a protoplast system based on the pBWA(V)HS-(PyyHSP70-1/PyyHSP70-3)-GLosgfp vector. For two representative PyyHSP70 genes, the full-length CDS without the stop codon was cloned into the pBWA(V)HS vector. Each CDS was fused in-frame to the N-terminus of the green fluorescent protein (GFP) coding sequence under the control of the CaMV 35S promoter. The primers used for PCR amplification of the full-length HSP70 CDS are listed in Table S6. The vector expressing only the GFP gene was used as a control. The protoplasts used for transient expression analysis were extracted from Arabidopsis leaves and transformed by the polyethylene glycol (PEG) method [62]. Briefly, the Arabidopsis leaves were placed into enzyme solution (1.5% (w/v) cellulase R10, 0.75% (w/v) macerozyme R10, 0.6 M mannitol, 10 mM MES, pH 5.8) at 24 °C for 4 h with gentle shaking in the dark. After filtering through nylon mesh and washing two times with W5 solution (154 mM sodium chloride, 125 mM calcium chloride (CaCl2), 5 mM glucose, 2 mM KH2PO4, 2 mM MES, pH 5.7), protoplasts were resuspended in MMG solution (0.4 M mannitol, 15 mM magnesium chloride, 4 mM MES, pH 5.7) at a cell concentration of 2 × 10^5 mL−1. Then, 10 μg of each plasmid sample was mixed with 100 μL protoplasts, followed by addition of 120 μL of freshly prepared PEG solution (40% (w/v) PEG4000, 0.6 M mannitol, and 100 mM CaCl2). The mixture was incubated at room temperature for 30 min in the dark, and then diluted gently with 1 mL W5 solution. After centrifugation at 300 rpm for 3 min, protoplasts were resuspended in 1 mL of W5 solution, incubated at 25 °C for 16 h, and then observed using a Nikon Eclipse 80i fluorescence microscope. Respective excitation and emission wavelengths were 488 nm and 510 nm for the GFP signal, and 640 nm and 675 nm for the Chl signal.
Figure S2 caption (fragment): expression levels (FPKM); the tree (left) represents the clustering result of PyhHSP70 expression patterns. | 5,741.4 | 2021-09-24T00:00:00.000 | [
"Biology"
] |
Non-local order parameters and quantum entanglement for fermionic topological field theories
We study quantized non-local order parameters, constructed by using partial time-reversal and partial reflection, for fermionic topological phases of matter in one spatial dimension protected by an orientation reversing symmetry, using topological quantum field theories (TQFTs). By formulating the order parameters in the Hilbert space of state sum TQFT, we establish the connection between the quantized non-local order parameters and the underlying field theory, clarifying the nature of the order parameters as topological invariants. We also formulate several entanglement measures including the entanglement negativity on state sum spin TQFT, and describe the exact correspondence of the entanglement measures to path integrals on a closed surface equipped with a specific spin structure.
Introduction
A salient feature of topological phases of matter is the lack of local order parameters characterizing them. For example, topologically-ordered phases in (2+1)d cannot be characterized by their symmetry-breaking pattern, but by the anyonic excitations that they support [1]. Quantum Hall systems are characterized by their quantized Hall conductance, which detects the global topological properties of the ground states. Specifically, the Niu-Thouless-Wu formula [2] relates the quantized Hall conductance to the first Chern number defined on a parameter space of boundary conditions, through the Berry connection of the (many-body) ground state wave functions in the presence of twisted boundary conditions.
For topological phases of matter beyond the quantum Hall example, such as symmetry-protected topological (SPT) phases with internal or spacetime symmetries, topological characterizations using non-local operations have been proposed [3][4][5][6][7][8]. For example, in the prototypical case of (1+1)d topological superconductors with time-reversal symmetry (symmetry class BDI), the operation called "partial time-reversal" (or partial transpose) can be used to construct a quantized quantity which can detect the Z_8 classification [9,10] of (1+1)d BDI topological superconductors [11]. Similarly, one can use "partial reflection" to construct a quantized quantity that can detect the Z_8 classification of (1+1)d reflection-symmetric topological superconductors (symmetry class D+R−) [11]. The precise definitions of these quantities will be presented in the later sections. Henceforth, we loosely call these quantized quantities non-local order parameters. These are quantities which are constructed from the ground states of topological phases by acting with a non-local operation, and which detect topological classifications. Experimental protocols to measure these quantized non-local order parameters have been proposed [12].
Footnotes: (1) Partial time-reversal and partial transpose may differ by a local unitary transformation; in this paper, we exclusively use partial time-reversal. (2) String-type order parameters, commonly discussed in the context of the Haldane phase and related systems, are also often called non-local order parameters. While string-type order parameters can detect topologically distinct phases of matter, they are not quantized; in this paper, we exclusively discuss quantized non-local quantities, which are distinct from string-type order parameters.
The response of quantum Hall systems at low energies and long distances is expected to be described by the Chern-Simons topological quantum field theory (TQFT). The Niu-Thouless-Wu formula extracts, from the ground state wave functions, the quantized coefficient of the Chern-Simons theory, which is the quantized Hall conductance. Similarly, (1+1)d time-reversal invariant topological superconductors are expected to be described by an invertible topological quantum field theory at long distances [13][14][15], whose partition function on a spacetime gives a bordism invariant. In this case, the underlying topological field theory needs to be equipped with a spacetime structure called a pin− structure. Such invertible pin− TQFTs are classified up to deformation by the Pontryagin dual of the pin− bordism group Ω_2^{pin−}(pt) = Z_8 [16]. It was argued [17] that the proposed quantized non-local order parameter is associated with the partition function of the corresponding invertible pin− TQFT, evaluated on a manifold which generates the bordism group Ω_2^{pin−}(pt) (e.g., RP^2), thereby providing the Z_8-valued topological invariant detecting the classification.
One of the purposes of this paper is to elucidate the connection between the quantization of non-local order parameters and the underlying field theory, clarifying the nature of the non-local order parameters as topological invariants. To do this, it is indispensable to formulate the non-local operations in the Hilbert space of TQFT. This can be achieved by a local lattice definition of pin− TQFT recently proposed in [18,19]. The lattice formulation makes it possible to construct the "fixed point" wave functions of fermionic symmetry-protected topological phases on a 1d spatial lattice; they are the representatives of the ground state wave functions with the shortest possible correlation length, and have structures akin to Matrix Product States (MPSs). (See also [20][21][22][23] for relevant references.)
Using the invertible pin− TQFT generating the Z_8 classification, we explicitly show that the quantized non-local order parameter for (1+1)d time-reversal symmetric topological superconductors (class BDI) is identical to the partition function of the pin− TQFT computed on RP^2. Similarly, for (1+1)d reflection-symmetric topological superconductors (class D+R−), we also prove the exact correspondence between the order parameter and the partition function of the field theory, based on a lattice definition of pin− TQFT. Partial time-reversal (partial transpose) can also be used to construct an entanglement measure for mixed quantum states: the (fermionic) entanglement negativity. The entanglement negativity has been studied recently in the context of many-body physics and quantum field theory; see, for example, [24][25][26][27][28][29][30][31][32]. Experimental protocols for the entanglement negativity have also been proposed [33]. The formalism which we will develop in this paper allows us to study the entanglement negativity in 2d fermionic TQFT, i.e., in the fixed point wave functions of fermionic symmetry-protected topological phases. While not expected to be a topological invariant, the entanglement negativity in the fixed point wave functions is known to take specific values. For example, for the ground state of the (1+1)d Kitaev chain in its topologically non-trivial phase, the entanglement negativity for adjacent intervals is given by log √2, which is related to the quantum dimension of the boundary Majorana modes. We will reproduce this result by using our TQFT/MPS formalism.
Another quantity of our interest is the moments of the partially-transposed reduced density matrix, which give the spectrum of partially-transposed reduced density matrix ("the negativity spectrum") [34][35][36]. Just like the entanglement spectrum provides the universal information of quantum ground states (for gapped quantum states in particular), we expect that we could extract universal (topological) data from the negativity spectrum. For comparison, it is good to recall that for the case of unitary and on-site symmetry, ground states of symmetry-protected phases in one spatial dimension are characterized by symmetry-protected degeneracy of their entanglement spectra [5,[37][38][39]. Here, we will show that we can develop a similar diagnostics by using the negativity spectrum for fermionic symmetry-protected topological phases protected by time-reversal.
Summary of results
The first column of table 1 lists the quantities studied in this paper. We will prove that these quantities computed for the fixed point wave function of (1+1)d topological superconductors (constructed from lattice TQFT) exactly give the partition functions of the TQFT on spacetime manifolds listed in the second column. The third column lists the explicit values of these quantities.
In the first and second rows of table 1, we find that the partial time-reversal and partial reflection on the fixed-point wave function give the partition function of the pin− TQFT on RP^2. Here, although the state is initially prepared on a boundary of an oriented surface (e.g., a disk), we will see that the process of partial time-reversal or reflection introduces a combinatorial pin− structure in the whole triangulated spacetime, which becomes unoriented, providing a pin− bordism invariant. Especially, when computed within the
[Table 1 columns: order parameter; spacetime manifold; value for the Kitaev chain; section. For example, partial time-reversal corresponds to RP^2.]
In the third row of table 1, we find that the exponential of the Rényi entanglement negativity e^{E_n} of the fixed point wave function is equal to the TQFT partition function on a closed oriented manifold with genus (n/2 − 1). The notation (n/2 − 1) × (NS, NS) means that the spacetime manifold has an induced spin structure given by a connected sum of (n/2 − 1) copies of the (NS, NS) torus. When evaluated for the correctly normalized fixed point wave function, the TQFT partition function contains the Euler term, which makes the Rényi negativity proportional to the Euler characteristic of the spacetime manifold up to a constant.
In the fourth row of table 1, we present the moment Z_n of a ground state density matrix acted on by partial time-reversal. We find an interesting periodicity of the spacetime structure with respect to the degree of the moment. The spin structure of the spacetime manifold is fixed for each n, and it has a pattern of mod 4 periodicity. Entries marked as "-" in table 1 mean that the induced structure on the spacetime manifold is not spin. We will see that these cases have a vortex of fermion parity introduced in the spacetime, which makes a spin structure ill-defined. When evaluated for the fixed point wave function, the phase of the TQFT partition function corresponds to the Arf invariant on a spacetime manifold equipped with a spin structure, which has a pattern of mod 8 periodicity.
Although most of the analysis will be done for a specific pin− invertible TQFT in the main text, which corresponds to the Kitaev chain, the partial time-reversal and reflection can also be formulated on the Hilbert space of a generic spin/pin TQFT prepared by a Z_2-graded Frobenius algebra. The result presented in this paper can be safely generalized to generic spin/pin TQFT on a lattice.
The rest of the paper is organized as follows. In section 2, we review the lattice construction of fermionic TQFT on unoriented spacetime manifolds. In section 3, we formulate partial time-reversal for the Hilbert space of pin − TQFT. In section 4, we discuss the formulation of the entanglement negativity. In section 5, we discuss the formulation of the moments of the density matrices with partial time-reversal and the periodicity presented in table 1. Finally, in section 6, we illustrate the partial reflection.
Review of fermionic TQFT
In this section, we recall the lattice construction of the spin and pin± TQFT on a 2d manifold M, following [18,19]. We provide a recipe to construct a state sum definition of spin/pin TQFT, by formulating the spin/pin theory called the Gu-Wen Grassmann integral on M, equipped with a Z_2 global symmetry, whose partition function has the form [18,40,41]

z[M, η, α] = σ(M, α) (−1)^{∫_M η ∪ α},   (2.1)

where α ∈ Z^1(M, Z_2) is a background Z_2 gauge field of the Z_2 symmetry, and η specifies a spin or pin± structure on M, which is related to the obstruction of the structure as δη = w_2 (resp. δη = w_2 + w_1^2) in the spin or pin+ (resp. pin−) case. Here, w_1 and w_2 are the first and second Stiefel-Whitney classes, respectively. σ(M, α) is written in terms of a certain path integral of Grassmann variables defined by giving a triangulation of M. (In the following, when there is no confusion, we simply write z[η, α], σ(α), instead of z[M, η, α], σ(M, α), etc.) By studying the effect of re-triangulations and gauge transformations, this theory is shown to be anomaly free for a spin or pin− surface, which we focus on in the main text of the present paper. Then, one can construct a spin or pin− theory fully invariant under the change of triangulation and gauge transformations, by coupling the Grassmann integral with an anomaly free bosonic theory Z_M[α] called a "shadow theory" [42][43][44], and then gauging the Z_2 symmetry, which yields (2.2).
The rest of this section is organized as follows. In sections 2.1 and 2.2, we review the construction of the Grassmann integral on spin and pin surfaces, respectively. In section 2.3, we provide the lattice construction of a pin− invertible TQFT, which describes (1+1)d topological superconductors in class BDI at long distances. Then, we describe the construction of spin/pin TQFT on a surface with a non-empty boundary in section 2.4, and apply it to construct the "fixed-point" ground state wave function of (1+1)d topological superconductors in section 2.5. This is the ground state wave function of the Kitaev chain deep inside its topological superconductor phase, with the smallest correlation length.
Spin TQFT on the lattice
We endow an oriented surface M with a triangulation. In addition, we take the barycentric subdivision for the triangulation of M . Namely, each 2-simplex in the initial triangulation of M is subdivided into 6 simplices, whose vertices are barycenters of the subsets of vertices in the 2-simplex. We further assign a local ordering to vertices of the barycentric subdivision, such that a vertex on the barycenter of i vertices is labeled as i.
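As a concrete illustration of this bookkeeping, the following minimal sketch (not code from the paper) enumerates the barycentric subdivision of a single 2-simplex and the local labels just described: the seven barycenters are indexed by non-empty subsets of the vertex set, and each of the six sub-triangles is a flag of a vertex, an edge, and the whole simplex, with labels 1, 2, 3.

```python
from itertools import combinations

# Barycentric subdivision of one 2-simplex t = (0, 1, 2).
# Vertices of the subdivision = barycenters of non-empty subsets of {0, 1, 2};
# the local label of such a vertex is the number of original vertices it averages.
t = (0, 1, 2)
subsets = [frozenset(c) for k in (1, 2, 3) for c in combinations(t, k)]

# Each of the 6 sub-triangles is a chain (vertex) subset-of (edge) subset-of (whole simplex).
sub_triangles = [
    (frozenset({v}), frozenset(e), frozenset(t))
    for e in combinations(t, 2)
    for v in e
]

assert len(subsets) == 7 and len(sub_triangles) == 6
for tri in sub_triangles:
    labels = tuple(len(s) for s in tri)   # local ordering labels (1, 2, 3)
    print([sorted(s) for s in tri], "labels:", labels)
```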
Each simplex can then be either a + simplex or a − simplex, depending on whether the ordering agrees with the local orientation or not. We assign a pair of Grassmann variables θ_e, θ̄_e to each 1-simplex e of M when α(e) = 1; we associate θ_e with one side of e contained in one of the 2-simplices neighboring e (which will be specified later), and θ̄_e with the other side. Then, σ(M, α) is defined as a Grassmann integral of a product over 2-simplices, where t denotes a 2-simplex, and u(t) is the product of Grassmann variables contained in t. Namely, u(t) on t = (012) is the product of the variables ϑ_e on the edges of t. Here, ϑ denotes θ or θ̄ depending on the choice of the assigning rule, which will be discussed later. The order of Grassmann variables in u(t) will also be defined shortly. We note that u(t) is ensured to be Grassmann-even when α is closed.
Due to the fermionic sign of Grassmann variables, σ(α) becomes a quadratic function, whose quadratic property depends on the order of Grassmann variables in u(t). We will adopt the order used in Gaiotto-Kapustin [18], which fixes the ordering of the ϑ_e in u(012) for a + triangle and for a − triangle. We choose the assignment of θ and θ̄ on each e in the following fashion: the Grassmann variables on e are assigned such that, if t is a + (resp. −) simplex, u(t) includes θ_e when e is given by omitting a vertex with odd (resp. even) number from t = (012), see figure 1.
Based on the above definition of u(t), the quadratic property of σ(α) turns out to be (2.4), while the behavior under re-triangulation is (2.5), where M̄ is the same manifold M with a different triangulation, ᾱ is a cocycle such that [α] = [ᾱ] in cohomology, K = M × [0, 1] such that the two boundaries are given by M and M̄, and finally α is extended to K so that it restricts to α and ᾱ on the boundaries. The derivation of (2.4), (2.5) was given in [18]. Then, the spin theory z[M, η, α] is defined as in (2.1), where η specifies a spin structure on M and satisfies δη = w_2.
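For orientation, the quadratic property invoked above (and used again in section 2.3) takes the following standard Gu-Wen form; the displayed equation did not survive extraction, so this is quoted from the general construction of [18] as an assumption rather than copied from (2.4), and the precise sign and ordering conventions of the paper may differ:

```latex
\sigma(M,\alpha)\,\sigma(M,\alpha') \;=\; \sigma(M,\alpha+\alpha')\,(-1)^{\int_M \alpha\,\cup\,\alpha'},
\qquad \alpha,\alpha' \in Z^1(M,\mathbb{Z}_2).
```

In other words, σ is a quadratic refinement of the cup-product pairing, which is what allows the gauged theory to produce a bordism invariant.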
Pin TQFT on the lattice
We construct an unoriented manifold by picking locally oriented patches, and then gluing them along codimension one loci by transition functions. The locus where the transition functions are orientation reversing constitutes a representative of the dual of the first Stiefel-Whitney class w_1. We will sometimes call this locus an orientation reversing wall. We can choose a consistent orientation everywhere if we remove the locus of the orientation reversing wall. We remark that the assigning rule of the Grassmann variables described in the previous subsection fails when e lies on the wall where we glue patches of M by the orientation reversing map. In this case, we would have to assign Grassmann variables of the same color on both sides of e (i.e., both are black (θ) or both white (θ̄)), since the two triangles sharing e have the same sign when e is on the orientation reversing wall, see figure 2(a). Hence, we need to slightly modify the construction of the Grassmann integral on the orientation reversing wall. To do this, instead of specifying a canonical rule to assign Grassmann variables on the wall, we just place a pair θ_e, θ̄_e on the wall in an arbitrary fashion.
Along with this modification, the Grassmann integral on M is revised by an extra factor ∏_{e⊂wall} (±i)^{α(e)}, which assigns weight (+i)^{α(e)} (resp. (−i)^{α(e)}) to each 1-simplex e on the orientation reversing wall, when e is shared by + (resp. −) 2-simplices. There is no ambiguity in this definition, since both 2-simplices on the sides of e have the same sign when e is on the wall. The quadratic property of the Grassmann integral (2.4) still holds for the pin± case, while the effect of re-triangulations and gauge transformations is modified as shown in [19]. Then, the pin± theory z[M, η, α] is defined in the same way, where η specifies a pin± structure on M, which satisfies δη = w_2 (resp. δη = w_2 + w_1^2) in the pin+ (resp. pin−) case.
Here, it should be emphasized that the expressions (2.8), (2.9) are based on a specific choice of the representative of the Poincaré dual of w_2, w_1^2 in M. Firstly, the representative of the dual of w_2 on M is given by the set of all vertices of the barycentric subdivision [45][46][47]. Secondly, to specify the dual of w_1^2 on M, we first observe that the choice of the assignment of Grassmann variables on the wall corresponds to choosing the slight deformation of the wall, such that the deformation intersects transversally with the wall at vertices. Concretely, we deform the wall on each edge of the wall to the side where θ (black dot) is contained, see figure 2(b). Here, both walls before and after deformation give a representative of the dual of w_1, and then the intersection of the two walls gives our representative of the dual of w_1^2. η in (2.9) is a trivialization of the representative of the obstruction class, prepared in the above fashion.
Arf-Brown-Kervaire invariant in (1+1)d
In this subsection, we construct the 2d pin− invertible TQFT [48] for the Arf-Brown-Kervaire (ABK) invariant via the Grassmann integral on the lattice, whose state sum definition was initially given in [49]. In condensed matter literature, this invertible theory describes (1+1)d topological superconductors in class BDI. Here, we construct the Z_8-valued ABK invariant by coupling the 2d state sum shadow TQFT with the Grassmann integral. For the Z_2-valued Arf invariant of the spin case, this was done in [18].
The weight for the state sum is assigned in the same manner as in the case of the Arf invariant of the spin case [18], described as follows. For a given configuration α ∈ C^1(M, Z_2), we assign weight 1/2 to each 1-simplex e, and also assign weight 2 to each 2-simplex f when δα = 0 at f, otherwise 0. Let us denote the product of the whole weight as Z[α]. Then, we can see that the partition function is given by the ABK invariant up to the Euler term, eq. (2.10). The ABK invariant determines the pin− bordism class of 2d manifolds Ω_2^{pin−}(pt) = Z_8, which is generated by RP² [16]. To see this, let α be a nontrivial 1-cocycle that generates H^1(RP², Z_2) = Z_2. Then, using the quadratic property (2.12) with both of its arguments equal to α, one can see that Q_η[α] takes value in ±1, since Q_η[0] = 0 and ∫_M α ∪ α = 1. Q_η[α] = ±1 corresponds to the two possible pin− structures on RP². Then, the ABK invariant is given by an 8th root of unity. If M is oriented, the ABK invariant reduces to the Arf invariant Arf[M, η], which determines the spin bordism class Ω_2^{spin}(pt) = Z_2 [51].
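As a concrete illustration of the last step, the sketch below evaluates the standard Gauss-sum form of the ABK invariant, exp(2πi·ABK/8) = |H^1(M, Z_2)|^{−1/2} Σ_x i^{Q_η[x]}, on M = RP². The Gauss-sum formula itself is an assumption here (the display (2.10) did not survive extraction), but the inputs Q_η[0] = 0 and Q_η[α] = ±1 are exactly those quoted above.

```python
import numpy as np

# H^1(RP^2; Z_2) = {0, a}; Q_eta[0] = 0 and Q_eta[a] = +1 or -1 for the two
# pin^- structures, as stated in the text.  The normalized Gauss sum then
# gives an 8th root of unity, i.e. the ABK invariant mod 8.
for q_a in (+1, -1):
    gauss = (1j ** 0 + 1j ** q_a) / np.sqrt(2)       # |H^1|^{-1/2} * sum_x i^{Q(x)}
    abk = np.angle(gauss) * 8 / (2 * np.pi)
    print(f"Q[a] = {q_a:+d}:  Gauss sum = {np.round(gauss, 3)},  ABK = {abk:+.0f} (mod 8)")
# -> ABK = +1 and -1 (mod 8): RP^2 with its two pin^- structures generates Z_8.
```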
Wave function on boundaries
Now let us consider a spin or pin TQFT on M constructed in the manner described in section 2.1 and 2.2, when M has a non-empty boundary.
To construct the wave function of the vacuum state, let us describe the state of spin/pin TQFT on the lattice. We recall that the state-sum model of 2d oriented spin TQFT is built from a Z_2-graded (not necessarily commutative) semi-simple Frobenius algebra A [18,52]. A similar construction for the spin TQFT wave functions is also found in [53]. If one seeks to consider the unoriented pin case, we further have to assume that A is commutative, to ensure invariance of the theory under re-triangulation [54].
Let the basis of A be labeled by i ∈ I, and denote by α(i) the Z_2 grading of the basis element i. We sometimes call the Z_2 grading the fermion parity. We write C^i_{jk} for the structure constants of A in this basis. Then, let g_{ij} := C^l_{ik} C^k_{jl}. Because g_{ij} is non-degenerate, it has an inverse denoted g^{ij}. We define C_{ijk} := g_{il} C^l_{jk}, which turns out to be cyclically symmetric. If we further assume that A is commutative, C_{ijk} is symmetric under any permutation. Since the algebra A is Z_2-graded, C_{ijk} and g_{ij} always respect the Z_2 grading. Namely, C_{ijk} vanishes unless α(i) + α(j) + α(k) ≡ 0 (mod 2), and g_{ij} vanishes unless α(i) + α(j) ≡ 0 (mod 2).
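A minimal numerical check of these definitions for the simplest algebra used later, A = Cl(1) with basis {1, ψ} and ψ² = 1 (shown here with the bare Clifford normalization; the text later rescales to g_{jk} = (1/2)δ_{jk}, C_{ijk} = 2δ_{i+j,k}, which differs by a basis rescaling):

```python
import numpy as np

# A = Cl(1): basis {1, psi}, psi^2 = 1, Z_2 grading alpha = (0, 1).
C_up = np.zeros((2, 2, 2))        # C_up[i, j, k] = C^i_{jk}, from e_j e_k = sum_i C^i_{jk} e_i
C_up[0, 0, 0] = C_up[1, 0, 1] = C_up[1, 1, 0] = C_up[0, 1, 1] = 1.0
alpha = np.array([0, 1])

g = np.einsum('lik,kjl->ij', C_up, C_up)      # g_{ij} := C^l_{ik} C^k_{jl}
g_inv = np.linalg.inv(g)                      # g^{ij}
C_low = np.einsum('il,ljk->ijk', g, C_up)     # C_{ijk} := g_{il} C^l_{jk}

assert np.allclose(g, 2 * np.eye(2))                          # non-degenerate
assert np.allclose(C_low, np.transpose(C_low, (0, 2, 1)))     # fully symmetric (A commutative)
assert np.allclose(C_low, np.transpose(C_low, (1, 0, 2)))
# grading constraints: C_{ijk} = 0 unless alpha(i)+alpha(j)+alpha(k) is even
for i in range(2):
    for j in range(2):
        for k in range(2):
            if (alpha[i] + alpha[j] + alpha[k]) % 2 == 1:
                assert C_low[i, j, k] == 0
print("g =", g.tolist(), " nonzero C_ijk entries:", np.argwhere(C_low != 0).tolist())
```

The same bookkeeping applies verbatim to larger Z_2-graded Frobenius algebras.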
Using these data, we can construct the bosonic shadow theory Z_M[α] coupled with the background Z_2 gauge field α. We assign a pair of basis elements i, j of A to each 1-simplex of M. Here, the background field α ∈ Z^1(M, Z_2) is regarded as the Z_2 grading of the elements of A assigned to 1-simplices; namely, the pair of elements share the same Z_2 grading specified by α. Then, we assign weight g_{ij} to each 1-simplex and C_{ijk} to each 2-simplex. To obtain the partition function Z_M[α], we just have to perform the contraction of indices for all factors g_{ij}, C_{ijk} on M with a fixed Z_2 grading α. Then, the spin/pin TQFT is constructed in the form of (2.2), by coupling with the spin/pin theory prepared by the Grassmann integral.
If one makes up a boundary on M, the Hilbert space for the TQFT is constructed on A^{⊗n}, where n is the number of boundary 1-simplices. Here, ⊗ denotes the supertensor product of Z_2-graded algebras. Compared with the conventional tensor product, the supertensor product is modified in a way which respects the fermionic sign of the elements carrying fermion parity. Namely, given the algebra A_1 ⊗ A_2, the multiplication of elements in A_1 ⊗ A_2 is defined with an extra fermionic sign as in (2.14) [53]. For a given basis of A^{⊗n} on ∂M, the wave function is evaluated as the path integral on M. Denoting the elements of A^{⊗n} in the form |i_1 . . . i_n⟩, the wave function for the prepared Hilbert space is given by evaluating the path integral of the TQFT (2.1) on M, which is expressed as (2.15). Especially, let us consider the simplest case where A = Cl(1) (the real Clifford algebra generated by one Z_2-odd element), where we are setting g_{jk} = (1/2)δ_{jk}, C_{ijk} = 2δ_{i+j,k}. For simplicity, let M be an oriented spin surface. Then, we can build the Hilbert space on ∂M as the Fock space of n complex fermions. Namely, we prepare a complex fermion on each boundary 1-simplex, and consider the Fock space of the fermions. Then, the wave function for the prepared Hilbert space is given by evaluating the path integral of the TQFT (2.1), expression (2.16).
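For completeness, the fermionic sign referred to in (2.14) is, in its standard (Koszul) form, quoted here as an assumption since the display itself did not survive extraction: for homogeneous elements a_2, b_1,

```latex
(a_1 \otimes a_2)\,(b_1 \otimes b_2) \;=\; (-1)^{\alpha(a_2)\,\alpha(b_1)}\; (a_1 b_1) \otimes (a_2 b_2),
```

where α(·) ∈ {0, 1} denotes the fermion parity of a homogeneous element.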
In the expression (2.16), η satisfies δη = w_2 as an element of Z^2(M, ∂M; Z_2), where the representative of w_2 is specified as the dual of the set of all 0-simplices in the barycentric subdivision, as illustrated in section 2.2. Thus, we can rewrite the factor (−1)^{∫_M η∪α} as in (2.17), where E_M is the dual of η, and ∂E_M becomes the set of all 0-simplices in the barycentric subdivision, when restricted to the interior of M. The algebra A = Cl(1) also works as data for the unoriented pin− TQFT presented in section 2.3. When M is a pin− surface, we include the Z_4 factor ∏_{e⊂wall}(±i)^{α(e)} on the l.h.s. of equation (2.17), and let η be a trivialization of w_2 + w_1^2 as an element of Z^2(M, ∂M; Z_2).
Ground state wave function of the Kitaev chain
Here, we provide the fixed-point ground state wave function of topological superconductors in class BDI, based on the 2d pin− TQFT described in section 2.3. In this case, the shadow theory on a closed spin or pin− manifold X is given by an expression in which |F| and |E| denote the number of faces and edges of X, respectively. We set the shadow theory Z_M on M with a non-empty boundary by requiring that the wave function |in⟩ is correctly normalized, ⟨out|in⟩ = 1 (2.20), where ⟨out| is conjugate to |in⟩. From this condition, the shadow theory is obtained; gluing M with its orientation reversal M̄ gives the shadow theory on the closed surface Y, and the norm of the wave function is given by (2.23). Here, ABK[Y, η] is the Arf-Brown-Kervaire invariant, quantized as an 8th root of unity.
Since the above expression is positive (ensured by reflection positivity of unitary TQFT [14,15]), ABK[Y, η] is 1, which shows that the wave function is correctly normalized.
Partial time-reversal
In this section, we will formulate the quantized non-local order parameter for SPT phases proposed in [11], for the states prepared by spin or pin TQFT. First, we recall the construction of the order parameter for (1+1)d SPT phases in class BDI, following [11]. Let us consider a ground state of the SPT phase on a ring with length n, constructed in the Fock space of complex fermions c_1, . . . , c_n with anti-periodic boundary condition. We take the reduced density matrix ρ_I of the ground state, defined on an interval I in the ring. Then, we take a bipartition of I as I = I_1 ∪ I_2. Roughly speaking, the order parameter is defined via the process of taking the "transpose" of the density matrix, restricted to the interval I_1. With a proper definition of the transpose in the partial region I_1 in I, the order parameter is given by tr_I(ρ_I ρ_I^{T_1}) (3.1), where ρ_I^{T_1} denotes the density matrix acted on by "partial time-reversal". The definition of partial time-reversal is transparently expressed in the coherent state basis. Namely, we introduce n Grassmann variables ξ_1, . . . , ξ_n and denote a state like |{ξ_i}⟩ = ∏_j exp(−ξ_j c_j^†) |0⟩. The density matrix is rewritten in the coherent state basis as (3.2), where d[ξ̄, ξ] = ∏_j dξ̄_j dξ_j e^{−Σ_j ξ̄_j ξ_j}, and ρ_I({ξ_j}; {χ_j}) = ⟨{ξ_j}| ρ_I |{χ_j}⟩. Then, the operation ρ_I^{T_1} is defined as in (3.3). This operation on I_1 is called partial time-reversal in [11], since it acts on the Grassmann variables in I_1 in the same fashion as time-reversal for the symmetry class BDI. In the following, we formulate partial time-reversal and compute the quantity (3.1) on a wave function constructed from a (1+1)d pin− invertible TQFT discussed in section 2.4. We find that (3.1) is identical to the partition function of the pin− TQFT Z[X, η] evaluated on a closed unoriented pin− surface X, which generates the pin− bordism group Ω_2^{pin−}(pt) = Z_8. Especially, when M is taken to be a disk, X = RP² and (3.1) reduces to the RP² partition function (3.4),
where η specifies a pin− structure. The partial time-reversal can also be defined on the Hilbert space of spin/pin TQFT prepared by a Z_2-graded Frobenius algebra, as described in appendix A. Though we will mostly work on the Kitaev chain wave function in the main text, the correspondence between the quantized non-local order parameter and the TQFT wave function (3.4) safely extends to a pin− TQFT prepared by a generic commutative Z_2-graded Frobenius algebra.
Evaluation of partial time-reversal
Now we perform the explicit computation of (3.1). We start with constructing the reduced density matrix. We first prepare the state on ∂M = S^1 and its conjugate, in the form of (2.16), where we let the number of boundary 1-simplices be 2n, and label the 1-simplices in ∂M as e_{−n}, . . . , e_{−1}, e_1, . . . , e_n for later convenience. M̄ is given by reversing the orientation of M, and we denote the 1-simplices in ∂M̄ as ē_{−n}, . . . , ē_{−1}, ē_1, . . . , ē_n. Starting from the density matrix ρ = |in⟩⟨out|, we take the reduced density matrix ρ_I for the interval I = ∪_{1≤|j|≤l} e_j, see figure 3. For simplicity, we set l, n to be even. Then, ρ_I is expressed in terms of Σ_α Z_N[α] σ(M, α|_M; ord(−n, · · · , n)) σ(M̄, α|_{M̄}; ord(n, · · · , −n)), factors (−1)^{α(e_{−j})}, and σ(N, α; ord(−l, · · · , l, l, · · · , −l)). (Precisely speaking, we need to redefine the weights on the boundary to make (3.7) valid: since A = Cl(1) and g_{ij} is diagonal, g_{ij} = diag(g_i), we can do this by assigning an additional weight √g_i to a boundary 1-simplex colored by i ∈ A. In the main text, we will work with such a redefinition.)
Here E_N is defined as in (3.10). One can check that ∂E_N correctly gives the dual of w_2 on N, when restricted to the interior of N. Thus, E_N actually works as a dual of η on N.
To compute the partial time-reversal of ρ_I, we express ρ_I using the coherent state basis, where ←dξ (resp. →dξ) denotes the Grassmann integral satisfying ∫ ξ ←dξ = 1 (resp. ∫ →dξ ξ = 1). Now we take the partial time-reversal (3.3) acting on the regions I_1 = ∪_{1≤j≤l} e_j and Ī_1 = ∪_{1≤j≤l} ē_j,
where E_X is the dual of η trivializing w_2 + w_1^2 on X, whose choice of representative is described in section 2. By comparing (3.15) with (3.16), one can see that these expressions are completely the same, by checking (3.17). Thus, we have shown that (3.1) is identical to the partition function of the pin− TQFT, eq. (3.18). For instance, we can evaluate tr_I(ρ_I ρ_I^{T_1}) for the ground state wave function of the Kitaev chain described in section 2.5. Using the form of the ABK invariant (2.10), the expression becomes

tr_I(ρ_I ρ_I^{T_1}) = 2^{−χ(Y)+χ(X)/2} ABK[X, η],   (3.19)

where Y is a closed surface given by gluing M and M̄ along the boundaries. In particular, if we choose M as a disk, Y = S² and X = RP², which gives (3.20). This reproduces the results obtained in [11].
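To make the disk case explicit: plugging χ(S²) = 2 and χ(RP²) = 1 into (3.19), and using the standard value ABK[RP², η] = e^{±iπ/4} for the two pin− structures (an input assumed here, consistent with the Gauss-sum sketch in section 2.3, rather than a value copied from (3.20)),

```latex
\operatorname{tr}_I\!\big(\rho_I\,\rho_I^{T_1}\big)
  \;=\; 2^{-\chi(S^2)+\chi(\mathbb{RP}^2)/2}\,\mathrm{ABK}[\mathbb{RP}^2,\eta]
  \;=\; 2^{-3/2}\, e^{\pm i\pi/4}
  \;=\; \frac{e^{\pm i\pi/4}}{2\sqrt{2}} .
```

The magnitude 1/(2√2) is the same universal amplitude that reappears in the spectrum of partial time-reversal discussed in section 5.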
Entanglement negativity
In this section, we evaluate the Rényi entanglement negativity of even degree n, which is defined as in (4.1), where ρ_I^{T_1}(ρ_I^{T_1})^† is multiplied n/2 times in the expression. Let us comment on the notation used throughout this section. • Following the notation in section 3.1, ρ_I^{T_1} and (ρ_I^{T_1})^† are regarded as path integrals on the surfaces N and N*, respectively, where N* is given by reversing the orientation of N. For simplicity, N is taken to be a disk. We represent the boundary intervals as I_1, Ī_1, I_2, Ī_2 ⊂ ∂N (following figure 3(b)), and I_1^*, Ī_1^*, I_2^*, Ī_2^* ⊂ ∂N*.
• We introduce the following notation for the path integral on an open surface, with E_X defined such that the sum runs over boundary 1-simplices of M contained in I, surrounding a 2-simplex of M whose sign is +.
We aim to show that the quantity e^{E_n} is identified with a path integral of the TQFT on a certain closed surface. To do this, we start by examining what the surface looks like. We can obtain the resulting surface by gluing copies of N and N* step by step: (i) multiplying ρ_I^{T_1} and (ρ_I^{T_1})^†, (ii) multiplying the ρ_I^{T_1}(ρ_I^{T_1})^† factors and taking the trace. Let us begin with the first step. Since ρ_I^{T_1} (3.13) has an outgoing state on the intervals I_1 and I_2 of ∂N, taking ρ_I^{T_1}(ρ_I^{T_1})^† amounts to gluing N and N* along I_1, I_2 and I_1^*, I_2^*, making up a cylinder. See figure 5(a). Similarly, one finds that gluing two ρ_I^{T_1}(ρ_I^{T_1})^† factors gives a torus with two punctures, by gluing two copies of the cylinder along I_1^*, I_2^* and I_1, I_2, see figure 5(b). Then, computing E_n amounts to successive gluing of n/2 cylinders, which gives an oriented closed surface Σ_{g(n)} with genus g(n) = n/2 − 1. Actually, we will show that the Rényi negativity is directly associated with the partition function of the spin TQFT, as will be demonstrated in the following subsection. Though we will mainly work on the Kitaev chain wave function, the above correspondence safely extends to a generic spin TQFT prepared by a Z_2-graded Frobenius algebra (see appendix A).
Here X_0 and X_0′ denote two copies of the cylinder, and X_1 is a two-punctured torus given by gluing two cylinders. By successive gluing of cylinders, if we let X_g denote a genus g surface with two punctures, one finds (4.10). Finally, we obtain a closed surface with genus g(n) = n/2 − 1 by taking the trace of equation (4.10). Now let us examine what the induced structure η_{Σ_{g(n)}} is like. First, it is not hard to see that η_{Σ_{g(n)}} correctly gives a trivialization of w_2, δη = w_2, thereby defining a spin structure on Σ_{g(n)}. We can further show that the induced spin structure η is equivalent to a connected sum of g(n) copies of the (NS, NS) torus. To see this, let us first determine the spin structure measured around a cylinder X_0 given by gluing N and N* (figure 5(a)). In this case, if we denote by C_χ a cycle around the cylinder, we have (4.12), where χ is the 1-cocycle dual to C_χ. Since the configuration of C_χ can be chosen in a symmetric fashion such that z[N, η_N, χ] and z[N*, η_{N*}, χ] are identical, we see that the l.h.s. of (4.12) is 1. Thus, we have z[X_0, η_{X_0}, χ] = 1, which means that the induced spin structure is NS around C_χ. Using the same logic for the other cycles of Σ_{g(n)}, we see that the spin structure is NS for all fundamental cycles of Σ_{g(n)}, hence we have a connected sum of g(n) copies of (NS, NS) tori.
Recalling that the bosonic shadow theory and the summation over α are omitted in equation (4.11), we obtain (4.13). This is what we wanted to achieve: we have shown that the moments of partial time-reversal are identical to the partition functions on a surface of genus g(n) with NS spin structure for all fundamental cycles. If we employ the ground state wave function of the Kitaev chain described in section 2.5, we obtain (4.14).
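As a quick bookkeeping check of the surface produced by this gluing (the overall constant appearing in (4.14) is not reproduced here),

```latex
g(n) = \frac{n}{2} - 1, \qquad
\chi\big(\Sigma_{g(n)}\big) = 2 - 2\,g(n) = 4 - n ,
```

so the Euler term in the correctly-normalized theory makes the logarithm of the Rényi negativity a linear function of n, in line with the statement in the summary that the Rényi negativity is proportional to the Euler characteristic of the spacetime manifold up to a constant.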
Moments of partial time-reversal
In this section, we compute the moments Z_n of partial time-reversal for any power n,
where the density matrix entering Z_n is the partially transposed density matrix twisted by the fermion number parity (−1)^{F_1} of the interval I_1 [36]. Here, we note that Z_n coincides with the moment of partial time-reversal e^{E_n} computed in section 4 when n is even. As in the case of E_n, these moments can also be represented as a partition function on a surface Σ_g with genus g, where g = n/2 − 1 for an even n, and g = (n − 1)/2 for an odd n. Interestingly, we find that Z_n shows the following Z_8 effect (5.4), while the corresponding moment without the parity twist shows only an even/odd effect, for the wave function of the Kitaev chain described in section 2.5.
Here, let us outline the key steps of the computation, focusing on Z_n. The evaluation of Z_n runs largely parallel to the case of the entanglement negativity e^{E_n} in section 4. First, let us drop the imaginary factor ∏_{e∈I_1∪Ī_1}(−i)^{α(e)} in ρ_I^{T_1} (3.13) supported on I_1 ∪ Ī_1. Then, analogously to section 4, Z_n is regarded as the path integral on some surface X given by gluing n copies of N. During the process of the successive gluing of surfaces, let us denote the intermediate surface obtained by gluing k copies of N, which corresponds to (ρ_I^{T_1})^k, as N^{(k)}. Then, every time we glue the (k + 1)th copy of N with N^{(k)} along the interval in ∂N^{(k)}, to evaluate (ρ_I^{T_1})^{k+1}, we obtain a relation between the partition functions before and after gluing by using (4.3). By successively applying this relation and taking the trace in the last step, we finally obtain E_X, the dual of η_X induced on the resulting surface X which corresponds to Z_n. Hence, if we ignore the imaginary factor ∏_{e∈I_1∪Ī_1}(−i)^{α(e)} in ρ_I^{T_1} (3.13), Z_n would be associated with the partition function on X as in (5.7). The above expression does not necessarily give the partition function of a spin TQFT, since E_X constructed above may or may not provide the correct trivialization of w_2. Next, we incorporate the effect of the imaginary factor ∏_{e∈I_1∪Ī_1}(−i)^{α(e)} in ρ_I^{T_1} (3.13) supported on I_1 ∪ Ī_1. The imaginary factor introduces a shift of the spin structure. This factor acts as the "half of fermion parity" on the fermions living in I_1 ∪ Ī_1. Hence, if we glue two copies of N along I_1 and Ī_1, the doubled imaginary factor gives the fermion parity twist to the resulting surface, inserted in the interval where we glued them, see figure 6(a). Accordingly, every time we glue surfaces along I_1 ⊂ ∂N^{(k)} and Ī_1 ⊂ ∂N, we introduce the twist by fermion parity in the resulting surface. The fermion parity twist leads to the shift of η_X, which is expressed as η_X → η_X + χ, where χ is the dual of the 1-cycle C_χ in which we insert the twist. Incorporating this effect into the expression (5.7), we find that Z_n is given by (5.8). The above expression becomes the partition function of a spin TQFT if E_X + C_χ gives the trivialization of w_2 correctly. Summarizing, the computation of the moment Z_n proceeds as follows.
1. Firstly, we ignore the imaginary factor on I_1 ∪ Ī_1 in (3.13), and write Z_n without the imaginary factor as the path integral in the form of (5.7).
2. Then, we introduce the effect of the imaginary factor, which shifts E_X by the fermion parity twist line C_χ. Z_n is eventually expressed as (5.8).
In the following subsections, we explicitly compute Z_n following the above procedure, dividing into the cases of even and odd n. Though we will mainly work on the Kitaev chain wave function, the following analysis safely extends to a generic spin TQFT prepared by a Z_2-graded Frobenius algebra (see appendix A).
Even powers
For even n, the resulting surface X becomes an oriented closed surface Σ g with genus g = n/2 − 1, see figure 6(b) for the case of n = 4. In this case, one can check that E X in (5.7) correctly provides the trivialization of w 2 , hence (5.7) represents a spin TQFT. In the same way as the case of the entanglement negativity, the induced spin structure η X is given by the connected sum of g copies of (NS, NS) torus. Then, let us consider the effect of the imaginary factor, which makes Z n different from the case of the entanglement negativity e En . As we discussed above, this factor shifts the spin structure of the resulting surface, by inserting the fermion parity twist in the interval where we glued the copies of N , see figure 6(a). For instance, in the case of n = 4, the twist lines run along two fundamental cycles of the torus, shifting the spin structure from (NS, NS) to (R, R), see figure 6(b). Generally, one can see that Σ g is a connected sum of (g + 1)/2 copies of tori with (R, R) spin structure, and (g − 1)/2 copies of tori with (NS, R) spin structure when n = 0, 4 mod 8.
Figure 6. Every time we glue surfaces along I_1 and Ī_1, we introduce the twist by the fermion parity in the resulting surface along the gluing interval. (a): when we glue two copies of N to take the square of ρ_I^{T_1}, we introduce the fermion parity twist along the red line in the resulting cylinder. The induced spin structure is Ramond. (b): when we glue four copies of N to take the fourth power of ρ_I^{T_1}, we introduce the fermion parity twist along the red line in the resulting punctured torus. If we close the puncture by taking the trace, the induced spin structure on the torus is (R, R).
However, for the case of n = 2, 6 mod 8, the line of the fermion parity twist is no longer closed, which introduces a vortex of the fermion parity at the end of the twist line. This violates the gauge invariance under α → α + δλ of the theory (2.1). Thus, after summing over α ∈ Z^1(M, Z_2), the partition function becomes zero. Hence, we conclude that Z_n = 0 when n = 2, 6 mod 8, as shown in (5.4). Now let us evaluate Z_n for the ground state wave function of the Kitaev chain described in section 2.5, when n = 0, 4 mod 8. The phase of Z_n is given by the Arf invariant of Σ_g equipped with the spin structure discussed above. Recalling that the Arf invariant of the torus is +1 for (NS, R) and −1 for (R, R), the Arf invariant of Σ_g becomes −1 for n = 4 mod 8, and +1 for n = 0 mod 8. If we correctly normalize the wave function to match the amplitude, Z_n for n = 0, 4 mod 8 is given by (5.4).
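A small numerical restatement of this counting; the sketch assumes only the two facts quoted above, namely that the Arf invariant is ±1-valued and multiplicative under connected sum, with Arf = −1 on the (R, R) torus and +1 on the (NS, R) torus:

```python
def arf_even_moment_surface(n):
    """Arf invariant of Sigma_g for even n = 0, 4 (mod 8), g = n/2 - 1, decomposed
    (as in the text) into (g+1)/2 copies of (R,R) tori and (g-1)/2 copies of
    (NS,R) tori; Arf is multiplicative under connected sum."""
    assert n % 4 == 0
    g = n // 2 - 1
    return (-1) ** ((g + 1) // 2)     # each (R,R) torus contributes -1, each (NS,R) torus +1

for n in (4, 8, 12, 16, 20, 24):
    print(n, n % 8, arf_even_moment_surface(n))
# -> -1 whenever n = 4 (mod 8) and +1 whenever n = 0 (mod 8), matching the text.
```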
Odd powers
For odd n, the resulting surface X is Σ_g with genus g = (n − 1)/2. In this case, E_X in (5.7) does not give the trivialization of w_2. One can check that ∂E_X becomes the set of all 0-simplices of the barycentric subdivision in X, except for a pair of 0-simplices which are denoted as v_1 and v_2, where the equation δη = w_2 is violated.
Then, let us incorporate the effect of the fermion parity twist. For odd n, the network C_χ of fermion parity twists has two junctions where n twist lines gather at a point. The network fails to be closed at the junctions, and one can check that v_1, v_2 are exactly where the two junctions live. Thus, E_X + C_χ in (5.8) correctly gives the spin structure on X after all. The spin structure induced by E_X + C_χ on X = Σ_g is determined by the same logic described around (4.12), and is summarized as follows.
On one hand, if n = 4m + 1, the spin structure is given by the connected sum of g/2 copies of (R, R) tori and g/2 copies of (NS, R) tori. Especially, for n = 1, X is a sphere equipped with a spin structure. On the other hand, if n = 4m + 3, the spin structure is given by the connected sum of (g + 1)/2 copies of (R, R) tori and (g − 1)/2 copies of (NS, R) tori.
Let us evaluate Z_n for the ground state wave function of the Kitaev chain described in section 2.5, when n = 1, 3, 5, 7 mod 8. The phase of Z_n is again given by the Arf invariant of Σ_g equipped with the spin structure discussed above. One can see that the Arf invariant of Σ_g becomes −1 for n = ±3 mod 8, and +1 for n = ±1 mod 8. If we correctly normalize the wave function to match the amplitude, Z_n for n = 1, 3, 5, 7 mod 8 is given by (5.4).
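Before turning to the spectrum itself, a numerical cross-check of the mod-8 pattern: the sketch below assumes a candidate spectrum for ρ_I^{T_1} consisting of four nonzero eigenvalues e^{±iπ/4}/(2√2), two of each phase. These particular values are an assumption consistent with the results of [11, 36], not a statement taken from the extracted text; the point is only that such a spectrum reproduces the mod-8 pattern of moments derived above.

```python
import numpy as np

# Hypothetical spectrum: four nonzero eigenvalues, |lambda| = 1/(2*sqrt(2)),
# phases +pi/4 and -pi/4 with multiplicity two each (an assumption, see lead-in).
eigs = np.array([np.exp(1j * np.pi / 4)] * 2 + [np.exp(-1j * np.pi / 4)] * 2) / (2 * np.sqrt(2))

for n in range(1, 17):
    Zn = np.sum(eigs ** n)
    print(f"n = {n:2d} (n mod 8 = {n % 8}):  Z_n = {np.round(Zn, 6)}")
# Output: Z_n = 0 for n = 2, 6 (mod 8); negative real values for n = 3, 4, 5 (mod 8);
# positive real values for n = 0, 1, 7 (mod 8) -- the mod-8 pattern described above.
```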
Spectrum of partial time-reversal
Once we have determined the moments for any degree, it is a simple matter to obtain the spectrum of the partial time-reversal ρ_I^{T_1} for the Kitaev chain wave function. First, due to the mod 8 periodicity of Z_n, the phases of the eigenvalues of ρ_I^{T_1} are all quantized as eighth roots of unity. Since we have Z_{8n} = 4 × (2√2)^{−8n}, the spectrum consists of four nonzero values whose absolute values are 1/(2√2), and is otherwise zero. By further matching the spectrum with the obtained values of Z_n, we can read off the four nonzero eigenvalues of ρ_I^{T_1}.
Partial reflection
In this section, we discuss another quantized non-local order parameter for (1+1)d SPT phases protected by spatial reflection symmetry, proposed in [11]. The proposed quantity is associated with operating the reflection partially on the state. Concretely, let us consider the ground state |ψ⟩ of the SPT phase protected by the spatial reflection symmetry R, where we have R|ψ⟩ = |ψ⟩. Then, the "partial reflection" is defined as ⟨ψ| R_part |ψ⟩ (6.1), where R_part denotes an operator which reflects a segment in the lattice system. Like the partial time-reversal in section 3, the partial reflection diagnoses the Z_8 classification of (1+1)d topological superconductors protected by a reflection symmetry R satisfying R² = (−1)^F. In the following, we compute the partial reflection (6.1) on the state prepared by the pin− TQFT (3.5), (3.6), and show that (6.1) is identical to the partition function of the pin− TQFT, where X is a closed pin− surface which generates the pin− bordism group Ω_2^{pin−}(pt) = Z_8, and ∂E_X is the dual of w_2 + w_1^2 on X, specified in section 2.2. Here, we note that the boundary contributions of E_M, E_{M̄} cancel out once we postulate the reflection symmetry of the state, R|in⟩ = |in⟩, making ∂E_M reflection symmetric on ∂M. Then, we can check that the l.h.s. of (6.7) gives the correct trivialization of w_2 + w_1^2 on X. Hence, we have shown that ⟨out| R_part |in⟩ = Z[X, η]. This reproduces the results obtained in [11].
Conclusions
In conclusion, we explicitly computed entanglement measures and quantized non-local order parameters for fermionic SPT phases in the framework of spin/pin TQFT. We clarified the properties of order parameters defined via partial operations, as topological invariants diagnosing the Z 8 classification of (1+1)d fermionic SPT phases in class BDI and D+R − . Moreover, we demonstrated that these order parameters have universal amplitudes, indicating a topological origin such as the quantum dimension of the boundary Majorana modes. Furthermore, we revealed that the moments of partial time-reversal have the mod 8 periodicity, which leads to the eight-fold quantization of the negativity spectrum.
There are several avenues to pursue in future research. First, a natural extension of the present paper is to explore the formulation in higher dimensions. In this case, we should be able to prepare a wave function in the form of a tensor network state, from a path integral of spin/pin TQFT in generic dimensions. In higher dimensions, it has been suggested [17] that fermionic SPT phases with point group symmetries can be detected by partial point group operations. It is interesting to formulate the partial point group operation and its relationship to the path integral of lattice TQFT. It also remains for future work to examine the entanglement properties of spin/pin TQFT in higher dimensions.
Furthermore, it is worth investigating the entanglement measures studied in the present paper for conformal field theories (CFTs) coupled with a spin structure. Since we can obtain a fermionic CFT by coupling a bosonic CFT with a spin/pin TQFT, we believe that our formulation of entanglement measures is useful for studying the entanglement properties of fermionic CFTs. Especially, it was demonstrated in [36] that a critical Majorana chain with c = 1/2 shows a six-fold quantization of the negativity spectrum, which resembles the eight-fold quantization in spin TQFT discovered in the present paper. It is conceivable that the six-fold quantization of the eigenvalues of ρ_I^{T_1} is a universal feature of spin CFTs, not limited to a Majorana chain, since the moment of partial time-reversal is associated with a three-point function of twist fields of the CFT. The derivation of the six-fold quantization for the case of fermionic CFT is left for future work.
definition of the inner product and the trace. We postulate that ⟨i|j⟩ = tr(|i⟩⟨j|) = g_{ij}, (A.3), where g_{ij} is the weight on 1-simplices in the state sum of the shadow theory Z. When the above inner product is a sesquilinear positive definite form, and further A is a commutative Frobenius algebra equipped with the structure of a *-algebra, C^i_{jk} = (C^i_{kj})^*, then the theory is guaranteed to be unitary [21]. In the following, we assume that A satisfies these properties. Then, replicating the logic of section 3.1, the reduced density matrix for the interval I = ∪_{1≤|j|≤l} e_j is given in the form of (3.9), Z_N[α] σ(N, α; ord(−l, · · · , l, l, · · · , −l)) (−1)^{∫_{E_N} α} × |−l . . . l⟩⟨l . . . −l|. Based on the above definition of partial time-reversal, we can compute the entanglement negativity and the moments of partial time-reversal for a generic wave function of spin TQFT in the same fashion as in sections 4 and 5. It is straightforward to see that the results presented in sections 4 and 5 also hold for generic spin TQFTs on the lattice. The results in sections 3 and 6 also extend to pin− TQFTs on the lattice constructed from A.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 12,448.8 | 2020-01-01T00:00:00.000 | [
"Physics"
] |
The Exploratory Study of On-Line Knowledge Sharing by Applying Wiki Collaboration System
The purpose of this study was to present the experience of knowledge sharing through a wiki collaboration system, applied as an experiment in a Taiwanese company. The findings of this study may serve as a reference for those who want to employ a wiki system as an on-line knowledge-sharing tool in organizations. In this research, the researchers followed a phenomenological research methodology to describe "What is the content and context of the experience of on-line knowledge sharing through the wiki application system for the people in the case company?" The results show that the essence of the experience of knowledge sharing through the wiki collaboration system falls into four themes: mass collaboration with co-workers to construct knowledge, infrastructure of the wiki collaboration system, collaborative knowledge sharing design, and scaffolding as the learning facilitator.
Introduction
The first stage of the web consisted of static web sites and passively accessed content. Web 2.0 is a trend in World Wide Web technology and web design: a second generation of web-based communities and hosted services, such as social-networking sites, wikis, blogs, and folksonomies, which aim to facilitate creativity, collaboration, and sharing among users [1]. In contrast to the first stage, Web 2.0 breaks the traditional boundaries between information providers and passive audiences. One of the advantages of these technologies is that they are accessible whenever and wherever users need them, which means just-in-time learning [2].
O'Reilly [3] defined Web 2.0 as the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually updated service that gets better the more people use it; consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others; creating network effects through an architecture of participation; and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.
With the rapid evolution of blogs, wikis, social networking and bookmarking, these and related applications offer rich user experiences in which the process of knowing is a community-based, collaborative endeavor [4]. Web 2.0 provides a more robust platform for sharing knowledge, and it is the users who add value and expand the value of the venue, enhancing the initial knowledge base [2].
The term "wiki" refers to a social computing system that allows a group of users to initiate and evolve a hyper-linked set of Web pages using a simple markup language [5]. The ultimate goal of sharing employees' knowledge is its transfer to organizational assets and resources [6].
This study attempted to discover how participants experienced sharing their knowledge when they joined the experiment of using a wiki collaboration system as the tool for constructing technical reports in the case company.
Literature Review
Web 2.0 breaks the traditional boundaries between information providers and passive audiences. An advantage of these technologies is that they are available when and where the user is available (just-in-time learning) [2]. Web 2.0 has a more robust platform to share knowledge, and it is the users who add value and expand the value of the venue, enhancing the initial knowledge base [2]. Knowledge is transmitted and exchanged mutually among users. Web 2.0 is a second generation of web-based communities and hosted services, such as social-networking sites, wikis, blogs, and folksonomies, which aim to facilitate creativity, collaboration, and sharing among users [1].
A successful online example of this phenomenon is Wikipedia, a free online encyclopedia that enables anyone with Web access to post articles on any topic, edit them or challenge their relevance [7]. A wiki provides an extremely fast and efficient way to collaborate and communicate knowledge among virtually anyone interested, without the constraints of place or time [8].
The open principle solicits the constant interaction of wiki contributors through editing page syntax or content, and adding or correcting posted knowledge elements, which fosters the social ties vital for knowledge sharing. A wiki environment is conducive to constructivist learning theory, where trust enables an individual to express knowledge in order to construct it, and influence helps to refine knowledge [5,9]. Therefore, the purpose of this research is to understand the knowledge sharing experience of employees of a company through the wiki collaboration system.
Knowledge Sharing
Garcia and Vano [10] argue that the issues surrounding the complex environment can be summarized and condensed into two major challenges: dealing with the global marketplace, and trying to manage an organization's knowledge. Knowledge, it is frequently argued, is the intangible resource that can give an organization a competitive advantage [11,12] that can be sustained [13][14][15] and is difficult to duplicate [16].
Driven by the knowledge economy, many organizations have recognized knowledge as a valuable intangible resource that holds the key to competitive advantage [17]. It is the ability of firms to create, transfer and adopt knowledge, rather than their allocative efficiency, that determines their long-run performance [18]. Recent studies of knowledge sharing involve many fields, such as education, the hotel industry, business, and technology [19,20]. Besides, knowledge sharing is not only used by industries and organizations but also influences individuals: there is research indicating that knowledge sharing is related to individuals, including personality traits and individual attitudes [6,21]. In addition, knowledge sharing can be applied to project-based organizations [22].
Knowledge sharing can be defined as a social interaction culture, involving the exchange of employee knowledge, experiences, and skills across the whole department or organization [23]. In addition, knowledge sharing involves individuals sharing organizationally relevant experiences and information with one another [24]. Bartol and Srivastava [25] defined knowledge sharing as the action by which employees diffuse relevant information to others across the organization [6]. Knowledge sharing is a reciprocal process of knowledge exchange, and research examines the factors that help explain why individuals are willing to engage in this process. Knowledge sharing is very important for organizations, because it helps organizations to develop their skills and competences, increase value, and maintain their competitive advantage [26].
Kim and Lee [27] identify three positions on the effects of knowledge sharing: 1) knowledge-sharing activities create opportunities for private organizations to maximize their ability to meet customers' changing needs and to generate solutions to gain competitive advantage; 2) knowledge sharing is one of the most important factors affecting organizational agility and performance; 3) knowledge sharing further entails the development of storage and retrieval mechanisms for quick and easy access to information that is used for adjusting strategic direction, problem solving, and improving organizational efficiency. Lin [23] also makes three points about the effects of knowledge sharing: 1) for individual employees, knowledge sharing is talking to colleagues to help them get something done better, more quickly, or more efficiently; 2) for an organization, knowledge sharing is capturing, organizing, reusing, and transferring experience-based knowledge that resides within the organization and making that knowledge available to others in the business; 3) knowledge sharing is essential because it enables organizations to enhance innovation performance and reduce redundant learning efforts. In addition, knowledge-sharing systems have been implemented in various companies during the last few years. However, many of them have failed because they were limited to technical solutions and did not consider the organizational and environmental factors that are necessary to make a knowledge-sharing platform successful [28]. Whereas knowledge sharing can improve an organization's competitiveness, a lack of knowledge sharing can cause serious problems for an organization [29]. Modern processes and systems enable the sharing of organizational knowledge in new ways [30]. A wiki environment is conducive to constructivist learning theory, where trust enables an individual to express knowledge in order to construct it, and influence helps to refine knowledge [5,9]. The open principle solicits the constant interaction of wiki contributors through editing page syntax or content, and adding or correcting posted knowledge elements that foster the social ties vital for knowledge sharing. Therefore, the purpose of this research is to explore the knowledge sharing experience of employees through the wiki collaboration system.
Wiki Collaboration System
The term "wiki" comes from the Hawaiian language and means "super-fast". Wikipedia is a website that lets any user or visitor edit freely, anytime and anywhere. A "wiki collaboration system" or "wiki software" can be operated and developed to realize the concept of mass collaboration. Wiki software includes all wiki-related software, such as the web server, the management database, and the wiki engine [1].
A wiki is a website that allows, and in fact encourages, users to share information by freely writing new content, adding to existing content, and editing or commenting on content.It can be viewed as an electronic version of a brainstorming session among colleagues when it works well, and it has the advantage of extending the session around the globe so that like-minded individuals can contribute productively to a discussion.
Gorman [31] provided a brief viewpoint on wikis. A wiki is a web site that allows and encourages users to share information by freely writing new content, adding to existing content, and editing or commenting on content. His article is an opinion piece based on the author's own experiences. Reflecting the chaotic nature of the web world in which wikis exist, the reality of the situation is that they often do not operate in a positive environment. The author's experience as a wiki participant on several occasions has been far from positive.
Wiki collaboration systems encourage student-centered learning environments, because they encourage students to be co-creators of course content. However, there are several problems with the traditional wiki paradigm for use in the classroom. Prior work has identified these problems and described systems implemented to solve them.
Methods
In order to explore and describe the knowledge-sharing experience of the participants in the wiki collaboration system, we took phenomenology as the perspective for analyzing the corpus. This study adopted a single case to analyze the data descriptively. The case concerns a company applying MediaWiki, a wiki software system, to co-write technical reports for R&D staff. We introduce this exploratory experiment in order to demonstrate an innovative application of the Web 2.0 concept. Quinn, Anderson et al. [32] note that employees readily use ad hoc networks for projects, because their compensation is tied to peer review of team behavior. The case company shows the same pattern, as expressed in the following paragraphs. This phenomenological study focuses on exploring the meaning of lived experiences as phenomena. The data were collected through individual face-to-face interviews with participants in the wiki project of the case company. Eligible participants were individuals who had joined the discussion board as the on-line knowledge-sharing tool in the case company during the past three years. This target group was selected for two primary reasons. First, they volunteered to participate in the wiki project, which implied they might have stronger motivation for sharing their knowledge. Second, the case company had built the discussion board for employees to solve their problems and communicate their experiences by sharing, which might have given these participants prior knowledge of how to share with others. The qualitative in-depth interviews were guided by a semi-structured interview guide, with open-ended questions and as few prompts as possible, to elicit rich descriptions of experiences. The interview guide included several general prompts to ensure that the interview maintained a general focus and that major themes of interest were explored. Specifically, we asked our informants questions about their personal experiences of sharing knowledge in the wiki collaboration system, their perceptions of community interactions, their understanding of shared knowledge, the influence of the experiences on their self-perceptions, and strategies to facilitate more discussion of knowledge in the wiki project setting. For example, participants were asked "Can you tell me about some of your experiences of sharing knowledge in the wiki system?", "How do you understand your experiences of sharing knowledge in the wiki system?",
and "Do your wiki experiences influence your willingness to share knowledge?" The specific phrasing of interview questions varied slightly across participants depending on the interview context. With permission from the participants, the in-depth interviews were audio-taped and transcribed. The transcribed data were coded and analyzed using NVivo, a computer software program for qualitative data analysis. Each interview lasted approximately 90 minutes. At the end of each interview, background information on the participants was collected. Emerging themes were discussed among the researchers for dependability and confirmability. Initially, the researchers' past or present experiences as wiki participants were bracketed [33] so that the themes were allowed to emerge from the data. Next, to examine the credibility and confirmability of the emerging themes, e.g. [34], the preliminary findings of this study were presented to and reviewed by all participants. This member checking provided one means of increasing trustworthiness by ensuring that participants' experiences had been appropriately represented. The researchers were the main research tools for gathering data. Through the phenomenological viewpoint, the researchers attempted to understand the meaning conveyed by the interviews.
Findings and Discussion
Six aspects compose the factors affecting individuals' on-line knowledge sharing: organizational culture, infrastructure of the wiki collaboration system, rewarding system, work pressure, knowledge sharing design, and willingness to share knowledge [35]. We apply these dimensions as the foundation to develop the four themes of experience presented as the findings of this research.
Mass Collaboration with Co-Workers to Construct Knowledge
There were more barriers for experienced employees in operating this system to co-write the technical reports. Knowledge transfer will be limited by such technical problems.
The tool itself provides convenience to end users, including writers, readers, and managers inside the company. From the viewpoint of sharing resources in the organization, combining the wiki collaboration system with the organization's infrastructure leads to more feasibility. All the employees joining the project build up the benefits of applying the wiki collaboration system. The outcomes of the project are the processes through which participants work together and cohere. Information in the wiki collaboration system is renewed by branching modifications of the data. The benefit is a more efficient pattern compared with traditional file-system management. New information can be accessed by exactly the users who need it, following the just-in-time concept. The ultimate objective is that organizational employees are formed into a community by the project, in which all members can collaborate with each other. Knowledge shared on the discussion board needs to be adapted and curated by a gatekeeper; on the other hand, the wiki collaboration system presents the entire modification log for anyone to inspect.
Infrastructure of Wiki Collaboration System
The case company adopted free wiki software, such as MediaWiki and Shareport, to develop the intranet wiki co-writing system. The wiki markup language is a difficult barrier for R&D staff. For example, double quotes have a special meaning in wiki markup, which confused R&D staff quite often. Some system issues should be tailored in order to reduce the impact on the corporation. Hyperlinks within the intranet are a convenient feature for users. From an application viewpoint, managers can assign certain topics for sharing and interaction. Managers can then review the discussion log throughout the process, which is the most valuable part of the wiki system in the Web 2.0 environment.
Collaborative Knowledge Sharing Design
The wiki system makes it easier to collect data than other platforms, such as the Google search engine, because of data convergence. However, the correctness of the content is an important concern, especially under the no-supervisor condition. When thinking about implementing a wiki knowledge-sharing system, the organization may consider developing it in-house in order to customize it to the organizational design. In practice, the functions and integrity of the more than 256 kinds of wiki engines differ. From a technical viewpoint, the database format is an important issue to consider, as is compatibility with the intranet system. Some employees are still accustomed to their old web habits; they need more experience in practice. The wiki participants were qualified in terms of characteristics and experience, so the validity of the content to be delivered could be confirmed.
Scaffolding as the Learning Facilitator
The facilitators need to recommend this system to the executive level, so that executives realize the benefit of the wiki system for openly sharing synchronized knowledge. Users will feel free to use this system once they become aware of how well the system supports their workflow. Inside the organization, employees are used to communicating by phone or face-to-face. The success factors will follow the development of the environment: when the Web 2.0 concept matures on the business side, the wiki will be adopted into working processes with fewer barriers, and the willingness to share knowledge will become more open and active.
Conclusions
On-line knowledge sharing using a wiki collaboration tool is a new model of knowledge management. The case company experimentally applied the open-source MediaWiki as the software system to co-write technical reports within its intranet. The experiment showed good potential for R&D staff to extend their discussion and exchange of intelligence and experience from the on-line discussion board to the intranet wiki collaboration system. An orientation and training program on using the new system is needed for participants when introducing a new system into the organization. Although the volunteers were very familiar with the on-line discussion board, they still had some difficulty getting used to the wiki collaboration system, which means the operating process is an important guide for promoting users' willingness to adopt the wiki collaboration system as the on-line knowledge-sharing tool. Implementing the wiki collaboration system across the entire company will also be an important issue. The intranet discussion board, as the on-line knowledge-sharing tool, has shaped the interaction patterns of R&D staff in the case company. To apply the wiki collaboration system successfully, the facilitators need to distinguish the functions of the wiki collaboration system from those of the discussion board, and to deal with barriers to adopting the wiki system, such as familiarity with the wiki markup language. These experiences were explored by this study. The meaning and explanation of knowledge sharing through a wiki collaboration system are highly recommended as topics for further research. | 4,064.2 | 2010-09-30T00:00:00.000 | [
"Business",
"Computer Science"
] |
The Temptation of Getting the "Pie": The Cost-Benefit Analysis of Farmer Specialized Cooperatives' Access to Government Subsidies in Mainland China
In the Chinese mainland, government subsidies look like a delicious "pie" hunted by different cooperatives. In order to get a larger share of it, however, some cooperatives have used strategies that distort the cooperative's basic character. These problems also reduce, to a certain extent, the efficiency with which the subsidy capital is used. This paper uses transaction cost theory to explore the current situation and the costs and benefits involved in different cases. The authors believe that the win-win option for Chinese cooperatives is to obtain the supportive resources, develop themselves, and profit from the market as well. In the end, the authors suggest that financial subsidies should be used more on building a public service platform so that more and more cooperatives can benefit.
Introduction
According to data from the State Administration for Industry & Commerce of China, the number of registered Farmers' Specialized Cooperatives (FSCs) had already exceeded 350,000 by the end of 2010. The rapid growth in number is partly due to the implementation of the Law of the PRC on Specialized Farmers Cooperatives (2006), which removed many former institutional barriers. Furthermore, this growth is closely linked with the subsidies to FSCs (Note 1) provided by central and local governments. Chapter 7 of the cooperatives law specifically states that government should provide support for the formation and development of FSCs. However, it is the abundant government subsidies that stimulate the growth in the number of FSCs at an over-dramatic speed. In rural China, the motivations for establishing FSCs are very complicated. Some FSCs do not aim to satisfy all members' common interests but instead focus on acquiring government subsidies. Meanwhile, the 'numbers talk' mentality pushes local governments into encouraging more and more farmers to form FSCs in order to achieve better performance figures. The abundant Chinese government subsidies look like a delicious "pie" to those FSCs, which constantly adjust their strategies in order to acquire them. However, some of their strategies for securing the subsidies have deviated from classical cooperative principles. These issues clearly have many causes. The authors consider the main reason to be that FSCs' limited cost-benefit awareness in gaining and using government subsidies does not match the original intention of government policy. Therefore, cooperatives can make more efficient use of subsidies by identifying the apparent or hidden costs, risks and benefits while trying to gain the "pie". Related government policies should also aim to make all the costs, risks and benefits involved clear as well.
Reasons for and the current situation of government subsidies
Why should the Chinese government provide abundant financial support to FSCs? On the one hand, it mainly reflects the weakness of farmers and agriculture. Because the interests of FSCs are consistent with the interests of farmers, the government is willing to promote sustainable agricultural development and encourage improved agricultural efficiency by supporting FSCs. On the other hand, one of the distinctive features of the cooperative business form is the promotion of and adherence to a set of principles. Commonly accepted principles, including voluntary membership, democratic control and concern for community, are consistent with the idea of good governance. Consequently, the growth of FSCs is good both for improving farmers' livelihoods and for the construction of a harmonious society. On this question there are obviously no divergent views among scholars. Xiaoshan ZHANG and Peng YUAN (1991) hold the view that the government should only provide financial support and other incentives to FSCs through legislation, cooperative education, the extension of agricultural technology and marketing management knowledge, and so forth. They also believed that intervention in the formation and growth of FSCs is not the so-called "first driving force" justifying government financial support. There is no doubt that cooperatives are autonomous, self-help organizations controlled by their members (Note 2); too much intervention would be detrimental, although FSCs must still deal with government and other institutions. From a dynamic point of view, Peng YUAN (2001) considered that there are non-equilibrium mutual penetrations and interactions between the state and cooperatives. To be specific, the government has a strong penetrating impact on FSCs, while at the same time the cooperatives themselves become independent and gradually differentiate. Based on field investigations in Jiangsu and Zhejiang Provinces, Jingxin WANG (2005) found that government support is necessary for FSCs' growth. Government should use measures to ensure FSCs can run their business in a more favorable environment: enacting local decrees, granting tax preferences, improving credit services, and establishing agricultural insurance and risk funds, etc. Xiangzhi KONG and Yanqin GUO (2006) used a non-random sampling method to investigate FSCs in 23 provinces of the Chinese mainland. They also found that government support was helpful and necessary in the development of FSCs, especially during their formation period. However, the reality also showed that government support in the policy and financing areas is still insufficient in general. In the case of Beijing Municipality, the growth trend in the number of FSCs experienced three distinctive stages during 2007-2010: the number first grew rapidly, then followed a relatively flat period, after which another sharp increase appeared. According to the statistical data (Note 3), with the implementation of the Law of the PRC on Farmers Specialized Cooperatives, the number of FSCs reached 1,609 in 2007. During the following year, only 707 new cooperatives registered. By contrast, the increase rose to 2,079 between 2008 and 2010. Compared with 2009 and 2010, the rate of growth in 2008 was far lower.
<Insert Figure 1 here> In the first phase of FSCs' development in Beijing, the implementation of the law effectively cleared away the existing institutional barriers, so a rapid growth in numbers appeared immediately. This situation was the result of a long-term systemic blockage; once the related problems were resolved, especially where and how to register, a large number of FSCs were bound to form. Things changed in the second phase: the impact of the registration system had gradually been absorbed and the systemic congestion problems no longer existed. In this phase, the main purpose of establishing FSCs was for farmers to consider their own situation and use collective action to pursue normal market returns. The increasing trend gradually slowed down and steady growth emerged thereafter. Under ideal conditions this situation would have continued. However, the FSCs in Beijing showed a third phase of increase. This phenomenon suggests that a new institutional temptation, concentrated in government subsidies, plays an essential role beyond FSCs' original development space. To maximize benefits, FSCs prefer to pay nothing or only small costs to gain substantially increased profits and to reduce operating costs as well. When government subsidies are placed in front of cooperatives like a delicious "pie", they become additional income beyond the general input-output relations of the FSC. This becomes the main driving force for FSCs to get the "pie". However, government subsidies do not belong to a generalized system of preference; several corresponding conditions must be met. The cost FSCs pay to prepare for and meet all these conditions is the potential cost of getting the "pie". Also, the target FSCs are selected by government step by step from all FSCs that meet the requirements; it is not 100 percent certain that every FSC that meets the government requirements will get the subsidies. Therefore, FSCs need to assume the additional risk of not obtaining the governmental incentives. Similarly, for FSCs that have already received government subsidies, the existing risks mainly concern the direct and indirect impact on their capital structure, patronage refunds and organizational governance structure. Furthermore, when government provides subsidies to FSCs, it should pay more attention to the usage efficiency of financial resources as well as the impact on the normalization of FSCs. As for the cooperatives, it is necessary to carry out a corresponding cost-benefit analysis before making further strategies, in order to reduce the costs of pursuing government subsidies and their negative effects on FSCs' sustainable development.
The cost-benefit analysis on obtaining government subsidies
In the broadest sense, transaction costs include all costs that would not exist in a Robinson Crusoe economy, that is, an economy without property rights, without trade and without any form of economic organization. This definition of transaction costs can be seen as a series of institutional costs, including information costs, negotiation costs, the costs of drafting and enforcing contracts, the costs of defining and controlling property rights, the costs of supervision and management, and the costs of changing institutional arrangements. In short, it includes all costs that do not arise directly in the physical production process (Wuchang ZHANG, 2000). An organization can also be seen as a constantly changing entity: its structure keeps changing because of the complexity and uncertainty of its surroundings. In particular, when organizations interact with the external environment, they have to pay costs for every transaction without exemption, and the costs of some transactions are clearly higher than others. As for government subsidies, various transaction costs exist in the processes of competing for, obtaining and using them. Government sets up a number of regulations to narrow down the set of supported objects, so FSCs have to adjust their organizational structure to satisfy these requirements. Taking Yanqing District of Beijing as an example, the local government introduced the document "About the implementation details involved in supporting FSCs in Yanqing District" (Note 4) (2007). Its Article IV stipulated that "Government should provide appropriate subsidies to FSCs while they establish their own website". Obviously, every step of running a website needs financial and technical inputs, for establishment, maintenance and management respectively. In fact, cooperatives could use existing marketing platforms to promote and sell their products, and the cost involved might be far less than building an entirely new website. Meanwhile, because of the limited extension system, other stakeholders have little awareness of the new website, so it cannot effectively improve the current sales conditions. As for the risks created in hunting for government subsidies, they are borne by the cooperatives and may, to a certain extent, negatively influence the sustainability of the cooperative's development and the improvement of members' income.
When the authors analyze the costs and benefits involved in obtaining government subsidies, it is important to classify the target FSCs into two groups: those which have already met the government's conditions and those which have not. In order to meet the requirements, the latter group will adopt strategies such as changing their current organizational structure, product structure and so forth. When FSCs try to obtain scarce resources from government, some of these strategies may change the cooperative's character. In addition, these changes may also influence the governance structure of the organization, and some of them may be internalized into the cooperative's characteristics permanently. Other strategies, by contrast, are tied to particular phases, and cooperatives may abandon them after a certain stage. The whole process of competing for and digesting governmental subsidies involves transaction costs that FSCs must pay. The net profit of obtaining government subsidies may be calculated by the formula 'obtained government subsidies - transaction cost = net profit'. If an FSC pays a large transaction fee and the income is less than the cost, the net profit will be negative. However, the reality in the field is much more complex, so the evaluation of transaction cost and net profit cannot simply be treated as 'one plus one equals two'. Usually, the quantity and quality of government subsidies received by a specific FSC should be much greater than the cost the entity itself pays. However, for a specific FSC, factors involving the uncertainty and heterogeneity of the transaction and asset specificity mean that a cooperative cannot be certain of receiving government subsidies after paying the related transaction cost. This phenomenon corresponds to the term 'aleatory' (Note 5) in the field of law. In general, the whole quantity of government incentives is limited and scarce, whereas FSCs' desire for them goes far beyond the supply. On the one hand, the dramatic growth in the number of FSCs in the Chinese mainland has already caused heated competition among FSCs of the same type, even in the same region. On the other hand, the exchange between cooperative and government is not an equivalent payment: FSCs are willing to accept the competition among FSCs and take full responsibility for possibly getting nothing. From this perspective, FSCs' strategies contain a speculative motive to some extent. In other words, this 'press one's luck' behavior means that if a cooperative does not get the chance to obtain the government subsidies, the organization can only absorb the transaction costs already paid, without any income. Meeting the government's requirements does not mean the organization will successfully obtain the subsidies; cooperatives merely get the chance to compete for these resources. Three possibilities then arise. Firstly, there are only costs and no income in return: although an FSC makes gestures to compete and adjusts its organizational structure, it merely gets the chance, not the real resource, so there is no income to cover the transaction costs paid. Secondly, some FSCs pay the cost and also gain the income (they get the government subsidies), and a net profit remains after the deduction. However, this case exists only in the directly observable economic sense: after paying the
cost of getting the subsidies, the cooperative successfully obtains the resource, and because the quantity of the gained resource is larger than the transaction cost, there is room for a net profit. Thirdly, the transaction costs FSCs pay coincidentally enhance their marketing power, so besides the government subsidies they receive there is an additional profit from pursuing the incentive. Unlike the other two cases, this kind of FSC's strategies improve its ability to adapt to the market during the competition process; the measurement of the net profit gained should therefore deduct transaction costs from this broader return. The previous analysis uses the cooperative as the main unit of analysis. Because a cooperative is a collective union, its strategy-making embodies the members' common needs and willingness. According to the Law of the PRC on Farmers Specialized Cooperatives (2006), there are no particular regulations on members' capital requirements; the related member responsibilities are specified in each cooperative's bylaws. According to Article 18 of the Law, a member of an FSC shall be charged with the duty to make capital contributions to the cooperative as stipulated in the charter. In other words, the bylaws of an FSC can set specific requirements on the amount, methods and procedures of capital contribution, or make no capital-contribution requirement at all, so that members do not need to take on an investor's identity when they decide to join. Therefore, there are four different forms of members' capital contribution under the current framework of the law: ① all members join and make equal contributions; ② all members join but make unequal contributions; ③ part of the members make capital contributions and part do not; ④ no member makes a capital investment (Gang SONG, 2007). In practice, membership differences exist in some FSCs in China: members can be grouped into core members and ordinary members, the core members being those who made large capital investments in the cooperative. The unequal capital contribution leads to differentiation in the membership structure, which in turn influences governance, management control and the distribution of residual claims in the cooperative. For members of such FSCs, obtaining governmental subsidies may change the order of internal member relationships. The following requirement in the Law of the PRC on Farmers Specialized Cooperatives shows that if an FSC accepts government subsidies, the money received needs to be documented in every member's account and treated as members' capital contribution.
… The distributable profits shall be returned or distributed to the members according to the following provisions, and the specific measures for distribution shall be decided according to the stipulations in the charter or the resolution of the membership assembly: … to distribute pro rata to the members of the cooperative the rest of the profits left after the return according to the provisions in the preceding subparagraph, on the basis of the capital contributions and shares of common reserve funds recorded in the members' accounts and the members' average quantified shares of the assets accumulated from subsidies directly given by the government and donations made by other persons to the cooperative. (Article 37, Law of the PRC on Farmers Specialized Cooperatives) Non-investing members thus turn into investing members in this case. In consequence, the government subsidies received change the capital structure of the cooperative and dilute the core members' benefit, which directly affects those members' income. The adjustment of the capital structure means that more members of FSCs with unequal capital contributions become involved in the profit distribution process, and the proportion of earnings those individual members can obtain is correspondingly reduced to a certain extent; a stylized numerical sketch of this dilution is given below. In addition, taking the differences in members' contributions to their cooperatives into account, the law provides an alternative institutional arrangement. Additional voting rights are one such example, intended to balance the interests of members with different orientations and maintain the sustainable improvement of members' common interests under the current "chaxu geju" (Note 6) situation. However, the capital structure changes caused by accepting government subsidies may deprive members who made large capital contributions of their additional voting rights. As a chain reaction, this may change the governance structure and sharply reduce the enthusiasm of members with large capital contributions to participate in governance and management. Members whose interests are damaged prefer to oppose the acceptance of government subsidies; non-investing members, on the contrary, prefer to use the democratic governance mechanisms to oppose the core members' strategy. In a word, government subsidies may lead to conflicts of interest and behavior among members of FSCs.
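The dilution effect described above can be illustrated with a stylized calculation. This is a hypothetical sketch, not data from the paper: the figures, member labels and the assumption that subsidy-derived assets are quantified equally across members are purely illustrative, following the pro rata logic of Article 37.

```python
# Hypothetical figures for illustration only; equal quantification of
# subsidy-derived assets across members follows the Article 37 wording above.

def distribution_shares(contributions, subsidy=0.0):
    """Fraction of the distributable surplus accruing to each member."""
    per_member_subsidy = subsidy / len(contributions)   # average quantified share
    basis = {m: c + per_member_subsidy for m, c in contributions.items()}
    total = sum(basis.values())
    return {m: round(b / total, 3) for m, b in basis.items()}

contributions = {"core_member": 80_000, "ordinary_1": 10_000, "ordinary_2": 10_000}

print(distribution_shares(contributions))
# {'core_member': 0.8, 'ordinary_1': 0.1, 'ordinary_2': 0.1}
print(distribution_shares(contributions, subsidy=60_000))
# {'core_member': 0.625, 'ordinary_1': 0.188, 'ordinary_2': 0.188}
```

In this stylized case, recording the subsidy as equal member shares lowers the core member's distribution share from 80% to 62.5%, which is the mechanism behind the conflict of interest described above.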
Recommendations
More emphasis is needed on improving the current government incentive institutions, to increase the quantity and quality of the benefits FSCs receive and to raise the usage efficiency of government resources. On the basis of the present institutional foundation, taking Chapter VII "Government Supportive Policies" of the Law of the PRC on Farmers Specialized Cooperatives as an example, and considering the costs, risks, benefits and issues involved in the process by which FSCs compete for and digest government subsidies, both government and FSCs should develop their strategy-making systems on the precondition of two-way choice.
FSCs should choose government subsidies selectively, depending on their own conditions
We believe a win-win situation can be achieved: FSCs can gain government subsidies while also improving their current conditions to cope with fierce competition from other stakeholders in the market and earning more from their business as they compete for the government incentives, thereby maximizing the usage efficiency of the subsidies. In other words, 'win-win' means realizing both the sustainable development of FSCs and the improvement of the usage efficiency of government subsidies. If the required transaction costs are too high and cannot help the cooperative to grow, then it is not suitable for that kind of FSC to compete for government funding. Meanwhile, in accordance with the 'earmark' principle for using government subsidies set out in the law, acquired funds can only be spent on specific projects. It is difficult for government to target individual FSCs' specific project requirements and provide enough funding to every single FSC. In this case, some FSCs should resist the temptation of subsidies and make their own growth strategies.
4.2 Government should strengthen cost-benefit awareness and effectively improve the performance of financial support
The main purpose of supporting FSCs is to strengthen the cooperatives' ability to adapt to their surroundings; FSCs can then benefit more from the market and maximize the usage efficiency of government incentives. Therefore, when the government chooses the potential objects of support, it should be aware that transaction costs and net income differ among FSCs, and accordingly the practical effect of using subsidies to support the development of cooperatives may be totally different. It is more necessary for government to choose FSCs in their birth and survival periods rather than already successful FSCs. In a word, the FSCs that need support most and that can improve their market competitiveness should have priority. For some FSCs, over-support from government is not good for cultivating the cooperative's capacity for self-development and independence. In reality, some FSCs have taken subsidies as their operating target; the money they get from the government is often not used for the organization's sustainable development and the benefit of all members, but secretly turns into some members' personal income, which reduces the usage efficiency dramatically. On the other hand, repeatedly providing support to successful FSCs would also undermine the fairness of competition among FSCs. Therefore, government subsidies should be used more on forming a public service platform for all cooperatives, reducing the current single model of direct financial payment. Moreover, a dynamic, long-term and effective selection and monitoring mechanism should be established to prevent fake cooperatives that fail to meet the requirements from squeezing out deserving cooperatives and misusing public resources.
References
Gang SONG. (2007). Some problems of farmer cooperatives: Review of "farmer cooperatives". Zhejiang Social Sciences, (5), pp. 64.
Hind, A. M. (1994). Cooperatives: under performers by nature? An exploratory analysis of cooperative and non-cooperative companies in the agribusiness sector. Journal of Agricultural Economics, (2), pp. 213-219.
Jingxin WANG. (2005).
Xiaotong FEI. (2001). Peasant Life in China, (9th ed.). Beijing: The Commercial Press, (Chapter 3).
Notes
Note 1. According to the Law of the PRC on Farmer Specialized Cooperatives (2006), farmers' specialized cooperatives in China are mutual-help economic organizations joined voluntarily and managed in a democratic manner by the producers and operators of the same kind of farm products, or by the providers or users of services for the same kind of agricultural production and operation. They mainly serve their members, offering such services as purchasing the means of agricultural production; marketing, processing, transporting and storing farm products; and providing technologies and information related to agricultural production and operation. Note 2. This principle is quoted from the International Cooperative Alliance (ICA) Statement on the Cooperative Identity. Available: http://www.ica.coop/coop/principles.html. Note 3. The data above were provided by the Beijing Municipal Commission of Rural Affairs. Note 4. This document confirms that government support to FSCs should cover infrastructure construction, an independent grant registration system, quality accreditation of agricultural products, trademarks, marketing network construction, technical training, etc. Note 5. 'Aleatory' means depending on an uncertain event or contingency as to both profit and loss. Note 6. The term "chaxu geju" was first coined by the famous Chinese sociologist Xiaotong FEI more than half a century ago. To describe the structural principles of Chinese society, he wrote: 'In Chinese society, the most important relationship - kinship - is similar to the concentric circles formed when a stone is thrown into a lake' (Xiaotong FEI, 2001). In such a network of concentric circles, 'everyone stands at the center of the circles produced by his or her own social influence. Everyone's circles are interrelated. One touches different circles at different times and places.' FEI refers to such a mode of social organization as chaxu geju, translated as the "differential mode of association". Through this concept, FEI argues that Chinese society is not group oriented but egocentric (Sarah Franklin, Susan McKinnon, 2001).
Figure 1. Number of FSCs and government subsidies to FSCs in Beijing Municipality. Source: Beijing Municipal Commission of Rural Affairs, 2010 | 5,380.2 | 2011-08-31T00:00:00.000 | [
"Economics"
] |
Karyotype evolution in Aeshna (Aeshnidae, Odonata)
The haploid DNA content of Aeshna confusa (2n = 27, n = 13 + XO, male), A. bonariensis (2n = 26, n = 12 + neo-XY, male) and A. cornigera planaltica (2n = 16, n = 7 + neo-XY, male) has been determined (2.16 ± 0.16 pg, 1.81 ± 0.17 pg, and 2.08 ± 0.08 pg, respectively). Despite the differences in chromosome size and number, differences in DNA content between species are not significant. The karyotypic analysis of Aeshna species leads to the conclusion that fusions between autosomes, or between an autosome and the sex chromosome, are the only chromosome rearrangement that occurred during evolution. In the species studied here, fusions have taken place with a minimal loss of DNA; however, other species of the genus show important differences in genome size, which cannot be explained by fusion events alone.
The comparison of the DNA content of related species is of great value in any analysis of karyotype evolution. This is particularly true in holokinetic systems, in which the lack of a localized centromere restricts the number of karyotype characteristics possible to consider, and also because fusions and fragmentations are the principal chromosome rearrangements observed (SYBENGA 1972; WHITE 1973). In spite of this, DNA content has seldom been determined in insects with holokinetic chromosomes (HUGHES-SCHRADER and SCHRADER 1956; SCHRADER and HUGHES-SCHRADER 1956, 1958; SCHREIBER et al. 1972; MELLO et al. 1986; PAPESCHI 1988, 1991) and such data are almost absent in Odonata (CUMMING 1964; PETROV and ALJESHIN 1983; PETROV et al. 1984). In the genus Aeshna, twenty-five species have been cytogenetically analyzed to date, and the diploid number in males varies between 16 (14 + neo-XY) and 27 (26 + XO).
In the present work, DNA content has been determined in Aeshna confusa (2n = 26 + XO, male), A. bonariensis (2n = 24 + neo-XY) and A. cornigera planaltica (2n = 14 + neo-XY). Cytological preparations for DNA measurements of the three species were obtained by squashing a piece of testis in 60% acetic acid; the coverslip was then removed by the dry-ice method and slides were air-dried. DNA contents were estimated using the Feulgen reaction in conjunction with scanning densitometry. The Feulgen reaction was carried out as follows: air-dried slides were rinsed 3 times for 10 min each in distilled water, immersed in 5 N HCl for 20 min at 25 ± 1°C, washed 3 times for 10 min each in distilled water, stained with the Schiff reagent for 2 h, and washed 3 times for 10 min each in SO2 water. The optimal hydrolysis time was previously determined through the calculation of a hydrolysis curve. Although the material had different fixation times, a previous analysis has shown that differences in DNA measurements in the same species with different fixation times are statistically non-significant (MOLA 1992).
Only spermatid nuclei which had just begun to elongate were measured. The number of slides per individual (from 1 to 3) and the number of individuals per species varied according to the number of cells and individuals suitable for this study. Chicken erythrocytes were used as a standard of reference for the detection and correction of any differences in the Feulgen reaction among the slides used; according to RASCH et al. (1971) the DNA content of chicken erythrocytes is 2.5 pg. Twenty spermatid nuclei and twenty chicken erythrocyte nuclei were measured in each slide, and slides were coded and randomized prior to scoring. The measurements were conducted with a Zeiss MPC 64 cytospectrophotometer at a wavelength of 570 nm, attached to a Kontron MOP-Videoplan computer, with the program APAMOS 99.
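As a minimal sketch of the standardization just described, the integrated optical densities (IOD) of spermatid nuclei can be converted to picograms against the 2.5 pg chicken erythrocyte reference measured on the same slide. The IOD values below are hypothetical placeholders, and the per-slide correction shown is only an assumption about how the reference was applied.

```python
import numpy as np

CHICKEN_DNA_PG = 2.5   # reference value for chicken erythrocytes (RASCH et al. 1971)

def dna_content_pg(spermatid_iod, erythrocyte_iod):
    """Convert integrated optical densities to pg of DNA via the slide's own standard."""
    pg_per_iod = CHICKEN_DNA_PG / np.mean(erythrocyte_iod)   # per-slide calibration factor
    return np.asarray(spermatid_iod) * pg_per_iod

# hypothetical readings from one slide (twenty nuclei of each type were measured per slide)
spermatid_iod = np.array([43.1, 41.8, 44.0, 42.5, 43.7])
erythrocyte_iod = np.array([49.6, 50.3, 50.1, 49.9, 50.2])

values = dna_content_pg(spermatid_iod, erythrocyte_iod)
print(f"haploid DNA content: {values.mean():.2f} ± {values.std(ddof=1):.2f} pg")
```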
Chromosome complement
The chromosome complement and meiotic behaviour of the three species have been described previously in detail (MOLA 1992). The principal features of the karyotypes of these species can be summarized as follows: Aeshna confusa (2n = 27, n = 13 + XO) has one larger bivalent and a very small one (m bivalent), while the other eleven bivalents decrease gradually in size. The X chromosome is larger than the m bivalent and of similar size to the second smallest one (Fig. 1a).
In A. bonariensis (2n = 26, n = 12 + neo-XY), the autosomal bivalents decrease gradually in size, except for the very small m bivalent; the neo-XY is the largest bivalent and is heteromorphic (Fig. 1b).
A. cornigera planaltica (2n = 16, n = 7 + neo-XY) has a reduced chromosome number and larger chromosomes than the former two species. The autosomal bivalents can be grouped into five large and two small ones, while the sex chromosome bivalent is the smallest one and noticeably heteromorphic (Fig. 1c). The analysis of variance of the data shows that differences between the three species and between individuals within each species are non-significant (Table 1); however, differences between slides within each individual are significant.
Discussion
The modal karyotype of Aeshna (2n = 26 + XO, male) is present in 72% of the species (Fig. 2). In most of them a larger autosomal pair and a noticeably smaller one (m pair) are readily distinguished; the X chromosome is small and of similar size to the smallest bivalent (MAKALOVSKAJA 1940; OKSALA 1943; CUMMING 1964; CRUDEN 1968; KIAUTA 1969; HUNG 1971; KIAUTA 1971, 1973; KIAUTA and KIAUTA 1980, 1982). The chromosome complement of A. confusa presents these characteristics. Considering this modal karyotype as the ancestral one, it can be observed that during karyotype evolution in the genus fusions have taken place, involving autosomes and/or the sex chromosome. No increase in diploid number has been reported so far. The chromosome complement of A. bonariensis (2n = 26, n = 12 + neo-XY, male) would have originated through the fusion of the original X chromosome with the largest autosomal pair, giving rise to a neo-XY system. A quite similar situation has been described in A. grandis, a species in which the neo-XY is the largest pair, heteromorphic and, hence, easily identified (OKSALA 1943; KIAUTA 1969). The chromosome complement of A. cornigera planaltica (2n = 16, n = 7 + neo-XY, male), which is much more reduced, would have originated through six fusions: five between autosomes and one between the original X chromosome and the smallest autosomal pair. Fig. 1(a-c). DNA content in Aeshna confusa (n = 13 + XO), A. bonariensis (n = 12 + neo-XY) and A. cornigera planaltica (n = 7 + neo-XY). In a, b, and c, one cell at diakinesis from each species is shown in order to compare size and number of chromosomes; the sex univalent or sex pair is indicated. Bar = 10 μm.
The large size of the chromosomes of A. cornigera planaltica, when compared with those of A. confusa and A. bonariensis, suggests that all the fusions that gave rise to such a reduced chromosome complement were probably accompanied by a minimal loss of DNA. As in A. cornigera planaltica, OKSALA (1943) described in A. coerulea that the sex bivalent was the smallest of the complement and heteromorphic. The neo-XY system is particularly frequent in Aeshna (28% of the species), since in the order as a whole only 5.4% of the species have the neo-system. In many species of Aeshna, the heteromorphism of the sex bivalent is easily recognized, a fact that is also unusual in other genera of Odonata.
DNA content is not always correlated with chromosome number and size, and particularly in insects with holokinetic chromosomes different situations have been encountered. Related species whose karyotypes differ by one or more fusions or fragmentations can show constancy in DNA content, as in Thyanta and Banasa (Heteroptera) (SCHRADER and HUGHES-SCHRADER 1956, 1958), or significant differences in genome size, as described in some species of Belostoma (Heteroptera) (PAPESCHI 1988). On the other hand, there are also examples of related species with the same diploid chromosome number but significant differences in DNA content, as in Triatoma and other species of Belostoma (Heteroptera) (SCHREIBER et al. 1972; PAPESCHI 1991).
In the species of Aeshna analyzed here, no differ[…] THOMAS 1992). This fact suggests that fusions could be associated with a noticeable loss of DNA, not necessarily involving loss of information. PETROV and ALJESHIN (1983) and PETROV et al. (1984) estimated the haploid genome size of eleven species of Odonata belonging to 6 families by means of DNA reassociation kinetics. They obtained values ranging from 0.37 pg to 1.7 pg. According to these authors, the DNA content of A. coerulea (as A. squamata) (2n = 24, n = 11 + neo-XY) and A. juncea (2n = 26, n = 12 + neo-XY) is 1.6 pg and 1.0 pg, respectively. As both species differ only by one autosomal fusion and the species with the higher chromosome number has a lower DNA content, it is evident that the DNA differences are not associated with the fusion itself; instead, they have probably occurred independently | 2,079 | 2004-05-28T00:00:00.000 | [
"Biology"
] |
A complementary approach for neocortical cytoarchitecture inspection with cellular resolution imaging at whole brain scale
Cytoarchitecture, the organization of cells within organs and tissues, serves as a crucial anatomical foundation for the delineation of various regions. It enables the segmentation of the cortex into distinct areas with unique structural and functional characteristics. While traditional 2D atlases have focused on cytoarchitectonic mapping of cortical regions through individual sections, the intricate cortical gyri and sulci demand a 3D perspective for unambiguous interpretation. In this study, we employed fluorescent micro-optical sectioning tomography to acquire architectural datasets of the entire macaque brain at a resolution of 0.65 μm × 0.65 μm × 3 μm. With these volumetric data, the cortical laminar textures were remarkably well presented in appropriate view planes. Additionally, we established a stereo coordinate system to represent the cytoarchitectonic information as surface-based tomograms. Utilizing these cytoarchitectonic features, we were able to three-dimensionally parcel the macaque cortex into multiple regions exhibiting contrasting architectural patterns. The whole-brain analysis was also conducted on mice, where it clearly revealed the presence of the barrel cortex and reflected the biological plausibility of this method. Leveraging these high-resolution continuous datasets, our method offers a robust tool for exploring the organizational logic and pathological mechanisms of the brain's 3D anatomical structure.
Introduction
The cerebral cortex, a sheet-like formation on the outside of the cerebrum, harbors approximately 16 billion neurons in humans (Van Essen et al., 2016). Given its pivotal role in cognitive function, structural and functional studies of the cortex are paramount, particularly in humans and non-human primates. Since the pioneering work of Brodmann in the early 1900s, intensive efforts have been devoted to the delineation of cortical architecture. Regional variations in cell size, packing density and laminar organization within cortical transverse sections were considered the proof for cortical parcellation (Zilles and Amunts, 2010). Cytoarchitectonic differences among cortical regions not only reflect anatomical features, but also functional characteristics (Amunts and Zilles, 2015).
Several new techniques and reference standards have been proposed for cortical parcellation, including anatomical circuits and functional connections. To elucidate the connectivity patterns of target cortical areas, neurotracers have been applied for parcellation based on circuit uniqueness (Saleem et al., 2014; Borra et al., 2019). However, these methods based on two-dimensional (2D) representations proved less effective in a continuous three-dimensional (3D) space because of the limited information obtainable from whole-brain structures such as the volumetric cortex (Hezel et al., 2012). With the invention of non-invasive stereo imaging such as MRI, it became feasible to visualize whole-brain structure and measure regional connectivity three-dimensionally with high spatial resolution (Liu et al., 2018, 2020; Saleem et al., 2021). However, compared with optical imaging, the resolution of non-invasive stereo imaging remains a constraint, precluding the observation of finer structures, so observations often require verification using optical methods (Yan et al., 2022). Therefore, the cytoarchitecture observed by optical imaging still plays an irreplaceable role in brain structural analysis and cortical parcellation.
More objective approaches have been proposed to quantify cytoarchitectonic features of the cortex based on optical imaging, to overcome the subjective influence of the observer (Amunts et al., 2007). With these approaches, noteworthy results have been achieved (Amunts et al., 2020), in which the cortices were treated as ribbons on sections and the peaks of feature-signal differences along the ribbons were set as the boundaries between cortical regions. However, the cortex is convoluted because cortical expansion exceeds the surface area of the subcortical nuclei and white matter (Van Essen et al., 2016), during which more cortical columns form as units for information processing (Molnár, 2011). This poses a challenge, as a single section plane cannot maintain the transverse plane of the cortex at every position, especially in the brains of gyrencephalic animals like humans and macaques. To address this problem, an optimized blocking approach relying on MRI and surgical neuronavigation tools has been established (Novek et al., 2023) to obtain the ideal transverse view perpendicular to the sulcal orientation; however, it still requires specialized equipment and expertise. Another strategy relies on registration, in which slice images are registered and then stacked to form volumetric data (Majka et al., 2021). While this strategy overcomes the limitations of 2D slices to some extent, the alignment process itself can introduce distortions to structural properties, such as cell distribution. Therefore, a more promising alternative is 3D optical imaging, which possesses inherent self-alignment properties (Ragan et al., 2012; Gong et al., 2013; Seiriki et al., 2017). The ability to delineate cytoarchitectonic images across the entire brain with subcellular resolution and 3D continuity (Xu et al., 2021; Zhou et al., 2022) has proven its potential for cortical analysis. Besides, how to extract cytoarchitectonic features from large datasets remains a challenge, because cytoarchitecture encompasses both microscopic cellular morphology and mesoscopic architecture, and the convolution of the cortex further compounds the complexity of the problem. Hence, an effective method for data analysis is needed.
In this study, we introduce a novel strategy for profiling cortical cytoarchitecture. The intact macaque brain was imaged using the fluorescence micro-optical sectioning tomography (fMOST) method, which enables the revelation of neurons' cytomorphological details with fluorescent dye (Hezel et al., 2012). Benefiting from the cellular resolution in 3D, cortical cells were classified based on their size, enhancing the contrast of the cytoarchitectonic differences across cortical laminates. The local densities of graded cells were extracted as cytoarchitectonic features and integrated into a surface-based framework serving as a representation of the cortex. This strategy enabled us to detect significant differences in cytoarchitectonic feature signals across cortical regions, indicating its potential for cortical architectonic profiling and parcellation.
Animals
A 9-year-old male Macaca fascicularis and 8-week-old C57BL/6J mice were used for this study. The monkey lived in an individual cage under standard conditions (temperature 21 ± 2°C, humidity 60%) and was fed food and water ad libitum. It was treated in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. The experimental protocol and animal care protocols were approved by the Ethics Committee of the Kunming Institute of Zoology and the Kunming Primate Research Center (Approval No. IACUC18018). The mice were housed in normal cages in a specific-pathogen-free environment with a 12-h light/dark cycle, where the temperature and humidity were kept stable at 22-26°C and 40-70%, respectively. All mice had free access to food and water.
Sample embedding and imaging
The macaque was anesthetized with sodium pentobarbital (45 mg/ kg, intramuscular) and transcardially perfused with 4 L of 4°C 0.01 mol/L PBS and 1 L of 4% PFA.The mice were also deeply anesthetized with sodium pentobarbital (intraperitoneal) and subsequently perfused with 0.01 mol/L PBS, followed by 4% PFA.The brains were excised and post-fixed in 4% PFA at 4°C for 24 h.After fixation, each intact brain was rinsed overnight at 4°C in a 0.01 mol/L PBS solution.
The sample embedding and imaging of the intact mouse brain and a right macaque hemiencephalon followed the reported method (Zhou et al., 2022) (Supplementary Figure S1). In short, the post-fixed specimens were immersed in a solution of synthesized N-acryloyl glycinamide (NAGA) hydrogel monomer at 4°C for 12 h and 7 days, respectively, and incubated at 40°C for 4 h for hydrogel polymerization. Then a home-made large-volume tissue imaging system was used for whole-brain imaging. In the system, the samples were immersed in 2 mg/ml propidium iodide (P21493, Thermo Fisher, America) solution, and the superficial tissue was removed by a vibratome (VT1200S, Leica, Germany) after imaging in each "imaging-sectioning" cycle, so that the newly exposed surface could be stained in real time. A laser (Cobolt, Sweden) with a wavelength of 561 nm was used and the red emission signals were captured by an sCMOS camera (C13440-20CU, Hamamatsu, Japan). The imaging plane was set 10 microns below the cut surface of the sample to avoid the effects of tissue loss or deformation caused by cutting, and the samples were imaged with a 10× objective (Olympus, Japan) four times for every 12 μm thick section, yielding image volumes with a 3D resolution of 0.65 μm × 0.65 μm × 3 μm. The raw image stripes with overlap were stitched together using code implemented in C++, and the illumination intensities were corrected with polynomial curves as preprocessing. The code is available at http://atlas.brainsmatics.org/a/zhong2019 (Zhong et al., 2021).
For the comparison of 2D and 3D image processing strategies, the embedded macaque brain was sliced with a vibrating microtome to obtain a 100-μm thick coronal section. The brain slice was stained with propidium iodide for 0.5 h and then imaged with a laser scanning confocal microscope (Leica DMi8, Germany). A 20× objective was used and the z step size was 0.5 μm, giving a cellular 3D resolution of 0.446 μm × 0.446 μm × 5 μm. A strip of cortex from the outer surface to the inner surface was selected for imaging, producing an image stack 2000 μm in radial length, 350 μm in span and 75 μm in thickness.
Size-depend cell grading and image enhancement
The original images, with a resolution of 0.65 μm × 0.65 μm, were used as the input for cell size grading. Initially, morphological top-hat filtering was applied to each image to suppress the background signal. This filtering used a structural element (SE) slightly larger than the largest cell body, with a radius of 20 pixels. Subsequently, the filtered images were binarized with local adaptive thresholds to segment the foreground patches, and three rounds of morphological opening were performed with disk-shaped structural elements of increasing radii (6, 10, and 14 pixels). This stepwise elimination of small patches left only the larger cells. Finally, each foreground pixel was assigned a cell size grade, as sketched below.
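A minimal sketch of this grading step is shown below, assuming scikit-image as the toolbox; the adaptive-threshold block size and the mapping of "openings survived" onto the three grades are assumptions, since only the SE radii (20, 6, 10 and 14 pixels) are stated above.

```python
import numpy as np
from skimage import filters, morphology

def grade_cells(image):
    """Label each foreground pixel with a cell-size grade (0 background, 1 small ... 3 large)."""
    # suppress background with a top-hat slightly larger than the biggest soma
    flat = morphology.white_tophat(image, footprint=morphology.disk(20))
    # local adaptive threshold to segment somata as foreground (block_size is an assumption)
    fg = flat > filters.threshold_local(flat, block_size=51)
    # successive openings remove progressively larger patches; count survivals per pixel
    survived = np.zeros(image.shape, dtype=np.uint8)
    opened = fg
    for radius in (6, 10, 14):
        opened = morphology.binary_opening(opened, morphology.disk(radius))
        survived[opened] += 1                      # 0..3 openings survived
    # map survivals onto three grades (assumed mapping)
    grade = np.zeros_like(survived)
    grade[fg] = 1                                  # eliminated early -> small cell
    grade[survived >= 2] = 2                       # survives the mid-radius opening -> medium
    grade[survived == 3] = 3                       # survives all openings -> large pyramidal soma
    return grade
```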
Given that pixels were assigned three grades, the foreground pixels of each image were distributed across three image channels, corresponding to red, green, and blue, respectively. These images were termed "cell-graded images." The same processing steps were applied to the entire image stack. To generate a composite view, adjacent cell-graded images were projected using the mean, converting the binary images into continuous-value images. Since the absolute value ranges differed significantly across the three channels, adaptive brightness adjustments were made to each channel. The resulting images were designated as "enhanced images."
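The composite "enhanced image" step might look like the following sketch; the per-channel gains are placeholders standing in for the adaptive brightness adjustment described above.

```python
import numpy as np

def cell_graded_rgb(grade_stack, gains=(1.0, 2.0, 4.0)):
    """grade_stack: (z, y, x) integer grades 0-3 -> (y, x, 3) RGB composite in [0, 1]."""
    channels = []
    for g, gain in zip((1, 2, 3), gains):          # red/green/blue = small/medium/large cells
        binary = (grade_stack == g).astype(np.float32)
        projected = binary.mean(axis=0)             # mean projection over adjacent planes
        channels.append(np.clip(projected * gain, 0.0, 1.0))
    return np.stack(channels, axis=-1)
```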
Image processing strategy comparison
An image volume of macaque brain cortex, exhibiting cellular and isotropic resolution, was employed for this comparative analysis. The image volume was binarized slice by slice in three orthogonal planes. Following binarization, Gaussian filtering was applied to the results, and a subsequent binarization step was performed using a threshold of 0.5. In the following analysis, most steps remained the same except for the SE: in the previously introduced 2D method, disk-shaped SEs were used for the morphological opening operations, whereas for the 3D method, ball-shaped SEs were employed instead. Finally, each voxel in the image volume was assigned a grade number.
The number of voxels within each grade was counted along the radial direction of the cortex, generating a signal curve. The pattern of this curve can be discerned from the number and positions of its peaks and valleys. If there were minimal systematic differences between the two methods, the curve patterns should exhibit similarity, as outlined in the sketch below.
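One way to express this comparison metric is sketched below, assuming per-voxel arrays of relative radial depth and cell-size grade; SciPy's find_peaks stands in for whatever peak detection the authors actually used.

```python
import numpy as np
from scipy.signal import find_peaks

def radial_profile(depth, grade, which_grade, n_bins=20):
    """Count voxels of one cell-size grade in radial-depth bins (the signal curve)."""
    counts, _ = np.histogram(depth[grade == which_grade], bins=n_bins, range=(0.0, 1.0))
    return counts.astype(float)

def laminar_landmarks(curve):
    """Positions of peaks and valleys; similar landmarks imply the 2D and 3D methods agree."""
    peaks, _ = find_peaks(curve)
    valleys, _ = find_peaks(-curve)
    return peaks, valleys
```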
Cortical segmentation
To minimize computing and memory consumption, the serial images were down-sampled to form an image volume with an isotropic resolution of 12 μm × 12 μm × 12 μm. The image volume was then automatically segmented using the Brain Extraction Tool in the FMRIB Software Library (FSL, http://www.fmrib.ox.ac.uk/fsl/) to obtain a preliminary segmentation. During this process, the voxels were classified into three categories: grey matter, white matter and imaging background.
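The preliminary extraction could be reproduced roughly as below; the conversion of the down-sampled stack to NIfTI is assumed, and the BET options shown (-f for the fractional intensity threshold, -m to write a binary mask) are illustrative rather than the authors' actual settings.

```python
import subprocess

# Run FSL's Brain Extraction Tool on the 12 µm isotropic volume (assumed to have been
# converted to NIfTI beforehand); the flags are illustrative, not the authors' own.
subprocess.run(
    ["bet", "cortex_12um_iso.nii.gz", "cortex_12um_brain.nii.gz", "-f", "0.3", "-m"],
    check=True,
)
```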
Manual verification was then performed to correct any unexpected segmentation errors.Additionally, irrelevant subcortical regions and the cortical layer I were removed in this step.This was necessary because in vitro samples often exhibit closely spaced sulcal surfaces on both sides, which could interfere with subsequent surface extraction and modeling of the cortex.
For manual segmentation, a proprietary software tool was utilized.This tool allowed for efficient segmentation by automatically interpolating every local operation across a specified thickness interval.To further enhance accuracy, operations were executed in three orthogonal views, minimizing the impact of oblique section planes that could result in a blurry GM/WM interface.
Finally, the desired segmented cortical volume should form a contiguous entity with no topological holes or rings, ensuring a clean and accurate representation of the cortical structure.
Establishment of cortical coordinate system
The cortical streamlines were generated by simulating the behavior of electric field lines within the cortical region.The interfaces between cortical layer I and II (outer surface) and between gray matter and white matter (inner surface) were defined.These interfacial voxels were assigned fixed potential values (0 for outside and 2000 for inside).The potential values of the remaining voxels within the cortex were calculated to create a continuous potential field in 3D space ensuring consistency with the fixed surfaces.The final potential field was determined by solving a Laplacian equation.
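A minimal relaxation sketch of this potential field is given below; the Jacobi iteration, the boolean-mask layout and the fixed iteration count are assumptions, not the authors' solver.

```python
import numpy as np

def solve_potential(cortex, outer, inner, n_iter=2000):
    """Jacobi relaxation of the Laplace equation inside the cortical mask.

    cortex, outer, inner: boolean volumes for grey matter, the layer I/II interface and
    the GM/WM interface (assumed to lie away from the array borders, since np.roll wraps).
    """
    phi = np.zeros(cortex.shape, dtype=np.float32)
    phi[inner] = 2000.0                                  # fixed boundary values
    free = cortex & ~outer & ~inner                      # voxels whose potential is solved for
    for _ in range(n_iter):
        neighbours = (
            np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
            np.roll(phi, 1, 2) + np.roll(phi, -1, 2)
        ) / 6.0                                          # mean of the six face neighbours
        phi = np.where(free, neighbours, phi)
        phi[outer], phi[inner] = 0.0, 2000.0             # re-impose the boundary conditions
    return phi
```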
The 3D gradient direction of the potential aligns with the direction of the streamline at each voxel. By tracing these streamlines through the cortex, we obtained a representation of cortical thickness, measured by the length of the streamlines. The relative positions of voxels along these streamlines served as a measure of radial depth within the cortex. Additionally, we identified the midpoint voxels of each streamline to construct a mid-thickness surface. Each voxel on this surface was uniquely labeled with an ID number, serving as a lateral coordinate. The remaining voxels were labeled based on their connections to the mid-thickness surface voxels via the streamlines.
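Streamline tracing along the potential gradient might be sketched as follows; the step size, the stopping criteria and trilinear interpolation via SciPy's map_coordinates are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def trace_streamline(phi, start, step=0.5, max_steps=4000, phi_max=2000.0):
    """March from an outer-surface voxel toward the GM/WM interface along grad(phi)."""
    grads = np.gradient(phi)                              # one gradient array per axis (z, y, x)
    pos = np.asarray(start, dtype=np.float64)
    path = [pos.copy()]
    for _ in range(max_steps):
        g = np.array([map_coordinates(a, pos[:, None], order=1)[0] for a in grads])
        norm = np.linalg.norm(g)
        if norm < 1e-6:
            break
        pos = pos + step * g / norm                       # step toward increasing potential
        path.append(pos.copy())
        if map_coordinates(phi, pos[:, None], order=1)[0] >= phi_max - 1.0:
            break                                         # reached the inner boundary
    path = np.asarray(path)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    length = seg.sum()                                    # cortical thickness along the line
    depth = np.concatenate(([0.0], np.cumsum(seg))) / max(length, 1e-6)
    return path, length, depth                            # depth: relative radial depth (0..1)
```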
Using this coordinate system, we defined radial units by grouping together streamlines that were in close proximity based on the proximity of their mid-thickness surface voxels.The k-Nearest Neighbor (KNN) method was used here to identify the nearest mid-thickness surface voxels, with k-values set to 1,000 for the macaque brain and 50 for the mouse brain.As a result, each voxel on the mid-thickness surface was associated with a specific radial unit.It is noteworthy that adjacent radial units had overlapping regions, resulting in a series of continuous signals that captured the spatial variation across the cortex.
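A sketch of the radial-unit grouping with scikit-learn's NearestNeighbors is shown below; the array layout of the mid-thickness surface coordinates is an assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def radial_units(mid_surface_xyz, k=1000):
    """Group streamlines into overlapping radial units (k = 1000 macaque, 50 mouse).

    mid_surface_xyz: (n, 3) coordinates of mid-thickness surface voxels; n must be >= k.
    Returns an (n, k) index array: row i lists the surface voxels whose streamlines
    belong to radial unit i. Neighbouring units share voxels, giving continuous signals.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(mid_surface_xyz)
    _, idx = nn.kneighbors(mid_surface_xyz)
    return idx
```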
Cytoarchitectonic signal counting and clustering
With the established coordinate system, the cortex was laterally divided into radial units, and these radial units were further stratified into multiple laminates based on relative depth. Since the soma pixels in the raw images were already labeled with grade numbers, we calculated the foreground pixel ratio in each lamina of each radial unit as a measure of cell distribution density. Initially, this calculation was performed within each 12 × 12 × 12 cubic volume, and the results were then aggregated at the level of radial units. In this study, we defined 20 laminates and considered 3 cell size grades. Consequently, the cytoarchitectonic pattern of a single radial unit was represented by a 20-by-3 signal matrix, effectively a 60-dimensional vector (see the sketch below).
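Assembling the 20-by-3 signal of one radial unit might look like the sketch below, assuming per-voxel arrays of relative depth and cell-size grade restricted to that unit.

```python
import numpy as np

def unit_signal(depth, grade, n_lam=20, n_grades=3):
    """Foreground ratio of each cell-size grade within each depth laminate of one unit."""
    signal = np.zeros((n_lam, n_grades))
    lam = np.minimum((depth * n_lam).astype(int), n_lam - 1)   # laminate index of each voxel
    for lam_idx in range(n_lam):
        in_lam = lam == lam_idx
        total = max(in_lam.sum(), 1)
        for g in range(1, n_grades + 1):
            signal[lam_idx, g - 1] = np.sum(in_lam & (grade == g)) / total
    return signal.ravel()        # 60-dimensional feature vector of the radial unit
```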
To facilitate further analysis, the dimensionality of these vectors was reduced using principal component analysis (PCA).The scores of the principal components that explained 95% of the total variance were then normalized and used as the basis for clustering.Subsequently, the radial units were clustered using the K-means method, providing an initial state for Markov random field clustering.
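The reduction and initial clustering could be sketched with scikit-learn as follows; the number of clusters and the score normalization are placeholders, and the Markov random field refinement that follows the K-means initialization is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_units(signals, n_clusters=8):
    """signals: (n_units, 60) matrix of radial-unit feature vectors -> cluster labels."""
    scores = PCA(n_components=0.95).fit_transform(signals)   # keep components explaining 95% variance
    scores = StandardScaler().fit_transform(scores)           # normalize the component scores
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
    return labels
```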
The surface-based cortical cytoarchitectonic tomogram and parcellation
The voxels located on the mid-thickness surface were designated as vertices to generate a patch surface.The color assigned to each voxel was based on the cytoarchitectonic signal intensity (in the case of tomographs) or the group number (for parcellation) of the corresponding radial units.For tomographs, a patch surface array was generated to visualize the cytoarchitectonic pattern.This array represented the pattern across nine equally divided intervals of relative depth, ranging from 5 to 95%, for each cell size grade.
Results
In the investigation of cortical cytoarchitectonic features, several steps can be summarized. Cells are recognized and classified based on morphological features locally in the high-resolution image. Then the distributions of different types of cells are counted over a large span with a lower spatial-resolution requirement. Finally, laminar concepts are generated and regional cytoarchitectonic variations are identified. In our study, we followed this approach.
The whole macaque hemisphere imaging datasets
Through the large-volume tissue imaging system (Zhou et al., 2022), we imaged the whole macaque hemisphere integrally and continuously in 3D at a resolution of 0.65 μm × 0.65 μm × 3 μm (Figures 1A,B and Supplementary Figure S2). Owing to the high resolution of the imaging sections, the cortical cytoarchitectonic details were sufficiently captured (Figure 1C). The differentiation of cell packing density together with soma morphology highlighted the laminar cortical architecture. As shown, multiple complex laminates could be observed in the primary visual cortex (V1), where the neurons are relatively small (Figures 1D,G). In the ventral bank of the principal sulcus (46v), the typical granular layer of the primate prefrontal cortex could be seen (Figures 1E,H). In the primary motor cortex (M1, or agranular frontal area F1), the representative large pyramidal neurons could be found, with imperceptible laminar transitions (Figures 1F,I). The observed differences indicated the feasibility of cortical cytoarchitecture profiling in 3D space with these datasets, enabling image signal extraction and quantitative statistical analysis.
The cortical cytoarchitectonic feature enhancement based on cell size grading
Cell size is a pivotal aspect of cortical cytoarchitecture, and distinguishing it can greatly enhance laminar contrast. Traditionally, evaluating cell size involved segmenting individual cells in images and measuring their 2D areas or 3D volumes. However, beyond the challenges associated with isolating adhered cells (Costantini et al., 2021), the computational costs of processing billions of neurons across the entire macaque cortex (Collins, 2011) are immense. In this study, we employed traditional morphological image filtering for size grading. The images at original resolution were binarized using an adaptive threshold to segment the cell signals as foreground. Subsequently, several rounds of morphological opening were applied to the binarized images to eliminate small signal patches (Figure 2A, III-V). The filtered images were then overlaid and the pixel values were accumulated to form grades (Figure 2A, VI). The results showed that large pyramidal neurons (labeled blue) in M1 were accurately distinguished from the neighboring smaller neurons (labeled red and green), demonstrating the effectiveness of our method in classifying cells based on size.
At the mesoscopic scale, the distributions of cells of different sizes manifest as laminar texture. However, the laminates may exhibit ambiguous boundaries in a single image (Figure 2B, middle). In such cases, we observed that the laminar contrast was prominently enhanced in the mean projection of the processed images (Figure 2B, right) compared with the original images (Supplementary Figure S3A). The cell distribution densities were represented by brightness, with different colors corresponding to the three size grades. This allows the presentation of more laminar details that are undetectable in unenhanced images. A quantitative analysis of laminar signal differences was also conducted, revealing that the enhanced images exhibited more pronounced fluctuations and additional peaks (marked by the arrows in Supplementary Figure S3B), indicating more detectable information. One consideration was that the area of a soma section might be affected by the position and angle of the section plane. Therefore, an evaluation was conducted to assess the difference between the 2D morphological method and the 3D one. By employing ball-shaped structural elements instead of disk-shaped ones on an image volume of cortex with cellular 3D resolution acquired through laser scanning confocal microscopy, we found limited differences between the two methods. While there were slight variations in the absolute voxel counts, the overall trends of the curves were comparable (Supplementary Figure S4E). The same laminar pattern could be recognized using both methods. Given the computational considerations, the 2D-based method was ultimately utilized.
When this strategy was applied to the entire brain, cytoarchitectonic features could be quickly observed on a large scale (Figure 2C), owing to the presentation of cortical laminar texture through brightness and color. Some cortical region boundaries were also identifiable where laminar patterns underwent significant changes, and these boundaries were consistent with the existing atlas (Saleem and Logothetis, 2012). It is noteworthy that the shape of neuronal sections can vary when the cortex is sliced at different angles, especially for pyramidal cells, which could potentially affect the estimation of cell size. However, the laminar texture remained discernible even in areas where the cortex appeared curled in coronal and horizontal sections (Supplementary Figure S5), indicating that the effect of section angle was limited.
The 3D framework of cortex structure
The two-dimensional (2D) results above showed the potential of extracting cortical cytoarchitectonic features and performing regional parcellation based on images enhanced according to cell size grading. However, a problem persisted: when the cortices were parallel to the section plane, the texture became ambiguous and the utilization rate of the sample was low. Additionally, it was hard for observers to form an idea of cortical parcellation in 3D space from the observation of 2D image series. A more intuitive approach would be to visualize the cytoarchitectonic features on a surface (Van Essen et al., 1998). Therefore, a framework was established to simplify the cortex structure.
The dorsolateral part of the frontal cortex was selected and segmented. Initially, a low-resolution image volume was pre-segmented using commonly utilized MRI software (FMRIB Software Library, FSL) to eliminate the background and white matter. Subsequently, the subcortical structures and cortical layer I were manually removed. With the help of an in-house labeling interface, the cortex was segmented continuously in 3D (Figures 3A-D). Notably, even in regions where the section planes were oblique relative to the cortices, resulting in indistinguishable grey matter (GM)/white matter (WM) interfaces in 2D images, the segmentation results remained equally precise.
Following binary segmentation, a cortex coordinate system was established. Given that cortical laminates are locally parallel and the cortical columns are perpendicular to these laminates, the cortex could be accurately modeled using a 3D Laplace's equation. This modeling approach enabled the simulation of cortical laminates as isosurfaces and of the cortical columns as the corresponding gradient directions. Once the gradient directions were calculated, streamlines were traced from the interface between cortical layers I and II to the GM/WM interface, following the gradient direction. The distances along these streamlines were then normalized and treated as relative cortical depth (Figures 3E-G). Based on this relative cortical depth, the mid-thickness surface was extracted as a simplified representative model of the cortex structure (Figure 3H). The mid-thickness surface served as a lateral reference, as any point in cortical space could be mapped to a corresponding point on this surface. Combined with the relative depth information, the cortex coordinate system was established. This system allowed the cortex to be divided into radial units, which served as the fundamental units for the surface-based analysis (Supplementary Figure S6). It is worth emphasizing that this 3D modeling approach was more precise than modeling based on 2D slices, because the streamlines cross the section plane in most cases (Figures 3I-K), which is consistent with the actual anatomy.
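The Laplace-equation model of relative cortical depth can be sketched as below. This is a simplified illustration under stated assumptions: boundary handling at the volume edges is naive, and the harmonic potential itself is used as a proxy for relative depth instead of the normalized streamline arc length described in the text.

```python
# Minimal sketch of the Laplace-equation cortical-depth model (not the authors' code).
import numpy as np


def relative_depth(cortex, pial, white, n_iter=500):
    """cortex, pial, white: boolean 3D masks of the grey-matter ribbon and of the
    voxels adjacent to the layer I/II and GM/WM interfaces, respectively.
    Returns a field that is 0 at the pial side, 1 at the white-matter side and
    satisfies Laplace's equation inside the cortex; its gradient approximates
    the cortical column direction."""
    u = np.zeros(cortex.shape, dtype=float)
    u[white] = 1.0
    interior = cortex & ~pial & ~white
    for _ in range(n_iter):  # Jacobi relaxation of the discrete Laplacian
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) +
               np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
        u[interior] = avg[interior]
        u[pial], u[white] = 0.0, 1.0  # re-impose boundary conditions
    return u
```

In the full pipeline, streamlines would additionally be traced along the gradient of this field and their normalized lengths used as the relative depth; the sketch above only conveys the boundary-value formulation.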
The surface-based cortical cytoarchitectonic tomogram
After dividing the cortex into radial units, the cell distribution density was measured within each unit by calculating the percentage of foreground pixels, and the values were mapped onto the mid-thickness surface, providing a tomographic visualization of the cytoarchitectonic signal.
Utilizing this approach, the cytoarchitectonic pattern in each cortical laminate was expansively displayed across the lateral span (Figure 4). Generally, the ratios of cells of different sizes were consistent with the result obtained from the 2D-based analysis, while the laminar textures were also presented in a novel and distinct form. For example, a notable increase in the density of small cells was observed at the middle depth in somatosensory areas 1 and 2 (1-2, Figure 4A), while the opposite trend was evident for the cells of medium size (Figure 4B). This pattern is characteristic of the granular layer. Additionally, the cytoarchitectonic patterns exhibited uniformity along the sulcal contours but greater variability perpendicular to them. This phenomenon was evident in the central sulcus, the principal sulcus and even the posterior supraprincipal dimple (Figures 4A,B). Notably, even subtle anatomical features like the superior precentral dimple, which appears as a shallow indentation on the cortical surface, exhibited changes in cytoarchitectonic pattern. Specifically, a high percentage of large cells was observed at the middle depth, manifesting as a bright spot on the surface visualization (Figures 3D, 4C). This finding suggests that cytoarchitectonic patterns may reflect underlying cortical developmental trajectories.
The radial units cluster based cortical parcellation
Following the procedures above, the selected cortex was parceled by clustering the adjacent radial units together using a Markov random field method, resulting in the designation of 10 distinct clusters (Figures 5A, 6A). Within the principal sulcus, a gradual shift in cytoarchitectonic patterns was observed across clusters 1, 2, 3, and 5. These clusters were spatially arranged from the internal to the external part of the sulcus (Figure 6B), exhibiting a consistent increase in the relative cortical depths of the granular layer (indicated by the arrows in Figure 5B). This finding underscores the cytoarchitectonic pattern of area 46, in which the cytoarchitectonic differences between the walls and crowns of the principal sulcus were greater than those between the dorsal and ventral regions (Figures 6B,D). Although previous studies have identified these distinctions (Cruz-Rizzolo et al., 2011), the potential parcellations had not been incorporated into the existing atlas (Saleem and Logothetis, 2012) (Figure 6C). Another notable change was detected in the central sulcus. Here, clusters 5, 10, 9 and 7 were arranged in order from posterior to anterior (Figure 6E). The granular layers initially shifted towards deeper laminates and then became undetectable, reflecting a dramatic cytoarchitectonic transition from somatosensory areas 1 and 2, through somatosensory area 3a/b, to the primary motor cortex (Figures 6F,G). It is noteworthy that cluster 10, corresponding to somatosensory area 3a/b (Figures 6E,F), exhibited the most concentrated distribution and the most specific cytoarchitectonic pattern (Figures 5A,B), indicating the unique functional role of this area. Concurrently, a vast and relatively uniform area was occupied by clusters 7 and 8, displaying minimal intra-group variations (Figures 5A,B). However, subtle changes could still be observed, such as variations in the number of medium-sized cells within deeper laminates. Overall, despite the fragmented nature of the results, they can still serve as useful references for manual cortical parcellation. It is also worth emphasizing that cytoarchitectonic similarities can exist across different cortical regions, providing insights into the organizational principles of the brain. We also compared our cytoarchitecture-based cortical parcellation with the macaque brain atlas obtained from multimodal data (Collins, 2011) (Supplementary Figure S7). In the atlas, the area anterior to the central sulcus is divided into three subregions of the agranular frontal area, F1, F2, and F4, whereas in our results the corresponding area was occupied by clusters 7 and 8. Notably, cluster 7 appeared as two isolated regions separated by cluster 8, which coincided with the partitioning pattern of the atlas. Meanwhile, although region F5 is also part of the agranular frontal area according to its name, this region corresponded to cluster 6 in our results and exhibited a dense distribution of small-sized cells suspected to form a granular layer (Figure 5B). On the other hand, the high consistency of cytoarchitecture along the sulcal contours found in this study was also consistent with the parcellation pattern in the atlas (46d, 46v and 1-2). Although the subtle and gradual differences in cytoarchitecture among these regions proposed in our study are not reflected in the atlas, this phenomenon could still reflect differences within the regions.
The methodological verification
To assess the biological plausibility and methodological universality of this approach, similar operations were performed on a mouse brain specimen (Supplementary Figure S8). Several boundaries between regions could be clearly delineated, reflecting regional characteristics (Supplementary Figures S8B,C). Notably, the typical texture of the barrel cortex was observed in the barrel field of the primary somatosensory area (SSp-bfd, Supplementary Figure S8B), where each barrel appeared as a dark spot.
In the radial units clustering analysis (Supplementary Figure S8D), the parcellation pattern closely resembled previously reported findings (Wang et al., 2020).These cytoarchitectonic tomographic presentations were biologically significant and not merely artifacts of the analysis.
We also investigated the impact of radial unit size. When using smaller radial units, the images were sharper and contained more granules. Conversely, with larger radial units, the granules appeared smoother. Nevertheless, despite changes in image granularity or smoothness, the overall signal pattern in the macaque cortex remained relatively unaffected (Supplementary Figure S9A). The situation was different for the barrel cortex signals: as the radial unit size increased, the barrel signals became increasingly ambiguous and eventually disappeared (Supplementary Figure S9B). In summary, our approach demonstrates robustness across different species and brain regions, while also highlighting the importance of considering radial unit size when analyzing fine cytoarchitectonic structures.
Discussion
As the brain region responsible for higher-order mental functions, the architectural profile of the cortex could help us understand the generation of human intelligence and improve artificial intelligence. However, the cortical architecture is too complex to be revealed only by conventional biological approaches. Multidisciplinary collaborations are necessary, such as engineering for specimen imaging and computer software science for data analysis and visualization. In this study, a strategy for cortical cytoarchitectonic profiling with stereo optical imaging data was introduced. This approach enabled the presentation of global cytoarchitectonic patterns on a surface, while also delineating cortical laminar and regional differences in cell distribution. Some related questions and assumptions are discussed below.
FIGURE 4 The cytoarchitectonic tomographic presentation of the macaque cortex. (A-C) The panels correspond to the three cell size grades; in each panel, the cell distribution densities in different depth intervals, from superficial to deep, were projected on the mid-thickness surface. The depth intervals are labeled in (A). The arrows point to the sulci and region, including: cs, central sulcus; ps, principal sulcus; pspd, posterior supraprincipal dimple; spcd, superior precentral dimple; 1-2, somatosensory areas 1 and 2.
The parcellation of cortical region on surface
Surface-based data visualization of the whole-brain cortex has usually been used in MRI-based research to present data such as cortical thickness, sulcal depth (Van Essen and Dierker, 2007), myelin density and brain activity in functional tasks (Glasser et al., 2016). For the brains of gyrencephalic animals, the folded cortex makes it less intuitive to visualize data on 2D slice images than on a surface, because observers cannot quickly capture the distribution pattern over a large area without mentally integrating information from discrete images. Cytoarchitectonic studies, however, have typically utilized 2D histological slices for high-resolution optical imaging of cells. Although the slices were continuous, the axial spatial resolution was still insufficient, owing to the slice thickness and sampling ratio (Saleem et al., 2021), which prevented the presentation of cytoarchitectonic signals on a surface. In this study, the fMOST system was used for whole macaque hemisphere imaging with a stereo resolution of 0.65 μm × 0.65 μm × 12 μm, which provided a chance to characterize cytoarchitectonic features at the whole-brain level. With this dataset, the cell distribution densities were extracted as cytoarchitectonic signals and projected onto a surface representing the cortex. By this approach, the distribution pattern of cells of different sizes could be acquired in each cortical region and each cortical laminate, as could the cortical laminar patterns and the cortical regional differences.
In cytoarchitectonic-feature-based cortical parcellation by the traditional method on 2D sections, multiple decisions were made for the boundary traversing the slices and the results were summarized to form the stereo parcellation. Because of the difficulty in integrating information between slices, the decisions were made independently in discrete slices and errors accumulated, leading to mismatches between slices (Reveley et al., 2017). In contrast, when parceling the surface labeled with cytoarchitectonic signals, the observer or the algorithm took into account information over a large span and the boundary could be decided more directly with a single drawing, reflecting the superiority of our method for cortical parcellation.
FIGURE 5 The cortical parcellation based on radial unit clusters. (A) The radial units were clustered into 10 clusters, and each area was painted with colors on the mid-thickness surface. (B) The means of the cell distribution densities for every unit group were drawn as curves and the shaded areas represent the standard deviations. The arrows indicate the positions of the granular layers, which present a gradient trend from clusters 1-5.
The complementary information for cortical architecture
In this study, neurons were classified according to soma size, which made it feasible to roughly distinguish cell types, and the results presented enhanced discrimination of the cortical laminates. However, the details of cortical architecture are far more complex than cell size alone, and complementary information is needed.
For instance, the brain slices could be collected (Larsen et al., 2021) after imaging for gene expression measurement by combining single-cell sequencing with laser capture microdissection (Chen et al., 2017), in situ molecule capture (Chen et al., 2022), in situ sequencing (Wang et al., 2018) or in situ hybridization (Fang et al., 2022). The bottleneck for integrating these methods with ours is specimen slice collection, especially for thin slices, whose integrity is difficult to maintain. Collecting thicker slices is easier, but it also means a lower imaging sampling rate in the current pipeline. The 3D imaging of optically cleared tissue combined with molecular detection (Pesce et al., 2022) provides an alternative approach that could offer deeper insights into neuronal characteristics and aid in elucidating their functional mechanisms. However, the alignment of individual slices is crucial and poses a significant challenge, particularly when the sample sustains damage or undergoes notable deformation during the processing phase. In our method, the specimen was securely immobilized within the hydrogel, safeguarding it from mechanical damage and minimizing distortion. Consequently, the captured images are naturally aligned, ensuring accurate and reliable results.
The nucleic acid dye propidium iodide with red fluorescence could also be combined with the green fluorescence dye for myelin (García-García et al., 2023), allowing the capture of myeloarchitectonic information at the same time, which was a crucial component of cortical architecture.The myelin signals would also assist in defining the GM/WM interface.Moreover, by recognizing the orientations of the myelin texture (Schurr and Mezer, 2021), the nerve fiber tracts could be reconstructed (Menzel et al., 2021) and contribute to the cortex-associated connection study.
In addition, the current cytoarchitectonic analysis could be complemented by neural tracing. The regional diversity of neural connections could supplement areas with little cytoarchitectonic difference, and the cytoarchitectonic features could serve as a reference for precisely locating tracing signals in fine structures such as the barrel cortex.
The discriminative ability of this method
As mentioned above, the results were influenced by the size of the radial unit, and microstructures may be missed if they are not already known. This was caused by the loss of spatial resolution during the averaging of the cell distribution densities, and it seems inevitable in the current process. One possible solution is to introduce a fourth dimension for signal smoothing by registering individuals together, so that the signals are smoothed across the samples. This strategy has been used in the mouse brain (Wang et al., 2020), but it would be challenging for cortex registration in animals like macaque and human because of the high diversity between individuals (Van Essen et al., 2012). In this case, our current method could be used to produce fundamental landmarks, and similar regions could be highlighted during the global registration. Local registration could then be performed for microstructure detection.
FIGURE 6 The varied cytoarchitectonic pattern across the sulcus. (A) The cluster result is shown on the mid-thickness surface as a summary of Figure 5A. Two boxes indicate the section positions for (B-G). (B-G) The principal sulcus, which is mainly occupied by area 46, and the central sulcus are focused on. The brain slices are labeled with cluster colors (B,E). The border lines were manually drawn in accordance with the atlas and relevant references. The boundaries in the reference atlas (Collins, 2011) are indicated with solid lines, and the extra boundaries in the reference article (Van Essen et al., 1998) are labeled with dotted lines (C,F). The enhanced 2D images are presented as cytoarchitectonic references (D,G). Abbreviations of the region names: 46d/v(r), the dorsal and ventral (crowns) of area 46; 1-2, somatosensory areas 1 and 2; 3a/b, somatosensory areas 3a and 3b; F1(4), agranular frontal area F1 (or 4).
Another issue was the measurement of cell size. As it was assessed on the basis of image binarization, it was susceptible to variations in staining intensity and imaging conditions. We found overall differences in signal intensity between samples, although the cytoarchitectonic pattern was less affected. This could still be misleading in analyses with more variables, such as interspecies or pathological comparisons. Therefore, more criteria are needed to ensure consistent specimen treatment during the process.
Conclusion
In this article, a new form of presentation for cortical cytoarchitecture was introduced. By employing the fMOST system, a whole macaque brain hemisphere was imaged with high definition. Thus, two limitations of traditional methods were overcome. The first was the relatively low resolution of MRI-based methods, which prevents analysis at the cellular level. Based on the high-resolution imaging data, the somas were graded by size, and prominent laminar textures were revealed on cortical sections. The other problem was caused by the folding of the cortex, which makes observations from 2D sections doubtful. By establishing a cortical coordinate system, the cortex was divided into radial units and the cytoarchitectonic signals in certain depth intervals were presented on a surface as a tomogram. Based on these approaches, the cortical cytoarchitectures of both macaque and mouse were demonstrated, and the cytoarchitectonic patterns in the mouse brain were consistent with existing findings, supporting the biological plausibility of this method.
FIGURE 1
FIGURE 1 Cortical cytoarchitectonic profile with whole macaque hemisphere imaging datasets. (A) Overview of the pipeline. (B) Volume rendering and three orthogonal sections of the imaging datasets. (C) A representative sagittal section. Scale bar, 10 mm. (D-F) Enlarged images from (C) show typical cortical cytoarchitecture in V1, M1 and 46v, respectively. Scale bars, 200 μm. (G-I) Enlarged images from (D-F) present the morphological details of cells. Scale bars, 100 μm.
FIGURE 2
FIGURE 2 The cell grading manipulation enhances the cytoarchitectonic contrast between cortical laminates and regions. (A) Images explaining the grading principle according to cell size with image morphological manipulation. The results after each step are shown in order: the raw image (I), the binarized image (II), the images after three rounds of morphological opening (III-V), and the raw image labeled with grade colors (VI). Scale bars, 200 μm. (B) Comparison of the raw, the cell-graded and the enhanced image from the section of the intraparietal sulcus (the definitions of the images are given in the methods). Scale bars, 1 mm. (C) Comparison of the raw and enhanced images at the whole-brain level and in a local area. The candidate boundaries are drawn with solid lines and the names of cortical regions are labeled: V4v, the ventral part of visual area 4; TFO, area TFO of the parahippocampal cortex; TF, area TF of the parahippocampal cortex. Scale bars, 5 mm.
FIGURE 3
FIGURE 3 The establishment of the cortical coordinate system. (A-C) Three orthogonal sections are presented, in which the outlines of the segmented cortex are drawn with yellow lines. The cross lines in each section indicate the positions of the other two sections. The anatomical directions are indicated with arrows: A, anterior; P, posterior; D, dorsal; V, ventral; M, medial; L, lateral. (D) The segmented cortex is presented by volume rendering. (E-G) The relative cortical depths are displayed with a color gradient in which the shallow laminates are shown in cool colors and the deep laminates in warm colors. (H) The mid-thickness surface is rendered and a subset of streamlines is drawn in random colors. (I-K) The values of the angles between the section planes and the streamline directions at their intersections are displayed with a color gradient. A positive value indicates that the angle is directed away from the observer. | 9,009 | 2024-05-23T00:00:00.000 | [
"Biology",
"Engineering"
] |
An Efficient Method for Proportional Differentiated Admission Control Implementation
Introduction
Efficient implementation of admission control mechanisms is a key point for next-generation wireless network development. Over the last few years, the interrelation between pricing and admission control in QoS-enabled networks has been intensively investigated. Call admission control can be utilized to derive optimal pricing for multiple service classes in wireless cellular networks [1]. An admission control policy inspired by the framework of proportional differentiated services [2] has been investigated in [3]. The proportional differentiated admission control (PDAC) provides a predictable and controllable network service for real-time traffic in terms of blocking probability. To define the mentioned service, the proportional differentiated service equality has been considered and the PDAC problem has been formulated. The PDAC solution is defined by the inverse Erlang loss function, which requires complicated calculations. To reduce the complexity of the problem, an asymptotic approximation of the Erlang B formula [4] has been applied. However, even in this case, the simplified PDAC problem remains unsolved.
In this paper, we improve the previous results in [3] and remove the asymptotic assumptions of the approximation used. We show that, for a desired accuracy of the approximate formula, the offered load has to exceed a certain threshold, and we derive the concrete value of this threshold. Moreover, an explicit solution for the considered problem is provided. Thus, we propose a method for the practical implementation of the PDAC mechanism.
The rest of the paper is organized as follows. In the next section, we give the problem statement. In Section 3, we first present a nonasymptotic approximation of the Erlang B formula. We then use it for a proportional differentiated admission control implementation and consider some alternative problem statements for an admission control policy. In Section 4, we present the results of numerous experiments with the proposed method. Section 5 is a brief conclusion.
Problem Statement
Let us consider the concept of admission control inspired by the framework of proportional differentiated services. Following the notation of [3], the PDAC problem is defined as
δ1 B(ρ1, n1) = δ2 B(ρ2, n2) = · · · = δK B(ρK, nK). (1)
Here, (i) K is the number of traffic classes, K ≥ 2; (ii) δi is the weight of class i, i = 1, . . . , K; this parameter reflects the traffic priority, and by increasing the weight we also increase the admittance priority of the corresponding traffic class; (iii) ρi is the offered load of class i traffic; (iv) ni = ⌊Ci/bi⌋, where Ci is the partition of the link capacity allotted to class i, bi is the bandwidth requirement of class i connections, and ⌊x⌋ is the largest integer not greater than x; (v) B(ρi, ni) is the Erlang loss function, that is, under the assumptions of exponential arrivals and general session holding times [5], the blocking probability for traffic of class i, i = 1, . . . , K.
We need to find C1, C2, . . . , CK, taking into account the known δi, ρi, bi, i = 1, . . . , K, and the restriction imposed by the given link capacity C:
C1 + C2 + · · · + CK ≤ C. (2)
Let us remark that variations of Ci imply a discrete change of the function B(ρi, ni). Hence, it is practically impossible to provide the strict equality in (1). It is reasonable to replace (1) by an approximate equality:
δi B(ρi, ni) ≈ δj B(ρj, nj), i, j = 1, . . . , K. (3)
But, even in this case, the above problem is a difficult and complex combinatorial problem. For its simplification, the following asymptotic approximation has been used [3]. If the capacity of the link and the offered loads are increased together, that is, n → ∞ and ρ → ∞ (4), with ρ > n, then the Erlang loss function can be approximated by
B(ρ, n) ≈ (ρ − n)/ρ. (6)
Taking into account the PDAC problem, the authors of [3] consider the limiting regime (7) in which the offered loads and the allotted capacities of all classes grow without bound. Under these conditions, the asymptotic approximation of the Erlang B formula has been used and (1) has been replaced by the simplified equations
δi (ρi − ni)/ρi = δj (ρj − nj)/ρj, i, j = 1, . . . , K. (8)
In practice, the limiting regime (7) is not appropriate. But the simplification (8) can be used without the conditions (7). Actually, the approximation (6) can be applied without the asymptotic condition (4). We prove this below.
Approximate Erlang B Formula.
We assert that for the desired accuracy of the approximation (6) the offered load has to exceed a certain threshold. The concrete value of the threshold is given by the following theorem.
Theorem. Let ε > 0 and ρ > n. If
ρ ≥ n + 1/ε, (9)
then (ρ − n)/ρ < B(ρ, n) < (ρ − n)/ρ + ε.
Proof.
Here and below, we use the following designation: β(ρ, n) = (ρ − n)/ρ. Assume that ρ > n. First, we rewrite the Erlang B formula as
1/B(ρ, n) = Σ(j=0..n) n!/((n − j)! ρ^j).
Remark that n!/(n − j)! = n(n − 1) · · · (n − j + 1) ≤ n^j. Taking into account the properties of the geometric progression, we have
1/B(ρ, n) ≤ Σ(j=0..n) (n/ρ)^j < Σ(j=0..∞) (n/ρ)^j = ρ/(ρ − n).
Hence
B(ρ, n) > (ρ − n)/ρ = β(ρ, n).
To prove the second inequality of the theorem, we use an upper bound of the Erlang loss function from [6]. Transforming it, we obtain B(ρ, n) − β(ρ, n) ≤ n/(ρ(ρ − n)). We have n/ρ < 1. Hence, B(ρ, n) − β(ρ, n) < 1/(ρ − n). Thus, for any ε such that
1/(ρ − n) ≤ ε, (20)
it follows that B(ρ, n) − β(ρ, n) < ε. From the inequality (20), we obtain the condition (9). The proof is completed.
Note that the approximate formula (6) can provide the required accuracy even in the case of ρ < n + 1/ε. Actually, if ε = 0.01 and n = 200, then the required accuracy is already reached for ρ = 270 < 300. Thus, the condition (9) is sufficient but not necessary. It guarantees the desired accuracy of the approximation for any small ε and any n.
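The approximation and the threshold of condition (9) can be checked numerically with a short script. This is an illustrative sketch only: the exact Erlang B value is computed with the standard recursion, which is not necessarily the computational method used in the paper.

```python
# Numerical check of B(rho, n) ≈ (rho - n)/rho against the exact Erlang B formula.
def erlang_b(rho, n):
    """Exact Erlang loss probability via the standard recursion
    B(rho, 0) = 1,  B(rho, k) = rho*B(rho, k-1) / (k + rho*B(rho, k-1))."""
    b = 1.0
    for k in range(1, n + 1):
        b = rho * b / (k + rho * b)
    return b


def beta(rho, n):
    """Non-asymptotic approximation discussed in the text (valid for rho > n)."""
    return (rho - n) / rho


if __name__ == "__main__":
    eps, n = 0.01, 200
    rho = n + 1 / eps                      # threshold of condition (9): rho = 300
    print(erlang_b(rho, n) - beta(rho, n))  # positive and below eps, as the theorem states
```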
It is clear that for some values of C, bi, ρi, δi we can obtain C1 > C in (24) or Ci < 0 in (23). In such cases the problem is unsolvable and a PDAC implementation is impossible for the given parameters.
More precisely, if C1 > C, then from (24), using the corresponding equality, we derive an inequality on the parameters; similarly, from the inequality Ci < 0, by substituting the expression (24) for C1 into (30), we obtain after some manipulations a further inequality. Note that the problem (22) has been formulated under a condition which, together with these inequalities, defines the region of acceptability for the PDAC problem (22). It follows from the theorem that the approximation (6) is applicable even for n = 1 and any small ε > 0 if ρ > 1/ε − 1. In spite of this fact, the solution above may not be useful for small values of the ratio Ci/bi. In this case, the loss function B(ρi, ni) is sensitive to the dropping of the fractional part in the calculation ni = ⌊Ci/bi⌋. For example, if bi = 128 kb/s, ρi = 2, and we obtain Ci = 255 kb/s, then the approximate value of the blocking probability is about 0.004, but ni = ⌊Ci/bi⌋ = 1 and B(2, 1) ≈ 0.67. Thus, the offered approximate formula is useful if the ratio Ci/bi is relatively large.
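The closed-form solution (22)-(25) is not reproduced in this excerpt, but under the approximation B(ρi, ni) ≈ (ρi − ni)/ρi with ni = Ci/bi the simplified PDAC system reduces to a small linear computation. The sketch below is a derivation consistent with that simplified problem, not a verbatim reproduction of formulas (23)-(25); capacities are expressed in kb/s and fractional channel counts are not rounded.

```python
# Hedged sketch: solve delta_i*(rho_i - C_i/b_i)/rho_i = const for all i with sum(C_i) = C.
def pdac_allocation(C, b, rho, delta):
    """Return capacities C_i (same units as C).  A negative C_i or C_1 > C signals that
    the PDAC problem is unsolvable for the given parameters, as discussed above."""
    # lam is the common value of delta_i * B_i shared by all classes
    lam = (sum(bi * ri for bi, ri in zip(b, rho)) - C) / \
          sum(bi * ri / di for bi, ri, di in zip(b, rho, delta))
    return [bi * ri * (1.0 - lam / di) for bi, ri, di in zip(b, rho, delta)]


# Parameters of the numerical example given later in the text (capacities in kb/s).
C = 640_000
b = [128] * 5
rho = [1100] * 5
delta = [1 - 0.1 * (i - 1) for i in range(1, 6)]
caps = pdac_allocation(C, b, rho, delta)
blocking = [(ri - ci / bi) / ri for ci, bi, ri in zip(caps, b, rho)]
# By construction all delta_i * blocking_i are equal and sum(caps) == C up to rounding.
```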
Alternative Problem Statements.
Let ni be the number of channels assigned to class i traffic, i = 1, . . . , K. Each class i is characterized by a worst-case loss guarantee αi [7,8].
Consider the following optimization problem: find the minimum numbers of channels n1, . . . , nK such that B(ρi, ni) ≤ αi for all i. (35) Assume that for all i ∈ {1, . . . , K} there exists ni ∈ N such that B(ρi, ni) = αi. It is well known that the Erlang loss function B(ρ, n) is a decreasing function of n [9], that is, B(ρ, n1) < B(ρ, n2) if n1 > n2. Therefore, the optimal solution (n*1, n*2, . . . , n*K) of the problem (35) satisfies the mentioned condition
B(ρi, n*i) = αi, i = 1, . . . , K. (36)
If we designate δi = 1/αi, then we get
δi B(ρi, n*i) = δj B(ρj, n*j) = 1, i, j = 1, . . . , K.
Thus, the optimization problem (35) is reduced to the problem (1).
Assume the approximation (6) is admissible. The method from the previous subsection could then be used, but the optimal solution of the problem (35) can also be computed by inverting the formula (36). Taking into account the approximation, we get
n*i ≈ ρi (1 − αi), i = 1, . . . , K. (38)
Note that in practice this solution is not usually an integer; thus, it has to be rounded up:
n*i = ⌈ρi (1 − αi)⌉. (39)
We now consider the optimization of routing in a network through the maximization of the revenue generated by the network. The optimal routing problem is formulated as (40), (41), where ni is a fixed number of channels for class i traffic and ri is the revenue rate of class i traffic. Obviously, the Erlang loss function B(ρ, n) is an increasing function of ρ. Therefore, the optimal solution (ρ*1, ρ*2, . . . , ρ*K) of the problem (40), (41) satisfies a condition of the same form as (1). Hence, the problem (40), (41) can be reduced to the problem (1) as well. Under the approximation, the optimal solution and the maximal total revenue can be written in explicit form.
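The channel-dimensioning rule (38)-(39) is simple enough to express directly in code. This is a sketch under the approximation above; the example values of ρ and α are hypothetical and are not taken from the paper.

```python
# Sketch of channel dimensioning from a worst-case loss guarantee: under
# B(rho, n) ≈ (rho - n)/rho, the guarantee B = alpha inverts to n ≈ rho*(1 - alpha),
# rounded up to the next integer.
import math


def channels_for_guarantee(rho, alpha):
    """Approximate number of channels so that the blocking probability of a class
    with offered load rho does not exceed the worst-case guarantee alpha."""
    return math.ceil(rho * (1.0 - alpha))


print(channels_for_guarantee(1100, 0.05))  # hypothetical example: 1045 channels
```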
Performance Evaluation
Let us illustrate the approximation quality. The difference Δ(ρ, n) = B(ρ, n) − β(ρ, n) is plotted as a function of the offered load in Figure 1. If the number of channels n is relatively small, then high accuracy of the approximation is reached only for a heavy offered load. Let us remark that a heavy offered load corresponds to a high blocking probability. Generally, this situation is abnormal for communication systems, but the blocking probability B(ρ*, n) decreases as the number of channels n increases, for a fixed relative accuracy ε. Let us designate ρ* = n + 1/ε. If the approximation (6) is admissible for ρ*, then it is also admissible for any ρ > ρ*. In Figure 2, the behavior of the loss function B(ρ*, n) for different ε is shown. Thus, the provided approximation is attractive for performance measures of queuing systems with a large number of devices.
Next, we consider a numerical example to evaluate the quality of a PDAC implementation based on the proposed method. Assume that C = 640 Mb/s, K = 5, bi = 128 kb/s, ρi = 1100, δi = 1 − 0.1(i − 1), i = 1, . . . , 5. On average, there are 1000 channels per traffic class. Following the theorem above, we conclude that the blocking probability can be replaced by the approximation (6) with an accuracy of about 0.01. Using (23)-(25), we find a solution of the simplified PDAC problem and calculate the blocking probabilities for the obtained values. The results are shown in Table 1.
Note that Σ(i=1..5) Ci = 640 Mb/s and three 128 kb/s channels have not been used. We obtain the values reported in Table 1, and it is easy to see that the differences δi B(ρi, ni) − δj B(ρj, nj) are small. If K = 10, δi = 1 − 0.05(i − 1), i = 1, . . . , 10, and the other parameters are the same, then
max(i,j) |δi B(ρi, ni) − δj B(ρj, nj)| < 0.001.
If the obtained accuracy is not sufficient, then the formulas (23)-(25) provide an efficient first approximation for numerical methods.
Conclusion
In this paper, a simple nonasymptotic approximation for the Erlang B formula is considered. We find a sufficient condition under which the approximation is applicable. The proposed result allows us to reject the previously used limiting regime and to consider proportional differentiated admission control under finite network resources. In this way, we obtain explicit formulas for the PDAC problem. The proposed formulas deliver high-performance computation of network resource assignment under PDAC requirements. Thus, an efficient method for proportional differentiated admission control implementation has been provided. | 2,730 | 2010-09-13T00:00:00.000 | [
"Computer Science"
] |
A Piecewise-Defined Function for Modelling Traffic Noise on Urban Roads
In this paper, a piecewise-defined function is proposed to estimate traffic noise in urban areas. The proposed approach allows the use of the model even in the case of very low or zero flows for which the classical logarithmic form is not suitable. A model based on the proposed approach is calibrated for a real case and compared with the results obtained with a model based only on the logarithmic form. The results obtained show how the proposed piecewise-defined function, linear for low traffic flows and logarithmic for medium-high volumes, is able to better represent real noise pollution levels in all conditions. The proposed approach is particularly useful when comparing two plan scenarios from the point of view of noise effects.
Introduction
In urban areas, road traffic contributes significantly to environmental noise and is its main source. As is evident to anyone living in congested cities, noise causes annoyance, sleep disturbance and damage to human health, reducing the quality of life.
The World Health Organisation [1] has estimated that at least one million years of life are lost every year due to road traffic noise in Western Europe. Impacts on human health have been widely studied. Muzet [2] and Pirrera et al. [3] studied the effects of environmental noise on sleep and, consequently, on health. Some studies [4][5][6][7][8] focused on cardiovascular problems related to noise. Sakhvidi et al. [9] studied the association between noise exposure and diabetes. Jafari et al. [10] evaluated if noise exposure can accelerate cognitive impairment and Alzheimer's disease. Noise and annoyance were studied in [11][12][13][14][15]. Other general studies can be found in [16][17][18], while the effects of noise on property prices were studied in [19][20][21].
The analysis of the literature allows us to identify two main types of models: • general, i.e., applicable to different situations and case studies; • specific, calibrated in correspondence with a specific case study and applicable only in the reference context or in very similar contexts.
General models relate noise to vehicle flow and/or average speed, but also take into account other context-specific characteristics (traffic type, gradient, road surface, barrier geometry, mean wind speed, traffic composition, local topography, etc.). Some examples are the models FHWA [44], CoRTN [45], RLS (see [46,47]), ASJ RTN [48], Harmonoise [49,50], Son Road [51], Nord 2000 [52], NMPB-2008 [53] and CNOSSOS-EU [43]. A comparison of these models and more details can be found in [23]. On the other hand, specific models are calibrated for specific situations, such as urban roads with similar characteristics, or sometimes for single infrastructures. These models estimate the noise emissions, for that specific case study, as a function of the traffic flow and sometimes the average speed of that traffic. Most models are based only on traffic flows; the simplest functional form is the following:
L_eq(f) = β0 + β1 · log10(f) (1)
where L_eq is the equivalent noise level at a specific distance from the centre of the road, in dB(A); β0 and β1 are the coefficients of the model to be calibrated; and f is the homogenised traffic flow (veh/h).
Other models include the distance of the receiver from the source within the formula, or other features such as the percentage of heavy vehicles or the average flow speed, trying to extend the use of the model to other contexts as much as possible. In almost all formulations, the term log10(f), or a functional transformation of it, is present and plays the main role in the calculation. Some papers that use the approach (1) or a similar one are [26,27,46,54-59].
Model (1) surely cannot be easily generalised but it is very easy to calibrate in specific contexts. A specific model, once calibrated in correspondence of some roads of a city, can be used to estimate noise pollution levels also in other roads of the same city, in order to estimate the exposed population, identify the most critical areas to intervene on and assess the overall effects of interventions on traffic circulation [60].
In this paper, the calibration of a specific model based on a piecewise-defined function is proposed, to avoid the use of the logarithmic form that can create problems for zero or very low flow, which can occur in simulation models of a whole road network.
The paper is organised as follows. Section 2 introduces the proposed approach. Section 3 calibrates a specific model based on the proposed approach and compares it with other models. Section 4 concludes and discusses the research prospects.
The Proposed Approach
Model (1) and most models based on the logarithm of traffic flows work very well when traffic flows are not extremely low or zero. It is well known that power and sound pressure levels vary with a logarithmic law. Indeed, the intensity of auditory sensations is in first approximation proportional to the logarithm of the stimulus and not to its absolute value.
For example, if we use the model calibrated in [59]:
L_eq(f) = 17.594 + 17.377 · log10(f) (2)
we can see that it is not applicable in the case of null flow (the second term would tend to minus infinity) and would give a value equal to 17.594 dB(A) for 1 veh/h and 34.971 dB(A) for 10 veh/h. As also noted in [59], the model is valid only if the flow is greater than about 50 veh/h, at which the equivalent noise level is equal to 47.1 dB(A). In most cases, this underestimation of equivalent noise levels for very low flows is not a problem, because roads with low traffic, or periods with low traffic, do not receive attention in noise analysis. The problem arises when we want to analyse the overall noise level of a city, perhaps by comparing pre- and post-intervention scenarios. For example, in Urban Traffic Plans, it is useful to check with simulation whether changes in traffic patterns may or may not have a positive impact on noise pollution. The approximations included in the simulation models, such as the discretisation of the study area into traffic zones (zoning), may lead to estimating zero or very low traffic flows on some links of the network, and any comparative analysis would be compromised in these cases.
To avoid this problem, in this paper we propose the use of a piecewise-defined function, in order to calibrate specific models, which has a first linear part (for low traffic values) and a second logarithmic part (for high traffic values). In any case, the function must be continuous and differentiable; therefore, we propose to extend model (1) linearly, with a slope equal to the first derivative of the model, for flows lower than a minimum value, f_min. In this way, we can formulate a model that is valid for all traffic conditions. Under these assumptions, the model can be formulated as follows:
L_eq(f) = β0 + β1 · log10(f_min) + [β1 / (f_min · ln 10)] · (f − f_min), if f ≤ f_min (3)
L_eq(f) = β0 + β1 · log10(f), if f > f_min (4)
The calibration of models (3)-(4) requires the estimation of three values, in contrast to model (1), for which it was sufficient to calibrate two coefficients (β0 and β1); indeed, in addition to the β0 and β1 coefficients, it is necessary to calibrate the value of f_min.
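One way to perform this three-parameter calibration is sketched below. This is not the authors' procedure (the paper uses generalized least squares on the Benevento field data, which are not reproduced here): the arrays `flows` and `leq_measured` are hypothetical placeholders, and continuity and differentiability at f_min are enforced by construction in the model function.

```python
# Hedged sketch of a joint calibration of beta0, beta1 and f_min for models (3)-(4).
import numpy as np
from scipy.optimize import curve_fit


def piecewise_leq(f, beta0, beta1, f_min):
    """Linear below f_min (slope equal to the derivative of the log branch at f_min),
    logarithmic above; the two branches join with matching value and slope."""
    f = np.asarray(f, dtype=float)
    log_branch = beta0 + beta1 * np.log10(np.maximum(f, 1e-9))
    lin_branch = (beta0 + beta1 * np.log10(f_min)
                  + beta1 / (np.log(10.0) * f_min) * (f - f_min))
    return np.where(f > f_min, log_branch, lin_branch)


flows = np.array([50, 120, 300, 600, 900, 1400, 2000], dtype=float)   # placeholder data
leq_measured = np.array([52.0, 55.5, 59.0, 65.5, 70.0, 74.5, 77.5])   # placeholder data
params, _ = curve_fit(piecewise_leq, flows, leq_measured,
                      p0=(5.0, 20.0, 300.0))  # initial guess for beta0, beta1, f_min
```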
Model Calibration
The calibration of the model is performed using the same data collected in the testing campaign reported in [59] in the city of Benevento. These data refer to four road sections (see Figure 1) representative of the prevailing type of roads in the urban network; the pavement was bituminous in all cases. The phonometric surveys were conducted by a specialised technician using a Svantek 949 sound level meter with periodic certification, equipped with a Svantek SV 12L preamplifier and a pre-polarised Svantek SV22 microphone. Before and after the measurements, the tuning of the system was verified with a DELTA OHM precision calibrator. A microclimatic station for measuring temperature and wind speed and direction was used; indeed, the phonometric measures must be recorded in the absence of rain, snow or fog, and the maximum wind speed must be lower than 5 m/s. The sound level meter was positioned at a height of 1.5 m above road level, at the side of the road. The traffic surveys were obtained with a manual procedure, counting cars, light-duty vehicles (LDV) and heavy-duty vehicles (HDV) every 15 min (motorcycle flows are negligible in Benevento), and assuming the following equivalence coefficients: LDV = 2 cars; HDV = 8 cars. Overall, 32 measurements were conducted; the main data are summarised in Table 1. The phonometric measures have to be corrected for locations A and B according to the distance from the centre of the road; indeed, as reported in Figure 2, the distances are different in these locations since there are parking lots (the measurements were, in any case, performed where no cars were parked). Assuming a linear noise source, the sound in locations A and B arrives at the receptor attenuated by the factor ΔL_eq = 10 · log10(6. The model was calibrated using the generalized least squares method and the following values were obtained: β0 = 4.427; β1 = 22.109; f_min = 287 veh/h. With these values, models (3)-(4) become:
L_eq(f) = 4.427 + 22.109 · log10(287) + [22.109 / (287 · ln 10)] · (f − 287), if f ≤ 287 (5)
L_eq(f) = 4.427 + 22.109 · log10(f), if f > 287 (6)
The value of the coefficient of determination, R², is significantly high (0.878); Figure 3 reports the comparison between measured and estimated data and the comparison between the model curve and the experimental data. This model improves on model (2), for which R² was equal to 0.847. The RMSE value also improved, decreasing from 2.48 to 2.22.
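The behavior of the calibrated models can be reproduced with the short script below. The printed values are my own evaluation of formulas (2) and (5)-(6) as written above, not a copy of Table 2.

```python
# Evaluation of the calibrated piecewise model (beta0 = 4.427, beta1 = 22.109,
# f_min = 287 veh/h) against the purely logarithmic model (2).
import math


def leq_piecewise(f, beta0=4.427, beta1=22.109, f_min=287.0):
    if f > f_min:
        return beta0 + beta1 * math.log10(f)
    slope = beta1 / (math.log(10.0) * f_min)  # derivative of the log branch at f_min
    return beta0 + beta1 * math.log10(f_min) + slope * (f - f_min)


def leq_log_only(f):
    return 17.594 + 17.377 * math.log10(f) if f > 0 else float("-inf")


for f in (0, 1, 10, 50, 287, 1000):
    print(f, round(leq_piecewise(f), 1), round(leq_log_only(f), 1))
# The piecewise model stays finite at f = 0 (about 49 dB(A) with these coefficients),
# while the logarithmic form diverges to minus infinity.
```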
It is interesting to compare the results obtained between the two models for low traffic flow values. It is noted that the proposed model can provide plausible results in all flow ranges, while the logarithmic model provides acceptable values only for sufficiently high flow values (see Table 2 and Figure 4).
Moreover, the proposed model was also compared with the model proposed in [54], which has been widely used:
L_eq(f) = 55.5 + 10.2 · log10(f) + 0.3 · P − 19.3 · log10(d) (7)
where P is the percentage of heavy vehicles and d is the distance from the source.
For the comparison, we assume a percentage of heavy vehicles of 10% and a distance from the source equal to 3.81 m.
It should be noted that model (7) was not calibrated for the case study, so it is possible to compare only the trend of the function, which is similar to that of model (2); model (7) overestimates the noise levels for the case study examined, but, from the trend of the function, it presents the same problems as model (2).
Conclusions
Noise pollution is one of the main external impacts of the road transport system. Numerous models have been proposed in the literature, some more general, others more specific, for its estimation. In almost all models, equivalent noise levels are assumed to be a function of the logarithm of traffic flows; this theoretically correct hypothesis does not, however, allow the use of such models for very low or zero flows.
On the other hand, in practice, in particular for urban roads, there is always background noise to which the noise emitted by vehicular traffic is added. The approach proposed in this paper, based on the use of a piecewise-defined function to be calibrated for the specific context, makes it possible to use the calibrated model even on roads with very low or no traffic.
Comparison with a logarithmic model calibrated with the same data shows the robustness of the proposed approach.
In the literature, the need to estimate noise emissions at low traffic values has always been considered unimportant, because noise pollution is usually a real problem in case of high traffic flows. Often, models have been calibrated and used only on high traffic roads and at peak times. The need to have a model that is also valid for low (or no) traffic flows occurs when we want to examine, in simulation, the situation of an overall city network, also before and after interventions on traffic schemes. In these cases, it may occur that on some road sections, the level of traffic is very low or zero, even for the approximations of the model. The proposed approach makes it possible to use a unique model to estimate all traffic conditions and, consequently, to compare the intervention scenarios.
The proposed approach can be used to calibrate specific models in other contexts; the calibrated models (5)-(6), on the other hand, have uses limited to the case study or very similar situations.
The research prospects can be directed to test the proposed approach on other urban contexts and to generalise the model based on this type of function so that it can also be used in contexts other than the one in which it has been calibrated.
Funding: This research received no external funding. | 3,183.6 | 2020-07-29T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Pharmacological Properties of Vochysia Haenkeana (Vochysiaceae) Extract to Neutralize the Neuromuscular Blockade Induced by Bothropstoxin-I (Lys49 Phospholipase A2) Myotoxin
Purpose: Bothrops snakes are responsible for more than 70 % of snakebites every year in Brazil and their venoms cause severe local and systemic damages. The pharmacological properties of medicinal plants have been widely investigated in order to discover new alternative treatments for different classes of diseases including neglected tropical diseases as envenomation by snakebites. In this work, we have investigated the ability of Vochysia haenkeana stem barks extract (VhE) to neutralize the neuromuscular effects caused by Bothropstoxin-I (BthTX-I), the major phospholipase A2 (PLA2) myotoxin from B. jararacussu venom. Methods: The biological compounds of VhE were analysed under thin layer chromatography (TLC) and its neutralizing ability against BthTX-I was assessed through twitch-tension recordings and histological analysis in mouse phrenic nerve-diaphragm (PND) preparations. The antimicrobial activity of VhE was assessed against S. aureus, E. coli and P. aeruginosa strains. The aggregation activity of VhE was analysed under protein precipitation assay. Results: VhE showed the presence of phenolic compound visualized by blue trace under TLC. VhE abolished the neuromuscular blockade caused by BthTX-I applying the pre-toxin incubation treatment and partially neutralized the BthTX-I action under post-toxin incubation treatment; VhE contributed slightly to decrease the myotoxicity induced by BthTX-I. The neutralizing mechanism of VhE may be related to protein aggregation. VhE showed no antimicrobial activity. Conclusion: V. haenkeana extract which has no antimicrobial activity exhibited neutralizing ability against the neuromuscular blockade caused by BthTX-I and also contributed to decrease its myotoxicity. Protein aggregation involving phenolic compounds may be related in these protective effects.
Introduction
In Brazil, Bothrops snakes comprise more than 30 species distributed throughout the country and are responsible for approximately 70 % of snakebites every year; the World Health Organization (WHO) has considered snakebite a neglected tropical disease because of the large number of cases and the difficulty of accessing antivenom therapy in specific regions. [1][2][3][4][5] Bothrops venoms induce severe local and systemic damage owing to their high enzymatic activity, mediated basically by proteases and phospholipases A2 (PLA2). 6,7 In envenomation by Bothrops venoms, the local effects are marked by intense necrosis accompanied by edema, ecchymosis and acute inflammatory activity. In addition, local infections caused by gram-negative anaerobic bacteria derived from the oral flora of snakes are considered an important clinical complication in victims of snakebites. 4,8 Snakebites are conventionally treated through antivenom serum therapy; however, based on popular practices, the pharmacological properties of medicinal plants have been widely investigated in order to discover new alternative treatments for different classes of diseases, including neglected tropical diseases such as envenomation by snakebites. 9 Recent investigations have shown that plant extracts exhibit antimicrobial and antiophidian activities. [10][11][12] The Bothrops jararacussu snake, popularly known as Jararacuçu, is widely distributed in the Southeast region of Brazil 6 and its venom is composed of enzymatic and non-enzymatic proteins, carbohydrates, peptides, lipids, biogenic amines and inorganic components, similarly to other Bothrops venoms. 13 Bothropstoxin-I (BthTX-I) is a non-enzymatic Lys49 PLA2 isolated from B. jararacussu venom which induces irreversible neuromuscular blockade in vertebrate neuromuscular preparations in vitro, characterized by intense myonecrosis, increased creatine kinase release, muscle contracture and membrane depolarization. [14][15][16][17] Vochysia haenkeana (Vochysiaceae), popularly known in Brazil as "escorrega-macaco", "pau-amarelo" and/or "cambarazinho", comes from the semideciduous broadleaf forest common in the Brazilian states of Mato Grosso do Sul, Mato Grosso and Goiás; V. haenkeana exhibits a great variety of secondary metabolites such as tannins, saponins, phenolic compounds, flavonoids and coumarins, and it has been cited in ethnobotanical studies related to the treatment of respiratory diseases. [18][19][20] However, its pharmacological activities have been poorly investigated. 21 In this work, we have assessed the neutralizing ability of V. haenkeana hydroalcoholic extract against the main PLA2 myotoxin (BthTX-I) from B. jararacussu venom in mouse nerve-diaphragm preparations, as well as its antimicrobial activity against Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa strains.
Reagents and BthTX-I
All salts for the physiological solution were of analytical grade. BthTX-I was provided by Dra Adélia Cristina Oliveira Cintra from São Paulo University (USP, Ribeirão Preto, SP, Brazil).
Animals
Male Swiss mice (25-30 g) obtained from Multidisciplinary Center for Biological Investigation (CEMIB/Unicamp) were housed at a maximum of 10 mice per cage at 23 °C on a 12 h light/dark cycle. The animals had free access to food and water ad libitum.
Plant material
The hydroalcoholic extract from stem barks of V. haenkeana was provided by Dr Márcio Galdino dos Santos from Tocantins Federal University (UFT, Palmas, TO, Brazil). The full description about the origin of the extract has been shown elsewhere. 18 The plant exsiccate was deposited in the Herbarium of the Tocantins Federal University (UFT, Porto Nacional, TO, Brazil) as voucher specimen #10.074 by Solange de Fátima Lolis according to the International Code of Botanical Nomenclature (ICBN).
Solubilisation of V. haenkeana extract
In order to find the ideal solvent for V. haenkeana extract (VhE) that would not affect the basal twitch responses recorded in mammalian nerve-muscle preparations, polyethylene glycol 400 (PEG 400, Synth®), dimethyl sulfoxide (DMSO, Sigma®) and ethanol (Synth®) were tested. Ethanol (30 µL) proved to be the best solvent to solubilize VhE; ethanol 70 % did not change the twitch responses in control experiments recorded from mouse phrenic nerve-diaphragm (PND) preparations. 22
Thin layer chromatography (TLC)
For TLC, aluminium plates coated with silica gel 60 (0.20 mm thick) containing the fluorescent indicator UV254 (Macherey-Nagel GmbH & Co., Bethlehem, PA, USA) were used, together with phytochemical standards in methanol (1 %) (Sigma-Aldrich Co., St. Louis, MO, USA) including quercetin (1), rutin (2), caffeic acid (3), tannic acid (4), coumarin (5), gallic acid (6) and VhE (7); an empty lane was purposely maintained for observing the solvent front (8). The solvent system (mobile phase, 10 mL) consisted of ethyl acetate, formic acid, acetic acid and water (100:11:11:27), as described elsewhere. 23 The chromatograms were initially stained with diphenylboric acid 2-aminoethyl ester solution (5 % in ethanol) (Sigma-Aldrich Co., St. Louis, MO, USA) followed by polyethylene glycol 4000 solution (5 % in ethanol) (Sigma-Aldrich Co., St. Louis, MO, USA), with visualization under UV light at 360 nm. The retention factor (Rf) and sample colours were visually compared to the phytochemical standards.
Antimicrobial activity assay
The antimicrobial assays followed the recommendations from the Clinical and Laboratory Standards Institute (CLSI), protocol M07-A9. Briefly, four to five colonies were harvested from pure cultures growing on TSA and were used to prepare a bacterial inoculum. Colonies were transferred into tubes containing 5 mL of TSB and cultured at 37 °C until reaching a turbidity equivalent to 0.1 at 660 nm (approximately 1.5 x 10^8 CFU/mL). Two-fold dilutions of the VhE (from 1000 to 0.4 µg/mL) were made in 96-well plates with 100 µL of Mueller Hinton Broth (MHB; Difco) per well. Then, the bacterial suspension (100 μL) was inoculated, and the plates were incubated for 24 h at 37 °C. The lowest concentration at which no visible bacterial growth occurred was taken as the minimum inhibitory concentration (MIC). In addition, bacterial growth was assessed by optical density measurement (660 nm).
Mouse phrenic nerve-diaphragm (PND) preparation
PND preparations were obtained from male Swiss mice killed with isoflurane (Fortvale®, Vinhedo, SP, Brazil). The preparations were mounted under a resting tension of 5 g in a 5 mL organ bath containing aerated (95 % O2/5 % CO2) Tyrode solution (composition, in mM: NaCl 137, KCl 2.7, CaCl2 1.8, MgCl2 0.49, NaH2PO4 0.42, NaHCO3 11.9 and glucose 11.1, pH 7.0) at 37 °C, as described elsewhere. 24 The preparations were stimulated indirectly (5-7 V, 0.1 Hz, 0.2 ms) with supramaximal stimuli delivered from a stimulator (Model ESF-15D, Ribeirão Preto, SP, Brazil) via bipolar electrodes positioned on the nerve. Muscle twitches were recorded using a force-displacement transducer coupled to a two-channel Gemini recorder (both from Ugo Basile®, Varese, Italy). After stabilization for 20 min, VhE and BthTX-I (a single concentration per experiment) were added to the preparations and left in contact for 120 min or until complete blockade. For the neutralization assays, we applied two types of protocols, as suggested elsewhere: 25 1) pre-toxin incubation (BthTX-I was incubated with VhE for 30 min before the twitch-tension experiments) and 2) post-toxin incubation (VhE was added to the recording chamber 10 min after BthTX-I).
Protein precipitation measurement
The protein precipitation induced by VhE was evaluated as described elsewhere. 26,27 Albumin and BthTX-I (10 µg) were incubated separately with VhE (150 µg) following two protocols: 1) pre-toxin incubation with VhE: BthTX-I (50 µg/mL) was incubated with VhE (0.4 mg/mL) for 30 min at room temperature (23-25 °C) and then for 60 min at 37 °C; 2) without pre-toxin incubation with VhE: BthTX-I (50 µg/mL) was incubated with VhE (0.4 mg/mL) directly for 60 min at 37 °C. In both protocols, the mixture was centrifuged at 5,000 rpm for 15 min and the protein concentration in the supernatant was measured essentially as described elsewhere. 28 The absorbance obtained was compared with that of tubes containing only protein (albumin or BthTX-I) to assess the percentage of protein precipitation. Ethanol (the solvent for VhE) was tested alone to verify its influence on protein precipitation. Tubes containing the same amount of VhE or ethanol, in the absence of albumin or BthTX-I, were used as blanks.
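As a worked illustration of the calculation described above, a minimal sketch assuming hypothetical absorbance values; the actual protein quantification followed the method cited above.

```python
# Hypothetical sketch of the protein-precipitation calculation: the protein left
# in the supernatant is estimated from blank-corrected absorbance, and the
# percentage precipitated is taken relative to the protein-only control.

def percent_precipitated(abs_mixture, abs_protein_only, abs_blank):
    """Percent of protein removed from the supernatant after incubation with VhE."""
    remaining = max(abs_mixture - abs_blank, 0.0)
    control = abs_protein_only - abs_blank
    return 100.0 * (1.0 - remaining / control)

# Made-up absorbances broadly consistent with the reported ~70 % precipitation.
print(round(percent_precipitated(abs_mixture=0.16, abs_protein_only=0.42, abs_blank=0.05), 1))
```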
Quantitative histological study
The preparations from the pre-incubation and post-toxin assays were analysed by a quantitative morphometric method and compared to Tyrode control, VhE and BthTX-I. At the end of each experiment (after 120 min), three preparations from each group were fixed in a 10 % formalin solution and processed by routine morphological techniques. Cross-sections (5 µm thick) of diaphragm muscle were stained with 0.5 % (w/v) hematoxylin-eosin for microscopic examination. Tissue damage (edema, intense myonecrosis characterized by atrophy of the muscle fibers, hyaline aspect, sarcolemmal disruption and lysis of the myofibrils) was assessed by three trained observers and expressed as a myotoxicity index (MI), i.e., the number of damaged muscle cells divided by the total number of cells, as a percentage, in three non-overlapping, non-adjacent areas of each preparation. 29
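A minimal sketch of the myotoxicity index arithmetic, with invented cell counts.

```python
# Hypothetical sketch of the myotoxicity index (MI): damaged fibres divided by the
# total number of fibres counted across three non-overlapping, non-adjacent areas.

def myotoxicity_index(counts):
    """counts: list of (damaged, total) tuples, one per counted area."""
    damaged = sum(d for d, _ in counts)
    total = sum(t for _, t in counts)
    return 100.0 * damaged / total

print(round(myotoxicity_index([(12, 80), (9, 75), (15, 90)]), 1))  # made-up counts
```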
Statistical analysis
Changes in the twitch-tension responses of PND preparations were expressed as a percentage relative to baseline (time zero) values, and morphological alterations were measured using the myotoxicity index (MI, in %). The results were expressed as mean ± SEM, and statistical comparisons were made using Student's t-test, with p < 0.05 indicating significance. All data analyses were carried out using Microcal Origin 8 SR4 v.8.0951 (Microcal Software Inc., Northampton, MA, USA).
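A minimal sketch, using SciPy, of the baseline normalization and Student's t-test described above; the values are invented, and the original analysis was performed in Origin.

```python
import numpy as np
from scipy import stats

def percent_of_baseline(trace):
    """Express a twitch-tension trace as a percentage of its time-zero value."""
    trace = np.asarray(trace, dtype=float)
    return 100.0 * trace / trace[0]

# Made-up single trace (g of tension sampled over time) just to show normalization.
print(percent_of_baseline([5.0, 4.9, 4.5, 2.8])[-1])  # -> 56.0 % of baseline

# Made-up 120-min end-point values (% of baseline) for two groups of n = 4,
# compared with an unpaired Student's t-test as in the original analysis.
control = [98.0, 101.5, 97.2, 99.8]
treated = [55.0, 61.3, 58.9, 52.4]
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```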
Results and Discussion
Based on popular medicinal practices, stem barks from Vochysia haenkeana (a species of the Vochysiaceae family widely distributed in the semideciduous broadleaf forests of the west-central region of Brazil) were selected in this work to be investigated for their ability to neutralize the effects induced by snake venoms and for antimicrobial activity. V. haenkeana is popularly known as "escorrega-macaco" and has a long, stout trunk and smooth bark. V. haenkeana contains secondary metabolites such as tannins, saponins, phenolic compounds, flavonoids and coumarins. 21 Here, a sample of V. haenkeana extract (VhE) was subjected to thin layer chromatography, which revealed the presence of phenolic compounds characterized by a blue spot not matching caffeic, tannic or gallic acids; a weak yellow spot (indicated by an arrow) can also be seen in the chromatogram profile, suggesting the presence of flavonoids not related to quercetin or rutin (Figure 1). We did not observe coumarin in VhE under the TLC system used; our data were therefore inconclusive regarding the involvement of this compound, possibly present in VhE, in the treatment of respiratory diseases, as suggested by popular practices. 19,20 However, it has already been shown that commercial coumarin alone does not protect against the neuromuscular blockade induced by B. jararacussu or Crotalus durissus terrificus venoms in vitro. 30 Vochysia haenkeana has been poorly studied, and its pharmacological properties are still largely unknown and need to be further explored. It has been shown that VhE exhibits antitumoral activity in rats with induced Ehrlich tumor. 18 In addition, our data show that VhE does not exhibit antimicrobial action against S. aureus, E. coli and P. aeruginosa strains. Investigations of the antimicrobial activities of plant extracts, in association with the local effects induced by snake venoms, are potentially useful for indicating alternative methods to treat snakebites, since infections caused by bacteria from the snake's mouth are often associated with snakebites. 32 Before subjecting VhE to neutralization assays in PND preparations, we verified whether the extract by itself could cause changes in twitch responses; a concentration-response experiment was carried out using 0.2, 0.4 and 0.8 mg of VhE/mL. The concentrations of 0.2 and 0.4 mg/mL did not cause changes in PND preparations during 120 min of incubation (p > 0.05), whereas 0.8 mg of VhE/mL induced a slight decrease in twitch tension from 50 min of incubation (p < 0.05 compared to Tyrode solution alone). We therefore selected the VhE concentration of 0.4 mg/mL for the protocols addressing neutralization of the BthTX-I-induced neuromuscular blockade (50 µg/mL, a concentration sufficient to produce complete neuromuscular blockade in PND preparations) (Figure 2). BthTX-I induces irreversible neuromuscular blockade and myonecrosis in vitro, similar to the effects caused by the crude venom of Bothrops jararacussu, from which it is derived. 16,17 The ability of this toxin to reproduce the neuromuscular effects seen with the crude venom makes it the main myotoxin of B. jararacussu venom. We applied two different protocols to assess the neutralizing ability of VhE: pre-toxin incubation (BthTX-I was incubated with VhE for 30 min prior to the experiments in PND preparations) and post-toxin incubation (VhE was added into the recording chamber 10 min after toxin addition).
Under pre-toxin incubation, VhE completely neutralized the neuromuscular blockade caused by BthTX-I; on the other hand, VhE was not able to prevent the blockade by BthTX-I when added after the toxin, although a slight attenuation was noticed over 120 min (Figure 3). Oshima-Franco et al. 15 studied the presynaptic nature of BthTX-I. Thus, the protection observed in the pre-toxin experiments may be related to the ability of VhE to prevent the presynaptic action of the myotoxin, which, once triggered, VhE was unable to reverse (post-toxin experiments).
Table 1 (fragment) Post-toxin incubation: 14.8 ± 11.
The neutralizing ability of VhE against the neuromuscular blockade caused by BthTX-I in vitro may be related to its capacity to induce protein aggregation, as seen in our protein precipitation assay (Figure 4). Incubation with VhE promoted the precipitation of BthTX-I, reducing the toxin concentration by 70.5 ± 7.1 %. Incubation of VhE with albumin was also evaluated as a positive control, showing 79.4 ± 6.7 % precipitation. Ethanol (the solvent for VhE) alone was assessed in the protein precipitation assay as a negative control in order to rule out its influence on the protein aggregation seen with VhE; it did not promote aggregation of either albumin or BthTX-I. The pharmacological action by which plant extracts neutralize the neuromuscular activity of snake venoms and their toxins in vitro remains largely unknown, but it is frequently associated with protein precipitation, proteolytic degradation, enzyme inactivation, metal chelation and antioxidant action. 30 Flavonoids and tannins are the main components related to those activities, and both were previously observed in VhE. 21,27,34 Although we found no evidence for tannic acid in this extract, which has been suggested by Melo et al. 30 to be responsible for the antiophidian mechanism of plant extracts, the protein precipitation promoted by VhE may be related to its neutralizing effect against BthTX-I.
Conclusion
V. haenkeana stem bark extract abolished the neuromuscular blockade caused by BthTX-I under the pre-toxin incubation treatment; however, it was not effective in neutralizing the toxin-induced neuromuscular blockade under the post-toxin treatment. In both treatments, VhE contributed to decreasing the myotoxicity caused by the toxin. These effects may be related to protein aggregation involving phenolic compounds, which represent the major constituents of VhE, as shown by TLC. VhE showed no antimicrobial activity against S. aureus, E. coli and P. aeruginosa strains.
| 3,722.6 | 2017-09-01T00:00:00.000 | ["Biology", "Medicine"] |
TECHNOLOGIES OF STRATEGIC MANAGEMENT OF INDUSTRIAL ENTERPRISES
2019. Vol. 19, No. 1, pp. 131–138
Introduction
Generating, forming and developing ideas of strategic management of industrial enterprises are interpreted in different ways in the scientific literature [1][2][3][4][5][6][7][8][9][10]. Nevertheless, the majority of foreign approaches, concepts [11][12][13][14][15][16] and schools of strategic management [17] do not go beyond managerial developments described at the verbal level when elaborating particular ideas of Western management. The technological and technical aspects of the performance of industrial enterprises remain outside the scope of the mentioned concepts and schools [18,19]. However, it is extremely difficult to ensure the effective functioning of production companies and corporations without taking them into account [20,21]. In turn, most domestic authors present their views and recommendations on strategic management of industrial enterprises in almost full compliance with the approaches and schools of Western strategic management [22][23][24]. As a result, the strategic management of modern industrial enterprises operating in conditions of global instability [24-28] needs a new technology focused on the national conditions of business. It should be added that this situation is largely explained by the fact that the practice of national strategic management is still not well established and the regulatory base of corporate management has not yet been shaped [29]. Consequently, there is no tried and tested ideology for devising a corporate strategy, nor is there an analysis of the long-term dynamics of its realization. Since the developments and recommendations of both foreign and domestic scientists in the field of corporate management are of little direct use to Russian enterprises and corporations, before applying these recipes of production management the owners of companies, together with top managers, must each time justify the introduction of particular measures from the arsenal of corporate strategy development already widely used in other countries.
The origin, formation and development of the ideas of strategic management of industrial enterprises are often based on the works of Western management scientists, who do not take into account the technological and technical aspects of companies. This situation leads to low management efficiency of domestic enterprises in modern conditions, which are characterized by a high degree of instability. An urgent task is therefore to form domestic management practices that take into account the conditions in which Russian companies have to compete.
In this article, an attempt is made to devise a strategy for managing an industrial enterprise that takes into account the major aspects of its technical and technological development, based on a developed information and computer system. The criteria for choosing options of strategic development of industrial enterprises are also presented, and the positions that must be considered when developing a strategy for production companies and corporations are analyzed.
The analysis made it possible to present the main provisions for forming a model of management of an industrial enterprise: considering the enterprise as a materialized stream that converts raw materials into finished products of a certain range; treating enterprise management as two contours, operational and strategic, based on the methods and technologies of system, situational and quantitative analysis; automating the management functions of the enterprise comprehensively, so that all the main stages of decision-making are provided with high-quality, timely and complete information; and basing the integrated information system of the enterprise on interconnected operational and main data warehouses.
The complex of scientific statements presented in this article forms the scientific and practical basis for the formation of management strategies for a wide variety of industrial enterprises and organizations.
Devising technologies of strategic management of industrial enterprises
Contrary to a popular belief among Russian businessmen, arranging a system of strategic planning in a corporation is not a fad imported from the West but a vital necessity. The external environment changes so quickly that operational measures taken by top management to adapt the enterprise to new realities are no longer enough. In order not only to survive but also to strengthen its competitive positions in the market, the enterprise must engage in strategic planning at a professional level. The need has clearly become imminent to broaden the planning horizon, to coordinate short-, mid- and long-term development goals, and to create a kind of "bridge" between the prospective development targets of the enterprise and ongoing planning for the year.
In this regard, the head of any industrial enterprise should remember that, irrespective of the nature of its operations, the enterprise always passes through three stages of development [25]. At the competition stage, the industrial enterprise as a rule develops successfully, increasing the volume of products sold on world markets. Having reached the limit of competitiveness of its products, the industrial enterprise enters the second stage, expansion, during which it compensates for the declining quality of its products relative to competitors by the scale of its production and sales. But the limits of expansion are not boundless (they are set both by the capabilities of competitors and by the finiteness of the global market), and having exhausted them the enterprise inevitably approaches the third stage of its development, decline. The enterprise begins to lose its competitive advantages. It is unable to reduce the prime cost of its products or to carry out radical modernization of its production facilities, and the concepts, approaches and methods of management it uses cease to correspond to the accelerating dynamics of external influences.
We must acknowledge that the instability of the macroeconomic background and other external influences will almost certainly prevent enterprises from realizing their drawn-up strategic plans intact. However, updating strategic plans and even the strategic objectives of the enterprise is an entirely normal phenomenon, allowing the enterprise to properly compensate for dangerous impacts on its market position.
At the end of the last century, foreign (followed by domestic) ideologists of corporate strategy specified that the goal of a company is to develop a strategy that exploits the strengths of the company and the weaknesses of its competitors, while also being capable of neutralizing the weaknesses of the company and the strengths of its rivals. The ideal was seen as competing in a healthy and growing industry with a strategy whose cornerstone is an advantage that cannot be copied or neutralized by rivals. The choice of strategic options was to be carried out according to certain principles, the criteria of choice, which can be grouped as follows:
- development of scenarios (scenarios are developed proceeding from strategic uncertainty and from the opportunities and threats of the environment);
- the need for a sustainable competitive advantage;
- compliance with the organizational vision and purposes (the vision as an idea of the future state and the setting of the company's tasks have to encourage correct strategic decisions);
- feasibility of the strategy (the enterprise has to have the resources and competences, as well as the technical and technological basis, necessary for strategy implementation).
Strategies for the development of industrial enterprises can be created in various ways. Moreover, in different regions of the world such strategies can differ radically from one another.
Domestic industrial enterprises and corporations can take the following steps when deciding on strategic and tactical priorities: 1. To assess the administrative concepts and programs used in the company, giving preference to those which include aspects of its technical and technological development.
2. To carry out an analysis of the company structure and its personnel structure, so as to provide not only the solution of the tasks of operational management of the industrial enterprise but also its effective strategic development.
3. To improve the production logistics system of the industrial enterprise, including an effective system of material support for production, and the accounting and control of the movement of material resources through the production shops of the enterprise, up to the sale of finished goods and their delivery to customers.
4. To increase the adequacy of the operational management system to the current tasks of the enterprise and the requirements of the owners (however well grounded and worked through the accepted development strategy of the industrial enterprise may be, it cannot but rest on effective approaches, methods and models of operational management of the production activity of the company).
5. To improve the quality of project management and the project management system at the enterprise, and to estimate the adequacy of the production technologies used by the company and the need to switch to more modern industrial methods.
6. To analyse the information and computer infrastructure of the company and the ACS, PCS and other systems used at the enterprise. The most essential part of the strategic development of an industrial enterprise is creating a modern information and computer infrastructure based on automated control systems for all fields of activity of the industrial enterprise and for the processes of its production and economic activity, adequate to the increased requirements of users and to the level at which business processes are conducted.
7. To analyse the dynamics of the technical and economic operating indicators of the industrial enterprise or corporation.
8. To assess the structure and quality of the mathematical methods and models used in managing the industrial enterprise. To work out the structure and content of the relevant mechanisms and models of intelligent management. To improve the system of preparing and taking administrative decisions and the level of their validity.
The main shareholders and heads of industrial enterprises who wish to achieve high efficiency in business-management decisions have to encourage, in every possible way, a system of preparing and taking administrative decisions at the industrial enterprise or corporation that provides the most productive realization of various tasks while achieving the main objectives of the company at both the executive and the strategic level.
At the same time it is necessary to remember that the process of technological renewal at industrial enterprises is costly. The main shareholders and heads of industrial enterprises have to be able to reasonably evaluate proposals for projects of technological renewal of the production capacities available at the enterprise under conditions of political, economic and social instability, as well as increasing uncertainty about whether there will be customers ready to buy products of higher quality at a higher price or whether, on the contrary, it is necessary to focus on lower-quality products at lower prices. In conditions of decreased production worldwide and owners' lack of considerable financial resources for switching industrial enterprises to the latest industrial technologies, it is far from obvious that such a switch will become economically justified in the near future. It is quite possible that, where there is no confidence that new technologies will certainly deliver good results in the fight against competitors, it makes sense for the people in charge of decisions on the strategic imperatives of the company to play for time. Only when it becomes clear that investments in new technologies are absolutely necessary for an industrial enterprise to survive and to secure its competitiveness should such expenditures be made.
It is worth remembering that using overseas equipment, computer facilities and software made abroad as part of the information infrastructure of an industrial enterprise does not guarantee that the enterprise will be able to carry out its production tasks, or that its core, the automated information system, will not fail if the military-political situation in the world deteriorates.
In the analysis of the production and commercial performance of the industrial enterprise, it is very important to establish whether the developing dynamics of its technical and economic indicators comply (or fail to comply) with the other strategic imperatives of the enterprise's development.
When devising a development strategy for an industrial enterprise or corporation, it is very important to consider that, if the developing dynamics do not correspond to the intentions and plans of the owners and heads of the enterprise, the whole set of the specified strategic imperatives must be corrected. It has to provide positive dynamics of the technical and economic development indicators of the company in the long run.
In this regard, the mathematical models used in the various calculations must be such that the results obtained from them satisfy the needs of decision-makers at all levels of company management. If any of the models used creates an obstacle to solving tasks within the production chains or the administrative vertical, such a model has to be replaced with a more suitable one or undergo substantial adjustment at the level of logical-mathematical modelling.
Depending on the dynamics of change in environmental conditions and on the specific features of production activity in the company, the owners and top managers of industrial enterprises have to constantly improve the control system for all areas and directions of the corporation's production activity, at the same time both introducing new methods and mechanisms of intelligent management and improving the methods and models of preparing and taking administrative decisions.
On the basis of the listed positions, the main shareholders and heads of any industrial enterprise or corporation can estimate rather objectively the strategic capacity of the company and the prospects of its further development.
Any industrial enterprise has to be constantly concerned that the import substitution program is successfully realized, so that the production activity of the company is ensured regardless of the interests of foreign states.
It is also very important that there are no inefficiently working structural divisions in an industrial enterprise, company or holding.
Effective management of the development of an industrial enterprise has to be based on a modern model of management.
The modern model of management of an industrial enterprise is inconceivable without the complex automated system which includes all administrative, production, warehouse and other divisions possessing, in turn, their own (developed earlier) systems of automation (PCS, CAD, MICS, etc.).
The model of managing an industrial enterprise has to be based on the following basic provisions: 1. The enterprise is regarded as a materialized flow that converts inputs in the form of raw materials and accompanying materials into finished goods of a certain range.
2. Business management is divided into two contours: operational and strategic management which are based on the methods and technologies of the system, situational and quantitative analysis.
3. Automation of administrative functions of the enterprise has to be complex, providing all main stages of decision-making with qualitative, timely and sufficient information.
4. At the heart of a complex information system of the enterprise there have to be operational and major databases interconnected with each other.
The architecture of a corporate information system for managing an enterprise that is aligned with the times can be presented as a set of open subsystems interacting with each other, each of which is realized either as an off-the-shelf package or as an independently developed information subsystem of a particular type.
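As a purely illustrative sketch, the two management contours, the interconnected data warehouses and the subsystem architecture described above can be modelled as follows; all class and subsystem names are hypothetical and do not refer to any real system.

```python
# Hypothetical sketch of the management model described above: two control
# contours (operational and strategic) drawing on interconnected operational
# and main data warehouses, with the corporate information system represented
# as a set of open, interacting subsystems.
from dataclasses import dataclass, field

@dataclass
class DataWarehouse:
    name: str
    records: list = field(default_factory=list)

@dataclass
class Subsystem:
    name: str               # e.g. "PCS", "CAD", "MICS" (illustrative labels only)
    warehouse: DataWarehouse

    def report(self, item):
        """Push operational data into the shared warehouse."""
        self.warehouse.records.append((self.name, item))

class Enterprise:
    def __init__(self, subsystems, operational_store, main_store):
        self.subsystems = subsystems
        self.operational_store = operational_store
        self.main_store = main_store

    def operational_contour(self):
        """Short-horizon decisions based on current operational data."""
        return len(self.operational_store.records)

    def strategic_contour(self):
        """Long-horizon decisions based on aggregated (main) data."""
        self.main_store.records.extend(self.operational_store.records)
        return len(self.main_store.records)

ops = DataWarehouse("operational")
main = DataWarehouse("main")
plant = Enterprise([Subsystem("PCS", ops), Subsystem("MICS", ops)], ops, main)
plant.subsystems[0].report("batch 42 completed")
print(plant.operational_contour(), plant.strategic_contour())
```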
Conclusion
The complex of scientific provisions presented in this article forms a scientific and practical basis for devising a strategy of managing various industrial enterprises and organizations.
| 3,609.2 | 2019-01-01T00:00:00.000 | ["Business", "Computer Science"] |
Creating reference gene annotation for the mouse C57BL/6J genome assembly
Annotation of the reference genome of the C57BL/6J mouse has been an ongoing project ever since the draft genome was first published. Initially, the principal focus was on the identification of all protein-coding genes, although today the importance of describing long non-coding RNAs, small RNAs, and pseudogenes is recognized. Here, we describe the progress of the GENCODE mouse annotation project, which combines manual annotation from the HAVANA group with Ensembl computational annotation, alongside experimental and in silico validation pipelines from other members of the consortium. We discuss the more recent incorporation of next-generation sequencing datasets into this workflow, including the usage of mass-spectrometry data to potentially identify novel protein-coding genes. Finally, we outline how the C57BL/6J genebuild can be used to gain insights into the variant sites that distinguish different mouse strains and species. Electronic supplementary material The online version of this article (doi:10.1007/s00335-015-9583-x) contains supplementary material, which is available to authorized users.
The fundamentals of gene annotation
The value of the mouse genome as a resource largely depends on the quality of the accompanying gene annotation. In this context, 'annotation' is defined as the process of identifying and describing gene structures. However, in the 21st century, genes are increasingly regarded as collections of distinct transcripts, generated most obviously by alternative splicing, that can have biologically distinct roles (Gerstein et al. 2007). The process of 'gene' annotation is therefore perhaps more accurately understood as that of 'transcript' annotation (with separate consideration being given to pseudogene annotation). The information held in such models can be divided into two categories. Firstly, the model will contain the coordinates of the transcript structure, i.e., the coordinates of exon/intron architecture and splice sites, as well as the transcript start site (TSS) and polyadenylation site (if known; see ''The incorporation of next-generation sequencing technologies into mouse annotation'' section). Secondly, for a transcript model to have value, it must also contain some level of 'functional' annotation (Mudge et al. 2013); for example, a model may contain the location of a translated region (coding sequence; CDS), alongside flanking untranslated regions (UTRs). However, our understanding of the mammalian transcriptome has evolved rapidly since the genome-sequencing era began. For example, the classical tRNA and rRNA families of small RNA (smRNA) are being joined by an ever increasing number of novel categories, including miRNAs, snoRNAs, and piRNAs (Morris and Mattick 2014). Of particular interest is the discovery of thousands of long non-coding RNA (lncRNA) loci in mammalian genomes, with much of the pioneering work having been done in mouse (Carninci et al. 2005). LncRNAs, typically defined as non-coding, non-pseudogenic transcripts larger than 200 bp, have been generally linked to the control of gene expression pathways, although a single functional paradigm seems unlikely to be established (Marques and Ponting 2014; Morris and Mattick 2014; Vance and Ponting 2014). In addition, pseudogenes, commonly described as deactivated copies of existing protein-coding genes, have long been a target for annotation projects (Pruitt et al. 2014), and such loci can actually contribute to the transcriptome through their expression (Pei et al. 2012). Nonetheless, debate persists as to the proportion of the transcriptome that could be defined as spurious 'noise,' resulting from the essentially stochastic nature of transcription and splicing (Hangauer et al. 2013). Certainly, annotation projects are under increasing pressure to provide users access to the portion of the transcriptome that is truly 'functional' (Mudge et al. 2013). In recent years, this process has become empowered by the advent of next-generation technologies. For example, RNAseq can be used to identify novel transcripts and to provide insights into their functionality (Wang et al. 2009), while proteomics data may allow us to finally understand the true size of mammalian proteomes (Nesvizhskii 2014). Annotation, in short, remains a work in progress, and the major challenge for the future will be to maintain the utility of the reference gene data, while providing a set of models that are an increasingly true representation of the transcriptome as it exists in nature. Here, we provide an outline of how the GENCODE project is continuing to produce comprehensive gene annotation for the reference genome of Mus musculus.
Mouse GENCODE combines manual and computational annotation
The GENCODE project originated as an integral part of the human ENCODE project, where its remit was to identify all 'evidence-based' gene features found within the human genome (ENCODE Project Consortium et al. 2012; Harrow et al. 2012). More specifically, the goal of mouse GENCODE is the description of all non-redundant transcripts associated with protein-coding genes and non-coding RNAs (small and long), along with the identification of all pseudogenes. Eight institutes contribute to the project (see Acknowledgments), bringing together expertise in annotation, computational transcriptomics, experimental validation, comparative analysis, protein structure, and mass spectrometry (MS). Figure 1 summarizes the workflow of the GENCODE project. At the core of this process is manual gene annotation produced by the HAVANA group, whereby bespoke interactive software tools are used to create and appraise the alignments of a wide range of data sources, chiefly transcriptomics and proteomics data, against the genome sequence (Harrow et al. 2012; Harrow et al. 2014). To complement this work, Ensembl generates a mouse gene annotation set (henceforth 'genebuild') via an entirely computational process, although using similar evidence sources (Cunningham et al. 2015).
Both HAVANA and Ensembl thus build models onto the genome sequence, rather than onto transcript evidence. A disadvantage of this process is that any errors found in the genome sequence will be carried over as errors in the models. However, there are also significant reasons why genome annotation is desirable. In particular, the use of a genome scaffold for the alignment of transcriptional evidence allows for a wider variety of evidence sources to be used, including those that do not represent complete transcripts, e.g. expressed sequence tags (ESTs). Genome annotation is also better suited for the identification of pseudogenes (which may not be transcribed) (Pei et al. 2012), and can be advantageous for the interpretation of next-generation sequencing (NGS) data, as will be discussed in ''The incorporation of next-generation sequencing technologies into mouse annotation'' section. In fact, since HAVANA annotation is fully manual there is effectively no limit to the number of additional evidence sources that may be consulted. For example, publications based on single-locus laboratory studies often contain insights that cannot be accommodated into computational annotation pipelines, though can be effectively 'curated' by annotators. Critically, in-depth comparative annotation is also possible with the manual approach. This process, which essentially involves comparing the mouse genome and transcriptome against those of other species, has two major benefits. Firstly, the annotation of transcript features such as CDS can be performed (where required) with a higher degree of confidence by following the old argument that 'conservation equals function.' Secondly, HAVANA frequently annotates mouse models based on transcript evidence from other species-typically human or rat-when conservation is observed, thus providing additional models that seem likely to be functional.
The key stage in the creation of the GENCODE genebuild is the merging of the HAVANA and Ensembl datasets (Fig. 1). A new release is generated each time the Ensembl pipeline is re-run, approximately every three months (Harrow et al. 2012). In essence, this process merges transcripts from the two datasets that contain identical intron/exon boundaries, while maintaining models that are found in one set only. The logic behind the merge is that, while manual annotation has higher precision than computational annotation (Guigo et al. 2006), it is a much slower process. Ensembl annotation thus 'fills in the gaps,' covering genes and transcripts that have not yet been targeted by HAVANA. Prior to each merge process, the AnnoTrack software system is used by HAVANA to process and track both potential annotation errors and putative novel annotations suggested by the Ensembl genebuild or other GENCODE participants (Kokocinski et al. 2010). Finally, the Ensembl pipeline provides the annotation of smRNAs, based on datasets from RFAM (Griffiths-Jones et al. 2003) and miRBase (Griffiths-Jones et al. 2006). These sequences are queried against the genome with WU-BLAST, and models are constructed using the Infernal software suite (Eddy 2002). Table 1 provides summary information on the most recent mouse GENCODE release (M5), with equivalent information provided from human GENCODE v22 for comparison. A fully comprehensive summary of the mouse release is presented in Supplementary Table 1. While the HAVANA group has performed manual annotation on the entirety of the human reference genome, the same is not true for the mouse genome. As such, the proportion of mouse GENCODE models that are computationally derived is higher than for the human genebuild. For example, 1 % of protein-coding genes and 11 % of protein-coding transcripts in human are Ensembl-only, compared with 26 % and 20 % in mouse, respectively. It is this factor, rather than underlying biological differences between the two species, that is likely to explain the more obvious tallying divergences. Firstly, the HAVANA group has thus far allocated more resources to the annotation of lncRNAs and pseudogenes in human than in mouse (Pei et al. 2012). Conversely, mouse GENCODE currently has over 2000 more protein-coding genes than human. While this observation may actually have at least a partial biological explanation, most obviously that the mouse olfactory gene family is substantially larger than in human (Niimura and Nei 2005), we anticipate that a significant part of this excess represents either spurious computational CDS predictions on lncRNAs or loci that will be re-annotated as pseudogenes in due course.
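A minimal sketch of the merge principle described above, keying transcripts by their intron chains; the real GENCODE merge pipeline is considerably more involved, and the data structures here are invented for illustration.

```python
# Hypothetical sketch of merging two annotation sets by identical intron chains:
# transcripts sharing every intron boundary are treated as the same model, and
# models found in only one set are retained.

def intron_chain(transcript):
    """Key a transcript by (chrom, strand, ordered intron coordinates)."""
    chrom, strand, introns = transcript["chrom"], transcript["strand"], transcript["introns"]
    return (chrom, strand, tuple(sorted(introns)))

def merge(havana, ensembl):
    merged = {intron_chain(t): ("HAVANA", t) for t in havana}
    for t in ensembl:
        key = intron_chain(t)
        if key in merged:
            merged[key] = ("HAVANA+Ensembl", merged[key][1])  # identical structure
        else:
            merged[key] = ("Ensembl", t)                      # Ensembl-only model
    return merged

havana = [{"chrom": "chr11", "strand": "+", "introns": [(100, 200), (300, 400)]}]
ensembl = [
    {"chrom": "chr11", "strand": "+", "introns": [(100, 200), (300, 400)]},  # same chain
    {"chrom": "chr11", "strand": "+", "introns": [(100, 250), (300, 400)]},  # novel
]
for key, (source, _) in merge(havana, ensembl).items():
    print(source, key[2])
```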
Thus far, the HAVANA group has approached mouse annotation from a variety of directions. Initially, four chromosomes sequenced at the Wellcome Trust Sanger Institute (2, 4, 11, and X) were systematically annotated on a clone-by-clone basis during the assembly phase. Secondly, numerous genomic regions and gene families considered of particular interest to the wider community had their annotation prioritized, for example, the major histocompatibility complex on chr17 (unpublished), the Major Urinary Proteins gene cluster on chr4 (Mudge et al. 2008) and the large complement of immunoglobulin loci found at several sites across the genome (unpublished). The HAVANA group has also been involved in several collaborative projects over the years that have required annotation on a gene-by-gene basis. Examples include the consensus CDS project (CCDS), which produces a set of CDS that are agreed upon by HAVANA, Ensembl and RefSeq, and the European Conditional Knockout Mouse Consortium (EUCOMM), in which 1000 mouse protein-coding genes were annotated as part of the wider International Knockout Mouse Consortium (IKMC) to aid phenotype-based investigations into their function (Bradley et al. 2012). Currently, the HAVANA group is funded by the GENCODE consortium to resume systematic chromosome annotation. Efforts are largely focused on loci not already covered by the EUCOMM/IKMC or CCDS work, which are typically lncRNAs and pseudogenes. However, improvements to protein-coding genes are being made as required.
Fig. 1 (legend, continued) While the HAVANA and Ensembl workflows are largely based on the alignment of Sanger-sequenced cDNAs/ESTs and protein sequences against the genome, the gene-by-gene nature of HAVANA annotation allows further evidence sources to be incorporated. Important contributions are also made by other institutions that are part of the GENCODE project. Briefly, a subset of models are being subjected to experimental confirmation via RT-PCRseq and RACE-seq; in silico pseudogene models predicted using Pseudopipe (Zhang et al. 2006) and Retrofinder from UCSC are used to complement manual annotation, while the APPRIS database is used to provide inferences into the likely 'principal variant' of individual loci (Rodriguez et al. 2013). These contributions are monitored by HAVANA using the AnnoTrack software, which is also used to facilitate the identification and correction of putative annotation errors (Kokocinski et al. 2010).
GENCODE annotation is presented by default in the Ensembl genome browser (see Fig. 2a). In addition, the GENCODE webportal (www.gencodegenes.org) features an embedment of the Biodalliance genome visualization tool (Fig. 2b), allowing users to create their own integrated view of GENCODE transcript models alongside their own experimental datasets (Down et al. 2011). GENCODE can also be viewed in the UCSC genome browser (Rosenbloom et al. 2015) (Fig. 2c). In contrast, HAVANA manual annotation alone is represented in the specially designated VEGA genome browser. Since this annotation is continually updated, VEGA provides users with access to the most up-to-date HAVANA models prior to each GENCODE release. The genebuild itself can be obtained from the GENCODE webportal or from the Ensembl site (Cunningham et al. 2015). The GENCODE genebuild is currently available as a GTF file in two forms, 'Comprehensive' and 'Basic' (Harrow et al. 2012). While Comprehensive includes all GENCODE annotation, Basic contains only full-length coding transcripts (i.e., where initiation and termination codons are found) and transcripts annotated with one of the subcategories of lncRNA. One of the major advantages of the GENCODE genebuild is that it contains a sophisticated system of gene-level and transcript-level classifications, termed 'biotypes', as summarized in Supplementary Table 1. Essentially, gene-level classification separates protein-coding genes, lncRNAs, and pseudogenes, while the wider variety of transcript biotypes provides inferences into the functionality of individual models (Harrow et al. 2012). Of particular note, GENCODE (unlike other genebuilds) describes transcripts likely to be targeted for degradation by the nonsense-mediated decay surveillance pathway (NMD) (Mendell et al. 2004).
Table 1 (legend) A summary of mouse GENCODE annotation release M5, compared against human GENCODE v22. Images have been collated from the GENCODE webportal (www.gencodegenes.org), which is immediately updated for each new genebuild release. Only major annotation categories or summary counts are shown; for more detailed counts, e.g., relating to individual long non-coding RNA loci, transcribed versus non-transcribed pseudogenes, immunoglobulin/T cell receptor loci, etc., please consult Supplementary Table 1 or the webportal. Note that the difference in the number of protein-coding transcripts versus the total number of distinct translations within each genebuild is due to the existence of identical CDS on multiple models, typically resulting from alternative splicing within untranslated regions.
Fig. 2 (legend) Viewing mouse GENCODE annotation in the Ensembl, Biodalliance, and UCSC genome browsers. Mouse GENCODE is the default annotation in the Ensembl genome browser (a), while the GENCODE webportal contains an embedment of the Biodalliance genome visualization tool (b). Both browsers always feature the most up-to-date genebuild. GENCODE can also be viewed in the UCSC genome browser (c; the 'UCSC Genes' and 'RefSeq Genes' annotation tracks are shown for comparison), although a new release does not become immediately available; version M4 is displayed here. In each screenshot, the 'Comprehensive' GENCODE annotation is presented for the adjacent genes Cox11 and Tom1l1, thus showing all GENCODE models associated with these loci. The Ensembl and UCSC browser screenshots also display the Consensus Coding Sequence (CCDS) project models for these loci, colored green in both cases.
NMD models are manually annotated, and contain the CDS predicted to trigger this process. GENCODE biotypes can also be used to filter the genebuild; for example, a user may wish to discard all transcripts associated with protein-coding genes that are not themselves annotated as protein-coding. GENCODE also contains a number of fixed-vocabulary 'attributes' attached to particular genes or transcript models within the GTF file. Attributes fall into three categories, pertaining to splicing, translation, or transcriptional support, and provide additional insights into the annotation of a gene or transcript model. For a full list of GENCODE attributes, consult the release notes provided at the GENCODE webportal. Finally, the GTF file also reports whether a model has been created by the manual annotation process or is instead a computational prediction. Note that when HAVANA and Ensembl biotypes conflict during the merge of particular models, the HAVANA decision is given priority.
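A minimal sketch of biotype-based filtering of a GENCODE GTF, of the kind described above; the attribute keys follow the GENCODE GTF convention as we understand it, but should be checked against the release notes for the release in use, and the file name is illustrative only.

```python
# Hypothetical sketch: stream a GENCODE GTF and keep transcript lines whose gene
# and transcript biotypes are both 'protein_coding'.
import gzip
import re

def attributes(field):
    """Parse the key "value"; pairs of a GTF attribute column into a dict."""
    return dict(re.findall(r'(\S+) "([^"]*)"', field))

def coding_transcripts(path):
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as handle:
        for line in handle:
            if line.startswith("#"):
                continue
            cols = line.rstrip("\n").split("\t")
            if cols[2] != "transcript":
                continue
            attrs = attributes(cols[8])
            if attrs.get("gene_type") == "protein_coding" and \
               attrs.get("transcript_type") == "protein_coding":
                yield attrs.get("transcript_id"), attrs.get("gene_name")

# Example usage (the file name is illustrative only):
# for tid, gene in coding_transcripts("gencode.vM5.annotation.gtf.gz"):
#     print(tid, gene)
```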
The incorporation of next-generation sequencing technologies into mouse annotation
HAVANA and Ensembl annotation efforts on the mouse draft genome sequence began in 2000. For most of this first decade, annotation was almost entirely based on Sanger-sequenced transcriptomics data, i.e., all publicly available cDNAs/mRNAs and ESTs. In more recent years, next-generation technologies have transformed RNA sequencing (Robertson et al. 2010), and these datasets offer the potential to similarly transform the annotation process. Nonetheless, the nature of these datasets provides challenges for such endeavors, largely because (1) the amount of data produced in a typical NGS experiment is enormous, and (2) NGS reads (especially those produced by the first wave of sequencing platforms) are typically far shorter than the RNAs from which they are captured, complicating efforts to map these reads to the genome and to generate full-length transcript models (Steijger et al. 2013). It would be fair to say that the computational difficulties inherent in NGS data analysis continue to place limitations on the incorporation of these resources into annotation projects. Nonetheless, mouse GENCODE currently benefits from the inclusion of NGS data from a variety of sources. Most obviously, RNAseq can provide the core evidence of transcribed regions, including splice junctions, while CAGE (Cap Analysis of Gene Expression) and polyadenylation sequencing (polyAseq) allow the transcription start and end points, respectively, to be confirmed. The CAGE protocol specifically targets the 5′ capped region of RNA molecules, generating large datasets of short sequence tags that can be mapped to the genome and used to infer the locations of transcription start sites (TSS) (Shiraki et al. 2003; Takahashi et al. 2012). In particular, the FANTOM consortium has generated extensive, tissue-specific mouse CAGE libraries as part of the FANTOM5 project (Forrest et al. 2014) (see also de Hoon et al. in this issue). Analogously, the polyAseq protocol as used by Derti et al. targets the site of RNA molecules where the polyadenylation tail is added to the maturing transcript (Derti et al. 2012). As for CAGE, large numbers of short sequence reads are mapped onto the genome and extrapolated into polyadenylation sites.
From the outset, NGS sequencing data can benefit genebuilds both by identifying new genes and transcripts and by allowing improvements to be made to existing models. Such improvements may involve the completion of partial models, although NGS datasets can also provide significant insights into how (or indeed if) transcripts actually function. These inferences can then be passed on to users through the GENCODE functional annotation system, as described in the ''Mouse GENCODE combines manual and computational annotation'' section. The novel genes being added to mouse GENCODE as part of this work are almost entirely lncRNAs. Figure 3 illustrates the annotation of one such locus. In this example, RNAseq data processed by Ensembl and/or the Centre de Regulació Genòmica (CRG) highlighted potential novel introns within a HAVANA lncRNA, and this locus was subjected to manual re-annotation. Transcript A alone had been created initially, based on two cDNAs, while transcript B was annotated later based on the NGS datasets. The two introns of transcript B are supported by two RNAseq studies, although since these experiments used short RNAseq reads, a further level of manual interpretation was required to extrapolate complete transcript structures. Transcript B was considered a reasonable extrapolation because in one RNAseq dataset the two introns are found in spleen and thymus tissues. In contrast, the longer second intron found in the same set was not converted into annotation, as it is only weakly expressed in a single tissue and is not recapitulated in the ENCODE data. The manual process also allowed the lack of coding potential to be reappraised, the putative ORFs suggested in certain of the RNAseq models being rejected as spurious, and the gene was 'biotyped' as an antisense lncRNA. In contrast, Fig. 4 details the re-annotation of a mouse lncRNA into the protein-coding gene Naaladl2. In this example, a large protein-coding gene was not originally apparent due to a lack of cDNAs or ESTs in the region. In fact, a short, single-transcript gene was initially annotated in the region, biotyped as a non-coding model because a plausible CDS could not be identified. When reappraised, a comparative analysis made it clear that this transcript was partially orthologous to the human protein-coding gene NAALADL2, and that each of the additional human introns has support in mouse RNAseq libraries. This allowed for the construction of a conserved 795aa protein-coding transcript in mouse, while the original lncRNA transcript could now be seen to be a non-coding alternative transcript. In addition, it proved possible to extrapolate the true extent of the final exon by manually appraising RNAseq read coverage graphs alongside polyAseq data (Fig. 4b).
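A minimal sketch of the kind of evidence check discussed here and in the following paragraph: asking whether an annotated transcript start or end point lies close to a CAGE cluster or polyAseq site. Coordinates and the window size are invented, and manual annotation weighs far more evidence than this simple overlap test.

```python
# Hypothetical sketch: flag transcripts whose annotated 5' end falls within a
# fixed window of a CAGE cluster and whose 3' end falls near a polyAseq site.

def supported(position, sites, window=50):
    """True if `position` lies within `window` bp of any site on the same chrom/strand."""
    chrom, strand, pos = position
    return any(c == chrom and s == strand and abs(p - pos) <= window
               for c, s, p in sites)

cage_clusters = [("chr3", "+", 10_000)]          # made-up CAGE TSS cluster
polya_sites = [("chr3", "+", 55_200)]            # made-up polyAseq site

transcript = {"tss": ("chr3", "+", 10_020), "tes": ("chr3", "+", 54_000)}
print("TSS supported:", supported(transcript["tss"], cage_clusters))
print("3' end supported:", supported(transcript["tes"], polya_sites))  # False: 3' UTR may be too short
```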
Fig. 3 (legend) The NGS-supported annotation of a novel mouse lncRNA locus. Two HAVANA-annotated lncRNA transcripts (A, OTTMUST00000139812; B, OTTMUST00000020448) found within the same gene (OTTMUSG00000009012) are displayed. Model A was annotated initially, based on Sanger-sequenced transcriptomics data; model B was subsequently added based on the NGS data. These supporting evidence sets are displayed below as follows, from top to bottom: mouse ESTs; mouse cDNAs; introns supported by Illumina RNAseq data (i.e., split reads) obtained separately from David Adams at WTSI (red; ArrayExpress ID: E-MTAB-599) and ENCODE (purple), both processed by the Ensembl RNAseq pipeline; RNAseq models based on ENCODE data, separately constructed using the Ensembl and CRG RNAseq pipelines; polyAseq sites and filtered CAGE transcription start site regions predicted by Derti et al. (Gene Expression Omnibus ID: GSE30198) and FANTOM (DDBJ accession: DRA000991), respectively. The presence of CAGE and polyAseq data at the start and end points of transcript B confirms that the complete model has been annotated.
Fig. 4 (legend) The re-annotation of a protein-coding gene in mouse GENCODE. a Originally, a single mouse lncRNA transcript (OTTMUS00000129569; top of diagram) was annotated based on cDNA AK012899.1 (not shown), creating gene OTTMUS00000051138. During reappraisal, comparative annotation and RNAseq analysis showed that the locus is orthologous to the human protein-coding gene NAALADL2 (major transcript OTTHUMT00000347390 is shown), allowing for the generation of the novel mouse protein-coding transcript OTTMUS00000140064 and the reclassification of OTTMUS00000129569 as a non-coding transcript. Split-read-supported introns are shown in blue and green, from David Adams at WTSI (ArrayExpress ID: E-MTAB-599) and ENCODE, respectively, both processed by Ensembl. CAGE data from FANTOM5 (DDBJ accession: DRA000991) support the presence of a TSS for OTTMUS00000140064. b While ESTs and RNAseq models did not allow the final exon to be annotated with confidence, its structure could be resolved manually based on RNAseq read coverage graphs (three examples from the David Adams data are shown) alongside polyAseq data from Derti et al. (Gene Expression Omnibus ID: GSE30198). Several non-coding EST-based transcripts subsequently added as models at the 5′ end of the locus are not featured. The locus spans approximately 1 Mb of genomic sequence.
As noted in the ''Mouse GENCODE combines manual and computational annotation'' section, GENCODE contains a large number of incomplete models, and RNAseq data can now be used to 'complete' these. While the presence of partial models in GENCODE allows users to work with exons and splice junctions that may nonetheless be biologically important, one issue is that the functional annotation of such models tends to be more predictive. In fact, even when a model is based on a cDNA, it cannot be assumed that the sequence captured is full-length, i.e., contains the true TSS or endpoint. However, the observation of significant CAGE data at the beginning of other transcript evidence can be used to confirm that the TSS has been found, adding confidence to the subsequent functional annotation. For example, in Fig. 3, the presence of CAGE data at the start of model B indicates that exons are not missing at the 5′ end, ruling out the possibility that a 5′ extension to the model could uncover a legitimate CDS. Transcript endpoints can be identified with polyAseq tags, and these datasets actually suggest that the 3′ UTRs of human and mouse models in GENCODE are frequently too short. In fact, polyAseq data and regular RNAseq are readily combined during manual annotation to resolve the true extent of 3′ UTR sequences (Fig. 4b). Furthermore, using such data, HAVANA has been able to identify and reclassify dozens of transcripts that were incorrectly classified as lncRNAs when in fact they represented extended 3′ UTRs of upstream protein-coding genes (unpublished observation). Finally, note that HAVANA annotates polyadenylation features (both sites and regulatory signals)
While RNA sequencing methodologies are providing clear insights into the size of the transcriptome, the size of the proteome remains far harder to elucidate. For the most part this is because, while alternative splicing has the potential to generate large numbers of alternative proteinisoforms, a minority of alternative transcripts have had their functionality experimentally confirmed (Mudge et al. 2013). As such, a significant amount of the CDS annotation in GENCODE is considered 'putative.' The underlying problem is that it is far harder to obtain protein sequences than it is to obtain RNA or DNA sequences (Faulkner et al. 2015). However, from an annotation perspective at least, the situation is improving. Firstly, ribosome profiling (RP; also known as Ribo-seq or ribosome footprinting) provides a way around the difficulties in dealing with protein molecules by instead capturing and sequencing fragments of RNA that are bound to ribosomes (Ingolia et al. 2009;Ingolia et al. 2011;Lee et al. 2012). This technique can be modified to specifically map initiation codons, with obvious potential benefits to annotation pipelines. Nonetheless, it should be emphasized that RP maps sites of ribosome occupancy on RNA molecules; it does not obtain actual protein sequences, and debate about the correct way to interpret these data is ongoing (Ingolia 2014). At the present time, HAVANA only uses RP data to resolve situations where it is not obvious which initiation codon to use in a CDS.
Secondly, advances in MS have led to a significant increase in the number and quality of deduced peptide sequences becoming available to annotation projects (Yates 2013; Nesvizhskii 2014), leading to a similar expansion in the number of repositories holding such data (Perez-Riverol et al. 2014). While MS peptides can be used to validate existing CDS, the greater interest for annotation projects at the present time is in the discovery of novel CDS. In fact, a pair of recent publications claimed that there may be significant numbers of missing protein-coding genes in the human genome, based on MS-supported novel translations found in transcribed regions outside the current set of protein-coding genes (Kim et al. 2014; Wilhelm et al. 2014). However, the validity of these interpretations has been called into question (Ezkurdia et al. 2014). We believe that both the calling of peptide-spectrum matches (PSMs) and the mapping of these sequences back to the genome should be based on highly conservative parameters (Brosch et al. 2011). Furthermore, the interpretation of PSM-to-genome alignments should be subjected to manual scrutiny. In this way, we observe that PSMs that do not fall within known protein-coding genes are commonly associated with pseudogenes. Furthermore, PSMs within pseudogenes or lncRNAs frequently cannot be linked to canonical initiation codons upstream (unpublished observations). Essentially, HAVANA does not create protein-coding genes solely on the basis of MS data where either the evidence is equivocal or the biological interpretation is unclear. As a consequence, neither the mouse nor the human GENCODE genebuilds currently contain 'orphan' proteins, i.e., CDS that lack orthologs or paralogs in other species, where the only supporting evidence for translation is PSMs from MS experiments. However, orphan proteins could theoretically be added to these genebuilds in the future, provided this annotation is supported by rigorous functionality-based experimental studies.
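A minimal sketch of the conservative stance described above: a peptide-spectrum match is treated as support for a known protein only if it matches an annotated translation exactly, and anything else is set aside for manual scrutiny. The sequences are invented, and real pipelines additionally apply strict FDR thresholds, enzyme rules and genome mapping.

```python
# Hypothetical sketch: classify peptide-spectrum matches (PSMs) by exact substring
# match against annotated translations; peptides without a match are set aside
# for manual scrutiny rather than used to create new protein-coding genes.

annotated_proteins = {
    "Cox11": "MSVLRSGLLRLLRPALSQVQ",       # made-up sequence fragments
    "Tom1l1": "MDFLFGNPFSSPVGQRIEKAT",
}

def classify(peptide):
    hits = [name for name, seq in annotated_proteins.items() if peptide in seq]
    return ("known", hits) if hits else ("requires manual review", [])

for pep in ["LRSGLLRLLR", "QQQWWNPTT"]:
    print(pep, "->", classify(pep))
```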
New horizons: the annotation of other mouse strains
To date, mouse GENCODE annotation has focused on the reference genome of Mus musculus, strain C57BL/6J (Waterston et al. 2002). However, a major interest in mouse genomics is to identify differences both between distinct mouse species and between laboratory strains of the same species. Over the last decade, the HAVANA group has worked on a number of alternative mouse genomes as part of external collaborations. For example, candidate Insulin-dependent diabetes (Idd) regions on six chromosomes have also been annotated in one or more of the NOD/MrkTac, NOD/ShiLtJ, and 129 strains (Steward et al. 2013). Today, researchers have increasing access not just to regions of alternative mouse genomes, but to the entire genomes themselves (Yalcin et al. 2012). In particular, the Mouse Genomes Project is an ongoing effort to provide high-quality genome sequences for both classical laboratory strains and wild-derived inbred mice; see Adams et al. in this issue. While variant sites can be imputed from such alternative genomes and simply displayed against the reference mouse genome [for example, using the BioDalliance tool at the GENCODE webportal (Down et al. 2011)], the interpretation of such variation is made easier if alternative annotation models are also available. This is especially true when considering structural variation, which has been a focus of comparisons between mouse genomes (Yalcin et al. 2011; Keane et al. 2014). Annotation projects are particularly interested in large-scale structural variation, as this phenomenon is often linked to changes in gene copy number; such events may be of interest to both medical and evolutionary biologists (Bailey and Eichler 2006; Chain and Feulner 2014). In our experience, manual annotation is highly desirable for such complex regions; computational analysis pipelines may fail to interpret the correct evidence for a particular gene copy, especially where several genes have highly similar sequences, and may also fail to correctly identify pseudogenization events.
For the last few years, the mouse reference assembly has been improved under the guidance of the Genome Reference Consortium (GRC) (Church et al. 2011). The first remit of the GRC is to fix errors and close sequence gaps in the draft sequence. In the former case, the HAVANA and RefSeq groups play a key role in identifying indels and nonsense mutations within mouse protein-coding genes. These findings are reported to the GRC, and when the sequence region has been reappraised the results are fed back to curators, who update the gene annotation if necessary. For example, a protein-coding gene with a putative sequencing error may turn out to be a genuine pseudogene. The GRC also provides alternative assemblies ('alt loci') of regions that are variable between genomes (http://www.ncbi.nlm.nih.gov/projects/genome/assembly/grc/). The Idd regions annotated by HAVANA are now included in the GRC as alt loci. In total, GRCm38.p3 (the version of the mouse reference genome released in March 2014) contains 99 alt loci, featuring sequence from 13 additional mouse genomes. All alt loci produced by the GRC will be incorporated into the GENCODE genebuild. In due course, the complete genome sequences provided by the Keane group will be added to the mouse GRC repository, and they will become targets for manual annotation. It is both unfeasible and unnecessary to subject each of these genomes to complete manual annotation. We anticipate that a large proportion of the existing reference assembly annotation models will simply be 'lifted across' between genomes. Manual annotation will then be employed to (a) investigate and improve loci that have failed to project successfully, and (b) specifically target regions of known genomic complexity, e.g., dynamically evolving gene families, where accurate annotation is likely to be particularly difficult. Furthermore, the manual annotation process will once again provide an important 'QC' service on these sequences, helping to distinguish true variant sites from artifacts or errors that arose during the genome sequencing, assembly, or alignment stages.
Future prospects
The GENCODE annotation of the mouse reference genome is continuing along several fronts. Firstly, not all gene features are represented at the present time, in terms of exons, transcripts, and even whole loci (Mudge et al. 2013; Cunningham et al. 2015). The mouse GENCODE gene and transcript counts are thus expected to rise consistently over the coming years as manual annotation continues and further transcript libraries become available. However, while the number of RNAseq reads available already runs into the hundreds of millions, concerns have been raised about the power of this technique to find transcripts with very low expression levels (Oshlack and Wakefield 2009). CaptureSeq is proving to be highly useful in this regard, being a method by which transcripts with extremely low expression can be enriched through the use of tiling arrays designed across regions of interest (e.g., intragenic space) prior to high-depth sequencing (Mercer et al. 2012; Clark et al. 2015). We anticipate that this methodology will be used to uncover new mouse lncRNAs, in particular those with restricted expression profiles.
Secondly, a significant amount of work remains to be done in the functional annotation of the mouse transcriptome, in particular in allowing users to distinguish transcripts that are biologically interesting from those that are not. While the completion of mouse (or human) functional annotation cannot be considered a short-term goal, we anticipate that annotation projects such as GENCODE will be able to make significant progress over the next few years. Initially, the completion of currently incomplete GENCODE models will be of enormous assistance in this regard. Here, we have outlined methodologies for model completion that can be carried out at the present time based on short-read RNAseq coverage graphs and models, as well as CAGE and polyAseq. However, longer RNAseq read libraries are becoming available on platforms such as PacBio (these data are already proving useful for human annotation (Sharon et al. 2013)), while nanopore-based RNA sequencing is on the horizon (Clarke et al. 2009). In due course, we anticipate that true full-length RNA sequences will negate the need to combine RNAseq with separate end-sequencing protocols (Picelli et al. 2014).
Another advantage of NGS is that insights can be gained into levels of transcription, which can be compared, for example, between tissues or developmental stages (Wang et al. 2008; Lin et al. 2014). For the human transcriptome, several projects have already sought to identify 'dominant' transcripts; i.e., the transcript (or protein) of a particular gene that has the highest, most consistent level of expression (Gonzalez-Porta et al. 2013; Ezkurdia et al. 2015). In the near future, improvements to RNAseq technologies will complement the maturation of single-cell protocols, allowing us to observe changes in transcript expression profiles with increasing accuracy and resolution. Meanwhile, functional transcripts can also be extrapolated based on their evolutionary conservation (Fig. 4a). GENCODE is integrating the output of the APPRIS pipeline, which aims to identify the 'principal' RNA produced by a gene on the basis of exonic conservation (alongside inferences made into the protein structure) (Rodriguez et al. 2013). For mouse and human GENCODE, the principal APPRIS isoform for each protein-coding gene is designated in the GTF file, or, if no model matches these strict criteria, a single 'candidate' model can instead be selected based on its score or length. We emphasize that such methodologies extrapolate functionality through the use of proxies, and that the true descriptions of functionality must ultimately come from single-gene laboratory studies. Even so, we would argue strongly that annotation projects such as mouse GENCODE must do all they can to provide guidance on transcript functionality at the present time, given the high demand for this information. For example, the development of the CRISPR/Cas system for genome engineering is completely changing the landscape of mouse genomics, offering a simple method by which mouse genes can be disrupted or switched on and off (Jinek et al. 2012; Mali et al. 2013; Qi et al. 2013; Wang et al. 2013). However, uncertainties regarding the functionality of transcriptional complexity within genes, antisense to genes, and within intragenic space currently represent hurdles to both the design of CRISPR/Cas assays and the interpretation of the results produced. In a wider context, gene annotation will always be an integral component of genome science, from medical to evolutionary biology. It is therefore important that all steps are taken to ensure that genebuilds are as accurate and comprehensive as possible. | 8,262.8 | 2015-07-18T00:00:00.000 | [
"Biology",
"Computer Science"
] |
A Secure Anonymous Authentication Protocol for IoT Based Health-care System using Wireless Body Area Network
The current technology in healthcare and its information are enhanced with IoT systems. In most IoT systems, there exists a gateway between a wireless body area network (WBAN) and the internet to upload and retrieve health information. These IoT gateways normally transmit data to a cloud. The IoT devices and the healthcare data they handle are therefore critical, and security constraints are required to protect the data. This paper introduces a novel concept called a secure anonymous authentication protocol with advanced encryption standard (SAAPAES), a cryptographic scheme to guarantee security services and to protect confidential client data in the healthcare system. Our SAAPAES protocol offers the following aspects: 1. Anonymous authentication, an easy and efficient way to protect patient/doctor identities from the server on cloud storage by using a hash-key authentication algorithm without disclosing security credentials such as the password and username. 2. Patients' health information is encrypted with SAAPAES and then uploaded to the cloud; the health information is later downloaded from a personal data assistant (PDA) and decrypted using SAAPAES. The proposed authentication approach provides an efficient authentication mechanism with high security in the health-care system.
Introduction
In the recent past, the field of telemedicine has gained strong momentum due to the worldwide demand for medical experts, which has required interaction between experts through virtual meetings such as video conferencing. However, when an expert needs to know the medical background of a patient who is poorly informed about the technical background of his or her physical disorders, it has become important to maintain a database of the patient that can be accessed by the appropriate physician to understand the patient's problem (Anzanpour et al. 2008). In such cases, a cloud-based healthcare system helps in storing the confidential data of patients and their health conditions. Unfortunately, the security and privacy of such stored data have become the main concern (Challa et al. 2007; Li et al. 2014). To maintain the security of healthcare data, both cloud service providers and healthcare organizations should take the necessary measures to ensure safe handling of patients' data, mainly against unethical attackers. Therefore, high security measures and assurance are required in cloud-based healthcare systems (Zanjal et al. 2016). Farahani et al. (2014) pictured that organizations must make sure that sensitive health reports are stored on the cloud in a secure and encrypted way, because they do not have control over the security of the devices used to access and transmit the data; otherwise, a substantial risk may arise as the network grows with new network devices.
Government policies should be in place to ensure that cloud service providers comply with all necessary means to secure patients' data privacy. If such requirements are met by the cloud service providers, there is an opportunity for efficient management of data with proper security. The patient, whose information is stored in the cloud, is the first to be concerned about the data and must have control over his or her health records. The patient should be privileged to grant access only to persons who possess the corresponding key.
Proposed methodology
This section provides information on the architecture of the proposed model and the flow of healthcare information, covering authentication and registration.
The proposed system provides a platform where the patient's personal health information is stored in a cloud and can be accessed by any authenticated doctor to learn the medical background of the patient who appears before him or her. It handles the medical history of each individual and provides access to all registered hospitals to read or update the data for future use by any other registered doctor. A hospital that accesses the database must be registered and must have obtained a license, referred to as a 'unique database accessing code'. Patients' details are stored in the database, and an identification number is generated during this process. Whenever they go for any treatment, their medical data are stored in the database using their identification number, without requiring any personal proof or exposing their personal details. Information gathered from patients is highly confidential and should be shielded from hackers or any third party who may abuse the patient's information for illicit purposes.
Data in PDA (Personal data assistant)
The PDA in the framework oversees patient details digitally and allocates access mechanisms to different authorities. These PDA data can be updated and processed after cloud storage whenever and wherever necessary and become promptly available to the specified specialists and clients.
Anonymous authentication
As data should be kept secret and confidential from adversaries, every user in the healthcare system must be authenticated. This helps the administrator confirm the user's identity and grant the doctor access to the patient's records instantly. Here, users can register with their personal details for authentication so that they can upload or view the medical records.
The main idea of user authentication is to find matching information between user and server.
Among these factors, password authentication is the simplest and most practical approach in network applications due to its low cost and easy implementation. To register at the hospital registration center, the user follows the registration steps, after which S issues an access card (AC) containing the registration parameters and the one-way hash function h(.) to U through a secure channel.
2.4a Login
If user U wants to log in, the user must enter the unique user identity and password. The system then performs the following steps: 1. Using a random number, compute the hash value h(·∥·)′ and calculate the corresponding verification parameter. 2. If equation (7) is satisfied, the system generates a fresh session value and computes the login request messages; otherwise the login request is discarded.
2.4b Authentication
After receiving the login request, S performs the following steps to authenticate U: 1. S uses the received values and its secret key to recover the corresponding parameters. To check whether the authentication message is valid, S recomputes the verification values. If equation (14) is satisfied, S confirms that U is a legitimate user and responds with a message to U; otherwise the request is rejected. 2. On receiving the response message, U first verifies whether it is valid. If equation (16) is satisfied, mutual authentication between U and S is achieved.
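Since the protocol's symbols are abbreviated above, the following is only a generic sketch of the hash-based, timestamped login pattern described in this section, written with Python's standard library; the function and variable names are hypothetical and do not reproduce the exact SAAPAES message formats.

```python
import hashlib, hmac, os, time

def h(*parts: bytes) -> bytes:
    """One-way hash h(.) over the concatenation of its inputs (SHA-256 here)."""
    return hashlib.sha256(b"||".join(parts)).digest()

# --- Registration (over a secure channel): the server stores a verifier, not the password.
user_id, password = b"patient-001", b"correct horse"      # illustrative credentials
verifier = h(user_id, password)                            # value kept on the access card / server table

# --- Login: the user builds a fresh authentication message with a timestamp and nonce.
def build_login(user_id: bytes, password: bytes) -> dict:
    ts = str(int(time.time())).encode()
    nonce = os.urandom(16)
    proof = h(h(user_id, password), nonce, ts)             # analogous to the h(.||.)' value above
    return {"id": user_id, "nonce": nonce, "ts": ts, "proof": proof}

# --- Server side: recompute the proof and check freshness (replay protection).
def authenticate(msg: dict, window_s: int = 60) -> bool:
    fresh = abs(time.time() - int(msg["ts"])) <= window_s
    expected = h(verifier, msg["nonce"], msg["ts"])
    return fresh and hmac.compare_digest(expected, msg["proof"])

print(authenticate(build_login(user_id, password)))        # True for a legitimate, fresh request
```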
Cloud storage
Cloud users should register their details to obtain permission to access the cloud data. The data owner accepts requests from users and then shares the data's private key. Data users receive the key from the owner to access the cloud data. The users can then log in using their credentials and upload a file after encryption; later, they can download the file using the same key. When a file is uploaded, its content is encrypted using SAAPAES encryption before being saved into the database. 128-bit SAAPAES encryption is used to provide security for the user-uploaded data. SAAPAES is a fast symmetric encryption algorithm.
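SAAPAES builds on 128-bit AES. As an illustration only (not the authors' implementation), here is a minimal sketch of AES-128 encryption and decryption of a record before upload and after download, using the Python cryptography package; the CBC mode, key handling, and sample record are assumptions made for the example.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    """AES-128-CBC with PKCS7 padding; the random IV is prepended to the ciphertext."""
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def decrypt_record(key: bytes, blob: bytes) -> bytes:
    """Split off the IV, decrypt, and strip the padding."""
    iv, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

key = os.urandom(16)                       # 128-bit data key shared by the data owner
blob = encrypt_record(key, b"patient-001: blood pressure 120/80")
assert decrypt_record(key, blob) == b"patient-001: blood pressure 120/80"
```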
Experimental results
The analytically demonstrated work described above was implemented in the Cooja toolkit, and security was implemented using homomorphic encryption, as shown in figure 2. The data were imported from the SQL database, and the security was applied to those data.
Patient/Doctor login page
The initial authentication is performed through user credentials that are verified against the stored database, provided the user has already registered to create a username and password. The login page has three attributes: username, password, and user type. If the user is not registered, the page is redirected to the registration page.
Patient or Doctor Registration Form
Patients and doctors are given unique credentials for logging in. After completing the registration process, either the patient or the doctor can access the system to upload or view medical data.
Encrypting the data using SAAPAES
Authorization is the process of confirming a user's privilege to access a given platform. A unique file ID is generated when a file is uploaded to the cloud environment. An authorized user can use this ID for downloading and editing their uploaded data. After a medical file is uploaded, its contents are encrypted using the SAAPAES encryption algorithm. The key size employed for encrypting the plain text is 128 bits. Figure 3 shows the encryption of the patient data at the user end and the encrypted data stored on the IoT server.
Decrypting the data using SAAPAES
The cipher texts are retrieved from the cloud and decrypted to obtain the original plaintext. The decryption is done with the help of a private key which is made available to the doctors and other users who need the healthcare data. The key is generated from the SAAPAES algorithm, which is also used for decrypting the text files. Figure 4 shows the decrypted user data stored on the IoT server with a hash authentication key for security.
Replay attack
In our scheme, a timestamp mechanism is employed to avoid replay attacks. If an intruder tries to replay previously captured messages to obtain authentication, the intruder fails because the timestamp is different for each session. As a result, the attacker is unable to authenticate using earlier messages.
User impersonation attack
During the registration process, server S generates a timestamp to calculate the parameter P using the user's registration parameters and its secret key, where the timestamp is unique for each user. As a result, an attacker cannot guess the parameter without knowing the secret values. Hence, a user impersonation attack is not possible in our proposed scheme.
Eavesdropping attack
An attacker mostly targets insecure connections in an eavesdropping attack. In our proposed system, all patient information is encrypted with AES. AES is one of the strongest encryption standards, so eavesdropping attacks can be prevented in our proposed approach.
Man-in-middle attack
In a man-in-the-middle attack, the attacker establishes an independent connection between a valid sender and receiver without the knowledge of the true sender and receiver. In our proposed system, the use of HMAC provides authentication to validate the genuine user, and AES is used for encryption, so the system is secure from such attackers. Results of 39 ms and 78 ms are given in figure 5(e). The proposed SAAPAES algorithm provides security of about 97%, which is higher than the existing algorithms with security of 87%, 82%, and 95%, as given in figure 5(f).
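For illustration, a minimal sketch of how an HMAC binds a message to a shared key so that a tampered message is rejected; this uses Python's standard library and an assumed session key, not the specific SAAPAES message format.

```python
import hmac, hashlib

shared_key = b"session-key-established-after-authentication"   # illustrative

def tag(message: bytes) -> bytes:
    """Compute the HMAC-SHA256 tag for a message under the shared key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # constant-time comparison prevents timing side channels
    return hmac.compare_digest(tag(message), received_tag)

msg = b"upload file-id=42"
t = tag(msg)
assert verify(msg, t)                       # genuine message accepted
assert not verify(b"upload file-id=99", t)  # tampered message rejected
```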
Password guessing attack
The
Conclusion
On the performance side, the 128-bit SAAPAES encryption approach has advanced and strengthened the security of healthcare records to a high degree of secrecy with comparatively minimal energy consumption, which was justified using simulations. | 2,472.4 | 2021-08-02T00:00:00.000 | [
"Medicine",
"Computer Science",
"Engineering"
] |
Behavior Under Cyclic Loading of Freshwater Ice and Sea Ice With Thermal Microcracks
The combination of thinning ice, larger waves, and damage due to diurnal thermal cycling motivate the need to better understand the impact of flexing under the action of oceanic waves on the strength of thermally cracked ice. To that end, new experiments were performed on freshwater, lab‐grown ice and first‐year natural sea ice. Both materials were cracked by thermal shocking and then subsequently cyclically flexed. Initially, the thermal cracks weakened both materials. When the cracked ice of either origin was cyclically flexed under fully reversed loading, its flexural strength, initially reduced by the stress‐concentrating action of the cracks, recovered to the strength of non‐cracked, non‐flexed ice. When the cracked ice was cyclically flexed non‐reversely, its strength recovered only partially. During reversed cyclic flexing, the cracked region experienced alternately compressive and tensile stresses. We suggest compression resulted in contact of opposing crack faces followed by sintering leading to strength recovery. During non‐reversed cyclic flexing, contact and sintering were reduced and ice strength did not fully recover. The tendency for cracks to heal during cyclic flexing may lessen their threat to the structural integrity of an ice cover.
Thermal cracks, whose length can extend to several kilometers (Evans & Untersteiner, 1971) and whose depth can reach tens of centimeters (Milne, 1972), can serve as initiators of failure via fatigue, evidence for which is the relatively sudden breakup of the arctic ice cover on at least two occasions (Asplin et al., 2012; Collins et al., 2015). The strength of thermally damaged ice was investigated earlier by Murdza et al. (2022a). In that study, laboratory-grown freshwater and saline ice as well as natural first-year sea ice were thermally shocked, and the change in flexural strength was measured at different time intervals after the shocking/cracking. Initially, the cracks weakened the materials in accordance with expectations from fracture mechanics theory. However, within tens to hundreds of seconds, the cracks healed and the strength recovered completely.
The aim of the present work was to investigate the behavior of thermally cracked ice under flexing. Given the consequences of breakup and the unknown contribution of thermal microcracks to fatigue, new experiments were performed in the laboratory to gain some insight into the mechanical behavior of thermally cracked ice under cyclic loading. With that idea in mind, a set of thermal cracks was introduced into specimens either before or during flexing. As will become apparent, when the ice was cycled under reversed loading, the cracked material strengthened, whereas when the ice was cycled non-reversely, such that the cracked region was always under tension, the ice weakened. To our knowledge, this is the first report on the cyclic loading of thermally damaged ice.
Materials and Methods
We studied the same kind of S2 freshwater ice (i.e., salt-free) and natural first-year sea ice that we studied earlier (Murdza, Polojärvi, et al., 2021;Murdza, et al., 2022a;Schulson et al., 2022). Freshwater ice was produced in the laboratory through unidirectional solidification of local tap-water, following a standard procedure (Golding et al., 2014;Smith & Schulson, 1993). We measured salinity of this tap-water to be less than 0.1 ppt. During freezing, all salts, if any, are expelled from the ice due to very low segregation coefficient for most impurities in ice. The sea ice was harvested in the form of a submeter size block from the ice cover on the Beaufort Sea during the winter 2020 and then stored at −30°C in a cold room at Dartmouth's Ice Research Laboratory. Both types of ice were polycrystalline and characterized by columnar-shaped grains whose long axis was parallel to the direction of growth. Each type possessed the S2 crystallographic growth texture in which the c-axes of the grains were confined more or less to the horizontal plane, but randomly oriented within that plane. The grain size (column diameter) of the freshwater ice so produced was 5.5 ± 1.3 mm and its density was 914.1 ± 1.6 kg · m −3 . Grain size, density, and salinity of first-year sea ice were 2.7 ± 0.4 mm, 906 ± 4 kg · m −3 and 3.0 ± 0.3 ppt, respectively. From such materials, test specimens were machined in the form of beams that were milled to final dimensions: thickness h = 13 mm (along the columns), width w = 75 mm, and length l = 300 mm.
All the experiments were conducted at −10°C. The specimens were allowed to reach thermal equilibrium at this temperature prior to testing. The thermal shock and subsequent damage were introduced to all ice specimens through spraying with either liquid nitrogen (−196°C) or medical spray (−52°C; this spray contains pure 1,1,1,2-tetrafluoroethane, HFC 134a) for ∼1-2 s across a narrow (∼20 mm) band in the middle of one of the largest faces, or by placing a cold (−30°C) steel plate directly on the ice (similarly to Gold, 1961, 1963). Results from the experiments showed that the different types of thermal cracking had no significant effect on the flexural strength (Murdza et al., 2022a). The thermal shock was introduced only to one surface of the specimen and created a network of randomly oriented grain-sized cracks, both inter-granular and trans-granular. Within the freshwater ice, the cracks were clearly visible to the unaided eye and penetrated ∼4 ± 1 mm into the ice (see Figure 1a of Murdza et al., 2022a). This is consistent with observations by Gold (1963), who reported that the crack depth in ice resulting from contact between an ice plate and a colder brass plate was between 0.24 and 0.39 cm. In sea ice, due to its opacity, cracks were less visible, and it was not possible to estimate their penetration depth. Although there is a wide variety of thermal cracks in nature, we believe that the thermal shock introduced in our experiments resulted in a crack pattern in the ice samples that mimicked the pattern in nature (cracks extend through multiple grains in length and ∼20%-30% of the ice thickness in depth).
To investigate the effect of thermal shocking on the behavior of ice under reversed and non-reversed cyclic flexing, we introduced the thermal shock, either 24 hr before cycling, or immediately before cycling, or, to minimize microstructural changes, during initial cycling when specimens were sprayed during the first cycle.
Cycling of cracked specimens was done by flexing under 4-point loading using a custom-built frame attached to a servo-hydraulic loading system housed within the same cold room in which the specimens were thermally shocked (Murdza et al., 2018, 2019, 2021b). The specimens were cycled at the same temperature of −10°C at which they had been equilibrated. The samples were loaded across the columns at a constant outer-fiber center-point strain rate of ∼10⁻⁴ s⁻¹, which resulted in a frequency of ∼0.1 Hz (∼10 s period), approximately the frequency of ocean swells (Collins et al., 2015). In order to reach higher outer-fiber stress amplitudes during cycling, the maximum outer-fiber stress was gradually increased, as described earlier (Iliescu et al., 2017), to a pre-determined level and then held constant at that level while the ice was cycled for an additional number of cycles, typically ∼500. Stress amplitude is defined as one-half of the difference between the maximum and the minimum outer-fiber stress.
In order to reduce the potential for opposing crack surfaces to come into contact and potentially heal, in addition to reversed cycling we also cycled specimens in a non-reversed manner such that the shocked region was always under tension; that is, we raised the mean stress from zero, where mean stress is defined as one-half the sum of the maximum stress and the minimum stress. During the non-reversed cycling we varied the minimum stress from 0 to 0.5 MPa. This means that, for a given maximum outer-fiber stress, the stress amplitude decreased as the mean stress increased.
The flexural strength was obtained from the load at failure, P, using the four-point bending relationship, where L = 254 mm is the distance between the outer pair of load lines and b and h denote the width and thickness of the beam, respectively.
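For reference, the cyclic-loading quantities defined above can be written compactly. The flexural-strength relation itself is not reproduced in this text; the third expression below is the common textbook four-point bending formula under the assumption that the inner loading span equals half of the outer span L, and is given only as a guide, not as the authors' exact relation.

```latex
\sigma_a = \frac{\sigma_{\max} - \sigma_{\min}}{2}, \qquad
\sigma_m = \frac{\sigma_{\max} + \sigma_{\min}}{2}, \qquad
\sigma_f = \frac{3 P L}{4 b h^{2}} \quad \text{(four-point bending, inner span} = L/2\text{)}
```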
Results
In total, 40 measurements were made on cracked ice, including 27 on freshwater ice, and 13 on sea ice. Fewer tests were performed on sea ice owing to the limited availability of the material. Results for each experiment are available in the data repository. Figure 1 and Table 1 show the effect of cycling on the flexural strength of S2 freshwater ice and S2 sea ice, with and without thermal cracks. The data for crack-free freshwater ice (solid, black points) were obtained from and are shown for comparison. A few points are noteworthy.
Freshwater ice (Figure 1):
1. The flexural strength of cracked freshwater ice (points filled in red) fully recovered upon fully reversed cyclic flexing for ∼500 times (i.e., under zero mean stress) over the range of maximum outer-fiber stress explored, from 0.4 to 1.5 MPa. The flexural strength of the thermally cracked ice upon cycling increased linearly as the maximum outer-fiber stress increased, which is similar to the behavior of non-cracked ice during cycling. The similarity in slopes of cracked and non-cracked ice may be attributed to the internal back-stress buildup suggested earlier. The only difference between non-cracked and cracked ice after cycling is that, for given cyclic flexing conditions, the cracked ice was slightly weaker than non-cracked ice (by ∼0.4 MPa), owing perhaps to some residual stress-concentrating effects of the damage.
2. The interval of time between thermal shocking and reversed cycling (i.e., thermal shocking imposed either 24 hr before cycling, imposed right before cycling, or imposed at the beginning of cycling as load began to rise) appears not to affect the behavior of freshwater ice during fully reversed cycling nor to affect its ultimate strength.
3. Cracked ice that was cycled in a non-reversed manner with a minimum outer-fiber stress of 0 MPa but the same maximum outer-fiber stress as during reversed cycling (red triangles pointing downwards in Figure 1) had the same flexural strength after cycling as cracked ice that was cycled under fully reversed loading. These experiments were conducted at 0.5, 1.2, and 1.5 MPa maximum outer-fiber stress.
4. The flexural strength of cracked ice that was cycled in a non-reversed manner with a higher minimum outer-fiber stress of either 0.3 or 0.5 MPa recovered partially. The flexural strength measured after cycling was lower than the strength of cracked ice that was cycled fully reversely. Moreover, the obtained strength values are lower than the flexural strength of pristine, non-cycled ice and of cracked, non-cycled ice that was allowed to recover without cycling.
5. As well, the flexural strength of ice that was cracked and then strengthened by reverse cycling (red triangular points, Figure 1) relaxed when allowed to anneal at −10°C for 48 hr before bending to failure. During annealing its flexural strength decreased from 2.28 ± 0.13 MPa to 1.58 ± 0.05 MPa. This relaxation is essentially the same as that exhibited by pristine ice that was strengthened by cyclic loading and then annealed (Murdza et al., 2022b); it is attributed to the relaxation of an internal back stress induced by cycling.
Sea ice (Table 1):
1. Thermal cracking imposed at the beginning of reversed cycling does not affect the strength of sea ice after ∼500 cycles; that is, the strength of cracked ice after it was cycled ∼500 times (1.47 ± 0.04 MPa) is about the same as the strength of pristine, non-cracked, non-cycled sea ice (1.40 ± 0.07 MPa).
2. The number of cycles matters. Thermal cracking imposed at the beginning of reversed cycling does affect the strength of sea ice after only 50 cycles. Specifically, the strength recovers partially. The strength of pristine non-cracked non-cycled ice (1.40 ± 0.07 MPa) is reduced immediately after thermal shocking to 0.91 ± 0.04 MPa. The strength partially recovers, after the ice was cycled 50 times, to 1.19 ± 0.03 MPa.
(Figure 1 caption: The solid pink line indicates the average flexural strength of non-cracked non-cycled freshwater ice plus and minus one standard deviation, that is, 1.73 ± 0.25 MPa (Timco & O'Brien, 1994). The red dashed line represents the trend for cracked freshwater ice that was cycled reversely.)
3. The flexural strength of cracked ice that was loaded under non-reversed cycling between outer-fiber tensile stresses of 0.5 and 0.7 MPa for more than 500 cycles partially recovered to 1.16 ± 0.05 MPa. This strength is lower than the strength of cracked ice that was cycled reversely (1.47 ± 0.04 MPa) and that of pristine ice that was never cycled (1.40 ± 0.07 MPa).
The other point to note is the fracture path. When cycled reversely, the crack at failure generally did not pass through the pre-cracked region, but instead through pristine ice. In contrast, when cycled non-reversely where both minimum and maximum stresses are tensile, the crack at failure generally passed through the thermally cracked region. An implication is that when crack faces are not allowed to come together during cyclic flexing, healing may be impeded when compared to the case of fully reversed cyclic flexing during which crack surfaces are in contact for some time during each cycle.
Discussion
The question that motivated this work is whether thermal cracks that are introduced into ice affect behavior under cyclic loading, specifically whether the flexural strength of cracked ice changes upon cyclic flexing. The results show that in all experiments both freshwater ice and sea ice recover their strength either partially or fully when compared with the flexural strength of ice that is shocked and then bent to failure immediately. As shown earlier (Murdza et al., 2022a) the as-cracked strength is governed by fracture mechanics. When cyclically flexed reversely for ∼500 cycles, the flexural strength of both freshwater ice and sea ice recovers fully and is either about the same as or greater than the strength of non-cycled, non-cracked ice. When cycled non-reversely where both outer-fiber minimum and maximum stresses in the cracked region are tensile, the flexural strength recovers only partially.
As was discovered and discussed earlier (Murdza et al., 2022a), the ice that was thermally shocked but not cyclically flexed heals and completely recovers its strength relatively quickly (∼10-300 s). In those experiments, after thermal shocking the ice was allowed to rest in the cold room while opposing crack faces were able to come into contact, mainly due to the subsequent warming and expansion of the material around the shocked area. We suggested that the healing of cracks occurred primarily from sintering via surface diffusion and may have been assisted as well by the presence of a quasi-liquid layer on the crack faces (Murdza et al., 2022a). To ensure sintering and surface diffusion, the crack faces must be in contact. A similar process is thought to have occurred in the present experiments when cyclic flexing included a compression phase. Compression may even have enhanced healing by increasing the area of contact through creep. When the crack region was held under tension, contact of the crack surfaces was prevented and so was the recovery.
It is important to point out the time required for the recovery of ice strength. In the previous study (Murdza et al., 2022a), it was found that ∼300 s is enough to recover the strength of cracked (but not cyclically flexed) freshwater ice and that even less time (<10 s) is required for cracked (but not cyclically flexed) sea ice to fully heal. Given that the period of a cycle in the present study is ∼20 s, 300 s is equivalent to 15 flexing cycles. However, here we observed that even after 50 cycles sea ice does not fully recover its strength. It is necessary to apply ∼500 cycles to recover the ice strength completely. This observation may be explained by the fact that during reversed cycling the cracked region experiences both tensile and compressive stresses, and healing is impeded during the tensile part of a cycle. Local plasticity and crack tip blunting may also have contributed to the increase in strength upon cycling.
Should the behavior of ice on the larger scale reflect the behavior on the smaller scale, given that natural sea ice covers undergo reversed cyclic loading under the action of oceanic waves, it seems possible that these covers with thermal cracks may not weaken.
Conflict of Interest
The authors declare no conflicts of interest relevant to this study.
Data Availability Statement
The data presented in this paper are available at the Arctic Data Center website (Murdza et al., 2023). | 3,871 | 2023-05-30T00:00:00.000 | [
"Materials Science"
] |
Bank XYZ is one of the biggest banks in Indonesia. It has an IT Service Desk function under the Information Technology Division that specifically handles complaints or problems related to IT. Knowledge management in an IT service desk can help to increase the availability of information and knowledge for the IT service desk team, who must provide explanations to users. The IT Service Desk of Bank XYZ has operated a knowledge management system built on an open-source platform since 2017. It is called SDKPedia, was developed in-house by the IT Service Desk team, and has never been evaluated since it was built. The objective of this study is to evaluate SDKPedia as the knowledge management system used in the IT Service Desk of Bank XYZ. The evaluation was carried out based on the Delone and McLean assessment criteria. A survey was distributed to IT Service Desk workers, and 31 valid responses were used in this study. To determine the indicators that have a substantial impact and result in a net benefit for SDKPedia, the PLS-SEM algorithm is utilized. Service quality is the only exogenous latent variable that affected intention to use, while the other two exogenous latent variables, system quality and information quality, did not have a significant impact on intention to use or user satisfaction. Considering the findings of this study, several improvements can be made by the IT Service Desk manager to make the quality of SDKPedia better. The points that need more attention are information quality and system quality.
INTRODUCTION
The IT Service Desk is a critical function in an organization or company, especially one that offers public services. It is the primary single point of contact for all customers or users who need help or assistance with IT services [1][2][3]. It can usually be contacted 24 hours a day, 7 days a week through various communication media, such as telephone, email, messenger, etc. In the current digital era, the IT Service Desk can even be accessed via social media.
The IT Service Desk provides various services, for example request fulfillment, incident handling, and escalation to other teams for incidents that need assistance from other parties [2], [4]. During incident resolution, the IT Service Desk must inform the user of the progress. Incidents are classified into several categories, each with a different Service Level Agreement (SLA).
Banking, as one of the industries that provide services to the public, is required to have a good and reliable IT Service Desk. Bank XYZ is one of the biggest banks in Indonesia, with tens of millions of customers and dozens of applications or services. It has an IT Service Desk function under the Information Technology Division that specifically handles complaints or problems related to IT. It is separate from the Contact Center Division, which receives all complaints directly from customers. The IT Service Desk handles only IT problems that cannot be solved by the contact center or a working unit (i.e., a branch). Because of its specialization, the IT Service Desk team of Bank XYZ consists of workers with an IT background.
In carrying out its work, an IT Service Desk needs a system that can store and manage the information needed to resolve complaints or problems. The system should not only store all information about customers' complaints, but should also help the IT Service Desk team accelerate its services by providing solutions quickly and accurately. That is why knowledge management plays an important role for an IT Service Desk. Currently, Bank XYZ has one application, named SDKPedia, as the knowledge management system used in the IT Service Desk. SDKPedia is used to store information about complaints or incidents and can be used as a reference in solving problems.
Knowledge management in an IT service desk can help to increase the availability of information and knowledge for the IT service desk team, who must provide explanations to users [5]. Knowledge management that is well managed can improve service to users, which has an impact on customer satisfaction and loyalty. It can also reduce the number of calls or complaints, which in turn reduces the operational cost of the company. This motivates the creation of knowledge management systems (KMS), information systems (IS) that are specifically designed to enable knowledge management [6]. A KMS, as an application system and IS, combines and integrates capabilities for managing both explicit and tacit knowledge that are useful for the organization [7]. A KMS helps an organization to learn by keeping significant and important knowledge and by facilitating employee access to the information and knowledge they need [8].
Since it was created in 2017, the use of SDKPedia in IT Service Desk Bank XYZ has never been evaluated. There has never been an exact measure of the experience of workers in the IT Service Desk in using SDKPedia as the knowledge management system that supports their work. Therefore, the objective of this study is to evaluate the use of SDKPedia as a knowledge management system in the IT Service Desk. To achieve this goal, the following research question was formulated: what factors influence user satisfaction and utilization in using SDKPedia in IT Service Desk Bank XYZ?
Knowledge Management
Knowledge management is defined as doing what is necessary to get the most out of knowledge resources [9]. Knowledge management is the process of developing, acquiring, disseminating, and using practical knowledge to enhance an organization's performance by maintaining and sharing the accumulated knowledge related to its processes, procedures, and methods [10]. Knowledge management is a crucial component of organizational strategy, which can improve performance and increase knowledge for the organization [11]. Knowledge management describes a process to create, transfer, share, store, and apply knowledge, as well as to evaluate the effects of knowledge on organizational performance [12].
Knowledge management is considered as an integrated approach that offers a variety of advantages, but its main highlights include encouraging collaboration by preserving and disseminating existing knowledge within organizations while creating opportunities for the creation of new knowledge. Additionally, it provides organizations with the resources they require to successfully apply their knowledge to accomplish their mission, vision, and objectives.
Knowledge Management System
A knowledge management system (KMS) is a system for storing and retrieving knowledge or information to advance comprehension, collaboration, and process alignment [13]. A KMS serves as the foundation for all collaborative activity and serves to unite communities and groups. A KMS is an advanced information system that contains online databases, information, directories, and applications, in which users' exploration is an important consideration to be exploited [14]. A KMS enables decision makers to interact effectively with the system in terms of knowledge storage, communication, and cooperation. It contains databases with mechanisms for capturing, storing, organizing, and searching useful knowledge and information.
SDKPedia
IT Service Desk Bank XYZ has been established since 2009. Since then, information and knowledge have been stored using makeshift tools. The tools for storing information and knowledge have also changed in line with changes of management at the IT Service Desk. The notion of developing a system for storing and sharing knowledge and information that could be easily accessed and last for a long time did not surface until 2017. Due to the urgent need at that time, a tool was sought that was available open source, met the need to store, update, and share data, and was easy to customize according to the needs of IT Service Desk Bank XYZ.
SDKPedia was built using a ready-to-use open-source platform. It was built as a place to store the information and knowledge that every worker in the IT Service Desk has. Information and knowledge stored in SDKPedia can be accessed by all workers in the IT Service Desk and can also be updated or even deleted if the information is no longer relevant. The existence of SDKPedia helps workers at IT Service Desk Bank XYZ to obtain the information they need to carry out their daily work more quickly.
Delone and McLean Success Model
Delone and McLean proposed assessment criteria for measuring IS success in 1992 as an attempt to address the ambiguity in defining IS success due to its complex and interdisciplinary nature [15]. This model is based on the socio-technical approach and includes the technological and human components of using the system. There are six criteria for measuring the performance of an IS: system quality, information quality, service quality, intention to use, user satisfaction, and net benefit [16]. System quality, information quality, and service quality are the main dimensions used specifically to evaluate the system, and intention to use and user satisfaction are impacted by these aspects [17]. Figure 1 shows the relationships between the criteria of the Delone and McLean Information System success model.
System quality denotes the desired characteristics of an information system, which are reflected in overall system performance and assist users in meeting their needs [15][16]. It includes the accessibility and adaptability of the system, taking into consideration its usability [18]. Information quality indicates the required qualities of the system output and measures the extent to which the data are valid, accurate, and complete, as well as how well the user can understand them [15][16]. Service quality indicates the level of assistance that system users receive from the IT support team and the information systems organization, in which the organization provides users with the services they were promised. Intention to use describes how users intend to utilize the system [19]. Net benefits show how the IS contributes to and affects the success of individuals, groups, and organizations.
Knowledge Management System Measurement Research
Several studies have focused on evaluating and measuring the success of knowledge management systems. Research by Sensuse et al. [16] evaluated the ELISA chatbot's benefits to the firm as a knowledge application, using the Delone and McLean model with the SEM-PLS algorithm to process and analyze the data. The results demonstrate that information quality, service quality, and intention to use all have an impact on ELISA user satisfaction. By understanding the correlation between variables such as intention to use, user satisfaction, and net benefit, that study aimed to improve the ELISA application and provide recommendations for IT managers. Another study evaluated the KMS implementation of an Online Disposition Website, where less than 1% of disposition letters were generated and the system had not been fully utilized; that research focused on identifying the variables that affect the KMS implementation. It found that system quality and service quality positively impact KMS use; knowledge quality, service quality, and KMS use positively impact user satisfaction; and KMS use and user satisfaction positively impact net benefit.
Hypotheses Development
This study uses the Delone and McLean IS assessment criteria to evaluate the SDKPedia system used at Bank XYZ. It adapts the relationships between SDKPedia's three key Delone and McLean model criteria (system quality, information quality, and service quality), users' intention to use the KMS, and their satisfaction with it. User satisfaction and intention to use, in turn, have an impact on the organization's net benefit. The hypotheses are:
H1: System quality significantly impacts intention to use.
H2: Information quality significantly impacts intention to use.
H3: Service quality significantly impacts intention to use.
H4: System quality significantly impacts user satisfaction.
H5: Information quality significantly impacts user satisfaction.
H6: Service quality significantly impacts user satisfaction.
H7: Intention to use significantly impacts user satisfaction.
H8: Intention to use significantly impacts net benefit.
H9: User satisfaction significantly impacts net benefit.
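For clarity, the nine hypothesized paths can be written down as (source, target) pairs, which is also how an inner (structural) model is typically specified before estimation; a minimal Python sketch:

```python
from collections import defaultdict

# Latent variables of the Delone and McLean model used in this study.
SQ, IQ, SV = "system_quality", "information_quality", "service_quality"
ITU, US, NB = "intention_to_use", "user_satisfaction", "net_benefit"

# H1-H9: hypothesized structural paths (predictor -> endogenous construct).
hypotheses = {
    "H1": (SQ, ITU), "H2": (IQ, ITU), "H3": (SV, ITU),
    "H4": (SQ, US),  "H5": (IQ, US),  "H6": (SV, US),
    "H7": (ITU, US), "H8": (ITU, NB), "H9": (US, NB),
}

# Group predictors by endogenous construct, as needed when specifying the inner model.
inner_model = defaultdict(list)
for name, (src, dst) in hypotheses.items():
    inner_model[dst].append(src)
print(dict(inner_model))
```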
Data Collection and Processing
The data for this study were obtained using a questionnaire in Google Forms that was sent online to 40 IT Service Desk Bank XYZ workers. The questionnaire was structured to obtain the information required by the hypotheses developed in this study. The questionnaire was designed with reference to literature relevant to this study to evaluate SDKPedia using a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). Table 1 lists the questions used in this study.
The data collected from the questionnaire were processed and analysed using the Partial Least Squares Structural Equation Modelling (PLS-SEM) algorithm in smartPLS 3.2.9 software. PLS-SEM is a structural modelling approach that is widely used in applied research, often found in studies of information systems, knowledge management, business strategy, and marketing [22]. PLS-SEM is used in predictive research to test theoretical frameworks that investigate many constructs and the relationships between these constructs. PLS-SEM involves two measurement phases: the first validates the measurement model and the second tests the structural hypotheses [23]. PLS-SEM can be used for analyzing differences or contrasting relationships between identified variables [24]. PLS-SEM is suited to analyzing statistical data with a small sample size, does not require normality, and can work without distributional assumptions with nominal factors and interval scales [25]. PLS-SEM aims to obtain predictions from a predetermined model and the theories used.
RESULT AND DISCUSSION
Questionnaires were distributed to 40 workers at the IT Service Desk, but only 31 people filled out the survey. Nine workers were monitoring operators in IT Service Desk who did not use SDKPedia in their daily work. Respondent demographics can be seen in Table 2.
Respondent demographics are based on the respondents' years of service at Bank XYZ. This is because the longer a respondent has worked at Bank XYZ, the more experience the respondent has in using knowledge management. SDKPedia was created in 2017; this means that respondents with less than 5 years of experience only have experience using SDKPedia as the knowledge management system in their daily work, whereas workers who have worked for more than 5 years have experienced conditions in which the IT Service Desk did not yet have a proper knowledge management system and can therefore provide a more objective evaluation of SDKPedia. Thus, the results of the evaluation of SDKPedia represent an objective assessment by respondents who have only used SDKPedia, as well as by respondents who have used knowledge management systems other than SDKPedia. The result of the PLS-SEM algorithm with the Delone and McLean model is shown in Figure 2, which indicates the weighting calculation of each indicator and latent variable based on the hypotheses proposed above.
Coefficient of Determination
To measure how well the statistical model can predict the observed outcome, the coefficient of determination (denoted R²) is used. The coefficient of determination is divided into three categories: substantial if R² is 0.75, moderate if R² is 0.5, and weak if R² is 0.25 [26]. From Table 3, the R-square value for intention to use is 0.531. This indicates that the three main latent variables (system quality, information quality, and service quality) explain 53.1% of the variance of intention to use. Intention to use, together with the other latent variables (SQ, IQ, and SV), then determines the coefficient of determination of user satisfaction. The R-square value of user satisfaction is 0.787, which indicates that the four latent variables explain 78.7% of the variance of user satisfaction. Intention to use and user satisfaction were then used to determine the coefficient of determination of net benefits, which is 0.740, or 74%. A summary of the R-square values can be seen in Table 3.
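As a reminder of what these values represent, R² is the share of variance of an endogenous construct explained by its predictors. A minimal sketch using illustrative numbers (not the SDKPedia survey data):

```python
import numpy as np

def r_squared(observed: np.ndarray, predicted: np.ndarray) -> float:
    """R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def category(r2: float) -> str:
    # thresholds cited in the text: 0.75 substantial, 0.50 moderate, 0.25 weak
    return "substantial" if r2 >= 0.75 else "moderate" if r2 >= 0.50 else "weak"

y = np.array([3.2, 4.1, 2.8, 4.5, 3.9])      # observed construct scores (illustrative)
y_hat = np.array([3.0, 4.0, 3.1, 4.4, 3.7])  # scores predicted by the structural model
r2 = r_squared(y, y_hat)
print(round(r2, 3), category(r2))             # e.g. 0.9 substantial
```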
Path Coefficient
In general, a latent variable is said to be significant if it has a path coefficient of more than 0.2 [16]. Table 4 shows the path coefficients used to observe the significance of the relationships between latent variables and to test the proposed hypotheses. A summary of the path coefficients can be seen in Table 6. Intention to use receives the strongest effect from system quality, which has a path coefficient of 0.571, and is not affected by service quality, which has a coefficient of -0.097. Information quality does not have a significant effect on, and does not predict, user satisfaction, and service quality does not predict intention to use. User satisfaction receives the strongest effect from intention to use and the lowest effect from information quality.
Outer Loading
The outer loadings, for a well-fitting reflective model should be above 0.70 as the threshold value [27]. Table 5 shows the result of outer loading that indicate the relationship between the indicators and latent variables.
Only one indicator has a value below 0.7: the outer loading for variable NB4 is 0.696, which shows that the correlation between NB4 and the latent variable net benefit did not reach the threshold value. For the other indicators, the outer loadings are above 0.70. It is necessary to measure the reliability and validity of the structural model to validate the survey results. Reliability is measured from indicator reliability and internal consistency reliability, while convergent validity and discriminant validity are used to measure validity. The component used to assess internal consistency reliability is composite reliability. A composite reliability value greater than 0.6 is acceptable and shows that the variable meets the criteria. A summary of these indicators can be seen in Table 6.
Discriminant Validity
The measurement model has validity and reliability assessments that are determined by investigating internal consistency, convergent validity, and discriminant validity [23]. According to the Fornell-Larcker criterion, discriminant validity is established when the square root of each latent variable's AVE is greater than the correlation coefficients between that latent variable and the other latent variables [16][22].
T Statistic of Path Coefficient
The path coefficient will be significant if the T statistic is greater than 1.96 when using a T-test with a significance level of 5% [22].
The bootstrapping method is used to calculate and test significance by producing the T-statistic. Table 8 below shows the T-statistics and p-values obtained using 500 bootstrap subsamples for the inner model of the proposed hypotheses. Four hypotheses are rejected based on the bootstrapping calculation: the relationships of system quality and information quality to user satisfaction, as well as those of information quality and service quality to intention to use. System quality indicates the performance of the SDKPedia application, such as ease of use and access, as well as response time. Information quality relates to the information provided by SDKPedia: whether the information meets user needs, its completeness and relevance, and whether it is easy to understand. Service quality relates to the services provided around SDKPedia, such as a user manual or help function and technical support that can be accessed at any time if users face a problem when using SDKPedia.
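Conceptually, the bootstrap test re-estimates the path coefficient on resampled respondents and divides the original estimate by the standard deviation of the bootstrap estimates. The sketch below uses 500 subsamples, as in the study, but with simulated data and a simple correlation standing in for the full PLS-SEM estimation, so it is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31                                           # number of valid respondents
x = rng.normal(size=n)                           # exogenous construct scores (simulated)
y = 0.6 * x + rng.normal(scale=0.8, size=n)      # endogenous construct scores (simulated)

def path_coef(x, y):
    # standardized slope (correlation) as a stand-in for a PLS path coefficient
    return np.corrcoef(x, y)[0, 1]

estimate = path_coef(x, y)
boot = np.empty(500)
for b in range(500):                             # 500 bootstrap subsamples, as in the study
    idx = rng.integers(0, n, size=n)             # resample respondents with replacement
    boot[b] = path_coef(x[idx], y[idx])

t_stat = estimate / boot.std(ddof=1)
print(f"path = {estimate:.3f}, t = {t_stat:.2f}, significant = {abs(t_stat) > 1.96}")
```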
Intention to use describes the willingness of SDKPedia users to use SDKPedia repeatedly and whether users use SDKPedia to help them make decisions. User satisfaction describes whether users are satisfied with the aspects contained in SDKPedia, such as efficiency, effectiveness, and whether SDKPedia has met user needs. Net benefits are related and show the benefits that will be obtained by using SKDPedia. These advantages may take the shape of improved performance, accelerated task completion, and efficient knowledge management.
Service quality has a more significant impact on users than system quality. While system quality had an impact only on intention to use, service quality and intention to use had an impact on user satisfaction. Net benefit is influenced by intention to use and user satisfaction. Service quality is the only exogenous latent variable that affected intention to use; neither intention to use nor user satisfaction was significantly impacted by the other two exogenous latent variables, information quality and system quality. User satisfaction is significantly impacted by intention to use. User satisfaction and intention to use both significantly affect net benefits.
Considering the findings of this study, several improvements can be made by the IT Service Desk manager to make the quality of SDKPedia better. The points that need more attention are system quality and information quality. Usability is one factor that influences system quality. Since SDKPedia was developed using a ready-to-use application that is available for free, with minimal features and customization, reengineering SDKPedia could be considered so that it is easier to use and has a complete search feature.
On the information quality variable, the indicators that are assessed indicate the quality of the information stored in SDKPedia. One of them is related to the completeness of the information available in SDKPedia. The completeness of this information is closely related to the awareness of workers in the IT Service Desk to actively contribute to record any information or knowledge they have. Therefore, management at the IT Service Desk can develop a strategy that focuses on increasing worker awareness to record information or knowledge on SDKPedia. For example, by creating an event with prizes for the most contributors of SDKPedia.
This study has both academic and practical implications. Academically, it enriches the references for evaluating knowledge management implementation in an organization or company and can support subsequent studies on knowledge management evaluation. Practically, it can serve as input for managers at the IT Service Desk of Bank XYZ to improve the quality of SDKPedia as the knowledge management system used to support daily work. In conducting the evaluation, this study used only the DeLone and McLean criteria to assess the technical aspects of SDKPedia; the social and socio-technical aspects were not considered, so the relationship between technical and socio-technical aspects in forming a good knowledge management system could not be discussed. This is the limitation of the study. In the future, evaluations should use additional criteria and cover not only technical but also socio-technical aspects, so that recommendations for improving the knowledge management used at the IT Service Desk of Bank XYZ can be more comprehensive. | 5,045.6 | 2023-07-31T00:00:00.000 | [
"Business",
"Computer Science"
] |
UAV-PDD2023: A benchmark dataset for pavement distress detection based on UAV images
The UAV-PDD2023 dataset consists of pavement distress images captured by unmanned aerial vehicles (UAVs) in China with more than 11,150 instances under two different weather conditions and across varying levels of construction quality. The roads in the dataset consist of highways, provincial roads, and county roads constructed under different requirements. It contains six typical types of pavement distress instances, including longitudinal cracks, transverse cracks, oblique cracks, alligator cracks, patching, and potholes. The dataset can be used to train deep learning models for automatically detecting and classifying pavement distresses using UAV images. In addition, the dataset can be used as a benchmark to evaluate the performance of different algorithms for solving tasks such as object detection, image classification, etc. The UAV-PDD2023 dataset can be downloaded for free at the URL in this paper.
Value of the Data
• The UAV-PDD2023 dataset, captured by UAVs, provides a basis for pavement distress detection using deep learning. It is highly useful for municipal authorities and pavement agencies to conduct low-cost pavement condition monitoring.
• The UAV-PDD2023 dataset is of great value for developing new deep convolutional neural network architectures or modifying existing architectures to enhance network performance. Researchers can utilize this data for algorithm training, validation, and testing, aiming to develop algorithms for pavement distress detection using UAVs.
• The dataset supports detection and classification of pavement distresses: Longitudinal cracks (LC), Transverse cracks (TC), Alligator cracks (AC), Oblique cracks (OC), Repair (RP) and Potholes (PH). It can be further expanded to include other distress categories.
• Researchers can utilize these datasets to benchmark the performance of various algorithms for addressing similar problems, such as image classification and object detection.
Data Description
The UAV-PDD2023 image dataset consists of 2440 images collected from China, with over 11,158 instances of pavement distresses. Pavement images were captured using a UAV. The effectiveness of UAVs in the health monitoring of civil infrastructure has been demonstrated [1,2]. To enhance the practicality of the dataset, it incorporates images of pavement distress captured during clear weather conditions as well as within an hour after rainfall. Additionally, the dataset encompasses images taken from roads with varying construction qualities. The roads in the dataset consist of highways, provincial roads, and county roads. This dataset includes annotations for six categories of distresses: Longitudinal cracks (LC), Transverse cracks (TC), Alligator cracks (AC), Oblique cracks (OC), Repair (RP) and Potholes (PH).
The criteria for defining different types of pavement distress are as follows. Transverse cracks are herein defined as fissures oriented perpendicularly to the road's central axis or the direction of pavement installation. On the other hand, longitudinal cracks are those that run parallel to the road's central axis or the direction of pavement. Cracks exhibiting angular dispositions within the range of 25 to 70 degrees with respect to the road's central axis are categorized as oblique (diagonal) cracks. Alligator cracks are distinguished by a sequence of interconnected fissures on the road surface; these fissures merge both longitudinally and transversely, resulting in multifaceted angular fragments reminiscent of the pattern observed on the back of an alligator. Repairs are segments of the road surface where fresh materials have been applied to replace and mend the preexisting pavement. Potholes, conversely, are depressions discernible on the road surface, typically assuming a concave, bowl-like shape; they are often characterized by sharp upper edges and vertical sides adjoining the upper rim of the cavity. Sample images of the dataset are shown in Fig. 1, and the directory structure of the dataset file is illustrated in Fig. 2. The dataset is divided into three folders: Annotations, ImageSets and JPEGImages. The Annotations folder contains XML files in PASCAL VOC format [3]. These files include information about the pavement distress types, as well as the coordinates of the bounding boxes. Fig. 3 shows an example of the annotation information in an XML file with one transverse crack and two oblique cracks. The <filename> describes the name of the annotated image, while the <size> indicates the dimensions and number of channels of the image. The <object> denotes the category and position of the bounding boxes. The original images are in the folder JPEGImages.
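As a small illustration of how such an annotation can be consumed, the sketch below reads one XML file with Python's standard library and collects the class names and bounding-box coordinates. The file name is hypothetical, and the field layout assumed here is the standard PASCAL VOC schema (<size>, <object>, <bndbox> with xmin/ymin/xmax/ymax).

```python
# Minimal sketch: parse one PASCAL VOC annotation from the Annotations folder.
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    root = ET.parse(xml_path).getroot()
    width = int(root.find("size/width").text)
    height = int(root.find("size/height").text)
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text  # e.g. "TC" for a transverse crack
        bb = obj.find("bndbox")
        coords = [int(float(bb.find(t).text)) for t in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append((name, coords))
    return (width, height), boxes

# Hypothetical file name; actual names follow the dataset's own convention.
size, boxes = read_voc_annotation("Annotations/000123.xml")
print(size, boxes)
```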
The ImageSets folder contains the names of the images used for training, testing, and validation. The images in the dataset are divided into training, validation, and test sets in appropriate proportions. The ratio of the test set to the trainval set is 2:8, where the trainval set refers to the combination of the training and validation sets. The trainval set is then further split into the training set and validation set in an 8:2 ratio. This split is provided in four files, "train.txt", "val.txt", "test.txt", and "trainval.txt", which are located within the "Main" folder inside the "ImageSets" folder, as shown in Fig. 2.
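The split ratios described above can be reproduced with a few lines of code; the sketch below shuffles image identifiers, holds out 20% as the test set, and then splits the remaining trainval set 8:2 into training and validation. The identifiers and the seed are assumptions for illustration.

```python
# Minimal sketch of the 2:8 test/trainval and 8:2 train/val split.
import random

def split_ids(image_ids, seed=0):
    ids = sorted(image_ids)
    random.Random(seed).shuffle(ids)
    n_test = round(0.2 * len(ids))
    test, trainval = ids[:n_test], ids[n_test:]
    n_val = round(0.2 * len(trainval))
    val, train = trainval[:n_val], trainval[n_val:]
    return train, val, test, trainval

# Hypothetical identifiers; in practice they come from the JPEGImages file names.
train, val, test, trainval = split_ids([f"{i:06d}" for i in range(2440)])
print(len(train), len(val), len(test), len(trainval))
```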
We overlaid the annotation boxes on the images. This approach aids in confirming the accuracy of label positions and boundaries, ensuring alignment with actual cracks. Fig. 4 shows the visualization result after the annotation was completed. The JPEGImages folder contains 2440 images with a resolution of 2592 × 1944 pixels. The images were taken from different types of roads, including highways, provincial roads, and county roads, satisfying the requirement of pavement distress detection on different road types. It is worth noting that the dataset also includes pavement images without distress. These images of well-maintained roads are included to facilitate false-positive evaluation for models developed for pavement distress detection.
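A check like the one behind Fig. 4 can be reproduced by drawing the parsed boxes back onto the corresponding image, for example with Pillow as sketched below; the paths are hypothetical and the boxes come from a parser such as the one shown earlier.

```python
# Minimal sketch: overlay annotation boxes on an image for visual verification.
from PIL import Image, ImageDraw

def draw_boxes(image_path, boxes, out_path):
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for name, (xmin, ymin, xmax, ymax) in boxes:
        draw.rectangle([xmin, ymin, xmax, ymax], outline="red", width=4)
        draw.text((xmin, max(ymin - 20, 0)), name, fill="red")
    img.save(out_path)

# Hypothetical usage with the boxes parsed from the matching XML file:
# draw_boxes("JPEGImages/000123.jpg", boxes, "checks/000123_boxes.jpg")
```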
Experimental Design, Materials and Methods
Images were captured using a camera installed on a UAV. We took into account the weather conditions and the quality of road construction when selecting flight routes, conducted surveys at multiple locations, and captured images from various types of roads.
The scale of a photograph is determined by the focal length of the camera and the flying height above the ground. To select the altitude for UAV flight, the heights of the structures in the shooting area, the width of the roads, and the size of the distresses in the images should be considered. Operating at an excessively high altitude may diminish the apparent size of pavement distresses in the imagery, which in turn affects recognition. To cover the entire width of the pavement, the minimum flying altitude is set by Eq. (1): H = f · W / a, where H represents the UAV flying altitude, f is the focal length of the camera, a is the camera sensor size, and W is the actual width to be covered. In this study, a is 28.2 mm, the focal length f is 47 mm, and W is 18 m, so the flight altitude H is calculated as 30 m. At this height, cracks are still visible and recognition is not affected.
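The relation H = f · W / a written above is reconstructed from the stated parameters; the short computation below is only a numerical check of that reading of Eq. (1).

```python
# Verify the flight-altitude relation H = f * W / a with the stated parameters.
f_mm = 47.0    # camera focal length (mm)
a_mm = 28.2    # camera sensor size (mm)
W_m = 18.0     # pavement width to be covered (m)

H_m = f_mm * W_m / a_mm
print(f"H = {H_m:.1f} m")  # ~30.0 m, matching the flight altitude used in the study
```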
During the capturing process, the camera of the UAV was positioned vertically downward, perpendicular to the ground. For stationary shooting, the UAV hovered at a fixed point to capture pavement images; for dynamic shooting, the UAV moved at 0.8 m/s. To diversify the dataset and include different weather conditions, some images were captured one hour after rain. The camera has a resolution of 20 megapixels, with image dimensions of 5184 pixels in width and 3888 pixels in height. This large image size was unsuitable for annotation and algorithm training, so each image was divided into four equally sized sub-images. After image screening, image mirroring, and other dataset augmentation techniques, a total of 2440 images were obtained. The cracks were labeled using the labelme tool in the PASCAL VOC format [3]. The annotations include the different types of distress labels and their locations in the images.
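Splitting each 5184 × 3888 capture into four equal sub-images (each 2592 × 1944, consistent with the image resolution reported above) can be done as sketched below; the file paths are hypothetical.

```python
# Minimal sketch: split one UAV capture into four equally sized quadrants.
from PIL import Image

def split_into_quadrants(image_path, out_prefix):
    img = Image.open(image_path)
    w, h = img.size                   # expected 5184 x 3888
    half_w, half_h = w // 2, h // 2   # 2592 x 1944
    origins = [(0, 0), (half_w, 0), (0, half_h), (half_w, half_h)]
    for i, (left, top) in enumerate(origins):
        tile = img.crop((left, top, left + half_w, top + half_h))
        tile.save(f"{out_prefix}_{i}.jpg")

# Hypothetical usage:
# split_into_quadrants("raw/DJI_0001.jpg", "JPEGImages/DJI_0001")
```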
Each annotated image was first enlarged to ensure that no cracks were overlooked. Some cases required judgment where target features were less evident. In Fig. 6(a), there is a short transverse crack in the area covered by the yellow layer; because it is a derivative of the main crack, lies very close to it, and runs parallel to it, the two cracks were labeled as one crack. In Fig. 6(b), in the area covered by the white layer, alligator cracks and a long crack are connected; they were labeled as a transverse crack and an alligator crack.
Six distress types were labeled for the collected images.After completing all the annotation processing tasks, the resulting dataset contains approximately 11,158 annotation boxes.
Collected Data Analysis
Traditional methods of pavement distress identification, relying on manual observation or pavement inspection vehicles, often involve capturing images from relatively close proximity to the ground [4], resulting in limited image information that hampers model training. Utilizing UAVs for pavement distress detection offers advantages such as cost-effectiveness and environmental friendliness; compared with traditional time-consuming and costly pavement inspection methods, this approach can effectively reduce road maintenance expenses [5].
In this study, a UAV equipped with a high-resolution camera was used to capture pavement distress images for training deep learning models, and the best image quality was achieved through reasonable flight settings. Compared with other pavement distress images captured from a relatively close distance with a ground camera [6,7], the images collected by the UAV are taken from a higher altitude and encompass more lanes, significantly improving the efficiency of pavement inspection. Additionally, this study captured images of pavement distresses under two different weather conditions and various road types, enhancing the model's generalization capability. The dataset categorizes pavement distress into six distinct types and annotates them according to the PASCAL VOC format, simplifying data processing for researchers and enabling developers to focus more on algorithm development. Scholars can utilize this dataset to explore improved algorithm models for UAV image recognition and to develop UAV-based pavement inspection systems.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article.
Fig. 3. Example of the annotation information in an XML file. | 2,293.2 | 2023-10-01T00:00:00.000 | [
"Computer Science"
] |
Biological and Physical Properties of a β-Endorphin Analog Containing Only D-Amino Acids in the Amphiphilic Helical Segment 13-31*
(Received for publication, January 3, 1984)
Jacky P. Blanc and Emil Thomas Kaiser
From the Laboratory of Bioorganic Chemistry and Biochemistry, The Rockefeller University, New York, New York 10021

Our approach to the modeling of β-endorphin has been based on the proposal that three basic structural units can be distinguished in the natural peptide hormone: a highly specific opiate recognition sequence at the N terminus (residues 1-5) connected via a hydrophilic link (residues 6-12) to a potential amphiphilic helix in the C-terminal residues 13-31. Our previous studies showed the validity of this approach and have demonstrated the importance of the amphiphilic helical structure in the C terminus of β-endorphin. The present model, peptide 5, has been designed in order to evaluate further the requirements of the amphiphilic secondary structure as well as to determine the importance of this basic structural element as compared to more specific structural features which might occur in the C-terminal segment. For these reasons, peptide 5 retains the three structural units previously postulated for β-endorphin; the major difference with regard to previous models is that the whole C-terminal segment, residues 13-31, has been built using only D-amino acids.
In aqueous buffered solutions as well as in 2,2,2-trifluoroethanol-containing solutions, the CD spectra of peptide 5 show the presence of a considerable amount of left-handed helical structure. Enzymatic degradation studies employing rat brain homogenate indicate that peptide 5 is stable in this milieu. In δ- and µ-opiate receptor-binding assays, peptide 5 shows a slightly higher affinity than β-endorphin for both receptors while retaining the same δ/µ selectivity. In opiate assays on the guinea pig ileum, the potency of peptide 5 is twice that of β-endorphin. In the rat vas deferens assay, which is very specific for β-endorphin, peptide 5 displays mixed agonist-antagonist activity. Most remarkably, peptide 5 displays a potent opiate analgesic effect when injected intracerebroventricularly into mice. At equal doses, the analgesic effect of peptide 5 is less than that of β-endorphin (10-15%) but longer lasting. In conjunction with our previous model studies, these results clearly demonstrate that the amphiphilic helical structure in the C terminus of β-endorphin is of predominant importance with regard to activity in rat vas deferens and analgesic assays. The similarity between the in vitro and in vivo opiate activities of β-endorphin and peptide 5, when compared to the drastic change in chirality in the latter model, demonstrates that even a left-handed amphiphilic helix formed by D-amino acids can function satisfactorily as a structural unit in a β-endorphin-like peptide.

*This research was supported in part by United States Public Health Service Program Project Grant HL-18577 and by a grant from the Dow Chemical Company Foundation. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
It has been proposed that peptides which bind to amphiphilic surfaces such as phospholipid vesicles, membranes or receptors, will themselves possess regions of amphiphilic secondary structure complementary to those of the target surfaces (2)(3)(4)(5)(6)(7)(8). One of these particular secondary structures, the amphiphilic helix, has been examined in several studies (4, 8-11) where synthetic peptides, a priori designed to form such a structure, have provided considerable information on the relationship between the general characteristics of the helical structure and the particular physical and biological properties of the respective peptides. In examining various peptide sequences suitable for a similar structural approach, we focused our attention on β-endorphin, a 31-residue peptide hormone with potent opiate activities (Fig. 1). We have proposed (1) that three separate regions can be distinguished in the natural sequence: a highly specific opiate recognition site in the N-terminal residues 1-5, identical to Met-enkephalin; a hydrophilic spacer region in residues 6-12; and a 16-residue sequence between Pro-13 and Gly-30 capable of forming an amphiphilic α- or π-helix (Fig. 2). The hydrophobic domain resulting from the formation of the helical structure covers one-half of the helix surface and, in the α-helical conformation, is continuous and twists along the length of the helical axis.
In order to investigate our hypothesis, four peptide models of β-endorphin were synthesized (Fig. 1) and their physical and biological properties were determined (12-14). All four peptides were able to reproduce many of the properties of β-endorphin. When differences were noticed among the properties of the model peptides, they could be rationalized on the basis of the presence of an amphiphilic helical segment in the C terminus of the natural molecule. Our results led to the conclusion that the C terminus of β-endorphin does not have a highly specific function in binding to the µ- and δ-opiate receptors or in the activity on the GPI. Concerning the properties which are related to the potential amphiphilic segment, the presence and the shape of a hydrophobic domain strongly influence: (a) the formation of a helical structure as well as its inherent stability; (b) the self-association properties of the peptides; and (c) their resistance toward enzymatic degradation.
We have also demonstrated, with the study of peptide 4, that the amphiphilic character of the helical region in residues 13-31 is of critical importance in the specific interaction of β-endorphin with the ε-receptor (14).
It is not clear why peptide 3 is 15-30% as potent an analgesic agent as β-endorphin (13), while peptides 1 and 2 show no measurable analgesic potency. One of the possibilities is that a π-helix is the conformation required to induce an antinociceptive effect. Another possibility is that the twisted hydrophobic domain of an α-helical form is required for analgesic activity. On both helix surfaces, additional specific side chain interactions, for example an aromatic residue in position 27, may also be important.
We have now investigated the properties of a new compound, peptide 5. For this model, the natural sequence of β-endorphin was retained in residues 1-12 (Fig. 1), but the whole 13-31 segment was built using only D-amino acids and should, in a left-handed α-helical conformation, form an amphiphilic structure fairly closely related to the one found in β-endorphin or peptide 3 (Fig. 3). The results reported here fully confirm our hypotheses and, in relation to our previous studies (4)(5)(6)(7)(8)(9)(10)(11)(12)(13)(14), demonstrate the versatility as well as the usefulness of our general structural approach in the understanding of the physical and pharmacological properties of a variety of biologically active peptides.

Peptide Synthesis and Purification-The solvents and reagents used for the synthesis were purified according to standard published methods (15). Boc derivatives of the amino acids were purchased from Peninsula or Bachem. All the protected amino acids were assayed for homogeneity by TLC. The Boc derivatives of the amino acids employed were as follows: L- and D-glutamine, glycine, D-leucine, Nε-2-chlorobenzyloxycarbonyl-L- and D-lysine, L-methionine, L- and D-phenylalanine, D-proline, O-benzyl-L-serine, O-benzyl-L-threonine, and O-2,6-dichlorobenzyl-L-tyrosine.
Chloromethylated, 1% cross-linked, styrene-divinylbenzene copolymer (0.67 mmol Cl/g) was esterified using Boc-D-glutamine and anhydrous KF (16). By picric acid titration, the substitution level was found to be 0.403 mmol of Boc-D-Gln/g of resin. Using 2.48 g of resin, which corresponds to 1 mmol of the first amino acid, the synthesis of the fully protected peptide proceeded according to automated solid phase methods described previously (17). Cleavage of the peptide from the polymeric support and deprotection was carried out by reaction with anhydrous HF in the presence of anisole at 0 °C (18). The peptide was then extracted from the peptide and resin mixture with 20 and 50% aqueous acetic acid containing 5% dithiothreitol and the extracts were lyophilized. This material was gel-filtered through a Bio-Gel P-2 column (1% AcOH, 1 mM dithiothreitol), and the fractions eluted in less than 2 times the void volume were pooled and lyophilized. This peptide mixture was then subjected to ion exchange chromatography on CM-Sephadex C-25 (50 mM KCl, 50 mM sodium borate, pH 8.3, containing 20% formamide) using a linear gradient of 0.15-0.7 M NaCl. By monitoring the absorbance at 270 nm, a major peak eluting in the middle of the gradient was collected and desalted on a Bio-Gel P-2 column (1% AcOH, 1 mM dithiothreitol). Further purification of peptide 5 was achieved by reverse phase HPLC on an Altex C18 semipreparative column. The desalted solution was concentrated and loaded onto the column in portions of 1-2 ml. After washing out the solvents with a 0.02 M sodium phosphate buffer, 0.1 M in sodium perchlorate, pH 2.6, containing 20% acetonitrile, a gradient of 40 to 43% CH3CN in the same buffer was applied over 22 min at a flow rate of 3.0 ml/min. Peptide 5 was eluted near two-thirds of the gradient and a base-line separation was achieved from the other impurities present. After desalting (Bio-Gel P-4, 1% AcOH) of the material collected by HPLC and lyophilization, peptide 5 was obtained with a high purity. Analytical HPLC on an Altex C18 column eluting at 1.5 ml/min with a gradient of 38-43% CH3CN in 20 min (buffer: 0.02 M sodium phosphate, 0.1 M sodium perchlorate, pH 2.6) showed a major symmetrical peak at 210 nm with no detectable impurities. The overall yield was 3% based on the initial Boc-D-Gln-resin substitution level. For amino acid analysis,

Opiate Receptor Binding-The affinities of peptide 5 for brain opiate receptors were compared to those of β-endorphin by determining the ability of these peptides to inhibit the specific binding of [3H]DADL, 0.6-0.7 nM (δ-receptor assay), or [3H]DHM, 0.5-0.6 nM (µ-receptor assay), to guinea pig brain whole membrane preparations. The procedure was essentially the same as previously described (13,19).
Opiate Assays on the Guinea Pig Ileum and Rat Vas Deferens-GPI (20) and RVD (21) opiate assays were performed according to established procedures, using white female Hartley guinea pigs (400-500 g) and white male Sprague-Dawley rats (250-300 g). Assays were performed essentially as described previously (12,13). In the GPI assays, the tissues were suspended between the electrodes at 0.5-1.0 g tension and subjected to electrical pulses of 1.2-ms duration at 80 V and 0.1 Hz. In the RVD assays, the isolated tissues were suspended at 0.2-g tension and stimulated with electrical pulses of 0.7-ms duration at 70 V and 0.1 Hz. In GPI or RVD assays of β-endorphin, the time allowed for re-equilibration was kept at a minimum (2-5 min) to reduce possible enzymatic degradation. In RVD assays, tissues were much slower to respond fully to additions of peptide 5, and the time allowed for re-equilibration was frequently greater than 30 min, since no indication of reversal of the opiate effect due to degradation of the peptide was observed. Because of the mixed agonist-antagonist behavior of peptide 5 in RVD assays, its agonist dose-response curve was determined by single-dose challenges (22)(23)(24)(25)(26).
Resistance to Proteolysis-The relative resistance of peptide 5 toward degradation by proteolytic enzymes endogenous to rat brain was determined by the same method as previously described (12,13). Aliquots corresponding to the various time points were analyzed for peptide 5 by loading 100-200 µl onto an Altex C18 analytical HPLC column fitted with a guard column and previously equilibrated with 0.02 M sodium phosphate buffer, pH 2.6, 0.1 M sodium perchlorate/acetonitrile (58:42, v/v). Peptides were eluted isocratically with 42% CH3CN at a flow rate of 1.5 ml/min. The amount of peptide 5 was quantitated by integration of its absorbance peak at 210 nm, relative to standard samples in water.
Analgesic Assays-β-Endorphin and peptide 5 were tested for their antinociceptive properties by the hot plate method (27). Experiments were performed as described previously (13). Peptides were injected intracerebroventricularly (28) in 5 µl of 0.5% saline solution. Naloxone was administered subcutaneously as a single dose of 30 µg in 100 µl of saline solution.
The analgesic effect on each mouse was calculated at each time point after injection using the equation: % analgesia = [(PL − CL)/(60 s − CL)] × 100, where CL is the mean control latency and PL is the postinjection latency.
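As a small worked example of this formula, hypothetical latencies of CL = 12 s and PL = 36 s with the 60-s cutoff give (36 − 12)/(60 − 12) × 100 = 50% analgesia; the snippet below simply encodes that arithmetic.

```python
# Worked example of the % analgesia formula (latencies in seconds, 60-s cutoff).
def percent_analgesia(control_latency, postinjection_latency, cutoff=60.0):
    return (postinjection_latency - control_latency) / (cutoff - control_latency) * 100.0

print(percent_analgesia(control_latency=12.0, postinjection_latency=36.0))  # 50.0
```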
Design of Peptide 5
With this new model, we decided to assess the importance of the secondary helical structure by reproducing it using a sequence of only D-amino acids. In this regard, peptide 5 has been designed to retain the natural sequence of β-endorphin in residues 1-12, but in the C-terminal segment 13-31 only D-amino acids have been used (Fig. 1). The previously studied peptide 3 (13) was used as a basis for the construction of the hydrophobic domain of peptide 5. Assuming a left-handed α-helical conformation for the 13-31 segment of peptide 5, the respective positions of the D-amino acids in the sequence have been chosen so that they will allow the formation of an amphiphilic α-helix with a hydrophobic domain fairly similar in size and in shape to that of peptide 3 (Fig. 3). The same standard set of amino acids used in previous studies was again utilized: leucines as hydrophobic residues, glutamines as neutral hydrophilic residues, lysines as basic hydrophilic residues. The respective amounts of each of these amino acids were chosen in order to provide a high helix-forming potential (29) and to retain the overall hydrophobic-hydrophilic balance of the natural compound.
Peptide Synthesis and Purification
Peptide 5 was synthesized by standard solid phase methods and was purified by gel permeation chromatography, ion exchange chromatography and reverse phase HPLC. The final product was homogeneous by analytical reverse phase HPLC, had the expected amino acid composition, and showed the correct sequence by Edman degradation.
CD Studies
The CD spectra of peptide 5 (0.02 M sodium phosphate, pH 7.4, 0.16 M KCl) definitely show the presence of a left-handed α-helical structure, with two distinct maxima at 222 and 208 nm (30) (Fig. 4). Because of the lack of consistent data in the literature on the values of the mean residue ellipticities for this particular structure (30-34), we did not think it reasonable to calculate the relative contributions of the different secondary structures. However, from the simplicity of the CD spectra obtained, we think that no major structures other than α-helix and random coil are to be found in peptide 5.
A concentration dependence of the mean residue ellipticity at 222 nm over the range 1.0 × 10⁻⁵ to 2.0 × 10⁻⁴ M is indicative of a self-associative process (35) (Fig. 4). The experimental results can be fitted by an equation describing either cooperative trimerization or tetramerization. Since, in these calculations, the experimental values for [θ]₂₂₂ may be affected by variations in the L- and D-amino acid random coil structures, the results should be considered only as suggestive. The effects of various proportions of TFE on 1.0 × 10⁻⁵ M solutions of peptide 5 were studied. A net increase of the mean residue ellipticities at 222 and 208 nm indicates an augmentation of the helical structure at higher TFE concentration, the largest change being observed between 5 and 20% TFE (Fig. 4).
Opiate Receptor Binding
The abilities of peptide 5 to inhibit binding of [3H]DADL or [3H]DHM to the membranes were compared to those of β-endorphin in the same assay. At the concentrations used in these experiments, [3H]DADL should label δ-receptors selectively and [3H]DHM should label µ-receptors selectively (36,37). The affinity of peptide 5 for either µ- or δ-opiate receptors was almost identical to that of β-endorphin, with the same δ/µ selectivity. The results for peptide 5 are compared with those previously obtained for other models in Table I.
Opiate Activities on the Guinea Pig Ileum and Rat Vas Deferens
The opiate activities of peptide 5 were determined on isolated GPI and RVD preparations in Krebs-Ringer solution at 37 °C. The results are compared to those of previous models in Table II. GPI-Peptide 5 was able to inhibit almost completely the electrically stimulated contractions of this tissue. This effect was totally reversed by the opiate antagonist naloxone (1-2 µM), indicating that the model peptide was acting directly on opiate receptors. The rate of response to an added dose of peptide 5 was in the range of 2-4 min, which is similar to that previously observed for either peptide 3 or β-endorphin (12,13). Over a period as long as 30 min, no spontaneous reversal of the opiate effect was observed, indicating an apparent stability of peptide 5 toward enzymatic degradation. No antagonist effect of peptide 5 was observed on β-endorphin action; thus, the peptides were acting in an additive way. The IC50 value determined for peptide 5 by probit analysis of the dose-response curves was 29.5 ± 11.8 nM, which is almost identical to the value previously reported for peptide 3 (13).
RVD-Compared to previous model peptides, the action of peptide 5 on this tissue was more complex. In 30% of the experiments, some sort of temporary tetanization of the muscle could be observed on the first addition of a dose of peptide 5. This effect was not prevented by naloxone, and muscles subsequently responded to the action of either peptide 5 or β-endorphin in the same way as the ones which did not show this type of behavior.
The most important difference, as compared to previous models, was that peptide 5 displayed a mixed agonist-antagonist activity. For this reason, its agonist dose-response curve has been determined by single-dose challenges, i.e. between each tested dose, the muscle was washed several times and re-equilibrated at the maximal amplitude of the electrically stimulated twitches. The dose-response curves obtained by that method were submitted to probit analysis, and the IC50 was determined as 225 ± 51 nM. This opiate agonist effect of peptide 5 on the RVD was fully reversed by micromolar concentrations of naloxone (Fig. 5).
The antagonist potency of peptide 5 was assayed by testing the activity of an identical dose of β-endorphin on the same muscle pretreated or not with peptide 5. A dose-related inhibition of the action of β-endorphin could be observed (Fig. 5), and, at a concentration of 150 nM, peptide 5 was able to antagonize almost 50% of the effect of β-endorphin. The results are summarized in Table III.
Resistance to Proteolysis
The resistance of peptide 5 toward proteolysis by enzymes endogenous to rat brain was assayed as described under "Experimental Procedures." HPLC analysis of aliquots withdrawn at different incubation times showed that the recovery of peptide 5 from the incubation mixture was poor, but that the amount recovered was almost identical at any time and accounted for approximately 10% of the standard. This constant recovery suggests that, under these conditions, peptide 5 is strongly resistant to enzymatic degradation. The same phenomenon of poor recovery has been observed for all our β-endorphin analogs, the maximum recovery varying from 15 to 30% (12)(13)(14). One of the likely explanations is that, for these amphiphilic peptides, very substantial nonspecific binding to membranes or other substances present in the incubation mixture is occurring. However, a partial and rapid degradation, followed by product inhibition of the proteolytic enzymes involved, has not been ruled out.

Table II (caption only): Opiate agonist activities of β-endorphin and peptides 1-5 in guinea pig ileum and rat vas deferens assays. Data were obtained by probit analysis of the dose-response curves. Footnotes: the ratio is expressed as % inhibition on pretreated tissue / % inhibition on untreated tissue; on a tissue not pretreated with peptide 5, such a dose of β-endorphin already caused a maximal inhibitory effect; the effect of a dose of β-endorphin identical to the one used on the untreated tissue was too low to be accurately measured.
Analgesic Assays
The antinociceptive properties of peptide 5 were determined by icv injection into mice and subsequent testing of the latency of pain perception using a hot plate assay at 55 °C. Peptide 5 caused significant and long lasting analgesia at doses of 1, 3, 10, and 20 µg/mouse. The maximal effect was usually attained 40-60 min after injection and thereafter was slowly reversed. Subcutaneous injection of naloxone 41 min after administration of peptide 5 caused a complete reversal of the antinociceptive effect as tested 20 min later. This demonstrates that the analgesia was mediated by opiate receptors and not by other nonopiate pathways. In these naloxone-treated mice, analgesia returned 60 min later to the level of the untreated group, presumably because naloxone was cleared from the central nervous system more rapidly than peptide 5. These effects are illustrated in Fig. 6 by the results obtained for 10-µg doses of peptide 5.
In addition to its analgesic effect, peptide 5 produced a number of other opiate-like behavioral effects that have been observed for β-endorphin (38), including explosive motor behavior, Straub tail, and catalepsy. The cataleptic state persisted during the first 3 h in mice injected with 10 and 20 µg of peptide 5. Fig. 7 compares the time courses of analgesia for a 3-µg dose of β-endorphin, peptide 5, and peptide 3. The maximal effect for β-endorphin was already observed 20 min after injection. It diminished rapidly thereafter, and almost no significant effect could be detected 60 min after injection.
Their maximal effect was attained in about 60 min and later a slow diminution of this effect occurred during the next 90 min. The potency of peptide 5 is compared to those of β-endorphin and peptide 3 in Fig. 8, using the maximum effect observed for each dose of each peptide, regardless of the time after injection. This figure shows that peptide 5 has a lower potency than β-endorphin for producing analgesia in mice but, as opposed to peptide 3, that its efficacy is comparable to that of the natural compound. The potency of peptide 5 for inducing analgesia can be estimated as being 10 to 15% of that of β-endorphin.

DISCUSSION

Our hypothesis (1) that there are three separate structural regions in β-endorphin has now been investigated by studying five model peptides. Previous reports had provided ample evidence that the enkephalin segment was an absolute requirement (21,(39)(40)(41)), and our studies (12)(13)(14) have shown that the proposed spacer region (residues 6-12) did not have a very specific function with regard to the examined physical and biological properties of the model peptides. Therefore, our main interest has been the modeling of the potential amphiphilic helical segment in the C terminus of β-endorphin, and the results of our previous studies showed that the amphiphilic helix has a specific function with regard to analgesia and its interaction with the ε-opiate receptor.
The questions that we tried to address with peptide 5 are more complex. Carrying our hypothesis to its limit, if an amphiphilic helix is the major structural determinant in the C-terminal segment of β-endorphin, an analog designed to possess a similar feature should display some activity even if this secondary structure is formed by a D-amino acid sequence.
A consequence of using amino acids of the D-configuration should be that if there is any important stereospecific requirement in the corresponding region of the natural compound, the results obtained for the analog should point it out quite clearly.
Examination of the CD spectra of peptide 5 reveals two major points: 1) in aqueous buffered salt solutions, this peptide shows a considerable amount of left-handed α-helical structure which increases on addition of TFE (Fig. 4); and 2) a dependence of [θ]₂₂₂ on peptide 5 concentration is observed between 1 × 10⁻⁵ and 2 × 10⁻⁴ M (Fig. 4). Several structure-promoting agents, including TFE, have been shown to induce helical structure in β-endorphin (22,(42)(43)(44)), and a correlation between helicity in the C terminus of β-endorphin and in vitro opiate activities has been reported (45,46). The results obtained for peptide 5 demonstrate the ability of this model peptide to adopt the same preferred helical conformation as β-endorphin or peptide 3, although of opposite handedness, in a medium chosen to mimic to a certain extent the environment of the opiate receptor.
In both δ- and µ-opiate receptor binding assays, peptide 5 was as potent as β-endorphin and approximately 8 times less potent than peptide 3 (Table I). These results are in good agreement with the conclusions of our previous studies (14), and the small variations in binding affinities observed either for reported β-endorphin analogs (47) or for models as different as peptides 1, 3, 4, and 5 (Fig. 1) definitely show that the C-terminal segment of β-endorphin plays a nonspecific role in the binding to these types of opiate receptors.
Previous reports (39,40,46) as well as our own work on β-endorphin analogs (12,13), in particular the study of peptide 4 (14), had led us to the conclusion that the GPI assays were relatively insensitive to changes in the C-terminal region of β-endorphin. The observation that peptide 5 displays the same potency as peptide 3 and is only three times less potent than peptide 4 in this assay (Table II) shows again that, in this case, a structural element like the amphiphilic α-helix is not necessary in the C terminus of β-endorphin. Moreover, these results strongly suggest that no specific interaction of this particular region of the molecule is needed to induce an opiate effect on the GPI.
In view of the drastic change in chirality in the 13-31 segment, it is remarkable that peptide 5 displays potent activity in the RVD assay, which is very specific for β-endorphin (21,40,41). The naloxone reversibility of this agonist effect clearly demonstrates that it is mediated by opiate receptors. The agonist IC50 of 225 ± 51 nM is almost identical to that obtained for peptide 3 (Table II), and such a low value definitely indicates that peptide 5 is a β-endorphin analog as opposed to either a morphine or enkephalin analog, for which the lower values are in the range of 4000 to 5000 nM (21,41). Further support for this fact can be derived from the antagonist activity of peptide 5 (Table III) which, to our knowledge, is the first opiate reported to display such a behavior. One likely explanation of the mixed agonist-antagonist activity of peptide 5 is that this peptide and β-endorphin bind competitively to the same receptor, but that their respective efficacy in turning the receptor into an active state is markedly different.
Since peptide 4 is poorly, if at all, amphiphilic, and its action on the RVD has been shown not to be mediated by the ε-receptor (14), the results obtained for peptide 5 clearly demonstrate that an amphiphilic helix is a prerequisite for β-endorphin to interact with the ε-opiate receptor.
The reported potencies of β-EP1-23 (~200 nM), β-EP1-21 (>2000 nM), and shorter N-terminal fragments (>50,000 nM) (41,43) strongly suggest that the recognition site for the ε-receptor in the C terminus of β-endorphin does not lie in residues 24-31. The lower potency of peptide 5, when compared to β-endorphin or peptide 1, is in agreement with the previously proposed (13,35,48,49) importance of the presence in position 18 of an aromatic moiety, which is a prominent feature on the helix surface, and could be necessary in order to induce full agonist activity.
One of the most striking properties of peptide 5 is its analgesic effect when injected icv into mice (Fig. 6). The time course of analgesia is different for β-endorphin and peptide 5, the effect of the latter compound being very similar to that of peptide 3 (Fig. 7). One explanation for the slower onset of the analgesic effect of peptide 5 can be derived from the proteolysis experiments, where the low recovery of this peptide from rat brain homogenate could be due to nonspecific tissue binding. This same phenomenon, in the analgesic assay, could prevent the model peptide from diffusing throughout the brain as rapidly as β-endorphin. Presumably, the slower diminution in potency observed for peptide 5 compared to β-endorphin is due to the greater resistance of the model compound toward enzymatic degradation, as shown in the in vitro proteolysis experiments. However, the apparent stability of peptide 5 in the latter experiments still contrasts with even the slow decrease in its analgesic potency. Nevertheless, the fact that peptide 5 displays a longer lasting analgesic activity than β-endorphin illustrates the potential utility of our structural approach in the development of stable synthetic hormones.
The analgesic potency of peptide 5 can be estimated as being 10-15% of that of β-endorphin. This indicates that some of the specificity of β-endorphin for producing analgesic effects has been lost in the design of the model peptide. As pointed out earlier (13), potential amphiphilic α-helical structures are ubiquitous in the C-terminal regions of β-endorphin analogs that have potent analgesic activity, including all species variants that have been tested (50). The most remarkable finding that emerges from the analgesic study of peptide 5 is that this peptide, which not only is highly nonhomologous to β-endorphin in its C terminus, but whose whole 13-31 segment consists of D-amino acids, displays a potency almost equivalent to that of other analogs with very minor changes in the C terminus (39, 51-56). This result makes it evident that an amphiphilic α-helical structure, be it formed by L- or D-amino acids, is a predominant factor with regard to the analgesic activity of β-endorphin. We pointed out earlier (13) that, on the helix surface, similar features can be found in the various natural β-endorphins, for example, the presence of an aromatic residue at the C terminus of the hydrophobic domain, surrounded by basic residues. The lower analgesic potency observed for peptide 5, as compared to that of β-endorphin, could be due to a decreased ability of a left-handed helical structure to accommodate properly some of these features.
In conclusion, the study of peptide 5, as part of our general structural approach, has shown how, with a small number of synthetic peptides, it has been possible to examine thoroughly the structure-function relationships in a naturally occurring peptide hormone. For β-endorphin, this has been done by considering the natural polypeptide in terms of three structural domains with different particular characteristics. Combining specific design for each different model with the study of several physical and pharmacological properties having various degrees of specificity for β-endorphin, it has been possible to determine the different contributions of the particular domains to the overall activity profile of the natural molecule. The similarity between the in vitro and the in vivo opiate activities of peptide 5 and β-endorphin, when compared to the nonhomology in amino acid composition and most importantly to the drastic change in amino acid chirality, is a striking proof that, in biologically active peptides of moderate length, the structural factors are not to be overlooked. In this view, the structural principles outlined here and in our previous studies should have many applications in the understanding of other peptide hormones and in the development of stable synthetic biologically active peptides with a high specificity of action. | 7,483 | 1984-08-10T00:00:00.000 | [
"Biology",
"Chemistry"
] |
The Benefits of the ZnO/Clay Composite Formation as a Promising Antifungal Coating for Paint Applications
Featured Application: Herein, we provide an inorganic composite as a paint preservative for antifungal applications. To a commercial paint additive, we add synthesized ZnO nanoparticles in order to improve the antimicrobial response. The main aim of this study is to generate a cost-efficient, eco-friendly material without risk to human health. We have developed a method for the controlled dispersion of ZnO nanoparticles on the paint additive through a cooperative assembly-directed process at room temperature. The antifungal response of the inorganic composite is tested against the common fungus Aspergillus niger. In this work we take a step further, studying the developed composite in a paint application. We believe that this paper can serve as a turning point in the search for alternative preservatives against fungi in different environments, from industrial to hospital. Abstract: The presence of mold is a serious problem in different environments, such as industrial, agricultural, hospital and household settings, especially for human health. Large quantities of mold spores can potentially cause allergic reactions and respiratory problems. Therefore, it is essential to keep buildings free of fungi without harming human health and the environment. Here, we propose a composite of modified bentonite clay and ZnO nanoparticles as an alternative antifungal preservative. The new composite is obtained by an easy and eco-friendly method based on a dry nanodispersion, without altering the properties of each material. The antifungal test reveals a robust response against fungi thanks to the contribution of the ZnO nanoparticles. Our results reveal that the antifungal activity of the ZnO/Clay composite is governed by both a uniform distribution and an adequate concentration of the ZnO nanoparticles on the clay surface. Specifically, we find that for concentrations below 10 wt.% of ZnO nanoparticles, the nanoparticles are well dispersed on the clay, giving rise to an excellent antifungal response. By contrast, when the concentration of ZnO increases, the formation of ZnO agglomerates on the clay surface is favored. This effect causes the antifungal behavior to change toward a more moderate improvement. Finally, we have demonstrated that this composite can be used as a promising paint preservative for antifungal applications.
Introduction
Fungicide use is extensive in different environments such as industrial, agricultural, hospital and household. Its applications range from the protection of seed grain during storage, suppression of temperature. The antifungal response of the inorganic composite is tested against the common fungi, Aspergillus niger. Thanks to excellent results, this composite can be used in a wide range of applications. Therefore, we go one step further on potential applications by testing the antifungal activity of ZnO composite in a waterborne paint matrix.
Materials and Methods
Size modification process of the micrometric ZnO via chemical route. All the chemicals were used directly without further purification. The micrometric ZnO (microZnO) used in the reaction consists of hexagonal prisms with lengths of 1-2 µm, see Figure S1a,b (Supplementary Materials). First, 6 wt.% micrometric zinc oxide (ZnO, Asturiana de Cinc S.A., Arnao, AS, Spain) was added to 3.6 mol of glycerol (Sigma-Aldrich, Madrid, MAD, Spain) under stirring at room temperature. When the suspension was homogenized, 3.6 mol of urea (CO(NH2)2, Sigma-Aldrich, Madrid, MAD, Spain) was added. Subsequently, the reaction was heated in a silicone bath at 120-140 °C for 2 h with continuous agitation at 300 rpm. The role of urea was to provide the reaction with ammonia and CO2 from its decomposition. The presence of ammonia generates a basic pH during the reaction. Under these conditions, the Zn species reacted with H2O and high-pressurized CO2 to create a hydrozincite phase (see Figure S1b (Supplementary Materials)). It was demonstrated that the hydrozincite (that is, the reaction intermediate) contains a high proportion of porosity, which should be produced by its nano-sheet morphology. After natural cooling, the precipitate was isolated by filtration and washed with water and ethanol several times to remove impurities. The white powder was dried at 80 °C for 24 h. Finally, the product was thermally treated at 500 °C for a short time, 5 min, in air. Note that the heat treatment at 500 °C produces a phase transformation from hydrozincite to ZnO (see Figures S1b,c and S2 (Supplementary Materials)), which proceeds faster and more completely at temperatures above 400 °C [26]. Owing to the nano-sheet morphology of the hydrozincite, the heat treatment at 500 °C generates cracks in its structure, which expand, leading to the formation of ZnO nanoparticles in order to decrease the surface energy. The experimental details are schematically shown in Figure S1c (Supplementary Materials).
Formation Process of Inorganic Composites. The modified clay (Clay) was supplied by NanoBioMatters Industries S.L. (Paterna, VAL, Spain). The main composition of the modified clay is bentonite clay (68.1%-78.1%), modified with 20-30 wt.% of hexadecyltrimethylammonium bromide (C19H42BrN) and 1.9 wt.% of silver. The inorganic composite (ZnO/Clay) was obtained by combining the modified clay with different percentages of ZnO (from 2 to 60 wt.%) using the dry dispersion methodology. The advantages of using dry dispersion are that it ensures both the stability of the inorganic composite and the homogeneous distribution of ZnO in spite of the low dose.
Paint preparation. The selected paint was supplied by Xylazel S.A. (Madrid, MAD, Spain). The formulations of waterborne paint were treated with 0.5 wt.% of the inorganic composite ZnO/Clay.
Characterization. Crystalline phases were characterized by X-ray diffraction (XRD, X'Pert PRO Theta/2theta, PANalytical, The Netherlands). The pattern was recorded over the angular range 20°-70° (2θ) with a step size of 0.0334° and a time per step of 100 s, using Cu Kα radiation (λ = 0.154056 nm), with a working voltage of 40 kV and a current of 100 mA. The morphology of the samples was evaluated using primary electron images from field emission scanning electron microscopy (FE-SEM, Hitachi S-4700, Tokyo, Japan). An image processing and analysis program (Leica Qwin, Leica Microsystems Ltd., Cambridge, UK) was used to determine the average particle size from the FE-SEM micrographs. In addition, detailed morphology and crystal structure of the sample were evaluated using a transmission electron microscope (TEM/STEM, JEOL 2100F, Tokyo, Japan) operating at 200 kV and equipped with a field emission electron gun providing a point resolution of 0.19 nm. For TEM sample preparation, the particles were carefully suspended in ethanol and dispersed using an ultra-sonication bath for 10 min. The suspension was dropped on a lacey carbon copper TEM grid. After the evaporation of ethanol, the particles were kept on the grid.
Antifungal activity test. The pathogenic fungus selected for testing is Aspergillus niger (CECT 2807). Antifungal tests were performed by the Bauer-Kirby disk diffusion assay with some modifications. First, the culture of A. niger (initial concentration of 5.20 × 10⁷ spores/mL) was inoculated on the surface of Petri dishes. After that, filter paper disks were impregnated with a 0.5% suspension of the different ZnO/Clay inorganic composites. Then, the inoculated Petri dishes were incubated at 37 °C for 7 days. The effectiveness of the ZnO/Clay samples was evaluated by measuring the inhibition diameter of the grown fungus in the Petri dish. For the paint antifungal activity, no filter paper disks were used; the paint was in direct contact with the fungal culture. Two kinds of tests were performed to determine the antifungal activity of the modified paint, one with fresh paint and another with dry paint. The dry paint was obtained after a 90 °C thermal treatment for 24 h. All tests were performed in triplicate and the values were expressed in millimeters.
Initial Premises: High Antimicrobial Response of the Nanoparticulated ZnO
The particle size of the micrometric ZnO is modified by a chemical process, and the obtained material is then thermally treated at 500 °C for a short time, 5 min, in air (the reader can find more information about the ZnO synthesis in the Materials and Methods section). The obtained ZnO is structurally characterized by X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). The XRD pattern of the obtained ZnO shows a single crystalline structure (Figure S2a (Supplementary Materials)). The position and intensity of the diffraction peaks match the hexagonal wurtzite structure of ZnO (JCPDS Card No. 36-1451). The FTIR data (Figure S2b (Supplementary Materials)) display a strong IR absorption band at 437 cm−1, which is ascribed to Zn−O vibration modes [27,28]. Other IR bands relate to traces of the thermal treatment. At 3470 cm−1, the broad IR band is assigned to the O−H stretching mode (ν(OH)) of the hydroxyl group of absorbed water. The IR bands observed between 1630 and 860 cm−1 are associated with stretching modes of the carbonate group, coming from remains of the thermal treatment [29]. Therefore, the structural characterization confirms the synthesis of ZnO. Additionally, we carry out a morphological characterization of the synthesized ZnO by FE-SEM (Figure S2c (Supplementary Materials)), which shows agglomerates of nanoparticles with a heterogeneous distribution and irregular forms. Image analysis of the nanoparticles gives an average size of 56 ± 8 nm (Figure S2c (Supplementary Materials)). Note that the ZnO consists of nanoparticles without a defined organization. Specifically, Figure S2d (Supplementary Materials) allows the nanometric character of the obtained ZnO to be observed clearly and unequivocally, and a close study of a single particle is shown in Figure S2e. It should be noted that we have recently evidenced an excellent antimicrobial response of the nanoparticulated ZnO against multidrug-resistant organisms (MDROs), which strongly depends on the crystalline defects of ZnO [30]. In the next section, we go a step further and demonstrate that this high antimicrobial activity can be used for the development of an inorganic composite as a paint preservative for antifungal applications.
Finding a Potential Technological Application of the ZnO/Clay Composite
To study the relevant role of nanoparticulated ZnO as a preservative, market-available raw materials are used. The selected matrix was a bentonite clay modified with another inorganic preservative, silver. The choice of the bentonite clay modified with Ag cations, hereafter Clay, is due to the well-known antibacterial properties of Ag [31,32]. Moreover, previous studies show the good antibacterial activity of this modified clay against different types of bacteria [30,33]. Therefore, it is mandatory to complement the antibacterial activity of the Ag cation with a preservative against fungi such as ZnO. For a better integration of the obtained ZnO with the modified bentonite clay, we propose a dry dispersion of both at room temperature to avoid alteration of the materials [34]. Recently, the dry dispersion method was proposed to obtain hierarchical nanoparticle-microparticle systems with unusual properties by mixing oxides of dissimilar materials [34][35][36][37][38]. These hierarchically-dispersed materials raise additional environmental value since their synthesis is particularly clean. They are prepared by mixing oxides of different materials using a residue-free and solvent-free nanodispersion method that leads to hierarchical nano-microparticle systems [34][35][36][37][38]. More importantly, it has been proven that, after the partial reaction of two oxides, the creation of interfaces at the nano-scale range endows these materials with new properties that range from magnetic or optic to catalytic [34,35,37,38], due to proximity and diffusion phenomena. Consequently, here the dry dispersion method takes advantage of the surface energy differences between the dissimilar oxides, which is attained by shaking the modified clay with the obtained ZnO and 1 mm ZrO₂ balls in a 60 cm³ nylon container for 5 min at 50 rpm using a tubular-type mixer.
Different ZnO/Clay composites are obtained according to the concentration of nanoparticulated ZnO, from 2 to 60 wt.%. The structural characterization of the composites is carried out by XRD (Figure 1). As discussed above, the XRD pattern of the obtained ZnO matches the hexagonal wurtzite structure of ZnO (JCPDS Card No. 36-1451). The XRD pattern of the bentonite (Figure S3 (Supplementary Materials)) shows the typical diffraction peaks (JCPDS Card No. 003-0019) and the presence of cristobalite as an impurity (JCPDS Card No. 039-1425). As expected, the XRD patterns of the ZnO/Clay composites display an increase in the intensity of the diffraction peaks of ZnO as its content grows, while the diffraction peaks of the clay decrease in intensity accordingly.
In order to clarify the addition of ZnO and the dispersion state of the ZnO/Clay composites, an FE-SEM characterization is performed. For this, it is necessary to characterize separately the morphology of each ZnO/Clay composite component (that is, the bentonite clay and the nanoparticulated ZnO). The bentonite clay is composed of layers or compacted plates (Figure S4a (Supplementary Materials)), while the nanoparticulated ZnO (Figure S4b (Supplementary Materials)) consists of nanoparticles with sizes of ca. 56 nm. Note that the expected behavior of the dispersed nanoparticulated ZnO is a partial coating of the clay aggregates, according to the dry dispersion procedure [34,36,37]. Figure 2 displays the changes in clay morphology upon the addition of nanoparticulated ZnO. The 2 wt.% of dispersed ZnO on clay (Figure 2a) is practically not observed in the micrograph.
From 6 wt.% of ZnO (Figure 2b), nanoparticles of ZnO are deposited onto the clay surface. For concentrations between 10 wt.% (Figure 2c) and 20 wt.% (Figure 2d) of dispersed ZnO, the presence of nanoparticles on the clay surface increases, reaching a coated clay. Figure 2e shows the first appearance of ZnO agglomerates due to a high concentration of ZnO nanoparticles. When the concentration of ZnO increases, the quantity and size of the agglomerates also rise (Figure 2f). Consequently, a dispersion mechanism can be established depending on the concentration of deposited material. The clay agglomerates serve as the base for the dispersion of the ZnO (Figure 2g). At low concentrations of ZnO, from 2 to 10 wt.% (Figure 2g), the distribution of ZnO is uniform on the clay surface. Once the surface coating is reached, the agglomeration of ZnO nanoparticles occurs (Figure 2g). The antifungal response of the ZnO/Clay composites is tested by the Bauer-Kirby disk diffusion assay. The fungus selected for the assay is Aspergillus niger, which is incubated at 37 °C for three days after the ZnO/Clay addition. As shown in Figure 3a,b, the presence of nanoparticulated ZnO improves the antifungal inhibition ratio of the bentonite clay. Evaluating the antifungal evolution as a function of the ZnO concentration (Figure 3c), an increase in the sporulation inhibition diameter (SID) is observed. The modified bentonite clay with Ag cations has a very low antifungal response (labeled B). This fact is in accordance with the lack of antifungal activity of Ag reported in the literature. By contrast, it has been reported by Monte-Serrano et al.
[33] that the use of the modified bentonite clay with Ag cations guarantees an effective antibacterial activity. This fact should endow the ZnO/Clay system with a combined action of the Ag and Zn cations, where the Ag cations guarantee the antibacterial activity, while the Zn cations provide the antifungal activity. At low ZnO concentrations, between 2 and 10 wt.% (labeled C to E), the SID increases exponentially. When 20 wt.% of ZnO (F) is reached, the SID behavior changes towards a more moderate improvement. This asymptotic character is maintained for higher ZnO concentrations such as 40 wt.% (G) and 60 wt.% (H). Therefore, the relationship between the addition of ZnO and the antifungal activity is not linear. There are two well-differentiated linear regressions whose intersection occurs at 10 wt.% (E). At concentrations below 10 wt.%, the ZnO nanoparticles are well dispersed on the clay, which coincides with a faster growth of the antifungal activity (see Figure 3c, green zone). When agglomerates appear on the clay at concentrations above 10 wt.%, the growth of the antifungal response is reduced (see Figure 3c, red zone). Hence, ZnO agglomerates on the clay surface are less effective than the uniformly dispersed ZnO. These results prove the importance of both a uniform distribution and an adequate concentration of the ZnO nanoparticles on the clay surface for the antifungal activity of the composite.
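To make the two-regime description concrete, the sketch below fits separate straight lines to hypothetical low- and high-concentration SID values and locates their intersection; the numbers are placeholders chosen for illustration only, not the measured data of Figure 3c.

```python
import numpy as np

# Hypothetical (ZnO wt.%, SID in mm) pairs -- placeholders, not measured data.
low = np.array([[2.0, 8.0], [6.0, 14.0], [10.0, 20.0]])      # fast-growth regime
high = np.array([[20.0, 22.0], [40.0, 24.0], [60.0, 26.0]])  # moderate-growth regime

# Least-squares lines y = m*x + b fitted to each regime separately.
m1, b1 = np.polyfit(low[:, 0], low[:, 1], 1)
m2, b2 = np.polyfit(high[:, 0], high[:, 1], 1)

# The intersection of the two regressions marks the change of regime.
x_cross = (b2 - b1) / (m1 - m2)
print(f"Regime change at about {x_cross:.1f} wt.% ZnO")
```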
According to the promising results against fungi, the antifungal response of the ZnO/Clay composite is tested in a waterborne paint matrix. The same method and conditions as in the previous antifungal test are used. The experiments are collected in Figure 4a and Figure S5 (Supplementary Materials). The waterborne paint was modified with a concentration of 0.5 wt.% of ZnO/Clay composite with a ZnO content of 10 wt.% (i.e., 0.05 wt.% of ZnO nanoparticles). This very low concentration is selected as an adequate compromise between ZnO dispersion and antifungal response. Two conditions of the waterborne paint, fresh and dry, are taken into account. As shown in Figure 4a, the fresh paint without the ZnO/Clay composite, used as control, displays a poor inhibition diameter. The fresh paint with the ZnO/Clay composite incorporated shows a SID greater than the control. By contrast, the dry paint practically shows no sporulation inhibition. The dry paint with the ZnO/Clay composite added improves considerably the inhibition of fungal growth.
Figure 4b collects the sporulation inhibition diameter (SID) observed in the Bauer-Kirby disk diffusion assay. The fresh paint SID (I) may be related to the paint composition, including additives, fillers, pigments, solvents and resins. Once the ZnO/Clay composite is incorporated into the paint (II), the action diameter against fungal growth increases. This increase in antifungal activity indicates a synergy between the fresh paint and the ZnO/Clay composite. When the paint is dry, the SID difference between the control (III) and the modified paint (IV) increases significantly. Therefore, the antifungal response of the unmodified waterborne paint is due to the volatile compounds of its formulation. It should be highlighted that the ZnO/Clay composite keeps its good antifungal response both in fresh paint and in dry paint. In both cases, fresh and dry paint with the ZnO/Clay composite improve the antifungal response.
The antifungal activity observed in the ZnO/Clay composite (Figure 3) is maintained in its application in paints (Figure 4). The excellent antifungal activity of the fresh and dry paint points to the availability of ZnO in the matrix thanks to the dry dispersion method. The ZnO concentration added to the finished product for the antifungal response is 0.05 wt.%, a value well below that of other antifungals such as zinc pyrithione, whose typical addition is ~2.0% [7]. At the very low ZnO/Clay concentration used (0.5 wt.%), the production cost of the paint does not increase significantly while the antifungal response improves considerably.
Conclusions
In summary, we are able to achieve a paint preservative based on ZnO nanoparticles and modified clay. The formation of the ZnO/Clay composite by dry dispersion allows a controlled and uniform surface coating of ZnO to be obtained. The addition of ZnO nanoparticles improves the antifungal properties of the modified clay against a common fungus, A. niger. The use of the ZnO/Clay composite as a fungicide is in accordance with the established guidelines to find alternatives from natural resources. To evaluate the effectiveness of the ZnO/Clay composite as a preservative, we introduce the composite into a waterborne paint matrix. The antifungal test shows an improved response in both fresh and dry paint with the ZnO/Clay composite addition. It is worth noting that the combination of modified clay and ZnO nanoparticles in one preservative composite product, at a concentration as low as 0.5 wt.%, optimizes the production cost and the amount of material required, besides allowing incorporation into other matrices. Therefore, this composite is a strong candidate for a wide range of applications, such as coatings in outdoor environments, antifungal textile fibers or integration into polymers.
Multiobjective Optimization Based on “Distance-to-Target” Approach of Membrane Units for Separation of CO2/CH4
The effective separation of CO2 and CH4 mixtures is essential for many applications, such as biogas upgrading, natural gas sweetening or enhanced oil recovery. Membrane separations can contribute greatly in these tasks, and innovative membrane materials are being developed for this gas separation. The aim of this work is the evaluation of the potential of two types of highly CO2-permeable membranes (modified commercial polydimethylsiloxane and non-commercial ionic liquid–chitosan composite membranes) whose selective layers possess different hydrophobic and hydrophilic characteristics for the separation of CO2/CH4 mixtures. The study of the technical performance of the selected membranes can provide a better understanding of their potentiality. The optimization of the performance of hollow fiber modules for both types of membranes was carried out by a “distance-to-target” approach that considered multiple objectives related to the purities and recovery of both gases. The results demonstrated that the ionic liquid–chitosan composite membranes improved the performance of other innovative membranes, with purity and recovery percentage values of 86 and 95%, respectively, for CO2 in the permeate stream, and 97 and 92% for CH4 in the retentate stream. The developed multiobjective optimization allowed for the determination of the optimal process design and performance parameters, such as the membrane area, pressure ratio and stage cut required to achieve maximum values for component separation in terms of purity and recovery. Since the purities and recoveries obtained were not enough to fulfill the requirements imposed on CO2 and CH4 streams to be directly valorized, the design of more complex multi-stage separation systems was also proposed by the application of this optimization methodology, which is considered as a useful tool to advance the implementation of the membrane separation processes.
Introduction
Membrane separation processes are considered to be of great potential in addressing the drawbacks of conventional amine-based processes for CO2 capture and for natural gas sweetening or biogas upgrading [1,2]. For the separation of CO2 from CH4, the first biogas upgrading plants were installed with technologies used in industrial natural gas processing. For biogas upgrading, several technologies are currently available, ranging from absorption and adsorption to membrane-based gas permeation; in addition, advancements are being made in the direction of cryogenic separation, in situ methane enrichment and hybrid membrane-cryogenic technologies. The market situation for biogas upgrading has changed rapidly in recent years, allowing membrane separation to achieve a significant market share alongside traditional biogas upgrading technologies [3,4]. Membrane gas separation is a mature and expanding technology, as covered in the perspective analysis performed by Galizia et al. [5], who pointed out that the availability of better membrane materials, meaning a higher permeation without compromising selectivity and stability, would promote faster growth.
The separation mechanism of membrane gas permeation usually involves a compromise between selectivity and permeability, which are the key parameters of membrane performance. Several studies have compiled and reviewed membrane materials for CO2/CH4 separation [6][7][8][9][10], reporting the characteristics of representative materials in terms of CO2 permeability and ideal CO2/CH4 selectivity. Among the most representative materials studied for CO2/CH4 separation, cellulose acetate was pointed out as the polymer most used for large-scale CO2 separation, despite a significant selectivity reduction when processing a highly pressurized natural gas mixture in comparison to single-gas permeability data. This is attributed to a possible plasticization effect, which motivates the investigation of other polymeric materials that are more stable at process conditions, such as polydimethylsiloxane (PDMS) [11]. The development of new membrane materials, including polymers and hybrid materials, will rely on a multidisciplinary approach that embraces the broad fields of chemical and materials engineering, polymer science and materials chemistry, as well as an accurate process understanding, in order to close the gap with their implementation in large-scale applications [5,12].
A major challenge for developing effective gas separation membranes is overcoming the well-known permeability-selectivity trade-off for light gases in polymeric materials, which leads to an upper bound that serves as reference for evaluating the advances in highly permselective membrane materials, and, in turn, influencing the material design [13][14][15][16]. The efforts for enhanced CO 2 /CH 4 separation have been focused on the development of large-scale projects by improving stability and efficiency, which are linked to the innovation of materials, thermally rearranged (TR) polymers, polymers of intrinsic microporosity (PIM), which are two types of polymers that consistently perform at or beyond the polymer upper bound for certain gas pairs (O 2 /N 2 and CO 2 /CH 4 ), biopolymer-based membranes and mixed matrix membranes (MMM) formulations and blending systems, where ionic liquids were included [14,15,[17][18][19][20][21][22][23][24]. The routes to develop better membranes were covered in these referenced reviews, which introduce some large-scale applications where better membranes based on new advanced materials could be implemented. Five different approaches to better materials have been described: (i) unconventional-conventional polymers, (ii) nano-porous polymers with PIMs and TR polymers as examples, (iii) facilitated transport materials, (iv) mixed matrix membranes that are also revitalized by new sieve materials, such as metal-organic frameworks (MOFs), and (v) inorganic membranes with excellent stability but scale-up difficulties. From a series of large gas separation applications where better membranes would either greatly expand their use or allow for entry into a new market, an estimation of the membrane permeance and selectivity required to achieve commercial viability was included [5]. The target membrane performance for the competitive separation of CO 2 /CH 4 in CO 2 removal from natural gas required a selectivity in the range of 20-30 and a CO 2 permeance above 100 GPU in order to capture a portion of the much larger amine absorption market.
Superglassy membranes have also been proposed in hybrid membrane/amine processes for natural gas sweetening, as reported recently [25], with recommendations for further research on producing mixed matrix membranes of superglassy polymers with anti-aging properties, mixing superglassy polymers with porous and non-porous fillers to overcome physical aging and thin film composite membranes.
The hydrophobic or hydrophilic character of the membranes was also considered relevant, as it affects the CO2 and CH4 permeances differently in the presence of impurities (such as water vapor or other non-methane hydrocarbons as minor components). The hydrophilicity can be tuned by modifying the membrane material composition, covered in more detail in recent publications related to mixed matrix membranes [26][27][28], resulting in materials innovations that may contribute to the integration of the membrane technology in real-scale production plants. Multilayer composite membranes also offer the possibility to optimize the membrane layer materials independently, allowing for the transfer of the selective layer properties to different geometries, which could be more easily implemented at a large scale [29].
The outlook related to the opportunities for advancing membranes given by Park et al. [16] included the tasks of modeling at all length scales as needed in order to develop a coherent molecular understanding of key features, from membrane properties, which provide insight for future materials design, to membrane configuration and module design, as well as the membrane process optimization (operating conditions, product quality targets). Remarking the efforts toward the modeling and optimization tasks in conjunction with the materials innovation aspects, Ohs et al. [30] demonstrated the use of upper-bound properties of membranes coupled with process modeling to identify economically optimal combinations of permeability and selectivity in the reported study for nitrogen removal from natural gas. Such studies for other gas separations of interest would be desirable in order to provide appropriate targets for materials design, show the opportunities for membranes in both existing and emerging applications and implement the methodologies to scale promising membranes from laboratory studies to the thousands of square meters needed for large applications. All of these purposes need the modeling and optimization tasks for the process design to effectively address separation requirements.
Taking into account the remarked keys, the main objective of this work is to realize a complete comparison of the performance of different non-commercial CO 2 /CH 4 selective membranes (modified commercial hydrophobic PDMS and non-commercial hydrophilic ionic liquid-chitosan (IL-CS) composite membranes previously developed and characterized by this research group) and to identify the optimal design and operation conditions that maximize their technical performance.
The membranes used for this study are flat-sheet composite membranes, the commercial PERVAP 4060 (DeltaMem AG, CH-Allschwil) with a 1-1.5 µm thick PDMS top layer and a total thickness of 180 µm, which was also modified by a NaOH treatment in order to enhance the attraction of CO 2 more preferentially and a self-prepared IL-CS/PES composite membrane fabricated in our laboratory with a similar selective layer thickness as the commercial hydrophobic membrane.
These membranes were selected due to their promising permeance and selectivity from gas permeation studies covering the separations of CO 2 /N 2 and CO 2 /CH 4 in previous works of the research group [24,28], focusing, in this study, on (i) the modification of a commercial PDMS membrane by a NaOH treatment to attract CO 2 preferentially, and (ii) the use of a biopolymer-based membrane (with ionic liquid inclusion) in a robust support as the proposed options for tuning up the membrane separation properties. The performance of these membranes from single gas permeation tests and the surface characterization studies by ATR-FTIR were also reported elsewhere, the NaOH treatment being remarkable, and the enhanced CO 2 /CH 4 separation ability of the membranes containing ionic liquid due to the strong absorption selectivity towards CO 2 . The effect of the ionic liquid addition was also reported in the literature [31,32], with the use of room temperature ionic liquids to improve the interphase morphologies of membranes in mixed matrix membranes, and the study of gas transport properties of tailored CO 2 -philic anionic poly (ionic liquid) composite membranes. The hybridization effect of chitosan (CS) by introducing a determined percentage by weight of a highly CO 2 -absorbing ionic liquid was further considered in order to improve the selectivity of pure CS membranes.
Since the optimization of the performance of a membrane module for gas separation is not a trivial task (multiple objectives related to the purities and recoveries of the different gases present in the mixture must be considered), a "distance-to-target" approach can provide valuable results. Standard multiobjective optimization methods, such as ε-constraints, result in Pareto fronts when applied to two objectives, or Pareto surfaces when three objectives are considered [33,34]. In these Pareto solutions, none of the conflicting objective functions can be improved in value without degrading some of the other objective values. However, when more than three objectives are simultaneously considered, the corresponding Pareto sets of solutions become more complex and cannot be translated into a simple graphical representation. Besides, if additional subjective preference information is not defined, all of the Pareto solutions can be considered equally good, and the selection of the optimal conditions of a preferred unique solution is not direct. In contrast, a "distance-to-target" approach provides some advantages when compared to the mentioned standard multiobjective optimization methods [35]. For example, this approach provides a single Pareto solution rather than Pareto sets of solutions regardless of the number of objectives defined. As this optimization approach provides practical guidelines by measuring and quantifying the magnitude toward previously defined targets, this single Pareto solution is determined by the minimization of its distance to the objective values. Consequently, the "distance-to-target" approach applied in this work provides a better scenario for the direct comparison of the several alternative membranes for the separation of gas mixtures.
Model Development
Although there are numerous models reported in the literature for gas separation by hollow fiber permeators, most of them are based on a differential approach [36]. Typically, a set of coupled nonlinear differential equations are solved to define the module performance. The resulting set of differential equations, together with the specified feed flow rate, pressure and composition, as well as the permeate outlet pressure, form a boundary value problem. Iterative techniques can be used to solve these problems, but this methodology can be burdensome when complex considerations are taken into account, such as multicomponent mixtures, non-constant permeability coefficients, temperature effects or multi-stage configurations. As a result of these reasons, an alternative strategy has been adopted in this work. The representative hollow fiber is divided into a series of n perfectly mixed stages in the axial direction, and mass balances are enforced in each section. This procedure is formally equivalent to using first order finite differences to develop a set of coupled difference equations from the differential mass balances for this problem [37]. The bore-side feed countercurrent flow arrangement is the most frequently used configuration for gas separation using asymmetric hollow fiber membranes, and a mathematical model is thus developed here for this configuration ( Figure 1). The main assumptions employed in the model development are:
− The deformation of the hollow fibers under pressure is negligible;
− The membrane permeability is independent of the concentration and pressure;
− The pressure changes in the retentate and permeate streams in the lumen and shell sides are negligible;
− The concentration polarization on both sides of the membrane is negligible;
− The gas flows are evenly distributed, and the end effects resulting from flow direction changes are negligible;
− The gas on the lumen and shell sides of the hollow fibers is in a plug flow;
− The membrane module is operated at a steady state.
The model equations comprise: the material balances on each cell (global, on the tube side and on the shell side); the flow across the membrane; the cell continuity relations, x_A,in(i) = x_A(i−1) (11) and y_A,in(i) = y_A(i+1); the relationships between individual and total flows (definition of molar fractions); the membrane transport properties, where the factor 2736 in Equation (18) is the conversion factor from the membrane permeability (Perm) expressed in GPU to the specific gas permeabilities (Perm_A and Perm_B) expressed in m³; and the definitions of the process design and performance parameters.
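As a rough illustration of this cell-by-cell discretization (and not of the actual GAMS model summarized above), the following Python sketch marches a binary CO2/CH4 feed through n perfectly mixed cells under a cross-flow simplification, in which the permeate generated in each cell is withdrawn locally; the true bore-side countercurrent arrangement would instead require solving all cells simultaneously as a boundary value problem. The function name, the units and the numbers in the usage line are assumptions chosen for illustration.

```python
def crossflow_module(perm_A, perm_B, area_m2, p_feed, p_perm,
                     feed_flow, x_feed, n_cells=100):
    """Cell-by-cell sketch of a binary permeator (A = CO2, B = CH4).

    Cross-flow simplification: the permeate formed in each cell leaves
    immediately, so the retentate can be marched from the feed end to the
    outlet without the iteration a countercurrent module would need.
    Permeances in mol m-2 s-1 Pa-1, pressures in Pa, flows in mol s-1.
    """
    dA = area_m2 / n_cells            # membrane area assigned to each cell
    L, x = feed_flow, x_feed          # local retentate flow and CO2 fraction
    perm_total = perm_A_total = 0.0   # accumulated permeate and its CO2 part

    for _ in range(n_cells):
        # Local permeate composition: fixed-point iteration on y = JA/(JA+JB).
        y = x
        for _ in range(25):
            JA = perm_A * (p_feed * x - p_perm * y)
            JB = perm_B * (p_feed * (1.0 - x) - p_perm * (1.0 - y))
            y = JA / (JA + JB)
        dP = (JA + JB) * dA           # total permeate generated in this cell
        # Component and overall balances on the cell update the retentate.
        x = (L * x - JA * dA) / (L - dP)
        L -= dP
        perm_total += dP
        perm_A_total += JA * dA

    return {
        "stage_cut": perm_total / feed_flow,
        "CO2_purity_permeate": perm_A_total / perm_total,
        "CO2_recovery": perm_A_total / (feed_flow * x_feed),
        "CH4_purity_retentate": 1.0 - x,
    }

# Illustrative call (hypothetical permeances and roughly a 1 m3(STP)/h feed):
result = crossflow_module(perm_A=3.3e-9, perm_B=3.3e-10, area_m2=20.0,
                          p_feed=4.0e5, p_perm=1.0e5,
                          feed_flow=0.012, x_feed=0.35)
print(result)
```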
These defined purities and recoveries must be considered as the main indicators of the performance of the separation process, and specifications can be fixed for these parameters. Therefore, the optimization of the process will focus on the achievement of maximal purities and recoveries as functions of the optimal pressures on both sides of the membrane and of the module stage cut (which defines the total membrane area of the module) for each feed composition. However, the optimization of the design and operation of a hollow fiber module for gas separation is not a trivial task. In most cases, both gases are considered products, and, consequently, purity and recovery requirements will be imposed. In these circumstances, contradictory objectives must be counterbalanced, since it is not possible to maximize purity and recovery simultaneously. Therefore, a multiobjective problem must be defined, with at least four different conflicting targets (purities and recoveries of both gases), although a higher number of objectives could appear if the membrane area, energy consumption or economic aspects are considered as additional relevant targets. The discarding of standard multiobjective optimization methods, such as ε-constraints, can be justified in order to overcome this drawback, by proposing a methodology based on a "distance-to-target" approach. For example, this approach can provide a single Pareto solution rather than Pareto sets of solutions based on the distance to the objective values. In addition, this approach is more adequate in identifying the best way to improve suboptimal solutions by finding minimal projections onto the optimal limits [38]. The Euclidean distance between the individual solutions and the optimization targets of a problem can be used as the base of this approach [39]. The Euclidean distance D in an n-dimensional space is defined by Equation (25):

D = ( Σ_i (C_i − G_i)² )^(1/2) (25)

where C_i are the components of the vector to be optimized and G_i are those of the specified target. In this work, the components of the target vector include the purities and recoveries of both gases present in the CO2/CH4 mixture. In the current study, a normalized, equally weighted distance D_N was employed as the main indicator to identify the optimal performance of the gas separation process, applying Equation (26):

D_N = ( (1/n) Σ_i ((C_i − G_i)/100)² )^(1/2) (26)

where n represents the number of dimensions of the space (number of objectives). Since the four objectives considered in this work were percentages, the presence of 100 in the denominator implies that the definition of D_N guarantees that the distance values are normalized in the range between 0 (closest to the target) and 1 (furthest from the target), so a direct and easily comparable outlook of the results is obtained (another clear advantage over conventional multiobjective optimization methods). In this case, since the four objectives must be maximized, all of the components of the target vector were equal to 100. The modelling and optimization tasks were performed using the GAMS programming language (General Algebraic Modeling System), with the CONOPT solver selected.
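A minimal sketch of the distance metric described above (with Equation (26) reconstructed from the text: equal weights, normalization by 100 and by the number of objectives); it is not the GAMS/CONOPT implementation. The example values are the purities and recoveries quoted in the abstract for the ionic liquid–chitosan membrane.

```python
import math

def normalized_distance(values, targets=None):
    """Equally weighted, normalized Euclidean distance to the target point.

    `values` are objective values in percent (purities and recoveries);
    targets default to 100% for every objective, so the result lies
    between 0 (on target) and 1 (as far from the target as possible).
    """
    n = len(values)
    if targets is None:
        targets = [100.0] * n
    return math.sqrt(sum(((c - g) / 100.0) ** 2
                         for c, g in zip(values, targets)) / n)

# CO2 purity, CO2 recovery, CH4 purity, CH4 recovery (in %):
print(normalized_distance([86.0, 95.0, 97.0, 92.0]))   # ~0.086
```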
Model Validation: Determining the Number of Cells from a Reference System
Before the validation of the model developed in this work, an internal parameter that determines the performance of the model must be defined: the number of cells into which each membrane fiber is divided, taking a reference system. Figure 2 shows the evolution of the recovery of O2 (permeate) and the corresponding purity for the separation of air with fibers made of cellulose acetate as a function of the number of cells considered in the model [40].
In order to validate the developed mathematical model, the values predicted by t model were compared to experimental data previously published for air separation e ploying cellulose-acetate-based asymmetric hollow fibers [40]. Figure 3 presents the e perimental and calculated O2 and N2 molar fractions in the obtained permeate and rete tate streams, respectively, as a function of the stage cut (which correlates the feed a retentate streams through the total membrane area available for permeation) for the bo side feed countercurrent flow conditions. The selection of the number of cells must take into account the balance between the precision of the model and its calculation load. On the one hand, when a low number of cells was chosen, the corresponding calculation load is light and the model can be run fast, but the obtained result can be imprecise and inadequate in representing the system. As an example, if the design selected 10 cells, the number of equations required was 321, with 882 non-zero elements in the model. On the other hand, a high number of cells can obtain much more precise results, but at the expense of heavy calculation loads. For instance, the consideration of 300 cells increased the number of equations to 9891, and 27,562 non-zero elements were included in the model. Under these higher load conditions, the model obtained a 40.0% purity and 76.1% recovery, whereas the corresponding values in the case of 10 cells were 39.2% and 74.5%, respectively (the underestimation of the parameters was above 2% in both cases). The selection of 100 cells was preferred in this work, as it provides an adequate compromise between the model load (3291 equations and 9162 non-zero elements) and its precision (underestimation not higher than 0.15% when compared to the selection of 300 cells).
In order to validate the developed mathematical model, the values predicted by the model were compared to experimental data previously published for air separation employing cellulose-acetate-based asymmetric hollow fibers [40]. Figure 3 presents the experimental and calculated O2 and N2 molar fractions in the obtained permeate and retentate streams, respectively, as a function of the stage cut (which correlates the feed and retentate streams through the total membrane area available for permeation) for the bore-side feed countercurrent flow conditions.
The agreement between the experimental data and the modeled predictions is satisfactory. The R² values of the correlation lines between the concentrations obtained from the model and the experimental ones were 0.993 and 0.998 for O2 and N2, respectively. The comparative analysis of the results revealed that the model underestimated the productivity of the membrane, with modeled O2 concentrations slightly lower than the experimental ones (below 2% on average), especially for the lowest stage cut values. Therefore, this case was not subject to the overestimation of the membrane productivity that was previously reported by some authors when the pressure losses on the lumen side of the membrane were not considered [31].
Case Study to Optimize: Separation of CO 2 /CH 4 with Both Components as Targets
This study is focused on the estimation of the potential of different non-commercial membranes for the separation of CO2/CH4, working with two types of highly CO2-permeable membranes whose selective layers possess different hydrophobic and hydrophilic characteristics. These membranes may be employed for different applications where the separation of both gases is required, such as biogas upgrading, natural gas sweetening or enhanced oil recovery [31,32,41]. The study of the technical performance of the selected membranes can provide a better understanding of their potentiality.
The two types of membranes selected for this study were: (i) a modified commercial hydrophobic membrane with a polydimethylsiloxane (PDMS) top layer (DeltaMem AG, CH-Allschwil) and (ii) a hydrophilic flat sheet composite membrane with a hydrophilic ionic liquid-chitosan (IL-CS) thin layer on a commercial polyethersulfone (PES) support developed in our laboratory. The chitosan biopolymer (CS matrix hybridized with 1-ethyl-3-methylimidazolium acetate ([emim][ac]) ionic liquid (IL) as filler) was coated on the polyethersulfone (PES) support, as the surface modification of robust supports provided the option of tuning up the membrane separation properties and decreasing the probability of defects when the thickness of the membranes was significantly reduced. Both membranes were immersed in NaOH 1M solutions and washed thoroughly before characterization. The NaOH treatment was used to enhance the affinity towards acid gas molecules, such as CO 2 , contributing to increasing the CO 2 separation properties from other gases and, therefore, leading to a higher selectivity.
These membranes were selected due to their promising permeance and selectivity parameters among different flat sheet dense and thin film composite membranes after gas permeation experiments covering the gas mixtures CO 2 /N 2 and CO 2 /CH 4 carried out in previous studies of the research group [24,28,42]. The configurations of the polymeric dense layer on a porous support in the type of thin film composite membrane, flat-sheet or hollow fiber, were considered, as multilayer composite membranes also offer the possibility to optimize membrane layer materials independently, allowing for the transfer of the selective layer properties to different geometries that could be more easily implemented at a large scale.
The membranes used for this study are flat-sheet composite membranes, the commercial PERVAP 4060 (DeltaMem AG, CH-Allschwil) with a 1-1.5 µm thick PDMS top layer and a total thickness of 180 µm, which was modified by a NaOH treatment, and a self-prepared hydrophilic IL-CS/PES composite membrane fabricated in our laboratory with a similar selective layer thickness and the same NaOH treatment as the commercial hydrophobic membrane. The performance parameters were obtained from gas permeation experiments, in a laboratory stainless-steel cell, which provided an effective membrane area of 15.6 cm 2 , operating at 298 K and a feed pressure of 2 atm (pressure ratio 4).
The permeance and selectivity parameters for the two types of membranes are compiled in Table 1. The performance of these membranes with respect to Robeson's upper bound, a useful screening tool for the development of or innovation in membrane materials, and the surface characterization studies by ATR-FTIR were also reported elsewhere [24], the NaOH treatment being particularly remarkable in attracting CO2 preferentially. These data sets are required for the evaluation of the process performance, focusing on the tasks of (i) membrane system modelling (flow patterns), (ii) sensitivity analysis in the simulation of a single-stage process, multistage or hybrid configurations and (iii) process optimization objectives of the product quality (given in terms of purity and recovery variables), as well as the separation process costs. The permeance of the most permeable component (CO2 in this case) in GPU (1 GPU = 10⁻⁶ cm³(STP) cm⁻² s⁻¹ cmHg⁻¹) was defined as the pressure-normalized flux of the gas component through the membrane. The selectivity was calculated as the ratio between the permeances of the fast and slow gas components in a gas pair, in the case of this work CO2 and CH4, respectively [28,42]. From the data included in Table 1, it can be pointed out that (i) the CO2/CH4 separation factor of the commercial PDMS membrane was increased by the NaOH treatment (from single and mixed gas permeation experiments) and (ii) the hydrophobic PDMS membrane showed a lower CO2/CH4 selectivity than the improved hydrophilic IL-CS/PES composite membrane (IL 2), a fact considered key for the selection of this type of membrane to achieve the product quality targets in further implementations, contributing highly CO2-permeable and thermally robust polymers.
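The definitions of permeance in GPU and of ideal selectivity can be written as two small helpers; the conversion factor and the placeholder values in the last line are illustrative assumptions, not the Table 1 data.

```python
def gpu_to_si(permeance_gpu):
    """Convert a permeance from GPU (10^-6 cm3(STP) cm-2 s-1 cmHg-1)
    to SI-style units (mol m-2 s-1 Pa-1); 1 GPU is about 3.35e-10."""
    return permeance_gpu * 3.35e-10

def ideal_selectivity(perm_fast_gpu, perm_slow_gpu):
    """Ideal selectivity: ratio of the fast-gas (CO2) permeance to the
    slow-gas (CH4) permeance."""
    return perm_fast_gpu / perm_slow_gpu

# Placeholder permeances (GPU), not the measured Table 1 values:
print(gpu_to_si(100.0), ideal_selectivity(100.0, 5.0))   # 3.35e-08, 20.0
```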
Module Simulation
The developed model was applied to the comparison of the performance of the four membranes selected as the case study. These CO2/CH4-selective membranes can be employed for different applications where the separation of both gases is required, such as biogas upgrading, natural gas sweetening or enhanced oil recovery. The study of the technical performance of hollow fiber modules made of the selected membranes can provide a better understanding of their potentiality. The influence of the main design and operation variables (applied pressures and stage cut) on the simulated modules was studied by means of a sensitivity analysis. The scale of the process was fixed to provide enough membrane area to treat a feed flowrate of 1 m³/h (STP) with an initial molar composition of 35% CO2 and 65% CH4. The influence of varying the feed pressure in the range from 2 to 10 atm (permeate side at atmospheric pressure), while keeping the stage cut constant at 0.5, on the purities and recoveries is graphed in Figure 4.
The results revealed the expected trend: the higher the feed pressure, the higher the recoveries and purities for both gases. All of the membranes took advantage of high pressures, but the most important increment corresponded to the membrane that showed the best performance, the IL2 membrane. It was able to achieve a 69.8% CO2 purity with practically total recovery (greater than 99.7%) and losses of CH4 in the permeate below 25%, working at 10 atm. On the contrary, IL1 was the membrane that exhibited the worst performance. Under maximal pressure operation conditions, the achieved CO2 purity was 49.0% and the corresponding recovery was below 70%. The PDMS membrane was just slightly better than the IL1 membrane, whereas the PDMSt membrane showed a performance closer to the IL2 membrane. Therefore, once again, the effectiveness of the treatment applied to the virgin membrane was confirmed. Nevertheless, for all of the membranes, the purities and recoveries tend to attain plateau values, and a further increment in the feed pressure did not imply a relevant increase in the performance parameters. Although the purities and recoveries did not rise significantly once a critical feed pressure was achieved, another advantage of the implementation of high pressures even above these critical values was the reduction in the membrane area required to obtain a fixed stage cut.
Once the ratio of the feed pressure to the permeate pressure was fixed (which assured that the performance parameters were maintained constant, as shown in Table 2), the membrane area required was inversely proportional to the feed pressure, so the membrane area could be reduced by half just by doubling the feed pressure. Taking this into account, the use of a vacuum on the permeate side of the membrane to allow the feed at atmospheric pressure implies an increased membrane area, which can be difficult to compensate with the savings due to the avoided feed pressurization [43]. In a similar way, the stage cut was modified in the range from 0.2 to 0.8 to evaluate the evolution of the module performance, while the feed pressure was fixed at 4 atm (permeate side at atmospheric pressure). The obtained results can be observed in Figure 5. In this case, the contradictory effects of increasing the stage cut must be highlighted. On the one hand, high stage cut values implied a higher membrane area, which promoted the permeation of CO2 to the permeate stream and the achievement of high CO2 recovery values. For example, a total recovery of CO2 (corresponding recovery value above 99.9%) was obtained when a 0.80 stage cut was applied to the PDMSt membrane, or a 0.65 stage cut in the case of the IL2 membrane. The other two membranes (PDMS and IL1) attained CO2 recovery values of around 95% for the maximal considered stage cut. Moreover, these high recovery values corresponded to a high purity of the CH4 retentate stream (pure CH4 when total CO2 recovery was possible and values around 90% for the PDMS and IL1 membranes). However, on the other hand, as a consequence of the great amount of gas permeated, the purity of the CO2 permeate stream was low for high stage cut values. At the maximal stage cut value (0.80), the CO2 purity values ranged from 41.1% for the IL1 membrane to 43.8% for the IL2 membrane. This fact corresponded to unacceptable losses of CH4 in the permeate stream, with recovery values for this gas in the range of 27.5-30.8%. These results gave a clear idea about the balance between the different objectives for the technical optimization of the separation process based on these types of membrane modules.
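A back-of-the-envelope argument (a sketch, not the rigorous hollow-fiber model used here) makes this scaling explicit: the permeate CO2 flow is roughly the permeance times the partial-pressure driving force times the area, so at a fixed pressure ratio and fixed compositions the required area scales as the inverse of the feed pressure,

$$
A \;\approx\; \frac{\theta F\, y_p}{\mathcal{P}_{\mathrm{CO_2}}\,\bigl(p_f x_r - p_p y_p\bigr)}
  \;=\; \frac{\theta F\, y_p}{\mathcal{P}_{\mathrm{CO_2}}\, p_f\,\bigl(x_r - r\, y_p\bigr)}
  \;\propto\; \frac{1}{p_f}
  \qquad \text{at fixed } \theta,\ F,\ r = p_p/p_f ,
$$

where $\mathcal{P}_{\mathrm{CO_2}}$ denotes the CO2 permeance and $x_r$, $y_p$ are representative retentate and permeate mole fractions. Doubling $p_f$ at constant pressure ratio therefore roughly halves the required area.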
Finally, the sensitivity analysis investigated the performance of the membrane modules under different compositions of the feed stream, in the range from a 0.2 to 0.8 molar fraction of CO2, with a feed pressure of 4 atm and a constant stage cut equal to 0.5. Once again, as in the case of the stage cut, opposing effects appeared (Figure 6). Whereas the selection of enriched CO2 feed streams favored the production of high-purity CO2 permeate, the corresponding CO2 recoveries decreased, since more CO2 escaped from the module in the retentate stream. The opposite situation occurred in the case of CH4: the treatment of CO2-rich streams implied a low purity of CH4 in the retentate stream, but with high recovery values (reduced losses of this gas to the permeate stream) [44]. For example, the IL2 membrane was able to attain a 96.6% CO2 purity from the initial 0.65 molar fraction, whereas the IL1 achieved a 76.6% purity, but the corresponding CH4 purities were 66.6% and 46.6%, respectively.
Module Optimization
The sensitivity analysis provided a clearer idea about the influence of the main design and operation variables of the membrane modules. It could be considered as a previous step to the optimization covered in this section. The results of the optimization of the PDMS, PDMSt and IL2 membranes are compiled in Tables 3 and 4 for the vacuum permeate (0.2 atm) and pressurized feed (20 atm), respectively. When only one optimization objective was considered, the individual distance of that objective was minimized, whereas when the simultaneous optimization of more than one objective was taken into account, the target was the minimization of the sum of the individual distances to each specific objective. Finally, the results were compared to the situation where the objective was the minimization of the normalized distance for all of the objectives.
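To illustrate the two scalarizations compared here, the snippet below sketches the individual distances, their sum, and a normalized distance. The exact distance definitions follow the methodology of this work; the formulas below (targets of 100% for every objective and a Euclidean norm scaled to [0, 1]) are assumptions made only for the example.

```python
# Sketch of the "distance-to-target" scalarizations used in the multiobjective optimization.
# Assumption: every objective is a fraction in [0, 1] with target 1, so d_i = 1 - value_i,
# and D_N is taken as the Euclidean norm of the d_i rescaled to [0, 1].
import numpy as np

def individual_distances(values, targets=None):
    values = np.asarray(values, dtype=float)
    targets = np.ones_like(values) if targets is None else np.asarray(targets, dtype=float)
    return np.abs(targets - values) / targets

def sum_of_distances(values):
    return individual_distances(values).sum()

def normalized_distance(values):
    d = individual_distances(values)
    return np.linalg.norm(d) / np.sqrt(d.size)

# Example: CO2 purity/recovery and CH4 purity/recovery for a candidate IL2 design (stage cut 0.385)
candidate = [0.862, 0.90, 0.970, 0.92]
print("sum of distances:", sum_of_distances(candidate))
print("normalized distance D_N:", normalized_distance(candidate))
```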
As expected, the consideration of just a single objective resulted in extreme values of the stage cut. On the one hand, the maximization of the CO2 purity (or the CH4 recovery) matched with the selection of the lowest stage cut value (0.05 was imposed as a restriction of the system). On the other hand, the maximal allowed value of the stage cut (0.95) was required to achieve the maximal CH4 purity (or the CO2 recovery). As the stage cut and corresponding membrane area increased, so did the CH4 purity as a consequence of the preferential permeation of CO2. Nevertheless, this reduced CO2 partial pressure promoted CH4 permeation through the membrane and decreased CH4 recovery [9]. In the case of the PDMS membrane, the design of a module with a 0.95 stage cut resulted in a CH4 purity of 99.2% and 99.8% for vacuum and pressurized conditions, respectively, with a CO2 recovery equal to 99.9% in both cases. The other two membranes, PDMSt and IL2, were able to attain a total CO2 recovery and pure CH4 (higher than 99.99%) for stage cut values below the imposed upper limit in both vacuum and pressurized conditions: from 0.551 in the case of IL2 in pressurized conditions to 0.829 for PDMSt under vacuum conditions. When multiobjective optimization was taken into account, the system required compromised optimal conditions that counterbalanced the different targets. Nevertheless, in the case of the PDMS membrane, extreme stage cut values were still an optimal solution under specific circumstances. For instance, the simultaneous optimization of CH4 purity and recovery was obtained when the stage cut value was equal to 0.05 (lowest allowed limit) for both vacuum and pressurized conditions. However, the optimization of CO2 purity and recovery required a reduction in the stage cut from its maximal allowed limit to 0.846 and 0.796 for vacuum and pressurized conditions, respectively. In all cases, for all membranes, the stage cut value to optimize CO2 purity and recovery was higher than the one resulting from the optimization of CH4 purity and recovery, although these values were close when the IL2 was employed under pressurized conditions. For this membrane, whereas a stage cut value of 0.391 optimized CO2 purity and recovery, the optimization of CH4 purity and recovery occurred for a stage cut value equal to 0.374. Within this interval, the optimal value that minimized the sum of the individual distances of the four objectives simultaneously was found: a stage cut value equal to 0.385 allowed for the achievement of recovery percentages above 90% for both gases, with purity values equal to 86.2% and 97.0% for CO2 and CH4, respectively. The recovery and purity values attained with the PDMS and PDMSt membranes were lower than those of the IL2 membrane, which clearly demonstrated its higher potentiality.
The analysis of the values of the normalized distance of all of the solutions compiled in Tables 3 and 4 revealed some interesting facts. Firstly, the consideration of just an individual objective resulted in very extreme values, which were able to optimize the selected objective, but at the expense of the other objectives. The objectives that were not taken into account remained far away from their targets, and the corresponding DN value was high. Moreover, the conditions that minimized the sum of the individual distances did not match the optimal conditions to attain the minimal DN value. In all cases, the optimal stage cut values for the minimal distance were lower than the values obtained by minimizing the sum of the individual distances. Under these circumstances, the four recovery and purity values were more counterbalanced, avoiding the presence of a single low value that can result in a distance penalty. In fact, the consideration of the four objectives simultaneously without the "distance-to-target" approach resulted in DN values higher than the case that considered only two objectives. For instance, the IL2 membrane showed lower DN values for the optimization of CH4 recovery and purity (0.149 and 0.082 for vacuum and pressurized conditions, respectively) than for the optimization of the four objectives (0.153 and 0.085 for vacuum and pressurized conditions, respectively). This fact confirmed the importance of the selection of a very effective tool to define the optimal conditions in multiobjective scenarios.
The pressure restrictions and feed composition had a great influence on the optimal conditions of the membrane modules. When the system operated under vacuum conditions, extensive membrane areas were required. Besides, since an adequate separation performance required a sufficiently high pressure ratio (the ratio between feed and permeate pressures), severe vacuum conditions became necessary [45]. The influence of the maximal pressure value allowed in the feed side of the module on the optimal stage cut, recovery and purity values, as well as the resulting membrane areas, is shown in Figure 7. Once again, the higher the upper limit for the pressure, the higher the recovery and purity values, and the value of DN decreased continuously as the maximal allowed pressure was increased (from 0.147 to 0.082 for 5 and 20 atm, respectively). Only small variations in the optimal stage cut values appeared, ranging from 0.367 to 0.380 for 20 and 5 atm, respectively. However, the most relevant issue was the membrane area required under optimal conditions. In addition to reduced recovery and purity percentages, the operation under lower pressure conditions implied the requirement of a huge membrane area. The required area increased from 3.9 m2 for 20 atm to 31.4 m2 in the case of 5 atm. This great difference pointed to the selection of the highest possible pressure in the feed side of the membrane module in order to minimize the amount of membrane required to carry out the separation, which, in addition, resulted in the highest performance in terms of recovery and purity.
Another important factor that must be taken into consideration for the multiobjective optimization of the membrane modules is the feed composition. The influence of the feed composition on the optimal stage cut, recovery and purity values, as well as the resulting membrane areas, is shown in Figure 8. In this case, the multiobjective optimization revealed a different trend when compared to the case of the sensitivity analysis under the constant stage cut previously presented. Although the contradictory effects of the increased CO2 molar fraction in the feed composition were maintained, the process performance parameters affected were different. While both purities maintained the previously identified trend (CO2 purity increased and CH4 decreased), the recovery values changed their tendencies. On the one hand, higher CO2 feed fractions involved an increased CO2 recovery, mainly as a consequence of increased values for the corresponding optimal stage cuts (from 0.244 with a 0.20 feed fraction to 0.808 with a 0.80 feed fraction).
On the other hand, CH4 recovery followed the opposite trend, and lower values were obtained for high CO2 feed fractions (which is also a direct result of the increased stage cuts, which implied a higher permeation of CH4 through the membrane). As a result of the different evolutions followed by the purities and recoveries, the identification of the optimal feed composition for the IL2 membrane is not obvious (the optimal DN values for 0.20, 0.35, 0.50, 0.65 and 0.80 CO2 feed fractions were 0.132, 0.082, 0.063, 0.056 and 0.058, respectively). Therefore, the search for the optimal feed composition was carried out, and the result was a composition with a 0.69 CO2 molar fraction. The resulting optimal CO2 feed fraction was far from the typical biogas composition, with a CO2 content below 0.40 [46,47], so the application of the IL2 membrane to other processes, such as enhanced oil recovery, where feed streams with a CO2 fraction of around 0.75 are common [48], can also be suggested.
Another relevant aspect that must be highlighted is the evolution of the required membrane area. Although the optimal stage cut value increased continuously, the membrane area showed a slight reduction: 4.0 m2 was required for a feed composition with 0.20 CO2, whereas only 2.6 m2 was required when the module was fed with 0.80 CO2. This fact can be explained by the enhanced permeation of CO2 when the CO2-rich stream was fed, which allowed for a higher partial pressure gradient of CO2 between both sides of the membrane.
Lastly, a comparison of the performance of the main membranes selected in this study to that of other CO2/CH4-selective membranes reported by other researchers in some recent publications was carried out [49][50][51]. These referenced membranes were a generic polymeric blend membrane, an asymmetric polysulfone membrane and a polysulfone membrane coated with PDMS, respectively. The results are compiled in Table 5 and clearly demonstrate the competitiveness of the IL2 when compared to other available membranes, since it showed the best technical performance in terms of the distance to the target. The comparison also revealed that, although the treatment of the virgin PDMS membrane was effective in significantly improving its performance, the resulting PDMSt was not yet able to surpass the performance of the other referenced membranes. Nevertheless, in all cases, the values of the purities obtained were not enough to fulfill the requirements imposed on CO2 and CH4 streams in order to be directly valorized, taking into account that purity values above 90% and 98% may be required for CO2 and CH4, respectively [52,53]. Therefore, the design of more advanced separation processes based on multiple stages of membrane modules can also be proposed [54]; further work will therefore consider the design of this type of layout with the most promising membrane modules.
Conclusions
The mathematical model developed in this work has been successfully applied to represent the performance of membrane separation units with two types of innovative membranes for CO 2 /CH 4 separation. After the validation of the model with experimental data, it was used to simulate the performance of the separation process by modified commercial PDMS and non-commercial IL-chitosan composite membranes under different design and operation conditions, paying attention to the effects due to different pressures, stage cuts and feed compositions.
The optimization of the separation process, considering both gases in the feed mixture as targets, resulted in the definition of different multiobjective scenarios. A "distance-to-target" approach was selected for the simultaneous consideration of all of the objectives, and the results demonstrated that the maximal allowed feed pressure must be selected for the optimization of the separation, while the optimal stage cut was dependent on each specific membrane.
The obtained results allowed us to conclude that the ionic liquid-chitosan composite membranes (IL-CS/PES) outperformed other innovative membranes, with purity and recovery percentage values of 86% and 95%, respectively, for CO2 in the permeate stream, and 97% and 92% for CH4 in the retentate stream. The multiobjective optimization calculations allowed us to determine the process design and performance parameters, such as the membrane area, pressure ratio and stage cut required to achieve maximum values for component separation, in terms of purity and recovery for both components.
In addition, each membrane presented an optimal feed composition, which should be taken into account to select the most adequate membrane for a determined application. The modification of the PDMS membrane by treatment with NaOH represented an effective way to improve the separation performance, and the improved IL-chitosan membrane appeared more competitive than other innovative membranes reported in previous studies. If the separation performance in terms of purity and recovery is not enough to obtain streams that fulfill the requirements imposed for the direct valorization of CO2 and CH4, further efforts will be directed toward the design of more complex multi-stage separation processes. The applied optimization methodology was proposed as a useful tool to advance the implementation of membrane separation systems, in conjunction with the development and innovation efforts of membrane materials.
"Engineering"
] |
Vulnerability studies in the fields of transportation and complex networks: a citation network analysis
In recent years, studies on network vulnerability have grown rapidly in the fields of transportation and complex networks. Even though these two fields are closely related, their overall structure is still unclear. In this study, to add clarity comprehensively and objectively, we analyze a citation network consisting of vulnerability studies in these two fields. We collect publication records from an online publication database, the Web of Science, and construct a citation network where nodes and edges represent publications and citation relations, respectively. We analyze the giant weakly connected component consisting of 705 nodes and 4,584 edges. First, we uncover main research domains by detecting communities in the network. Second, we identify major research development over time in the detected communities by applying main path analysis. Third, we quantitatively reveal asymmetric citation patterns between the two fields, which implies that mutual understanding between them is still lacking. Since these two fields deal with the vulnerability of network systems in common, more active interdisciplinary studies should have a great potential to advance both fields in the future.
Introduction
Our society is surrounded by a great variety of network systems such as power grids, communication systems, water supply systems, gas pipelines, or transportation systems. The quality of our lives relies on the service level of these critical infrastructures. As Rinaldi et al. (2001) describe, critical infrastructures are mutually dependent, relying on one another to provide an even higher level of service.
However, recent natural and man-made disasters such as floods, hurricanes, tornadoes, volcanic eruptions, earthquakes, tsunamis, or terrorist attacks have revealed an inherent weakness of these interdependent systems. According to Helbing (2013), as the complexity and interaction strength increase, these systems can become unstable, creating uncontrollable situations. In other words, failures can spread out in multiple systems because of the dependency among different systems.
A transportation system is one of the critical infrastructures. In the transportation field, in the 1990s and the beginning of the 2000s, researchers mainly focused on transport network reliability (e.g., Bell and Iida 1997;Asakura 1999;Bell 2000;Chen et al. 2002;Nicholson et al. 2003). Typically, studies on the transport network reliability consider both, the occurrence probability and the consequences. However, according to Taylor (2017), around the beginning of the 2000s, researchers started to discuss something different from reliability: transport network vulnerability. Studies on the transport network vulnerability focus on catastrophic events that rarely occur but bring about devastating consequences on the society. These studies consider only the consequence without the occurrence probability, because the expected values cannot capture the characteristics of such catastrophic events (e.g., D'Este and Taylor 2003;Taylor and D'Este 2007;Taylor 2008). In this way, transport network vulnerability emerged as an important area in the transportation field.
At almost the same time as the emergence of the transport network vulnerability, a new scientific field called complex networks (also known as network science) appeared after two pioneering works (Watts and Strogatz 1998; Barabási and Albert 1999). This field especially focuses on topological characteristics of various real-world network systems. In complex networks, vulnerability is also an important topic. So far, a large number of studies have investigated network vulnerability, especially the influence of the removal of nodes/edges on the performance of the network systems.
As already pointed out in a review paper by Mattsson and Jenelius (2015), one root of the transport network vulnerability research lies in complex networks. In other words, the vulnerability of transport networks has been studied from the perspective of the network topology. It seems that concepts and methods in these two fields are closely related, and interdisciplinary studies between them should have a great potential to advance both fields. However, an overall picture of the structure of these studies is currently lacking. Before discussing the possibility of more active interdisciplinary studies, it is necessary to grasp how these two fields have evolved over time and how they constitute the current structure. In the present study, to understand the overall structure of the vulnerability studies in these two fields, we analyze a citation network where nodes and edges represent publications and their citation relations, respectively. This approach enables us to obtain the entire picture of the vulnerability studies in a comprehensive and objective manner. Our focus is to understand the entire structure of the previous studies rather than scientometric indexes of individual papers, researchers, and journals (e.g., Heilig and Voß 2015; Jiang et al. 2020; Modak et al. 2019).
The rest of this paper is organized as follows. We propose a framework for analyzing a citation network in Sect. 2. In Sect. 3, we apply the proposed framework to our problem which aims to understand the overall structure of the vulnerability studies and discuss the obtained results. This paper ends with conclusions in Sect. 4.
Methods
In this section, we propose a framework for analyzing citation networks. The proposed framework can be applied to any topic in any research field. It consists of six steps as shown in Fig. 1.
Data collection
As the first step, we collect publication records from an online publication database, the Web of Science Core Collection (hereinafter referred to as WoS). We perform a search in the WoS with "network" and "vulnerability" as keywords. Next, we refine publications into the five WoS categories: 1) transportation science technology, 2) transportation, 3) engineering multidisciplinary, 4) physics multidisciplinary, and 5) multidisciplinary sciences. Then we further refine publications into four types of documents in the WoS: 1) article, 2) proceedings paper, 3) book chapter, and 4) review.
Construction of a citation network
As the second step, we construct a citation network where nodes and directed edges represent publications and citation relations, respectively. We construct a citation network not only from the publications obtained from the WoS described in the previous section, but also from related publications. The related publications are defined as the ones which are cited by more than 10 publications in the data set obtained from the WoS. We use CitNetExplorer (Van Eck and Waltman 2014) to add these related publications. The reason why we add the related publications is to consider publications which are not directly related to network vulnerability, but present important concepts for the vulnerability studies (e.g., books that present fundamental concepts for the transport network analysis).
Extraction of the giant weakly connected component
As the third step, we extract the giant weakly connected component (hereafter referred to as GWCC) from the constructed network. A weakly connected component is a component where at least one path exists between any pair of nodes without considering the direction of edges. The GWCC is the largest component among all weakly connected components in a network.
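As a concrete illustration (the paper does not prescribe a particular software package), this step can be carried out in a few lines with networkx; the toy edge list below stands in for the real citation data.

```python
# Minimal sketch: extract the giant weakly connected component (GWCC) with networkx.
import networkx as nx

G = nx.DiGraph()
# One node per publication and one directed edge per citation relation (toy data here).
G.add_edges_from([("paperA", "paperB"), ("paperC", "paperB"), ("paperD", "paperE")])

# Weakly connected components ignore edge direction; keep the largest one.
gwcc_nodes = max(nx.weakly_connected_components(G), key=len)
gwcc = G.subgraph(gwcc_nodes).copy()
print(gwcc.number_of_nodes(), gwcc.number_of_edges())
```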
Community detection
As the fourth step, we perform a community detection algorithm to the extracted GWCC. In general, a community in a network is a dense part in the network. Therefore, community detection enables us to objectively identify main research domains in a citation network. In this study, we use an algorithm called Louvain algorithm (see Blondel et al. 2008).
The Louvain algorithm aims to maximize a quality function for community detection, modularity Q, which is defined as follows:

$$Q = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(C_i, C_j), \qquad (1)$$

where $A_{ij}$ is the weight of the edge between nodes i and j, $k_i$ ($= \sum_j A_{ij}$) is the sum of the weights of the edges attached to node i, $C_i$ is the community that node i belongs to, $\delta(C_i, C_j)$ is 1 if $C_i = C_j$ (nodes i and j belong to the same community) and 0 otherwise, and $m = \frac{1}{2}\sum_{ij} A_{ij}$ is the total weight of the edges in the network. It is noted that for an unweighted graph, the weight of an edge is one and the total weight of the edges attached to a node equals its degree (the number of edges attached to the node). Modularity Q takes a value between -1 and 1. Networks with high modularity have dense connections between the nodes within communities but sparse connections between nodes in different communities. In other words, a higher value of modularity Q implies a better quality of community detection.
The Louvain algorithm is an agglomerative algorithm which works by taking a node and joining it into groups, and joining groups with other groups and so on. This algorithm consists of two steps and they are repeated iteratively (Barabási 2016).
Step 1
The algorithm starts with a weighted network with N nodes, initially assigning each node to a different community. For each node i, we evaluate the increase of the modularity if we place node i in the community of one of its adjacent nodes j. Then we move node i into the community for which the increase of the modularity is the largest (and positive). If no positive increase of the modularity is possible, node i stays in its original community. We repeat this process until no more improvement of the modularity is achieved. The change of the modularity Q obtained by moving a node i into a community C is calculated as follows:

$$\Delta Q = \left[ \frac{\sum_{in} + k_{i,in}}{2W} - \left( \frac{\sum_{tot} + k_i}{2W} \right)^2 \right] - \left[ \frac{\sum_{in}}{2W} - \left( \frac{\sum_{tot}}{2W} \right)^2 - \left( \frac{k_i}{2W} \right)^2 \right], \qquad (2)$$

where $\sum_{in}$ is the sum of the weights of the edges inside the community C, $\sum_{tot}$ is the sum of the weights of the edges attached to the nodes in the community C, $k_i$ is the sum of the weights of the edges attached to node i, $k_{i,in}$ is the sum of the weights of the edges from node i to nodes in the community C, and W is the sum of the weights of all edges in the whole network.
Step 2
A new network can be constructed based on the communities identified in step 1. The communities obtained in step 1 are aggregated. In other words, nodes belonging to the same community are merged into one node. This generates self-loops, which correspond to edges between nodes in the same community. Once step 2 is completed, steps 1 and 2 are repeated until no further improvement of the modularity is achieved.
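For reference, the snippet below runs a Louvain implementation and evaluates the modularity Q on a toy graph. networkx's built-in routine (available from version 2.8) is used here as a stand-in for the algorithm of Blondel et al. (2008), and applying it to the undirected view of the citation network is our assumption; the paper does not specify a library.

```python
# Sketch: Louvain community detection and modularity on a toy citation graph.
import networkx as nx

# Toy directed citation graph standing in for the GWCC built in the previous steps.
G = nx.DiGraph([("p1", "p2"), ("p3", "p2"), ("p3", "p1"),
                ("p4", "p5"), ("p6", "p5"), ("p6", "p4"), ("p1", "p5")])
U = G.to_undirected()  # modularity-based detection applied to the undirected view (assumption)

communities = nx.community.louvain_communities(U, seed=42)
Q = nx.community.modularity(U, communities)
print(f"{len(communities)} communities, modularity Q = {Q:.3f}")
```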
Main path analysis
As the fifth step, we perform the so-called main path analysis (Hummon and Doreian 1989) on each of the detected communities. The main path analysis is used to uncover the time-series major paths in a citation network.
A publication always cites already existing publications. As shown in Fig. 2, we define sources and sinks as publications with zero out-degree (publications that do not cite any other publication in the data set) and zero in-degree (publications that are not cited by any other publication), respectively.

Step 1

As the first step of the main path analysis, we calculate the search path count for each edge (Batagelj 2003). The search path count is the number of times the edge is traversed through all possible combinations between sources and sinks. The logic behind using this index is that if an edge occupies a route through which more knowledge flows, it should have a certain importance in the knowledge-dissemination process (Liu and Lu 2012).
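The search path count can be computed with a simple dynamic program over a topological order of the (acyclic) citation network. The sketch below follows Batagelj's forward/backward counting, with edges pointing from the citing to the cited publication as in the constructed network, and a toy graph in place of the real data.

```python
# Sketch: search path count (SPC) of Batagelj (2003) on an acyclic citation network.
# n_start[v] counts directed paths reaching v from nodes without incoming edges,
# n_end[v] counts directed paths from v to nodes without outgoing edges;
# SPC(u -> v) = n_start[u] * n_end[v], i.e. the number of maximal paths traversing the edge.
import networkx as nx

G = nx.DiGraph([("a", "c"), ("b", "c"), ("c", "d"), ("c", "e"), ("d", "f")])  # toy acyclic graph
order = list(nx.topological_sort(G))

n_start = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
for v in order:
    for u in G.predecessors(v):
        n_start[v] += n_start[u]

n_end = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
for v in reversed(order):
    for w in G.successors(v):
        n_end[v] += n_end[w]

spc = {(u, v): n_start[u] * n_end[v] for u, v in G.edges()}
print(spc)
```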
Step 2
As the second step, main paths are identified based on the traversal counts of each edge. Specifically, we apply global key-route search (Liu and Lu 2012). The global key-route search is conducted as follows: (1) to specify edges with the certain fraction of the highest traversal counts, (2) to search forward from the specified edges toward sources to identify a path that has the largest overall traversal counts, (3) to search backward from the specified edges toward sinks to identify a path that has the largest overall traversal counts, and (4) to connect paths from sinks to sources including the specified edges. By definition, multiple main paths can be identified. In a simple example of a citation network shown in Fig. 3, we can identify the edge with the highest search path count (the thickest edge) and identify the two main paths from this edge.
Citation patterns among different communities
As the sixth step, we visualize the number of intra/inter-community edges by using a Sankey diagram. Sankey diagrams are a type of flow diagram in which the width of the arrows is proportional to the flow rate (Schmidt 2008). An intra-community edge is an edge connecting two nodes in the same community and an inter-community edge is an edge connecting two nodes in different communities. This visualization makes it possible to understand knowledge flow among different communities.
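A possible implementation of this step is sketched below: citation edges are aggregated into community-to-community flows, intra-community edges are reported separately, and the inter-community flows are drawn with plotly, one of several libraries able to render Sankey diagrams (the paper does not specify a tool). The membership and edge lists are purely illustrative.

```python
# Sketch: counting intra-/inter-community citation edges and drawing inter-community flows.
from collections import Counter
import plotly.graph_objects as go

membership = {"p1": "transport vulnerability", "p2": "topological vulnerability",
              "p3": "transport vulnerability", "p4": "resilience"}
edges = [("p1", "p2"), ("p3", "p1"), ("p3", "p2"), ("p4", "p2")]  # toy citation edges

flows = Counter((membership[u], membership[v]) for u, v in edges)
intra = {c: n for (c, d), n in flows.items() if c == d}
inter = {(c, d): n for (c, d), n in flows.items() if c != d}
print("intra-community edges:", intra)

labels = sorted({c for pair in inter for c in pair})
index = {c: i for i, c in enumerate(labels)}
fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(source=[index[s] for s, t in inter],
              target=[index[t] for s, t in inter],
              value=list(inter.values())),
))
fig.show()
```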
Results and discussions
Now the proposed framework is applied to our problem, which aims to understand the overall structure of the vulnerability studies in the fields of transportation and complex networks.
Descriptive statistics of publication records
In this section, we show some descriptive statistics of the publications obtained from the WoS. We collect 1,034 publication records which have "network" and "vulnerability" as keywords. This data collection was performed on August 18th, 2018.
Distribution of publication year
In Fig. 4, we show the distribution of the published items by year from 1993 to 2018 (25 years span). This figure indicates that the number of studies on the network vulnerability has grown rapidly after 2000.
Distribution of document types
In the WoS, each publication can be classified into one or more types of documents. The obtained data set involves four types of documents including article, proceedings paper, book chapter, and review. Most records are classified into article and/or proceedings paper (Table 1).
Distribution of journals
In Table 2, we show the distribution of articles in the top 10 journals. 407 articles (52.2%) are published in these 10 journals. Some journals cover a broad area of scientific fields, such as PLoS ONE (17.6%), Scientific Reports (5.39%), and Proceedings of the National Academy of Sciences of the USA (3.34%). 76 papers (9.76%) are published in a journal covering all areas of critical infrastructure protection, which is a concept relating to the preparedness for and response to serious incidents on infrastructure systems.
Distribution of countries
In Table 3, we show the top 15 countries in terms of the authors' affiliations. The publications are counted by the authors' affiliations of each paper. For example, when an article is written by two authors whose affiliations are in the United States and China, it is counted for both countries.
Visualization
We construct a citation network with 1181 nodes (1034 publications from the WoS and 147 related publications) and 4601 directed edges. In Fig. 5, we visualize the constructed network. This network has many disconnected components. Specifically, it has 462 weakly connected components and the largest one (GWCC) has 705 nodes and 4584 edges.
Cumulative in-degree distributions
Here we show the cumulative in-degree distribution on the log-log plot in Fig. 6. In-degree of a node is the number of edges directed into that node. In a citation network, in-degree represents the number of citations of the publication. From Fig. 6, we can observe a straight line in the log-log plot. This indicates that the in-degree distribution follows an approximate power-law distribution. In other words, most papers have a few citations, but a few papers have many citations (the most cited paper by Albert et al. (2000) has 134 citations in this data set).
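For readers who want to reproduce this kind of plot, the snippet below computes the complementary cumulative in-degree distribution on log-log axes; a synthetic scale-free graph is used as a placeholder for the actual citation network.

```python
# Sketch: complementary cumulative in-degree distribution of a citation network (log-log plot).
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

G = nx.scale_free_graph(500, seed=1)                 # placeholder for the real citation network
indeg = np.array([d for _, d in G.in_degree()])
indeg = indeg[indeg > 0]

ks = np.sort(np.unique(indeg))
ccdf = [(indeg >= k).mean() for k in ks]             # P(in-degree >= k)

plt.loglog(ks, ccdf, marker="o", linestyle="none")
plt.xlabel("in-degree k (number of citations)")
plt.ylabel("P(in-degree >= k)")
plt.show()
```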
Giant weakly connected component
We extract and analyze the GWCC in the constructed network. The GWCC has 705 nodes and 4584 directed edges. This means that 59.7% (705 out of 1181) of all nodes and 99.6% (4,584 out of 4,601) of all edges are included in the GWCC.
Community structure in the giant weakly connected component
We identify seven communities: (1) transport vulnerability, (2) interdependent networks, (3) topological vulnerability, (4) cascading failures, (5) metro and maritime networks, (6) resilience, and (7) scattered community. When two communities are located close to each other in Fig. 7, this means that the publications in these two communities tend to cite each other. The size of each node represents the in-degree of the node. The color of each node represents the community the node belongs to. The value of the modularity Q is 0.461. According to Fortunato and Barthelemy (2007), this is large enough to consider that the network has community structures. We show the number of nodes and intra-community edges in each community in Table 4. The community size (the number of nodes) ranges from 7 to 208 and the number of intra-community edges connecting two nodes in the same community ranges from 7 to 1225. The sum of the number of nodes in communities is 705, because each node belongs to only one community in the Louvain algorithm. The total number of the intra-community edges is 3093, which indicates that 67.5% of all edges in the GWCC are intra-community edges.
Main paths in each community
We conduct the main path analysis for all the communities except for the scattered community. In this study, the edges with the top 10% highest search path count are specified at the first step in the main path analysis. For example, if a citation network has 200 edges, 20 edges are specified and main paths are identified from these edges. In the following sections, we show the obtained main paths in each community and briefly describe how research has been developing there over time.
Transport vulnerability
We show the main paths in the transport vulnerability community in Fig. 8. The identified main paths consist of 26 out of 208 publications.
There are three sources (nodes with zero out-degree) in the main paths (Bell and Iida 1997;Chen et al. 1999;D'Este and Taylor 2001). During the 1990s, the transport network reliability became an important research topic (Taylor 2017). Bell and Iida (1997) present fundamental concepts about transport network analysis including the transport network reliability. Chen et al. (1999) propose a concept of capacity reliability which is defined as the probability where the network capacity can accommodate a certain demand at a required service level. Standard approaches on the transport network reliability consider both the occurrence probability and the consequences. On the other hand, D'Este and Taylor (2001) show that such measures do not work well in some situations where the occurrence probability is very small (but not zero) but the consequence is catastrophic with an example of the Australian National Highway System. This indicates the necessity of a new concept of the transport network vulnerability, which is different from the transport network reliability.
After the realization of the necessity of the concept of the transport network vulnerability, D'Este and Taylor (2003) define the vulnerability for nodes and links: (1) a node is vulnerable if a small number of links significantly decreases the accessibility of the node, and (2) a link is critical if the loss of the link significantly decreases the accessibility of the network or of particular nodes. Then the identification of critical parts in transport networks is of great interest. Taylor et al. (2006) discuss three different measures to evaluate the decrease of accessibility using the Australian National Transport Network road system as an example. Jenelius et al. (2006) introduce link importance indices and site exposure indices based on the increase in generalized cost under closures of links. Chen et al. (2007) present network-based accessibility measures considering both of the demand and supply sides.
There are two streams toward a study by Jenelius and Mattsson (2012). One stream starts from a paper by Jenelius (2009). Different from the above-mentioned studies, Jenelius (2009) focuses on the vulnerability not at the link level but at the regional level. Jenelius (2009) adopts two different views on the vulnerability: (1) a region is important if the consequences of a disruption in this region are severe for the whole network (a view from the supply perspective), and (2) a region is exposed if the consequences of a disruption somewhere in the network are severe for the users in the region (a view from a demand perspective). These regional importance and exposure measures are extended from ones proposed by Jenelius et al. (2006). The other stream starts from a paper by Sullivan et al. (2009). They review studies on the vulnerability analysis of transportation networks. They show that the performance metrics, terminology, methods in the vulnerability studies can vary dramatically depending on the application or research goals. Sullivan et al. (2010) then propose a method to identify the most critical links and evaluate the network robustness with an index which can be used to compare networks with different sizes, topology, and connectivity levels.
The two streams are integrated by Jenelius and Mattsson (2012). They present a method to analyze the vulnerability of transportation networks under area-covering disruptions such as flooding, snowfall, or storms. Lu et al. (2014) analyze the vulnerability of road networks in the case of flooding. They study inter-city travels and how people adopt to flooding impacts in Bangladesh.
Papers described above mainly focus on the vulnerability of road networks. On the other hand, from the middle of the 2010s, the vulnerability of public transport networks (PTNs) has been investigated intensively. Cats and Jenelius (2014) develop a method to evaluate the dynamic vulnerability for PTNs. They extend a concept of centrality measure in complex networks, edge betweenness centrality, and propose two formulations considering dynamic and stochastic natures of PTNs. This is a successful example of an integration of concepts in transportation fields and complex networks. Rodríguez-Núñez and García-Palomares (2014) also investigate the vulnerability of PTNs, using the subway network in Madrid as an example. They show that circular lines play an important role for the network robustness. In addition, a strategy of capacity enhancement to mitigate impacts of unexpected disruptions in PTNs is proposed by Cats and Jenelius (2015).
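As a minimal illustration of the underlying topological notion (only the static, unweighted edge betweenness from complex networks, not the dynamic and stochastic extensions of Cats and Jenelius (2014)), the links of a toy transit graph can be ranked as follows:

```python
# Sketch: ranking links by edge betweenness centrality as a topological proxy for link criticality.
import networkx as nx

# Toy public transport graph: nodes are stops, edges are links between consecutive stops.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F"), ("F", "C")])
ebc = nx.edge_betweenness_centrality(G, normalized=True)
for edge, score in sorted(ebc.items(), key=lambda kv: kv[1], reverse=True):
    print(edge, round(score, 3))
```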
There are two review papers in the year of 2015. Reggiani et al. (2015) review 33 papers about vulnerability and resilience in transportation networks. Mattsson and Jenelius (2015) also review studies on vulnerability and resilience. They show that there are two distinct traditions in transport vulnerability studies: (1) topological vulnerability studies which analyze topological characteristics and (2) system-based vulnerability studies which attempt to analyze supply and demand.
From the paper by Mattsson and Jenelius (2015), two streams are observed. One stream is about the vulnerability of road networks. Bell et al. (2017) present capacity weighted spectral partitioning to find potential bottlenecks in large transportation networks. The topics of the three sinks (nodes with zero in-degree) which cite the paper by Bell et al. (2017) include an optimization framework to evaluate vulnerability under disruptions of multiple links (Xu et al. 2017), a mixed-integer linear programming to evaluate disruptions of links (Karakose and McGarvey 2018), and the vulnerability of road networks regularly threatened by heavy snow (Jin et al. 2017).
Another stream is about the vulnerability of PTNs. It includes four papers (Jenelius and Cats 2015; Cats 2016; Shelat and Cats 2017; Malandri et al. 2018). These studies aim to assess the vulnerability of PTNs in consideration of the specific natures of PTNs in both of the demand and supply sides. Jenelius and Cats (2015) propose a method to evaluate the value of adding new links for the robustness and redundancy of PTNs. Cats (2016) presents a method to evaluate the robustness of alternative network designs. Shelat and Cats (2017) introduce a method of a stochastic user equilibrium for PTNs considering boarding denials at stations and perceived increase in travel time because of on-board crowding. Malandri et al. (2018) present a model of temporal and spatial spill-over effects in PTNs.
Summing up the above, the concept of the transport network vulnerability emerged about the year of 2000 to study catastrophic events that rarely occur but cause catastrophic consequences. The obtained main paths suggest that, in the 2000s, the vulnerability of road networks was mainly studied. These papers especially focus on the identification of critical links and exposed nodes. Then, after the paper by Cats and Jenelius (2014) which extends the concept of edge betweenness centrality in complex networks, the vulnerability of PTNs has been studied intensively. After the review paper by Mattsson and Jenelius (2015), two streams about the vulnerability of road networks and PTNs are observed. The papers in the main paths are mainly published in transportation-related journals, such as Transportation Research Part A: Policy and Practice, Transportation Research Part B: Methodological, and Networks and Spatial Economics.
Metro and maritime networks
In Fig. 9, we show the main paths in the community of metro and maritime networks. We identify two disconnected main paths. The upper one represents the main paths on the vulnerability of metro and railway networks. The lower one represents the main paths on the vulnerability of maritime shipping and global energy supply networks. There are four sources in the upper main paths (Latora and Marchiori 2002; Angeloudis and Fisk 2006; Derrible and Kennedy 2010; Johansson and Hassel 2010). All of these studies analyze the vulnerability of subway or railway networks from the perspective of network topology by directly applying the methods in complex networks. Latora and Marchiori (2002) investigate the topological properties of the MBTA (Boston underground transportation system) and bus network. They show that the MBTA plus bus network is a small-world network with high efficiency on both the global and local scales. Angeloudis and Fisk (2006) analyze the world's largest subway networks. Their results show that subway systems can be classified into two classes of networks with exponential degree distributions which are robust to random failures. Derrible and Kennedy (2010) also analyze 33 metro networks in the world and show these networks are scale-free and small-world networks. Johansson and Hassel (2010) investigate the vulnerability of a railway network consisting of five systems from the perspective of interdependent networks. A later study in this stream examines the vulnerability of the Shanghai subway network in China, showing that the subway network is robust to random failures but fragile to targeted attacks, especially to attacks based on the highest-betweenness nodes. There are two streams from this paper. One stream starts from a paper by Zhang et al. (2016b). They conduct a vulnerability analysis on the high speed railway networks in China, the US, and Japan. They show that high speed railway networks are very fragile to malicious attacks. Xu et al. (2018) investigate the evolutionary process of China's high-speed rail network by applying complex networks. The other stream begins with Zhang et al. (2016a). They study the urban rail transit networks by analyzing the hub structures in the networks in consideration of transfers. Zhang et al. (2018) then investigate the topological vulnerability of three metro networks in Guangzhou, Beijing, and Shanghai.
In the lower main paths, there are two sources (Dijkstra 1959;Brown et al. 2006). Dijkstra (1959) proposes an algorithm to find the shortest paths in a network. Brown et al. (2006) propose models for robust critical infrastructures against terrorist attacks. In this way, these two papers are not directly related to maritime networks, but they present widely-applied concepts. Zavitsas and Bell (2010) propose a method to identify vulnerable components in the global energy supply networks and investigate the optimal network structure. Ducruet (2016) then analyzes the role of two canals, Suez and Panama, from the perspective of network topology. They investigate the effect of removals of the canal-dependent flows from the network by using the method in complex networks. Viljoen and Joubert (2016) also apply the theory in complex networks to analyze the vulnerability of the global container shipping network against targeted attacks on edges. They show that removing the highest betweenness edges would greatly impact the transshipment. They point out that these edges are not always the edges that carry the most traffic and could be overlooked as unimportant. Viljoen and Joubert (2018) investigate the vulnerability of the supply chain networks by applying the concept of multilayered networks in complex networks.
Summarizing the above, we identify the two disconnected main paths: (1) the vulnerability of subway or railway networks and (2) the vulnerability of maritime shipping and global supply chain networks. Interestingly, there is a clear difference between papers in the transport vulnerability and metro and maritime networks communities. That is, all of the papers except for two source papers (Dijkstra 1959; Brown et al. 2006) in the metro and maritime networks community directly apply the methods in complex networks and analyze the topological vulnerability. Especially, the topological vulnerability of subway and railway networks has been investigated more than that of the other transportation modes. One possible reason is that collecting and handling the data on the network topology of subway and railway networks is easier than for other transportation modes. On the other hand, the main paths also suggest that studies on maritime networks from the perspectives of network topology are currently limited and further studies are desired in the future. It is noted that, differently from the transport vulnerability community, most papers in the main paths in this community are published in complex network-related journals; in particular, 9 out of 15 papers in the main paths are published in Physica A: Statistical Mechanics and Its Applications.
Resilience
We show the main paths in the resilience community in Fig. 10. The main paths consist of 16 out of 54 papers.
In the obtained main paths, there are five sources (Holling 1973; Pimm 1984; Bruneau et al. 2003; Murray et al. 2008; Cox et al. 2011). According to Reggiani (2013), there are two different ways to define resilience. One is proposed by Holling (1973) and another is proposed by Pimm (1984). Both of the definitions were originally proposed for ecological systems. The resilience based on the definition by Holling (1973) is the perturbation which can be absorbed before the system changes into another state. On the other hand, in the definition by Pimm (1984), the resilience is measured as the speed of return to the equilibrium state. After these two works, in the field of earthquake engineering, Bruneau et al. (2003) present a framework for defining seismic resilience and specifying quantitative measures of resilience. This framework includes three measures of resilience: (1) reduced failure probabilities, (2) reduced consequences from failures, and (3) reduced time to recovery. Murray et al. (2008) present a review of methodologies of vulnerability analysis for infrastructure networks. They explore how different methods impact the infrastructure planning and policy development. Cox et al. (2011) present operational metrics to evaluate the resilience of transportation systems. They apply the proposed methods to evaluate the impacts of the London subway bombings in 2005. As described in this paragraph, the source papers include a wide range of fields such as ecological fields, earthquake engineering, and transportation engineering.
Reggiani (2013) proposes a conceptual or methodological framework for analyzing the resilience of transportation systems. The author points out that the conditions of hubs can largely affect the entire transportation networks, based on studies about the vulnerability of scale-free networks in complex networks. There are two streams from the paper by Reggiani (2013). One stream starts from O'Kelly (2015). This paper studies the resilience of networks with hubs (especially cyber and air passenger/freight networks). Networks whose degree distribution follows a power-law are resilient to random failures, but at the same time they are fragile to targeted attacks. The author concludes that developing mitigation strategies for networks with hub-spoke structures is a critical issue. This paper is cited by two papers. One of them proposes a q-ad-hoc hub location problem for multi-modal freight transportation systems; this strategy uses alternative hubs in an ad-hoc manner, which gives alternative routes satisfying supply and demand in disrupted network systems. The other, Zhang et al. (2017), investigates a hub location problem for resilient power projection networks in military actions. Another stream starts from Griffith and Chun (2015). They study the resilience and vulnerability of economic networks associated with spatial autocorrelation. The sink paper of this stream by Östh et al. (2018) presents a framework for analyzing the mechanism which drives spatial systems from the perspective of the resilience of urban areas.
We can also observe a stream from a paper by Franchin and Cavalieri (2015). They study the seismic resilience using complex network theory. They present an index of resilience based on the evolution of efficiency of communication between citizens during the relocation of the population after an earthquake. Fu et al. (2016) then provide a model which simulates the evolution of infrastructure networks. This model can be used to construct an infrastructure system by reducing cost, increasing efficiency, and improving resilience. Ouyang and Fang (2017) present a framework to optimize resilience of infrastructure systems against intentional attacks. The sink papers of this stream are related to community resilience in consideration of infrastructure, social, and economical aspects (Mahmoud and Chulahwat 2018) and vulnerability to multiple spatially localized attacks on critical infrastructures (Ouyang et al. 2018).
Summing up the above, the knowledge flow about the concept of resilience is visualized in the main paths. Interestingly, the obtained main paths visualize how the concept of resilience has been extended throughout different research fields. That is, the resilience was initially conceptualized for ecological systems in the 1970s and 1980s, and they have been developed in other fields including earthquake engineering and transportation engineering. In the same way as the transport vulnerability and metro and maritime networks communities, concept applications from complex networks were observed. Especially the resilience of networks with hubs (e.g., passenger/freight networks (O'Kelly 2015)) has been analyzed based on studies about scale-free networks in complex networks. The main paths seem to be divided into two streams. One stream is related to the resilience of transportation and urban systems, which is shown in the upper part of Fig. 10. These papers are mainly published in Networks and Spatial Economics and Transport Policy. Another stream is about the resilience of critical infrastructure systems against disasters (especially for earthquakes), which is shown in the lower part in Fig. 10. Many papers in this stream are published in Computer-Aided Civil and Infrastructure Engineering.
Topological vulnerability
We show the main paths in the topological vulnerability community. The main paths consist of 13 out of 127 publications (Fig. 11).
There are four sources in the obtained main paths (Watts and Strogatz 1998; Barabási and Albert 1999; Barabási et al. 1999; Albert et al. 1999). Two papers, Watts and Strogatz (1998) and Barabási and Albert (1999), are well-known studies. Watts and Strogatz (1998) propose a simple model for generating so-called small-world networks. These networks are neither completely regular nor random, but lie somewhere in between. They found that small-world networks can be highly clustered yet have a small average shortest path length. Barabási and Albert (1999) show that many real-world networks share a common property: their degree distribution follows a power-law. These networks are the so-called scale-free networks. They identified two generic mechanisms behind this type of network: (1) growth, whereby a network expands by adding new nodes, and (2) preferential attachment, whereby new nodes attach preferentially to existing nodes with a high degree. The other two sources, Albert et al. (1999) and Barabási et al. (1999), are also well-cited papers. Albert et al. (1999) analyze the network topology of the World-Wide Web, whose nodes and directed edges are documents and URL hyperlinks, respectively. They show that both the in-degree and out-degree distributions follow a power-law. Barabási et al. (1999) investigate the properties of scale-free networks. They present a mean-field theory which can be used to predict the dynamics of individual nodes and to calculate the degree distribution analytically. These sources are not directly related to vulnerability, but they provide widely applied concepts.
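As a concrete illustration of the two generative models discussed above, the following sketch (using the networkx library; the parameter values are arbitrary choices, not taken from the cited papers) builds a small-world and a scale-free network and prints the quantities that characterize them.

```python
import networkx as nx
import numpy as np

# Watts-Strogatz small-world model: a ring lattice rewired with probability p.
ws = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)
print("WS average clustering   :", nx.average_clustering(ws))
print("WS average shortest path:", nx.average_shortest_path_length(ws))

# Barabasi-Albert scale-free model: growth plus preferential attachment.
ba = nx.barabasi_albert_graph(n=1000, m=5, seed=42)
degrees = np.array([d for _, d in ba.degree()])
# The heavy right tail of the degree distribution is the power-law signature.
print("BA mean degree:", degrees.mean(), "max degree:", degrees.max())
```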
After these works, Albert et al. (2000) explore the error and attack tolerance of networks by applying percolation theory. They investigate the effects of removals of nodes/edges on network connectivity. They show that scale-free networks are highly robust against random failures; however, they are extremely vulnerable to targeted attacks on a few high-degree nodes. This paper by Albert et al. (2000) is the most-cited paper in the data set in this study.
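A minimal sketch of the removal experiment described above is given below (networkx assumed; the giant-component fraction is used as the robustness measure, and the network is a synthetic Barabási-Albert graph rather than any of the data sets analysed by Albert et al.).

```python
import random
import networkx as nx

def giant_fraction(g, n_original):
    """Largest connected component size relative to the original network."""
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / n_original

def remove_nodes(g, fraction, targeted):
    g = g.copy()
    n_remove = int(fraction * g.number_of_nodes())
    if targeted:  # attack: remove the highest-degree nodes first
        order = sorted(g.degree(), key=lambda nd: nd[1], reverse=True)
        victims = [n for n, _ in order[:n_remove]]
    else:         # error: remove nodes uniformly at random
        victims = random.sample(list(g.nodes()), n_remove)
    g.remove_nodes_from(victims)
    return g

g0 = nx.barabasi_albert_graph(2000, 3, seed=1)
n0 = g0.number_of_nodes()
print("random 5% removed   ->", giant_fraction(remove_nodes(g0, 0.05, False), n0))
print("targeted 5% removed ->", giant_fraction(remove_nodes(g0, 0.05, True), n0))
```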
There are mainly two streams after the paper by Albert et al. (2000). One stream is about network efficiency and vulnerability in scale-free networks. Crucitti et al. (2003) investigate the error and attack tolerance of scale-free networks with the concepts of global and local efficiency. They show that both the global and the local efficiency are not affected by random failures, but the network is extremely vulnerable to attacks on highly connected nodes. Criado et al. (2007) analyze the correlation between efficiency, vulnerability and cost of networks and show strong correlations between them. Hernández-Bermejo et al. (2009) study the network efficiency of scale-free networks. They show that, when a node is added to a network, the largest increase in network efficiency is obtained when the new node connects to the most connected existing node. They conclude that preferential attachment seems to lead to a trade-off: a more efficient network that is, however, more vulnerable to targeted attacks.
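The efficiency-based measurements used in this stream can be reproduced directly; the sketch below (networkx assumed, synthetic graph) prints the global and local efficiency and shows how the global efficiency reacts to removing the most connected node.

```python
import networkx as nx

g = nx.barabasi_albert_graph(300, 3, seed=7)
print("global efficiency:", nx.global_efficiency(g))
print("local efficiency :", nx.local_efficiency(g))

# Remove the most connected node and recompute, mimicking a targeted attack.
hub = max(g.degree(), key=lambda nd: nd[1])[0]
g.remove_node(hub)
print("global efficiency without the largest hub:", nx.global_efficiency(g))
```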
Another stream is about identifying critical nodes and defence strategies. Wu et al. (2007) introduce a model of intentional attacks with incomplete information, in which the random failures and intentional attacks of Albert et al. (2000) are two extreme cases. They show that hiding a small fraction of randomly chosen nodes in scale-free networks can protect the whole network against intentional attacks, because a few hidden hubs can keep playing crucial roles for network connectivity. Ghedini and Ribeiro (2011) point out that vulnerability analysis based only on centrality measurements may be biased; they show that failures of lower-centrality nodes can sometimes seriously damage network connectivity. Ghedini and Ribeiro (2014) then propose strategies to identify vulnerable parts and to mitigate damage by changing the network topology through rewiring or adding edges. The topics of the two sinks, Zhang et al. (2016c) and Chaoqi et al. (2018), are related to fuzzy evaluation of the vulnerability of networks and the vulnerability to attacks on multiple nodes, respectively.
In summary, papers in the main paths in this community mainly investigate the influence of removing nodes and/or edges, randomly or intentionally. These studies analyze the static aspects of vulnerability without considering network flow, which distinguishes them from papers in the cascading failures community described in the next section. The pioneering work is the paper by Albert et al. (2000), which is the most-cited paper in this data set. The main finding is the robust, yet fragile property of scale-free networks: scale-free networks are robust to random failures, but they are extremely fragile to intentional attacks on hubs. This property has been confirmed using different methods, and some defence strategies have been proposed. The methods proposed in these studies can be applied to any kind of network system; in fact, some papers in the metro and maritime networks community apply them to transportation networks (especially subway and railway networks). Three of the papers on the main paths, Watts and Strogatz (1998), Albert et al. (1999), and Albert et al. (2000), were published in Nature, and Barabási and Albert (1999) was published in Science. Many of the other papers are published in Physica A: Statistical Mechanics and Its Applications.
Cascading failures
We show the main paths in the cascading failures community in Fig. 12. We identify 23 papers out of 110 publications.
In the topological vulnerability community, papers analyze the vulnerability of networks by iteratively removing nodes/edges without considering the effects of network flow. However, in many network systems, the flow of some physical quantity is a crucial factor characterizing their vulnerability. When flow is taken into account, the removal of nodes/edges redistributes the flow, possibly leading to catastrophic consequences through domino effects. Typical examples are blackouts, communication disturbances, or financial crises. Such phenomena are called cascading failures.
There are six sources in the cascading failures main paths (Motter and Lai 2002; Wasserman and Faust 1994; Amaral et al. 2000; Albert et al. 2004; Kinney et al. 2005; Crucitti et al. 2005). Two of them, Wasserman and Faust (1994) and Amaral et al. (2000), are not directly related to cascading failures, but they present fundamental concepts in network theory. Wasserman and Faust (1994) cover various methodologies of social network analysis; this book includes concepts such as centrality and clustering, which are basic knowledge not only for social networks but also for any kind of network. Amaral et al. (2000) analyze properties of diverse real-world networks and show how different classes of small-world networks emerge. According to the obtained main paths, a pioneering work on cascading failures is the paper by Motter and Lai (2002). They propose a simple model for cascading failures in which every node has a finite capacity to handle flow and overloaded nodes are successively removed from the network. They show that intentional attacks on a network with a highly heterogeneous distribution of flow can easily lead to large-scale cascading failures. They also demonstrate that the breakdown of a single node with the largest load is sufficient to collapse the entire network. The other three papers, Albert et al. (2004), Kinney et al. (2005), and Crucitti et al. (2005), are related to blackouts as cascading failures in power grids. Albert et al. (2004) study cascading failures in power grids against the background of the large-scale blackout in the United States in 2003. They show that a power grid is robust against most failures, but breakdowns of some key nodes greatly reduce its functionality on a global scale. Kinney et al. (2005) analyze the robustness of the North American power grid against cascading failures. They show that the North American power grid is highly robust to random failures; however, targeted attacks on the few transmission substations with high betweenness and degree centrality cause serious damage to the entire system. Crucitti et al. (2005) further analyze the vulnerability of power grids in Spain, France, and Italy.
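A minimal sketch of a Motter-Lai-style cascade is given below (assumptions: betweenness centrality stands in for load, the capacity of each node is proportional to its initial load through a tolerance parameter alpha, and the trigger is the most loaded node).

```python
import networkx as nx

def motter_lai_cascade(g, trigger, alpha=0.2):
    g = g.copy()
    # Capacity of each node: (1 + alpha) times its initial load (betweenness).
    capacity = {n: (1 + alpha) * l for n, l in nx.betweenness_centrality(g).items()}
    g.remove_node(trigger)
    while g.number_of_nodes() > 0:
        load = nx.betweenness_centrality(g)            # flow is redistributed
        failed = [n for n, l in load.items() if l > capacity[n]]
        if not failed:
            break
        g.remove_nodes_from(failed)                    # domino effect
    return g.number_of_nodes()

g0 = nx.barabasi_albert_graph(200, 2, seed=3)
trigger = max(nx.betweenness_centrality(g0).items(), key=lambda kv: kv[1])[0]
print("nodes surviving the cascade:", motter_lai_cascade(g0, trigger),
      "of", g0.number_of_nodes())
```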
After these source papers, there are two papers from the middle of the 2000s (Crucitti et al. 2004; Rosas-Casals et al. 2007). Crucitti et al. (2004) extend the model proposed by Motter and Lai (2002) in two respects: (1) overloaded nodes are not removed from the network, but flow avoids them, and (2) damage is quantified by the decrease of the network efficiency proposed by Latora and Marchiori (2001). Like Motter and Lai (2002), they demonstrate that the breakdown of a single node with the largest load is sufficient to collapse the entire network. Rosas-Casals et al. (2007) show that the robustness of 33 different European power grids, which have exponential degree distributions, is similar to the robustness of scale-free networks.
From about the year 2010, there are three main streams in the main paths. The first stream starts with a paper by Chen et al. (2009). Papers in this stream concern the integration of methods from complex networks and electrical engineering. Chen et al. (2009) propose a model for structural vulnerability analysis of power grids based on Kirchhoff's laws. Chen et al. (2010) further investigate the vulnerability of power grids. They improve traditional approaches by considering the electrical power flow distribution instead of the shortest-path assignment. Their results indicate that a critical regime might exist where the network is vulnerable to both random failures and intentional attacks. Wang et al. (2011) define a new measure, electrical betweenness, by considering the maximum demand of load and the capacity of generators. They show that the electrical betweenness distribution follows a power-law and that critical nodes can be identified as those with high electrical betweenness. Ma et al. (2013) analyze the robustness of power grids with electrical efficiency. They show that the robustness does not always increase monotonically with capacity, which differs from some previous studies (e.g., Motter and Lai 2002; Crucitti et al. 2004). Koç et al. (2014) propose a new metric, effective graph resistance, extended from the average shortest path length with the concept of impedance. This metric can be used to estimate the robustness of a power grid against cascading failures. Koç et al. (2015) propose a measure to assess the robustness of power grids and apply it to a network constructed from real-world data.
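For reference, the spectral form of the effective graph resistance (the sum of pairwise resistance distances, computable from the non-zero Laplacian eigenvalues) can be evaluated as in the sketch below; the additional impedance weighting used in the power-grid setting is not reproduced here.

```python
import networkx as nx
import numpy as np

def effective_graph_resistance(g):
    """R = N * sum(1 / mu_i) over non-zero Laplacian eigenvalues (connected graph)."""
    lap = nx.laplacian_matrix(g).toarray().astype(float)
    mu = np.linalg.eigvalsh(lap)          # Laplacian eigenvalues, ascending
    nonzero = mu[mu > 1e-9]               # drop the zero eigenvalue
    return g.number_of_nodes() * float(np.sum(1.0 / nonzero))

g = nx.barabasi_albert_graph(50, 2, seed=5)
print("effective graph resistance:", effective_graph_resistance(g))
```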
The second stream begins with a paper by Bompard et al. (2009). Papers in this stream also analyze power grids using integrated methods from complex networks and electrical engineering. Bompard et al. (2009) propose extended metrics to overcome the disadvantages of purely topological metrics, such as efficiency, degree or betweenness centrality, for the vulnerability analysis of power grids. Pagani and Aiello (2013) present a survey of studies on the properties of different power grids in the United States, Europe, and China. Highlights of this survey are as follows: (1) degree distributions in power grids tend to follow an exponential distribution; (2) very few nodes have high betweenness centrality, which is similar to the properties of networks with power-law degree distributions; (3) power grids are robust to random failures but extremely vulnerable to targeted attacks on critical parts, such as nodes with high degree or nodes and edges with high betweenness centrality; and (4) the results obtained by complex network analysis are very close to those obtained by traditional electrical engineering. Kim and Ryerson (2017) analyze the South Korean power grid. Their results show that the South Korean power grid is more vulnerable compared to ER random networks (Erdos and Rényi 1960) and BA scale-free networks (Barabási and Albert 1999). Eisenberg et al. (2017) develop a method to improve response to blackouts and identify critical components in power grids.
The third stream starts from a paper by Wei et al. (2012). This stream is slightly different from the other two: while many papers in the other two streams aim to integrate the methods of complex networks and electrical engineering, papers in the third stream tend to develop models for cascading failures in more general networks. Wei et al. (2012) propose a model for cascading failures with a local preferential redistribution rule, in which the load of a broken node is redistributed to neighbour nodes depending on their degree. Jin et al. (2015) propose a model for cascading failures in directed and weighted networks. Zhu et al. (2016) extend a model for cascading failures with fuzzy information to account for uncertain failures or attacks. Zhu et al. (2018) investigate how to improve robustness against cascading failures using two multi-objective optimization problems which consider operational cost and robustness. Ji et al. (2016) analyze the structure of the global electricity trade network with nations as nodes and international electricity trade as edges.
Summing up the above, papers in the cascading failures main paths investigate the vulnerability of network systems taking network flow into consideration. Removals of nodes/edges redistribute the network flow and can cause failures to propagate through the entire network. After the pioneering work by Motter and Lai (2002), models for cascading failures have been developed in different ways. Many papers show that the breakdown of only a few nodes with high centrality can cause devastating cascading failures in the whole network, which is consistent with the findings of papers in the topological vulnerability community. In addition, many papers in the obtained main paths analyze blackouts as cascading failures in power grids, and some papers discuss the integration of concepts and methods between complex networks and electrical engineering. An interesting finding is that the results obtained from traditional electrical engineering and from complex networks are similar. Many papers in the main paths are published in complex network-related journals such as Physical Review E and Physica A: Statistical Mechanics and Its Applications.
Interdependent networks
We show the main paths in the interdependent networks community in Fig. 13. We identify 26 papers out of 141 publications.
There are 11 source papers in the obtained main paths (Rinaldi et al. 2001; Erdos and Rényi 1960; Callaway et al. 2000; Bollobás and Béla 2001; Cohen et al. 2001; Goh et al. 2001; Newman 2002, 2003; Dorogovtsev and Mendes 2013; Newman et al. 2006; Rosato et al. 2008). All of these papers are cited by a pioneering paper on the vulnerability of interdependent networks by Buldyrev et al. (2010). Three books and one review paper summarize the fundamental theory of complex networks (Bollobás and Béla 2001; Newman 2003; Dorogovtsev and Mendes 2013; Newman et al. 2006). The contents of the other sources are varied: random graphs (Erdos and Rényi 1960), percolation theory (Callaway et al. 2000; Cohen et al. 2001), betweenness distribution (Goh et al. 2001), disease spreading (Newman 2002), and interdependency between critical infrastructures (Rinaldi et al. 2001; Rosato et al. 2008).
Before the paper by Buldyrev et al. (2010), most studies in complex networks analyzed the vulnerability of a single network. However, as Rinaldi et al. (2001) point out, most critical infrastructures are interdependent. This means that the influence of failures in one system can propagate through dependencies, so that other systems can also be seriously damaged. To study the influence of such interdependency on the vulnerability of networks, Buldyrev et al. (2010) propose a model for cascading failures in interdependent networks. They show that interdependency makes the system more vulnerable and that coupled networks behave totally differently from isolated networks. They conclude that it is necessary to consider interdependency when assessing the vulnerability of network systems.
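A minimal sketch of the mutual-percolation process behind this result is given below (assumptions: two synthetic random networks with a one-to-one dependency between nodes of the same index; a node survives only if it lies in the giant component of its own network and its partner node also survives). This illustrates the mechanism only and is not the exact formulation of Buldyrev et al.

```python
import random
import networkx as nx

def mutual_giant_fraction(a, b, removed_frac=0.3, seed=0):
    rng = random.Random(seed)
    n = a.number_of_nodes()                  # networks a and b share node labels 0..n-1
    alive = set(a.nodes()) - set(rng.sample(list(a.nodes()), int(removed_frac * n)))
    while True:
        ga = a.subgraph(alive)
        if ga.number_of_nodes() == 0:
            return 0.0
        keep_a = set(max(nx.connected_components(ga), key=len))  # giant component of A
        gb = b.subgraph(keep_a)              # partners of failed A-nodes fail as well
        if gb.number_of_nodes() == 0:
            return 0.0
        keep_b = set(max(nx.connected_components(gb), key=len))  # giant component of B
        if keep_b == alive:                  # no further failures: cascade has stopped
            return len(alive) / n
        alive = keep_b

a = nx.erdos_renyi_graph(1000, 4.0 / 1000, seed=1)
b = nx.erdos_renyi_graph(1000, 4.0 / 1000, seed=2)
print("fraction of mutually functional nodes:", mutual_giant_fraction(a, b))
```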
After the paper by Buldyrev et al. (2010), methods to analyze the vulnerability of interdependent networks have been extended in different ways. There are four papers (Brummitt et al. 2012;Parshani et al. 2011;Bashan et al. 2013;Majdandzic et al. 2014) toward a paper by Gao et al. (2014). Brummitt et al. (2012) show that there is an optimal fraction of interconnected node pairs between two mutually dependent networks which minimizes the size of cascading failures. Parshani et al. (2011) introduce two types of edges, connectivity edges and dependency edges, and propose an analytical framework to evaluate the vulnerability of interdependent networks. Connectivity edges are the ones in each network and dependency edges are the ones between different networks. They show that high density of the dependency edges makes the network vulnerable. Because many infrastructure network systems are spatially embedded, Bashan et al. (2013) study the vulnerability of spatially embedded interdependent networks. They show that spatially embedded networks are more vulnerable compared to non-embedded networks. Majdandzic et al. (2014) analyze spontaneous recovery of network systems after disrupted events. Even though this study is not directly related to interdependent networks, Gao et al. (2014) mention that the recovery of damaged interdependent networks is a possible future research direction.
All of the above-mentioned papers are cited by the review paper by Gao et al. (2014). They review studies on the vulnerability of interdependent networks. They show that many data sets can be represented as networks of networks, including networks of different transportation networks such as flight, railway and road networks. They summarize some interesting and surprising findings on the vulnerability of interdependent networks and conclude that one possible research direction is to develop methods to improve the robustness of spatially embedded interdependent networks. After the paper by Gao et al. (2014), there is one review paper (Liu et al. 2015) and one book chapter (Danziger et al. 2016). Liu et al. (2015) review the framework for analyzing interdependent networks. They show that there are two strategies to decrease the vulnerability of interdependent networks: (1) protecting high-degree nodes, and (2) increasing the degree correlation between networks. The book chapter by Danziger et al. (2016) also reviews the vulnerability of interdependent networks, including discussions about the role of connectivity and dependency edges in mutually dependent networks. The topics of the sink papers are related to temporal networks (Cho and Gao 2016) and the vulnerability of interdependent networks based on the concept of controllability (Faramondi et al. 2018).
From the paper by Bashan et al. (2013), we observe one stream. Pu and Cui (2015) study the longest-path attack on networks and they show that homogeneous networks are vulnerable to the longest-path attacks. Da Cunha et al. (2015) propose a module-based method to attack networks. This method outperforms other ways of attacks based on degree or betweenness centrality. Shekhtman et al. (2016) review studies on failure and recovery in interdependent networks. They point out that, even though it is found that spontaneous recovery can occur in a single network (Majdandzic et al. 2014), recovery in interdependent networks is more complicated and it should be further studied. The topics of the two sink papers are related to the vulnerability of spatially embedded two-interdependent networks (Vaknin et al. 2017) and the vulnerability of two-interdependent networks which are operated by two different entities (Fan et al. 2017).
Summing up the above, after the paper by Buldyrev et al. (2010), studies on interdependent networks developed rapidly in the 2010s. Many studies have reported that interdependent networks behave completely differently from single networks and are more vulnerable than such isolated networks; they are also more difficult to protect. Moreover, in recent years, some papers have shown that spatially embedded interdependent networks are much more vulnerable than non-embedded ones. Transportation networks are often taken as examples of such spatially embedded interdependent networks. Some papers point out that defence and recovery strategies for spatially embedded interdependent networks should be pursued further (e.g., Bashan et al. 2013; Gao et al. 2014; Vaknin et al. 2017). We expect that interdisciplinary studies between complex networks and transportation engineering will contribute to constructing strategies to protect spatially embedded interdependent networks in the future. In the main paths in this community, many papers are published in complex network-related journals such as Physical Review Letters or Physical Review E.
Citation patterns among different communities
Finally, we discuss the relationships among the identified communities by visualizing a Sankey diagram. In Fig. 14, we show the Sankey diagram of intra/inter-community edges among the seven communities. The numbers shown on the left side represent the number of citing articles and those shown on the right side represent the number of cited articles. Hence, the total value of each side is equal to the number of edges in the GWCC, 4,584.
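The quantities behind such a diagram are simply counts of citation edges grouped by the communities of the citing and cited papers; a small sketch with hypothetical paper identifiers and community labels is shown below.

```python
from collections import Counter

# Hypothetical inputs: a community label per paper and directed citation edges.
community = {"p1": "transport vulnerability", "p2": "topological vulnerability",
             "p3": "cascading failures", "p4": "topological vulnerability"}
citations = [("p1", "p2"), ("p3", "p2"), ("p1", "p1"), ("p4", "p2")]

# Count intra/inter-community flows, i.e. the widths of the Sankey ribbons.
flows = Counter((community[src], community[dst]) for src, dst in citations)
for (citing, cited), count in flows.items():
    print(f"{citing} -> {cited}: {count}")
```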
The first thing to note about Fig. 14 is that we observe asymmetric citation patterns between the two fields of transportation and complex networks, when we assume that three communities (transport vulnerability, metro and maritime networks, and resilience) are transportation-related and the others (topological vulnerability, cascading failures, and interdependent networks) are complex network-related. In particular, papers in the topological vulnerability community are well cited by all the other communities. This result is intuitive, because not only are the most cited papers (e.g., Watts and Strogatz 1998; Barabási and Albert 1999; Albert et al. 2000) included in this community, but some papers in the topological vulnerability community also propose methods to evaluate the topological vulnerability of any kind of network. These methods can be directly applied to transportation networks. Indeed, papers in the metro and maritime networks community apply these methods to evaluate the topological vulnerability of subway, railway, or maritime shipping networks. In addition, as we observe in the main paths of the transport vulnerability community, research on the vulnerability of public transportation networks (PTNs) has grown rapidly after the successful integration of concepts from complex networks and transportation engineering (Cats and Jenelius 2014). Moreover, the papers in the resilience community analyze the resilience of networks with hubs based on results from complex networks (e.g., Reggiani 2013; O'Kelly 2015).
The second thing to note about Fig. 14 is that there are few citations between the transport vulnerability and cascading failures communities in either direction. Papers in the cascading failures community study how failures propagate in network systems; however, most of them focus only on blackouts in power grids. That is, universal and specific properties of cascading failures across different network systems have not been discussed deeply. In the transport vulnerability community, only a few recent papers try to integrate the methods of complex networks with the specific nature of PTNs to understand how these systems are disrupted (e.g., Cats and Jenelius 2014; Cats 2016; Malandri et al. 2018). More studies are required in this direction, and they should also be discussed in association with cascading failures in other network systems.
The third point to note about Fig. 14 is that there are few citations between the transportation-related communities and the interdependent networks community. As described in the main paths of the interdependent networks community, the vulnerability of interdependent networks has been studied intensively in the last decade. Some recent papers, taking transportation networks as typical examples, show that spatially embedded interdependent networks are extremely vulnerable, and it has also been pointed out that defence strategies for such networks should be studied more deeply (e.g., Bashan et al. 2013; Gao et al. 2014; Vaknin et al. 2017). However, these studies do not consider the specific nature of transportation networks on either the demand or the supply side. More studies are desirable to understand the influence of these specific properties on the vulnerability of spatially embedded interdependent networks.
Conclusions
In this study, we have comprehensively and objectively revealed the overall perspective of vulnerability studies in the fields of transportation and complex networks by applying citation network analysis. We collected publication records from the online publication database, the Web of Science, and constructed the citation network. We then analyzed the giant weakly connected component consisting of 705 nodes (publications) and 4,584 directed edges (citation relations). The proposed framework is applicable to any research field. Our main results and implications are summarized as follows:
1. We have identified seven major research domains: (1) transport vulnerability, (2) metro and maritime networks, (3) resilience, (4) topological vulnerability, (5) cascading failures, (6) interdependent networks, and (7) a scattered community.
2. We have identified the major research backbones in each of the detected research domains and briefly described how research has developed over time.
3. We have shown that, among various transportation modes, studies on the vulnerability of maritime networks are still limited and more studies are desirable in the future.
4. We have quantitatively revealed the asymmetric citation patterns between the fields of transportation and complex networks, which implies that mutual understanding between them is currently lacking.
5. We have shown that there are few citations between the transport vulnerability and cascading failures communities in either direction. This result suggests that the findings of the two communities have not been discussed jointly in depth, even though both study dynamical aspects of the vulnerability of network systems.
6. We have shown that there are few citations between the transportation-related communities and the interdependent networks community. The vulnerability of spatially embedded interdependent networks has been studied in complex networks in recent years, with transportation networks as typical examples. However, these studies usually do not consider the influences of the specific natures of transportation networks on both the supply and demand sides. More studies are desirable to understand the influences of these specific properties on the vulnerability of spatially embedded interdependent networks. | 13,680.8 | 2020-09-29T00:00:00.000 | [
"Geography",
"Engineering",
"Computer Science"
] |
A comprehensive strategy for exploring corrosion in iron-based artefacts through advanced Multiscale X-ray Microscopy
The best strategy to tackle complexity when analyzing corrosion in iron artefacts is to combine different analytical methods. Traditional techniques provide effective means to identify the chemistry and mineralogy of corrosion products. Nevertheless, a further step is necessary to upgrade the understanding of the corrosion evolution in three dimensions. In this regard, Multiscale X-ray Microscopy (XRM) enables multi-length scale visualization of the whole object and provides the spatial distribution of corrosion phases. Herein, we propose an integrated workflow to explore corrosion mechanisms in an iron nail from Motya (Italy) through destructive and non-destructive techniques, which permit the extraction of the maximum information with the minimum sampling. The results reveal the internal structure of the artefact and the structural discontinuities which drive the corrosion, highlighting the compositional differences between the tip and the head of the iron nail.
One of the main challenges in Cultural Heritage is the inspection of inner and hidden parts of an artefact, that may provide crucial information with minimal sample processing. Invasive techniques are often the only way to explore the original alloys of complex objects from rim to core and the corrosion process when it occurs in depth 1-4 . In particular, archaeological iron artefacts undergo corrosion phenomena, resulting in the loss of the metal core, which leads to the loss of information about the function of the object and the forging processing. The corrosion of iron consists of a stratification in two distinct zones, before reaching the inner metal core, if still present. The first region after the metal core, the so-called dense product layer (DPL), is constituted by iron oxides and oxyhydroxides and appears relatively dense. The second one, i.e., transformed medium (TM), is composed of iron oxyhydroxides and typical minerals of soil 5 .
The past five decades have witnessed the emergence of many works in which various strategies are proposed to explore the layered corrosion system of iron artefacts from different environments through cross-section preparation [5][6][7] . This approach is reliable, but archaeological artefacts are unique pieces and cannot always be sacrificed. Furthermore, when allowed, sampling must be minimal and must not compromise the functionality and aesthetics of the object. Therefore, if the metal core is hidden deep, there is a risk that it will not be detected and studied.
In the last few years, X-ray imaging, i.e., radiography and tomography, has been exploited in archaeometry to investigate depth corrosion, volumes and morphology, and to detect cracks and other structural defects non-destructively and in three dimensions (3D) [8][9][10][11][12][13][14][15][16][17][18][19][20] . X-ray Microscopy (XRM) has already been used in a huge variety of materials applications, such as lithium-ion batteries, aerospace, additive manufacturing, electronics and semiconductor engineering 21,22 , owing to its capability to investigate specimens at their microstructural level, with resolution down to the micro- and nano-meter scale, in a completely non-invasive way 23 . For such reasons, this technique is especially well suited to the inspection of the internal structure of stratified archaeological objects. The major constraint arises from the impossibility of obtaining chemical information on the sample. Because of this, "correlative microscopy" combines a variety of microscopy techniques to access a much larger range of information, such as microstructure, mineralogy and chemistry, about a specific region of interest (ROI) of the sample 17 . This would not be possible using a single instrument. Nevertheless, this requires the 2D/3D spatial registration of multiple techniques on the same ROI. Therefore, if the latter is located within a non-expendable zone of the artefact, this approach cannot be achieved without permanently damaging the sample.
In this work, we present a characterization workflow in which non-invasive and invasive techniques are combined to explore the corrosion system of an archaeological iron nail from the archaeological site of Motya (Sicily-Italy).
The iron nail (IV century BC) was unearthed in the US 1210 of Area F, the so-called "Western Fortress", during the excavations carried out by the Sapienza Archaeological Expedition in 2003 24,25 . This Fortress, bordering the West Gate and the adjacent city walls, is a rectangular building erected in the middle of the VI century and then, after Dionysios' attack in 397 BC, renovated and dedicated to cult activities. After the excavation, the nail was immediately placed in a sealed box that stabilized the humidity (max 50%) at temperatures between 10 and 25 °C until it was brought to the lab.
The guidelines for the selection of these analytical methods are dictated by the necessity to get the maximum information on the whole artefact with the minimum sampling.
We started with a characterization of the external surface of the artefact, then explored its interior both through a destructive cross-section of the tip (Fig. 1a) and through numerous 2D virtual tomographies of the entire sample, which allowed us to greatly increase the performance of the traditional analyses with minimal sampling. Optical microscopy observations and micro-Raman spectroscopy were performed to investigate the external mineralogy of the patina. Chemical composition analyses by SEM-EDS on a cross-section collected from the tip of the nail were performed to study the microstructure of the corrosion products and the nature of exogenous and slag inclusions. Moreover, the capabilities of XRM are employed to explore the corrosion propagation and to verify the presence and thickness of the metal core in the entire object, monitoring the variations in density between the different phases of the artefact. Indeed, the sub-micron X-ray microscope ZEISS Xradia Versa 610 overcomes the limits of X-ray computed tomography (CT), offering a setup that enables non-destructive, multi-length scale visualization with an imaging field of view ranging from tens of millimeters down to tens of micrometers, and a true spatial resolution of 500 nm 21 .
Nails play a vital role in most construction contexts (e.g., ships, houses, sheds, boxes, coffins, doors, roofs, carts, sledges, etc.) and represent useful markers of technological advancements. In most excavation contexts, these functional artefacts are the only remaining record of wooden structures and, therefore, often have a pivotal role in archaeological arguments. The recording and interpretation of their function and manufacturing is thus of considerable importance for the reconstruction of ancient population lifeways.
This work aims to investigate the evolution of the corrosion process in a nail, to which proper consideration must be given when studying the mechanical properties of the object (i.e., strength, stiffness, durability) and its original microstructure, which provide crucial information about the manufacturing and performance features of the nail.
Results and discussion
Mineralogical composition of the external patina. Figure 1a shows a photographic image of the nail with inventory number MF.03.53. The surface of the nail was investigated with a Leica M 125C microscope equipped with a digital camera, revealing a reddish-brown coarse crust that covered the artefact (Fig. 1b). Micro-Raman spectroscopy showed that this rust layer evolved toward lepidocrocite (γ-FeO(OH)), as revealed by the characteristic bands identified at 218, 251, 312, 346, 379, 528, 655 cm −1 (Fig. 1c).
This mineral phase precipitates in slow aerial oxidation and hydrolysis of solid or aqueous iron compounds, at pH conditions greater than 3-5, as shown in Pourbaix diagrams of iron specific to chloride-containing aqueous media 26 .
Microstructure and chemical composition in 2D real and virtual sections.
A small fragment of the tip of the nail, as indicated in Fig. 1a, was carefully sampled and embedded in epoxy resin in order to prepare a "real" cross-section for SEM-EDS analysis.
As observed on the SEM image of Fig. 2, the shank of the nail is made up of a solid and quadrangular section, which has been forged with two-by-two opposing and orthogonal hammering. Thus, the corrosion layers and cracks are oriented in the direction of the length of the element.
Due to the stress generated by multiple forging operations in several directions, the core of the artefact is the most fragile and least compact part. In Fig. 2a, spongy structures and voids are visible, induced by the incomplete compaction of the solid iron during thermo-mechanical processing.
As shown in Fig. 2b, two corrosion layers are identified in the iron nail: the dense product layer (DPL), composed of corroded iron phases, and the transformed medium (TM), where the precipitated iron corrosion phases coexist with some soil elements (Ca, Si, Al) and slag inclusions; no trace of an uncorroded metallic core is detected. The DPL shows two substructures, consisting of a light area of magnetite (EDS spot A) and a slightly darker matrix of goethite (EDS spot B) (see also Supplementary Figs. S1 and S2). The TM layer, instead, is characterized by the presence of goethite and lepidocrocite (see also Supplementary Fig. S3).
The occurrence of magnetite, goethite and lepidocrocite and the absence of chloride phases suggest a low Cl − activity in alkaline media, typical of the Lagoon-like system of Motya, as reported in 27 . The grey-level scale of the 2D tomographies is related to the absorption coefficient, i.e. density value, of each region of the artefact: light grey corresponds to the metal core remaining, medium grey corresponds to corrosion layers and dark grey represents cracks and soil inclusions.
Several features are evident in the visualization of the 2D cross-sections. The metal core does not appear until slice 90. Moving closer to the nail head, the metal core, which appears light grey, increases in volume and becomes more compact, suggesting that the tip has corroded away, whereas the head and upper part of the shank still remain.
The corresponding percentages of each layer are shown per slice in the graph (Fig. 3b), where the amount of metal core remaining is measured.
The slices between 408 and 574, which correspond to the centre of the nail shank, have an iron content of 36-37%, whereas slices 303 and 618 contain less metallic iron (16-20%). These sections correspond to reddish-brown clusters of corrosion products.
The lowest percentage of metal core is recorded at the tip; conversely, the highest iron content (39%) is observed in slice 780, just below the nail head, which shows a more compact and regular metal section, suggesting isotropic pressure on all four sides. The head of the nail, on the other hand, has a metallic iron content of about 28%. The main trend observed in Fig. 3b shows a significant uncorroded metal core in the nail head, while the tip is deeply corroded; however, some fluctuations are observed in the shank (e.g. slice 618), probably due to locally stressed areas. It has frequently been observed that the head of a nail is still in good condition while its body is heavily transformed into iron-rich rust 28 . This phenomenon suggests that corrosion was the result of moisture from the environment in contact with soils. It is possible that the nail was mounted vertically in the atmosphere, so that the bottom of the nail corroded faster than the top.
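A per-slice profile such as the one in Fig. 3b can be obtained from the reconstructed grayscale stack by thresholding each slice; the sketch below uses numpy with synthetic data and hypothetical grey-level thresholds (the actual thresholds depend on the reconstruction and are not reported here).

```python
import numpy as np

# Stand-in for the reconstructed volume: (n_slices, height, width) grey levels.
rng = np.random.default_rng(0)
volume = rng.integers(0, 256, size=(800, 64, 64))

SAMPLE_MIN = 40    # hypothetical threshold: sample (vs. air and cracks)
METAL_MIN = 180    # hypothetical threshold: remaining metal (light grey)

metal_percent = []
for slc in volume:
    sample_px = np.count_nonzero(slc > SAMPLE_MIN)
    metal_px = np.count_nonzero(slc > METAL_MIN)
    metal_percent.append(100.0 * metal_px / max(sample_px, 1))

print("metal % in slice 780:", round(metal_percent[780], 1))
```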
Description of the 3D morphology. The morphology of the entire iron nail obtained via low-resolution X-ray scan is illustrated in Fig. 3c and Supplementary Movie 1.
To investigate how corrosion present in the sample varies spatially, a segmentation procedure was performed. According to the X-ray absorption coefficients of the main elements present in the sample and in relationship with the chemical composition previously analysed, we could differentiate the metal (blue), corrosion (red) and soil (yellow) volumes (Fig. 3c).
As shown by the 3D model, the layer of soil perfectly follows the profile of the nail, distributing itself evenly over it and covering it like a thin "sock"; the corrosion products give the actual shape and dimensions to the object, resulting from the volume expansion during the transformation of iron metal into iron oxyhydroxides. The small remaining iron nucleus, shown in blue, highlights the loss caused by corrosion.
Uniform corrosion progressing from the outside inwards towards the core of the nail was observed. Overall, the metal core occupies only 24% of the total volume of the object, while about 62% has been transformed into rust and 19% is soil.
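The volume fractions quoted above follow from the same kind of three-class grey-level segmentation applied to the whole stack; a compact sketch (numpy, synthetic data, hypothetical thresholds) is shown below.

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.integers(0, 256, size=(200, 64, 64))   # stand-in for the CT volume

soil = (volume > 30) & (volume <= 90)        # dark grey: soil and low-density phases
corrosion = (volume > 90) & (volume <= 180)  # medium grey: corrosion products
metal = volume > 180                         # light grey: remaining metal core

total = soil.sum() + corrosion.sum() + metal.sum()
for name, mask in (("metal", metal), ("corrosion", corrosion), ("soil", soil)):
    print(f"{name}: {100.0 * mask.sum() / total:.1f} % of the segmented volume")
```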
Sagittal cut of the 3D reconstruction, displaying the internal region of one half of the nail, shows that the head and upper part of the shank were very compact, whereas along the tip the percentage volume of the metal decreases (Fig. 3d).
Microscale visualization of iron corrosion products on XRM experiments.
Although a low-resolution, large-field XRM scan is suitable for obtaining a general overview of the nail and for understanding the phase structure and distribution, it is also necessary to run higher-resolution XRM scans of a selected volume of the nail (Fig. 4a) in order to study the stratigraphy of the corrosion products and the structural breaks in more detail (Supplementary Movie 2). Several structural discontinuities (green) have been observed in the inner metal core (blue) (Fig. 4b,c). High-resolution scans revealed that most of the structural breaks are parallel to the metal/corrosion products interface, suggesting an origin from lateral hammering during the forging phase and not during the nailing.
The 2D slices in Fig. 4d,e show a further differentiation between the iron corrosion phases based on their greyscale value and false colors. These layers and sub-layers are designated as goethite (pink) and magnetite (turquoise), according to the literature 29 . Compared with the low-resolution scans, some veins composed of iron oxide (maghemite and/or magnetite), not connected to the metal core, can now be observed in the dense product layer. First, at the metal/DPL interface, iron oxidizes and generates magnetite. With long-term burial, the corrosion front progresses and forms iron oxyhydroxides (goethite), in which a magnetite strip is present as a presumed trace of the initial layer. Indeed, the main phase observed at the interface between the metal and the rust layer is goethite. The formation of the TM, in red, could be explained by the dissolution of iron and reprecipitation of iron corrosion/oxidation products over the surface of the object, perhaps over the original surface 27,30-33 .
Goethite and lepidocrocite, detected by the micro-Raman technique in the TM layer, are difficult to discriminate on the basis of their attenuation coefficients, as the two polymorphs have the same density. For this reason, we generically indicate the TM layer as a single corrosion phase designated with red color. Different grain orientations of the iron in the nail also played a role in the rate of corrosion. Furthermore, intergranular corrosion was observed in the metal core via X-ray tomography.
Advantages and drawbacks of XRM over conventional techniques. In this research, the preliminary investigation of the corrosion system of the iron nail was performed by traditional techniques. Micro-Raman spectroscopy allowed us to verify active corrosion on the external patina, as confirmed by the presence of red-brown clusters of lepidocrocite. SEM imaging showed the lack of a metal core in the tip of the nail and the occurrence of corroded iron phases in the DPL and TM. Slag inclusions and quartz particles are embedded in the goethite matrix. In order to evaluate whether the section studied was representative of the entire sample, we performed a low-resolution scan of the nail with XRM. The challenge is to extract as much useful information as possible from the 3D visualization and image segmentation about the spatial distribution of phases in an iron-based archaeological nail. The main advantage is the ability to create virtual sections without cutting the sample. In particular, the XRM workflow proved to be an appropriate technique for: (1) exploring the stratigraphy of the sample without any invasive sample preparation, as required by traditional techniques, (2) investigating the internal structure of the artefact, such as structural discontinuities, to shed light on the manufacturing technique, and (3) estimating the ratio of phases in each virtual cross-section through the whole object.
In this way, we have demonstrated the existence of metal core in the shank and the head of the iron nail. However, the low-resolution scan represents a limiting factor for the identification of the corrosion sub-layers.
Thus, to tackle the complete characterization of the virtual sections which include the uncorroded metal, we also worked with a high-resolution scan, covering all length scales from micrometric resolution to centimeters. By combining SEM-EDS data with XRM images of the entire nail, the evolution of internal nail corrosion can be tracked.
The information provided by tomography about the state of conservation of an archaeological artefact has two interesting applications. From a museum and conservation point of view, periodic inspection of the sample could help to study joins, hidden cracks and repairs or earlier restoration attempts in a non-destructive manner. From a diagnostic point of view, this analysis could represent a preliminary check of the whole object to guide the sampling of the section in order to investigate a specific area of interest, when possible. An important limitation of the XRM scan is the impossibility of obtaining precise chemical information about the sample components.
Methods
Micro-Raman spectroscopy: was used on polished cross-sections to determine the mineralogical composition of phases. Micro-Raman analyses were performed at room temperature using a Renishaw RM2000 equipped with a Peltier-cooled charge-coupled device camera in conjunction with a Leica optical microscope with 10×, 20×, 50× and 100× objectives. Measurements were performed using the 50× objective (laser spot diameter of about 1 μm) and the 785 nm line of a laser diode. Two edge filters blocked the Rayleigh-scattered light below 100 cm −1 . For this reason, the study of ultra-low wavenumber Raman spectra in the region < 100 cm −1 is overlooked. In order to avoid damaging the patina and to prevent the fluorescence from covering the Raman signal, the laser power was lowered. No baseline was subtracted from the recorded spectra. The spectra obtained were compared with GRAMS spectroscopy software and databases available in the literature.
SEM-EDS investigations: were carried out with a FEI-Quanta 400 instrument, equipped with X-ray energy-dispersive spectroscopy (Department of Earth Sciences, Sapienza University, Rome, Italy). SEM imaging was collected both in secondary electron (SE) and back-scattered electron (BSE) modes. Energy-dispersive X-ray spectroscopy (EDS) spectra and X-ray maps were also used to show the distribution of the elements through the sample. The sample was coated with a thin layer of carbon in order to avoid charging effects. EDS spectra were collected at 25 kV and at a pressure in the analysis chamber of 10 −4 Pa.
X-ray microscopy: was performed using a laboratory X-ray microscope (Zeiss, Xradia Versa 610) at the Sapienza Nanoscience & Nanotechnology Laboratories (SNN-Lab) of the Research Center on Nanotechnology Applied to Engineering (CNIS) of Sapienza University. When scanned, the nail was fixed on the sample stage between the X-ray source and detector by a clamp sample holder. The sample stage can rotate around a fixed axis to ensure a full 360° scan. For the low-resolution scan the sample-to-detector distance was set to 26 mm, and the source-to-sample distance was 221 mm. The voltage and current of the X-ray beam were 150 kV and 152 μA, respectively, and the power was set to 23 W. Scans were performed from 0° to 360° using a 0.4× objective and the exposure time for each projection was set to 2 s. A total scanning time of circa 2 h was required. Acquired images were obtained with a pixel size of 61.31 μm and binned (2 × 2 × 2). For the high-resolution scan the sample-to-detector distance was set to 95.6 mm, and the source-to-sample distance was set to 25.6 mm. The voltage and current of the X-ray beam were 150 kV and 152 μA, respectively, and the power was set to 23 W. Scans were performed from 0° to 360° using a 0.4× objective and the exposure time during each projection was set to 1 s. A total scanning time of circa 2.5 h was required. The stacks of grayscale images were imported into the Reconstructor Scout-and-Scan software to manually find the center shift and the beam hardening constant. The image processing step was carried out using the Dragonfly Pro software.
3D models reconstruction: 3D models were reconstructed using the Reconstructor module of the Zeiss Scout&Scan Control System software (Version 16.1.13038.43550), by which the reconstruction parameters were identified. Regarding the low-resolution scan, the Center Shift was manually set at − 0.800, the Beam Hardening Constant was manually set at 0.09, and 1601 projections were selected to reconstruct the 3D model. Concerning the high-resolution scan, the Center Shift was manually set at 1.200, the Beam Hardening Constant was manually set at 0.07, and 1601 projections were used to reconstruct the 3D model. In both cases, at the end of the process a TIFF stack was exported and subsequently imported into the Dragonfly Pro software (Version 2020.1 Build 797, Object Research System) for post-processing. Dragonfly Pro was used to visualize the iron nail excluding the signal coming from the air around the sample through the window leveling tab. | 4,855.2 | 2022-04-12T00:00:00.000 | [
"Materials Science"
] |
Hybrid evolutionary algorithm based fuzzy logic controller for automatic generation control of power systems with governor dead band non-linearity
: A new intelligent Automatic Generation Control (AGC) scheme based on Evolutionary Algorithms (EAs) and Fuzzy Logic concept is developed for a multi-area power system. EAs i
PUBLIC INTEREST STATEMENT
The AGC problem has been one of the major concerns for power system engineers and is becoming more significant these days owing to the increasing size and complexity of interconnected power systems. For efficient and successful operation of an interconnected power system, the main requirement is the retention of an electrical power system characterized by nominal frequency, voltage profile and load flow configuration. Designing an efficient AGC strategy is necessary to ensure the fulfilment of these requirements. An efficient AGC scheme based on a GASA-tuned fuzzy approach is proposed in this manuscript. This hybrid GASATF technique exhibits its efficacy to a great extent in a power system that comprises different types of turbines and is subject to load perturbations in one of its areas.
Introduction
Large interconnected power systems have grown in order to minimize the occurrence of blackouts and to provide increasing power interchange among distinct systems within huge interconnected electric networks. Power system operators maintain the load-interchange-generation balance between control areas and adjust the system frequency as close as possible to nominal values. Automatic Generation Control (AGC) is necessary to keep the system frequency and the inter-area tie-line power as close as possible to predefined nominal values. A reliable and committed power utility should cope with load variations and disturbances effectively; it should deliver high-quality power while keeping the frequency within acceptable limits. The problem of AGC has been extensively analysed during the last few decades. Elgerd and Fosha in 1970 were the first to propose the design of optimal AGC regulators using modern control theory for interconnected power systems (Elgerd & Fosha, 1970). The proposed scheme provided better control performance over a wide range of operating conditions than conventionally designed control schemes. Since classical gain scheduling methods may be unsuitable in some operating conditions due to the complexity of power systems, such as non-linear load characteristics and variable operating points, modern approaches were preferred. Following the work of Elgerd and Fosha, many AGC schemes based on modern control theory have been suggested in the literature (Ibraheem & Kothari, 2005; Shayeghi, Shayanfar, & Jalili, 2009; Singh & Ibraheem, 2013). Much later, these were followed by AGC schemes based on intelligent control concepts.
In Chown and Hartman (1998), a Fuzzy Logic Controller (FLC) is described as part of the AGC system in Eskom's National Control Centre, based on the Area Control Error (ACE) as the control signal for the plant. Moreover, Yousef proposed adaptive fuzzy logic load frequency control of a multi-area power system in Yousef (2015) and Yousef et al. (2014). Anand and Jeyakumar (2009) incorporated governor dead band, generation rate constraint and boiler dynamics non-linearities in the system models. A Fuzzy Logic Algorithm (FLA) has been employed to design FLCs for such systems to overcome the drawbacks of the conventional Proportional-Integral controller. It circumvented the controller gain problem to some extent but did not give accurate (Albertos & Sala, 1998) and precise optimal gains for the FLC in AGC, owing to the need for exact knowledge of the system operating conditions. Therefore, Evolutionary Algorithms (EAs) were introduced to tackle the optimal controller gain problem (Boroujeni, 2012; Devi & Avtar, 2014; Ghoshal, 2004; Yousef, AL-Kharusi, & Mohammed, 2014; Pratyusha & Sekhar, 2014; Saini & Jain, 2014; Singhal & Bano, 2015). Authors have discussed tuning classical controller gains through the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) technique for AGC of an interconnected system. Integral of Time multiplied Absolute Error, the minimum damping ratio of dominant eigenvalues, and settling times of frequency and tie-line power deviations were considered as multiple objectives, and NSGA-II was employed to generate the Pareto optimal set. Further, a fuzzy-based membership value assignment method was employed to choose the best compromise solution from the Pareto solution set. This method was also investigated with a non-linear power system model (Yegireddi & Panda, 2013). EAs were found capable of giving globally optimal gains for FLCs, handling the sensitive control issue of AGC. FLCs are characterized by a set of parameters, which are optimized using EAs, i.e., GA and SA, to improve their performance (Boroujeni, 2012; Ghoshal, 2004; Saini & Jain, 2014). EAs have been efficiently applied to AGC of power systems and have shown improved performance without requiring systematic and precise data about the power system model.
In this article, the design of an optimal AGC scheme for a three-area interconnected power system is investigated with three different controllers, i.e., classical integral control, an FLA-based controller and a GASA-tuned FLC. The hybrid GASATF technique combines the GA and SA approaches to determine the output fitness function from the fuzzy Mamdani algorithm; this is used as the input for the GA-SA technique to design the optimal gains for the AGC scheme. The designed AGC scheme yielded improved system dynamic performance under various operating conditions of a three-area interconnected hydro-thermal power system, with and without considering Governor Dead Band (GDB). In this article, studies have been carried out by considering a 0.01 p.u.MW perturbation in one of the power system areas. The simulation study was carried out using the MATLAB/SIMULINK toolbox, version 2014a. The investigations include the non-linear effect of GDB. Power system dynamic performance has been studied by investigating the response plots of the disturbed areas (∆F1, ∆F2, ∆F3, ∆Ptie12, ∆Ptie23 and ∆Ptie31). The response plots obtained for ∆Xg, ∆F3, ∆Ptie31, ACE3 and U3 with the inclusion of GDBs were also investigated. The nominal system parameters are presented in Appendix A.
Power system model for investigation
The system under study is a three-area interconnected power system consisting of power plants with reheat thermal, non-reheat thermal and hydro turbines, interconnected via EHV AC tie-lines. The single-line diagram of the multi-area interconnected power system model is presented in Figure 1. The optimal AGC controllers are designed considering (i) the linear model of the system and (ii) the non-linear model of the system with GDBs. The transfer function models, with and without GDB non-linearity, are shown in Figure 2. These models exhibit both the linear and the non-linear characteristics of the power system behaviour.
Effect of governor dead band non-linearity
Most real-time AGC power systems include non-linearity in several ways. The non-linearity may arise from physical effects, Generation Rate Constraints (GRCs), GDB, load characteristics, etc. A realistic simulation cannot be achieved without incorporating non-linearity, and a demonstration of a real-time AGC model is incomplete without it. Therefore, one of the most prominent non-linearities, the GDB, is considered in the proposed power system model. Movement of a thermal or hydro governor through its dead band is not permitted beyond specified tolerable limits for the flow of steam/water. The GDB effect can indeed be significant in AGC studies. In the model, a GDB effect is added to all control areas to simulate non-linearity (Gozde & Taplamacioglu, 2011; Yegireddi & Panda, 2013). The describing function method is utilized to represent the GDB in the control areas. The GDB non-linearity tends to create a continuous sinusoidal oscillation with a natural period close to T0 = 2 s. The GDB is linearized using the deviation and the rate of deviation of the valve movement. With these considerations, GDB is taken into account by adding limiters to the turbine input valve. In this work, a backlash of approximately 0.5% is selected. The GDB transfer function in the power system model is presented as:
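The transfer function itself did not survive the source layout. Purely as an illustration of the dead-band (backlash) behaviour that such a block linearizes, the following is a minimal Python sketch; the 0.5% width and the roughly 2 s test oscillation come from the description above, but the time-domain play-operator model shown here is an assumption and not the authors' describing-function form.

```python
import numpy as np

def backlash(u, width=0.005, y0=0.0):
    """Time-domain backlash (dead band) model of total width `width` (p.u.).

    The output follows the input only after the input has moved more than
    half the backlash width from the point where it last drove the output.
    """
    y = np.empty_like(u)
    y_prev = y0
    half = width / 2.0
    for k, uk in enumerate(u):
        if uk - y_prev > half:        # input pushing the upper face of the gap
            y_prev = uk - half
        elif y_prev - uk > half:      # input pushing the lower face of the gap
            y_prev = uk + half
        # otherwise the input moves inside the gap and the output holds
        y[k] = y_prev
    return y

# Example: a small valve command oscillating with a period of about 2 s
t = np.linspace(0.0, 4.0, 401)
u = 0.02 * np.sin(2 * np.pi * t / 2.0)
y = backlash(u, width=0.005)          # 0.5% of 1 p.u. travel, illustrative only
```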
Hybrid EA-based fuzzy logic controller structure
Basically, ACE is the measure of the short-term error between generation and electricity demand. Power system performance is regarded as good if a control area closely matches generation with load demand. AGC of the power system means driving the ACE to zero, so that the system frequency and tie-line power flows are maintained at their scheduled values (Elgerd & Fosha, 1970): ACE_i = ∆P_tie,i + B_i ∆F_i, where i = 1, 2, 3.
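As an illustration of the quantity being driven to zero, a minimal sketch follows, assuming the standard ACE definition given above; the bias value B and the deviations used in the example are purely illustrative and are not taken from the paper's Appendix A.

```python
def area_control_error(dP_tie, dF, B):
    """ACE_i = dP_tie_i + B_i * dF_i: tie-line power error plus the frequency
    deviation weighted by the frequency bias B_i (standard AGC definition)."""
    return dP_tie + B * dF

# Illustrative values only (not the paper's parameters)
ace3 = area_control_error(dP_tie=-0.004, dF=-0.02, B=0.425)
```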
The input is ACE_i together with its rate of change, used for the integral control action. The ACE_i is tuned using the Integral Square Error (ISE) criterion, which has the form ISE = ∫ e²(t) dt (1). For the present investigation, this error is taken as ACE_i, and the ISE is evaluated as the fitness function J = ∫ (ACE_i)² dt. Over the last few decades, FLCs have been developed for the analysis and control of several types of systems (Anand & Jeyakumar, 2009; Chown & Hartman, 1998; Yousef, 2015; Yousef et al., 2014). But FLCs have not been exploited to their full capability to counter the problems associated with systems having non-linear characteristics, and it has been demonstrated that their performance is far from stringent and time optimal (Albertos & Sala, 1998). Subsequently, this problem has been successfully addressed using EAs for the AGC problem of power systems (Yegireddi & Panda, 2013). Therefore, researchers have proposed combining FLA and EAs to design controllers that exploit the salient features of both algorithms. The hybrid structure of these techniques has exhibited adaptable qualities in non-linear systems.
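A minimal sketch of the ISE fitness evaluation is given below, assuming the simulation returns sampled ACE traces for the three areas; the decay profiles and the 10 ms sampling step are illustrative stand-ins for the SIMULINK outputs.

```python
import numpy as np

def ise_fitness(ace_traces, dt):
    """Integral Square Error fitness, J = sum over areas of the integral of
    ACE_i(t)^2 dt, approximated with a simple rectangle rule on sampled traces."""
    return float(sum(np.sum(np.asarray(a) ** 2) * dt for a in ace_traces))

# Example with three illustrative, exponentially decaying ACE traces
t = np.linspace(0.0, 5.0, 501)
traces = [amp * np.exp(-t) for amp in (0.01, 0.005, 0.02)]
J = ise_fitness(traces, dt=0.01)
```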
The FLC consists of the FLA approach with integral control action in the closed-loop control system. The FLC takes ACE_i of the system as its input signal, and its output signal (U_i) is the input signal for the GASATF. The FLC feedback gains (K_i) are optimized with the help of EAs, i.e. the combination of the GA and SA techniques. Finally, the output signal from the GASATF, the new U_i, is used in the AGC of the power system.
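To make the controller structure concrete, here is a hedged, single-input Mamdani-style sketch with triangular membership functions and centroid defuzzification, written in plain Python. The three labels, breakpoints and rule consequents are illustrative assumptions and do not reproduce the tuned membership functions of Figures 6-9.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def flc_output(ace, lim=1.0):
    """One-input Mamdani controller on the scaled range [-lim, lim].

    Three labels (N, Z, P) on both input and output; illustrative rules
    N -> P, Z -> Z, P -> N drive the error back towards zero.
    Centroid defuzzification over a discretised output universe.
    """
    x = np.clip(ace, -lim, lim)
    # firing strengths of the three input sets
    w = {"N": tri(x, -2 * lim, -lim, 0.0),
         "Z": tri(x, -lim, 0.0, lim),
         "P": tri(x, 0.0, lim, 2 * lim)}
    u = np.linspace(-lim, lim, 201)                 # output universe
    out_sets = {"N": tri(u, -2 * lim, -lim, 0.0),
                "Z": tri(u, -lim, 0.0, lim),
                "P": tri(u, 0.0, lim, 2 * lim)}
    # Mamdani implication (min) and aggregation (max): N->P, Z->Z, P->N
    agg = np.maximum.reduce([np.minimum(w["N"], out_sets["P"]),
                             np.minimum(w["Z"], out_sets["Z"]),
                             np.minimum(w["P"], out_sets["N"])])
    return float(np.sum(agg * u) / (np.sum(agg) + 1e-12))   # centroid

u3 = flc_output(0.3)   # a positive ACE produces a negative corrective output
```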
These gains are globally optimal for the system to be controlled and give the exact value of the fitness function for developing the AGC scheme. They can also handle the AGC problem when the system is operating with nominal system parameters. Gain scheduling of the power system is a very important and effective part of its control (Saini & Jain, 2014). In view of this, the GASA-tuned FLC is proposed in the present article; the scheme is shown in Figure 3. The presented GASA strategy enables evaluation of the globally optimal value of the fuzzified K_i, followed by the modification of the fuzzified fitness function of the FLC.
The GASATF heuristic incorporates SA in the selection process of Evolutionary Programming (Boroujeni, 2012). The solution string comprises all feedback gains and is encoded as a string of real numbers. The population structure of GASATF is shown in Figure 4. The GASATF heuristic employs blend crossover and a mutation operator suitable for real-number representation, giving it better search capability. The main objective is to minimize ACE_i augmented with penalty terms corresponding to transient response specifications of the system frequency and tie-line power flows; these two deviations yield the error for the AGC system. The heuristic is quite general, and various aspects like non-linear and discontinuous functions and constraints are easily incorporated as required. The fuzzy fitness function is taken as the summation of the absolute values of the three area control errors (Table 1). In the FLCs, an input and an output are selected in the fuzzy Mamdani inference system, which is described in Figure 5. The input and output membership functions of the FLA for ACE_i in both test cases are shown in Figures 6 and 7. Triangular membership function shapes for the error derivative and for the gains of the integral controller are chosen to be identical for the FLC; however, the horizontal axis limits are taken at different values for evaluating the controller output. The characteristics of the FLC present a conditional relationship between ACE_i and U_i as shown in Figures 8 and 9.
In these graphs, the x-axis shows the controller's input and the y-axis shows the output of the FLC. The rule base is defined over the scaling range [−1, 1] without GDB, and [−1.3, 1.3] with GDB of the power system.
Pseudo code
The proposed technique can be understood from the following pseudo code, given below as implementation steps (a hedged Python sketch of the complete loop follows the steps). Step 1: Create a random population.
Step 2: At the beginning, set the current temperature (T), the initial temperature (T1) and the final temperature (TMAXIT), and the numbers of parent strings (N) and children strings (M) as per the FLC structure, with scaling factor limits (fitness function limits) (Singh & Das, 2008).
Step 3: Index each parent by i; then create m(i) children using the crossover process.
Step 4: Apply the mutation operator with probability Pm.
Step 5: Obtain the best child for every parent (the initial competition level in the process).
Step 6: Select the best child as a parent for the future generation.
For every family, accept the best child as the parent for the future generation if the objective quantity of the best child (Y1) is less than the objective quantity of its parent (Y2), i.e. Y1 < Y2, or if exp((Y2 − Y1)/T) ≥ a random number uniformly distributed between 0 and 1.
Step 9: Repeat step 10 for every child; then go to step 11.
Step 10: Increase the count by 1 if (Y1 < Y2) or if an acceptance test based on the current temperature Tc is satisfied, in which Tc is the current temperature at that time and Y_LOWEST is the lowest objective quantity obtained so far in the calculations.
Step 11: Acceptance number of the family is equal to count (A).
Step 12: Add up the acceptance numbers of all the families (S).
Step 13: For every family i, evaluate the number of children to be generated in the next generation according to the formula m(i) = exp(TC × A)/S, in which TC is the total number of children generated by all the families, A is the acceptance number of family i, and S is the sum of the acceptance numbers.
Step 14: Decrease the temperature after each iteration.
Step 15: Repeat steps 3-14 until a defined number of iterations has been reached or the desired result has been obtained.
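The following Python sketch assembles the steps above into a runnable GA-SA loop for real-coded feedback gains, as promised before the list. The blend crossover, the acceptance rule exp((Y2 − Y1)/T) and the acceptance-proportional allocation of children follow the pseudo code; the toy objective, the geometric cooling schedule and the proportional (rather than exponential) form of m(i) are assumptions made where the printed steps were ambiguous.

```python
import random
import math

def gasa(objective, n_genes, n_parents=10, n_children=4, t_start=1.0,
         t_end=0.01, max_iter=50, lo=0.0, hi=2.0, p_mut=0.1, alpha=0.9):
    """Hedged sketch of a GA+SA (GASATF-style) search for FLC feedback gains.

    objective(gains) should return the fuzzy/ISE fitness to be minimised.
    """
    parents = [[random.uniform(lo, hi) for _ in range(n_genes)]
               for _ in range(n_parents)]
    m = [n_children] * n_parents                       # children per family
    t = t_start
    best = min(parents, key=objective)
    for _ in range(max_iter):
        accepts = []
        for i, p in enumerate(parents):
            children = []
            for _ in range(max(1, m[i])):
                mate = random.choice(parents)
                # blend (BLX-style) crossover on real-valued genes
                child = [random.uniform(min(a, b), max(a, b))
                         for a, b in zip(p, mate)]
                # mutation with probability p_mut per gene, then clamp to limits
                child = [g + random.gauss(0.0, 0.05) if random.random() < p_mut
                         else g for g in child]
                child = [min(max(g, lo), hi) for g in child]
                children.append(child)
            best_child = min(children, key=objective)
            y1, y2 = objective(best_child), objective(p)
            # SA acceptance: keep the child if better, or probabilistically
            if y1 < y2 or math.exp((y2 - y1) / t) >= random.random():
                parents[i] = best_child
                accepts.append(1)
            else:
                accepts.append(0)
            if objective(parents[i]) < objective(best):
                best = parents[i]
        # redistribute children in proportion to each family's acceptances
        total = sum(accepts) or 1
        tc = n_parents * n_children
        m = [max(1, round(tc * a / total)) for a in accepts]
        t = max(t_end, alpha * t)                      # cool the temperature
    return best

# Example: recover gains that minimise a toy quadratic "ISE" surrogate
gains = gasa(lambda g: sum((x - 1.0) ** 2 for x in g), n_genes=3)
```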
The GASATF is presented for straightforward calculation of the fuzzy fitness function. After tuning the fuzzy fitness function, the GASATF technique generates globally optimal feedback gains for the AGC. The criterion for adopting the best generation, created by the optimal GASATF approach, is reported in Table 2.
Results and discussion
The power system model under investigation is simulated on the MATLAB/SIMULINK platform to carry out investigations with the GASA-tuned FLC for the AGC scheme. The dynamic responses of the power system models are obtained using GASATF-based AGC schemes by creating a 1% load perturbation in area-3. The optimal feedback gains for all investigated controllers are presented in Table 3. In Table 4, the frequency dynamics of area-3 are summarised numerically. These dynamic responses are plotted in Figures 10-15. The proposed controller resulted in response plots with shorter settling time and smaller peak overshoot and undershoot compared to those obtained with the IC- and FLC-based AGC schemes. Further investigation of these dynamic responses reveals that the dynamic performance with the GASATF controller is better than that obtained with the other controllers, and an appreciable improvement in the dynamic performance of the power system is visible when using the proposed controller rather than IC or FLC.
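For readers who wish to reproduce the comparison metrics, a small sketch follows for extracting peak overshoot, peak undershoot and settling time from a sampled deviation trace; the 2% settling band and the synthetic ΔF3 trace are assumptions, not the paper's data.

```python
import numpy as np

def step_metrics(t, y, band=0.02):
    """Peak overshoot/undershoot and settling time of a deviation trace y(t).

    Settling time: last instant at which |y| lies outside a band defined as a
    fraction `band` of the largest excursion (an assumed, common convention).
    """
    y = np.asarray(y, dtype=float)
    tol = band * np.max(np.abs(y))
    idx = np.nonzero(np.abs(y) > tol)[0]
    t_settle = t[idx[-1]] if idx.size else t[0]
    return {"peak_overshoot": float(np.max(y)),
            "peak_undershoot": float(np.min(y)),
            "settling_time": float(t_settle)}

# Example on a synthetic damped oscillation standing in for dF3(t)
t = np.linspace(0, 30, 3001)
dF3 = -0.02 * np.exp(-0.3 * t) * np.cos(2 * np.pi * 0.2 * t)
print(step_metrics(t, dF3))
```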
The dynamic performance of the proposed AGC controller is also obtained by considering the effect of GDB non-linearity. Figure 16 presents the effect of the GDB on the dynamic response of the speed governor systems. The dynamic response of the governor valve reveals that the opening of the valve is faster in the non-reheat turbine than in the reheat and hydro turbines; Figure 16 shows that the governor valve movement in the hydro turbine is slow. The dynamic responses of the speed governor of the non-reheat thermal turbine are found to be more oscillatory than those of the other turbines. The speed governor responses are found to be uniform and proportional to the area inertia, and the control output U3 remains unchanged from its value just following the disturbance until the AGC scheme becomes effective after 1-4 s. Investigation of the dynamic responses in Figures 17-20 reveals that the responses are more oscillatory due to the presence of GDB. These dynamic responses, taken under different scenarios, show the realistic behaviour of the GDB.
Further inspection of these system responses reveals that AGC controllers based on integral control action offer very sluggish response trends, associated with a large number of oscillatory modes, a large settling time and a considerable amount of steady state error. In contrast, the dynamic response achieved with the GASATF-based AGC controller settles more quickly than that obtained with the other controllers. The responses not only settle quickly but the settling trend is also very smooth and associated with only a few oscillatory modes.
Conclusion
In a nutshell, the present work presents a new method for calculating the optimal feedback gains of the AGC controller using the hybrid GASA-tuned fuzzy logic design technique. From the exhibited results, the dynamic performance of the proposed hybrid AGC controller is found to be superior to that of the classical and fuzzy AGC controllers. It is also observed that the GDB non-linearity produces oscillations in the dynamic responses of the investigated power system model until the AGC becomes effective after 1-4 s. The GASATF controller gives improved performance in multi-area interconnected power systems including different sorts of turbines.
The most important feature of the design of the optimal AGC controller is that the proposed algorithm considers transient response characteristics as hard constraints, which are strictly satisfied in the solutions. This is in contrast to classical technique-based regulator designs, where these constraints are treated as soft constraints. The proposed technique has been tested on a power system model under a disturbance condition and found good enough to give the desired results. It proves to be a good alternative for optimal controller design where direct treatment of response specifications is required. | 4,148.2 | 2016-04-06T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Has the introduction of two subspecies generated dispersal barriers among invasive possums in New Zealand?
The introduction of species into new environments provides the opportunity for the evolution of new forms through admixture and novel selection pressures. The common brushtail possum, represented by Trichosurus vulpecula vulpecula from the Australian mainland and T.v.fuliginosus from Tasmania, was introduced multiple times to New Zealand from Australia and has become one of New Zealand’s most significant pests. Although derived from two subspecies, possums in New Zealand are generally considered to be a single entity. In a previous analysis, we showed that possums in the Hawkes Bay region of New Zealand appeared to consist of at least two overlapping populations. Here, we extend that analysis using a genotype-by-sequencing approach to examine the origins and population structure of those possums and compare their genetic diversity to animals sampled from Australia. We identify two populations of each subspecies in Hawkes Bay and provide clear evidence of a contact zone between them in which a hybrid form is evident. Our analysis of private alleles shows higher rates of dispersal into the contact zone than away from it, suggesting that the contact zone functions as a sink (and hence as a barrier) between the two subspecies. Given the widespread and overlapping distribution of the two subspecies across both large islands in New Zealand, it is possible that many such contact zones exist. These results suggest an opportunity for a more targeted approach to controlling this pest by recognising sub-specific differences and identifying the contact zones that may form between them.
Introduction
Introductions of species into novel environments can provide the opportunity for the evolution of new forms as they naturalise (Szűcs et al. 2017). In some cases, alien species have been introduced to the same location on multiple occasions from different sources, creating genetic combinations of individuals that have not previously existed (Kolbe et al. 2004). Such admixture can enhance fitness through hybrid vigour or increased genetic diversity (Barker et al. 2019; Dlugosch, Parker 2008; Facon et al. 2008; Hirsch et al. 2017; Rius, Darling 2014) or reduce fitness through outbreeding depression (Rhymer, Simberloff 1996). It is unclear how frequently genetic factors are intrinsic to the success or otherwise of invasive species (Barker et al. 2019; Rius, Darling 2014), although propagule pressure (something that will affect the levels of genetic diversity within an invader) is one of the few acknowledged general characteristics of successful invaders (Bock et al. 2015). Identifying the role that such factors can play in successful establishment may offer novel perspectives or pathways for the identification of high-risk invaders or provide opportunities for enhanced control of invasive pests (Szűcs et al. 2017).
The introduction of common brushtail possums (Trichosurus vulpecula; hereafter called possums) to New Zealand from Australia is one well documented instance where different forms of the same species have been brought together into a new environment. Introduced to establish a fur trade (Montague, 2000), possums were shipped to New Zealand from Tasmania and several other undocumented locations on mainland Australia, most likely in New South Wales and Victoria, between 1858 and the 1920s. All introductions ceased after that time but the transfer of possums within New Zealand continued into the 1940s. Possums are endemic to Australia with five currently recognised subspecies distributed across the continent and on several offshore islands (How, Hillcox 2000;Kerle et al. 1991). New Zealand has no naturally occurring marsupials and possums are now one of the most serious threats to biodiversity and agriculture in New Zealand damaging vegetation (Cowan 1991;Nugent et al. 2001); preying on birds and their eggs (Brown et al. 1993); and spreading bovine tuberculosis (Innes, Barker 1999). New Zealand management agencies have conducted extensive possum control for decades (Donnell 1995;Morgan 1990;Nugent, Cousins 2014;Nugent, Morriss 2013) and now aim, with the support of conservation groups, to eradicate brushtail possums as part of the ''Predator-free New Zealand'' campaign (Russell et al. 2015).
It is well known that the different colour forms seen in New Zealand possums are representative to some extent of their different ancestry (Kean 1971;Pracy 1974;Triggs, Green 1989), but it is generally assumed (although seldom stated) that New Zealand possums breed indiscriminately with respect to those origins (Montague 2000). However, a recent microsatellite DNA analysis of possums in Hawkes Bay, on the North Island of New Zealand, identified at least two genetically distinct groups of possums, separated by a zone of putatively introgressed animals (Sarre et al. 2014). That work implied that there were at least two breeding forms in the region, probably representing the Tasmanian subspecies and the south-eastern mainland subspecies (T.v.fuliginosus and T.v.vulpecula respectively), rather than one continuous population (Sarre et al. 2014). That proposition is supported by historical records which show that there were three recorded introductions of possums into the area ( Fig. 1): possums taken directly from Tasmania and released in the Lake Waikaremoana area in 1898; ''black furred'' possums (also presumed to be Tasmanian in origin but translocated from elsewhere in New Zealand) released on the Mahia Peninsula a short time after the first introduction; and possums from an unknown location on mainland Australia released in the Mohaka river area in the early 1900s (Pracy 1974;Sarre et al. 2014).
Resolving complex introduction histories and patterns of admixture in invasive species can be difficult, but access to population level genomic data can reveal how species integrate into new environments and whether diverse origins have enhanced or inhibited their invasion success. Here, we use genotype-by-sequencing to examine the origins and contemporary population structure of possums in Hawkes Bay, and compare their genetic diversity to animals sampled from across their native range in Australia. We test the proposition that the sub-populations, as previously defined by the distribution of microsatellite DNA variation (Sarre et al. 2014), are representative of the ancestral mainland and ancestral Tasmanian origins, and that the Tasmanian and mainland forms have established a hybrid swarm in the Hawkes Bay region that affects the direction and rate of gene flow. We also use asymmetries in the number of private alleles between pairs of sub-populations to determine the direction of that geneflow. A symmetrical distribution of private alleles among sub-populations indicates symmetry in the rates of gene flow between sub-populations and subspecies and would argue against the formation of genetic barriers between the two subspecies. We demonstrate that one sub-population is genetically identifiable as the Tasmanian subspecies T.v. fuliginosus, that possums from that sub-population are distinct from animals that are most likely mainland in origin, and that they have maintained that distinctiveness even after more than 80 years of contact. We also show that the sub-population believed to originate from mainland Australia is indeed largely T.v.vulpecula in origin, although we could not identify the mainland region from which those possums were obtained. Our data support the proposition that brushtail possums in Hawkes Bay do not behave as a single panmictic population, but rather as two subspecies with a narrow zone of contact. Gene flow into and out of that contact zone is asymmetrical and indicative of a genetic barrier between the two subspecies. The formation of similar hybrid barriers elsewhere in New Zealand could have important implications for the control of this invasive species.
Sample selection
In order to determine the Australian origins of the invasive possums at our study site in Hawkes Bay, New Zealand, we obtained samples from a broad range of locations in eastern mainland Australia (n = 19) and Tasmania (n = 11). Specifically, we sampled tissue (mainly ear tissue) from museum specimens and road killed animals, from six states and territories (Northern Territory, Australian Capital Territory, Queensland, New South Wales, Victoria and Tasmania) and Flinders Island (located in the Bass Strait between Tasmania and mainland Australia; Fig. 2). These samples encompass the four commonly recognised subspecies of Trichosurus vulpecula in eastern and northern Australia (Kerle et al. 1991): T.v vulpecula (south-eastern Australia), T.v. arnhemensis (Northern Territory), T.v. johnstonii (Queensland) and T.v.fuliginosus (Tasmania and Flinders Island) and represent the subspecies most likely to include the source populations for New Zealand possums (Fig. 2). Possums from the Hawkes Bay region of New Zealand were sampled from 24 sites (n = 253) and were selected as a subset of the samples previously genotyped using microsatellite DNA (Sarre et al. 2014).
Specifically, we applied New Hybrid analysis (Anderson, Thompson 2002) to the microsatellite data previously reported by Sarre et al. (2014) to estimate the percentage of ancestry for each individual possum and indicate whether an individual is an F1 or F2 cross between subspecies. We then selected individual possums that represented all possible combinations of F1, F2 and 'pure' ancestry, as well as an even number of males and females, and used the NewHybrid analysis to sort all 1,605 individuals into three genetic groups: (1) largely Tasmanian in origin, (2) largely Australian mainland in origin, and (3) of mixed ancestry. We then stratified for collection site within those three groups, focussing most on samples from the sites closest to the three known sites of introduction and the previously identified contact zone (Fig. 2), and then randomly chose individuals across those strata for inclusion in the present study using genotyping-by-sequencing.
Genotype-by-sequencing
DNA was extracted from each 2 mm² piece of ear tissue sample by Diversity Arrays Technology Pty Ltd (DArT) using a salting out method as reported by Couch et al. (2016). We used the DArTseq™ genotype-by-sequencing approach, which uses a combination of restriction enzyme complexity reduction and high throughput sequencing (Courtois et al. 2013; Cruz et al. 2013; Kilian et al. 2012) to simultaneously identify and genotype SNP markers in the absence of a reference genome. A double digest approach with the restriction enzymes PstI and SphI was used for complexity reduction before the addition of custom adapters (Kilian et al. 2012) to restriction site overhangs. Fragments that included both a PstI and an SphI adapter were amplified using primers complementary to the adapters. These also incorporated molecular identifier barcode tags, to allow multiplexing of up to 96 samples per sequencing run. PCR conditions consisted of: denaturation at 94 °C for 1 min; 30 cycles of 94 °C for 20 s, 58 °C for 30 s and 72 °C for 45 s; and a final extension period of 72 °C for 7 min. PCR products were pooled for sequencing on an Illumina HiSeq 2000 platform using 77 cycles of single end sequencing. Raw sequence reads were filtered and processed using a proprietary DArT analytical pipeline. Poor quality sequences were removed, with more stringent criteria being placed upon the barcode region than the rest of the sequence. Approximately 2,000,000 sequences per barcode were identified and used in marker calling, during which identical sequences were collapsed and filtered before screening to identify variable markers using DArT proprietary SNP and SilicoDArT algorithms (DArTsoft14).
Analysis of genotype-by-sequencing data
The R package dartR (Gruber et al. 2018) was used for population genomic analyses of the DArTseq SNP data, including F-statistics, principal coordinates analysis (PCoA) and fixed difference analysis. SNPs were filtered using the RepAvg function (reproducibility ≥ 0.95) and CallRate (≥ 0.95), with lower and upper read depth thresholds of 5 and 90 respectively, and to exclude monomorphic loci and secondary SNPs.
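The filtering itself was done with the dartR R package; purely as an illustration of the thresholds listed above, the following is a hedged Python/pandas sketch, with assumed column names ('RepAvg', 'CallRate', 'ReadDepth') standing in for the DArT report fields.

```python
import pandas as pd

def filter_snps(loci: pd.DataFrame, genotypes: pd.DataFrame) -> pd.DataFrame:
    """Apply DArTseq-style locus filters.

    loci: one row per SNP with assumed columns
          'RepAvg' (reproducibility), 'CallRate', 'ReadDepth'.
    genotypes: individuals x loci matrix coded 0/1/2 with NaN for missing calls.
    Returns the genotype matrix restricted to loci that pass all filters.
    """
    keep = (
        (loci["RepAvg"] >= 0.95)
        & (loci["CallRate"] >= 0.95)
        & (loci["ReadDepth"].between(5, 90))
    )
    gt = genotypes.loc[:, keep.values]
    # drop monomorphic loci (no variation among the genotyped individuals)
    polymorphic = gt.nunique(dropna=True) > 1
    return gt.loc[:, polymorphic.values]
```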
We estimated ancestry proportions using SNP data and geographic location in tess3r (Caye et al. 2016), for K values of between two and eight with the default settings (code supplied on request). Fixed differences were identified between the New Zealand and Australian (mainland and Tasmanian combined) possum populations and through a second analysis in which we considered only the sub-populations at the Hawkes Bay study site identified using PCoA and tess3r (Caye et al. 2016).
To further define and visualise the population genetics of animals in Hawkes Bay, we ran a STRUCTURE analysis (Pritchard et al. 2000) using all loci with identical filters (n = 502) for each value of K (assumed number of ancestral populations) between 2 and 5 with 10 repeats each of 100,000 iterations. The first 50,000 iterations were disregarded as burnin and then each of the 10 repeats were summarised using the programs Evanno (Evanno et al. 2005) and Clumpp (Jakobsson, Rosenberg 2007).
We ran STRUCTURE using both ancestral models (No Admixture and Admixture) because they differ in their underlying assumptions about the origins of the population under study (Porras-Hurtado et al. 2013). The No Admixture scenario assumes no prior knowledge about the origin of populations under study. In our case, we know something about the origins of the populations drawn from the introduction history and we have an hypothesis arising from a previous microsatellite DNA study (mainland, Tasmanian and Hybrid) so for this analysis we ignored that knowledge and checked to see if the three groups still emerged. We then ran the Admixture model using our prior knowledge of the origin of the populations to provide insight into the proportion of individuals that can be attributed to each of the presumed ancestral populations. In order to do this, we combined individuals from those sites closest to the mainland introduction sites, those from the previously identified hybrid zone (Sarre et al. 2014) and those closest to the sites populated by possums from Tasmania into subpopulations A, B and C respectively. We expected that A and C (Mainland and Tasmanian origin) would each have a unique signature, while individuals from population B should show a mixture of both A and C.
We also used a NewHybrids analysis to examine population structure among Hawkes Bay possums. While broadly similar to that provided by STRUCTURE, the NewHybrids analysis differs in that it uses a detailed inheritance model to check whether individuals are indeed hybrids when two potential source populations comprising mainly pure individuals can be identified (Anderson, Thompson 2002). As NewHybrids can only be applied to a maximum of 200 loci, we reduced the available loci to 15,523 by filtering on call rate (≥ 0.95), and then selected the 200 most informative SNPs (based on their polymorphic information content as implemented in dartR) for the NewHybrids analysis, using 30,000 sweeps with a burn-in of approximately 100 sweeps, and using the Jefferies prior for both Pi and Theta. To further validate the result and its independence from the 200 loci identified as most informative, we re-ran the analysis with 200 randomly selected loci 30 times.
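As a sketch of the locus-selection step, the snippet below ranks biallelic SNPs by polymorphic information content (the standard two-allele Botstein formula) and keeps the top 200; it mirrors, but does not reproduce, the dartR implementation, and the 0/1/2 genotype coding with NaN for missing calls is an assumption.

```python
import numpy as np

def pic_biallelic(geno):
    """PIC of one biallelic SNP from 0/1/2 genotype calls (NaN = missing):
    PIC = 1 - (p^2 + q^2) - 2*p^2*q^2  (Botstein-style formula, two alleles)."""
    g = geno[~np.isnan(geno)]
    p = g.sum() / (2 * g.size)          # frequency of the counted allele
    q = 1.0 - p
    return 1.0 - (p ** 2 + q ** 2) - 2.0 * p ** 2 * q ** 2

def most_informative(genotypes, n=200):
    """Indices of the n SNPs (columns) with the highest PIC."""
    scores = np.array([pic_biallelic(genotypes[:, j])
                       for j in range(genotypes.shape[1])])
    return np.argsort(scores)[::-1][:n]
```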
Directionality of gene flow through asymmetry in private alleles
To further determine the direction of interactions within the Hawkes Bay study site, we estimated the levels of asymmetry of gene flow between sub-populations using a novel approach based on the numbers of private alleles present in each sub-population. Specifically, we used the idea that a sub-population that has a net loss of individuals towards another sub-population through emigration will lose private alleles over time as it shares them with the recipient sub-population. Under this process, the sub-population that has fewer emigrants will lose private alleles (through sharing) at a slower rate than its paired sub-population, while also accumulating private alleles through genetic drift. We used this logic to explore the direction of gene flow by counting the number of private alleles between sub-populations across the study area while accounting for differences in sample size. This approach differs from those developed for microsatellite DNA markers (Kennington et al. 2003; Sundqvist et al. 2016) by using the raw counts of private alleles rather than comparisons between observed and simulated asymmetries. Under our approach, we consider that a detectable asymmetry in the number of private alleles between neighbouring sub-populations is indicative of asymmetric geneflow between those two sub-populations. Conversely, symmetry in the numbers of private alleles is indicative of equal rates of gene flow in both directions between two sub-populations.
To study the gene flow into and out of the contact zone using private alleles, we clustered adjacent sample sites with the aim of obtaining an approximately even number of individuals per cluster. We used the proximity of individuals as the main criterion for clustering, while also considering the genetic similarity of individuals as indicated by the NewHybrid approach (see above) using microsatellite DNA data gathered previously by Sarre et al. (2014). Using this approach, we generated six clusters (A1, B1, B2, B3, C1, C2) within the three sub-populations and confirmed their validity by running a Discriminant Analysis of Principal Components (Jombart et al. 2010). We then counted the number of private alleles for each pairwise comparison among the six sub-populations. Sample sizes of individuals in each sub-population varied between 20 and 36. To account for the uneven number of individuals, we sampled all sub-populations down to 20 individuals (with replacement) and counted the number of private alleles in each pair of sub-samples. To reduce noise in the data set, we filtered the identified markers by minor allele frequency with a threshold of > 0.1 (based on 40 × 0.1 > 4 alleles) so that a cluster contained at least 5 alleles before an allele could be called private. This resulted in a data set of 506 SNPs, with a minimum minor allele frequency of 0.1 and a mean minor allele frequency of 0.23. The mean number of private alleles between each resampled pair of sub-populations was then calculated from a thousand repetitions of the procedure. Finally, we calculated the mean and 95% confidence intervals of the number of private alleles and the pairwise ratio of private alleles. Sub-population pairs that had a private allele ratio larger than one in more than 975 out of 1000 cases were considered significantly asymmetric and hence regarded as exhibiting asymmetric gene flow.
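A hedged Python sketch of this resampling scheme is given below: 20 individuals are drawn with replacement from each cluster, alleles private to each side are counted, and the pair is flagged as significantly asymmetric if the ratio exceeds one in at least 975 of 1000 repetitions. The genotype layout is assumed, and the minor-allele-frequency pre-filter described above is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def private_allele_counts(gA, gB):
    """Count alleles present in cluster A but absent from cluster B and vice
    versa, from 0/1/2 genotype matrices (individuals x SNPs, NaN = missing)."""
    def alleles(g):
        ref = np.nansum(2 - g, axis=0) > 0     # reference allele observed?
        alt = np.nansum(g, axis=0) > 0         # alternate allele observed?
        return ref, alt
    refA, altA = alleles(gA)
    refB, altB = alleles(gB)
    private_A = np.sum(refA & ~refB) + np.sum(altA & ~altB)
    private_B = np.sum(refB & ~refA) + np.sum(altB & ~altA)
    return private_A, private_B

def asymmetry_test(gA, gB, n=20, reps=1000, crit=975):
    """Resample n individuals with replacement from each cluster and test
    whether the ratio of private alleles is consistently greater than one."""
    wins = 0
    ratios = []
    for _ in range(reps):
        a = gA[rng.integers(0, gA.shape[0], n)]
        b = gB[rng.integers(0, gB.shape[0], n)]
        pa, pb = private_allele_counts(a, b)
        ratio = pa / max(pb, 1)
        ratios.append(ratio)
        wins += ratio > 1.0
    return np.mean(ratios), wins >= crit       # mean ratio, "significant?"
```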
Results
We genotyped 283 individual possums from Australia and New Zealand using the DArTseq approach, generating 27,339 SNPs, 6,596 of which were retained after filtering. Ten individuals were also removed by filtering because of high numbers of missing data leaving a total of 273 possum genotypes for analysis. We identified far fewer private alleles in the T.v.fuliginosus samples from Tasmania (mean = 25.3; n = 11) than in T.v.vulpecula sampled from mainland Australia (mean = 296.6; n = 12). Possums sampled in Tasmania also exhibited lower levels of expected (0.04) and observed (0.03) heterozygosity relative to their mainland counterparts (0.13 and 0.11 respectively). There was strong support (17.3% weight for PCoA Axis 1) for differentiation between Tasmanian individuals of T.v.fuliginosus (including those from Flinders Island) and all other individuals sampled from mainland Australia (Fig. 3). These findings are indicative of the Tasmanian history of geographical isolation, following separation of the island from mainland Australia by rising sea levels more than 9,000 years ago (Veevers 1991). There was also good support for differentiation between T. v. vulpecula and two other mainland subspecies (T. v. arnhemensis and T. v. johnsoni) although there was overlap between the latter two subspecies (Fig. 3).
Within the Hawkes Bay study area, we identified the three distinct sub-populations (A, B, and C) seen previously using microsatellite DNA analysis (Sarre et al. 2014). Sub-population A, genetically the closest to the introduction site of animals from mainland Australia, is discrete from sub-populations B and C, but does not cluster with any mainland Australian animals sampled here (Fig. 4). Sub-populations B and C cluster most closely together and have a low pairwise FST (0.03; Table 1). No fixed allelic differences were observed between any pairwise combination of the Hawkes Bay sub-populations (Table 2). Private alleles were observed in all pairwise combinations of groups. Overall, these data suggest that both B and C are composed largely of descendants of Tasmanian animals. All three Hawkes Bay genetic sub-populations exhibit private alleles when compared to each other (Table 2), suggesting that gene flow is restricted between them. The smallest number of private alleles was seen between sub-populations B and C, suggesting that gene flow is higher between those two than between sub-populations A and B or between A and C.
The PCoA analysis including all mainland Australian, Tasmanian, and Hawkes Bay samples (Fig. 4) shows that the majority of mainland Australian possums are genetically different from both New Zealand and Tasmanian individuals. In contrast, the native Tasmanian possums cluster very closely with the Hawkes Bay possums of putative Tasmanian ancestry (sub-populations B and C), providing additional genetic evidence for Tasmanian ancestry for those Hawkes Bay animals. The possums from sub-population A appear to be distinct from both Hawkes Bay sub-populations B and C and from the native Tasmanian and mainland Australian samples. We identified two mainland Australian individuals, one sampled from around the Sydney area of New South Wales and the other from the Australian Capital Territory, that cluster closest to the Hawkes Bay individuals of putatively mainland ancestry. We also used SNP genotype data combined with geographic information on each sample location to generate a visual representation of population genetic structure among the Hawkes Bay possums only using tess3r (Fig. 5; Fig. S1). Here, we examined three scenarios for the ancestral makeup of the possums: (1) two ancestral sub-populations (mainland Australia and Tasmania origins; k = 2); (2) three ancestral sub-populations (T.v.vulpecula-mainland, T.v.fuliginosus-Tasmania, and the contact zone population; k = 3); and (3) four ancestral sub-populations (T.v.vulpecula-mainland, two different introductions of T.v.fuliginosus-Tasmania, and the contact zone population; k = 4). Our estimates of ancestry proportions and ancestral allele frequencies show a clear disjunction between the two subspecies at their contact zone (Fig. 5; Panel 1), with the contact zone being genetically closer to the populations descended from T.v.fuliginosus than to those descended from T.v.vulpecula. The contact zone itself seems to run to the west of the Mohaka River that divides the two release points (Fig. 5; Panel 2). We find no evidence of two distinct ancestral groups of T.v.fuliginosus arising from the dual introductions of possums from Tasmania (Fig. 5; Panel 3) and it appears that this subspecies is relatively homogenous at its source within Tasmania (Fig. 4). These results were similar to those obtained using the STRUCTURE analysis (Fig. 6), which showed that the most appropriate number of groups to emerge from both the No Admixture and Admixture models was three (Fig. S2) and, as predicted, demonstrated that the sub-population designated as "B" most closely resembled a combination of sub-populations A and C. Similarly, the NewHybrid plots (Fig. S3; S4) show that sub-population B consists largely of F2 hybrid individuals along
with small numbers of both parental genotypes while sub-populations A and C are comprised largely of parental individuals with mainland and Tasmanian origins respectively. There is little evidence of F1 individuals in any of the Hawkes Bay animals genotyped.
Finally, asymmetric geneflow was identified between several pairs of the six genetic clusters using our private allele approach (Fig. 7, Table S1). The number of private alleles was consistently lower in sub-population A1 (mainland origin individuals) than in the three sub-populations within the contact zone (B1, B2 and B3). The same was true for sub-population C1 (Tasmanian origin individuals) relative to sub-populations within the contact zone (B1, B2 and B3). We also identified a significant asymmetry in the geneflow between sub-populations C1 and C2. There was no apparent effect of distance on the asymmetries between clusters (Fig. S6). In summary, the asymmetry in private alleles indicates that the direction of geneflow is into the contact zone, with only minor geneflow outwards.
Table 2: Fixed differences, number of private alleles, and corrected number of private alleles between Hawkes Bay, New Zealand groups of possums (groups A, B and C) and Australian common brushtail possums sampled from Tasmania (T.v. fuliginosus) and the south eastern mainland (T.v. vulpecula). The accumulation of fixed differences between populations indicates low levels of gene flow. The "corrected" number of private alleles has been divided by the larger of the two sample sizes to account for the difference between groups.
Discussion
New Zealand possums are the subject of enormous national pest animal control effort and considerable scientific endeavour (Byrom et al. 2016;Clout, Sarre 1997;Eason et al. 2017;Forsyth et al. 2018;Goldson et al. 2015;Livingstone et al. 2015;Montague 2000;Rouco et al. 2018;Rouco et al. 2017). While the history of their introduction from Tasmania and the Australian mainland is extremely well documented, New Zealand possums are considered almost universally to be representatives of a single lineage, albeit one with colour variants. This view is consistent with the prevailing taxonomy that discriminates between them only at the subspecific level (Kerle et al. 1991).
Here, we demonstrate that this expectation of possums mixing indiscriminately with respect to subspecies does not hold in practice in the introduced New Zealand population, at least within the Hawkes Bay study area. Specifically, possums in Hawkes Bay that are derived from two Australian subspecies more closely resemble their two ancestral entities, separated by a contact zone, than one integrated population. This is despite having had ample opportunity to disperse, mingle, and interbreed for over 80 years (between 40 and 80 generations). We found that new genetic forms, representing individuals of mixed ancestry (F2?) between the two subspecies, occur mainly in the intermediate contact zone between the sites of the original introductions. We also observe that gene flow tends to move inwards, into that contact zone, more commonly than outwards, and we found no evidence of parental genotypes moving across the contact zone and into the opposite parental sub-population. As such, the contact zone that has formed appears to act as a sink and limits dispersal.
Our results provide clear evidence that possums in Hawkes Bay behave as three distinct groups rather than a single panmictic population-a result that reflects previous findings of genetic structure at the site (Etherington et al. 2014;Sarre et al. 2014). We show that Hawkes Bay possums with putative Tasmanian ancestry have high genetic similarity to modern Tasmanian populations (T.v.fuliginosus), with only modest levels of introgression of alleles from T.v.vulpecula. In contrast, Hawkes Bay possums with putatively Australian mainland origin are genetically distinct from all Australian mainland individuals genotyped. As such, we were unable to identify the mainland Australian source for this group. These genome-level observations from the Hawkes Bay region mirror genetic patterns found more broadly in New Zealand possums using allozyme electrophoresis (Triggs, Green 1989). In that study, populations consisting predominantly of black possums were shown to be genetically similar to possums in Tasmania, while predominantly grey populations were genetically closer to Victorian and New South Wales possums.
There are several possible explanations for the genetic differences observed between the putative T.v.vulpecula animals in Hawkes Bay and the T.v.vulpecula animals sampled from mainland Australia. One possibility is that hybridisation between individuals of mainland Australian and Tasmanian origin masks the genetic signature of the mainland origins of New Zealand possums. Alternatively, the possums that derive from mainland Australia may be descended from a mixture of animals from several mainland sources, brought together in Hawkes Bay, and therefore do not resemble any single extant Australian population. Another possibility is that genetic drift (through founder effects and 80 years of isolation) and/or selection imposed by the New Zealand environment may have altered the genetic makeup of the Hawkes Bay population of T.v.vulpecula sufficiently for it to diverge from mainland Australian possums. This may even have resulted in the retention of genetic variants that are now rare (or even lost) in mainland Australia, a phenomenon seen in stoats introduced to New Zealand from England (Veale et al. 2015). Each of these three factors may contribute to our observations, although the fact that possums from sub-population A are genetically more similar to T.v.fuliginosus and to sub-populations B and C (Fig. 4) than they are to T.v.vulpecula suggests that there has been leakage of T.v.fuliginosus alleles into sub-population A. A more detailed genomic haplotypic study, combined with mitochondrial DNA analysis, may resolve this issue. Additional sampling closer to the original release site for T.v.vulpecula in Hawkes Bay may also help to identify the mainland location of origin for these possums.
Our results provide strong evidence that animals in sub-population B are a hybrid form of T.v. vulpecula and T.v. fuliginosus-the product of interbreeding between sub-populations A and C. The formation of a stable contact zone between two colonisers can arise when competition between the two is stronger than intraspecific competition or when hybrids are inviable or less fit than the parent taxa (Goldberg, Lande 2007). To our knowledge, interactions between the two subspecies or the viability of their offspring in nature have not been reported. We cannot therefore distinguish between these two possibilities. The positioning of the contact zone between two colonisers is likely to be attracted to, but not necessarily centred upon, a partial dispersal barrier (Goldberg, Lande 2007). In the case of the possums of Hawkes Bay, the presence of the Mohaka River between the release sites of the mainland Australian origin T.v.vulpecula and the Tasmanian origin T.v. fuliginosus provides such a partial barrier, and may have influenced the positioning of the contact zone near the eastern edge of this river.
The distribution of private alleles among the possum sites suggests that there is more immigration into the contact zone than out of it, from both the T.v.fuliginosus dominated population in the east and north of the zone, and from the T.v.vulpecula dominated population to the south west. The net result is that very few individuals with predominantly T.v.vulpecula ancestry (sub-population A) migrate completely through the contact zone (sub-population B) and integrate into the predominantly T.v.fuliginosus population (sub-population C), or vice versa. The contact zone thus appears to act as a sink for the two subspecies and, as such, represents a nontopographical (invisible) barrier to dispersal.
The existence of non-topographical or invisible barriers has implications for management, in this case particularly for the national program aimed at a predator-free New Zealand (Russell et al. 2015). Populations with a mix of both grey (T.v.vulpecula) and black (T.v.fuliginosus) individuals occur in many parts of New Zealand (Cowan 2001; Kean 1971; Triggs, Green 1989) and hybrid animals have been reported previously in the Orongorongo valley on the North Island (Kean 1971). It is therefore possible that many such contact zones occur on the main islands of New Zealand. These zones could inhibit local dispersal and provide barriers that were previously invisible, but which could now be exploited for control. Conversely, possums can disperse quickly into depopulated habitats (Efford et al. 2000; Ji et al. 2001), so poisoning or trapping across contact zones could inadvertently break down existing subspecific barriers and increase local rates of dispersal. Documentation of additional contact zones on the main islands of New Zealand would provide greater opportunities to take advantage of these invisible barriers, with the potential to produce faster localised eradication results.
Possums are the principal wildlife host of bovine tuberculosis in New Zealand (Morgan 1990), and the main cause of Tb persistence in cattle and deer herds in New Zealand, although the disease has not been recorded in Australian possums (Nugent, Cousins 2014;Tweddle, Livingstone 1994). If hybrid possums are less fit than either of the parental subspecies then the risk that possums are susceptible to bovine tuberculosis may be exacerbated. Moreover, there is some evidence that black coated animals, presumably T.v.fuliginosus, exhibit higher resistance to 1080 poison (Henderson et al. 1999;McIlroy 1982) than T.v.vulpecula-a phenomenon that has previously been hypothesised for marsupials more generally (Deakin et al. 2013). Thus, baiting with 1080 (a common approach used to control possums in New Zealand) could affect disproportionately the viability of one subspecies over the other, or even the viability of certain hybrids over others. Further research in this area, particularly through genomic analyses (Eldridge et al. 2020) will be particularly important if total eradication of possums is to be a key management objective.
In conclusion, we have identified clear evidence of a contact zone or border between two subspecies of introduced brushtail possums in Hawkes Bay, New Zealand, that produces a hybrid form. More work is required to determine whether this zone has formed because of inter-sub-specific competition or reduced hybrid fitness. The contact zone functions as a sink, with an asymmetry of gene flow tending towards the contact zone and away from both subspecies. It is therefore likely that the zone functions as a barrier to dispersal between the subspecies. Greater sampling would be required to determine the extent of the contact zone. Given the widespread overlapping distribution of the two subspecies in New Zealand, it is possible that there are many such hybrid zones that could function as barriers to dispersal across New Zealand. These hybrid zones could provide opportunities for managers to adopt a more nuanced approach to control of this introduced pest species, by accounting for differences between the subspecies and by taking advantage of previously unrecognised borders at contact zones.
"Geology"
] |
Phytochemical analysis on the aerial parts of Teucrium capitatum L. with aspects of chemosystematics and ethnobotany
Abstract The phytochemical analysis on the aerial parts of Teucrium capitatum L. collected from a new population in Central Italy led to the identification of eight compounds, i.e. pheophytin a (1), poliumoside (2), apigenin (3), luteolin (4), cirsimaritin (5), cirsiliol (6), 8-O-acetyl-harpagide (7) and teucardoside (8), belonging to four different classes of secondary metabolites. Pheophytin a (1) represents a newly identified compound in the genus, whereas compounds (7–8) are newly identified compounds in the species. The chemotaxonomic and ethnobotanical aspects relative to the presence of these compounds were widely discussed, suggesting important conclusions for both. Graphical Abstract
Introduction
Teucrium capitatum L. is a perennial sub-shrub belonging to the Lamiaceae family. The etymology of its name derives from the Greek word τεύκριον (teúkrion) and the Latin word caput (head): they refer to Τεῦκρος (Teucer), the first king of Troy to use species of this genus in the medicinal field, and to the typical disposition of its inflorescence, a glomerule, respectively. Linnaeus was the first to scientifically describe this species (Pignatti 1982).
From the morphological point of view, this species is characterized by an erected stem which is woody only at the base and is spread with slight and small glandular hairs. Leaves are sessile with an entire margin and hairy on both surfaces. The inflorescence is composed to form small and branched terminal and axillary glomerules. The corolla is extremely hairy and is formed by five small white or pink petals. The fruit is a schizocarp constituted by four oval reticulated light brown mericarps. The pollination is through insects (Pignatti 1982).
The species is widely distributed in the entire Mediterranean basin and along the Black Sea (Navarro 2020). In Italy, it can be found everywhere except in Valle d'Aosta, Trentino Alto Adige and Lombardia, in arid and rocky areas and in garriga (an open sub-shrub Mediterranean vegetation type), up to 1800 m a.s.l. (Conti et al. 2005). The species is morphologically similar and taxonomically close to Teucrium polium L. Actually, it was considered in the past as a subspecies of the latter (Teucrium polium L. subsp. capitatum) but was recently raised to the rank of nominal species after genetic and morphological considerations. In fact, T. capitatum presents some morphological differences from T. polium, like its reduced dimensions and its reduced inflorescences (www.worldfloraonline.org 2022).
In general, some works are present in the literature on the phytochemistry of the species under both its old and new names. However, most of them focused on the essential oil composition (Cozzani et al. 2005; Kerbouche et al. 2015; Khani and Heydarian 2014; Chabane et al. 2021; Maccioni et al. 2021). Indeed, only a few phytochemical studies dealt with non-volatile compounds like phenyl-ethanoid glycosides (Pedersen 2000), flavonoids (Harborne et al. 1986; Stefkov et al. 2011) and diterpenoids (Marquez et al. 1981; Camps et al. 1987; Fernandez et al. 1985).
The species has been used in the folklore medicine of several countries. In particular, in Jordan, the decoction or the infusion of its leaves is administered to treat gastro-intestinal diseases and diabetes (Oraib et al. 2013) and to treat diarrhea, wounds, scab and colic in sheep, goats and cows (Mohammed et al. 2016). In addition, in Palestine, the decoction of the leaves is used against psoriasis (Shawahna and Jaradat 2017). Notwithstanding, recent studies demonstrated the acute toxicity of this species (Dourakis et al. 2002) as well as of many other Teucrium (Stickel et al. 2000; Savvidou et al. 2007; Grafakou et al. 2020) and Ajuga species (El Hilaly et al. 2004; Luan et al. 2019). These toxic effects are only due to the presence of specific secondary metabolites, i.e. neo-clerodane diterpenes having a furan ring, which are quite common in Teucrium and Ajuga species (Frezza et al. 2019a). In the liver, this furan ring is oxidized by cytochrome P450 3A4, an enzyme whose task is to oxidize small molecules to favor their removal from the body, producing reactive epoxide molecules with a strong alkylating nature which makes them very toxic for the liver even at micromolar concentrations (Zhou et al. 2004, 2007). For this reason, the use of Ajuga and Teucrium species in folklore medicine is being discouraged. Yet, further considerations and evaluations are needed.
The aims of this work were multiple: to perform a phytochemical analysis on the species; to study a new population; to draw deeper chemotaxonomic conclusions on the species at the genus and family levels; to verify the possibility to also use this exemplar in the ethnobotanical field.
To the best of our knowledge, during this study, pheophytin a (1) was identified in the genus for the first time whereas 8-O-acetyl-harpagide (7) and teucardoside (8) were identified in the species for the first time. In fact, given its nature, pheophytin a (1) is potentially ubiquitous in the plant kingdom. It has been found in various plant species belonging to different families, e.g. in some Lamiaceae species such as Ocimum labiatum (N.E.Br) A. J. Paton (Kapewangolo et al. 2017) and Origanum onites L. (Azcan et al. 2000) but also in other families like Sapindaceae (Semaan et al. 2018), Theaceae (Kusmita et al. 2015), Moraceae (Bafor and Kupittayanant 2020) and Plantaginaceae (Frezza et al. 2019b). For this reason, this compound cannot be considered of any chemotaxonomic value. Poliumoside (2) has already been evidenced in the species (Andary et al. 1985, 1988) but also in other species of the Teucrium genus like T. polium L. (Chabane et al. 2021; Venditti et al. 2017a), T. yemense Deflers (Essam 1995) and T. chamaedrys L. (Mitreski et al. 2014) as well as in the genus Callicarpa L. of the Lamiaceae family (Liu et al. 2013) and in other families like Oleaceae (Andary et al. 1992), Paulowniaceae (He et al. 2000), Plantaginaceae (Zhou et al. 1998), Orobanchaceae (Tuncay Agar and Cankaya 2020) and Scrophulariaceae (Grice et al. 2003). The co-occurrence of phenyl-ethanoid glycosides such as poliumoside (2) and iridoids such as 8-O-acetyl-harpagide (7) and teucardoside (8) has a taxonomical relevance in Asteridae (Jensen 1992), the subclass that comprises the Lamiales order where the family Lamiaceae is included. All the flavonoids identified in this accession (apigenin (3), luteolin (4), cirsimaritin (5) and cirsiliol (6)) have already been found in the species (Harborne et al. 1986; Stefkov et al. 2011) but also in many other species of the genus (Venditti et al. 2017a, 2017b; Mitreski et al. 2014; Harborne et al. 1986; Jarić et al. 2020), in the Lamiaceae family (Frezza et al. 2019a) and in many other families like Compositae, Plantaginaceae, Scrophulariaceae, Leguminosae, Apiaceae, Rosaceae and Cactaceae (Miean and Mohamed 2001; López-Lázaro 2009; Tomás-Barberán et al. 1988; Porter and Harborne 1994; Di Bella et al. 2022). According to these data, none of these flavonoids can be used as a chemotaxonomic marker. 8-O-acetyl-harpagide (7) has already been identified in the genus only in Teucrium chamaedrys (Frezza et al. 2018) and T. orientale L. (Oganesyan et al. 1986) even if it is quite well known in the Lamiaceae family, especially in Ajuga L. (Venditti et al. 2016a), Melittis L. (Venditti et al. 2016b), Sideritis L. (Venditti et al. 2016c), Lamium L. (Alipieva et al. 2003) and Stachys L. species (Venditti et al. 2017c). As a matter of fact, this compound is considered as one of the main chemotaxonomic markers of the Lamiaceae family (Frezza et al. 2019a). Nevertheless, it has also been found in other families like Scrophulariaceae (Venditti et al. 2016d) and Plantaginaceae (Küpeli et al. 2005), even if less frequently. Lastly, teucardoside (8) has been evidenced, in general, only in Teucrium marum L., T. subspinosus Pourr. ex Willd. (Bianco et al. 2004), T. polium (Jaradat 2015) and T. yemense (Abdel-Sattar 1998). As a matter of fact, this compound is considered as one of the main chemotaxonomic markers of the Teucrium genus (Frezza et al. 2019a).
The presence of both compounds (7) and (8) is perfectly in accordance with the biosynthetic pathway of iridoids which is peculiar to Lamiales, involving the biogenetic Route II with 8-epi-loganic acid and 8-epi-loganin as precursors, which leads to the production of iridoid derivatives showing the 8α-stereochemistry such as 8-O-acetyl-harpagide, which are further transformed into aucubin derivatives with the insertion of a double bond at the C7-C8 positions (Jensen 1991, 1992). Given the identified compounds, from the phytochemical point of view, the studied accession is surely a member of the Teucrium genus within the Lamiaceae family since some of their chemotaxonomic markers were identified. Anyway, even if there are macroscopic differences between T. polium and T. capitatum that have justified their distinction from a morphological point of view, no chemotaxonomic markers have been evidenced so far that may justify this distinction from the chemical point of view. In this sense, further phytochemical analyses are absolutely necessary, studying other known and new populations collected in different areas of the world, since it is well known that the secondary metabolite pattern is deeply affected by several factors like environment, genetics and growth area (Toniolo et al. 2014).
It is noteworthy that no neo-clerodane diterpene was evidenced during this study. The absence of neo-clerodane diterpenes is also very important from the chemotaxonomic standpoint since, as already explained, neo-clerodane diterpenes bearing a furan ring are considered as other fundamental chemotaxonomic markers of Teucrium and Ajuga species (Frezza et al. 2019a). Actually, the same situation has already been observed in T. chamaedrys (Frezza et al. 2018) as well as in Ajuga chamaepitys (L.) Schreb. (Venditti et al. 2016a), Ajuga genevensis L. (Venditti et al. 2016e), Ajuga tenorei C. Presl. (Frezza et al. 2017) and Ajuga reptans L. (Frezza et al. 2019c), all collected in Central and Northern Italy. This accession was also collected in Central Italy, which suggests that the absence of neo-clerodane diterpenes may be a peculiarity of Ajuga and Teucrium species growing in these areas, although further phytochemical studies on different populations and species are needed to confirm this hypothesis. In addition, the absence of neo-clerodane diterpenes bearing a furan ring is very important from the ethnobotanical standpoint since these compounds are extremely toxic, as already explained (Zhou et al. 2004, 2007). This absence, together with the presence of the identified compounds in the studied accession, suggests the possibility of using it in the ethnobotanical field because of the beneficial biological properties associated with them. In fact, pheophytin a (1) exerts strong antioxidant, antimutagenic, chemopreventive and anti-inflammatory activities (Hsu et al. 2013; Queiroz Zepka et al. 2019). Poliumoside (2) possesses strong antioxidant and antiproliferative properties (He et al. 2000, 2001). Apigenin (3) is a potent anti-inflammatory, antioxidant, antitumoral, antiproliferative, antidiabetic, antidepressive, neuroprotective, anxiolytic and sedative compound (Salehi et al. 2019). Luteolin (4) exhibits strong anti-inflammatory, antioxidant, antimicrobial, antitumoral and chemopreventive effects (López-Lázaro 2009). Cirsimaritin (5) has good antispasmodic, antimicrobial, antioxidant, anti-inflammatory, antitumoral and antidiabetic properties (Pathak et al. 2021). Cirsiliol (6) exerts strong antitumoral, sedative, hypnotic and muscle-relaxing effects (Al-Shalabi et al. 2020; Viola et al. 1997; Mustafa et al. 1995). 8-O-acetyl-harpagide (7) possesses potent antifungal, antibacterial, antipyretic and antitumoral effects (Shafi et al. 2004; Singh et al. 2006). Lastly, teucardoside (8) shows strong antioxidant and antitumoral activities (Elmasri et al. 2016). Yet, further pharmacological and toxicological studies are necessary before the use of this accession in the ethnobotanical field can really take place.
Experimental part
See the supplementary material.
Conclusions
The phytochemical analysis of the aerial parts of T. capitatum collected from a new population in Italy evidenced the presence of eight compounds belonging to four different classes of non-volatile secondary metabolites. One of these was identified in the genus for the first time, while two were identified in the species for the first time during this work. The presence of all these compounds confirms that the studied accession belongs to the Teucrium genus within the Lamiaceae family, since some of its chemotaxonomic markers were identified. Yet, this was not enough to confirm the correct promotion of T. capitatum to the rank of a nominal species from the chemotaxonomic point of view.
No neo-clerodane diterpene was identified in the studied accession, and this is important from both the chemotaxonomic and ethnobotanical standpoints, since it may reinforce a phytochemical peculiarity of Teucrium and Ajuga species collected in certain areas of Italy, as well as suggest the possibility of using this accession in folk medicine, even if further studies in different fields are necessary in both cases.
"Chemistry",
"Environmental Science",
"Medicine"
] |
Disorder of Biological Quality and Autophagy Process in Bovine Oocytes Exposed to Heat Stress and the Effectiveness of In Vitro Fertilization
The main problem in dairy herds is reproductive disorders, which are influenced by many factors, including temperature. Heat stress reduces the quality of oocytes and impairs their maturation, for example through its influence on mitochondrial function. Mitochondria are crucial during oocyte maturation as well as during fertilization and embryonic development. Disturbances related to high temperature will be observed increasingly often due to global warming. In the present study, we have shown that exposure to high temperatures during embryo cleavage statistically significantly (p < 0.01) reduces the percentage of oocytes that cleaved and developed into blastocysts eight days after insemination. The highest percentage of embryos that underwent division was observed in the control group (38.3 °C), at 88.10 ± 6.20%, while the lowest was obtained in the study group at 41.0 °C (52.32 ± 8.40%). It was also shown that high temperature has a statistically significant (p < 0.01) effect on the percentage of embryos that developed from the one-cell stage to blastocysts: exposure to 41.0 °C significantly reduced the percentage of embryos that cleaved relative to the control group (38.3 °C; 88.10 ± 6.20%). Moreover, the highest tested temperature limited the development of oocytes to the blastocyst stage to 5.00 ± 9.12% (control: 33.33 ± 7.10%) and of cleaved embryos to blastocysts to 3.52 ± 6.80% (control: 39.47 ± 5.40%). There was also a highly significant (p < 0.0001) effect of temperature on cytoplasmic ROS levels after 6 and 12 h of IVM. The highest level of mitochondrial ROS was found in oocytes after 6 h of IVM at 41.0 °C and the lowest in the control group; after 12 h of IVM at 41.0 °C, the mitochondrial ROS level reached a fluorescence ratio of 2.00, while the lowest value was found in the 38.3 °C group (1.08). Furthermore, with increasing temperature, a decrease in the expression of both the LC3 and SIRT1 protein markers was observed, indicating that the autophagy process was impaired by high temperature. Understanding the cellular and molecular responses of oocytes to elevated temperatures will be helpful in developing heat resistance strategies for dairy cattle.
Introduction
A crucial problem that has been observed in recent years is the reduction of reproductive efficiency in high-yielding dairy cows. This has a direct impact on dairy farming, its profitability, as well as the dairy industry [1,2]. Dairy cattle are sensitive to changes in environmental conditions caused by climate change, mainly due to their fast growth, intensive metabolism, and productivity [3]. Heat stress decreases feed intake, the efficiency of milk yield, and health status, as well as impairing thermoregulation, which affects animal husbandry [3]. The first symptoms of heat stress are changes in the behavior of cows, which are manifested as shorter lying periods and increased thirst [4]. It is also noted that the reproductive potential of dairy cows may be reduced due to elevated temperatures, which also reduces ovarian and corpus luteum function in summer and follicle growth and development; delays sexual maturation of individuals, extending the age of the first calving; and may also affect the retention of placenta [1,5,6]. Moreover, during heat stress in cows, there is an increase in cortisol secretion, which then interferes with the synthesis of follicle-stimulating and luteinizing hormones. This may result in estrus disorders but also reduces the quality of oocytes and impairs development, growth, maturation, and functioning of dominant and preovulatory follicles [5,7]. Exposure to heat stress in animals can lead to atresia of the ovarian follicles and significantly reduce fertility [3,8]. There is also evidence of the effect of higher temperatures on folliculogenesis and oogenesis. These processes are crucial for the oocyte's development, maturation, and acquisition of competence, which occurs under the influence of signals and hormones [9].
High temperatures will be recorded more and more often due to global warming. Healthy adult cows maintain a body temperature of 38.4-39.1 °C in thermoneutral zones, i.e., in the range of outdoor temperatures of 4-16 °C [10]. Above this range, i.e., above 25 °C, an increase in body temperature is observed in cattle, which results in heat stress [8,11,12]. Moreover, heat stress has been shown to disrupt the production of steroid hormones, significantly alter the composition of the follicular fluid, and reduce the number of granulosa cells (GCs), which are essential for developing oocyte competence in cows [1,5]. In addition, high ambient temperature causing heat stress promotes the production of reactive oxygen species (ROS), which can adversely affect the redox balance in granulosa cells [1,13]. Accumulation of ROS in granulosa cells induces apoptosis as well as a decrease in estradiol and progesterone synthesis [9]. During folliculogenesis, a mature and/or Graafian follicle forms from a pool of primary follicles. Primary follicles, composed of an oocyte surrounded by a layer of granulosa cells, are insensitive to heat stress. Therefore, exposure to high temperatures during follicle development may impair oocyte competence [14].
Furthermore, this stress has long-term effects on the oocytes, as it takes two or three estrus cycles for them to recover from the damage caused by high temperatures [9]. Another aspect of heat stress is that it induces a response in the body. Heat shock proteins are chaperone proteins that participate in heat protection at the cellular level. Their expression occurs as the body's response to temperature changes, which aims to maintain homeostasis [15,16]. Heat-shock proteins 70 and 90 (HSP70, HSP90) mediate the regulation of various physiological processes, including folliculogenesis, oogenesis, and embryo development. HSPs have been shown to promote cell survival by inhibiting cell apoptosis. However, exposure to strong heat stress increases cell susceptibility to apoptosis compared with mild exposure [15,17].
The hyperthermia experienced by cows in different cycles reduces the growth of the ovarian follicle, directly reduces the ovarian oocyte pool, and can also affect the oocyte itself, which subsequently compromises the ability of the embryo and fetus to develop [5,18,19]. Furthermore, studies have shown that the functionality of mitochondria is a crucial determinant of the developmental potential of oocytes, affecting their quality and, thus, their fertilization potential [20,21]. Therefore, mitochondrial dysfunction or an insufficient number of mitochondria may lead to abnormal development of oocytes and early embryos [20]. It has also been shown that exposure of bovine oocytes to high temperatures may disrupt the function of oocyte mitochondria, causing, among other effects, DNA fragmentation [22].
The quality of oocytes, i.e., their ability to develop into an embryo, is affected by many factors, including diet, stress, and in vitro maturation (IVM) conditions [23,24]. Moreover, the quality of the oocytes is closely related to the granulosa cells [20]. Studies have shown that the functionality of mitochondria is a crucial determinant of the developmental potential of oocytes, affecting their quality [20,21,25,26]. The oocyte contains on average 10^5-10^6 mitochondria, which participate in oxidative phosphorylation and are the primary source of ATP [20]. ATP levels are linked to oocyte developmental competence, including fertilization potential and subsequent embryonic development. In addition, mitochondrial transport may protect oocytes from aging [26]. Lipid metabolism, which provides an essential substrate for energy generation, also occurs in the mitochondria. In turn, lipid metabolism disorders, including increased amounts of fatty acids, especially saturated fatty acids, in the oocyte, may inhibit fertilization and embryo development [26]. These organelles also maintain Ca2+ ion homeostasis, which is necessary during oocyte IVM and fertilization [27,28]. Another role of the mitochondria is to support the early development of the oocyte and embryo, as glycolysis is limited during oocyte maturation and remains so until early pre-implantation embryonic development. Therefore, the oocyte's mitochondria are the primary energy source during the development of the embryo [29,30]. It has been found that dysfunction or a reduction in the number of mitochondria in oocytes can lead to abnormal development of these cells and early embryos [20]. Mitochondria are structured in a manner that makes them highly sensitive to thermal stress [25,28].
Autophagy is believed to be the primary mechanism regulating the intracellular delivery of dysfunctional proteins and protein aggregates to the lysosome [31], thereby playing a pivotal role in both physiological and pathophysiological contexts [32]. It can be induced by various stressors, ranging from food deprivation to hypoxia or disturbance of homeostasis. Its primary phylogenetically conserved role is as an evolutionary catabolic pathway supporting cellular development, maintenance, and homeostasis [33].
Several studies have shown an association between oocyte quality and mitochondrial function [34]. The number of mitochondria and their function are regulated by the organized processes of mitochondrial biosynthesis and degradation in cells [35]. SIRT1 has been shown to be an essential mitochondrial deacetylase involved in the biological functions of mitochondria [36]. It is also involved in the regulation of autophagy and of mitochondrial function in cells, related to maintaining the redox balance [37]. In turn, LC3 is a protein marker located on the autophagosome membrane [35], which also mediates the antioxidant defense processes of oocytes and may be a helpful marker for determining their competence in response to heat stress [38].
The aim of this study was to assess the impact of heat stress on the in vitro fertilization process (experiment 1) and to evaluate the effects of this stress on mitochondrial and cytoplasmic processes in bovine cells (experiment 2). An additional objective was to determine the effect of heat shock on autophagy processes in oocytes (experiment 3).
Experiment #1: Influence of Heat-Stress Exposure on In Vitro Fertilization Processes
The study showed the highest percentage of embryos that underwent division to be in the control group (38.3 °C). The value was 88.10 ± 6.20%, while the lowest was obtained in the study group at 41.0 °C (52.32 ± 8.40%) (Figure 1). For the study group at 39.8 °C, 74.65 ± 8.90% of the embryos divided. The highest percentage of oocytes that reached the blastocyst stage was recorded in the group subjected to heat stress at 39.8 °C (48.20 ± 6.60%), while in the control group (38.3 °C) it was 33.33 ± 7.1% (Figure 1). The lowest percentage of oocytes that reached the blastocyst stage was recorded in the test group at a set temperature of 41.0 °C (5.00 ± 9.12%) (Figure 1). In terms of embryo development to the blastocyst stage, the highest values were obtained in the test group at 39.8 °C (55.65 ± 7.90%); the control group (38.3 °C) had a value of 39.47 ± 5.40%, and the lowest percentage of embryos that reached the blastocyst stage was observed in the group at 41.0 °C (3.52 ± 6.80%) (Figure 1). All groups differed in a statistically highly significant way (at the level of p < 0.01). The effect of temperature was highly significant in each developmental group of germ cells.
Comparing the control group (38.3 °C) to the experimental group (39.8 °C), there was a significant difference in the number of embryos that developed from the unicellular stage to blastocysts (p < 0.05) (Figure 2). At the same time, there were highly significant differences (p < 0.01) with respect to the group exposed to the temperature of 41.0 °C. In the control group, after 6 h, the percentage of embryos that developed from the unicellular stage to blastocysts was 41.0 ± 5.2%, while the lowest rate was in the research group at 41.0 °C (30.1 ± 5.1%). A similar situation was observed for the number of embryos developing after 12 h of observation. The difference between the 39.8 °C and 41.0 °C groups was highly statistically significant (p < 0.01) and amounted to 37.1 ± 3.9% and 7.7 ± 6.4%, respectively (Figure 2).
Experiment #2: Influence of Heat-Stress Exposure on Mitochondrial and Cytoplasmic Processes
The effect of temperature on the level of cytoplasmic ROS was highly significant (p < 0.0001), both after 6 and 12 h of IVM. With increasing temperature, a decrease in the expression level of both LC3 and SIRT1 protein markers was observed. The expression level of LC3 was highest in the 38.3 °C group (30.30 ± 2.7) and lowest at the highest temperature (19.04 ± 2.1). Identical results were obtained when analyzing changes in SIRT1 level: the highest value was observed at 38.3 °C (48.12 ± 4.6) and the lowest at the highest temperature, 41.0 °C (22.00 ± 2.0). The effect of temperature on SIRT1 protein expression was statistically highly significant (p < 0.01). The above results clearly indicate that the autophagy process was impaired as a result of high temperature.
Discussion
The study aimed to assess the impact of heat stress on the process of in vitro fertilization (experiment 1) and to evaluate the effects of this stress on mitochondrial and cytoplasmic processes in bovine cells (experiment 2). An additional objective was to determine the effect of heat shock on autophagy processes in oocytes (experiment 3). Studies show that exposure of cows to heat stress contributes to hyperthermia and disorders within the reproductive system [28,39].
Our research has shown that exposure to heat during in vitro maturation of bovine oocytes impairs their growth and developmental competence. This may confirm that oocytes are sensitive to various factors, including temperature [40]. Furthermore, it has been shown that the environmental conditions prevailing in the summer do not entirely block the developmental competence of oocytes, but they do reduce their development and cleavage and limit the achievement of the blastocyst stage after fertilization. The present study showed that culture of bovine oocytes at stress temperatures of 39.8 °C and 41.0 °C for 6 h and 12 h of in vitro maturation reduced their capacity to cleave and develop from the unicellular stage to the blastocyst stage.
According to current knowledge, the first 12 h of in vitro maturation are crucial in maintaining the developmental competence of oocytes, which then affects their embryonic development ability. During this time, the oocyte goes from the induction phase to the synthesis phase [41]. Then, several changes occur: de novo synthesis of proteins, their accumulation, post-translational modifications, organization of microtubules, and chromatin condensation [41][42][43]. Exposure of oocytes to high temperatures is a significant factor contributing to reduced fertility, including reduced conception rates in inseminated cows [44]. In addition, it reduces the viability of granulosa cells, limiting the production of estradiol necessary during oocyte maturation and embryo development [39]. Oocytes exposed to temperature shock show reduced cleavage capacity and a low capacity to develop into a blastocyst [39]. Heat stress impairs the developmental competence of oocytes at the germinal vesicle stage [45]. A reduction in cattle's reproductive potential translates into lower herd production efficiency and generates direct losses for the dairy industry [42,44].
Stamperna et al. [40] report that exposure of bovine oocytes to high temperatures for as little as 3 h may result in disturbances in the structure of these cells [40]. In another study by Stamperna et al. [41], the effect of short-term exposure of IVM oocytes and embryos to temperature shock was assessed. For this purpose, the cumulus-oocyte complexes were matured in vitro for 24 h at 39 °C in the control group, while in the test group the temperature was raised to 41 °C for 6 h. Cleavage was assessed 48 h after fertilization. An increase in temperature to 41 °C for 6 h was shown to disrupt oocyte maturation and inhibit the formation of blastocysts from cleaved embryos, which translates into a reduction in embryo quality [41]. The results obtained by Stamperna et al. [41] are similar to those presented here. Moreover, the authors of [41] show that even a short exposure to temperature shock causes disturbances in the genomic regulation of oocyte maturation. It has also been shown that this exposure time (6 h) at 41 °C disturbs the oxidative balance of the oocyte and cumulus cells [41]. Numerous studies [40][41][42][46] show that exposure of oocytes to high temperatures, both in vivo and in vitro, reduces the ability of the oocyte to be fertilized and of the embryo to develop. Another consistent study, by Ispada et al. [42], showed that 14 h of exposure of IVM oocytes to a temperature of 41 °C reduces the ability of the oocyte to cleave and develop to the blastocyst stage.
In this study, as a result of temperature shock (39.8 °C and 41.0 °C), an increase in the content of reactive oxygen species in the mitochondria and cytoplasm of oocytes was observed. Reactive oxygen species damage the genetic material of cells, leading to cell dysfunction and even inducing apoptosis [47]. Reduced function of oocyte mitochondria is observed during exposure to heat stress [42]. The proper functioning of the mitochondria is crucial for maintaining the competence of the oocyte [48]. In cumulus cells, an increase in oxidative metabolism, which serves as an energy source, is observed; together, these processes support oocyte maturation. Exposure to high temperatures between 18 and 21 h of IVM has been shown to lead to cytoplasmic disturbances similar to those associated with cell aging [41]. These changes lead to DNA fragmentation, which can directly result in cell apoptosis [42].
Oocytes exposed to high temperatures are prone to higher production of reactive oxygen species, which increases the metabolic activity of the cells and consequently exposes them to oxidative stress [49]. Heat stress changes the activity of antioxidant enzymes in the cell, increasing its susceptibility to oxidative damage within the oocyte structures and affecting the development of embryos [42]. High temperatures can damage cellular organelles, including the mitochondria, limiting their functioning [50]. Mitochondria play an essential role in oocytes: they participate in calcium homeostasis; in the production of mitochondrial ATP, the primary source of energy, during respiration; in the regulation of oxidation and reduction reactions in the cytoplasm; and in oocyte apoptosis [26,28,51]. Thermal shock opens the permeability transition pores, reducing mitochondrial activity in immature and mature oocytes [42,49]. Changes taking place in the mitochondria of oocytes have a direct impact on their developmental abilities [51]. The increase in ROS due to heat stress damages the mitochondria during oocyte maturation, reducing their developmental competence [42,49].
Lee et al. [52] experimented on 158 oocytes from Holstein cattle and 123 oocytes from Jersey cows to investigate the effect of temperature on the development of female gametes. The tested temperature was 40.5 °C, with 37.5 °C as a control. The study showed that ROS levels differed between Holstein (19.30 ± 0.74) and Jersey (16.46 ± 1.10) oocytes at the control temperature. In turn, significant differences were obtained under heat stress (40.5 °C), at which Holstein oocytes showed a higher mean ROS level (32.20 ± 0.91) than Jersey oocytes (20.34 ± 1.07) [52]. Moreover, this suggests that excessive reactive oxygen species lead to cytoplasmic defects within the oocyte and damage to the genetic material [52]. Proper functioning of the mitochondria is crucial during stressful conditions, when the amount of reactive oxygen species increases and contributes to permanent damage. Furthermore, induced oxidative stress causes damage to the mitochondria, which reduces the source of energy and the availability of ATP. These phenomena can impair the viability of the oocyte [40]. Disorders of mitochondrial respiration, the product of which is energy in the form of ATP, affect ATP levels in the cytoplasm [26]. Elevated levels of ATP in the oocyte cytoplasm are observed when cells are exposed to heat stress [19,53]. This may impair oocyte development, fertilization potential, or subsequent embryonic development [26,51].
The relationship between ROS levels and ATP is not sufficiently understood. Some studies suggest that excessive ROS production in oocytes causes a decrease in intracellular ATP concentration [54], while other data indicate that SIRT may increase ATP levels and thus protect cells from ROS-mediated oxidative damage [37]. SIRT1 is an essential mitochondrial deacetylase that regulates the biological functions of mitochondria [36] and is directly related to the regulation of autophagy and mitochondrial function in cells, which can increase the ATP content of cells and protect them from excessive reactive oxygen species (ROS) and oxidative damage [37]. In our study, we showed that excessive ROS production and an increase in ATP concentration were associated with a decrease in SIRT1 and LC3.
Due to the progressive warming of the climate, the reduction of fertility in dairy cows and the decrease in their milk yield may significantly limit the dairy industry.
Materials and Methods
Considering the fact that this research covers only veterinary procedures, which are not experimental but merely routine breeding activities, it was not necessary to obtain the approval of the local animal ethics committee for this purpose.
Fertilization (In Vitro)
Bovine ovaries were obtained from a local abattoir located approximately 60 min from the laboratory. Ovaries were transported to the laboratory in 0.9% (w/v) NaCl at room temperature. The ovaries were sliced, and oocyte-cumulus complexes were collected into a beaker containing oocyte collection medium as previously described [56].
Spermatozoa (fresh, from one male, collected according to Kowalczyk et al. [57]) that had been purified by Percoll gradient centrifugation [58] and suspended in SP-TALP were added to the matured oocytes at a density of approximately 1 × 10^6 spermatozoa per well. Coculture of spermatozoa and COCs proceeded for 10 h, after which time the putative zygotes were denuded of cumulus cells by vortexing in a 2.0 mL microcentrifuge tube containing 0.5 mL Hepes-TALP for 5 min. The culture media were prepared using recipes described by Parrish et al. [58].
Heat Shock Induction
Coculture of spermatozoa and COCs was performed for 6 and 12 h at 38.3, 39.8, and 41.0 °C. After fertilization, putative zygotes were cultured at 38.3 °C for 8 days. The cleavage rate was determined on day 3 after insemination, and development to the blastocyst stage was determined on day 8 after insemination. This experiment was replicated eight times using a total of 200-210 oocytes per treatment. The stage of blastocyst development was assessed by the method previously described by Schrock [56].
One-Cell Stage
After the coculture of spermatozoa and COCs was complete (10 h after insemination), putative zygotes were cultured in 50 μL microdrops in groups of 25-30 at 38.3 °C continuously (control group) or were exposed to 39.8 or 41.0 °C for 6 or 12 h (treatment groups). After this time, all embryos were cultured at 38.3 °C for the remainder of the culture period. Cleavage rate was determined at 72 h after insemination, and development to the blastocyst stage was determined on day 8 [56].
Reactive Oxygen Species Measurement
Reactive oxygen species (ROS) levels were measured according to the methodology described by Payton et al. [19]. For this purpose, the level of reactive oxygen species was measured after culturing cumulus-oocyte complexes at the germinal vesicle stage for 6 h at 38.3 (control), 39.8, and 41.0 °C. After 12 h, oocytes were vortexed in 0.3% hyaluronidase to remove cumulus cells, and the zona pellucida was removed with 0.5% pronase. The oocytes after this treatment were ready for the assessment of ROS levels in the mitochondria using
Measurement of ATP Content
Cumulus-oocyte complexes were matured for 12 h at 38.3, 39.8, and 41.0 °C. After a total of 24 h, cumulus cells and the zona pellucida were removed from a subset of oocytes using the same methods as described above. Denuded oocytes were frozen at −80 °C for later analysis of ATP. ATP content was assessed in individual embryos using the ATP Determination kit (Invitrogen™/Life Technologies/Thermo Fisher Scientific) [59]. A six-point standard curve (0-5 pmol) was included in each test series. Standard curves were generated, and the ATP content was calculated using the formula obtained from the linear regression of the standard curve.
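A minimal sketch of the standard-curve calculation is given below; the luminescence values and sample readings are assumptions used only to illustrate the linear-regression step, not data from the kit.

# Illustrative sketch (assumed values): ATP estimated from a six-point
# standard curve (0-5 pmol) by linear regression of luminescence readings.
import numpy as np

std_atp = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])      # pmol, known standards
std_lum = np.array([120, 980, 1850, 2790, 3640, 4510])   # hypothetical luminescence

slope, intercept = np.polyfit(std_atp, std_lum, 1)        # linear standard curve

def atp_from_luminescence(lum):
    """Invert the standard curve: ATP (pmol) for a measured luminescence."""
    return (lum - intercept) / slope

sample_lum = np.array([640, 2100, 3300])                  # hypothetical oocyte readings
print(atp_from_luminescence(sample_lum))                  # pmol ATP per sample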
Statistical Analyses
To analyze the effect of temperature on protein levels, a one-way analysis of variance followed by the LSD (Least Significant Difference) test was used [61]. Significant differences (p < 0.01) are indicated by different capital letters and two asterisks (**; A,B,C).
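As a sketch of this workflow (one-way ANOVA followed by LSD-style pairwise comparisons, here approximated with unpooled pairwise t-tests), assuming hypothetical expression values for the three temperature groups:

# Hypothetical sketch: one-way ANOVA across the 38.3, 39.8 and 41.0 C groups,
# followed by LSD-style pairwise t-tests (no multiplicity correction).
from itertools import combinations
from scipy import stats

groups = {
    "38.3": [30.1, 31.5, 29.4, 30.9],   # assumed LC3 expression values
    "39.8": [25.2, 24.8, 26.1, 25.5],
    "41.0": [19.3, 18.7, 19.9, 18.2],
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

if p_val < 0.01:                          # proceed to pairwise comparisons
    for a, b in combinations(groups, 2):
        t, p = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} C vs {b} C: p = {p:.4f}")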
Conclusions
The results obtained in this study confirmed that exposure of maturing oocytes to high temperatures has a devastating effect on their developmental competence and subsequent blastocyst development. In addition, exposure to heat changes mitochondrial function by affecting energy levels (ATP) and redox potential (ROS levels). Higher ATP synthesis in oocytes exposed to temperature shock may confirm that the mitochondria of these oocytes had a reduced mitochondrial membrane potential. Moreover, with increasing temperature, a decrease in the expression level of both LC3 and SIRT1 protein markers was observed.
Heat stress reduces the reproductive performance of cattle, so a thorough understanding of the cellular and molecular responses of oocytes to elevated temperatures will be helpful in the development of heat resistance strategies in dairy cattle.
Author Contributions: We declare that the contribution of the mentioned authors (M.W., A.K., W.K., P.C. and E.C.-P.) to this work was significant and consisted of: designing the experiment (A.K. and W.K.), developing the methodology (M.W., A.K. and W.K.), performing the experiment (A.K., M.W. and P.C.), analyzing the results (E.C.-P.) and preparing this manuscript (M.W., A.K., W.K., P.C. and E.C.-P.). All authors have read and agreed to the published version of the manuscript.
Funding:
The APC/BPC is financed/co-financed by Wroclaw University of Environmental and Life Sciences. This research was carried out as a part of the activities of the leading research group-Animal Science for Future (ASc4Future).
Institutional Review Board Statement:
The experiment has been performed as part of routine activities during the current semen production in the reproductive station and did not require the approval of the ethics committee.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
] |
Optical diffraction for measurements of nano-mechanical bending
We explore and exploit diffraction effects that have been previously neglected when modelling optical measurement techniques for the bending of micro-mechanical transducers such as cantilevers for atomic force microscopy. The illumination of a cantilever edge causes an asymmetric diffraction pattern at the photo-detector affecting the calibration of the measured signal in the popular optical beam deflection technique (OBDT). The conditions that avoid such detection artefacts conflict with the use of smaller cantilevers. Embracing diffraction patterns as data yields a potent detection technique that decouples tilt and curvature and simultaneously relaxes the requirements on the illumination alignment and detector position through a measurable which is invariant to translation and rotation. We show analytical results, numerical simulations and physiologically relevant experimental data demonstrating the utility of the diffraction patterns. We offer experimental design guidelines and quantify possible sources of systematic error in OBDT. We demonstrate a new nanometre resolution detection method that can replace OBDT, where diffraction effects from finite sized or patterned cantilevers are exploited. Such effects are readily generalized to cantilever arrays, and allow transmission detection of mechanical curvature, enabling instrumentation with simpler geometry. We highlight the comparative advantages over OBDT by detecting molecular activity of antibiotic Vancomycin.
Micro-cantilevers are the most widely deployed micro-mechanical system (MEMS), initially developed for atomic force microscopy 1 , but now serving as ultra-sensitive force transducers for applications ranging from airbag release to motion detection in mobile telephones. They have enabled nanobiotechnology 2,3 , branching beyond imaging into single-molecule manipulation and force metrology 2,4 , as well as multifunctional lab-on-a-tip 2 techniques. Cantilevers are promising for future medical diagnostic devices because they are both sensitive, with unlabeled biomolecules detected down to femtomolar concentrations within minutes 5,6 , and because they can be multiplexed on arrays that allow multiple simultaneous differential measurements 7,8 . The biochemical sensitivity of cantilevers derives from the ability to detect small motions of their untethered ends, usually via the optical beam deflection technique (OBDT) 9,10 implemented extensively for AFM-like devices. While conceptually simple, the need for careful alignment by specialists and a laser spot size small compared to the dimensions of the cantilevers limit general applicability outside of specialized research laboratories as well as the miniaturization needed both for enhanced sensitivity and massive multiplexing.
One reason for the preeminence of OBDT is that when the first atomic force microscopes (AFM) were developed 30 years ago, inexpensive digital imaging (DI) was unavailable. The current ubiquity of DI makes the adoption barrier negligible. In this paper, we describe how cheap DI enables a much more robust method, namely far-field diffractive imaging, for optical readout of cantilever arrays. The method operates with light beams which can be much larger than individual cantilevers and whose angle of incidence and reflection need not be precisely set and measured, thus removing the obstacles presented by OBDT for non-expert use, miniaturization and multiplexing, and thus opening optically read-out cantilever arrays to numerous applications outside specialist research laboratories. It relies on the interference fringes easily visible for all objects with features on the scale of the wavelength of light, and we illustrate it for ordinary cantilevers, where the fringes derive from their edges, as well as for cantilevers into which we have inserted, using focused ion beam prototyping, periodic arrays of slots to create gratings whose diffraction patterns are very sensitive to bending.
The scientific literature describes a variety of cantilever metrology techniques based on diffraction and interference. Interferometry techniques have been implemented using optical fibre 11 or microscope objectives 12 , making these methods highly sensitive to misalignments. Some rely on diffraction gratings 13,14 or interdigital structures [15][16][17] . Existing work using DI has limited itself to calculating centroid positions, ignoring any observed diffraction patterns 18 . The new technique we describe differs significantly from previous methods in several aspects, including a more sophisticated exploitation of inexpensive DI and the complete elimination of any reference beam. It depends fundamentally on an observable that is invariant to translation and rotation and on the ability to decouple the measurement of tilt and curvature of the cantilever.
The paper starts with a description of design constraints and interference effects associated with all-optical readouts of cantilever bending, and then describes our tests of the diffractive method for unpatterned and patterned cantilevers, first for remote temperature sensing and then for biomedicine, where we examine antibiotic action.
Huygens-Fresnel description
Optical techniques for MEMS metrology require the illumination of a region or all of the device probed. We focus our attention on the details of OBDT depicted in Fig. 1a, whose operation consists of reflecting a focused or collimated laser beam from the free end of a cantilever and measuring the position of the reflection projected onto a segmented photo-diode or similar position-sensitive device.
We model the optical system using the Huygens-Fresnel principle, where the light re-emitted by the cantilever, whether reflected or transmitted, can be understood as the summation of an infinite number of infinitesimal point sources located on the cantilever surface. A cantilever acts as a rectangular slit source, a finite region emitting light in the plane (ξ, η) with a phase delay given by its curvature. We study Fresnel's approximation for the optical wave in the observing plane (x, y) for a given geometry and illumination. The beam projected onto the device is assumed to have a 2D Gaussian profile of size σ centred at the origin.
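As an illustration of this summation, a minimal numerical sketch of the far-field (Fraunhofer) pattern of a uniformly illuminated flat strip is given below; the wavelength matches the HeNe laser used later in the paper, but the width and distance values are assumptions chosen only to make the example concrete.

# Sketch (assumed parameters): far-field pattern of a flat strip of width w,
# built by summing Huygens point sources across the aperture.
import numpy as np

lam = 632.8e-9          # HeNe wavelength (m)
w   = 100e-6            # cantilever width (m), assumed
z   = 0.1               # detector distance (m), assumed
k   = 2 * np.pi / lam

xi = np.linspace(-w / 2, w / 2, 2000)          # source coordinates on the cantilever
x  = np.linspace(-5e-3, 5e-3, 1000)            # detector coordinates

# Fraunhofer phase term exp(-i k x xi / z) summed over all point sources;
# for a flat, uniformly lit strip this reproduces the familiar sinc^2 profile.
field = np.array([np.sum(np.exp(-1j * k * xd * xi / z)) for xd in x])
intensity = np.abs(field) ** 2 / np.max(np.abs(field) ** 2)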
Ideal infinite plane.
In the ideal case of a perfectly flat, infinite reflecting surface, the incident beam will be reflected unperturbed into a detector, maintaining the original profile. We model the finite size of the detector and consider the Gaussian beam projected on to a 4-segment photodiode of size a with gaps 2δ and calculate the differential signal V of the segmented sensor versus beam displacement d (See Fig. 1a for diagram and supplementary material for derivation of exact solution, approximations, analysis, and further figures).
We find that approximating the measured signal as proportional to the beam displacement (and therefore to the cantilever curvature), V(d) ∝ d/σ, implies maximum gain and linearity; this approximation is valid within 1% only if δ ≪ σ < 7a/3 and d < σ/4. We observe in equation (SE14) that a smaller spot size σ appears to increase the gain, as previously reported 19,20 , but it must be noted that this is true only if δ is much smaller than σ. Because the signal is linear within 1% only for d < σ/4, a small laser spot size will also restrict the measuring dynamic range. For beams not collimated but focused at a finite distance, σ = θz and d = 2βz (see Fig. 1a, where z is the detector distance, β is the cantilever deflection angle, and θ the beam divergence), and therefore the signal, proportional to β/θ, is independent of z and is maximized by reducing the beam divergence θ. Considering that the minimum beam cross-section diameter is given by φ = 4λ/(πθ) 21 , we see that a small beam divergence determines the minimum size of the laser spot and consequently the minimum width of the cantilever, if diffraction is to be avoided, as we will see.
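A short numerical check of these design constraints is given below; the relations φ = 4λ/(πθ), σ = θz and the linear-range condition d < σ/4 follow the text above, while the divergence and distance values are purely illustrative assumptions.

# Sketch with assumed numbers: minimum beam diameter from the divergence,
# and the linear-range limit d < sigma/4 of the split-photodiode signal.
import numpy as np

lam   = 632.8e-9                  # wavelength (m)
theta = 5e-3                      # beam divergence (rad), assumed
z     = 0.1                       # cantilever-detector distance (m), assumed

phi   = 4 * lam / (np.pi * theta)    # minimum beam cross-section diameter
sigma = theta * z                    # spot size at the detector
d_max = sigma / 4                    # displacement range where V(d) ~ d/sigma holds
beta_max = d_max / (2 * z)           # corresponding cantilever deflection angle

print(f"min spot diameter ~ {phi*1e6:.1f} um")
print(f"linear range: d < {d_max*1e3:.2f} mm, beta < {beta_max*1e3:.2f} mrad")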
Implications for standard readout. The infinite plane model could only be valid if the reflected beam cross-section is completely contained in a flat reflecting surface (Fig. 1b), otherwise any illuminated edge of the cantilever (Fig. 1c) will cause a diffraction pattern to appear on the photo-detector. Figure 1d,e shows calculated diffraction patterns caused by a Gaussian beam reflected from cantilevers with edges respectively far and close to the beam centre. Illuminating the cantilever's edge causes a broad-tailed asymmetric diffraction pattern, significantly different from the typically assumed Gaussian intensity distribution. This diffraction artefact causes an asymmetric dependence of the measured signal on the cantilever deflection. As shown in Fig. 2a, for negligible diffraction the curve is the Error function, symmetric around zero, but for considerable diffraction that symmetry is lost. The artefact is trivial for controlled systems, where a feed-back loop maintains the cantilever bending at a small constant value and the excursions from this value are small during experimentation. Interestingly, the asymmetry of the sensitivity could become relevant for uncontrolled systems, such as bio-markers, and systems where calibration and measurement happen at opposite sides of the deflection curve. For instance, in single-molecule force-spectroscopy experiments, signal versus cantilever bending calibration curves are obtained from upward-bending (pushing the cantilever into the surface) long trajectories whereas sample measurements are downward-bending for pulling 4,22 . Fig. 2b shows the normalized difference between considering or neglecting diffraction. The differences between the diffraction-calculated curve and the Error function are shown in Fig. 2b in normalized units of σ (the size of the illuminating beam at the detector) and for different magnitudes of the dimensionless parameter λz/σ 2 . Estimations of curvature changes or cantilever displacement could therefore be under or overestimated by more than 10%.
We have seen that the ideal case for OBDT is the limit where the illuminating beam is much smaller than the cantilever. In practice, small numerical aperture systems create focal spots comparable in size to cantilever widths (10-100 μm). Carefully aligning and focusing a small laser spot onto the centre of a cantilever end, to avoid the diffraction caused by the edges, can become a tedious task, if at all possible. We investigate the opposite limit, where the illuminated area is much bigger than the cantilever, and the diffraction pattern caused by the finite cantilever size contains all the information we need.
Cantilever diffraction detection method
Now we demonstrate that the shape of the diffraction pattern reveals the details of surface curvature. Figure 3 shows a cartoon of two operational modes of the new proposed readout method. A broad laser source illuminates the whole cantilever with close to homogeneous intensity while a CCD or CMOS detector captures some of the diffraction fringes created by either reflected (Fig. 3a) or transmitted light (Fig. 3b) 23 .
In the supplementary material we demonstrate that in reflection mode (Fig. 3a) the diffraction pattern from a rectangular cantilever of dimensions (w, l), curved along the ξ axis with a shape given by the quadratic polynomial 2(aξ + bξ²), is related to the pattern of a straight cantilever by the transformation in equation (1), and we observe that the term n = 2az causes a shift and m = 1 + 4bz a magnification of the diffraction pattern 24 . These results hold after applying the Fraunhofer approximation for the far field, ξ² ≪ λz. We therefore see that, in the Fraunhofer far-field approximation, the diffraction pattern caused by a micro-structure experiences a magnification given by the curvature of the surface and a shift in position given by the tilt of the surface, as defined by the change of variables in equation (1). Consequently, tracking the changes in position and shape of a diffraction pattern makes it possible to monitor the tilt and curvature of the cantilever independently. This result also holds for different cantilever shapes in reflection mode.
The light transmitted through the cantilever can also cause a diffraction pattern. Even though the typical transmittance of solid cantilevers could be insufficient to provide an acceptable signal-to-noise ratio in transmission mode, the features of interest in the diffraction pattern, i.e. position shift and magnification, are independent of the cantilever shape, so any arbitrary pattern of holes through the cantilever would allow higher transmittance. In particular, an array of slits provides the advantage of intense high-order diffraction peaks detectable at wide angles away from the direct incident beam.
To model the transmission mode (Fig. 3b), we took a step back and considered the Fresnel approximation not in a plane but from a curved source. We follow the same procedure as before but with z replaced by z + aξ + bξ². We approximate up to second order in the numerator exponent, re-arrange, and find a similar transformation. To the extent that x ≪ z we can neglect the terms containing x on the right-hand side and recover a result similar to that obtained before, this time only for small changes in cantilever tilt and curvature, implying a pattern shift of az and a pattern magnification of 1 + 2bz, respectively. The magnitudes differ from the previous result by a factor of two because, for a given cantilever displacement, light travels the path only once in transmission mode but twice in reflection mode.
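As a sketch of how these reflection- and transmission-mode relations translate into measurable quantities, the expected pattern shift and magnification for an assumed tilt a and curvature parameter b can be evaluated directly; all numerical values below are illustrative assumptions.

# Sketch (assumed values): pattern shift and magnification predicted by the
# reflection-mode (n = 2az, m = 1 + 4bz) and transmission-mode (az, 1 + 2bz)
# transformations for a given cantilever tilt a and curvature parameter b.
a = 1e-6        # tilt (rad), assumed
b = 2e-3        # curvature parameter (1/m), assumed
z = 0.1         # detector distance (m), assumed

shift_refl, mag_refl = 2 * a * z, 1 + 4 * b * z
shift_tran, mag_tran = a * z,     1 + 2 * b * z

print(f"reflection  : shift = {shift_refl*1e6:.3f} um, magnification = {mag_refl:.4f}")
print(f"transmission: shift = {shift_tran*1e6:.3f} um, magnification = {mag_tran:.4f}")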
The ideal resolution for the tilt a and curvature b estimations is expected to be of the order of p·2^(−j)/(2kz√s) and 2^(−j)/(2kz√s), respectively, where p is the pixel size, j is the effective number of bits of digitization resolution, k is the diffraction pattern size as a linear number of pixels in the detector and s is the number of samples.
We next explore experimentally both the case of a flat cantilever and also the case where the cantilever has a series of narrow slits forming a diffraction grating.
To verify the usefulness and performance of this detection method we have built the trial setup of Fig. 4a. We used a cantilever array in a windowed flow cell that allowed simultaneous measurements using the new diffractive readout method and the classic OBDT. We capture diffraction patterns generated by the cantilevers and study the changes as the cantilever rotates (goniometer tilt) and curves (temperature change). Fig. 4b shows the 2D pattern acquired with a CCD. Figure 4c,d shows the pattern profile and confirms experimentally our prediction that, independent of the details of the pattern, changes in curvature magnify the pattern profile and changes in tilt only displace the pattern without significant deformation.
To exemplify the data acquisition procedure, Fig. 5 shows measurements for the cantilever as the temperature is cycled in the range 25 °C to 33 °C. The gold-coated silicon cantilever acts as a bimetallic strip and the differential thermal expansion causes a homogeneous curvature of significant magnitude 25 . The observed deflection is around 72 nm/°C. A reference diffraction pattern is recorded by the CCD camera at the beginning of the experiment (Fig. 5a) as a spatial array of intensities. At the same time, the initial position of the OBDT spot on the CCD is also recorded as a reference. All further patterns and spot positions are measured sequentially in time and compared with their respective references. The difference between the pattern intensity arrays is calculated (Fig. 5b). We define a figure of merit (FOM) as the root mean square value of the difference between the observed pattern and the reference pattern (Fig. 5b).
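The figure of merit can be computed directly from two intensity frames; the sketch below assumes 12-bit images already loaded as numpy arrays (the array names and synthetic data are placeholders, not the actual experimental frames).

# Sketch: root-mean-square difference between an observed diffraction
# pattern and the reference pattern, used here as the figure of merit (FOM).
import numpy as np

def figure_of_merit(observed, reference):
    """RMS of the pixel-wise difference between two intensity images."""
    diff = observed.astype(float) - reference.astype(float)
    return np.sqrt(np.mean(diff ** 2))

# hypothetical 12-bit frames (in practice read from the TIFF sequence)
reference = np.random.randint(0, 4096, size=(480, 640))
observed  = reference + np.random.randint(-20, 20, size=(480, 640))
print(figure_of_merit(observed, reference))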
Figure 5g shows that the FOM calculated from the diffraction pattern closely correlates with the measurements from OBDT, evidencing that the far-field diffraction readout can replace the classic OBDT. We observed a resolution of 0.95 nm/√Hz at normal video rate, or 0.47 nm at 100 frames per second.
Previous results can be reproduced both in reflection and transmission mode, but the latter may suffer from a poor signal to noise ratio if the cantilever features a small transmissivity. An interesting consequence of our analysis is that, where the approximations hold, the principle of diffraction readout applies independent of the form of the cantilever. In particular it applies also to a periodic array of slits. We performed a second experiment with a cantilever featuring a series of narrow slits created by the Focused Ion Beam technique as shown in Fig. 6b. Here the light transmitted through the array of slits causes an intense series of Bragg peaks (Figs 6c and 7a) with spacing reciprocal in relation to the spacing of the slits. The weaker Fraunhofer peaks between the Bragg peaks result from the finite number of slits. More slits will increase the number of Fraunhofer fringes and decrease their intensity. The entire feature-rich pattern is sensitive to the phase differences caused by cantilever deflection and consequently, contrary to other techniques, a patterned cantilever allows the detection of bending across a broad range of detection angles.
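A small worked example of this reciprocal relation, using the standard grating equation d·sin(θm) = m·λ with an assumed slit period (the period value is not taken from the paper), is given below.

# Sketch (assumed slit period): angular positions of the Bragg (grating)
# peaks, d*sin(theta_m) = m*lambda, for light transmitted through the slits.
import numpy as np

lam = 632.8e-9     # wavelength (m)
d   = 4e-6         # slit period (m), assumed

orders = np.arange(1, int(d / lam) + 1)          # propagating diffraction orders
angles = np.degrees(np.arcsin(orders * lam / d)) # peak angles in degrees
for m, ang in zip(orders, angles):
    print(f"order {m}: {ang:.1f} deg")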
To further test the method in challenging conditions of practical interest we reproduce previous results on binding of antibiotics to target peptides 8 . Un-patterned cantilevers were either sensitized or passivated by selectively forming a self-assembled molecular monolayer by individual incubation in micro-capillary tubes. Passive control cantilevers were coated with polyethylene glycol (PEG) and target cantilevers were coated with a drug-sensitive mucopeptide analogue in a procedure detailed elsewhere 8 . Figure 8a shows bending of cantilevers coated with a bio-mimetic bacterial cell wall target in response to 250 μM Vancomycin detected with both the diffractive method and OBDT. Upon repetition at different Vancomycin concentrations we obtain the saturation curves in Fig. 8b,c; the observed scatter is a consequence of small instabilities of the physical system (including, in particular, the microfluidics) and is significantly bigger than the intrinsic resolution of the measurement methods.
Advantages and perspective
Diffraction features generated by a microstructure such as a cantilever are exquisitely sensitive to geometrical details such as curvature, tilt, position of the edges and roughness of the surface 24 . We have shown that this sensitivity can on one hand yield artefacts that slightly skew assumed calibrations of OBDT. On the other hand, they provide alternative means for detecting independently changes of tilt or curvature. Avoiding diffraction from the cantilever surface requires comparatively narrow illuminating beams or broad flat reflection areas. If it exists at all, the optimum position and focus of the illuminating beam will tend to be narrow, and therefore continuous and tedious re-alignment and re-calibration could be necessary. Another potential problem of highly focused laser beams in liquid (often ignored) is that the measured intensity becomes sensitive to transient perturbations caused by aggregates or other impurities drifting across the beam at the narrow focus, be they suspended in the liquid or diffusing in the cantilever surface. There are several advantages to be gained by measuring the details of diffraction patterns, instead of trying to avoid them.
Keeping the laser spot larger than the cantilever dimensions makes crucial calibration adjustments, such as the exact knowledge of the laser spot position 26 , superfluous and therefore alignment becomes a fast, simple and reliable procedure. A broad illumination beam also minimizes the relative magnitude of perturbations caused by particles crossing the illumination beam and eliminates temperature gradients in the cantilever 25,27,28 . The optical diffractive readout does not necessarily rely on having a reflective surface and therefore the choice of surface coatings is widened. Beside the reflection component, cantilevers featuring slit arrays allow high signal-to-noise levels in transmission mode, and more interestingly, the detector can be located off-axis around high order Bragg peaks, offering a much broader set of geometrical configurations for the detector. The relative change in size of the diffraction pattern is of particular interest because it is invariant to lateral translation and rotation and independent of the shift caused by changes in tilt, making the measurement intrinsically robust to small perturbations in the detector and cantilever positions and orientations.
We have provided an exact analytical model for parabolic bending in reflection mode and an approximate analytical solution for transmission mode when x ≪ z. We have also defined a figure of merit (FOM) that allows an effective implementation of the detection technique independently of these analytical considerations or other modelling.
The high order Bragg fine structure resembles that exploited for oversampled X-ray crystallography 29 . We have a visible light analogue of the X-ray experiments and here the information from the phase difference created by the cantilever curvature is contained in the details of the intensity between Bragg peaks.
Measuring the details of a diffraction pattern requires a more complex detection device such as a CCD or CMOS sensor array, as opposed to a simpler split photodiode. This increase in complexity is justified by the increased amount of information available, as tilt and bending are simply decoupled, in significant contrast to OBDT where there is no shape information. CCD and CMOS sensors also feature reduced bandwidth, but this is not a limitation for probing systems with relevant time scales much longer than the CCD frame acquisition period, such as the ones shown here. It is also worth noting that the ubiquity of digital imaging today, especially as compared to when OBDT was developed in the early 1990s, makes the use of position-sensitive optical detection a very competitive option for modern low-cost instrumentation.
The presented far-field technique distinguishes itself from the related NANOBE 24 in several ways: it does not demand a lens to maintain the near-field condition; it allows both reflection and transmission modes for patterned and un-patterned cantilevers; it exploits the high-order Bragg fine structure, similarly to over-sampled X-ray crystallography; and, by working in the far field rather than the near field, it permits off-axis detection. In addition, it can operate in modes either sensitive to single-cantilever deflection or to differential displacement, depending on the illumination profiles. In the current work we have also demonstrated an independent, model-free FOM measurable that perfectly correlates with OBDT. At the far field, if more than one cantilever is illuminated at a time, the observed diffraction pattern will be sensitive to the differential displacement, while in the near field there is negligible overlap between information from nearby cantilevers.
In summary, we have analysed mathematically the popular optical beam deflection technique (OBDT) for measuring cantilever deflection and found that the conditions for maximum gain, linearity and symmetry require illumination spot sizes that are heavily constrained by the geometry of the detector and the cantilever, the desired dynamic range and gain, and the divergence of the illuminating beam. Ignoring such constraints can cause detection errors in excess of 10%; acknowledging them provides robust instrumentation design guidelines not previously available in the literature. These considerations are likely to be of concern mainly for cantilevers that undergo significant changes in curvature, such as for biosensors and in single-molecule force spectroscopy.
We propose a diffraction readout method which decouples the cantilever tilt and curvature as independent observables and does not require precise alignment. The excellent correlation observed (Fig. 5g) between OBDT and our proposed diffraction method demonstrates similar performance under ideal conditions. The advantage of replacing OBDT with the more robust diffraction method comes not from increased performance under ideal conditions but from the special features of the technique, i.e. from its intrinsic resilience to various artefacts, mainly misalignments of the optical components, and from its ability to report shape details, decoupling tilt from curvature. While specialized instrumentation operated by trained scientists under ideal conditions, such as imaging AFM, may not benefit directly from this technique, the combination of unique attributes of the diffraction technique makes it especially suitable for bringing cantilever technology to the consumer market for bio-sensing applications and related fields. We have demonstrated the fundamental principles and practicality of our approach for a clinically relevant application. The approach taken could enable not only more robust pharmacological research instruments, as described here, but even portable medical diagnostic tools, featuring high performance without specialist operators.
Methods
Readout. We used a cantilever array chip (IBM) where each cantilever was 500 μm long, 100 μm wide and 0.9 μm thick, and coated with a layer of 2 nm titanium followed by 20 nm of gold. The array chip was mounted in an aluminium flow cell with sapphire windows on both sides and a thermoelectric Peltier element and thermocouple for temperature control. A broad laser beam (HeNe 632.8 nm, 5.0 mW, HRR050, Thorlabs) illuminated the surface of a single cantilever to test the diffractive readout method. Simultaneously, a narrowly focused laser beam was used to measure the cantilever bending using OBDT as a control. Both reflected beams were projected onto CCD sensors (ORCA-AG from Hamamatsu, pixel size 6.45 μm × 6.45 μm, and FireWire 400 Color Industrial Camera DFK 31AF03 with Sony ICX204AK sensor, pixel size 4.65 μm × 4.65 μm) mounted on calibrated goniometers (Rotation Stage RV160CCHL and xyz-stage VP-25X from Newport) to allow recording the intensity at different angles. The CCD sensors were at a distance of approximately 100 mm for reflection mode and 250 mm for transmission mode. Our raw data consist of the diffraction patterns generated by the cantilevers, captured as 12-bit TIFF images. Exposure times on the order of milliseconds were adjusted to avoid saturation and maximize dynamic range. The expansion of the pattern by approximately 12.5% observed in Fig. 4c corresponds to a change in curvature of δ = . | 5,797.4 | 2015-05-01T00:00:00.000 | [
"Physics"
] |
Beyond market orientation : An operationalisation of stakeholder orientation in higher education
This paper aims to develop a set of items to operationalize stakeholder orientation for higher education institutions. Stakeholder orientation is viewed as a more relevant framework for understanding and managing the external pressures exerted on universities. We hence break with the market orientation framework suggested in previous literature. The paper is based on a literature review on stakeholder orientation and market orientation. Its main outcome is an addition to the scattered literature on stakeholders in higher education, which can be seen as a point of departure for the adoption of stakeholder orientation in this sector. The originality of this contribution rests on three points: (1) while a few authors have used the market orientation framework to study changes in higher education, this paper relies on stakeholder theory to show that stakeholder orientation is more relevant for this sector because it encompasses the market orientation dimensions and fits the peculiarities of the sector. (2) Based on an extensive literature review, the paper takes first steps in suggesting items for the dimensions of stakeholder orientation in higher education. (3) Going beyond competition, it identifies "collaboration" as an important dimension that was missing in previous strategic analyses of the higher education sector.
In higher education, which is the focus of the present paper, two trends can be pinpointed. The first merely suggests that market orientation is necessary if institutions are to face their changing environment successfully. Braun and Merrien (1999) hold, for example, that "…market orientation is one of the ways the governance of higher education is to evolve" (Figure 1). De Jonghe and Vloeberghs (2001) also suggest that "A market orientation is supposed to take place in universities, but this does not always happen in the optimal way." According to Davies (2001), the introduction of quality systems that recognize customer orientation and market orientation is an important step towards sustaining entrepreneurial endeavour in higher education. Haug (2001) adds that competition between national institutions and trans-national suppliers of education and training, coupled with a greater freedom of choice among institutions, may affect institutional strategy. He contends that "institutions which recognise new demands and adapt their supply will be more likely to develop and overcome challenges." The second trend tends to straightforwardly transpose, in empirical studies on higher education, Kohli and Jaworski's and/or Narver and Slater's models of market orientation (Caruana et al., 1998a, 1998b; Siu and Wilson, 1998; Wasmer and Bruner, 1999; Flavian and Lozano, 2006; Webster et al., 2006; Hemsley-Brown and Oplatka, 2010). From then on, it is clear that market orientation is either implicitly or explicitly seen as a potential managerial solution to the changes undergone by higher education institutions.
In a recent study published in the International Journal of Quality and Service Sciences, Bugandwa-Mungu-Akonkwa (2009) highlights the way market orientation rhetoric is emerging as a new management paradigm in higher education, and provides a relevant critique of the way market orientation is being introduced into the sector. The point of his work is not to dismiss the relevance of the above strategy for higher education. Rather, while criticizing its theoretical transpositions in such a particular sector, he contends that market orientation is to be implemented in higher education to face the changing environment.
The purpose of the present paper is to qualify the above position, suggesting that stakeholder orientation, which includes the market orientation concept, is more relevant and inclusive as a strategic orientation for the higher education setting. The main aim is to derive from diverse literature items that are likely to operationalize the stakeholder orientation of higher education institutions. This study discusses previous literature on stakeholder theory and the way it leads to stakeholder orientation. Following this discussion, other dimensions of the stakeholder orientation concept, namely competitor orientation, collaborative orientation, inter-functional coordination, and responsiveness, are presented. Then, a scale is proposed, containing dimensions and their related items which attempt to capture the complexity of marketing in higher education from a stakeholder perspective. The next section summarizes the way changes in the higher education environment led to changes in universities' management and how these changes are theorized in the managerial literature.
FROM THE CONTEXT OF HIGHER EDUCATION TO MARKET ORIENTATION TRANSPOSITIONS
The changing context of higher education and its confrontation with market forces are exerting intense pressures on the management of these institutions. New public management can help in understanding the link between the higher education context and market orientation. This theory allows us to delineate the changes undergone by institutions, and the ways the latter could adapt to these changes. Drawing on a broad managerial literature on higher education, Figure 1 sums up the institutional adaptations and directions that institutions are likely to follow. Figure 1 also sums up the way the whole sector of higher education is shaken by various external changes all over the world. These include falling public support, increases in academic fees (students'/parents' contributions), the need to diversify funding (which is consistent with the resource dependence theory of Pfeffer and Salancik (1978)) while competing with other institutions for the same sources, and the rise of accountability requirements placed on universities by multiple stakeholders.
Summing up insights from the different definitions, market orientation can be defined as a culture and a set of behaviours or activities oriented towards current and latent customers' needs, and towards the analysis and understanding of both competitors and the macro-environment, in order to adapt the organization's supply to customers' requirements and to the external environment and hence improve organizational performance.
As mentioned by Heiens (2000), the market orientation concept may encompass several different approaches to the strategic alignment of the organization with the external environment.
The popularity of this strategy has been justified by its positive impact on organizational performance (Slater and Narver, 1994; Jaworski and Kohli, 1993; Gotteland and Haon, 2010; Mahmoud, 2011). However, although Slater and Narver found no main effect for customer versus competitor focus on market performance, Heiens (2000) notes that they do recognize that trade-offs between customer and competitor monitoring must necessarily be made because businesses have limited resources. This author's argument is of interest for the present discussion as it stresses the fact that market orientation focuses only on customers and competitors, as illustrated in Table 1.
Whatever the strategic approach adopted, it is clear from Table 1 that market orientation focuses only on competitors and customers (Ferrell et al., 2010). In the marketing of higher education institutions, managers tend to be rather "customer preoccupied", which justifies the overutilization of the customer orientation concept in several studies analyzing the market orientation strategy (Siu and Wilson, 1998; Caruana et al., 1998a, 1998b; Wasmer and Bruner, 1999; Smith, 2003). Yet this concept has raised controversies in the marketing literature of higher education (Driscoll and Wicks, 1995; Franz, 1998; Lowe, 2004). The debate can be synthesized as being both ideological (is the "customer" concept semantically the right one to designate students?) and operational (who are the customers in higher education, and how can their multiple and necessarily conflicting needs be satisfied?). This problem led a number of authors to rightly reject the dimension "customer orientation" in the conceptualization of market orientation, preferring that of "stakeholder orientation" (Sargeant et al., 2002; Gainer and Padanyi, 2002, 2005; Greenley and Foxall, 1998). While supporting the view that customer orientation is irrelevant for higher education, this research does not, however, support the aforementioned authors' position, since they treat stakeholder orientation as being part of market orientation. Ferrell et al. (2010) and Duesing (2009) have clearly demonstrated that stakeholder orientation is more inclusive than market orientation. So, although market orientation has been extensively transposed to non-profit organizations (Padanyi and Gainer, 2004; Hashim and Abu-Bakar, 2011) and particularly to higher education (Flavian et al., 2013; Hemsley-Brown and Oplatka, 2010; Wasmer and Bruner, 1999; Flavian and Lozano, 2006; Webster et al., 2006), we contend that "stakeholder orientation" would be more insightful as a strategic orientation to better manage external pressures in higher education. Matsuno and Mentzer (2000) eventually realized this when they suggested a definition of market orientation that includes relevant individual market participants (competitors, suppliers, and buyers) and influencing factors (social, cultural, regulatory, and macroeconomic factors), which can be seen as an important step towards stakeholder orientation. Their definition meets what Lambin (2000) and Lambin et al. (2005) call the "macro-environment". So, in their definition, market orientation can be effectively implemented only through a stakeholder perspective. Although all sectors are concerned by this broader perspective, higher education seems to have more to gain in adopting it, because it is more directly influenced by external decisions from different stakeholders (students/parents, enterprises, policy-makers, etc.). This is what is discussed in the following section.
Stakeholder theory and higher education
In this section, we discuss the increasing role of stakeholders in higher education, the necessity to identify these stakeholders and to develop a strategic orientation rooted in stakeholder theory in order to better manage them.
The issue of stakeholders in higher education has become so acute that authorities in academia as well as researchers are devoting growing attention to it. This trend is probably rooted in the fact that institutions are required to demonstrate effectiveness and efficiency through service provision, and to compete with private providers in the endeavour to diversify their sources of funding (Gürüz, 2003; Slaughter and Leslie, 1999); a trend which is likely to continue. This requires institutions to be open to their external environment and to create structures aimed at managing complex relationships with their stakeholders.
Several definitions of stakeholders have been provided (Freeman, 1984; Clarkson, 1995; Kotler and Fox, 1985). Mitchell et al. (1997) list 25 publications with different definitions of stakeholders. They range from broad definitions, which define stakeholders as any group or individual who affects or is affected by the achievement of an organization's objectives, to specific definitions such as "stakeholders are those that bear some form of risk as a result of having invested some form of capital, human, financial, or something of value, in a firm" (Duesing, 2009). Duesing (2009) provides an interesting historical review of stakeholder theory. According to him, the key elements of this theory are: (i) balancing the conflicting claims of various stakeholders in the firm; (ii) the social responsibility of the firm, beyond profit (including moral and ethical dimensions); (iii) configuring the firm's objectives so as to give each group a measure of satisfaction; and (iv) links with organizational performance. All of these peculiarities apply to higher education, supporting the necessity of the stakeholder framework in any endeavour to develop strategic orientations in this context.
The stakeholder theory (Freeman, 1984; Clarkson, 1995; Donaldson and Preston, 1995) allows researchers to broaden their focus to a wider set of relationships among multiple stakeholders rather than depending only on an economic relationship. One of the primary underpinnings of stakeholder theory suggests that firms are responsible to an array of stakeholders and that they should direct their efforts towards this array of stakeholders in a manner that best fits the organization (Duesing, 2009). For that purpose, organizations should identify who their stakeholders are. Indeed, in order to discuss the operationalization of stakeholder orientation in higher education, it is of great importance to know who the stakeholders are in that sector. Although the issue has become clearer in the commercial sector (Duesing, 2009), the problem is rather difficult in higher education. Several studies have looked into it without reaching a consensus (Kotler and Fox, 1985; Conway et al., 1994; Newby, 1999; Raanan, 2003). Some of them have wrongly treated the student as the unique customer of higher education (Caruana et al., 1998; Keneley and Hellier, 2002). Kotler and Fox (1985) have suggested a more complete list, as shown in Figure 2. This diagram defines the area of possible exchanges between higher education institutions and their stakeholders. In stakeholder theory, a distinction is made between primary stakeholders, whose contributions are required for the survival of the firm (they invest human and financial capital in the organisation), and secondary stakeholders, upon whom the firm is not highly dependent (Freeman, 1984; Clarkson, 1995; Berman et al., 1999). The issue of who the primary stakeholders are is important because no strategy can be developed without targeting one or more specific groups. Different authors have suggested who primary and secondary stakeholders are in the commercial sector (Post et al., 1996; Greenley and Foxall, 1998; Agle et al., 1999; Hillman and Keim, 2001). Duesing (2009), for example, has identified customers, employees, investors and competitors as the primary stakeholders of small commercial enterprises. Although much has been done in the commercial literature on who stakeholders are, this is still not the case in higher education. In this sector, indeed, different researchers identify different stakeholders, following different methodologies (Kotler and Fox, 1985; Raanan, 2004; Mainardes et al., 2010; Chapleo and Simms, n.d.). Kotler and Fox's figure shows how diversified stakeholders can be in higher education. Based on scattered information from Mainardes et al. (2010), Chapleo and Simms (n.d.), Raanan (2004), and Duesing (2009), the following stakeholder groups play such a large role in higher education that any strategy should keep them in mind: students, employees (academic and administrative), investors/enterprises, and policy-makers. This study recognizes that other groups could be added to fit each particular institution, stakeholder importance being contingent on the context. As for those retained in this study, the stakeholder orientation towards each of the aforementioned groups is discussed in the next section.
Towards a stakeholder orientation in Higher Education
Very little research has been done that examines how
higher education relates to stakeholders in a marketing perspective (Kotler and Fox, 1985; Raanan, 2003, 2004, 2005). Management theorists have often found that paying attention to stakeholders, be they primary or secondary in the sense of Preston and Post (1975) and Mitchell et al. (1997), is not only a highly appealing idea, but also good for business (Jones, 1995). In fact, according to normative stakeholder theory, firms should be responsible to the varied interests of all stakeholders rather than merely to the economic wellbeing of stockholders alone (Jawahar et al., 2001). The management of the range of stakeholders' competing demands is one of the primary functions of management and inspires the strategic orientation towards stakeholders. Ferrell et al. (2010) noted that the market orientation construct focuses on customers and competitors, and only indirectly on other stakeholder groups. They suggested replacing this concept with a more encompassing one, that of "stakeholder orientation". According to these authors, stakeholder orientation is "the organizational culture and behaviours that induce organizational members to be consciously aware of and proactively act on a variety of stakeholder issues". Importantly, stakeholder orientation stimulates a general concern for a variety of actors rather than focusing on any specific group. In this understanding, stakeholder orientation does not designate any stakeholder group as more important than another, and the prioritization of stakeholders may change depending on the issue, as it is contingency based and is a function of contextual aspects surrounding the organization. This position clearly justifies the need to discuss how a specific context such as higher education can be stakeholder orientated. Stakeholder orientation surpasses market orientation in that not only is "customer orientation" replaced by the inclusion of diverse constituencies, but other dimensions are also added to obtain a better operationalization. As a starting point, Ferrell et al. (2010) state that stakeholder orientation includes customers, community, employees, suppliers, investors, and sustainability. Considering the overlapping part of their figure, and based on Duesing (2009), who identifies employees, customers, investors and competitors as the primary stakeholders, it becomes clearer that stakeholder orientation includes all aspects of market orientation. As stakeholders will often have conflicting needs (for example in higher education, the needs of the labour market, or those of parents, might be very different from those of students), it is highly useful to develop a specific stakeholder orientation for higher education institutions. So far, we have suggested students, employees (faculty and administrators), investors and policy-makers as being relevant for strategy development in the higher education setting. Hereunder, we discuss the orientation towards each of them.
Students' orientation
Students are the most cited stakeholders of higher education institutions (Mainardes et al., 2010). Without students, there is no university.
Table 1 (Heiens, 2000; see this author for a discussion of the four cases):
- High customer focus, high competitor focus: Strategically orientated (both orientations are stressed by the organization and resources are equally allocated).
- Low customer focus, high competitor focus: Marketing Warriors (the organization is highly focused on competitors; it is competitor oriented).
- High customer focus, low competitor focus: Customer preoccupied (effort is focused more on customers and their satisfaction).
- Low customer focus, low competitor focus: Strategically Inept (neither orientation is adopted, which can be dangerous for an organisation).
In the context where public funding is steadily declining, students ensure the university's survival. Students play different roles in universities. They are learners, co-producers of knowledge, products, and clients in the strict sense when it comes to support services (library, restaurants, Internet connections, car parking, etc.) (Sirvanci, 1996, 2004; Lowe, 2007). Students' orientation is defined here as the degree to which universities try to take into account the current and latent needs of their students (current and potential, and in a more extensive sense, their parents). Consistent with the Malcolm Baldrige criteria for education and based on the roles stated above, this orientation covers both the learning process and students' satisfaction. Indeed, the Malcolm Baldrige National Quality Award Education Pilot Criteria distinguishes students from "other stakeholders".
Furthermore, by replacing the concepts "customer focus" and "satisfaction" with "student focus" and "student and stakeholder satisfaction" respectively, the MBNQA recognizes not only the diversity of stakeholders, but also the importance of students among them. The items suggested in this paper reflect this by trying to encompass aspects of students' learning as well as students' satisfaction with support services. Based on the literature on education, these two aspects constitute the two dimensions of students' satisfaction (Dweck and Elliot, 1983; Bowden and Marton, 1998; Halbesleben et al., 2003; Sirvanci, 2004; Lammers et al., 2005; Eagle and Brennan, 2007). Whilst satisfaction with learning might be difficult to assess, measures of satisfaction with the use of other educational facilities can be very useful for higher education institutions. So, the items suggested for this dimension include both aspects of students' orientation (the learning process and students' satisfaction).
Employees (academic and non-academic)
Whatever the type of organization considered, employees are seen as the organization's first customers. For the case of higher education, see also Raanan (2005) and Bakomeza (2011). Employee satisfaction is defined as the company's intention to address the interests of its employees and satisfy their employment needs (Yau et al., 2007). In all organizations, and especially service organizations, employees are the cornerstone of the success of any strategy. To improve the quality of education, universities must encourage academic staff's interactions with students. Students' satisfaction can also depend on the way non-academic staff organize their services in the different departments. When employees are satisfied with their jobs, they tend to work harder and perform more effectively for their employers (Berman et al., 1999). From the employer's view, businesses that pay strategic attention to employees will prioritize job security, workplace amenities, and other forms of benefits to satisfy their employees. In an internal marketing perspective, many researchers have found that an orientation toward the interests of employees contributes to the success of the organization (Berry, 1984; Berry and Parasuraman, 1991; Greenley and Foxall, 1998; Appleyard and Brown, 2001; Bou and Bertran, 2005; Bakomeza, 2011).
Investors' and enterprises' orientation: Investor orientation is defined as the strategic orientation directed toward those with both an equity and a risk stake in the organization. In higher education, investors may include governments, enterprises, alumni, and any person whose resources are engaged in the institution. Enterprises in particular have essentially three roles in their relation with higher education: (1) research contracts, (2) lifelong learning, and (3) providing spaces for students' training and for professional subjects in the institutions.
Regarding research contracts, universities usually promote and develop research and technology transfer in order to reinforce their own worldwide influence, to increase their financial resources, to contribute to economic development, and to create new ways of learning and knowledge development in the interest of students. In a number of universities, the proportion of revenue resulting from these kinds of contracts tends to increase and even to compensate for state funding. Further, universities offer a large range of continuing training for professionals, and these courses are often funded by enterprises. Amongst the training supplied, we can identify short courses for enterprises and administrations, long courses with the possibility of valuing the professional experience of mature students, distance learning, etc. Enterprises often offer training opportunities to universities' students, especially in business schools, whilst university departments organize forums to allow meetings between enterprises and students. These forums allow students to be connected to professionals and enterprises, and allow enterprises to be informed about the professional modules organized in universities. Hence, relations between higher education institutions and enterprises go beyond merely financial ones, which makes enterprises undeniably important stakeholders.
Policy-makers
Universities should also develop strategies toward policy-makers. This stakeholder group is very important in all aspects of higher education life. They are responsible for the rules (texts) governing higher education; they fund the sector (teaching and learning, as well as research), deciding on the amount and the allocation mechanism, all of which may have a serious impact on an institution's survival and on the whole sector, according to resource dependence theory. Policy-makers include governments, quality assessment agencies, and other national and international agencies. The influence of this stakeholder group is present in public as well as private institutions but is contingent on the types of partnerships and interplay between the two actors. To generate items which might be used to operationalize stakeholder orientation as conceptualized hitherto, I resorted to three sets of research about stakeholders. The first categorizes stakeholders into different types and suggests possible strategies for their management (Savage et al., 1991; Donaldson and Preston, 1995). The second is more normative in suggesting potential items to operationalize stakeholder orientation (Greenley and Foxall, 1998). Finally, this study draws on Duesing (2009), who developed a first systematically tested scale for small enterprises based on Yau et al. (2007). Summing up these works, it roughly appears that the key behaviours of a stakeholder-centred approach include, non-exhaustively, the following: researching needs; commitment to students and other stakeholders; providing services of value; focusing on student and other stakeholder satisfaction; measuring and reporting satisfaction; and encouraging stakeholders' comments and complaints. However, as suggested by Ferrell et al. (2010), stakeholder orientation also includes "competitor orientation".
Competitor orientation
In a non-profit organization, competitors are groups, organizations or any other alternatives which attempt to attract the attention and loyalty of funders and beneficiaries (Kotler and Andreasen, 1996). In higher education, Meek and Wood (1997), Mok (2000), Thys-Clément (2001), and Bugandwa (2008, 2009) have explained how competition is emerging and becoming fierce, both at national and international levels. Competitor orientation can be defined as any activity aiming at understanding the strengths and weaknesses of the organization's main competitors (current and/or potential), and the way the organization reacts to these competitors' strategies and actions (Narver and Slater, 2000; Liao et al., 2000). Although non-profit and public organizations such as universities generally balk at seeing similar organizations as competitors (Kotler and Andreasen, 1996), they are aware that competition is a major step of the marketing implementation process (Kotler and Fox, 1985; Wood et al., 2000). The items suggested for measuring this orientation draw on Narver and Slater (1990), Lambin (2000), Slater and Narver (1994, 2000) and Duesing (2009). Vazquez et al. (2002) have shown that the attitude of non-profit organizations vis-à-vis competition, and the nature of the latter, vary according to whether the organizations act in the user's perspective or the backer's perspective. In the user's perspective, institutions providing the same public utility are to be considered not as mutual threats, but as partners and thereby a source of collaboration. So, institutions will engage in efforts to increase capacities to maintain a more effective provision of social benefits for all parties. Thereby, it is clear that beyond competition, collaboration is a dimension to be included in any conceptualization of market orientation for higher education institutions. Trim (2003), quoting Guzkowka and Kent (1999), defines collaboration as "a shared unity of purpose". Liao et al. (2000) define "collaborative orientation" as the extent to which an organization focuses its efforts on exploiting the whole potential of collaboration with other organizations, say the stakeholder groups, for both resource acquisition and the mutual provision of non-commercial goods and services. This paper takes advantage of the above contributions to include the "collaborative dimension" in the operationalization of stakeholder orientation in higher education.
Inter-functional coordination
Inter-functional coordination covers all activities (information transmission, processing and control) aiming to ensure that the different functions constitute a coherent whole that contributes to the organizational endeavour to improve its products and services. The very first publications about coordination go back to Lawrence and Lorsch (1967), Khandwalla (1972), Mintzberg (1979), and Ford et al. (1988), who argued that when environmental uncertainty and complexity grow, coordination or integration of the different organizational parts becomes very important. Inter-functional coordination implies the degree to which information on stakeholders and the macro-environment is shared across the whole organization. It also refers to the sense of common values and beliefs and their relation to reaching organizational objectives. This supposes that the process of creating value for stakeholders is not the matter of only one function, but rather of the whole organization (Porter, 1985; Kohli and Jaworski, 1990). Inter-functional coordination implies a clear understanding and fluid circulation of information about customers' expectations and organizational beliefs, and their communication through formal and informal means (meetings, brainstorming, etc.) to all organizational members (Shapiro, 1988; Cadogan and Diamantopoulos, 1995). It mainly purports to improve the way the organization responds to its market.
Responsiveness
Universities are increasingly required to be responsive to multiple societal demands (OECD, 2002). The responsiveness dimension, as defined by Kohli and Jaworski (1990), Jaworski and Kohli (1993) and Kohli et al. (1993), consists of organizations' actions to respond to their markets. It implies the development, adaptation and implementation of organizations' services, programs, systems and structures to meet their stakeholders' requirements. According to Kohli and Jaworski (1990), an adequate response to the "market" includes: selection of targets; supply of products/services for current markets; conception of new products/services for potential markets; and the production, distribution and promotion of products/services in a way conducive to positive feedback from customers. Applying this to higher education, Mora (2001), quoting Neave and Van Vught (1996), states that a higher level of stakeholder orientation is an excellent way to increase the institutional response to social demands. It is necessary to clarify that, more than any other organizations, higher education institutions should not limit their actions to responding to external pressures. Instead of remaining under such pressures and undergoing their consequences, they are encouraged to act proactively on their environment. This is especially important for higher education institutions, since students and other stakeholders can misunderstand their own needs or the institutions' actions. The approach called "driving the markets" (Jaworski et al., 2000), or the marketing of supply creation (Lambin, 2000), can help them find valuable solutions to unarticulated needs. These approaches suggest that organizations go beyond merely reacting to environmental changes, which could be dangerous for organizational survival and lead to "mission drift". In this spirit, higher education institutions should endeavour to modify stakeholders' behaviours (explaining to students the requirements of quality in higher education, helping them to internalize their role in the learning process, discussing different issues concerning higher education with policy-makers and enterprises, etc.).
CONCLUDING REMARK, RESEARCH LIMITS AND PERSPECTIVES FOR FUTURE RESEARCH
This paper has dealt with the way stakeholder orientation can be operationalized in higher education. The discussion began with a justification of the rationale for importing stakeholder orientation into this sector. Our research retains two main reasons for this trend. First, it has been clearly demonstrated that higher education is evolving in a kind of quasi-market environment characterized by competition to recruit more students and to attract more funds from policy-makers and other backers and investors such as enterprises, alumni, parents, etc. Second, on an epistemological level, reflections on how higher education institutions might react to this trend are limited to straightforward transpositions of market orientation models to higher education. In this paper, I have briefly reviewed the evolving context of higher education and discussed the directions that researchers have hitherto suggested as a means of adapting to the changes. I have challenged the trend to transpose market orientation models to this sector. The underlying assumption is that although the higher education sector is evolving towards market mechanisms, it keeps specific features that make it different from commercial organizations. Consistent with the new stream in the managerial literature, I contend that to face the social and economic transformations in their sector and their corollary in terms of external pressures, and to effectively manage the multiple and conflicting demands from their constituencies, higher education institutions need to develop a stakeholder orientation, transcending the reductive market orientation perspective. This paper contributes to that objective by discussing the main dimensions of stakeholder orientation in higher education. Some of the suggested dimensions substitute diverse constituencies (students, employees, investors, and policy-makers) for the traditional "customer orientation". The other dimensions are drawn from market orientation conceptualizations (competitor orientation, inter-functional coordination, and responsiveness). Finally, a new dimension, collaborative orientation, is added to reflect the fact that collaboration is an important activity in higher education (in teaching as well as in research). We therefore suggest a model of stakeholder orientation that might include 8 dimensions: students' orientation (12 items), employees' orientation (9), investors' and enterprises' orientation (8), policy-makers' orientation (7), competitor orientation (10), collaborative orientation (9), inter-functional coordination (15), and responsiveness (10). Hence, the whole model comprises 80 items which, in further research, should be refined through a factor-analytic procedure. The main contribution of this paper is that it has generated, from a multidisciplinary literature, a relevant tool to operationalize the stakeholder orientation concept without neglecting higher education's complexity. This is a prerequisite for any endeavour to measure the extent of stakeholder orientation in higher education institutions. However, and this is the main limitation, the suggested tool (scale) is still a crude one and needs to be refined through qualitative inquiries and a psychometric process before it can become a usable measure of the concept under consideration. This refinement is an important avenue for further research. Conducting such research will be a major contribution to the development and extension of marketing research in a stakeholder perspective.
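As a pointer for the factor-analytic refinement mentioned above, the following Python sketch shows one way such an analysis could be run. It is only illustrative: the survey file name, the column layout (one column per item), the varimax rotation and the 0.4 loading cut-off are assumptions made here, not choices prescribed by this paper.

```python
# Minimal sketch of the suggested factor-analytic refinement, assuming survey
# responses are available as a table with one column per item (the file name
# and column layout are hypothetical).
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

responses = pd.read_csv("stakeholder_orientation_survey.csv")  # hypothetical data file
X = StandardScaler().fit_transform(responses.values)

# Extract 8 factors, one per hypothesized dimension of stakeholder orientation.
fa = FactorAnalysis(n_components=8, rotation="varimax", random_state=0)
fa.fit(X)

loadings = pd.DataFrame(fa.components_.T, index=responses.columns)
# Items with weak loadings (< 0.4 in absolute value) on every factor are
# candidates for removal when refining the 80-item scale.
weak_items = loadings[(loadings.abs() < 0.4).all(axis=1)].index.tolist()
print(weak_items)
```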
Annex: Suggested items for the stakeholder orientation.
Students' orientation
1. The institution gathers information on the current and future needs of current and potential students.
2. The institution conducts activities aiming at attracting and retaining students.
3. Complaints and remarks from students are analyzed to find solutions.
4. Students' satisfaction with university facilities is assessed on a regular basis.
5. Perceived service quality is regularly assessed from the students' perspective.
6. Teaching methods stress dynamic students' learning.
7. Programmes are set with the aim of stimulating active learning and participation.
8. The institution uses information technology to improve teaching and students' learning.
9. Individual information is gathered from students about their needs, to improve their success.
10. The institution uses all available techniques and experiences to stimulate students' commitment.
11. The institution creates a social, participative and welcoming environment for students.
12. Teacher-student interactions are encouraged and stimulated.

Employees' orientation
1. Our institution has regular staff appraisals in which we discuss employees' needs.
2. Our institution tries to pay fair salaries to employees.
3. In our institution, we try to improve the quality of the work environment on a regular basis.
4. In our institution, we have regular staff meetings with employees.
5. As a manager, I try to find out the true feelings of my staff about their jobs.
6. We survey staff at least once each year to assess their attitudes to their work.
7. Employees are stated to be important in our institution's mission statement.
8. Employees' complaints are rapidly treated to find fair solutions.
9. As a manager, I formally recognize employees' achievements to encourage them.

Investors' and enterprises' orientation
1. The institution believes formal research is important to understand enterprises' demands.
2. The institution believes informal research is important to understand enterprises' demands.
3. The institution believes that managers' judgement is important to understand enterprises' demands.
4. The institution discusses the importance of the different constituencies engaged in different debates within the institution.
5. Enterprises are stated to be important in our institution's mission statement.
6. Strategies are planned to address the interests of enterprises.
7. Efforts are made to reduce institutional dependence on enterprises.
8. Enterprises are encouraged to participate in the decision-making process of the institution.
Table 1 .
Possible focuses in market orientation conceptualizations. | 7,336 | 2013-07-31T00:00:00.000 | [
"Education",
"Business",
"Economics"
] |
Incremental Coordination: Attention-Centric Speech Production in a Physically Situated Conversational Agent
Inspired by studies of human-human conversations, we present methods for incrementally coordinating speech production with listeners’ visual foci of attention. We introduce a model that considers the demands and availability of listeners’ attention at the onset and throughout the production of system utterances, and that incrementally coordinates speech synthesis with the listener’s gaze. We present an implementation and deployment of the model in a physically situated dialog system and discuss lessons learned.
Introduction
Participants in a conversation coordinate with one another on producing turns, and often co-produce language by using verbal and non-verbal signals, including gaze, gestures, prosody and grammatical structures. Among these signals, patterns of attention play an important role. Goodwin (1981) highlights a variety of coordination mechanisms that speakers use to achieve mutual orientation at the beginning of and throughout turns, such as pausing, adding phrasal breaks, lengthening spoken units, and even changing the structure of the sentence on the fly to secure the listener's attention. His work suggests that, beyond a simple errors-in-production view, "disfluencies" help to coordinate on turns, and generally facilitate co-production among speakers and listeners. Goodwin (1981) presents sample snippets of conversations recorded in the wild, annotated to show when the gaze of a listener turns to meet the gaze of the speaker (marked with *) and when mutual gaze is maintained (marked with an underline). In the examples reproduced below from Goodwin's work, pauses and repeats are used to align grammatical sentences with a listener's gaze:

Anyway, Uh:, We went *t-I went ta bed

Restarts can be used as a means of aligning the timing of a full grammatical utterance with the start of the process by which gaze is moving towards the speaker (process indicated by the broken underline), as in the following:

She-she's reaching the p-she's at the *point I'm

While most work to date in spoken dialog systems has focused on the acoustic channel, in physically situated multimodal systems an opportunity arises to use vision to take the participants' attention into account when coordinating on the production of system utterances. We investigate this direction and introduce a model that incrementally coordinates language production and speech synthesis with the listeners' foci of attention. The model centers on computing whether the listener's attention matches a set of attentional demands for the utterance at hand. When attentional demands are not met, the model triggers a sequence of linguistic devices in an attempt to recover the listener's attention and to coordinate the system's speech with it. We introduce and demonstrate the promise of incremental coordination of language production with attention in situated systems.
Following a brief review of related work, we describe the proposed approach in more detail in Section 3. In Section 4, we discuss lessons learned from an in-the-wild deployment of this approach in a directions-giving robot.
Related work
The critical role of gaze in coordinating turns in dialog is well known and has been previously studied (i.a., Duncan, 1972; Goodwin, 1981). Kendon (1967) found that speakers signal their wish to release the turn by gazing at the interlocutor. Vertegaal et al. (2003) found evidence that lack of eye contact decreases the efficiency of turn-taking in video conferencing.
Most previous work on incremental processing in dialog has focused on the acoustic channel, including efforts on recognizing, generating, and synthesizing language incrementally. For instance, Skantze and Hjalmarsson (2010) showed that an incremental generator using filled pauses and self-corrections achieved (in a wizard of Oz experiment) shorter response times and was perceived as more efficient than a non-incremental generator. Guhe and Schilder (2002) have also used incremental generation for self-corrections.
Situated and multiparty systems often incorporate attention and gaze in their models for turn taking and interaction planning (Traum and Rickel, 2002; Bohus and Horvitz, 2011). Sciutti et al. (2015) used gaze as an implicit signal for turn taking in a robotic teaching context. In an in-car navigation setting, incremental speech synthesis that accommodates the user's cognitive load was shown to improve user experience but not users' performance on tasks (Kousidis et al., 2015).
Model
Motivated by observations from human-human communication dynamics, we propose a model to coordinate speech production with the listeners' focus of attention in a physically situated dialog system. We believe that close coordination between language production and listeners' attention is important in creating more effective and natural interactions.
The proposed model subsumes three subcomponents. The first component defines attentional demands on each system output. For successful collaboration, certain utterances require the listener's focus of attention to be on the system or on task-relevant locations (e.g., the direction the robot is pointing towards), while other utterances do not carry high attentional demands. The second component is an inference model that tracks the listener's focus of attention, i.e., the attentional supply. The third component alters the system's speech production in an incremental manner to coordinate in stream with the listeners' attention. The component regulates production based on identifying when the attention supply does not match the demands.
In the following subsections, we discuss the model's components in more detail, and their implementation in the context of Directions Robot, a physically situated humanoid (Nao) robotic system that interacts with people and provides directions inside our building (Bohus, Saw and Horvitz, 2014). Figure 1 shows a sample dialog with the robot. The proposed coordination model can be adapted to other multimodal dialog systems with adjustments based on the task and the situational context.
Attentional demands
We consider two types of attentional demand. The first one, which we refer to as onset demand, encapsulates Goodwin's observation (1981) that participants in a conversation generally aim to achieve mutual orientation at the beginnings of turns. The model specifies that, at each system phrase onset, the listeners' attention must be on the system. In our implementation, we require that at least one of the addressees of the current utterance is attending. The system infers attention under uncertainty from visual scene analysis, and we express the attentional demand by means of a probability threshold. In the current implementation, this threshold was set to 0.6: the onset attentional demand is satisfied if the probability that at least one of the addressees is attending to the robot is greater than 0.6 when the system is launching a phrase.
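A minimal sketch of the onset-demand test described above is given below. The 0.6 threshold comes from the text; treating the addressees' attention estimates as independent when combining them is an assumption made here purely for illustration.

```python
ONSET_THRESHOLD = 0.6  # value reported in the text

def onset_demand_met(p_attending, threshold=ONSET_THRESHOLD):
    """p_attending: per-addressee probabilities that attention is on the robot.

    The policy requires P(at least one addressee attends) > threshold; treating
    addressees as independent is an assumption made here for illustration."""
    p_none = 1.0
    for p in p_attending:
        p_none *= (1.0 - p)
    return (1.0 - p_none) > threshold

print(onset_demand_met([0.45, 0.35]))  # True: combined probability is about 0.64
```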
In addition, a second type of attentional demand, denoted production demand, is defined at the level of the dialog act by the system developer.

Figure 1. Sample dialog with the robot:
1 S: Hi there!
2 S: Do you need help finding something?
3 U: Yes
4 S: Where are you trying to get to?
5 U: Room 4505
6 S: To get to room 4505, • walk along that hallway, • turn left and keep on walking down the hallway. • Room 4505 will be the 1st room on your right.
7 S: By the way, • would you guys mind swiping your badge on the reader below so I know who I've been interacting with?

During certain system acts, for instance ones that carry important content or that are deemed unexpected by the listeners, it is important for addressees to attend to the system or to certain task-relevant objects. The production demand defines where the listeners' attention is expected during the production of the system's utterances, i.e., it defines a set of permitted targets. For instance, when the robot is giving directions in turn 6 from Figure 1, the production demand is set to Robot or PointingDirection, the locations that the robot points to via its gestures as it renders directions. Similarly, when the robot asks users to swipe their badge in turn 7, the production demand is set to Robot and Badge, indicating that these are the appropriate targets of attention throughout that particular utterance. In contrast, other dialog acts, such as the robot asking "Where are you trying to get to?" in turn 4, are naturally expected at that point in the conversation, do not impose high cognitive demands, and can be conveyed without requiring attention on the robot throughout the utterance.
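For illustration, per-dialog-act demands could be declared with a small data structure such as the following Python sketch. The target names (Robot, PointingDirection, Badge) come from the text, while the dialog-act labels are hypothetical stand-ins for whatever identifiers the host dialog system uses.

```python
from dataclasses import dataclass, field
from typing import FrozenSet

@dataclass(frozen=True)
class AttentionalDemand:
    onset_threshold: float  # required P(attention on robot) at phrase onset
    permitted_targets: FrozenSet[str] = field(default_factory=frozenset)  # production-demand targets

# Hypothetical dialog-act labels; the demand values mirror the examples in the text.
DEMANDS = {
    "GiveDirections": AttentionalDemand(0.6, frozenset({"Robot", "PointingDirection"})),
    "RequestBadgeSwipe": AttentionalDemand(0.6, frozenset({"Robot", "Badge"})),
    "AskDestination": AttentionalDemand(0.6),  # expected question: no production demand
}
```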
Attention supply
The Directions Robot is deployed in front of a bank of elevators. In this environment, the attention of engaged participants can shift between a variety of targets, including the robot, other task-related attractors (e.g., the direction that the robot is pointing, the sign next to the robot, the user's badge, and the badge reader), personal devices such as smartphones and notepads, and other people in the environment. To simplify, in the implementation we describe here, we model attention supply only over the three targets already mentioned above, Robot, PointingDirection, and Badge, and we cluster all other attentional foci as Elsewhere.
The robot tracks the (geometric) direction of visual attention for each participant in the scene via a model constructed using supervised machine learning methods. The model leverages features from visual subsystems (e.g., face detection and tracking, head-pose detection, etc.) and infers the probability that a participant's visual attention is directed to the robot, or to the left, right, up, down, or back of the scene. These probabilities are then combined via a heuristic rule that takes into account the dialog state and the robot's pointing to infer whether the participant's attention is on Robot, PointingDirection, Badge, or Elsewhere.
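The sketch below illustrates the kind of fusion step described above, mapping per-direction gaze probabilities to one of the four attention targets. The specific rule shown is only an assumed stand-in for the system's heuristic, which additionally conditions on the dialog state and on the robot's pointing gestures.

```python
def infer_attention_target(direction_probs, robot_pointing_side=None, badge_prompted=False):
    """direction_probs: P(gaze toward 'robot', 'left', 'right', 'up', 'down', 'back').

    This mapping is an illustrative assumption, not the system's actual rule."""
    direction, p = max(direction_probs.items(), key=lambda kv: kv[1])
    if direction == "robot":
        return "Robot", p
    if robot_pointing_side is not None and direction == robot_pointing_side:
        return "PointingDirection", p
    if badge_prompted and direction == "down":
        return "Badge", p
    return "Elsewhere", p

print(infer_attention_target({"robot": 0.2, "left": 0.6, "right": 0.1,
                              "up": 0.03, "down": 0.05, "back": 0.02},
                             robot_pointing_side="left"))
```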
Coordinative policy
The third component in the proposed model, the coordinative policy, controls the speech synthesis engine and deploys various mechanisms, such as pauses, restarts, and interjections, to coordinate the system's speech with the listeners' attention. Figure 2 shows a diagram of the currently implemented coordinative policy for onset attentional demand. If the listeners' attention does not meet the attentional demand at the beginning of a phrase, the system performs a sequence of actions, starting with a wait (pause), followed by an attention-drawing interjection such as "Excuse me!", followed by another wait action, followed by launching the phrase. If the onset attentional demand is still not satisfied, the phrase is interrupted after 2 words, then another wait action is taken, followed finally by launching the entire phrase. The wait actions are chosen with a random duration between 1.5 and 2.5 seconds. The interjection is skipped if it was already produced once in this utterance, or if the preceding phrase or the remainder of the utterance contains only one word. As soon as the attention supply matches the onset demand, the system launches the phrase. If the demand is met during the interjection, the interjection will still be completed. In addition, the policy will not switch from a wait action to a verbal action if the system detects that the user is likely speaking.
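A condensed, synchronous sketch of this onset-coordination sequence is shown below. The wait durations and the interruption after two words follow the description above; the helper callables (demand_met, speak, user_speaking) are placeholders for components of the host dialog system, and several details (e.g., multiparty turn-taking checks) are simplified.

```python
import random
import time

def coordinate_phrase_onset(phrase, demand_met, speak, user_speaking,
                            interjection_used=False, prev_phrase_len=None):
    """Onset-policy sketch. `demand_met`, `speak` and `user_speaking` are
    placeholders for callables provided by the host dialog system."""

    def wait():
        # Pause for a random 1.5-2.5 s, returning early if attention returns.
        deadline = time.time() + random.uniform(1.5, 2.5)
        while time.time() < deadline:
            if demand_met():
                return True
            time.sleep(0.05)
        return False

    # Launch immediately if attention is already (or becomes) available.
    if demand_met() or wait():
        speak(phrase)
        return

    words = phrase.split()
    skip = interjection_used or len(words) == 1 or prev_phrase_len == 1
    if not skip and not user_speaking():
        speak("Excuse me!")          # attention-drawing interjection
    if not wait():
        speak(" ".join(words[:2]))   # launch the phrase, interrupt after 2 words
        wait()                       # one more pause to re-acquire attention
    speak(phrase)                    # finally produce the entire phrase
```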
Figure 3. Actions taken to coordinate production with attentional demands.

We set both onset and production attentional demands on a per-dialog-act basis. The surface realization of a single dialog act can however involve multiple phrases, defined here as continuous speech units separated by a pause longer than 250 ms, as signaled by runtime events generated by the speech synthesis engine (• is used to demark phrases in the example from Figure 1). The coordinative policy uses the attentional demand
In addition to reasoning about onset attention, the proposed model also assesses if production demand is met at the end of phrases, i.e. if the accumulated attention throughout the phrase matched the production demand specified for the dialog act. If this is not the case, a wait is triggered (to re-acquire onset attention), and then the phrase is repeated. If the onset demand is met at any point during the wait, the system immediately repeats the phrase. The variability of the wait durations, coupled with variability in the attention estimates and the times when the specified onset or production attentional demand is met, leads to a variety of production behaviors in the robot.
Deployment and lessons learned
We implemented the model described above in the Directions Robot system and deployed it on three robots situated in front of the bank of elevators on floors 2, 3, and 4 of a four-story building. Appendix A contains an annotated demonstrative trace of the system's behaviors. Additional videos and snippets of interactions are available at: http://1drv.ms/1GQ1ori. While a comprehensive evaluation of the model is pending further improvements, we discuss below several lessons learned from observing natural interactions with the robots running the current implementation.
A first observation is that the usefulness and naturalness of the behaviors triggered by the robot hinges critically on the accuracy of the inferences about attention. When the model incorrectly concludes that the participants' attention is not on the robot (false-negative errors), the coordinative policy triggers unnecessary pauses, interjections and phrase repeats that can be disruptive and unnatural. The attention inference challenge includes the need to recognize both the participants' visual focus of attention (which in itself is a difficult task in the wild) and cognitive attention as being on task. Cognitive attention does not overlap with visual attention all the time. For example, at times participants would shift their visual attention away from the robot as they leaned in and cocked their ear to listen closely. Problems in inferring attention are compounded by lower-level vision and tracking problems.
Second, we believe that there is a need for better integration of the coordinative policy with existing models for language generation, gesture production, multiparty turn-taking and engagement. Beyond the number of words in a phrase, the current policy does not leverage information about the contents of phrases that are about to be generated. This sometimes leads to unnatural sequences, such as "Excuse me! By the way, would you mind […]" Another important question is how to automatically coordinate the robot's physical pointing gestures when repeating phrases or when phrases are interrupted. With respect to turn taking, problems detected in early experimentation led to an adjustment of the coordinative policy that we described earlier: the system does not move from a wait to a verbal action if it detects that the user is likely speaking. Beyond this simple rule, we believe that the floor dynamics in the turn-taking model need to take into account the system's discontinuous production, e.g., the fact that the pauses injected within utterances might be perceived by the participants as floor releases. Further tuning of the timings of the pauses, contingent on the dialog state and expectations about when attention might return, as well as a tighter integration with the engagement model, might be required. For instance, we observed cases where the robot's decision to pause to wait for a participant's attention to return from the direction that the robot was pointing (before continuing to the next phrase) was interpreted as the end of the utterance, and the participant walked away before session completion.
Third, we find that the definitions of attentional demands (both onset and production) need to be further refined (in some cases on a per-dialog-state basis) and modeled at a finer level of granularity, down to the phrase level. In an utterance like "By the way, would you mind swiping your badge?", the "By the way" phrase is in fact an attention attractor; it does not itself carry attentional demands and thus should be modeled separately.
Conclusion
We presented a model for incrementally coordinating language production with listeners' foci of attention in multimodal dialog systems. An initial implementation and in-the-wild deployment of the proposed model has highlighted a number of areas for improvement. While further investigation and refinements are needed, the interactions collected highlight the potential and promise of the proposed approach for creating more natural and more effective interactions in physically situated settings. | 3,691.8 | 2015-07-21T00:00:00.000 | [
"Computer Science"
] |
Study of 11Li and 10,11Be nuclei through elastic scattering and breakup reactions
The hybrid model of the microscopic optical potential (OP) is applied to calculate the 11Li+p, 10,11Be+p, and 10,11Be+12C elastic scattering cross sections at energies E < 100 MeV/nucleon. The OPs contain the folding-model real part (ReOP) with the direct and exchange terms included, while the imaginary part (ImOP) is derived within the high-energy approximation (HEA) theory. For the 11Li+p elastic scattering, the microscopic large-scale shell model (LSSM) density of 11Li is used, while the density distributions of 10,11Be nuclei obtained within the quantum Monte Carlo (QMC) model and the generator coordinate method (GCM) are utilized to calculate the microscopic OPs and cross sections of elastic scattering of these nuclei on protons and 12C. The depths of the real and imaginary parts of the OP are fitted to the elastic scattering data, being simultaneously adjusted to reproduce the true energy dependence of the corresponding volume integrals. Also, cluster models are adopted, in which 11Li consists of a 2n halo and the 9Li core having its own LSSM form of density, and 11Be consists of an n halo and the 10Be core. Within the latter, we give predictions for the longitudinal momentum distributions of 9Li fragments produced in the breakup of 11Li at 62 MeV/nucleon on a proton target. It is shown that our results for the diffraction and stripping reaction cross sections in 11Be scattering on 9Be, 93Nb, 181Ta, and 238U targets at 63 MeV/nucleon are in good agreement with the available experimental data.
Introduction
The experiments with intense secondary radioactive nuclear beams have made it possible to investigate the structure of light nuclei near the neutron and proton drip lines, as well as the mechanism of scattering of weakly bound nuclei. Special attention has been paid to the neutron-rich isotopes of helium (6,8He), lithium (11Li), beryllium (11,14Be), and others, in which several neutrons are situated in the far extended nuclear periphery and form a halo. A widely used way to study the structure of exotic nuclei is to analyze their elastic scattering on protons or nuclear targets at different energies.
A typical example is the neutron halo in the nucleus 11Li, revealed as a consequence of its very large interaction radius, deduced from the measured interaction cross sections of 11Li with various target nuclei [1][2][3]. The halo of the nucleus extends its matter distribution to a large radius. A hypothesis based on the early data [1] about the important role played by neutron pairing for the stability of nuclei near the drip line was suggested in Refs. [4,5]; in particular, the direct link of the matter radius to the weak 2n binding in 11Li is claimed to be attributed to its configuration as a 9Li core coupled to a di-neutron. The experiments that provide evidence for the existence of a halo in the 11Li and 11Be nuclei are related not only to measurements of the total reaction cross section for these projectiles but also to the momentum distributions of the 9Li (10Be) or neutron fragments following the breakup of 11Li and 11Be in collisions with different target nuclei. It was shown that the momentum distribution of the breakup fragments has a narrow peak, much narrower than that observed in the fragmentation of well-bound nuclei.
In this work (see also [6,7]), as well as in our previous works considering processes with exotic He isotopes [8][9][10], we use microscopically calculated OPs within the hybrid model [11,12]. In the latter, the ReOP is calculated by folding a nuclear density with the effective nucleon-nucleon (NN) potentials [13] and includes both direct and exchange parts. The ImOP is obtained within the HEA model [14,15]. There are only two or three fitting parameters in the hybrid model, which are related to the depths of the ReOP, ImOP and the spin-orbit (SO) part of the OP. For the 11Li+p elastic scattering we have used the realistic microscopic LSSM [16,17] density of 11Li, while the density distributions of 10,11Be nuclei obtained within the quantum Monte Carlo model [18] and the generator coordinate method [19] are used to calculate the microscopic OPs and cross sections of elastic scattering of these nuclei on protons and 12C. The main aim of our work is twofold. First, we calculate the differential cross sections of elastic 11Li and 10,11Be scattering on protons and nuclei at energies less than 100 MeV/nucleon, studying the possibility to describe the existing experimental data by calculating microscopically not only the ReOP but also the ImOP (instead of using a phenomenological one) within the HEA and using a minimal number of fitting parameters. Second, we estimate other characteristics of the reaction mechanism, such as the total reaction and breakup cross sections and the momentum distributions of the cluster fragments.
The microscopic OP, which contains the volume real (V_F) and imaginary (W) parts and the spin-orbit interaction (V_ls), is used for the calculations of the elastic scattering differential cross sections. We introduce a set of weighting coefficients N_R, N_I, N^ls_R and N^ls_I that are related to the depths of the corresponding parts of the OP and are obtained by a fitting procedure to the available experimental data. The OP has the form
U(r) = N_R V_F(r) + i N_I W(r) + 2λ²_π [N^ls_R V^ls_R (1/r)(df_R(r)/dr) + i N^ls_I W^ls_I (1/r)(df_I(r)/dr)] (l·s),   (1)
where 2λ²_π = 4 fm² with the squared pion Compton wavelength λ²_π = 2 fm². Let us denote the values of the ReOP and ImOP at r = 0 by V_R (≡ V_F(r = 0)) and W_I (≡ W(r = 0)). We note that the spin-orbit part of the OP contains real and imaginary terms with the parameters V^ls_R and W^ls_I related to V_R and W_I by V^ls_R = V_R/4 and W^ls_I = W_I/4, correspondingly. Here V_R and W_I (and V^ls_R and W^ls_I) have to be negative. The ReOP V_F(r) of the nucleon-nucleus OP is assumed to be the result of a folding of the nuclear density with the effective NN potential and is a sum of isoscalar (V^IS_F) and isovector (V^IV_F) components, each of which has its direct (V^D_IS and V^D_IV) and exchange (V^EX_IS and V^EX_IV) parts. The effective NN potential contains an energy dependence, usually taken in the form g(E) = 1 − 0.003E, and a density dependence of the form F(ρ) = C[1 + α exp(−βρ) − γρ] for the CDM3Y6 effective Paris potential [13], with C = 0.2658, α = 3.8033, β = 1.4099 fm³, and γ = 4.0 fm³. Its isoscalar and isovector components have the form of the M3Y interaction obtained within g-matrix calculations using the Paris NN potential [13,20]. The ImOP can be chosen either in the form of the microscopically calculated V_F (W = V_F) or in the form W_H obtained in Refs. [11,12] within the HEA of the scattering theory (Eq. (3)).
In Eq. (3), ρ(q) are the corresponding form factors of the nuclear densities, f_N(q) is the amplitude of the NN scattering, and σ̄_N is the total NN scattering cross section averaged over the isospin of the nucleus, which depends on the energy. The parametrization of the latter dependence can be seen, e.g., in Ref. [8]. We note that to obtain the HEA OP (with its imaginary part W_H in Eq. (3)) one can use the definition of the eikonal phase as an integral of the nucleon-nucleus potential over the trajectory of the straight-line propagation and compare it with the corresponding Glauber expression for the phase in the optical-limit approximation.
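As a minimal numerical illustration, the two scaling factors of the effective interaction quoted above, g(E) = 1 − 0.003E and the CDM3Y6 density dependence F(ρ), can be evaluated directly with the constants given in the text; the density values used below are arbitrary examples, not values from the original calculation.

```python
import numpy as np

# CDM3Y6 density dependence F(rho) = C*(1 + alpha*exp(-beta*rho) - gamma*rho)
# and the linear energy dependence g(E) = 1 - 0.003*E quoted in the text.
C, alpha, beta, gamma = 0.2658, 3.8033, 1.4099, 4.0   # beta, gamma in fm^3

def g(E):
    """Energy-dependent factor of the effective NN interaction (E in MeV/nucleon)."""
    return 1.0 - 0.003 * E

def F(rho):
    """Density-dependent factor of the CDM3Y6 interaction (rho in fm^-3)."""
    return C * (1.0 + alpha * np.exp(-beta * rho) - gamma * rho)

# Example: combined factor at saturation density and at a dilute halo periphery
for rho in (0.17, 0.02):
    print(f"rho = {rho:5.2f} fm^-3  ->  g(62)*F(rho) = {g(62.0)*F(rho):.3f}")
```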
In the spin-orbit parts of the OP the functions f_i(r) (i = R, I) correspond to Woods-Saxon (WS) forms of the potentials, with parameters of the real and imaginary parts as used in the DWUCK4 code [21] and applied in the numerical calculations. We determine the values of these parameters by fitting the WS potentials to the microscopically calculated potentials V_F(r) and W(r).
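A minimal sketch of such a fit is shown below, assuming the microscopic potential is available as a tabulated array over r; here a noisy Woods-Saxon shape merely stands in for the folded V_F(r), and the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def woods_saxon(r, V0, R, a):
    """Woods-Saxon shape V0/(1 + exp((r - R)/a)); V0 < 0 for an attractive well."""
    return V0 / (1.0 + np.exp((r - R) / a))

# r grid and a stand-in for the tabulated folding potential V_F(r);
# in practice V_F comes from the double-folding calculation.
r = np.linspace(0.1, 12.0, 120)
V_F = woods_saxon(r, -45.0, 2.8, 0.70) + 0.3 * np.random.default_rng(0).normal(size=r.size)

popt, _ = curve_fit(woods_saxon, r, V_F, p0=(-40.0, 2.5, 0.65))
V0, R, a = popt
print(f"fitted WS parameters: V0 = {V0:.1f} MeV, R = {R:.2f} fm, a = {a:.2f} fm")
```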
Results of calculations of elastic scattering cross sections
We consider 11Li+p elastic scattering at three energies, 62, 68.4, and 75 MeV/nucleon, for which the differential cross sections have been measured [22][23][24]. In Fig. 1, we give the differential cross section of the elastic scattering 11Li+p at 62 MeV/nucleon in the cases W = W_H and W = V_F, with and without accounting for the spin-orbit term in Eq. (1). The renormalization parameters N are determined by a fitting procedure. The results of the calculations are close to each other, and that is why all of them are presented inside the areas shown in Fig. 1. The blue area includes four curves corresponding to W = W_H (three obtained without the SO term and one with it), while the grey one includes four curves corresponding to W = V_F (two obtained without the SO term and two with it). We give in Table 1 the values of the N parameters, χ² and the total reaction cross sections σ_R. Figure 1 shows the satisfactory overall agreement of both areas of curves with the experimental data. However, we note the better agreement in the case W = W_H (the blue area), where the values of χ² are between 1.40 and 1.47, while in the case W = V_F they are between 5.00 and 5.80. The situation is similar for the other energies. Second, we note that the values of σ_R are quite different in the two cases (σ_R ≈ 455-462 mb for W = W_H and σ_R ≈ 260-390 mb for W = V_F). Third, one can see from Table 1 and from the comparison with the data in Fig. 1 that the role of the SO term is weak; its effect is only a slight decrease of the calculated values (Table 1).
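The χ² values quoted above are presumably per-point (reduced) values; the bookkeeping for such a comparison between calculated and measured differential cross sections is simple, and the sketch below uses purely hypothetical numbers only to show it.

```python
import numpy as np

def chi2_per_point(sigma_exp, d_sigma, sigma_th):
    """chi^2 / N between measured and calculated differential cross sections."""
    sigma_exp, d_sigma, sigma_th = map(np.asarray, (sigma_exp, d_sigma, sigma_th))
    return np.sum(((sigma_exp - sigma_th) / d_sigma) ** 2) / sigma_exp.size

# hypothetical values (mb/sr), just to show the calculation
sigma_exp = [120.0, 60.0, 22.0, 9.0]
d_sigma   = [12.0, 6.0, 2.5, 1.2]
sigma_th  = [115.0, 63.0, 20.5, 10.1]
print(f"chi2/N = {chi2_per_point(sigma_exp, d_sigma, sigma_th):.2f}")
```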
As is known, the problem of the ambiguity of the parameters N arises when the fitting procedure is applied to a limited number of experimental data (see, e.g., the calculations and discussion in our previous works [8][9][10]). Because the fitting procedure belongs to the class of ill-posed problems, it becomes necessary to impose some physical constraints on the choice of the set of parameters N. The total cross section of scattering and reaction is one of them; however, the corresponding experimental values are missing in the energy interval considered in the present work. Therefore, we impose another physical criterion, namely the behavior of the volume integrals as functions of the energy. It is known [25] that the volume integrals (their absolute values) for the ReOP decrease with increasing energy, while for the ImOP they increase up to a plateau and then decrease. It is accepted that the elastic scattering of light nuclei is rather sensitive to their periphery, where transfer and breakup processes also take place. Therefore, investigating the elastic scattering, one must bear in mind that virtual non-elastic contributions can also take part in the process. The contribution from a surface imaginary term to the OP [Eq. (1)] can be considered as the so-called dynamical polarization potential, which allows one to simulate the surface effects caused by the latter. In fact, the imaginary part of the SO term in our OP [see Eq. (1)] effectively plays this role. However, sometimes one needs to increase the absorption in the surface region, and thus one adds a derivative of the ImOP (surface term), weighted by a coefficient N^sf_I that is also a fitting parameter. The results for the elastic 10Be+p scattering cross sections are given in Fig. 2 and compared with the data at energies 39.1 MeV/nucleon [26] and 59.4 MeV/nucleon [27]. First, it is seen from the upper panel that the inclusion of only the volume OP is not enough to reproduce reasonably well the data in the small-angle region. After adding the spin-orbit component to the OP the agreement with the data becomes better, in particular for the angular distributions calculated using the GCM density at 39.1 MeV/nucleon and 59.4 MeV/nucleon for angles less than 20° and 30°, correspondingly, as illustrated in the middle panel of Fig. 2. However, a discrepancy at larger angles remains. At the same time, for the cross sections with account for the ls interaction and using the QMC density we obtain fairly good agreement with the data at both energies, and only a small discrepancy is seen at small angles at 59.4 MeV/nucleon. Further improvement is achieved when both SO and surface terms are included in the calculations. In this case, as can be seen from the bottom panel of Fig. 2, the discrepancy between the differential cross sections for the GCM density and the experimental data at larger angles is strongly reduced.
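The volume-integral criterion mentioned above can be checked numerically once the potential is tabulated; conventionally the integral is normalized per interacting nucleon pair, J = (4π/(A_p A_t)) ∫ U(r) r² dr. A short sketch, with a Woods-Saxon shape standing in for the fitted real part, is given below; the numbers are illustrative only.

```python
import numpy as np

def volume_integral(r, U, A_p, A_t):
    """Volume integral per nucleon pair, J = (4*pi/(A_p*A_t)) * integral of U(r)*r^2 dr."""
    return 4.0 * np.pi / (A_p * A_t) * np.trapz(U * r**2, r)

# illustrative check for 11Li + p (A_p*A_t = 11), with a WS stand-in for N_R*V_F(r)
r = np.linspace(0.05, 15.0, 600)
U = -45.0 / (1.0 + np.exp((r - 2.8) / 0.7))          # MeV
print(f"J_V = {volume_integral(r, U, 11, 1):.1f} MeV fm^3")
```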
The elastic scattering cross sections of 11Be+12C (their ratios to the Rutherford one) calculated within the hybrid model at the same energies as for 10,11Be+p scattering are given in Fig. 3 and compared with the experimental data. In comparison with the case of 10,11Be+p, the experimental data [26,27] for the scattering on 12C demonstrate a more developed diffraction picture because of the stronger influence of the Coulomb field. It can be seen in Fig. 3 that in both cases of calculations of OPs with QMC or GCM densities the results are in good agreement with the available data. It is also seen from the figure that it is difficult to determine the advantage of using W = W_H or W = V_F for the ImOP, because the differences between the theoretical results start at angles for which experimental data are not available.
For the breakup calculations, first the density distributions of the 9Li (10Be) core (c cluster) and of the h = 2n or h = n halo must be given. Second, the folding potentials of the interaction of each of the clusters with the incident proton or target nucleus have to be computed. Finally, the sum of these two potentials must be folded with the respective two-cluster density distribution of 9Li (10Be), which means that the wave function of the relative motion of the two clusters must be known. The latter is obtained by solving the Schrödinger equation with the Woods-Saxon potential for a particle with the reduced mass of the two clusters. The parameters of the WS potentials are obtained by fitting the energy of a given state to the empirical separation-energy values of the di-neutron halo, ε = 247 keV for 11Li, and of the neutron halo, ε = 504 keV for 11Be, respectively, as well as the rms radius of the cluster function. More details on how to calculate the characteristics of the breakup processes of the 11Li and 11Be nuclei, namely the diffraction and stripping reaction cross sections and the momentum distributions of the fragments, are given in Refs. [6,7]. We perform calculations of the breakup cross sections of 11Be on the target nucleus 9Be and on heavy nuclei, such as 93Nb, 181Ta, and 238U, and compare our results with the available experimental data [28]. The densities of these heavy nuclei needed to compute the OPs are taken from Ref. [29]. The calculated diffraction and stripping cross sections (when a neutron leaves the elastic channel) for the reactions 11Be+9Be, 11Be+93Nb, and 11Be+238U are illustrated in Fig. 4. We note the good agreement with the experimental data for both light and heavy breakup targets. The obtained cross sections for diffraction and stripping have a similar shape. The values of the widths are around 50 MeV/c, in agreement with the experimental ones. Our results confirm the observations (e.g., in Refs. [30,31]) that the width hardly depends on the mass of the target and, as a result, it gives information basically about the momentum distributions of the two clusters. Here we note that, due to the arbitrary units of the measured cross sections of the considered processes, it was not necessary to renormalize the depths of our OPs for the fragment-target nuclei interactions.
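The depth adjustment described above, tuning the Woods-Saxon well so that the cluster bound state reproduces the empirical separation energy, can be sketched numerically. In the sketch below the geometry parameters (R, a), the point di-neutron treatment of the 2n cluster, and the use of the lowest s-wave level are illustrative assumptions, not the choices of the original calculation.

```python
import numpy as np
from scipy.optimize import brentq

hbarc, amu = 197.327, 931.494                 # MeV fm, MeV
mu = amu * (2.0 * 9.0) / (2.0 + 9.0)          # reduced mass of a point 2n + 9Li system

def lowest_s_state(V0, R=2.5, a=0.65, rmax=40.0, n=800):
    """Lowest l = 0 eigenvalue (MeV) of a Woods-Saxon well, from finite-difference
    diagonalization of the reduced radial equation; returns 0.0 if unbound."""
    r = np.linspace(rmax / n, rmax, n)
    h = r[1] - r[0]
    kin = hbarc**2 / (2.0 * mu)
    V = -V0 / (1.0 + np.exp((r - R) / a))
    H = (np.diag(2.0 * kin / h**2 + V)
         + np.diag(-kin / h**2 * np.ones(n - 1), 1)
         + np.diag(-kin / h**2 * np.ones(n - 1), -1))
    return min(np.linalg.eigvalsh(H)[0], 0.0)

eps = 0.247                                   # 2n separation energy of 11Li (MeV)
V0 = brentq(lambda v: lowest_s_state(v) + eps, 1.0, 60.0)
print(f"WS depth reproducing eps = {eps} MeV: V0 = {V0:.2f} MeV")
```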
Conclusions
In the present work the hybrid model is applied to study characteristics of the processes of scattering and reactions of 11Li and 10,11Be on protons and nuclei. The results of the present work can be summarized as follows. (i) The only free parameters in the hybrid model, obtained by a fitting procedure to the experimental data whenever they exist, are the coefficients N that correct the depths of the ReOP, ImOP, SO and surface potentials. These parameters (the deviations of their values from unity) can serve as a quantitative test of our method, but not as a tool to obtain the best agreement with the experimental data. The physical criteria imposed in our work on the choice of the values of the parameters N were the known behavior of the volume integrals J_V and J_W as functions of the incident energy for 0 < E < 100 MeV/nucleon, as well as the values of the total cross section of scattering and reaction.
(ii) Other folding approaches are used to consider the 11Li breakup, suggesting a simple 9Li+2n cluster model, and the 11Be breakup by means of the simple 10Be+n cluster model. The latter models are applied to calculate the diffraction breakup and stripping reaction cross sections. It turns out that the breakup channel of 11Li+p elastic scattering gives a breakup cross section value that exceeds 80% of the total reaction cross section, while it is about half of the latter in the case of 6He+12C [32].
(iii) Predictions for the longitudinal momentum distributions of 9 Li fragments produced in the breakup of 11 Li at 62 MeV/nucleon on a proton target are given. The widths of the peak we obtained are between 70 and 80 MeV/c, while widths of about 50 MeV/c are known from the reactions of 11 Li on nuclear targets 9 Be, 93 Nb, and 181 Ta at an energy of 66 MeV/nucleon.
(iv) The momentum distributions of 10 Be fragments produced in the breakup of 11 Be on 9 Be, 93 Nb, 181 Ta, and 238 U at 63 MeV/nucleon are obtained. There exists a good agreement of our calculations for the diffraction and stripping reaction cross sections with the available experimental data. The obtained widths of about 50 MeV/c are close to the empirical ones.
(v) Future measurements of elastic scattering and breakup reactions of 11 Li and 10,11 Be nuclei on different targets are highly desirable for the studies of the exotic nuclear structure. More complicated three-body approaches and more refined theoretical methods (e.g. CDCC method and its extensions) would allow an accurate interpretation of the expected data. | 4,492.6 | 2016-01-01T00:00:00.000 | [
"Physics"
] |
Stearoyl-CoA Desaturase 2 Is Required for Peroxisome Proliferator-activated Receptor γ Expression and Adipogenesis in Cultured 3T3-L1 Cells*
Based on recent evidence that fatty acid synthase and endogenously produced fatty acid derivatives are required for adipogenesis in 3T3-L1 adipocytes, we conducted a small interfering RNA-based screen to identify other fatty acid-metabolizing enzymes that may mediate this effect. Of 24 enzymes screened, stearoyl-CoA desaturase 2 (SCD2) was found to be uniquely and absolutely required for adipogenesis. Remarkably, SCD2 also controls the maintenance of adipocyte-specific gene expression in fully differentiated 3T3-L1 adipocytes, including the expression of SCD1. Despite the high sequence similarity between SCD2 and SCD1, silencing of SCD1 did not down-regulate 3T3-L1 cell differentiation or gene expression. SCD2 mRNA expression was also uniquely elevated 44-fold in adipose tissue upon feeding mice a high fat diet, whereas SCD1 showed little response. The inhibition of adipogenesis caused by SCD2 depletion was associated with a decrease in peroxisome proliferator-activated receptor γ (PPARγ) mRNA and protein, whereas in mature adipocytes loss of SCD2 diminished PPARγ protein levels, with little change in mRNA levels. In the latter case, SCD2 depletion did not change the degradation rate of PPARγ protein but decreased the metabolic labeling of PPARγ protein using [35S]methionine/cysteine, indicating protein translation was decreased. This requirement of SCD2 for optimal protein synthesis in fully differentiated adipocytes was verified by polysome profile analysis, where a shift in the mRNA to monosomes was apparent in response to SCD2 silencing. These results reveal that SCD2 is required for the induction and maintenance of PPARγ protein levels and adipogenesis in 3T3-L1 cells.
The ability of adipocytes to sense and respond to circulating fatty acid levels is important in maintaining the proper balance between fatty acid storage and fatty acid release for energy utilization. In the case of energy excess, fatty acids are stored in the form of triglyceride, and new adipocytes are generated to efficiently metabolize amino acids, glucose, and fatty acids to triglyceride (1). The key regulator of adipogenesis, the process whereby preadipocytes differentiate into fully mature adipocytes, is the ligand-activated nuclear receptor PPARγ (2). Cultured mouse 3T3-L1 preadipocytes are an excellent model system for the study of adipogenesis. These cells differentiate into adipocytes with multilocular lipid droplets through a transcriptional cascade beginning with the rapid and transient expression of C/EBPβ and C/EBPδ (3,4). The up-regulation of these transcription factors precedes the expression of PPARγ and C/EBPα, which are critical for the completion of adipogenesis as well as the maintenance of adipocyte-specific gene expression in fully differentiated cells (3,4). Other transcription factors have also been shown to play significant roles in adipogenesis and adipocyte biology (for reviews, see Refs. 3, 5, and 6). However, because PPARγ controls the expression of large sets of genes required to maintain the adipocyte phenotype, including C/EBPα itself, a loss in the activity or expression of PPARγ leads to a loss in adipocyte function (7).
Although it is unclear whether ligands actively modulate PPARγ activity in fully differentiated adipocytes, ligand-mediated activation of PPARγ appears to be required for transcriptional activity during adipogenesis (8). Because PPARγ has a large hydrophobic ligand binding domain (9) and activation occurs in response to fatty acids (10), endogenous long chain fatty acids or their derivatives have been proposed as natural ligands. These include oleate, linoleate, nitrolinoleate, nitrooleate, 9-hydroxydecaenoic acid, arachidonic acid, and 15-deoxy-Δ12,14-prostaglandin J2 (11)(12)(13)(14)(15). Despite the many proposed ligands, nitrolinoleate and nitro-oleate are the only fatty acids with a high binding affinity, but it has not yet been verified that these fatty acids are truly endogenous PPARγ ligands in adipocytes (13,15). Because several low affinity fatty acid ligands activate PPARγ (11-13, 16, 17), this nuclear receptor may instead serve as a general fatty acid sensor, allowing proper expression of fatty acid metabolizing enzymes and the generation of new adipocytes.
In addition, it appears that differentiating adipocytes can fully synthesize a PPARγ ligand, since preadipocytes will differentiate and produce a PPARγ ligand in the absence of exogenous fatty acids (14,18). Furthermore, overexpression of sterol regulatory element-binding protein-1 (SREBP1) in adipocytes apparently increases ligand production (19), whereas inhibition of acetyl-CoA carboxylase (ACC) (20) or fatty acid synthase (FAS) (21) inhibits adipogenesis. SREBP1 is a transcription factor that controls the expression of many fatty acid metabolizing enzymes, including ACC and FAS. Because ACC and FAS work sequentially to produce palmitate, it is possible that sterol regulatory element-binding protein-1 promotes PPARγ ligand production through a pathway involving ACC and FAS. Although there may be several explanations for the requirement of SREBP1, ACC, or FAS for adipogenesis apart from PPARγ ligand production, these studies do support the notion that endogenously synthesized fatty acids are required for adipogenesis.
Because adipocytes express multiple fatty acid-metabolizing enzymes, these cells apparently produce highly diverse lipid species that may affect cellular signaling events, including PPARγ activation. Thus, the aim of the present study was to identify enzymes involved in fatty acid synthesis or metabolism that may mediate such signaling pathways through their fatty acid products. To achieve this goal, we set up a screen in which 24 fatty acid-metabolizing enzymes were individually depleted using siRNA oligonucleotides to identify enzymes that are required for adipocyte-specific gene expression. Through this siRNA screen, we identified the fatty acid Δ9-desaturase, stearoyl-CoA desaturase 2 (SCD2), as a required enzyme for 3T3-L1 cell adipogenesis and for the maintenance of adipocyte-specific gene expression in fully differentiated cells. Importantly, SCD2 was found to be required for PPARγ induction during differentiation of 3T3-L1 cells and for PPARγ expression in fully differentiated adipocytes. Related to this latter effect, SCD2 expression was found to promote protein translation, secondarily affecting PPARγ protein levels. Surprisingly, although SCD1 and SCD2 exhibit high sequence similarity, are both expressed in the endoplasmic reticulum of the adipocyte, and are predicted to produce the same products, SCD1 depletion failed to attenuate PPARγ expression or adipogenesis. Therefore, these results identify SCD2 as a key regulator of adipocyte function by promoting PPARγ protein synthesis and reveal a novel and specific role for SCD2 versus SCD1 in the adipocyte.
EXPERIMENTAL PROCEDURES
Animals-All procedures were carried out following the University of Massachusetts Medical School Institution Animal Care and Use Committee guidelines. Four-week-old male C57BL/6J mice were purchased from The Jackson Laboratory (Bar Harbor, ME) and maintained in a 12-h light/dark cycle. Half of the mice were fed a standard mouse chow (10% kcal of fat), and the other half was fed a high fat diet (55% kcal of fat) ad libitum for 18 weeks. The animals were fasted for 18 h before harvesting the tissues. Animals were sacrificed, epididymal fat pads were harvested from the mice and placed in KRH buffer (pH 7.4) supplemented with 2.5% bovine serum albumin, and RNA was collected using TRIzol (Invitrogen) for subsequent Affymetrix GeneChip analysis.
Cell Culture and Electroporation-3T3-L1 fibroblasts were cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 50 μg/ml streptomycin, and 50 units/ml penicillin (22). For experiments performed during differentiation, fibroblasts were cultured for 7 days, and 5 × 10^6 cells were electroporated with 20 nmol of siRNA. The electroporation was performed using a Bio-Rad Gene Pulser II at the setting of 0.18 kV and 960 microfarads. Immediately after electroporation, the cells were reseeded into 2 wells of a 6-well plate. After 24 h, differentiation media consisting of 2.5 μg/ml insulin, 0.25 μM dexamethasone, and 0.5 mM 3-isobutyl-1-methylxanthine in the culture media described above was added for 72 h in the absence or presence of 1 μM rosiglitazone. After 72 h, differentiation media was replaced with culture media for an additional 24 h, and then RNA or protein was collected. For experiments in mature adipocytes, fibroblasts were cultured for 8 days, differentiated into mature adipocytes as described above, and cultured for an additional 7 days. Adipocytes were then electroporated (20 nmol of siRNA/5 × 10^6 cells) as described above. After electroporation, cells were reseeded into multiple-well plates, and RNA or protein was collected 4-72 h post-electroporation.
Affymetrix Gene Chip Analysis-Total RNA was collected from day 10 adipocytes after 72 h of siRNA treatment or from preadipocyte fibroblasts, adipocytes, and primary fat tissue as described (23). Subsequent reactions were carried out as already described (24). Only signals considered present were used for further analysis. If more than one probe is present, only one representative probe is shown.
RNA Isolation and Real Time-PCR-Total RNA was collected using TRIzol (Invitrogen), and reverse transcription and real-time PCR analysis were carried out as already described (24,50). Primers were chosen from the PrimerBank online database (25). AKT1 was used as the internal control.
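One common way to turn such measurements into relative expression levels is the 2^-ΔΔCt calculation against the internal control; whether this exact scheme was used here is not stated, and the Ct values below are hypothetical, included only to show the arithmetic.

```python
def relative_expression(ct_target, ct_control, ct_target_ref, ct_control_ref):
    """Relative mRNA level by the common 2^-ddCt method: the target gene is
    normalized to the internal control (e.g., AKT1) and to a reference sample."""
    d_ct     = ct_target - ct_control          # normalize to internal control
    d_ct_ref = ct_target_ref - ct_control_ref
    return 2.0 ** -(d_ct - d_ct_ref)

# hypothetical Ct values: a target transcript in siRNA-treated vs scrambled-control cells
print(relative_expression(ct_target=26.4, ct_control=20.1,
                          ct_target_ref=23.9, ct_control_ref=20.0))
```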
Oil Red O Staining-Cells were fixed with 4% formaldehyde for 1 h at room temperature, washed 3 times with PBS, permeabilized with P-buffer (0.5% Triton X-100, 1% fetal bovine serum, and 0.05% sodium azide) for 20 min, incubated with Oil Red O solution (5 mg/ml Oil Red O solid dissolved in isopropanol, then diluted to a 60% working solution with double-distilled H2O) for 30 min, washed 3 times with distilled water, and analyzed by light microscopy or visual inspection.
[35S]Methionine/Cysteine Labeling and Immunoprecipitation of PPARγ-Seventy-two hours after electroporation of cells with siRNA, one 100-mm plate of cells was starved of methionine and cysteine for 2 h and then labeled with 500 μCi of [35S]methionine/cysteine for 4 h. Cells were then lysed in ice-cold buffer containing 25 mM Hepes (pH 7.5), 0.5% Nonidet P-40, 1 mM EGTA, 1 mM EDTA, 1% SDS, 12.5 mM NaF, 5 mM sodium pyrophosphate, 5 mM β-glycerophosphate, 5 mM sodium vanadate, 1 mM phenylmethylsulfonyl fluoride, 5 μg/ml aprotinin, and 10 μg/ml leupeptin. Total cell lysates of 1 mg of protein were immunoprecipitated overnight with 20 μg of mouse monoclonal antibody against PPARγ, followed by incubation with 50 μl of protein A-Sepharose beads for 2 h at 4°C. The beads were then washed 5 times with lysis buffer before boiling for 5 min in Laemmli buffer. Protein was then separated on an 8% SDS gel, transferred to nitrocellulose, and exposed to a phosphor screen for 60 h. The screen was then visualized with a PhosphorImager (Molecular Dynamics). The nitrocellulose was then immunoblotted as described above using goat polyclonal antibody against PPARγ to detect the efficiency of the immunoprecipitation.
Polysome Profile and Reverse Transcription-PCR-Polysome profiles were generated as described previously (26-28). Briefly, after siRNA transfection, cells were reseeded into one 10-cm dish. After 24 or 72 h, cycloheximide (Sigma) was added at a final concentration of 100 μg/ml for 10 min. Cells were then washed with PBS, trypsinized, pelleted, and resuspended in polysome buffer (20 mM Tris-HCl (pH 7.5), 10 mM NaCl, 3 mM MgCl2) containing 150 μg/ml cycloheximide and 100 units/ml RNasin (Promega). After determining the cell number in each sample, Triton X-100 was added to the cell suspension at a final concentration of 0.3% (v/v), and cells were passed through a 27-gauge needle 5 times to ensure lysis. The nuclei were then pelleted by centrifugation at 4°C and 12,000 × g for 5 min. The supernatant was then layered on a linear 10-50% sucrose gradient in polysome buffer containing 10 μg/ml cycloheximide and 3.3 units/ml RNasin, and the gradients were centrifuged in a Beckman SW41Ti rotor at 141,000 × g at 4°C for 4 h. The gradients were fractionated into 1-ml fractions, and the UV absorption at A254 was recorded. Twelve fractions were collected, and RNA was then extracted from each fraction using TRIzol (Invitrogen). Equal volumes of each fraction were then reverse-transcribed, and real-time PCR was performed as already described (24).
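In the analysis of such profiles, the per-fraction mRNA signal is expressed as a percentage of the maximum within each sample (see the Fig. 9 legend below), so that the distribution across the gradient can be compared between treatments. The sketch below uses hypothetical fraction values only.

```python
import numpy as np

def percent_of_max(fraction_signal):
    """Express per-fraction mRNA signal as a percentage of the sample maximum."""
    s = np.asarray(fraction_signal, dtype=float)
    return 100.0 * s / s.max()

# hypothetical qPCR signal across 12 sucrose-gradient fractions (light -> heavy)
scrambled = [2, 3, 5, 8, 12, 20, 35, 60, 100, 80, 45, 20]
scd2_kd   = [4, 6, 10, 18, 30, 55, 100, 70, 40, 22, 12, 6]
print(np.round(percent_of_max(scrambled), 1))
print(np.round(percent_of_max(scd2_kd), 1))   # peak shifted toward lighter fractions
```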
Expression of Fatty Acid Metabolizing Enzymes in Cultured Adipocytes and Primary Adipose Tissue-To establish a siRNA-based screen of broad scope, we first identified key enzymes in the major pathways of fatty acid metabolism that are clearly expressed in both mouse 3T3-L1 adipocytes and primary mouse adipose tissue. Fig. 1 illustrates eight pathways of fatty acid metabolism that were considered for our studies, which include ω-oxidation, β-oxidation, α-oxidation, elongation, desaturation, nitration, epoxygenation/hydroxylation, and isomerization. Identification of the enzymes shown in Fig. 1 was accomplished by Affymetrix GeneChip microarray analysis of samples obtained from 3T3-L1 preadipocytes versus 3T3-L1 adipocytes (6 days after initiation of differentiation) and from the adipose tissue of mice fed a normal diet versus a high fat diet for 16 weeks. Table 1 presents the list of specific genes we selected by this analysis, all of which were found to be significantly expressed in both model systems. Boldface shows values for -fold change in expression for genes that are significantly up-regulated or down-regulated in response to 3T3-L1 differentiation (Table 1). The values obtained for SCD1 and SCD2 are highlighted within the rectangle.
The fatty acid-metabolizing enzymes shown in Table 1 and Fig. 1 allow the generation of many different fatty acid products and derivatives from the same initial fatty acid substrate. A saturated fatty acid such as palmitate may be 1) ω-oxidized by the cytochrome P450 enzyme CYP4f16, forming a dicarboxylic acid, 2) β-oxidized in peroxisomes by the acyl-CoA oxidases ACOX1 and ACOX2 or in the mitochondria by the acyl-CoA dehydrogenases ACADl and ACADvl, cleaving two carbons per cycle from the fatty acid, 3) α-oxidized in peroxisomes by phytanoyl-CoA hydroxylase, cleaving one carbon per cycle from the fatty acid, 4) elongated by ELOVL1, ELOVL3, ELOVL5, and ELOVL6, which are present in the endoplasmic reticulum, adding two carbons per cycle to the fatty acid, or 5) desaturated by various enzymes found in the endoplasmic reticulum, including stearoyl-CoA desaturase 1 or stearoyl-CoA desaturase 2, forming a cis double bond between the 9 and 10 carbons, or fatty acid desaturase 1, fatty acid desaturase 2, or fatty acid desaturase 3, forming a cis double bond between the 5 and 6, 6 and 7, and possibly the 4 and 5 carbons, respectively. In addition, the double bond in an unsaturated fatty acid may change position through the isomerase ALOXe3, found in the cytoplasm, be nitrated by nitric oxide species produced by nitric-oxide synthase, found in the cytoplasm, or be oxidized by the cytochrome P450 enzymes CYP2f2, CYP2c55, CYP20a1, CYP26b1, and CYP1b1, found in the endoplasmic reticulum, adding an epoxide, hydroxyl, or peroxyl group to the fatty acid. An epoxide may then be further metabolized by the epoxide hydrolase EPHX1, present in the endoplasmic reticulum, or EPHX2, present in the cytoplasm, producing dihydrodiols. Additionally, these pathways can operate in tandem, changing the carbon length or the position of a side group or double bond within the fatty acid. Because a fatty acid produced from any one of these pathways may affect cell signaling events or other processes, the enzymes listed in Table 1 were targeted in a siRNA-based screen to determine whether they affect adipocyte gene expression in 3T3-L1 cells.
SCD2, but Not SCD1, Is Required for 3T3-L1 Adipogenesis-To identify fatty acid metabolizing enzymes that are required for 3T3-L1 adipogenesis, siRNA oligonucleotides directed against each of the enzymes identified by the microarray analysis in Table 1 were electroporated into 3T3-L1 preadipocytes before differentiation. Because PPARγ appears to be activated by an endogenous ligand during adipogenesis (8,(11)(12)(13)16), we reasoned that if a depleted enzyme is required specifically for the production of a PPARγ ligand, the addition of an exogenous ligand may reverse the effect of such enzyme depletion. Thus, in our screen the enzymes were also depleted in the presence of the PPARγ-specific ligand, rosiglitazone, as a control. The initial screen monitored, by real-time PCR, the mRNA transcript levels of the differentiation-induced proteins PPARγ and GLUT4 (Fig. 2). As expected, siRNA-based depletion of the well-established factors required for adipocyte differentiation, PPARγ and FAS (21), did indeed attenuate PPARγ and GLUT4 expression in this screen, and these served as positive controls. In addition, rosiglitazone treatment did not restore PPARγ or GLUT4 levels upon siRNA-based depletion of PPARγ (Fig. 2 and supplemental Fig. 1). Importantly, of the remaining 24 enzymes screened, only SCD2 depletion potently inhibited gene expression during adipogenesis (Fig. 2, A and B).
Interestingly, despite the predicted similarity in substrate selectivity between SCD1 and SCD2 (29), depletion of SCD1 in 3T3-L1 cells did not inhibit PPARγ or GLUT4 expression (Fig. 2 and supplemental Fig. 1). Furthermore, the addition of rosiglitazone did not restore the transcript levels of PPARγ or GLUT4 upon loss of SCD2 or FAS (Fig. 2 and supplemental Fig. 1). This suggests that if SCD2 and FAS are involved in PPARγ ligand production during adipogenesis, the enzymes are also required for an independent function.
In an attempt to confirm and extend these findings, expression of PPARγ protein was measured in a second screen of 10 enzymes, again revealing that SCD2, but not SCD1, is absolutely required for expression of this transcription factor (Fig. 3A). In addition, when preadipocytes differentiate into adipocytes, the cells become smaller and rounder, losing their fibroblastic morphology. The cells also acquire the ability to accumulate lipid in the form of triglyceride, appearing as lipid droplets in the cytoplasm (3,14). Oil Red O staining of accumulated neutral lipids in cells 4 days after the initiation of differentiation confirms that PPARγ and SCD2 are required for the lipid accumulation (Fig. 3B) and morphological changes (data not shown) that occur during adipogenesis, whereas SCD1 is not. Therefore, SCD2, but not SCD1, is required for several aspects of adipogenesis, including the induction of adipocyte-specific genes, the increase in lipid accumulation, and the gain of adipocyte morphology.
TABLE 1. Affymetrix Gene Chip analysis of fatty acid metabolizing enzymes in differentiating 3T3-L1 adipocytes and primary adipocytes from mice fed a normal chow or high fat diet. RNA was collected from 3T3-L1 cells before differentiation or 2, 4, and 6 days post-differentiation and subjected to Affymetrix Gene Chip analysis. RNA from three different samples was collected and pooled and then analyzed on one array; each experiment was done in triplicate, resulting in a total of nine RNA samples and three arrays for each time point. From the primary adipocytes, RNA was collected from 23 mice fed a normal chow diet and 14 mice fed a high fat diet for 18 weeks. The mice were divided into three groups within each diet condition, and the RNA from each group was pooled and then analyzed on one array, resulting in three arrays for each diet condition. Shown are representative values for the -fold changes in gene expression during adipogenesis or due to high fat diet. Boldface shows values that are significantly up-regulated or down-regulated in response to 3T3-L1 differentiation. The values obtained for SCD1 and SCD2 are in the rectangle. The asterisk denotes a p value <0.05.
To verify that the inhibition of adipogenesis by depletion of PPARγ, FAS, or SCD2 is not due to general toxicity, metabolic activity was measured in the cells using the tetrazolium compound MTS. MTS is reduced by the cells to a colored formazan product, presumably by NADPH or NADH produced by dehydrogenase enzymes, and is therefore an indirect measure of dehydrogenase activity. As seen in Fig. 4A, depletion of the various enzymes using siRNA did not cause a reduction in dehydrogenase activity, and therefore the inhibition of adipogenesis does not appear to be due to general toxicity. We also found an increase in the expression of several caspases with SCD2 depletion (supplemental Table 1). Because caspases are involved in apoptosis, a TdT-mediated dUTP nick-end labeling assay was performed to ensure that the siRNA treatment does not induce apoptosis. This assay utilizes fluorescein-12-dUTP and terminal deoxynucleotidyl transferase to fluorescently label the fragmented DNA of apoptotic cells at the free 3′-OH DNA ends. The fluorescence of the cell population is then quantitated by flow cytometry to determine the extent of apoptosis occurring within the cell population. As can be seen in Fig. 4B, depletion of PPARγ, FAS, SCD1, or SCD2 does not induce apoptosis, and therefore the effects on gene expression are not due to this toxic event.
SCD2, but Not SCD1, Is Required for Adipocyte-specific Gene Expression in Fully Differentiated Adipocytes-Real-time PCR analysis reveals that SCD2 expression is higher in preadipocyte fibroblasts than SCD1 expression (supplemental Fig. 2, A and C), but 6 days after the induction of differentiation, SCD1 expression increases by 23-fold (supplemental Fig. 2, B and C). This dramatic induction of SCD1 expression results in higher SCD1 than SCD2 expression in fully differentiated cells (supplemental Fig. 2, A and C). Because SCD2 depletion inhibits the increase in SCD1 expression during adipogenesis (Fig. 2C), perhaps the inhibition of adipogenesis is not due solely to SCD2 depletion but is dependent on a decrease in total desaturase activity. Therefore, perhaps the more profound effect of SCD2 depletion on adipogenesis is simply due to its higher expression in the preadipocyte. We therefore tested whether SCD1 or SCD2 is required to sustain adipocyte-specific gene expression in fully differentiated adipocytes (7 days after initiation of differentiation), when SCD1 expression is dramatically higher than SCD2 expression (supplemental Fig. 2C). Remarkably, real-time PCR analysis of the products of several adipocyte genes revealed that SCD2, but not SCD1, is necessary for optimal expression of the PPARγ-regulated genes phosphoenolpyruvate carboxykinase and ACC in fully differentiated cells (Fig. 5A). However, SCD2 knockdown in these fully differentiated adipocytes caused only a minor decrease in PPARγ mRNA expression (Fig. 5A), in contrast to SCD2 depletion in cells before differentiation (Fig. 2A). Therefore, the expression of PPARγ1 and PPARγ2 protein was determined by Western blot (Fig. 5B). Surprisingly, the protein levels of both PPARγ isoforms were markedly decreased in fully differentiated adipocytes upon siRNA-mediated depletion of SCD2 and were not affected by depletion of SCD1. FAS is also required for phosphoenolpyruvate carboxykinase and ACC expression in fully differentiated cells, but this effect is not due to a decrease in PPARγ expression, since FAS depletion did not cause a significant decrease in PPARγ mRNA or protein expression (Fig. 5, A and B). Thus, the maintenance of PPARγ protein in fully differentiated cultured adipocytes is specifically dependent on SCD2 activity, explaining the requirement of SCD2 for phosphoenolpyruvate carboxykinase and ACC gene expression.
To compare the sets of adipocyte genes regulated by SCD2 depletion versus PPARγ depletion, Affymetrix Gene Chip analysis was performed in fully differentiated adipocytes electroporated with siRNA directed against PPARγ, SCD1, or SCD2. Fig. 6 illustrates the results of this analysis as a heat map showing the comparison of genes that change in expression with the different siRNA treatments. The green bars represent genes that are significantly up-regulated, and the red bars represent genes that are significantly down-regulated in the cells treated with siRNA versus scrambled nucleotide control. Not surprisingly, SCD2 depletion has a profound effect on gene expression that strongly parallels the effects of PPARγ depletion, whereas loss of SCD1 shows no similarity to PPARγ depletion in its effect on gene expression (Fig. 6). Likewise, a closer analysis of genes highly expressed in the adipocyte reveals similar changes in gene expression due to PPARγ and SCD2 depletion but not SCD1 depletion (supplemental Table 1). In these experiments, PPARγ depletion by siRNA was only about 50% (data not shown). Therefore, these results demonstrate the powerful requirement of PPARγ for optimal adipocyte-specific gene expression, as previously published (7). Furthermore, these results illustrate the distinct roles that the highly similar desaturases SCD1 and SCD2 fulfill in the fully differentiated adipocyte.
SCD2 Is Required for Optimal Protein Synthesis in 3T3-L1 Adipocytes-The reduction in PPARγ protein but not mRNA expression in response to SCD2 depletion in fully differentiated adipocytes may be due to a decrease in its synthesis or an increase in its degradation. Cultured adipocytes were therefore treated with cycloheximide to inhibit protein synthesis and determine whether PPARγ degradation is increased upon loss of SCD2. Using this standard method to determine the protein degradation rate in the presence of cycloheximide, PPARγ protein levels were assessed in adipocytes that were electroporated with scrambled siRNA or siRNA directed against SCD2. As seen in Fig. 7, the loss of PPARγ protein is rapid upon this treatment, exhibiting a short half-life of ~1.5 h, similar to what has been previously reported (30). However, the rate of PPARγ degradation is similar between control and SCD2-depleted cells, indicating no change in response to loss of SCD2. Therefore, these results confirm rapid turnover of PPARγ protein in adipocytes and indicate that SCD2 does not promote PPARγ degradation.
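The half-life estimate in such a cycloheximide chase reduces to fitting a single exponential to the band intensities, with t1/2 = ln 2/k. The sketch below uses hypothetical densitometry values, chosen only so that the fit returns roughly the reported ~1.5 h.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A0, k):
    """Single-exponential decay of band intensity after cycloheximide addition."""
    return A0 * np.exp(-k * t)

# hypothetical densitometry values (arbitrary units) at 0, 1, 2, 4 h of chase
t = np.array([0.0, 1.0, 2.0, 4.0])
intensity = np.array([1.00, 0.63, 0.40, 0.16])

(A0, k), _ = curve_fit(decay, t, intensity, p0=(1.0, 0.5))
print(f"half-life = {np.log(2)/k:.2f} h")
```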
The results in Fig. 7 indicate that the decrease in PPARγ protein levels in response to the loss of SCD2 in fully differentiated adipocytes is due to decreased synthesis of PPARγ protein. To determine whether SCD2 is required for PPARγ protein synthesis, newly synthesized protein was labeled with [35S]methionine/cysteine, and PPARγ protein was immunoprecipitated from control and SCD2-depleted cells. The radioactive signal generated from the immunoprecipitated protein indicates protein that has been newly synthesized, whereas the Western blot of the immunoprecipitated protein shows the total amount of protein present. As seen in Fig. 8, newly synthesized PPARγ1 and PPARγ2 are reduced by ~50% in the SCD2-depleted cells, which is similar to the decrease in total protein levels (Figs. 8 and 5B). Therefore, because PPARγ degradation is not altered (Fig. 7), the decrease in newly synthesized protein appears to be due to a decrease in protein synthesis.
A common method to monitor the translational efficiency of a particular mRNA is polysome profile analysis. This methodology separates monosomes from polysomes on a sucrose density gradient, which is then fractionated to generate an absorbance profile indicating which fractions contain monosomes and polysomes. Subsequently, mRNA is isolated from each fraction to determine the degree to which a particular mRNA associates with monosomes or polysomes. To verify that translation of PPARγ is indeed decreased in response to SCD2 depletion, polysome profile analysis was performed, and the distribution of PPARγ mRNA between monosomes and polysomes was determined. The UV absorbance at A254 reveals a decrease in absorbance in the heavy polysome fractions and an increase in absorbance in the light polysome and 80 S monosome fractions in cells depleted of SCD2, suggesting that fewer ribosomes are associated with mRNA and that there is a global reduction in translation. Real-time PCR analysis also reveals that the PPARγ mRNA shifts toward the lighter polysome and monosome fractions, confirming that PPARγ is less efficiently translated in the absence of SCD2 (Fig. 8). Therefore, the decrease in PPARγ protein expression is due to a decrease in general protein synthesis rather than a specific effect on PPARγ translation.
FIGURE 5. SCD2 is required for PPARγ protein, but not mRNA expression, as well as the expression of the PPARγ-regulated genes phosphoenolpyruvate carboxykinase and ACC in fully differentiated 3T3-L1 adipocytes. Seven days post-differentiation, adipocytes were electroporated with PBS or scrambled nucleotide as controls or siRNA against PPARγ, FAS, SCD1, or SCD2 transcript. After 72 h, RNA was collected to determine the expression of adipogenic markers by real-time PCR using AKT1 as an internal control (A), or protein was collected to determine the expression of PPARγ by Western blot (B). Changes in protein expression were quantified by densitometry; the values for PPARγ represent both PPARγ1 and PPARγ2 isoforms, since both isoforms show a similar decrease. The values represent the average of three independent experiments, and the asterisk denotes a p value <0.05. PEPCK, phosphoenolpyruvate carboxykinase.
DISCUSSION
The major finding reported here is the unexpected requirement of the fatty acid desaturase isoform SCD2 for both adipogenesis and the maintenance of the adipocyte phenotype in cultured 3T3-L1 cells (Figs. 2, 3, 5, and 6, and supplemental Fig. 1). SCD2 regulates adipogenesis at least in part by controlling the transcription of the nuclear receptor PPARγ (Fig. 2A and supplemental Fig. 1), whereas in fully differentiated adipocytes SCD2 is required for optimal protein synthesis, including PPARγ translation (Figs. 7-9). Thus, in 3T3-L1 preadipocytes and adipocytes, PPARγ protein levels are remarkably dependent on the expression levels of SCD2. Interestingly, the inhibition of adipogenesis by SCD2 depletion was not restored by the addition of the PPARγ-specific ligand rosiglitazone (Figs. 2 and 3 and supplemental Fig. 1). Therefore, SCD2 does not appear to be regulating the production of a PPARγ ligand. Rather, these data indicate that in preadipocytes one or more unsaturated fatty acids generated by the SCD2 enzyme, or a protein-protein interaction dependent on SCD2, is necessary for the normal functioning of the transcriptional machinery that drives PPARγ expression and also to maintain protein synthesis rates in mature adipocytes.
The surprisingly powerful effects of depleting SCD2 in cultured adipocytes suggest a special role for this enzyme in adipocyte function. We tested the effects of depleting 24 enzymes that catalyze reactions in fatty acid metabolism in our siRNA-based screen, but only FAS and SCD2 were found to be necessary for adipogenesis (Figs. 2 and 3 and supplemental Fig. 1). Mice express 4 isoforms of SCD (SCD1-4), which exhibit ~80% sequence similarity, whereas humans have two isoforms (SCD1 and SCD5) that are ~60% similar in sequence (31)(32)(33). However, all four mouse SCD isoforms are nearly 80% similar to human SCD1 (31,32,34,35). Mouse SCD1 is the best characterized SCD isoform and is expressed in adipose tissue, liver, muscle, and sebaceous glands; SCD2 is expressed ubiquitously; SCD3 is expressed in the harderian gland and in sebocytes in the skin; and SCD4 is expressed in the heart (31). The reason for multiple highly homologous isoforms in the mouse has remained unclear, especially since SCD1 and SCD2 apparently utilize the same substrates with the same efficiency (29). One possible explanation for the redundancy in SCD isoforms is the need for differential expression in various tissues during specific stages of development (31). However, depletion of SCD1 in fully differentiated cells did not have a major impact on adipocyte-specific gene expression despite the higher expression of SCD1 versus SCD2 (supplemental Fig. 2C). Therefore, despite the predicted similarity in substrate usage and the common cellular localization of the enzymes, SCD1 and SCD2 appear to have disparate cellular functions in 3T3-L1 adipocytes (29,31,34,35). Interestingly, Affymetrix Gene Chip analysis reveals that when mice are put on a high fat diet, SCD2 expression increases 44-fold, whereas SCD1 expression shows little change (Table 1). These data suggest that SCD2 may also have a specific role in promoting adipogenesis in vivo, since its expression increases during a time of increased adipogenesis (36) despite the already high expression of SCD1 (34,37).
It should be noted that the requirement for a Δ9-desaturase during adipogenesis is somewhat surprising, since Gomez et al. (38) showed that adipogenesis of 3T3-L1 cells is not affected when induced in the presence of the SCD chemical inhibitor sterculic acid. Perhaps this discrepancy can be explained by a selectivity of the inhibitor for the highly homologous protein SCD1, thereby preserving SCD2 activity and adipogenesis. This would be consistent with our results showing that the depletion of SCD1 did not attenuate adipogenesis. Nevertheless, the studies presented here are not the first evidence suggesting separate cellular functions of the enzymes, since SCD1 deficiency leads to skin abnormalities despite SCD2 expression in the skin (32).
PPARγ protein expression was found to be dramatically reduced upon SCD2 depletion in mature adipocytes (Figs. 3A and 5B), which explains why there is a decrease in the expression of many PPARγ-regulated genes (Fig. 5A). Furthermore, the reduction in PPARγ protein expression is due to a decrease in general protein synthesis and not degradation, since the turnover of PPARγ protein when protein synthesis is inhibited by cycloheximide is unaffected by the depletion of SCD2 (Fig. 7). Consistent with this interpretation, there is a decrease in newly synthesized PPARγ protein as determined by [35S]methionine/cysteine labeling (Fig. 8) and in the association of actively translating ribosomes with mRNA, including PPARγ mRNA (Fig. 9). Because SCD2 is required for general protein synthesis, PPARγ is not the only protein that is reduced in expression upon SCD2 depletion. In fact, examination of the total lysate from cells labeled with [35S]methionine/cysteine shows a significant 15% decrease in newly synthesized protein in SCD2-depleted cells (data not shown). Unlike PPARγ, however, many proteins decrease in expression at the transcript level; conversely, many transcripts also increase in expression with SCD2 depletion (Fig. 6), which taken together makes it difficult to determine the effect of SCD2 on total protein synthesis. SCD2 depletion does result in a post-transcriptional decrease in the expression of proteins other than PPARγ, such as AKT1 and β-catenin. The decreased expression of these proteins also appears to be due to a decrease in translational efficiency, since the association of AKT1 and β-catenin mRNA shifts from polysomes to monosomes (data not shown). However, we have not verified that the synthesis of these proteins is decreased using [35S]methionine/cysteine metabolic labeling or determined whether the degradation rate of these proteins increases with SCD2 depletion; therefore, we cannot conclude that the decrease in their expression is due to a decrease in translation.
Altogether, our data indicate that unsaturated fatty acids may regulate a pathway to enhance the machinery of protein translation in adipocytes. Because oleate is a major unsaturated fatty acid product of SCD2, we tested whether exogenous addition of oleate would restore the decrease in PPARγ protein levels upon SCD2 depletion (29,31). However, even the addition of oleate at a concentration as high as 1 mM did not restore PPARγ levels (data not shown). Therefore, perhaps SCD2 is required to produce an unsaturated fatty acid other than oleate, or is required for the proper shuttling of an unsaturated fatty acid, as seen with linoleate in the SCD2 knock-out mouse (31), or is necessary for a protein-protein interaction that regulates translation.
FIGURE 9. SCD2 depletion decreases polysome association with mRNA in cultured adipocytes. Seven days post-differentiation, adipocytes were electroporated as described, and after 24 h of siRNA transfection, cytoplasmic extracts were prepared and fractionated on a 10-50% sucrose gradient. The absorbance of each fraction was determined at A254, and total RNA was extracted from fractions 2-13. PPARγ mRNA was quantified from equal volumes of the fractions using real-time PCR and expressed as a percentage of the maximum PPARγ mRNA in each sample. The data shown represent one of four experiments with similar results.
To our knowledge, the only previously published evidence of regulation of protein synthesis by unsaturated fatty acids involves arachidonic acid or eicosapentaenoic acid. Arachidonic acid has been shown to both activate and inhibit protein translation in diverse cell systems, whereas eicosapentaenoic acid has been shown to inhibit translation initiation by inducing eIF2α phosphorylation (39-41). Therefore, we examined eIF2α phosphorylation in response to depletion of SCD2 but did not find a difference between SCD2-depleted and control adipocytes (data not shown). Protein synthesis can also be controlled through the protein kinases AMP-activated protein kinase and mTOR (42). An increase in AMP-activated protein kinase activity could lead to decreased peptide elongation through activation of eEF2 kinase, which then phosphorylates and inhibits eEF2, a factor that promotes protein chain elongation. Interestingly, this pathway may be regulated by unsaturated fatty acids, since SCD1 deficiency in mice leads to increased AMP-activated protein kinase activity in the liver (43). In SCD2-depleted adipocytes, we did find an approximately 80% increase in AMP-activated protein kinase phosphorylation and a small 20% increase in eEF2 phosphorylation compared with control cells (data not shown). However, these increases in AMP-activated protein kinase and eEF2 phosphorylation associated with SCD2 depletion do not appear to mediate the decrease we observe in protein synthesis, since eliminating the increase in phosphorylation of eEF2 by the dual depletion of eEF2 kinase and SCD2 did not restore PPARγ protein levels (data not shown). It is reported that mTOR positively regulates protein synthesis by phosphorylating and activating RS6K and 4EBP1 (44,45). Although SCD2 depletion causes a reduction in RS6K and 4EBP1 protein levels, it does not reduce the phosphorylation of these proteins, suggesting that the mTOR pathway is not affected (data not shown). Consistent with these results, inhibition of mTOR with rapamycin also decreases RS6K1 and RS6K2 activity but does not affect PPARγ levels (44,46,47). Therefore, it remains unclear how SCD2 regulates mRNA association with polysomes, and this is an important question for future studies to address.
It will also be interesting in future studies to test whether SCD2 plays a unique role in modulating glucose homeostasis in mice. White adipose tissue is a key regulator of whole-body metabolism through its ability to control glucose disposal and insulin sensitivity in peripheral tissues (1, 17). This regulation appears to be mediated by two main mechanisms (1, 17, 48): 1) storing excess fatty acids in the form of triglyceride to prevent lipotoxicity in peripheral tissues and 2) secreting insulin-sensitizing factors, such as adiponectin. PPARγ plays a central role in both of these processes by promoting expression of genes involved in fatty acid esterification to triglyceride (48) and the expression of adiponectin (48, 49). SCD2 may have a profound influence on these processes through its regulation of PPARγ and adipogenesis. Unfortunately, SCD2−/− mice do not survive and cannot be studied in this regard. Thus, these important questions regarding the physiological role of SCD2 in whole-body metabolism must await the generation of mouse models with tissue-specific depletion of this enzyme. | 8,622.6 | 2008-02-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Numerical analysis of hydrodynamics influenced by a deformed bed due to a near-bank vegetation patch
This study uses a 2D hydro-morphological model to analyze hydrodynamics over flat and deformed beds with a near-bank vegetation patch. By varying the patch density, the generalized results show that the hydrodynamics over deformed beds differ considerably from those over flat beds. It is found that the deformed bed topography leads to an apparent decrease in longitudinal velocity and bed shear stress in the open region and in the longitudinal surface gradient over the entire vegetated reach. However, the transverse flow motion and transverse surface gradient in the regions of the leading edge and trailing edge are enhanced or maintained, suggesting a strengthening of secondary flow motion. Interestingly, the deformed bed topography tends to alleviate the horizontal shear caused by the junction-interface horizontal coherent vortices, indicating that turbulence-induced flow mixing is strongly inhibited as the bed is deformed. The interior flow adjustment through the patch for the deformed bed requires a shorter distance, La, which is related to the vegetative drag length, (Cd a)⁻¹, by a logarithmic formula (La = 0.4 ln[(Cd a)⁻¹] + b, with b = 3.83 and 4.03 for the deformed and flat beds, respectively). The sloping bed topographic effect accelerating the flow in the open region may account for the quick flow adjustment.
INTRODUCTION
Vegetation widely occurs near the banks of natural waterways such as rivers, channels and streams. The blockage by vegetation modifies the original flow path, resulting in an evident reduction of flow velocity in the vegetated region but an increase in the adjacent open region (Rominger & Nepf 2011;Yan et al. 2016). The sediment transport pattern in the vegetated region is dominated by deposition and in the open region by erosion. With a mobile bed, the bed tends to aggrade in the vegetated region and degrade in the open region. As a consequence, diverse bed forms are established around the near-bank vegetation. These bed forms are identified as pools, riffles and sediment bars (Kim et al. 2015;Xu et al. 2019). Together with seasonal flow variability, these bed forms provide habitats for different species (Xu et al. 2012;Santos et al. 2014;Zhao et al. 2015;Ghaderi et al. 2020) from the perspective of aquatic ecosystem restoration.
For a channel bed occupied by a near-bank vegetation patch, flow mass gets redistributed with more flow being directed to the adjacent open region. As a consequence, the flow in the open region accelerates with velocity increasing and in the vegetated region decelerates with velocity decreasing. The junction of two regions forms significant flow shearing, which under a certain patch density tends to induce flow instability to resemble horizontal coherent vortices with pronounced turbulent activity. The coherent vortices continuously growing along the patch lead to the exchange of flow mass and momentum between the low-velocity vegetated region and high-velocity open region (Rominger & Nepf 2011;Nepf 2012;Yan et al. 2016). As the flow exits the vegetated reach, flow separation occurs, leading to the flow in the two regions recovering to a uniform pattern.
When the bed is erodible, bed scour in the open region adjacent to the patch is triggered as the flow accelerates and the bed shear stress increases. However, suspended loads tend to deposit in the vegetated region because of flow deceleration. At equilibrium, the deformed bed topography exerts an effect on the hydrodynamics. With the same sediment size, the generated scour pool in the open region should reduce the velocity and bed shear stress so that further bed erosion is prevented. Correspondingly, other hydrodynamic parameters associated with flow velocity, for instance, the water surface gradient and turbulence, should also be influenced by the deformed bed topography. Therefore, hydrodynamic knowledge of near-bank vegetated flows obtained for a flat bed may not be directly transferable to a deformed bed. Numerous studies have demonstrated the pronounced impact of bed topography on hydrodynamics under a range of geometric boundaries (Blanckaert 2010;Koken & Constantinescu 2011;Konsoer et al. 2016;Chang et al. 2017). For instance, Blanckaert (2010) experimentally examined bed topographic effects on hydrodynamics in a bend channel and found that a deformed bed topography might lead to the enhancement of secondary flows and the alleviation of flow shear-induced turbulence. Few studies have investigated, either experimentally or numerically, how a deformed bed topography impacts the hydrodynamics in near-bank vegetated channels. Some studies have shown how hydrodynamics behave around a circular patch with deformed bed topography (Chang et al. 2017;Gu et al. 2018), implying that the scoured hole in the patch wake tends to inhibit the development of a vortex street.
This study aims to examine the differences in flow characteristics around near-bank vegetation over a flat bed and over a deformed bed. To achieve this goal, a 2D depth-averaged hydro-morphological model is employed to simulate flow and bed adjustment around a near-bank vegetation patch. Specifically, the flow motion is solved by the shallow water equations with the vegetation effect modeled by the drag force method, and the bed deformation is solved by the Exner equation with pre-solved flow fields. One advantage of numerical modeling is that the dynamic change of hydrodynamics and morphodynamics can be conveniently monitored during computation. The co-evolution of hydrodynamics and morphodynamics reflects how the erodible bed immediately responds to the adjustment of the hydrodynamics, which is a central concern of this study. The main objective of this study is to analyze the impact of a deformed bed on the hydrodynamics. Therefore, to stay focused, the only parameter influencing the bed deformation varied in the modeling is the vegetation density. Other parameters such as patch length, patch width, water discharge and sediment size were kept constant and are not included in the analysis.
Model description
A 2D depth-averaged hydro-morphological model (Nay2DH, an open source code) was used to simulate the flow around a near-bank vegetation patch and the resulting bed morphological evolution. The details of the hydro-morphological model are documented in the Supplementary Material, so only a brief description is presented here. This model incorporates a hydrodynamic module and a morphodynamic module, which are explicitly coupled with each other. The hydrodynamic module solves the shallow water equations, governing the depth-averaged motion of the water flow. To model the blockage effect of vegetation on the water flow, the vegetation patch is regarded as a porous medium and its effect is represented by the drag force method (a quadratic velocity law), which has been widely used in modeling vegetated flow (Zeng & Li 2014;Yan et al. 2016;Xu et al. 2019). However, the turbulence arising from the vegetation stems cannot be modeled in this method, although it is actually significant for sediment motion such as particle saltation (Tajnesaie et al. 2020).
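As a concrete illustration of the drag force closure described above, the following minimal sketch shows how the quadratic velocity law could be evaluated as a momentum sink inside the patch cells; the function and its arguments are illustrative and are not taken from the Nay2DH source code.

```python
import numpy as np

def vegetation_drag(u, v, cd=1.2, a=5.56):
    """Quadratic-law drag sink per unit mass, F = -0.5*Cd*a*|U|*U,
    added to the depth-averaged momentum equations inside patch cells.

    u, v : 2D arrays of depth-averaged velocities (m/s)
    cd   : bulk drag coefficient of the stems (dimensionless)
    a    : patch density, frontal area per unit volume (1/m)
    """
    speed = np.sqrt(u ** 2 + v ** 2)
    fx = -0.5 * cd * a * speed * u  # x-momentum sink (m/s^2)
    fy = -0.5 * cd * a * speed * v  # y-momentum sink (m/s^2)
    return fx, fy
```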
To close the Reynolds stress terms, the standard k-ε turbulence model is used to solve the eddy viscosity for its effectiveness and stability. The sediment motion and fluvial processes are then solved by the morphodynamic module. With the flow velocities computed by the hydrodynamic module, a bed-load transport equation estimates the sediment flux, for which in this study we employ Meyer-Peter and Müller's formula. The bed deformation is solved by the Exner equation, which links the change in bed elevation to the bed-load flux in the longitudinal and transverse directions. To avoid unrealistically large-gradient bed topography, a slope failure model based on the sediment's angle of repose is used for bed deformation correction (Fischer-Antze et al. 2001;Jang & Shimizu 2005).
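A one-dimensional sketch of how a Meyer-Peter and Müller bed-load flux can feed the Exner equation is given below; the coefficient 8.0, the critical Shields number 0.047 and the porosity 0.4 are textbook defaults rather than values quoted in this study, so the snippet should be read as an illustration of the coupling, not as the model's implementation.

```python
import numpy as np

G, RHO_W, RHO_S = 9.81, 1000.0, 2650.0   # gravity, water and sediment densities

def mpm_bedload(tau, d50, theta_cr=0.047):
    """Meyer-Peter & Mueller bed-load flux (m^2/s) from bed shear stress tau (Pa)."""
    R = (RHO_S - RHO_W) / RHO_W                  # submerged specific gravity
    theta = tau / (RHO_W * G * R * d50)          # Shields number
    phi = 8.0 * np.maximum(theta - theta_cr, 0.0) ** 1.5
    return phi * np.sqrt(R * G * d50 ** 3)

def exner_update(z_bed, tau, d50, dx, dt, porosity=0.4):
    """One explicit step of the 1D Exner equation: (1 - p) dz/dt = -dqb/dx."""
    qb = mpm_bedload(tau, d50)
    return z_bed - dt / (1.0 - porosity) * np.gradient(qb, dx)
```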
The governing equations of the flow motion and bed deformation are discretized by the finite difference method. The cubic interpolated pseudoparticle (CIP) method is applied as a third-order numerical scheme, which ensures the accuracy of the numerical solution (Jang & Shimizu 2005). For the flow governing equations, the water discharge is specified as the inflow boundary at the inlet, and a constant water depth is specified at the outlet. At the sidewalls, the no-slip condition is applied for the flow velocities, turbulent kinetic energy and dissipation rate. For the governing equations of morphodynamics, no sediment was supplied at the inlet, so as to reproduce the clear-water scour process of the experiment. A detailed mathematical description can be found in Jang & Shimizu (2005) and the model manual.
Flume experiment for model validation
The validation of the hydro-morphological model was conducted against a flume experiment (Xu et al. 2019), which documents the bed morphological change around a near-bank vegetation patch as shown in Figure 1. The bed slope was set to 1/300. The near-bank patch, 2 m long, occupies half of the width of the flume bed (width 0.31 m). The patch is located in the center of the experimental reach, which is about 6 m in length. A layer of sediment with a median size of 1.42 mm was laid on the flume bottom. The layer thickness was set to 6.5 cm, which is thick enough to keep the bed erodible during the entire scour experiment. The vegetation patch was mimicked by an array of solid cylinders distributed rectilinearly. With a cylinder spacing of 0.03 m × 0.03 m, the patch density is a = 5.56 m⁻¹. Assuming a drag coefficient Cd = 1.2, the drag length scale is (Cd a)⁻¹ = 0.15 m. For the experiment, the water discharge was Q = 45 m³/h and the downstream depth Hd = 0.085 m; therefore, the averaged velocity was U = 0.474 m/s. To direct the flow smoothly from the flat bottom to the sediment bed, a sloping gravel layer was installed in front of the sediment bed. At equilibrium of bed scour, the bed topography was measured by a point gauge on a dense grid. Acoustic Doppler Velocimetry (ADV) was used to measure the flow velocity over the equilibrium bed topography at different cross-sections, as shown in Figure 1.
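The derived quantities quoted above can be reproduced from the stated experimental parameters; the short check below assumes that the mean velocity is obtained from the discharge divided by the 0.31 m flume width times the 0.085 m downstream depth.

```python
cd, a = 1.2, 5.56            # drag coefficient (-) and patch density (1/m)
Q = 45.0 / 3600.0            # discharge: 45 m^3/h -> m^3/s
B, H = 0.31, 0.085           # flume width and downstream depth (m)

drag_length = 1.0 / (cd * a)          # (Cd*a)^-1, about 0.15 m
U = Q / (B * H)                       # averaged velocity, about 0.474 m/s
print(f"drag length scale = {drag_length:.2f} m, mean velocity = {U:.3f} m/s")
```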
Model validation
The grid size (dx = 0.1 m and dy = 0.01 m) was chosen for the modeling work after a grid convergence test, which can be found in the Supplementary Material. The validation of the depth-averaged hydro-morphological model was conducted by comparing the simulated flow velocities and bed topography with the flume data. Figure 2(a) presents the verification with respect to the longitudinal velocities along the patch. It is apparent that the simulated flow velocities agree well with the experimental results. The average error is given by Equation (1), in which N denotes the number of points in each profile. The average error ranges from 8% to 19%, with a lower error in the entrance region and a higher error further downstream. This is likely because the flow along the patch gradually evolves with the generation of secondary flows, which impacts the redistribution of flow mass and momentum and leads to the difference in error. A 2D numerical model, however, is relatively poor at simulating the effect of secondary flows. Figure 2(b) shows the performance in modeling bed deformation. The simulated longitudinal bed profiles fit the measured points. In particular, the matching is better for the open region (y/bv = 1.06 and 1.6). For the vegetated region, the bed degradation in the downstream reach is underestimated. This might be because the drag force method, characterizing only the bulk effect of individual cylinders, is not sufficient to describe sediment erosion around individual cylinders. However, the simulated bed deformation for the open region, which receives more attention for the purposes of this study, is accurate. Therefore, it can be said that the hydro-morphological model gives generally satisfactory results, and the model is used for further analysis of the hydrodynamics over flat and deformed bed topography, which is emphasized in the following sections.
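Equation (1) itself did not survive extraction here; the sketch below therefore assumes the usual definition of the average relative error between simulated and measured velocities over the N points of a profile, and should be read only as a plausible reconstruction of that definition.

```python
import numpy as np

def average_error(u_sim, u_meas):
    """Assumed form of Equation (1): mean relative error (%) over the N
    points of a velocity profile."""
    u_sim = np.asarray(u_sim, dtype=float)
    u_meas = np.asarray(u_meas, dtype=float)
    return 100.0 * np.mean(np.abs(u_sim - u_meas) / np.abs(u_meas))
```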
Bed topography characteristics
The intention of this study is to explore the impact of bed deformation on hydrodynamics in a near-bank vegetated channel. Therefore, it is useful to first know the bed topography characteristics under different patch densities. Figure 3 shows the bed topographic evolution under different patch densities starting from the same flat bed. It can be observed that as the patch density increases, the scour pool in the open region adjacent to the patch is produced and enhanced by the accelerating flow velocities, and the eroded sediment tends to deposit in the downstream region. The pool enhancement occurs in both the vertical and longitudinal dimensions. For the smallest density (a = 1 m⁻¹), the patch exerts a lighter impact on the flow, which results in a scour pool located in the far-downstream region, with the bed near the entrance less eroded. A significant finding is that as the patch density increases, the deepest area of the pool tends to expand upstream, resulting in the elongation of the scour pool. However, the longitudinal dimension of the pool remains comparable to the patch length even as the patch density continues to increase.
Hydrodynamics over deformed bed topography: water surface, flow velocity, bed shear stress and turbulence
Knowing the bed topography characteristics under increasing patch density, we next examine how the deformed bed topography impacts the hydrodynamics. The first examined parameters are the longitudinal and transverse velocities, which are shown in Figure 4. The longitudinal velocity adjusts after entering the vegetated reach, with the flow decelerating in the vegetated region and accelerating in the open region for both the flat bed and the deformed bed. However, the deformed bed topography induces a response in the longitudinal velocity that differs from that for the flat bed. Over the flat bed, the velocity in the open region continuously increases as the flow propagates downstream, with the magnitude peaking in the region of the trailing edge (downstream end). Meanwhile, the velocity magnitude continuously increases with increasing patch density, while the high-velocity core expands upstream. Over the deformed bed, the longitudinal velocity distribution is similar to that over the flat bed, with the major difference that the longitudinal velocity is significantly reduced compared with that over the flat bed. It can be observed that the longitudinal velocity in the open region has a similar magnitude range even as the patch density increases. The decreasing velocity with increasing patch density in the vegetated region requires an increasing net discharge in the open region to satisfy mass continuity. The larger cross-sectional area due to the deepening and elongation of the scour pool accommodates the increasing net discharge without increasing the velocity. Another phenomenon is that for the deformed bed, a wider transverse transitional region links the low-velocity region and the high-velocity region, in contrast to a sharp variation for the flat bed. Moreover, the transitional band is wider for a larger patch density. For the transverse velocity over the flat bed (Figure 4(b)), negative and positive zones occur near the leading edge and trailing edge, reflecting the flow convergence and divergence caused by the presence of the near-bank patch. Increasing the patch density enhances the magnitude of the two zones for the flat bed. For the deformed bed, a more pronounced transverse flow motion can be observed even over the slightly deformed bed for the smallest patch density. Likewise, a larger patch density induces more enhanced transverse flow motion, but the enhancement may be lighter than that for the flat bed. The pronounced transverse motion indicates that the bed deformation may enhance the effect of the secondary flows evolving along the patch, as has been found in channels occupied by a near-bank patch. Of interest is that for both the flat bed and the deformed bed, the positive velocity zone near the trailing edge transforms from a long pattern to a short one as the patch density increases, indicating that the flow separation effect becomes more pronounced.
Due to the blockage by the vegetation patch, the momentum is redistributed spatially through the contributions of different hydrodynamic effects. Among these, the hydraulic pressure gradient, characterized by the water surface gradient, plays an essential role. For the flat bed, the longitudinal surface gradient (Sx = ∂zs/∂x) becomes pronounced over the vegetated reach (Figure 5(a)). This is attributed to the large vegetative resistance, which is partially balanced by the force due to the positive hydraulic pressure gradient. Under a small density (a = 1 m⁻¹), the water surface gradient tends to be uniform along the patch. As the patch density increases, a core of higher magnitude appears in the entrance region of the patch. This might be because the entrance energy consumed by the patch, related to the velocity loss, is more pronounced for a larger density. Furthermore, the high core diffuses to the adjacent open region with a slight decay, which is more pronounced for a larger density. For the deformed bed, the longitudinal water surface gradient is generally lower than that for the flat bed. However, it can be observed that the distribution pattern is similar to that for the flat bed. The sloping bed topography may account for this decay effect. Interestingly, negative values arise in front of the vegetated reach for the deformed bed, which are negligible for the flat bed.
Compared with the longitudinal component, the transverse surface gradient (Sy = ∂zs/∂y) behaves similarly for the flat bed and the deformed bed (Figure 5(b)). At the entrance, a positive-negative zone pair is distributed in an orderly way. The first positive zone indicates that the hydraulic pressure may drive transverse flow motion toward the open region and the incipient transverse motion of sediment from the vegetated zone to the open zone. The pattern of the transverse surface gradient corresponds well to the transverse velocity. Downstream of the exit, a negative-positive zone pair is also distributed in an orderly way. The first negative zone corresponds to the flow separation in the patch wake, which may also partially contribute to the transverse motion of sediment toward the patch wake. Furthermore, the effect of the transverse surface gradient pattern is enhanced as the patch density increases.
The distribution of the bed shear stress around the patch governs the sediment motion and the stable bed topography (Figure 6). For the flat bed, the bed shear stress concentrates in the open region near the trailing edge, similar to the pattern of longitudinal velocity, indicating the most erodible area. The bed shear stress in the vegetated region decreases greatly due to vegetation resistance. As the patch density increases, the zone of concentrated bed shear stress tends to expand both upstream and downstream. However, the concentrated zone for the initial condition is not consistent with the orientation of the scour pool at equilibrium under a larger patch density, indicating that the bed shear stress redistributes during the bed topographic adjustment. After the bed topography adjustment is completed, the bed shear stress exhibits a distinct distribution pattern. The bed shear stress in the vegetated zone differs only slightly from that for the flat bed. This is because the bed in the vegetated zone is well maintained by the vegetation without apparent bed deformation. The concentrated bed shear stress in the open region, however, greatly decreases as the bed is eroded. This can be explained by the fact that the excessive bed shear stress vanishes as the upper-layer sediment is eroded. A larger patch density still leads to higher bed shear stress. This is because, under a larger patch density, the more steeply sloping bed topography exerts a topographic effect on the sediment, so that a larger critical shear stress is required to initiate sediment motion compared with uniform flow.
The partial blockage by the near-bank patch produces a low-velocity zone in the vegetated region and a high-velocity zone in the open region, which generates significant flow shear along the interface. The flow shear induces flow instability that gives rise to horizontal coherent vortices. The vortices can be well quantified by the vorticity (ω = ∂U/∂y − ∂V/∂x), which is shown in Figure 7. For the flat bed, the vorticity arises from the junction interface after the flow enters the vegetated reach and develops laterally along the patch up to the trailing edge, indicating continuous development of the horizontal coherent vortices. The increasing patch density, promoting the flow shearing, is likely to produce a pronounced vorticity field. These results are well consistent with previous experimental and numerical studies (White & Nepf 2007;Huai et al. 2015). For the deformed bed, the spatial vorticity pattern changes completely compared with that for the flat bed. First, the magnitude of the vorticity for the deformed bed is generally lower than that for the flat bed. This is consistent with the reduced longitudinal velocity for the deformed bed (see Figure 4(a)). Unlike the continuous development of the vorticity along the patch for the flat bed, a wider significant zone of vorticity is reached near the entrance region and then this zone continues to shrink along the patch. The transformed pattern of vorticity indicates that the bed deformation tends to alleviate the turbulence by reducing the effect of the flow shearing. As compensation, the secondary flows indicated by the enhanced transverse flow motion (see Figure 4(b)) might play an outstanding role in the exchange of mass and momentum.
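The depth-averaged vertical vorticity used in Figure 7 can be evaluated directly from the simulated velocity fields; a minimal sketch with finite differences is shown below, where the default spacings follow the grid used for the simulations (dx = 0.1 m, dy = 0.01 m) and the array layout is an assumption.

```python
import numpy as np

def depth_averaged_vorticity(U, V, dx=0.1, dy=0.01):
    """omega = dU/dy - dV/dx on a structured grid.

    U, V are 2D arrays of depth-averaged velocities indexed as [j (y), i (x)]."""
    dU_dy = np.gradient(U, dy, axis=0)
    dV_dx = np.gradient(V, dx, axis=1)
    return dU_dy - dV_dx
```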
DISCUSSION
The verified 2D hydro-morphological model allows a deep analysis of the impact of bed deformation on hydrodynamics. The presentation of the hydrodynamics over the flat bed, acting as a reference, allows a better understanding of the effect of the deformed bed topography. When a patch of vegetation partially occupies the channel bed, the hydrodynamics first adjust due to the spatial balance between the individual components of momentum. The flow velocity shows a characteristic pattern in both the longitudinal and transverse components. The longitudinal velocity decelerates in the vegetated region, accelerates in the open region and peaks near the trailing edge for the flat bed, which is closely related to the distribution of bed shear stress. The distribution of longitudinal velocity and bed shear stress indicates that the bed erosion in the open region initiates near the trailing edge of the patch. For a small patch density (a = 1 m⁻¹), the deepest area of the scour pool is consistent with that of the bed shear stress. However, as the patch density increases, the deepest area of the scour pool expands upstream, differing from the patterns of the longitudinal velocity and bed shear stress for the flat bed. Interestingly, the elongated pool profile varies consistently with the distribution of the longitudinal velocity and bed shear stress for the deformed bed, the magnitudes of which, however, are significantly reduced.
With the bed topography being deformed, the lateral growth of the longitudinal velocity develops over a larger width downstream of the vegetated reach (see Figure 4(a)), and this effect is enhanced as the pool is deepened by the increase in patch density. This suggests that the lateral development of the longitudinal velocity is attributable to the transverse sloping bed topography. An extreme scenario is that for the flat bed the lateral development of the longitudinal velocity has a sharp transition near the junction. For a flat bed, the transverse growth of the longitudinal velocity, or flow mixing, is controlled by the generated horizontal coherent vortices indicated by the vorticity (Nepf (2012) stated that for a flat bed horizontal vortices form for ah > 0.1, where h is the water depth). The distribution of the vorticity along the junction for the deformed bed is much lower than that for the flat bed. Therefore, the above statements indicate that the deformed bed, by reducing the velocity in the open region, diminishes the shear effect (mixing) in the junction region but exerts a topographic effect that enhances the transverse flow mixing. The essential cause is thought to be that the bed topographic effect might enhance the secondary flow, which is likely to exceed the mixing effect induced by the horizontal vortices. The increased depth-averaged transverse velocity around the patch supports this interpretation (see Figure 4(b)). Strong secondary flows are commonly present over deformed bed topography such as compound beds (e.g., Yang et al. 2007).
The 2D depth-averaged model cannot simulate the generation of the secondary flow evolving along the patch, which has been found in flume experiments and numerical simulations (Nezu & Onitsuka 2001). Some hydrodynamic parameters, however (for instance, the transverse velocity and transverse surface gradient), can indicate its formation. For either a flat bed or a deformed bed, a pronounced transverse surface gradient occurs as the flow encounters the patch. The positive value, indicating a hydraulic pressure head, tends to drive the transverse flow motion from the patch to the open region, which is confirmed by the negative transverse velocity (see Figure 4(b)). As the patch density increases, the surface gradient and transverse motion both become more pronounced so that secondary flows are most likely to be triggered. However, when the bed is deformed, the transverse flow motion near the leading edge is enhanced despite the fact that the surface gradient is reduced. The transverse motion for the deformed bed is negatively affected by the reduced surface gradient but more strongly enhanced by the sloping bed topography. Therefore, the deformed bed topography is expected to induce more intense secondary flows compared with those for the flat bed. The secondary flows induce transverse momentum exchange and thus can promote the flow adjustment. Previous studies show that the interior adjustment of flow in the vegetated region is related to the blocking effect, characterized by the patch drag length ((Cd a)⁻¹) and patch width (bv). Rominger & Nepf (2011), based on scale analysis and experimental data, proposed a quantitative formula (Equation (2)) to describe such a relation. Likewise, it is worth examining how the deformed bed topography impacts the interior flow adjustment. Figure 8(a) shows the longitudinal variation of the longitudinal velocity in the vegetated region (y/bv = 0.25) as the patch density increases for the flat bed and the deformed bed. The full adjustment of the flow is identified where the velocity varies by less than 5%. It can be clearly observed that for all scenarios the full adjustment of the flow (regarding the longitudinal velocity) for the flat bed requires a longer distance (La). The increasing patch density (inversely related to (Cd a)⁻¹) leads to a further decrease in the distance of full flow adjustment for both the flat bed and the deformed bed, agreeing well with the results observed in previous studies. By plotting the drag length against the adjustment distance, the two data sets for the flat bed and the deformed bed can each be well described by a logarithmic formula, as shown in Figure 8(b). Interestingly, the two formulas differ from each other only by 0.2; in other words, the interior flow adjustment distance for the deformed bed is systematically shorter than that for the flat bed by an intercept difference of 0.2. However, the obtained relation deviates greatly from the formula proposed by Rominger & Nepf (2011) for low patch density (high (Cd a)⁻¹). This might be because the patch length used is insufficient for the full interior adjustment of the flow for sparse vegetation.
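The reported logarithmic relation can be evaluated, or its intercept refit, as in the sketch below; the fixed slope of 0.4 and the two intercepts simply follow the fitted formula quoted in this study, while the units or normalization of La and of the drag length are assumed to be those plotted in Figure 8(b).

```python
import numpy as np

def adjustment_length(drag_length, b):
    """La = 0.4*ln[(Cd*a)^-1] + b, with b = 4.03 (flat bed) or 3.83 (deformed bed)."""
    return 0.4 * np.log(drag_length) + b

def fit_intercept(drag_lengths, la_values, slope=0.4):
    """Least-squares intercept b when the slope of the logarithmic fit is held fixed."""
    drag_lengths = np.asarray(drag_lengths, dtype=float)
    la_values = np.asarray(la_values, dtype=float)
    return float(np.mean(la_values - slope * np.log(drag_lengths)))
```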
CONCLUSIONS
The growth of near-bank vegetation is likely to induce the adjustment of both the hydrodynamics and the surrounding bed topography. How the deformed bed topography impacts the hydrodynamics determines bed stability, future fluvial evolution and the ecological effect. Furthermore, the existing knowledge of near-bank vegetated flows over flat beds may bring uncertainty when applied to a deformed bed. However, this important issue has rarely been addressed by existing studies. The present study conducts a numerical investigation of the impact of a deformed bed topography on the hydrodynamics in an open channel occupied by a near-bank vegetation patch. The strategy of this study is to first verify the 2D hydro-morphological model against the hydrodynamics and topography of a deformed bed in a laboratory flume and then to apply the model to conduct bed scour numerical experiments under a variety of patch densities. The following summary can be drawn from the simulated results: (1) The simulation shows that the deepest area of the scour pool adjacent to the patch expands upstream as the patch density increases. However, the longitudinal velocity and bed shear stress for the flat bed (before the initiation of bed scour) peak near the trailing edge of the patch, indicating that the bed shear stress continuously adjusts during the scour process.
(2) The formation of a deformed bed topography leads to a significant reduction of the longitudinal velocity and bed shear stress. The transverse flow motion near the leading edge and trailing edge, indicating flow convergence and divergence, is however apparently enhanced, which may strengthen the secondary flows along the patch. The deformed bed topography apparently alleviates the longitudinal water surface gradient but has a lighter effect on the transverse surface gradient. (3) The vorticity arising from the coherent vortices along the junction between the patch and the adjacent open region is strongly inhibited by a deformed bed topography compared with that for a flat bed. This indicates that a deformed bed topography can reduce the extent of turbulence and flow mixing between the vegetated region and the open region. Instead, the secondary flows generated in the open region can compensate for the mixing effect lost with the reduced turbulence. (4) We studied the flow adjustment over a deformed bed topography and found that the interior adjustment distance regarding the longitudinal velocity is shortened due to the presence of the bed topography. This phenomenon can be explained by the effect of the sloping bed topography near the entrance of the vegetated reach. A logarithmic relation is found to exist between the adjustment distance and the vegetative drag length, which deviates from the formula proposed by previous studies for sparse vegetation. The shortening of the flow adjustment distance by the deformed bed topography is also suggested by the redistribution of the net water discharge between the vegetated region and the open region.
However, the results obtained should be applied with caution due to the scale effect of the flume. The flume scale is more representative of a high-gradient mountainous stream, which is often characterized by a narrow width and a relatively small aspect ratio (width-to-depth ratio). This also sets a direction for future research on near-bank vegetated hydrodynamics. Furthermore, the hydro-morphological model used, which cannot consider the turbulence arising from the vegetation stems, might underestimate sediment transport in the vegetated region. | 7,092.4 | 2021-09-29T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
A Novel Multimedia Player for International Standard—JPEG Snack
The advancement in mobile communication and technologies has led to a daily increase in the use of short-form digital content. This short-form content is mainly based on images, which urged the Joint Photographic Experts Group (JPEG) to introduce a novel international standard, JPEG Snack (International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) IS 19566-8). In JPEG Snack, the multimedia content is embedded into a main background JPEG file, and the resulting JPEG Snack file is saved and transmitted as a .jpg file. If someone does not have a JPEG Snack Player, their device decoder will treat it as a JPEG file and display only the background image. As the standard has been proposed recently, a JPEG Snack Player is needed. In this article, we present a methodology to develop a JPEG Snack Player. The JPEG Snack Player uses a JPEG Snack decoder and renders media objects on the background JPEG file according to the instructions in the JPEG Snack file. We also present some results and computational complexity metrics for the JPEG Snack Player.
Introduction
This article proposes a novel multimedia player for JPEG Snack. We also present several experimental results to demonstrate the JPEG Snack Player. The significant contributions of the article are listed: • Description of the JPEG Snack encoded file; • Description of the JPEG Snack decoder; • Description of the JPEG Snack system decoder; • Development of a novel multimedia player for JPEG Snack files, as there is no multimedia player for the JPEG Snack standard, which is at the publication stage of international standardization; • Analysis of the complexity of the player.
The rest of the article is organized as follows: Section 2 summarizes the related work. Similarly, Section 3 briefly explains the JPEG Snack encoded file, followed by Section 4, which comprehensively discusses the JPEG Snack Player. In Section 5, the experimental results are presented. Section 6 presents the comparison of the features of the JPEG Snack Player with other media players. Section 7 presents the limitations and future directions. Finally, Section 8 concludes the work.
JPEG Snack Encoded File
According to the ISO/IEC International Standard (IS) 19566-8 [14], a JPEG Snack file follows the ISO/IEC 10918-1 file format. In the JPEG Snack file, the application 11 (APP11) marker for the JPEG universal metadata box format (JUMBF) box [31] carrying the JPEG Snack representation and metadata is placed after the start of image (SOI) marker. At the same time, the APP11 markers for embedding the media data can be placed anywhere before the start of scan (SOS) marker. Figure 1 shows the file organization of the JPEG Snack file.
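As a minimal illustration of this file organization, the sketch below walks the marker segments of a .jpg file and collects the APP11 (0xFFEB) payloads found between the SOI and SOS markers; parsing the JUMBF boxes inside each payload is beyond this sketch, and skipping of fill bytes is omitted for brevity.

```python
import struct

SOI, SOS, APP11 = 0xFFD8, 0xFFDA, 0xFFEB

def find_app11_segments(path):
    """Return (offset, payload) pairs for every APP11 segment before SOS."""
    with open(path, "rb") as f:
        data = f.read()
    if struct.unpack(">H", data[:2])[0] != SOI:
        raise ValueError("not a JPEG/JPEG Snack file")
    segments, pos = [], 2
    while pos + 4 <= len(data):
        marker, length = struct.unpack(">HH", data[pos:pos + 4])
        if marker == SOS:                        # entropy-coded data follows
            break
        if marker == APP11:
            segments.append((pos, data[pos + 4:pos + 2 + length]))
        pos += 2 + length                        # jump to the next marker
    return segments
```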
JUMBF Box for JPEG Snack
A JUMBF box for JPEG Snack consists of one JPEG Snack description box (JSDB), one instruction set box (INST), and multiple object metadata boxes (OBMBs).
JSDB
A JSDB contains the number of objects and the start time required for the JPEG Snack representation.
INST
An INST contains the information and instructions about the representation of the JPEG Snack composition.
OBMB
Each OBMB contains the media type associated with each media object embedded in the JPEG Snack file. These media types are listed in Table 1. These boxes are explained in detail in [14].
Role of Sensors
A JPEG Snack file needs data from visual sensors, such as the camera, and sound sensors, such as the microphone. As explained above, the JPEG Snack file contains embedded multimedia data, so the inputs to the JPEG Snack encoder can be portable network graphics (PNG) or JPEG-1 images taken from the camera, videos recorded with the camera, or audio recorded with a microphone. The sensors used for the images and videos are compact cameras [32], 360 • cameras [33], digital single-lens reflex (DSLR), and adventure cameras. For audio, microphone sensors are used. The role of sensors in the JPEG Snack file is illustrated in Figure 2.
Methodology
The backbone of the JPEG Snack Player is the JPEG Snack decoder. The JPEG Snack decoder decodes the JPEG Snack file, and the decoded information is rendered. The JPEG Snack Player displays JPEG Snack representations based on the layer and position information obtained from the JPEG Snack decoder. The high-level flow diagram of the JPEG Snack decoder is shown in Figure 3.
JPEG Snack Decoder
Three things are required to decode the JPEG Snack: (a) the background default JPEG image, (b) the playback timeline, and (c) the layer and position of the snack. These components are shown in Figure 4. JPEG Snack decoders decode default background images and translate instructions about displaying embedded objects on the default images. The default image is a JPEG-1 background image with JPEG Snack content embedded using APP11 markers. An embedded object's timeline tells when it will appear on the background image and for how long. The layer and position of the embedded object specify on what portion of the default image it will be displayed and what its size will be.
JPEG Snack System Decoder
A JUMBF parser delivers the JPEG Snack stream to the system decoder. JPEG Snack streams contain media and metadata about object structures and composition descriptions. The appropriate media decoders are invoked, and compositor-object descriptions control playback on the local device. The JPEG Snack system decoder is shown in Figure 5. JPEG Snack's system decoder takes JPEG codestream data. There are two types of embedded JUMBF boxes: JPEG Snack content type JUMBF boxes and embedded file content type JUMBF boxes. Metadata are in the JPEG Snack content type JUMBF box, whereas media data are in the embedded file type JUMBF box. A JUMBF parser extracts metadata and passes them through an object composer. From the JUMBF parser output, the object composer extracts media format, time, and position. The media decoder takes inputs such as media format, time, and media data and outputs media files. Media decoders can decode images or other media formats. Media output and z-order from the object composer are sent to the compositor, which creates snack representations and displays them according to playback timelines.
JPEG Snack Player Algorithm
The JPEG Snack Player follows Algorithm 1. Initially, the JPEG Snack file is decoded, and after decoding, the background JPEG image is picked from the media files and displayed on the player's screen. After showing the background image, the embedded media files are displayed according to the layer and position information related to each media file. The embedded media files are audio, videos, captions, and groups of images. The media type tells us about the embedded media; if it is audio, then the audio player is used to play the audio concurrently with the background image at the time specified in the JPEG Snack file. If the embedded file media type is an image, it is displayed on the background image according to the position specified in the JPEG Snack file. Similarly, if the embedded file media type is video, then a video player is used to play the video at the specified position on the background JPEG image. Captions are also overlaid on the background image according to the information in the JPEG Snack file. Figure 6 shows a visual representation of the steps involved in the JPEG Snack Player algorithm.
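A high-level sketch of Algorithm 1 is given below. The data classes and the renderer interface (show_background, draw_image, play_audio, play_video, draw_text, wait_until, remove_later) are illustrative stand-ins for the decoded JUMBF metadata and the player's rendering back end; they are not part of the standard or of the actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SnackObject:
    media_type: str            # 'image', 'image_group', 'audio', 'video', 'caption'
    start_time_ms: int         # when the object appears on the playback timeline
    position: Tuple[int, int]  # (x, y) on the background image
    layer: int                 # z-order from the instruction set box
    persistent: bool           # keep the object on screen after its interval?
    data: bytes = b""

@dataclass
class SnackFile:
    background: bytes                       # default JPEG-1 image
    objects: List[SnackObject] = field(default_factory=list)

def play_snack(snack: SnackFile, renderer) -> None:
    """Show the background, then overlay each embedded object according to
    its timeline, layer and position (sketch of Algorithm 1)."""
    renderer.show_background(snack.background)
    for obj in sorted(snack.objects, key=lambda o: o.start_time_ms):
        renderer.wait_until(obj.start_time_ms)
        if obj.media_type == "audio":
            renderer.play_audio(obj.data)
        elif obj.media_type == "video":
            renderer.play_video(obj.data, obj.position, obj.layer)
        elif obj.media_type == "caption":
            renderer.draw_text(obj.data, obj.position, obj.layer)
        else:                               # single image or image group
            renderer.draw_image(obj.data, obj.position, obj.layer)
        if not obj.persistent:
            renderer.remove_later(obj)      # e.g. persistence flag set to zero
```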
Experimental Results
The JPEG Snack Player enables users to play JPEG Snack files in three different modes. When the select files button is pressed, it allows the user to pick a JPEG Snack file, in this case with two objects embedded in it and with the values of the JSDB, INST, and OBMBs described in Appendix A in Tables A1-A4. When the file is selected, the background image is displayed on the plot area of the player, as shown in Figure 7. As two objects are embedded in this JPEG Snack file, these two objects are displayed according to the instructions. The first object is displayed after two seconds, as the start time is 2000 ms, and is shown in Figure 8a. Object 1 persists, and the second object is displayed after three seconds, as shown in Figure 8b. JPEG Snack files can have embedded images, audio, videos, groups of images, and captions. Therefore, the JPEG Snack Player can play all the multimedia mentioned above on the background JPEG image. Figure 9 shows the JPEG Snack Player playing a JPEG Snack file in which a group of photos is embedded. The values of JSDB, INST, and OBMB are presented in Appendix B in Tables A5-A7, respectively. In this example, when the JPEG Snack file is selected, the background JPEG image is displayed, as shown in Figure 9a. When the JPEG Snack file is played, after two seconds, the first image from the sequence of images is displayed, as shown in Figure 9b. Similarly, after three seconds, the second image from the sequence of images is shown on the JPEG Snack Player, as shown in Figure 9c. After four seconds, all the pictures disappear. Similarly, Figure 10 shows the JPEG Snack Player playing a JPEG Snack file with a caption and a JPEG image embedded. In this example, a JPEG-1 image and a caption are embedded in the background JPEG file. After two seconds, the first object, i.e., the JPEG-1 image, appears on the background image. After three seconds, the embedded caption appears on the image. The JPEG Snack Player extracts the media type of the embedded multimedia files and plays them accordingly. The values of JSDB, INST, OBMB for Object 1 and OBMB for Object 2 are presented in Appendix C in Tables A8-A11, respectively. Likewise, Figure 11 shows the JPEG Snack Player playing a JPEG Snack file with an mp4 video and a JPEG image embedded. In this example, a JPEG-1 image and an mp4 video are embedded in the background JPEG file. After two seconds, the first object, i.e., the JPEG-1 image, appears on the background image. After three seconds, the embedded video appears on the image. The embedded video is played for a short duration and then disappears, as the value of persistence is zero. The values of JSDB, INST, OBMB for Object 1 and OBMB for Object 2 are presented in Appendix D in Tables A12-A15, respectively. We also evaluated the JPEG Snack Player by calculating performance parameters. The JPEG Snack Player application takes 9.8 MB of disk space during execution. The total application installer size is 2.6 MB.
We also evaluated the decoding time of the JPEG Snack decoder and the decoding time of the JPEG Snack Player. Table 2 and Figure 12 compare the decoding times in seconds. The decoding time was evaluated on a laptop with the following specifications: a 7th-generation Core i5 processor, with each core running at 2.60 GHz, 8 GB of random access memory (RAM), and a 512 GB solid-state drive (SSD). The laptop is made by Hewlett-Packard (HP), Palo Alto, California, United States.
Limitations and Future Directions
In its current version, the software is only available for use on personal computers. It could be extended to a smartphone app by importing a JPEG Snack file decoder as a library in an Android application, which could then be used to process JPEG Snack files. To enjoy JPEG Snack files online, the software could also be extended to a web-based version. Furthermore, it is also possible to include a JPEG Snack editor in the JPEG Snack Player so that users would be able to update and customize the embedded content of JPEG Snack files within the player.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. First Example JUMBF Boxes
Tables A1-A4 present the values of JSDB, INST, OBMB for the first object, and OBMB for the second object, respectively, for the first example. In these tables, the second object will not persist; Life 2 means that Object 1 and Object 2 will be displayed simultaneously for 2 s; and Next-use 0 means that this instruction will not be reused.
Appendix B. Second Example JUMBF Boxes
Tables A5-A7 present the values of JSDB, INST, and OBMB for the first object, respectively, for the second example. In these tables, the instruction is executed together with the next instruction, and Next-use 0 means that this instruction will not be reused.
Appendix C. Third Example JUMBF Boxes
Tables A8-A11 present the values of JSDB, INST, OBMB for the first object, and OBMB for the second object, respectively, for the third example. In these tables, Object 1 and Object 2 will be displayed simultaneously for 2 s, and Next-use 0 means that this instruction will not be reused.
Table A10. Values of the OBMB parameters for the first object for the third example.
Parameter | Value | Description
Toggle | 0000 0000 | No optional field is used
ID | 1 | Identifier of the box
Media type | 'image/jpg' | The embedded media is a JPEG-1 image
Location | self#jumbf = Object 1 | The image is embedded in the same file
Table A11. Values of the second OBMB parameters for the third example.
Parameter | Value | Description
Toggle | 0000 0110 | Style and opacity fields are present
ID | 2 | Identifier of the box
Media type | 'text/utf-8' | The embedded media is a caption
Style | css_code | The style of the caption is embedded in the form of a style file
Opacity | 0.6 | The opacity value is 0.6, i.e., the caption is rendered at 60% opacity
Location | self#jumbf = Object 2 | The caption is embedded in the same file
Appendix D. Fourth Example JUMBF Boxes
Tables A12-A15 present the values of JSDB, INST, OBMB for the first object, and OBMB for the second object, respectively, for the fourth example. In these tables, Object 1 and Object 2 will be displayed simultaneously for 2 s, and Next-use 0 means that this instruction will not be reused. | 3,232.6 | 2023-03-01T00:00:00.000 | [
"Computer Science"
] |
A survey of gyrodactylid parasites on the fins of Homatula variegata in central China
In this study, two parasites on the fins of Homatula variegata were recorded from March to September 2016. A dissection microscope was used to examine the distribution and quantity of the ectoparasitic Gyrodactylus sp. and Paragyrodactylus variegatus on the host Homatula variegata in different seasons. The present study explored possible explanations for the site specificity of gyrodactylid parasites in 442 Homatula variegata infected with 4307 Gyrodactylus sp. (species identification is incomplete, characterized only to the genus level) and 1712 Paragyrodactylus variegatus. These two gyrodactylid parasites were collected from fish fins, and the fish were harvested in China's Qinling Mountains. The results indicated that the highest number of Gyrodactylus sp., which was numerically the dominant species, appeared on the fish fins in April, while the highest number of Paragyrodactylus variegatus was found on the fish fins in March. The two parasite species appeared to be partitioned spatially, with Gyrodactylus sp. occurring more frequently on the pectoral and pelvic fins, and P. variegatus occurring more frequently on the caudal fins. However, Gyrodactylus sp. appeared to occur on fish of all lengths, while P. variegatus tended to occur more abundantly on shorter fish than on longer fish. At lower Gyrodactylus sp. infection levels (<100), the pelvic and pectoral fins were the main locations of attachment, followed by the dorsal fin. For infections of more than 100 parasites, more Gyrodactylus sp. specimens were located on the pectoral fin. For low numbers of Paragyrodactylus variegatus infections (<100), the pelvic and pectoral fins were the preferred locations of attachment, followed by the caudal fin. Between April and September, there were many monogenean parasites on the fins of fish within the size range of 5-10 cm. However, when a fish was longer than 10 cm, the number of parasites on its fins greatly decreased.
Introduction
Among the members of the class Monogenea, viviparous gyrodactylids are some of the most common parasites of wild and cultured fish, causing great ecological and economic harm [1]. Some gyrodactylids show significant microhabitat specificity, but this is highly variable among species. Some researchers have focused on the site preference of Gyrodactylus turnbulli on Poecilia reticulata (guppy) in an experimental environment. Studies have found that lymphocytes in fish epithelial tissues have a direct effect on parasites after they come into contact with a host, and the host's innate and adaptive immune system determines where the parasite lives [2][3][4]. However, the distribution of parasites on fish is also strongly correlated with the age of the infection. Water quality and water nutrition are factors that determine the abundance of fish parasites, and changes in water temperature and season are also determinants of parasite abundance [5][6]. Other groups have reached similar conclusions by studying the behavior of parasites (G. colemanensis) on Salvelinus fontinalis fry [7]. Parasites attach to any part of the fish epithelial tissue; in particular, many parasites occur on the edges of the caudal, pectoral and pelvic fins. Parasites periodically migrate to the edges of the fins and can travel across the body to reach other fins [8]. Recently, the fish Homatula variegata (Dabry de Thiersant, 1874) has attracted increasing attention due to its aquaculture potential in China [9]. Gyrodactylus sp. and Paragyrodactylus variegatus (You, King, Ye and Cone, 2014) [10] are two parasites that are found on the fins and, occasionally, the body surface of H. variegata in Xunyangba. However, gyrodactylids that live on the surface of fish appear to be less specific in terms of their environmental requirements and therefore occur in a variety of locations. This has led to a lack of information on the positioning of parasites on this fish; most authors locate parasites only on the major regions of the fish body, such as the gills or torso [11]. However, no specific studies on the distribution of gyrodactylid parasites on Homatula variegata have been performed. Therefore, this study attempts to describe the site specificity of Gyrodactylus sp. on Homatula variegata.
Ethical note
This study was approved by the Animal Care and Use Committee of Shaanxi Normal University.
Study area and sample collection
The fish (Homatula variegata) were collected (n = 442) with seine nets from late March to late September 2016 in Xunyangba (33.33˚N, 108.33˚E), Ningshan County, located on the southern slopes of the Qinling Mountains in Shaanxi Province, central China. The water temperatures on the collection days were recorded (Table 1).
Each fish was individually placed in a plastic tank filled with filtered river water, transported to a field laboratory and examined within one hour. The fish were euthanized with an excess of eugenol anesthetic fluid and fixed with 5-10% formalin. The total length of each fish was recorded, and the fins were examined for the presence of parasites, which were removed and immediately identified on temporary wet mounts. The two species of gyrodactylids were found under a dissecting microscope (OLYMPUS, SZ61, 45×) and then placed with pointed ophthalmic forceps on glass slides bearing drops of glycerin-water. If a cap-like bone piece structure covered the base of the central hook, the specimen was recorded as Paragyrodactylus variegatus; otherwise, it was recorded as Gyrodactylus sp.
The parasites were stored in formalin. Almost all the parasites stuck to the skin after immobilization. Voucher specimens of the parasites and host were deposited in the Fish Disease Laboratory, Shaanxi Normal University (Accession number: H. variegata: Acc.HV20160012; Gyrodactylus sp.: Acc.GS20160001 and P. variegatus: Acc.PV20160001).
Analysis of parasite location
The number of parasites on each fin, in relation to fish body length, was examined from March to September 2016. The effects of parasite species and location on the host's fins were examined using a two-way ANOVA, with the number of parasites on the different fins as the dependent variable. The microhabitat occurrence of each gyrodactylid species was determined by observing its position on the fins. The distributions of each gyrodactylid species on the different fins were compared by two-way ANOVA with multiple comparisons (Tukey's HSD test) to assess the significance of the differences. The significance level was set at p < 0.05.
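For readers without SPSS, the same two-way ANOVA and Tukey HSD comparisons could be run in Python as sketched below; the column names ('count', 'fin', 'species') are placeholders for a long-format table with one row per fish-fin-species observation, not the authors' actual dataset.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def fin_anova(df: pd.DataFrame):
    """Two-way ANOVA (species x fin) on parasite counts, plus Tukey HSD
    comparisons of the fins at alpha = 0.05."""
    model = smf.ols("count ~ C(species) * C(fin)", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    tukey = pairwise_tukeyhsd(df["count"], df["fin"], alpha=0.05)
    return anova_table, tukey
```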
Relationships between water temperature, parasite load location and fish size
To test for the overall effects of water temperature and fish body length on the distribution of the number of parasites on the host fins, a generalized linear model (GLM) was built using water temperature or fish body length as the predictor. To further explore the relationships of fish size (length) and water temperature with the number of parasites on the different fins, Spearman correlation analysis was conducted using SPSS (Statistical Package for the Social Sciences, v21).
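An equivalent analysis could be scripted as follows; the Poisson family for the count response is an assumption (the GLM family is not stated here), and the column names are again placeholders rather than the authors' variable names.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

def temperature_length_effects(df: pd.DataFrame):
    """GLM of parasite counts on temperature and body length (Poisson family
    assumed), plus Spearman correlations for each predictor."""
    glm = smf.glm("count ~ temperature + body_length", data=df,
                  family=sm.families.Poisson()).fit()
    rho_temp, p_temp = spearmanr(df["temperature"], df["count"])
    rho_len, p_len = spearmanr(df["body_length"], df["count"])
    return glm.summary(), (rho_temp, p_temp), (rho_len, p_len)
```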
The number dynamics of gyrodactylid parasites and distribution of parasites on fish fins
The fish (Homatula variegata) were collected (n = 442) with seine nets from late March to late September 2016 in Xunyangba (33.33˚N, 108.33˚E), Ningshan County, located on the southern slopes of the Qinling Mountains in Shaanxi Province. The water temperatures on the collection days were recorded. We measured the body length of each host and collected the two gyrodactylid parasites from the host fins. The species and number of parasites on the different fins were recorded in detail. Tukey's HSD test was used to analyze the significance of the parasite distribution on the fins, and a GLM was used to analyze the effect of host length and water temperature on the location of parasites. To explore the specific effects of water temperature and body length on the number of parasites on the fins, a Spearman correlation analysis was performed.
Temporal changes in the number of parasites on the fins were analyzed. The highest number of Gyrodactylus sp. on the fish fins appeared in April, while the highest number of P. variegatus on the fish fins occurred in March. The numbers of Gyrodactylus sp. and P. variegatus showed roughly similar trends, with the numbers of the two parasites (mean ± SE) on the fins decreasing in May (4.17 ± 0.74 and 2.6 ± 0.43, respectively), decreasing further in August (2.09 ± 0.31 and 2.17 ± 0.27) and increasing in September (5.53 ± 0.43 and 5.23 ± 0.45) (Fig 1). Of the 7 months, the numbers of the two gyrodactylid parasites on the fins were lowest in August. There was no significant correlation between the number of parasites and the water temperature, which ranged from 7˚C (in March) to 23˚C (in July) (Gyrodactylus sp.: r = -0.149, p = 0.751; P. variegatus: r = 0.090, p = 0.847).
On the other hand, the number of P. variegatus on the pectoral fin was relatively high in May and July. The numbers of parasites detected on the dorsal and anal fins were relatively low (Fig 2A). The number of parasites on the different fins decreased significantly during April and May. In June, the number of Gyrodactylus sp. on the pectoral fin increased (Fig 2B).
For Gyrodactylus sp., there was a significant difference (two-way ANOVA, F = 11.97, df = 4, p < 0.001) among the mean numbers (mean ± SE) of parasites distributed on the five fins. All data were obtained from a total of 442 specimens of Homatula variegata from Xunyangba in the Qinling Mountains of Shaanxi Province, central China, which were collected from March to September 2016. The total length of the hosts (± 0.1 cm) ranged from 3.1 to 14.6 cm. The number of fish and the water temperature for each sampling period are recorded in Table 1. For P. variegatus, there was also a significant difference (two-way ANOVA, F = 30.94, df = 4, p < 0.001) among the mean numbers (mean ± SE) of parasites distributed on the five fins. In general, the mean number (mean ± SE) of Gyrodactylus sp. infecting the different fin parts was higher than that of P. variegatus (Fig 2) in each month. However, we also detected a contrasting pattern between the densities of the specific parasitic infections of the fins. There was no difference between the pectoral fins and the pelvic fins for either of the two gyrodactylid parasites (Tukey's HSD, df = 4, p > 0.05), and the number of parasites on these two fins was higher than that on any other fin (Tukey's HSD, df = 4, all p < 0.001; Fig 3). The patterns of parasite numbers were comparable between Gyrodactylus sp. and P. variegatus but clearly differed among the respective fins (two-way ANOVA, estimated marginal means test).
By comparison, the number of Gyrodactylus sp. was significantly higher on the pelvic and pectoral fins than on the other fins (test of between-subjects effects, F (4, 4410), p < 0.001). The two species also appeared in moderate numbers on the dorsal fins (Gyrodactylus sp.) and caudal fins (P. variegatus), respectively (Fig 3). The number of P. variegatus was likewise significantly higher on the pelvic, pectoral and caudal fins than on the anal and dorsal fins (Tukey's HSD, df = 4, both p < 0.001). No other significant differences in the number of specific parasitic infections among host fins were detected. At lower levels of infection, parasites preferentially colonized the pelvic, pectoral and dorsal fins (Table 2). Most fins were infected with a small number of parasites, and the number of P. variegatus per fish was always less than 100. When the number of parasites per fish increased to 11-100, the pelvic fins remained the main area of attachment, followed by the pectoral fins. However, at a level of 100 or more parasites per fish, relatively few Gyrodactylus sp. were observed on some fins, such as the caudal fins. The total number of Gyrodactylus sp. per fish was higher than that of P. variegatus on all fins. The cause of this phenomenon needs further study.
Relationships between the number of parasites on the fins and the body length of the fish and water temperature
Of the 442 fish collected, 135 were less than 7 cm in length, accounting for 30.5% of the total; 183 had a body length of more than 7 cm but less than 10 cm, accounting for 41.4%; and 124 had a body length of more than 10 cm, accounting for 28.1%. Interestingly, throughout the study period (March to September 2016), the host fish were infected with the two gyrodactylid parasites at significantly fluctuating levels (Table 3). Examination of the fins of all fish samples showed that large numbers of both parasites occurred on the fins of hosts with relatively short body lengths (1-5 cm), whereas there were fewer of the two parasites on the fins of hosts with relatively large body lengths (greater than 5 cm). GLM analysis showed that host body length had a significant effect on the number of Gyrodactylus sp. on the pectoral fins and on the number of P. variegatus on the caudal and anal fins. Water temperature had a significant effect on the number of both gyrodactylid parasites on all fins, and the specific differences need further analysis (Table 4).
Statistical analysis showed that water temperature and host body length had strong negative Spearman correlations with the number of Gyrodactylus sp. on the fins (Table 5), whereas these factors were less strongly related to the number of P. variegatus on the fins. Of the two factors, the correlation between water temperature and the number of Gyrodactylus sp. on the fins was the stronger. Interestingly, water temperature was negatively related not only to the number of Gyrodactylus sp. on the fins but also, to a lesser extent, to host body length. These results are essentially consistent with the HSD and GLM analyses (Table 4).
Discussion
Although gyrodactylids on Homatula variegata have been reported in China, little research has been conducted on their survival on this host; in total, two genera have been found: Gyrodactylus (von Nordmann, 1832) and Paragyrodactylus (Gvosdev et Martechov, 1953). Our investigation confirmed that Gyrodactylus sp. and P. variegatus survive on this host. The microhabitats of monogeneans living on fins have been investigated by many authors [12][13][14]. Monogeneans exhibit aggregated parasitism; for example, benedeniines parasitize specific fins preferentially [12]. Studies have found that parasites attaching to the dorsal or pelvic fins of fish may do so to evade host predation, competition, and local immune responses [15,16]. In addition, each developmental cohort that inhabits a different fish fin can receive exclusive food and spatial resources [14].
In the present study, the two species of parasites appear to have subtle spatial partitions in their common resources. Gyrodactylus sp. occurred most frequently on the pectoral and pelvic fins, while P. variegatus occurred on the caudal fins.
In this study, we recorded the average water temperature at the sampling points during sampling. Water temperature is thought to be a factor affecting a parasite's ability to reproduce [17][18][19]. Moreover, there is a certain degree of correlation between water temperature and the number of parasites on fins, but the influence of water temperature differs among parasite species [20]. The relationship between temperature and parasite reproduction is complex. Some studies have noted that the number of parasites increases with increasing water temperature [21,22]; on the other hand, for some species, elevated temperatures can be a limiting factor for survival and reproduction [23,24]. In our study, the numbers of Gyrodactylus sp. and P. variegatus on the fins of H. variegata showed somewhat different trends; specifically, the number of Gyrodactylus sp. on the fins reached its highest point in April, gradually declined in summer and increased again in autumn (Fig 1). Some previous findings support increased numbers in summer [18,25], which is broadly consistent with the increase we observed in September. In addition, although the water temperature in July (23˚C) was higher than that in June (21˚C), the numbers of the two fin parasites in July were lower than those in June. One reason may be that the immunity of the host fish increases with increasing water temperature, leading to a decline in the number of parasites; previous studies have reported higher levels of infection in hosts with weaker immunity [20,26]. Another reason may be changes in aquatic environmental factors caused by the increased water temperature in July, which led to a decrease in the number of parasites. In addition to temperature, photoperiod, salinity and water flow can influence the success of infection. Studies have found that host fish are only infected by monogenean parasites during the day, and that low-temperature and high-salinity waters are more conducive to parasitic infections of fish [27]. Interestingly, the number of Gyrodactylus sp. on the fish fins decreased significantly relative to that of P. variegatus in May. The cause of this is unknown at present but could involve interspecific competition. The number of Gyrodactylus sp. rose again in September, possibly due to changes in the water flow rate. More detailed work on this topic is clearly required. We found a negative correlation between the number of Gyrodactylus sp. and the body length of H. variegata, and a relatively weak negative correlation between the number of P. variegatus and the body length of the fish. This finding is
similar to the results of some previous studies, which reported a negative correlation between parasite species richness and fish body size [28][29][30]. Because they carry more parasites, smaller fish hosts may be more susceptible to disease than larger fish [1]. However, some previous studies observed the opposite, namely a positive correlation between the number of parasites on the fins and the size of the host [31][32][33][34]. Some researchers have noted that the relationship between the number of parasites at the infected sites and the length of the host should also be highlighted; this effect is more pronounced in small fish, which carry a higher number of parasites on the body surface [35,36]. Another possible reason might be that fish with a longer body length may occupy microhabitats with less exposure to parasitic infection [30]. Fish use group behavior and immune responses to reduce the risk of parasitism [37], and large fish may be better able to join groups. Within the scope of the current study, there was less aggregation of parasites on the fins of large Homatula variegata, which is consistent with the idea that their chances of avoiding infection are enhanced.
It is important to emphasize, however, that not only host size but also the ecology of each host species affects the species richness of its parasites [34]. Different species of Gyrodactylus parasitize different parts of the host. Gyrodactylus masu is found on the body surface of salmonids, with the fins, gill arches and gill filaments being the main locations [38]. By observing the parasitic behavior of five Gyrodactylus species, researchers found that four of them preferred to parasitize the body surface and gills of the fish. In a study of two parasites (G. colemanensis and G. salmonis) on the surface of salmonids, most G. colemanensis were attached to the fin margins, whereas G. salmonis attached to the head and body surface of the fish [7]. Studies have found that different Gyrodactylus species have differently shaped haptors, which may lead to differences in their microhabitats [7,39]. The morphology of the haptor of each Gyrodactylus species is probably adapted to the surface of its host.
Conclusions
In summary, through the investigation of the two parasites infecting Homatula variegata, we found that (1) the highest numbers of Gyrodactylus sp. on the fins appeared in April and March, whereas the peak number of Paragyrodactylus variegatus on the fins appeared in June; that is, the peak numbers of the two parasites on the fins showed a temporal niche separation. However, the trends in the numbers of parasites on the fins were similar from May to September, and the numbers of both parasites on the fins of the host rose again in autumn. (2) The two gyrodactylid parasites appeared to partition their common resources spatially: Gyrodactylus sp. preferred to parasitize the pectoral and pelvic fins, while P. variegatus preferred the caudal fins. This may be explained by the avoidance of predation, competition and the host's local immune response. This selection mode may be exploited as a potential delivery strategy against gyrodactylids. The main factors leading to the preference for specific habitats have not yet been determined and may be linked to physiological, environmental, ecological and physical factors. More research is required to clarify this preference.
"Environmental Science",
"Biology"
] |
Mitochondria-Targeted Delivery Strategy of Dual-Loaded Liposomes for Alzheimer’s Disease Therapy
Liposomes modified with tetradecyltriphenylphosphonium bromide with dual loading of α-tocopherol and donepezil hydrochloride were successfully designed for intranasal administration. Physicochemical characteristics of cationic liposomes such as the hydrodynamic diameter, zeta potential, and polydispersity index were within the range from 105 to 115 nm, from +10 to +23 mV, and from 0.1 to 0.2, respectively. In vitro release curves of donepezil hydrochloride were analyzed using the Korsmeyer–Peppas, Higuchi, First-Order, and Zero-Order kinetic models. Nanocontainers modified with cationic surfactant statistically better penetrate into the mitochondria of rat motoneurons. Imaging of rat brain slices revealed the penetration of nanocarriers into the brain. Experiments on transgenic mice with an Alzheimer’s disease model (APP/PS1) demonstrated that the intranasal administration of liposomes within 21 days resulted in enhanced learning abilities and a reduction in the formation rate of Aβ plaques in the entorhinal cortex and hippocampus of the brain.
Introduction
Alzheimer's disease (AD) has significant clinical, social, and economic consequences for society [1,2]. The number of newly diagnosed AD cases is expected to grow further owing to the increase in life expectancy and in the accuracy of diagnostic procedures, such as neuropsychological tests, computed tomography, and magnetic resonance imaging [3]. AD manifests mainly in its sporadic form, and only 10% of cases result from genetic pathology [4][5][6]. There is currently no cure for AD. All available medicines provide symptomatic relief with some improvement in the quality of life for patients with mild to moderate forms of the disease. The lack of consensus on the pathogenesis and etiology of AD, which makes it impossible to separate causes from consequences, is the reason for this ineffective therapy.
There are several established hypotheses that explain the pathogenesis of AD. The so-called "cholinergic hypothesis" is associated with a decrease in the synthesis of the neurotransmitter acetylcholine. Thus, cholinesterase inhibitors such as donepezil, rivastigmine, and galantamine are currently used for the symptomatic therapy of AD [7,8]. The amyloid beta (Aβ) hypothesis is based on the abnormal hydrolysis of amyloid precursor protein, leading to the extracellular accumulation of the amyloid peptides Aβ(1-40) and Aβ(1-42), which gradually aggregate into neurotoxic plaques [9,10]. The tau hypothesis is associated with the observation of the formation of intracellular neurofibrillary tangles based on hyperphosphorylated tau protein [11]. However, research based on these three classical theories has not led to the development of effective drugs for AD treatment. In addition, the frequency of side effects and increasing dosages of medications have prompted scientists to search for new potential therapeutic targets. In light of the main goal of our study, the hypothesis of mitochondrial dysfunction is a relevant topic [12][13][14][15]. The dysfunction of neuronal cell mitochondria is currently considered to be a marker of the early stage of AD, appearing 10-20 years before clinical manifestations of the disease. The appearance of reactive oxygen species, disruption of the mitochondrial structure, and the development of oxidative stress, followed by the onset of apoptosis, are the results of mitochondrial dysfunction [16,17]. It is important to note that mitochondrial dysfunction leads to additional accumulation of amyloid beta, which, in turn, disrupts the functioning of mitochondria, i.e., a so-called "vicious circle" is formed, which is very difficult to "break" [18]. Antioxidants such as vitamin E, vitamin C, CoQ10, curcumin, quercetin, resveratrol, caffeine, and α-lipoic acid have been tested for AD therapy [19,20]. Another possible contributor to AD development is the use of devices based on electromagnetic radiation, such as phones, wireless internet, Bluetooth devices, ovens, radars, and laptops [21,22]. Microwave-induced neurotransmitter damage delays signaling processes, causing damage to the body.
Interest in the search for medicines that act simultaneously on several molecular targets, i.e., "multitarget drugs", is explained by the multifactorial nature of AD [23][24][25][26]. On the one hand, combination therapy can be implemented through the targeted synthesis of new compounds that combine different types of activity [27][28][29][30]. However, the search for and creation of new drugs is a long and expensive path. Therefore, it is interesting to reveal new facets of already existing drugs, either in a simple combination of two drugs with each other or by encapsulating them in nanocontainers [1,[31][32][33]. Within the scope of the second strategy, our attention was drawn to liposomes, which can encapsulate both hydrophilic and hydrophobic substrates [34][35][36][37][38][39][40] and allow for combination therapy of AD. In addition, it should be noted that liposomes have other advantages, including high biocompatibility, bioavailability, the ability to overcome biological barriers, and the protection of encapsulated substrates from premature degradation [41][42][43][44]. Of the three routes of administration of nanoformulated drugs for AD therapy (oral, transdermal, and intranasal), the latter is considered the most promising [45]. The use of liposomes for intranasal drug delivery has been documented in numerous examples [46][47][48][49].
Therefore, the aim of this work was to develop cationic liposomes consisting of soy phosphatidylcholine (PC), cholesterol (Chol), and tetradecyltriphenylphosphonium bromide (TPPB-14), loaded with the antioxidant α-tocopherol (TOC) in the lipid bilayer and donepezil hydrochloride (DNP) in the water core (Figure 1), for intranasal administration. After the optimization of the liposomal formulation, in vitro studies on the colocalization of modified liposomes with the mitochondria of neuronal cells were carried out. Then, cationic liposomes were tested in vivo as a dosage form for the therapy of transgenic mice with a model of AD (APP/PS1). The in vivo experiment was carried out in three stages: (1) a behavioral test for the recognition of a novel object; (2) a quantitative assessment of Aβ plaques in the brain; and (3) a quantitative assessment of the immunoexpression intensity of synaptophysin, a molecular marker of presynaptic vesicles.
Results
The objects of the present study were liposomes of classical composition (soy phosphatidylcholine and cholesterol), modified with a cationic surfactant bearing a triphenylphosphonium head group, for dual loading of α-tocopherol (10%) (TOC) and donepezil hydrochloride (DNP). At the first stage, the influence of various lipid concentrations and of the presence of TOC on the physicochemical characteristics of the formulations was evaluated. The primary goal in developing nanosized drug delivery systems is achieving controlled aggregate sizes (≈100 nm) with a low polydispersity index (PdI ≤ 0.25). This task can be successfully achieved using an extruder in the final stage of nanoparticle preparation. As can be seen from the data obtained by dynamic light scattering (DLS), the hydrodynamic diameter (Dh) of all systems on the day of preparation was in the range of 103-115 nm, and the PdI was between 0.067 and 0.231 (Table 1). The stability of the systems was monitored by DLS at certain time intervals. It was found that the liposomes are stable for more than 5 months at 4 °C, after which changes in the determined DLS parameters (Dh, PdI, and zeta potential (ζ)) were observed (Table S1).

Table 1. Physicochemical parameters of liposomes modified with TPPB-14, 4 °C. The data are presented as mean values ± SD. * - the difference with regard to a similar system without TOC is statistically significant at p ≤ 0.05; ** at p ≤ 0.01; *** at p ≤ 0.001. # - the differences from the PdI values on the first day are statistically significant at p ≤ 0.05; ## at p ≤ 0.01; ### at p ≤ 0.001. Statistical analysis was performed using a one-way ANOVA test.

The zeta potential is another important factor that determines the effectiveness of nanocarriers for intranasal delivery. It is known that cationic particles are retained longer on the nasal mucosa due to electrostatic interaction between positively charged nanoparticles and negatively charged mucin residues. Figure 2 shows the diagrams of changes in the zeta potential of PC/Chol/TOC/TPPB-14 liposomes (15 mM, 20 mM, and 30 mM) during storage. As can be seen, low zeta potential values on the first day of preparation were characteristic of all three systems with different lipid concentrations. The zeta potential in all cases equilibrated within a week and remained at approximately the same level for up to 5 months of storage. It should be noted that the same trend was observed for cationic systems without TOC at different lipid concentrations. In each case, the maximum zeta potential of the liposomes was observed in the second month of measurements (Table S1). It is also worth noting that in the case of the system with 15 mM of lipid content, the zeta potential barely reaches +25 mV (Figure 2a), whereas in the systems with 20 mM (Figure 2b) and 30 mM (Figure 2c) of lipid content, the zeta potential exceeds this mark (red dotted line) and reaches +35 and +32 mV, respectively.
Data from DLS, specifically the size and morphology of liposomes, were confirmed using transmission electron microscopy (TEM) for the PC/Chol/TOC/TPPB-14 system (20 mM). As seen in the microphotographs in Figure 3a, the liposomes had well-defined boundaries and a round shape with a size of around 90-100 nm. To compare the size of the liposomes obtained by TEM and DLS, all microphotographs were processed to measure the diameter of all particles in the field of view. The obtained results are presented as a diagram of the distribution of the number of particles by size in Figure 3b. It was found that the largest number of particles had a diameter of 80-110 nm, which is in good agreement with the DLS data. Figure 3c shows a diagram of the distribution of particles averaged by the number of particles, which qualitatively and quantitatively corresponds to the diagram obtained during the processing of the TEM results. At the next stage, in order to select the most optimal system of modified liposomes, the cholinesterase inhibitor DNP was loaded into the liposomes. The efficiency of drug encapsulation is one of the fundamental criteria for selecting a lead system. It was determined for both DNP and TOC. For quantitative determination of the content of substrates in liposomes, their extinction coefficients were first determined ( Figure S1). According to the calculations, there was no statistical difference in the encapsulation efficiency of TOC with varying lipid concentration (95 ± 1% for 15 mM, 96 ± 1% for 20 mM, and 30 mM), and the encapsulation efficiency of DNP slightly increased with incremental lipid concentration in the system ( Figure S2). Based on the obtained data, a system with an average lipid content of 20 mM was chosen for further research. This is also due to the lower content of cationic surfactant in the system, compared to the 30 mM system, which reduces the risk of acute toxicity of liposomes. This system also had a higher zeta potential and stabilized more quickly (Figure 2b).
The rate of DNP release from unmodified and modified liposomes was evaluated. The experiment was carried out on the example of a system with 20 mM of the lipid moiety. The main question of interest is how the co-encapsulation of DNP and TOC affects the release rate of the cholinesterase inhibitor. To trace the nature of DNP release, cumulative release percent versus time graphs were plotted (Figure 4). The absorption spectra of DNP are presented in Figure S3. It can be seen that the encapsulation of DNP in liposomes obviously leads to a decrease in its release rate from the dialysis bag. This phenomenon may have a positive effect when using the liposomal form of the drug in vivo, since a so-called prolongation of action is achieved, which can potentially allow a reduction in the frequency of medication. It should be noted that the presence of TOC in the system enhances this effect in the case of modified liposomes. For a more detailed understanding of the DNP release, the resulting curves were processed using various kinetic models, namely, the Korsmeyer-Peppas, Higuchi, First-Order, and Zero-Order models (Figure 4).
The main parameters at this stage were the rate constant (k), the coefficient of determination (R 2 ), and the diffusion release exponent indicating the mechanism of substrate release (n) (for the Korsmeyer-Peppas model). According to the results presented in Table 2, it becomes apparent that the Korsmeyer-Peppas model is the most suitable for describing the kinetics of DNP release from liposomes, since the R 2 for liposomal systems exceeds 0.99. High R 2 values have also been obtained for the Higuchi model, which predicts that drug release occurs via diffusion. This is confirmed by the values of the diffusion release exponent, determined by the Korsmeyer-Peppas model. The value of the exponent n ≤ 0.45 indicates that the drug substance is released according to Fick's law or by the diffusion mechanism. For comparison, we also processed the DNP release curves in the initial part, namely, from 0 to 270 min ( Figure S4). As shown in the graphs, in the initial section of the curves, the convergence of the experimental points with linear fits is much better. This is also confirmed by the values of the rate constant and the coefficient of determination (Table S2).
Within this study, we relied on the mitochondrial cascade hypothesis of AD development and assessed the mitotropic activity of liposomes by incorporating TPPB-14 into their bilayer. The experiment was conducted on a culture of rat motoneurons using a confocal microscope. As seen in Figure 5, the modified PC/Chol/TOC/TPPB-14 liposomes penetrate the mitochondria of neuronal cells better than unmodified ones, as evidenced by the yellow fluorescence (overlap of fluorescence from two probes in column C). The calculated Pearson's correlation coefficient values are 0.29 ± 0.01 and 0.46 ± 0.01 for liposomes composed of PC/Chol/TOC and PC/Chol/TOC/TPPB-14, respectively ( Figure S5). To determine the biological activity of the selected system in vitro, the antioxidant activity of liposomes with the antioxidant TOC was investigated using chemiluminescence analysis. Upon the introduction of antioxidants into the system, the number of radicals decreases, and with it, the intensity of chemiluminescence drops. The work by Lissi E.A. [50] describes an approach to measure total antioxidant capacity that takes into account this feature of the curves-the TAR (total antioxidant reactivity) method and the TRAP (total reactive antioxidant potential) method. It is believed that TRAP reflects the amount of antioxidant in the system, and the TAR method reflects its activity, i.e., the rate of interaction between the antioxidant and radicals. According to the results, the inclusion of TPPB-14 and TOC in the lipid bilayer increases the antioxidant activity of the system compared to individual TOC at the same concentration ( Table 3). The degree of chemiluminescence quenching and the duration of the latent period in the presence of the tested samples are also presented in Figure S6. After confirming the stability of the liposomes, their capacity for dual loading of substrates, and their in vitro effectiveness, the study moved on to in vivo experiments. Firstly, the uptake of the liposomes into the brain via intranasal administration was tested. Rhodamine B (RhB) was selected as a visualizing agent and was encapsulated in the liposomes using a passive loading method. Free RhB and encapsulated RhB were administered intranasally at a dose of 0.5 mg/kg. Non-treated rats were used as a control. It was shown that the intranasal administration of cationic liposomes leads to the effective absorption of RhB in brain (Figure 6c), compared to the free form of the probe (Figure 6b) and the control group of rats (Figure 6a), presumably due to the high zeta potential. The identification of encapsulated RhB in the brain may indicate the ability of the investigated systems to bypass the blood-brain barrier. In the final stage of the study, PC/Chol/TOC/TPPB-14 liposomes (20 mM) loaded with both the cholinesterase inhibitor DNP and the antioxidant TOC were tested as a drug delivery system for the treatment of mice with an AD model. Since the goal of the experiment was to slow down the progression of the disease, therapy was started at an early stage when only the first signs of pathological changes were detected, corresponding to the age of mice at 6 months. The experiment was performed in two stages. The first stage was devoted to a behavioral test that allowed for the assessment of memory impairment ("the novel object recognition test"). Within 18 days before the test, and during the test (20 min prior to its start), the PC/Chol/TOC/TPPB-14/DNP liposomes were intranasally administered to the mice. 
It was shown that in the wild-type control group (TG−), the mice preferred the novel object over the familiar one with a probability of 69.4 ± 4.3% ( Figure 7). In the case of the control group of transgenic mice (TG+), the preference for the novel object was significantly lower (43.3 ± 4.8%, p = 0.0006). At the same time, the group of transgenic mice that received intranasal administration of liposomes for 21 days showed an interest in the novel object with a probability of 57.1 ± 2.8%, which did not differ statistically (p = 0.077) from the control group of TG− wild-type mice ( Figure 7). It is important to note that during the intranasal administration of liposomes loaded with DNP and TOC for 21 days, no side effects were observed. None of the mice showed any signs of behavioral changes or movement difficulty. Eating and drinking habits were normal. At the second stage, a quantitative assessment of Aβ plaque formation and of the intensity of the synaptophysin immunoexpression in the brain of the control group of transgenic (TG+) mice and the group of transgenic (TG+) mice treated with PC/Chol/TOC/TPPB-14/DNP liposomes was carried out. It was shown that intranasal administration of PC/Chol/TOC/TPPB-14/DNP liposomes for 21 days significantly reduced the mean number of Aβ plaques and the mean percentage area of Aβ plaques in the hippocampus and entorhinal cortex of TG+ mice ( Figure 8). Thus, the mean number of Aβ plaques in the entorhinal cortex decreased from 5.86 ± 0.46 in control TG+ mice to 3.72 ± 0.25 (p = 0.00013) in TG+ mice treated with liposomes ( Figure 8a). The mean area of Aβ plaques decreased from 0.12 ± 0.01% to 0.07 ± 0.01% (p = 0.0008), respectively (Figure 8b). In the dentate gyrus (DG) of the hippocampus, the mean number of Aβ plaques decreased from 2.61 ± 0.31 to 1.41 ± 0.30 (p = 0.0008), and the mean area of Aβ plaques decreased from 0.09 ± 0.01% to 0.06 ± 0.01% (p = 0.03). In addition, a significant decrease in the number of Aβ plaques from 1.41 ± 0.28 to 0.68 ± 0.29 (p = 0.012) was observed in the area CA1 of the hippocampus after liposome administration (Figure 8a). Thus, intranasal administration of PC/Chol/TOC/TPPB-14/DNP liposomes for 21 days prevented memory impairment and slowed down the rate of Aβ plaque formation. Representative microphotographs of Aβ plaques in brain cross-sections of the entorhinal cortex and hippocampus of TG+ mice are shown in Figure S7. A quantitative assessment of the synaptophysin immunoexpression in the mice's entorhinal cortex and hippocampus did not reveal significant changes in the intensity of immunoexpression between the control groups of TG− and TG+ mice ( Figure 9). This absence of statistically significant differences is most likely explained by the early stage of pathology development in 6-month-old mice. The intranasal administration of PC/Chol/TOC/TPPB-14/DNP liposomes for 21 days led to a significant increase in the synaptophysin immunoexpression in the entorhinal cortex and hippocampus of transgenic mice (TG+) (Figure 9). Thus, in the entorhinal cortex, synaptophysin immunoexpression increased by 6% (p = 0.016) and 20% (p = 0.004) compared to the control TG− and TG+ mice, respectively. In the dentate gyrus (DG) of the hippocampus, immunoexpression intensity increased by 14% (p = 0.040) and 38% (p = 0.001), in the area CA1 by 27% (p = 0.016) and 36% (p = 0.009), and in the area CA3 by 32% (p = 0.004) and 49% (p = 0.0001) compared to control TG− and control TG+ mice, respectively.
Discussion
The aim of this study was to enhance the therapeutic effect of the cholinesterase inhibitor DNP by incorporating it into mitochondria-targeted cationic liposomes along with the antioxidant TOC. The cationic surfactant with a triphenylphosphonium head group was selected as the mitochondria-targeting agent, which, due to its amphiphilic nature, can easily integrate into the lipid bilayer of liposomes. Previously, our research group investigated the mitochondrial targeting of liposomes modified with triphenylphosphonium cationic amphiphiles with different lengths of hydrocarbon tails and at different ratios of amphiphiles/lipids in the case of cancer [51]. In addition to oncological diseases [52][53][54], the mitochondria-targeted drug delivery strategy is gaining more importance in the context of such disorders as retinal ischemia-reperfusion injury [55], cardiovascular diseases [56], stress-related neurodegenerative diseases [57][58][59], etc. It is worth noting that in the above cases and in many others, the key role in pathogenesis is played by dysregulation of the production of reactive oxygen species, for which the mitochondria of cells are responsible [60][61][62]. Such intense interest in mitochondria as targets for the treatment of various diseases is explained by the fact that mitochondria have a wide range of functions, such as the formation of cell energy units, the control of Ca 2+ homeostasis, the signaling of reactive oxygen species, etc. According to one of the current hypotheses (the mitochondrial cascade hypothesis), mitochondrial dysfunction appears at the earliest stages of AD, which leads to oxidative stress in brain cells. Further, this can lead to additional accumulation of Aβ plaques, which, in turn, disrupts the functioning of mitochondria [63][64][65]. Therefore, the combination of traditional therapy for AD (inhibition of brain cholinesterase) with a mitochondria-targeted drug delivery strategy seems to be very attractive and promising. Based on previous experience, we focused on the tetradecyl homologue with a PC/amphiphile ratio of 50/1 and attempted to trace the influence of the lipid component of the systems, specifically its concentration (Table 4), on the physicochemical characteristics of the formulations. It is known that the size distribution and polydispersity of liposomes can affect their physical stability, which, in turn, is influenced by the method of nanoparticle preparation, storage conditions, and zeta potential. For example, in [66,67], it was shown that liposomes exhibit better stability over time when stored at 4 • C compared to 25 • C and 37 • C. Many authors have also shown the critical role of the zeta potential of lipid nanoparticles in their stability during storage [68][69][70]. It has been established that a zeta potential of ≈30 mV is optimal for preventing their coagulation. The choice of production method is also an important step in the creation of lipid nanocontainers. For example, the method of ethanol injection with an incorrect selection of the component ratios can lead to the destruction of nanoparticles [71]. In addition, we assume that the uniformity of the size distribution of aggregates also affects the distribution of potential-carrying components in the bilayer (in our case, surfactant molecules), which brings us back to the role of the zeta potential in the stability of nanoparticles. 
From this point of view, methods based on the extrusion of a liposomal dispersion are more likely to yield a monodisperse system [72] than, for example, the ultrasonic method [73]. Analysis of the data obtained using dynamic and electrophoretic light scattering (Tables 1 and S1) allowed us to identify several patterns regarding the investigated formulations. (1) The addition of TPPB-14 to the system increases the PdI in all cases. Moreover, the higher the concentration of the lipid component (and, correspondingly, of TPPB-14), the more pronounced this effect. This may be related to the fact that in the system with 20 mM lipids, the concentration of TPPB-14 is very close to the critical micelle concentration (by tensiometry), and in the system with 30 mM lipids, its concentration is slightly higher [74]. It is likely that surfactant molecules do not immediately incorporate into the lipid bilayer of the liposomes and remain in solution, contributing to the decrease in the monodispersity of the system. (2) The presence of TOC slightly reduces the zeta potential of the modified liposomes (Table 1). Since TOC is a hydrophobic substrate, it can be assumed that the co-inclusion of the antioxidant and TPPB-14 in the lipid bilayer creates some competition, and TOC slightly hinders the incorporation of the surfactant. However, this difference is not critical, as the zeta potential of modified liposomes in the presence of TOC remains above +25 mV (in the case of 20 mM and 30 mM) (Figure 2).
To confirm the size, morphology, and polydispersity of the liposomes, transmission electron microscopy (TEM) was used. The images clearly show that the liposomes have a rounded shape (Figure 3a) with an average diameter of 92 ± 17 nm (measured across all particles in the field of view). More detailed size distribution diagrams obtained using TEM and DLS can be seen in Figure 3b,c, respectively. It is important to note that DLS data reflect the diameter of the liposomes together with the solvent shell, so the sizes may be slightly larger than those determined by TEM. Given this fact, it can be concluded that the two methods used to determine the diameter of the investigated formulations are in good agreement with each other.
The proposed approach of dual drug loading in a nanocarrier is being actively developed at present. Thus, there are some examples of the combined delivery of drugs for AD therapy: (1) donepezil hydrochloride (or memantine hydrochloride)/insulin sensitizer in polycaprolactone-g-dextran-based polymer vesicles [75]; (2) metformin/romidepsin and rosiglitazone/vorinostat in poly(ethylene glycol)-poly(ε-caprolactone)-based polymer nanoparticles, additionally stabilized with poloxamer [76,77]; and (3) siRNA/rapamycin in a nanocarrier based on lectin and KLVFF peptides [78]. There is also an example of combining two drugs without nanocontainers: donepezil and memantine in Namzaric™ [79]. In the present work, it was extremely important to find out how much of both drugs, TOC and DNP, can be loaded into the lipid bilayer and the hydrophilic core of the liposomes, respectively. Two different methods were used for the determination of the encapsulation efficiency of the liposomes with regard to DNP and TOC. For DNP, a common filtration/centrifugation technique was used to separate free and encapsulated substrate [80][81][82]. For TOC, the method of extraction of the unencapsulated substrate into ethanol was used [83,84]. It was found that the encapsulation efficiency of the liposomes toward TOC is independent of the total lipid concentration and is approximately 96%. In the case of DNP, some differences were found, namely, the higher the lipid content, the higher the EE (Figure S2). This dependence is quite explainable, since the higher the lipid concentration in the system, the more liposomes in the solution and, accordingly, the more "reservoirs" for the drug substance in the same volume. Based on the literature data, we can conclude that, in general, DNP formulations are characterized by fairly high encapsulation efficiency values: 93 ± 5.33% [85], 62.5 ± 0.6% [86], and 84.91 ± 3.31% [87], which was also shown in the present work.
For further investigations, we focused our attention only on one system, namely, modified liposomes with 20 mM of lipid content. Such a choice is based on the fact that this system showed the best zeta potential and sufficiently high encapsulation efficiency. In addition, a high concentration of surfactants in the system can increase the toxicity of liposomes toward biological systems in vitro and in vivo. Along with toxicity, drug release patterns are a very important characteristic of nanoscale drug delivery systems. In addition, the use of mathematical models to describe the kinetics of substrate release, and sometimes even to predict it, is becoming a classic tool in the development of dosage forms. The main and frequently used models in the literature are the Korsmeyer-Peppas, Higuchi, First-Order, and Zero-Order models [88][89][90], which were also applied in this work. First, as can be seen from Figure 4, the encapsulation of DNP in liposomes leads to a slowdown in the rate of its release from the dialysis bag, which is typical for nanoscale drug delivery systems [49,91]. This may be due to the fact that the substrate takes longer to cross the lipid bilayer first before being released from the dialysis bag. According to profiles presented in Figure 4, in the initial section of the curves, a rapid release of the substrate from the bag is observed, and the rate then slows down. It should be noted that the most prolonged release of DNP is observed from the PC/Chol/TOC/TPPB-14 system. Inclusion of TOC also slows down the release of the substrate, probably due to the denser packing of the lipid bilayer.
This phenomenon is also confirmed by the data presented in Table 2, namely, the values of the rate constant (k) determined by different models. The PC/Chol/TOC/TPPB-14 system has the lowest release rate constant and the highest coefficient of determination (R 2 ) in all models. The data obtained are important from the point of view of choosing the most optimal system for further experiments, as well as for the development of new formulations in the future. It also becomes clear that the Korsmeyer-Peppas and Higuchi models are the most suitable for describing the kinetics of DNP release, i.e., release occurs by diffusion mechanism [92]. It should be noted that the processing of the substrate release curves in their initial section confirms the data obtained for the entire curve ( Figure S4 and Table S2). The absorption spectra of DNP at different time intervals for all systems are presented in Figure S3.
Prior to moving on to in vivo experiments, the mitotropic activity of liposomes was evaluated on a culture of rat motoneuron cells. The efficiency of the systems was evaluated based on two criteria: the presence of yellow staining, which appears when colocalizing the mitochondrial dye (i.e., mitochondria) and the fluorescent lipid (i.e., liposomes); and Pearson's correlation coefficient, which reflects the relationship between the fluorescence intensity of two dyes. The closer the correlation coefficient value is to 1, the stronger the relationship between the two random variables, namely, the fluorescence intensities of the corresponding dyes. According to the results, modified liposomes indeed penetrate the mitochondria of neuronal cells better than unmodified ones, which is confirmed by microphotographs ( Figure 5) and Pearson's correlation coefficient ( Figure S5).
As described earlier, the current study aimed to test the hypothesis that the combined use of an antioxidant with a cholinesterase inhibitor would affect the pathogenesis of AD. The antioxidant activity of free TOC and of TOC in modified liposomes was determined using a luminol-induced chemiluminescence analysis in vitro. At this stage of the study, it was already clear that there was no need to test unmodified liposomes further, as they did not meet the required physicochemical characteristics, namely, the need for a cationic charge, and as a result, they did not possess mitotropic activity (Figure 5). The antioxidant activity was evaluated by two methods: TAR and TRAP. The TAR method was used to determine the degree of quenching of the luminescence intensity of peroxyl radicals, and the TRAP method measured the latency period of the peroxyl radical curve. It should be noted that the choice of 2,2′-azobis(2-amidinopropane) dihydrochloride (AAPH) as the source of radicals was not random, since the rate of decomposition of azo compounds is not affected by additives, which, together with the use of several assessment methods, increases the quality and purity of the experiment [93]. From the data presented in Table 3, it can be seen that free TOC has low antioxidant activity (36.4 ± 2.9%), despite being considered a strong antioxidant. This may be due to the hydrophobicity of TOC and the fact that the experiment is conducted in an aqueous solution, where TOC's low solubility may lead to rapid inactivation, as indicated by the short latent period (slightly over 3.5 min) (Table 3). Similar assumptions were also made by the authors of [94], as TOC activity is usually determined in non-polar environments. Liposomes are ideal for solving this problem, as they can encapsulate both hydrophilic and hydrophobic substrates, increasing their solubility. Indeed, in the case of modified liposomes loaded with TOC, a longer latent period of interaction between the radicals and the antioxidant system is observed (more than 8 h) (Figure S6), with an almost 100% reduction in chemiluminescence, possibly due to the protective action of the liposomes toward TOC (Table 3).
After evaluating the physicochemical characteristics of liposomes and their efficacy in vitro and ex vivo, the study proceeded to in vivo experiments. The ability of cationic liposomes to be absorbed into the rat brain via intranasal administration was first tested. Visualization of the free Rhodamine B (RhB) and the liposome-incorporated RhB in brain tissue sections was carried out using a fluorescent microscope. According to the results presented in Figure 6, free RhB does not reach the brain at all, whereas in the case of modified liposomes, there is an effective delivery of the probe to the brain, as evidenced by the green fluorescence in Figure 6c. We hypothesize that this difference is due to the fact that modified liposomes are able to remain in the nasal mucosa for a longer period of time, primarily due to their high positive zeta potential. Similar assumptions have already been put forward by several research groups, but this mainly concerns polymeric aggregates [95][96][97]. The size of the carriers is also important for intranasal drug delivery. In [98], it was experimentally demonstrated that nanoparticles with a diameter <200 nm were better retained, and for longer periods of time, in the mucous membrane of the nasal cavity of rats, which makes it possible to increase the effectiveness of drug delivery to the brain.
Currently, a large number of research groups are working on the problem of increasing the effectiveness of DNP in the treatment of AD. These groups are mainly focused on finding an optimal delivery system and its route of administration [99]. In this regard, the intranasal route of administration appears to be an attractive choice for many researchers, who have demonstrated the effectiveness of DNP-loaded liposomes, SLNs, nanoemulsions, etc., compared to the free form of DNP [87,[100][101][102][103][104]. It is worth noting that although there is evidence in the literature confirming the effectiveness of intranasal administration of DNP included in nanocarriers in vitro and in vivo, there are almost no in vivo results on animal models of AD. Therefore, in the final stage of this work, we tested the system PC/Chol/TOC/TPPB-14/DNP as a dosage form for treating mice with AD models. As mentioned above, it has already been shown that free DNP penetrates poorly into the brain via intranasal route. Therefore, at this stage, we assessed the effectiveness of only cationic liposomes.
The experiment on mice with the AD model was performed in two stages: a behavioral "novel object recognition" test and a quantitative assessment of Aβ plaques and the intensity of synaptophysin immunoexpression in the brain of mice. According to the results of the first stage, the use of PC/Chol/TOC/TPPB-14/DNP liposomes allowed for the restoration of the learning ability of the AD model mice (TG+) (57.1 ± 2.8%) almost to the level of healthy wild-type animals (TG−) (69.4 ± 4.3%) (Figure 7). It is worth noting that during the 21-day experimental period, the liposomal drug formulation did not cause any irritation of the nasal mucosa of mice, which is an important criterion in the selection of the drug formulation and its route of administration. The second stage of the experiment was devoted to the quantitative evaluation of Aβ plaques and the intensity of synaptophysin immunoexpression in the brain of mice. The analysis was performed in the entorhinal cortex area of the brain and hippocampus (DG, CA1 and CA3), as these brain regions are responsible for memory formation and impairment. It was demonstrated that the intranasal administration of PC/Chol/TOC/TPPB-14/DNP liposomes for 21 days significantly reduced the percentage of total area of Aβ plaques in the hippocampus and entorhinal cortex of TG+ mice compared to the control group (Figures 8 and S7). It should be noted that the differences between the values for the liposomal drug formulation and the control are statistically significant in all studied areas of the brain except for the area CA3. Thus, the intranasal administration of PC/Chol/TOC/TPPB-14/DNP liposomes for 21 days not only helped to alleviate memory impairment but also influenced the development of AD in transgenic mice by slowing down the rate of Aβ plaque formation.
It should be noted that at the final stage of the study, the level of immunoexpression of synaptophysin (the marker of synaptic contacts) was evaluated, the reduction of which in critical areas of the brain likely leads to memory, cognitive, and behavioral dysfunction. After intranasal administration of the liposomes, the intensity of synaptophysin immunoexpression in all examined areas of the brain was even higher than in the healthy TG− control group (Figure 9). A similar effect has previously been described in the study of antioxidants present in green tea extracts [105]. It can be concluded that the use of PC/Chol/TOC/TPPB-14/DNP liposomes for 21 days has a positive effect on synaptic plasticity. This study has some limitations, which should be noted. There are general limitations that characterize fundamental research on liposomes, namely, the difficulty in scaling up the production of liposomal formulations and the differences between animal and human models [106,107]. There are also particular limitations for our systems, the elimination of which can be considered as prospects for further development of the work. The enhancement of the mucoadhesive properties of nanocontainers, testing other combinations of drugs that can act on several targets of AD, and the assessment of new surfactants as modifiers can be considered as further steps. In addition, it is important to start treatment with the proposed systems at the early stages of the disease [108]. The most important limitation is the obtainment of approval for biomedical use of the cationic surfactants [109]. However, these limitations do not reduce the importance of our study, as to the best of our knowledge, this is the first study on cationic liposomes with dual substrate loading for intranasal administration.
Liposome Preparation Protocol
To identify the optimal composition of liposomes, the concentration of the components was varied over a wide range: 15 mM, 20 mM, and 30 mM (with respect to PC/Chol or PC/Chol/TOC) (Table 4). The PC/TPPB-14 ratio in all cases was 50/1. The lipid film hydration method was chosen for the preparation of liposomes. The corresponding weights of lipids were dissolved in 100 µL of chloroform. To include TOC and TPPB-14 in the lipid bilayer, their stock solutions (in ethanol and chloroform, respectively) were prepared and dosed into the lipid mixture to obtain the appropriate concentrations (Table 4). The organic solvents were then evaporated on a rotary evaporator RE-52AA (Shanghai Jingke Scientific Instrument Co., Ltd., Shanghai, China) under vacuum until a lipid film formed. The final film was hydrated with Milli-Q water (in the case of empty liposomes) or an aqueous solution of DNP (in the case of drug-loaded liposomes). In both cases, the liposome dispersions were frozen in liquid nitrogen and thawed in a water bath (5 cycles). The size of the resulting liposomes was controlled by passing the dispersions through a polycarbonate membrane with a pore size of 100 nm using a LiposoFast Basic extruder (Avestin, Ottawa, ON, Canada). Liposomes were stored at 4 °C.
Determination of the Size, Zeta Potential, and Morphology of Liposomes
To control the size, zeta potential, and stability of the liposomes, dynamic and electrophoretic light scattering was used. The measurements were carried out on a Zetasizer Nano ZS device (Malvern Instruments Ltd., Worcestershire, UK) at 25 °C. For the measurements, all solutions were diluted with Milli-Q water to 2 mM (with respect to PC). All characteristics of the device and the research methods are described in [36].
The size and morphology of the liposomes were confirmed by transmission electron microscopy using a Hitachi HT7700 Exalens microscope (Hitachi High-Technologies Corporation, Tokyo, Japan). For the experiment, fresh dispersions of liposomes were prepared, the concentration of which (5 µM) was carefully selected to achieve an acceptable number of aggregates in the field of view without the formation of a thick film on the grid. The sample was dispersed on a 300-mesh, 3 mm copper grid (Ted Pella) with continuous carbon-formvar support films and dried at room temperature. The images were acquired at an accelerating voltage of 100 kV. The diameter of the aggregates was calculated using ImageJ software (version 1.53t).
Quantification of Encapsulation Efficiency (EE%)
The methodology for determining the encapsulation efficiency of TOC and DNP differs owing to their hydrophobic and hydrophilic nature, respectively. In the case of TOC, the method of extraction of the unencapsulated substrate into ethanol was used [83]. Unencapsulated DNP was separated from the liposomes using Amicon® Ultra-0.5 Centrifugal Filter Units (Merck Millipore, Burlington, MA, USA) and an Eppendorf MiniSpin microcentrifuge (Eppendorf, Hamburg, Germany). In this case, 0.4 mL of the liposomal dispersion was added to the filter and centrifuged for 10 min at 10,000 rpm. Next, the ethanol solution of TOC and the aqueous solution of DNP from the bottom of the centrifuge filter were diluted to measure the absorption spectra on a Specord 250 Plus (Analytik Jena AG, Jena, Germany) using a 0.2 cm quartz cuvette. The encapsulation efficiency was calculated using the following formula: EE% = (total amount of substrate − free substrate) / (total amount of substrate) × 100%. The experiment was performed at least three times to confirm the validity of the results, which are presented as the mean ± SD.
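As a simple illustration of this calculation, the snippet below converts a measured absorbance of the filtrate into a free-drug concentration via the Beer-Lambert law (A = ε·c·l) and then applies the EE% formula; the extinction coefficient, path length, absorbance, and nominal total concentration are placeholder numbers, not measured data.

```python
# Illustrative EE% calculation; extinction coefficient, path length and absorbances
# are placeholder values, not measured data from this work.
def beer_lambert_conc(absorbance: float, epsilon: float, path_cm: float) -> float:
    """Concentration from absorbance via A = epsilon * c * l."""
    return absorbance / (epsilon * path_cm)

def encapsulation_efficiency(total: float, free: float) -> float:
    """EE% = (total substrate - free substrate) / total substrate * 100."""
    return (total - free) / total * 100.0

# Hypothetical example: free DNP in the filtrate, measured in a 0.2 cm cuvette,
# compared with the nominal total DNP concentration of the dispersion.
free_conc = beer_lambert_conc(absorbance=0.35, epsilon=5.0, path_cm=0.2)
total_conc = 2.5  # same (arbitrary) concentration units as free_conc
print(f"EE = {encapsulation_efficiency(total_conc, free_conc):.1f}%")
```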
Quantification of DNP Release Rate In Vitro and Release Kinetic Model Fitting
The release rate of DNP from liposomes (20 mM) was determined by dialysis. Briefly, samples of free DNP and of the liposomal form of DNP with a volume of 3 mL were placed in dialysis bags with a pore size of 3.5 kDa (Scienova GmbH, Jena, Germany). Then, the dialysis bags were immersed in beakers containing 60 mL of phosphate-buffered saline (PBS) with a concentration of 0.025 M and pH = 7.4. The concentration of DNP in all samples was 0.5 mg/mL. The DNP release was monitored spectrophotometrically (Specord 250 Plus, Analytik Jena AG, Jena, Germany) at 37 °C with constant stirring (250 rpm). Measurements were conducted using 1 × 1 cm quartz cuvettes. The optical density of DNP was measured from 190 nm to 500 nm, and for the calculation of the release rate, the optical density at 317 nm was selected. The experiment was stopped after 24 h, and the optical density at that point was considered to be 100%. The results are presented as the cumulative release percent versus release time. The DNP release profiles were fitted to the Korsmeyer-Peppas, Higuchi, First-Order, and Zero-Order models using OriginPro 8.5 software according to the equations in Table 5.
Table 5. Equations for kinetic models of substrate release.
Korsmeyer-Peppas: Qt/Q∞ = kKP · t^n
Higuchi: Qt = kH · t^(1/2)
First-Order: Qt = Q∞ · (1 − e^(−k1·t))
Zero-Order: Qt = k0 · t
where Qt is the fraction of drug released at time t; Q∞ is the total fraction of drug released; kKP is the release constant taking into account the structural and geometric characteristics of the dosage form, %/min^n; n is the diffusion release exponent; kH is the Higuchi release constant, %/min^(1/2); k1 is the first-order release constant, 1/min; and k0 is the zero-order release constant, %/min.
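As an illustration of this fitting step, the sketch below fits a hypothetical cumulative-release curve to the Korsmeyer-Peppas and Higuchi models and reports k, n, and R²; the data points and the use of scipy rather than OriginPro are assumptions for demonstration only.

```python
# Hypothetical release-kinetics fitting sketch (the paper used OriginPro 8.5);
# the time/release values below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([15, 30, 60, 120, 240, 480, 960, 1440], dtype=float)   # time, min
q = np.array([8, 13, 20, 29, 42, 58, 78, 100], dtype=float)         # cumulative release, %

def korsmeyer_peppas(t, k_kp, n):
    # Conventionally applied to the early portion of the curve (< ~60% release).
    return k_kp * t**n

def higuchi(t, k_h):
    return k_h * np.sqrt(t)

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

(k_kp, n), _ = curve_fit(korsmeyer_peppas, t, q, p0=[1.0, 0.5])
(k_h,), _ = curve_fit(higuchi, t, q, p0=[1.0])

print(f"Korsmeyer-Peppas: k = {k_kp:.3f} %/min^n, n = {n:.3f}, "
      f"R^2 = {r_squared(q, korsmeyer_peppas(t, k_kp, n)):.4f}")
print(f"Higuchi:          k = {k_h:.3f} %/min^0.5, "
      f"R^2 = {r_squared(q, higuchi(t, k_h)):.4f}")
```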
Colocalization
The degree of colocalization of liposomes with mitochondria was determined using a primary culture of rat motoneurons prepared as described by Sibgatullina and Malomouzh [110]. The cells were seeded in 24 × 24 mm glass plates. On the 6th day of cultivation, PC/Chol/TOC and PC/Chol/TOC/TPPB-14 liposomes with DOPE-RhB were added to culture medium and incubated for 24 h. Then, cells were washed twice with PBS and incubated for 20 min in a medium containing MitoTracker Green FM (Thermo Fisher Scientific, Waltham, MA, USA) to stain the mitochondria of the cells. The colocalization degree of liposomes with mitochondria was evaluated using a Leica SP5 TCS confocal scanning microscope (Leica Microsystems, Wetzlar, Germany). DOPE-RhB was excited at 561 nm, and MitoTracker Green FM at 488 nm. The fluorescence emission of DOPE-RhB and MitoTracker Green FM was collected at 570-700 nm and at 500-540 nm, respectively. Pearson's correlation coefficient was used to identify the dependence between the fluorescence intensities of the two dyes. The validity of the results was checked using the Student's t-test, and p-values of less than 0.05 were considered significant.
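For context, Pearson's colocalization coefficient is simply the pixel-wise correlation between the two fluorescence channels; the sketch below computes it from two intensity arrays with numpy, where the synthetic images stand in for the actual DOPE-RhB and MitoTracker Green channel exports.

```python
# Pixel-wise Pearson's colocalization coefficient between two fluorescence channels.
# The arrays here are synthetic; in practice they would be the DOPE-RhB (liposome)
# and MitoTracker Green (mitochondria) channel images of the same field of view.
import numpy as np

rng = np.random.default_rng(0)
mito_channel = rng.random((512, 512))                              # placeholder "green" image
lipo_channel = 0.5 * mito_channel + 0.5 * rng.random((512, 512))   # partially colocalized "red" image

def pearson_colocalization(ch1: np.ndarray, ch2: np.ndarray) -> float:
    """Pearson's correlation coefficient over all pixels of two equally sized images."""
    return float(np.corrcoef(ch1.ravel(), ch2.ravel())[0, 1])

print(f"Pearson's coefficient: {pearson_colocalization(mito_channel, lipo_channel):.2f}")
```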
Antioxidant Activity In Vitro
The antioxidant activity of free TOC (ethanol solution) and TOC-loaded liposomes was determined using an in vitro chemiluminescence assay, in which the intensity of chemiluminescence is a measure of the amount of radicals. Luminol (98%) (Alfa Aesar, Haverhill, MA, USA) was used as a luminophore, the luminescence of which was activated by 2,2′-azobis(2-amidinopropane) dihydrochloride (AAPH, 98%) (Acros Organics, NJ, USA). Before the experiment, a 1 µM solution of luminol was prepared in 0.1 M NaOH solution. Immediately prior to analysis, the stock solution of luminol was diluted four times with Milli-Q water. For chemiluminescent analysis, 1 mL of the reaction mixture was placed in a cuvette of a Lum-1200 instrument (DISoft, Russian Federation) thermostated at 30 °C. The reaction mixture was composed of the following components: 400 µL of 250 µM luminol, 500 µL of 0.1 M TRIS buffer with pH = 8.8 (Fisher Chemical, Waltham, MA, USA), and 100 µL of 40 mM AAPH aqueous solution. Then, the baseline chemiluminescence level was measured for 20 min, after which 10 µL of the test compound was added to the cuvette with the reaction mixture, and the chemiluminescence level was measured. The results obtained were expressed as a percentage of the initial baseline chemiluminescence level, which was taken as 100%. The results were processed using the PowerGraph and OriginPro 8.5 software.
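As a rough sketch of how the chemiluminescence data can be expressed relative to the baseline (the actual processing was done in PowerGraph/OriginPro), assuming a synthetic intensity trace with the test compound added after 20 min:

```python
import numpy as np

def percent_of_baseline(time_s, intensity, t_addition_s):
    """Express post-addition chemiluminescence as % of the pre-addition baseline."""
    time_s = np.asarray(time_s, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    baseline = intensity[time_s < t_addition_s].mean()   # taken as the 100 % level
    after = intensity[time_s >= t_addition_s]
    return 100.0 * after / baseline

# Synthetic trace: 20 min baseline, then quenching after the antioxidant is added
t = np.arange(0, 40 * 60, 10.0)                          # seconds
signal = np.where(t < 20 * 60, 1000.0, 350.0) + np.random.default_rng(1).normal(0, 20, t.size)
print(f"Mean residual chemiluminescence: {percent_of_baseline(t, signal, 20 * 60).mean():.0f} %")
```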
Animals
In vivo experiments involving animals were carried out in accordance with the Direc-
Visualization of Liposomes into the Rat Brain
To visualize liposomes in the brain, free RhB and RhB in PC/Chol/TPPB-14 (15 mM) liposomes were administered intranasally at a dose of 0.5 mg/kg (400 µL per rat) to Wistar rats. One hour after administration of the liposomal dispersion, animals were euthanized with isoflurane and transcardially perfused with 300 mL of cold PBS (pH = 7.4). Rat brains were removed and frozen in liquid nitrogen. The obtained samples were stored at −80 °C. Twenty-four hours before the experiment, samples were moved to a −20 °C freezer.
For microscopic imaging, samples were cut into 10 µm sections using a Tissue-Tek Cryo3 microtome (Sakura Finetek, Torrance, CA, USA). RhB fluorescence in the brain of rats was observed on a Leica TSC SP5 MP confocal laser scanning microscope (Leica Microsystems, Wetzlar, Germany) using a Cyanine 3 filter at λ ex = 550 nm and λ em = 570 nm. Non-treated animals were used as a control.
Novel Object Recognition Test
The experiments were carried out on transgenic mice of both sexes weighing 24-25 g that express a chimeric mouse/human amyloid beta precursor protein and a mutant human presenilin-1 (line B6C3-Tg(APP695)85Dbo (APP/PS1)). Liposomes were administered intranasally at 50 µL/mouse for 21 days at a DNP concentration of 1 mg/kg. The control group of animals was administered an equivalent amount of water. To determine the effectiveness of the proposed therapy in mice with a model of AD, a novel object recognition test was carried out on the 19th day of the therapy [111]. Liposomes were administered 20 min before the start of the test. On the first day, the mice were placed individually into a square testing arena with black walls (50 cm in length, 50 cm in width, 38 cm in height) for 5 min without any objects. On the second day, two identical objects were placed in the central part of the arena, and the mice were allowed to explore the objects for 10 min. On the third day, the mice were presented with a familiar and a novel object for 10 min. The time of exploration of each object by the mice was recorded using a digital camera. After each test, the arena was cleaned with a solution of 70% ethanol. At the end of the test, the preference index (exploration of novel object/total exploration time × 100) was calculated.
Thioflavin S Staining and Immunohistochemistry
On the 21st day of liposomal therapy, mice were anesthetized with isoflurane, transcardially perfused with 30 mL of PBS (pH = 7.4) and then with 4% paraformaldehyde in PBS. After decapitation, the brain was removed, kept for 24 h in a 4% paraformaldehyde solution, and transferred to a 30% sucrose solution in PBS containing 0.02% sodium azide. The brain hemispheres were frozen in a Neg 50 embedding medium, and frontal sections were made with a thickness of 20 µm on a Tissue-Tek Cryo3 microtome (Sakura Finetek, Torrance, CA, USA). To visualize Aβ plaques, brain samples were stained for 5 min with Mayer's hematoxylin solution (Biovitrum, Saint Petersburg, Russia) and then for 5 min with a 1% solution of Thioflavin S diluted in 50% ethanol. The number and area of Aβ plaques were counted using a LeicaDM 6000 CFS confocal scanning microscope (Leica Microsystems, Wetzlar, Germany). Data analysis was carried out in the entorhinal cortex and hippocampus at ×10 magnification. The results were averaged over 10 sections of the brain of each animal.
To assess the intensity of synaptophysin immunoexpression, the resulting brain sections were incubated in PBS containing 0.1% Triton X-100, 1% BSA, and 1.5% normal donkey serum for 30 min. Next, the sections were transferred to a solution (1:500) of primary rabbit monoclonal antibody to synaptophysin (ab 32127, Abcam) and incubated for 12 h at 4 °C, then washed with PBS and incubated in a solution (1:200) of secondary donkey anti-rabbit (Alexa Fluor® 488) antibodies (ab 150073, Abcam) for 1.5 h at room temperature in the dark. The intensity of synaptophysin immunoexpression was analysed using a LeicaDM 6000 CFS confocal scanning microscope (Leica Microsystems, Wetzlar, Germany). Data analysis was carried out in the entorhinal cortex and hippocampus at ×10 magnification. The results were averaged over 8 sections of the brain of each animal.
Statistics
All data processing was performed using Microsoft Excel 2016 ® and OriginPro 8.5. Results are expressed as the mean ± standard deviation. Statistical analysis of the results of in vivo experiments (determination of the number of Aβ plaques and intensity of synaptophysin immunoexpression) was carried out using the Mann-Whitney test. ANOVA statistics with Tukey's post hoc test were used to analyze the results of the behavioral test and DLS data. The validity of the colocalization results was checked using the Student's t-test. Significance was tested at the 0.05 level of probability (p).
Conclusions
For AD treatment, a protocol was developed for obtaining new multitargeted lipid carriers by modifying liposomes with tetradecyltriphenylphosphonium bromide and dual loading of substrates (α-tocopherol and donepezil hydrochloride) for intranasal administration. It was shown that the cationic liposomes have a high encapsulation efficiency of donepezil hydrochloride, as well as a significant antioxidant activity. The Korsmeyer-Peppas model confirmed that the release of donepezil is based on Fickian diffusion. The mitotropic activity of the liposomes was investigated on a culture of rat motoneurons using a confocal microscope. The modified liposomes showed significantly better colocalization with the mitochondria of neuronal cells compared with unmodified ones. Photographs of rat brain slices confirm the penetration of modified fluorescently labeled nanocarriers into the brain in vivo. Studies using transgenic mice with an AD model showed that the intranasal administration of liposomes for 21 days reduces the average number of Aβ plaques in the entorhinal cortex from 5.86 ± 0.46 in TG+ mice in the control group to 3.72 ± 0.25 (p = 0.00013) in TG+ mice treated with modified liposomes, and their area from 0.12 ± 0.01% to 0.07 ± 0.01% (p = 0.0008), respectively. A downtrend in the average number and total area of Aβ plaques was also observed in the dentate gyrus and CA1 region of the hippocampus. Thus, the intranasal administration of modified liposomes made it possible to decrease memory impairment and to influence the development of AD in transgenic mice by slowing down the rate of Aβ plaque formation.
Acknowledgments: TEM images were obtained at the Interdisciplinary Center for Analytical Microscopy, Kazan (Volga Region) Federal University, Russia.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
| 12,171.6 | 2023-06-22T00:00:00.000 | ["Biology", "Chemistry"] |
Escherichia albertii, a novel human enteropathogen, colonizes rat enterocytes and translocates to extra-intestinal sites
Diarrhea is the second leading cause of death of children up to five years old in the developing countries. Among the etiological diarrheal agents are atypical enteropathogenic Escherichia coli (aEPEC), one of the diarrheagenic E. coli pathotypes that affects children and adults, even in developed countries. Currently, genotypic and biochemical approaches have helped to demonstrate that some strains classified as aEPEC are actually E. albertii, a recently recognized human enteropathogen. Studies on particular strains are necessary to explore their virulence potential in order to further understand the underlying mechanisms of E. albertii infections. Here we demonstrated for the first time that infection of fragments of rat intestinal mucosa is a useful tool to study the initial steps of E. albertii colonization. We also observed that an E. albertii strain can translocate from the intestinal lumen to Mesenteric Lymph Nodes and liver in a rat model. Based on our finding of bacterial translocation, we investigated how E. albertii might cross the intestinal epithelium by performing infections of M-like cells in vitro to identify the potential in vivo translocation route. Altogether, our approaches allowed us to draft a general E. albertii infection route from the colonization till the bacterial spreading in vivo.
Ethics statement
The protocols involving animal handling were approved by the Research Ethics Committee of UNIFESP, project license number 0342/09. "Comitê de Ética em Pesquisa da UNIFESP/ Hospital São Paulo" (CEP UNIFESP/HU-HSP) is in accordance with Good Clinical Practice (GCP) of the International Council for Harmonisation (ICH), formerly the International Conference on Harmonisation (ICH). Animals are handled under "Brazilian Guidelines For The Care And Use Of Animals In Educational Activities Or Scientific Research" standards that are in accordance with Brazilian Law 11.794/2008, which defined procedures to be employed in the scientific use of animals.
Bacterial strains
The invasive E. albertii 1551-2 strain (intimin subtype omicron) and its isogenic mutants obtained in previous studies by our group (Table 1) were statically cultured in Luria Bertani broth (LB) for 18 h at 37˚C. Antibiotics were added to select resistant strains as indicated in Table 1. The mutant strain 1551-2Δtir was constructed employing the one-step allelic exchange recombination method [35]. Primers containing a 40-bp region homologous to the 5' and 3' ends of the tir gene and a specific sequence for the zeocin (zeo) resistance-encoding gene (tir-zeo-F ATG CCT ATT GGT AAT CTT GGT CAT AAT CCC AAT GTG AGT GGT CAT CGC TTG CAT TAG AAA GG and tir-zeo-R TTA AAC GAA ACG ATT GGA TCC CGG CAC TGG TGG GTT ATT CGA ATG ATG CAG AGA TGT AAG) were used to amplify the Zeo cassette [36]. Amplicons obtained in the PCR reaction were electroporated into competent wild type bacteria harboring the pKOBEG-Apra plasmid. The selection of recombinant bacteria was done on Zeo-containing LB agar plates (60 μg/mL), and the tir deletion in the isogenic mutant was confirmed by PCR. In addition, the loss of the pKOBEG-Apra plasmid was confirmed by testing the mutant strain for apramycin susceptibility.
Adhesion and Invasion assays
Quantitative assessment of bacterial association and invasion was performed as described previously [24,41]. Briefly, differentiated Caco-2 cells were infected with 10^7 colony-forming units (CFU) of E. albertii strain 1551-2 and its isogenic mutants for 6 h. Thereafter, cell monolayers were washed three times with phosphate buffered saline (PBS). While one set of monolayer-containing wells was lysed in 1% Triton X-100 for 30 min at 37˚C, another set was treated with 100 μg/mL of gentamicin (Sigma, USA) for one hour at 37˚C and then washed 5 times prior to lysis. Following cell lysis, bacteria were resuspended in PBS and quantified by plating serial dilutions onto MacConkey agar plates to obtain the total number of cell-associated bacteria and of intracellular bacteria. The invasion indexes were calculated as the percentage of the total number of cell-associated bacteria that were located in the intracellular compartment. Assays were carried out in triplicate, and the results from at least three independent experiments were expressed as the percentage of invasion (mean ± standard error).
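A minimal sketch of the adherence/invasion quantification, assuming the CFU values have already been back-calculated from the serial dilutions (all counts are illustrative):

```python
def invasion_index(cell_associated_cfu: float, intracellular_cfu: float) -> float:
    """Invasion index (%) = gentamicin-protected (intracellular) CFU / total cell-associated CFU * 100."""
    return intracellular_cfu / cell_associated_cfu * 100.0

# Illustrative triplicate counts (CFU/well) for one strain
wells = [(2.1e6, 3.4e4), (1.8e6, 2.9e4), (2.4e6, 3.8e4)]
indexes = [invasion_index(total, intra) for total, intra in wells]
mean_idx = sum(indexes) / len(indexes)
sem = (sum((x - mean_idx) ** 2 for x in indexes) / (len(indexes) - 1)) ** 0.5 / len(indexes) ** 0.5
print(f"Invasion index = {mean_idx:.2f} ± {sem:.2f} % (mean ± SEM, n = {len(indexes)})")
```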
Animals
Female Wistar-EPM rats, ~3 months old and weighing 200-250 g, were obtained from the Central Animal Facility of Universidade Federal de São Paulo (UNIFESP). After 14 days of environmental adaptation, stool samples were collected for coproculture, and E. coli recovered from each animal were screened by PCR for the presence of the eae gene, which encodes the adhesin intimin (AE11 5'-CCCGGCACAAGCATAAGCTAA-3' and AE12 5'-ATGACTCATGCCAGCCGCTCA-3', generating a fragment of 917 bp [42]). This procedure was performed to avoid the use of experimental animals that were colonized by either E. coli or Citrobacter rodentium, a murine pathogen that also promotes AE lesion formation [43]. Prior to the assays, animals were fasted for 24 h with access to water.
In vivo organ culture (IVOC) bacterial colonization assay
For removal of ileum fragments, rats were held under anesthesia (pre-atropinization, induction of inhalation anesthesia with isoflurane and maintenance with an intramuscular injection of 0.1 mL/100 g body weight of ketamine + xylazine (4:1)). After antisepsis, rats were subjected to median laparotomy for the collection of intestinal fragments of ~0.5 cm². Briefly, ileal segments were removed, sectioned longitudinally along the antimesenteric border and placed onto a sterile filter paper with the serous portion facing the filter. This procedure allowed the exposure of the entire apical surface of the mucosa to the bacterial inoculum. Fragments were kept in Dulbecco's Modified Eagle Medium (DMEM-Gibco Invitrogen, USA) supplemented with 10% fetal bovine serum (Gibco Invitrogen, USA) [44]. Fragments were infected with 10^10 CFU for 6 h of incubation (37˚C, 5% CO2); fragments were then washed, macerated, suspended in PBS and plated in serial dilutions onto MacConkey agar plates containing 20 μg/mL nalidixic acid [21] for quantification (calculation of the total number of mucosa-associated bacteria). Infected IVOC preparations were also fixed for electron microscopy analysis.
In vivo bacterial translocation assay
Animals were maintained under anesthesia (intramuscular injection of 0.1 mL/100 g body weight of ketamine and xylazine (4:1)) during the entire procedure. An additional half dose of anesthetic was administered when necessary. Bacterial translocation (BT) was induced by a midline incision, oroduodenal cannulation, injection of 10^10 CFU/mL resuspended in 10 mL of saline through the catheter, and bacterial retention for a period of 2 h, within a portion between the duodenum and ileum, by means of ligatures [39]. The E. coli rat strain R6, which is devoid of DEC virulence genes such as the eae gene, was used as a BT-positive control strain [39], while the non-pathogenic E. coli strain HB101 was used as a BT-negative control. Bacterial inoculation causes a transient dilation of the small bowel, which disappears within a short period. Blood (1 mL), mesenteric lymph nodes (MLN), spleen and liver were then collected, weighed, macerated and suspended in PBS. Subsequently, bacterial colonies were enumerated after plating serial dilutions onto MacConkey agar plates containing 20 μg/mL nalidixic acid to estimate the number of translocated bacteria. The results were expressed as mean log10 values of CFU/g of tissue.
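For the organ counts, a small sketch of the conversion from a plate count to log10 CFU per gram of tissue; the dilution scheme and numbers are hypothetical:

```python
import math

def log10_cfu_per_gram(colonies: int, dilution_factor: float,
                       plated_volume_ml: float, homogenate_volume_ml: float,
                       tissue_mass_g: float) -> float:
    """Convert a plate count to log10 CFU per gram of tissue."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    total_cfu = cfu_per_ml * homogenate_volume_ml
    return math.log10(total_cfu / tissue_mass_g)

# Example: 42 colonies on the 10^-2 dilution plate, 0.1 mL plated,
# organ macerated in 2 mL PBS, tissue mass 0.35 g (all values illustrative)
print(f"{log10_cfu_per_gram(42, 1e2, 0.1, 2.0, 0.35):.2f} log10 CFU/g")
```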
M-like cell differentiation
M-like cells were obtained as previously described [45-47] with modifications. Briefly, Caco-2 cells (10^5 cells/filter) were seeded on the upper chamber of a Millicell filter (3.0-μm pore diameter, Millipore, USA) and kept in DMEM as described above for 10 days at 37˚C in an atmosphere of 5% CO2. The lower chamber was also filled with DMEM. During this incubation period, the transepithelial electrical resistance (TEER) was measured every two days using the Millicell ERS (Electrical Resistance System, Millipore) until it reached ~420 mΩ. Afterwards, Raji-B cells (10^6 cells/mL) were seeded in the Millicell lower chamber and cultured in RPMI-1640, as described above, for 6 days. In parallel, in some filters, Caco-2 cells were kept in monoculture for an additional 6 days (non-differentiated cells). Since galectin-9 is expressed on M cells but not on the Caco-2 cell surface [47], its detection was used to confirm M-like cell differentiation.
In vitro bacterial translocation assay
Bacterial suspensions (10 7 CFU) in DMEM (as described above, except for 1% antibiotics) were inoculated in the upper chamber of filters bearing either M-like/Caco-2 or Caco-2 cells only for 6 h. Filters were transferred to a well containing fresh medium (DMEM without antibiotics) every hour and the medium from the lower chamber was collected for bacterial quantification at 6 h [46]. In parallel, the transmembrane electric resistance (TEER) was measured. At the end of the infection period, monolayers were washed with PBS and fixed for microscopy.
Transmission electron microscopy (TEM)
Infected monolayers and ileum fragments were first fixed in 2% glutaraldehyde (EMS, USA) for at least 24 h at 4˚C. After primary fixation, cells and fragments were washed 3 times with PBS (10 min) and subjected to secondary fixation with 1% osmium tetroxide (EMS, USA) in 0.1 M sodium cacodylate buffer for 30 min. After being washed three times with distilled water, preparations were dehydrated through a graded ethanol series (50%, 75%, 85%, 95% and 100%), and propylene oxide (100%). Preparations were then gradually embedded in Araldite, which was allowed to polymerize for 24-48 h at 60˚C. Ultrathin sections were placed on Formvar (EMS, USA) coated 200 mesh copper grids and stained with 4% aqueous uranyl acetate (Merck, Germany) and Reynold's lead citrate (Merck, Germany). Grids were examined under TEM (LEO 906E-Zeiss, Germany) at 80 kV [48].
Statistical analysis
Differences in bacterial adherence, invasion percentages and translocation or differences in TEER of infected M-like cells were assessed for significance by using an unpaired, two-tailed t test (GraphPad Prism 4.0).
Intimin, Tir and T3SS are essential for invasion of human intestinal cells cultured in vitro
Strain 1551-2 had been previously evaluated regarding its ability to invade differentiated Caco-2 cells [24]. In this study, Caco-2 cells were infected with bacterial suspensions of the wild type or its isogenic mutant strains (Table 1). Compared to the wild type strain the adherence index of mutant strains was not altered (Fig 1A) while the invasion index decreased significantly (Fig 1B), except for fimA mutation that did not affect the adherence or invasion indexes (Fig 1A-1B). These results confirm that E. albertii 1551-2 invasion depends on intimin and/or proteins injected by the T3SS, such as Tir, but not on T1P. Besides that, as the T3SS mutant did not inject Tir into Caco-2 cells, it is possible that, in the absence of its receptor, the 1551-2 intimin might recognize another host cell membrane structure as site for adhesion, but not for invasion, as confirmed by results obtained in invasion assays with 1551-2Δtir strain. Complementation of T3SS mutant restored the invasion index to the wild type values (Fig 1B).
E. albertii 1551-2 colonizes rat enterocytes in in vitro organ culture (IVOC)
To evaluate whether E. albertii 1551-2 could colonize the rat intestinal mucosa, ileal fragments (approx. 0.5 cm 2 ) were individually infected with bacterial suspensions of the wild type or its isogenic mutant strains (Table 1). Methylene blue staining of the intestinal fragments was performed to confirm that all tissue layers were well preserved (S1 Fig). SEM images confirmed that the wild type strain strongly adhered to the intestinal mucosa (Fig 2A), whereas the T3SS-mutant comparatively showed a weaker adherence (Fig 2B). Noninfected fragments showed well-preserved bacterial-free brush borders (Fig 2C). Similarly to the wild type strain, the intimin, Tir and T1P mutants remained adherent to the intestinal mucosa (S2 Fig). Besides bacterial adherence, TEM images showed that the wild type strain caused AE lesions with characteristic pedestals underneath adhered bacteria on the rat mucosal surface (Fig 2D). In contrast, the T3SS-translocon mutant failed to cause AE lesions ( Fig 2E), and non-infected fragments showed well-preserved bacterial-free brush borders (Fig 2F).
The number of CFU recovered from the rat intestinal mucosa in vitro decreased significantly in the absence of the T3SS-translocon, while mutant strains deficient in intimin, Tir or T1P production, as well as the complemented T3SS mutant strain, showed adherence levels similar to those of the wild type strain (Fig 2G).
E. albertii strain translocates across rat intestinal barrier in vivo
To reduce the number of animals utilized in the next approach, we selected the T3SS-translocon mutant for in vivo comparison with wild type strain based on results obtained with the IVOC infection assay. Our results demonstrated that E. albertii 1551-2 reached the liver, while the T3SS-translocon mutant was not recovered from this organ. These findings suggest that, as a consequence of the reduced adhesion of this mutant to the intestinal mucosa, as observed ex vivo, fewer bacteria were available to cross the intestinal barrier, reach and survive in the MLN (Fig 3).
E. albertii 1551-2 translocates across M-like cells
Considering our results in the BT assay described in Materials and Methods and that pathogens such as Shigella species use M cells to cross the intestinal barrier, we performed E. albertii infection of M-like cells in vitro to identify the potential BT route employed in vivo. Prior to infection, we confirmed the conversion of part of the Caco-2 cells to M-like cells as described elsewhere [47], by demonstrating the expression of galectin-9 on M-like cell surface but not on Caco-2 cells (S3 Fig). Moreover, cellular morphology alterations [45] were observed on M-like cells, such as a reduced number of microvilli, flattened apical surface and disorganized cytoplasm (Fig 4A), while fully differentiated Caco-2 cells displayed preserved brush borders (Fig 4B). The presence of M-like cells significantly increased bacterial translocation (Fig 4C) as compared to differentiated Caco-2 cells (Fig 4D).
For quantitative E. albertii 1551-2 translocation assessment, the tEPEC prototype strain E2348/69 was used as a control [46]. We demonstrated that E. albertii translocation through M-like cells was significantly more effective than through differentiated Caco-2 cells (2,962.0 ± 546.0 and 184.2 ± 91.6, p = 0.0024, respectively) (Fig 5A), and as previously demonstrated [46], the presence of M-like cells did not significantly increase the transcytosis of tEPEC E2348/69 as compared to differentiated Caco-2 cells (1.203 ± 0.528 and 0.417 ± 0.247, p = 0.1480, respectively). Additionally, E. albertii 1551-2 translocated through M-like cells more effectively than tEPEC E2348/69 (p = 0.033, Fig 5A). In order to exclude bacterial paracellular migration due to increased permeability as an invasion route, the transepithelial electrical resistance was measured hourly during the infection period (S3 Fig). Comparison between M-like cells infected with the wild type or the T3SS mutant strains demonstrated a significant decrease in bacterial recovery with the latter strain (p = 0.0029, Fig 5B), while complementation of the mutant strain restored its translocation capacity (p = 0.0418, Fig 5B and S4 Fig). In contrast, no significant differences in CFU recovered from the T1P mutant and its complemented strain were observed with M-like cells (Fig 5B).
Discussion
Previous data from our laboratory showed that the 1551-2 strain invaded HeLa cells [21] with invasion being dependent on the intimin-Tir interaction, since the intimin mutant (1551-2eae::Kn) was non-invasive [21]. Later on, we demonstrated that, in contrast with the wild type 1551-2 strain that displayed a localized pattern of adherence (formation of compact bacterial clusters) in HeLa cells, its T3SS-mutant adhered weakly, while the intimin mutant adhered, showing a T3SS-dependent diffuse pattern of adherence [36]. In addition, Pacheco et al., 2014 [24] showed that the 1551-2 strain invades, persists and multiplies inside differentiated Caco-2 cells up to 48 h.
In this work, we demonstrated for the first time that intimin, Tir and T3SS are essential for invasion of enterocytes in vitro, since mutations in the corresponding genes abolished bacterial uptake. Bacterial adherence was preserved in mutants, including the T3SS mutant, which did not adhere on HeLa cells in a previous study [36]. This fact might be due to the interaction between either intimin or T1P and Caco-2 cell surface receptors. It has been previously demonstrated that Tir and Map, and EspF can induce tEPEC invasion of HeLa and Caco-2 cells, respectively [49,50].
We have previously shown that an aEPEC strain, 1711-4, is able to translocate across the rat gastrointestinal barrier and be isolated from the MLN, spleen and liver [51]. The mechanisms promoting this bacterial translocation, however, are unknown. Generally, studies on colonization and infection by enteropathogens are conducted with Caco-2 cells; however, although this cell line mimics enterocytes from the human small intestine, it does not represent the complex intestinal mucosa, since it is devoid of the mucus layer and other intestinal cell types. It was demonstrated that EHEC [52] as well as tEPEC E2348/69 [44] colonize human IVOC. More recently, Etienne-Mesmin et al. [53] demonstrated that EHEC colonize and translocate into ileum fragments from mice, where Peyer's patches are available, but quantification was not performed. In the present study, we evaluated the capacity of E. albertii to colonize the rat intestinal mucosa in the IVOC model, to mimic the first steps that lead to bacterial translocation from the intestinal lumen to the extra-intestinal sites demonstrated in vivo. We showed for the first time the interaction of E. albertii with rat intestinal mucosa ex vivo, which could be an alternative model to study the interaction of AE-producing pathogens with more complex intestinal tissues. In this model, colonization was detected after 30 min of infection, and invasiveness was revealed after 2 h, when E. albertii 1551-2 could be found inside the enterocytes. Additionally, we demonstrated that E. albertii adherence to the rat IVOC depends on the T3SS, as previously demonstrated in human IVOC for tEPEC E2348/69 [44], but not on intimin, Tir or T1P, since in the absence of these genes, bacterial adherence was qualitatively and quantitatively preserved. Thus, the use of this model may optimize the selection of potentially invasive strains to be tested in vivo, thereby reducing the number of animals used to assess the fate of invasive E. albertii from the intestinal lumen to extra-intestinal sites.
We selected the T3SS mutant to compare to the wild type strain, since this mutant strain had previously shown a significantly reduced capacity to interact with the host epithelium in an ex vivo model, losing the capacity to invade cultured intestinal cells in vitro.
It has been reported that some E. albertii strains isolated from birds are able to adhere and to invade HEp-2 cells [54] and to reach the liver and spleen of one day-old chicks in vivo, possibly by disrupting the intestinal barrier, despite the minor intestinal mucosa alterations [54]. In this study, using an in vivo bacterial translocation assay in rats, we recovered the E. albertii 1551-2 strain in the MLN and liver but not spleen, while the T3SS mutant completely lost translocation capacity. It has been reported that T3SS-dependent effectors such as EspF, Map and NleA disrupt tight junctions that contribute to the integrity of the intestinal barrier [55][56][57]. In addition, some infectious processes can disturb the intestinal epithelium, for example, neutrophil migration during inflammation; this event promotes a transitory epithelial barrier destabilization, which exposes the basolateral side, either allowing enterocyte invasion [58] or offering an alternative route for bacterial translocation from the intestinal lumen to extraintestinal niches.
Based on our finding that E. albertii 1551-2 can reach the MLN and liver in vivo and that the invasion level through the basolateral surface is higher than at the apical surface of T84 cell monolayers [22], we investigated how E. albertii might cross the intestinal epithelium. It is well known that enteropathogens can reach basolateral receptors and promote enterocyte invasion in vivo by transcytosis through M cells [25,59]. According to Hase and coworkers [28], bacterial translocation depends on T1P-GP2 interaction, since isogenic mutant or non-T1P producer strains were unable to translocate through M-like cells. On the other hand, Inman and Cantey [60] described that a rabbit EPEC strain (RDEC-1) produced AE lesion on the M cell membrane, suggesting that AE lesions could prevent bacterial internalization, thus preventing transcytosis and antigen presentation, thereby delaying the immune response.
In this study, E. albertii 1551-2 translocation was significantly more effective through M-like cells than through Caco-2 cells only. This could not be observed with tEPEC, as previously demonstrated by [46]. We also demonstrated that translocation depended on a functional T3SS, and that the T1P mutation did not compromise bacterial translocation, contrary to what was found by Hase et al. [28]. These differences could be due to allelic FimH alterations in T1P in different strains. Therefore, these data suggest that E. albertii 1551-2 may reach the enterocyte basolateral surface in vivo after M cell translocation. Etienne-Mesmin et al. [53] also found that EHEC O157:H7 and O113:H2 and their respective intimin and Shiga toxin mutants translocated more effectively through M-like cells in comparison with Caco-2 cells. Cieza et al. [61] reported that the translocation of adherent-invasive E. coli (AIEC) through M-like cells depends on IbeA (an invasin); however, E. albertii strain 1551-2 is devoid of the ibeA gene (not shown) and other invasion-related genes [62], reinforcing that the bacterial translocation ability of this E. albertii strain is due to the intimin-Tir interaction.
Altogether, our results demonstrated for the first time that both ex vivo and in vivo bacterial infection of rat intestinal mucosa are useful models to study the E. albertii interaction with the host. We also showed that E. albertii 1551-2 may cross the intestinal mucosa in vivo, possibly using M cells as a route to reach extra-intestinal organs.
| 4,908.4 | 2017-02-08T00:00:00.000 | ["Biology", "Medicine"] |
Review on Complete Mueller Matrix Optical Scanning Microscopy Imaging
Optical scanning microscopy techniques based on the polarization control of light have the capability of providing non-invasive, label-free contrast. By comparing the polarization states of the excitation light with their transformation after interaction with the sample, the full optical properties can be summarized in a single 4 × 4 Mueller matrix. The main challenge of such a technique is to encode and decode the polarized light in an optimal way pixel-by-pixel and to take into account the polarimetric artifacts from the optical devices composing the instrument in a rigorous calibration step. In this review, we describe the different approaches for implementing such a technique in an optical scanning microscope, which requires high-speed polarization control. We then explore the recent advances in terms of technology, from industrial to medical applications.
Introduction
Polarization-based imaging techniques are powerful approaches having the unique ability to produce specific contrasts for revealing hidden information [1]. Over the last few decades, polarimetry has demonstrated its efficiency in multiple areas, showcasing its particular sensitivity to the structure and orientation of the medium [2]. It has accordingly been applied to a wide range of experiments, from astrophysics [3,4] and remote target detection [5][6][7][8] to micro/nanoparticles in turbid or highly scattering media [9][10][11][12][13][14]. In particular, Mueller matrix polarimetry is the most comprehensive method, as it provides the full polarimetric response of a sample through its 4 × 4 Mueller matrix (MM). At least 16 intensity measurements, obtained by a set of 16 coding and decoding polarization states, must be performed to determine the 16 real independent elements of the Mueller matrix m ij [15]. Various methods were then developed based on either temporal, spatial, or spectral polarization coding and decoding [16]. For imaging microscopy applications, two approaches provide the complete Mueller matrix elements m ij (x,y) to be measured across a sample in two dimensions.
The first and most common approach incorporates full-field imaging using a CCD or a CMOS camera, including full-field polarization coding and decoding stages. Most wide-field Mueller imagers use dynamic polarization optics to modulate light polarization in the temporal domain. The methodology consists of a sequential acquisition of at least 16 polarization-resolved intensity images of the sample, reconstructing the complete Mueller matrix in a few seconds up to a few minutes. For tracking fast polarimetric changes, recent works have proposed a snapshot Mueller matrix microscope based on the simultaneous coding/decoding of polarization states physically split in the plane of the sensor, which is of great interest for biomedical diagnosis [17]. The idea is to couple a micro-polarizer array with a CCD or CMOS camera. Each pixel of the camera is linearly polarization encoded (0°, 45°, 90° and 135°), and a group of four pixels forms a super-pixel allowing the reconstruction of the four Stokes coefficients through a field-programmable gate array (FPGA) card in real time [18]. Nowadays, since this passive technique does not require any sophisticated mathematical model or expensive optical features, numerous commercial polarization-resolved cameras exist, dedicated to remote sensing or wide-field imaging [19][20][21]. However, the limitation of such a method is the need to take into account the crosstalk between pixels and the loss of spatial resolution due to the definition of the super-pixels [22]. Another limitation is the difficulty of implementation with other imaging modalities due to the space required for the polarimetric optical features.
In an attempt to preserve the axial optical resolution through the illumination volume, the second approach consists in point-by-point scanning laser microscopy (SLM) of the sample. This review article is dedicated to the description of the experimental advances in the field of complete MM in optical scanning microscopes for imaging, pixel-by-pixel, the xy focal plane of the objective. This measurement method differs from approaches that, instead of scanning the beam, collect the angular fingerprint of the sample, as often proposed in polarization-resolved scatterometry [23]. In this review, we demonstrate the capability of this method for imaging localized contrasts based only on the optical and structural properties of the sample at the microscale. Commonly, the imaging of such small objects is made using fluorescence techniques showing an extraordinary capability of tracking localized molecules beyond the diffraction limit. Nevertheless, it requires the use of fluorophores and high light doses that can alter the sample organization. For this reason, the main advantage of developing a full MM imaging system is that there is no labeling process or a priori knowledge of the sample. Thus, this technique is known as a label-free method, and the contrast is based solely on the interpretation of the light-matter interaction arising from the optical fingerprint of the sample. We show that developing full MM in a scanning microscopy configuration offers the advantage of being easily implemented into commercial setups, giving the capability toward multimodal imaging. We used databases such as PubMed and the Web of Science to gather information for this article. First, we discuss the different approaches for coding and decoding the polarization compatible with scanning the beam across the sample. Furthermore, we present different applications of this technique, demonstrating its performance from the earliest works to the most recent advances in the optical scanning microscopy field.
Mueller Matrix Optical Scanning Architecture
Any MM polarimeter is composed of two optical modules aiming to encode/decode the polarization states after interaction with the medium. First, the Polarization States Generator (PSG) encodes the polarization states from the incoming light source (emitted by a lamp or a laser diode), as described by the Stokes vector S_in = [S0, S1, S2, S3]_in, where S0 is the total transmitted light and S1, S2 and S3 are related to the linear horizontal/vertical, +45°/−45° and circular right/left polarization states. Their combinations provide valuable additional quantities, such as the Degree Of Linear Polarization (DOLP) and the Degree Of Circular Polarization (DOCP) [24]. Then, after the sample, the transformation of the polarized light is decoded by the Polarization States Analyzer (PSA), giving the output Stokes vector S_out = [S0, S1, S2, S3]_out, which is collected by a photodetector. In the literature, the combination of the PSA and the detector is also referred to as the Polarization States Detector (PSD). The MM, noted M, describes how the polarization state of the input light changes upon interaction with the sample through S_out = M · S_in, where S_in and S_out are the input and output Stokes vectors. Since the imaging contrasts arise from the purely linear optical response of the medium to the polarized light, there is no need for high-power laser sources or extremely sensitive photodetectors, which drastically decreases the cost and the complexity of polarization-resolved techniques. The main challenge for implementing this technique in an SLM configuration is to adapt a proper methodology for encoding/decoding the polarization states in a way that retrieves the full MM pixel-by-pixel. Additionally, the critical issue for extracting the accurate MM is the calibration of the full optical system. It has to account for the polarization-state inaccuracies of the PSG and PSA, combined with the polarimetric artifacts arising from the multiple interactions with the optical elements composing the microscope (lenses and mirrors). In SLM, the specimen is scanned by a diffraction-limited spot of laser light after reflection on galvanometric mirrors. Then, the light is transmitted or reflected by the in-focus illuminated volume element (voxel) of the specimen and is focused onto a photodetector. For MM SLM, the lateral resolution approaches the diffraction limit set by the excitation wavelength and the objective numerical aperture (on the order of 0.61 λ/NA). The different configurations available for such a technique are presented in Figure 1. Usually, the calibration steps consist in measuring, independently of the sample, the polarimetric contributions of (1) the PSG and PSA blocks and (2) the optical elements of the microscope. First, the PSG and PSA can be calibrated separately using dedicated methods according to their architecture. Studying the conditioning of the measurement matrix has been used by several authors to optimize the polarimeter performance. In particular, if a simple set of rotated polarizers and waveplates is used for encoding the polarization states, a model such as the Eigenvalue Calibration Method (ECM) [25][26][27] has proven to be robust and versatile enough. In this way, the trace of the matrices measured using simple reference samples at specific orientations gives a Condition Number (CN) that summarizes the robustness against noise propagation through the whole system.
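As a small numerical illustration of the Stokes/Mueller formalism recalled above (not tied to any specific instrument), the following sketch computes the DOLP and DOCP of an input state and its transformation S_out = M·S_in by an idealized sample, here a horizontal linear polarizer:

```python
import numpy as np

def dolp(s):  # degree of linear polarization
    return np.hypot(s[1], s[2]) / s[0]

def docp(s):  # degree of circular polarization
    return abs(s[3]) / s[0]

# Input Stokes vector: +45° linearly polarized light
s_in = np.array([1.0, 0.0, 1.0, 0.0])

# Example sample: ideal horizontal linear polarizer (Mueller matrix, intensity convention)
m_sample = 0.5 * np.array([[1, 1, 0, 0],
                           [1, 1, 0, 0],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0]])

s_out = m_sample @ s_in          # S_out = M * S_in
print("S_out =", s_out)
print(f"DOLP_in = {dolp(s_in):.2f}, DOCP_in = {docp(s_in):.2f}")
print(f"DOLP_out = {dolp(s_out):.2f}, DOCP_out = {docp(s_out):.2f}")
```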
If the PSG and PSA are more complex to model, for instance when using electro-optic devices (Pockels cells or photoelastic modulators) [28], the system is considered as a "black box" evaluated by a figure of merit named the Equally Weighted Variance (EWV) criterion, where only the global experimental noise propagation is considered [29]. In a confocal mode for the reflection imaging configuration, the double pass through the sample has to be considered [30]. Indeed, this becomes crucial for polarization tracking through the overall Point Spread Function (PSF) volume. For this reason, a variant of the ECM has been introduced, requiring measurements of the polarimetric properties of reference samples such as a linear polarizer or a waveplate at specific orientations. However, in a confocal mode it is important to recalibrate the system for each pinhole size, resulting in an increased number of measurements. Additionally, it is important to assume that the light is reflected by a perfect mirror at normal incidence and that the fingerprint of the polarization features is independent of the direction of propagation of the light. This final hypothesis is not easy to achieve experimentally, limiting the use of such an approach for versatile implementations. Second, the microscope body is composed of multiple successive optics (lenses, filters and mirrors) that completely transform the generated polarization states. Thus, the pixel-by-pixel measurement of its optical properties can easily be performed by removing any sample in transmission or by placing a simple reflective mirror in reflection. The double-pass ECM method can also be used and provides simultaneously the Mueller matrices of the encoding/decoding blocks and of the microscope body [30]. Finally, the Mueller matrix of the sample is isolated by matrix inversion based on the Mueller/Stokes formalism, i.e., by left- and right-multiplying the measured matrix by the inverses of the Mueller matrices of the optics located after and before the sample, respectively. Most MM SLM microscopes deal with slow polarization-state encoding/decoding rates, that is, slower than the pixel dwell time achieved by the galvanometric scanner (GS). However, this limitation has been overcome by simply acquiring successive images with different polarization states. The most recent advances for obtaining the full MM images at the pixel dwell time rate have proposed a snapshot approach, inspired by Optical Coherence Tomography (OCT) setups, which to our knowledge is the fastest way to obtain the MM. Although the full-field technique is potentially faster than the scanning technique, the latter opens the way for the implementation of various imaging modalities (Mueller/confocal/nonlinear) on the same scanning microscope [31].
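To illustrate the pixel-wise extraction principle, a sketch of the classical reconstruction in which the 16 intensities B measured with a PSG matrix W (columns = generated Stokes vectors) and a PSA matrix A (rows = analyzer vectors) satisfy B = A·M·W, so that M is recovered by (pseudo-)inversion; the idealized states below are placeholders, not the calibrated matrices of a real microscope, and the same inversion removes the calibrated optics located before and after the sample:

```python
import numpy as np

# Idealized PSG/PSA: four states (H, V, +45°, right-circular), intensity convention
states = np.array([[1,  1, 0, 0],   # H
                   [1, -1, 0, 0],   # V
                   [1,  0, 1, 0],   # +45°
                   [1,  0, 0, 1]])  # RCP
W = states.T                        # columns = generated Stokes vectors (PSG)
A = 0.5 * states                    # rows = analyzer projection vectors (PSA)

# "True" sample for the simulation: a quarter-wave plate with fast axis at 0°
m_true = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, -1, 0]], dtype=float)

B = A @ m_true @ W                  # 16 simulated intensity measurements (one pixel)
m_est = np.linalg.pinv(A) @ B @ np.linalg.pinv(W)
print(np.round(m_est, 3))           # recovers m_true up to numerical precision
```

In practice, more than 16 probing states are often used, and the pseudo-inverse then performs a least-squares estimation whose robustness against noise is captured by the condition numbers of W and A, in line with the CN criterion discussed above.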
Temporal Domain Encoding/Decoding
In the 1970s/1980s, the very first developments in polarization-resolved microscopy attracted interest in the research community for their advantage of extracting label-free contrasts. Due to the low technological requirements, the temporal control of the polarized light was the first approach for such instrumentation, based on the use of rotating optical features or electro-optic modulators. In a single-point measurement, the technique was based on the differential polarized (linear or circular) intensity collection, leading to the determination of only a few Mueller matrix elements [32][33][34][35][36][37]. The method was then transferred to an SLM configuration for imaging highly ordered macromolecules and biopolymers, such as chromatin or blood cells [38][39][40][41][42][43]. The main motivation for such implementation was to substitute the fluorescence techniques by imaging and tracking very localized molecules based solely on the optical response of the sample, without sample preparation. Later, the first full MM polarimetric imaging associated with an SLM configuration was proposed for ophthalmology, based on OCT imaging, in 1999 [44]. It allows the imaging of the full MM in reflection configuration using liquid crystal variable retarders (LCVRs). Even if there is no need for any moving part, the technique consists in measuring 16 successive double-pass images with an exposure time of 4 s, resulting in a full acquisition in one minute. Considering that this approach required a low-power source so as not to damage the retinal tissues, the method is coupled with extensive post-processing for reducing the noise and increasing the Signal-to-Noise Ratio (SNR). A block diagram summarizing this technique in transmission configuration is proposed in Figure 2. In the confocal mode, an aperture slightly smaller in diameter than the Airy disc image is positioned in the image plane in front of the detector [46]. It is the ability of this method to reduce the out-of-focus blur signal, and thus permit accurate non-invasive optical sectioning, that makes confocal scanning microscopy so well suited for the imaging and three-dimensional tomography of biological specimens. It has several additional advantages over conventional optical microscopy, including the possibility of a significant improvement in lateral resolution and the capability for direct non-invasive serial optical sectioning of intact specimens. In 2005, the first confocal Mueller reflection microscope was developed for retinal diagnosis [27]. The scheme aimed to increase the polarization encoding/decoding rate as much as possible, allowing the highest number of measurements for optically improving the SNR. For this reason, the generation of the polarization states is performed with two Pockels Cells (PC), allowing the encoding of the polarization at a few MHz. In an effort to speed up the decoding time, the method was coupled with a combination of multiple polarization-resolved detectors based on the Division of Amplitude (DoA) method [47]. Thus, the time for getting the full MM is around 50 ms, with a lateral optical resolution of 30 µm. However, the implementation of such a technique into a multimodal system is difficult considering the physical space required. Furthermore, using multiple polarization optics leads to errors in aligning the optical axes, which are taken into account by the ECM.
Nowadays, the multiple reflection optics used for this method, such as beamsplitters, Fresnel rhombs or Wollaston prisms, could be replaced by very compact modules of a few centimeters designed from metasurfaces [48].
In 2008, an existing commercial transmission SLM was modified for MM reflection imaging of glaucoma [49]. The system was completely automated based on the use of Liquid Crystal Variable Retarders (LCVRs) and required 72 successive polarization-resolved images acquired in 4 s with a 20 µm lateral resolution. It is worth noting that such a sample does not exhibit all the possible polarimetric changes in the presence of a pathology, which decreases the number of acquisitions needed since only a few polarimetric parameters are studied.
More recently, an MM polarimeter has been implemented into a commercial non-linear transmission scanning microscope using simple motorized optical devices [50]. It allows the sequential acquisition of only four images coded by four distinct polarization states (horizontal, vertical, 45° and right circular), also based on the DoA method. The main application was the study of starch molecules under certain conditions (thermal and mechanical stress). A similar approach has been extended to small-animal embryonic development analysis in the reflection configuration [45].
The ability of such a technique paves the way for acquiring the full MM images of the sample simultaneously with non-linear confocal microscopy, thanks to its compactness and versatility. This could be of interest for tracking the source of the label-free contrasts in thick objects, where the polarimetric imaging modality gives only an averaged information on the scattering fingerprint of the sample through the PSF volume. A promising advance in this direction was proposed in 2020 for characterizing scattering media in an MM reflectance configuration in a confocal mode [51]. The instrument demonstrated its capabilities by measuring the increase of depolarization through a highly scattering medium such as a milk solution. The polarimetric changes of a layered-like structure have been validated through Monte Carlo simulations, showing that the depolarization and retardance caused by a birefringent fiber are influenced by its shape and location. This work has been completed by depth-resolved imaging of the cornea, combining Mueller matrix measurements in both transmission and reflection configurations with non-linear microscopy utilizing Two-Photon Excitation Fluorescence (TPEF) and Second Harmonic Generation (SHG) [52]. In this work, both the PSG and PSA are composed of a pair of LCVRs and a linear polarizer, and the voltages are synchronized with the GS. Then, the MM images are extracted pixel-by-pixel based on the measurements of four different PSA states for each of the six PSG states, so a set of 24 images is obtained. These promising results show the capability of extracting the polarimetric contrasts at a specific depth through a randomly organized sample. Indeed, they demonstrate that random changes in the corneal model, considered as a layered medium at the micrometer scale, can strongly affect its polarization properties. However, it is worth noting that these preliminary results deal with purely deterministic samples (here only birefringent), where the measured scattering results mainly from the accumulation of random orientations across the illumination volume.
Spectral Domain Encoding/Decoding
The previous techniques are based on acquiring a set of sequentially generated polarization-resolved images and recombining them in a post-processing step to recover the full MM image. However, this approach is not suitable for any dynamic characterization of the sample, limiting the use of such a technique for in vivo microscopy imaging. Indeed, the MM polarimeter should be able to acquire the full MM within the pixel dwell time, that is, in a few microseconds, which requires a very fast intensity modulation of the polarized light.
For wide-field imaging, the solution has been found by using multiple electro-optic devices in series, such as Photoelastic Modulators (PEMs) or PCs, triggered with the camera frame rate via a data acquisition (DAQ) board [53]. Briefly, the method is based on the analysis of the time variation of the intensity induced by the electro-optic effect. Thus, the detected signal is a channeled spectrum represented in the Fourier domain by complex modulation amplitudes at numerous frequencies. A 50 MHz FPGA counts the edges of each PEM modulation signal and locks when a unique phase between all four PEMs occurs within a short time on a nanosecond timescale. Then, the FPGA sends a trigger to the CCD to gate the signal in 0.5 µs, giving a full image acquisition in approximately 20 ms. Another approach consists in dealing directly with the Fourier transform of such a modulated signal, where the complex amplitudes are linear combinations of the MM elements [54]. In this method, the common approach is to choose different PEMs with separated working frequencies. This kind of optical device is composed of a passive crystal subjected to periodic mechanical stress, resulting in a time-varying birefringence due to the photoelastic effect [28,55]. In numerous earlier works, polarization-resolved setups used one PEM with a linear polarizer (LP) oriented orthogonally as a PSG for measuring the differential circularly polarized intensities [33,36,56]. Coupled with a lock-in detection at the reference frequency of the PEM, the passive setup was able to provide a few elements of the MM at rates on the order of tens of kHz. In order to speed up the acquisition rate and increase the number of accessible elements, a common solution is to add another PEM in the PSA, synchronized with the first one [57]. In recent works, this technique has been upgraded by dealing with 3 and 4 PEMs [58]. The main advantage brought by adding multiple PEMs is that all the elements of the MM can be retrieved without any mechanical moving part. Thus, this experimental approach has been proven to be of interest for studying ultrafast conformational changes in biopolymers [59,60]. For such a technique, the acquisition speed has reached 100 µs (10 kHz repetition rate), very close to the pixel dwell time of any GS.
Even if the speed for acquiring the full MM image is already high, a new method for fast polarization encoding has recently emerged, based on spectral coding of polarization (channeled spectropolarimeters) using passive elements such as birefringent plates [61]. The principle is the parallelization of polarization states in the spectral domain, so that the polarimetric response of a sample can be calculated from a single channeled spectrum I(ν), where ν is the optical frequency. Each channeled spectrum I(ν) is periodic and is composed of discrete frequencies from 0 to 12·f0 that are integer multiples of the fundamental one, f0. The latter depends on both the thickness e and the birefringence ∆n of the retarders through the relation f0 = ∆n·e/c, where c is the speed of light in vacuum. Such polarimeters have the potential to perform polarimetric scanning microscopy thanks to their speed, provided that the thickness ratio of the retarders is well chosen [62]. The first experimental snapshot Mueller polarimeter based on spectral coding of polarization used a broad-spectrum source (superluminescent diode), thick retarder plates, and a CCD-based spectrometer [63]. At this early stage, the device was developed in a non-imaging transmission configuration but was recently upgraded for SLM polarization-resolved SHG [64]. The acquisition rate of this approach is limited only by the spectrometer performance, which can be high (hundreds of MHz). Then, inspired by OCT technology, the technique has been upgraded by using a wavelength-swept laser source, high-order retarders, and a single-channel detector [65]. The device uses a wavelength-swept source (SS) laser instead of the broad-spectrum source and a photodiode instead of the spectrometer, which results in a much simpler optical setup, as shown in Figure 3a. For instance, the PSG and the PSA, composed of simple polarization features (linear polarizers and retarder slices), have been compacted into mechanical blocks with centimeter dimensions [31,66]. A single-point Mueller matrix is measured at the rate of the SS, which can be hundreds of kHz. This compactness allows for straightforward implementation on a commercial scanning microscope, and the approach has been implemented in transmission into a commercial SLM already used for performing TPEF and SHG imaging [31].
In this experiment, the fundamental thickness e was chosen to generate six modulations at the frequency f0. This rather large spectral analysis window could be reduced by using thicker retarders, but at the expense of the number of samples per channeled spectrum. The channeled intensity spectrum I(ν, t) is modulated by the polarimetric fingerprint of the sample and can be expressed as I(ν, t) = a0,t + Σk [ak,t·cos(2π·k·f0·ν) + bk,t·sin(2π·k·f0·ν)] (k = 1, ..., n), where ak,t and bk,t are linear combinations of the Mueller matrix elements m ij. In the configuration summarized in Figure 3b, the sum stops at n = 12, leading to 12 Fourier amplitudes with real and imaginary parts and thus, together with the mean value, 25 values used to retrieve the 16 Mueller elements simultaneously. The same philosophy has also been proposed in a multimodal approach for label-free cervical cancer study [67] by coupling MM wide-field imaging with optical scanning OCT. Despite the extraordinary advantage offered by the speed of this technique, an important assumption is that the sample is achromatic within the excitation wavelength range. Additionally, the modeling of the instrument assumes that all the linear retarders used do not present any diattenuation or depolarization. Thus, a precise validation of the polarimetric properties of the sample should be done in the working spectral range. Finally, thanks to the high speed rate and the spectral information that gives access to numerous measurable quantities at the same time, this method has been successfully proposed for real-time imaging through an endoscopic approach [66]. The main issue of polarization-resolved endoscopic techniques is to decouple, at any time, the signature of the sample from the polarization transformation of the optical fiber. We have summarized the typical performances reached by the methods presented in this section in Table 1. Additionally, we have reported the acquisition speed for the 16 MM images in Figure 4.
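A simplified numerical sketch of this spectral-domain decoding: a channeled spectrum sampled uniformly in optical frequency is demodulated by FFT to recover the amplitudes at the harmonics of f0 (the amplitudes here are arbitrary test values; in the real instrument they are linear combinations of the m ij fixed by the retarder configuration):

```python
import numpy as np

n_samples = 1024
k_max = 12
nu = np.arange(n_samples)                 # optical-frequency axis (arbitrary units)
f0 = 6.0 / n_samples                      # six periods of the fundamental across the spectrum

# Arbitrary test amplitudes standing in for the a_k, b_k of the text
rng = np.random.default_rng(42)
a = rng.normal(size=k_max + 1)            # a_0 ... a_12
b = rng.normal(size=k_max + 1); b[0] = 0  # b_0 unused

spectrum = a[0] + sum(a[k] * np.cos(2 * np.pi * k * f0 * nu) +
                      b[k] * np.sin(2 * np.pi * k * f0 * nu) for k in range(1, k_max + 1))

fft = np.fft.rfft(spectrum) / n_samples
bins = (np.arange(1, k_max + 1) * f0 * n_samples).round().astype(int)  # harmonic bins
a_rec = 2 * fft[bins].real
b_rec = -2 * fft[bins].imag
print("max error on a_k:", np.max(np.abs(a_rec - a[1:])))
print("max error on b_k:", np.max(np.abs(b_rec - b[1:])))
```

In a real instrument, the recovered amplitudes would then be mapped back to the 16 m ij through the known (calibrated) linear system imposed by the retarder thicknesses and orientations.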
Mueller Matrix Applications
MM microscopy has proven to be a promising tool for a wide range of applications and is mostly used for biomedical diagnosis. This is explained by the need to image, at high speed and in a non-invasive way without any labelling process, any tissue modification induced by the presence of a pathology. Additionally, this research field requires investigating in vivo pathological signatures at a distance, from outside the body, and in a fast way. MM imaging has also been of interest for characterizing micro-patterned structures and layered metasurfaces, which requires a fast and automatized process. From biomedical diagnosis to industrial applications, the MM imaging experimental architecture is almost the same. The main difference comes from the interpretation of the polarimetric response of the samples. Indeed, it is well established that biological media exhibit stronger depolarization than solid-state structures, due to their random organization over the illumination volume [11]. This makes it an arduous task to track the source of the contrast at a specific depth, which currently limits the direct use of this technique in clinical applications such as histopathology.
Ophthalmology
As presented in the last section, the main application of MM microscopy using SLM has been ophthalmology. The main reason comes from the desire to find an alternative approach for diagnosing any pathology at a rate faster than eye motion, similar to earlier methods proposed for OCT imaging. Moreover, coupled with additional signal processing, MM offers a reduction of the exposure time, which limits damage to the eye. Besides, it has been shown that in the presence of pathology the disorganization of thick tissues leads to strong modifications of birefringence and depolarization, and MM has the capability of quantifying these parameters. This explains why most of the incomplete MM microscopes tend to track only these two parameters, reducing the amount of information but accelerating the process of encoding/decoding the polarization states [68,69].
First, because the ocular media and the retina in the human eye exhibit rather complicated polarization properties, every technique based on collecting the light scattered back from the retina in double pass is affected by polarization. Second, another challenge for such a system is the fast motion and changing optical properties of the living eye, which require high-speed encoding/decoding of the polarization states. Third, to avoid any damage, the wavelength should be chosen wisely so as to reduce the radiation exposure and limit the absorption that drastically reduces the polarimetric SNR. Some images obtained in the SLM configuration for MM are presented in Figure 5. One of the first applications of MM, proposed in 2000, studied the influence of the eye pupil size on the birefringence [70]. To limit the exposure time while acquiring a high number of images, the number of pixels is kept quite small. Furthermore, for such an application the numerical aperture cannot be high, since the medium is investigated at a distance, which drastically limits the optical resolution. The quantification of the polarimetric properties of the different parts of the eye (optic nerve, macula, cornea and retina) is shown in Figure 5b. As can be noted in this application, the contrast distribution for some parameters is completely blurred by the coupled effect of back reflection and highly scattering regions in the eye. This is challenging where tracking slight changes at early pathological stages in a low-SNR area is mandatory. Recent work in 2020, shown in Figure 5c, has successfully overcome this issue by acquiring the elementary MMs down to 80 µm depth using a confocal approach [52]. This study was performed on extracted rat cornea in both transmission and reflection configurations. The main purpose was to investigate, layer by layer, the propagation of the polarimetric changes through a highly scattering environment created by the random distribution of collagen and cells. Additionally, with the purpose of tracking the source of the polarimetric contrasts, the MM imaging acquisition was combined with non-linear imaging modalities and with Monte Carlo simulations. As expected, the study reveals that the depolarization and retardance increase with depth and that a high polarization heterogeneity can be noted in the xy and z directions, since the data collected correspond to average values through the confocal volume. However, this heterogeneity of the polarization properties is not only explained by the random distribution of the localized biological components, but also by the influence of measured metrics, such as the glucose level in the blood. This results in the extremely challenging issue of decoupling the optical rotation induced by glucose from the purely deterministic polarimetric response of the layered biological arrangement.
Biomedical Diagnosis and Tissue Organization
Recently, many works have focused on developing MM polarimeters to investigate the modification of tissues induced by pathology at the cellular level [71][72][73]. Indeed, MM offers the capability of tracking changes at a distance in a non-invasive and label-free way, since no high power is required for detecting the polarized light. Coupled with SLM, better contrast emerges from this technique, resolving the optical properties of sub-microscopic objects and giving the capability of quantifying pathologies at early stages.
Cancers induce drastic modifications in cell size and collagen organization. Currently, the scoring of pathologies is performed by specialists, and the accuracy of the analysis is completely operator dependent. This is why recently emerging methods propose to automate the measurements and the interpretation of the results through statistical analysis and machine learning. The approach is almost always the same and consists in measuring the physical changes associated with the cancer, which exhibit strong polarimetric effects compared to the healthy area. Thus, MM provides an interesting method for imaging confined localized structures and brings quantitative methods for staging the pathologies. This is the purpose of the applications presented in Figure 6.
More particularly, the work in Figure 6i proposed a statistical approach for scoring liver fibrosis biopsies by analyzing the retardance and depolarization parameters in comparison with the SHG response. This pathology induces an accumulation of type I and III fibrillar collagen in the extracellular matrix of the hepatic tissue. The non-linear signal arises from the anisotropic arrangement of the collagen structure and is correlated with the retardance images, as observed. Based on this parameter alone, it is impossible to distinguish the different arrangements of the collagen fibers, i.e., the random accumulation in the fibrosis from the organized collagen composing blood vessels. Based on a purely visual interpretation, this can lead to a bad scoring of the fibrosis. However, since it is well established that a random distribution of deterministic objects (here purely birefringent ones) leads to depolarization, MM gives the opportunity of discriminating the two kinds of organization through this parameter. Furthermore, Figure 6ii shows that this discrimination can be done on the image extracted from the retardance orientation (i.e., the birefringent fast axis), where it is possible to follow the rotation of the fibers in space with high imaging contrast. This can be of interest when the experiment requires tracking a specific polarimetric change under certain conditions, as has been done for fixed zebrafish, reported in Figure 6iii [45]. This work proposed to acquire a detailed fingerprint at different developmental stages and to interpret all the MM parameters, which is key to understanding the morphological changes of the animal during development. It has been shown that each parameter could be used as a specific marker for local structures, since the physical effects are related to a dedicated structure and organization. For zebrafish, it has been proven that the imaging contrast from Pd comes from the thickness heterogeneity, while the R values arise from the tissues and muscles.
Material Science
Besides these main applications for biomedical diagnosis, the fast and inexpensive characterization of the full optical properties of nanostructures offered by SLM is suitable for real-world industrial manufacturing applications. The main challenge of this application is the capability of characterizing the sample at the nanoscale. In polarimetry, most ellipsometers are dedicated to characterizing thin films in microelectronics [75][76][77]. However, this approach is based on the analysis of the diffraction pattern in wide-field imaging and on comparisons with models using elementary decompositions of the MM [78,79].
The characterization of nanostructures has been widely studied by electron microscopy or by non-linear optical approaches. These approaches provide valuable information related to the sample conformation at the nanoscale, but they rely on the absorption of high light doses and are invasive in terms of sample preparation. MM has proven to overcome these issues, and its easy experimental implementation gives the opportunity of tracking polarimetric changes live. One of the few examples in the literature related to in vivo live imaging using SLM has been proposed in Reference [50], as shown in Figure 7a. This work consisted in studying the polarimetric modifications of starch granules under certain conditions (thermal, hydration level and temporal). Indeed, starch is of interest due to its cheap and easy availability, and its concentric shell structure forms a natural photonic crystal that is easy to model. In past years, studying starch structures has been of interest since starch is a major component of our daily diet and its conformation plays a significant role in food quality. In this work, the denaturation and the phase transition after a thermal change were studied through the linear (DOLP) and circular (DOCP) Degree of Polarization (DOP), indicating a greater degree of ultrastructural amylopectin order. It has been proven that degradation at the intramolecular level occurs in hydrated starch, whereas only surface disruption occurs in dry starch, for which no significant change of the polarization properties is observed at higher temperatures. Thanks to the wide range of parameters provided by MM, other physical quantities can be used for interpreting crystals at the microscopic level, as shown in Figure 7b. Conventional polarization microscopy studies the polycrystalline structure of rocks using simple crossed polarizers. It reveals the presence of different minerals by rotating the sample, resulting in intensity changes which come from the differences in birefringence and thickness of the crystals. The advantage of moving to an MM configuration is the capability of quantifying the absolute value of D as well as the orientation of the mineral without moving parts.
Conclusions
Mueller matrix microscopy is an interesting label-free approach for understanding the organization of any medium, and it has demonstrated its potential in many fields. This technique has multiple advantages over other traditional optical imaging modalities, being a versatile and flexible tool with low complexity and low cost. More particularly, we have presented some methods using miniaturized and passive polarization encoding/decoding modules that are easy to place inside any microscope body. Thus, the use of low-power laser sources and cheap photodetectors proves that this technique could be an affordable tool for any research group. In this review, we presented the earliest and the most advanced numerical models that have proven their agreement with experimental data. We showed that the architectures are inspired by Mueller-Stokes setups that are simple to design and build. However, at such scales, it can be arduous to properly identify the source of the contrast, since the polarimetric changes occur in every elementary layer of the illumination volume. Additionally, switching from the single-point configuration to an imaging system is not straightforward, and different sources of optical artifacts can pollute the measurements. These come from the polarization transformations through the whole optical setup, which alter the polarization quality and hide the true sample fingerprint. It becomes even more challenging when all these sources of error have to be evaluated at a rate fast enough to be compatible with the scanning-beam approach. Thanks to the recent advances and demonstrations offered by upgrading the SLM, this approach has shown its capability to be easily implemented in different optical architectures, providing simultaneously diverse sources of contrast. Furthermore, it is easy to imagine 3D real-time measurements in the polarization-based imaging research field.
Funding: This research received no external funding.
Abbreviations
The following abbreviations are used in this manuscript: | 8,348.2 | 2021-02-11T00:00:00.000 | [
"Physics"
] |
The circular RNA hsa_circ_000780 as a potential molecular diagnostic target for gastric cancer
Background The present study aimed to identify a specific circular RNA (circRNA) for the early diagnosis of gastric cancer (GC). Methods A total of 82 patients with GC, 30 with chronic nonatrophic gastritis and 30 with chronic atrophic gastritis were included in this study. Four of the 82 GC patients were selected for screening. Total RNA from malignant and adjacent tissue samples was extracted, and circRNAs in these four patients were screened. According to the screening results, the eight most upregulated and downregulated circRNAs with a statistically significant association with GC were identified by real-time fluorescent quantitative polymerase chain reaction (PCR). Then, the most dysregulated circRNA was selected for further sensitivity and specificity assessments. CircRNA expression was examined by quantitative reverse transcriptase PCR in 78 GC samples (21 early and 57 advanced GC) and adjacent tissue samples, as well as in gastric fluid samples from 30 patients with chronic nonatrophic gastritis, 30 with chronic atrophic gastritis, and 78 with GC. Results A total of 445 circRNAs, including 69 upregulated and 376 downregulated circRNAs, showed significantly altered expression in GC tissue samples. Hsa_circ_000780 was significantly downregulated in 80.77% of GC tissue samples, with levels in GC tissue samples correlating with tumor size, tumor stage, T stage, venous invasion, carcinoembryonic antigen amounts, and carbohydrate antigen 19-9 levels. Strikingly, this circRNA was also found in the gastric fluid of patients with early and advanced GC. Conclusions The present study uncovered a new circRNA expression profile in human GC, with hsa_circ_000780 significantly downregulated in GC tissue and gastric fluid specimens. These findings indicate that hsa_circ_000780 should be considered a novel biomarker for early GC screening.
awareness of cancer prevention and low compliance of gastroscopy screening; in addition, the number of digestive endoscopists cannot meet the needs of the general population for gastroscopy screening [3]. In recent years, robust advances in human genome sequencing, epigenetics, circular RNA (circRNA) assessment tools, and other molecular biological techniques have enabled the search for molecular diagnostic targets for GC. Gene molecular targets are widely distributed in the human body (blood, urine, feces, and various body fluids); additionally, the samples are easily obtainable, and the detection technology is mature. Among the various methods for studying gene mutations, circRNAs are a promising target for the molecular diagnosis of GC [4][5][6][7][8][9].
CircRNAs are closed circular genetic structures with neither a 3′-end poly-A tail nor a 5′-end cap structure [10]. They range from hundreds to thousands of base pairs in length and are not degraded by RNA exonucleases; circRNAs are stable in nature and exist widely across organisms, with evolutionary conservation [11]. Studies have reported that while circRNAs are widely considered miRNA sponges [12], not many of them harbor more predicted miRNA-binding sites than expected [13,14]. Recent studies have shown that abnormal circRNA expression in various tumor cells affects tumor occurrence, proliferation, and invasion [15][16][17]. Due to the stability of circular RNAs, they have been increasingly investigated as potential tumor markers in recent years [18][19][20], especially in GC [21]. Scientists have observed that circ_002059, circ_0000745, circ_00000181, circ_0047905, circ_0014717, circ_0001017, and circ_0061276 are significantly downregulated in patients with GC, with good sensitivity and specificity in the diagnosis of GC [21][22][23][24][25][26]. However, no report has assessed circ_000780. Additionally, the role of microRNAs has been highlighted in the development and maintenance of drug resistance in GC, which is the most critical cause of GC treatment failure. CircRNAs act as miRNA sponges and affect gene regulation and expression [27,28]. Although the global circRNA expression profile in human GC continues to be investigated, no circRNA with a clinical value in GC has been reported. Moreover, the role of circRNAs in the early diagnosis of GC is not fully understood. Therefore, the present study aimed to identify a specific circRNA for the early diagnosis of GC.
Sample collection
A total of 82 patients with GC admitted to the Cancer Hospital Affiliated to Hainan Medical College and examined in the endoscopy center from January 2017 to December 2018 were recruited in this study after institutional ethics clearance. Inclusion criteria were: (1) < 80 years of age; (2) complete clinical data available; (3) scheduled selective GC surgery; (4) no previous chemotherapy besides adjuvant treatment before operation; (5) no active gastrointestinal bleeding or obstruction. Exclusion criteria were: (1) uncontrolled diabetes or hypertension, coronary heart disease, stroke, cardiovascular, and/or cerebrovascular diseases; (2) severe underlying diseases such as pulmonary, liver, and/or kidney dysfunctions; (3) requiring resection of other organs. Of the 82 patients, four were selected for the circRNA chip screening study. They included two men (one with T3N1M0, moderately differentiated adenocarcinoma; one with T3N2M0, poorly differentiated adenocarcinoma) and two women (one with T3N1M0, moderately differentiated adenocarcinoma; one with T3N2M0, poorly differentiated adenocarcinoma). The average age, weight, and height of the four patients were 56.7 years, 58.3 kg, and 168 cm, respectively. The remaining 78 patients with GC (Table 1) were selected for endoscopic biopsy and gastric fluid sample collection. These patients were included in the validation study of differential circRNA expression. The diagnostic criteria for early GC (EGC) and advanced GC (AGC) were based on the National Comprehensive Cancer Network clinical practice guidelines in oncology (version 3.2016). Additionally, 30 patients with chronic nonatrophic gastritis (CNAG) and 30 with chronic atrophic gastritis (CAG) were randomly selected as the control group. The diagnostic criteria for CNAG and CAG were based on the 2012 consensus opinion on chronic gastritis of the Gastroenterology Branch of the Chinese Medical Association. GC specimens were obtained by cutting 0.5 cm³ of the whole layer of the GC tissue, whereas paracancerous tissue specimens were obtained by cutting 0.5 cm³ of the mucosa at least 5 cm away from the tumor body. The samples were separated from the body, quickly sliced to the required size, placed into storage tubes and stored in liquid nitrogen. Endoscopic tissue and gastric juice samples were extracted from 78 patients with GC (21 patients with EGC and 57 with AGC), 30 with CNAG, and 30 with CAG. Table 1 illustrates the baseline characteristics of the patient and control groups. All specimens were collected and pretreated according to a previously described protocol and preserved at − 80 °C until RNA extraction [29].
Total RNA extraction and reverse transcription
Total RNA from tissue and gastric fluid samples was extracted using TRIzol reagent (Invitrogen, Life Technologies Inc., Germany). RNA concentration was measured by reading the absorbance at 260 nm (OD260) on a NanoDrop ND-1000 instrument (Thermo Fisher Scientific, DE, USA). RNA integrity was verified by denaturing agarose gel electrophoresis. Finally, total RNA was reverse transcribed into cDNA with the GoScript Reverse Transcription (RT) system (Promega, WI, USA) following the manufacturer's protocol.
Microarray hybridization of circRNAs
GC tissue samples and matched adjacent noncancerous tissue specimens were selected for circRNA expression profiling using the Human circRNA Array v2 (Arraystar, MD, USA). Total RNA was digested with RNase R (20 U/μL, Epicentre, Inc., Madison, WI, USA) to remove linear RNAs and enrich circRNAs. The enriched circRNAs were amplified and transcribed into fluorescent cRNA by the random priming method (Super RNA Labelling Kit; Arraystar). Labeled cRNAs were hybridized onto the Human circRNA Array v2 (8 × 15 K, Arraystar). Slides were incubated for 17 h at 65 °C in a hybridization oven (Agilent, CA, USA). After washing the slides, the arrays were scanned on an Agilent Scanner (G2505C). The scanned images were then imported into the Agilent Feature Extraction software for grid alignment and data extraction. Quantile normalization and subsequent data processing were performed with the R software package. CircRNAs differentially expressed between GC and paired adjacent noncancerous tissue samples were identified through volcano plot filtering, with statistical significance defined as fold change (FC) ≥ 2.0 and P ≤ 0.05. Hierarchical clustering was performed to depict the distinguishable expression patterns of circRNAs among samples. The circRNA/microRNA interactions were predicted using TargetScan [30] and miRanda [31].
Quantitative reverse transcription-polymerase chain reaction
The eight most upregulated and downregulated circRNAs exhibiting the greatest differences in expression between groups were selected for quantitative reverse transcription-polymerase chain reaction (qRT-PCR) verification in the four GC specimens and their adjacent tissues. qRT-PCR was performed with GoTaq qPCR Master Mix (Promega) on an Mx3005P Real-Time PCR System (Stratagene, CA, USA) in accordance with the manufacturer's protocols. Divergent primers for the top eight upregulated and downregulated circRNAs and convergent primers for β-actin (H) were designed and synthesized by Aksomics (Shanghai) Biotechnology Co. Ltd. The divergent primers could only amplify circRNAs and thus differentiate them from contaminating linear isoforms. Table 2 lists the circRNA primer sequences used for this procedure.
RT-PCR was performed as follows: 40 cycles of 95 °C for 10 s and 60 °C for 60 s for amplification; annealing at 95 °C for 10 s, 60 °C for 60 s, and 95 °C for 15 s with slow heating from 60 to 99 °C (at 0.05 °C/s).
The target and housekeeping genes in each sample were analyzed by RT-PCR. According to the gradient dilution DNA standard curve, the expression levels of the target and housekeeping genes in each sample were directly generated on an Applied Biosystems ViiA 7 Real-Time PCR System (Thermo Fisher Scientific, USA). The target gene concentration in each sample divided by that of the housekeeping gene was considered the relative expression level of the gene.
Table 2. Primer sequences for the assessed circRNAs.
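The relative quantification described above can be sketched as follows in Python (the Ct values and dilution series are hypothetical; in practice the instrument software performs this conversion internally):

```python
import numpy as np

def conc_from_standard_curve(ct_values, standard_ct, standard_log10_conc):
    """Convert Ct values to concentrations with a gradient-dilution standard
    curve, i.e., a linear fit of Ct against log10(concentration)."""
    slope, intercept = np.polyfit(standard_log10_conc, standard_ct, 1)
    return 10.0 ** ((np.asarray(ct_values, float) - intercept) / slope)

# Hypothetical 10-fold dilution series and sample Ct values
std_log10_conc = np.array([-1.0, -2.0, -3.0, -4.0, -5.0])
std_ct_target = np.array([18.1, 21.5, 24.9, 28.2, 31.6])
std_ct_actin = np.array([15.0, 18.4, 21.8, 25.1, 28.5])

target_conc = conc_from_standard_curve([26.3], std_ct_target, std_log10_conc)
actin_conc = conc_from_standard_curve([19.7], std_ct_actin, std_log10_conc)

# Relative expression = target concentration / housekeeping (beta-actin) concentration
relative_expression = target_conc / actin_conc
```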
Statistical analysis
Statistical analyses were performed with the SPSS 22.0 software (SPSS, IL, USA). When comparing the GC and paired noncancerous tissue groups for profile differences, the "FC" (ratio of group averages) between the groups for each circRNA was computed. The statistical significance of the difference was estimated by the t test. CircRNAs with FCs ≥ 2.0 were considered to be significantly differentially expressed. The analysis outputs were filtered, and differentially expressed circRNAs were ranked according to characteristics such as FC value, P value, and chromosome location. Differences in hsa_circ_000780 levels between the GC and paired adjacent noncancerous tissues were assessed by the t test for paired data; multiple groups (CNAG, CAG, EGC, and AGC) were assessed by one-way analysis of variance with post-hoc LSD test.
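As a minimal sketch of the filtering criterion described above (FC computed as the ratio of group averages, combined with a paired t-test), the following Python snippet shows one way to flag differentially expressed circRNAs; the array names are hypothetical, and the actual analysis was performed with the Arraystar/R pipeline and SPSS:

```python
import numpy as np
from scipy import stats

def differentially_expressed(gc, adjacent, fc_threshold=2.0, p_threshold=0.05):
    """Flag circRNAs whose fold change (ratio of group averages) is at least
    fc_threshold up or down and whose paired t-test P value is <= p_threshold.

    gc, adjacent: arrays of shape (n_circRNAs, n_patients) holding normalized
    expression values for tumor and matched adjacent tissue.
    """
    fc = gc.mean(axis=1) / adjacent.mean(axis=1)
    _, p = stats.ttest_rel(gc, adjacent, axis=1)   # paired t-test per circRNA
    up = (fc >= fc_threshold) & (p <= p_threshold)
    down = (fc <= 1.0 / fc_threshold) & (p <= p_threshold)
    return fc, p, up, down
```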
Correlations between hsa_circ_000780 levels and clinicopathological factors were further analyzed by the Analyze-Correlate-Bivariate menu of SPSS 22.0. A P value < 0.05 was considered statistically significant.
Profiles of circRNAs in GC
A total of 13,617 circRNAs were detected in the assessed GC and paired noncancerous samples by circRNA microarray analysis. Among them, 445 circRNAs were aberrantly expressed with statistical significance (P < 0.05 and FC > 2.0) between the GC and paired noncancerous tissues. FC filtering (Fig. 1a) or volcano plot filtering (Fig. 1b) was used to identify circRNAs whose differential expression was statistically significant. Hierarchical clustering was performed to depict the differential circRNA expression pattern among samples (Fig. 1c). Of the 445 circRNAs, 69 (15.51%) were significantly upregulated and 376 (84.49%) were significantly downregulated. The eight most upregulated and downregulated circRNAs, respectively, which were screened out and then validated in 4 pairs of gastric cancer and adjacent tissue samples are listed in Table 3. The expression of hsa_circ_000780 was the most altered in cancer tissue samples versus adjacent tissue specimens (P = 0.001240).
Expression of hsa_circ_000780 in GC
The sample size in this study was expanded to 78 patients with GC and their matched adjacent noncancerous tissues to verify the accuracy of the above microarray and qRT-PCR data. The expression levels of hsa_circ_000780 in these tissues were measured by qRT-PCR. The relative expression levels of hsa_circ_000780 in GC and matched adjacent noncancerous tissue samples were 6.87 × 10⁻⁴ ± 3.12 × 10⁻⁴ and 11.67 × 10⁻⁴ ± 2.29 × 10⁻⁴, respectively (P < 0.001). The distribution of hsa_circ_000780 is shown in Fig. 2. Taking the mean value of hsa_circ_000780 expression in paracancerous tissues as the critical value for GC diagnosis, hsa_circ_000780 expression was considered to be low in 80.77% (63/78) of GC specimens, versus only 7.69% (6/78) in the paracancerous group.
Fig. 1. The circRNA expression profiles in GC and paired adjacent noncancerous tissues. (a) Scatter plots comparing circRNA expression levels between GC and paired adjacent noncancerous tissues. (b) Volcano plots visualizing the differential expression of circRNAs; the red and green points represent the differentially expressed circRNAs with statistical significance. (c) Hierarchical cluster analysis of circRNAs expressed in GC (red bars) and paired adjacent noncancerous (blue bars) samples.
Amounts of hsa_circ_000780 in gastric juice specimens
Next, hsa_circ_000780 levels in gastric fluid samples from 30 patients with CNAG, 30 with CAG, 21 with EGC, and 57 with AGC were assessed by qRT-PCR. The values for the CNAG, CAG, EGC, and AGC groups were (15.63 ± 2.44) × 10⁻⁴, (12.59 ± 2.13) × 10⁻⁴, (4.28 ± 0.98) × 10⁻⁴, and (4.39 ± 1.15) × 10⁻⁴, respectively (Fig. 4). The expression levels of hsa_circ_000780 in the CNAG and CAG groups differed significantly from those in the EGC and AGC groups (P < 0.001), the levels being markedly decreased in the gastric fluid of the GC groups. No significant difference in hsa_circ_000780 levels was found between the AGC and EGC groups (P > 0.05) or between the CNAG and CAG groups (P > 0.05). Taking the mean value of hsa_circ_000780 expression in the CNAG, CAG and GC gastric juice specimens as the critical level for GC diagnosis, hsa_circ_000780 expression was considered to be low in 100% (78/78) of GC juice specimens, versus 0% (0/60) in the gastritis group. The PPV and NPV were both 100%.
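As a sketch of how such diagnostic figures can be derived from a single expression cutoff (samples below the cutoff are called GC-positive, following the criterion described above), the following Python snippet uses simulated values, not the study data:

```python
import numpy as np

def diagnostic_performance(gc_levels, control_levels, cutoff):
    """Sensitivity, specificity, PPV and NPV when a sample is called
    'GC-positive' whenever its hsa_circ_000780 level is below the cutoff."""
    gc = np.asarray(gc_levels, float)
    ctrl = np.asarray(control_levels, float)
    tp = np.sum(gc < cutoff)           # GC samples correctly called positive
    fn = gc.size - tp
    tn = np.sum(ctrl >= cutoff)        # gastritis samples correctly called negative
    fp = ctrl.size - tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return sensitivity, specificity, ppv, npv

# Simulated gastric-juice levels; the cutoff is the mean over all pooled samples,
# as in the criterion described above.
gastritis = np.random.normal(14e-4, 2e-4, 60)
gc_juice = np.random.normal(4.3e-4, 1e-4, 78)
cutoff = np.mean(np.concatenate([gastritis, gc_juice]))
print(diagnostic_performance(gc_juice, gastritis, cutoff))
```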
Discussion
Several studies have demonstrated the involvement of circRNAs in the proliferation, apoptosis, invasion, and metastasis of human tumors [21,26,32]. Huang et al. [33] reported 16 upregulated and 84 downregulated circRNAs in GC. Of these, only hsa_circ_0000026 was downregulated by a fold change of 2.8 in GC as detected by qRT-PCR, and this difference was significant. Dang et al. [34] examined the expression profiles of five pairs of GC and matched non-GC tissues, and found 713 differentially expressed circRNAs in GC, including 191 upregulated and 522 downregulated. Shen et al. [32] performed circRNA microarray analysis and stated that 347 upregulated and 603 downregulated circRNAs were observed in GC compared with normal gastric tissue. Of 20 randomly selected circRNAs, 10 were confirmed to have differential expression. The circRNA microarray results in the present study revealed a new circRNA expression profile in human GC, and the differentially expressed circRNAs detected above showed a significant difference compared with those reported in other studies [26,34]. This study showed that 445 circRNAs were significantly dysregulated in GC. Of these, 15.51% were upregulated and 84.49% were downregulated. Most of these circRNAs were not retrieved in the previous GC studies, which suggests the genetic heterogeneity of GC. In addition, most differentially expressed circRNAs in this study were found on human chr1, chr3, chr4, chr6, and chr11. Shao et al. [26] observed that the differentially expressed circRNAs were mainly transcribed from chr1 and chr3, suggesting that despite the great heterogeneity in the genetic mechanism of GC, there are overlaps in the expression of circRNAs. This finding may provide a direction for further investigation of GC pathogenesis and diagnostic targets. The circRNA expression profiles in GC further confirmed that circRNAs are closely associated with GC. However, only a few circRNAs have been shown to regulate carcinogenesis in GC [35][36][37][38]. In the present study, hsa_circ_000780 was selected as a target circRNA to validate the accuracy of the microarray results. The results revealed that hsa_circ_000780 was significantly downregulated in 80.77% of GC tissue samples. Bioinformatics analysis predicted that hsa_circ_000780 could interact with hsa_miRNA_522-3p, hsa_miRNA_381-3p, hsa_miRNA_300, and hsa_miRNA_15a-3p. MicroRNAs (miRNAs) are a class of small noncoding RNAs of 20-22 nucleotides in length, which play an important role in regulating gene expression by directly binding to the 3'-untranslated regions (3'-UTRs) of target mRNAs [39]. It has been demonstrated in a number of studies that miRNAs are among the pivotal factors in many biological processes, including cell differentiation, cell proliferation, apoptosis, and energy metabolism [40]. Moreover, recent studies have revealed that miRNAs play a dual role in oncology, either by enhancing carcinogenesis through inhibiting tumor suppressors or by acting as tumor suppressors to downregulate oncogenes. MiR-522-3p upregulation negatively regulates BLM, with upregulation of c-myc, CDK2 and cyclin E, thereby promoting the proliferation of human CRC cells [41]. MiR-522-3p also acts as an oncogene in glioblastoma by targeting SFRP2 through the Wnt/β-catenin pathway [42]. The miR-381-3p/RAB2A axis induces cell proliferation and inhibits cell apoptosis in bladder cancer [43]. MiR-381-3p targets and suppresses the NASP gene, reducing viability, migration, invasion and EMT in HNSCC cells [44].
MiR-300/FA2H affects gastric cancer cell proliferation and apoptosis. The OIP5-AS1/miR-300/YY1 feedback loop facilitates cell growth in HCC by activating the WNT pathway [45,46]. MiR-15a-3p may contribute to adenoma-to-carcinoma progression. MiR-15a-3p and miR-16-1-3p negatively regulate Twist1 to repress gastric cancer cell invasion and metastasis [47,48]. The literature suggests that circular RNAs can regulate the occurrence, growth and metastasis of tumors through a variety of signaling pathways. Additionally, hsa_circ_000780 expression levels in GC were significantly associated with tumor size, stage, degree of invasion, and CEA and CA19-9 expression levels, suggesting that hsa_circ_000780 has the potential to predict clinical prognosis. The gastric juice is a good sample for use in the diagnosis of gastric diseases. In the present study, we further evaluated the expression of hsa_circ_000780 in gastric juice samples from patients with CNAG, CAG, EGC, and AGC. Although hsa_circ_000780 levels in the gastric juice of GC patients were obviously decreased, there was no significant difference between the EGC and AGC groups. This finding indicates that hsa_circ_000780 could be detected in the gastric juice, and has the potential for use as a biomarker for early GC screening.
Conclusions
In conclusion, the present study found a new expression profile of circRNAs in GC. Among the circRNAs detected, hsa_circ_000780 was significantly downregulated in GC, suggesting that it might be involved in the occurrence of GC. The level of this circRNA was related to some clinicopathological characteristics of GC patients. However, its role and mechanism in the occurrence of GC must be further investigated. Interestingly, hsa_circ_000780 could be detected in the gastric juice in early GC, with a significant difference compared with the control group. Therefore, this circRNA has the potential to be used as a novel biomarker for the screening of early GC. However, the sample size of the current study was not large enough, and the research conclusions still need to be further verified. | 4,298 | 2021-03-03T00:00:00.000 | [
"Medicine",
"Biology"
] |
Multi-Step Solar Irradiance Forecasting and Domain Adaptation of Deep Neural Networks
The problem of forecasting hourly solar irradiance over a multi-step horizon is dealt with by using three kinds of predictor structures. Two approaches are introduced: Multi-Model (MM) and Multi-Output (MO). Model parameters are identified for two kinds of neural networks, namely the traditional feed-forward (FF) and a class of recurrent networks, those with long short-term memory (LSTM) hidden neurons, which is relatively new for solar radiation forecasting. The performances of the considered approaches are rigorously assessed by appropriate indices and compared with standard benchmarks: the clear sky irradiance and two persistent predictors. Experimental results on a relatively long time series of global solar irradiance show that all the networks architectures perform in a similar way, guaranteeing a slower decrease of forecasting ability on horizons up to several hours, in comparison to the benchmark predictors. The domain adaptation of the neural predictors is investigated evaluating their accuracy on other irradiance time series, with different geographical conditions. The performances of FF and LSTM models are still good and similar between them, suggesting the possibility of adopting a unique predictor at the regional level. Some conceptual and computational differences between the network architectures are also discussed.
Introduction
As is well known, the key challenge with integrating renewable energies, such as solar power, into the electric grid is that their generation fluctuates. A reliable prediction of the power that can be produced over a horizon of a few hours is instrumental in helping grid managers balance electricity production and consumption [1][2][3]. Other useful applications could benefit from solar radiation forecasting, such as the management of charging stations [4,5]. Indeed, such micro-grids are developing in several urban areas to provide energy for electric vehicles by using Photo Voltaic (PV) systems [6]. To optimize all these applications, a classical single-step-ahead forecast is normally insufficient and a prediction over multiple steps is necessary, even if with decreasing precision.
Reviews concerning machine learning approaches to forecasting solar radiation were provided, for instance, in [7][8][9][10], but the recent scientific literature has seen a continuous growth of papers presenting different approaches to the problem of forecasting solar energy. According to Google Scholar, the papers dealing with solar irradiance forecasting were about 5000 in 2009 and became more than 15,000 in 2019, with a yearly increase of more than 15% in the last period. The growth of papers utilizing neural networks as a forecasting tool has been even more rapid: they grew at a pace of almost 30% a year in the last decade and constituted about half of those cataloged in 2019.
The Recursive (Rec) Approach
The Rec approach repeatedly uses the same one-step-ahead model, taking as input for each further step the forecast obtained at the step before [43]. Only one model, f_Rec, is needed in this approach, whatever the length of the forecasting horizon, and this explains the wide diffusion of this approach.
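A minimal sketch of the recursive scheme is given below, with a generic one-step-ahead model passed in as a Python callable (names are illustrative):

```python
import numpy as np

def recursive_forecast(f_rec, history, horizon, d):
    """Iterate a one-step-ahead model to forecast `horizon` steps ahead.

    f_rec: callable mapping the last d observations [I(t-1), ..., I(t-d)]
           (most recent first) to the one-step forecast Î(t).
    history: observed series up to time t-1, most recent value last.
    """
    window = list(history[-d:])
    forecasts = []
    for _ in range(horizon):
        y_hat = float(f_rec(np.array(window[-d:])[::-1]))  # most recent value first
        forecasts.append(y_hat)
        window.append(y_hat)   # each forecast feeds the next prediction step
    return np.array(forecasts)

# Example with a naive persistence "model": Î(t) = I(t-1)
print(recursive_forecast(lambda x: x[0], history=np.arange(48.0), horizon=3, d=24))
```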
The Multi-Model (MM) Approach
Although the Rec model structure is the most natural one also for h-step-ahead prediction, other alternatives are possible, and in this work we have explored two further structures, referred to as multi-model (MM) and multi-output (MO), which are illustrated below. The peculiarity of the MM model is that, unlike f_Rec, its parameters are optimized for each prediction horizon, following the framework expressed by Equations (3)–(5). The MM thus requires h different models to cover the prediction horizon.
The Multi-Output (MO) Approach
The MO model [44] expressed by Equation (6) can be considered a trade-off between the Rec and the MM, since it proposes to develop a single model with a vector output composed of the h values predicted for each time step: [Î(t + h − 1), . . . , Î(t + 1), Î(t)] = f_MO(I(t − 1), I(t − 2), . . . , I(t − d)) (6). The number of parameters of the MO is a bit higher than that of the Rec (a richer output layer has to be trained), but much lower than that of the MM. However, with respect to the Rec it could appear, at least in principle, more accurate, since it allows its performance to be optimized over a higher-dimensional space. The MO, in comparison to the Rec, offers the user a synoptic representation of the various prediction scenarios and can therefore be more attractive from the application point of view. Furthermore, in case the Rec model has some exogenous inputs, one must limit the forecasting horizon h to the maximum delay between the external input and the output. Nothing changes, on the contrary, for the other two approaches.
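The following Keras sketch contrasts the two structures (layer sizes and hyperparameters are purely illustrative, not the tuned configurations of this work):

```python
from tensorflow import keras

d, h = 24, 12  # number of input delays and length of the forecasting horizon

# Multi-output (MO) net: a single network with h outputs Î(t), ..., Î(t+h-1)
mo_model = keras.Sequential([
    keras.layers.Input(shape=(d,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(h),                  # one output per step of the horizon
])
mo_model.compile(optimizer="adam", loss="mse")

# Multi-model (MM) alternative: h independent single-output nets, one per horizon step
mm_models = []
for _ in range(h):
    m = keras.Sequential([
        keras.layers.Input(shape=(d,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1),
    ])
    m.compile(optimizer="adam", loss="mse")
    mm_models.append(m)
```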
Model Identification Strategies
Neural networks are among the most popular approaches for identifying nonlinear models starting from time series. Probably this popularity is due to the reliability and efficiency of the optimization algorithms, capable of operating in the presence of many parameters (many hundreds or even thousands, as in our case). Two different kinds of neural networks have been used in this work. One of the purposes behind this work was in fact to explore, on rigorous experimental bases, if more complex but more promising neural network architectures, such as the LSTM neural networks, could offer greater accuracy for predicting solar radiation, compared to simpler and more consolidated architectures such as feed-forward (FF) neural networks.
LSTM cells were originally proposed by Hochreiter and Schmidhuber in 1997 [45] as a tool to retain a memory of past errors without increasing the dimension of the network explosively. In practice, they introduced a dynamic within the neuron that can combine both long-and short-term memory. In classical FF neural networks, the long-term memory is stored in the values of parameters that are calibrated on past data. The short-term memory, on the other side, is stored in the autoregressive inputs, i.e., the most recent values. This means that the memory structure of the model is fixed. LSTM networks are different because they allow balancing the role of long and short-term in a continuous way to best adapt to the specific process.
Each LSTM cell has three gates (input, output, and forget gate) and a two-dimensional state vector s(t) whose elements are the so-called cell state and hidden state [46,47]. The cell state is responsible for keeping track of the long-term effects of the input. The hidden state synthesizes the information provided by the current input, the cell state, and the previous hidden state. The input and forget gates define how much a new input and the current state, respectively, affect the new state of the cell, balancing the long-and short-term effects. The output gate defines how much the output depends on the current state.
Recurrent nets with LSTM cells appear particularly suitable for solar radiation forecast since the underlying physical process is characterized by both slow (the annual cycle) and fast (the daily evolution) dynamics.
An LSTM network can be defined as in Equations (7)–(8), namely as a function computing an output and an updated state at each time step.
The predictor iteratively makes use of f LSTM starting from an initial state s(t − d − 1) of LSTM cell and processes the input sequence [I(t − 1), I(t − 2), . . . , I(t − d)], updating the neurons internal states in order to store all the relevant information contained in the input. The LSTM net is thus able to consider iteratively all the inputs and can directly be used to forecast several steps ahead (h) at each time t. In a sense, these recurrent networks unify the advantages of the FF recursive and multi-output approaches mentioned above. They explicitly take into account the sequential nature of the time series as the FF recursive, and are optimized on the whole predicted sequence as the FF multi-output. The four neural predictors presented above have been developed through an extensive trial-and-error procedure implemented on an Intel i5-7500 3.40 GHz processor with a GeForce GTX 1050 Ti GPU 768 CUDA Cores. FF nets have been coded using the Python library Keras with Tensorflow as backend [48]. For LSTM networks, we used Python library PyTorch [49]. The hyperparameters of the neural nets were tuned by systematic grid search together with the number of neurons per layer, the number of hidden layers, and all the other features defining the network architecture.
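A minimal PyTorch sketch of such a predictor is shown below (the hidden size and horizon are illustrative, not the tuned architecture of this work):

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """The input window [I(t-d), ..., I(t-1)] is processed sequentially by the
    LSTM cells; the final hidden state is mapped to the h forecasts
    Î(t), ..., Î(t+h-1)."""

    def __init__(self, hidden_size=32, horizon=12):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x):                # x: (batch, d, 1)
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])        # (batch, horizon)

model = LSTMForecaster()
dummy = torch.randn(8, 24, 1)            # batch of 8 windows with d = 24 lags
print(model(dummy).shape)                 # torch.Size([8, 12])
```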
Preliminary Analysis of Solar Data
The primary dataset considered for this study was recorded from 2014 to 2019 by a Davis Vantage 2 weather station installed and managed by the Politecnico di Milano, Italy, at Como Campus. The station is continuously monitored and checked for consistence as part of the dense measurement network of the Centro Meteorologico Lombardo (www.centrometeolombardo.com). Its geographic coordinates are: Lat = 45.80079, Lon = 9.08065 and Elevation = 215 m a.s.l. Together with the solar irradiance I(t), the following physical variables are recorded every 5 min: air temperature, relative humidity, wind speed and direction, atmospheric pressure, rain, and the UV index. However, as explained in Section 2.2, the current study only adopts purely autoregressive models; namely, the forecasted values of solar irradianceÎ(t) are computed only based on preceding values.
A detail of the time series recorded at 00:00 each hour is shown in Figure 1a. We can interpret this time series as the sum of three different components: the astronomical condition (namely the position of the sun), that produces the evident annual cycle; the current meteorological situation (the attenuation due to atmosphere, including clouds); and the specific position of the receptor that may be shadowed by the passage of clouds in the direction of the sun. The first component is deterministically known, the second can be forecasted with a certain accuracy, while the third is much trickier and may easily vary within minutes without a clear dynamic.
The expected global solar radiation in average clear sky conditions I_Clsky(t) (see Figure 1b) was computed by using the Ineichen and Perez model, as presented in [50] and [51]. The Python code that implements this model is part of the SNL PVLib Toolbox, provided by the Sandia National Labs PV Modeling Collaborative (PVMC) platform [52].
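A short usage sketch of that toolbox is given below (assuming a recent pvlib release; get_clearsky with the 'ineichen' model automatically looks up the Linke turbidity):

```python
import pandas as pd
from pvlib.location import Location

# Station coordinates reported above for the Como Campus
como = Location(latitude=45.80079, longitude=9.08065, altitude=215, tz="Europe/Rome")

times = pd.date_range("2019-01-01", "2019-12-31 23:00", freq="1h", tz=como.tz)
clear_sky = como.get_clearsky(times, model="ineichen")  # columns: ghi, dni, dhi (W/m^2)
I_clsky = clear_sky["ghi"]
```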
Fluctuation of Solar Radiation
Solar radiation time series, as well as other geophysical signals, belong to the class of the so-called 1/f noises (also known as pink noise), i.e., long-memory processes whose power density spectra exhibit a slope, α, ranging in [0.5, 1.5]. In other words, they are random processes lying between white noise processes, characterized by α = 0, and random walks, characterized by α = 2 (see, for instance, [53]). Indeed, the slope of the hourly solar irradiance recorded at Como is about α = 1.1 (Figure 2), while the daily average time series exhibits a slope of about α = 0.6, meaning that solar radiation at the daily scale is more similar to a white process. Figure 2 also shows that the power spectral density has some peaks corresponding to a periodicity of 24 hours (1.16·10⁻⁵ Hz) and its multiples.
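The spectral slope α can be estimated, for example, with a Welch periodogram and a log-log fit, as in the following sketch (function and parameter names are illustrative):

```python
import numpy as np
from scipy import signal

def spectral_slope(series, fs=1.0 / 3600.0, nperseg=4096):
    """Estimate the exponent alpha of a 1/f^alpha process from the log-log
    slope of its Welch power spectral density (fs in Hz; hourly data -> 1/3600 Hz)."""
    freqs, psd = signal.welch(series, fs=fs, nperseg=nperseg)
    mask = (freqs > 0) & (psd > 0)
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return -slope  # PSD ~ f^(-alpha)
```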
Mutual Information
To capture the nonlinear dependence of solar irradiance time series from its preceding values, we computed the so-called mutual information M(k), defined as in Equation (9) [54].
In this expression, for some partition of the time series values, p_i is the probability of finding a time series value in the i-th interval, and p_ij(k) is the joint probability that an observation falls in the i-th interval and the observation k time steps later falls into the j-th interval. The partition of the time series can be made with different criteria, for instance by dividing the range of values between the minimum and maximum into a predetermined number of intervals or by taking intervals with equal probability distribution [55]. In our case, we chose to divide the whole range of values into 16 intervals. The normalized mutual information of the solar irradiance time series at Como is shown in Figure 3. In the case of the Como hourly time series, it gradually decays, reaching zero after about six lags. Moreover, it can be observed that the mutual information for the daily values decays more rapidly, thus confirming the greater difficulty in forecasting solar radiation at a daily scale.
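A sketch of this computation with 16 equal-width intervals is given below (the normalization by the marginal entropy is an assumption about the convention used for the "normalized" mutual information):

```python
import numpy as np

def normalized_mutual_information(series, lag, n_bins=16):
    """Mutual information between I(t) and I(t+lag) on an equal-width partition
    of the observed range, normalized by the marginal entropy so that the
    value at very small lags approaches 1 for a deterministic relation."""
    series = np.asarray(series, float)
    x, y = series[:-lag], series[lag:]
    edges = np.linspace(series.min(), series.max(), n_bins + 1)
    joint, _, _ = np.histogram2d(x, y, bins=[edges, edges])
    p_ij = joint / joint.sum()
    p_i = p_ij.sum(axis=1, keepdims=True)   # marginal of I(t)
    p_j = p_ij.sum(axis=0, keepdims=True)   # marginal of I(t+lag)
    nz = p_ij > 0
    mi = np.sum(p_ij[nz] * np.log(p_ij[nz] / (p_i @ p_j)[nz]))
    entropy = -np.sum(p_i[p_i > 0] * np.log(p_i[p_i > 0]))
    return mi / entropy
```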
Figure 3. Normalized mutual information of hourly and daily solar irradiance at Como.
Benchmark Predictors of Hourly Solar Irradiance
Multi-step ahead forecasting of the global solar irradiance I(t) at hourly scale was performed by using models of the form (1)–(8) defined above.
The performance of such a predictor has been compared with that of some standard baseline models. More specifically, we computed:
• The "clear sky" model, Clsky in the following, computed as explained in Section 2.2, which represents the average long-term cycle;
• The so-called Pers24 model, expressed as Î(t) = I(t − 24), which represents the memory linked to the daily cycle;
• A classical persistent model, Pers in what follows, where Î(t + k) = I(t), k = 1, 2, . . ., h, representing the component due to a very short-term memory.
In the performance indices, T denotes the length of the time series, while Ī is the average of the observed data. Concerning the NSE, an index originally developed for evaluating hydrological models [57], it is worth bearing in mind that it can range from −∞ to 1. An efficiency of 1 (NSE = 1) means that the model perfectly reproduces the observed data, while an efficiency of 0 (NSE = 0) indicates that the model predictions are only as accurate as the mean of the observed data. It is worth stressing that, in general, a model is considered sufficiently accurate if NSE > 0.6.
Regardless of what performance index is considered, it is worth noticing that the above classical indicators may overestimate the actual performances of models when applied to the complete time series. When dealing with solar radiation, there is always a strong bias due to the presence of many zero values. In the case at hand, they are about 57% of the sample due to some additional shadowing of the nearby mountains and to the sensitivity of the sensors. When the recorded value is zero, also the forecast is zero (or very close) and all these small errors substantially reduce the average errors and increase the NSE. Additionally, forecasting the solar radiation during the night is useless, and the power network dispatcher may well turn the forecasting model off. In order to overcome this deficiency, which unfortunately is present in many works in the current literature, and allow the models' performances to be compared when they may indeed be useful, we have also computed the same indicators considering only values above 25 Wm −2 (daytime in what follows), a small value normally reached before dawn and after the sunset. These are indeed the conditions when an accurate energy forecast may turn out to be useful.
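A compact sketch of this evaluation, with and without the daytime threshold, is given below (the NSE follows the standard Nash–Sutcliffe definition; names are illustrative):

```python
import numpy as np

def forecast_scores(obs, pred, daytime_threshold=25.0):
    """RMSE and NSE computed on the whole series and on daytime samples only
    (observed irradiance above 25 W/m^2)."""
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)

    def _scores(o, p):
        rmse = float(np.sqrt(np.mean((p - o) ** 2)))
        nse = float(1.0 - np.sum((p - o) ** 2) / np.sum((o - o.mean()) ** 2))
        return {"RMSE": rmse, "NSE": nse}

    day = obs > daytime_threshold
    return {"whole_day": _scores(obs, pred), "daytime_only": _scores(obs[day], pred[day])}
```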
Since the Clsky represents what can be forecasted even without using any information about the current situation, it can be assumed as a reference, and the skill index S_f (14) can be computed to measure the improvement gained using the f_LSTM and f_FF models.
Forecasting Performances
The comparison of the multi-step forecast of solar irradiance with LSTM and FF networks was performed setting the delay parameter d to 24, although the mutual information analysis indicated that d = 6 would be sufficient. This choice is motivated by the intrinsic periodicity of solar radiation at an hourly scale [54]. We have experimentally observed that this choice gives more accurate estimates for all the models considered, probably because it helps the models take into account the natural persistence of solar radiation (see the comments about the performance of the Pers24 model below). The length of the forecasting horizon h, representing the number of steps ahead predicted in the future, was varied from 1 to 12. Data from 2014 to 2017 were used for network training, 2018 for validating the architecture, and 2019 for testing.
The average performances computed on the first 3 hours of the forecasting horizon of the Pers, Pers24, and Clsky models for the test year 2019 are shown in Tables 1 and 2. The performance of the Pers predictor rapidly deteriorates when increasing the horizon. It is acceptable only in the short term: after 1 h, the NSE is equal to 0.79 (considering whole-day samples) and 0.59 (daytime samples only), but after two hours the NSE decreases to 0.54 (whole day) and 0.14 (daytime only), and six steps ahead the NSE becomes −0.93 (whole day) and −1.64 (daytime). The Pers24 and Clsky models preserve the same performances for each step ahead since they are independent of the horizon. Such models inherently take into account the presence of the daily pseudo-periodic component, which affects hourly global solar radiation. The Pers24 predictor appears to be superior to the Clsky (lower error indicators, higher NSE), confirming that the information of the last 24 h is much more relevant for a correct prediction than the long-term annual cycle. Indeed, the sun position does not change much between one day and the following, and the meteorological conditions have a certain, statistically relevant, tendency to persist. Additionally, the Pers24 predictor is the only one with practically zero bias, since the small difference that appears in the first column of Table 1 is simply due to the differences between 31/12/2018 (which is used to compute the predicted values of 1/1/2019) and 31/12/2019. The clear sky model, which by definition operates in the absence of cloud cover, overestimates the values above the threshold by 89.86 Wm−2, on average.
From Table 2, it appears that Pers, Pers24 and Clsky are not reliable models, on average, especially if the NSE is evaluated by using daytime samples only. Figure 4 reports the results obtained with the three different FF approaches (Figure 4a-c) and the LSTM forecasting model (Figure 4d) in terms of NSE (both in the whole day and in daytime samples only). Generally speaking, Figure 4 shows that all the considered neural predictors exhibit an NSE which reaches an asymptotic value around six steps ahead. This is coherent with the previous analysis of the mutual information (see Figure 3), which, at an hourly scale, is almost zero after six lags.
If the evaluation is carried out considering whole day samples, all the models would have to be considered reliable enough since NSE is only slightly below 0.8, even for prediction horizons of 12 h. On the contrary, if the evaluation is made considering daytime samples only, it clearly appears that models are reliable for a maximum of 5 h ahead, as for higher horizons the NSE value typically falls below 0.6. Therefore, removing the nighttime values of the time series is decisive for a realistic assessment of a solar radiation forecasting model, that would otherwise be strongly biased.
Going into deeper detail, the following considerations can be made. The FF recursive approach performs slightly worse, particularly as measured by the NSE and specifically after a forecasting horizon of 5 h. The FF multi-output and multi-model approaches show performances similar to the LSTM. Additionally, one can note that the performance decreases regularly with the length of the horizon for the FF recursive approach and the LSTM net, since they explicitly take into account the sequential nature of the task. Conversely, the FF multi-output and multi-model approaches show irregularities, particularly the latter, since each predictor for a specific time horizon is completely independent of those for shorter horizons. If perfect training were possible, such irregularities might perhaps be reduced, but they cannot be completely avoided, particularly on the test dataset, because they are inherent to an approach that considers each predicted value as a separate task.
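The difference between the recursive and the multi-output strategies can be summarized by the following sketch (illustrative only; `model.predict` is assumed to follow a Keras-like interface).

```python
import numpy as np

def forecast_recursive(one_step_model, window, h):
    """One-step model applied recursively: each prediction is fed back as an input."""
    window = list(window)
    out = []
    for _ in range(h):
        y = float(one_step_model.predict(np.asarray(window)[None, :])[0])
        out.append(y)
        window = window[1:] + [y]        # slide the window, append the forecast
    return np.asarray(out)

def forecast_multi_output(multi_step_model, window, h):
    """Multi-output model: the h values are produced in a single forward pass."""
    return multi_step_model.predict(np.asarray(window)[None, :])[0, :h]
```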
For all the considered benchmarks and neural predictors, the difference between the whole time series (average value 140.37 Wm −2 ) and the case with a threshold (daytime only), that excludes nighttime values (average 328.62 Wm −2 ), emerges clearly, given that during all nights the values are zero or close to it, and thus the corresponding errors are also low.
FF nets and LSTM also perform similarly when considering the indices computed along the first 3 hours of the forecasting horizon, as shown in Tables 3 and 4 for the whole day and daytime, respectively. All the neural predictors provide a definite improvement in comparison to the Pers, Pers24, and Clsky models. Looking, for instance, at the NSE, the best baseline predictor is the Pers24, scoring 0.63 (whole day) and 0.28 (daytime only); the corresponding values for the neural networks exceed 0.86 and 0.73, respectively.
Table 3. Average performances of FF and LSTM predictors on the first 3 hours (whole day): FF-Recursive, FF-Multi-Output, FF-Multi-Model, LSTM.
An in-depth analysis should compare the neural predictors' performance at each step with the best benchmark for that specific step. The latter can be considered as an ensemble of benchmarks composed of the Pers model, the best-performing one step ahead (NSE equal to 0.79), and the Pers24 for the following steps (NSE equal to 0.63 for h from 2 to 12). From this perspective, the neural nets clearly outperform the considered baseline, since their NSE score varies from 0.90 to 0.75 (see the solid lines in Figure 4 referring to the whole day). The same analysis performed excluding nighttime values leads to quite similar results, confirming that the neural networks always provide a performance much higher than the benchmarks considered here.
An additional way to examine the model performances is presented in Table 5. We report here the NSE of the predictions of the LSTM network on three horizons, namely 1, 3, and 6 hours ahead.
The sunlight (i.e., above 25 Wm−2) test series is partitioned into three classes: cloudy, partly cloudy and sunny days, which constitute about 30, 30, and 40% of the sample, respectively. More precisely, cloudy days are defined as those when the daily average irradiance is below 60% of the clear sky index and sunny days those that are above 90% (remember that the clear sky index already accounts for the average sky cloudiness). It is quite apparent that the performance of the model decreases consistently from sunny, to partly cloudy, to cloudy days. This result is better illustrated in Figure 5, where the 3-hour-ahead predictions are shown for three typical days. In the sunny day, on the right, the process is almost deterministic (governed mainly by astronomical conditions), while the situation is completely different in a cloudy day. In the latter case, the forecasting error is of the same order as the process itself (NSE close to zero), and it can be even larger at 6 hours ahead. This explains the negative NSE value shown in Table 5.
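A possible implementation of this sky-condition partition (a sketch under the assumption that daily ratios are computed on daytime samples only; exact details may differ from the authors' procedure) is:

```python
import pandas as pd

def classify_days(irr: pd.Series, clear_sky: pd.Series, threshold=25.0):
    """Label each day as cloudy / partly cloudy / sunny from the ratio between the
    daily mean measured irradiance and the daily mean clear-sky irradiance."""
    day = clear_sky > threshold                       # daytime samples only
    ratio = irr[day].resample("D").mean() / clear_sky[day].resample("D").mean()
    return pd.cut(ratio, bins=[0.0, 0.6, 0.9, float("inf")],
                  labels=["cloudy", "partly cloudy", "sunny"])
```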
Domain Adaptation
Besides the accuracy of the forecasted values, another important characteristic of the forecasting models is their generalization capability, often mentioned as domain adaptation in the neural networks literature [58]. This means the possibility of storing knowledge gained while solving one problem and applying it to different, though similar, datasets [59].
To test this feature, the FF and LSTM networks developed for the Como station (source domain) have been used, without retraining, on other sites (target domains) spanning more than one degree of latitude and representing quite different geographical settings: from the low, open plain at 35 m a.s.l. up to the mountains at 800 m a.s.l. In addition, the test year has been changed, because solar radiation is far from being truly periodical and some years (e.g., 2017) show significantly higher values than others (e.g., 2011); this means quite different solar radiation, with a difference of about 25% between yearly average values. Figure 6 shows the NSE for the multi-output FF and LSTM networks for three additional stations. All the graphs more or less reach a plateau after six steps ahead, as suggested by the mutual information computed on the Como station, and the differences between the FF and LSTM networks appear very small or even negligible in almost all the other stations. Six hours ahead, the difference in NSE between Como, for which the networks have been trained, in the test year (2019) and Bema in 2017, which appears to be the most different dataset, is only about 3% for both the FF models and the LSTM.
As a further trial, both the FF models and the LSTM have been tested on a slightly different process, i.e., the hourly average solar radiation recorded at the Como station. While this process has the same average as the original dataset, its variability is different, since its standard deviation decreased by about 5%; the averaging process indeed filters the high frequencies. Forecasting results are shown in Figure 7. For this process as well, the neural models perform more or less as for the hourly values on which they have been trained. The accuracy of both the LSTM and FF networks improves by about 0.02 (or 8%) in terms of standard MAE and slightly less in terms of NSE, in comparison to the original process. For a correct comparison with Figure 4, however, it is worth bearing in mind that the 1-hour-ahead prediction corresponds to (t + 2) in the graph, since the average computed at hour (t + 1) is that from (t) to (t + 1) and, thus, includes values that are only 5 minutes ahead of the instant at which the prediction is formulated.
An ad-hoc training on each sequence would undoubtedly improve the performance, but the purpose of this section is exactly to show the potential of networks calibrated on different stations, to evaluate the possibility of adopting a predictor developed elsewhere when a sufficiently long series of values is missing. The forecasting models we developed for a specific site could be used with acceptable accuracy for sites where recording stations are not available or where the existing time series are not long enough. This suggests the possibility of developing a unique forecasting model for the entire region.
Some Remarks on Network Implementations
The development of many successful deep learning models in various applications has been made possible by three joint factors. First, the availability of big data, which are necessary to identify complex models characterized by thousands of parameters. Second, the intuition of making use of fast parallel processing units (GPUs) able to deal with the high computational effort required. Third, the availability of an efficient gradient-based method to train these kinds of neural networks.
The latter is the well-known backpropagation (BP) technique, which allows efficient computation of the gradients of the loss function with respect to each model weight and bias. To apply the backpropagation of the gradient, it is necessary to have a feed-forward architecture (i.e., without self-loops). In this case, the optimization is extremely efficient, since the process can be entirely parallelized, exploiting the GPU.
When the neural architecture presents some loops, as happens in recurrent cells, the BP technique has to be slightly modified in order to fit the new situation. This can be done by unfolding the neural network through time, so as to remove self-loops. This extension of BP is known as backpropagation through time (BPTT) in the machine learning literature. The issue with BPTT is that the unfolding process should in principle last for an infinite number of steps, making the technique useless for practical purposes. For this reason, it is necessary to limit the number of unfolding steps, considering only the time steps that contain useful information for the prediction (in this case, we say that the BPTT is truncated). As is easy to understand, BPTT is not as efficient as traditional BP, because it is not possible to fully parallelize the computation. As a consequence, we are not able to fully exploit the GPU's computing power, resulting in a slower training. The presence of recurrent units also produces much more complex optimization problems, due to the presence of a significant number of local optima [60]. Figure 8 shows the substantial difference in the evolution of the training process: as usual, the mean of the quadratic errors of the FF network slowly decreases toward a minimum, while the same function shows sudden jumps followed by several epochs of stationarity in the case of the LSTM. The training algorithm of the LSTM can avoid being trapped for too many epochs in local minima, but on the other hand, these local minima are more frequent.
For the sake of comparison, we trained the four neural architectures using the same hyperparameter grid, considering the ranges reported in Table 6. As training algorithm, we used the Adam optimizer. Each training procedure has been repeated three times to avoid the problems of an unlucky weight initialization.
While performing similarly under many viewpoints, the FF and LSTM architectures show significant differences if we consider the sensitivity to the hyperparameter values. Figure 9 shows the sensitivity bands obtained for each architecture. The upper bound represents, for each step ahead, the best NSE score achieved across the hyperparameter combinations. The cases in which the optimization process fails due to a strongly inefficient initialization of the weights have been excluded. The variability of the LSTM performances across the hyperparameter space is quite limited compared with that of the FF architectures. This is probably because the LSTM presents a sequential structure and is optimized on the whole 12-step forecasting horizon. The recursive FF model is identified as the optimal one-step-ahead predictor, and thus there are some cases characterized by poor performances in the rest of the horizon. The multi-output and multi-model FF seem to suffer from the same problem because, as already pointed out, they predict the values at each time step as independent variables.
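To make the truncation of BPTT mentioned above concrete, the following PyTorch-style sketch (purely illustrative; not the training configuration used in this study) detaches the LSTM state every k steps, so that gradients are propagated only over a finite window.

```python
import torch

def truncated_bptt_epoch(lstm, head, loss_fn, x, y, optimizer, k=24):
    """x: (T, batch, features), y: (T, batch, 1); gradients flow over at most k steps."""
    state = None
    for t0 in range(0, x.size(0), k):
        if state is not None:
            state = tuple(s.detach() for s in state)   # truncation point: cut the graph
        out, state = lstm(x[t0:t0 + k], state)
        loss = loss_fn(head(out), y[t0:t0 + k])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```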
Conclusions
The availability of accurate multi-step-ahead forecasts of solar irradiance (and, hence, power) is of extreme importance for the efficient balancing and management of power networks, since such forecasts allow the implementation of accurate and efficient control procedures, such as model predictive control.
The results reported in this study further confirm the well-known accuracy of FF and LSTM networks for the above purpose and, more in general, in predicting time series related to environmental variables. Another interesting conclusion is that, among the Rec, MM and MO models, the Rec is the one that exhibits the lowest performances. A rough explanation probably lies in the fact that its parameters are optimized over a time horizon of 1 step, but then artificially used for a longer horizon, thus propagating the error. Therefore, one of the merits of this study is to clarify that a common practice, namely identifying a model of the type x(t + 1) = f(x(t)) and then using it to predict x(t + h), is not the best choice. To this end, the proposed MM and MO may represent more appropriate alternatives.
However, such good performances are obtained at a cost in terms of the data and time required to train the network. In actual applications, one has to trade these costs against the improvement in the precision of the forecasting. In this respect, the MO approach appears to offer the best trade-off between accuracy and complexity. Indeed, it can reach a very good performance with a minimal increment in the number of parameters compared to the recursive approach. The MM approach performs slightly better than the MO one but requires a different training (and possibly, a different architecture) for each forecasting horizon, which, in the present study, means training over a million parameters.
In more general terms, the selection of the forecasting model should be made by looking at the comparative advantages that a better precision provides versus the effort to obtain such a precision. Though the economic cost and the computation time required to synthesize even a very complex LSTM network are already rather low and still decreasing, one may also consider adopting a classical FF neural network model, which outperforms the traditional Pers24 model and is much easier to train with respect to the corresponding LSTM.
Another peculiarity of this work is showing how performance indices are strongly affected by the presence of null values; in this respect, nighttime samples should be removed for a correct assessment of model performances.
Both FF and LSTM networks developed in this study have proved to be able to forecast solar radiation in other stations of a relatively wide domain with a minimal loss of precision and without the need to retrain them. This opens the way to the development of county or regional predictors, valid for ungauged locations with different geographical settings. Such precision may perhaps be further improved by using other meteorological data as input to the model, thus extending the purely autoregressive approach adopted in this study. | 10,355.8 | 2020-08-02T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation
Navigational assistance aims to help visually-impaired people to navigate the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for the terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
Introduction
In this paper, the main focus is navigation assistance for visually-impaired people through terrain awareness, a technical term that was originally coined for commercial aircraft. In aviation, a Terrain Awareness and Warning System (TAWS) is generally an on-board module aimed at preventing unintentional impacts with the ground [1]. Within a different context, precisely blind assistance, the task of terrain awareness involves traversable ground parsing and navigation-related scene understanding, which are widely desired within the visually-impaired community [2,3].
As a matter of fact, each one of these navigational tasks has been well tackled through its respective solutions, and the mobility of the visually impaired has been enhanced. Along with the increasing demand during everyday independent navigation [2,3], the assistance topic highlights challenges in juggling multiple tasks simultaneously and coordinating all of the perception needs efficiently. In response to these observations, the research community has been motivated to offer more independence by integrating different detectors on the basis of traversability perception, which is considered the backbone of any VI-dedicated navigational assistive tool [26].
However, a majority of processing pursues a sequential pipeline instead of a unified way, separately detecting different navigation-related scene elements. Thereby, it is computationally intensive to run multiple detectors together, and the processing latency makes it infeasible within the blind assistance context. For illustration, one of the pioneering works [23,35,38] performed two main tasks for its personal guidance system. It approximately runs the full floor segmentation at 0.3 Frames Per Second (FPS) with additional stair detection iteration time ranging from 50-150 ms [35]. In spite of being precise in staircase modeling, this approach depends on further optimization to provide assistance at normal walking speed. A more recent example could be the sound of vision system [16,17,29], which aims to support impaired people to autonomously navigate in complex environments. While their fusion-based imaging was visually appealing, a long latency was incurred when identifying the elements of interest such as ground, walls and stairs. It takes more than 300 ms to compute stereo correspondences and detect negative obstacles [17], let alone other processing components that make it non-ideal for real-time assistance on embedded platforms. This system should be enhanced by avoiding significant delays in its main processing pipeline. Towards this objective, multi-threading is an effective way to reduce latency while sharing computational burden between cores. The commercial version of smart glasses from KR-VISION [42] has shown satisfactory performance for the detection of obstacles and hazardous curbs across different processing threads. It continuously receives images from the sensors and multi-tasks at different frame rates. Alternatively, a unified feedback design was proposed to complement the discrete detection of traversable areas and water puddles within a polarized RGB-Depth (pRGB-D) framework [41]. However, the user study revealed a higher demand for discerning terrain information.
In the literature, a number of systems [43][44][45][46] rely on sensor fusion to understand more of the surrounding scenes. Along this line, proof-of-concepts were also investigated in [47][48][49][50] to use highly integrated radars to warn against collisions with pedestrians and cars, taking into consideration that fast-moving objects are response-time critical. Arguably, for navigation assistance, an even greater concern lies in the depth data from almost all commercial 3D sensors, which suffer from a limited depth range and could not maintain the robustness across various environments [22,26,29,37]. Inevitably, approaches based on a stereo camera or light-coding RGB-D sensor generally perform range expansion [13,14], depth enhancement [22] or depend on both visual and depth information to complement each other [23]. Not to mention the time consumption in these steps, underlying assumptions were frequently made such as: the ground plane is the biggest area [9,10]; the area directly in front of the user is accessible [18,19]; and variant versions of flat world [24,36], Manhattan world [23,27,35,38] or stixel world assumptions [15,25,41]. These factors all limit the flexibility and applicability of navigational assistive technologies.
Nowadays, unlike the traditional approaches mentioned above, Convolutional Neural Networks (CNNs) learn and discriminate between different features directly from the input data using a deeper abstraction of representation layers [51]. More precisely, recent advances in deep learning have achieved break-through results in most vision-based tasks including object classification [52], object detection [53], semantic segmentation [54] and instance segmentation [55]. Semantic segmentation, as one of the challenging tasks, aims to partition an image into several coherent semantically-meaningful parts. As depicted in Figure 1, because traditional approaches detect different targets independently [56], assistive feedback to the users is generated separately. Intuitively, it is beneficial to cover the tasks of the perception module of a VI-dedicated navigational assistive system in a unified manner, because it allows solving many problems at once and exploiting their inter-relations and spatial relationships (contexts), creating reasonably favorable conditions for unified feedback design. Semantic segmentation is meant to fulfill exactly this purpose. It classifies a wide spectrum of scene classes directly, leading to pixel-wise understanding, which provides a very rich source of processed information for upper-level navigational assistance in visually-impaired individuals. Additionally, the incessant increase of large-scale scene parsing datasets [57][58][59] and affordable computational resources has also contributed to the momentum of CNN-based semantic segmentation in its growth as the key enabler, to cover navigation-related perception tasks [56].
Figure 1. Two approaches of perception in navigational assistance for the visually impaired. A different example image is used for water hazards' detection, but these images are all captured in real-world scenarios and segmented with the proposed approach.
Based on these notions, we propose to seize pixel-wise semantic segmentation to provide terrain awareness in a unified way. Up until very recently, pixel-wise semantic segmentation was not usable in terms of speed. To respond to the surge in demand, efficient semantic segmentation has been a heavily researched topic over the past two years, spanning a diverse range of application domains with the emergence of architectures that could reach near real-time segmentation [60][61][62][63][64][65][66][67][68]. These advances have made possible the utilization of full scene segmentation in time-critical cases like blind assistance. However, to the best of our knowledge, approaches that have customized real-time semantic segmentation to assist visually-impaired pedestrians are scarce in the state of the art. In this regard, our unified framework is a pioneering attempt going much further than simply identifying the most traversable direction [28,41], and it is different from those efforts made to aid navigation in prosthetic vision [27,69,70] because our approach can be used and accessed by both blind and partially-sighted individuals.
We have already presented some preliminary studies related to our approaches [22,41]. This paper considerably extends previously-established proofs-of-concept by including novel contributions and results that reside in the following main aspects:
• A unification of terrain awareness regarding traversable areas, obstacles, sidewalks, stairs, water hazards, pedestrians and vehicles.
• A real-time semantic segmentation network to learn both global scene contexts and local textures without imposing any assumptions, while reaching higher performance than traditional approaches.
• A real-world navigational assistance framework on a wearable prototype for visually-impaired individuals.
• A comprehensive set of experiments on a large-scale public dataset, as well as an egocentric dataset captured with the assistive prototype. The real-world egocentric dataset can be accessed at [71].
• A closed-loop field test involving real visually-impaired users, which validates the effectivity and the versatility of our solution, as well as giving insightful hints about how to reach higher level safety and offer more independence to the users.
The remainder of this paper is structured as follows. Section 2 reviews related work that has addressed both traversability-related terrain awareness and real-time semantic segmentation. In Section 3, the framework is elaborated in terms of the wearable navigation assistance system, the semantic segmentation architecture and the implementation details. In Section 4, the approach is evaluated and discussed regarding real-time/real-world performance by comparing to traditional algorithms and state-of-the-art networks. In Section 5, a closed-loop field test is fully described with the aim to validate the effectivity and versatility of our approach. Section 6 draws the conclusions and gives an outlook to future work.
Related Work
In this section, we review the relevant literature on traversability/terrain awareness and pixel-wise semantic segmentation for the visually impaired.
Traversability Awareness
The study of the traversable part of a surface is usually referred to as traversability [22,41], which has gained huge interest within the research community of blind assistance. Among the literature, a large part of the proposals detected traversability with a commercial stereo camera. As one of the most representative approaches, RANdom SAmple Consensus (RANSAC) [72] has been adapted to model the ground plane. A. Rodríguez et al. [9,10] estimated the ground plane based on RANSAC and filtering techniques by using the dense disparity map. Multiple variations of the RANSAC approach were reported later, each trying to improve the classic approach [12,22]. Furthermore, ground geometry assessment [29] and surface discontinuity negotiation [73] were addressed, taking into account that real-world ground areas are not always planar surfaces [74] and the wearable camera lenses share a distortion given the wide field of view. Inspired exactly by this observation, the stixel world [75] marked a significant milestone for flexibly representing traffic environments including the free road space, as well as static/moving obstacles. In this line, possibilities were explored to leverage the stixel-based techniques for autonomous vehicles and transfer them into assistive technology for the visually impaired [15,25,41]. To overcome the limitation of incompatible assumptions across application domains, [25] followed the Manhattan world stereo method [76] to obtain ground-to-image transformation; [15] clustered the normal vectors in the lower half of the field of view; while [41] integrated Inertial Measurement Unit (IMU) observations along with vision inputs in a straightforward way.
Another cluster of classic methods involves light-coding sensors, which are able to deliver dense depth information in indoor environments. R. Cheng et al. [20] detected ground and obstacles based on the algorithm of seeded growth within depth images. However, as the depth range of the light-coding sensor is limited, namely 0.8-4 m without direct sunshine, speckle-based approaches are just proof-of-concepts or only feasible in indoor environments. Since close-range depth imaging is desirable for safety-critical obstacle avoidance, heuristic approaches were developed to decrease the minimum range of the light-coding sensor in [13,14] by combining active speckle projecting with passive Infrared (IR) stereo matching. As far as longer traversability is regarded, A. Aladren et al. [23] robustly expanded range-based indoor floor segmentation with image intensities, pursuing a complex pipeline, which fails to provide real-time assistance. With the same concern on scene interpretation in the distance, a dual-field sensing scheme [21] was proposed by integrating a laser scanner and a camera. It interpreted far-field image data based on the appearance and spatial cues, which were modeled using the near-field interpreted data. In our previous work [22], large-scale IR stereo matching [14,77] and RGB guided filtering [78] were incorporated to enhance the multi-modal RGB-Infrared-Depth (RGB-IR-D) sensory awareness. It achieves superior detection results of the traversable area, which covers a broader field of view and a longer navigable depth range. However, what remains practically unexplored is the unified awareness of not only traversable areas, but also other navigation-related terrain classes such as stairs and water hazards.
Terrain Awareness
Motivated by the enhanced mobility and higher level demand of visually-impaired people, the research community has begun to integrate different terrain detectors beyond traversability awareness. In this line, the upper-level knowledge is offered by perception frameworks of stairs and curbs, which represent hazardous situations in everyday indoor and outdoor environments. T. Schwarze and Z. Zhong [36] propagated the valid ground plane measurements and tracked the stairs with a helmet-mounted egocentric stereo camera. J.J. Guerrero et al. [35,38] created a chest-mounted personal guidance system to detect ground areas and parametrize stairs in a sequential way. For descending steps' classification, C. Stahlschmidt et al. [39] simplified the ground plane detection and considered depth jumps as the main characteristics by using the point cloud from a Time-of-Flight (ToF) sensor.
Intersection navigation is also one of the major ingredients of independent living. M. Poggi et al. [79] projected the point cloud from a top-view perspective thanks to the robust RANSAC-based ground segmentation and detected crosswalks by leveraging 3D data provided by a customized RGB-D camera and a Convolutional Neural Network (CNN). Taking steps further than the seeded growing ground/obstacle perception [20], R. Cheng et al. [80,81] proposed the real-time zebra crosswalk and crossing light detection algorithms to assist vulnerable visually-impaired pedestrians, which exhibited high robustness at challenging metropolitan intersections. In a previous work [41], we addressed water puddles' detection beyond traversability with a pRGB-D sensor and generated stereo sound feedback to guide the visually-impaired to follow the prioritized direction for hazard avoidance. In spite of the impressive strides towards higher mobility of visually-impaired people, the detection of different terrain classes pursues a sequential manner instead of a unified way. As a consequence, it is not computationally efficient to run different detectors together, and the processing latency is deemed infeasible for time-critical blind assistance.
Semantic Segmentation for the Visually Impaired
Pixel-wise semantic segmentation has emerged as an extremely powerful approach to detect and identify multiple classes of scenes/objects simultaneously. However, the research topic of designing pixel-wise semantic segmentation to assist the visually impaired has not been widely investigated. A team of researchers proposed the semantic paintbrush [82], which is an augmented reality system based on a purely passive RGB-Infrared (RGB-IR) stereo setup, along with a laser pointer allowing the user to draw directly onto its 3D reconstruction. Unlike typical assistive systems, it places the user "in the loop" to exhaustively segment semantics of interest. L. Horne et al. [69,70] presented a computer system to aid in obstacle avoidance and distant object localization by using semantic labeling techniques. Although related, the produced stimulation pattern can be thought of as a low resolution, low dynamic range, distorted image, which is insufficient for our task. With similar purposes for prosthetic vision, A. Perez-Yus et al. [27] adopted a head-mounted RGB-D camera to detect free space, obstacles and scene direction in front of the user.
The Fully-Convolutional Network (FCN) [54], as the pioneering architecture for semantic segmentation, has been leveraged to detect the navigational path in [26], inherently alleviating the need for hand-crafting specific features, as well as providing a reliable generalization capability. A different piece of related work [28] has been recently presented to identify the most walkable direction for outdoor navigation, while semantic segmentation constitutes an intermediate step, followed by a spatial-temporal graph for decision making. It achieved decent accuracy for predicting a safe direction, namely 84% at a predetermined safety radius of 100 pixels. While inspiring, this work focused on the tracking of a safe-to-follow object by providing only sparse bounding-box semantic predictions and hence cannot be directly used for upper-level reasoning tasks. Similar bounding-box interpretation was addressed when ultrasonic sensors and computer vision joined forces [44] by semantically assigning a relative degree of danger, which is limited to only four categories of detected obstructions. Although sporadic efforts have been made along this line, these approaches are unable to run in real time, which is a critical issue for blind assistance. Additionally, they did not provide unified terrain awareness nor demonstrate closed-loop field navigation. Considering these reasons, this task represents a challenging and so far largely unexplored research topic.
Real-Time Pixel-Wise Semantic Segmentation
Semantic segmentation has been fueled by the recently emerging deep learning pipelines and architectures. Among the literature, a vital part of networks is predominantly based on FCNs [54], which were proposed to adapt CNNs, initially designed for classification, to produce pixel-wise classification outputs by making them fully convolutional. SegNet [60] is known as another revolutionary deep CNN architecture with a topologically symmetrical encoder-decoder design. Instead of storing all feature maps, SegNet uses max-pooling indexes obtained from the encoder to up-sample the corresponding feature maps for the decoder, which dramatically reduces the memory and computational cost. ENet [61] was proposed as an efficient alternative to enable the implementation of semantic segmentation in real time. Adopting views from ResNet [83], ENet was constructed with multiple bottleneck modules, which can be used for either down-sampling or up-sampling images. Unlike SegNet's symmetric architecture, ENet has a larger encoder than its decoder as it is believed that the initial network layers should not directly contribute to classification. Instead, the encoder should rather act as good feature extractors and only pre-process the input for later portions of the network, while the decoder is only required to fine-tune the details. This simplified structure allows ENet to perform fast semantic segmentation. However, ENet sacrifices a good deal of accuracy earned by more complex architectures in order to remain efficient.
In our previous work, we proposed ERFNet [64,65], which aimed at maximizing the trade-off between accuracy/efficiency and making CNN-based segmentation suitable for applications on current embedded hardware platforms. With a similar purpose, SQNet [63] used parallel dilated convolutions and fused them as an element-wise sum to combine low-level knowledge from lower layers of the encoder, which helped with classifying the contours of objects more exactly. LinkNet [67] made an attempt to get accurate instance-level prediction without compromising processing time by linking the encoder and the corresponding decoder. These architectures have surpassed ENet in terms of pixel-exact classification of small features. For large-scale scene parsing tasks, PSPNet [84] was proposed to use a decoder with max-pooling layers with diverse widths in order to gather diverse levels of context in the last layers. However, PSPNet requires an excessively long processing time, namely more than one second to predict a 2048 × 1024 high-resolution image on one Nvidia TitanX GPU card. ICNet [66] proposed a compressed-PSPNet-based image cascade network that incorporates multi-resolution branches under proper label guidance. Although these networks claimed to yield near real-time inference, most of them are designed for autonomous vehicles [63][64][65], biomedical image segmentation [62] or human body part segmentation [68]. None of the current real-time segmentation approaches have been tailored for blind assistance, which is a time-critical, context-critical and safety-critical topic. In addition, architectures in the state-of-the-art have not been thoroughly tested in the real world. Based on this notion, we aim to customize real-time semantic segmentation to aid navigation in visually-impaired individuals and offer an in-depth evaluation, focusing on a quantitative analysis of real-world performance, followed by qualitative results, as well as discussions.
Approach
In this section, our approach to unify the navigation-related terrain awareness is described in detail. As shown in Figure 2, our approach is outlined in terms of the wearable navigation assistance system, which incorporates the robust depth segmentation and the real-time semantic segmentation.
System Overview
In this framework, the main motivation is to design a prototype that should be wearable without hurting the self-esteem of visually-impaired people. With this target in mind, we follow the trend of using head-mounted glasses [22,36,41,46] to acquire environment information and interact with visually-impaired people. As worn by the user in Figure 2, the system is composed of a pair of smart glasses and a portable processor, which can be easily carried, and it is robust enough to operate in rough terrain. The pair of smart glasses, named Intoer, has been made available at [42]. Intoer comprises an RGB-D sensor, the RealSense R200 [85], and a set of bone-conducting earphones [86].
RGB-D Perception
Illustrated in Figure 3, this pair of smart glasses is quite suitable for navigational assistance due to its small size and light weight, as well as the environmental adaptability. Precisely, it is able to perform large-scale RGB-IR-D perception in both indoor and outdoor environments owing to the active stereo design [87]. It leverages a combination of active speckle projecting and passive stereo matching. An IR laser projector projects static non-visible near-IR patterns on the scene, which are then acquired by the left and right IR cameras. The image processor generates a depth map through the embedded stereo matching algorithm. For texture-less indoor environments, the projected patterns enrich textures. As shown in the indoor scenario in Figure 3, the texture-less black shirt hanging on the chair has been projected with plentiful near-IR patterns, which are beneficial for stereo matching to generate dense depth information. In sunny outdoor environments, shown in the outdoor scenarios in Figure 3 (see the shadow of the user on the ground), although projected patterns are submerged by sunlight, the near-IR components of sunlight shine on the scene to form well-textured IR images.
With the contribution of abundant textures to robust stereo matching, this combination allows the smart glasses to work under both indoor and outdoor circumstances. It is important to remark that, although the pair of smart glasses enables indoor/outdoor RGB-D perception, there exist various noise sources, mismatched pixels and black holes in the depth images, as displayed in Figure 4b. According to the technical overview [85], the original depth points generated by the hardware correlation engines are high-quality photometric matches between the left-right stereo image pairs. This allows the embedded algorithm to scale well to noisy infrared images across indoor/outdoor scenarios, delivering accurate, but sparse depth information. For this reason, a large portion of pixels remain mismatched with relatively lower correlation confidence, causing many holes in the original depth image. However, in this paper, the depth image is used for robust obstacle avoidance on top of the navigation-related terrain segmentation of CNNs. To this end, a dense depth image is preferred to assist the visually impaired so as not to leave out potential obstacles, based on the knowledge that a stereo sensor generally requires a good trade-off between density and accuracy. Following this rationale, unlike our previous work, which performed time-consuming guided hole-filling [22], we use a simple, yet effective way to deal with the noise and pre-process the depth image.
1. We enable a stream of a 640 × 480 RGB image and a stream of a 320 × 240 IR stereo pair with global shutter, which produces a high-speed stream of a 320 × 240 depth image. Depth information is projected to the RGB image so as to acquire a synchronized 640 × 480 depth stream.
2. To achieve high environmental adaptability, the automatic exposure and gain control of the IR stereo pair are enabled, while the power of the IR projector is fixed.
3. To enforce the embedded stereo matching algorithm to deliver dense maps, we use a different preset configuration with respect to the original depth image of RealSense (see Figure 4b), by controlling how aggressive the algorithm is at discarding matched pixels. Precisely, most of the depth control thresholds are at the loosest setting, while only the left-right consistency constraint is adjusted to 30 from the range [0, 2047].
4. As shown in Figures 2 and 4d,e, the depth images are de-noised by eliminating small segments.
Depth noises can be regarded as outliers in disparity images caused by low texture, reflections, noise, etc. [88]. These outliers usually show up as small patches of disparity that differ strongly from the surrounding disparities. To identify these outliers, the disparity image is segmented by allowing neighboring disparities within one segment to vary by at most one pixel, considering a four-connected image grid. The disparities of all segments below a certain size are set to invalid. Following [77], we remove small segments with an area smaller than 200 pixels.
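A minimal version of this small-segment removal can be obtained with OpenCV's speckle filter, which implements exactly this kind of invalidation (a sketch; the actual pipeline of the prototype is not shown here).

```python
import numpy as np
import cv2

def remove_small_segments(disparity: np.ndarray,
                          max_size: int = 200, max_diff: int = 1) -> np.ndarray:
    """Invalidate connected disparity segments smaller than max_size pixels, where
    neighboring disparities within a segment may differ by at most max_diff."""
    disp = disparity.astype(np.int16).copy()
    cv2.filterSpeckles(disp, 0, max_size, max_diff)   # 0 marks invalidated pixels
    return disp
```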
The dense depth image with noise reduction leads to robust segmentation of short-range obstacles when the semantic segmentation output is used as the basis for upper-level assistance. For illustrative purposes, in Figures 1 and 2, 5 m is set as the threshold: pixels closer than 5 m that are not classified as one of the navigation-related classes, including traversable area, stair, water, pedestrian or vehicle, are segmented directly at the pixel level as obstacles.
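In code, this pixel-level rule can be sketched as follows (label identifiers are hypothetical; they depend on the class map of the segmentation network).

```python
import numpy as np

# Hypothetical label ids of the navigation-related classes in the segmentation output.
TERRAIN_CLASSES = (1, 2, 3, 4, 5)   # traversable area, stair, water, pedestrian, vehicle

def short_range_obstacles(depth_m: np.ndarray, labels: np.ndarray,
                          max_range: float = 5.0) -> np.ndarray:
    """Flag pixels closer than max_range that are not navigation-related classes."""
    valid = depth_m > 0                                  # 0 marks invalid depth
    not_terrain = ~np.isin(labels, TERRAIN_CLASSES)
    return valid & (depth_m < max_range) & not_terrain
```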
Feedback Device
As far as the feedback is concerned, the bone conduction headphones transfer the detection results to the visually impaired for both terrain awareness and collision avoidance. This is important as visually-impaired people need to continue hearing environmental sounds, and the bone conducting interface allows them to hear a layer of augmented acoustic reality that is superimposed on the environmental sounds. The detailed acoustic feedback design will be introduced in Section 5.2.
Real-Time Semantic Segmentation Architecture
Based on the above analysis that navigation assistance is a time-critical, safety-critical, context-critical task, we design our semantic segmentation network with the corresponding key ideas shaping our approach to this project. In order to leverage the success of segmenting a variety of scenes and maintaining the efficiency, our architecture follows the encoder-decoder architecture like SegNet [60], ENet [61] and our previous ERFNet [65]. In architectures like FCN [54], feature maps from different layers need to be fused to generate a fine-grained output. As indicated in Figure 5, our approach contrarily uses a more sequential architecture based on an encoder producing down-sampled feature maps and a subsequent decoder that up-samples the feature maps to match input resolution.
Encoder Architecture
In the perspective of time-critical applications, our encoder builds upon an efficient redesign of convolutional blocks with residual connections. Residual connections [83] were considered a breakthrough because they avoid the degradation problem that is present in architectures with a large number of stacked layers. Residual layers have the property of allowing convolution layers to approximate residual functions. Formally, the output y of a layer with vector input x becomes y = F(x, {W_i}) + W_s x, where W_s is usually an identity mapping and F(x, {W_i}) represents the residual mapping to be learned. This residual formulation facilitates learning and significantly reduces the degradation problem present in architectures that stack a large number of layers [83]. The original work proposes two instances of this residual layer: the non-bottleneck design with two 3 × 3 convolutions, as depicted in Figure 6a, or the bottleneck version, as depicted in Figure 6b. Both versions have a similar number of parameters and enable almost equivalent accuracy. However, the bottleneck requires fewer computational resources, and these scale in a more economical way as depth increases. For this reason, the bottleneck design has been commonly adopted in state-of-the-art networks [61,83]. However, it has been reported that non-bottleneck ResNets gain more accuracy from increased depth than the bottleneck versions, which indicates that they are not entirely equivalent and that the bottleneck design still suffers from the degradation problem [83,89]. It is worthwhile to review the redesign of the non-bottleneck residual module in our previous work [64]. As demonstrated in [90], any 2D filter can be represented by a combination of 1D filters in the following way. Let W ∈ R^(C×d_h×d_v×F) denote the weights of a typical 2D convolutional layer, where C is the number of input planes, F is the number of output planes (feature maps) and d_h × d_v represents the kernel size of each feature map (typically d_h ≡ d_v ≡ d). Let b ∈ R^F be the vector representing the bias term for each filter and f_i ∈ R^(d_h×d_v) represent the i-th kernel in the layer. Common approaches first learn these filters from data and then find low-rank approximations as a post-processing step [91]. However, this approach requires additional fine-tuning, and the resulting filters may not be separable. Instead, ref. [92] demonstrates that it is possible to relax the rank-1 constraint and essentially rewrite f_i as a linear combination of 1D filters, f_i = Σ_{k=1..K} σ_k^i v̄_k^i (h̄_k^i)^T, where v̄_k^i and (h̄_k^i)^T are vectors of length d, σ_k^i is a scalar weight and K is the rank of f_i. Based on this representation, J. Alvarez and L. Petersson [90] proposed that each convolutional layer can be decomposed into 1D filters, which can additionally include a non-linearity ϕ(·) in between. In this way, the i-th output of a decomposed layer, a_i^1, can be expressed as a function of its input a_*^0 as a_i^1 = ϕ(b_i + Σ_{l=1..L} h̄_i^l ∗ [ϕ(b_l + Σ_c v̄_l^c ∗ a_c^0)]), where L represents the number of filters in the intermediate layer and ϕ(·) can be implemented with activation functions such as ReLU [52] or PReLU [93]. The resulting decomposed layers have intrinsically low computational cost and simplicity. Additionally, the 1D combinations improve the compactness of the model by minimizing redundancies (as the filters are shared within each 2D combination) and theoretically improve the learning capacity by inserting a non-linearity between the 1D filters [90].
Considering an equal kernel size d for simplicity, the decomposition reduces the weights W_2D ∈ R^(C×d×d×F) of any 2D convolution to a pair of W_1D ∈ R^(C×d×F), so that each 1D pair has 2 × (C × d × F) parameters instead of C × d × d × F. For this reason, this factorization can be leveraged to reduce the 3 × 3 convolutions of the original residual modules. While larger filters would benefit even more from this decomposition, applying it to 3 × 3 convolutions already yields a 33% reduction in parameters (2 × 3 = 6 weights per channel pair instead of 3 × 3 = 9) and further increases the computational efficiency.
By leveraging this decomposition, "Non-bottleneck-1D" (Non-bt-1D) was proposed in previous work [64,65], as depicted in Figure 6c. It is a redesign of the residual layer to strike a rational balance between the efficiency of the bottleneck and the learning capacity of non-bottleneck, by using 1D factorizations of the convolutional kernels. Therefore, it enables an efficient use of a minimized amount of residual layers to extract feature maps and achieve semantic segmentation in real time.
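A PyTorch-style sketch of such a factorized residual block is given below (channel counts, normalization placement and dropout are illustrative and may differ from the exact configuration reported in Table 1).

```python
import torch
import torch.nn as nn

class NonBottleneck1D(nn.Module):
    """Residual block whose 3x3 convolutions are factorized into 3x1 and 1x3 pairs."""
    def __init__(self, channels: int, dilation: int = 1, dropout: float = 0.0):
        super().__init__()
        d = dilation
        self.conv3x1_1 = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv1x3_1 = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv3x1_2 = nn.Conv2d(channels, channels, (3, 1),
                                   padding=(d, 0), dilation=(d, 1))
        self.conv1x3_2 = nn.Conv2d(channels, channels, (1, 3),
                                   padding=(0, d), dilation=(1, d))
        self.bn2 = nn.BatchNorm2d(channels)
        self.drop = nn.Dropout2d(dropout)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv3x1_1(x))            # non-linearity between 1D filters
        out = self.relu(self.bn1(self.conv1x3_1(out)))
        out = self.relu(self.conv3x1_2(out))
        out = self.drop(self.bn2(self.conv1x3_2(out)))
        return self.relu(out + x)                      # residual connection
```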
In addition, our down-sampler block (Figure 6d), inspired by the initial block of ENet [61], performs down-sampling by concatenating the parallel outputs of a single 3 × 3 convolution with stride 2 and a max-pooling module. Down-sampling has the drawback of producing coarser outputs, but it also has two benefits: it enables the deeper layers to gather more context, leading to better classification, and it reduces computation, which in turn allows for more complex layers in the decoder. We argue that, for the visually impaired, contextual information is more important than pixel-exact small features. In this regard, we perform three down-samplings to maintain a judicious trade-off between learning textures and extracting contextual information. Table 1 gives a detailed description of the integral architecture, where the redesigned residual layers are stacked in the encoder after the corresponding down-samplers with different dilation rates. For the terrain awareness in intelligent assistance, we propose to attach a different decoder with respect to the previous work. This key modification aims to collect more contextual information while minimizing the sacrifice of learning textures. Global context information is of cardinal significance for terrain awareness, in order to prevent the feedback of confusing semantics. To detail this, several common issues are worth mentioning for context-critical blind assistance:
• The context relationship is universal and important, especially for complex scene understanding. If the network mispredicts descending stairs in front of a lake, the visually impaired would be left vulnerable in dynamic environments. The data-driven approach should learn the common knowledge that stairs are seldom located over a lake.
• There are many class label pairs that are texture-confusing in classification, such as sidewalk/pavement versus roadways. For visually-impaired people, it is desired to identify the traversable areas that are sidewalks, beyond the detection of "walkable" ground planes. Following this rationale, such distinctions should be made consistently.
• Scene targets such as pedestrians and vehicles have arbitrary sizes from the sensor perspective. For close-range obstacle avoidance and long-range warning of fast-approaching objects, a navigation assistance system should pay much attention to different sub-regions that contain inconspicuous-category stuff.
These risks can be mitigated by exploiting more context and learning more relationships between categories. With this target in mind, we reconstruct the decoder architecture. In this reconstruction, the decoder follows the pyramid pooling module introduced by PSPNet [84]. This module is applied to harvest different sub-region representations, followed by up-sampling and concatenation layers to form the final feature representation. As a result, it carries both local and global contextual information from the pooled representations at different locations. Since it fuses features under a group of different pyramid levels, the output of the different levels in this pyramid pooling module contains feature maps from the encoder with varied sizes. To maintain the weight of global features, we utilize a convolution layer after each pyramid level to reduce the dimension of the context representation to 1/N of the original one if the level size of the pyramid is N. For the situation in Figure 5c, the level size N equals four, and we decrease the number of feature maps from 128 to 32. Subsequently, the low-dimension feature maps are directly up-sampled through bilinear interpolation to obtain features of the same size as the original feature map. Figure 6 contains a depiction of the feature maps generated by each of the blocks in our architecture, from the RGB input to the pixel-level class probabilities and final prediction.
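As an illustration of this decoder, below is a minimal PyTorch sketch of a PSPNet-style pyramid pooling module; the bin sizes, the 128-channel input and the per-level reduction to 32 channels follow the description above, while the class name and the final classifier layer are assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPoolingDecoder(nn.Module):
    """PSPNet-style decoder sketch: pool at several scales, reduce channels,
    up-sample, concatenate with the encoder features and classify."""
    def __init__(self, in_channels: int = 128, num_classes: int = 24,
                 bin_sizes=(1, 2, 4, 8)):
        super().__init__()
        reduced = in_channels // len(bin_sizes)   # 128 -> 32 per level for N = 4
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(output_size=b),
                nn.Conv2d(in_channels, reduced, kernel_size=1, bias=False),
                nn.BatchNorm2d(reduced),
                nn.ReLU(inplace=True),
            ) for b in bin_sizes
        ])
        self.classifier = nn.Conv2d(in_channels + reduced * len(bin_sizes),
                                    num_classes, kernel_size=1)

    def forward(self, feats, out_size):
        h, w = feats.shape[2:]
        pooled = [F.interpolate(stage(feats), size=(h, w),
                                mode="bilinear", align_corners=False)
                  for stage in self.stages]
        fused = torch.cat([feats] + pooled, dim=1)
        logits = self.classifier(fused)
        # Up-sample the class scores back to the input resolution
        return F.interpolate(logits, size=out_size,
                             mode="bilinear", align_corners=False)
```

For a 320 × 240 input down-sampled three times, the encoder output `feats` would be a 128 × 30 × 40 tensor and `out_size` would be (240, 320).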
Dataset
In our work, the challenging ADE20K dataset [57] is chosen, as it covers both indoor and outdoor scenarios. Furthermore, this dataset contains traversability-related classes and many scenes that are very important for navigation assistance, such as stairs and water areas. To enrich the training dataset, we add images containing the classes sky, floor, road, grass, sidewalk, ground, water and stairs from the PASCAL-Context dataset [58] and the COCO-Stuff 10K dataset [59]. Hence, the training involves 37,075 images, of which 20,210 images are from ADE20K, 8733 images from PASCAL-Context and the remaining 8132 images from COCO-Stuff. In addition, we use 2000 images from ADE20K for validation. To provide awareness regarding the scenes that visually-impaired people care about most during navigation, we only use the 22 most frequent classes of scenes or objects for training. Additionally, we merge water, sea, river, pool and lake into a single class of water hazards. In a similar way, stairs, stairway and staircase are merged into a class of stairs. In total, the training involves 24 classes: water areas, stairs and 22 frequent scene elements.
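A minimal sketch of this class-merging step is shown below; the label names follow the merging described above, but the numeric target indices and the mapping function are illustrative assumptions.

```python
# Merge fine-grained labels into the two safety-critical super-classes.
# The numeric target ids (22 for water hazards, 23 for stairs) are assumptions.
WATER_HAZARD_ID, STAIRS_ID = 22, 23

MERGE_MAP = {
    "water": WATER_HAZARD_ID, "sea": WATER_HAZARD_ID, "river": WATER_HAZARD_ID,
    "pool": WATER_HAZARD_ID, "lake": WATER_HAZARD_ID,
    "stairs": STAIRS_ID, "stairway": STAIRS_ID, "staircase": STAIRS_ID,
}

def remap_label(name: str, original_id: int) -> int:
    """Return the merged training id for a raw annotation label."""
    return MERGE_MAP.get(name, original_id)
```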
Data Augmentation
To robustify the model against the various types of images encountered in the real world, we perform a group of data augmentations. Firstly, horizontal flipping with a 50% chance, random cropping and random scaling are jointly used to resize the cropped regions into 320 × 240 input images. Secondly, a random rotation is implemented without cropping, with the angle sampled from the range [−20°, 20°]. This intuition comes from the fact that, during navigation, the orientation of the smart glasses is constantly changing and the images rotate accordingly. It is also beneficial to eliminate the need for the previously-used IMU-based processing [22,43], which requires reasonable synchronization between IMU observations and vision inputs and partially hinders real-time feedback. Thirdly, color jittering in terms of brightness, saturation, contrast and hue is applied. The jittering factors for brightness, saturation and contrast are chosen uniformly from the range [0.8, 1.2]. Hue augmentation is performed by adding a value from the range [−0.2, 0.2] to the hue channel of the Hue Saturation Value (HSV) representation.
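A minimal sketch of such an augmentation pipeline using torchvision is given below; the scaling range and the joint image/mask handling are assumptions, while the flip probability, rotation range, jitter factors and 320 × 240 output size follow the description above.

```python
import random
from torchvision import transforms
from torchvision.transforms import InterpolationMode
import torchvision.transforms.functional as TF

def augment(image, mask):
    """Jointly augment an RGB image and its label mask (illustrative sketch)."""
    # Horizontal flip with 50% chance
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    # Random rotation without cropping, angle sampled in [-20, 20] degrees
    angle = random.uniform(-20.0, 20.0)
    image = TF.rotate(image, angle, interpolation=InterpolationMode.BILINEAR)
    mask = TF.rotate(mask, angle, interpolation=InterpolationMode.NEAREST)
    # Random up-scaling followed by a crop to the 320 x 240 network input
    scale = random.uniform(1.0, 1.5)                        # scale range assumed
    size = (int(240 * scale), int(320 * scale))
    image = TF.resize(image, size, interpolation=InterpolationMode.BILINEAR)
    mask = TF.resize(mask, size, interpolation=InterpolationMode.NEAREST)
    i, j, h, w = transforms.RandomCrop.get_params(image, output_size=(240, 320))
    image, mask = TF.crop(image, i, j, h, w), TF.crop(mask, i, j, h, w)
    # Photometric jitter on the image only (factors from the text)
    jitter = transforms.ColorJitter(brightness=(0.8, 1.2), contrast=(0.8, 1.2),
                                    saturation=(0.8, 1.2), hue=0.2)
    return jitter(image), mask
```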
Training Setup
Our model is trained using the Adam optimization of stochastic gradient descent [94]. In this work, training is performed with a batch size of 12, momentum of 0.9 and weight decay of 2 × 10⁻⁴, and we start with an initial learning rate of 5 × 10⁻⁵ that decreases exponentially across epochs. Following the scheme customized in [61], the class weights are determined as w_class = 1/ln(c + p_class), where c is set to 1.001 to encourage the model to learn more information about the less frequent classes in the dataset. For pre-training, we first adapt the encoder's last layers to produce a single classification output by adding extra pooling layers and a fully-connected layer, and then train the modified encoder on ImageNet [95]. After that, the extra layers are removed, and the decoder is appended to train the full network. In the training phase, we also include Batch Normalization (BN) [96] to accelerate convergence and dropout [97] as a regularization measure. More precisely, the dropout probability is set to 0.3 in the encoder and 0.1 in the decoder, respectively, as this yielded better results in our architecture. With this setup, the training reaches convergence, as shown in Figure 7, when the cross-entropy loss value is used as the training criterion.
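The class weighting above can be computed directly from label frequencies; here is a small sketch, assuming p_class is the pixel-wise probability of each class over the training set (the function name and the example counts are illustrative).

```python
import numpy as np

def class_weights(pixel_counts, c: float = 1.001):
    """w_class = 1 / ln(c + p_class), with p_class the class frequency
    over all labeled pixels (weighting scheme of [61], c = 1.001)."""
    counts = np.asarray(pixel_counts, dtype=np.float64)
    p_class = counts / counts.sum()
    return 1.0 / np.log(c + p_class)

# Example: rare classes receive larger weights in the cross-entropy loss.
weights = class_weights([5_000_000, 200_000, 20_000])
```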
Experiments and Discussions
In this section, we present a comprehensive set of experiments to demonstrate that the accuracy and speed of our approach qualify for navigation assistance, as well as its real-world performance, by comparing with traditional algorithms and state-of-the-art networks.
Experiment Setup
The experiments were performed with the wearable navigation system in public spaces around Westlake, the Zijingang Campus and the Yuquan Campus of Zhejiang University in Hangzhou and the Polytechnic School of the University of Alcalá in Madrid, as well as Venice Beach and the University of California in Los Angeles. When navigating in different scenarios, we captured real-world images while moving, using our head-worn smart glasses available at [42]. In this fashion, a real-world egocentric vision dataset can be accessed as the TerrainAwarenessDataset [71]. The metrics reported in this paper are the Intersection-over-Union (IoU) and the Pixel-wise Accuracy (PA), which prevail in semantic segmentation tasks [57,58]:

$$\mathrm{IoU} = \frac{TP}{TP + FP + FN}, \qquad \mathrm{PA} = \frac{TP}{LP},$$

where, for the IoU, TP, FP and FN are respectively the number of True Positives, False Positives and False Negatives at the pixel level, and, for the PA, TP and LP are respectively the number of Correctly-Classified Pixels and Labeled Pixels.
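For reference, a minimal sketch of how these two metrics can be computed from a pixel-level confusion matrix is given below; the function names are illustrative assumptions.

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Pixel-level confusion matrix between predicted and ground-truth label maps."""
    mask = gt < num_classes                      # ignore unlabeled pixels
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def iou_and_pa(conf):
    tp = np.diag(conf)                           # true positives per class
    fp = conf.sum(axis=0) - tp                   # false positives per class
    fn = conf.sum(axis=1) - tp                   # false negatives per class
    iou = tp / np.maximum(tp + fp + fn, 1)       # per-class Intersection-over-Union
    pa = tp.sum() / conf.sum()                   # overall pixel-wise accuracy
    return iou, pa
```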
Real-Time Performance
The total computation time for a single frame is 16 ms: the image acquisition and preprocessing from the smart glasses take 3 ms, and the semantic segmentation takes 13 ms (at 320 × 240). In this sense, the computation cost is kept low enough to maintain a refresh rate of 62.5 FPS on a cost-effective processor with a single GTX1050Ti GPU. This inference time demonstrates that our approach can run in real time, while allowing additional time for auditory [9,22,41] or tactile feedback [11,13,43]. In this paper, we use a highly customized set of stereo sound feedback for assistive awareness. It takes around 40 ms for the sonification of the semantic masks, which will be introduced in Section 5.2. Additionally, on an embedded GPU Tegra TX1 (Jetson TX1), which enables higher portability while consuming less than 10 Watts at full load, our approach achieves approximately 22.0 FPS.
In this experiment, we compare the real-time performance of our architecture with state-of-the-art networks designed for efficient semantic segmentation. Table 2 displays the inference time (forward pass) for different resolutions (320 × 240, 448 × 256 and 640 × 480) on a cost-effective GTX1050Ti GPU. At 320 × 240, a resolution that is sufficient to recognize urban scenes accurately for navigation assistance, our architecture is the fastest, at 13 ms. Admittedly, the runtime of SegNet [60] and LinkNet [67] at this resolution cannot be tested, due to inconsistent tensor sizes at the down-sampling layers. For this reason, we also test at 448 × 256, another efficient resolution at which most of the architectures can be evaluated. At this resolution, our model remains very fast, second only to LinkNet [67]. At 640 × 480, a resolution that is close to the average width/height of images in the ADE20K dataset [57], ENet [61] is the fastest, while the runtime of our model is 34 ms, corresponding to a frame rate of 29.4 FPS. However, for navigation assistance, 320 × 240 is arguably the optimum of the three resolutions, since pixel-exact features are less important to visually-impaired people yet require a higher input resolution as well as longer processing latency. Moreover, the average IoU value of our architecture tested on the ADE20K dataset is appreciably higher than those of ENet and LinkNet. When compared with our previous work, both the speed and accuracy of our architecture are slightly better than ERFNet [65], which was designed for autonomous driving. Our ERF-PSPNet inherits the encoder design, but implements a different decoder, which becomes quite efficient at resolutions that are suitable for navigation assistance. In summary, our network achieves a speed as competitive as the fastest ones (ENet, LinkNet), while having significantly better accuracy.
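As a point of reference, the forward-pass time can be measured as in the hedged sketch below; the warm-up count and averaging window are assumptions, and `model` stands for any of the compared networks.

```python
import time
import torch

@torch.no_grad()
def forward_pass_time(model, resolution=(320, 240), warmup=10, runs=100):
    """Average GPU forward-pass time (seconds) for a given input resolution."""
    model.eval().cuda()
    w, h = resolution
    x = torch.randn(1, 3, h, w, device="cuda")
    for _ in range(warmup):                      # warm-up to stabilize clocks/caches
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(runs):
        model(x)
    torch.cuda.synchronize()                     # wait for all kernels to finish
    return (time.time() - start) / runs
```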
Segmentation Accuracy
The accuracy of our approach is first evaluated on the challenging ADE20K dataset [57] by comparing the proposed ERF-PSPNet with state-of-the-art deep neural networks for real-time segmentation, including UNet [62], SegNet [60], ENet [61], SQNet [63], LinkNet [67] and our previous ERFNet [65]. Table 3 details the accuracy of the traversability-related classes, including floor, road, grass, sidewalk and ground, and of other important navigation-related classes, including sky, person, car, water and stairs. In our implementation, the IoU value of ENet is higher than those of SQNet and LinkNet on the ADE20K dataset, which is a challenging dataset requiring the architecture to learn rich contextual information. Since ENet applies multiple dilated convolution layers in order to take a wider context into account, it outperforms SQNet and LinkNet, even though these two networks were claimed to achieve higher accuracy than ENet on datasets for intelligent vehicles. As far as our architecture is concerned, the accuracy of most classes obtained with the proposed ERF-PSPNet exceeds that of the state-of-the-art architectures that are also designed for real-time applications, with especially important improvements on water and stairs. Our architecture builds upon previous work, but has the ability to collect more contextual information without a major sacrifice of learning from textures. As a result, only the accuracy values of sky and person are slightly lower than those of ERFNet.
Real-World Performance
To analyze the major concern of detection performance for real-world assistance, we collect results over several depth ranges: within 2 m, 2-3 m, 3-5 m and 5-10 m on the TerrainAwarenessDataset [71], which contains 120 images for testing with fine annotations of seven classes important for navigation assistance: sky, ground, sidewalks, stairs, water hazards, persons and cars. This adequately considers that, in navigational assistance, 2 m is the general distance for avoiding static obstacles, while the warning distance should be longer when a moving object approaches, e.g., 3 m for pedestrians and 10 m for cars in urban environments. In addition, short-range ground area detection helps to determine the most walkable direction [28], while superior path planning can be supported by longer-range traversability awareness [22], e.g., 5-10 m. Table 4 shows both the IoU and pixel-wise accuracy of traversability awareness, which is the core task of navigational assistance. Here, the traversable areas involve the ground, floor, road, grass and sidewalk. Table 4. Results on the real-world TerrainAwarenessDataset [71] in terms of traversable area parsing. "With Depth": only the pixels with valid depth information are evaluated using pixel-wise accuracy. We compare the traversable area detection of our ERF-PSPNet to state-of-the-art architectures and to a depth-based segmentation approach, 3D-RANSAC-F [9], which estimates the ground plane based on RANSAC and filtering techniques using the dense disparity map. As the depth information of the ground area may be noisy or missing in dynamic environments, we implemented an RGB image-guided filter [78] to fill holes before detection. In this way, the traditional 3D-RANSAC-F achieves decent accuracy in the 2-5 m range, and it surpasses SegNet and ENet in the 2-3 m range, as the depth map within this range is quite dense thanks to the active stereo design of the smart glasses. However, 3D-RANSAC-F simply segments the ground plane from obstacles and has no ability to distinguish traversable areas from other semantic classes such as water areas, resulting in a low IoU on the real-world dataset, where the biggest-ground-plane assumption fails in a vital part of the images.
[Table 4 column headers: Approaches | IoU | Pixel-Wise Accuracy (With Depth): Within 2 m, 2-3 m, 3-5 m, 5-10 m; the per-approach rows are not reproduced here.]
Intriguingly, although ENet exceeds SegNet/LinkNet on the ADE20K dataset, it cannot generalize well in real-world scenarios, due to its limited learning capacity, which hinders its usability. As a result, SegNet and LinkNet exceed ENet in terms of IoU and pixel-wise accuracy when tested on our real-world dataset. UNet is a classic convolutional network for biomedical image segmentation, and it suffers even more from limited model capacity because it is designed to work with limited available annotated samples. Despite being efficient, it struggles to deliver effective segmentation and high-quality semantics. Still, the proposed ERF-PSPNet outperforms 3D-RANSAC-F and these networks in both ranges by a significant margin, owing to the judicious trade-off between learning capacity and inference efficiency achieved in our architecture. As far as terrain awareness is concerned, even if the IoU is not very high, the segmentation results are still of great use. For the visually impaired, it is preferable to know that there are stairs or that there is an approaching pedestrian in some direction, even if the shape is not exactly accurate. Furthermore, it is observed in Table 5 that most of the pixel-wise accuracies within the different ranges are over 90%, which reveals the capacity of our approach to unify these detection tasks. It is noteworthy that the IoU values of stairs and persons on the real-world dataset, which mainly contains outdoor daytime images, are appreciably higher than those achieved on the ADE20K dataset. Although our dataset represents totally unseen scenarios, it mainly focuses on assistance-related urban scenes, where most persons are pedestrians and stairs are close to the user. In comparison, ADE20K features a high variability of person postures and far-away stairs. In this sense, ADE20K is more challenging than the real-world dataset in terms of these two classes. Table 5. ERF-PSPNet on the real-world dataset [71] in terms of terrain awareness. "Traversability": accuracy of the traversable area parsing. The depth information of sky is too sparse to calculate reasonable accuracy values at different ranges.
For the traditional approaches, 3D-SeededRegionGrowing and 3D-RANSAC-F both assume a plane model for ground area identification in visually-impaired applications. This plane model can be recovered either locally, as in 3D-SeededRegionGrowing, which explores neighboring patches, or globally, by making use of RANSAC for the identification of the ground plane equation. 3D-SeededRegionGrowing relies on the sensor to deliver a dense 3D point cloud and struggles to produce complete segmentation in highly-dynamic environments. Although 3D-RANSAC-F expands the detection range of the traversable area thanks to its global strategy, its pixel-wise parsing results are also substantially fragmented. It is worth mentioning that FreeSpaceParse [25], a procedure that renders stixel-level segmentation with the original purpose of representing traffic situations, has been applied successfully thanks to sensor fusion [41] utilizing attitude angles. However, this procedure, tailored to that problem, relies on additional IMU observations and cannot differentiate between ground and water areas. This problem also exists in the other traditional algorithms, while 3D-SeededRegionGrowing even completely misdetects hazardous water areas as traversable areas, owing to its assumption that the ground plane is the lowest part of the scene.
As far as the deep learning-based approaches are concerned, they have the crucial advantage of exploiting a significant amount of data, thus eliminating the dependence on such assumptions. However, for ENet and LinkNet, we observe that trees/walls are sometimes misclassified as sky/ground. In addition, these networks cannot draw the distinction between ground areas and sidewalks consistently. This is mainly due to their inability to collect sufficient contextual information. Qualitatively, our approach not only yields longer and more consistent segmentation, which definitely benefits traversability awareness, but also retains the outstanding ability to provide terrain awareness within this unified framework. Figure 8. Qualitative examples of the segmentation of real-world images produced by our approach compared with the ground-truth annotation, 3D-SeededRegionGrowing [20], 3D-RANSAC-F [9], FreeSpaceParse [25], ENet [61] and LinkNet [67].
Indoor/Outdoor Detection Analysis
We have already proven that our sensory awareness with the smart glasses can deliver robust depth segmentation under different situations in [22]. Here, to prove that our approach can work across indoor/outdoor environments, we evaluate the traversable area segmentation of day/night scenarios from the Gardens Point dataset [98], which mainly contains ground areas in most images along the same trajectory as was originally captured for visual localization. For the reader's information, the Gardens Point dataset was recorded while moving on the Gardens Point Campus of Queensland University of Technology in Brisbane. Qualitatively, our approach can provide quite robust and effective segmentation for traversability awareness. However, as demonstrated in Figure 9, we observe that generally in the daytime, the segmentation of outdoor scenarios is more robust than indoor cases; while at night, the indoor segmentation is slightly better than outdoors. This is mainly because most of the images we used for training are RGB images with well-balanced illumination conditions. To further enhance the robustness in the future, we aim to implement illumination-invariant image pre-transformation, as well as to incorporate near-infrared spectral and pixel-wise polarimetric information [41].
Field Test Setup
We performed a closed-loop field test in February 2018, with six visually-impaired users around Holley Metering Campus in Hangzhou, as displayed in Figure 10. The terrain traversed involves grass, ground and pavement. After learning the stereo sound feedback of the system when wearing our smart glasses, participants had to start the navigation and reach the staircase by hearing real-time acoustic feedback. Obstacles along the trajectory (around 85 m) include low-lying traffic cones, static and moving vehicles/pedestrians, as well as other different kinds of obstacles. For safety reasons, the traffic condition is relatively peaceful compared with urban roadways, and most vehicles are at a low speed when passing through the campus. Figure 11 depicts typical scenarios of the field test and traversable lines, which represent the walkable distances of different directions. In each direction, the farthest distance for navigation is determined by both the traversable areas and the depth images with noise reduction. Unlike our previous work [22,41], the traversable areas are segmented using the semantic masks instead of stixel computation or depth segmentation. For illustrative purposes, the maximum traversable distance in Figure 11 is set to 9.5 m. As a result, it sometimes appears as a flat line, denoting the scenario is obstacle-free and the user should feel quite safe to walk forward. Following [41], the traversable line is mapped to the sounds of instruments, aimed to provide real-time acoustic feedback for hazard avoidance and safety awareness.
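Since the traversable line drives the acoustic feedback, a minimal sketch of how it can be derived from the semantic mask and the depth image is given below; the angular sampling, the choice of traversable class ids and the function layout are illustrative assumptions, while the 9.5 m cap follows the description above.

```python
import numpy as np

TRAVERSABLE_IDS = {0, 1, 2, 3, 4}   # e.g., ground, floor, road, grass, sidewalk (assumed ids)
MAX_DISTANCE = 9.5                  # maximum traversable distance in meters

def traversable_line(sem_mask, depth, num_directions=32):
    """For each viewing direction (image column bin), return the farthest
    traversable distance, limited by the denoised depth map."""
    h, w = sem_mask.shape
    traversable = np.isin(sem_mask, list(TRAVERSABLE_IDS))
    line = np.zeros(num_directions)
    for k, cols in enumerate(np.array_split(np.arange(w), num_directions)):
        d = depth[:, cols][traversable[:, cols]]
        d = d[np.isfinite(d) & (d > 0)]          # drop invalid/noisy depth samples
        line[k] = min(d.max(), MAX_DISTANCE) if d.size else 0.0
    return line
```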
Stereo Sound Feedback
Inspired by the evidence that blind individuals manifest supranormal capabilities performing spatial hearing tasks [99] and high sensitivity to left/right sound signals [100], our interaction method is based on sonification, in which data are represented by sound [101]. Precisely, we use a variant of the sound-mapping interface presented in our previous work [41]. This interaction method renders real-time acoustic feedback to visually-impaired people by synthesizing stereo sound from the traversable line, which has already been proven to be efficient in navigational tasks. It aims to transfer the most walkable direction and the traversable distance of the forward direction. The most walkable way is determined using the prioritized direction to avoid close obstacles [41]. Admittedly, the indicated moving direction is partially influenced by the continuous movement of head-worn glasses during navigation. However, the IMU-based alignment is not applied because visually-impaired people usually determine the azimuth by turning the head and hearing the change of sounds. In this regard, the relative change of the sound azimuth is arguably more important than the absolute azimuth for sonification. Our approach excels at the prediction of robust semantics, which is quite suitable for the upper-level sonification of the traversable direction.
In this work, a clarinet sound is used as the feedback for the most walkable direction, which can also be regarded as the sound for obstacle avoidance. The farther the user deviates from the traversable direction, the more pronounced the sound should be in order to warn against hazardous obstacles; as depicted in Figure 12a,b, this sound is therefore designed to have a louder volume and a higher frequency at larger angles of the stereo sound. We use the clarinet sound because it has a wide pitch range and its volume can be easily controlled within its main registers. In addition, it is suitable for continuous feedback, with a distinctive tone and great penetrating power. In our implementation, the volume ranges within [−100, 0] decibels (dB), and the frequency ranges within [−2, 2] semitones (st), corresponding to 319-493 Hz.
When the most traversable direction indicates the forward path, a sound of traversability awareness should be fed back to the visually-impaired user. In this way, he/she would feel safe to walk without paying much attention to the sound for obstacle avoidance. Here, we use the water droplet sound for safety awareness considering one of its good properties: the sound of water droplets remains mellow when adjusting the mapping parameters. Another main advantage lies in the timbre of the clarinet and water droplet, which sound quite different, so it would not be confusing when synthesizing these two sounds simultaneously. In addition, the volume mapping is inversely related to the degree of traversable direction when compared with the sound for obstacle avoidance as revealed in Figure 12a,c. At the same time, the traversable distance of the forward path is mapped to the interval of the water droplet sound. When the walkable distance is long, the interval is short, so the user would walk briskly. When the traversable distance is limited, the interval would be relatively longer to remind the user to slow down and pay good attention to the sound for hazard awareness. To simplify the sonification of the semantic masks for unified terrain awareness, we use a similar approach in [9] to detect stairs, pedestrians and vehicles at the basis of semantic masks. An instance of staircase, person or car would be fed back by empirically setting 500 points as the detection threshold for 320 × 240 resolution pixel-wise segmented images within 5 m. This is also beneficial to remove false positives caused by erroneous semantic pixels due to noises. Here, the sounds of stairs, pedestrians and vehicles correspond to the instruments bell, xylophone and horn, respectively. Because a staircase represents a special traversable region, we use a compressor (see Figure 13) to reduce the volume of traversability-related feedback, including the sound for obstacle avoidance and safety awareness as introduced above. Intriguingly, the stairway sound follows the Shepard tone [102], to create the auditory illusion of a tone that continually ascends or descends in pitch, corresponding to the ascending/descending stairs. The ascending stairs and descending steps are distinguished using depth information, so visually-impaired people would perceive the terrain in advance. Our sound system is implemented with FMOD [103], which is a game sound engine supporting high-speed audio synthesis. As a result, our stereo sound feeds back the semantic information within 40 ms.
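To make the sonification mapping concrete, the following is a minimal sketch of the parameter mappings described above; the linear interpolation, the maximum angle and the interval bounds for the water-droplet sound are assumptions, while the volume/semitone ranges, the 500-pixel threshold and the 5 m range follow the text.

```python
import numpy as np

def obstacle_avoidance_sound(deviation_deg, max_deg=45.0):
    """Clarinet cue: louder and higher-pitched as the user deviates further
    from the most walkable direction (linear mapping assumed)."""
    x = min(abs(deviation_deg) / max_deg, 1.0)
    volume_db = -100.0 + 100.0 * x           # [-100, 0] dB
    pitch_st = -2.0 + 4.0 * x                # [-2, 2] semitones (~319-493 Hz)
    return volume_db, pitch_st

def droplet_interval(traversable_dist, max_dist=9.5,
                     min_interval=0.3, max_interval=1.5):
    """Water-droplet cue: the longer the free path ahead, the shorter the interval."""
    x = min(max(traversable_dist, 0.0) / max_dist, 1.0)
    return max_interval - (max_interval - min_interval) * x

def instance_detected(sem_mask, class_id, depth, threshold=500, max_range=5.0):
    """Trigger the per-class instrument (bell/xylophone/horn) only when enough
    pixels of that class lie within 5 m, which also suppresses noisy pixels."""
    hits = np.logical_and(sem_mask == class_id, depth < max_range)
    return int(hits.sum()) >= threshold
```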
Field Test Results
During this assistance study, participants would learn the stereo sound feedback in the first place. The working pattern of the system and signals from the bone conduction headset were introduced. Each participant had 10 min to learn, adapt to the audio interface and wander around casually. By touching obstacles, navigating pathways and listening to the sonified sound, we had ensured that the participants fully understood the rules of the sonification-based interaction method. After that, participants were asked to navigate without collisions and reach the staircase (see the staircase images in Figure 11). To provide the participants with a sense of orientation, the users would get an extra hint to turn at the bends or the road intersection. Admittedly, in a more general usage scenario, a higher layer of knowledge could be offered by GPS (Global Positioning System) or topological localization. For readers' information, we have also blindfolded ourselves and traversed more than 5 km without fatigue using the proposed system and AMAP [104].
In this research, all visually-impaired participants completed the test, although sometimes they had to turn to us for help, as displayed in Table 6, mostly to confirm the orientation. The number of collisions and the time to complete the test were also recorded. Collisions include collisions with obstacles such as traffic cones, vehicles, walls, and so on. The timer started when a participant began the navigation and stopped when the participant completed a single test. As recorded in Table 6, the number of collisions was low when traversing such a long trajectory of more than 80 m, and each user successfully avoided at least five obstacles. In this test, explicit feedback about the directions to traverse was found helpful for navigation assistance. The results suggest that participants were aware of obstacles and semantics with our system and could make use of the stereo sound to keep away from hazards, including approaching pedestrians, vehicles and close-range obstacles. They learned to navigate traversable paths and finally reached the staircase, provided with the assistive awareness.
In addition, most of the traversing times suggest that our system supported navigation at normal walking speed, although some individuals took a relatively longer time to finish the test. In summary, the safety and versatility of the navigation assistance system have been dramatically enhanced. Figure 14 depicts the situations of the collisions (only four in total). Some of the failures were due to our semantic classification framework. For example, it sometimes failed to detect low-lying obstacles, as shown in Figure 14a. Instead, the wood had been misclassified as part of the ground because it represents an unseen scenario for our CNN. We believe new datasets will play an essential role in robustifying the model against more unseen scenarios, even though our approach is already able to generalize far beyond its training data. Figure 14b is more related to the depth sensory awareness. When approaching the fence, the depth information became too sparse to enable robust traversable lines. Figure 14c involves a low traffic cone on the way to the staircase. Although users could perceive it from several meters away, it would be out of the vertical field of view when getting closer. This represents a common problem of head-mounted prototypes. For Figure 14d, the collision is more related to the horizontal field of view. Such collisions occurred when the user had already bypassed the obstacle but still scratched its side. We are interested in panoramic semantic segmentation, which would be useful to provide 360° awareness to enhance safety in the future.
Feedback Information from Participants
After the field test, the participants were asked three simple questions: whether the system gives feedback in time, whether the prototype is comfortable to wear and whether the system is useful. The resulting questionnaire is summarized in Table 7, together with their suggestions for future improvement. All users answered that the system is useful and could help them to avoid obstacles and perceive semantic environmental information.
As far as the detailed impressions are concerned, User 1 thinks the warning method is acceptable and very helpful for obstacle avoidance. She also believes that an adaptation phase would be beneficial to better use the assistive tool. The pair of smart glasses seems a little bit heavy to her, applying pressure to her nose. User 2 thinks that learning the feedback in advance helped him become comfortable with and sensitive to the sound, and that the smart glasses are suitable for him to wear. User 3 thinks that the warning method is in accordance with his habits, and he hopes the assistive prototype will become more portable. User 4 thinks that the direction of the feedback is accurate and clear, even though the volume seems to be a bit on the low side. He also believes that the system would be more convenient to use if it were wireless. User 5 thinks the sound representing the traversable direction makes her feel safe, but that it can be annoying in narrow passages. There is a corner (about 2 m wide) in the field test scenario. Here, the sound for hazard avoidance would activate due to the presence of close obstacles. For the sake of safety, the direction of the stereo sound would not remain constant when the user continuously deviates from the traversable path in such situations. We believe this explains her being a bit confused at the corner (a narrow passage in her mind). As for the sound representing the stairs, she considers it to be very good thanks to the upward sensation. Although the pair of smart glasses would slide down after long-term use during her experience, she regards them as highly comfortable to wear. User 6 wants to use the prototype for longer, but he also worries that listening too much to the sound will make him agitated. Overall, the participants are optimistic about the system and would like to have a more profound experience.
Maturity Analysis
Following [9,105], we offer a maturity study, along with the evolution of the smart glasses. On the one hand, our system is a prototype. It has been tested in real-world environments with visually-impaired users, which validates the approach. The prototype is designed as a pair of smart glasses, with various former designs exhibited in Figure 15. It has evolved since the original version of the hand-hold Kinect sensor (see Figure 15a), which was used in [20,43] as a proof-of-concept. It is only feasible in indoor environments due to its light-coding technology. To support outdoor usage, we have a chest-mounted prototype with a Bumblebee stereo camera (see Figure 15b), which is also similar to the one in [9,10]. In [41], the pRGB-D sensor (see Figure 15c) emerged to unify the detection of traversable directions and water puddles. However, the minimum range of the sensor is limited, posing challenges for close obstacle avoidance. Thereby, we have a pair of smart sunglasses (see Figure 15d) aimed at decreasing the minimum range of the RGB-D sensor [13]. After that, a set of low-power millimeter wave radars (see Figure 15e) were integrated to warn against fast-approaching vehicles from both sides. Figure 15f depicts the important prototype that enables adaptability across indoor/outdoor environments [22]. Figure 15g is an intermediate version targeted at intersection navigation [81]. In this paper, we discard previous invalid designs and maintain the strengths, having the mature version of smart glasses, which are easy to wear from the point of view of most users. In the near future, we would add optional accessories such as a nose pad to make the prototype more comfortable to wear for all visually-impaired people. Figure 15. Evolution of the smart glasses: (a) Kinect used in [20,43], (b) Bumblebee stereo camera used in [9,10], (c) pRGB-D sensor [41], (d) a pair of sunglasses with sensors [13], (e) a 3D-printed prototype with RGB-D sensor and millimeter wave radar, (f) a 3D-printed prototype with RGB-D sensor and IMU [22], (g) a wearable navigation system [81] and (h) the pair of smart glasses that has been made available at [42].
On the other hand, we follow the survey [8] to quantitatively evaluate the maturity. In the survey [8], 14 features and an overall score were defined as an attempt to compare the different approaches designed for visually-impaired people. Such a maturity analysis gives a measure of the system's progress/maturity, as well as its overall satisfaction degree. In this regard, it allows us to compare our approach with traditional assistive technologies, which are not limited to vision-based solutions. Table 8 shows the average score of each feature graded by the users and developers. As the first seven features correspond to the needs of visually-impaired users, these scores were given by the participants of the field test. The remaining features reflect the views of designers, whose scores were graded by the engineers, entrepreneurs and professors.
In addition to Table 8, we also give a whole picture (see Figure 16) for the approaches including the vision-based 3D-RANSAC-F [9] rated by itself and 17 traditional obstacle avoidance systems including Echolocation, Navbelt, vOICe, a prototype designed by University of Stuttgart, FIU, Virtual Acoustic Space, NAVI, a system from University of Guelph, GuidanceCane, ENVS, CyARM, Tactile Handle, TVS, EPFL, Tyflos, FIU cv project and UCSC, which were reviewed in detail and graded by the systematic survey [8]. Among these portable obstacle detection systems, various sensors are integrated such as cameras, ultrasonic sensors and laser scanners, involving different feedback designs between auditory or tactile modalities. Figure 16. Maturity ranking, which shows the total score for each approach. From left to right: Echolocation, Navbelt, vOICe, University of Stuttgart, FIU, Virtual Acoustic Space, NAVI, University of Guelph, GuideCane, ENVS, CyARM, Tactile Handle, TVS, EPFL, Tyflos, FIU cv project, UCSC, 3D-RANSAC-F [9] and our approach. In total, our proposal ranked in the top two of these approaches, second to EPFL, which achieved a score of 45.33 using a stereoscopic sonar system and vibrator-based tactile feedback, but it cannot adequately describe the 3D space and collect semantic information. Comparatively, our high score is mainly contributed by the scores on F1 (real-time), F7 (functionalities), F12 (originality) and F13 (availability). Real-time semantic segmentation allows us to provide assistive awareness in a unified way. It covers multiple perception tasks to aid navigation in visually-impaired individuals. This explains the high scores of F1 (real-time) and F7 (functionalities). As far as F12 (originality) is concerned, our approach represents one of the pioneering efforts to develop pixel-wise semantic segmentation for navigation assistance systems. F13 (availability) denotes that the system is implemented and ready for field navigation. F8 (simple) and F9 (robust) are the relative weaknesses of our framework. F8 (simple) requires the complexity of both hardware and software to be small. In this regard, deep learning-based approaches are intrinsically more complex than traditional prototypes. According to [8], F9 (robust) requires that the system still functions in the presence of partial failures. Although our approach is robust in different environmental conditions, which have been proven by the real-world experiments and the outdoor field test, the depth sensory awareness and pixel-wise semantic segmentation are coupled with each other. As far as the indoor test is concerned, we have also blindfolded ourselves to safely navigate in several buildings using our smart glasses. Because indoor environments contain less navigation-related semantics of interest, our system is more like an obstacle avoidance tool provided with superior traversability awareness, where the warning distance is closer due to safety considerations. Readers are advised to refer to our previous work [13,14,22] to learn more about the precisely-designed feedback system and relevant indoor field tests. In addition, we aim to develop semantics-aware SLAM (Simultaneous Localization And Mapping) by making full use of the indoor semantic masks such as desk, chair and door. Following the rules of the survey [8], F5 and F10 are not scored due to the usage of a single Universal Serial Bus (USB) 3.0 cord and the lack of cost information for future deployment. 
Despite being a possibly subjective measure, it is a good reference for a numerical comparison across the surveyed works.
Conclusions and Future Work
Navigational assistance for the Visually Impaired (VI) is undergoing a monumental boom thanks to the developments of Computer Vision (CV). However, monocular detectors or depth sensors are generally applied to separate tasks. In this paper, we address these perception tasks jointly by utilizing real-time semantic segmentation. The proposed framework, based on deep neural networks and depth sensory segmentation, not only benefits the essential traversability awareness at both short and long ranges, but also covers the needs of terrain awareness in a unified way.
We present a comprehensive set of experiments and a closed-loop field test to demonstrate that our approach strikes an excellent trade-off between reliability and speed, achieving high effectiveness and versatility for navigation assistance in terms of unified environmental perception.
In the future, we aim to continuously improve our navigation assistive approach. Specifically, pixel-wise polarization estimation and multi-modal sensory awareness would be incorporated to robustify the framework against cross-season scenarios. Deep learning-based depth interpolation would be beneficial to enhance the RGB-D perception in high dynamic environments and expand the minimum/maximum detectable range. Intersection-centered scene elements including zebra crosswalks and traffic lights would be covered in the road crossing context. Hazardous curbs and water puddles would be addressed to further enhance traversability-related semantic perception by our advanced version of CNNs using hierarchical dilation. In addition, we are interested in panoramic semantic segmentation, which would be useful and fascinating to provide superior assistive awareness.
Moreover, it is necessary to run a larger study with visually-impaired participants to test this approach, while different sonification methods and audio output settings could be compared in a more general usage scenario with semantics-aware visual localization.
Author Contributions: K.Y. conceived of the approach, designed the framework and performed the experiments under the joint supervision of K.W. and L.M.B. K.Y. and E.R. trained the semantic segmentation network. K.Y., W.H. and D.S. developed the stereo sound feedback interface. K.Y., K.W. and J.S. coordinated and organized the field test. K.Y., R.C. and T.C. captured the real-world egocentric dataset. K.Y. wrote the paper. All authors reviewed, revised and approved the paper.
Funding: This research has been partially funded by the Zhejiang Provincial Public Fund through the project of visual assistance technology for the blind based on 3D terrain sensors (No. 2016C33136) and co-funded by the State Key Laboratory of Modern Optical Instrumentation. This work has also been partially funded by the Spanish MINECO/FEDER through the SmartElderlyCar project (TRA2015-70501-C2-1-R), the DGT through the SERMON project (SPIP2017-02305) and from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de actividades I+D (CAM) and co-funded by the EU Structural Funds.
"Computer Science",
"Engineering"
] |
Temporal characterization of laser pulses using an air-based knife-edge technique
We present the characterization of ultrashort laser pulses using the plasma-induced frequency resolved optical switching (PI-FROSt) technique, implemented in ambient air. This recently developed method allows for the temporal reconstruction of a pulse at its focal spot by utilizing a moderately intense pump laser pulse to generate an ionization-induced ultrafast defocusing lens. When propagating through the produced plasma lens, the probe beam to be characterized experiences an increase of its size in the far field. The spectrum of the defocused probe field, measured as a function of the pump-probe delay, allows for a comprehensive characterization of the temporal and spectral attributes of the pulse. We report herein the ability of this technique, initially designed for use in rare gases, to operate in ambient air with similar performance. The method is remarkably straightforward to implement and requires no additional optical component other than a focusing mirror, while delivering laser pulse reconstructions of high reliability.
Introduction
After over three decades of continuous development in ultrafast laser technologies, a wealth of diagnostic tools has emerged for the characterization of femtosecond optical pulses [1][2][3][4][5][6][7]. For an in-depth review of this topic, we invite the reader to refer to [8,9]. In this context, nearly all optical devices designed for pulse characterization require the use of transmissive optics (such as nonlinear crystals, lenses, polarizers, thin glass pieces, and so forth), which can potentially introduce undesired effects on the pulse measurement. For instance, transmissive optics inherently imparts additional spectral phase to the pulse under examination (which can nevertheless be limited by minimizing the total thickness of the optics), potentially posing challenges, especially for the measurement of ultra-broadband laser fields. Moreover, in the case of intense laser pulses, transmissive optics may introduce a nonlinear temporal phase due to nonlinear effects or, in the worst scenario, may be subject to optical damage. Lastly, an optical characterization device does not provide the temporal profile of the laser pulse at the exact location where experiments are carried out. Specifically, in pump-probe experiments, the critical pulse characteristics are those at the point where the pump and probe interact, namely, at their focal positions. Recently, a characterization method working directly in air has been developed [10]. This technique, called tunneling ionization with a perturbation for the time-domain observation of an electric field (TIPTOE), allows for the direct time sampling of the field to be characterized at the focal point. However, since this technique has to resolve the carrier frequency oscillations of the field, it requires acquiring a signal with sub-cycle resolution. Moreover, the approach can only be applied to moderately chirped input pulses [11]. Recently, we demonstrated that the photo-induced free electrons left in the wake of a moderately intense laser pump can be advantageously exploited for characterizing the temporal properties of a pulse [12]. As recently shown in [13], the key idea of this phase-matching-free method is to produce a temporal analogue of the knife-edge technique widely used for determining the spatial intensity distribution of a beam. When created by a bell-shaped pump beam, a plasma distribution is known to act as a negative lens, simply because the refractive index modification induced by free electrons is negative [14,15]. As a consequence, when propagating in this low-density plasma, a probe beam will experience a defocusing leading to an increase of its size in the far field. In the time domain, since the plasma is created almost instantaneously by the pump and provided that its lifetime (typically tens to hundreds of picoseconds) is longer than the probe duration, only the trailing edge of the probe will be defocused. Combined with a coronagraph placed in the far field so as to obstruct the probe path when it propagates alone, the induced plasma then acts as a switch that can be viewed as a temporal blade. More particularly, it was shown that measuring the spectrum of the signal propagating around the coronagraph as a function of the pump-probe delay allows for a comprehensive retrieval of the temporal and spectral characteristics of the probe field. This approach, called plasma-induced frequency resolved optical switching (PI-FROSt), features a number of remarkable assets. It is straightforward to implement, free from phase-matching issues, and can operate over an
exceptionally broad spectral range, in both self- and cross-referenced configurations, and at ultra-high repetition rates with no damage threshold [12]. In order to assess the performance of the method, a noble gas (argon) was used in our first demonstration. Such a choice simplifies the excitation scheme by avoiding the occurrence of the Raman effect. However, while using a static cell allows for controlling the gas composition and pressure, the input window nevertheless introduces additional group velocity dispersion, which can be detrimental for the measurement of few-cycle pulses. In this article, we demonstrate that PI-FROSt measurements can be carried out directly in ambient air. In particular, by comparing the results obtained in air with those in argon, it is shown that the Raman-induced molecular alignment [15] taking place in air does not impact the retrieval process, making the technique extremely convenient to use.
Cross-defocusing in molecular samples
A probe pulse interacting with a rather intense pump in a gas medium undergoes a cross-defocusing that does not solely stem from the plasma. Several other effects, all arising from the nonlinear interaction with the pump, can contribute to the modification of the probe size in the far field. For instance, it is known that, far from two-photon resonances, the electronic Kerr effect tends to induce an instantaneous focusing lens. In the present experiment, it means that, when temporally overlapped with the pump beam, the probe beam experiences a cross-focusing effect in addition to the plasma defocusing. If the pump and probe polarizations are parallel (resp. orthogonal), the refractive index change Δn_Kerr is proportional to n₂I_p (resp. n₂I_p/3), where n₂ is the nonlinear refractive index of the medium and I_p is the pump intensity. Since the defocusing signal is proportional to the square of the refractive index change, the use of orthogonally polarized fields reduces the Kerr contribution by almost one order of magnitude (a factor 9), so that it can be neglected in standard PI-FROSt measurements [12]. In addition to the instantaneous Kerr effect, if the considered molecule does not exhibit a spherical top symmetry (which is the case for oxygen and nitrogen), molecular alignment (i.e., the so-called Raman effect) can take place. For a pulse duration significantly shorter than the rotational period, as with femtosecond laser pulses, the interaction prepares a rotational wavepacket in the ground state of the molecule by impulsive stimulated Raman transitions. The quantum beating of the wavepacket manifests itself as a molecular alignment taking place shortly after the pulse turn-on, followed later by periodic and transient revivals of alignment [16,17]. This molecular alignment also results in a modification of the refractive index experienced by a probe that propagates in the wake of the pump [18]. Calling θ the angle between the laser pump polarization direction and the molecular axis, the refractive index change Δn_align experienced by the probe is proportional to ⟨⟨cos²θ⟩⟩ − 1/3, where ⟨⟨cos²θ⟩⟩ denotes the quantum and thermal average of the operator [19]. Similar to the Kerr effect, the modification of the refractive index induced by molecular alignment depends on the relative polarization of the probe with respect to that of the pump. More particularly, one has Δn_align,⊥ = −Δn_align,∥/2 for a probe polarized perpendicularly with respect to the pump field [18]. As mentioned, once produced, the generated molecular wavepacket continues to evolve after the pump is turned off, resulting in a periodic, field-free re-alignment of the molecules. Throughout the wavepacket evolution, the molecular system evolves from a state where it is preferentially aligned along the pump polarization direction (⟨⟨cos²θ⟩⟩ > 1/3) to a planar delocalization state where it is primarily contained in the plane perpendicular to the pump polarization (⟨⟨cos²θ⟩⟩ < 1/3). As a consequence, a probe with the same (respectively, orthogonal) polarization as the pump will undergo additional focusing (respectively, defocusing) when the molecules are aligned, and defocusing (respectively, focusing) when they are delocalized. For moderate pump intensity, molecular alignment induces a refractive index change proportional to the pump intensity [14]. Note that the rotational Raman contribution to the defocusing signal for an orthogonally polarized probe field (as implemented in the present work) is decreased by a factor of four as compared to a parallel configuration. In the
context of PI-FROSt measurements, molecular alignment induced by the bell-shaped pump beam therefore results in a supplemental contribution to the cross-defocusing signal [14,15]. More particularly, since the pump and probe are orthogonally polarized, the inertial alignment following the pump pulse leads to a negative contribution to the refractive index experienced by the probe that will add to that of the plasma, thus increasing the defocusing signal just after the pump turns off.
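For convenience, the polarization dependence of the two cross-phase contributions discussed above can be summarized as follows; this is a recap of the relations quoted in the text, written in explicit notation, where the factor 1/2 for the perpendicular alignment term is inferred from the factor-of-four signal reduction mentioned above.

```latex
\begin{align*}
\Delta n_{\mathrm{Kerr}}^{\parallel} &= n_2 I_p, &
\Delta n_{\mathrm{Kerr}}^{\perp} &= \tfrac{1}{3}\, n_2 I_p,\\[2pt]
\Delta n_{\mathrm{align}}^{\parallel} &\propto \langle\langle \cos^2\theta \rangle\rangle - \tfrac{1}{3}, &
\Delta n_{\mathrm{align}}^{\perp} &= -\tfrac{1}{2}\, \Delta n_{\mathrm{align}}^{\parallel},
\end{align*}
% With crossed polarizations and a defocusing signal scaling as (\Delta n)^2,
% the Kerr and rotational contributions are reduced by factors of 9 and 4, respectively.
```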
Experimental results
In order to evaluate the impact of molecular alignment on PI-FROSt measurements, the signal obtained in air was compared to that recorded in argon under the same configuration. To achieve this, all experiments were conducted with a cell filled with either air or argon (at 1 bar), thus ensuring that the probe pulse experiences exactly the same total group delay dispersion in both cases. The experimental setup is identical to that reported in [12]. The measurements have been performed on a Ti:Sa femtosecond laser delivering pulses centered at 802 nm, at 1 kHz, with a duration of 35 fs. The present demonstration was carried out with pump and probe pulses with the same central wavelength and crossed polarizations, so as to minimize both the Kerr and molecular alignment contributions. The pump (resp. probe) energy was set to 40 μJ (resp. 10 nJ). Figure 1(a) depicts the spectrogram S(ω, τ), with τ the delay of the pump pulse relative to the probe, obtained in argon for a compressed probe pulse. The latter exhibits, as already observed in [12], an abrupt increase after the pump turns off (τ ≈ 0), together with a significant spectral broadening. At this delay, only the falling edge of the probe is diffracted by the plasma, leading to a diffracted pulse with a duration shorter than the initial pulse (manifested as a broadening in the spectral domain). Such a broadening provides evidence that the probe pulse has undergone an ultrafast dynamic temporal truncation. The spectrally-integrated signal, displayed as the red-dashed line in Fig. 1(h), monotonically increases when decreasing the delay (from τ ≈ 0), in line with the gradual increase of free electron density following the interaction with the pump pulse. This signal does not exhibit evidence of any instantaneous electronic Kerr response, which would manifest as a slight decrease of the signal around τ = 0 (since the Kerr contribution has an opposite effect compared to the plasma), in line with our previous measurements [12]. Figure 1(c) depicts the spectrogram measured in ambient air under the same experimental conditions. A comparison with Fig.
1(a) reveals that the spectrograms acquired in the two gases are highly similar, suggesting a relatively minor contribution of molecular alignment to the measured signal. The sole qualitative distinction between the two gases is observable when comparing the spectrally-integrated signals [Fig. 1(h)]. While the integrated signal recorded in argon monotonically decreases with increasing delay, that measured in air exhibits an initial increase (in the delay region near τ ≈ −50 fs) followed by the same decrease. This difference arises from the (time-delayed) alignment of N₂ and O₂ molecules. Indeed, as explained above, since the pump and probe are cross-polarized, the alignment introduces a negative refractive index contribution that adds to the plasma contribution, thereby explaining the observed signal increase. In order to reconstruct the probe pulse from these spectrograms, a ptychographic algorithm [13] was first used. The latter enables simultaneous fitting of both the probe spectral characteristics and Δn (i.e., the switch) without any assumption regarding the functional form of the latter. The retrieved spectrograms [see Figs. 1(b,d)] allow us to extract, for both gases, the spectral and temporal pulse profiles [depicted as dashed and dotted lines in Figs. 1(e-g)]. The retrieved intensity profile [Fig. 1(f)] reveals a pulse duration close to the expected Fourier-limited 35 fs (full width at half maximum), with a small residual asymmetry, as already observed in [12]. More interestingly, the reconstructions obtained with the ptychography algorithm for air and argon show an excellent agreement, with no discernible differences in the retrieved temporal profiles. Furthermore, we have confirmed that the output of the ptychography algorithm accurately reproduces the integrated spectrum of Fig. 1(h) (not shown for clarity) and particularly the additional molecular contribution observed in air. These findings indicate that the ptychography algorithm suitably accounts for the residual Raman contribution with no impact on the retrieval procedure, so that PI-FROSt can be safely conducted in ambient air. Similar to the procedure outlined in [12], a reconstruction using a standard Levenberg-Marquardt algorithm with a temporal switch of predefined functional shape can also be implemented. For measurements in atomic gases, neglecting the Kerr contribution, the signal writes as

$$S(\omega, \tau) \propto \left| \int E(t)\, \Delta n(t-\tau)\, e^{-i\omega t}\, dt \right|^2, \qquad \Delta n(t) \propto -\int_{-\infty}^{t} I_{\mathrm{pump}}^{\,N}(t')\, dt', \tag{1}$$

where ω is the angular frequency, I_pump is the pump intensity and N is the effective ionization nonlinearity [12]. For molecular gases such as air, the switch function Δn(t) in Eq. 1 should also include the molecular contribution. Nevertheless, our analysis led us to the conclusion that the rotational response can be disregarded with no impact on the retrieval procedure. A pulse reconstruction using the Levenberg-Marquardt algorithm based on Eq. 1 is displayed as the blue solid line in Figs. 1(e-g) for the case of air [Fig. 1(c)]. As evidenced, the retrieved pulse is in excellent agreement with that provided by the ptychography algorithm. This observation demonstrates that, despite its residual presence in the integrated signal, the Raman effect has no impact on the retrieval procedure. We point out that similar conclusions have been obtained for various input chirps and pump energies. At first glance, one might assume that omitting the Raman contribution in Eq.
(1) could lead to a significant error in the retrieval procedure. As a matter of fact, the observed robustness can primarily be attributed to the fact that the contribution of molecular alignment is time-delayed and thus does not alter the region of temporal cutting, which plays the dominant role in the pulse reconstruction. The redundancy of the information gathered in the FROSt spectrogram makes the spectral phase reconstruction procedure extremely robust. In other words, the algorithm refrains from modifying the reconstructed probe spectrum to better adjust for the Raman contribution to the measured signal, because doing so would lead to significant mismatches in other regions of the spectrogram. In order to evaluate the accuracy of the PI-FROSt technique, we inserted several flat SF11 windows of varying thickness (ranging from 5 mm to 35 mm) along the probe path. For each window, PI-FROSt measurements were performed both in air and in argon to assess the spectral phase of the probe. A typical measurement in ambient air is depicted in Fig. 2 for a thickness of 35 mm. As already observed [12], the chirp induced by the plate leads to a pronounced spectral asymmetry in the cutoff region, the blue components being diffracted before the red ones as a result of the linear frequency chirp. In order to evaluate the additional phase introduced by the windows, we subtracted the spectral phase previously obtained without any window [see Fig. 1(e)] from the retrieved phase. The resulting phase difference was then fitted with a second-order polynomial in ω, enabling estimation of the group delay dispersion caused by the windows. The results of this procedure are summarized in Fig. 3. As shown, the retrieved group delay dispersion increases linearly with the window thickness, the results obtained in air (blue circles) and argon (red squares) being highly consistent. A linear fit of the obtained curves allows the group velocity dispersion of SF11 to be evaluated. The latter is found to be 186.3 fs²/mm (resp. 187.5 fs²/mm) for the measurements performed in air (resp. argon), which is very close to the expected value (186.7 fs²/mm). We stress that these findings are particularly noteworthy given that the signal measurements in air and argon, under identical dispersion conditions, were not conducted sequentially (the measurements were initially performed in air across the various thicknesses of dispersive plates before being replicated in argon). The remarkable similarity observed in the pulse reconstructions for the two gases [see for instance Figs. 1(e-f)] constitutes further evidence of the robustness of the PI-FROSt method. Finally, the PI-FROSt signal lifetime was assessed by measuring the integrated signal as a function of the pump-probe delay across a wide temporal range. Here again, the measurements were performed both in argon and in air. The evaluation of the ionization relaxation is of prime importance for the potential applicability of the method at ultra-high repetition rates (for instance, for MHz repetition rate high-power ytterbium lasers). As shown in Fig. 4, the FROSt signal observed in air vanishes in less than 100 ps, while in argon it demonstrates significantly longer persistence, exceeding 350 ps, which was the upper limit achievable with our translation stage. This rapid decay of the PI-FROSt signal in air opens up the possibility of conducting measurements at very high repetition rates, potentially reaching several gigahertz.
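As an illustration of the dispersion-retrieval step described above, the following sketch fits a quadratic to a phase difference to obtain a GDD, then a line to GDD versus thickness to obtain a GVD. All numbers are purely illustrative placeholders (including the 186.7 fs²/mm response); this is not the analysis code used for the measurements.

```python
import numpy as np

# Work in rad/fs and fs so that the fitted coefficients are directly in fs^2.
w0 = 2.35                              # carrier angular frequency (~802 nm), rad/fs
w = np.linspace(w0 - 0.2, w0 + 0.2, 400)

def gdd_from_phases(phi_with_window, phi_reference, w, w0):
    """Quadratic fit of the phase difference; GDD = 2 x quadratic coefficient (fs^2)."""
    dphi = phi_with_window - phi_reference
    c2 = np.polyfit(w - w0, dphi, 2)[0]
    return 2.0 * c2

# Fake retrieved phases: a pure quadratic chirp of 6500 fs^2 plus noise (placeholder).
gdd_true = 6500.0                      # fs^2, roughly 35 mm of an SF11-like glass
phi_ref = np.zeros_like(w)
phi_win = 0.5 * gdd_true * (w - w0) ** 2 + 1e-3 * np.random.randn(w.size)

print(gdd_from_phases(phi_win, phi_ref, w, w0), "fs^2")

# GVD of the glass: slope of the GDD vs. thickness line (illustrative thicknesses).
L_mm = np.array([5.0, 10.0, 20.0, 35.0])
gdd_fs2 = 186.7 * L_mm                 # placeholder, ideal dispersion response
print(np.polyfit(L_mm, gdd_fs2, 1)[0], "fs^2/mm")
```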
Conclusion
In this paper, we have demonstrated the applicability of the plasma-induced frequency-resolved optical switching technique in ambient air, extending its prior application to noble gases. Through a comprehensive comparison of results obtained in argon and in air, we have established that the contribution of rotational alignment to the total refractive index experienced by the probe does not compromise the fidelity of the retrieval process, whether a ptychography algorithm or a standard Levenberg-Marquardt algorithm with a temporal switch of predefined shape (disregarding the molecular Raman response) is used. The capability of PI-FROSt to operate in ambient air removes the need for a gas cell, rendering the technique extremely straightforward to implement. Furthermore, since no transmissive optic is necessary prior to the pump-probe interaction, the method does not influence the spectral phase of the pulse being measured and is immune to phase-matching constraints.
Figure 1
Figure 1 Experimental (a) and retrieved (b) PI-FROSt signal obtained in argon compared to the one measured (c) and retrieved (d) in air for a 35 fs compressed pulse. Spectral phase (e), spectral intensity (g) and temporal intensity profile (f) retrieved by PI-FROSt measurements in the case of argon (red dashed line) and air (green dotted line) using a ptychographic algorithm. The temporal phase retrieved in air from ptychography is depicted as a dashed black line. The blue solid line corresponds to retrieval data obtained in air when a standard Levenberg-Marquardt algorithm is used (see text). Spectrally integrated signal as a function of the pump-probe delay (h) in the case of argon (red dashed line) and air (solid blue line). Both curves are normalized with the same factor for an accurate comparison.
Figure 2
Figure 2 Experimental (a) and retrieved (b) PI-FROSt signal obtained in air for a probe field chirped with a 35 mm SF11 flat window. (c) Retrieved spectral intensity (green dot-dashed line), spectral phase (solid blue line) and its associated quadratic fit (dashed red line). (d) Retrieved temporal intensity (solid blue) and phase profiles (dashed black).
Figure 3
Figure 3 Retrieved group delay dispersion as a function of the thickness of the SF11 flat window inserted in the probe path for air (blue squares) and argon (red circles) with their associated linear fits.
Figure 4
Figure 4 Spectrally-integrated PI-FROSt signal as a function of pump-probe delay obtained in air (blue circles) and in argon (red squares). | 4,428.2 | 2024-04-22T00:00:00.000 | [ "Physics" ] |
Open problems in liquids dynamics: the role of neutron scattering
We review recent inelastic neutron scattering experiments aimed at investigating still open issues in the microscopic dynamics of liquids. It is shown that the interpretation of experimental results is put on solid ground by the application of modern methods of analysis and lineshape modelling which ensure the fulfillment of fundamental physical properties that the spectra must obey. This last condition becomes crucial to avoid overinterpretations of the genuine information conveyed by scattering data, especially when studying weak signals in the dynamic structure factor. Moreover, we highlight the different roles that neutron data presently play compared to molecular dynamics simulations depending on the nature of the sample, including the case of quantum liquids. In particular, we show how neutron measurements remain an indispensable benchmark in assessing the present capabilities of classical and quantum simulation methods. We also mention the potential of statistical methods, such as Bayesian inference, when applied to neutron data analysis and the opportunity they provide in establishing the spectral features without arbitrary assumptions on the model lineshape.
Introduction
Inelastic neutron scattering (INS) is a master technique for experimental studies on the dynamic behaviour, at the nanometre and picosecond scales, of various classes of liquids.Topical examples are given by the pioneering works on the longitudinal collective excitations (sound waves) in liquid metals as Pb and Rb [1,2], which opened the way to the numerous investigations on metallic liquids carried out up to present times [3,4].In particular, these simple monatomic liquids were more recently taken as reference systems for studies of shear waves through the search for possible low frequency excitations in the dynamic structure factor S(Q,ω) [5], which is accessed by both INS and inelastic x-ray scattering (IXS) techniques when the probe exchanges a momentum ħQ and an energy E = ħω with the sample, ħ being the reduced Planck constant.
However, the true realm of neutron scattering is liquid H2, together with its heavier counterpart D2. Across the years, both these fluids attracted much interest when investigating general aspects of the single-particle (self) dynamics of an incoherent scatterer like H2 [6][7][8][9], or quantum effects on either the self [10,11] or the collective properties [12][13][14] of these systems. In fact, the low molecular mass of H2 and D2 and the low temperatures at which they are in the liquid phase lead to evident manifestations of quantum behaviour, as detailed in Sect. 3. Moreover, H2 and D2 are the most used moderating materials for the production of low-energy (about 1 meV) neutrons. In this respect, the design of new-generation hydrogen-based cold neutron sources requires the availability of reliable databases of the scattering law taking explicitly the quantum nature of these fluids into account. Simultaneously, it demands an efficient mapping of the whole kinematic (Q,ω) range for best estimates of the response function. Such requirements unavoidably call for simulation methods capable of providing reliable dynamical data on H2 and D2, since experiments will never cover, in reasonable times, the innumerable kinematic conditions contributing to a total cross section calculation with varying incident neutron energy. Available semiclassical methods for simulating the dynamics of H2 and D2, like ring polymer molecular dynamics (RPMD) [15] and two versions of the Feynman-Kleinert (FK) approach [16,17], therefore require an experimental check, and neutrons are the best probe for this purpose.
Here we review recent INS investigations of both the mentioned classes of liquids, with specific interest in the methods of analysis.In particular, we first address the case of liquids which can be assumed to behave classically, like molten metals, with special focus on the still debated claim that the experimental S(Q,ω) of some metals [18,19] bears the signature of low-frequency transverse-like excitations.As a matter of fact, a dependence on the methods of analysis emerges, so the state of the art of reliable approaches for model fitting to the data is discussed in some detail.
The remainder of the paper deals with the case of a quantum liquid like D2, while discussing the present need for absolute-scale comparisons between neutron data and quantum simulations probing the translational dynamics through the centre of mass dynamic structure factor SCM(Q,ω) of the liquid. Indeed, the effectiveness of quantum calculations of the microscopic dynamics has so far been poorly tested, hindering the possible use of existing quantum simulation methods in the mentioned applications related to cold neutron production. Benchmarking against neutron measurements reveals the still unsatisfactory performance of the semiclassical approximations adopted in each of the available quantum simulation techniques.
Transverse dynamics of liquid metals
In the 1970s, simulation studies of specific time autocorrelation functions [20] showed that dense fluids can sustain shear wave propagation at sufficiently small wavelengths (around and below 0.5 nm, approximately). The specific functions we are referring to are the transverse current autocorrelation function CT(Q,t) and the velocity autocorrelation function Z(t). The former is defined in Eq. (1) in terms of jT(Q,t) = j(Q,t) − jL(Q,t), where the current of an N-particle system is j(Q,t) = ∑α vα(t) exp[i Q ⋅ Rα(t)], Rα(t) and vα(t) denoting the position and velocity of the α-th particle at time t, and jL(Q,t) is the longitudinal current, i.e. the projection of j(Q,t) on the wavevector Q. The angular brackets in Eq. (1) stand for the statistical average in the canonical ensemble [5]. The velocity autocorrelation function is defined by Eq. (2). In particular, the spectrum Z(ω) of Eq. (2) can be considered, for a liquid, as a sort of equivalent of the phonon density of states (DoS) in solid state physics. Unfortunately, neither Z(ω) nor the spectrum of the transverse current autocorrelation function, CT(Q,ω), can actually be measured. Indeed, past determinations of Z(ω) by incoherent neutron scattering [6][7][8] proved to be demanding. INS and IXS experiments enable the determination of the space and time Fourier transform of the microscopic density autocorrelation function, i.e. of the dynamic structure factor S(Q,ω) mentioned in the Introduction. However, S(Q,ω) is a longitudinal quantity by definition [5], therefore the detectability of transverse-like contributions to this function has often been objected to as a matter of principle [21]. Nonetheless, more than ten years ago, at least two papers analyzing IXS data on two molten metals [18,19] reported the presence of an additional low-frequency component in the measured S(Q,ω). In both works, the same phenomenological model was fitted to the experimental spectra. In particular, the fit function was modelled by adding one or two (depending on Q) damped harmonic oscillator (DHO) doublets [22] to a central Lorentzian, the latter accounting for relaxation processes in an effective way. However, such a modelling disregards the fact that the global fit function has a divergent second frequency moment [5,22] in place of the finite theoretical value kB T Q² / M, where kB is the Boltzmann constant, T the temperature, and M the atomic mass of the fluid. As we will show, the physical consistency of the adopted models can instead be decisive for a correct interpretation of the experimental results.
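For reference, standard textbook forms of these two correlation functions are recalled below; normalization conventions vary between authors, so these expressions may differ from the exact Eqs. (1) and (2) of the original.

```latex
C_T(Q,t) \;=\; \frac{1}{N}\,\bigl\langle\, \mathbf{j}_T(-Q,0)\cdot\mathbf{j}_T(Q,t) \,\bigr\rangle ,
\qquad
Z(t) \;=\; \frac{\langle \mathbf{v}_\alpha(0)\cdot\mathbf{v}_\alpha(t)\rangle}{\langle |\mathbf{v}_\alpha(0)|^{2}\rangle} .
```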
At present, there are two possibilities to limit the arbitrariness of fit-based procedures.The first is based on the availability of an exact theory for the functional form of autocorrelation functions and corresponding spectra [23][24][25], where known constraints can be easily imposed.In this case, one can choose a few tentative models guided by physical considerations, then enforce at least the most important sum rules that S(Q,ω) must obey, and finally check which model provides the best fit quality with simultaneous minimization of the number of free parameters.
The other possibility is to control model fitting to the data on a statistical basis, without imposing a priori the number of excitations. At present, we use simple phenomenological models in a complex algorithm exploiting Bayes' theorem to estimate the posterior probabilities, conditional on the experimental data, of the various parameters, including the number of excitations itself [26]. The information from the posterior distributions makes it possible to assess, in a statistical sense, what the data actually support.
Both the theoretically and the probabilistically grounded approaches for the modelling of the scattering signal are briefly described in the next subsections.
Exponential Expansion Theory (EET)
The theory states that any normalized autocorrelation function c(t) / c(0) = 〈A(0) A(t)〉 / 〈A(0)²〉 of a generic dynamical variable A(t) can be represented by a series of exponential terms, called modes, with generally complex amplitudes Ij and frequencies zj, i.e. c(t) / c(0) = ∑j Ij exp(zj t) (3). Consequently, the spectrum is a sum of so-called generalized Lorentzian lines [Eq. (4)]. In particular, if Ij and zj are real, we are dealing with relaxation processes with decay constant zj < 0, which contribute genuine central Lorentzians to the spectrum. If Ij and zj are complex, pairs of conjugate modes are present in the series and account for damped oscillatory components in the correlation function due to collective excitations characterized by a damping Re(zj) < 0 and a frequency Im(zj). Such complex pairs correspond to two distorted Lorentzian lines in the spectrum, centred at the nonzero frequencies ±Im(zj), respectively. As far as the sum rules are concerned, normalization is guaranteed by enforcing the condition ∑j Ij = 1. In addition, constraints are imposed on the odd time derivatives of c(t) at t = 0 in the form [d^p c(t) / dt^p]_{t=0} = ∑j Ij zj^p = 0, for odd p (5), which ensure the correct behaviour of the function at the time origin and are equivalent to requiring the finiteness of the even spectral moments in the frequency domain.
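A minimal numerical sketch of an EET lineshape is given below, assuming the series form quoted above and a spectrum obtained as the real part of the one-sided transform of each mode; the amplitudes and frequencies are illustrative, not fitted values.

```python
# Minimal numerical sketch of an EET lineshape: a sum of "modes" (I_j, z_j).
# Real (I, z < 0) modes give central Lorentzians; complex-conjugate pairs give
# shifted, distorted Lorentzians. Values below are illustrative only.
import numpy as np

modes = [(0.50 + 0.000j, -2.0 + 0.0j),     # one relaxation mode
         (0.25 - 0.125j, -2.0 + 8.0j),     # conjugate pair: damping 2, frequency 8
         (0.25 + 0.125j, -2.0 - 8.0j)]

assert abs(sum(I for I, _ in modes) - 1.0) < 1e-12      # normalization sum rule
assert abs(sum(I * z for I, z in modes)) < 1e-12        # odd-derivative rule, p = 1

def spectrum(w, modes):
    """Assumed spectral form: (1/pi) * sum_j Re[ I_j / (i*w - z_j) ]."""
    return sum(I / (1j * w - z) for I, z in modes).real / np.pi

w = np.linspace(-20, 20, 2001)
S = spectrum(w, modes)
print("area ~", np.trapz(S, w))            # close to 1 when the sum rules hold
```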
Luckily, most correlation functions of interest in liquids dynamics are accurately described by a small number of modes, meaning that the main dynamical processes are actually few.The scheme of the modes may of course change with varying the thermodynamic state and the probed length scale (Q).At each wavevector and thermodynamic state, the best model is chosen on the basis of the corresponding fit quality and of some general knowledge on liquids' behaviour.
In what follows, three models complying with the general EET will be considered for S(Q,ω). The simplest one is a generalized hydrodynamics (GH) triplet with Q-dependent parameters [5]. It is a single-excitation model consisting of one central Lorentzian, accounting for thermal diffusion and structural relaxation in an effective way, plus one doublet of distorted Lorentzians representing the longitudinal sound waves propagating in the fluid. The model spectrum is constrained to have a finite second frequency moment. The second modelling is the viscoelastic (VE) lineshape [22], which differs from the GH one as regards the relaxation phenomena (two central Lorentzians in place of a single one) and the obeyed sum rules. The VE model again foresees a single oscillatory component due to longitudinal collective excitations, and its spectral moments are constrained to converge up to the fourth one. Finally, a two-excitation model, labelled 2C to mean "two complex pairs" [4], is considered in those cases where shear waves have also set in in the liquid. Relaxation phenomena are represented by an effective central Lorentzian and constraints are enforced to ensure, at least, a finite fourth frequency spectral moment. The detailed-balance asymmetry must be applied to the above models before comparison with experiment.
Bayesian inference
As demonstrated in previous works by some of us [26,27], Bayesian methods can be successfully applied to achieve a minimally biased and probabilistically grounded modelling of experimental spectra. The inferential approach relies on the use of a Markov Chain Monte Carlo algorithm endowed with a Reversible Jump (RJ) option [28], which allows the space of free parameters to include the number k of excitations itself, and identifies the model with the highest posterior probability evaluated according to Bayes' theorem, i.e., conditionally on the experimental outcome. As mentioned above, thus far the code foresees the use of simple phenomenological models for S(Q,ω), consisting of an effective central Lorentzian plus a number k of DHO doublets. Details of the Bayesian inference algorithm can be found in Ref. [26]. It is important to note that when the number of possible models (labelled by k) is itself a parameter, a Bayesian analysis naturally includes the so-called Occam's razor principle, or "lex parsimoniae", which states that between two models providing an equally good account of some evidence, the one containing the smaller number of parameters is always to be preferred. Thus, overparametrizations are automatically avoided within the Bayesian approach [27].
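As a rough illustration of the model-selection idea (not the actual RJ-MCMC machinery of Ref. [26]), the sketch below fits k = 1 and k = 2 phenomenological models to a synthetic spectrum and compares them with the Bayesian information criterion, a crude proxy for posterior model probability under vague priors. Model forms, parameters and noise levels are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def dho(w, A, W, G):                     # damped-harmonic-oscillator doublet
    return A * (4 * G * W**2 / np.pi) / ((w**2 - W**2)**2 + 4 * G**2 * w**2)

def model(w, k, *p):
    out = p[0] * (p[1] / np.pi) / (w**2 + p[1]**2)     # central Lorentzian
    for j in range(k):
        A, W, G = p[2 + 3*j: 5 + 3*j]
        out = out + dho(w, A, W, G)
    return out

rng = np.random.default_rng(0)
w = np.linspace(-30, 30, 301)
truth = model(w, 1, 1.0, 2.0, 0.8, 12.0, 3.0)          # data generated with k = 1
y = truth + 0.01 * rng.standard_normal(w.size)

def bic(k, p0):
    f = lambda x, *p: model(x, k, *p)
    p, _ = curve_fit(f, w, y, p0=p0, maxfev=20000)
    rss = np.sum((y - f(w, *p))**2)
    return w.size * np.log(rss / w.size) + len(p0) * np.log(w.size)

print("BIC k=1:", bic(1, [1, 2, 0.8, 12, 3]))
print("BIC k=2:", bic(2, [1, 2, 0.4, 12, 3, 0.4, 10, 3]))   # lower BIC is preferred
```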
In tackling the detectability of a second (transverse) excitation in the experimental S(Q,ω), we are especially interested in the posterior distribution of the parameter k, for which we assumed a uniform prior distribution (i.e., all models are considered to be a priori equally probable) in order to let the inferential process be driven solely by the experimental evidence. An example of the results of such a statistical analysis is discussed, along with EET-based ones, in the next section.
INS and simulation results for Au and Ag
Here we summarize the main results of two investigations carried out on liquid Au [3] and Ag [4] by both INS and ab initio molecular dynamics (AIMD) simulations. Both metals were studied experimentally in the so-called neutron Brillouin scattering (NBS) regime, i.e., at rather low wavevectors ranging from a few inverse nanometres (e.g., 4 nm−1) to values slightly exceeding Qp / 2, where Qp ≈ 26 nm−1 is the position of the maximum in the static structure factor S(Q) of both metals. Since liquid metals are characterized by a high sound velocity cs (2568 m/s in Au and 2790 m/s in Ag), the rather energetic beam (incident energy E0 ≈ 80 meV, energy resolution HWHM = 1.5 meV, corresponding to 2.3 rad ps−1 in ω) of the small-angle BRISP spectrometer at the Institut Laue-Langevin (ILL) [29,30] was required to span the mentioned Q range in both cases. At the same time, the experimental work was paralleled by AIMD calculations (details are given in Refs. [3] and [4]) to check the capability of ab initio methods of reproducing the neutron results and, in the case of satisfactory comparisons, to use the simulations to extend the study of the dynamics to higher wavevectors.
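The quoted conversion between the energy and angular-frequency resolution follows directly from ħ ≈ 0.6582 meV ps:

```latex
\Delta\omega \;=\; \frac{\Delta E}{\hbar}
\;=\; \frac{1.5\ \mathrm{meV}}{0.6582\ \mathrm{meV\,ps}}
\;\approx\; 2.3\ \mathrm{rad\,ps^{-1}} .
```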
The first experiment was performed on Au.An example of the experimental S(Q,ω) at the highest Q of the measurements is given in Fig. 1.
An EET analysis of the spectra showed that a simple GH triplet (see red curve in Fig. 1) was more than sufficient to obtain an accurate description of the data at all Q values. Thus, no evidence of transverse excitations was found from the experimental spectra of liquid Au. Nonetheless, the very good agreement between neutron data and simulations (see Fig. 2) justified an EET analysis of the simulated S(Q,ω) in the wide range 4 nm−1 < Q < 70 nm−1 probed by AIMD. In this case, a GH modelling was found to be insufficient, while the spectra were perfectly described by a VE lineshape. An example of the fit quality is given in Fig. 3. The VE fits to the simulations thus made it possible to better resolve the central peak, but again only a single excitation could be detected. In summary, signs of shear waves turned out to be absent in both the experimental and simulated S(Q,ω) of liquid gold. A study similar to that on Au was later carried out on liquid Ag, again by using the BRISP spectrometer and performing parallel AIMD calculations. As far as the measurements are concerned, a GH modelling was the only one justified within the accuracy of the data (see Fig. 5), thus providing no evidence of a second excitation in the neutron spectra, as in the case of gold. By contrast, when we analyzed the AIMD simulations, we found that above Q = 15 nm−1 a VE lineshape became insufficient, while a 2C model (see Sect. 2.1) provided a very accurate description of the simulated spectra, as shown in Fig. 6. Turning to the Bayesian analysis of the Ag neutron data (see the posterior distributions of k in Fig. 8), the algorithm clearly privileges the one-excitation case, confirming the undetectability of transverse modes in the available neutron data. Moreover, very well-shaped unimodal posteriors were obtained for all the parameters [31], indicating the high reliability of the fit results, which provided, within the errors, the same longitudinal dispersion curve as in Fig. 7. Therefore, different ways to control the plausibility and coherence of model fitting to the data, like the EET and the Bayesian approaches, provide the same results.
To further verify the output of the inferential analysis, it is possible to switch off the RJ algorithm and check the specific posterior distributions pertaining, separately, to the k = 1 and k = 2 cases. Figure 9 shows the corresponding posterior distributions for the undamped frequency Ωs = (ωs² + Γs²)^1/2 of sound waves (Γs being the damping), along with that of shear waves, Ωt, present only in the k = 2 case. It is seen that in the two-excitation case a less symmetric distribution is obtained for Ωs and, more importantly, a broad and nearly flat distribution pertains to Ωt. The second excitation is therefore completely undetermined, confirming the RJ-on result where k is a free parameter. Finally, to better understand why experiments did not reveal a second excitation in S(Q,ω), we performed a rather stringent test on the Ag neutron data. First, we compared the performance of the EET 2C model with the GH one of Fig. 5. Figure 10 shows that at both reported Q values the second complex pair of the 2C lineshape is characterized by a negligible amplitude.
Fig. 10. Fits of the 2C lineshape to the experimental S(Q,ω) of liquid Ag. The second complex pair of the model is the dashed green curve, hardly distinguishable from the zero axis.
Moreover, we found for this inelastic component an unreasonably large inelastic shift and an error on the damping as large as twice the damping value itself, meaning that such a parameter is undetermined. Therefore, according to the lex parsimoniae, there are no reasons to choose the 2C result (8 parameters) in place of the GH one (5 parameters), which provides an identical global fit curve.
Then, we analyzed the neutron spectra in the phenomenological fashion adopted in other works [18,19], without any constraint except normalization. Figure 11 highlights quite an embarrassing situation, closely resembling that of Fig. 1 of Ref. [18] or of Ref. [19]. This example demonstrates that the use of constrained models (i.e., models obeying at least the most important sum rules) can completely change the results and the deduced physical properties. Indeed, uncontrolled fit procedures, escaping either physical or statistical consistency criteria, are prone to confirmation biases. Moreover, the above test shows that the undetectability of low-frequency contributions in appropriate modellings of the experimental S(Q,ω) is not due to the limited resolution or to the scatter of the data points, but is merely a matter of analysis.
As a final remark regarding this discussion on high-temperature classical fluids, we can conclude that the experimental observation of shear waves is not yet assessed for either system, and still remains out of reach of the present spectroscopic techniques. In fact, convincing indications that S(Q,ω) bears also the signature of the transverse dynamics were only found by means of rather recent, quantitative EET studies of simulation results [4]. On the other hand, neutron data on these classical systems showed the effectiveness of ab initio simulation methods, which open the way to extended investigations of the dynamical behaviour of simple classical liquids, without limitations in wavevector.
"Boltzmann" quantum liquids: D2
Liquid hydrogen and deuterium represent an intermediate case within the few systems displaying quantum behaviour (He, H2, D2, and Ne). In fact, they are so-called Boltzmann quantum fluids, where, differently from the case of helium, exchange effects due to particle indistinguishability are supposed to be negligible in comparison to particle delocalization, and Boltzmann statistics can still be assumed to hold. Therefore, distinct trajectories in phase space can still be defined, and molecular dynamics methods applied, although with obvious differences from a purely classical treatment. This is the reason why H2 and D2 can be considered as moderate quantum fluids, since only single-molecule delocalization is actually responsible for the nonclassical behaviour, as suggested by the limited values [32] reached by the de Broglie thermal wavelength Λ = h / (2π M kB T)^1/2 even close to their respective triple points.
The characteristic length scale considered for comparisons with Λ at a given temperature is the mean interparticle distance l = n^−1/3, with n being the number density. At all relevant liquid densities and temperatures of H2 and D2, the condition Λ < l holds [32], implying that overlap of the spatial wave functions of two adjacent molecules, and consequently quantum exchange, does not occur on average, so that quantum statistics need not be invoked. Nonetheless, Λ can reach values of the order of the molecular size, giving rise anyway to quantum delocalization effects that must be accounted for in some approximate, semiclassical way.
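An order-of-magnitude check of this argument for liquid D2, using indicative temperature and density values (assumed here, not taken from the measurements), is sketched below.

```python
import numpy as np

h, kB, NA = 6.626e-34, 1.381e-23, 6.022e23
T = 19.0                        # K, close to the D2 triple point (assumed)
m = 4.028e-3 / NA               # mass of one D2 molecule, kg
rho = 170.0                     # kg/m^3, indicative liquid density (assumed)

n = rho / m                                  # number density, m^-3
lam = h / np.sqrt(2 * np.pi * m * kB * T)    # de Broglie thermal wavelength
l = n ** (-1.0 / 3.0)                        # mean interparticle distance
print(f"Lambda = {lam*1e9:.2f} nm, l = {l*1e9:.2f} nm, Lambda < l: {lam < l}")
```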
A recent review of the available semiclassical approximations for calculations of the dynamic structure factor of moderate quantum fluids, like RPMD [15], FK-Linearized Path Integral (LPI) [16] and FK-Quasi Classical Wigner (QCW) [17], can be found in Ref. [14].
On the experimental side, neutron measurements on these liquids have been few, and have often reported the double differential cross section in arbitrary units (including the rotational contributions) [12,13] rather than the centre of mass dynamic structure factor SCM(Q,ω) in absolute ones. Experimental knowledge of the translational dynamics would instead be very helpful for an in-depth verification of the mentioned simulation methods, enabling or ruling out their possible use in applications. These reasons led us to perform a neutron measurement on liquid D2, accompanied by the RPMD, FK-LPI and FK-QCW calculations detailed in Ref. [14], all relying on the isotropic Silvera-Goldman intermolecular potential [33].
The experiment was carried out on the BRISP spectrometer in the standard configuration also used for the metallic samples. After corrections for background, attenuation and multiple scattering, the single scattering intensity of a homonuclear diatomic molecule can be schematized as in Eq. (6), where C is a normalization factor and k1/k0 is the ratio of the scattered to incident neutron wavevector. In Eq. (6), u(Q) is the intermolecular cross section, depending only on the coherent scattering length of D, while J(Q,E) is an intramolecular term accounting for the rotational structure, which depends also on the incoherent scattering length of D. Both u(Q) and J(Q,E) can be confidently calculated within well-known approximations complying, in the case of J(Q,E), with the asymmetry requirements of quantum spectra [14]. Finally, R(E) is the instrument energy resolution function.
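Only the resolution-broadening step of such a fit is sketched below; the actual composition of intra- and intermolecular terms in Eq. (6) is not reproduced, and the Gaussian resolution width and toy lineshape are assumptions for illustration.

```python
import numpy as np

E = np.linspace(-10, 10, 801)                 # energy transfer grid, meV
dE = E[1] - E[0]

def gaussian(E, hwhm):
    s = hwhm / np.sqrt(2 * np.log(2))
    return np.exp(-0.5 * (E / s) ** 2) / (s * np.sqrt(2 * np.pi))

def broaden(model, resolution):
    """Numerical convolution with an area-normalized resolution function R(E)."""
    return np.convolve(model, resolution, mode="same") * dE

S_model = gaussian(E - 2.0, 0.8) + gaussian(E + 2.0, 0.8)   # toy inelastic doublet
R = gaussian(E, 1.5)                                        # assumed HWHM, meV
I_broadened = broaden(S_model, R)
print(np.trapz(I_broadened, E))               # area preserved (~ area of S_model)
```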
The quantity we are interested in is the quantum SCM(Q,E), which is related to the Kubo (symmetric) dynamic structure factor SCM,K(Q,E) through a well-known relation. The strategy is then to model SCM,K(Q,E) by means of a classical lineshape and perform an overall fit to I(1)(Q,E) according to Eq. (6) and to the asymmetry condition quoted above. Again, a GH model for SCM,K(Q,E) was found to provide an accurate description of the data, as shown in Fig. 12(a) for a representative Q value. The various components of the fit function can be better appreciated in the zoom of Fig. 12(b) and are detailed in the caption.
The fits to I(1)(Q,E) then provided the sought-for absolute-scale, resolution-free SCM(Q,E). The latter is compared, at 9 nm−1, with the various simulation results in Fig. 13. Similar results were obtained at other wavevectors [14].
Fig. 13. Resolution-free quantum dynamic structure factor of liquid D2 in absolute units, as obtained from the fits to the neutron data (red solid curve) and from different semiclassical approximations used in available quantum simulation techniques, each one specified in the legend and detailed in Ref. [14].
The comparisons in Fig. 13, needless to say, are quite unsatisfactory independently of the attempted simulation method.Neutron data visibly show much more marked collective excitations.Moreover, simulations do not agree even among themselves.
Despite the clear loss of important details of the dynamical structure, it is anyway valuable that the absolute scale of neutron and simulation data is the same.
In order to compare frequencies and damping coefficients derived from experiment and simulations in more detail, we also carried out a thorough EET analysis of the RPMD and FK-QCW results. All simulation outputs were found to be accurately described by a VE model at the investigated Q values. The corresponding longitudinal dispersion curve and wavevector dependence of the damping coefficient of liquid D2 are finally shown in Fig. 14.
Fig. 14. Experimental (red full circles), RPMD (blue empty circles), and FK-QCW (green empty stars) dispersion curves of sound modes. The dashed black line is the hydrodynamic behaviour cs Q, with cs = 0.984 nm/ps. The damping coefficient is displayed with red full squares, blue empty squares, and green empty diamonds for experiment, RPMD and FK-QCW, respectively.
Figure 14 shows that, although the dispersion curves do not agree within the errors at some Q value, this property is reasonably captured by simulations, especially by the FK one.Conversely, the striking feature in the plot is the smaller damping deduced from the measurements.
The long lifetime of collective excitations is considered to be one of the main dynamical manifestations of quantum behaviour [34].Apparently, the present simulation techniques do not fully grasp a salient feature of quantum liquids dynamics.Of course, we cannot exclude with absolute certainty some systematic errors in the (demanding) neutron data analysis.However, the disagreement among simulation results suggests that the semiclassical approximations adopted in each technique should anyway be improved.Indeed, while the dispersion curve is acceptably accounted for by computations, which is anyway an achievement, an important property of cold liquids, i.e., the damping of sound waves, is largely missed and systematically overestimated.
Final remarks
The examples reported in this work show that the degree of accuracy with which simulations of S(Q,ω) are able to reproduce experimental data is very different for classical and quantum liquids: it is very high in the first case and still rather low in the second.
As a general result, we demonstrated that unconstrained analyses of S(Q,ω) can be deceptive, while use of the EET enables very accurate and physically-grounded descriptions of both experimental and simulation data.
The future of course lies in the implementation of EET-based models within algorithms exploiting Bayesian inference, so to maximize both physical and statistical consistency of fit results and avert possible biases in the analysis.
We acknowledge the BRISP spectrometer at the ILL, no longer operational, which was the state-of-the-art instrument for neutron Brillouin scattering at thermal energies: a neutron technique of extreme importance for studies of dense liquid dynamics. Regretfully, no equivalent instrument exists at present anywhere in the world, which is a real loss for the field.
Fig. 2 .
Fig. 2. Simulated S(Q,ω) at a representative Q value, taking asymmetry and experimental resolution into account (black curve). The red curve is the GH fit to the experimental data.
Fig. 3 .
Fig. 3. Simulated S(Q,ω) of liquid Au (black circles). The GH fit (red thin curve) is clearly inaccurate, while the VE modelling (pale blue curve) provides a high fit quality.
However, shear waves clearly propagate in this liquid, as witnessed by the maximum in the DoS shown in Fig. 4(b), and by the evolution, as Q grows, of a low-frequency maximum in CT(Q,ω) [see Fig. 4(c)]. The dispersion relation we were able to determine from both experiment and simulations contains instead only the longitudinal branch [Fig. 4(a)].
Fig. 4 .
Fig. 4. (a) Dispersion relation obtained from NBS (red dots) and AIMD data (black dots); the dashed black line is the hydrodynamic law ωs = cs Q. (b) DoS of liquid Au from AIMD. (c) Normalized CT(Q,ω) at Q values ranging from 4.0 (blue monotonic curve) to 25.5 nm−1 (broad purple curve). The magenta dashed lines, containing the maxima of ωs(Q), highlight the frequency band where the DoS shows the typical shoulder due to longitudinal modes. The green dashed curve, marking the frequency around which the maxima of CT(Q,ω) evolve in panel (c), is shown to correspond to the frequency of the maximum in the DoS, the latter owing to the weakly dispersive transverse modes that were not detected from S(Q,ω), thereby leaving a missing branch in panel (a).
The 2C fits to the AIMD data of Ag thus enabled the determination of a second, low-frequency branch ωt(Q) in the dispersion relation displayed in Fig. 7. However, when ωs starts to decrease, the fits become unstable and the low-frequency modes are badly determined. Conversely, transverse-like modes are clearly detected whenever ωs and ωt have considerably different values.
Fig. 7 .
Fig. 7. Dispersion relation of liquid Ag. Red dots and black circles represent the longitudinal branch as obtained from experiment and simulations, respectively. The transverse branch (green stars) could be determined only from the AIMD results at Q values where ωs and ωt are sufficiently different.
Given the ambiguous situation between experimental and simulation results for liquid Ag, we revisited the neutron data by means of the second statistical route based on Bayes' theorem, briefly described in Sect. 2.2. As mentioned, we are particularly interested in the posterior distribution of the parameter k, conditional on the experimental data set at hand (globally denoted as Y). The posterior distribution of the number of excitations is reported in Fig. 8 at both low and high values of the measured Q range.
Fig. 8 .
Fig. 8. Conditional posterior distributions of the number k of inelastic components in the experimental S(Q,ω) of liquid Ag at selected Q values.
Fig. 9 .
Fig. 9. Posterior distributions for the undamped frequency in the one-(cyan) and two-excitation (green for the transverse and pink for the longitudinal) cases, specifically obtained by switching off the RJ algorithm option.
Fig. 11 .
Fig. 11. Fits of the neutron data on liquid Ag using the unconstrained phenomenological model specified in the figure title.
Fig. 12 .
Fig. 12. (a) Single scattering intensity of liquid D2 (black circles with error bars) and global fit curve (red solid line) according to Eq. (6). (b) Zoom of panel (a): the magenta dot-dashed curve is the intramolecular component, with visible rotational lines; the dotted and dashed black curves are the elastic and inelastic components which sum up to give, within a normalization factor, the asymmetric GH-based SCM(Q,E) (cyan dashed curve); the green solid curve accounts for a small amount of H2 likely present in the sample. | 7,088.2 | 2023-01-01T00:00:00.000 | [ "Physics" ] |
Magnetic levitation using a stack of high temperature superconducting tape annuli
Stacks of large width superconducting tape can carry persistent currents over similar length scales to bulk superconductors, therefore giving them potential for trapped field magnets and magnetic levitation. 46 mm wide high temperature superconducting tape has previously been cut into square annuli to create a 3.5 T persistent mode magnet. The same tape pieces were used here to form a composite bulk hollow cylinder with an inner bore of 26 mm. Magnetic levitation was achieved by field cooling with a pair of rare-earth magnets. This paper reports the axial levitation force properties of the stack of annuli, showing that the same axial forces expected for a uniform bulk cylinder of infinite Jc can be generated at 20 K. Levitation forces up to 550 N were measured between the rare-earth magnets and stack. Finite element modelling in COMSOL Multiphysics using the H-formulation was also performed including a full critical state model for induced currents, with temperature and field dependent properties as well as the influence of the ferromagnetic substrate which enhances the force. Spark erosion was used for the first time to machine the stack of tapes proving that large stacks can be easily machined to high geometric tolerance. The stack geometry tested is a possible candidate for a rotary superconducting bearing.
Introduction
Stacks of high temperature superconducting (HTS) tapes have proven potential to act as composite superconducting bulks, for either trapped field magnets or as passive components of a magnetic levitation system. An increasing selection of stack sizes and geometries can now be fabricated from commercial tape, as illustrated in figure 1. Experiments on stacks made from standard 12 mm wide commercial tape have shown that high fields can be trapped using both the pulsed field method of magnetization [1] and field cooling [2]. Large width tape is produced by some manufacturers prior to slitting to the smaller standard 12 or 4 mm widths. American Superconductor produce 46 mm wide tape, which was used to create square annuli with a 26 mm hole. These annuli were then stacked to form a composite bulk with a 26 mm bore capable of generating a uniform persistent field when magnetized using field cooling [3,4]. The trapped fields achieved inside the bore were 3.5 T at 4.2 K and 0.65 T at 77 K, giving such a stack of annuli potential to be used in a small scale NMR/MRI device. The work presented here used annuli taken from exactly the same stack, although the outer surface of the stack was machined to achieve a hollow cylinder geometry.
Superconducting levitation offers stable contactless bearings, enabling very low loss [5]. It is standard to use HTS (RE)Ba2Cu3O7−d ((RE)BCO) bulk superconductors, where 'RE' stands for rare earth, for magnetic levitation, but stacks or blocks made from HTS tape have previously been investigated in the context of maglev applications, showing that stable levitation of RE permanent magnets (PMs) is possible [6,7]. There are two types of bearing geometry, cylindrical and planar, as illustrated in [8]. They require the superconducting bulk stator to be in the form of a hollow cylinder or a flat disk respectively; the research reported here targets the cylindrical bearing geometry. Cylindrical bearings have previously been used for large scale flywheel energy storage systems such as the one produced by ATZ GmbH [9]. The superconducting stator part is made of many tessellated (RE)BCO bulk pieces to form an approximate cylinder.
The cylindrical geometry has previously been investigated by using HTS tape in the form of a coil to create a hollow cylinder with a bore of 35 mm [10]. Over 300 N of axial force was measured in this case for a pair of rare-earth magnets. This approach has the advantage of creating composite bulk cylinders of potentially unlimited diameter, but the currents induced in such a cylinder can be complex because there are no directly circulating current paths around the bore. Although more limited in size, the annuli do allow directly circulating current paths around their bore and so should exhibit levitation force behaviour very similar to that of a hollow bulk cylinder. The main advantage of using the stack of commercial HTS tape annuli for superconducting levitation is the predictability and uniformity of the superconducting properties related to available commercial tape.
Annuli pieces and spark erosion machining of stack
The tape used was produced by American Superconductor and had a nominal I c range of 200-350 A cm −1 w at self-field and 77 K. The tape is based on a rolling assisted biaxially textured substrate (RABiTS) process for the 75 μm thick Ni-5W substrate, with the buffer layers deposited by reactive sputtering and the (RE)BCO layer by metal organic deposition of TFA based precursors. The stabilizer consisted of a 3 μm silver layer, with the (RE)BCO layer being approximately 1 μm thick, giving a total tape thickness of approximately 80 μm.
The total stack consisted of 294 layers. The original annuli layers shown in figure 2(a) had their 26 mm hole machined using mechanical boring for previous trapped field tests. For the present experiment, the annuli stack had to fit inside the 50 mm cylindrical bore of the measurement system cryostat, therefore the square outer geometry had to be machined to a circular one. The loose square annuli were clamped in a custom stainless steel rig visible in figure 2(b). A spark erosion machine, or electric discharge machine, was then used to cut the stack outer surface into a circle of 45.5 mm diameter. The method works by applying a high voltage between the sample and a 0.3 mm diameter wire whilst submerged in water with a controlled ion concentration. The sample must be electrically conducting and so cannot be a bulk (RE)BCO superconductor. The sample is moved via a 2-axis stage following a pre-programmed path which defines the cut. The voltage creates highly localised sparks between the wire and sample which melt and erode the sample as the wire moves through it. The cut width is 0.3 mm and the resulting sample dimensions have a tolerance of less than 0.1 mm, making the method highly precise. The great advantage of the method is that no stress is applied to the sample itself, unlike most other machining methods such as lathe turning and abrasive grinding. The parts cut away are not necessarily damaged and could be used for other applications. Laser cutting cannot be used for very thick samples, and water jet cutting leads to high distortion and poor tolerance. It is clear from figure 2(c) that spark erosion results in a surface with high geometric tolerance.
It is perhaps surprising that sparking HTS layers underwater can be a reliable and accurate method of machining. Critical current tests were performed on a control stack made from 12 mm wide SuperOx tape. The central layer of a 5-layer stack had a measured Ic of 458.2 A. This stack was subsequently cut in half along the transport axis. A critical current test was then performed again on one half of the central layer of tape (width 5.80 mm), giving an Ic of 207 A. Assuming constant surface current density, it can be deduced from the sample widths and measured Ic that the cut damage region extends approximately 0.37 mm from the cut surface. This is likely an overestimate, as the Jc of a tape is usually worse at the tape edges compared to the central region; nevertheless, this test allows us to say with some confidence that damage to the HTS layer is limited to no more than 0.4 mm from the cut. Scanning Hall probe magnetometry of the trapped field of single tape layers that have been cut also backs up this result, as the trapped fields indicate current flowing very close to the cut edges. A curious result of this machining method is that most of the resulting layers in the stack are weakly fused together on the cut surface, which can aid stack integrity and handling after cutting.
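The arithmetic behind this estimate, assuming a uniform sheet current density across the tape width and using the values quoted above, is sketched below.

```python
# Estimate of the spark-erosion damage width from the two Ic measurements,
# assuming the current is carried uniformly across the undamaged width.
w_full, Ic_full = 12.0, 458.2      # mm, A   (uncut central layer)
w_half, Ic_half = 5.80, 207.0      # mm, A   (half layer after spark erosion)

j_sheet = Ic_full / w_full         # A/mm, assumed uniform sheet current density
w_active = Ic_half / j_sheet       # width still carrying current after the cut
damage = w_half - w_active         # width degraded near the cut surface
print(f"{damage:.2f} mm")          # ~0.38 mm, consistent with the quoted ~0.37 mm
```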
Final hollow cylinder stack geometry
The overall geometry of the system is shown in figure 3. Due to the stacking volume fraction of the annuli, the effective tape thickness used in the modelling was 83 μm. Two Nd-Fe-B PMs 23 mm in diameter were stacked together and coaxially aligned with the annuli, as shown in figure 3, via a central rod. This typical arrangement produces high magnetic field gradients for the superconductor to maximise stiffness. The centrally aligned position shown in figure 3 was always used for field cooling of the annuli stack before any movement, and so is associated with the z=0 position in later graphs. Movement only occurred for positive z displacement, but the force behaviour is expected to be the same when moving in negative z due to the symmetry of the assembly. The gap between the PMs and annuli was 1.5 mm, which is smaller than the 2.5 mm used for the tape coil experiment in [10] and so was expected to lead to larger levitation forces. The height of the PMs is not ideal given the height of the annuli stack, but shorter Nd-Fe-B magnets were not available in the desired diameter and machining such magnets is not possible. However, the dimensions of the PMs do not prevent high gradients and magnitudes of magnetic flux density being generated. The levitation force system was built around an Oxford Instruments Variox cryostat with indirect cooling of the samples via helium gas between a cold head and the samples. The cold head takes the form of a ring embedded in the wall of the cryostat bore, which is 50 mm in diameter. The helium gas is at 1 bar pressure. As no significant heat is generated during the levitation tests, the cooling power available is more than enough to maintain the sample at stable cryogenic temperatures down to 10 K. The annuli stack sample holder is rigidly mounted to the cryostat, but the PM stack is attached to a G10 epoxy glass rod. The rod is moved up and down using a linear stage driven by a stepper motor. A step size of 0.2 mm was chosen to achieve sufficient resolution. The load cell was rated for up to 1000 N and had an error of ±0.7%. Force is measured when the rod is momentarily stationary between movement steps of the linear stage. This ensures that no friction forces resulting from guides and seals influence the force measurement. Further details of the levitation force system can be found elsewhere [11].
Force hysteresis
After cooling of the tape annuli to the desired operating temperature in the position shown in figure 3, the PM stack was displaced upward in the positive z direction by 40 mm (extraction) and then back down 40 mm (insertion) to the starting position, all at a speed of 1 mm s−1, but with momentary pauses every 0.2 mm for load cell measurement. This resulted in the hysteresis curves shown in figure 4 for the three temperatures tested. The levitation force is defined in terms of the positive z direction in figure 3, which means that a negative levitation force is attractive. It is clear that the temperature has a strong influence on the hysteresis, as expected, with the lowest temperatures exhibiting the smallest hysteresis due to high Jc. Conversely, 77.4 K shows significant irreversible behaviour, suggesting large-scale penetration of the flux originating from the PM inside the annuli stack. The peak force for the 20 K curve is 549 N and is the same (within measurement error) as the theoretical maximum predicted by the perfectly trapped flux (PTF) model (red dotted line), which considers infinite Jc. It is surprising that the experimental curve can be so coincident with the PTF curve, as the PTF curve usually represents an upper asymptote. However, the more representative modelling detailed in section 4, which takes into account the ferromagnetic properties of the tape substrate, predicts higher levitation forces, which explains why the experimental results can match or even exceed a PTF model without ferromagnetic properties. An 'ideal bulk' superconductor is not considered to be ferromagnetic, so the comparison here is still valid for evaluating the annuli stack performance.
For stable levitation, a negative gradient on the hysteresis curve is required. This means that the present system would be operated at below 10 mm displacement if supporting a static axial load such as a flywheel. This gives enough margin for stability if displaced further. Force behaviour for displacements greater than, say, 10 mm for this system is not of much practical significance.
It is worth commenting on how the force curves compare to similar experiments in which a composite bulk hollow cylinder was made of HTS pancake coils [10]. For all temperatures, the force for the coil sample was lower, but this is largely due to the weaker magnetic fields being used in that case. The highest force measured for the coil experiment was 317 N at 20 K. The gap between PMs and coils was 2.5 mm in the previous study compared to 1.5 mm for the present study, which largely explains the difference. The key difference between the two experiments relates more to the shape of the levitation force curves. The previous coil tests produced curves with lower stiffness than expected from modelling and compared to the annuli stack. This is due to the more complex currents set up inside the coils compared to the simple circulating currents induced in the annuli stack. A more quantitative evaluation of the levitation force in both cases should be made by comparison to modelling results, which offer a prediction of the maximum performance expected for the exact field generated by the PMs used and the exact component dimensions.
Modelling of superconducting levitation force
Two different FEM techniques were used to simulate and understand the levitation force experiments. The PTF model as described in [12,13] estimates levitation forces involving superconducting domains by perfectly preserving the magnetic flux density inside the domain when there is movement. This is physically equivalent to an infinite J c and therefore induced surface currents. It was achieved here by preserving the magnetic vector potential in the superconducting domain as in [14,15] and is a simple and fast computation tool using a time independent solver. Due to the limited fields produced by rare earth PMs, it is often a good approximation, but breaks down if the J c is not high enough.
The critical state model [16], on the other hand, simulates real induced currents within the superconducting domain and so is a more accurate tool and is necessary to determine current flow paths, however computation times are considerably longer than the PTF model.
Modelling parameters for the critical state model
The H-formulation for magnetic fields was used in COMSOL Multiphysics 5.0, as in the two previous works [17] and [10]. The framework used an E-J power law to simulate the critical state, Eφ = E0 (Jφ / Jc)^n (equation (1)), where Eφ and Jφ are the azimuthal electric field and current density respectively.
The Kim model [18] with temperature dependent parameters was used to describe the dependence of the critical current density (equivalent to the engineering critical current density Je for the experiment) on field, as expressed by equation (2). A full description of the parameters used is given in table 1. This equation and the parameters listed in table 1 are the same as those used for the previous modelling of the levitation force for HTS pancake coils [10]. The motivation behind equation (2) is to use a simple mathematical framework that can easily fit typical measured Je values for commercial superconducting tape over 10-77 K and fields of 0-4 T, as these are the ranges of interest for superconducting bearings. The temperature dependent lift factor is defined as the ratio of the tape critical current Ic at temperature T and self-field (SF) to Ic(77.4 K, SF).
The L0 factor and the B0 Kim law parameter are fitted to typical HTS tape data such as [19] and then approximated with the linear temperature dependence given in table 1 to give B0(T) and L0(T). Jc(θ) anisotropy is ignored in our model, as it is not assumed to significantly affect the force, based on preliminary test models. An n-value of 9 was used. Although this may seem low, using a higher n-value showed no change in the force values or induced current pattern and only increased computational time and instability; therefore 9 was sufficient for the current study.
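A hedged sketch of these constitutive laws is given below; the E-J power law and Kim-type field dependence follow the description above, but the linear L0(T) and B0(T) coefficients and the superconducting layer thickness are placeholders, since the corresponding entries of table 1 are not fully reproduced in the text.

```python
import numpy as np

E0, n = 1e-4, 9                      # V/m and n-value (from the text / table 1)
Ic0_per_cm = 220.0                   # A per cm width at 77.4 K, self-field
t_sc = 1e-6                          # m, assumed (RE)BCO layer thickness
Jc0 = Ic0_per_cm / 1e-2 / t_sc       # A/m^2, rough layer-scale estimate

def L0(T): return 1.0 + 0.05 * (77.4 - T)     # placeholder linear lift factor
def B0(T): return 0.2 + 0.01 * (77.4 - T)     # placeholder Kim field, tesla

def Jc(B, T):
    """Kim-type field and temperature dependence (coefficients are placeholders)."""
    return Jc0 * L0(T) / (1.0 + np.abs(B) / B0(T))

def E_field(J, B, T):
    """E-J power law, E = E0 * (J / Jc)^n, with the sign of J carried through."""
    return E0 * np.sign(J) * (np.abs(J) / Jc(B, T)) ** n

print(f"Jc(0.5 T, 20 K) ~ {Jc(0.5, 20):.2e} A/m^2")
print(f"E at J = 1.1*Jc: {E_field(1.1 * Jc(0.5, 20), 0.5, 20):.2e} V/m")
```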
The movement of the PMs was implemented by modelling the PMs as a thin layer of current density on the circumferential surface, based on the theoretical equivalence of remanent magnetization and surface current density for a PM: Js = Brem/μ0 (in A m−1). The thin-layer currents approximating the ideal surface current density Js were then moved along the z direction by defining them with a time- and space-dependent current density J(z, r) = J0(vt, r) (in A m−2), where v is the speed at which the domain moves, equal to 1 mm s−1 for both experiment and model.
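The surface-current equivalence used for the moving magnet can be evaluated directly; the remanence value below is an assumption for illustration.

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7          # T m / A
Brem = 1.2                      # T, assumed Nd-Fe-B remanence (not from the text)
Js = Brem / mu0                 # A/m, equivalent sheet current on the PM surface
print(f"Js = {Js:.3e} A/m")     # ~9.5e5 A/m

# Moving the magnet at v = 1 mm/s then amounts to shifting this sheet-current
# profile along z as a function of time: J(z, t) = Js_profile(z - v * t).
```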
Ferromagnetic substrate consideration
The RABiTS substrate of the HTS tape is ferromagnetic, an effect which must be considered in the modelling. Ferromagnetic permeability data for typical RABiTS substrates at 77 K, as reported in [20], were used to create an interpolated B-H, or permeability μr(B), curve. The same data were used for modelling AC losses in HTS tape [21]. In order to determine the effect of the ferromagnetic substrate alone, without superconductivity, the PM stack was moved in and out of the annuli stack at 100 K. The results in figure 5 show that forces over 40 N are generated, which means the ferromagnetic effects cannot be ignored when trying to model the levitation force curves. There was no hysteresis, which shows that the HTS layer was normal and that the substrate acts as a soft ferromagnetic material. Simple FEM modelling was done in the AC/DC module to determine the force between the PMs and the magnetic annuli using the interpolated data mentioned previously. The permeability of the substrate was multiplied by 0.90 to account for the fact that 10% of the stack volume is not substrate. A very close match was obtained between the modelling and experimental force data, suggesting the validity of using these data to model the combined superconducting-ferromagnetic case.
PTF model results
The 2D axially symmetric PTF model was applied to the same geometry as the experimental system shown in figure 3, resulting in the force-displacement curves shown in figure 6. Two models were solved, with and without the ferromagnetic substrate. The effect of the ferromagnetic substrate was approximated by using an isotropic and uniform μ_r(B) relation as detailed in the previous section. Because the PTF model is equivalent to having infinite J_c, there is no hysteresis in either of the curves. It is clear that the effect of the ferromagnetic substrate on levitation force is significant. This can be explained qualitatively by considering the enhanced field that is trapped inside the superconducting stack when it is field cooled. Similar enhancement can be achieved by fixing PMs in various positions onto the superconducting bulk itself to increase the trapped field [14], but it is clearly more elegant and effective for the superconducting bulk itself to be ferromagnetic, an effect that can only be achieved by using stacks of HTS tape. This allows the experimental axial levitation force to match that expected for an ideal bulk having infinite J_c but no ferromagnetism. As in the case of previous experiments with bulk superconducting cylinders [14], the PTF model curve approximately gives the maximum stiffness for initial displacement and also the maximum force. This implies that forces over 600 N may be possible using higher-J_c tape. The critical state modelling in the next section shows that the measured force is lower than the PTF ferromagnetic model due to the depth over which currents are induced in the composite bulk cylinder.

Table 1. Parameters used in the modelling.
Parameter | Description | Value
E_0 | Electric field constant in equation (1) | 1×10^-4 V m^-1
I_c0 = I_c(77 K, SF) | Tape critical current at 77.4 K and self-field | 220 A cm^-1 w
L_0(T) | Lift factor for tape I_c defined by equation (…) | …
Critical state modelling results
The critical state model is excellent for showing the magnitudes and depths over which currents should be induced in a superconducting cylinder at different temperatures when moving a PM stack. A superconducting cylinder was modelled in a 2D axisymmetric geometry with the same dimensions as the annuli stack shown in figure 3. Figure 7 shows the circulating current density induced in the cross-section of the cylinder wall at the three different temperatures modelled, for an example displacement of 7 mm.
One central region of current dominates (red/yellow), but there are also two smaller, oppositely flowing regions of current (blue) at the top and bottom of the stack. These regions are related to the field poles of the PM stack. The central expelled flux from the PM stack is axially symmetric and so can be considered as a single pole, in addition to the poles on the top and bottom of the PM stack. The number of current regions induced in the cylinder should therefore equal the number of poles of the PM stack (three in this case). As the two ends of the PM stack are outside the annuli, the current regions associated with these end poles are relatively small. This is unlike the case of the HTS coils previously tested [10].
The higher the temperature, the lower the J_c, and so the greater the depth over which current is induced, which leads to larger hysteresis. The 77.4 K case therefore explains why significant hysteresis can be seen at this temperature in the experimental force curves. At 45 K and below, it is clear that the high J_c leads to more effective shielding of the stack interior, confining the induced current to a thinner layer near the inner surface. In these cases, most of the stack volume is not active in the magnetic levitation and therefore, for this application, the wall thickness of the annuli stack could be halved without any real change in the levitation force. In figure 4, there is significant departure from the PTF curve at very large displacements such as 30 mm for all temperatures. This is because larger displacements continue to induce currents deeper into the cylinder than shown in figure 7, and particularly from the top of the cylinder, which only has a thin layer of current visible at 7 mm displacement. The greater the depth over which the currents are induced, the greater the departure from the PTF model. For 30 mm displacement at 77 K, the critical state model shows that almost all the cross-sectional area in the top half of the cylinder is saturated with current, which explains the unusually flat force behaviour at these displacements seen in figure 4.

Figure 8 summarises the critical state levitation force results. The larger flux penetration and deeper induced currents at 77 K correspond to significantly lower levitation force. The most important unknown parameter for the critical state modelling was I_c0. As this is not exactly specified for the wide tape used, the approach taken was to choose the value that gave the closest match to the 77 K experimental force curve. This was found to be 220 A cm^-1 w to the nearest 10 A, and so this value was used in the modelling. The lift factor in table 1 then resulted in higher scaled critical current values for the 45 and 20 K models. Given the unknown I_c0, the purpose of the critical state modelling is to qualitatively reproduce the general features of the force curves and reveal the nature of the current flow, as in figure 7.

There are, however, important comparisons to be made between the modelling and experiment. Although I_c0 was chosen so that the 77 K model curve matched the magnitude of the experimental curve, it is noteworthy that the shapes of the two curves match very well, which suggests that the model accounts well for the superconducting and magnetic behaviour at 77 K. The force magnitude of the 20 K modelling curve matches experiment reasonably well, but the 45 K curve shows a noticeable difference. There are three potential causes for these differences. Firstly, the lift factor and Kim-law parameters used for the model are estimates based on typical HTS tape rather than the exact HTS tape product used. Secondly, the field-dependent permeability of the substrate is temperature dependent; data for 77 K was used [20], but this may introduce errors at lower temperatures. Thirdly, the effect of anisotropic permeability due to gaps between the substrates has not been considered, as the H-formulation implementation in the AC/DC module of COMSOL does not allow field-dependent anisotropic permeability. Despite these factors, the modelling is sufficiently accurate to explain the general force behaviour of the system.
In particular, it is the critical state model for 20 K which confirms that a composite bulk with finite J_c and ferromagnetic properties can give rise to forces between those predicted by the ideal PTF models with and without ferromagnetism.
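The choice of I_c0 described above amounts to a one-parameter sweep against the 77 K experimental force curve. A schematic of that procedure is sketched below; run_model_force and measured_force are hypothetical stand-ins for a model run and the measured data, and the least-squares criterion is an assumption, since the text only states that the closest match was chosen.

```python
import numpy as np

def choose_Ic0(candidates, run_model_force, measured_force):
    """Return the candidate I_c0 (A per cm-width) whose modelled force curve is
    closest, in a least-squares sense, to the experimental 77 K curve.

    run_model_force(Ic0) must return forces on the same displacement grid as
    measured_force; both are hypothetical stand-ins for the COMSOL model and data.
    """
    errors = [np.sum((run_model_force(ic0) - measured_force) ** 2) for ic0 in candidates]
    return candidates[int(np.argmin(errors))]

# Candidates spaced by 10 A, matching the "nearest 10 A" resolution quoted above
candidate_Ic0 = np.arange(180, 261, 10)
```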
Summary
Wide HTS tape can be machined into annuli and used for magnetic levitation suited to a cylindrical rotary bearing geometry. Stable superconducting levitation for an HTS annuli stack made from 46 mm wide tape was demonstrated, with over 500 N of force measured. Higher forces can be expected in future for tape with a higher I_c. Interestingly, the effect of the ferromagnetic substrate was to significantly enhance the stable axial levitation force to values higher than possible with a uniform bulk of infinite J_c. Spark erosion has been shown to be a practical and accurate method of machining stacks of HTS tape, which is only possible because of the largely metallic, and therefore normally conducting, composition of the stacks.
Both PTF models and full critical state models explained the force behaviour, the induced currents and the effect of the ferromagnetic substrate. The 2D axisymmetric critical state model is well suited to modelling the annuli stack due to its symmetry and because all the induced currents are purely azimuthal. Most of the annuli stack cross-section had no currents induced in it, so a much smaller cylinder wall thickness could be used in an optimised system, which would save material. Previous experiments have shown high trapped field in a stack of the same HTS tape annuli using field-cooling magnetization. The current authors are conducting pulsed field magnetization tests on the same stack reported in this paper to prove its suitability as a practical trapped field magnet. | 6,311.2 | 2017-02-01T00:00:00.000 | ["Physics"] |
RAB GTPases and SNAREs at the trans-Golgi network in plants
Membrane traffic is a fundamental cellular system for exchanging proteins and membrane lipids among single membrane-bound organelles, or between an organelle and the plasma membrane, in order to maintain the integrity of the endomembrane system. RAB GTPases and SNARE proteins, the key regulators of membrane traffic, are broadly conserved among eukaryotic species. However, genome-wide analyses showed that the organization of RABs and SNAREs that regulate the post-Golgi transport pathways is greatly diversified in plants compared to other model eukaryotes. Furthermore, some organelles acquired unique properties in plant lineages. As in other eukaryotic systems, the trans-Golgi network (TGN) of plants coordinates secretion and vacuolar transport; uniquely in plants, however, it also acts as a platform for endocytic transport and recycling. In this review, we focus on RAB GTPases and SNAREs that function at the TGN, and summarize how these regulators act to control different transport pathways at the plant TGN. We also highlight the current knowledge of the roles of RABs and SNAREs in the regulation of plant development and plant responses to environmental stimuli.
Introduction
Eukaryotic cells contain membrane-bound organelles that carry characteristic sets of proteins and membrane lipids, and correct placement of these specific components is critical to ensure the function of each organelle. The single membrane-bound organelles that constitute the endomembrane system [i.e., the endoplasmic reticulum (ER), the Golgi apparatus, the trans-Golgi network (TGN), endosomes/multivesicular endosomes (MVEs), lysosomes/vacuoles and the plasma membrane (PM)] exchange proteins and membrane lipids among each other via a process called membrane traffic. Membrane traffic exchanges molecules between donor and target compartments by using membrane-bound intermediates (Fig. 1). This process was initially called "vesicular transport" because the transport intermediates were thought to be vesicular in shape; however, accumulating evidence has shown that the transport intermediates can also be tubular or irregularly shaped. Therefore, in this review, we refer to this process as "membrane traffic" rather than "vesicular transport", and call the intermediate structures "transport intermediates".
The process of membrane traffic is divided into four sequential steps. The first step is the "budding" step, in which the cargo at the donor organelle is sequestered into a specific domain on the donor membrane (also referred to as a "zone" of a membrane) and packaged into the transport intermediate. The second step is the "transport" step, in which the transport intermediate is delivered to the target compartment by motor proteins and the cytoskeleton. In the third, "tethering", step, the transport intermediate docks onto the target membrane; the transport intermediate subsequently fuses with the target membrane, thereby releasing its content into the target organelle during the last, "fusion", step (Fig. 1). In addition, membrane lipids and proteins can travel to organelles with different identities as a result of a process called organelle maturation. A well-established example of such a case is cisternal maturation of the Golgi apparatus. During cisternal maturation, the secretory cargo molecules travel with the Golgi cisternae without the need to be enveloped into transport intermediates, and as the maturation process progresses, the cisterna itself changes its identity from cis- to trans-Golgi (Losev et al. 2006; Matsuura-Tokita et al. 2006). Furthermore, direct contacts between organelles can also transfer membrane lipids and proteins from one compartment to the other. One such example can be observed in yeasts, where the secretory cargo is conveyed from the ER to the cis-Golgi via a repeated approach (termed the "hug-and-kiss" action) of the cis-Golgi to the ER (Kurokawa et al. 2014).
The TGN, the tubulovesicular compartment adjacent to the trans-side of the Golgi apparatus, acts as an important hub that coordinates different trafficking routes to various target membranes in post-Golgi transport pathways. In eukaryotic cells, the TGN receives secretory and vacuolar cargo from the trans-cisternae of the Golgi apparatus, and sorts and sends the cargo to the correct destination. Uniquely, in addition to this conserved role, the plant TGN is known to receive endocytic cargo from the PM and recycle selected cargo back to the PM (Fig. 2; Chow et al. 2008; Dettmer et al. 2006; Kang et al. 2011; Lam et al. 2007; Uemura et al. 2012a; Viotti et al. 2010). In animal cells, the endocytic cargo is received by a compartment separate from the TGN called the early endosome (EE), whereas the plant TGN is often designated TGN/EE, as these two compartments share the same function. Interestingly, in comparison to the animal TGN, which is tightly associated with the Golgi apparatus, the plant TGN can take a Golgi-independent state, in which the TGN is located further away from the Golgi apparatus and behaves functionally independently from it (Kang et al. 2011; Uemura et al. 2014; Viotti et al. 2010). Recent findings suggest that this unique character of the plant TGN is strongly linked to the plant-specific organization of membrane traffic regulators. In this mini-review, we focus on two important regulators of membrane traffic, RAB GTPases and SNAREs, which are highly conserved among eukaryotic species but show unique features in plants, and highlight their functions at the TGN. We also summarize the involvement of these regulators in plant development and environmental stress responses.
RAB GTPases and their regulators at the TGN
The RAB GTPases form a family belonging to the Ras GTPase superfamily, whose members have the ability to cycle between GTP-bound "active" and GDP-bound "inactive" states. When a RAB is in the active state, it interacts with various effector proteins, which include tethering complexes that dock transport intermediates onto the correct target membrane (Grosshans et al. 2006; Zerial et al. 2001). Activation of RAB GTPases is regulated by guanine nucleotide exchange factors (GEFs) that catalyze the exchange of GDP for GTP. Deactivation is regulated by GTPase-activating proteins (GAPs), which promote the GTP hydrolysis activity of small GTPases, thereby accelerating the hydrolysis of the GTP bound to the RAB GTPase to GDP.
RAB genes are broadly conserved among eukaryotic species; Arabidopsis has 57 RAB genes that are classified into 8 groups (RAB1/RABD, RAB5/RABF, RAB6/RABH, RAB7/RABG, RAB8/RABE, RAB11/RABA, RAB2/RABB and RAB18/RABC), each of which is localized to distinctive organelles and marks different membranes of the endomembrane system (Table 1; Pereira-Leal et al. 2001; Rutherford et al. 2002; Ueda et al. 2002; Woollard et al. 2008). Interestingly, the RAB11 group, which is known to regulate late secretory events in eukaryotic systems, is enormously diversified in plants. While humans and budding yeast have three (out of 66 RABs encoded in their genomes) and two (out of 11) RAB11s, respectively (Stenmark et al. 2001), Arabidopsis and rice have 26 (out of 57) and 15 (out of 39) RAB11/RABA members, respectively (Rutherford et al. 2002; Saito and Ueda 2009). In budding yeast, Ypt31/32 (the Rab11 orthologs) regulates the secretory pathway by promoting vesicle formation at the TGN and tethering of transport vesicles to the PM (Thomas et al. 2016). Mammalian Rab11s are shown to localize to the TGN, recycling endosomes (the endosomal compartments specialized for recycling proteins to the PM) and sorting endosomes (the early endosomal compartments in mammalian cells that receive endocytic cargo from the PM or the TGN, and sort and deliver the cargo to recycling endosomes, late endosomes/MVEs, the PM or the TGN), and regulate traffic of cargo from these compartments to the PM (Campa et al. 2017; Hsu et al. 2012; Naslavsky et al. 2018).

Fig. 2 The plant TGN acts as a hub for secretion, endocytosis, recycling and vacuolar transport. Cargo proteins synthesized at the endoplasmic reticulum are transported to the Golgi apparatus, and then to the TGN; this transport process is called early secretion. At the TGN, the cargo is sorted and delivered to different destinations. The Golgi-independent TGN, which is derived from the Golgi-associated TGN but located at a distance from the Golgi apparatus, is a specialized compartment for secretion (also referred to as late secretion). Many of the trafficking components regulating secretion to the PM are utilized to deliver cargo to the cell plate. A recent study indicated that a single Golgi-associated TGN bears a "secretory-trafficking zone", at which components of the secretory machinery accumulate, and a "vacuolar trafficking zone", at which components regulating vacuolar transport accumulate. Uniquely to plants, the TGN is the first compartment that the endocytic cargo reaches; thus the plant TGN acts as an early endosome that serves as a platform for endocytosis and recycling. TGN, trans-Golgi network
The uniqueness of plant TGN functions may stem from this diversification of RAB11 members in plant lineages. Plant RAB11/RABA members are further classified into six subgroups, RABA1 to RABA6, and most of them are shown to localize to the TGN or TGN-related structures. RABA1b, a member of the largest subgroup, RABA1, is demonstrated to localize to a distinctive region on the TGN (Asaoka et al. 2013a). RABA1b also colocalizes with VAMP721, an R-SNARE (discussed later) that functions in the secretory pathway (Asaoka et al. 2013a), suggesting that RABA1b regulates the secretory pathway from the TGN to the PM. Also, endocytic recycling of PIN1 from the TGN to the PM is impaired by the raba1b mutation (Feraru et al. 2012), indicating that RABA1 is involved in recycling of cargo from the TGN to the PM. Interestingly, the raba1a raba1b raba1c raba1d quadruple mutant develops normally under standard laboratory growth conditions; under salinity stress, however, the quadruple mutant is stunted (Asaoka et al. 2013a, b). This suggests that RABA1-regulated secretion from the TGN is required for abiotic stress responses, such as salinity stress tolerance.
RABA2 and RABA3 also mark a distinctive domain on the TGN in non-dividing cells, and accumulate on the extending edges of the cell plates in dividing root cells (Chow et al. 2008). Lines of evidence suggest that plants use secretory and recycling machineries to deliver cell wall materials from the TGN to the newly forming cell plate during cytokinesis, and thus the traffic pathway from the TGN to the cell plate has been termed "modified exocytosis" (Kanazawa et al. 2017). Arabidopsis plants overexpressing the dominant negative mutant form of RABA2A are defective in cytokinesis (Chow et al. 2008); therefore, it is likely that RABA2 and RABA3 play major roles in modified exocytosis in dividing cells. RABA1b is also found on cell plates in dividing cells (Asaoka et al. 2013a); however, it was shown that RABA2a and RABA1e behaved differently when the cells were treated with endosidin 7 (ES7), an inhibitor of callose biosynthesis during cell plate formation (Davis et al. 2016; Park et al. 2014). Another study shows that raba1, raba2 and raba4 mutations affect different cell wall components (Lunn et al. 2013), suggesting that RABA2, RABA1 and RABA4 regulate different traffic pathways or transport different cargo during cell plate formation. A recent study indicated that RABA2 and RABA3, but not other RABA members, bind directly to SYP121 (a PM-localizing Qa-SNARE) and VAMP721 (a PM- and TGN-localizing R-SNARE, discussed later) to regulate exocytosis (Pang et al. 2022). This also suggests that RABA members are parts of different molecular machineries that coordinate secretion from the TGN. A member of the RABA4 group, RABA4b, is ubiquitously expressed in Arabidopsis; confocal laser scanning microscopy (CLSM) showed that RABA4b accumulates in the tips of growing root hairs (Preuss et al. 2004). RABA4b interacts with phosphatidylinositol 4-kinase β1 (PI4Kβ1) and PI4Kβ2 when it is in the active form (i.e., PI4Kβ1/2 are downstream effectors of RABA4b). Mutations in pi4kβ1 and pi4kβ2 resulted in smaller plants with abnormal root hair shapes (Preuss et al. 2006), suggesting that RABA4 effectors regulate plant growth and root hair integrity downstream of RABA4. Likewise, RABA4d, the pollen-specific RABA4 member, accumulates in the pollen tube tips, and the raba4d mutation affected the shapes of pollen tubes and reduced their growth rates (Szumlanski et al. 2009). Tip growth involves massive secretion of proteins and polysaccharides exclusively to the tips of the growing cells (reviewed in Campanoni et al. 2007). Consistently, immunoelectron microscopy indicated that RABA4 and PI4Kβ1/β2 predominantly localize to the secretory vesicle-forming region of the TGN (Kang et al. 2011; Preuss et al. 2006), suggesting that RABA4 regulates secretion from the TGN. These data together suggest that RABA4 members are involved in polarized secretion during tip growth. Members of the RABA1 group are also shown to localize to the tips of root hairs or pollen tubes; for example, RABA1b and RABA1e accumulate in root hair tips, and RABA1f marks the tips of pollen tubes (Asaoka et al. 2013a). Do RABA1 and RABA4 regulate the same trafficking events?
It has been shown that dominant negative mutant proteins of RABA1 and RABA4 exert different effects on the endocytosis of the PM-localized receptor FLS2 upon ligand binding in a tobacco expression system: while overexpression of the dominant negative form of RABA1 impaired the correct localization of newly synthesized FLS2 to the PM, overexpression of the dominant negative form of RABA4c accelerated endocytic transport of FLS2 to MVEs (Choi et al. 2013). This suggests that RABA1 and RABA4 regulate different trafficking steps at the TGN, although further studies are required.
A member of the RABA5 group, RABA5c, is shown to accumulate on the growing edges of the cell plate during cytokinesis. In non-dividing cells, the major population of RABA5c localizes to large vesicles near the PM at the geometric edges of the cells, and a minor population colocalizes with the TGN marker (Kirchhelle et al. 2016). Overexpression of the dominant negative form of RABA5c resulted in disruption of lateral root shape owing to perturbation of cell geometry through increased anisotropy of cortical microtubules and cellulose microfibrils (Kirchhelle et al. 2016, 2019). Taken together, it is proposed that RABA5 is responsible for regulating a specific traffic pathway that sends materials from the TGN to the geometric edges of non-dividing cells to alter the mechanical properties of cell edges. In a transient expression system using tobacco leaf epidermal cells, overexpression of the dominant negative form of RABA6a interfered with the traffic of endocytic cargo to the MVEs (Choi et al. 2013); thus RABA6 is suggested to regulate the endocytic pathway. As far as we have surveyed, RABA6 is present only sporadically in plant species; it is present in Arabidopsis and Amborella trichopoda, but absent in Oryza sativa, Selaginella moellendorffii, Physcomitrella patens and Marchantia polymorpha. This implies that RABA6 may have specialized roles in RABA6-possessing plants, though further study is needed to understand the exact roles of this subgroup. RAB GTPases are activated by specific GEFs. The transport protein particle (TRAPP) family is a family of multi-subunit tethering complexes that activate specific RAB GTPases by acting as GEFs and tether transport intermediates to the correct target membrane (reviewed in Ravikumar et al. 2017; Vukasinovic et al. 2016). The TRAPP family consists of four members (TRAPPI to IV), and each member is involved in different trafficking events in yeasts: TRAPPI is known to activate Ypt1 (the yeast counterpart of plant RAB1/RABD) and mediate traffic from the ER to the Golgi apparatus, while TRAPPII is a GEF for Ypt31/32p (the RAB11 ortholog in yeasts) and regulates post-Golgi traffic at the TGN (reviewed in Kim et al. 2016). TRAPPIII is also capable of activating Ypt1, more efficiently than TRAPPI (Thomas et al. 2018), suggesting that TRAPPIII plays a major role in regulating ER-to-Golgi trafficking. It has also been reported that specific components of TRAPPIII and TRAPPIV, namely Trs85 and Trs33, respectively, are involved in regulating Ypt1-mediated autophagy in yeasts (Lynch-Day et al. 2010).
Proteomic analysis indicated that components of the TRAPPI, II and III family members are present in the purified TGN fraction (Drakakaki et al. 2012). A mutation in a TRAPPI component caused a mild defect in cytokinesis, whereas mutations in TRAPPII components caused more severe phenotypes during cytokinesis, leading to defects in embryogenesis or seedling lethality (Qi et al. 2011; Thellmann et al. 2010). Also, mutations in TRAPPII components caused abnormal accumulation of secretory markers inside root cells (Qi et al. 2011). TRAPPII colocalizes with RABA1c, and mutation of a TRAPPII component causes partial diffusion of RABA1c to the cytosol (Qi et al. 2011); therefore, it is suggested that TRAPPII functions upstream of RABA1c to regulate secretion from the TGN. A component of TRAPPII was also identified in a forward genetic screen aimed at isolating mutants defective in leaf venation (Naramoto et al. 2014). VAN4 (VASCULAR NETWORK DEFECTIVE 4) encodes TRS120, a specific component of the TRAPPII complex, and the van4 mutant shows defects in recycling of PIN proteins (Naramoto et al. 2014). VAN4/TRS120 colocalized with the TGN marker and RABA1c (Naramoto et al. 2014), further supporting that TRAPPII functions together with RABA1 to regulate recycling and secretory processes at the TGN. Proteomic analysis indicated that TRS120 co-precipitated with the dominant negative form of RABA2a, and the trs120 mutation interfered with the accumulation of RABA2a on the cell plate (Kalde et al. 2019), implying that TRAPPII also acts as a GEF for RABA2 members.
Interestingly, proteomic analysis of the purified TGN followed by live cell imaging showed that RABD/RAB1 partially colocalizes with TGN markers in plant cells (Drakakaki et al. 2012; Pinheiro et al. 2009). Mammalian and yeast homologs of RABD (Rab1 and Ypt1, respectively) participate in ER-to-Golgi trafficking, and plant RABD members, too, are reported to take part in this early secretory event (Batoko et al. 2000). Arabidopsis YIP1 was also identified in the proteomic analysis of the purified TGN (Drakakaki et al. 2012). Yeast Yip1p (YPT/RAB GTPase Interacting Protein 1) is known to interact with the inactive forms of Ypt1/RAB1 and Ypt31/RAB11, and to recruit Ypt1p to the Golgi membrane (Yang et al. 1998). In yeast, Ypt1, Ypt6 (the RAB6/RABH ortholog), Ypt31/32p and Sec4p (the RAB8/RABE ortholog), as well as their GEFs and GAPs, act in a cascade to ensure directional transport of secretory cargo from the ER-Golgi apparatus to the TGN (called "early secretion"), and then from the TGN to the PM (called "late secretion") (Ortiz et al. 2002; Rivera-Molina et al. 2009; Suda et al. 2013; Wang et al. 2002). Several studies show that plant RAB6/RABH localizes mainly to the Golgi apparatus, but a subpopulation of RABH is found on the TGN (Johansen et al. 2009; Renna et al. 2018). This suggests that RABH may function during late secretion. Taken together, the presence of plant RABD, YIP1 and RABH at the TGN implies that a RAB GTPase cascade regulates early and late secretion in the plant system.

RABF/RAB5 is the key regulator of endosomal trafficking. RAB5 is conserved broadly in eukaryotic organisms; however, land plants and some green algae species possess a plant-unique RAB5, called ARA6/RABF1, in addition to the conventional RAB5s (RABF2A/RHA1 and RABF2B/ARA7 in Arabidopsis) (Ebine et al. 2011; Hoepflinger et al. 2013; Ueda et al. 2001, 2004). Both canonical RAB5s and ARA6 localize predominantly to the limiting membrane of MVEs (Haas et al. 2007; Scheuring et al. 2011); however, quantitative analysis indicated that canonical RAB5s and ARA6 are located in close proximity to the TGN marked by RABA1b, clathrin or SYNTAXIN OF PLANT 43 (SYP43; discussed later) (Asaoka et al. 2013a; Ito et al. 2012, 2016). Electron microscopy indicated that MVEs are often found in the vicinity of the TGN (Kang et al. 2011), and immunoelectron microscopy showed that an anti-RABF2B antibody labels both MVEs and the TGN. Live cell imaging also showed that ARA7 marks a subdomain of the TGN (Singh et al. 2014), and ARA6 overlaps with the TGN marker when expressed under a strong promoter in tobacco leaf cells (Bottanelli et al. 2012). In addition, the plant RAB5 activator, VPS9A (VACUOLAR SORTING PROTEIN 9A; Goh et al. 2007), is suggested to localize to the MVEs as well as to the TGN (Sunada et al. 2016).
What is the implication of the localization of RAB5s on the TGN? Ultrastructural studies and live-cell imaging showed that the MVE number was reduced when TGN function was impaired by concanamycin A (a V-ATPase inhibitor) (Scheuring et al. 2011), suggesting that MVE biogenesis is linked to TGN function. In addition, an ultrastructural study also detected multivesiculated TGN-like structures, and live cell imaging showed that a hybrid compartment bearing both TGN and MVE markers is formed when a dominant negative mutant protein of one of the ESCRT-III components is overexpressed in protoplasts (Scheuring et al. 2011). Another study also shows that PM-localizing FLS2 is transiently sequestered in the TGN-MVE hybrid compartment when endocytosis and vacuolar transport of FLS2 are triggered by ligand binding in tobacco cells (Choi et al. 2013), implying that endocytic cargo passes through the TGN-MVE hybrid structures en route to the vacuole. Based on these data, it is commonly thought that in the plant system, MVEs are formed by multivesiculation and organelle maturation of the TGN. It is not clear whether plant RAB5s have active roles in endosomal maturation of the TGN; nevertheless, it is likely that RAB5 marks the early phases of endosomal maturation.
SNARE proteins at the TGN
SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) proteins regulate fusion between a transport intermediate and its target membrane. SNAREs are grouped into Q- and R-SNAREs, which localize to the target membrane and the vesicular membrane (i.e., the membrane of the transport intermediate), respectively (Fasshauer et al. 1998). Q-SNAREs are further classified into Qa-, Qb- and Qc-SNAREs based on their amino acid sequences (Fasshauer et al. 1998). After transport intermediates are docked onto the target organelle, the R-SNARE on the transport intermediate interacts with the cognate Qabc-SNAREs on the target membrane to form a tight helical bundle called the trans-SNARE complex. Subsequently, the target and vesicular membranes are brought into close proximity, removing the water molecules between the juxtaposed membrane leaflets, and fusion then occurs between the membranes. In Arabidopsis, at least 66 Q- and R-SNARE proteins are encoded in the genome (Saito and Ueda 2009). A systematic localization analysis using Arabidopsis cultured cells has been carried out to map the subcellular localization of plant SNARE proteins. The results indicated, firstly, that most SNAREs are ubiquitously expressed in plants and, secondly, that SNAREs mark distinctive organelles in plant cells.
Immuno-electron microscopy and systematic localization analysis indicated that the plant TGN bears three Qa-SNAREs (SYNTAXIN OF PLANT (SYP) 41/42/43), three Qb-SNAREs (VPS TEN INTERACTING (VTI) 11/12/13) and one Qc-SNARE (SYP61) (Bassham et al. 2000; Kang et al. 2011; Sanderfoot et al. 2001; Uemura et al. 2004). Mutations in the SYP4 group impact secretory and vacuolar transport, as well as the morphology of the Golgi apparatus and the TGN (Uemura et al. 2012a). Interestingly, uptake of the lipophilic dye FM4-64 is unaffected by syp4 mutations (Uemura et al. 2012a), suggesting that constitutive endocytosis of membrane materials is not impaired by syp4. The syp42 syp43 double mutant shows pleiotropic phenotypes. For example, the syp42 syp43 double mutant is smaller in size and shows an aberrant response to gravity due to the loss of the polar localization pattern of PIN2 proteins in root cells (Uemura et al. 2012a). The syp42 syp43 double mutant is sensitive to abiotic stresses, such as salinity and osmotic stresses (Uemura et al. 2012b). In addition, the syp42 syp43 double mutant is susceptible to non-host species of the powdery mildew pathogen, and shows severe chlorosis (i.e., a hyper-sensitive response) when infected with a host species of powdery mildew fungus (Uemura et al. 2012a). Proteomic analyses using the apoplastic cell fraction indicated that the syp42 syp43 double mutant is defective in secreting cell-wall modification enzymes to the apoplast when infected with the pathogen (Uemura et al. 2019). The syp42 syp43 double mutant is also defective in targeted secretion of VAMP721 (discussed later) to the plant-pathogen contact site, a process required for the plant to restrict pathogen entry (Uemura et al. 2019). These data together indicate that secretion and vacuolar transport of specific cargo regulated by SYP4 at the TGN are essential for normal development of plants, as well as for plant responses to the environment and surrounding pathogens/microbes.
Live cell imaging showed that VTI11 and VTI13 localize to the TGN, MVEs and the vacuolar membrane, whereas VTI12 localizes to the TGN, MVEs and the PM in Arabidopsis protoplasts and root cells (Niihama et al. 2005; Uemura et al. 2004). VTI11 is capable of forming a trans-SNARE complex with tonoplast-localized SNAREs, namely SYP22 (Qa) and SYP51 (Qc), as well as VAMP727 (R), which is involved in vacuolar trafficking (discussed below) (Ebine et al. 2008; Sanderfoot et al. 2001; Yano et al. 2003), suggesting that VTI11 regulates vacuolar transport. A forward genetic screen identified the vti11 mutant as shoot gravitropism 4 (sgr4)/zigzag (zig), which shows defects in shoot gravitropism (Morita et al. 2002). In endodermal cells of the vti11/zig-1 mutant, the amyloplasts (the statoliths) abnormally accumulate on the upper side of the cell, against gravity (Yano et al. 2003), suggesting that VTI11 is required for normal sedimentation of amyloplasts in the direction of gravity. The vti11/zig-1 mutation also caused fragmentation of the vacuoles (Yano et al. 2003). Interestingly, a single amino acid substitution in the VTI12 gene suppressed the phenotype of the vti11/zig-1 mutant (Niihama et al. 2005). This mutation caused VTI12 to localize to the vacuole and enabled VTI12 to form a complex with SYP22 (Niihama et al. 2005). While a proteomic analysis showed that both VTI11 and VTI12 co-precipitate with GFP-tagged SYP43 (Fujiwara et al. 2014), another study indicated that VTI12, but not VTI11, interacts with TGN-localizing SNAREs (i.e., SYP61 and SYP4) and VPS45, which regulates SNARE complex formation at the TGN (discussed later) (Bassham et al. 2000; Sanderfoot et al. 2001). In addition, genetic data suggested that VTI11 and VTI12 regulate transport to the lytic vacuole and the storage vacuole, respectively (Sanmartin et al. 2007), and a detached leaf assay showed that the vti12 mutant exhibited more severe chlorosis compared to the wild type and vti11/zig-1 (Surpin et al. 2003). These data suggest that although the molecular functions of VTI11 and VTI12 are interchangeable, VTI11 and VTI12 may take part in distinctive traffic events. The traffic pathway that VTI13 regulates is not clear. VTI13 localizes to tonoplasts and punctate structures labeled by FM4-64 in Arabidopsis root hair cells, and is shown to take part in root hair growth (Larson et al. 2014). Interestingly, ultrastructural analysis indicated that in vti13 mutant cells, SYP4 is mislocalized to the ER (Larson et al. 2014), implying that VTI13 is required for the correct localization of SYP4 to the TGN.
The SYP61 gene was identified as OSMOTIC STRESS-SENSITIVE MUTANT 1 (OSM1) in a genetic screen aimed at finding genes responsible for salinity stress responses (Zhu et al. 2002). The syp61/osm1 mutant wilts more easily than the wild type when grown on soil with limited moisture, and is sensitive to salt and osmotic stresses (Zhu et al. 2002). SYP61 is shown to interact with a member of the aquaporins called PLASMA MEMBRANE INTRINSIC PROTEIN 2;7 (PIP2;7) (Hachez et al. 2014). PIP2;7 localizes to the PM in wild-type root cells; however, in the syp61/osm1 mutant, PIP2;7 accumulates in ER-derived globular/lenticular structures (Hachez et al. 2014), suggesting that SYP61 is required to transport PIP2;7 to the PM via the secretory pathway. A recent study indicates that SYP61 is ubiquitinated by the ubiquitin ligase ATL31, which is involved in carbon/nitrogen (C/N) nutrient stress responses (Hasegawa et al. 2022; Sato et al. 2009).
The syp61 knock-down mutant and the syp61/osm1 mutant are hypersensitive to C/N-nutrient imbalances (Hasegawa et al. 2022), suggesting that the ubiquitination status of SYP61 is important for the response to nutrient availability. Sec1/Munc-18 (SM) family proteins activate Qa-SNAREs and promote SNARE complex assembly (Shen et al. 2007). Immunoelectron microscopy indicated that the SM protein VPS45 localizes to the TGN and coprecipitates with TGN-localized SNAREs, but not with MVE-localized SNAREs (Bassham et al. 2000). VPS45 was also identified as BEN2 (BFA-visualized endocytic trafficking defective 2) in a forward genetic screen aimed at isolating mutants defective in the accumulation of PIN1 inside BFA bodies after BFA treatment (Tanaka et al. 2013). The vps45/ben2 mutant produced smaller BFA bodies labeled by PIN1-GFP, and showed delayed endocytosis monitored with the tracer molecule FM4-64 (Tanaka et al. 2013). This suggests that the activation of TGN-localizing SNAREs by VPS45 influences endocytosis and recycling of cargo, such as PIN1.
In yeasts and mammals, Ykt6, an R-SNARE, regulates versatile trafficking events, such as retrograde trafficking from the cis-Golgi to the ER, secretion, endosomal trafficking and vacuolar transport (reviewed in Kriegenburg et al. 2019). YKT6 is also shown to function in the fusion between the autophagosome and the lytic compartment in yeast, fly and mammals (Kriegenburg et al. 2019). The plant orthologs of Ykt6, YKT61 and YKT62 in Arabidopsis, interact with SYP41 and facilitate fusion between liposomes containing either SYP41 or SYP61 (Chen et al. 2005). This suggested that plant YKT6 is a putative R-SNARE regulating trafficking at the TGN; however, subcellular localization analysis indicated that GFP-tagged YKT6 is found in the cytosol in Arabidopsis protoplasts, and thus the role of YKT6 at the plant TGN is still unclear. Recently, ykt6 mutants produced by genome editing were shown to be defective in male and female gametogenesis and in embryonic development (Ma et al. 2021). Ma et al. (2021) demonstrated that GFP-tagged YKT61 expressed under the regulation of the UBIQUITIN10 promoter was associated with membranes, and distributed to the cytosol and punctate structures in Arabidopsis root cells. They also demonstrated that GFP-YKT61 interacts with SEC22 (an ER-localizing R-SNARE), SYP22 (a tonoplast-localizing Qa-SNARE), SYP41 (a TGN-localizing Qa-SNARE, described above) and VAMP721/722 (PM- and TGN-localizing R-SNAREs, discussed below) (Ma et al. 2021), altogether suggesting that plant YKT6 plays roles in multiple trafficking events, which may include trafficking to and from the TGN.
Systematic analysis demonstrated that members of the VESICLE-ASSOCIATED MEMBRANE PROTEIN 72 (VAMP72) group, which are R-SNAREs, localize to the TGN and the PM in Arabidopsis protoplasts. The VAMP72 group consists of seven members (VAMP721 to VAMP727), among which the functions of VAMP721, VAMP722 and VAMP727 are extensively studied. In addition to the TGN and the PM, VAMP721 and VAMP722 localize to the cell plates in dividing cells (El Kasmi et al. 2013; Shimizu et al. 2021; Uemura et al. 2012a; Zhang et al. 2011), and are shown to regulate secretion from the TGN (Kwon et al. 2008; Shimizu et al. 2021). The vamp721 vamp722 double mutation results in dwarf seedlings that show defects in cell plate formation and auxin-related responses (Zhang et al. 2011, 2021). In vamp721 vamp722 double mutant cells, some of the PM markers and PIN proteins are mislocalized to intracellular structures (Zhang et al. 2011, 2021). Also, the vamp721 vamp722 double mutation reduces the TGN number, affects the TGN size, and causes aggregation of the TGN, suggesting that VAMP721 and VAMP722 are required to keep the integrity of the TGN. In addition, VAMP721 and VAMP722 are demonstrated to take part in plant immune responses to powdery mildews (Kwon et al. 2008) and to bacteria (Kim et al. 2021; Kwon et al. 2020; Yun et al. 2013). It has been demonstrated that VAMP722 accumulates at the powdery-mildew entry sites, and forms a complex with the PM-localizing Q-SNAREs SYP121/PEN1 (PENETRATION 1) and SNAP33 (SYNAPTOSOMAL-ASSOCIATED PROTEIN OF 33 kDa) (Kwon et al. 2008) to secrete the powdery-mildew resistance protein RPW8.2 (Kim et al. 2014). VAMP721/722-depleted plants treated with a bacterial elicitor accumulate an enzyme of the lignin biosynthetic pathway, caffeoyl-CoA O-methyltransferase 1 (CCoAOMT1), inside the cells (Kwon et al. 2020), suggesting that VAMP721 and VAMP722 are responsible for secreting enzymes involved in cell wall reinforcement when a plant is infected with pathogenic bacteria. A recent study showed that VAMP721 interacts with PICALM1a/ECA1 (PHOSPHATIDYLINOSITOL-BINDING CLATHRIN ASSEMBLY 1A/EPSIN-LIKE CLATHRIN ADAPTOR 1) and PICALM1b, ANTH-domain-containing clathrin adaptor proteins (Fujimoto et al. 2020). In the picalm1a/b double mutant, VAMP721 is mislocalized to the PM, and the picalm1a/b double mutant is defective in secretion of mucilage after imbibition (Fujimoto et al. 2020). Therefore, it is proposed that VAMP721 is recycled back to the TGN in a PICALM1a/1b-dependent manner, and that the recycling of VAMP721 is important for correct secretion of cargo, such as mucilage.
Unlike other VAMP72 members, VAMP727 was shown by the systematic analysis to localize to MVEs in Arabidopsis protoplast cells. In planta, VAMP727 colocalizes mainly with MVE markers (Ebine et al. 2008); however, a subpopulation of VAMP727 localizes to the TGN (Shimizu et al. 2021; Uemura et al. 2019). VAMP727 localized to the PM when endocytosis was blocked by the PI3K/PI4K inhibitor wortmannin, and VAMP727 further accumulated at the PM upon overexpression of the constitutively active form of ARA6 (Ebine et al. 2011), suggesting that a subpopulation of VAMP727 cycles between the PM and ARA6-localizing MVEs. Biochemical analysis indicated that VAMP727 forms a trans-SNARE complex with tonoplast-localized Q-SNAREs (Ebine et al. 2008), as well as with PM-localized Qa-SNAREs (Ebine et al. 2011), suggesting that VAMP727 is involved in versatile traffic pathways linking the TGN and the vacuole via MVEs, and MVEs and the PM. Meanwhile, VAMP727 is suggested to function mainly in vacuolar transport, since the mutation in VAMP727 aggravates the phenotype of syp22-1 (a mutant of the tonoplast-localizing Qa-SNARE), and overexpression of VAMP727 rescues the syp22-1 phenotype (Ebine et al. 2008). Consistently, VAMP727 shares the same zone with components of the vacuolar trafficking machineries, but not with components of the secretory trafficking machineries, on the TGN (Shimizu et al. 2021). Adaptor protein (AP) complexes are known to bind to specific sorting signals in the cytosolic tail of the cargo and promote packaging of the cargo into transport intermediates at the donor compartment. In other words, a specific AP complex marks a distinctive zone of the organelle where the cargo for a specific traffic pathway is accumulated. AP-1 is shown to take part in the secretory pathway and the trafficking pathway to the cell plate, whereas AP-4 is known to interact with the vacuolar sorting receptors and takes part in vacuolar transport (Fuji et al. 2016; Singh et al. 2018). Shimizu et al. elegantly showed, using the super-resolution confocal live imaging microscopy (SCLIM) they had developed, that a single TGN bears a "secretory-trafficking zone" comprising AP-1, clathrin and VAMP721, and a "vacuolar trafficking zone" comprising AP-4 and VAMP727 (Shimizu et al. 2021). In addition, four-dimensional live-cell imaging demonstrated that a membrane fraction containing VAMP721, AP-1 and clathrin separates from the Golgi-associated TGN (GA-TGN) to produce a Golgi-independent population of the TGN (termed GI-TGN; Shimizu et al. 2021; Uemura et al. 2014, 2019), suggesting that the GI-TGN is a specialized population of the TGN that is responsible for secretion.

Fig. 3 Roles of TGN-localizing RAB GTPases. Members of the RABA/RAB11 group regulate different transport pathways from/to the TGN. RABA1 and RABA4 are suggested to regulate secretion, while RABA1 and RABA2/A3 are involved in transport to the cell plate. RABA1 is also suggested to regulate recycling of PM-localizing membrane proteins. The precise member that regulates vacuolar transport from the TGN is not clear; however, lines of evidence suggest that RABF/RAB5 marks early phases of endosomal maturation. GI-TGN, Golgi-independent TGN; GA-TGN, Golgi-associated TGN; GA, Golgi apparatus; MVEs, multivesicular endosomes
Future perspectives
The plant TGN serves as an important platform to coordinate secretion, recycling, vacuolar transport and endocytosis, and the trafficking of cargo via the TGN is essential for plant development and plant responses to various environmental stresses and stimuli. Figures 3 and 4 summarize the current understanding of the roles of RAB GTPases and SNAREs in the regulation of distinctive transport pathways from/to the TGN. Although knowledge of the functions of RABs and SNAREs at the TGN is expanding, essential questions remain unanswered. For example, do different RABA members regulate distinctive trafficking routes, or do they share common effectors, trafficking machineries or functional "zones" to deliver different cargo to the designated destinations?
What are the molecules responsible for demarcating different trafficking zones on the same TGN? What exactly is the cargo delivered from the TGN during plant development, cell plate formation, and abiotic/biotic stress responses? To answer these questions, we believe that, in addition to classical cell biological approaches, the application of the latest techniques, such as super-resolution microscopy and artificial intelligence technology based on machine learning, will allow us to detect and classify the fine dynamics of membrane trafficking regulators, and will be a key to deepening our understanding of the complex but unique functions of the TGN in the plant system.

Fig. 4 Roles of TGN-localizing SNAREs. SYP4, the Qa-SNARE, regulates secretion, recycling and vacuolar transport. Interestingly, SYP4 is not involved in constitutive endocytosis. Lines of evidence suggest that VTI12 takes part in the traffic pathway from the TGN to the PM, while VTI11 takes part in vacuolar transport from the TGN. SYP61, the Qc-SNARE, is shown to regulate secretion. VAMP721 and VAMP722 are involved in secretion, recycling and transport to the cell plate. VAMP721 is recycled back to the TGN by the action of PICALM1a/1b. VAMP727 mainly regulates vacuolar transport. VAMP721 and VAMP722 are components characterizing the "secretory-trafficking zone", while VAMP727 is a component characterizing the "vacuolar trafficking zone" of the TGN. GI-TGN, Golgi-independent TGN; GA-TGN, Golgi-associated TGN; GA, Golgi apparatus; MVEs, multivesicular endosomes
| 8,618.4 | 2022-04-29T00:00:00.000 | ["Biology"] |