text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Analysis of the Rectification of Electromagnetic Radiated Emission Performance of a Plug-in Hybrid Mini Truck
The EMC radiated emission performance of the plug-in hybrid mini truck shall meet the relevant requirements of national standards GB/T 18387-2017 and GB 34660-2017 on the radiation of vehicles. The plug-in hybrid mini truck is placed in a 10 m semi-anechoic chamber for the whole vehicle as required by the standards. By activating different working conditions of the vehicle, various possible situations occurring in the normal driving process of the vehicle are simulated. When a plug-in hybrid mini truck underwent the above two tests, the result of radiated emission test was found to exceed the limit value of the standard. The relevant problem electrical parts are identified by analyzing the radiated emission data of the vehicle, conducting near-field scanning of the vehicle, and locating the over-standard problem points. Through rectification and optimization, the plug-in hybrid mini truck fulfilled the relevant requirements of national standards GB/T 18387-2017 and GB 34660-2017 on the radiation of vehicles.
Introduction
The continuous upgrading of electric vehicle technology has influenced the R&D route of mini trucks.By drawing on the mature electrification cases in passenger cars, OEMs began to develop electric mini trucks, and plug-in hybrid mini trucks also came into the market.Since there are fewer application cases of electrification in mini trucks, various problems arise during the development.To compete for the market and expand their market share, OEMs put more and more electrical parts in mini trucks to improve the operability and comfort of the driver, which leads to more and more serious EMC radiated emission problems in the vehicle.
Therefore, the government authority included the requirements of EMC radiated emission performance of plug-in hybrid mini trucks in the national standards GB/T 18387-2017 Limits and Test Method of Magnetic and Electric Field Strength from Electric Vehicles and GB 34660-2017 Road Vehicles-Requirements and Test Methods of Electromagnetic Compatibility to avoid the impact of plug-in hybrid mini trucks on the environment and to verify whether the EMC radiated emission performance of newly developed models meet the requirements of the standards, like this in [1,2].
This paper introduces the test scheme of electromagnetic radiated emission performance of a plug-in hybrid mini truck.According to the test scheme, the test data is analyzed, and the over-standard problem electrical parts are identified, rectified, optimized and verified, thus completing the EMC rectification and optimization of the HV system of the vehicle.
Test Site
According to the requirements of the national standards GB/T 18387-2017 Limits and Test Method of Magnetic and Electric Field Strength from Electric Vehicles and GB 34660-2017 Road Vehicles-Requirements and Test Methods of Electromagnetic Compatibility, the radiated emission test of the vehicle is conducted in the 10 m semi-anechoic chamber, as shown in Figure 1.
Test Procedure
The test procedure of GB/T 18387-2017 Limits and Test Method of Magnetic and Electric Field Strength from Electric Vehicles is: The vehicle under test is placed in the 10 m semi-anechoic chamber, and the electric field monopole antenna and magnetic field loop antenna are arranged at least 3 m from the four sides of the vehicle under test.The test frequency band is 150 KHz-30 MHZ.According to the test procedure, before the test, the vehicle shall run at a constant speed of 40 Km/h to take a pre-scan of the electric field in four directions: front, back, left, and right.And a pre-scan of the magnetic field in four directions are: front, back, left and right, with two polarization modes.After the pre-scan, the maximum surface of electric field radiation and the maximum surface of magnetic field radiation of the vehicle are identified by comparing four sets of data on the electric field and eight sets of data on the magnetic field, respectively.Finally, the vehicle takes the final scan at the maximum emitting surfaces of the electric field and magnetic field at the speed of 16 Km/h and 70 Km/h, respectively.The final scan data includes two sets of electric field data and two sets of magnetic field data with different polarization modes.
After the test, the final scan test data are compared with the electric and magnetic field limits in GB/T 18387-2017.If the limit value is exceeded in one of the waveform curves, the vehicle test is not passed.
The test procedure of GB 34660-2017 Road Vehicles -Requirements and Test Methods of Electromagnetic Compatibility test process is: The vehicle under test is first placed in the 10 m semi-anechoic chamber, and the radiated emission receiving antenna specified in GB 34660 is aligned with the center point of the vehicle by adjusting the vehicle position.Radiated emissions specified in GB 34660 are subject to narrowband and broadband tests, and the test frequency band is 30 MHZ-1 GHZ.During the narrowband test, all LV electrical parts of the plug-in hybrid mini truck shall be activated, such as the fan, wipers, headlamps, etc.For the broadband test, the working condition of 40 Km/h is added for the plug-in hybrid mini truck in addition to the narrowband working condition.Narrowband and broadband tests shall be conducted on the left and right sides of the vehicle, respectively, in vertical and horizontal polarization modes.
After the test, the four curves of the narrowband obtained from the test are compared with the narrowband limits of radiated emission specified in GB 34660-2017.The four curves of the broadband are compared with the broadband limits in the standard.If one of the waveform curves exceeds the limit, the vehicle test is not passed.
Analysis of Test Data
According to the above test scheme, a plug-in hybrid mini truck is tested per the two standards to obtain the vehicle test data.Figure 2 shows the test results of GB/T 18387-2017, where Figure 2
Investigation and Positioning of Problem Points
Over-standard EMC radiated emission of the vehicle may result from the operation of one or more controllers/actuators of the vehicle.According to the controller/load band corresponding to the over-standard EMC radiated emission, combined with the working condition of the vehicle (with-speed state, READY state, ON state, ACC state, OFF state) and the near-field scanning by a handheld spectrum analyzer, the problem parts of EMC are finally determined, like this in [3].
By near-field scanning of the motor electric control system, it is found that the radiation values of 16 MHZ and 20-30 MHZ of the motor electric control system are greatly enhanced relative to the noise floor.As shown in Figure 4, the handheld spectrum analyzer conducts the near-field scanning of the motor electric control system.
Through the near-field scanning of the vicinity of the LV harness port of the generator controller, a waveform curve that matches the over-standard point of GB 34660-2017 is found, and the generator controller is the main problem part causing the over-standard problem near 80 MHZ as per GB 34660-2017.As shown in Figure 5, the handheld spectrum analyzer conducts the near-field scanning of the vicinity of the LV harness port of the generator controller.Summary of the investigation: (1) The vehicle HV system leads to the over-standard problem at 10-50 MHZ as per GB/T 18387-2017 and GB 34660-2017, and the motor electric control system is the main problem part.
(2) Through the near-field investigation, it is found that the generator controller causes the over-standard problem near 80 MHZ as per GB 34660-2017.
Rectification and Optimization
The EMC problem in an automobile occurs when three elements of EMC coexist, i.e., source of disturbance, coupling path, and sensitive equipment.The coupling paths can include conduction coupling disturbance and radiation coupling disturbance.The disturbance transmitted along the conductor is called conduction disturbance, and it is transmitted by means of electric, magnetic, and electromagnetic coupling.The electromagnetic disturbance transmitted through space through electromagnetic waves is called radiation disturbance.It is transmitted by means of near-field induction coupling and far-field radiation coupling.Moreover, conduction and radiation disturbances may exist simultaneously, resulting in a compound disturbance.In the test, the vehicle is the source of disturbance; the EMC test room (10 m semi-anechoic chamber) space is a coupling path, and the vehicle transmits disturbance outward through space radiation coupling; the receiving antenna of the test room is sensitive equipment.As shown in Figure 6.In this regard, the rectification route of the vehicle EMC is shown in Figure 7 below, including suppressing the source of disturbance, cutting off the disturbance paths and reducing the sensitivity of the sensitive source.For radiated emissions, after the electromagnetic disturbance from the source of disturbance is radiated into space via the antenna, it will be received in space by the antenna of the sensitive device.The sensitive device is then influenced, which forms electromagnetic radiation coupling.
The impact of electromagnetic radiation coupling depends mainly on the following factors: (1) Voltage and current of the source of the disturbance.The greater the voltage power supply is, the faster the change over time is, and the greater the energy radiated out; (2) There are transmitting antenna's characteristics of the disturbance source, such as the area of the circuit loop and the geometric length of the antenna.The larger the area of the loop is, the greater the length is, and the easier the energy radiated out; (3) There are characteristics of the receiving antenna of the sensitive equipment, such as the area of the circuit loop and the geometric length of the antenna.The larger the area of the loop is, the greater the length is, and the easier it is to receive electromagnetic waves from space.
Resonance occurs when the signal is transmitted through the conductor.Resonance is most severe when the length of the harness is exactly an integer multiple of the half-wavelength of the signal.In general, when the length of the harness is greater than one-tenth of the signal wavelength, the signal's energy will be radiated to space through the harness.
The disturbance voltage U generated by electromagnetic radiation coupling is related to the magnitude of the disturbing electromagnetic field and the receiving function heff of the receiving antenna of the sensitive equipment, as shown in Figure 8.For radiated emission, the source of disturbance is mainly the vehicle controller/load and its harnesses, the disturbance path is mainly the space of the test room, and the sensitive equipment is the antenna of the test room.Since the harness itself does not generate disturbance, to further clarify the three elements of EMC, it can be considered that the source of disturbance is the vehicle controller/load, the disturbance path is the vehicle harness and the space of the test room, and the sensitive equipment is the antenna of the test room.
Because the space and antenna of the test room are the standard test environment, they cannot be changed.The requirements of EMC standards can be met only by rectifying the vehicle controller/load and harnesses.
Rectification of Problem Electrical Parts (1) Rectification of motor electric control system
By inspection, it is found that the 360° ring joint at the three-phase line port of the motor electric control system is unreliable, posing the risk of a false connection of the shielding layer, like this in [4].
As shown in Figure 9, the recommended grounding method for the HV harness shielding layer is: As shown in Figure 10, actual HV harness connection and optimization of the vehicle is: (2) Rectification of resolver line of motor electric control system The resolver line of the vehicle's drive motor is too long, and the grounding of the shielding layer of the connecting section of the resolver line is not reasonable.Therefore, the shielding of the resolver line is strengthened by copper foil wrapping, and the filtering magnetic ring is added inside to reduce outward radiation.As shown in Figure 11, a magnetic ring is added for filtering at the motor end of the resolver line, and the magnetic ring is fixed inside the shielding layer.As shown in Figure 12, copper foil is used to strengthen the shielding of the resolver line.As shown in Figure 13, the resolver line is changed to be arranged on the right side of the vehicle in order to reduce the circuit area of the resolver line.Figure 12.Rectification of Resolver Line 2.
(3) Rectification of excess harness of the motor electric control resolver line The harness of the electric control resolver and the harness of its connector are suspended inside the vehicle, forming an antenna, like this in [5].
As shown in Figure 14 and Figure 15, the excess harness is recommended to be wound in the following manner: (4) Rectification of three-in-one controller housing and generator By analyzing the adhesive strip between the three-in-one controller housing and the cover, it is found that the role of the adhesive strip is to prevent water, which is not conductive.Therefore, the 3-in-1 controller housing is not a good metal confinement space, and there is a risk of electromagnetic leakage, so its shielding shall be enhanced.Due to the excessive outward radiation of the generator controller, filtering + reinforced shielding are adopted for optimization like this in [6].In addition, the filtering magnetic ring is added to the generator controller's HV harness, and the copper foil wrapping is adopted to strengthen the shielding.Finally, absorbing material is used for wrapping to further reduce its outward radiation like this in [7].Accordingly, the shielding was reinforced with copper foil between the three-in-one housing and the cover, as shown in Figure 16.The generator controller harness is wrapped by absorbing material, as shown in Figure 17.
Verification of Rectification
In the 10 m semi-anechoic chamber, the hybrid mini truck is tested in accordance with the requirements of GB/T 18387-2017 and GB 34660-2017 on the radiated emission performance index of the vehicle, like this in [8]- [10].By analyzing the test data, the radiated emission performance of the hybrid mini truck after rectification and optimization meets the standard requirements.Figure 18 shows the test results of GB/T 18387-2017, where Figure 18
Conclusion
In this paper, the test method of the plug-in hybrid mini truck is introduced by analyzing the requirements of the national standards GB/T 18387-2017 Limits and Test Method of Magnetic and Electric Field Strength from Electric Vehicles and GB 34660-2017 Road Vehicles -Requirements and Test Methods of Electromagnetic Compatibility on the radiated emission performance index of the vehicle.The methods of analyzing the over-standard data of the radiated emission test and locating the problem electrical parts in the process of the EMC mandatory test of a hybrid mini truck are elaborated.Finally, an EMC rectification and optimization case for the HV system of the vehicle is proposed.
During the rectification and optimization of the over-standard problems of the EMC radiated emission of the vehicle, it is found that part of the problems can be solved in the development stage of the EMC performance of the vehicle, such as excess harness, harness arrangement, shielding performance of the HV system housing, etc.We suppose the EMC performance index requirements
(a) shows the electric field test (at a vehicle speed of 70 Km/h) specified in GB/T 18387-2017, and Figure 2(b) shows the magnetic field Y-direction test (at a vehicle speed of 70 Km/h) specified in GB/T 18387-2017.Figure 3 shows the test results of GB 34660-2017, where Figure 3(a) shows the narrowband test in the left-side vertical direction specified in GB 34660-2017, and Figure 3(b) shows the broadband test in the left-side vertical direction specified in GB 34660-2017.(a) Electric Field (70 Km/h) Specified in GB/T 18387-2017 (b) Magnetic Field Y-direction (70 Km/h) Specified in GB/T 18387-2017 Figure 2. Test Results of GB/T 18387-2017 before Rectification.(a)Narrowband Test in Left-side Vertical Direction Specified in GB 34660-2017 (b) Broadband Test in Left-side Vertical Direction Specified in GB 34660-2017 Figure 3. Test Results of GB 34660-2017 before Rectification.By analyzing the vehicle test data, it is found that the over-standard point is 16 MHZ and 20-30 MHZ for the electric field and 16 MHZ for the magnetic field as tested under GB/T 18387-2017 Limits and Test Method of Magnetic and Electric Field Strength from Electric Vehicles; and 30-50 MHZ and 80 MHZ as tested under GB 34660-2017 Road Vehicles -Requirements and Test Methods of Electromagnetic Compatibility.
Figure 4 .
Figure 4. Near-field Scanning of Motor Electrical Control System.
Figure 9 .
Figure 9. Recommended Grounding Method of Shielding Layer.
Figure 10 .
Figure 10.Auxiliary Shielding with Copper Foil against Unreliable 360° Grounding of HV Harness Shielding Layer.
Figure 14 .
Figure 14.Recommended Folding Method for Excess Harnesses.
Figure 15 .
Figure 15.Resolver Line Folding and Copper Foil Wrapping.
(a) shows the electric field test (at a vehicle speed of 70 Km/h) specified in GB/T 18387-2017, and Figure 18(b) shows the magnetic field Y-direction test (at a vehicle speed of 70 Km/h) specified in GB/T 18387-2017.Figure 19 shows the test results of GB 34660-2017, where Figure 19(a) shows the narrowband test in the left-side vertical direction specified in GB 34660-2017, and Figure 19(b) shows the broadband test in the left-side vertical direction specified in GB 34660-2017.(a) Electric Field (70 Km/h) Specified in GB/T 18387-2017 (b) Magnetic Field Y-direction (70 Km/h) Specified in GB/T 18387-2017 Figure 18.Test Results of GB/T 18387-2017 after Rectification.(a) Narrowband Test in Left-side Vertical Direction Specified in GB 34660-2017 (b) Broadband Test in Left-side Vertical Direction Specified in GB 34660-2017 Figure 19.Test Results of GB 34660-2017 after Rectification. | 3,973.2 | 2023-09-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
Boundary Value Problems for Fourth Order Nonlinear p-Laplacian Difference Equations
φ p (s) = |s| p−2 s (p > 1), f ∈ C(Z(1, k) ×R,R). In the last decade, by using various techniques such as critical point theory, fix point theory, topological degree theory, and coincidence degree theory, a great deal of works have been done on the existence of solutions to boundary value problems of difference equations (see [1–7] and references therein). Among these approaches, the critical point theory seems to be a powerful tool to deal with this problem (see [5, 7–9]). However, compared to the boundary value problems of lower order difference equations ([6, 8, 10–13]), the study of boundary value problems of higher order difference equations is relatively rare (see [9, 14, 15]), especially the works by using the critical point theory [16]. For the background on difference equations, we refer to [17]. In this paper, we will consider the existence of solutions of the boundary value problem of (1) with (2). First, we will construct a functional J such that solutions of the boundary value problem (1) with (2) correspond to critical points of J. Then, by using Mountain pass lemma, we obtain the existence of critical points of J. We mention that (1) is a kind of difference equation containing both advance and retardation. This kind of difference equation has many applications both in theory and practice. For example, in [17], Agarwal considered the following difference equation:
In the last decade, by using various techniques such as critical point theory, fix point theory, topological degree theory, and coincidence degree theory, a great deal of works have been done on the existence of solutions to boundary value problems of difference equations (see [1][2][3][4][5][6][7] and references therein).Among these approaches, the critical point theory seems to be a powerful tool to deal with this problem (see [5,[7][8][9]).However, compared to the boundary value problems of lower order difference equations ([6, 8, 10-13]), the study of boundary value problems of higher order difference equations is relatively rare (see [9,14,15]), especially the works by using the critical point theory [16].For the background on difference equations, we refer to [17].
In this paper, we will consider the existence of solutions of the boundary value problem of (1) with (2).First, we will construct a functional such that solutions of the boundary value problem (1) with (2) correspond to critical points of .Then, by using Mountain pass lemma, we obtain the existence of critical points of .We mention that (1) is a kind of difference equation containing both advance and retardation.This kind of difference equation has many applications both in theory and practice.For example, in [17], Agarwal considered the following difference equation: with the boundary value conditions as an example.It represents the amplitude of the motion of every particle in the string.And in [7], the authors considered the following second order functional difference equation: Journal of Applied Mathematics with different boundary value conditions where the operator is the Jacobi operator given by In [18], the authors considered the second order -Laplacian difference equation: with boundary value conditions As for the periodic and subharmonic solutions of -Laplacian difference equations containing both advance and retardation, we refer to [19].And for the periodic solutions of -Laplacian difference equations, we refer to [20].Throughout this paper, we assume that there exists a function (, , V) which is differentiable in (, V) and (, 0, 0) = 0 for each ∈ Z(0, ), satisfying for ∈ Z(1, ).
Preliminaries and Main Results
Lemma which implies that and * () = 1 is obvious.If > 2, then we have which implies that and * () = 1 is obvious.Now the proof is complete.
Before we apply the critical point theory, we will establish the corresponding variational framework for (1) with (2). Let Then is a -dimensional Hilbert space.Obviously, is isomorphic to R .In fact, we can find a map : → R defined by Define the inner product on as The corresponding norm ‖ ⋅ ‖ can be induced by For all ∈ , define the functional () on as follows: Clearly, ∈ 1 (, R).We can compute the partial derivative as Let denote the open ball in with radius and center 0, and let denote its boundary.
In order to obtain the existence of critical points of on , we need to use the following basic lemma, which is important in the proof of our main results.
Then possesses a critical value ≥ given by where Let Now we state our main results.
In view of (37) and (38), it is easy to obtain the following corollary.
For the case when = 2, we have the following corollary for the boundary value problems of the fourth order nonlinear difference equations.Corollary 8. Assume that (, , V) satisfies the following conditions.
Proof of Theorem 5
In order to prove Theorem 5, we first establish the following lemma.
Then by (49), which means that satisfies the condition ( 1 ) of the Mountain pass lemma.By our assumptions, it is clear that (0) = 0.In order to use Mountain pass lemma, it suffices to verify that condition ( 2 ) holds.In fact, similar to the proof of (45), we have for any ∈ .Since 0 < , it is easy to see that there exists an ∈ with ‖‖ > such that (±) < 0. Thus ( 2 ) holds.
In the last part of this paper, we give an example to illustrate our results. | 1,279.6 | 2014-01-30T00:00:00.000 | [
"Mathematics"
] |
Structural changes in wet granular matter due to drainage
Unsaturated wet granular media are usually modelled using force laws based on analytical and empirical results of liquid bridge forces between pairs of grains. These models make ad-hoc assumptions on the liquid volume present in the bridges and its distribution. The force between grains and rupture criterion of the bridge are a function of this assumed volume of liquid, in addition to other parameters like contact angle of the liquid, geometry of the grains and the inter grain distance. To study the initial volume and morphology of liquid bridges, hydrodynamic simulation of dynamic effects leading to formation of liquid bridges at grain scale are indispensable. We use a Smoothed Particle Hydrodynamics algorithm to simulate the hydrodynamics of the evolution of the free surface using a novel freesurface-capillary model, inspired by the molecular basis of surface tension. We present validations for the model and simulations of formation and rupture of liquid bridges.
Introduction
Unsaturated wet granular media are encountered in a wide range of engineering applications such as in energy sector, pharmaceutics and food industry.Distribution of liquid as bridges between pairs of grains and in more complex shapes between more than two particles leads to complex constitutive behaviour of the material [1].The formation of liquid bridges between wet particle also cause grain agglomeration which is either desirable (e.g.wet granulation) or undesirable (wet fluidized beds).A better understanding of the 'formation' of liquid bridges will aid in controlling these processes and in arriving at better input parameters for macroscopic simulations.Studies on liquid bridges in literature usually focus on static bridges, and deformation of bridges during stretching and rupture, quasi-statically [2,3].Few experimental and theoretical studies have focused on the initial bridge formation process [4], the dynamics due to liquid transfer from grain to the bridge and the hydrodynamics of the rupture of the bridge [5].
Most research that describes dynamics of liquid bridges do so without considering hydrodynamics owing to the difficulty in resolving such effects at low capillary numbers.Empirical models that resulted from experimental studies were summarized by Herminghaus [6], which are used in a number of industrial scale studies as force laws.Capillary bridge force between grains of different sizes (polydisperse) was studied by Cherblanc et.al. [3], where an analytical rupture criterion for liquid bridges was also proposed.A recent Computational Fluid Dynamics simulation by [5] report the underprediction by analytical models, of the rupture criteria for liquid bridges formed e-mail<EMAIL_ADDRESS>collision of two wet spheres.Another CFD study [7] presents the physics of formation of liquid bridges for given flow conditions.To our knowledge no theoretical study exists that considers the formation of liquid bridges following drainage.In particular, the volume of liquid 'trapped' after the liquid drains through a pair of particles has not been studied.Current advances in meshless methods for fluid dynamics allow simulations of such scenario, overcoming conventional difficulties in CFD related to large density ratio (between liquid and surrounding gas) and coupling with rigid bodies.Smoothed Particle Hydrodynamics [8] is extensively used in simulation of complex fluid flow involving interactions with rigid bodies and effects of surface tension.A recent free surface model in SPH [9] allows for accurate simulation of flows involving free surfaces and superimposition of surface tension forces on the particle domain.We present and validate this SPH model in three dimensions.The method is then applied to simulate example cases of rupture of liquid bridge between two solids and the formation of liquid bridge between two grains following liquid draining.Visualizations of dynamic evolution of liquid bridges are presented.
Formulation
We outline the governing equations that are being solved, and the numerical method namely Incompressible Smoothed Particle Hydrodynamics (ISPH) with the capillary model, here.
Governing equations
Incompressible isothermal fluid flow is governed by the momentum conservation equations: where u is the velocity, p is the pressure, ρ and μ are the density and coefficient of viscosity of the fluid, respectively, D = (∇u + ∇u T )/2 is the deformation rate tensor, f B is the body force per unit mass on the fluid element and t is the time.The momentum equation (eq. 1) is the Navier-Stokes equations written in Lagrangian formulation and D/Dt denotes the material derivative.The mass conservation is ensured through the constraint of divergence free velocity field, ∇ • u = 0.In the incompressible version of SPH (as opposed to the Weakly Compressible version [10],) incompressibility is achieved by solving for a pressure field whose gradient ensures a divergence free velocity field.
SPH approximation
The basics of SPH algorithm and its fundamentals are widely established and are described in a number of publications [8].For the purpose of brevity, we here present the SPH descritization of the governing equations, together with terms that account for capillary effects directly.The momentum conservation equation in the SPH formulation reads as: Here, the first term on the right hand side represents the acceleration due to pressure gradient and the second term represents the viscous dissipation.The equation is written for the conservation of moment for a discretized fluid particle a, and particles in the neighborhood represented by subscript b.The variables p, ρ and μ represent the pressure, density and viscosity of the fluid, respectively.For incompressible flows, the value of ρ remains the same throughout the fluid.The third term on the RHS represents the acceleration due to pairwise-interparticle force superimposed on the discretized particles and is responsible for the effect of surface tension and contact angle dynamics, and will be described in the next subsection.The fourth term on the RHS represents the body force (gravity) acting on the domain.
Pairwise forces and capillarity
Following the molecular theory of capillarity [11], Tartakovsky and Meakin [12] proposed a surface tension model and methods for applying macroscopic surface tension coefficient to a Weakly Compressible SPH algorithm with an equation of state based on ideal gas law.In addition to mimicking the effect of surface tension, the superimposition of pairwise forces introduces a 'virial' pressure to the fluid, and this additional pressure can be computed for a given particle system.In the case of incompressible fluids with a free surface (single phase), the virial pressure can be treated as an additive term.Though this needs to be corrected for to account for accurate Laplace pressure jump across interfaces, the dynamics of capillary flow would depend only on relative values of pressure, that is, the pressure gradient.
We use the cosine function as given in [12] for the pairwise force and maintain the same cutoff length as the smoothing kernel used for computation of other forces from the continuum model.The pairwise force is given as: Using this potential and using the Hardy's [11] formula for integrating stresses at a point in a Lagrangian particle model, the partial surface energy due to pairwise interaction force is given by: For the 5 th order Wendland kernel function [13] used in the current work, this leads to the calibration equation for the surface tension coefficient: Here, h ratio is the ratio of the total cutoff length of the kernel to the initial particle spacing Δx.The strength of the pairwise force is given by s ll .Using different values for strengths of pairwise force, and correspondingly for different ratios for surface energies, different contact angles can be obtained.For the free surface simulations presented here, the contact angle is given by: where θ 0 is the contact angle, s αα is the strength of pairwise potential between liquid and s αβ is the strength of pairwise potential between particles of different phases, for example, liquid and solid.We integrate the SPH equations using the velocity Verlet integration algorithm [12] 3 Validation and results Validation of the surface tension model by an oscillating bubble experiment was performed and will be part of a different detailed publication.Validations of more complex phenomena relevant to the present work, are presented here.
Contact angle
Equation 6 as a model for wetting phenomena is validated for different contact angles ranging from 30 • to 150 • .A hemispherical drop is placed on a horizontal solid substrate and is allowed to relax.Figure 1 shows the outer profile of the droplet after a finite time for different contact angles.The time variation of the contact angle after the droplet is allowed to relax from its initial hemispherical position for two extreme contact angles is shown in Fig. 2. The contact angles are measured by linear regression of position of first six surface particles from the substrate.We perform a numerical experiment with capillary tubes of different diameters immersed in a periodic domain of liquid.The capillary rise height can be analytically derived using balance of pressure due to liquid column and the Laplace pressure jump across curved interfaces.Table 1 shows the comparison of measured capillary rise and the analytical result for different diameters of the capillary tube.The simulations results for different capillary widths are shown in Fig. 3.
Rupture of capillary bridge
In wet granulation processes, predicting agglomeration of granules is a critical step for successful simulations [14].In many macro-scale simulations, analytical criteria is used to decide whether capillary bridges formed during collision of two wet particles would sustain for the given approach momenta of the grains.In a recent computational fluid mechanics study [5], it was shown that the analytical cohesion criteria grossly underpredicts the rupture criteria.We perform a similar simulation using the proposed SPH algorithm to corroborate this observation opening up the possibility for deriving accurate agglomeration criteria for complex scenarios involving polydisperse grains and out-of-axis collisions.
Two particles of diameter 50 μm are considered, with a fluid drop of ≈ 1072 μm 3 volume on one of the particles.The viscosity and surface tension coefficient of the liquid are 0.001 Nsm −2 and 0.071 N/m respectively.The dry particle is imparted a velocity of 5m/s towards the wet particle.The moment of rupture is clearly seen in 4, with the formation of two satellite droplets.The coefficient of restitution for the impact can be observed from Fig. 5, as the asymptotic value of normalized relative velocity of the colliding particles.In this case, the restitution coefficient is e ≈ 0.825.
Formation of capillary bridge following drainage
The simulation of capillary bridge surfaces in literature usually employ surface energy minimization algorithms [2].However, the initial liquid volume in the bridge which involves draining of liquid is not given enough attention in literature.We perform simulation of a large liquid droplet impact a pair of grains, and drain through them, resulting in a liquid bridge.Figure 6 shows the different stages of this liquid flow through grains.In Fig. 6b, the liquid drop penetrates the space between grains.In Fig. 6d, a 'Y' shaped configuration of the liquid between the grains pulled down by inertia of liquid is seen.Following this, as see in in Fig. 6e, the bridge begins to be formed when the forked fluid structure straightens and moves upwards due to hydrophilicity of the grain surface.The vertical stem undergoes a rupture at this instant.In Fig. 6f, we see the liquid bridge completely detached and retain a finite fluid volume in it.
Conclusion
Using a novel free surface-surface tension model the structural changes of the liquid before formation of liquid bridge following draining of liquid and during its rupture following violent collisions are simulated for example scenarios.These simulations provide confidence for more detailed parametric studies that would result in better force laws, and models to describe liquid bridge formation and rupture.
4 YFigure 1 :
Figure1: The profile of droplet placed on solid substrate, for different contact angles.θ s is the contact angle measured by linear regression of six surface particles near the substrate and θ a is the contact angle given by pairwise force ratio.
Figure 2 :
Figure 2: Time variation of the contact angle for the case of θ = 30 • and θ = 60 • .The droplets were initiated from the geometry of hemisphere resting on a solid surface.
Figure 4 :Figure 5 :
Figure 4: Instances of collision of two rigid spheres with a wet spot on one.
Figure 6 :
Figure 6: Instance of formation of a liquid bridge following draining of a large liquid drop through a pair of spherical grains
Table 1 :
The capillary rise height for different capillary tube diameters (2D) | 2,826.6 | 2017-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Non-relativistic trace anomaly and equation of state in dense fermionic matter
We theoretically investigate a non-relativistic trace anomaly and its impact on the low-temperature equation of state in spatially one-dimensional three-component fermionic systems with a three-body interaction, which exhibit a non-trivial three-body crossover from a bound trimer gas to dense fermionic matter with increasing density. By applying the $G$-matrix approach to the three-body interaction, we obtain the analytical expression for the ground-state equation of state relevant to the high-density degenerate regime and thereby address how the three-body contact or, equivalently, the trace anomaly emerges. The analytical results are compared with the recent quantum Monte Carlo data. Our study of the trace anomaly and the sound speed could have some relevance to the physics of hadron-quark crossover in compact stars.
I. INTRODUCTION
The recent development of astrophysical observations enables us to address the fundamental question of how matter behaves at extremely high density.Indeed, masses and radii of neutron stars have been simultaneously deduced from gravitational waves observed from a binary neutron star merger [1].By incorporating such information and the presence of a heavy neutron star [2] into a Bayesian analysis, the equation of state of neutron star matter has been determined, such that the speed of sound is marginally peaked at several times of the normal nuclear density ρ 0 = 0.16 fm −3 [3].In a manner that is consistent with this equation of state, nowadays we are in a position to theoretically construct the equation of state of matter at densities significantly higher than ρ 0 .
In such an extremely dense environment, hadrons, which consist basically of three quarks, overlap with each other and can no longer be regarded as point-like particles.Eventually, neutron star matter is expected to be governed by the quark degrees of freedom in the form of color superconducting quark matter [4].It is nevertheless difficult to figure out how nuclear matter, which is relatively well-known, changes into such quark matter as density increases.The above-mentioned empirical equation of state of neutron star matter invokes the socalled hadron-quark crossover scenario [5][6][7], where nuclear matter, if compressed, would undergo a crossover toward quark matter rather than a phase transition [8,9].While its microscopic mechanism is still elusive in the presence of the sign problem inherent in lattice simulations of finite-density quantum chromodynamics (QCD), a recent lattice simulation of finite-density two-color QCD [10,11], which is free of the sign problem, indicates a peak of the speed of sound in the density region, where the diquark condensate gradually changes in a similar way to the Bose-Einstein condensation (BEC)to-Bardeen-Cooper-Schrieffer (BCS) crossover [12][13][14][15][16][17][18][19] realized in ultracold Fermi atomic gases [20][21][22].
In this sense, an alternative promising route to ad-dress the microscopic mechanism of the hadron-quark crossover could be via an analog quantum simulation based on ultracold atomic physics [23][24][25].Thanks to the tunable interactions, adjustable internal degrees of freedom, and reachable quantum degeneracy through Feshbach resonances, hyperfine states, and state-of-theart cooling techniques, respectively, ultracold atoms offer an ideal platform to investigate quantum manybody physics [26].Indeed, for a non-relativistic onedimensional (1D) three-component Fermi atom mixture with a three-body interaction between different components, a crossover from a gas of tightly bound trimers to a gas of single atoms with increasing density has been pointed out [27]; the thermal equation of state and the minimum of the compressibility (corresponding to the sound velocity peak) in the crossover regime have been reported from a quantum Monte Carlo (QMC) simulation based on the worldline formulation, which is free of the sign problem [28].Such a system can be regarded as a good testing ground for many-body theories involving three-body forces, which play a crucial role in low-energy nuclear physics [29,30].Furthermore, this 1D system exhibits a trace anomaly due to the broken scale invariance [27,31].The same kind of trace anomaly is known to appear in spatially three-dimensional (3D) dense QCD, which has been recently discussed in connection with the sound velocity peak [32].Note that in both systems, the trace anomaly corresponds to the deviation of the equation of state from the scale-invariant behavior in the high density limit.In the non-relativistic system, which will be studied here, the trace anomaly can be expressed in terms of the threebody contact [27,31], which is a three-body generalization of Tan's two-body contact [33][34][35] and characterizes the probability of finding three particles close to each other [36,37].The numerical values of the three-body contact in the crossover regime have also been obtained from the above-mentioned QMC simulation [28].
It is useful to consider various spatial dimensions and multi-body interactions in the non-relativistic Fermi sys-tem of interest here.The trace anomaly has been experimentally measured in spatially two-dimensional (2D) two-component Fermi atomic gases with two-body interaction [38,39], while it has been theoretically shown that in the high temperature limit, there is an exact mapping of the two-body anomalous interaction in the 2D model onto the three-body anomalous interaction in the 1D model [40].Incidentally, both models are asymptotically free: The interactions become asymptotically weaker with increasing density.Intuitively, this can be understood from the comparison of two energy scales, namely, the Fermi energy E F and the multi-body binding energy E b ; the weak coupling limit is realized when E F /E b → ∞.This property is in contrast to the conventional 3D model where the energy ratio is characterized by the Fermi momentum k F and the two-body scattering length a in such a way that the high density limit corresponds to the unitarity (i.e., k F |a| → ∞) [19].
In considering the crossover mechanism of nonrelativistic three-component Fermi mixtures, it is interesting to focus on the analogy with the BEC-BCS crossover in two-component Fermi atomic gases, where tightly bound diatomic molecules are changed into loosely bound Cooper pairs.Indeed, in three-component Fermi atomic gases, one can expect a similar crossover where tightly bound triatomic molecules are changed into loosely bound trimers called Cooper triples [41,42].Such a triple state that persists even in the presence of a Fermi sea is a natural extension of the Cooper pairing state and partially consistent with a phenomenological picture of quarkyonic matter that is favorable for explaining the sound velocity peak in neutron star matter [43].The crossover from baryons to color-singlet Cooper triples has also been discussed theoretically in a semi-relativistic quark model with a phenomenological three-body attraction that is responsible for color confinement [44].However, it remains to be investigated how the ground-state equation of state is associated with the three-body correlations and the trace anomaly even in the high-density regime of neutron star matter.
In this work, we theoretically investigate the groundstate equation of state for non-relativistic 1D threecomponent fermionic matter, which is connected to the trace anomaly just like dense QCD, by using the Brueckner G-matrix approach, which is known to successfully describe the ground-state equation of state for asymptotically free 2D Fermi atomic gases [45][46][47].Remarkably, this approach gives an analytical expression for the equation of state, which in turn well reproduces the equation of state obtained by a QMC simulation [48] and experiments [49,50] throughout the 2D BCS-BEC crossover.Moreover, the G-matrix result for the ground-state energy of a Fermi polaron, namely, an impurity quasiparticle immersed in a Fermi sea, shows an excellent agreement with the exact result in 1D [51] and experimental results in 2D [52].We focus on the low-temperature and high-density regime where the QMC simulation is numerically demanding even in this non-relativistic 1D system Feynman diagrams for the three-body G-matrix G3(K, ω).The circle represents the three-body coupling g3.Three solid lines correspond to the three-body propagator Ξ(K, ω).
regardless of the fact that the high-density regime corresponds to the weakly-coupled regime due to the asymptotic freedom.In particular, we derive an analytical expression for the equation of state and the three-body contact in this system and elucidate the impact of the trace anomaly on the ground-state equation of state in the presence of strong three-body correlations.This paper is organized as follows.In Sec.II, we present a formalism for describing non-relativistic 1D three-component fermions involving three-body attractive interaction.In Sec.III, we discuss the three-body contact and the ground-state equation of state in the high-density regime (µ > 0).We summarize this paper in Sec.IV.Throughout the paper, we take = k B = 1, and the system size is set to be unity.
II. FORMALISM
We consider non-relativistic 1D three-component fermions by starting from the following Hamiltonian [27], where ǫ p = p 2 /(2m) is the kinetic energy of a fermion with mass m, and ψ p,a is the fermion operator with color index a = r, g, b. g 3 is the contact-type coupling of the three-body force.The three-body interaction can be expressed in terms of the three-fermion operator, where ε a1a2a3 is the completely antisymmetric tensor.The three-body coupling constant g 3 induces a threebody bound state even for infinitesimally small g 3 in 1D.Because of non-perturbative properties of the three-body coupling, we need to sum up an infinite series of the threebody ladder diagrams shown in Fig. 1 even in the weakcoupling (or high-density) regime.The three-body G-matrix is given by where is the three-body propagator with the Pauli-blocking factor Q(k, q, K), and E k,q,K is the kinetic energy of three particles given by ( The three-body coupling can be characterized by the three-body binding energy E b in vacuum obtained from the pole of the three-body T -matrix [42].Namely, we take Q(k, q, K) = 1 in Eq. ( 4) and obtain where Λ is the momentum cutoff.In this regard, one can find that a usual Hartree-like lowest-order interaction energy g 3 ρ r ρ b ρ g , where ρ a is the number density of color a, vanishes in the limit of Λ → ∞, indicating that an appropriate regularization of Λ is needed even in the high-density limit.
Hereafter, we focus on the color-symmetric case with ρ r = ρ b = ρ g ≡ ρ/3, where ρ is the total number density.Following the idea of the Brueckner Hartree-Fock theory in the presence of a bound state [45][46][47], we evaluate the internal energy E as the Hartree-Fock-like expectation value E = H by replacing g 3 with the in-medium effective interaction G 3 (K = 0, ω = −E b ), that is, This approximation leads to an analytical expression for the equation of state, which works unexpectedly well in two-dimensional two-component Fermi atomic gases with attractive interaction throughout the BCS-BEC crossover [46,47].We evaluate Ξ(0, −E b ) in G 3 (0, −E b ) within the Tamm-Dancoff approximation where low-energy excitations below the Fermi energy E F are suppressed at T = 0 [53].This treatment is similar to the generalized Cooper problem for three-body states [41,[54][55][56][57].As in the case of 2D two-component Fermi gases in which the states with zero center-of-mass momentum can be regarded as the relevant contribution, the states with K = 0, which correspond to squeezed Cooper triples, can be relevant in 1D three-component Fermi gases [41,42].
Because of the internal degrees of freedom associated with constituent fermions in three-body cluster states, the three-body correlations at K = 0 involve an ultraviolet divergence with respect to the integration of relative momenta.Although we do not consider the condensation in the present 1D system, the low-momentum correlations play a crucial role at low temperature and high density; indeed, such an approximation shows a good agreement with experiments [49,50,52] as well as QMC simulations [36,58] as shown in Refs.[45][46][47].On the other hand, our approach cannot reproduce the lowdensity limit where a gas of tightly bound trimers is realized because we do not consider three-body correlations with K = 0.In this regard, we focus on the high-density regime where the contribution with K = 0 can be expected to be dominant.By introducing q = √ 3 2 q and r = k 2 + q2 , we obtain Note here that r 2 = mE k,q,0 ≥ 3mE F .Then, we find Accordingly, we obtain One can see that G 3 (0, −E b ) in Eq. ( 10) does not depend on Λ in contrast to g 3 in Eq. ( 6).Eventually, the internal energy within the present approach reads particular, it is worth mentioning that in the highdensity limit (E F ≫ E b ) one can obtain which is similar to the lowest-order interaction correction in 2D Fermi gases with two-body interaction [46,59].While the G-matrix approach can be justified in the highdensity regime due to the asymptotic freedom, the logarithmic correction in Eq. ( 12) is the consequence of nonperturbative nature of the three-body coupling captured by the infinite ladder resummation in Fig. 1. other thermodynamic quantities can be obtained via the thermodynamic identities.The chemical poten-tial µ = ∂E ∂ρ is given by The pressure P = µρ − E reads In the high-density limit where E b is negligible compared to E F , one can find the scale-invariant result P = 2E [27].However, at lower densities, such a relation is gradually broken due to the trace anomaly, which is equivalent to the three-body contact C 3 = P −2E [31].We thus obtain One can easily find that C 3 is positive definite as in the case of conventional Tan's contact [33][34][35].Moreover, one can obtain the squared sound velocity where v F = k F /m is the Fermi velocity corresponds to the high-density conformal limit in this system.We note that at T = 0, c 2 s is related to the compressibility κ = 1 ρ ∂ρ ∂P .By introducing the non-interacting compressibility κ 0 = ρ/v 2 F , we find κ/κ 0 = v 2 F /c 2 s .
III. RESULTS
First, we discuss the three-body contact C 3 , which arises from the trace anomaly.In particular, we focus on the regime with µ > 0 where the Fermi degeneracy of constituent fermions is important.Figure 2 is the length scale associated with the three-body binding energy E b .For comparison, we also show the QMC results at T = 0.4E b in Ref. [28], where the circles and triangles are evaluated by two different ways.These two results are different due possibly to lattice artifacts.
increases with µ, indicating the importance of the lowenergy three-body correlations which are reminiscent of squeezed Cooper triples [42].We note that this increment is also associated with the increase of ρ as C 3 is normalized by the density-independent scales λ b = 2π mE b and E b in Fig. 2. If we use the density-dependent scale (e.g., C 3 /ρE F ), such a quantity vanishes and hence the scale-invariant result C 3 /ρE F = (P − 2E)/ρE F → 0 is recovered in the high-density limit.Even within our simplified approach, our result is close to the QMC results performed at finite temperature (T = 0.4E b ) in Ref. [28].
In the present G-matrix approach where C 3 is a little bit underestimated, we do not consider the three-body correlations with nonzero K and the trimer-trimer interaction [60], which may be the origin of such underestimation of C 3 .
Figure 3 shows the internal energy per particle E/ρ + E b /3.In Ref. [28], the QMC results for the internal energy density E has the the trimer contribution E trimer = E TFG − ρE b /3 subtracted out were reported, where E TFG is the internal energy density of a noninteracting trimer Fermi gas.Since E TFG is not considered in our calculation, we compare the G-matrix result for E/ρ + E b /3 with (E − E trimer )/ρ in the QMC simulation.At µ > 0, our result is qualitatively consistent with the QMC result as both results show linear increase with µ/E b .This enhancement is also related to the increase of ρ.The quantity shown in Fig. 3 indicates the degree to which the system differs from a non-interacting trimer Fermi gas.In this sense, the result for C 3 obtained by the G-matrix approach may be regarded as the three-body correlations that cannot be described by the point-like trimer formation.On the other hand, the trimer-trimer For comparison, the QMC result at T = 0.4E b in Ref. [28] is shown, where the plot has the trimer contribution Etrimer subtracted out.
repulsive interaction [60] is not considered in our calculation.This repulsion would act to increase E and thus lead to further discrepancy between the QMC simulation and the G-matrix approach.Finally, we examine the squared sound velocity c 2 s /v 2 F as shown in Fig. 4. For comparison, we also show the QMC result obtained from the dimensionless compressibility κ/κ 0 at T = 0.4E b .Although the QMC result, which is above unity, involves the finite-temperature effect, the G-matrix result for c 2 s /v 2 F is well below unity.To understand this discrepancy, we phenomenologically introduce the three-body correlations with nonzero centerof-mass momenta (K = 0) as E → E + δE K =0 where In Eq. ( 17), K T = 6m(3E F + E b ) is the effective trimer Fermi momentum.Then, using the thermodynamic identities, we find the associated correction to the squared sound velocity c 2 s → c 2 s + δc 2 s,K =0 , where The dashed curve in Fig. 4 shows the result for c 2 s /v 2 F including the phenomenological three-body correlations with K = 0. Indeed, it is close to the QMC result.This indicates the importance of the Pauli pressure of inmedium trimers in addition to the trace anomaly in the high-density regime.On the other hand, δc 2 s,K =0 does not vanish even in the high-density limit (E F ≫ E b ), whereas c 2 s /v 2 F should approach unity in the high-density limit.In this sense, the phenomenological expression for δc 2 s,K =0 based on point-like trimer states overestimates the excess of c 2 s , implying that the non-local Coopertriple-like correlations with K = 0 should be taken into account [42].The trimer-trimer interaction would also play an important role in changing c 2 s , in addition to C 3 and E.Although the G-matrix result is known to follow π ln(3EF/E b ) , which is consistent with the equation of state in the BCS-BEC crossover [46,47], therefore, a more detailed investigation of the highdensity asymptotic behavior of c 2 s is left for an important future work.
IV. SUMMARY
To summarize, we have investigated the trace anomaly and its impact on the ground-state equation of state for non-relativistic 1D three-component fermions.By extending the G-matrix approach developed for the system with two-body interaction to the system with three-body interaction, we have obtained the analytical expression for the ground-state equation of state in the presence of three-body correlations that cannot be described by the formation of point-like trimers.The three-body contact, which results from the non-relativistic trace anomaly, is found to increase with the chemical potential.Our results are qualitatively consistent with the recent QMC results at positive chemical potentials even within the simplified approximations adopted here.We expect that the Cooper-triple-like three-body correlations appear in the present system.
for future perspectives, it is important to consider the three-body correlations with nonzero center-of-mass momenta as well as trimer-trimer interactions for further understanding of the three-body crossover equation of state.Finite-temperature effects should also be addressed for more quantitative comparison with the QMC calculation.Moreover, it would be interesting to apply the present approach to hadron-quark crossover by considering a system of constituent quarks interacting via a three-body color-confining force.
FIG. 3 .
FIG. 3. Internal energy (E/ρ + E b /3)/E b as a function µ/E b .For comparison, the QMC result at T = 0.4E b in Ref.[28] is shown, where the plot has the trimer contribution Etrimer subtracted out.
FIG. 4 .
FIG. 4. Squared sound velocity c 2 s /v 2 F as a function of µ/E b where vF is the Fermi velocity corresponding to the conformal high-density limit.The solid and dashed curves represent the G-matrix results with and without the degenerate trimer contribution δc 2 K =0 , respectively.The circles show the QMC results obtained from c 2 s /v 2 F = κ0/κ at T = 0.4E b .The horizontal dotted line corresponds to the high-density limit (c 2 s /v 2 F = 1). | 4,711 | 2024-02-07T00:00:00.000 | [
"Physics"
] |
Q-holes
We consider localized soliton-like solutions in the presence of a stable scalar condensate background. By the analogy with classical mechanics, it can be shown that there may exist solutions of the nonlinear equations of motion that describe dips or rises in the spatially-uniform charge distribution. We also present explicit analytical solutions for some of such objects and examine their properties.
Introduction
Spatially-homogeneous solutions in the complex scalar field theories with the global U(1)invariance have been proven to be very useful in different branches of modern physics. Perhaps the most known example of their application to cosmology is the Affleck-Dine mechanism of baryogenesis [1]. Evolution of the spatially-homogeneous condensate in the Early Universe, which is usually studied numerically, is subject to certain restrictions in order to yield a successful cosmological scenario [2]. For instance, a possible spatial instability of the condensate results in its fragmentation into nontopological solitons -Q-balls. The latter, in turn, can be a crucial ingredient in the solution of the dark matter problem [3]. This makes inhomogeneous classical solutions also of considerable interest in cosmology. Their another application is related to the possibility of production of gravitational waves [4][5][6].
Emergence of localized stationary configurations was first discovered in the systems whose evolution is governed by the Nonlinear Schrödinger Equation (NSE) [7]. In nonlinear optics these solutions are known as bright solitons. Similar solutions in a theory of the complex scalar field in four dimensional space-time, possessing the global U(1)-charge, were called "Q-balls" by S. Coleman [8]. NSE admits another interesting class of solutions corresponding to "dark solitons" in a stable medium [9]. They have the form of a dip in a homogeneous background. It is important to note that these solutions are of the topological nature. In particular, they cannot be deformed into the surrounding condensate by a finite amount of energy. Therefore, the question arises about the existence and properties of the JHEP12(2016)032 analogs of dark solitons in the complex scalar field theory, where they presumably can be analyzed by the same methods as the ordinary Q-balls.
The existence of the dip-in-charge-like solutions in scalar field theories is not a manifestation of some specific properties of these theories. In fact, such solutions exist for the usual "Mexican hat" scalar field potential. To see this, let us consider the complex scalar field φ with the Lagrangian density If λ > 0, the theory admits the well-known real static solution -the kink, which has the form It can be generalized to a class of stationary but not static solutions as follows, where ω is a constant parameter and Then, for the U(1)-charge density ρ we get which clearly has a dip around the origin x = 0. The kink solution (1.2) is unstable in this model and can be interpreted as a sphaleron in the Abelian gauged version of (1.1), see [10] for details. Another distinctive feature of the model (1.1) is the stability of the charged condensate as long as λ > 0. We note that the solution (1.3) requires an infinite amount of energy to be deformed into the spatially-homogeneous condensate of the same charge or frequency. In this paper we present the soliton-like localized solutions in a theory of the complex scalar field, which describe inhomogeneities in the charge distribution of the condensate and can be deformed into the spatially-homogeneous condensate of the same frequency using a finite amount of energy. We will refer to such solutions as "Q-holes" or "Q-bulges" in order to stress their similarity to the ordinary Q-balls and to the "holes in the ghost condensate" of [11]. In the next section we will argue in favor of existence of these solitons with the help of Coleman's overshoot-undershoot method and survey their general properties. In section 3 we will present and examine the explicit examples of Q-holes in one and three spatial dimensions. In section 4 we will discuss the classical stability (in fact, instability) of Q-holes and Q-bulges. In Conclusion we will briefly discuss the obtained results.
Suppose that the potential V (φ * φ) has a minimum (local or global) at φ = 0. Then the theory may admit localized configurations called Q-balls [8,12]. They are solutions to the corresponding equations of motion of the form x 2 , f (r) > 0 for any r, and ∂ r f (r)| r=0 = 0. With the ansatz (2.2), the equations of motion for the field φ reduce to the equation for the function f , It is a well-known observation that the latter equation can be thought of as an equation of motion of a point particle in classical mechanics, with the "coordinate" f and the "time" x (or r), that moves in the effective potential For d > 1, the motion of the particle is also affected by the "friction" term ∼ 1 r df dr . This mechanical analogy is illustrated in figure 1. Specifically, the particle begins to move at the "moment of time" x = 0 (or r = 0) from the "coordinate" f = f max and reaches the vacuum state f = 0 at the "time" x → ∞ (or r → ∞). 1 Note that for d > 1, U ω (f (0)) > U ω (0) because of the "friction" term. The reasoning outlined above, despite being simple, can help to unveil a new class of solutions in the case when the potential U ω (f ) possesses other local (or global) maxima except that at f = 0. In the rest of the paper we will explore this case in detail.
Time-dependent scalar condensate, Q-holes and Q-bulges
Suppose that for certain values of ω, the effective potential U ω (f ) develops a maximum point, dU at some constant f c = 0. 2 Then a family of spatially-homogeneous time-dependent solutions appears in addition to the vacuum solution f ≡ 0, Without loss of generality, we take f c to be real such that f c > 0. The solutions (2.6) represent the scalar condensate and have an infinite total charge and energy. As will be shown below, they can be stable under small fluctuations. Note that, in general, the existence of extra maxima of the effective potential U ω (f ) does not imply the existence of extra minima of the initial potential V (f ). We can now use the mechanical analogy described in the previous section to advocate the existence of inhomogeneous solutions of the form φ(t, x) = f (r)e iωt in addition to the time-dependent scalar condensate (2.6). 3 Here we will discuss two types of such solutions. The mechanical analogy for the first type is presented in figure 2. The crucial feature of these solutions, which we will refer to as "Q-holes", is expressed by the inequality f (∞) > f (0). That is, they can be thought of as "dips" in the homogeneous charged condensate.
The mechanical analogy for the second type is presented in figure 3. Solutions of this type obey the inequality f (∞) < f (0). Hence they can be thought of as "rises" in the homogeneous charged condensate. For this reason, we will call such solutions "Q-bulges". We can see from figure 3 that the existence of Q-bulges demands a specific high energy behavior of the effective potential. Apart from this fact, from the point of view of the mechanical analogy, Q-bulges lie close to Q-balls.
JHEP12(2016)032
Let us briefly discuss the main properties of Q-holes and Q-bulges. First, their asymptotes at infinity, imply that the frequency ω of the Q-hole (Q-bulge) is fixed by the frequency of the scalar condensate of magnitude f c . Second, the charge and the energy of the Q-hole (Q-bulge) are defined in the standard way, (2.8) When being calculated at a given Q-hole (Q-bulge) configuration, the expressions (2.8) are clearly infinite. However, since f (r) → f c as | x| → ∞, it is reasonable to compute the charge and the energy of the Q-hole (Q-bulge) relative to the corresponding background solution (2.6). Hence we define the renormalized charge and energy as follows, (2.9) Here Q c and E c are the scalar condensate charge and energy. The quantities (2.9) are finite as we will explicitly demonstrate below. Furthermore, they obey the following relation,
JHEP12(2016)032
which is analogous to that for Q-balls. 4 Last, but not least, the following key property of Q-holes (Q-bulges) can be deduced from eqs. (2.9), The relation (2.11) is also well known to be valid for Q-balls (with Q ren and E ren substituted by the genuine charge and energy of the Q-ball). This justifies the meaningfulness of our notions of Q ren and E ren . Note that since for Q-holes the inequality f ( x) < f c holds for all | x| < ∞, the sign of Q ren is opposite to the sign of ω and the renormalized energy E ren is not positive definite. To prevent possible confusion, we stress again that E ren is defined with respect to the energy of the corresponding background solution and has no absolute meaning. Hence, unlike Q-balls, it is not possible to select a universal ground energy level from which one can count the energy of Q-holes (this reasoning holds for Q-bulges as well). Instead, the energy of each Q-hole (Q-bulge) must be renormalized in a unique way. As for Q-bulges, ωQ ren > 0 in this case, leading to E ren > 0.
We would like to point out once again that, although dUω(f ) df f =fc = 0, the original potential V (f ) may not have zero derivatives everywhere except the origin. Therefore, in general, the asymptotes of Q-holes (Q-bulges) do not approach any false vacuum state, contrary to what our intuition says about the properties of solitons.
Explicit examples
In this section we consider the model allowing for analytical investigation of the scalar condensate and Q-holes. For this purpose it is convenient to choose a simple piecewiseparabolic potential of the model [14], θ is the Heaviside step function with the convention θ(0) = 1 2 . The potential (3.1) consists of two parabolic parts joined together at the point |φ| = v. It possesses at least one minimum at |φ| = 0. It is easy to see that for < 1 there are no other minima, while for > 1 the second (local or global) minimum is located at |φ| = v. The potential (3.1) can be generalized by using different masses for large and small values of |φ|.
The potential (3.1) does not admit the existence of Q-bulges. 5 In principle, Q-holes and Q-bulges are of the same kind -both solutions describe inhomogeneities in the scalar condensate and possess the same properties described by eqs. (2.9)-(2.11). Meanwhile, a possible negativity of E ren for Q-holes seems to be their peculiar feature, which makes their analysis more interesting. For this reason, we select Q-holes for the more detailed investigation.
From this expression it follows that 0 ≤ |ω| < M . On the other hand, the condition f c > v implies that ω 2 > M 2 (1 − ), if ≤ 1, and ω 2 ≥ 0 otherwise. Combining these restrictions, we obtain the allowed region for ω, Next we determine the charge and the energy of the condensate. When 0 < f c < v, they take the form and for f c > v we have where ω is bounded by eq. (3.4). It is clear that the total charge and energy of the condensate are infinite due to the infinite volume of space. We see that the theory contains two series of condensate solutions. The solutions of the series with f c < v allow to interpret them as collections of particles of mass M . Indeed, for these solutions ρ e = M ρ q . The solutions with f c > v, despite being condensate, cannot be interpreted in this way.
Let us now examine the classical stability of the condensate under small fluctuations.
are classically stable -the corresponding fluctuations satisfy the standard Klein-Gordon equation. In order to study the stability of the second series,
JHEP12(2016)032
we write the scalar field in the form where a and b are complex constants and k = (k 1 , . . . , k d ). Then we substitute this representation into the equation of motion for the scalar field and obtain a linearized equation for the fluctuations above the condensate solution. The stability (instability) of the condensate is manifested in the absence (existence) of the solutions of the linearized equation with imaginary k 0 . Straightforward calculations give the following equation on k 0 and k, whose solutions are given by we obtain k 2 0 ≥ 0, i.e., the scalar condensate is stable under small fluctuations. 6
Q-holes in (1+1)-dimensional space-time
Let us now study in detail Q-holes in the (1+1)-dimensional space-time. The corresponding solutions of eq. (2.3) take the form defines the matching point at which f (X) = v. Since the argument of the inverse hyperbolic tangent should be less than unity and not less than zero, from eq. (3.16) one finds Note that the r.h.s. of the inequality (3.17) can be obtained using the mechanical analogy. It is clear from figure 2 that if U ω (f c ) ≥ 0, then the particle will never reach the top of 6 In the general case, the scalar condensate is stable (i.e., k 2 0 ≥ 0 for any k) if d 2 V df 2 f =fc − 1 fc dV df f =fc ≥ 0, see [15]. For the renormalized charge and energy we get Let us mention some properties of Q ren and E ren following from eqs. (3.19) and (3.20). 7 This statement remains true in the (d + 1)-dimensional case. 1. |Q ren | → ∞ and |E ren | → ∞ as |ω| → M 1 − /2. Indeed, in this case X → ∞, whereas f (x) tends to the vacuum solution f ≡ 0 at |x| < X.
3. For ω = M √ 1 − and < 1, we get Q ren = 0 and E ren = 0. Indeed, in this case Some typical examples of Q ren (ω) and E ren (ω) dependencies are presented in figures 5-7. We see that the renormalized energy E ren can take positive as well as negative and zero values. As was explained in section 2.2, this result is expected and should not surprise. As a useful check of validity of our calculations, one can show numerically that eq. (2.11) fulfills for Q ren and E ren given by eqs. (3.19) and (3.20).
Let us pause here to make a general comment on a choice of regularization scheme for Q-holes (and Q-bulges). Eqs. (2.9) give a natural way to obtain finite values for the charge and energy of the Q-hole (Q-bulge). The corresponding quantities Q ren and E ren satisfy all the relations they are expected to satisfy as the "charge" and the "energy" of the soliton. We can expect, therefore, that any consistent regularization must lead to the same expressions for Q ren and E ren . One such scheme corresponds to putting the system in a box of size 2L (with the natural boundary conditions f (−L) = f (L) and taking the limit L → ∞. This procedure endows eqs. (2.9) with the precise meaning. We conclude that the negativity of E ren at some ω is an inherent property of Q-holes and not a consequence of a particular choice of regularization.
Q-holes in (3+1)-dimensional space-time
The analysis of Q-holes in three spatial dimensions lies closely to that in the (1 + 1)dimensional case. The spherically symmetric ansatz for the scalar field reads as follows, where R is defined by Contrary to the (1 + 1)-dimensional case, the latter equation has no analytical solutions for R. However, it can be solved numerically. Acting exactly as in the (1 + 1)-dimensional case, with the use of the mechanical analogy one can obtain the relation ω 2 < 1 − 2 M 2 leading together with eq. (3.4) to is interesting to note that both in 1 + 1 and 3 + 1 dimensions the model admits Q-ball solutions. 8 As can be easily shown using the mechanical analogy, they exist if An example of the Q-hole solution in 3 + 1 dimensions is presented in figure 8. The renormalized charge and energy are given by 2. For ω = 0 (if allowed, i.e., if > 1), Q ren = 0 and E ren > 0 due to eq. (2.10). This is an expected result for the solution with ω = 0, which is just a sphaleron.
3. For ω = M √ 1 − and < 1, we have Q ren = 0 and E ren = 0. Indeed, in this case Some typical examples of the Q ren (ω) and E ren (ω) dependencies are presented in figures 9-11. Again, we see that the renormalized energy E ren can be positive, negative 9 or JHEP12(2016)032 zero. As in the (1 + 1)-dimensional case, one can check numerically that the relation (2.11) fulfills for Q ren and E ren given by eqs. (3.26) and (3.27).
Classical instability of Q-holes and Q-bulges
There is a well-known classical stability criterion for Q-balls [16,17], which states that Qballs with dQ dω < 0 are classically stable. 10 It is easy to see that the method used to obtain this criterion (as well as the similar approach used in [18] for obtaining the stability criterion for NSE) cannot be generalized straightforwardly to the case of Q-holes and Q-bulges with the (renormalized) charge and energy given by Q ren and E ren . Indeed, contrary to the case of ordinary Q-balls, whose asymptotics at r → ∞ are the same for any value of ω, for Q-holes and Q-bulges the asymptotic behavior is different for different ω. Moreover, their total charge and energy are infinite. Despite these obstacles, one can give some arguments in favor of classical instability of these solitons, to which we now proceed.
As was mentioned in section 3.2, one can put the system in a box of finite size and regard Q ren as a difference Q − Q c between the charges of soliton and condensate computed in this box. The box implies boundary conditions to be imposed on the solutions. For example, in 1 + 1 dimensions with the size of the spatial dimension 2L, one can demand the periodic boundary conditions f (−L) = f (L) and df dx x=−L = df dx x=L = 0. The solution obeying these conditions can be easily obtained for the potential (3.1). It is also easy to check that this solution does not have nodes and tends to (3.15) for L → ∞. Hence one can expect that, as long as the characteristic scale of the soliton l is much smaller than the size of the box L, the value of Q ren lies very close to its limit at L → ∞. Since Qren Q 1 for l L, for such solutions one can write Q ≈ Q c . Since Q is finite in the box, we have no obstacles in the derivation of the Q-criterion [16,17] that would forbid us to apply it to our case. Choosing the size of the box to be sufficiently large, we have dQ dω ≈ dQ c dω . (4.1) We see that the sign of dQ dω , determining the (in)stability of the solution, follows from the sign of dQc dω . We now ask what the sign of dQc dω is. We will be interested in the case of a stable scalar condensate, like the one in eq. (3.10), for which the condition holds [15]. The condensate charge is 10 Speaking more precisely, this condition must be supplemented by an additional requirement on the number of negative eigenvalues of some operator, which arises when considering perturbations above the Q-ball.
JHEP12(2016)032
additional term ∼ √ φ * φ that it contains. There still remains a purely numerical way to investigate the classical instability in the theory with the potential (3.1). The numerical analysis may also clarify whether or not Q-holes or Q-bulges lead to fragmentation of the scalar condensate into Q-balls. We leave the thorough investigation of these issues for future work.
Conclusion
In this paper we have presented Q-holes and Q-bulges -two classes of localized configurations representing dips and rises in the spatially-homogeneous charged time-dependent scalar condensate. The important feature of these configurations is that they can be deformed into the condensate by a finite amount of energy. We expect that inhomogeneities of this type may be crucial for the nonlinear dynamics of the condensate in the Early Universe, in particular, for its fragmentation into Q-balls. We have also found the explicit solutions for Q-holes in the model with a simple piecewise-parabolic potential proposed in [14], and examined their properties. It has been shown that the renormalized energy E ren of Q-holes can take both positive, zero and negative values.
In this paper, we did not address in detail the question of quantum stability of Qholes and Q-bulges. Of course, if "ordinary" particles interact with Q-holes and Q-bulges through, say, the combination φ * φ (which is time-independent for these solutions and for the scalar condensate), Q-bulges and Q-holes with E ren > 0 can decay into such particles. Moreover, one may expect that Q-bulges and Q-holes can be created (even spontaneously) in processes involving these particles. So, this case is rather standard.
However, the case of excitations of the scalar field φ above the condensate, which are supposed to form the corresponding scalar particles, is not so trivial. For ordinary Q-balls one can define a standard vacuum far from the core of the soliton and apply a standard quantization procedure to the perturbations above this vacuum. For the time-dependent scalar condensate, excitations on top of the background have nonstandard dispersion laws like the one in eq. (3.12). Moreover, one can check that even the charge of the excitation with respect to the condensate charge also has a very nonstandard form, and the standard quantization procedure can not be applied to such excitations. It would be interesting to see what should be defined as "particles" related to the excitations of the form (3.11) on top of the time-dependent background, and what must be the consistent quantization procedure, providing us with the correct definition of energy of the quantum excitations. These questions call for further detailed investigation. | 5,297.6 | 2016-09-18T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Genistein and exercise modulated lipid peroxidation and improved steatohepatitis in ovariectomized rats
Background The prevalence of nonalcoholic steatohepatitis (NASH) in menopausal women is increasing, but current treatments have not been proven effective. The objective of this study was to investigate the treatment effects of genistein and running exercise in ovariectomized (OVX) rats with NASH. Methods Thirty-six female Sprague-Dawley rats were divided into 6 groups, control; OVX with standard diet; OVX with high fat and high fructose (HFHF) diet for 4 weeks; OVX with HFHF and genistein treatment (16 mg/kg BW/day) for 5 weeks (OVX + HFHF+GEN); OVX with HFHF and moderate intensity exercise for 5 weeks (OVX + HFHF+EX); OVX with HFHF and combined treatments (OVX + HFHF+GEN + EX). Serum interleukin-6 (IL-6) levels, hepatic free fatty acid (FFA), hepatic glutathione (GSH), and hepatic malondialdehyde (MDA) levels were measured. Liver histology was examined to determine NASH severity. Results OVX + HFHF group had the highest levels of hepatic FFA compared with OVX and control groups (5.92 ± 0.84 vs. 0.37 ± 0.01 vs. 0.42 ± 0.04 nmol/mg protein, respectively, p < 0.01). Serum IL-6 levels were significantly elevated in both OVX and OVX + HFHF groups as compared with controls (112.13 ± 6.50 vs. 121.47 ± 3.96 vs. 86.13 ± 2.40 pg/mL, respectively, p < 0.01). In OVX + HFHF group, hepatic MDA levels were higher, while GSH levels were lower than in OVX and control groups (MDA; 0.98 ± 0.04 vs. 0.82 ± 0.02 vs. 0.78 ± 0.03 nmol/mg protein, and GSH; 46.01 ± 0.91 vs. 55.21 ± 1.40 vs. 57.94 ± 0.32, respectively; p < 0.01 for both). Comparing with OVX + HFHF group, rats that received genistein, exercise and combined treatments demonstrated an improvement in liver histopathology, decreased levels of hepatic FFA (1.44 ± 0.21 vs. 0.45 ± 0.04 vs. 0.49 ± 0.05 nmol/mg protein, respectively, p < 0.01), serum IL-6 (82.80 ± 2.07 vs. 83.47 ± 2.81 vs. 94.13 ± 1.61 pg/mL, respectively, p < 0.01), and hepatic MDA (0.80 ± 0.03 vs. 0.76 ± 0.02 vs. 0.76 ± 0.03 nmol/mg protein, respectively, p < 0.01). Conclusions Genistein and moderate intensity exercise were effective in reducing the severity of NASH in OVX rats through the reduction in liver inflammation, oxidative stress and liver fat contents.
Background
The spectrum of nonalcoholic fatty liver disease (NAFLD) comprises simple steatosis, nonalcoholic steatohepatitis (NASH), progressive fibrosis, cirrhosis and in some cases hepatocellular carcinoma. NASH is defined histologically by the presence of steatosis, hepatocyte ballooning, and lobular inflammation [1]. The derangement in lipid and glucose metabolism is a common occurrence in NAFLD. High-calorie diet and high fructose consumption are associated with increased severity in patients with NASH [2]. In addition, excessive energy intake and fat accumulation in the liver can induce oxidative stress and hence severe lipid peroxidation [3].
The prevalence of NAFLD increases in postmenopausal women as compared with pre-menopausal women suggestive of a protective effect of estrogen on NAFLD [4]. Receiving more than 6 months of hormone replacement therapy has been shown to decrease the frequency of NAFLD in post-menopausal women by 34% [5]. Despite its beneficial effects in improving NASH pathology, long term use of hormone replacement therapy increases the risk of breast, ovarian and endometrial cancers. An alternative medicine that can improve NAFLD in the settings of estrogen deficiency without higher risks of cancers is therefore an attractive option.
Genistein (4′, 5, 7-trihydroxyisoflavone, supplement Figure 1), a phytoestrogen which can be found in soybean products, has been shown to prevent lipid accumulation [6]. Moreover, genistein is not associated with increased prevalence of breast cancer [7]. A previous animal study suggested that genistein (16 mg/kg) improved histopathology of NASH by upregulating hepatic PPARγ and reducing oxidative stress markers and inflammatory cytokine in ovariectomized (OVX) rats fed with high fat high fructose (HFHF) diet [8]. Another animal study similarly showed that genistein reduced %NFκB-positive cells and hepatic free fatty acid levels and improved liver histology in both rats with intact ovary and post-ovariectomy fed with HFHF diet [9]. Using specific-pathogen-free male Sprague Dawley rats fed with high fat diet as a NASH model, Yin and colleagues found that high dose genistein decreased NASH activity score, hepatic triglyceride levels, insulin resistance, plasma endotoxin levels and hepatic toll-like receptor-4 expression [10].
Exercise has been shown to improve NASH and metabolic profiles in both animal and human studies [11]. Aerobic exercise in menopausal women improved lipid profiles after 24 weeks of training as compared with a control group [12]. In an animal model of high fat diet induced NASH, moderate intensity exercise training improved histological features of NASH with reduction in markers of hepatic stellate cell activation and extracellular matrix remodeling [13]. Furthermore, combination of exercise and isoflavone for 5 weeks could reduce plasma triglyceride (TG) in male rats [14]. However, it remains unclear whether combination treatment with genistein and running exercise has an additive effect on attenuating NASH in OVX rats fed with HFHF diets compared with either treatment alone. The present study aimed to determine the beneficial effects of genistein, running exercise and combined therapy on steatosis, inflammation and oxidative stress in OVX rats with NASH induced by HFHF diet. We used OVX female rats as a model for post-menopausal women.
Animal preparation
Eight-week-old female Sprague-Dawley rats weighing 200-220 g from the Nomura Siam International Co., Ltd. Bangkok, Thailand were used. The protocol was approved by the Animal Care and Use Committee at the Faculty of Medicine, Chulalongkorn University (IRB No. 018/2561). All animals were kept at the Animal Center, Faculty of Medicine, Chulalongkorn University under strictly hygienic conventional system in a controlled temperature room at 25 ± 1°C with a normal 12 h light-12 h dark cycle. All rats had free access to purified drinking water. Each group of rats (described in the following paragraph) was housed in a separate stainless steel cage with solid bottom and open top. After one week of acclimatization to the new environment, rats were randomized to the control or OVX groups. Bilateral ovariectomy was performed in OVX groups using double dorsolateral approaches [15]. In brief, after the animal was anesthetized with intraperitoneal injection of sodium thiopental, skin incision was performed and muscle layers were cut. First, both ovaries were identified. Distal uterine horns were then ligated and both ovaries were removed. After ovariectomy, the surgical wound was closed layer by layer. OVX rats were individually caged during a 2-week recovery period. The presence of anestrous stage on vaginal smear at two weeks after the operation was used to confirm the completion of ovariectomy [16].
Running protocol was as follows. All rats in the exercise group performed treadmill running five days per week for five weeks. Exercise period progressed from 10 min to 30 min by adding five minutes per week. Speed was maintained at 15 m/minute for the first three weeks then increased to 20 m/minute for the remaining two weeks. The distance run during the first week was 0.15 km/day, the second week was 0.225 km/day, the third week was 0.3 km/day, the fourth week was 0.5 km/day and the fifth week was 0.6 km/day. This protocol was considered a moderate intensity exercise based on results from prior studies [17,18].
At the end point of the experiment, animals were euthanized using overdose sodium thiopental injection intraperitoneally (dose > 50 mg/kg) after an 8-h fast. Livers were surgically removed, weighed, and cut into several pieces. Half of liver specimens were immediately frozen in liquid nitrogen and stored at − 80°C until further analyses (Oil Red O stain, and MDA, hepatic fatty acid and GSH measurement). The remaining liver specimens were fixed in 10% formaldehyde for histopathological examination. Blood samples were obtained through cardiac puncture. Serum was then separated by centrifuging the blood at 2000 rpm (r.p.m.) for 20 min at 4°C. Serum samples were stored at − 80°C until further analysis. Serum IL-6 levels, hepatic contents of free fatty acid (FFA), gluthatione (GSH), and malondialdehyde (MDA) were measured. Liver histology was examined to determine NASH severity. All procedures were performed at the Alternative and Complementary Medicine for Gastrointestinal and Liver Diseases Research Unit, Chulalongkorn University. The primary outcome was to determine the effects of genistein, moderate intensity exercise and combined treatment on liver histology in rats with NASH. Secondary outcomes included changes in liver fat content and inflammatory and oxidative stress markers in NASH and treatment groups.
The HFHF diet used in this study was modified from Pickens MK formula [19] and contained 55% fat (from palm oil), 35% carbohydrate (consisted of 20% fructose and 80% starch), and 10% protein from albumin. Standard diet contained 7% fat, 47% carbohydrate, and 27% protein (Perfect companion group Co., Ltd., Thailand). Rats had free access to food ad libitum.
Genistein powder at the dose of 16 mg/kg body weight (Cayman Chemical Company, USA), was dissolved in 0.1% DMSO prior to administration to each rat by oral gavage once daily in the morning for 5 weeks.
Moderate intensity exercise protocol was a treadmill running at 80%VO 2 max 3 times a week for 5 weeks as described previously [14]. The speed and duration of the treadmill exercise were gradually increased until the animals could maintain a running speed of 18-20 m/min for 30 min/day on a 0% incline.
Hepatic free fatty acid measurement
Liver lipid was extracted with a lipid extraction kit (Bio-Vision, Inc., CA, USA) according to the manufacturer's instructions then suspended in 50 μl of lipid suspension buffer and sonicated for 15-20 min at 37 o C. The lipid was used to quantify the amount of FFA by colorimetric assays (BioVision, Inc., CA, USA). Tissue protein concentration was determined using a Pierce BCA Protein Assay Kit (Thermo Scientific, USA). Results were expressed in nmol per mg of hepatic tissue protein.
Fat droplet measurement
Liver specimens were frozen, sliced with cryostat, and stained with Oil Red O staining. The slides were examined under light microscopy with 400x magnification to detect fat droplets. The amount and individual size of the lipid droplets from three sections of each rat were analyzed using ImageJ program and presented as a percentage of surface area that were stained red [20].
Serum IL-6 measurement
Enzyme-linked immunosorbent assay (ELISA) kit was used to measure the serum levels of IL-6 by strictly following the manufacturer's instructions (R&D Systems, Inc. USA). The absorbance was read at 450 nm.
Hepatic MDA and GSH measurements
Lipid peroxidation was determined by measuring MDA levels in liver tissue using thiobarbituric acid reaction (TBARS Assay Kit, Cayman Chemical Company, USA). Briefly, liver was homogenized and sonicated on ice for 15 s. Supernatants were obtained after centrifugation at 1600 x g for 10 min at 4°C. The absorbance of the supernatant fraction was read at 530 nm and results were expressed in nmol per mg of hepatic tissue protein.
To determine hepatic GSH, tissue was homogenized and assayed using a Glutathione Assay Kit (Cayman Chemical Company, USA). Briefly, Liver tissue was washed and homogenized before being centrifuged to obtain the supernatants which were then deproteinated. The sulfhydryl group of glutathione reacts with DTNB to form TNB of which the absorbance was measured at 405 nm. The results were multiplied by sample dilution of deproteination and expressed in micro molar (μM).
Statistical analysis
Data from all animals (n = 36) were included in the analyses. Continuous variables were compared using one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. Results were presented as mean ± standard error of the mean (SEM), and p value of less than 0.05 was considered statistically significant. All analyses were performed using SPSS for Windows version 17.0 (SPSS, Inc., Chicago, IL, USA).
Effects of genistein, exercise, and combined treatment on body weight changes
At the beginning of the experiment, the body weight of rats in each group were not different (p = 0.988). At the end of the experiment, the body weight of OVX rats markedly increased compared with control group (ΔBody weight 165.67 ± 5.51 vs. 78.13 ± 4.76, p < 0.01). A significant reduction in the body weight was observed in the all groups of OVX rats fed with HFHF diet compared with rats fed with standard diet. The body weight changes after treatment with genistein, running exercise, or combining genistein and exercise were not significantly different among treatment groups compared with OVX rats fed with HFHF diet.
Alterations in gross appearance of the liver and liver index
Gross examination of the liver was performed and each liver was weighed. The liver of OVX and OVX + HFHF rats showed yellowish discoloration as compared to normal liver appearance in a control group. Liver indices, the ratio between liver weight and body weight, were significantly higher in both OVX and OVX + HFHF groups compared with a control group (3.57 ± 0.09% vs. 4.43 ± 0.30% vs. 2.82 ± 0.06%, respectively; p < 0.01). Treatment with genistein, exercise or in combination could normalize liver appearance (Fig. 1a) and reduce liver indices to the level of a control group (2.86 ± 0.07 vs. 2.79 ± 0.12 vs. 2.93 ± 0.03, respectively; p < 0.01) (Fig. 1b).
Effects of genistein, exercise, and combined treatment on cytokine levels and oxidative stress markers
As shown in Fig. 5b, serum levels of interleukin-6 (IL-6) were higher in both OVX and OVX + HFHF groups than in a control group (112. 13 (Fig. 5c) and lower GSH (Fig. 5d) 0.76 ± 0.03 nmol/mg protein, respectively p < 0.01). On the contrary, genistein, exercise and combined treatment could not restore hepatic GSH back to normal levels.
Discussion
Estrogen deficiency in post-menopausal women has been shown to be associated with visceral fat accumulation, insulin resistance and NAFLD [23]. An animal model showed that estrogen when bound to liver estrogen receptor-α (ESR1) could inhibit hepatic gluconeogenic genes such as phosphoenolpyruvate carboxykinase 1 (Pck-1) and glucose 6-phosphatase (G6Pase) and decrease hepatic lipogenesis through a down-regulation of fatty acid synthase (Fas) and acetyl-CoA carboxylase (Acc1) genes [24]. Furthermore, experimental studies demonstrated that estradiol reduced production of reactive oxygen species (ROS) and lipid peroxidation, thus inhibiting IkappaB-α degradation and nuclear factor-kappaB (NF-kB) activation [25]. With the aforementioned evidence, it was not surprising to see that the OVX rats in our study showed a higher degree of liver fat accumulation, inflammation and hepatocyte injury than in control rats. High fructose consumption has been associated with increased de novo lipogenesis, insulin resistance and visceral adiposity in overweight/obese adults [26]. Animal studies showed that adding fructose in the diet significantly increased expression of lipogenic genes, such as Acc1, Fas and stearoyl CoA desaturase (SCD1) than high fat diet alone [27]. Moreover, fructose metabolites could directly activate transcriptional factors, such as sterolregulatory element binding protein (SREBP)-1c and carbohydrate-response element binding protein (ChREBP), thus enhancing hepatic lipogenesis [27]. Fructose metabolism also leads to hepatic ATP depletion, formation of uric acid, ROS production and liver inflammation [28]. Similarly, our results showed that OVX + HFHF diet could induce a more severe form of NASH pathology than that seen in OVX alone. In line with histopathological changes, OVX + HFHF diet increased hepatic FFA contents, markers of lipid peroxidation (MDA) and inflammation (IL-6), and reduced hepatic levels of natural antioxidant (GSH).
Genistein has a similar structure to 17β-estradiol and can stimulate the transcriptional activity of estrogen receptor alpha and beta which makes it an attractive alternative to estrogen replacement therapy in post-menopausal women with NAFLD [29]. Independent of the estrogen effect, genistein could also induce the expression of peroxisome proliferators-activated receptor α (PPARα) which in turn regulates fatty acid β-oxidation pathways in the liver. The activation of PPARα prevents triglyceride accumulation and is associated with histological improvement of NASH in a human study [30,31]. Our results conformed to other observation that genistein could reduce hepatic steatosis (as evidenced by decreased fat accumulation on Oil Red O stain and hepatic FFA contents), liver inflammation (both by histology and serum marker) and oxidative stress. Similar to our study, Susutlertpanya and colleagues demonstrated that genistein treatment decreased hepatic MDA and TNF-α levels and enhanced PPAR-ϒ expression along with the improvement in liver histology in rats with NASH. A recent study suggested that genistein might reduce inflammation in NASH through the reduction in endotoxin levels and toll-like receptor 4 gene expression [10].
Clinical and animal experiments showed that aerobic exercise was beneficial for NASH through several mechanisms [11]. Cho and colleagues demonstrated that treadmill running could significantly increase PPARα and carnitine palmitoyl transferase I (CPT-1) expression and decrease SREBP-1c, lipin1, and FAS expression leading to the improvement in NAFLD in C57BL/6 mice fed with high fat diet. They also showed that exercise enhanced the production of 5′ AMP-activated protein kinase (AMPK) leading to the increase in fatty acid β-oxidation and the attenuation of lipogenesis [32]. Furthermore, continuous running exercise has been shown to decrease lipid peroxidation and protein carbonylation in mouse liver [33]. In accordance with other studies, our results showed that running exercise could elicit the improvement in liver histology and the reduction in hepatic FFA and markers of inflammation (IL-6) and lipid peroxidation (MDA) in OVX + HFHF rats. We, however, did not evaluate the effect of exercise on insulin sensitivity in this study. Previous unpublished data from our group showed that insulin sensitivity did not change significantly in rats with NASH compared with control rats, possibly due to weight loss seen in our NASH model. Therefore, in the current study, we mainly focused on proving that this exercise protocol could reduce the severity of the NASH features by other mechanisms. Our results did show that it could improve NASH histopathology, probably by the reduction in inflammation and oxidative stress. Unfortunately, we cannot prove the association between this exercise protocol and insulin sensitivity. This is the limitation of our study and would evaluate this aspect in the next study.
In this study, neither genistein nor exercise could restore GSH levels in OVX + HFHF rats to the levels of a control group. A study by Wiegand et al. showed similar results in that genistein had no effects on hepatic enzyme activity of catalase (CAT), glutathione peroxidase (GPx) and superoxide dismutase (SOD) or glutathione levels [34], while an in vitro study demonstrated genistein increased the expression of GPx but not CAT or SOD. Data are conflicting regarding the effect of exercise on oxidative stress and it appears that the intensity and type of exercise may have different effects on GSH. Elokda and colleagues showed that combined aerobic and circuit weight training exercise increased GSH levels, while Ilhan et al. reported a more prominent increase in lipid peroxidation and a reduction in GSH in combined aerobic-anaerobic exercise group as compared to other types of exercise [35].
Interestingly, combining genistein and exercise did not provide an additional benefit on NAFLD compared with either treatment alone. Although there were no other studies that directly evaluated the effects of these treatment on NAFLD, a proteomics study using ovariectomized rats indicated that isoflavone and exercise combination therapy could favorably modulate hepatic protein expression toward normal values than either treatment alone [36]. Given that either treatment alone was so efficacious that it almost normalized histological and biochemical changes, we hypothesized that a small added benefit from combined treatment might not be apparent in this experimental study.
There were few limitations to our model. First, we used young OVX rats as a model for post-menopausal women. Applying these results to elderly women needed to take into consideration the effects of aging on treatment efficacy. Second, although our rats manifested on liver histology as NASH, they were lean because our model closely resembled the methionine-choline deficient diet model. This was in contrast with most patients with NASH who were obese. There might be differences in treatment efficacy between these two phenotypes.
Conclusion
We found that estrogen deficiency induced by ovariectomy could lead to NAFLD development in this rat model with a more severe pathology when HFHF diet was added. Genistein and moderate intensity exercise could reduce fat accumulation, liver inflammation and oxidative stress, thus improving histological changes of NASH. Combining genistein and exercise did not provide additional benefits from either treatment alone. This is the first study to evaluation the combined effect of genistein and exercise in NASH. Clinical studies are warranted to confirm the beneficial effects of genistein in post-menopausal women with NAFLD. | 4,732 | 2020-06-01T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Rip2 Participates in Bcl10 Signaling and T-cell Receptor-mediated NF- (cid:1) B Activation*
Engagement of the T-cell receptor (TCR) initiates a signaling cascade that ultimately results in activation of the transcription factor NF- (cid:1) B, which regulates many T-cell functions including proliferation, differentiation and cytokine production. Herein we demonstrate that Rip2, a caspase recruitment domain (CARD)-containing serine/threonine kinase, plays an important role in this cascade and is required for optimal TCR signaling and NF- (cid:1) B activation. Following TCR engagement, Rip2 associated with Bcl10, a CARD-containing signaling com-ponent of the TCR-induced NF- (cid:1) B pathway, and induced its phosphorylation. Rip2-deficient mice were defective in TCR-induced NF- (cid:1) B activation, interleukin-2 production, and proliferation in vitro and exhibited defective T-cell-dependent responses in vivo . The defect in Rip2 (cid:2) / (cid:2) T-cells correlated with a lack of TCR-induced Bcl10phosphorylation.Furthermore,deficiencyinBcl10-dependentNF- (cid:1) B activation could be rescued in Rip2 (cid:2) / (cid:2) embryonic fibroblasts by exogenous wild-type Rip2 but not a kinase-dead mutant. Together these data define an important role for Rip2 in TCR-induced NF- (cid:1) B activation and T-cell function and highlight the significance of of in of
Many diverse stimuli activate NF-B by inducing the phosphorylation and destruction of inhibitory molecules known as the IBs that retain NF-B in the cytoplasm (1). The IB kinase (IKK) 1 complex, composed of two kinase subunits, IKK␣ and IKK, and a non-catalytic subunit, NEMO/IKK␥, is responsible for the phosphorylation of the IBs. The association of Bcl10 and CARMA1 (CARD11), two caspase-recruitment domain (CARD)-containing proteins, has been shown to be essential to the transduction of the signal from the T-cell receptor (TCR) to the IKK complex (2). Mice deficient for either Bcl10 or CARMA1 display profound defects in T-cell proliferation and cytokine production due to a lack of NF-B activation (3)(4)(5)(6)(7); however, the mechanism by which the CARMA1/Bcl10 complex activates IKK remains unclear.
In vitro experiments have indicated that Bcl10 undergoes phosphorylation when over expressed with its viral homologue, E10 or CARMA1 (8 -10). In these studies, Bcl10 phosphorylation correlated with its ability to activate NF-B, suggesting that this modification was required for NF-B activation. Indeed, the COOH-terminal domain of Bcl10 is rich in serine and threonine residues and has been proposed as the site of CARMA1-mediated phosphorylation (10). Since CARMA1 itself is not a kinase, the kinase responsible for Bcl10 phosphorylation has remained an open question.
Rip2 is a serine/threonine kinase that contains a CARD domain at its carboxyl terminus and has been shown to induce NF-B activation in over expression systems (11)(12)(13). Rip2 has also been shown to associate in vitro with members of the TRAF family, such as TRAF6, that plays an essential role in the innate immune response downstream of Toll-like receptors (TLRs) (14,15). In addition, Rip2 has been implicated in regulating both the innate and adaptive immune responses (16,17). Mice deficient in Rip2 mounted only an attenuated immune response against Toll-like receptor agonists such as lipopolysaccaride (LPS) (16,17). Interestingly, CD4 ϩ T-cells from Rip2-deficient mice were unable to proliferate efficiently in response to antigen-induced T-cell activation, but no mechanism was provided for this striking observation (16,17). We sought to define the role of Rip2 in antigen-induced NF-B activation and T-cell proliferation.
EXPERIMENTAL PROCEDURES
Generation of Rip2 Ϫ/Ϫ Mice-A targeting vector that removed exon I of Rip2 was electroporated into ES cells. Homologous recombinants were used to generate chimeric founder mice by microinjection into C57BL/6J blastocysts. Germ line transmission was confirmed by Southern blot analysis of genomic tail DNA. Two independent ES clone lines resulted in mice with identical phenotypes. All mice used in experiments were backcrossed onto C57BL/6 five to seven generations and were confirmed to be Ͼ95% C57BL/6 by PCR analysis of genomic tail DNA.
Proliferation Assays-Splenic B and T-cells and CD4 ϩ T were purified by negative selection using magnetic beads (Miltenyi Biotech) to Ͼ95% purity. Purified T-cells were activated with plate bound anti-CD3 (0 -10 g/ml) (BD Biosciences) with or without irradiated CD4-depleted * The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. APCs or plate-bound anti-CD28 (0 -10 g/ml) (BD Biosciences), phorbol myristate acetate (PMA) (2 ng/ml) plus ionomycin (0.1 g/ml) (Sigma) in the presence or absence of IL-2 (50 ng/ml) (R & D Systems). B-cells were stimulated with anti-IgM (20 g/ml) (Jackson Laboratories), LPS (20 g/ml) (Sigma), or PMA (2 ng/ml) plus ionomycin (0.1 g/ml). Cells were harvested at 24, 48, 72, and 96 h after an 8-h pulse with [ 3 H]thymidine (1 Ci/well), and incorporation of [ 3 H]thymidine was measured using a Matrix 96 direct  counter system (Hewlett-Packard). Data represent triplicate samples and are representative of at least three separate experiments.
Neonatal Heart Allograft-Neonatal hearts from BALB/c (H-2 d ) mice were surgically implanted behind the dorsum of the ear pinna of 12week-old male Rip2 Ϫ/Ϫ and wild-type mice (Both H-2 b ). Heart grafts were examined with a stereomicroscope at 10 -20-fold magnification every other day until rejection.
Isolation of Phosphorylated Proteins-Splenic T-cells from Rip2
Ϫ/Ϫ and wild-type mice (4 ϫ 10 7) cells/ml) were stimulated with 10 g/ml plate-bound anti-CD3 for 0, 15, and 30 min. Cells were lysed under denaturing conditions to disrupt protein-protein interactions and diluted to 0.1 mg/ml in phospho-lysis buffer (Qiagen). Phosphorylated proteins were separated using the phospho-protein purification kit (Qiagen) according to the manufacturer's instructions.
RESULTS AND DISCUSSION
Rip2 Associates with Bcl10 and Induces Its Phosphorylation-We investigated whether Rip2 could associate with molecules known to play essential roles in the TCR-induced signaling cascade. Initially, we tested whether Rip2 and Bcl10 could associate by overexpressing tagged versions of both proteins in 293 T-cells. V5-tagged Bcl10 could be co-immunoprecipitated with HA-tagged Rip2 (Fig. 1A). Interestingly, two bands representing Bcl10 were observed. The upper band was determined to be a hyperphosphorylated form of Bcl10, since it could be collapsed to the lower band by phosphatase treatment (Fig. 1A). Hyperphosphorylation of Bcl10 was also apparent by mobility shift in whole cell lysates from 293T-cells cotransfected with Rip2 and Bcl10, compared with the very low levels of phosphorylation seen with Bcl10 alone (Fig. 1B). To establish the domains of Rip2 responsible for hyperphosphorylation of Bcl10, mutants with deletions of either the kinase domain or the CARD domain were used in co-expression studies. Bcl10 hyperphosphorylation required both a functional kinase domain and CARD domain of Rip2 as neither mutant induced phosphorylation of Bcl10 (Fig. 1B). Moreover, phosphorylation of Bcl10 was specific for Rip2, as overexpression of RIP or Rip3 did not induce Bcl10 phosphorylation (Fig. 1C).
To determine whether Rip2 was involved in Bcl10-dependent signaling pathways, we studied the interaction of endogenous proteins in Jurkat cells stimulated with cross-linking antibodies to CD3-TCR. Rip2 and Bcl10 consistently associated in a transient and time-dependent manner after TCR engagement (Fig. 1D, bottom panel). Induction of phosphorylated Zap-70 confirmed TCR activation (Fig. 1E). We next examined the phosphorylation status of Bcl10 using phosphoserine-specific antibodies. Lysates from anti-CD3 treated and untreated Jurkats were immunoprecipitated using antibodies to Bcl10 and Western blots were performed using antibodies for phosphoserine and Bcl10. Serine phosphorylated Bcl10 was detected after 15-min treatment with anti-CD3 (Fig. 1F, top panel) and treatment of immunoprecipitates with phosphatase (PPase) significantly diminished levels of serine-phosphorylated Bcl10. Phosphorylation of endogenous Bcl10 was also apparent after treatment with anti-CD3, as evidenced by a slower migrating band that could be collapsed by treatment with phosphatase (Fig. 1F, bottom panel). Taken together, these results were consistent with Rip2 binding Bcl10 upon TCR engagement and inducing its phosphorylation.
Defective T-cell Proliferation and Function in Rip2 Ϫ/Ϫ Mice-To examine the effects of Rip2 on T cell activation in an in vivo setting, we generated Rip2-deficient mice by homologous recombination. Rip2 Ϫ/Ϫ T-cells were deficient in anti-CD3 FIG. 1. Rip2 associates with Bcl10 and induces Bcl10 phosphorylation. A, 293T-cells were co-transfected with HA-tagged Rip2 and V5-tagged Bcl10. Cell lysates were immunoprecipitated with antibodies to HA, V5, or as a negative control, Myc, and precipitates were immunoblotted for V5. In some cases, immunoprecipitates were treated with alkaline phosphatase (ϩCIP). Western blots for anti-HA and anti-V5 were also performed on lysates prior to immunoprecipitation. B, 293Tcells were transfected with V5-tagged Bcl10 together with wild-type Rip2 or mutants lacking the kinase domain (⌬KD) or the CARD domain (⌬CARD), and lysates were analyzed by Western blot with anti-V5, anti-Rip2, or anti-actin antibodies. C, 293T-cells were transfected with V5-tagged Bcl10 together with Myc-tagged Rip2, RIP, or Rip3, and lysates were analyzed by Western blot with anti-V5, anti-Myc, or antiactin antibodies. D, Jurkat cells were stimulated with anti-CD3 for the indicated times, and lysates were immunoprecipitated with anti-Bcl10 or anti-Rip2. Immunoprecipitates were subjected to Western blot analysis using anti-Bcl10. E, whole cell lysates from Jurkat cells treated as above were immunoblotted using phospho-specific antibodies for Zap-70. F, Jurkat cells treated as above were immunoprecipitated with anti-Bcl10 and immunoblotted using antibodies specific for phosphoserine or Bcl10. In some cases, immunoprecipitates were treated with phosphatase (PPase). These data are representative of at least three separate experiments. (P-Bcl10, phosphorylated Bcl10). WB, Western blots. induced proliferation (Fig. 2, A and B). This defect could not be rescued by co-stimulation with anti-CD28 or activation using PMA in combination with calcium ionophore (ion) (Fig. 2C). The levels of IL-2 produced after treatment with anti-CD3 alone, anti-CD3 with anti-CD28, or with PMA and ionomycin were drastically reduced compared with wild-type T-cells (data not shown). Moreover, addition of exogenous IL-2 was not able to rescue the defect in proliferation in Rip2 Ϫ/Ϫ T-cells after stimulation (Fig. 2D). Consistent with previous reports, B-cell proliferation in response to PMA/ionomycin, IgM, and LPS was comparable between Rip2-deficient and wild-type B-cells (data not shown) (16). Taken together, these results suggested that the defect in proliferation in Rip2 Ϫ/Ϫ mice was confined to T-cells and likely due to impairment upstream of IL-2 gene transcription and NF-B activation.
Rip2 Participates in Bcl10 Signaling
Previous in vivo experiments on Rip2 Ϫ/Ϫ mice tested T-cell responsiveness using models such as Listeria challenge and T-celldependent antibody responses (16,17), which also involve participation of TLRs and other innate signaling cascades through adjuvant and bacterial components. Therefore, to test T-cell responsiveness in vivo, we designed a high bar functional test, the ability to participate in a graft rejection response, which does not require major driver co-signals from pathways of the innate immune system. Hearts from allogeneic neonate BALB/c (H-2 d ) mice were transplanted into the ear pinna of wild-type and Rip2deficient mice (H-2 b ) and allograft survival monitored. While all hearts were rejected by wild-type mice by day 15, over 50% of the neonate hearts were still beating in Rip2-deficient mice and continued to function for an additional 5 days (Fig. 2E). Therefore, Rip2-deficient mice rejected heart allografts much less readily than wild-type mice, consistent with our in vitro data and a defect in normal T-cell activation and function.
Defective NF-B Activation in Rip2-deficient Cells-To determine the molecular basis of the impairment in T-cell receptor signaling in the absence of Rip2, we analyzed pathways activated by TCR engagement in wild-type and Rip2 Ϫ/Ϫ T-cells. T-cells from wild-type and Rip2 Ϫ/Ϫ mice were treated with plate-bound anti-CD3 or TNF␣, and lysates were assessed by Western blot using phospho-specific antibodies to IB␣. IB␣ was rapidly phosphorylated and degraded in wild-type T-cells but not in Rip2 Ϫ/Ϫ cells (Fig. 3A). In contrast, treatment of both wild-type and Rip2 Ϫ/Ϫ T-cells with TNF␣ promoted equivalent phosphorylation and degradation of IB␣ (Fig. 3B). Hence, NF-B signaling downstream of other surface receptors remained intact in Rip2 Ϫ/Ϫ mice.
TCR engagement also elicits activation of the RAS/MAPK (mitogen-activated protein kinase) pathway. Western blotting using phospho-specific anti-ERK1/2 antibodies demonstrated that ERK-1 and ERK-2 were phosphorylated with similar kinetics in wild-type and Rip2 Ϫ/Ϫ T-cells after TCR engagement (Fig. 3C, upper panel). Similarly, activation of the JNK signaling pathway post-TCR engagement was equivalent in both wild-type and Rip2-deficient T-cells (Fig. 3C, lower panel). These results confirmed that the defect was specific for NF-B signaling downstream of the TCR and that parallel pathways activated by TCR engagement remained intact.
To address the role of Rip2 kinase activity in Bcl10-dependent NF-B activation, we transfected MEFs from wild-type and Rip2 Ϫ/Ϫ mice with Bcl10 and a luciferase reporter for NF-B. While Bcl10 could induce NF-B activation in wild-type MEFs, NF-B activation by Bcl10 was significantly decreased in Rip2 Ϫ/Ϫ MEFS (Fig. 3D). Transfection of exogenous wild-type Rip2, but not a kinase-dead mutant, K47A, could rescue Bcl10induced NF-B reporter activity in Rip2 Ϫ/Ϫ MEFs (Fig. 3D). Therefore, the kinase activity of Rip2 is required for optimal Bcl10-induced NF-B activation.
Bcl10 Is Phosphorylated after TCR Engagement in Wild-type but Not Rip2 Ϫ/Ϫ Mice-Taken together, our data suggested
FIG. 3. Defective NF-B activation and Bcl10 phosphorylation in Rip2deficient cells. Purified T-cells from Rip2
Ϫ/Ϫ and wild-type (WT) mice were stimulated with 10 g/ml plate-bound anti-CD3 (A) or 10 ng/ml TNF␣ for 0 -30 min (B). Lysates were subjected to Western blotting using antibodies to IB␣ and phospho-IB␣. C, purified T-cells from Rip2 Ϫ/Ϫ and wild-type mice were stimulated with 10 g/ml CD3 for 0 -30 min, and lysates were subjected Western blotting using antibodies for phospho-ERK1/2, p44, and phosho-JNK. D, Rip2 Ϫ/Ϫ and wild-type MEFs were transfected with an NF-B luciferase reporter and Bcl10 with or without either wildtype Rip2 or a kinase-dead (KD) mutant Rip2. These data are representative of four separate experiments. Purified T-cells from Rip2 Ϫ/Ϫ and wild-type T-cells were stimulated with 10 g/ml CD3 for 0 -30 min. Cell lysates were separated into non-phosphorylated (NPh) fractions and phosphorylated (Ph) fractions using phospho-specific columns. Purified fractions were Western blotted for Bcl10 and phospho-ERK (E) and HSP60 (F).
that Rip2 functions to regulate T-cell activation by phosphorylating Bcl10. Therefore, we wished to establish whether the lack of NF-B activation observed in Rip2 Ϫ/Ϫ T-cells correlated with a lack of Bcl10 phosphorylation after TCR engagement. Wildtype and Rip2-deficient T-cells were treated with ␣-CD3, and cell lysates were fractionated using a phosphoserine/threonine column. Under these lysis conditions, all protein-protein interactions are disrupted, and only phosphorylated proteins bind the column, while unphosphorylated proteins flow through. Western blotting of the phosphorylated protein fractions using antibodies to phospho-ERK and Bcl10 revealed that while phosphorylated ERK1/2 could easily be detected in the purified phosphorylated fractions of both wild-type and Rip2 Ϫ/Ϫ T-cells (Fig. 3E, middle panel), Bcl10 was only present in the purified phosphorylated fractions of ␣-CD3 treated wild-type T-cells (Fig. 3E, top panel). By contrast, similar levels of Bcl10 were detected in the non-phosphorylated fractions from wild-type and knock-out T-cells (Fig. 3E, bottom panel). To confirm that no unphosphorylated proteins contaminated the phosphorylated protein fraction, lysates from both fractions were Western blotted using antibodies for Hsp60. Hsp60 was abundant in the non-phosphorylated fraction but undetectable in the phosphorylated protein fraction (Fig. 3F). These data demonstrate that Bcl10 is phosphorylated in mouse primary T cells after TCR stimulation, and deficiency of Rip2 precludes phosphorylation of Bcl10.
Herein we provide evidence for the importance of Rip2 in TCR-mediated NF-B activation and Bcl10-dependent signaling. Phosphorylation of Bcl10 occurs after TCR engagement, and lack of phosphorylation correlates with a defect in NF-B activation and T-cell proliferation. Earlier studies have shown that Bcl10 is phosphorylated upon over expression of CARMA1; however, the importance of phosphorylation in T-cell signaling was unclear. Our data suggest that phosphorylation of Bcl10 by Rip2 plays a key role in signaling between the TCR and the IKK complex.
Recent reports (4 -7, 19 -21) have demonstrated that CARMA1 is critically involved in TCR-induced NF-B activation. It remains unclear whether Bcl10 phosphorylation is required for its association with CARMA1. The kinetics of the association between Rip2 and Bcl10 and subsequent Bcl10 phosphorylation in Jurkat cells correlates with the kinetics of the published interaction between CARMA1 and Bcl10 in Jurkat T-cells (19). Phosphorylation of Bcl10 may either facilitate its recruitment to lipid rafts or serve to activate other key molecules in downstream signaling events that ultimately activate the IKK complex. For example, MALT1/paracaspase, a death domain-containing caspase-like molecule, has also been shown to associate with Bcl10 and enhance NF-B activation (22,23) and is also required for TCR-induced proliferation, cytokine production, and NF-B activation (24).
Our data are consistent with previous reports that Rip2deficient mice suffer from defects in the adaptive immune response due to lack of antigen-induced T-cell proliferation and NF-B activation (16,17). Similar to published results, we also observed a defect in cytokine production in macrophages stimulated with LPS and other Toll-like receptors, demonstrating an additional defect in innate immunity (data not shown) (16,17). Since Rip2 associates with key signaling molecules in both the adaptive and innate immune responses, such as Bcl10 and TRAF6 respectively, it is reasonable that the absence of this promiscuous kinase would impinge on multiple signaling pathways and result in broad ranging deficits in immune system function. | 3,938.2 | 2004-01-09T00:00:00.000 | [
"Biology",
"Medicine"
] |
Exciton–polariton condensates with flat bands in a two-dimensional kagome lattice
Microcavity exciton–polariton condensates, as coherent matter waves, have provided a great opportunity to investigate hydrodynamic vortex properties, superfluidity and low-energy quantum state dynamics. Recently, exciton condensates were trapped in various artificial periodic potential geometries: one-dimensional (1D), 2D square, triangular and hexagonal lattices. The 2D kagome lattice, which has been of interest for many decades, exhibits spin frustration, giving rise to magnetic phase order in real materials. In particular, flat bands in the 2D kagome lattice are physically interesting in that localized states in the real space are formed. Here, we realize exciton–polariton condensates in a 2D kagome lattice potential and examine their photoluminescence properties. Above quantum degeneracy threshold values, we observe meta-stable condensation in high-energy bands; the third band exhibits a signature of weaker dispersive band structures, a flat band. We perform a single-particle band structure calculation to compare measured band structures.
Introduction
A two-dimensional (2D) kagome lattice, or a basketweave lattice [1], consists of interlaced triangles and exhibits a higher degree of frustration in comparison with other 2D lattices including square and triangular lattices [2]. Several materials in nature have been considered to have properties associated with the kagome lattice including jarosite [3], the second layer of adsorbed helium-3 on graphite [4] and a rare mineral known as herbertsmithite [2,5]. The structure has been actively investigated in relation to spin frustration and associated magnetic phases. In addition, flat band ferromagnetism is one of the famous theoretical predictions regarding kagome lattices [6,7]. A tight-binding band structure calculation of the 2D kagome lattice in the strong hopping energy regime including the nearest-neighbor tunneling terms predicts a completely flat band. In such a flat band the condensate order parameters become tightly localized at the potential dips [8]. The dispersionless band can be pictured to arise from quantum destructive interference around kagome hexagons, making clustered density distribution at the trap potential sites in real space.
Recently, several schemes were proposed to create flat bands in the 2D kagome lattice, utilizing metallo-photonic waveguides [9] or using photonic crystal structures [10]. Here, we explore microcavity exciton-polaritons in a 2D kagome lattice, whose potential is produced by depositing a thin metal film [11][12][13][14]. We present our photoluminescence properties in real-and momentum-space spectroscopy and images.
Exciton-polaritons and device fabrication
Exciton-polaritons are quasi-particles arising from the strong coupling between quantum well (QW) excitons and cavity photons, revealing quantum Bose nature, Bose-Einstein condensation [15][16][17][18] and superfluidity [19]. Our microcavity sample consists of three stacks of four GaAs/AlAs QWs sandwiched by 16 and 20 pairs of Ga 0.8 Al 0.2 As/AlAs as top and bottom distributed Bragg mirrors (DBRs). The cavity length is spatially varied so that the detuning values ( ) between the exciton and the cavity photon energy can be tuned. The vacuum Rabi splitting at zero detuning ( = 0 V) is approximately 14 meV. In our experiment, we focus on detunings in the range = 0 ∼ −2.3 meV. Microcavity exciton-polariton systems have experimental advantages in accessing readily the in-plane momentum distribution of the polariton system by collecting leaked photon flux through the top DBR, due to energy and momentum conservation between the polaritons and external photons. The metal-film deposition technique upon the grown microcavity wafers has been successful in producing in-plane periodic lattice potentials to influence excite-polariton condensates. It is simple but flexible to pattern readily various geometries in 2D. The underlying physical mechanism to produce the trap potential by this metal-film technique is that the cavity photon field is modified at the metal-semiconductor interface compared to the air-semiconductor interface. Owing to the metal film, the photon field vanishes at the metal-semiconductor region, whereas it decays into air in the bare semiconductor area. Therefore, the cavity length below the metal film is relatively squeezed; consequently, the photon energy is blue-shifted and the LP energy follows blue-shifted as well.
We applied negative resist to the wafer and used electron beam lithography to draw kagome lattice patterns in each 200 µm × 200 µm area. A 2D kagome geometry is patterned by arranging holes whose diameters are 1.5 or 3 µm with a 1 : 1 aspect ratio, namely the lattice constants are 3 and 6 µm, respectively. Figure 1 shows the 6 µm fabricated device. We deposit a thin metal film on top of the grown wafer by electron-beam lithography and lift-off [11,12]. We deposited a 20/3 nm Ti/Au film, which produces about 200 µeV potential depth near the zero-detuning region [11][12][13][14].
We perform angle-resolved photoluminescence experiments at ∼4 K in a continuous-flow cryostat, measuring the spectra and images in real space and reciprocal space. A compact optical setup has been constructed utilizing Fourier optics to access the near-and far-field planes. We use a Ti:sapphire mode-locked laser to inject exciton-polaritons into the microcavity QWs. The tomographic momentum-energy dispersion relations in the Fourier planes enable us to construct the band structures of the 2D kagome device. A high numerical aperture (NA = 0.55) objective lens collects the emitted photons, which are fed into either a 0.75 m spectrometer or an imaging camera.
Band structures of a kagome lattice
The energy band structure and wavefunctions of states in a kagome lattice are calculated using a plane wave expansion of the potential. A kagome lattice potential V (x) can be described as a superposition of cosine waves where k n are vectors in reciprocal space corresponding to the Fourier decomposition of V (x) and V n are the amplitudes. We find that N = 18 is enough to realize a kagome lattice shape approximately ( figure 2(b) shows the kagome lattice potential profile using the 18terms decomposition). We see that the kagome lattice is faithfully reproduced using the decomposition with clear potential dips at kagome points. Decomposing the potential into its Fourier components allows the Schrödinger equation, to be written in Fourier space using the expansion ψ(x) = e −ik·x ψ k dk aŝ According to the Bloch theorem, each point k in the Brillouin zone (BZ) is coupled only to vectors k ± k n , which discretizes our Hamiltonian. We choose the region of k in the first BZ and compute the band structure by diagonalizing our Hamiltonian. The size of the Hamiltonian H k is infinite, since an arbitrary number of k n vectors can be added to the initial k point. However, due to the k 2 diagonal elements of the Hamiltonian H k , such high-momentum wavevectors are energetically suppressed, allowing us to impose a cutoff with negligible decrease in accuracy. We found that H k within |k| 6|k 1 | is sufficient to attain convergence to numerical precision.
The result of such a calculation is shown in figure 3 for our experimental parameters. In the calculated results, the energy is normalized with E 0 =¯h is the mass of an exciton-polariton under no potential, a is the lattice constant of a kagome lattice, and a band structure depends on the ratio of E 0 and V 0 . In our experimental item, the potential depth V 0 is fixed at 200 µeV due to a fixed potential depth the metal deposition technique makes [11][12][13][14]. E 0 is changeable by choosing different lattice constants. The larger a becomes, the smaller E 0 becomes, increasing V 0 /E 0 . Due to the lattice constant dependence of E 0 the dimensionless depth of the potential V 0 /E 0 can be varied by changing a.
This means that the system is closer to the 'tight binding model' that predicts a completely flat band for larger lattices. We see this effect comparing figures 3(a) and (b), where the band structure becomes flatter for the larger sample, normalized to the same energy scale E 0 . In particular, the third band most rapidly approaches a flat band, in agreement with previous works [10]. The remaining curvature of the third band can be attributed to the presence of next nearest-neighbor interactions, which break the degeneracy of the flat band. For a much larger lattice the third band becomes almost perfectly flat, as can be seen in figure 3(c) which assumed a = 12 µm. The energy scale of the band structure decreases with a; thus a trade-off between energy resolution and V 0 /E 0 must be made. For this reason we have chosen to work with a sample in the range of a = 3-6 µm .
Experimental results
The sample was pumped at an incidence angle of ∼60 • by a 76 MHz mode-locking laser with a pump power of 15 mW, approximately twice the threshold for condensation. The laser spot size had dimensions of approximately 50 µm × 100 µm. Figure 1(c) shows a representative BZ taken with a device a = 3 µm. It clearly shows that a higher population of lower polaritons appears around the first BZ. The sharp emission peaks in a triangular pattern in figure 1(c) are due to the scattered pump laser signals, which give the scale of the first BZ. Figure 4 shows the pump power-dependent intensity plots at the high-symmetry points in the BZs. There is a nonlinear increase in the numbers of lower polaritons at each high-symmetry point, with a threshold around 10 mW.
We have experimentally measured the band structures of the first three bands from the tomography in the momentum plane. Figures 3(a) and (b) present the band structures of two devices with a = 3 and 6 µm, respectively. As the theoretical calculation shows, the bands in the a = 6 µm device are flatter. The flatter bands in the a = 6 µm device result from the fact that the fixed potential is renormalized by a four times smaller characteristic energy E 0 in comparison with the a = 3 µm device. Due to the smaller potential strength and the broader energy spectral linewidth, we are not able to resolve the band gap between energy bands.
In figure 5, the signal strength of each band reveals the LP population distribution in the momentum space. For example, in the first band (left images), LP condensates are accumulated near 1 , whose energy is lowest. Similarly, LP condensates are trapped near K points which are meta-stable points in the second and third bands. The black lines overlaid in the experimental results are the theoretically calculated band relations of the first, second and third bands, which match very well with experimental data.
Conclusion
We have successfully fabricated 2D kagome lattice structures with two different lattice constants by depositing a thin metal film on the grown GaAs-based microcavity structures. Photoluminescence experimental data exhibit the clear band structure of kagome lattice geometries, where meta-stable condensates of microcavity exciton-polaritons are formed above the quantum degeneracy threshold excitation values. In particular, exciton-polariton condensates show weaker dispersive energy relations with a larger lattice constant, which can be unambiguously explained by the renormalization of the energy scale in a given structure. We anticipate that with a stronger kagome lattice structure, the flat band can be produced in this system. The associated condensate order parameter of this flat band in the real space would be interesting to examine in future, which may reveal localization of condensates. | 2,608.2 | 2012-06-01T00:00:00.000 | [
"Physics"
] |
Higher neuron densities in the cerebral cortex and larger cerebellums may limit dive times of delphinids compared to deep-diving toothed whales
Since the work of Tower in the 1950s, we have come to expect lower neuron density in the cerebral cortex of larger brains. We studied dolphin brains varying from 783 to 6215g. As expected, average neuron density in four areas of cortex decreased from the smallest to the largest brain. Despite having a lower neuron density than smaller dolphins, the killer whale has more gray matter and more cortical neurons than any mammal, including humans. To begin a study of non-dolphin toothed whales, we measured a 596g brain of a pygmy sperm whale and a 2004g brain of a Cuvier’s beaked whale. We compared neuron density of Nissl stained cortex of these two brains with those of the dolphins. Non-dolphin brains had lower neuron densities compared to all of the dolphins, even the 6215g brain. The beaked whale and pygmy sperm whale we studied dive deeper and for much longer periods than the dolphins. For example, the beaked whale may dive for more than an hour, and the pygmy sperm whale more than a half hour. In contrast, the dolphins we studied limit dives to five or 10 minutes. Brain metabolism may be one feature limiting dolphin dives. The brain consumes an oversized share of oxygen available to the body. The most oxygen is used by the cortex and cerebellar gray matter. The dolphins have larger brains, larger cerebellums, and greater numbers of cortex neurons than would be expected given their body size. Smaller brains, smaller cerebellums and fewer cortical neurons potentially allow the beaked whale and pygmy sperm whale to dive longer and deeper than the dolphins. Although more gray matter, more neurons, and a larger cerebellum may limit dolphins to shorter, shallower dives, these features must give them some advantage. For example, they may be able to catch more elusive individual high-calorie prey in the upper ocean.
Introduction
Is there an ecological advantage to having a smaller brain with less cortex and fewer neurons? This question has been asked relative to diving mammals that must search for food at depth using limited oxygen stores [1,2]. Brains are metabolically expensive [3][4][5] and should not grow larger unless increased size provides some advantage. The total energetic requirement of the brain increases with increasing numbers of neurons, which leads to the need for more food and more time spent feeding to support the brain [5,6]. Previous studies of bottlenose dolphins (Tursiops truncatus) demonstrated that the highest metabolism is in the gray matter of the cerebral cortex and cerebellum [7,8]. Altogether, these findings suggest that a diving animal with a brain comprising a very small percentage of its body weight and containing fewer neurons should be able to make oxygen-limited dives for a longer period of time.
In earlier studies, only body size was positively correlated with dive time [9,10]. However, recent work has shown that some toothed whales (Odontoceti) have relatively small brains and cerebella compared to those of the odontocete family Delphinidae (marine dolphins). Dive times of non-dolphin odontocetes have not been compared to those of dolphins of similar body sizes. Furthermore, potential associations between maximum dive duration, cerebellum size, and cortical neuron densities have yet to be explored. Researchers previously demonstrated that dolphin brains had higher cortical neuron densities than that of one mature pygmy sperm whale (Kogia breviceps), a small odontocete from the family Kogiidae capable of diving for long periods [11]; dolphins perform relatively short duration dives compared to K. breviceps, which has a maximum dive time of 47 min [12][13][14]. Researchers have examined the anatomical composition of shallow and deep diving mammals in relation to the differential metabolic costs of body tissues [15]. Their findings suggest that deep divers invest a smaller percentage of total body mass in metabolically expensive brain and viscera, and a larger percentage of body mass in less energetically expensive skin, bone, and muscle.
For diving, odontocetes rely upon hearing and sonar to locate prey at depth [16,17]. As odontocetes search for squid or fish, their muscular nose makes brief trains of echolocation clicks that are focused through a melon-shaped forehead [18][19][20][21][22]. The clicks bounce off the prey, and returning echoes reach the ear (Au 1993), where the cochlea then converts the echoes into nerve impulses [23,24]. Along axons, the impulses pass from the cochlea to the brainstem and midbrain to reach the gray matter of the cerebral cortex, containing neurons with axonal and dendritic processes along which action potentials are conveyed. The transmission of action potentials along these processes represents a major proportion of the energy budget of the brain [25]. Also, there is evidence that the cerebral and cerebellar cortex of dolphins has the highest metabolism of the brain (Fig 1) [7,8].
While certain aspects of cortical cytoarchitecture are uniformly present across all mammals, several variations distinguish cetaceans from most mammals. The layering of the cetacean cortex has recently been displayed in great detail [26][27][28]. For example, cetacean cerebral cortex has a very thick and neuron-sparse layer I, or molecular layer [29][30][31][32]. Also, layer IV of the neocortex, which is the primary recipient of sensory information in humans and other primates, is absent across Cetacea. Layer II of the odontocete cortex is characterized by high neuron density and is potentially similar in function to the external granular layer of terrestrial mammals [28]. Clustering of neocortical neurons in layer II of the cortex is present in all odontocetes and in mysticetes, such as humpback and fin whales [26], but not in one mysticete, the bowhead whale [33]. Large cortex surface area is a feature of many cetacean brains [34][35][36]. The extent of cortical gyrification also varies within Cetacea, with the bowhead whale and some river dolphins displaying less convoluted brains than most other cetaceans [37].
While data on neuronal densities within the neocortex have previously been reported for a limited number of cetacean species and cortical sampling sites [11,26,[31][32][33][38][39][40][41][42][43][44][45], the present study seeks to contribute to and expand upon the existing data to examine potential relationships between neuroanatomical measurements and maximum dive duration in short-and long-diving odontocetes. The data presented include previously reported and original T. truncatus, Delphinus delphis and Orcinus orca neuroanatomical measurements [35,[46][47][48] as well as the first measurements of neuron density from the brains of a neonatal and adult killer whale (Orcinus orca) and an adult Cuvier's beaked whale (Ziphius cavirostris), which are rarely available for study. The beaked whale is the longest diver among whales studied to date, with a maximum dive time of 137.5 minutes [49,50]. Moreover, data from K. breviceps, another long-diving odontocete, was included in our analysis following the only study of the brain of this species [11]. Here we present observations on brain, cerebral cortex gray matter, and cerebellar mass, CSA, neuronal density, and dive time variability in cetaceans of different taxa. Some researchers have suggested that cetacean cortex is likely to be quite variable across species [26]. We agree with this assessment and were particularly interested in comparing our data on gray matter mass and neuron density with what we know of brain size, cerebellum size, cortex surface area, and maximum dive times of Z. cavirostris and O. orca, whales of very similar body size.
Ethics statement
No animals were sacrificed in these studies. All animals died of natural causes and their brains were removed during postmortem examination. The study followed protocols approved by the Institutional Animal Care and Use Committee at the Biosciences Division, Space and Naval [7,8]. The color map indicates the relative degree of glucose metabolism. The images demonstrate that high metabolic areas (i.e., areas of increased glucose consumption; red) are mainly concentrated in the gray matter of the cerebral cortex and cerebellum, with the exception of smaller sub-cortical nuclei (e.g., inferior colliculi; thalamic gray matter). The PET and MRI scans are from the same healthy dolphin that was trained to lie still in the scanner. https://doi.org/10.1371/journal.pone.0226206.g001 Warfare Systems Center (SSC) Pacific and the Navy Bureau of Medicine and Surgery, and followed all applicable U.S. Department of Defense guidelines for the care and use of animals.
The brains examined in the present study came from five odontocete species (T. truncatus, D. delphis, O. orca Z. cavirostris, and Kogia breviceps; abbreviated as Tt, Oo, Zc, Kb, and Dd) that died of natural causes in human-managed care or after stranding on beaches. The Zc, Kb, and Dd stranded on beaches in California and were collected by local stranding networks authorized under the United States Marine Mammal Protection Act. The Tt and the Oo were maintained in accordance with regulations under the U. S. Animal Welfare Act. All brains were submitted to us for postmortem analysis. No additional or pre-existing samples were used in this study. The brains were removed at necropsy within 12 hours of the individual's natural death and were free of neuropathologies. On removal, all brains were immersion-fixed in neutral buffered 10% formalin. Once hardened, the brains were sectioned to measure cerebral cortex surface area (CSA) by using previously published methods [34,35]. Brain and cerebellum masses and CSA were previously reported for each individual included in this study [37].
Tissue specimens were taken from four sites in the cerebral hemispheres, the left and right supralimbic and anterior paralimbic lobules (Fig 2). The fixed specimens were mounted in paraffin, sectioned at 7 microns, stained with cresyl violet (Nissl method), and glial fibrillary acidic protein (GFAP). With light microscopy, photographic montages were prepared from each cortical site from both brain hemispheres, with each section oriented and photographed perpendicular to the pial surface. These images and a grid were projected on a monitor, and magnification and units of measurement were encoded in an image analyzer (Optomax, Hollis, NH, USA). The image analyzer allowed for determination of nuclear area and nuclear diameter of both neurons and glia (Table 1). For all sampling sites in both hemispheres Neuron densities were determined and compared for each cortical layer as well as for the entire cerebral cortex for all areas examined. Glial cells were identified by GFAP staining and their smaller size and their lack of cytoplasm. To calculate cell density, a grid area (0.03 mm 2 ) was placed over each cell layer. All neurons and glial cells were recorded in each layer from the pial surface to the white matter border. Converting actual cell counts from counts/mm 2 to counts/ Anterior-superior view of the Tursiops truncatus (Tt) brain. The supralimbic (auditory; red) and anterior paralimbic (motor; blue) areas are marked on either side of the brain.
https://doi.org/10.1371/journal.pone.0226206.g002 mm 3 required the use of the following equation: N v = neurons counted/mm 3 N = counted cells A = area counted 0.03 mm 2 D = mean cell (nuclear) diameter (see Table 1) T = thickness of tissue sections 0.007 mm Comparison of the mean neuron and glial cell nuclear areas and nuclear diameters in the cerebral cortex of the Odontocete by brain area. Brain size in grams for each species is indicated in parentheses. Consistent with the findings of Haug [36], there was a trend for larger neurons in the larger brains.
Cortical surface area (S) and cortical gray matter thickness (T) were used to calculate the total volume of gray matter (G), G = ST [51]. Gray matter mass was then calculated by multiplying gray matter volume by 1.036 g/cm 3 , the specific gravity of gray matter in humans [52]. We previously found an average value of 1.04 g/cm 3 for the entire cetacean brain, including white and gray matter [37].
The total number of neurons in the gray matter of the cerebral cortex was calculated for each species from neuron density, cortical surface area (CSA) and thickness (T) measurements [36]. Our total neuron counts for Tt, a species whose neurons have been enumerated by several investigators, are within the lower range of published values [31,32,36,40].
The Kb was not part of our initial study. Our Kb data were derived from Nissl stained sections and an automated counting procedure (Reveal Biosciences, San Diego, CA). Only total counts were done in this species. There were not counts of separate layers.
All statistical comparisons were made using a two tailed T-test.
Neuron density and cortical measurements from the brains of five odontocetes
Average neuron densities, cell sizes, and other anatomical data for the five odontocete species examined in the current study are presented in Tables 1-3 along with data from other species reported in separate studies in S1 Table. We compare our data from adult odontocetes of similar body size (Tt and Kb and Oo and Zc all females) in Figs 3 and 4. These two pairs of species differ greatly in diving capability despite body size similarities. The maximum dive time of Zc (>2 hr) is over 12 times longer than that of Oo. Kb dives more than five times longer than Tt. Both pairs of females differ in whole brain, cortical gray matter, and cerebellum mass, cortical surface area, and neuronal density (Figs 3 and 4). Consistent with all previous studies [26-28, 31, 40, 53], the present study of the cerebral cortex of five odontocetes found a thick neuron sparse layer I, a lack of layer IV as well as a high density of neurons in layer II within the four cortical areas sampled (Fig 2). Although one study [40] noted a trace of layer IV in a Tt neonate, the current results from the Oo neonate did not reveal a trace of layer IV; however, the authors of this earlier study sampled slightly different areas of cortex compared to our own samples. Neuron densities comparisons between left and right cerebral hemispheres (Fig 2), revealed no significant differences in the areas sampled (P = 0.03) for all species. Previously, we found a slight but significant right hemisphere advantage in cortical surface area for Tt and Dd [54]. In a follow-up study including the same two species, we found no significant difference in cortical thickness between the two hemispheres [46]. Layer 1, or the molecular layer [30,36,55], was relatively thick in all species that we studied but layer 1 in our specimens were all sparse in neurons and contained numerous glia. In agreement with other published work [28,44], we found cortical thickness differences across the cetacean brains studied. The cortical thickness of the Oo, which has the largest body and brain mass within the family Delphinidae, was greater than that of the other delphinids (Tt and Dd) examined. However, the cortical thicknesses of Zc and Oo were very similar, despite a three-fold difference in total brain mass (Fig 4). We found that Tt and Dd cortical thicknesses did not differ significantly, as was found by another previous study [53].
Cerebellum comparisons
Compared to Kb and Zc, delphinids (Oo, Tt, and Dd) have a larger proportion of the brain devoted to the cerebellum [37,56] (Figs 3 and 4). The amount of cerebellum relative to total brain mass in Zc and Kb (11% and 10%, respectively) is similar to that of humans [57]. Relative to their body size, delphinids have the largest cerebella compared to other cetaceans [37] and other large terrestrial mammals, including elephants [57]. Humans and other mammals have more neurons in the cerebellum than in the cerebral cortex [57]. Although delphinids have high neuronal density within the cerebellum, it is less than that of humans according to another study [58], which reported neuron densities of 572 cells/per 0.001 mm3 in the Tt cerebellum compared to 721 cells/per 0.001 mm3 in the human cerebellum. As the cerebellum of Tt is about 53% larger than that of humans [59], this suggests that Tt has about over 20% more total neurons within its cerebellum than humans. In Tt there is also support for high metabolism and circulation in the living dolphin cerebellum [8]. In addition to the high metabolism suggested by 18 F-2-fluoro-2-deoxyglucose positron emission tomography (FDG PET) scans in the living dolphin [7], another PET study observed rapid cerebellar and cerebral cortex uptake using short half-life radiolabeled ammonia 13NH 3 [8].
Neonatal comparisons
With the addition of data from our neonatal Oo specimen to the literature, there are now neuron and glial cell packing density measurements for neonatal cetaceans of two species, Tt and Oo. Fig 5 compares the neuron and glial cell densities, brain, cerebellum, and cortical gray matter masses, and cortical surface areas of Tt and Oo neonates and adults. The neonate Tt, described in a previous study [40], had the highest neuron and glial cell densities overall, at 48,700 cells/mm 3 and 77,300 cells/mm 3 , respectively. The mature Tt brain was about 2.5 times larger in mass than the neonatal Tt brain and the cortical neuron density of the neonatal Tt brain was about 2.5 times greater than that of the mature Tt brain. The neonatal Oo, with a much larger body size and brain almost five-fold larger than that of the neonatal Tt [40] had neuron and glial cell densities that were around half those of the Tt neonate (Oo neonate neuron density: 21,503 cells/mm 3 ; glial cell density: 37,030 cells/mm 3 ). However, despite the great differences in brain size and brain cell density between these two delphinid species, the ratios of glia to neurons were similar (adult Tursiops: 1.95, neonate Tursiops: 1.59, adult Orcinus: 1.32, neonate Orcinus: 1.72). Both neuron and glial cell densities were higher in neonates compared to mature adults. Comparison of a larger short diver and a larger long diver. An illustration of the differences in brain mass, cortical surface area, cerebellum mass, average neuron density, and maximum dive time between one female adult delphinid (Oo; A) and one female adult ziphiid (Zc; B) of similar body size. Zc, which can dive deeper than a mile and longer than an hour, has a smaller brain with fewer neurons than that of Oo, an animal of similar body size that performs shallower, shorter dives. Glial cell density disparities between neonate and adult delphinids were comparable to those in neuron density. The brain mass of the adult Oo was approximately twice that of the neonate Oo. Similarly, the glial cell density of the neonate Oo cerebral cortex was nearly twice that of the mature Oo. However, the cortical neuron density of the neonate Oo was only 1.4 times greater than that of the mature Oo.
Brain size, neuron and glial density and dive time
For the present study, we primarily focus on the relationships between brain-body measurements, neuronal densities, and dive time. Our neuron density measurements add to the very sparse literature on these values in cetacean brains. Of particular interest was the relationship between neuron densities with brain size. Previous studies [38,39] have suggested that neuronal density is a function of brain size rather than taxonomic relationships. The glia/neuron ratio rises from small-brained rodents (mouse, rabbit: 0.35) to ungulates (pig, cow, and horse: 1.1), humans (1.68-1.78), and Tt (1.95) of intermediate brain size, to large-brained whales (fin whale: 4.54-5.85) [39,60].
One published review of the literature [41] suggests that cetaceans have relatively low neuron densities and high ratios of glia to neurons. In contrast to early studies [38,39], more recent studies suggest that taxonomic as well as brain size differences may affect neuron density in cetaceans [11,43]. Even earlier, one study [36] noted "The density of glial cells in the Comparison of neonates and adults for the only two cetacean species where such data is available. Brain and body size, brain cell density, and cortical surface area comparisons between neonates and adults representing two dolphin species, Tt (A) and Oo (B). All data are from individual animals. Brain and body mass data as well as neuron and glial cell densities for the neonate Tt were published previously [40].
https://doi.org/10.1371/journal.pone.0226206.g005 gray of cortex shows very large variations. A dependence on brain size cannot be observed." One published report [61] demonstrated that Tt had neuron densities as high as P. phocoena, a species with a brain only one-third as large as that of Tt. Furthermore, two species of river dolphins (Platanista) with brains only one-seventh as large as Tt had similar neuron densities to Tt [61].
Previous studies [38,39] have suggested that neuron density was a function of brain size rather than taxonomy. Instead, we suggest that neuron density likely follows a particular trend based on species taxonomy and family-specific features, such as metabolism, gestation duration, and ecology [37]. Some cetacean species, such as D. leucas and Z. cavirostris, display similar neuron densities and brain masses despite significant differences in body size.
Across species and compared to neonates of the same species, adult body size is vastly different, thus encephalization quotient (EQ) is also quite different. EQ is not a good indicator of neuron density. Again, there seems to be a trend in neuron density based on taxonomic family. Also, Dd, Tt, Oo are in the same dolphin family and their average neuron density decreases with increasing brain and cerebral cortex size in mature animals. Previous studies in land mammals have presented the scaling relationship between neuronal density and brain size as being order-specific [5,60]. Although the present study only presents data for a limited number of species within non-delphinid cetacean families, these data may suggest family-specific scaling patterns for neuronal density in cetaceans. This is suggested by our comparisons of neuroanatomical data from Tt, a delphinid, with Kb, a kogiid [11], which supports a previous study demonstrating that neuron density is much lower in Kb, compared to Tt, despite its much smaller brain [11].
Also, when we compare delphinid Oo with ziphiid Zc two animals of very similar body size, we find that Oo has a higher neuron density despite having a brain three times as large as that of Zc. However, within their taxonomic group, our results show that in family Delphinidae, cortical neuron density decreases with brains enlargement from Dd to Tt to Oo. Taken together, the aforementioned comparisons indicate that body and brain size are not in every case the determining factors of cortical neuron density.
Among mammals, the glia/neuron ratio is species-specific, and the number of glial cells varies with the number of neurons during ontogenesis [43]. By comparing multiple brain measurements in available brains from short-diving delphinids with brains from two nondelphinid odontocetes known for longer dive durations, the present study extends these results.
One study [62] found a higher ratio of glia to neurons in the human frontal cortex compared to other primates with much smaller brains. They suggested that "relatively greater numbers of glia in the human neocortex relate to the energetic costs of maintaining larger dendritic arbors and long-range projecting axons in the context of a large brain." The neuron-glia relationship may be somewhat different in cetaceans. For example, the neonatal Oo, with a much larger body size and brain over five times larger than that of the neonatal Tt, had brain cell densities (neuron density: 21,503 cells/mm 3 ; glial cell density: 37,030 cells/mm 3 ) that were less than half of those of the Tt. However, glia/neuron ratios for neonate Tursiops and Orcinus are quite similar (Tursiops: 1.59, Orcinus: 1.72). The glia/neuron ratio in the deep-and long-diving Zc (3.18) with a brain of 2004 g is twice that of the shallow-and short-diving Oo (1.68) with a brain of 6215g (S1 Table). Having higher glia/neuron ratios while performing long, deep dives into cold and dark ocean waters with limited oxygen stores may facilitate heat production [41]. In addition, higher glia/neuron ratios may enhance neurotransmission of acoustic information or protect the brain from hypoxia at depth. For example, glia can serve as a sink for carbon dioxide during periods of hypoxia [63].
Dive capabilities and the cerebellum
There is a dichotomy in cerebellum size between the delphinoids (Phocoenidae, Monodontidae, and Delphinidae) and other odontocetes. After controlling for brain size, the average delphinid cerebellum is 17.2% larger than the average ape cerebellum and 53.5% larger than the average human cerebellum [64]. However, there is great diversity in relative cerebellum size across cetaceans. For example, the largest delphinid, Oo, has a cerebellar quotient (CQ) about twice as high as that of the giant sperm whale (Physeter macrocephalus) [65], and about 70% higher than that of Zc (Oo: 753 g cerebellum [CQ = 1.36]; Zc: 206 g cerebellum [CQ = 0.8]). Both P. macrocephalus and Zc can dive over six times longer (one to two hours) than Oo (just over 10 minutes). Zc has a smaller brain and a lower neuron density compared to Oo, resulting in a comparatively lower total neuron count (Fig 4, Table 2). Although other anatomical and physiological features contribute to the ability of ziphiids and kogiids to perform longer and deeper dives [15,66], lower brain metabolism due to a smaller whole brain and cerebellum relative to body size with fewer neurons in cortex gray matter is likely another feature that may be an advantage in diving. This comparison of odontocetes appears to show that lower total neuron counts and neuronal densities, and smaller cerebella are all correlated with longer dive times (Figs 3 and 4).
One study of terrestrial mammals [5] notes that total glucose use by the whole brain, cerebral cortex, or cerebellum is directly related to the number of neurons within each structure. A higher total neuron number yields a higher energy budget for the brain as a whole. This increased energy budget may actually reflect the relatively large amount of mitochondria that is present in gray matter (particularly in dendrites and axon terminals) compared to white matter [67]. Thus, the amount of mitochondria may be the mediating factor in the negative correlation between relative gray matter mass and dive time. Having a relatively low number of neurons and a lower brain energy budget may facilitate the longer, deeper dives of some cetaceans.
Dive time in cetaceans may also depend on the efficiency of oxygen use in muscle tissue and in the bloodstream. How much muscle and blood oxygen reserves a cetacean has is related to the dive duration and oxygen demands [68]. Cetaceans with higher demands for oxygen have increased oxygen reserves compared to those that typically dive for shorter periods of time.
In the non-cetacean mammals investigated so far, the greatest neuron density and the largest total number of neurons are found in the cerebellum [57]. Relative to body size, the cerebellum of members of the family Delphinidae is the largest among Cetacea and possibly of all mammals. The bottlenose dolphin (Tt) cerebellum is over 50% larger than that of humans [57,59], yet its cerebellar neuron density is only slightly less than that in humans [58]. Considering the cerebellum size and neuron density data reported [58], the total neuron cell count of the bottlenose dolphin cerebellum is approximately 20% greater than that of humans.
Total numbers of neurons in the cerebral cortex and cerebellum
Recent work [5,69] suggests that the absolute number of neurons within the cerebral cortex may be a better determinant of cognitive performance than its relative size. Previous studies have reported total neuron counts for additional cetacean species, such as the long-finned pilot whale (Globicephala melas), which has a relatively high total neuron count that is nearly twice that of humans [45]. S1 Table demonstrates that delphinids generally have high total neuron counts that are near or (in the case of pilot and killer whales) greater than the total number of neurons in humans. Some human studies [70,71] have posited that the cerebellum is responsible for more than just movement and motor control, but also for cognitive tasks, including attention, executive control, language, working memory, learning, pain, and emotion. The large number of neurons in the dolphin cerebellum [58] may also account for some of the intricate sensory and cognitive behavior of delphinids. Fig 1 shows horizontal and frontal sections of a living Tt brain during positron emission tomography (PET) after uptake of 18 F-2-fluoro-2-deoxyglucose (FDG) to measure relative brain metabolism from glucose uptake [7]. Magnetic resonance images are shown for anatomical comparison. The images demonstrate that high metabolic areas are mainly concentrated in the gray matter of the cerebral cortex and cerebellum, with smaller areas of high metabolism in the inferior colliculus and thalamic gray matter. Both PET and MRI scans are from the same healthy dolphin that was trained to lie still in the scanners. These particular scans cannot be compared directly with other Tt or cetacean brains since direct comparisons of different brains require PET ligand dosages calibrated for body size. However, these images demonstrate that in Tt, as in humans and other mammals, the highest metabolism is in the gray matter.
Metabolism of the cerebral cortex and sociality
It is metabolically expensive to have more gray matter, more neurons, and a larger cerebellum. If doing so may also limit dolphins to shorter and shallower dives, what is the advantage that outweighs these limitations? Some proponents of the Social Brain Hypothesis posit that large, metabolically expensive brains support an enhanced social repertoire, cooperation, and prey diversity [6,72]. This is often taken to mean that large-brained animals can support a large social group. One study indicates that this trend may not be as linear as previously expected among cetaceans [72]; this study found that cetaceans in the largest social groups (e.g. Dd), deemed 'megapods,' have smaller brains compared to those belonging to mid-sized social groups (e.g. Tt and Oo). Furthermore, cetaceans living as relatively solitary (e.g. Zc and Kb) have the smallest brains relative to body size. Our data on neuron density resemble this trend with regard to social group size with one exception. Dd have relatively large brains and high neuron densities despite usually belonging to 'megapods.' While it is true that cetaceans in mid-sized groups tend to have the richest social repertoires [72], it may also be the case that having greater neuron densities may support complex and expanded social connections, regardless of group size. However, we cannot be certain that neurons of different species are comparable. Our results suggest neurons are larger in the larger brains. One study showed that in fact, the neurons found in cetacean cortex appear to be less complex than those found in the closely related Artiodactyls, having lower overall dendritic lengths, segment lengths, spine numbers, and spine density [73] There were differences in fixation between the different brains in the study. Differences in fixation could amplify neuron differences shown in the study. Still, until equally well fixed material is available, we must consider that neurons differences observed could possibly mean that there are different information processing abilities of neurons of the compared species. In this sense, the social relations mentioned above must be viewed with caution. Future studies may address the question: How can neurons be compared across species?
Comparison of cortical gray matter mass and neuron density
Of the species we studied, Oo has the largest percentage of cortical gray matter mass, and humans have the greatest percentage of brain mass in the remaining areas of the brain (ROB), which includes white matter and the brain stem. The percentage of brain mass in the human cerebellum is about equal to that in Zc and similar to that in the non-delphinoid odontocetes, such as Kb [37]. An even smaller relative cerebellum size is found in P. macrocephalus and the Indian River dolphin, (Platanista gangetica) [37]. Humans have a lower percentage of gray matter mass [72] compared to the cetaceans we studied (Fig 6). Considering the cerebrum alone, the human cerebrum is closer to 60% white matter and 40% gray matter [74,75].
The gray matter of the human cerebral cortex contains a total of 15 to 20 billion neurons [57,76,77]. Although it has been suggested that the human cerebral cortex has the highest total number of neurons compared to that of any other species [67], our findings and those of other investigators suggest that this is not the case. For instance, one study [45] reports a much higher total neuron count for the pilot whale compared to humans (S1 Table). Also, based on our calculations using cortical thickness, cortical surface area, and neuron density, Oo specimens have total neocortical neuron counts higher than those of any other species, including G. melas and humans (S1 Table). Oo also has a much larger cortical gray matter mass compared to other cetaceans. It appears that the proportion of the brain occupied by cortical gray matter of the adult female killer whale examined here (44%) is similar to that previously reported for a male killer whale (48%) based on MRI scans of a brain of very similar size (6215g for current specimen and 6435g for another specimen in [56]).
Consistent with a magnetic resonance imaging (MRI) study of the Oo brain [56], we found cortical gray matter to be 44% of brain mass in Oo, despite its large brain size. Gray matter accounts for approximately 41% of total brain mass in Dd, Tt and 42% in Zc (Fig 5). The comparison of gray matter percentage between dolphins and humans is reminiscent of the comparison of humans with higher primates. The other primates have a higher percentage of gray matter compared to humans [57]. Some observations have emphasized the higher percentage of white matter (axons) in the human frontal cortex [62]. The human advantage in information processing may relate to the more extensive white matter of the frontal cortex.
Taken together, these findings demonstrate that brain size alone does not always accurately predict neuron density. In odontocete cetaceans, evolutionary history and taxonomic relationships may play a more important role than brain size in determining neuron density. Furthermore, it appears that neuron density and maximum dive time may also be related in odontocetes, with lower neuron densities and total neuron counts relative to body size appearing to correlate with longer dive times when animals of similar body size are compared (Fig 7).
Limitations
We have presented our bihemispheric measurements of cortical neuronal density from five cetacean species (Oo, Zc, Kb, Tt, and Dd) with brains representing almost an order of magnitude difference in size. Cetacean data is largely underrepresented in the literature. The present study began in the 1980s [35,46,47,48] and though limited, provides more samples than those previously published on these different odontocetes. We present the only neuron and glia density data comparing Oo and Zc. Also, we present the only data comparing neonate and adult killer whales (Oo). The neuron and glia counts were done in the mid-1980s and were presented at conferences with published abstracts only [47,48]. Other cetacean neuron density data come from a variety of studies. Most cell density estimates are based on samples from different areas of the cerebral cortex. Although our methods for assessment of neuron density could be questioned, we used the same methods and same areas of cortex for all species except for kb.
Traditional methods for calculating brain cell density involve stereology, in which crosssections of brain tissue from various regions of the brain are examined. This is the method employed in the present study. One study [81] described an "alternative non-stereological method, the isotropic fractionator, which involves homogenizing brain tissue into a suspension of countable neuronal and non-neuronal cell nuclei." Two counting methods, manual Tt, Dd, Zc, and H. sapiens. Each bar represents the percentage of total brain mass in the cerebellum (pink), gray matter of the cerebral cortex (green), and the remaining areas of the brain (ROB: brainstem, white matter of the cerebral cortex) are shown in blue. Mass of the gray matter was calculated by multiplying the cortex surface area and cortex thickness (see S1 Table) by the specific gravity of the gray matter of the cerebral cortex [52]. For average cortex thickness and surface area we used our own measurements. For the human cortex thickness, we used a value of 2.5 mm. The calculated values here are similar to H. sapiens values averaged from previously published data [74]. Cerebellum masses are values from a previous publication [37]. Kb was not included because we only had a brain from an immature animal.
https://doi.org/10.1371/journal.pone.0226206.g006 and automated, may be used with the isotropic fractionator [82]. One publication [83] warned of the dangers of under sampling with stereological methods. For example, another study [84] reported a rather extreme neuron count of 14.9 billion neurons in the cerebral cortex of the small harbor porpoise (Phocoena phocoena), which is close to the total cortical neuron count for humans. It has been posited that this extreme value was likely due to an invalid extrapolation after sampling too few cells within cortical sections [82]. However, researchers have compared the stereology and isotropic fractionator techniques and found no consistent or statistically significant differences in the results obtained from both methods when sufficient samples were taken [82].
The same conclusions were drawn from a study using two different species (humans and macaques) [85]. Furthermore, another study [86] found that the relationship between average estimates and the variance of estimates for a given tissue sample was comparable across all techniques (manual and automated counting with the isotropic fractionator, and stereology). The main advantage of using the isotropic fractionator is faster processing time, whereas the key disadvantage is destruction of the analyzed tissue sample.
As mentioned above, methods vary. Going forward, more standardized methods for calculating cell density must be established in order to accurately compare data across individuals, developmental stages, and species. Only a few brains from a limited number of cetaceans have been studied. To our knowledge, only two neonatal cetacean brains have been studied, comprising one Tt neonate [40] and our present study of the Oo neonate. We were able to study a single Zc that beached alive near our laboratory. Every year many cetaceans live strand on beaches around the world. Given the necessary resources, scientists should be able to examine a wider sample of cetacean brains to fill in the giant gaps in our knowledge. New imaging technology and software for automated cell counting may make it possible to analyze neuron and glia density and other anatomical features such as the microcirculation across the entire brain. Our current findings are a small step in that direction.
Supporting information S1 Table. Values from the current studies compared to published values. Brain cell densities, cortical surface area and thickness, total number of cortical neurons, and brain and body mass measurements for ten species of cetaceans and humans. Total cortical neuron counts were estimated using our measurements for cortical thickness, average cortical surface area, and average neuron density from each species, with the exception of the total cortical neuron counts for Gm and Hs. ND AVG = average neuron density of the neocortex (cells/mm 3 ); GD AVG = average glial cell density of the neocortex (cells/mm 3 ); SA Cx = cortical surface area (cm 2 ); T Cx = cortical thickness (mm); N TOTAL = total number of neurons in the neocortex (×10 9 ); M Brain = brain mass (g); M CGM = cortical gray matter mass (g); M Cb = cerebellum mass (g); M Body = body mass (kg); � = neonatal specimens; as = anterior supracallosal sector; ps = posterior supracallosal sector; ▲ = cortical surface area estimates based on other previously measured brains of similar mass [37]; AVG = averaged adult data from a previous publication [37] | 9,160.2 | 2019-12-16T00:00:00.000 | [
"Biology"
] |
Development of an Augmented Reality environment for the assembly of a precast wood-frame wall using the BIM model
his article presents the development of an Augmented Reality (AR) application to assist in the assembly of a precast wood-frame wall, based on the BIM model of the wall execution sequence. The research study used the Design Science Research approach and its aim was to develop an AR application named “montAR” (version 2.0). This application offers a tutorial that combines a wall model visualized in AR in actual scale, followed by an audio with step-by-step instructions of the assembly process. Its applicability was simulated in a laboratory with the participation of volunteers (architecture and engineering students). Two visualization gadgets were used and compared: smartphones and smart glasses. The potentialities and difficulties of the use of the AR system were assessed through a questionnaire answered by the participants and through direct observation and result analysis by the researchers. The results demonstrated the potential of using AR for precast wall assembly. From a technological innovation perspective, this study emphasizes the potential use of AR as a technology suitable for training and for construction quality control.
T Introduction
The light wood-frame construction method, or simply wood framing, combines a commitment to the environment with new construction techniques.Around 95% of the residences in the United States rely on the aforementioned technology, however in Brazil, its adoption is still incipient (MOLINA; CALIL JUNIOR, 2010), restricted to some regions of the country.The embracement of new construction technologies, like wood framing, depends on individuals' training to develop and execute projects that involve such technologies.Hence, it is important to highlight that new skills must be gained and applied to the work.
For beginners, doubts and the inevitable occurrences of failures in the wood-frame panel assembly could be minimized with the aid of Augmented Reality (AR).As a tool to assist visualization, AR can contribute to the construction of wood-frame buildings as it exhibits additional information that enables better comprehension and also gradually guides the execution of the project.Embedded information into virtual models can be used if connected to the real construction elements and visualized in AR.This way, AR can provide a visualization of the assembly procedure in an interactive way at the same time it shows superimposed models onto the real world.By allowing individuals to be aided in the assembly process of wood-frame edifications, the AR technology would act as a facilitator to the diffusion of this construction system.
As an initiative to teach people about this construction method in Brazil, the application "montAR" (version 2.0) was developed.It offers a tutorial to aid the assembly of a wood-frame wall panel.This tutorial comprises a wall modeled in a Building Information Modeling (BIM) software with a real scale visualization through AR, accompanied by audio step-by-step assembly process instructions.The application was developed for two different mobile devices: smartphone and smart glasses with Android operational system.Smart glasses are a type of optical Head Mounted Displays (HMD) that integrates first-person cameras with hands-free displays (ALVAREZ CASADO et al., 2015).As in the smartphones, the smart glasses already have the processing power and an integrated operational system together with tracking technology (usually GPS and 9 axis head tracking), battery, memory, storage, audio output, image and video capture and display.
The aim of this paper is to evaluate the potential of the use of AR for precast wall assembly and identify difficulties and limitations associated with the assessed gadgets: smartphone and smart glasses.The next section presents some researches involving assembly tasks supported by AR technologies.
AR as a tool for assembly tasks
AR is a technology that combines virtual elements with the real world in real time.The Augmented Assembly is the application of AR in an augmented environment where virtual objects are combined with the real world to enhance the artifact assembly (NEE et al., 2012).In the assembly processes, two or more objects are joined.AR use is being explored to guide the tasks required to put together parts of an artifact.In an augmented environment, physical parts of an artifact, real feedback and virtual contents are used to aid assembly of planned artifacts, allowing the benefits of both physical and virtual worlds.
In the last five years, numerous research studies have been conducted on the use of AR to assist assembly tasks.AR has been applied to, e.g., an animated system to improve the construction assembly process (HOU et al., 2013); an experiment focusing on verifying the performance in retaining work memory according to the user gender (HOU; WANG, 2013); assisted interactive manual assembly design systems (WANG et al., 2013, WANG;ONG;NEE, 2013, ONG;WANG, 2011); a piping assembly in order to examine physical and cognitive potential of the assemblers (HOU; WANG; TRUIJENS, 2015); a building structure model assembly using wooden blocks in order to teach abstract construction and civil engineering topics in a more practical manner (SHIRAZI; BEHZADAN, 2015).These researches are summarized in Figure 1.The localization technology aims to determine where to display digital content.Its accuracy possesses limitations on how AR applications can superimpose virtual information onto the real world (CHI; KANG; WANG, 2013).The hardware device can determine the level of mobility provided, so it is important to observe the display and processing device used.The control mechanism specifies the interaction techniques used to interact with the system, such as gestures or indirect controllers.The AR implementation may not be suitable for all applications (NEE et al., 2012).Therefore, it is important to identify which applications would bring significant benefits to the user performance in terms of ease of learning new tasks, low rate of errors and decreased cognitive workload.In addition, it is important to analyze the effects of prolonged use of the devices and, from that, develop a highly interactive, efficient and customized interface for each application so that the user is satisfied, recognizes the advantages of the new technology and agrees to adopt it in the execution of their work (NEE et al., 2012).In the proposed use of AR, the users are likely to spend a substantial amount of time using the system; this is the reason the user experience must be studied and taken into account.
Materials and methods
The question leading this research is if and how AR could work as an assisting tool to the assembly of a precast wood-frame wall.The purpose is that an AR system could make the assembly stages and execution visible and explicit.Hence, the Design Science Research methodology was adopted in this research process.
The Design Science Research objective is to produce applicable and useful knowledge for problem solving, to enhance existing systems and to create new solutions or artifacts (LACERDA et al., 2013).According to Peffers et al. (2007), this method allows the creation e evaluation of Information Technology artifacts designed based on a research problem.Piirainen and Gonzalez (2013) add that this type of methodology contributes to the existing knowledge by reaching for solutions to non-trivial questions in an innovative manner.
In short, the process of this research consists of five stages iteratively related: (a) understanding wood-frame house construction process; (b) objective: AR as a tutorial tool for construction tasks; (c) design and development for AR usage; (d) simulation; and (e) evaluation.
The overall research process is represented in Figure 2.These stages carry relation to the Design Science Research stages suggested by Peffers et al. (2007), shown in the superior portion of the figure.
The first stage of this research involved the study of related researches and also the understanding of wood-frame house construction process.A visit was carried out to a company specialized in the manufacture of wood-frame residences in an industrialized way in order to collect data to support the artifact's conception and development.
Both the wood-frame panel assembly process in the factory and their installment on the building site were ascertained, Figure 3.
This study sought to establish a parallel on how the AR technology could assist the panels assembly process allowing the self-construction.How could someone build wood-frame structures without specific technical knowledge about the subject?How could this construction system disseminate across Brazil?Could the AR function as a template for assembly?
The exploratory phase of this work allowed the researchers to familiarize themselves with the chosen construction technique and to comprehend the difficulties in the assembly process.This way, it was possible to establish a specific objective.The development stage consisted of BIM modeling, assembly steps idealization, preparation of components and application development.This stage is briefly described in the following section.
After the development of the application, it was possible to simulate its use in laboratory and assess it with real users.Then, the applicability of the AR proposal was analyzed.
Design and development for AR usage
The aim of this section is to demonstrate the process of bringing the existing graphic information with a BIM model to mobile applications in order to offer a tutorial tool by means of AR.
BIM modeling and assembly steps
The aim of this research was to use BIM as a data source for an AR application, thus a BIM model was the primary data source to the development of the actual system.Therefore, a BIM model of a wood-frame wall was designed using the Autodesk ® Revit ® software.This stage was developed in two phases: (a) wall structure; and (b) built-in installations and wall sheathing.
In the first phase, the wall structure was designed.
The proposed structure measures 1.70 meters wide by 1.75 meters high and possesses a central opening for a window.The structural framing material takeoff was extracted from Revit ® , Figure 4.
In the second phase, the built-in installations and wall sheathing were designed.The built-in installations were flexible conduit, electrical box, PVC pipe and elbow.Oriented Strand Board (OSB) was used as wall sheathing.The OSB panels provided were 1.22 x 2.20 meters.In the interest of exploring the most out of them, the cuttings were made after a study conducted on Revit ® .From that, only 3 panels were necessary for both front and back sheathing.The complete material takeoff was extracted from Revit ® , as presented in Figure 5.
The step-by-step was idealized through a BIM software by the "Phasing" resource.Hence, each step corresponded to a different phase in Revit®.In total, 10 phases were created, matching up with the 7 assembly steps and the 3 steps of inserting the pipe and conduit and placing the OSB panel.
Preparation of components for assembly
Molina and Calil Junior (2010) point that, in Brazil, structural wood-frame projects use rapid growth reforestation wood as raw material, such as Pinus and, to a lesser extent, eucalyptus.
According to Ferreira (2013), the wood used is from reforestation and it doesn't go through any transformation process that requires energy, favoring the low consumption of energy in the manufacture of residences In this research, Eucalyptus components, section 5x11 centimeters, were used.The eucalyptus was cut in accordance with the design.Each one was identified by a color according to its length and function.
Subsequently, the wall was physically assembled.
For the first phase, the components were drilled in order to offer a pre-set assembly system.Letters were written alphabetically to identify elements of the same color to favor its location in the panel.
For the second phase, the built-in installations and OSB wall sheathing were placed, Figure 6. 68
Application development
Next, the artifact's functionality, characteristics and development are presented.The "montAR" application was developed in order to offer an assembly guide of a wood-frame wall.Using AR, the user's view is augmented with the assembly sequence, annotations and audio instructions designed to facilitate the task comprehension, positioning and execution.
Two versions of the "montAR" application were developed.The first consisted of a tutorial developed for Phase 1, associated with the woodframe panel assembly.The 2.0 version adds Phase 2, which includes the built-in systems and OSB wall sheathing.The "montAR" application is available on Google Play1 and on Moverio Apps Market2 .
The "montAR" application was developed from the Metaio's Software Development Kit (SDK) for Unity 3D ® .Its functionality is based on the use of an ID Marker (a kind of a fiducial marker) associated with the assembly execution steps of a wood-frame panel.When the ID Marker is seen in AR, it indicates the correct position of the pieces that make up the panel by displaying the model in Development of an Augmented Reality environment for the assembly of precast wood-frame wall from the BIM model 69 real scale, highlighting its position on the assembly board.Markerless tracking techniques, such as tracking using the salient geometric features of real spatial objects, were considered to be adopted.However, as all of the wood components have the same rectangular shape, this approach was dismissed.
After the downloading, the "montAR" application could be accessed by its icon on the screen of the devices.When running, the application displayed a menu with the options: Phase 1, Phase 2 and Acknowledgement (in Portuguese).In Phase 1 and 2, there was a menu on the right side numbered from 1 to 7 to access the panel execution steps.These steps consist of the exhibition of the AR model elements and also of audio instructions that include the correct positioning, orientation (vertical/horizontal), joints and additional assembly referral, Figure 7.
It's important to notice that an assembly sequence is valid only if all the assembly operations are geometrically feasible (WANG et al., 2013), so the assembly sequence was established without interference between parts during the assembly process.In each step, through AR visualization, the wood pieces added are highlighted with its correspondent color over the assembly board, indicating, in real scale, the right position that the pieces need to be inserted.Hence, the BIM model had to go through a treatment process described by Cuperschmid and Ruschel (2013), which includes: exportation file as FBX format, model optimization in the 3D Max ® software, setup of the correct scale, materials application (colors, textures and transparency), alignment in relation to the marker position and ideal illumination of the virtual scene in Unity 3D ® .
Additionally, audio instructions referring to each step were recorded.The Adobe Audition ® software was used to bring optimization to the recordings.They were, then, exported as MP3 format into Unity 3D ® .In the Unity 3D ® , the application interface was developed with an initial menu and a numbered menu with the assembly steps.The models and audio were associated with each step.Meshes were created corresponding to the measurements, so that the model, when seen in AR, could exhibit them, Figure 8.At the end of this process, the Unity 3D ® project was compiled into an APK file (Android application package).
The application was designed to run on Epson Moverio™ BT200 and on smartphones with Android operational systems, Figure 9.The Moverio™ BT200 is composed of binocular smart glasses (240g) that use two translucent screens to project the augmented content onto the surface of the real environment and a control pad (165g).It was used in this study because it: is capable of displaying three-dimensional models, provides a reasonable field of view (compared to other existing AR smart glasses at the beginning of the research) of 23 degrees, has a translucent optical display system, has a battery life of approximately 5.8 hours and was marketing available.
Through Metaio SDK, it was possible to encounter two ways of accessing AR resources in the Moverio™ BT200: (i) one in which the device captures the real scene by the glasses built-in camera, mixes the virtual content with the real scene and projects all of the content on the screen.In this situation, there is no possibility to use stereoscopy.(ii) and the other in which there is no capture of the real scene and the virtual content is included in the real scene.However, the insertion of a black background is needed, which causes partial occlusion of the real scene by the shading of the physical environment.In other words, the device always projects an image in 100% of the screen area, disallowing the resource of only adding the virtual content to the scene that is being visualized by a translucent screen.In this situation, there is the possibility to use stereoscopy.Given these possibilities, the first way of using the device was chosen in order to avoid the real scene occlusion, even though it offers a flat visualization.
Simulation and evaluation
The first version of the "montAR" application was used to evaluate the Phase 1, which was associated with the wood-frame wall panel assembly.Phase 1 was assessed using smart glasses (CUPERSCHMID; GRACHET; FABRICIO, 2015) and smartphone (CUPERSCHMID; GRACHET; FABRICIO, 2016).This paper proposes the comparison between the use of both devices for Phase 1 as much as for Phase 2, which includes the built-in systems and sheathing.For that reason, this paper contemplates the assessment of all the assembly process of a precast woodframe wall, assisted by the "montAR" application (version 2.0) running on smartphone and smart glasses.
The evaluation was conducted in a carpentry lab equipped with Epson Moverio™ BT200 smart glasses, a Samsung Galaxy S5 mini smartphone (both with pre-installed "montAR" application), an ID marker, plus specific materials for both phases.During Phase 1 it was used: color identified eucalyptus studs, screws, an electric and a manual screwdriver and an assembly board on which the wood-frame wall panel should be mounted.During Phase 2 it was used: OSB, flexible conduit, electrical box, PVC pipe and elbow.
Formal research to evaluate AR interfaces involving users, such as this one, has not been widely performed (DÜNSER; BILLINGHURST, 2011).AR systems diverge from desktop systems in various aspects; the most essential is that such systems are produced for being used as a mediator or amplifier of human visualization (NILSSON, 2010).One interface of AR includes the hardware (e.g.smartphone; smart glasses), the software (e.g."montAR"), the devices for visualization (i.e.smartphone screen, smart glasses screen), the interface elements (e.g.menus, icons), the markers, the interaction technique (e.g.rotating the marker, pinching the virtual model), and the content shown in AR.Depending on the device, the tracking system, the interaction technique used and the interface in AR is altered.These aspects may justify, partially, the lack of well-established methods to evaluate AR interfaces.
To perform this evaluation, the method described by Olsson (2013) of subjective measures was adopted for evaluating the quality of the interaction with the system.The system's applicability was simulated in the laboratory with the participation of volunteers.The procedure to perform this evaluation was planned to happen as follows: (a) presentation and Characterization: At first, the aim of the evaluation and the task goal (woodframe wall assembly) was explained.Then, a characterization questionnaire of the participant was applied, checking the age, gender, education level, the frequency of smartphone or tablet usage and the previous use of any AR systems; (b) appropriation of the technology by AR participants: The researcher demonstrated the functioning of the AR system.Next, participants used the application "montAR" for approximately 5 minutes, to familiarize themselves with this media language.During this period, the participants had the opportunity to handle the device and point it to the marker at the same time they visualized in AR the wood-frame wall model; (c) wood-frame wall panel assembly: The proposal of this stage was to verify the quality of the participant's experience when using the AR system in a way to assembly the wood-frame wall.The users could freely walk around in order to observe the augmented environment from different perspectives and execute the task.The task was to assemble the wood-frame wall panel using the "montAR" application through Epson Moverio™ BT200 or Samsung Galaxy S5 mini.This task was planned to happen in two phases; and (d) in addition to the questionnaire, the participant was asked to talk about the impressions, difficulties and opinions about the use of this technology.During the process, the researcher took notes and registered the process with photographs for further analysis.
Results and discussions
The assessment of Phase 1 took place in September 2015 and Phase 2 on February 2016.Two visualization gadgets were used and compared: smartphone and smart glasses.The evaluation conducted examines the quality of the user's interaction with the task.This evaluation aims a qualitative analysis of the interaction and manipulation of both the virtual information and the real wood studs for assembly.In this evaluation, subjective measures and qualitative analysis were performed.The quality of the user experience was evaluated through the categories described by Olsson (2013), an open question and direct observation.
The evaluation involved twenty-eight participants, among Architecture or Civil Engineering undergraduate students.Seven people at a time performed the evaluation of one phase with a distinct device (smartphone or smart glasses).The evaluation was conducted individually.All subjects voluntarily participated in the experiments.All of them had never used AR before.As AR training is equally effective for both males and females (HOU; WANG, 2013), in this study participated both genders, without the need of equal split.
All of the participants informed daily basis use of smartphone or tablet.None of them had previous experience in any kind of AR system neither had assembled a wood-frame panel before.Initially, the participants were given information about the research aim and explanation on the system operation.All of them took a few minutes to become familiar with the AR functioning before starting.
Results from the experiments indicate a positive effect of facilitations when using the AR system in assembly tasks.All participants demonstrated interest in the AR system used.The users could express their considerations about the application.For providing a realistic view of the users' motivation to actually use the application, both phases of the assembly process were very important.
Altogether, 28 evaluations were performed.All participants were able to assemble the wood-frame wall.In the first phase, each participant performed the assembly task in about 55 minutes and, in the second phase, 15 minutes.The first phase consisted of the structure assembly and the eucalyptus components used weighed around 10 kg each, which hampered the execution of the task.Also, the need to screw the elements together resulted in a longer completion time.In the second phase, less effort was required, as there were fewer steps and only the placement of the pieces was requested, without the need of nailing the OSB panels.
Users performing Phase 2 with smart glasses can be seen in Figure 10.It illustrates the effort to accomplish the task (from left to right): user standing in front of the panel to visualize the whole scene; user measuring the distance to place the PVC pipe; user placing the conduit; user positioning one part of the OSB; user holding the second part of the OSB with both hands; user putting the conduit onto the electrical box.
Figure 10 -Smart glasses users performing the assembly of the wood-frame wall
Figure 11 exemplifies participants using smartphones in order to accomplish the task.The first three images demonstrate users visualizing the scene in the smartphone screen and the last three images illustrate users performing tasks with hands-free.In this situation, users visualized the scene through the smartphone screen, however, to have their hands free to perform the steps, it was necessary to memorize them.In this sense, the mental workload was higher than with the use of smart glasses.
The evaluations were categorized from two devices: smartphones and smart glasses.The results indicate the user's overall judgment of the experience.Despite the use of different devices and the addition of a new phase, the results of the questionnaire did not diverge much from ones from Phase 1 (CUPERSCHMID; GRACHET; FABRICIO, 2015FABRICIO, , 2016)).
The results of the questionnaire, in general, indicate acceptance of the proposed system.All agreed that: the step-by-step provided was effective; the AR system was appropriate to the task; they felt satisfied with the way they performed and accomplished the task and they enjoyed the experience of assembling the woodframe panel using the AR system.Regarding the screen size, most users declared that it was appropriate to the task.The interface provided by the AR system seemed natural to the majority of the participants.All smart glasses users and most smartphone users felt the desire to continue using the system.More than a half of the users felt involved in something extraordinary.
On the other hand, many did not feel comfortable when using the devices during the whole assembly period.The rejection was 50% for the use of smart glasses and 21% for smartphones.Another factor that must be emphasized is the confusion caused by the superimposition of the virtual information onto the real world.About 21% of users of each device opted to abstain from opining.
In addition to the questions, an open question was provided destined to further considerations, where the participant could comment on his/her difficulties and suggest improvements on the system.Some important considerations, especially related to Phase 2, should be highlighted.The greatest objection involved the aforementioned visual confusion caused by the superimposition of the virtual information over the real one.A few number of users complained about the colors and textures chosen to the virtual components.Similar colors hampered the differentiation between real and virtual elements, Figure 12.Some participants complained about the virtual image stability.At various moments, the virtual model would move in relation to the marker.All along the application development, the virtual model position was determined in order to avoid alterations, even if the user moved.Unfortunately, given the technological limitations of the hardware as much as software, this issue couldn't be totally bypassed, Figure 13.
Even though the Moverio™ possesses a frame for prescription lenses, they must be specially designed in order to fit the frame.For this reason, the participants who wore prescription glasses felt difficulty in using the smart glasses.They stated that using the smart glasses over the other was uncomfortable, Figure 14.In addition, the adaption of a strap holder to the smart glasses was necessary to keep it from falling off of the users' face, even those who didn't wear prescription glasses.Amongst the improvement suggestions, one user requested an animation showing the assembly process instead of using static models.Corroborating, other participant suggested the addition of visual orientations such as arrows indicating the position of the components.Other stated that not only the position of the pipe and conduit should be shown, but also information on how to insert them in the panel.
On the other hand, one complimented the system claiming that it is much better than paper-based manual.This participant stated that, with 2D manuals, one can visualize isolated parts when the AR enables the user to see the whole model.Other participant declared that the system is very interesting and didactic.The detailed visualization of each step was also complimented for facilitating the understanding of the assembly.During the execution of the task of assembling the panel using AR, the participants' actions were registered.It was noted that the users would begin the task holding the smartphone and the Moverio's touch-pad.After a while, when the participants felt the need to have their hands free, they would place them in their pockets, Figure 15, and remove only when accessing a button to move to next step was necessary.It demonstrates an adaption process that involves both: the use of the technology and the task to be executed.
The majority of the users preferred to use the headphone in only one ear, Figure 16.This attitude is probably related to the connection to the real world.Hence, for the sake of not losing the sound connection, many users received the audio information from just one ear.Several smartphone users preferred not to use the headphones, listening only through the device internal speaker.
Conclusion and future work
In this article, we have presented the results of a user evaluation, in which a mobile AR system utilizing BIM models was created to assist the assembly of a precast wood-frame wall (including structure, built-in installations and OSB wall sheathing).This research is the first example of the use of AR for precast wood-frame wall assembly in actual scale.An AR prototype system was developed based on a BIM model.The "montAR" application was successfully designed and used to conduct a simulation.Then, an evaluation devised to assess the user experience between the use of a smartphone and smart glasses was undertaken.The users had the opportunity to test a prototype of the system.This study provides a comparison between the mobile devices used in the AR technology for guiding people in a wood-frame wall panel assembly.The concrete experience of holding a smartphone or wearing smart glasses and using the application in real settings provided an important aspect of the user experience.
Although it has been observed that improvements can be made, the majority of users considered the application efficient as an assembly aid.None of the 28 participants had previous experience in assembling wood-frame structures, still all of them were capable of executing the task.The AR application enabled the understanding of the tasks and could be used as a tutorial tool for assembly.The issues related to the difficulty in visualizing the model as a consequence of the adoption of certain colors, textures and transparency can be easily solved by altering them in the Unity 3D ® .However, the greatest complaint resides in the discomfort of using the devices all along the process, especially the smart glasses.The precision and stability of the superimposition of the virtual model over the real one were also inadequate.Therefore, a combination of improved hardware and software should be achieved in the interest of using this technology in working sites.Unlike Hou, Wang and Truijens (2015) that used cumbersome devices such as a PC, sensor, monitor and a large marker, this research required only smart glasses or a smartphone and a marker to work, which facilitates its use.This setup may favor its use in construction sites.Therefore, the knowledge produced, despite being used in the context of wall assembly in a laboratory, demonstrated potential application to the assembly of buildings in working site.
Furthermore, the study made enables the idealization of expanding this application to other fields, such as production control.Usually, methods of work verification are adopted, which include the use of checklists associated with in loco measurements to certify accordance with the initial project.In this respect, the exhibition of the virtual model superimposed onto the correspondent real elements in actual scale could reduce the inspection time and prevent errors.Therefore, in addition to the use as assembly guidance, the AR application could be used as a production control system, once it enables quick association between planning and execution.It is worth highlighting that the system offers assembly dimensions, which allows the checking of all elements positioning.As a tool for production control, a checklist associated with the virtual model could be added in order to ease the team communication and register important data, such as: who is responsible for verification, date of inspection, items approved and those that need repair, among others.
Future work should focus on adding support for the implementation of the AR system into real construction projects.The consolidation of AR in the assembly tasks in the construction sector still needs more tracking accuracy, virtual image stabilization, intuitive interaction techniques and suitable devices for construction sites (smart glasses more resistant with the capacity to project only the virtual image on the screen superimposed on the real world).This research would not be possible without the co-operation of many people, and the authors would like to thank everyone who has participated in the evaluation section.We are very grateful for the financial support for the Junior Post-doctorate scholarship (Process: 151435/2014-6) and for the Scientific Initiation scholarship (Process: 149755/2015-5) provided by the National Council of Scientific and Technologic Development (CNPq).
Figure 2 -Figure 3 -
Figure 2 -Figure The overall research process
Figure 7 -
Figure 7 -"montAR" application: icon, initial menu and the steps to assemble the wall
Figure 11 -
Figure 11 -Smartphone users perfoming the assembly of the wood-frame wall
Figure 12 -
Figure 12 -Similar colors and textures causing visual confusion
Figure 14 -
Figure 14 -User wearing two glasses at the same time: smart glasses and regular glasses (left) and the strap holder attached to smart glasses (right)
Recent AR researches involving assembly activities Research
Development of an Augmented Reality environment for the assembly of precast wood-frame wall from the BIM model 65 Figure 1 - | 7,404.6 | 2016-08-17T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Cancellation mechanism of dark matter direct detection in Higgs-portal and vector-portal models
We present two alternative proofs for the cancellation mechanism in the U(1) symmetric pseudo-Nambu-Goldstone-Boson Dark Matter (pNGB DM) model. They help us to have a better understanding of the mechanism from multi-angle, and inspire us to propose some interesting generalizations. In the first proof, we revisit the non-linear representation method and rephrase the argument with the interaction eigenstates. In this picture, the phase mode (DM) can only have a trilinear interaction with a derivative-squared acting on the radial mode when the DM is on-shell. Thus, the DM-quark scattering generated by a mass mixing between the radial mode and the Higgs boson vanishes in the limit of zero-momentum transfer. Using the same method, we can easily generalize the model to an SO(N) model with general soft-breaking structures. In particular, we study the soft-breaking cubic terms and identify those terms which preserve the cancellation mechanism for the DM candidate. In our discussion of the second method, we find that the cancellation relies on the special structure of mass terms and interactions of the mediators. This condition can be straightforwardly generalized to the vector-portal models. We provide two examples of the vector-portal case where the first one is an SU(2)L × U(1)Y × U(1)X model and the second one is an SU(2)L × U(1)Y × U(1)B−L × U(1)X model. In the first model the vector mediators are the Zμ boson and a new U(1)X gauge boson Xν, while in the second model the mediators are the U(1)B−L and U(1)X gauge bosons. The cancellation mechanism works in both models when there are no generic kinetic mixing terms for the gauge bosons. Once the generic kinetic mixing terms are included, the first model requires a fine-tuning of the mixing parameter to avoid the stringent direct detection bound, while the second model can naturally circumvent it.
Introduction
Cosmological and astrophysical observations indicate that the energy of the universe consists of substantial cold Dark Matter (DM) [1], which cannot be explained by the Standard Model (SM) of particle physics. By far the most attractive candidate of DM is the Weakly Interacting Massive Particles (WIMPs), which couple to the SM particles with a strength similar to the weak interaction. The WIMP models are interesting not only because they can naturally explain the data of DM relic abundance by the thermal production mechanism, but also because they may be detected in terrestrial experiments. In recent years, there are many underground dark matter direct detection experiments, e.g., XENON1T [2], LUX [3], and PandaX-4T [4], searching for signals of DM-nuclei scattering. However, there is still a null result from all these experiments even the detection sensitivity has been improved by successive upgrades. The absence of direct detection signal can be explained by a super weak interaction between the dark sector and the SM, but then it is hard to obtain the observed relic density of DM by the well-studied freeze-out production framework.
In recent years, the pseudo-Nambu-Goldstone-Boson (pNGB) dark matter models, which naturally predict a suppressed direct detection signal, have drawn much attention. The model was firstly established in ref. [5], where a cancellation mechanism of Higgs portal DM-nuclei scattering is found in their template model. The cancellation mechanism is based on a soft broken global U(1) symmetry and the pNGB property of the DM candidate. It is found that DM-nuclei scattering processes happen in the t-channel mediated by two Higgs bosons and their amplitudes cancel automatically in the zero-momentum transfer limit. On
In this work, we are going to revisit the pNGB DM model and discuss two simple methods for proving the cancellation mechanism. In the first proof, we will revisit the nonlinear representation which has been considered in ref. [31]. We will rephrase the argument in a way that the cancellation becomes obvious. 1 Using the non-linear representation can help us to generalize the model in different ways. For example, extending the symmetry to SO(N) is straightforward in this picture. In ref. [31], the SO(N) with masses degenerate pNGBs has been studied, so we will focus on more general soft-breaking structures including non-degenerate spectrum and also the soft-breaking cubic terms. We find that with certain conditions, some cubic terms can preserve the cancellation for the DM candidate.
In the second proof, we use the linear representation and show that the combination of the CP-even scalars coupling to DM can be redefined as a new scalar boson with no mass mixing to the SM Higgs boson. Instead, a kinetic mixing between them is generated and it is the only portal connecting SM fermions to the DM. The suppression of the cross section is then caused by the fact that the kinetic mixing terms of the mediators are effectively negligible in the t-channel DM-quarks scattering processes.
Inspired by the second proof, we generalize the cancellation mechanism to the vectorportal cases. It is well known that if the DM is a Dirac fermion (or complex scalar) which couples to the gauge boson, the gauge boson mediated DM-nucleon scattering cross section will be too large to accommodate the current direct detection bounds. Therefore, finding a mechanism that can naturally generate a small direct detection cross section without suppressing the DM annihilation cross section is phenomenologically interesting. We will establish two template models to illustrate how the mechanism works. In the first model, we introduce a new gauge boson from a U(1) X symmetry and show that the DM-quark scattering mediated by Z µ and Z µ bosons cancel in the zero-momentum transfer limit. However, the cancellation is violated if there is a generic kinetic mixing between the U(1) Y and the U(1) X gauge bosons. In our second model, we propose a U(1) B−L ×U(1) X extension in which cancellation occurs between the two new gauge bosons. We find that, even if the generic kinetic mixing term violates the cancellation, the direct detection bound can still be circumvented since the mixing can be naturally small in this case if it originates from 2-loop corrections. This paper is organized as follows. In section 2, we establish our two proofs of the cancellation mechanism for the Higgs-portal model and introduce the SO(N) generalization. In section 3, we discuss the cancellation mechanism for vector-portal models and JHEP01(2022)117 provide two examples. Finally, our conclusions are given in section 4. For the reader's convenience, we briefly review the original proof of the cancellation mechanism given in ref. [5] in appendix A. In appendix B, we briefly introduce a UV origin of a spurion field, κ ijk , which is presented in section 2.
Two proofs for the cancellation mechanism
First of all, let us briefly review the basic ideas proposed in ref. [5]. The model proposed therein consists of the SM plus an extension with a complex scalar S. A global U(1) symmetry of S is spontaneously broken by its non-zero vacuum expectation value (VEV), and thus a Goldstone boson emerges. The Goldstone boson acquires a mass if the U(1) symmetry is softly broken. To be precise, the potential of the SM Higgs field H and the scalar S is given by The first line of eq. (2.1) respects the U(1) symmetry, while the second line is a softbreaking term. The SM gauge symmetry and the global U(1) symmetry are spontaneously broken by the VEVs of H and S, respectively. In the unitary gauge, we can write H and S as The CP-even component s will mix with h, while the CP-odd component χ is a pseudo-Nambu-Goldstone Boson (pNGB) playing the role of a DM candidate. The masses of (h, s) and χ can be easily obtained by finding the stationary point of the potential and the results are shown as follows The mass matrix of (h, s) can be diagonalized by an orthogonal matrix O as M 2 diag = OM 2 even O T , while the mass eigenstate is (h 1 , h 2 ) = (h, s)O T . The χ-quark scattering is through the t-channel mediated by h 1 and h 2 as shown in figure 1(a). It has been proved by ref. [5] that the amplitude of this process vanishes in the zero-momentum transfer limit (t → 0), since there is a cancellation between the two diagrams corresponding to the h 1 and h 2 mediators.
Although the χ-quark scattering is suppressed due to the cancellation, the annihilation cross section for χ pairs is not suppressed since χ + χ →f + f is through the s-channel and the s variable is not necessarily small compared to the masses of the mediators. Therefore this model can easily fit the relic density data and avoid the stringent direct detection bound at the same time.
In the following subsections, we are going to introduce two methods to prove the cancellation mechanism for DM-quark scattering and discuss some possible generalizations.
The first proof
In this subsection, we revisit the non-linear representation proof of the cancellation mechanism in the U(1) model [9], and rephrase it in a more simpler way. 2 This leads us to a better understanding of the cancellation mechanism, and helps us to generalize the model. In the non-linear representation, the complex singlet S is written as where s is a scalar that mixes with the h, while χ is the DM candidate. Substituting eq. (2.4) into the potential, we find that the only terms involving χ come from the softbreaking terms: where we have expanded the cosine function up to order χ 2 in the second line. We read off the mass squared of the pNGB χ as and find that there is a sχ 2 trilinear coupling, arising from the potential. In the non-linear picture, the kinetic term of S consists not only of the kinetic terms of s and χ, but also includes derivative interactions. To be precise, the kinetic term is
JHEP01(2022)117
We see that the third term is another source of sχ 2 trilinear coupling, which can be rewritten it in equivalent form as The first term is a total derivative and can be dropped in the action. Combining eqs. (2.7) and (2.9), the full sχ 2 trilinear interaction is 10) The first term contributes a coupling proportional to the momentum squared of the s field, while the second term vanishes when χ is on-shell. To the tree level, the χ-quark scattering can only be mediated by the s boson which mixes with the Higgs field h (see figure 1(b)). We expect that the amplitude must be proportional to t = ( which vanishes when t → 0. Note that, if we include following soft breaking terms, which involving odd numbers of S, the cancellation property is not preserved in general, unless κ 1,2,3 satisfy 3 However, the linear term S + S * , the cubic terms |S| 2 (S + S * ), and S 3 + (S * ) 3 usually have different origins. The |S| 2 S operator is a cubic term generated by a spurion with 1 unit of U(1) charge, while the S 3 operator is a cubic term generated by a spurion with 3 units of charge. Although the linear term and the |S| 2 S term has the same charge, their cancellation requires a special relation between κ 1 and κ 2 as κ 3 1 = κ 2 v 2 s /2, if κ 3 is set to be 0. This is a blind-spot of direct detection due to an accidental relation between two unrelated parameters, however, it is not satisfied in a general case. There is no symmetry or fundamental principle to guarantee the relation (2.13). Therefore, if we want the cancellation property to be obtained automatically, it is better to forbid the linear and cubic terms by assuming a Z 2 symmetry with S → −S.
There is a more general soft-breaking potential which includes a term as κ HS |H| 2 (S + S * ), but it can be removed if we shift S by a constant S → S +s 0 , where s 0 = −κ HS /2λ SH .
JHEP01(2022)117
The couplings, µ 2 , µ 2 S , µ 2 S , κ 3 1 , also change respectively, and the cancellation conditions, in terms of the new couplings, are the same as the discussion above.
The non-linear representation is useful for generalizing this model. We easily see that the cancellation mechanism still works if the model is extended with more Higgs doublets which are neutral with respect to the global U(1). For example, the 2HDM [28] and NHDM extensions do not violate the cancellation since these Higgs fields only couple to the radial mode s. A more interesting generalization is to consider a global SO(N) symmetry which is spontaneously broken to SO(N-1). The SO(N) is also softly broken so that the Goldstone bosons can acquire masses.
We will consider an SO(N) model consisting of a real scalar field Φ which is in the fundamental representation of the SO(N) group. To simplify things as much as possible at the beginning, we first consider a Lagrangian without soft-breaking cubic terms, where M 2 is an arbitrary symmetric N×N real matrix. We can rotate Φ by an orthogonal transformation to a convenient basis,Φ = OΦ, such that the mass matrixμ 2 = OM 2 O T is diagonal. Assuming that the diagonal matrixμ 2 takes the form where all the m 2 i and µ 2 φ are positive quantities. 4 The SO(N) symmetry is spontaneously broken due to the negative mass-squared parameters, while the m 2 i are soft-breaking masses. Note that in ref. [31], all the m 2 i are assumed to be equal so that the remnant symmetry is exactly SO(N-1). In our setup, we are considering a more general situation in which the soft-breaking terms also break the SO(N-1) symmetry. Using the new basis, the Lagrangian can be rewritten as The last term of eq. (2.16) is a diagonal soft-breaking mass term which gives masses to the Goldstone bosons. In this case, theΦ field can be parametrized as The eigenvalues can always be written in this form if at least one of them is negative. In this case, we can choose the minimal one to be the −µ 2 φ and express all the others as m 2
JHEP01(2022)117
whereâ = 1, 2, . . . , N − 1 indicate the broken generators given by and χâ are pNGBs which play the role of multi-component DM. The minimization conditions of the potential are and these will determine the VEVs in terms of the model parameters. The mass matrix for (h, φ) is given by (2.19) The kinetic term ofΦ can be computed by expanding the exponential function and keeping the leading terms up to the quadratic of χâ as follows, Substituting eq. (2.20) into the kinetic term, we obtain The soft-breaking mass terms can also be computed by the expansion and the result is It is now easy to show that the masses of the pNGBs χâ are mâ, while the trilinear interactions of χâ and φ are Once we read off the Feynman rules in momentum space, the first term of eq. (2.23) is proportional to the momentum squared of φ, while the second term vanishes when χâ is on-shell. Since the pNGBs can only communicate with the SM fermions through φ which is mixing with h, the amplitude of the χâ-quark scattering vanishes in the zero-momentum JHEP01(2022)117 transfer limit. Note that when N is an even number, the model is equivalent to the SU(N/2) generalization which has been discussed previously in ref. [32].
We can now try to add some soft-breaking cubic terms to eq. (2.16) and see how they affect our results. Without loss of generality, these terms can be written, where i, j, k = 1, 2, . . . , N represent the indices of the fundamental representation, while κ ijk are real parameters with 1 unit of mass dimension. Using the expansion given by eq. (2.20), we can analyze these terms separately for different components, as follows κâbĉχâχbχĉ terms include trilinear interactions among the pNGBs. They can be categorized to the following three types: 1. Ifâ =b =ĉ, then χâ cannot be a DM candidate since this interaction leads to DM decay.
2. Ifâ =b =ĉ, χâ is a viable DM candidate, since it is protected by Z 2 parity χâ → −χâ, while χĉ is unstable and thus can not be DM. In this case, the χâ DM still enjoys the cancellation mechanism since χĉ does not mix with the radial mode or the Higgs boson h to leading order. 5 3. Ifâ =b =ĉ, all χ fields are stable DM candidates, unless one of them has a mass larger than the sum of the other two, in which case it will decay into the lighter states. The stability of DM can also be guaranteed by introducing a Z 2 × Z 2 symmetry. We can let χâ, χb being odd under the first Z 2 , while let χb, χĉ being odd under the second Z 2 .
terms lead to mass mixing between χâ and χb, and in addition their trilinear interactions with the radial mode. When we add only these terms to the model, they definitely violate the cancellation mechanism for χâ and χb, since the trilinear coupling from the potential term can no longer cancel the contribution from the kinetic term. Moreover, once the full mass matrix of χâ and χb has a negative eigenvalue, the vacuum we have chosen is not stable anymore.
terms contain tadpoles of χâ, which means we have chosen a wrong vacuum configuration. In this case, a more complicated formulation of the vacuum is required, which is beyond the scope of this work but worth studying in the future. 3 terms generate extra mass terms and trilinear interactions for each pNGB. If these terms are added individually, the cancellation mechanism is violated for all pNGBs.
JHEP01(2022)117
cancels the extra quadratic of χâ and, therefore, the cancellation still works for χâ.
On the other hand, the symmetric tensor κ ijk can be separated into two parts which might originates from different sources. One is a spurion κ i in the fundamental representation, so that κ ijk can be written as κ ijk = κ i δ jk + κ j δ ik + κ k δ ij . The other source of κ ijk is a spurion in a symmetric three-index irreducible representation of SO(N), which is denoted asκ ijk , with conditionsκ ijk δ ij =κ ijk δ jk =κ ijk δ ik = 0. In principle, we should not expect the coefficients of operators originating from these two different sources to be related. Note that, in the first case, only the κ N component is permitted to be non-zero, otherwise the vacuum configuration is incorrectly chosen. However, the κ N component leads to a violation of the cancellation mechanism for all the pNGBs, so this case will not be considered in this work.
In the original U(1) (or SO (2)) model, only a single pNGB appears so no κâbĉΦâΦbΦĉ term can be added without violating the cancellation. Although it can include a term such as 3 2 (Φ 1 ) 2Φ2 + (Φ 2 ) 3 , which preserves the cancellation, 6 as we have mentioned, it requires an unnatural combination of the two unrelated terms from different sources. The simplest model allowing κâbĉΦâΦbΦĉ terms is the SO(3) model which contains two pNGBs. To find out the most general cubic terms naturally preserving the cancellation, we consider the scenario in which κ ijk stems from a spurion in an irreducible symmetric three-index representation of SO(3). The resulting cubic terms can be parametrized as Note that independent degrees of freedom ofκ ijk is seven, which match the number of dimensions for the symmetric three-index representation. We assume that the chosen vacuum is stable, so that following conditions should be satisfied κ 111 +κ 122 =κ 112 +κ 222 = 0 , (2.26) otherwise tadpole terms of pNGBs will be generated. If we require that at least one pNGB particle, χ 1 , is a stable DM candidate, following conditions should also be satisfied Finally, if we want the cancellation mechanism to work for χ 1 , we requires Substituting these conditions into eq. (2.25) and expanding in series of χâ, we obtain the explicit expression for the cubic terms,
JHEP01(2022)117
Note that the condition eq. (2.27) can be naturally satisfied if we assume a Z 2 symmetry under whichΦ 1 is odd. On the other hand, the condition eq. (2.28) is not automatically satisfied in the most general case. However, a special case thatκ 113 =κ 223 = 0, can be naturally satisfied by assuming a Z 2 symmetry under whichΦ 3 is odd. In this case, the Z 2 symmetry is spontaneously broken by the VEV ofΦ 3 , so that eq. (2.28) can still be slightly violated at loop-level. We expect that the loop-level violation is small and does not lead to significant effect in the direct detection process. If we further assume thatκ 112 =κ 222 = 0, then χ 2 can also be a stable DM particle, but it does not preserve the cancellation unless κ 113 andκ 223 also vanish. In conclusion, the SO(3) model is the minimal model which can include some soft-breaking cubic terms without violating the cancellation mechanism for the DM candidates Finally we want to make some comments on the second condition in eq. (2.26), which is κ 112 +κ 222 = 0. As a low energy effective theory, we just assume this condition by hand, for the consistency of the chosen vacuum. However, a non-vanishingκ 112 which respects this condition can also be automatically generated if we consider a UV completion of the spurioñ κ ijk . We assume that all the soft-breaking terms coming from a single real scalar field, K ijk , which is in an irreducible symmetric 3-index representation of the SO(3) symmetry. The K ijk field can couple to the Φ i field through following renormalizable potential terms, where ε ijk is the Levi-Civita symbol. When K ijk acquires a non-trivial VEV, K ijk ≡ κ ijk /λ K(Φ) 3 , the first term in eq. (2.30) generates the cubic term given in eq. (2.24). The second and third terms generate soft-breaking masses ∆M 2 ij , while the terms in the second and third lines generate soft-breaking linear terms for Φ i . If only K 112 and K 222 have non-zero VEV, K 112 = − K 222 ≡ κ/λ K(Φ) 3 (see appendix B for more details about the vacuum configuration), we can check that all the linear soft-breaking terms vanish, while the soft-breaking mass matrix of Φ i is which is consistent with the structure of eq. (2.15). Therefore, we have found a UV completion of the soft-breaking cubic term for the SO(3) model, in which a pNGB DM candidate can preserve the cancellation property.
In the case thatκ 113 ∝ K 113 = 0, the linear term induced by the KKKΦ coupling is usually non-vanishing, so that our previous discussion which only included the cubic term of Φ i was incomplete. Since the situation is much more complicated, we leave it for future research.
The second proof
In our second proof of the cancellation mechanism, we use the linear representation eq. (2.2). The mass term and trilinear couplings from the potential are given by which can be written in the following quadratic form We then find that the combination of h and s that couples to the χ appears as a quadratic form in the potential. If we define a new scalar, φ ≡ (λ SH vh + λ S v s s)/λ S v s , and rewrite the Lagrangian in terms of h and φ as we see that the mass terms are already diagonalized in this form. On the other hand, the kinetic terms are not canonical anymore and there is also a kinetic mixing term between h and φ generated in this basis. We can treat the kinetic mixing term as an interacting vertex which is endowed with a value where q µ is the momentum of φ. The propagators of h and φ can be read off from their non-canonical kinetic and mass terms as Note that in this form the SM fermions only couple to the h field, while the DM χ only couples to the φ field. The only portal connecting these two sectors is the kinetic mixing vertex whose strength is proportional to the momentum squared of the mediator, which is just the t variable (see figure 1(c)). Therefore we see that the amplitude of the χ-quark scattering, vanishes in the t → 0 limit.
JHEP01(2022)117
The lesson we can learn from the second proof is that the cancellation mechanism relies on the special structures in the masses and interactions of the scalar mediators. The condition of the cancellation is that the combination of the mediators that appears in the trilinear coupling with the DM has no mass mixing with the SM Higgs boson h. This inspires us to look for the same structure in the vector-portal models. When the gauge symmetry is spontaneously broken by the Higgs mechanism, the masses of the gauge boson are generated by the gauge interaction of the Higgs field. If the representation of gauge symmetries for the DM field is the same as the a new Higgs field, which breaks some new gauge groups, then DM field will couple to the new gauge bosons with the same combination of mediators as the new Higgs field. Therefore, the cancellation in the vector-portal models can be achieved by the same method as in the Higgs-portal models. In the following section, we give two examples of the cancellation mechanism in vector-portal models in details.
The SU(2) L × U(1) Y × U(1) X model
Firstly, we consider a simple extension of the SM with a gauged U(1) X symmetry as a toy model for illustrating the vector-portal cancellation mechanism. In our setup, besides the SM Higgs doublet H, we introduce a new Higgs field Φ and a Dirac fermion DM field, Ψ which are both in the representation (2, 1/2, 1) of the gauge symmetry SU(2) L × U(1) Y × U(1) X . The covariant derivatives of H, Φ, and Ψ are given by where g X is the gauge coupling of U(1) X . The χ field, which is the neutral component of Ψ, is the DM candidate. After H and Φ acquire their VEVs, defined as the SU(2) L × U(1) Y × U(1) X gauge symmetries are spontaneously broken and both the Z µ and X µ boson become massive. The masses terms of the neutral gauge bosons are (3.5) The mass matrix of the gauge field (Z µ , X µ ) can be read off as follows, (3.6) , there is only one diagram whose mediator involves a mixing between the V µ and Z µ fields.
It can be diagonalized by an orthogonal transformation U as
and we can define GeV as the SM VEV of the Higgs field. It The gauge interactions of the DM candidate χ are given by The diagram of χ-f (SM fermion) scattering is shown in figure 2(a) , and its corresponding amplitude can be computed as In the limit q µ = ( Figure 3. The leading diagram of the χ − f scattering when there is a generic kinetic mixing term for the gauge fields. which implies a suppressed direct detection signal. The cancellation mechanism in this case can also be proved by the second method we used for the Higgs-portal model. Note that the structures of the gauge interaction and the mass terms are similar to the Higgs-portal model. We can define a vector V µ = (gZ µ /(2c W ) + g X X µ )/g X , and rewrite the Lagrangian in terms of Z µ and V µ as The second line of eq. (3.11) is a kinetic mixing term for V µ and Z µ , which can be treated as an interacting vertex in terms of the momentum of V µ . Since the standard model fermions do not couple to the V µ field, while the dark matter χ only couples to V µ , their scattering can only be induced by the kinetic mixing vertex (see the diagram shown in figure 2(b)), which vanishes in the zero-momentum transfer limit. 7 So far, the cancellation mechanism seems to work well for the vector-portal model. However, the cancellation is violated if the model includes a generic kinetic mixing between the B µ and the X µ fields in the original Lagrangian, i.e., where s ≡ sin is a parameter characterizing the kinetic mixing. When we rewrite the Lagrangian using V µ , this term leads to a kinetic mixing between the V µ and the photon field A µ :
JHEP01(2022)117
which can induce a new diagram in the χ-quark scattering mediated by the photon. See the diagram shown in figure 3. The amplitude can be computed as Since the photon is massless, it has a pole at q 2 = t = 0 and therefore leads to a nonvanishing amplitude when taking q µ → 0. Although the amplitude is suppressed by the small kinetic mixing parameter s , it is still too large to accommodate the direct detection measurement since the process is mediated by the vector boson V µ whose mass is smaller than the weak scale. The lightness of V µ is due to the fact that gg X v 2 φ should be much smaller than g 2 v 2 otherwise the ρ parameter will significantly deviate from 1. We use the fact that v φ ≤ v and assume that g X is not too large, so that m V ∼ g X v φ should be smaller than m Z . The χ-proton cross section corresponding to figure 3 process is Comparing to the current direct detection bound, σ χN ∼ 4×10 −46 cm 2 , for m χ ∼ 1 TeV [4], the kinetic mixing must be fine-tuned such that (s /g X ) 10 −4 , in order to avoid the stringent direct detection bound. It is unnatural to have such a small kinetic mixing because, in principle, this term can be generated by Ψ and Φ loops (see figure 4). It is easy to estimate the 1-loop correction of the kinetic mixing as If we consider that in some higher energy scale µ ∼ Λ U V , there is a UV completion of the model such that s vanishes, then in the low energy regime the kinetic mixing parameter is generated as If the logarithm is of order unity, the order of magnitude for |s /g X | is 10 −3 ∼ 10 −2 , which is much larger than its upper bound implied by the direct detection data. On the other hand, if there is a generic kinetic mixing parameter, s (0) , and we require a (s IR /g X ) 10 −4 in low energy scale, (s (0) /g X ) should be fine-tuned to cancel out the loops-induced contribution, which is at least an order of magnitude larger than (s IR /g X ).
In conclusion, if we want this model to avoid the stringent direct detection constraint, we need to tolerate some fine-tunings of the kinetic mixing parameter.
The SU(2) L × U(1) Y × U(1) B−L × U(1) X model
If we look at the argument for why the cross section suppression fails in the previous model, the main subtlety is that the VEV v φ is constrained to be small since it serves as a part of the SU(2) L × U(1) Y symmetry breaking. This inspires us to find other vectorportal models that suffer from stringent direct detection bound, and to consider whether the cancellation mechanism is able to work in these cases. A potential candidate is the U(1) B−L extension of SM [36][37][38][39][40]. In this case, all SM fermions are assigned U(1) B−L charges equal to their baryon or lepton numbers. Three right-handed neutrinos are added to ensure the gauge symmetries are free from anomalies, which as bonus, generate the neutrino masses at the same time. We also introduce a Dirac fermionic DM, χ, which is charged under the U(1) B−L . The gauge boson of the U(1) B−L model, Z µ , can be a mediator generating the DM-quark scattering processes. Since these processes are vector mediated, the cross sections will be large and, thus, direct detection will place a stringent bound on the mass scale of Z µ . Using the results from ref. [41], the DM-nucleon scattering cross section, mediated by Z µ , is where n is the B-L charge of χ. For a DM mass of around 1 TeV, current bounds from XENON1T [2], LUX [3], and PandaX-4T [4] are σ SI χN 10 −45 cm 2 , which implies m Z /( √ ng B−L ) 20 TeV. For a comparison, the LEP bound on m Z /g B−L is about 6.9 TeV [42][43][44], while the bound from LHC run-2 [45,46] is about m Z /g B−L 20 TeV (10 TeV) for m Z = 4 TeV (5 TeV) [44,47], respectively. If the B-L charge of the dark matter is n ∼ O(1), the direct detection constraints can be stronger than the ones from current collider experiments. Moreover, if we consider the relic abundance of the DM, which can be estimated as [41] Ω χ h 2 ≈ 2.14 × 10 9 GeV −1 (3.19) then, using the bound m Z /( √ ng B−L ) 20 TeV from direct detection, we find that (3.20)
JHEP01(2022)117
To be consistent with the current cosmological observation [1], the DM mass must be very close to the resonance (2m χ ≈ m Z ), which is unnatural if this must occur simply as a coincidence.
The situation is very different if we consider that there is an extra U(1) X gauge symmetry and the cancellation mechanism is applied. The gauge symmetry is now SU(2) L × U(1) Y × U(1) B−L × U(1) X . A Dirac fermion in the representation χ ∼ (1, 0, n χ n φ , n χ ) is introduced to be the DM candidate, while two complex scalar fields in representation Φ 1 ∼ (1, 0, 1, 0), Φ 2 ∼ (1, 0, n φ , 1) are also introduced to break the gauge symmetries. The Lagrangian is now given by (3.21) where the covariant derivatives are defined as, Note that the charges of χ and Φ 2 are chosen such that their couplings to the gauge fields are the same up to a factor n χ . It is worth emphasizing that this structure is one of the conditions for the cancellation mechanism. Once Φ 1 and Φ 2 acquire non-zero VEV, both U(1) B−L and U(1) X are spontaneously broken, and thus the gauge fields C µ and X µ become massive. The mass terms and the gauge interactions are given by Now, as in the previous model, we can define a vector field, V µ = X µ + (n φ g B−L /g X )C µ , and rewrite the Lagrangian in terms of V µ and C µ as Since the DM χ only couples to V µ , while SM fermions only couple to W a µ , B µ , and C µ , the only way for the dark sector to communicate with the SM is through the kinetic mixing between V µ and C µ , which vanishes in the zero-momentum transfer limit. On the other hand, the annihilation cross section of χ is not suppressed, since it involves the s-channel processes and the total energy of the incoming dark matter pairs can be comparable to the vector masses. Now the observed relic abundance of DM can be satisfied without assuming that the mass of χ is near the resonance, because m Z /( √ n χ n φ g B−L ) is no longer constrained by direct detection. Since the present work concerns the cancellation mechanism, we leave a detailed discussion of the phenomenologies of this model for future research. Figure 5. The 2-loop corrections for the B µν X µν term.
JHEP01(2022)117
Finally, we note that in principle, we should also include generic kinetic mixing terms among the B µ , C µ and X µ fields. The kinetic mixing between C µ and X µ does not violate the cancellation mechanism because it only changes the coefficient of the kinetic terms for C µ and V µ , which are unimportant for the direct detection. The C µν B µν mixing does not violate the cancellation, to the leading order, since the DM only couples to V µ . The kinetic mixing term B µν X µν can lead to a non-vanishing χ-nucleon scattering cross section even in the limit q µ → 0. However, if we assume that the magnitudes of the kinetic mixing terms are of the same order as their leading loops corrections, the B µν X µν term can be naturally small since its leading corrections come from 2-loop diagrams, as shown in figure 5 . Even in other scenarios, in which some SM fields are charged under the U(1) X symmetry, 8 and one-loop corrections for B µν X µν can be generated, the direct detection constraint can still be satisfied, since a mixing with s ∼ 10 −3 is small enough for a TeV-scale V µ mediator.
Conclusion
In this work, we discussed two methods for proving the cancellation mechanism for the Higgs-portal DM-quark scattering in the pNGB dark matter model. In the first proof, we used the non-linear representation of the complex singlet S, where the phase mode plays the role of the pNGB DM. We showed that the trilinear coupling between the onshell pNGB DM χ and the radial mode s is proportional to the momentum-squared of s field, which vanishes in the limit of zero-momentum transfer. Since the χ field can only communicate with the quarks through the mixing between s and the Higgs field h, the amplitude of the DM-quark scattering is suppressed by a small t variable. We can easily generalize the model to inlcude the 2HDM or NHDM extension since all Higgs doublets can only couple to the pNGB χ through a mixing with the radial mode s. In addition, based on the non-linear representation we can easily generalize the model to the softly broken SO(N) cases and prove that the cancellation mechanism still works as well. We also found that in the SO(N) model, some soft-breaking cubic terms can be added without violating the cancellation mechanism for the DM.
In our second proof, we find that the combination of the CP-even scalars, which presents in the trilinear coupling with the pNGB DM, can be redefined as a new scalar field φ which does not couple to the SM fermions. In this picture, the masses of h and φ are diagonalized while the kinetic terms of the Higgs bosons are not canonically normalized. A
JHEP01(2022)117
kinetic mixing between φ and h also appears which is the only portal connecting the DM field to the SM fermions. Since the Higgs bosons only behave as mediators of the t-channel scattering, both their kinetic terms and mixing term vanish in the zero-momentum transfer limit, so that there is no communication between the dark and SM sectors to leading order.
Inspired by this second proof, we generalized the cancellation mechanism to include vector-portal models. In our first example, we considered a Dirac fermion electroweak doublet whose neutral component is the DM candidate. It is well known that the Z µ boson mediated DM-nucleon scattering cross section is too large to accommodate the current direct detection bound. To solve this problem, we introduced a new gauged U(1) X symmetry and proved that the amplitude induced by the new gauge boson X µ mediator can automatically cancel the amplitude induced by Z µ boson. However, the cancellation is violated if a generic kinetic mixing term for the gauge bosons is included. The kinetic mixing parameter also needs to be fine-tuned in order to avoid the stringent direct detection bound.
In our second example, we considered a gauged U(1) B−L × U(1) X extension model and found that the cancellation mechanism works very well in this case. The kinetic mixing term which violates the cancellation is small if we assume that it has the same order of magnitude as its quantum correction, which is at the 2-loops level. Even if this is generated by 1-loop diagram, the resulting DM-nucleon cross section can still satisfy the current experimental bound since the vector mediator in this case can be as heavy as 1 TeV. Therefore, we can plot the t-channel Feynman diagrams (see figure 1(a)) for χ + q → χ + q scattering, mediated by h 1 and h 2 , and compute the amplitude as In the zero-momentum transfer limit (t → 0), it is easy to demonstrate that the amplitude vanishes. The proof given in ref. [5] is summarized as follows Therefore, the cross section of the χ-nucleon scattering is suppressed by the very small momentum transfer of the cold dark matter. | 9,730 | 2022-01-01T00:00:00.000 | [
"Physics"
] |
Isotope effects of trapped electron modes in the presence of impurities in tokamak plasmas
The trapped electron modes (TEMs) are numerically investigated in toroidal magnetized hydrogen, deuterium and tritium plasmas, taking into account the effects of impurity ions such as carbon, oxygen, helium, tungsten and others with positive and negative density gradients with the rigorous integral eigenmode equation. The effects of impurity ions on TEMs are investigated in detail. It is shown that impurity ions have substantially-destabilizing (stabilizing) effects on TEMs in isotope plasmas for 0$ ?>Lez≡Lne/Lnz>0 ?> (<0 ?>), opposite to the case of ion temperature gradient (ITG) driven modes. Detailed analyses of the isotope mass dependence for TEM turbulences in hydrogenic isotope plasmas with and without impurities are performed. The relations between the maximum growth rate of the TEMs with respect to the poloidal wave number and the ion mass number are given in the presence of the impurity ions. The results demonstrate that the maximum growth rates scale as γmax∝Mi−0.5 ?> in pure hydrogenic plasmas. The scale depends on the sign of its density gradient and charge number when there is a second species of (impurity) ions. When impurity ions have density profiles peaking inwardly (i.e. 0$ ?>Lez≡Lne/Lnz>0 ?>), the scaling also depends on ITG parameter ηi ?>. The maximum growth rates scale as γmax∝Meff−0.5 ?> for the case without ITG (ηi=0 ?>) or the ITG parameter is positive (0$ ?>ηi>0 ?>) but the impurity ion charge number is low (Z⩽5.0 ?>). However, when 0$ ?>ηi>0 ?> and the impurity ion charge number is moderate (Z=6.0−8.0 ?>), the scaling law is found as γmax∝Meff−1.0 ?>. Here, Z is impurity ion charge number, and the effective mass number, Meff=(1−fz)Mi+fzMz ?>, with Mi ?> and MZ ?> being the mass numbers of the hydrogenic and impurity ions, respectively, and fz=Zn0z/n0e ?> being the charge concentration of impurity ions. In addition, with regard to the case of Lez<0 ?>, the maximum growth rate scaling is γmax∝Mi−0.5 ?>. The possible relations of the results with experimental observations are discussed.
The trapped electron modes (TEMs) are numerically investigated in toroidal magnetized hydrogen, deuterium and tritium plasmas, taking into account the effects of impurity ions such as carbon, oxygen, helium, tungsten and others with positive and negative density gradients with the rigorous integral eigenmode equation. The effects of impurity ions on TEMs are investigated in detail. It is shown that impurity ions have substantially-destabilizing (stabilizing) effects on TEMs in isotope plasmas for ≡ > L L L / 0 ez ne nz (< 0), opposite to the case of ion temperature gradient (ITG) driven modes. Detailed analyses of the isotope mass dependence for TEM turbulences in hydrogenic isotope plasmas with and without impurities are performed. The relations between the maximum growth rate of the TEMs with respect to the poloidal wave number and the ion mass number are given in the presence of the impurity ions. The results demonstrate that the maximum growth rates scale as γ ∝ − M i Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Introduction
The turbulence induced by drift instabilities, such as ion temper ature gradient (ITG) mode and trapped electron mode (TEM), is a potential candidate responsible for the anomalous transports in magnetic fusion plasmas [1][2][3][4]. In recent years, the drift-micro-turbulence has been intensively investigated for tokamak plasmas [5][6][7][8][9][10] and other magnetic configuration, such as reversed field pinch (RFP), plasmas [11,12]. It is also observed that hydrogenic isotope plasmas, such as deuterium or tritium plasmas exhibit certain advantages in energy confinements and other aspects in comparison with pure hydrogen plasmas. In many operational regimes the hydrogen isotopes give rise to highly different confinement behaviors. The isotope effect has been observed in many different tokamaks under different plasma conditions with a degree of confinement improvement in energy, particle, and momentum depending on plasma regimes. Bessenrodt-Weberpals et al investigated the effect of isotopic mass on plasma parameters as observed on the ASDEX tokamak and revealed that the ion mass was a substantial and robust parameter, which affected all the confinement times (energy, particle and momentum) in the whole operational window [13]. Xu et al provided the first direct experimental evidence for the importance of multi-scale physics for unraveling the isotope effect in fusion plasmas [14]. They emphasized that increasing the mass of the isotopes would weaken transport because that the characteristic step size of collisional transport and turbulent structures both decrease with the ion gyroradius ρ s , which is consistent with the results of Ramisch et al [15].
In addition, the theoretical studies of the isotopic mass dependence of transport are performed in [16][17][18]. Dong et al numerically investigated isotope scaling of ITG mode growth rate in tokamak plasmas with impurities [18], which suggested that the maximum growth rate of ITG mode scaled as − M i 0.5 and − M eff 0.5 for plasmas of pure hydrogenic ions and with the presence of impurity ions, respectively, while it was − M i 0.5 for the impurity mode, where M i , M z are the mass numbers of the main and impurity ions, respectively, and is the effective mass number of the plasmas, = f Zn n / z z e 0 0 , where Z and n z 0 are the charge number and density of the impurity ions, respectively, while n e 0 is the electron density. An adiabatic electron response is assumed in the work of Dong et al [18]. In fact, it has been well accepted that trapped electrons (TEs) play an essential role in plasma confinement and their contributions have to be taken into account in drift instability studies [19][20][21]. Studying the TEMs in hydrogen, deuterium and tritium plasma in detail and analyzing their isotope effects are of practical significance for the study of drift-wave-turbulence. Lately, many authors addressed this subject with different point of view in detail. For example, by applying an improved mixing length estimate to dissipative trapped electron mode (DTEM) turbulence, Tokar et al [22] found that a favorable scaling with isotope mass might be realized, although this model fails to demonstrate empirical scaling with other parameters as argued by Waltz et al [23]. In addition, to study the effect of primary ion species of differing charge and mass on drift wave instabilities and transport, linear and non-linear gyrokinetic simulations were carried out with GYRO code by Pusztai et al [24], which revealed the significant effects of the different electron-to-ion mass ratio on ion scales or on the deviations from pure gyro-Bohm scaling. Very recently, ITG and TEM simulations were performed with the GENE (gyrokinetic electromagnetic numerical experiment) code for the three hydrogen isotopes by Bustos et al, they discovered that an isotope effect is clearly visible for some types of ITG/TEM and TEM turbulence [25]. These works remarkably push forward the studies of the isotope effects for drift wave turbulences.
It also needs to be noted that there are carbon (C), tungsten (W), oxygen (O) and other impurities deposited in the plasmas in tokamak devices where divertors are C or W-coated plasma facing components (PFCs), as well as limiters are used [26]. For instance, the tritium inventory in ITER is expected to be dominated by co-deposition with carbon. A strong degradation of the confinement is found for helium concentrations of about 5-10% in the main chamber [27]. Therefore, as was done in [18], impurities, in particular, carbon, helium (He), oxygen and tungsten ions, must be considered in the studies in order to make the results comparable with the experimental observations.
In this paper, we aim to study the characteristics and isotope effects of TEMs in the presence of impurities numerically and to attempt to conclude a systematic result. We focus on the collisionless trapped electron modes (CTEMs) because these modes are among the most plausible candidates for turbulent transports under reactor conditions. For this purpose, we deal with a model of collisionless toroidal plasmas which include TEM and the effect of a second ion species. This work is based on the developed comprehensive gyrokinetic dispersion equation [28] used for the study of low frequency drift-like instabilities, which now has been extended to include the contribution of impurities and the TE dynamics [29][30][31]. Hydrogen, deuterium and tritium are chosen as the work gases and the second ion species such as C, He, O, W are the impurities. The TEMs in the plasmas with different parameters are investigated in detail. As a result, we found that the isotope effects of TEM are more complicated than those of ITG mode. The reasons for such differences are discussed.
The remainder of this paper is organized as follows. In section 2, we present the gyrokinetic integral equations developed to include the contribution of impurity ions and the effects of the TEs. The numerical results are analyzed to study the isotope effects of TEM and the effects of impurities on TEM in section 3. Section 4 contains the conclusion and discussions.
Integral eigenvalue equations
In this work a toroidal geometry with circular cross section is considered for the tokamak magnetic configuration. We extend the gyrokinetic integral equation [30] for studying low-frequency drift modes to include impurity species and add the TE contributions. The kinetic characteristics of ions, such as Landau resonance, magnetic drift and finite Larmor radius are retained. The passing electron response is assumed to be adiabatic, and the finite Larmor radius effects of TEs are neglected. The dynamics of a low-frequency electrostatic perturbation in inhomogeneous plasmas is described by the quasineutrality condition . The form of n eT,na will be discussed later.
On the other hand, the perturbed main ion ( ) n i and impurity ion ( n z ) densities in a tokamak are given by [28,29] ( ) The non-adiabatic response h s (s = i,z) is determined by solving the following gyrokinetic equation, We finally obtain the integral eigenvalue equation with TE response and including the effects of the impurity as following: The kernel of equation (1) is Also, all the symbols have their usual meanings, i.e. L Ts is the temperature scale lengths, q is the safety factor, = s r q q r d / d is the magnetic shear. I j ( = j 0, 1) is the modified Bessel function of the order j. In addition, m u and T u ( = u i z , ) are the ion species' mass and temperature, respectively.
In equation (1), n eT,na is the non-adiabatic density response of the collisionless TEs. In the ballooning representation, when the finite gyroradius effects are neglected, it can be represented as [31][32][33] Thus, according to equations (1) and (3), the ultimate integral eigenvalue equation with TE response and including the effects of the impurity is as follows, It is noteworthy that, due to the quasineutrality condition, some parameters are not independent such as Therefore, it is not necessary to give the values of L ei and η z as primary parameters for the calculations.
Numerical results and analysis
The computer cod HD7 for solving the integral eigenmode equation which has been updated by including the multiple ion species and TE effects is employed in this work. The TEM of hydrogen, deuterium and tritium plasmas with or without the second ion species are investigated with equation (4). In the calculations, we use the following typical parameters values: unless otherwise stated in addition to the varying parameters such as η i , η e , and L ez . Note that the wave number is normalized is the hydrogen ion Larmor radius, the mode growth rate and real frequency are both normalized in the following presentation.
TEM in pure hydrogenic plasmas
As the simple and archetypal cases we investigated the TEMs in pure hydrogen (H), deuterium (D) and tritium (T) plasmas first. The normalized growth rate and real frequency versus ρ θ k H are plotted in figure 1. As presented in figure 1(a), the mode growth rates increase, reach maximum, and then decrease with the increment of the normalized wave number This is the same as the result for ITG modes in pure hydrogen, deuterium and tritium plasmas given in [18]. That is, the maximum growth rate of the TEM, just as the ITG mode, is inversely proportional to the square root of the ion mass number. This is due to the saturation of the mode growth rate depending on the mode wave number [19]. This saturation takes place at smaller wave numbers for heavier isotopes which possess larger Larmor radii, thus resulting in smaller growth rate.
With respect to the case with η η = = 1. case. Therefore, the normalized real frequencies have more behaviors, such as reaching maximum values and then decreasing, that do not appear in the case of η i = η e = 1.5.
To study the isotope effects of TEMs in realistic tokamak plasmas, it is necessary to invoke impurity modification of TEM-turbulence. By means of introducing the impurity ions into the plasma, the atomic mass numbers of the primary and impurity ions both become explicit parameters in the drift wave turbulence. One can study the impacts of impurities in realistic plasma by taking all ion species into account individually. Since the impurity density gradient essentially affects the instability characteristics, we should investigate the two cases of the ratio of the electron density scale length over the impurity density scale length = > L L L / 0 ez ne nz and < L 0 ez , which correspond to a positive and a negative impurity density gradient (inwardly and outwardly peaking impurity density profile), respectively.
TEM with impurities in the case of
In the following we introduce a second ion species to study the isotope effects of the TEM in the presence of impurities. The impurity ions with inwardly peaked density profiles ( ) are considered first. In addition, among the hydrogenic plasmas with carbon or helium impurities, the maximum growth rates of the modes are the highest and lowest in hydrogen and tritium plasmas, respectively. The maximum growth rates of the modes are in the between in deuterium plasmas.
The growth rates of the modes are higher in the H, D, and T plasmas with helium impurity than in the counterparts with carbon impurity for ρ < θ k 1 H . However, the maximum growth rate is lower in tritium plasmas with + He 2 impurity than that with + C 6 impurity for ρ > θ k 1 H , although it is the opposite in hydrogen plasmas with + He 2 and with + C 6 impurities. Meanwhile, the maximum growth rates are quite close to each other in the deuterium plasmas with helium and carbon impurities, although the former is slightly lower than the latter. Figure 2(b) shows that the normalized real frequencies of TEMs are much higher in plasmas with carbon impurities than those with helium impurities. Among them, the maximum normalized real frequency is the highest in hydrogen plasmas, while it is the lowest in tritium plasmas. For the cases with impurity ions of same or close charge numbers it is demonstrated in figure 3(a) that the unstable wave number spectrum is wider and the value of the maximum growth rate is higher when the mass number of the impurity ions is smaller. For example, for two cases with same impurity ion charge number (i.e. 2) in a hydrogen plasma, the mode growth rate with fully ionized helium ions (its mass number is 4) reaches the maximum value 0.142 at about ρ ≈ which is higher than that with + C 2 ions (whose mass number is 12), since the mode growth rate of the latter reaches the maximum value 0.105 at about ρ ≈ Comparing the normalized growth rates in the cases with + C 2 and + C 6 impurities, which are represented with the solid lines in figures 3(a) and (c), respectively, it is demonstrated that for the impurities with moderate mass numbers (e.g. carbon with the mass number 12), the higher the degree of ionization, the wider the unstable TEM wave number spectrum and the higher the value of the maximum growth rate. Also, we find from figure 3(c) that the maximum mode growth rates in hydrogen, deuterium and tritium plasmas with + O 8 ions are higher than those with + C 6 ions. This means that the charge number effect dominates the mass number effect in this case.
It is also demonstrated in figures 3(b) and (d) that the normalized real frequency increases with the normalized poloidal wave number ρ θ k H for all the cases, which implies that with moderate electron temperature gradient (η e = 1.125) and somewhat higher TE parameter (ε = 0.25), the typical TEMs, propagating in electron diamagnetic drift direction, hold in hydrogenic plasmas with impurities.
Comparing the results of figure 2(a) with 1(a) and figure 3(a) with 1(c), the important result is that in > L 0 ez case, impurity ions enhance the TEMs and mode growth rates increase. In other words, the effects of impurities on the TEMs are destabilizing, which is opposite to the case of ITG modes. One possible explanation is the dilution effect of impurity ions on hydrogenic ion effects [34]. Another reason may be that the impurity ions drift in the direction opposite to that of the electrons and, therefore, have reduced damping effect on the modes.
The isotope scaling (I).
As mentioned above, Dong et al studied the isotope scaling of ITG mode in [18] and pointed out that, for the case with pure hydrogenic plasmas and the case with impurity ions with inwardly peaked density profiles, i.e. , respectively. Comparing with the results of [18], it is demonstrated that, in pure hydrogenic plasmas, the maximum growth rate of TEMs scales the same as that of ITG modes, that is, For the case with impurities, however, the scaling law becomes complicated. In fact, as shown by the blue fitting lines in figure 4(a), above scaling hold for the case with cases we first study the case without temperature gradients. Figure 5 shows the variations of the normalized growth rates and real frequencies as functions of normalized poloidal wave number ρ θ k H in hydrogen, deuterium and tritium plasmas with fully ionized carbon impurity C +6 and partially ionized tungsten impurity The other parameters are figure 5(a) it is easy to see that the mode growth rate is the highest in hydrogen plasmas, the lowest in tritium plasmas, and the intermediate in deuterium plasmas. In addition, the stabilization effect of mass number of the impurity ions is clearly demonstrated again in these cases. The real frequencies shown in figure 5(b) indicate that the modes propagate in the electron diamagnetic drift direction and that the modes have the widest, the narrowest and the intermediate ranges of unstable poloidal wave number spectra in hydrogen, tritium, and deuterium plasmas, respectively.
The case with η η >
, 0 i e . The growth rates and real frequencies of the modes are shown in figure 6 in the presence The impurity ions are still C +6 and W +6 . We know from figure 6(a) that the modes have the highest, lowest, and intermediate maximum growth rates in hydrogen, tritium and deuterium plasmas, respectively, just as in the above case. Correspondingly, the unstable θ k spectra are the widest, the narrowest and the intermediate in hydrogen, tritium and deuterium plasmas, respectively. It is shown in figure 6(b) that the normalized mode real frequencies are always positive and increase monotonically when ρ θ k H increases, which denotes that these are pure TEMs. Finally, the stabilization effect of mass number of the impurity ions is clearly demonstrated again in these cases.
Note that in < L 0 ez case impurity ions have stabilizing effects on TEMs, and the heavier the impurity ions, the stronger the effects, which can be seen clearly by comparing figures 5 and 6 with figure 1. This is also opposite to the case of ITG modes. The possible reason is that impurities have the effect of enhancing the anomalous electron-ion energy exchange mediated by TEM when impurity density profile peaks opposite to the electron density profile. The result is the same as the isotopic scaling of impurity-driven mode in [18]. Nevertheless, there is another scaling, γ Z max eff 1.5 , for the latter (Here Z eff is the effective charge number). However, no such Z eff scaling evidences have been found in TEM case. The reason may be that TEMs in plasmas with impurity ions of outwardly peaked density profiles are not independent modes like impurity-driven modes and, therefore, have more sophisticated relationship with the effective charge numbers. More detailed study is certainly needed in this field.
Discussions
In previous sections it has been demonstrated that the TEM is rather robust (weak) and the growth rate of the mode is higher (lower) in plasmas with impurities of ) than that in pure hydrogenic plasmas. The main results about the ion mass dependence of the maximum growth rate of TEMs are summarized in table 1. , which is consistent with and then verifies the previous conclusions [19,22].
In addition, it scales as γ ∝ The isotope effect has been observed in many tokamaks [13]. The general observation is that the energy confinement time is proportional to the power of 0.5 of the ion mass number, , here M i is the averaged ion mass [35,36]. Taking into account heating conditions, the energy confinement time dependence on the ion mass number, M i , may be modified gently [37]. Therefore, it is reasonable that, in some cases, the scaling law If one assumes, just as was done in [18], that the correlation time is determined by the single harmonic with the maximum growth rate, and that the correlation length is independent of the mode width of each single harmonic and determined by equilibrium plasma parameters, under which the plasma thermal conductivity has the same scaling as the maximum growth rate of the mode does, then we are possibly able to find a possible relation between our results and the isotope scaling of τ E . It can be found that if TEM turbulence controls the energy confinement of the bulk plasma, the results deduced from the isotope scaling of the maximum growth rate of the TEMs is consistent with the observed mass scaling of the confinement time.
Conclusions
By taking into account the kinetics of TEs and including a second ion species, the upgraded gyrokinetic integral eigenmode code HD7 is applied to investigate the TEMs in hydrogen, deuterium and tritium plasmas with or without impurity ions. The characteristics of TEMs in pure hydrogenic isotope plasmas and those with impurity ions are investigated and compared. In particular, the effects of impurity ions on TEMs in isotope plasmas are investigated in detail. The results suggest that the maximum growth rates of the TEMs are the highest, the intermediate and the lowest in hydrogen, deuterium and tritium plasmas, respectively, regardless of presence or absence of impurity ions. Correspondingly, the modes have the widest, the intermediate and the narrowest unstable θ k spectra in hydrogen, deuterium and tritium plasmas, respectively, with or without impurity ions. In addition, the effects of impurities on the TEMs are substantially destabilizing in case, which can be explained with the dilution effect of impurity ions on hydrogenic ion effects. In < L 0 ez case impurity ions have stabilizing effects on TEMs, and the heavier the impurity ions, the stronger the effects. By means of introducing the impurity ions into the plasma, the atomic mass of the primary and impurity ions both become the explicit parameters in the drift wave turbulence.
On the other hand, a detailed analysis of the isotope mass dependence of TEMs is performed. The relations between the maximum growth rate of the TEMs with respect to the poloidal wave number and the ion mass number are given in the absence or presence of the impurity ions. It is demonstrated, as the first result, quantitatively speaking, that the maximum growth genic plasmas, which is consistent with the previous results [19,22]. Secondly, the scale depends on the sign of its density gradient and charge number when there is a second species of (impurity) ions. When the density gradient of the impurity ions has the same sign as that of electrons, i.e. There is evidently some modest difference on isotope scaling between ITG modes and TEMs. For > L 0 ez case, in particular, as η i is non-zero and moderate, and the impurity ion charge number is moderate, the mode maximum growth rate scales as γ ∝ − M eff 1.0 , which is in contrast with ITG case. This is not surprising and may provide a way to distinguish ITG and TEM in experiment.
We also discussed and compared the scaling and the experimental observations for plasma energy confinement time in an attempt to find a possible relation between them. It is found that there are some similar characteristics and a possible trend in their forms and this trend is consistent with the observed mass scaling of the confinement time if TEM turbulence controls the energy confinement of the bulk plasma. | 5,936 | 2016-03-11T00:00:00.000 | [
"Physics"
] |
Interactive Combustion in a Linear Array of 2 D Laminar Isolated and Triple Burner Jets
Many practical combustion systems such as residential gas burners contain dense groupings or clusters of jet flames with sufficiently small spacing between them, which causes flame interaction. The interaction effect, due in part to Oxygen deficiency in the interstitial space between the flames, causes the spreading of flames, which may merge together to form larger group flames. This interactive effect is studied analytically by revisiting the laminar isolated flame theory for 2D jets, for which similarity solutions are readily available in compressible form, and symmetrical interaction zones can be observed. Flame characteristics were studied by obtaining analytical expressions for flame specific parameters such as height and width, lift-off height and blow-off velocity, air entrainment and mixing layer growth. The theory for multiple interacting jets describes an approximate criterion for interburner spacing at which flame interaction and group flame formation are first observed. The analytical framework presented in this paper presented in this paper produced results which were compared with experimental measurements. The experimental apparatus allowed the interburner spacing to be varied from 7.87 mm to 50.8 mm, and measurements of flame height, width, lift-off height and group-flame formation under interactive modes. Images of the evolving flow field were taken and Schlieren images of the multiple 2D jets were also recorded using a digital camera.
Introduction
In order to study the stability and combustion behavior of interacting jet diffusion flames, laminar single flame stability theory must be developed for a burner and extended to include the effect of multiple burners.In this current work, a stability theory for jets is introduced and the appropriate generalized conservation equations for momentum, species, and energy for 2D compressible systems with boundary conditions are presented.The governing equations are solved to give explicit solutions for axial and radial gas velocities, flame height, maximum flame width and its axial location, amount of air entrainment, lift-off height and blow-off velocity as a function of injection Reynolds number (Re), Schmidt number (Sc), and fuel composition.We begin by summarizing relevant literature review corresponding to single-flame combustion.Literature regarding the stability characteristics (lift-off height and blow-off velocity) of single jet diffusion flames is quite extensive.Some of the relevant endeavors are summarized here.Van quickenborne and Van Tiggelen [1] studied the stabilization of lifted, turbulent, diffusion flames and measured the gas composition, gas flow velocity, intensity, and Eulerian scale of turbulence for a free jet of Methane issuing from circular burners with inner diameters of 1.33 mm, 1.8 mm, and 2.4 mm.In particular, they found that the base of the lifted flame anchors in a region corresponding to the formation of a stoichiometric mixture, where, turbulent burning velocity equals the flow velocity.They also noted that blow-off in jet diffusion flames is not an extinction phenomenon since the flame can be maintained at various heights, provided a permanent ignition source.Kalghatgi [2] theorized that the blow-off velocity is a function of the laminar flame speed and height of the stoichiometric contour and found via experimentation a "universal" nondimensional formula to describe the blow-out stability limit of gaseous jet diffusion flames in still air.Chung and Lee [3] experimentally studied the characteristics of laminar lifted flames stabilized in a nonpremixed jet.The jet was released from nozzles made of Quartz tubing, with inner diameters of 0.164, 0.195, and 0.247 mm at the nozzle exit.They found that the stabilization mechanisms of lifted flames could be interpreted using laminar cold theory based on the premise that the lift-off height is much greater than the preheat zone.Based on this assumption, they derived an expression for liftoff height as a function of the flow rate, burner diameter and Sc.They also correctly identified Sc as a key player in the stability mechanism for circular jets, noting that the lift-off height will increase with an increase in flow-rate for Sc < 0.5 and Sc > 1 but will decrease for 0.5 < Sc < 1.
The above studies focused on circular burners.However, 2D jets where (d i B as shown in Figure 1(a)) are more suitable for interaction studies since the radius of influence around a central jet can be made uniform (Figure 1(b)).As shown in the figure, at a given height above the burner, the interaction zone will be uniform, meaning that two burners placed on either side of the central flame will be the only influences (end effects can be neglected since d i B).Given their physical considerations, it is believed that a more accurate study of interactive processes can be conducted with 2D burners.An added advantage to analytically studying 2D burners is that variable transformation is available allowing similarity solutions for the compressible form of the governing equations.Similarity solutions for circular jets are feasible under the assumption of incompressibility, which is clearly invalid for combustion processes.From the above discussion, a general idea of literature on single-flame theory and experiments can be inferred.Extending the discussion to multiple flame interaction is presented in the next few paragraphs.
In light of the numerous practical applications of multiple burner arrays in industry and elsewhere, fundamental information regarding the factors governing flame interaction is required.The separation distance between individual burners in many combustion systems is often small enough to cause the flames to interact with one another resulting in a change of flame structure (such as greater flame length and width) and a change in stability characteristics of the flame (such as higher blow-off velocity).Multiple flame interaction also exhibits reduced NO x production [4] under certain conditions.As such, one of the main objectives of combustor design is to maximize flame stability while simultaneously minimizing thermal NO x production.Towards this objective, cluster burners, wherein multiple flames (more than 94 swirl burners in a typical combustor) are being investigated in the gas turbine industry.Several important factors which may influence the interactive process are (a) number of burners, (b) spacing between the burners, (c) fuel flow rate through the burners, (d) properties of the fuel, (e) array geometry (such as linear, square, triangle arrangement, etc.), and (f) exit plane geometry of the burners (circular, planar, elliptical, etc.).Flame interaction, therefore, is believed to a complicated phenomenon.In order to keep the physics in tractable form, this study will concentrate only on a linear array of three laminar 2D burners fired with gaseous fuels.
Whereas literature regarding isolated jet flames is quite extensive (as discussed earlier), literature regarding stability characteristics of interacting multiple flames is scarce.One of the earliest studies of multiple flames was conducted by Putnam and Speich [5] who presented a model for studying mass fires using buoyancy controlled, turbulent, multiple jets of gaseous fuels.Each jet was represented as a point source and the important parameters which govern the interactive process were identified to be (a) multipleflame height to single-flame height ratio, (b) number of jets, (c) source-shape factor, and (d) flame spacing to fuelflow rate ratio.They also observed that in some cases of specific flowrates and array geometries, the single flames merged to form larger flames.Lenze et al. [6] measured the axial concentrations of H 2 and CO for an array of three and five turbulent circular burners fired with both town gas and natural gas.They observed a relation between the visible flame heights in multiple jet arrangements and the axial location where CO concentration approached zero.In addition, they observed a simple relationship between the increase in height of turbulent interacting flames to the height of a single, turbulent isolated flame.However, their studies were limited to an attached turbulent flame issuing from circular burners and made no attempt to fundamentally understand the flame stability processes under interactive modes.A more comprehensive study of the interactive process was conducted by Menon and Gollahalli [7].They measured visible flame height, merging length, blow-off velocity, temperature profiles, and concentration levels of O 2 , CO, and NO as a function of separation distance for lifted flames issuing from circular jets fired with Propane (inner jet diameters of 1, 2, 3, 3.5, and 5 mm).They found that the interaction of multiple jets increased the flame length, increased the blow off velocity, decreased the peak temperature levels, increased the CO levels and decreased the NO levels.However, the study was limited to circular burners which are not ideal for interactive studies as previously mentioned.
Objectives and Outline
The primary objective of the current work is to describe the physics of the interactive processes between multiple 2D jets.Particular interest will be paid to the dependence of flame geometry (flame height, flame width, mixing layer growth, etc.) and stability characteristics (lift-off height and blowoff velocity) on interburner spacing, Re and fuel composition.By obtaining similarity solutions for the governing equations for a single 2D jet in compressible form, analytical expressions for flame height and width, lift-off height and blow-off velocity, air entrainment, and mixing layer growth as a function of injection Re and fuel composition can be obtained.A discussion on the importance of the Sc on the lift-off height calculations will be attempted.By expanding single flame theory to include multiple burner effects, one can gain an insight into the various mechanisms of interaction between individual flames.Comparisons between theoretical predictions and experimental measurements of flame height, maximum flame width, lift-off height, blowoff velocity (for single burner), and interburner spacing (where group flames are formed) will be given.Overall, this paper aims to understand the physics of multiple flames, provide approximate analytical solutions for basic burner configurations, and validate those using experimental data.This valuable knowledge can aid in the development of a theory for turbulent multiple flames and even optimization of burner configurations for specific applications.
Therefore, the current work is organized to present analytical models for isolated and multiple burner jets followed by experimental results for comparison with the models.A review of the conservation equations for an isolated 2D jet in compressible form, a presentation of flame stability theory in Section 3, and a summary of the relevant results in tabular form for an isolated 2D jet are presented in Section 4. Thereafter, theory for single burners is extended to multiple 2D jets (with few approximations) in Section 5. A criterion for predicting the interburner spacing at which (i) flame interaction and (ii) group flame formation begin is also presented, and modes of interaction for a linear array of 2D jets are identified.In order to validate the analytical expressions obtained from the isolated flame theory, a simple apparatus consisting of an enclosed isolated 2D jet was constructed.Experimentally measured data on flame height, maximum flame width and its axial location, lift-off height, and blow-off velocity for the isolated jet is then presented for comparison in Section 6.Thereafter, the experimental apparatus was modified to include three, side-by-side 2D jets of orifice size 0.8 mm × 6.35 mm.Details of the experimental apparatus and results are presented in the following Section 7 for multiple burner jets.Measurements of flame height, blow-off velocity, and group-flame formation under interactive modes, using a digital camera are presented.Schlieren images of the multiple 2D jets are also provided.Another set of experimental data corresponding to multiple jets is then presented for comparison with the analytical results.Conclusions drawn from both analytical and experimental studies are given in Section 8.
Analytical Model Formulation for
Single Flame Since the fuel mass fraction at the center of the jet is high and the fuel mass fraction in the far field is low, the mixture is only ignitable within a narrow range where the fuel and air are in combustible proportions.Here H is the flame height.
An expanded view of this ignitable region is shown in Figure 2(b).For y < y R , the mixture is too fuel rich to ignite and for y > y L , the mixture is too lean to ignite.At some distance y st the mixture will be in stoichiometric proportions.Given these restrictions on the flammability limits, if the mixture is ignited within the region bound by TRC (rich limit) and NFB (lean limit) as shown in Figure 2(b), the flame can propagate only within the narrow region δ, which will be called the combustible mixture tube.It is important to emphasize that the flame propagation for the fuel fired laminar jet is very different from the flame propagation for the premixed Bunsen burner flame.For this work, it was assumed that the combustible mixture tube was thin and could be represented by the position in space where the fuel and air are in exact stoichiometric proportions (dashed line MGKD in Figure 2 Four possible scenarios exist as shown in Figure 3. First, the stoichiometric contour can lie "inside" the flame speed contour (Figure 3(a)).Secondly, the stoichiometric contour y
Air entrainment
Air entrainment (a) Profiles of axial velocity, fuel, and oxygen mass fraction for a 2D jet under mixing conditions.A qualitative illustration of the mass fraction of oxygen and fuel is shown can lie "outside" the flame speed contour (Figure 3(b)).Thirdly and fourthly, the contours can intersect at some distance away from the burner (Figures 3(c) and 3(d)).Consider the first option (Figure 3(a)).If the stoichiometric contour lays "inside" the flame speed contour, then at every point along the stoichiometric contour (line MGD in Figure 3(a)) the axial gas velocity will be higher than the flame speed (since momentum diffuses in the radial direction).If an ignition source were provided at M, the flame would attempt to propagate along the stoichiometric contour towards the burner at a velocity equal to the laminar flame speed S.
But the axial gas velocity is higher than the flame speed (v x > S) along the stoichiometric contour and therefore, the flame moves away downstream from the burner.As it moves downstream, the axial gas velocity decreases but is still higher than the flame speed (the condition for finding an anchoring position).However, this must happens beyond the tip of the stoichiometric contour where the mixture is no longer combustible.Given this discussion, the first option will result in blow-off.Consider the second possibility Figure 3(b)), where the stoichiometric contour lies "outside" the flame speed contour.In such a configuration, the magnitude of the axial gas velocity is less than the flame speed (v x < S) along the stoichiometric contour.If an ignition source were provided at M, the flame would propagate towards the burner since the axial gas velocity along the stoichiometric contour is now less than the flame speed (v x < S).This leads to a nozzle attached flame anchored at the burner base.The third possibility (Figure 3(c)) where the stoichiometric contour and the flame speed contour intersect gives the anchoring position for the flame (v x = S and Y O2 /Y F = ν O2 at the intersection point).Further downstream from the intersection point, the stoichiometric contour lies "outside" the flame speed contour which is a stable profile as shown in Figure 3(b).From the intersection point toward the burner base, the stoichiometric contour lies on the "inside" of the flame speed contour which is an unstable profile as shown in Figure 3(a).Therefore, the flame can only exist from the intersection point K in Figure 3(c).to the tip of the stoichiometric contour.Hence, L * gives the lift-off height of the flame.Now consider the fourth possibility (Figure 3(d)): If an ignition source were provided at M, the flame would attempt to propagate at S. However, the axial gas velocity at M is much higher than the flame velocity and hence the mixture cannot be ignited at M. If an ignition source were provided at D, the axial gas velocity would less than S and as such the flame would propagates towards the burner base.The flame is therefore only ignitable up to position K and hence, L * I (limited ignition) gives the ignitable height from the burner rim.To summarize, the fuel mixture is not ignitable for L * I < x < H * .As will be explained in the upcoming discussion, Schmidt number plays a key role in defining the various possibilities shown in Figures 3(a) to 3(b).If Sc > 1, lifted flames are predicted (Figure 3(c)).If Sc < 1, partial flame or ignitable heights are predicted (Figure 3(d)).If Sc = 1, no intersection between the contours is possible and the predicted flames are either (i) stable and anchored at the burner base (Figure 3(b)) or (ii) blown off (Figure 3(a)).Explicit expressions for plotting the stoichiometric contour and the flame speed contours shown in Figures 3(a) to 3(d) can be found by solving the governing differential equations for the single 2D jet.
Discussion of Single-Flame Analytical Model
4.1.Summary of Solutions.For the isolated 2D jet, the equations of mass, momentum, energy, and species were transformed from compressible form into incompressible form (Annamalai and Sibulkin, [8]), normalized, converted into ordinary differential equation using an appropriate similarity variable, and solved with the appropriate boundary conditions to give solutions for the axial and radial velocity (v x and v y ), the species concentrations (Y F , Y O2 , etc.), the flame height (H), the lift-off height of the flame (L), and the blow-off velocity (v blow ).For the complete, step by step, details of the derivations for 2D jet and circular jets including the use of stretched variables in deriving incompressible forms of conservation equations, conversion to ordinary differential equation using similarity variables, solution of ordinary differential equation for velocity, species, and so forth refer to Annamalai et al. [9,10].The solutions to the governing differential equations are summarized in tabular form in Table 1.For the sake of compactness, details of derivation are omitted.All solutions are presented in absence of buoyancy forces.If buoyancy forces are included, M * and J * will be larger than the values listed.In Table 1, rows 9, 10, and 16 summarize the solutions of axial velocity, lateral velocity, and species/temperature profiles for mixing problem or Shvab-Zeldovich variable profiles for combustion problems.For lift-off and blow-off analyses, the Shvab-Zeldovich formulation is unnecessary.However, for defining the criteria for interaction, solution to the Shvab-Zeldovich formulation is needed.
For variable density see Fay [12].Details of the derivation and summary of results for circular jet can be found in Tillman [13].
Solution for Stoichiometric Contour.
In order to determine the lift-off height and blow-off velocity of 2D jets, the mixing problem must be solved.By setting b = Y F and Y O2 in the result for row 16 in Table 1, solutions for Y F (y * , x * ) and Y O2 (y * ,x * ) can be obtained.Using these results, the stoichiometric contour y * st (x * ) where can be obtained and at any specified x * , y * st (x * ).If one uses Shvab-Zeldovich formulation for combustion problem under finite kinetics, then at stoichiometric surface Y F / = 0 and Y O2 / = 0 but β F-O2 = 0.At thin limit Y F = 0 and Y O2 = 0 and still β F-O2 = 0. Using the solution given in row 16 and (1), y * st (x * ) will be the same for these additional two cases.For the stoichiometric contour (or flame contour), where φ f is given by where and ν O2 = stoichiometric ratio (ν O2 = 4 for pure CH 4 issuing into air).The flame profile in real coordinates will be exactly same as the stoichiometric contour only if properties remain constant for both mixing and combustion problems.Equations (3a) and (3b) can be rewritten as for pure fuel with Y F,i = 1, ), where M is the axial momentum flux rate at the inlet For an ideal gas with a constant molecular weight Momentum equation in η and other solutions Used for estimating the mass flow within P/2 for multiple burners Height at which two adjacent mixing layers intersect x * = 0.009856 Ratio of flame height and max width (H * /y * f ,max ) Flame angle with axis, θ , at flame tip and location x * (Figure 4(b)) The (A : I) represents the stoichiometric air to injected gas (fuel + inerts) ratio.If y * st is set to zero, then the height of stoichiometric contour H * st = H * f can be determined as shown in row 17.Typical stoichiometric contour is shown as dashed line in Figure 5.
Velocity Contour. Using the solution given in row 9 for axial velocity v *
x , one can plot velocity contours by setting v * x = 0.55-0.79as shown in solid lines in Figure 4(a).In this particular plot, H * st = 10 and Re i = 2.77.Note that the centerline velocity decreases with x * 1/3 as shown in row 9.If v i = 50 cm/s, then v * x = 0.69 represents the velocity contour v x = 35 cm/s (KAB in Figure 4(a)) which is same as flame velocity for stoichiometric Methane-air mixture.Hence the intersection of this velocity contour with stoichiometric contour at point K represents the anchoring position of the flame or lift-off height L * .
The solution for v x /v x,i along the stoichiometric contour can also be determined.However, of greater interest is the velocity contour y * v (x * ) where v x = flame velocity, S, given in row 9, can be plotted by solving for ξ and eventually y * using row 6.The resulting expression is given below: (5) Equation ( 5) gives the velocity contour at which the axial gas velocity is equal to the laminar flame speed.Here afterwards, such a contour will be called flame speed contour.By setting y * v = 0, H * v can be determined from (5) as shown in row 17.
Reynolds Number Dependence on Stoichiometric and
Flame Speed Contour Growth.The flame speed and stoichiometric contour growth, for a fixed Sc are qualitatively ), then the stoichiometric contour (dashed line) will lie on the "outside" of the flame speed contour and stable flame is formed since v x < S everywhere at low Re.At a critical Re, the two contours will lie directly on top of one another (Figure 5(A)c).If Re is increased further then the stoichiometric contour will lie "inside" the flame speed contour and blowoff will be achieved (Figures 5(A)d and 5(A)e).Given this discussion, blow-off is assumed to occur when the two contours lie directly on top of one another as shown in Figure 5(A)c.Notice from Figure 5(A) that in each frame, the stoichiometric contour and the flame speed contour are proportional to one another (contours never intersect).Since the contours never intersect, flames for Sc = 1 fuels are either stable or anchored at the burner base or blown off.Lifted flames are not possible for fuels with Sc = 1.Also notice that even though momentum and mass are diffusing equally, the rate of growth for each contour (H * stoich /H * flame ) is different for a given Re (this can be seen from row 17).
If Sc / = 1, mass and momentum diffuse at different rates, the two contours will intersect at some distance, x * away from the burner given that the injection velocity is sufficiently high.Figures 5(B ) the stoichiometric contour lies on the "outside" of the flame speed contour (stable flame).As the injection Re is increased further, the two contours intersect as some distance above the burner L * .L * is the lift-off height.In Figure 5(B)c, the flame speed contour lies "outside" of the stoichiometric contour for x * < L * since v x > S everywhere for y * < y * f .Therefore, the flame cannot exist between the burner exit and the intersection point.However, for x * > L * the contours switch positions and the flame speed contour lies "inside" the stoichiometric contour, a stable configuration.Therefore, the intersection point corresponds to the lift-off height for Sc > 1.
For Sc < 1, the following interesting scenario occurs.At a low velocity, the flame speed contour lies "inside" the stoichiometric contour giving rise to a stable flame (Figures 5(C)a and 5(C)b) meaning the mixture could be ignited and stabilized at any x * along the stoichiometric contour from the burner base to the tip of the stoichiometric contour (H * ).At a certain velocity (v crit ) the flame speed contour and the stoichiometric contours intersect at their respective tips (Figure 5(C)c) marking the last position where the mixture is ignitable along the stoichiometric contour within 0< x * < H * .Beyond the critical velocity, the flame speed and stoichiometric contours intersect at L * as shown in Figure 5(C)d.If Sc < 1, for x * < L * , the flame speed contour lies on the "inside" of the stoichiometric contour, a stable configuration.For x * > L * , the flame speed contour lies on the "outside" of the stoichiometric contour, an unstable configuration.Therefore, for a Sc < 1 mixture with v x,i greater than the critical velocity, the mixture can be ignited and stabilized along the stoichiometric contour only for x * < L * .The flame cannot propagate beyond x * > L * , if the laminar flame speed (S) is assumed to remain constant, and as such L * will be called the ignitable height or the partial flame height.It is important to state that partial flames (which occur when Sc < 1 and v x,i > v x,i,crit for S = constant) have not been observed experimentally for CH 4 flames (Sc = 0.74) beyond the critical velocity.Tests conducted by Tillman et al. [14] on a linear array of three stainless steel circular burners fired with pure CH 4 showed that neither lift-off height nor partial flame height was observed before blowoff for the CH 4 flames.However, it must be stated that in the case of combustion, S will be strongly affected by temperature and weakly affected by burnt gas composition.
As such, S will probably increase as the flame temperature increases.Hence, it is possible that during combustion, the flame can be stabilized beyond L * for v x,i > v x,i,crit for Sc < 1 fuels.
From the above discussion, an explicit solution can be found for the lift-off height (or unignitable height) if Sc > 1 or partial flame height (or ignitable height) if Sc < 1.The solution comes from finding the intersection of the stoichiometric contour given in (2) and the flame speed contour given in (5) and can be found below: The flame height is nondimensionalized by the diameter (d i ) of the planar jet H * f = H * st is given by the maximum height of the stoichiometric contour, found by setting ξ = ξ * st = 0 in row 16, v x,i = the average injection velocity, S = the laminar flame speed, C is given in row16, φ f is given in (3a), M * is given in row 2, and J * is given in row 3.
The Sc dependence of (6a) and (6b) is shown in Figure 6 for N 2 -diluted mixtures of C 3 H 8 (Sc ≈ 1.3) and N 2 -diluted mixtures of CH 4 (Sc ≈ 0.74), respectively.Note that (6) does not predict lifted flames for Sc = 1.As previously explained, the flame speed and stoichiometric contours do not intersect because mass and momentum diffuse equally.Also note from (6a) and Figure 6 that if Sc > 1, increasing v x,i results in increasing the L * /H * ratio.Since x,i and hence at Sc = 1.3, L * ∝ v 14 x,i .This shows that lift-off height is a strong function of injection velocity, hence, as the injection velocity is increased, the liftoff height increases to a large value rapidly.However, if the Sc < 1, then increasing v x,i results in decreasing the L * ig /H * f ratio or the ignitable height decreases (i.e., the mixture can only be ignited closer and closer to the burner as the velocity increases).For Sc = 0.74, L * ∝ v −7.5 x,i .The reason for the negative exponent is as follows.Sc < 1 means mass diffuses at a faster rate than momentum.Hence, the fuel diffuses more rapidly than momentum in the radial direction near the burner.A combustible mixture in a low-velocity regime is readily formed.Despite further downstream of the burner, enough mass has diffused so that the stoichiometric contour must form at a lateral distance y closer to the axis of the jet where the axial gas velocity is much higher and hence ignition becomes more difficult.Once ignited near the burner exit whether the flame will be stabilized beyond x * = L * ig depends upon the flame velocity for partially oxidized fuel, Sc and temperature of the vitiated mixture.
The blow-off velocity can also be found from (6a).If Sc > 1, then L * /H * increases with increasing v x,i .At a critical v x,i , the intersection of the flame speed contour and the stoichiometric contour will occur at the tip of the respective contour (Figures 5(A)c, 5(B)e, and 5(C)c).This marks the last possible intersection point for the two contours and hence the last possible stable configuration for the flame.Hence, at blow-off, L * = H * , setting which in (6a) and solving for v x,i leads to This simple expression allows for the blow-off velocity to be found as a function of the Sc (C also has dependence on Sc) and fuel properties (laminar flame speed φ f etc.).When (6).The group of lines on the left correspond to diluted C 3 H 8 mixtures (Sc ≈ 1.3 and N 2 percentages of 85%, 87%, and 90%) and those on the right correspond to diluted CH 4 mixtures (Sc ≈ 0.74 and N 2 percentages of 60%, 65%, and 70%).
Sc < 1, (7) predicts the critical velocity (Figure 5(A)c) at which the ignitable height at which L * ig = H * .The ignitable height keeps decreasing with v i .The flame may or may not be stable beyond L * ig .The blow-off characteristics seem to depend upon Sc, particularly, when Sc > 1, blow-off occurs for v i > v xi due to large amount of air.
Solution for Air Entrainment: Discussion of Schmidt
Number Dependence.For application to multiple jets the growth of mixing layer and the ratio of amount air entrained by each jet to the amount of gas injected are very important since they affect oxygen concentration in the interstitial space between the burners.Setting v x /v x,i = 0.01 in row 9 from Table 1, the mixing layer y * m versus x * can be obtained as shown in row 11.The conversion of y * to real coordinate y requires decompression of y (row 4) using the temperature profile and ideal gas law for density.The mass flow of gases, per unit width, at any given x is given by ṁ y, x = 2 y 0 ρv x dy. ( Using the axial velocity solution (given in row 9), the normalized gas flow at any x within specified y is given in row 12, while the total gas flow at same x within 0 < y < ∞ is given by row 13.The total air entrained at x is Row 15a presents the solution for air entrained in normalized form as a function of x * .By setting x * = H * f in row 15a and using the relation for H * f from row 17, the air entrained at x * = H * is obtained as in row 15c.It is interesting to note that C defined in row 16 approximately represents the ratio of air entrained at x = H for any Sc to the air entrained at x = H for Sc = 1 (row 15c).As Sc → 0, C → 2/3 , ( ṁ A / ṁ i ) → (1/φ f − 1) and using (4) For a pure fuel jet, it was seen that as Sc approaches smaller values, the air entrained at x * = H * f approaches the stoichiometric amount.For all fuels with Sc = 1 (C = 1), the air entrained at x = H to the fuel injected is 50% in excess of the stoichiometric amount (row 15c).The variation of excess air percentage at x = H (tip of flame) with Sc is shown in Figure 7 for 2D and circular jets.See Tillman [13] for circular jet details.For fuels with Sc = 1 at x = H, 2D jets entrain 50% excess air while circular jets entrain 200% excess air.The mathematical results suggest that excess air percentage at x = H will not change even if fuel is diluted with inerts since H will decrease for the latter.It is seen that excess air percentage is much higher for C 3 H 8 compared to H 2 .This will affect the temperature profiles and hence thermal NO x [15].
Qualitative Discussion of Multiple-Flame Interaction.
Interactive processes within liquid drops and coal particle clouds have been dealt in earlier literature [14,16,17].
Interacting jet combustion in cross flow [18] and heat transfer characteristics of impinging multiple jets is also studied [19,20].Here, the effect of multiple laminar jets on stability is considered.If multiple single flames are packed closely together, then the flame height, flame width, and flame stability are increased due to the interaction between the burners.For a given burner geometry and combustible fuel, the increase in flame stability is generally a function of two parameters: the spacing between the burners and the average velocity (or Reynolds number) at the burner exit.
If the injection velocity is kept constant and the spacing between the burners is reduced, at a critical spacing the flames will begin to interact.Likewise, if the spacing between the burners is kept constant and the injection Reynolds number is increased, at a critical Reynolds number the flames will begin to interact.The first sign of flame interaction is an increase in flame height and an increase in flame width.This is due to a decrease in the oxygen available in the interstitial space between the flames.In other words, each adjacent flame is competing for oxygen and as the spacing is reduced to a certain level or as the injection velocity that is, fuel input rate is increased to a certain level, the available oxygen around the flame begins to decrease.When this occurs, the flames must lengthen and widen to obtain the necessary oxygen for completing the combustion, that is, width of the flame will be increased (row 18).
Based on this hypothesis, simple criteria for predicting the stages of interaction can be developed.First, consider Figure 8(a) which shows three 2D burners (W: west, C: central, and E: east) aligned in a linear array at a spacing of l 1 .The mixing layer, obtained by setting v x /v x,max = 0.01 in row 9 for each burner is given by the dashed line.Assuming symmetry in the mixing layer profiles, meaning that the injection velocity for each burner is the same and that the mixing layers themselves are unaffected by the presence of the other flames, then the mixing layers will intersect at a lateral distance of l 1 /2 at an axial distance of x int .The axial intersection point, x int , is of utmost importance.Since the air required for combustion is entrained within the mixing layer, the flame acts independently as long as x int H.In other words, if x int H, then the flame has the same amount of oxygen, entrained along the mixing layer, from x = 0 to x = H as it would if the flame were isolated and alone in ambiance.If the interburner spacing were increased to l l 1 , then x int H. Hence, as long as l ≥ l 1 each individual flame acts independently.These flames will be referred to as "isolated" flames similar to the terminology used in droplet and particle combustion literature [16,17].It should be noted that the excess air percentage drawn at x = H varies with the Schmidt number of the injected fuel and the exit plane geometry of the burner (circular or 2D) as shown in Figure 7.As shown in Figure 7, a circular jet with a Sc = 1.0 draws 200% excess air at x = H, while a 2D jet with Sc = 1.0only draws 50%.This means that in addition to the injection velocity and the interburner spacing, the Schmidt number of the fuel and the exit plane geometry of each burner also play a role in determining when interaction begins.
The mass fraction of O 2 available within the interstitial space between the flames is graphically illustrated in Figure 9 as a function of interburner spacing.As previously explained, for isolated flame behavior the O 2 (Y O2,l/2 ) is the same as the far field O 2 concentration (Y O2,∞ ) for x = 0 to x = H as shown in Figure 9(a).However, as the spacing is reduced to l 2 < l 1 (Figure 9(b)), the mixing layers intersect at x = K which is less than the flame height (H) and the flames begin to interact (Figure 9(b)) due to O 2 deficiency in the region x = K to x = D.The oxygen deficiency developing in the interstitial space (x K < x < x D ) between the flames is shown in Figure 8(b).Now, the free stream O 2 concentration in the interstitial space between the burners in the region of x = K to x = D is less than the far field oxygen concentration.Under these conditions, the flames must widen and lengthen to obtain the necessary oxygen for combustion and will be referred to as interacting "individual" flames.
As the spacing between the burners is reduced further (l < l 3 ), the amount of air entrained per burner within the interburner spacing will become insufficient to completely burn the fuel issuing from the central burner, C (Figure 8(c)).At this point, the oxygen concentration at some axial location between the flames reaches zero, and the flames must merge together forming a larger "group" flame.When a group flame forms, the flame structure near the burner (height x u in Figure 8(c)) is similar to an underventilated Burke-Schmann flame.If the spacing is decreased further, to where the tips of the each burner are touching, no flame can exist in the interstitial space and the cluster behaves like a single burner with all of the required air coming from the far field (Figure 8(d)).This type of flame is called a "sheath" flame in analogy with group combustion literature.It is important to state that the interaction hypothesis proposed is based only on the amount of oxygen available in the interstitial space between the flames.Hence, the hypothesis may require modification, particularly for sooty flames where radiation interaction occurs over larger distances.
Solution for Flame Interaction.
Based on the qualitative observations illustrated in Figure 8, an approximate criterion can now be developed to predict at which interburner spacing flame interaction and group flame formation will occur.As shown in Figure 8(a), the last possible position where single flame behavior can be observed is governed by the intersection of the mixing layers (at l = l 1 , x int = H).Therefore, it is necessary to find an equation for the mixing layer profile.The mixing layer is defined as v x = 0.01v x,max .It is assumed that v x,max always occurs at y * = 0. Therefore, from row 9: Hence along the mixing layer, From ( 13), solving for ξ at which v x = 0.01v max gives ξ = 2.993 and using the modified similarity variable ξ, as found in row 6, the mixing layer profile can be found as given in row 11: Note that y * mix = y mix /d i and the stretched variable y = y 0 ρ/ρ i dy from row 4. Hence, From (14a), (14b) it can be seen that the expansion of gases increases the thickness of mixing layer.If it is assumed that the mixing layer growth is unaffected by the presence of other jets, then the height at which the mixing layers of adjacent jets intersect is given by setting y * mix = l * /2 in (14a) which gives the axial location of the mixing layer intersection as: Given this assumption, as long as x * int > H * , the flames behave independently.Setting x * int = H * and solving for l * gives the least interburner spacing where single flame behavior will be observed.From the rearrangement of (15a), an isolated flame occurs if: where the stretched variable l * = l /d i and l = ymix 0 ρ/ρ i dy ≈ (ρ avg /ρ i )l.Expressing (15b) in terms of the physical variable y gives In other words, the expansion of gases requires the interburner spacing to be at a larger distance to form an individual flame.Equation (15c) can also be nondimensionalized by the isolated flame height H, as found in row 17.Algebraic manipulation gives the following: where Re i is the injection Reynolds number, C = Schmidt number dependent constant defined in row 16, J * comes from the conservation of species as given in row 3, and φ f from (3a).For typical values of Re i = 10, φ f = 0.05, C = 1, J * = 1, and ρ i /ρ avg ≈ 4, (15d) predicts that interaction will occur when l/H ≤ 0.957 or when the interburner spacing is approximately equal to the isolated flame height of each burner.
Solution for Group Flame Formation.
The following hypotheses are used in arriving at a criterion for group flame formation.
Isolated Flames Approximation.
Following the group combustion literature [16,17,21], a group flame is presumed to form when two flames from adjacent burners touch each other.Hence, where W max is the maximum width of flame or nondimensionalizing by the isolated flame height, H, as found in row 17: group W max is estimated from the isolated laminar flame theory as given in row 18.Notice that the maximum flame width, given in row 18, is not an explicit function of injection Reynolds number.However, at low Reynolds numbers, the Froude number (Fr ∝ v 0.5 ) will be low meaning that buoyancy forces will be dominante and M * will increase.If M * increases, then W max will decrease and l group will decrease.This leads to the conclusion that (16b) predicts that the interburner spacing at which group flames will form will not be a function of injection Reynolds number if the injection velocity is high enough to cause the flame to be momentum instead of buoyancy driven (Fr > 1).Also notice from row 18 that W max depends on the oxygen concentration within the interstitial space (given by the φ f term in row 18).As the burner spacing decreased, the oxygen concentration in the interstitial space decreases (as shown in Figure 8(b)) which causes φ f to decrease.If φ f decreases, W max will increase as shown in row 18.Since under interactive modes the flame width widens, (16a) and (16b) represent the lower bound on the interburner spacing where group flames should be formed for momentum-driven flames.
Insufficient Air in the Interstitial
Space.As a second approximation, group flames will be assumed to form when the entrained air becomes insufficient to burn the fuel issuing from the central burner (Figure 8(c)).Recall from Figure 7 that the excess air drawn by the isolated 2D jet at x * = H * is 50% when Sc = 1.Hence, at x * < H * the excess air percentage will decrease.At some point say x * = x * stoich , the excess air percentage will be zero or in other words, the amount of air entrained will be equal to the stoichiometric amount.Recall from ( 9) that the total air entrained, per unit depth, at x is given by Normalizing (17) and recalling ṁ (∞, x * )/ ṁ i from row 13, Using ( 9) and ( 18), the gas entrained within l * /2 for each single burner is given by Setting ṁ a / ṁ i = (A : F) stoich /Y F,i = (A : I) stoich in ( 19) leads to a simple criterion which can be used to predict the spacing at which group flames will form in an array of 2D burners.A group flame will form when Rearranging (20), the spacing l * that leads to group flame formation can be found: * < (A : Equation (21a) expressed in terms of physical variables is given by Non-dimensionalizing equation (21b) by the isolated flame height of each burner, H, as found in row 17, gives the following:
Experimental Results for Isolated Burner Jets
6.1.Experimental Apparatus.The experimental apparatus shown in Figure 10 consists of an open-ended, rectangular enclosure in which a stainless steel 2D burner was mounted.The sides of the apparatus were screened with a fine mesh to dampen outside disturbances.The 2D burner tested had a cross section of 0.8 mm by 6.35 mm and a length of 610 mm to allow for a fully developed velocity profile at the burner exit.The burner was fired with CH 4 or C 3 H 8 (usually diluted with air or N 2 ) supplied from a pressurized tank.The flow was regulated by a flowmeter with a maximum scale of 6.5 lpm of air.Measurements of flame height, flame width and its axial location were made using a CID 2250 digital camera with a 1/30 second sampling rate and a resolution of 191 pixels/in.
Experimental Measurements versus
Theoretical Predictions
Ratio of Maximum Flame Width to Visible Flame
Height. Figure 11 shows the ratio of the maximum flame width to the visible flame height as a function of injection Re i for a single 2D jet fired with 100% CH 4 .It is to be noted that parameter ratio W/His used for plotting to reduce property sensitivity.The theoretical predictions for the maximum flame width to visible flame height ratio (y f ,max * /H * ) are specified in row 19.Note, however, that the radial coordinate (y * = y /d i ) is a compressed coordinate and as such the transformation given in row 4 is needed to evaluate the actual radial coordinate (y).To evaluate the integral in row 4, the ratio ρ/ρ i must be known.If the combustion gases are assumed to be ideal with a constant molecular weight, then ρ/ρ i = T i /T.Two possibilities will be considered.First, the gases will be assumed incompressible (ρ/ρ i = 1 and y = y).This row is given by the dashed line in Figure 11.Secondly, an average density ratio will be assumed.For this row, ρ avg /ρ = T i /T avg ≈ 4 which impliesy = 4y .This row is given by the solid line in Figure 11.
Theoretical predictions follow the basic trend of experimental data showing that the ratio decreases as the injection Re i increases.The agreement between theory and experiment is the best at higher Re i since buoyancy forces, which were not W/H Re i included in the analytical expressions, will become negligible with increasing injection velocity.The Froude number, Fr, as defined in row 2, was used to determine the ratio of the momentum forces to the buoyancy forces.Calculated Fr at each Re i is tabulated in Table 2.It can be observed from the table that buoyancy forces appear to be important (Fr < 1) at all locations.However, ignoring buoyancy forces may become critical only after Fr falls below 0.4 (when Re i = 2.95).For Re i beyond 2.95, the theory and experiment are compared favorably as previously mentioned.In Figure 11, M * = 1.20 was used, corresponding to nonbuoyant conditions for a laminar, parabolic velocity profile at the burner exit.Under buoyant conditions, M * should be greater than 1.20.If M * increases, then row 19 shows that W * /H * decreases which may explain the differences between the theory and the experiment at lower injection Re.Sc for CH 4 used in row 19, was calculated to be 0.74 based on a stoichiometric mixture of CH 4 and air, and all gas properties were evaluated at 700 K.
Ratio of Axial Location of Maximum Flame Width to
Visible Flame Height.Figure 12 shows the ratio of the axial location where the maximum flame width occurs to the visible flame height as a function of injection velocity for a single 2D jet fired with 100% CH 4 .The solid line on the graph represents the laminar flame theory predictions found in row 21.For these predictions, Sc was evaluated to be 0.74 based on a stoichiometric mixture of CH 4 and air.Note from row 21 that the theory predicts no injection velocity or injection Re i dependence.However, experimental data shows that as the injection Re i increases, the ratio of the axial location at the maximum flame width to the visible flame height decreases up until Re i = 3 and then increases again.If the injection Re i is very low, the maximum flame width occurs at approximately half the flame height.As the injection Re i increases up to approximately 3.0, the axial location at which the maximum flame width occurs becomes a smaller fraction of the total flame height.In other words, the flame remains wide near the base of the burner as it continues to lengthen.However, as injection Re i is increased beyond 3.0, the flame starts to become wider at the flame tip meaning the ratio of x * max /H * begins to increase.
Lift-Off Height Measurements with Nitrogen-Diluted
C 3 H 8 Mixtures.As previously discussed in Section 4, no lifted flames were observed when the 2D jet was fired with CH 4 .However, lifted flames were observed for propane mixtures (Sc ≈ 1.3).The lift-off height to flame height ratio for a single 2D jet fired with an 85% N 2 -15% C 3 H 8 by volume mixture is shown in Figure 13.The percentage of nitrogen dilution was selected to allow lift-off to occur in the laminar regime.As shown in the figure (solid line) and given in the theory (row 23), lift-off height increases as the injection velocity is increased.Sc for the fuel mixture (as used in row 23) was taken to be 1.30 based on a stoichiometric mixture of C 3 H 8 and air.M * in row 23 was selected to be 1.20 representing nonbuoyant conditions, and the laminar flame speed was taken to be 0.25 m/s based on experimental measurements [6].The theory tends to predict lift-off at a later injection velocity than the experiment and also shows a steeper increase in lift-off height.The comparison between the laminar flame theory's prediction of blow-off velocity (as given in row 24) and the experimental measurements of blow-off for a single 2D jet fired with N 2 -diluted mixtures of C 3 H 8 is given in Figure 14.
For the theoretical predictions, M * = 1.20,J * = 1.0, and S was taken to be 0.20, 0.23, and 0.25 m/s for 90% N 2 −10% C 3 H 8 , 87% N 2 -13% C 3 H 8 , and 85% N 2 -15% C 3 H 8 mixtures based on experimental data collected in [6].As shown in the figure, the theory underpredicts the blow-off velocity by a factor of about 1.40.It is encouraging, however, that the theory does show the same trends as the experimental data.
As the mixture is leaned, both the theory and the experiment show a similar decrease in blow-off velocity.Recently a test method was proposed to determine the degree of flammability of refrigerants [20].Higher the flame speed (S) of a fuel, higher the flammability of fuel.Looking at the expression for blow-off velocity v blow (row 24) and liftoff height L (row 23), it is seen that v blow ∝ C(Sc)S, while L ∝ S 3Sc/(Sc−1) .For Propane fuels, Sc ≈ 1.30, C = 1.076, and hence v blow ∝ 1.076S, while L ∝ S 12 for 2D jet (L ∝ S 4 for a circular jet).Thus, lift-off height is more sensitive to flammability than blow-off.Therefore, the current lift-off height may be used to distinguish the degree of flammability of different refrigerants.One can increase the sensitivity by diluting fuel with inerts since it affects φ f .
Experimental Results for Multiple Burner Jets
7.1.Experimental Apparatus.The experimental apparatus shown in Figure 10 was modified to an open-ended, rectangular enclosure in which three stainless steel 2D burners were mounted coaxially in a linear, triple burner array (TBA).The burner dimensions remain the same as in the earlier experiments.Interburner spacing is variable with a resolution of 1/8 in.The three burners were fired with equal amounts of CH 4 or C 3 H 8 supplied from a pressurized tank.Shop air and N 2 were available for dilution.Experimental results and relevant discussions will be divided into four sections which will identify: (i) the interactive modes in multiple 2D jets, as a function of injection Reynolds number and interburner spacing, (ii) the interaction and group flame formation criteria, (iii) the changes in flame structure (flame height and width) under interactive modes, and (iv) the changes flame stability (lift-off height and blow-off velocity) under interactive modes. of the nondimensional interburner spacing (l * = l/d i ) and three injection Reynolds numbers (0.784, 1.62, and 2.95) for 100% CH 4 .It can be deduced from the figure that the strength of multiple flame interaction is governed by two factors: (i) the injection velocity and (ii) the interburner spacing.In general as the interburner spacing is reduced below a certain value, the flames will lengthen due to oxygen deficiency developing in the interstitial space between the flames.However, the magnitude of this increase appears to be governed by the injection Re i (i.e., injection of more fuel).As long as Re i is kept low, the spacing at which interaction will occur can be greatly reduced.In other words, if the injection velocity is kept low, the interactive effects will not be felt until the burners are brought much closer together and even at this lower spacing the interactive effects appear to be weaker.Notice that at l * = 8 diameters, H m /H = 1.18 and 2.56 for Re i = 0.784 and Re i = 2.95, respectively.On the other hand, if the injection velocity is high, the interactive effects will come in to play at a much earlier spacing and the interactive effects appear to be much stronger.The upper limit on H m /H can be found by recalling that sheath flames behave much like single flames issuing from an equivalent diameter with equal injection velocity.Hence, d i,equiv,sheath = 3d i for a 2D jet and Re i,equiv,sheath = 3Re i .Since H ∝ Re i as shown in row 17 from Table 1, H sheath /H iso = 3.This gives the upper limit on H m /H iso .In other words, if H m /H iso = 3, then a sheath flame has formed and the H m /H iso ratio is at a maximum.Figure 15 shows that the measured H m /H ratios approach the sheath limit.
Multiple-Flame Growth versus Single Flame Growth:
Fixed Reynolds Number, Decreased Spacing.As previously mentioned, as the interburner spacing decreases, the flames must widen and lengthen to obtain the necessary oxygen for combustion.This effect was qualitatively illustrated in Figure 8 which identified the stages of interaction as isolated, individual, group, and sheath flames.Now, Figure 16 shows the actual flames undergoing similar stages of interaction.
The figure depicts a grouping of three 2D burners at a fixed injection Re of 3.70 at interburner spacings of 32 burner diameters (Figure 16 Note that injection velocity in each frame is remaining constant.Therefore, any changes in the characteristics of the flames are due to interactive effects.Even though the flames are interacting in Figure 16(c), as evidenced by increased width and height, they still retain their individual shape.Therefore, these flames are referred to as "individual" in analogy with group combustion literature [14,16].As the spacing is reduced further down to 13.12 burner diameters, the "individual" flame characteristics begin to become lost and the flames begin to start merging together or forming an incipient group flame.At 9.92 burner diameters, the group flame has become a sheath flame meaning that the amount of air entrained in the interstitial space between the burners has fallen to such a low that no flame can exist between the burners, and all of the required air is coming from the ambient surroundings.The flame in Figure 16(d) is similar to a single flame issuing from a larger diameter with a similar throughput.Also note that as the burner spacing is reduced Figures 16(a)-16(d) the fraction of the yellow or sooty region of the visible flame height is increased.This is due to the lack of oxygen in the interstitial space and will mostly likely lead to increased soot and CO production which could further complicate the interaction process by increasing radiation heat transfer to adjacent flames.The sooty flames are more Individual flames (as shown in Figure 8(b)), where the flames have not merged together but where interactive effects between the burners increase flame height, flame width, and flame stability exist in the band given between the interaction criteria and group flame formation criteria.Therefore, individual flames exist at higher injection Re i and lower interburner spacings than isolated flames as explained in Figure 8.As the interburner spacing is reduced further and as the injection Re i is increased further, the flames merge together and form a larger group flame as illustrated in Figure 8(c).Group flames exist until the oxygen in the interstitial space falls to approximately zero and a sheath flame forms.The sheath flame (as illustrated in Figures 8(d) and 16(e)) behaves in a similar manner as an isolated flame issuing from a larger burner diameter.For Figure 17, a sheath flame was taken to exist if the merging height for the flames (x u in Figure 8(c)) was less than 2.5 mm.It must also be stated that the interaction criteria, group flame formation and sheath flame formation curves are upward sloping and could be approaching limiting values.For example, the slope of the group flame formation curves grows increasingly larger as the interburner spacing is increased.In other words, the injection Re i at an interburner spacing of 20 to 25 burner diameters may approach infinity, making forming a group flame impossible beyond certain spacing.The same may be true for interaction criteria and sheath flame formation criteria.In fact, it is expected that the sheath flame formation criteria would be the steepest curve followed by the group flame formation curve and the interaction curve.
Interaction Criteria:
Comparison with Theory.Figure 18 gives the comparison between the experimentally observed interactions (shown in Figure 17) and the theoretical predictions for flame interaction and group flame formation (equations (16b) and (21c)) for a linear array of 2D burners fired with 100% CH 4 .Here, the interburner spacing, l, has been nondimensionalized by the isolated visible flame height (H), which was measured for a single 2D jet.Re i of the single jet was kept the same as the Re i per burner for the linear array of 2D jets.In Figure 18, the dashed line corresponds to theoretical predictions for flame interaction based on the adjacent mixing layers intersecting at the isolated flame height (shown in Figure 8(a)) as given by (15d).The dotted line corresponds to the theoretical prediction for group flame formation assuming that there is insufficient air available in the interstitial space between the flames as found in (21c).The solid line gives the group flame formation prediction based on isolated flames touching at their maximum width as given in (16b).As shown in the figure, equations (15d) and (21c) overpredict the interburner spacing necessary to form individual and group flames, respectively, by approximately a factor of 4. Comparison between the theoretical predictions for group flame formation given by (16b) and the experimental measurements for group flame formation, represented by the square data points in the figure, is good.This comparison is better at higher Re i as buoyancy forces, which were neglected from our analysis, decrease to smaller value.Froude numbers, as defined in row 2, were calculated to be a maximum of 0.56 at Re i = 6.86 and l * = 64 and a minimum of 0.154 at Re i = 0.78 and l * = 8.
Interactive Effects on Flame Geometry:
Comparison with Theory 7.4.1.Schlieren Images of 2D Jets.A Schlieren system was used to study the interaction of multiple flames at various interburner spacings.Figure 19 shows the captured Schlieren images for a linear array of three 2D jets at interburner spacings of 64, 32, and 16 burner diameters.In each frame, the burners have been fired with 14.8 mL/min/burner of C 3 H 8 (Re i = 2.97), therefore, any changes in the images will be due strictly due to interactive processes between the flames.As shown in Figure 19, there is a vast difference in the images.In Figure 19(a), the burners are separated by 64 burner widths (approximately 54.8 mm).At this spacing, the first order density gradients of each flame (∂ρ/∂y = −(P/RT 2 )∂T/∂y) do not intersect and the flames are operating under "isolated" conditions.As the spacing is reduced to 32 burner widths (approximately 25.4 mm) as shown in Figure 19(b), the density gradients intersect at some axial distance away from the burner (typically beyond the flame tips).It appears that the density gradient growth (width, etc.) from Figures 19(a) to 19(b) remains unchanged.The intersection of the density gradients as shown in Figure 19(b) may be a sign of flame interaction but definitely does not represent the merging of flames.Interestingly enough, changes in the flame structure and flame stability were not noticed until a much closer spacing, namely, 16 burner diameters or 12.7 mm spacing.This means that the intersection of the first order density gradient may have little effect on the visible characteristics of the flames.
As the spacing is reduced to 16 burner widths (12.7 mm) as shown in Figure 19(c), a very interesting phenomenon occurs.The flames begin to flicker or pulse in series at a frequency of approximately 10 Hz.It should be noted that this flickering was not observed until the spacing was decreased to 16 burner widths.It is therefore believed that this flickering is a direct result of the interaction between the flames.It should also be noted that the flickering ceases if one of the outside burners is removed (e.g., in the row of a binary array at the same Re i , the flickering ceases).One possible explanation for the flickering observed in the triple burner array can be found by analyzing the flame height expression from row 17.The flame height of a 2D jet is a very strong function of oxygen concentration (given by the 1/φ 3 f term in row 17).Hence, a small decrease in oxygen concentration will result in a large increase in flame height.It is possible that as the flames are brought within 16 burner widths of one another, oxygen concentration in the interstitial space around the central flame decreases.If disturbances in the oxygen concentration occur, then Y O2,∞ = Y O2,∞,avg + ε.If ε < 0, then the oxygen concentration decreases and the flame lengthens.Once the flame lengthens, oxygen consumption decreases since the injected fuel is being consumed over a larger axial distance.This decreased consumption rate of oxygen could cause the oxygen concentration in the interstitial space to rise again (ε > 0), lowering the flame height.So it is possible that the flickering can be attributed to disturbances in the levels of oxygen concentration available for the central flame.
Interactive Effects on Flame Stability:
Comparison with Theory x,i ), lift-off height increases as the injection velocity is increased.Schmidt number for the fuel mixture (used in row 23) was taken to be 1.30 based on a stoichiometric mixture of C 3 H 8 and air.M * in row 23 was selected to be 1.2 representing nonbuoyant conditions and the laminar flame speed was taken to be 0.25 m/s based on experimental measurements [6].The isolated flame theory tends to predict lift-off at a later injection velocity than the experiment and also shows a steeper increase in lift-off height.The theory also predicts a lower blow-off velocity than the experiment, given by L/H = 1.Also, the experimental data shown in Figure 20 seems to suggest that L/H decreases for a given injection velocity as the separation distance between the burners is decreased.From the experimental data it appears that increases in flame height may be larger than increases in lift-off height.Comparison between the laminar flame theory's prediction of blow-off velocity (as given in row 24) and the experimental measurements of blow-off in a linear array of three planar burners fired with N 2 -diluted mixtures of C 3 H 8 is given in Figure 21.For the theoretical predictions of blow-off for an isolated flame, M * and J * were selected to be 1.2 and 1.0, respectively.S was taken to be 0.2, 0.23, and 0.25 m/s for 90% N 2 -10% C 3 H 8 , 87% N 2 -13% C 3 H 8 , and 85% N 2 -15% C 3 H 8 mixtures from experimental data collected, [6] respectively.As shown in the figure, there is no change in blow -off velocity for interburner spacing of 64 and 16 burners diameters and the theory underpredicts the blow-off velocity at these spacings by a factor of about 1.40.However, as the interburner spacing is reduced to 10.67 diameters, there is a large increase in blow-off velocity marking an approximately 50% increase in flame stability.Even though isolated flame theory underpredicts the experimental data, it is encouraging that the theory shows the same trends as experiments.As the mixture is leaned, both theory and experiment show a similar decrease in blow-off velocity.
Isolated Jets
(1) Solutions for the compressible form of the governing equations of mass, momentum, energy, and species for a single 2D jet were solved to give explicit solutions for flame height and width, mixing layer growth, lift-off height, and blow-off velocity.
(2) A flame stability theory (lift-off, blow-off velocity, etc.) was offered based on two important parameters: the flame speed contour, giving the positions in space where the axial gas velocity equals the laminar flame speed and the stoichiometric contour giving the positions in space where fuel and air are in stoichiometric proportions.For fuels with Sc = 1, if the stoichiometric contour lies "inside" the flame speed contour, a stable flame is predicted.If the stoichiometric contour lies "outside" the flame speed contour, blow-off is achieved.If the two contours intersect (Sc / = 1), the intersection point corresponds to the lift-off height (Sc > 1) or the ignitable or possibly partial flame height (Sc < 1).
(3) Schmidt number was found to play a key role in flame stabilization processes.For Sc = 1, no intersection between the flame speed and stoichiometric contours is possible and hence no lifted flames are predicted.For Sc > 1, the intersection of the contours gave the lift-off height which increased with increasing Reynolds number until the blow-off condition was reached.For Sc < 1, an interesting scenario was observed.The intersection of the contours leads to an ignitable height or partial flame height which decreased as Reynolds number was increased.
(4) Laminar flame theory predictions compared favorably with experimental data collected for a single 2D jet.
Triple Burner Jets
(1) Laminar single flame theory was modified to include multiple burner effects to obtain simple expressions which predict the interburner spacing at which flame interaction begins and at which formation of group flames occurs.
(2) Four distinctive modes of flame interaction were identified: (a) isolated, (b) individual, (c) group, and (d) sheath.For a given burner geometry and combustible fuel properties, these modes were found to be a function of interburner spacing and injection Reynolds number.
(3) At low injection Reynolds numbers, flame interaction can cause flame flicker.
(4) Laminar isolated flame theory underpredicts the blow-off velocity in linear arrays of 2D burners by approximately 40% for interburner spacings from 64 to 16 burner widths.
3. 1 .
Flame Stabilization in 2D Jet.For the pure mixing problem, the axial velocity v x , fuel and oxygen mass fraction profiles (Y F , Y O2 ) are qualitatively shown in Figure 2(a).
(b)).Given this discussion, the flame stability characteristics of the single 2D jet can be mapped by analyzing two contours: (a) the stoichiometric contour representing the stoichiometric or combustible mixture tube where Y O2 /Y F = ν O2 and (b) the flame speed contour representing the positions in space where the axial velocity is equal to the flame speed (v x = S).
δ
(b) Expanded view of the combustible mixture tube
Figure 4 :
Figure 4: Qualitative growth of the flame speed and stoichiometric contours with Re.
4. 5 .
Lift-Off and Blow-off Solutions: Discussion of Schmidt Number Dependence.The Schmidt number (Sc) governs the way the two contours grow with respect to one another.Physically, Sc represents the ratio of the momentum to the mass diffusion.If Sc = 1, then mass and momentum diffuse equally and the two contours never intersect.Figures5(A)a-5(A)e show how the contours grow with respect to one another as the injection Reynolds number is increased for Sc = 1.If Re is very low (Figures 5(A)a and 5(A)b )a-5(B)e show growth of the stoichiometric and velocity contours with velocity v x,i when Sc > 1 and Figures 5(C)a-5(C)e show the corresponding results when Sc < 1.The intersection point corresponds to the lift-off height for Sc > 1 (Figure 5(B)) and the partial flame height (but not the lift-off height) for Sc < 1 (Figure 5(C)).For low injection Re, (Figures 5(B)a and 5(B)b
Figure 5 :
Figure 5: Stoichiometric (dashed) and flame speed (solid) contour growths with increasing Re and varying Sc.
Figure 7 :
Figure 7: Excess air variation at x = H as a function of Sc for 2D and circular jets.
When l > l 1 , each burner operates as a single burner.Recall that 50% excess air is entrained for 2D burners fired with fuels of Sc = 1.0 when l > l 1 When x int < H, interaction effects the individual flame characteristics, l 2 < l < l 1 When l < l 2 , the air entrained is insufficient to burn the fuel from the central burner and a group flame is formed W C E y x (d) Sheath combustionall of the required air must come from the far field
Figure 11 :
Figure 11: Ratio of maximum flame width to visible flame height as a function of injection Re for a single 2D jet fired with 100% CH 4 .
Figure 12 :Figure 13 :
Figure 12: Ratio of axial location of the maximum flame width to the visible flame height as a function of injection Re i for 100% CH 4 fuel.
Figure 15
shows the ratio of the height of the multiple flames (H m ) to the height of an isolated flame (H) as a function
(a)),16 burner diameters (Figure16(b)), 14.4 burner diameters (Figure16(c)), 13.12 burner diameters (Figure16(d)) and 9.92 burner diameters (Figure16(e)).Notice that as the interburner spacing is reduced from 32 to 16 burner diameters, the visible flame heights and the flame characteristics (width, axial location at maximum width etc.) change very little, indicating almost no interaction.Figures16(a) and16(b) show the flames in the isolated flame mode.As the spacing is reduced to 14.4 burner diameters, however, the flames begin to interact as evidenced by the flames widening and lengthening slightly.
Figure 17 :
Figure 17: Interburner spacing required for flame interaction and group flame formation as a function of injection velocity for 100% CH 4 fuel.
Figure 18 :
Figure 18: Comparison of theoretical predictions (given in (4d), (5b), and (10c)) and experimentally observed interburner spacing nondimensionalized by the visible flame height of a single isolated jet for CH 4 fuel.
7. 5 . 1 .
Lift-Off Measurements with Nitrogen-Diluted C 3 H 8 Mixtures.Since no lift-off was observed for CH 4 (Sc < 1) flames, the 2D jets were fired with a Nitrogen-diluted mixture of Propane (Sc > 1).The lift-off height to flame height ratio for an 85% N 2 -15% C 3 H 8 mixture is shown in Figure20for interburner spacings of 64, 16, and 10.67 widths.The percentage of Nitrogen dilution was selected to allow blow-off to occur in the laminar regime.As shown in the figure (solid line) and given in the theory (row 24), L/H ∝ v (3Sc/Sc−1)
Table 1 :
Summary of single, laminar jet results for 2D planar jet under non-buoyant conditions.
2 D = constant and
Table 2 :
Calculated Froude number versus Reynolds number for isolated jet.
* /H * Re i | 16,941.4 | 2012-11-19T00:00:00.000 | [
"Engineering",
"Physics"
] |
Ca2+ Binding to α-Synuclein Regulates Ligand Binding and Oligomerization
α-Synuclein is a protein normally involved in presynaptic vesicle homeostasis. It participates in the development of Parkinson's disease, in which the nerve cell lesions, Lewy bodies, accumulate α-synuclein filaments. The synaptic neurotransmitter release is primarily dependent on Ca2+-regulated processes. A microdialysis technique was applied showing that α-synuclein binds Ca2+ with an IC50 of about 2–300 μm and in a reaction uninhibited by a 50-fold excess of Mg2+. The Ca2+-binding site consists of a novel C-terminally localized acidic 32-amino acid domain also present in the homologue β-synuclein, as shown by Ca2+binding to truncated recombinant and synthetic α-synuclein peptides. Ca2+ binding affects the functional properties of α-synuclein. First, the ligand binding of 125I-labeled bovine microtubule-associated protein 1A is stimulated by Ca2+ ions in the 1–500 μm range and is dependent on an intact Ca2+ binding site in α-synuclein. Second, the Ca2+ binding stimulates the proportion of125I-α-synuclein-containing oligomers. This suggests that Ca2+ ions may both participate in normal α-synuclein functions in the nerve terminal and exercise pathological effects involved in the formation of Lewy bodies.
␣-Synuclein is a protein normally involved in presynaptic vesicle homeostasis. It participates in the development of Parkinson's disease, in which the nerve cell lesions, Lewy bodies, accumulate ␣-synuclein filaments. The synaptic neurotransmitter release is primarily dependent on Ca 2؉ -regulated processes. A microdialysis technique was applied showing that ␣-synuclein binds Ca 2؉ with an IC 50 of about 2-300 M and in a reaction uninhibited by a 50-fold excess of Mg 2؉ . The Ca 2؉ -binding site consists of a novel C-terminally localized acidic 32-amino acid domain also present in the homologue -synuclein, as shown by Ca 2؉ binding to truncated recombinant and synthetic ␣-synuclein peptides. Ca 2؉ binding affects the functional properties of ␣-synuclein. First, the ligand binding of 125 I-labeled bovine microtubule-associated protein 1A is stimulated by Ca 2؉ ions in the 1-500 M range and is dependent on an intact Ca 2؉ binding site in ␣-synuclein. Second, the Ca 2؉ binding stimulates the proportion of 125 I-␣-synuclein-containing oligomers. This suggests that Ca 2؉ ions may both participate in normal ␣-synuclein functions in the nerve terminal and exercise pathological effects involved in the formation of Lewy bodies.
Parkinson's disease (PD) 1 and other common neurodegenerative disorders, e.g. dementia with Lewy bodies and the Lewy body variant of Alzheimer's disease, are characterized by the development of the proteinaceous inclusions called Lewy bodies in the degenerating nerve cells (1). Lewy bodies comprise ␣-synuclein (AS)-containing filaments, and purified AS readily forms amyloid-like filaments in vitro (2)(3)(4)(5). Moreover, missense mutations in the AS gene cause heritable autosomal dominant PD (6,7). Transgenic animal models support the direct link between AS and neurodegeneration because overexpression of AS leads to neuronal loss, nerve terminal pathology, and formation of Lewy body-like inclusions (8 -11). It has been proposed that the pathogenic mechanisms triggered by AS rely on structural changes occurring during the transition from the monomeric to the -folded filamentous state (12). AS is a member of the synuclein family, which, in man, is dominated by ␣-, -, and ␥-synuclein (13). The synucleins are acidic proteins of about 140 amino acids that display a "natively unfolded" struc-ture (14). The N-terminal part of the proteins is highly conserved and contains several KTKEGV consensus repeats, whereas the C-terminal portion is less well conserved and possesses no known structural elements (15). The differences in its primary structure are reflected in segregated functional domains, e.g. brain vesicles bind to the N-terminal part, whereas the microtubule-associated proteins tau and microtubule-associated protein 1B bind to the C-terminal part (16 -18). The AS gene is dispensable for normal development and breeding as demonstrated in AS knockout mice (19). However, these mice do exhibit subtle changes in the contents of certain neurotransmitters and in synaptic transmission (19), and antisense suppression in primary nerve cell cultures causes a reduced distal pool of synaptic vesicles (20). This indicates that AS plays a role in the cellular signaling events, an observation that is in agreement with biochemical studies demonstrating that AS can affect phospholipase D2 and protein kinases and modulate phosphorylation of nerve cell proteins (17,21,22).
Ca 2ϩ ions regulate a plethora of cellular processes. This functionality has been refined in neurons, where the propagation of action potentials over long distances and the fine-tuned neurotransmitter release from nerve terminals represent such Ca 2ϩ -regulated processes (23)(24)(25).
The actions of Ca 2ϩ ions are mediated by several mechanisms. The Ca 2ϩ -calmodulin complex and its diverse downstream signaling pathways represent common cellular mechanisms (26). More neuron-specific Ca 2ϩ -regulated proteins are represented by synaptic vesicle-associated proteins and abundant Ca 2ϩ -binding neuronal proteins like parvalbumin and calbindin (27). The latter group may function as a slow buffer that modulates synaptic plasticity (28). The importance of cellular Ca 2ϩ homeostasis is highlighted by the central role of Ca 2ϩ ions in apoptotic processes and neuronal excitotoxicity (29,30).
The purpose of the present study was to investigate the binding of Ca 2ϩ to AS to ascertain whether Ca 2ϩ can regulate normal and pathological AS functions. 45 Ca and 125 I were obtained from Amersham Pharmacia Biotech. All reagents were of analytic grade, unless stated otherwise. The synthetic peptide AS-(109 -140), corresponding to amino acid residues 109 -140 in human AS, was from Shaefer-N (Copenhagen, Denmark).
Miscellaneous-
Proteins-The novel deletion mutants AS-(1-110) and AS-(1-125) were produced by PCR-based mutagenesis as described previously for AS-(1-95) and AS-(55-140) (18). The constructs were verified by DNA sequencing. The mutant proteins were expressed in Escherichia coli and purified essentially as described for wild type AS (31). The peptides were more than 95% pure as assessed by Coomassie Blue staining (Fig. 2, middle panel, A, inset), and their identities were verified by mass spectrometry (data not shown). Purified bovine microtubule-associated protein (MAP)-1A was kindly provided by Dr. Khalid Islam (32). It consisted essentially of the pure ϳ360-kDa heavy chain (Fig. 3, top panel, inset, lane 1). The MAP-1A was iodinated to a specific activity of about 250 mCi/mg using chloramin T as the oxidizing agent, as described previously for MAP-1B (18). The electrophoretic migration of the iodinated MAP-1A consisted of a single slow-migrating band corresponding to the nonlabeled protein (Fig. 3, top panel, inset, lane 2). All protein concentrations were determined using the Bio-Rad protein assay using bovine serum albumin as standard. 45 Ca 2ϩ Equilibrium Dialysis Assay-First buffers and protein stock solutions were passed through a Chelex 100 column (Bio-Rad) to remove Ca 2ϩ ions to negligible levels (33). Next, solutions containing 1 mM 45 Ca 2ϩ and different concentrations of unlabeled Ca 2ϩ were prepared with and without a constant concentration of AS-(1-140), AS-(1-125), AS-(1-110), AS-(1-95), -synuclein, and ␥-synuclein. All experiments were performed at 4°C in a solution containing 150 mM KCl and 20 mM HEPES, pH 7.4. The concentration of the synucleins varied from 20 to 300 M. Binding was measured by equilibrium dialysis. For equilibrium dialysis, 30-l plexiglass chambers were used (34). Each chamber was divided into two equal compartments by a cellulose membrane cut from dialysis tubing (Spectrum, Houston, Texas; cutoff, 3,500 Da). The left-side compartments contained 25 l of calcium-containing samples, with or without the synucleins, and the right-side compartments contained 25 l of buffer. Control experiments showed that equilibrium was established within 2 h (data not shown). Accordingly, the chambers were emptied after 9 h, before the samples were assayed for radioactivity and protein. The Ca 2ϩ concentration was determined by liquid scintillation counting with a LKB Wallac 1209 Rackbeta counter (Turku, Finland). No quenching of the radioactivity of 45 Ca 2ϩ by the synucleins was observed. The recovery of 45 Ca 2ϩ was 97%, demonstrating that no significant adsorption of calcium to the cellulose membrane or dialysis chamber had occurred. The radioactivity of Ca 2ϩcontaining solutions that had not been dialyzed was taken to represent the known concentration of total Ca 2ϩ . In the binding experiments, the concentrations of bound and free Ca 2ϩ were calculated by using the radioactivity samples taken from the synuclein-containing chambers (representing bound plus free Ca 2ϩ ) and the corresponding synucleinfree chambers (representing free Ca 2ϩ ). The concentrations of the synucleins were measured by spectroscopy at 280 nm using the extinction coefficient calculated for each of them. The protein content in the isolated samples was determined by SDS-polyacrylamide gel electrophoresis and silver staining to assure the absence of degradation and leakage through the membrane.
MAP-1A Binding Assay-The 125 I-MAP-1A binding to AS peptides immobilized in Polysorb microtiter plates (Nunc, Copenhagen, Denmark) was performed essentially as described previously for tau (17). The binding buffer consisted of 150 mM KCl, 20 mM HEPES, pH 7.4, 0.01% extensively dialyzed bovine serum albumin, 0.1 mM EDTA, and 0.1 mM EGTA supplemented with various concentrations of CaCl 2 and MgCl 2 . The even immobilization of the C-terminally truncated AS peptides was verified by their similar specific binding of a 125 I-labeled affinity-purified antibody (ASY-3) raised against a synthetic peptide corresponding to the N-terminal 31 residues of AS (data not shown).
Chemical Cross-linking of AS Oligomers-AS and C-terminal-truncated peptides (1 M) supplemented with 500 pM of the corresponding 125 I-labeled AS were incubated for 2 h at 20°C in 150 mM KCl, 20 mM 4-morpholinepropanesulfonic acid, pH 7.4, 0.1 mM EDTA, 0.1 mM EGTA, and 0.5 mM dithioerythreitol in the absence and presence of Ca 2ϩ . The distribution of monomers, oligomers, and higher aggregates was subsequently stabilized by the addition of a short-length hydrophilic chemical cross-linker, bis(sulfosuccinimidyl)suberate (BS3) (1 mM), for 15 min, and then the cross-linker was quenched by the addition of an equal volume of Tris-containing SDS, dithioerythreitol loading buffer. The samples were subsequently resolved by reducing gradient SDS-polyacrylamide gel electrophoresis followed by visualization by autoradiography. 45 Ca 2ϩ equilibrium dialysis was performed to determine whether AS is a Ca 2ϩ -binding protein. Fig. 1 demonstrates that human recombinant AS binds Ca 2ϩ with a half-saturation of about 300 M. The saturation of the binding approaches 0.5 mol Ca 2ϩ /mol AS at 1 mM Ca 2ϩ , which indicates the presence of a single binding site. Mg 2ϩ (8 mM) fails to inhibit the 45 Ca 2ϩ tracer binding (1 M) significantly as compared with the ϳ85% inhibition obtained by 1.5 mM unlabeled Ca 2ϩ . Hence, the binding site displays a Ca 2ϩ selectivity among the dominating intracellular divalent cations (Fig. 2, middle panel, A).
AS Contains a Novel Ca 2ϩ -binding Motif-
Truncated recombinant AS peptides with deletions of the N-terminal 29-and 54-amino acid residues and the C-terminal 45-amino acid residues were used for initial localization of the Ca 2ϩ -binding site. Fig. 2, middle panel, B demonstrates that only the C-terminal truncation inhibited the binding, whereas the N-terminal truncation has no effect. No inhibition of the Ca 2ϩ binding is observed when testing the mutations causing PD (A30P and A53T) (Fig. 2, middle panel, A). Acidic amino acid residues often participate in the binding of Ca 2ϩ ions as noted in the EF-hand, C 2 -domain and the low-affinity Ca 2ϩbinding sites in S100 proteins (35)(36)(37), and such residues account for 33% of the C-terminal 45 residues. Fig. 2, top panel, demonstrates a striking identity in the spacing of 10 of the 12 acidic residues in the C-terminal 32 residues of ␣and -synuclein. ␥-Synuclein, however, shows no such similarity. The similarity in the spacing of acidic residues is reflected at the functional level, where ␣and -synuclein, but not ␥-synuclein, bind Ca 2ϩ (Fig. 2, middle panel, A). The acidic residues in the C terminus of ␣and -synuclein are organized as a tandem repeat of 16 amino acids (Fig. 2, top panel), and the integrity of this structure may be required for the binding of Ca 2ϩ ions. This hypothesis was explored by examining the expression and purification of recombinant truncated AS peptides lacking (i) the C-terminal repeat AS-(1-125) and (ii) both repeats AS-(1-110) and AS-(1-95) (Fig. 2, middle panel, B, inset). Fig. 2, middle panel, B shows that removal of the single C-terminal repeat in AS-(1-125) inhibits the Ca 2ϩ binding to the level of the peptides lacking both repeats or the entire 45 C-terminal residues. Moreover, a synthetic peptide corresponding to the tandem repeat structure, AS-(109 -140), binds 45 Ca 2ϩ to the same extent as wild type AS (Fig. 2, middle panel, B), and its binding isotherms reveal indistinguishable affinities for Ca 2ϩ (Fig. 2, bottom panel). Accordingly, the C-terminal repeat structure in ␣and -synuclein is necessary and sufficient to bind Ca 2ϩ and represents a bona fide Ca 2ϩ -binding domain.
Ca 2ϩ Binding to ␣-Synuclein Modulates Ligand Interactions-The propensity of Ca 2ϩ ions to modulate ligand binding to AS was analyzed in terms of the effect of such binding of (i) the amyloidogenic A labeled bovine MAP-1A to immobilized AS (Fig. 3, top panel). The association of 50 pM 125 I-MAP-1A reaches a plateau within 9 h at 4°C (data not shown), and all incubations are therefore performed for 16 h. The interaction is specific, as demonstrated by the inhibition of 125 I-MAP-1A binding by both unlabeled MAP-1A and AS, and it exhibits a high affinity (IC 50 ϳ 30 nM; Fig. 3, top panel).
The binding of MAP-1A to AS is enhanced by Ca 2ϩ ions (Fig. 3, middle panel), and a maximal stimulatory effect of about 90% is obtained at concentrations greater than 0.5 mM (Fig. 3, middle panel), with a half-maximal stimulation at about 0.3 mM Ca 2ϩ (Fig. 3, middle panel). Both Mg 2ϩ and Ca 2ϩ (1.5 mM) stimulate MAP-1A binding to AS, but the effect of Ca 2ϩ alone is about 60% greater than that for Mg 2ϩ ions alone; when com- bined, their effect is synergistic (Fig. 3, bottom panel, A). Disruption of the Ca 2ϩ -binding domain in AS obtained by removal of the C-terminal 15, 30, and 45 amino acid residues completely abrogates the Ca 2ϩ -stimulatory effect on MAP-1A binding (Fig. 3, bottom panel, B) and demonstrates that the Ca 2ϩ effect was indeed based on the AS moiety. Removal of the Ca 2ϩ -binding site in AS increases the binding of MAP-1A to the truncated AS peptide (data not shown), but it abrogates the stimulatory Ca 2ϩ effect (Fig. 3, bottom panel, B). This indicates a negative regulatory effect of the C-terminal segment of AS on the MAP-1B interaction that is alleviated by binding of Ca 2ϩ ions. The stimulatory effect of Mg 2ϩ ions on MAP-1A binding may thus be mediated via the MAP-1A moiety. Many Ca 2ϩ effects are mediated through the binding of Ca 2ϩ to calmodulin, but AS does not bind to calmodulin-Sepharose in either the absence or presence of Ca 2 (data not shown).
Ca 2ϩ Ions Regulate the Oligomeric Distribution of AS Molecules-Abnormal filamentous AS is a characteristic of diseased brain tissue, and AS aggregation represents a nucleation-dependent process, where the nucleation by oligomeric AS species may represent a rate-limiting step. We used the short-length hydrophilic chemical cross-linker BS3 to covalently stabilize AS oligomers in the absence and presence of Ca 2 . This analysis is likely to underestimate the oligomeric content because the cross-linking efficiency is Ͻ100%. However, the method has the advantage of visualizing molecules associated through lowaffinity interactions. Gel filtration methods and other timeconsuming procedures for separating oligomerized and monomeric species may not be able to reveal such interactions due to dissociation during the procedures. No significant AS-(1-140) oligomers are present without cross-linking (Fig. 4). The same applies to AS-(1-125) and AS-(1-110) (data not shown). Supplementing the AS solution with 1 mM BS3 for 15 min before reducing SDS-polyacrylamide gel electrophoresis causes the formation of 125 I-labeled bands compatible with AS dimers, trimers, and higher oligomers with a higher oligomeric content among the C-terminal-truncated peptides (Fig. 4). Saturation of the Ca 2ϩ binding site (1.5 mM) increases the oligomeric content of AS-(1-140) 2-fold for dimers and 2.5-fold for trimers, and higher aggregates, whereas no Ca 2 effect is observed for the truncated peptides. The oligomers are not an artifact of the iodination of AS because the distinct oligomeric pattern is absent without the presence of 1 M unlabeled AS. Accordingly, Ca 2ϩ binding to the C-terminal tandem repeat domain favors the formation of AS oligomers. DISCUSSION The present study identifies a novel Ca 2ϩ -binding motif in the C terminus of AS. The binding of Ca 2ϩ alters the interactions between AS molecules in the process of oligomerization and between AS and certain nerve cell proteins as exemplified by MAP-1A. AS binds Ca 2ϩ ions selectively as compared with the predominant cytosolic divalent cation Mg 2ϩ , which suggests that AS functions can be regulated by Ca 2ϩ ions in a cellular context.
Several synuclein genes are expressed in man; the most predominant of these are AS, -synuclein, and ␥-synuclein (15). The localization of ␣and -synuclein in normal nervous tissue is restricted to the nerve terminals (38,39), in contrast to ␥-synuclein, which is localized in the somatodendritic compartment (40). The nerve terminal localization parallels the Ca 2ϩ binding properties of the proteins because ␣and -synuclein, but not ␥-synuclein, bind Ca 2ϩ . We therefore wish to suggest a functional significance of Ca 2ϩ binding to AS in the nerve terminals where high local Ca 2ϩ concentrations are reached (41) and AS regulates complex nerve terminal processes related to neurotransmitter homeostasis and maintenance of the distal pool of synaptic vesicles (19,20).
The Ca 2ϩ -binding motif is localized to the C-terminal 32 residues of AS and comprises an acidic tandem repeat rich in proline residues. This structure is sufficient and necessary to confer Ca 2ϩ binding activity and requires the presence of both repeats as demonstrated both by the full binding activity of the synthetic peptide AS-(109 -140) and the absence of binding to AS-(1-125). The Ca 2ϩ -binding domain does not resemble any of the hitherto recognized Ca 2ϩ -binding structures such as the EF-hand, the C 2 -domain, or the less defined low-affinity binding sites in the SH-100 class of proteins, with the exception of the clustering of negatively charged residues (35)(36)(37). AS is natively unfolded, and circular dichroism spectroscopy does not reveal any structural changes in the absence or presence of Ca 2ϩ (14). However, it is not always necessary for Ca 2ϩ ions to cause gross structural changes for functional effects to arise, as shown for the C 2 A domain in synaptotagmin I, where Ca 2ϩ works as an electrostatic switch that facilitates binding to syntaxin I and acidic phospholipids (42,43). The IC 50 for the Ca 2ϩ binding to AS is about 300 M, and Ca 2ϩ concentrations close to this magnitude are only encountered in normal nerve cells close to Ca 2ϩ channels at the plasma membrane during propagation of action potentials and at neurotransmitter release (41). However, cofactors may increase the Ca 2ϩ affinity and thus increase the potential significance of the Ca 2ϩ binding in analogy with the approximate 1000-fold increase in the apparent Ca 2ϩ affinity of the synaptotagmin C 2 A domain upon phospholipid binding (43). Candidate cofactors are the kinases casein kinase-1, casein kinase-2, src, and fyn that have been implicated in the phosphorylation of Ser 129 and Tyr 125 (44 -46). Tyr 125 is conserved from fish and birds to man. Such phosphorylation will increase the negative charge of the Ca 2ϩ -binding domain and thereby potentially increase the Ca 2ϩ affinity.
␣-Synuclein and -synuclein are soluble proteins with vesicle-binding properties that are localized to nerve terminals. Their local concentration is very high because they constitute about 0.1% of the total protein in rat brain extracts (47), and this may make them suited to be presynaptic Ca 2ϩ buffers.
The Ca 2ϩ binding to the C-terminal domain in AS stimulates binding of the novel ligand MAP-1A, and this domain probably plays a negative regulatory role because its removal increases MAP-1A binding (data not shown) but abrogates the stimula- tory Ca 2ϩ effect. MAP-1A belongs to the same group of microtubule-associated proteins as the AS ligands tau and microtubule-associated protein-1B (17,18), and several characteristics favor a physiological interaction between MAP-1A and AS. First, their developmental expression profiles are parallel with a low to absent expression in the fetal period, followed by increased expression during postnatal development (48 -51). Second, both proteins are predominantly carried as part of the slow component b of axonal transport, indicating subcellular contacts to the same transporting structures (52,53). Third, a significant part of the transported proteins is incorporated into stationary axonal structures (52,53). The functional significance of such a putative interaction remains unsolved, but AS is known to change the functional properties of its ligands (17,21).
AS-containing filaments accumulate in Lewy bodies during the year-long process of neurodegeneration in PD. In vitro, filament formation is a nucleation-dependent process, as demonstrated, where preformed oligomers/filaments can seed the growth of filaments (54). Accordingly, if oligomer formation represents a rate-limiting step, then even a small increase in their rate of formation, regulated by pathogenic factors, may enhance filament growth significantly (12). Known factors with this property are: (i) AS mutations linked to familial Parkinson's disease (5), and (ii) proteolytic activities directed against the AS C-terminal because C-terminally truncated AS preparations more readily form fibrils (4), contain a higher proportion of oligomers, as revealed by chemical cross-linkers (Fig. 4), and such peptides are recovered from pathological brain tissue and isolated Lewy bodies (3,18). Increased Ca 2ϩ concentrations represent a novel fibrillogenic factor, as demonstrated by the increased oligomeric content upon binding of Ca 2ϩ to the tandem repeat domain in AS. This makes AS resemble synaptotagmin VII whose oligomerization is stimulated by Ca 2ϩ (55). High levels of AS filaments have been reported in preparations of recombinant protein (56). However, this study was performed with prolonged incubation, elevated temperature, and acidic pH as compared with our 2-h incubation at pH 7.4. The low oligomeric content in Ca 2ϩ -stimulated wild type AS is, by contrast, in accordance with gel filtration experiments demonstrating oligomers with low solubility (Ͻ10%) even after 66 days of incubation at pH 7.4 (5). The inhibitory role of the C-terminal part of AS on fibril formation may rely on an electrostatic repulsion from these negatively charged segments. The molecular mechanism exploited by proteolysis and Ca 2ϩ binding would then be similar because both remove negative charges from the C terminus.
Conclusively, our study extends our knowledge of AS functions in relation to both normal and pathological nerve cell paradigms by linking AS functions to the important cellular messenger Ca 2ϩ . This may facilitate future studies on the still poorly understood mechanisms underlying the gain in toxic function by AS in neurodegenerative disorders. | 5,301.8 | 2001-06-22T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Authorship Identification of a Russian-Language Text Using Support Vector Machine and Deep Neural Networks
: The article explores approaches to determining the author of a natural language text and the advantages and disadvantages of these approaches. The importance of the considered problem is due to the active digitalization of society and reassignment of most parts of the life activities online. Text author-ship methods are particularly useful for information security and forensics. For example, such methods can be used to identify authors of suicide notes, and other texts are subjected to forensic examinations. Another area of application is plagiarism detection. Plagiarism detection is a relevant issue both for the field of intellectual property protection in the digital space and for the educational process. The article describes identifying the author of the Russian-language text using support vector machine (SVM) and deep neural network architectures (long short-term memory (LSTM), convolutional neural networks (CNN) with attention, Transformer). The results show that all the considered algorithms are suitable for solving the authorship identification problem, but SVM shows the best accuracy. The average accuracy of SVM reaches 96%. This is due to thoroughly chosen parameters and feature space, which includes statistical and semantic features (including those extracted as a result of an aspect analysis). Deep neural networks are inferior to SVM in accuracy and reach only 93%. The study also includes an evaluation of the impact of attacks on the method on models’ accuracy. Experiments show that the SVM-based methods are unstable to deliberate text anonymization. In comparison, the loss in accuracy of deep neural networks does not exceed 20%. Transformer architecture is the most effective for anonymized texts and allows 81% accuracy to be achieved.
Introduction
It is now known that it is possible to determine the individual characteristics of the author on the basis of the writing style, since each text has a specific linguistic personality [1].
The topic of attribution overlaps with information security [2][3][4][5]. With the constant increase in volume of transmitted and received documents, there are many opportunities for the illegal use of personal data. An example is a type of fraud in which an attacker sends an employee of an organization an email on behalf of a manager asking them to perform a specific action (e.g., to divulge confidential information of the organization or to transfer funds). In addition, quite often there are situations related to hacking the victim's social media accounts and sending messages on the victim's behalf. One solution to this kind of problem is to compare the writing style of the suspicious texts with others for which it is certain that they were written by the person. As a result of the comparison, it is possible to determine the author. Establishing general differences in the documents based on the writing style is most relevant if there are no other data that would allow the author to be identified.
One type of violation in cyberspace is a copyright infringement and related rights of a text, which can be expressed, for example, by claiming a text by another author for material gain or attempting to pass off the authorship of a created text as the authorship of another person. The effectiveness of intellectual property protection in the digital space is determined by an ability to resist such violations and threats of their occurrence. Authorship identification methods allow determining such infringements and establishing the identity of the text creator.
Interest in the topic is also due to a growth in the volume of text data, the evolution of technology, and social networks. Thus, automatic identification of authorship is a growing area of research, which is also important in the fields of forensic science and marketing.
In this article, we solve the problem of identifying the author of a Russian-language text using a support vector machine and deep neural networks. Literary texts written by Russian-speaking writers were used as input data. The article includes an overview of related works, the statement of the text authorship problem, a detailed description of approaches to solving the authorship identification problem, an impact evaluation of attacks on the developed approaches, and a discussion of the results obtained.
Related Works
An excellent overview of articles up to 2010 is presented in [1]. However, since then, methods based on deep neural networks (NN) have become more and more popular, replacing classical methods of machine learning. For example, the topic of author identification is considered annually at the PAN conference [6]. As part of the conference, researchers were offered two datasets of different sizes, containing texts by well-known authors.
The authors of [7] emphasize that they have proposed an approach that takes into account only the topic-independent features of a writing style. Guided by this idea, the authors chose several features such as the frequency of punctuation marks, highlighting the last word in a sentence, consideration of all existing categories of functional words, abbreviations and contractions, verb tenses, and adverbs of time and place. An ensemble of classifiers was used in the work. Each of them accepts or rejects the supposed authorship. Research is distinguished by the application of an approach that, in general, is aimed at recognizing a person based on his behavior. Here, Equal Error Rate (EER) has been applied as the thresholding mechanism. Essentially, the EER corresponds to the point on the curve where the false acceptance rate is equal to the false rejection rate. The results are 80% and 78% accuracy for the large and small datasets, respectively. The results of the approach allowed the authors to take third place among all the submitted works.
In [8], stylometric features were extracted for each pair of documents. The absolute difference between the feature vectors was used as input data for the classifier. Logistic regression was used for a small dataset, and a NN was used for a large one. These models achieved 86% and 90% accuracy for small and large datasets, respectively. As a result, the authors of the study took second place.
The work that achieved the best result in the competition [9] presents the combination of NN with statistical modeling. Research is aimed at studying pseudo metrics that represent a variable-length text in the form of a fixed-size feature vector. To estimate the Bayesian factor in the studied metric space, a probability layer was added. The ADHOMINEM system [10] was designed to transmit the association of selected tokens into a two-level bi-directional long short-term memory (LSTM) network with an attention mechanism. Using additional attention levels made it possible to visualize words and sentences that were marked by the system as "very significant". It was also found that using the sliding window method instead of dividing a text into sentences significantly improves results. The proposed method showed excellent overall performance, surpassing all other systems in the PAN 2020 competition on both datasets. The accuracy was 94% for the large dataset and 90% for the small one.
The authors of [11] took into account the syntactic structure of a sentence when determining the author of a text, highlighting two components of the self-supervised network: lexical and syntactic sub-network, which took a sequence of words and their corresponding structural labels as input data. The lexical sub-network was used to code a sequence of words in a sentence, while the syntactic subnetwork was used to code selected labels, e.g., parts of speech. The proposed model was trained on the publicly available LAMBADA dataset, which contains 2662 texts of 16 different genres in English. The consideration of the syntactic structure made it possible to eliminate the need for semantic analysis. The resulting accuracy was 92.4%.
The work in [12] provides an overview of the methods for establishing authorship with the possibility of their subsequent application in the field of forensic research on social networks. According to the authors, in forensic sciences, there is a significant need for new attribution algorithms that can take context into account when processing multimodal data. Such algorithms should overcome the problem of a lack of information about all candidate authors during training. Functional words have been chosen as a feature, as they are quite likely to appear even in small samples and can therefore be particularly effective for analyzing social networks. Combinations of different sets of n-grams at symbol and word level with n-grams at the part-of-speech level were investigated. An accuracy of 70% was obtained for 50 authors.
The main idea of the study [13] is to modify the approach to establishing authorship by combining it with pre-trained language models. The corpus of texts consisted of essays by 21 undergraduate students written in five formats (essay, email, blog post, interview, and correspondence). The method is based on a recurrent neural network (RNN) operating at the symbol level and a multiheaded classifier. In cross-thematic authorship determination, the results were 67-91%, depending on the subject, and in cross-genre, 77-89%, depending on the genre.
The essence of [14] is to research document vectors based on n-grams. Experiments were conducted on a cross-thematic corpus containing some articles from 1999 to 2009 published in the English newspaper The Guardian. Articles by 13 authors were collected and grouped into five topics. To avoid overlapping, those articles for which content included more than one category were discarded. The results show that the method is superior to linear models based on n-gram symbols. To train the Doc2vec model, the authors used a third-party library called GENSIM 3. The best results were achieved on texts of large sizes. Accuracy for different categories ranged from 90.48 to 96.77%.
In [15], an ensemble approach that combines predictions made by three independent classifiers is presented. The method based on variable-length n-gram models and polynomial logistic regression and used to select the highest likelihood prediction among the three models. Two evaluation experiments were conducted: using the PAN-CLEF 2018 test dataset (93% accuracy) and a new corpus of lyrics in English and Portuguese (52% accuracy). The results demonstrate that the proposed approach is effective for fiction texts but not for lyrics.
The research conducted in [16] used the support vector machine (SVM). Parameters for defining the writing style were highlighted at different levels of the text. The authors demonstrated that more complex parameters are capable of extracting the stylometric elements presented in the texts. However, they are most efficiently used in combination with simpler and more understandable n-grams. In this case, they improve the result. The dataset included 20 samples in four different languages (English, French, Italian, and Spanish). Thus, five samples from 500 to 1000 words in each language were used. The challenge was to assign each document in the set of unknown documents to a candidate author from the problem set. The results were 77.7% for Italian, 73% for Spanish, 68.4% for French, and 55.6% for English.
Authorship identification methods are used not only for literary texts but also to determine plagiarism in scientific works. For example, [17] presents a system for resolving the ambiguity of authorship of articles in English using Russian-language data sources. Such a solution can improve the search results for articles by a specific author and the calculation of the citation index. The link.springer.com database was used as the initial repository of publications, and the eLIBRARY.ru scientific electronic library was used to obtain reliable information about authors and their articles. To assess the quality of the comparison, experiments were carried out on the data of employees of the A.P. Yershov Institute of Informatic Systems. The sample included 25 employees, whose publications are contained in the link.springer.com system. To calculate the similarity rate of natural language texts, they were presented as vectors in multidimensional space. To construct a vector representation of texts, a bag-of-words algorithm was used with the term frequencyinverse document frequency (TF-IDF) measure. Stop-words were preliminarily removed from the texts, and stemming of words was carried out. Experiments were also provided on the vectorization of natural language texts using the word2vec. The average percentage of the number of publications of authors recognized by the system was 79%, while the number of publications that did not belong to the author but were assigned to his group was close to zero. The approaches used in the system are applicable for disambiguating authorship of publications from various bibliographic databases. The implemented system showed a result of 92%.
There were only a few works that achieved a high level of author identification in Arabic texts. In [18], the Technique for Order Preferences by Similarity to Ideal Solution (TOPSIS) was used to select the basic classifier of the ensemble. More than 300 stylometric parameters were extracted as attribution features. The AdaBoost and Bagging methods were applied to the dataset in Arabic. Texts were taken from six sources. Corpora included both short and long texts by three hundred authors writing in various genres and styles. The final accuracy was 83%.
A new area of research is attribution, which uses not only human-written texts but also texts obtained using generation [19]. Several recently proposed language models have demonstrated an amazing ability to generate texts that are difficult to distinguish from those written by humans. In [20], a study of the problem of authorship attribution is proposed in two versions: determining the authorship of two alternative human-machine texts and determining the method that generated the text. One human-written text and eight machine-generated texts (CTRL, GPT, GPT2, GROVER, XLM, XL-NET, PPLM, FAIR) were used. Most generators still produce texts that significantly differ from texts written by humans, which makes it easier to solve the problem. However, the texts generated by GPT2, GROVER, and FAIR are of significantly better quality than the rest, which often confuses classifiers. For these tasks, convolutional neural networks (CNN) were used, since the CNN architecture is better suited to reflect the characteristics of each author. In addition, the authors improved the implementation of the CNN using n-gram words and part-of-speech (PoS) tags. The result in the "human-machine" category ranges from 81% to 97%, depending on the generator, and, for determining the generation method, 98%.
The author of [21] presented the software product StylometRy, which allows the identification of the author of a disputed text. Texts were presented in the form of a bagof-words model. Naive Bayesian classifier, k-nearest method, and logistic regression were chosen as classifiers, and pronouns were used as linguistic features. The models were checked in L. Tolstoy, M. Gorky, and A. Chekhov texts. The minimum text volume for analysis was 5500 words. The accuracy of the model for texts over 150,000 characters was in the range of 60-100% (average 87%).
The scientific work [22] describes the features of four styles of the Russian language -scientific, official, literary, and journalistic. The parameters selected for texts analysis were: the ratio of the number of verbs, nouns, adjectives, pronouns, particles, and interjections to the number of words in the text, the number of "noun + noun" constructions, the number of "verb + noun" constructions, the average word length, and the average sentence length. Decision trees were used for classification. The accuracy of the analysis of 65 texts of each style was 88%. The highest accuracy was achieved when classifying official and literary texts, and the lowest was achieved for journalistic texts.
The authors of [23] present the analysis and application of various NNs architectures (RNN, LSTM, CNN, bi-directional LSTM). The study was conducted based on three datasets in Russian (Habrahabr blog-30 authors, average text length 2000 words; vk.com-50 and 100 authors, average text length 100 words; Echo.msk.ru-50 and 100 authors, average text length 2000 words). The best results were achieved by CNN (87% for Habrahabr blog, 59% and 53% for 50 and 100 authors with vk.com, respectively). Character's trigrams performed significantly better for short texts from social networks, while for longer texts, both trigram and tetragram representations achieved almost the same accuracy (84% for trigrams, 87% for tetragram representations).
The object of research study [24] is journalistic articles from Russian pre-revolutionary magazines. The information system Statistical Methods of Literary Texts Analysis (SMALT) has been developed to calculate various linguistic and statistical features (distribution of parts of speech, average word and sentence length, vocabulary diversity index). Decision trees were used to determine the authorship. The resulting accuracy was 56%.
The problem of authorship attribution of short texts obtained from Twitter was considered in scientific work [25]. Authors proposed a method of learning text representations using a joint implementation of words and character n-grams as input to the NNs. Authors used an additional feature set with 10 elements: text length, number of usernames, topics, emoticons, URLs, numeric expressions, time expressions, date expressions, polarity level, and subjectivity level. Two series of comparative experiments were provided to test using CNN and LSTM. The method achieved an accuracy of 83.6% on the corpus containing 50 authors.
The authors of [26] applied integrated syntactic graphs (ISGs) to the task of automatic authorship attribution. ISGs allow for combining different levels of language description into a single structure. Textual patterns were extracted based on features obtained from the shortest path walks over integrated syntactic graphs. The analysis was provided on lexical, morphological, syntactic, and semantic levels. Stanford dependency parser and WordNet taxonomy were applied in order to obtain the parse trees of the sentences. The feature vectors extracted from the ISGs can be used for building syntactic n-grams by introducing them into machine learning methods or as representative vectors of a document collection. Authors showed that these patterns, used as features, allow determining the author of a text with a precision of 68% for the C10 corpus and also performed experiments for the PAN'13 corpus, obtaining a precision of 83.3%.
An approach based on joint implementation of words, n-grams, and the latent Dirichlet allocation (LDA) was presented in [27]. The LDA-based approach allows the processing of sparse data and volumetric texts, giving a more accurate representation. The described approach is an unsupervised computational methodology that is able to take into account the heterogeneity of the dataset, a variety of text styles, and also the specificity of the Urdu language. The considered approach was tested on 6000 texts written by 15 authors in Urdu. The improved sqrt-cosine similarity was used as a classifier. As a result, an accuracy of 92.89% was achieved.
The idea of encoding the syntax parse tree of a sentence into a learnable distributed representation is proposed in [28]. An embedding vector is created for each word in the sentence, encoding the corresponding path in the syntax tree for the word. The one-to-one correspondence between syntax-embedding vectors and words (hence their embedding vectors) in a sentence makes it easy to integrate obtained representation into the word-level Natural Language Processing (NLP) model. The demonstrated approach has been tested using CNN. The model consists of five types of layers: syntax-level feature embedding, content-level feature embedding, convolution, max pooling, and softmax. The accuracy obtained on the datasets was 88.2%, 81%, 96.16%, 64.1%, and 56.73% on five benchmarking datasets (CCAT10, CCAT50, IMDB62, Blogs10, and Blogs50, respectively).
The authors of [29] combined widely known features of texts (verbs tenses frequency, verbs frequency in a sentence, verbs usage frequency, commas frequency in a sentence, sentence length frequency, words usage frequency, words length frequency, characters ngram frequency) and genetic algorithm to find the optimal weight distribution. The genetic algorithm is configured with a mutation probability of 0.2 using a Gaussian convolution on the values with a standard deviation of 0.3 and evolved over 1000 generations. The method was tested on the Gutenberg Dataset, consisting of 3036 texts written by 142 authors. The method is implemented using Stanford CoreNLP, stemming, PoS tagging, and genetic algorithm. The obtained accuracy was 86.8%.
There is no generally accepted opinion regarding the set of text features that provides the best result. In most works, text features such as bigrams and trigrams of symbols and words, functional words, the most frequent words in the language, the distribution of words in parts of speech, punctuation marks, and the distribution of word length and sentence length have proven to be effective. It is incorrect to judge the accuracy of the methods applied to the Russian language based on the results of research in the English language or any other languages because of the specific structure of each language. The choice of approach depends on the text language, the authorship identification method, and the accuracy of the available analysis methods. Particularly, the peculiarity of the Russian language in comparison with English, for which most of the results are presented, is its flexibility and, consequently, more complex word formation and a high degree of morphological and syntactic homonymy, which makes it difficult to use some features useful for the English language. The problems of genre, sample representativeness, and dataset size also limit the implementation of some approaches.
Investigations aimed at finding a method with high separating ability with a large number of possible authors are not always useful when solving real-life tasks. It is necessary to continue further research aimed at finding new methods or improving/combining existing methods of identifying the author, as well as conducting experiments aimed at finding features that allow accurately dividing the styles of authors of Russian-language texts. By using these features, it will be possible to work with small samples.
Problem Statement
We define the identification of the text author as the process of determining the author based on a set of general and specific features of the text that formed the author's style.
The . It is necessary to determine which author from set A is the true author of the remaining texts (anonymous or disputed) T = {t m+1 , . . . , t k } ⊆ T.
In this statement, the author's identification problem can be considered as a multilabel classification task. In this case, set A is the set of predefined classes and their labels, set D is the set of training samples, and objects to be classified are included in the set T . The goal is to develop a classifier that solves the problem-finding the objective function F : T × A → [−1, 1] , which assigns some text from the set T to its true author. The function value is described as the degree to which the object belongs to the class, where 1 corresponds to the completely positive solution, while −1, on the contrary, is a negative one.
Methods for Determining the Author of a Natural Language Text
Early research [1] was aimed at evaluating the accuracy and the speed of classifiers based on machine learning algorithms. Then, the best results in all parameters were demonstrated by the SVM classifier. However, over the past 10 years, many solutions based on deep NNs appeared in the field of NLP: RNN and CNN for multi-label text categorization, category text generation, and learning word dependencies, and hybrid networks for aspect-based sentiment analysis. These solutions significantly exceed the effectiveness of traditional algorithms. As of 2020, LSTM, CNN with self-attention, and Transformer [30,31] are the models that successfully solve related text analysis problems. Thus, the purpose of the study was to compare SVM with modern classification methods based on deep NN. The enumerated models, their mathematical apparatuses, as well as the techniques of their application to the task of authorship attribution are described below.
Support Vector Machine
The SVM classifier is similar to the classical perceptron. Application of its kernel transformations allows training radial basis function network and perceptron with a sigmoidal activation function, the weights of which are determined by solving a quadratic programming problem with linear constraints, while training a standard NN implies solving the problem of non-convex minimization without restrictions. In addition, SVM allows working directly with a high-dimensional vector space without preliminary analysis and also without manually selecting the number of neurons in the hidden layer.
The main difference between SVM and deep-learning models is that SVM is unable to find unobvious informative features in text that have not been pre-processed. Therefore, it is necessary to first extract such features from the text.
Let us denote the set of letters of the alphabet, numbers, and separators A = a 1 , a 2 , . . . , a |A| , the set of possible morphemes M = m 1 , m 2 , . . . , m |M| , the language dictionary W = w 1 , w 2 , . . . , w |W| , the set of phrases C = c 1 , c 2 , . . . , c |C| , the set of sentences S = s 1 , s 2 , . . . , s |S| , and the set of paragraphs P = p 1 , p 2 , . . . , p |P| . Then, the text T can be represented as sequences of elements as follows: where a i j ∈ A, m i j ∈ M, w i j ∈ W, c i j ∈ C, s i j ∈ S, p i j ∈ P; N a , N m , N p , N w , N c , N s -the number of characters, morphemes, words, phrases, sentences, paragraphs in the text.
In the study, when classifying with SVM, informative features are used as an unordered collection as inputs of the SVM. The frequencies of single text's elements are used as follows: In addition, the texts elements sequences of some length (n-grams) or a limited number of them from the dictionary are used as follows: where L-total number of counted n-grams; k-threshold value; f()-relative frequency of the element in the text; a-the symbol; P()-the probability of the element appearing in the text; n-the length of the n-gram.
It should be noted that for texts of small volumes, it is supposed to use frequencies smoothed by the methods of Laplace (5), Good-Turing (6), and Katz (7), which makes it possible to estimate the probabilities of non-occurring events: where P ADD -estimates of Laplace; W-the language dictionary; C()-the number of occurrences of the element in the text.
where P GT -estimates of Laplace; N-the total number of the considered elements of the text; N C -the number of text elements encountered exactly C times; C*-discounted Good Turing estimate.
where t k -the fact of the existence of the j-th word of the i-th text in the dictionary W; P KATZ -estimates of Katz; α()-weight coefficient.
In the process of authorship attribution of natural language text using classical machine learning methods, not only standard feature sets can be used; features obtained as a result of solving related tasks such as determining the author's gender and age, the level of the author's education, the sentiment of the text, etc. can also be used. However, as a part of this study, aspect-oriented analysis was also used for informative features extraction. Such a type of analysis involves understanding the meaning of a text by identifying aspect terms or categories. Thus, it becomes possible to extract keywords and opinions related to aspects.
There are two well-known approaches to implementing aspect analysis: statistical and linguistic. The statistical approach is performed as an extraction of aspects, determination of the threshold value for them, and selection such aspects, the values of which are indicated above the given threshold. The linguistic approach takes into account the syntactic structure of the sentence and searches for aspects by patterns.
We decided to use a combination of these methods. Aspects chosen were nouns and noun phrases (statistical approach), and the syntactic structure of the sentence was determined based on the dependencies between words (linguistic approach).
Multi-layered NN, consisting of fully connected layers, was implemented to extract aspects. The following training parameters were used:
The principle of operation of SVM is to construct a hyperplane in the space of highdimensional features in such a way that the gap between the support vectors (the extreme points of the two classes) is maximized. The mapping of the original data onto space with the linear separating surface is performed using a kernel transformation: where (Φ(x), Φ(x )) is the inner product between the sample being recognized and the training samples, and k is some mapping of the original space onto the space with the inner product (the space of dimension sufficient for linear separability). Then the function performing the classification looks like this: where α is the optimal coefficient, k is the kernel function, y is the label of class, b is the parameter that ensures the fulfillment of the second Karush-Kuhn-Tucker condition for all input samples corresponding to Lagrange multipliers that are not on the boundaries.
The optimal coefficient α is determined by maximizing the objective function: where the maximization condition: in the positive quadrant 0 ≤ α i ≤ C, i = 1, l.
The regularization parameter C determines the ratio between the number of errors in the training set and the size of the gap.
Deep Neural Networks
A distinctive feature of deep NNs is their ability to analyze a text sequence and extract informative features by itself. In some studies, texts should be accepted by the model unchanged [1]. However, in solving the problem of determining the author of a natural language text, preliminary preparation is an important stage.
The purpose of preprocessing is to cleaning the dataset from noise and redundant information. Within the framework of the study, the following actions were taken to clean up the texts: • Converting text to lowercase; • Removing stop-words; • Removing special characters; • Removing digits; • White space formatting.
The data obtained from the results of preprocessing must be converted into a vectorunderstandable NN. For this purpose, it was decided to use word embeddings-a text representation, where words having a similar meaning are defined by vectors close to each other in hyperspace. The received word representations are fed to the inputs of the deep NN.
Long Short-Term Memory
LSTM is a successful modification of the classical RNN, which avoids the problem of vanishing or exploding gradients. This is due to the fact that the semantic weights of the LSTM model are the same for all time steps during error backpropagation. Therefore, the signal becomes too weak (exponentially decreases) or too strong (exponentially increases). This is the problem that LSTM solves.
The LSTM model contains the following elements: • Forget Gate "f"-an NN with sigmoid; • Candidate layer "C'"-an NN with Tanh; • Input Gate "I"-an NN with sigmoid; • Output Gate "O"-an NN with sigmoid; • Hidden state "H"-an vector; • Memory state "C"-an vector.
Then the time step t is considered. The input to the LSTM cell is the current input vector X t , the previous hidden state H t−1 , and the previous memory state C t−1 . The cell outputs are the current hidden state H t and the current memory state C t. The following formulas are used to calculate outputs: where X t -the input vector; H t−1 -the hidden state of the previous cell; C t−1 -the memory state of the previous cell; H t -the hidden state of the current cell; C t -the memory state of the current cell at time t; W, U are the weight vectors for the forget gate f(), the gate of candidates, i.e., an input and output gates; σ-sigmoidal function; tanh-tangential function.
The most important role is the state of memory C t . It is the state in which the input context is stored. It changes dynamically depending on the need to add or remove information. If the value of the forget gate is 0, then the previous state is completely forgotten; if equal to 1, then it is completely transferred to the cell. With the current state of C t memory, a new one can be calculated: Then it is necessary to calculate the output from the hidden state H at time t. It will be based on memory state: Received C t and H t are transferred to the next time step, and the process is repeated.
CNN with Attention
CNN consists of many convolutional layers and subsampling layers. Each convolutional layer uses filters with input and output dimensions D in and D out . The layer is parameterized by the four-dimensional nuclear tensor W of the measurement and the displacement vector D out -b out . Therefore, the output value for some word q: where ∆-kernel change.
The main difference between the attention mechanism and CNN is that the new meaning of a word is determined by every second word of the sentence, since the receptive field of attention includes the full context and not just a grid of nearby words.
The attention mechanism takes as input a token feature matrix, query vectors, and several key-value pairs. Each of the vectors is transformed by a trainable linear transform, and then the inner product query vectors are calculated with each key in turn. The result is run through Softmax, and with the weights obtained from Softmax, all vectors values are summed into a single vector. As a result of applying the attention mechanism, a matrix is obtained where the vectors contain information about the value of the corresponding tokens in the context of other tokens.
Transformer
The mechanism of attention in its pure form can lose information and complicate the convergence, and therefore a solution is required to this problem. Therefore, it was decided to also try its more complex modification-a transformer.
The transformer consists of an encoder and a multi-head attention mechanism. Some of the transformer layers are fully connected, and part of a shortcut is connected. A mandatory component of the architecture is multi-head attention, which allows each input vector to interact with other tokens using the attention mechanism. The study uses a common combination of multi-head attention, a residual layer, and a fully connected layer. The depth of the model is created by repeating this combination 6 times.
A distinctive feature of multi-head attention is that there are several attention mechanisms and they are trained in parallel. The final result is concatenated, passed through the training linear transformation once again, and goes to the output. Formally, it can be described as follows. The attention layer is determined by the size of the key/query D k , the number of heads N h , the size of the head D h , and the output D out . The layer is parametrized with the key matrix, the query matrix W x qry , and the value matrix W x val for each head, together with the protector matrix W out used to assemble all the heads together. Attention for each head is calculated as: The actual head value is calculated as: And the output value is calculated as follows: where X-output values, W key -the matrix of keys, T-the transposition operation, A q -the attention value for a particular head, k-the key position, q-the query position, N h -the number of heads, b out -the bias coefficient of the measurement D out .
Even a carefully selected feature space does not guarantee high model efficiency, but equally important are the training parameters of the SVM model. In an early study [1], the following parameters were identified as the most appropriate:
As stated earlier, deep NNs do not need a predetermined set of informative text features, as they are able to search for them on their own. However, these models are also extremely sensitive to learning parameters. These parameters have been selected based on the results of model experiments for related tasks [32,33]: • Optimization algorithm-adaptive moment estimation (Adam); • Regularization procedure-dropout (0.2); • Loss function-cross-entropy; • Hidden layer activation function-rectified linear unit (ReLU); • Output layer activation function-logistic function for multi-dimensional case (Softmax).
A large number of data are required to train models. For this purpose, the corpus was collected from the Moshkov library [34]. The corpus includes 2086 texts written by 500 Russian authors. The minimum size of each text was 100,000 symbols.
As part of experiments with models, the number of training examples varied with needs in solving real-life authorship identification tasks (including when the training data are limited). Therefore, the texts were divided into fragments ranging from 1000 to 100,000 characters (~200-20,000 words). We used three training examples for each author and one for testing. Table 1 shows the accuracy of the SVM model for datasets of 2, 5, 10, and 50 candidate authors. Table 2 shows the results of applying SVM trained on statistical features and extracted aspects. Cross-validation for 10-folds was used as a procedure for evaluating the effectiveness of the models. It should be noted that the results presented in Tables 1 and 2 were obtained by joint application of SVM and the Laplace smoothing method, which gives a slight increase in accuracy (from 0.01 to 0.07) on small sample sizes. Experiments have also shown that the Good-Turing and Katz smoothing methods negatively affect the quality of identification, with an average accuracy 0.04-0.11 lower when using them. Table 3 shows the accuracy of determining the author using the LSTM for datasets of similar size and obtained by 10-fold cross-validation, while Table 4 shows the CNN with Attention and Table 5, the Transformer. Obtained results allow one to form a conclusion about the special effectiveness of SVM trained on accurately selected parameters and features. The approach based on SVM demonstrates superior accuracy to modern deep NNs architectures, regardless of the number of the samples and their volume. It should also be noted that the SVM classifier is able to learn on large volumes of data 10 times faster than deep NNs architectures. The average training time for SVM was 0.25 machine-hours, while deep models were trained for an average of 50 machine-hours.
Attacks on the Method
SVM classifier showed excellent results in determining the author of a naturallanguage text. However, keep in mind that the above experiments were not complicated by deliberate modifications aimed at text anonymization. Anonymization may have a negative impact on the accuracy of authorship identification. This hypothesis was confirmed by an early study [35]. A text anonymization technique was proposed based on a fast correlation filter, dictionary synonymizing, and a universal transformer model with a self-attention mechanism. The results of the study showed that decision-making accuracy can be reduced by almost 50% due to the proposed method of anonymization, keeping the text in readable and understandable form for humans.
As part of the work, it was decided to evaluate the described anonymization technique on the developed approaches. The results are presented in Table 6. The results of the experiments confirm that deep models are much more resistant to the anonymization technique than the SVM classifier. This is due to their ability to extract unobvious features that are not controlled by the author on a conscious level, while SVM operates on the basis of pre-defined features manually found by experts, and therefore text may be exposed to deliberate confusion by anonymization techniques. It should be noted that in such cases, SVM with aspect analysis shows a bit higher accuracy than SVM without it.
Discussion and Conclusions
During the course of the research, the authors analyzed modern approaches to determining the author of a natural-language text, implemented approaches of authorship attribution based on SVM and deep NNs architectures, evaluated the developed approaches on different numbers of authors and volumes of texts, and evaluated the resistance of the approaches to anonymization techniques. The results obtained allow us to draw several conclusions.
Firstly, despite the great popularity of deep NNs architectures, they are inferior to the traditional SVM machine learning algorithms in accuracy by more than 10% on average. This is due to the fact that NNs require more data for learning than SVM to extract informative features from the text. However, when solving real-life authorship identification tasks, the number of data could be not enough for accurate decision-making by the NN.
Secondly, the SVM classification is based on an accurately found set of features manually formed by experts. Such informative features are also obvious for anonymization techniques and therefore can be removed or significantly corrupted. Thus, to solve the problem of identification of the author of a natural language text, both the SVM-based approach and deep models proposed by authors are equally suitable. However, when choosing an approach, the researched data and available technical resources should be objectively evaluated. In the case of a lack of resources, an SVM approach should be used. If there are traces of use anonymization in the text, despite the longer processing time, deep NNs architectures are recommended because they can find both the obvious and unobvious dependences in the text.
Thirdly, when using SVM, we recommended using five of the most informative features of the author's style that may improve the authorship identification process: unigrams and trigrams of Russian letters, high-frequency words, punctuation marks, and distribution of words among parts of speech.
Finally, based on the results obtained, as well as on the experience of earlier research, the authors identified the important criteria to obtain accurate results when identifying the author of a natural language text:
3.
Stability: the writing style may be influenced by the imitation of another author's writing style or by deliberately distorting the author's own style for other reasons. The chosen approach should be resistant to such actions.
4.
Adaptability to the style of text: the author adapts to the specificities of the selected style of text to follow it. Adaptability to the style of text leads to significant changes in the characteristics of the author's writing style. In addition, when writing official documents, many people use ready templates and just fill in their own data. As a result, it is quite problematic to identify the similarity of official documents and, for example, messages in social networks written by one person. 5.
Distinguishing ability: the selected informative features of the text should be significantly different for various authors, greater than the possible difference between the texts written by the same author. Selecting a single parameter that clearly separates two authors is problematic. Therefore, it should be a complex set of features from different levels of text that are not controlled by the author at the conscious level. In this case, the probability of wrong identification for different authors is reduced. Informed Consent Statement: Not applicable. | 9,856 | 2020-12-25T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Multi-Scale Simulations Provide Supporting Evidence for the Hypothesis of Intramolecular Protein Translocation in GroEL/GroES Complexes
The biological function of chaperone complexes is to assist the folding of non-native proteins. The widely studied GroEL chaperonin is a double-barreled complex that can trap non-native proteins in one of its two barrels. The ATP-driven binding of a GroES cap then results in a major structural change of the chamber where the substrate is trapped and initiates a refolding attempt. The two barrels operate anti-synchronously. The central region between the two barrels contains a high concentration of disordered protein chains, the role of which was thus far unclear. In this work we report a combination of atomistic and coarse-grained simulations that probe the structure and dynamics of the equatorial region of the GroEL/GroES chaperonin complex. Surprisingly, our simulations show that the equatorial region provides a translocation channel that will block the passage of folded proteins but allows the passage of secondary units with the diameter of an alpha-helix. We compute the free-energy barrier that has to be overcome during translocation and find that it can easily be crossed under the influence of thermal fluctuations. Hence, strongly non-native proteins can be squeezed like toothpaste from one barrel to the next where they will refold. Proteins that are already fairly close to the native state will not translocate but can refold in the chamber where they were trapped. Several experimental results are compatible with this scenario, and in the case of the experiments of Martin and Hartl, intra chaperonin translocation could explain why under physiological crowding conditions the chaperonin does not release the substrate protein.
Introduction
Proteins that have not yet folded to their native state may interfere with the machinery of the cell. For this reason, prokaryotic and eukaryotic cells have evolved special macromolecular ''chaperone'' complexes that capture and refold partially folded proteins, thereby preventing them from indulging in cellular mischief [1,2,3,]. An important class of chaperone complexes are the cage chaperones or chaperonins. These complexes can efficiently trap partially folded proteins in a cavity that is barely larger than the target protein, and assist in the folding of an entire class of proteins with different amino acid sequences. Hence, the chaperonin is able to distinguish partly folded states from the native state, independently of the specific amino-acid sequence. It is important to stress that in the presence of molecular crowding (similar to the one present in a cell) the chaperonin complex has been demonstrated to not release the substrate protein before it reaches the native state [4]. Below, we report a detailed numerical study of protein dynamics inside the so-called GroEL-GroES chaperone complex. The GroEL complex consists of two barrel-shaped protein complexes joined at the bottom (see Figure 1). Non-native proteins can be captured in an open GroEL ''barrel''. The GroES ''lid'' can then cap a protein-containing barrel, thereby initiating the refolding process. After about 15 seconds and several refolding cycles, the GroES cap is released and the other barrel is capped (if it contains a protein). A single ''cycle'' of the GroEL-GroES chaperone hydrolyses seven ATPs [5]. This energy is presumably used to compress the protein in a smaller, more hydrophilic GroEL cavity, thus increasing the thermodynamic driving force to expel this protein. Recently we reported simulations of the kinetics of chaperone-induced protein refolding, using a lattice model for the GroEL-GroES complex [6]. This study suggested that proteins may refold either inside the cavity in which it has been captured or, surprisingly, by translocating from one barrel of the GroEL dimer to the other (see Figure 2). This second route is unexpected because it is generally believed that proteins cannot cross the equatorial plane that separates the joined GroEL barrels [7,8,9].
In the present paper we use atomistic and mesoscopic simulations to test whether such a translocation scenario is compatible with the available structural information on the GroEL complex. Our simulation studies focus on the equatorial regime of the GroEL complex that might be expected to act as a barrier against translocation. Crystallographic studies indicate that most protein units in the chaperonin complex have a fairly rigid structure both in the open and closed configurations [5]. However, low-resolution small-angle neutron scattering experiments [7] and cryo-electron microscopy [8,9] indicate the presence of disordered residues in a central cavity of the equatorial region. These chains do not show up in the X-ray crystallographic structure of the GroEL complex. Figure 1. Space-filling representation of the X-Ray structure of the GroEL/GroES/ADP complex [5]. Colours represent the type of surface: all hydrophobic amino acids (A, V, L, I, M, F, P, Y) are in yellow, while the polar ones (S, T, H, C, N, Q, K, R, D, E) are red. The sphere in the equatorial region has a radius of 40 Å and models the cavity between the cis and the trans chamber. In the inset we show the actual simulation setup, consisting of: the confining sphere, the chains (orange) anchored the GroEL amino acids in the equatorial region (blue), and a test alpha helix (red) allowed only to rotate around the center of mass, and to translate. doi:10.1371/journal.pcbi.1000006.g001 Figure 2. The present simulations suggest that non-native proteins may reach their native state either by the standard ''intra-chamber'' folding or by translocation through the equatorial region. The two pathways are shown in the schematic drawing above. In the initial configuration (1), the chaperonin barrel is open and exposes a hydrophobic rim for binding partially folded proteins. After a non-native protein is captured, the GroEL-GroES complex closes (i.e., the barrel gets capped) (2). After that, the protein can either refold in the original barrel (3A) or, if their structure is far from native [6], translocate to the other side (3B). The early stages of translocation cost free energy, as the protein must locally unfold to initiate the translocation. This implies that the translocation route will be preferentially followed by relatively unstable non-native conformations. The gain in free energy as a result of folding facilitates the subsequent translocation process, when the protein enters the other barrel of the chaperonin complex (4). If, after translocation, the protein is still in a non-native state, it will remain trapped, as the surface of the open barrel can bind to the hydrophobic surface of non-native proteins. In this way the folding cycle can start again, with the capping of the second cavity and the opening of the first. The process (shuttling) continues until folding is completed. doi:10.1371/journal.pcbi.1000006.g002
Author Summary
Chaperonin complexes capture proteins that have not yet reached their functional (''native'') state. Non-native proteins cannot perform their function correctly and threaten the survival of the cell. The chaperonins help these proteins to reach their native state. The prokaryotic GroEL-GroES chaperonin is an ellipsoidal protein complex that is approximately 16 nm long. It consists of two chambers that are joined at the bottom. Interestingly, protein repair by this chaperonin is not a one-step process. Typically, several capture and release steps are needed before the target protein reaches its native state. It is commonly assumed that substrate proteins cannot translocate, i.e., move inside the complex from one chamber to the other. In the absence of translocation, proteins that have not yet reached their functional conformation have to be released into the cytosol before being recaptured by a chaperonin. We present multi-scale simulations that show that it is, in fact, surprisingly easy for substrate proteins to translocate between the two chambers via an axial pore that is filled with disordered protein filaments. This finding suggests that non-native proteins can be squeezed like toothpaste from one chamber to the other: the incorrect structure of the protein is broken up during translocation and the protein has an increased probability to find its native state when it reaches the other chamber. The possibility for intra-chaperonin translocation obviates the need for a potentially dangerous release of non-native proteins.
The presence of disordered protein chains in the pore that joins the two GroEL chambers will certainly affect the permeability of the equatorial plane, but they need not block translocation. There are, in fact, examples [10] where disordered protein chains near a pore act to enhance the selectivity of the translocation process. Interestingly, the chemical composition of the disordered chains in the GroEL complex is similar to that of chains in known translocation channels in the nuclear pore complex.
Results
We have performed fully atomistic and coarse-grained simulations that do reproduce the structural data of [7], and allowed us to bridge the computational cost of computing the translocation free energy barrier of a short alpha helix. For the fully atomistic simulations in explicit water we used the GROMACS Molecular Dynamics (MD) simulation package [11]. MD simulations of 10 ns were performed on the structure of the central region at which time the system had equilibrated ( Figure S1). In order to compute the scattering profile we used the program CRYSON from Svergun et al. [12]. Figure 3 shows that the neutron-scattering form factor computed on the basis of equilibrated structure of the trans ring agrees well with the experimental data of Krueger et al. [13]. Interestingly, the simulations show that chains on the cis ring do not obstruct the passage between the two GroEL chambers (see Figure 4 and Figure S2). The chains in the trans ring fluctuate in a region between 5 and 15 Å from the center, in agreement with hollow-cylinder model proposed by Krueger et al. on the basis of their experimental data [13].
To compute the free-energy barrier for protein translocation, the MD approach described above would have been prohibitively expensive. We therefore performed Monte Carlo simulations on a suitably coarse-grained model for the GroEL complex. We focused on the structural fluctuations within a spherical region (diameter 40 Å ) around the trans side of the equatorial cavity ( Figure 1), because the cis chains did not appear to represent an obstacle to translocation. The disordered chains in the cavity (22 monomeric units long) were rigidly anchored on a circular rim around the trans hole of ,30 Å radius (Figure 1 inset). To this end, we represented all peptide backbones using a model that keeps track of the positions of 5 distinct types of backbone atoms (H, N, C a , C, and O). Side chains are represented as hard spheres with a radius of 2.5 Å , centred on the C a atoms. Neighbouring spheres along the chain are allowed to overlap (see Figure S3). We used this coarsegrained model to estimate the free-energy cost associated with the insertion of a short and rigid helix, 21 monomeric units long, in the region of disordered protein chains. We sampled the free energy as a function of a reaction coordinate Q s that measures the progress of the translocation process. Q s is defined as the total number of C a atoms that have passed the entrance of the trans ring. We define the entrance as a plane through the average position of the hydrogen atoms in the anchoring amino acid of the chains. In order to translocate, a protein must first ''find'' the translocation hole. From our study of a lattice model GroEL [6], we know that this first step is relatively easy. The key question is therefore whether or not the free-energy cost for the subsequent translocation is prohibitive. The present calculations address this issue by computing the free energy difference involved in moving an a-helix from the entrance of the pore region to the inside. Of course, the free-energy barrier depends on the interaction between the a-helix and the disordered chains that consist mainly of Gly and Met.
Analysis
We start by considering a very naive estimate that has the advantage that it is based on the fully atomistic simulations. From these simulations, we know the density profile of C a atoms in the trans ring (see Figure 4). If, in the spirit of the Flory model, we assume that the density fluctuations of independent polymer Kuhn segments are Poisson distributed, we can estimate the probability P 0 that a tube with the diameter of an a-helix contains no C a atoms at all. This would lead us to an estimate of the free energy barrier that is equal to 2kTlnP 0 . Using the density profile of Figure 4 and an estimate [14] for the persistence length of a protein filament, we obtain a translocation barrier of approximately 4 k B T. If we make the (unrealistic) assumption that all C a 's in a single chain are fully correlated, then we estimate the barrier height to be only 1 k B T, which should be a significant underestimate. To see whether such a rough estimate is at all reasonable, we can repeat the same procedure for the coarse-grained model where we can also perform direct free-energy calculations. To be consistent with the previous case, we assume that the there are only excluded-volume interactions between the (mainly Gly) chains and the helix residues. In terms of the interaction matrix of [15] this is equivalent to assuming that the helix consist entirely of Thr residues. Assuming all Kuhn segments fluctuate independently, we estimate the barrier to be 4 k B T, and the assumption of fully correlated fluctuations will again yield an estimate of order 1 k B T. The good agreement between the fully atomistic and coarse grained estimates is, of course, somewhat fortuitous, in view of the fact that the two density distributions are not identical. However, it suggests that the coarse-grained model may be of practical use. Next, we compute the free energy barrier for the coarse-grained model system using the MC method described in the Methods.
First we considered the case of pure steric interactions between both the chains and the helix. In Figure 5 we plot the free energy F(Q S ) as a function of the reaction coordinate Q S that measures the number of C a 's that have entered the pore region. The plot shows a symmetric barrier with a height of approximately 2 k B T, which is surprisingly close to the estimate obtained assuming fully correlated fluctuations of protein segments. In other words: the chains tend to move as a whole in an out of the central area of the pore. This picture is supported by the snapshot of the pore region ( Figure S4). The main conclusion that we can draw from the coarse-grained free-energy calculations is that the presence of seven protein chains in the central core region of the trans ring is not enough to obstruct translocation on steric grounds alone.
Of course, the interaction between a typical translocation protein segment and the ring chains is not purely steric. To consider the effect of both attractive and repulsive interactions, we consider the two cases separately. As the chains consist predominantly of Gly, we consider the scenarios that the interactions between the filament residues and the C a atoms of the helix are all equal to the twice the average of all attractive (resp. repulsive) interaction energies of Gly in the Betancourt-Thirumalai interaction matrix [15] (20.1k B T and +0.1k B T, respectively). The strength of attractive/repulsive interactions between the C a 's of the helix and the filament is therefore 20.2k B T (resp +0.2k B T). By taking an interaction that is double the average attractive/repulsive interaction strength, we are presumably modeling rather extreme cases that should put bounds on the actual translocation barrier. Figure 5 shows the computed free-energy barriers for translocation in the case of attractive (resp. repulsive) interactions. The translocation barrier is appreciably lower when the chains attract the a helix (2 k B T) than in the opposite limit (4.5 k B T). However, the most striking observation is that the barrier is quite small in either case -a barrier of 4.5 k B T can easily be crossed due to the action of thermal fluctuations.
In fact, in the case of attractive interactions, there is virtually no barrier for translocation. This absence of a barrier may provide a rationale for the experimental observation that Krueger et al. observed in their SANS experiments [13] that a non-native protein (DPJ-9) was partially sucked into isolated trans rings. If proteins can indeed translocate through the GroEL equatorial plane then this may also be relevant for the mechanism by which the GroEL/GroES chaperonin can help the refolding of proteins that are too big to be encapsulated. In such cases, portions of the protein could be attracted to the inside of the pore and perform either a complete or a partial translocation ( Figure S5). According to [6] either process can enhance the refolding efficiency.
The translocation of encapsulated non-native proteins is most likely in cases where the initial structure is far native. The reason is two fold: first of all, for such conformation there should be a low free-energy cost associated with partial unfolding-a necessary first step in translocation. Secondly, non-native chains that are trapped in a hydrophilic cage tend to be compressed. They can lower their free energy by translocating out of the cage. The simulations of [6] suggest that the driving force for such translocation can be as much as 0.5 k B T per amino-acid residue. Such a free-energy gradient is enough to completely remove a small free-energy barrier that might oppose translocation ( Figure S6).
Discussion
In conclusion, our simulation results are not compatible with the assumption that the disordered protein chains in the cis or trans rings provide an effective barrier against translocation. The present findings may help explain a puzzling experimental finding concerning refolding experiments in the presence of crowding agents [4]. The experiments of [4] demonstrated that, under physiological crowding conditions, the substrate protein does not escape from the chaperonin until it has reached its native state. This phenomenon is difficult to reconcile with the standard scenario where a protein (folded or not) is expelled from the cischamber as another non-native protein binds to the ATP-trans chamber. However, if it is not another protein that binds to the hydrophobic rim of the trans chamber, but the original protein that has translocated from the cis-chamber (see Figure 2), then it becomes clear why non-native proteins are unlikely to escape. We stress that the present findings do not rule out the possibility that non-native proteins fold into the native state without translocation [16]-translocation is simply an added route for protein folding. Such a route maybe very important for proteins that folds cotranslationally, where confinement in a optimal size tunnel is crucial for efficiently reaching the native state [17].Our simulations suggest that it would be interesting to carry out refolding experiments on GroEL with mutated chains that would strongly stick to each other (or that could be cross-linked). Such mutation would impede the translocation and should thereby reduce the efficiency of the GroEL/GroES complex.
Atomistic Molecular Dynamics
The flexible nature of this region prevented accurate X-ray determination of the chains filling the interconnecting pore. To obtain a full-atomistic model, the program MODELLER [18] has been used to generate a starting configuration of the chains missing in the X-ray structure (PDB code: 1AON) of the GroEL/ GroES complex loaded with ADP. The reconstructed fragments (sequence KNDAADLGAAGGMGGMGGMGGM) are added at the C-term extremity of each monomeric building block of the chambers. In order to avoid steric clashes between the chains, the procedure has taken into account of the quaternary assembly of the chains. After the generation of the chains structures, three steepest-descent minimisations were performed, using the program GROMACS [11] (energy minimisation tolerance: 0.1, 0.05 and 0.01 kJ/mol 21 nm 21 ). Molecular Dynamics (MD) simulations were subsequently performed with the GROMACS [11] package by using GROMOS96 force field with an integration time step of 2 fs. Non-bonded interactions were accounted for by using the particle-mesh Ewald method (grid spacing 0.12 nm) [19] for the electrostatic contribution and cut-off distances of 1.4 nm for Van der Waals terms. Bonds were constrained by LINCS [20] algorithm. The system was simulated in the NPT ensemble by keeping constant the temperature (300 K) and pressure (1 atm); a weak coupling [21] to external heat and pressure baths was applied with relaxation times of 0.1 ps and 0.5 ps, respectively. As we intended to simulate a solution at a pH-value of 7 the protonation states of pH sensitive residues were assigned as follow: Arg and Lys were positively charged, Asp and Glu were negatively charged and His was neutral. The protein's net charge was neutralised by the addition of Cl 2 and Na + ions. It would have been prohibitively expensive to simulate the entire chaperonin plus surrounding water. However, this was not necessary, as our aim was to study the structure and dynamics of the strongly fluctuating the equatorial rings, rather than the relatively rigid remainder of the GroEL ''chamber''. We therefore immobilised the chamber atoms that are not directly connected to the pore chains. Of course, the equatorial chains were free to move and relax in the pore. In order to further reduce the number of degrees of freedom treated, we only considered water molecules (SPCE [22]) inside the GroEL chamber. We achieved this by imposing a strong repulsive external potential outside the GroEL chamber. Ignoring the water outside the cage is not an unreasonable simplification, as we found that the disordered chains were completely solvated by water molecules and never moved outside the atoms of the internal surface of the chamber. We assumed periodic boundary conditions only along the symmetry axis of the GroEL complex (''z-axis'').
Coarse-Grained Monte Carlo Simulations
The Caterpillar model is a modification of the tube model of Maritan and co-workers [14,23,24]. The main differences are that we treat the structure of the backbone in more detail and that our scheme to account for self avoidance by means of bulky side groups is computationally cheaper than the approach of Maritan et al. who introduced a three-body interaction to achieve the same. The interaction between amino acids with different side chain E CA is given by the following expression where r Ca is the distance between nonadjacent C a atoms in the protein and r max is the distance at which the potential has reaches half e . For e we use the 20620 matrix derived with the method of Betancourt and Thirumalai [15].Although these interaction energies are strictly speaking neither energies nor free energies, they do provide a reasonable representation of the heterogeneity in the interactions between different amino acids. We modeled the hydrogen bonds between the hydrogen and the oxygen of the backbone with a 10-12 Lennard-Jones potential: where the minimum is at s = 2.0 Å and E LJ = 3.1 k B T. The directionality of the hydrogen bond was taken into account by multiplying the Lennard-Jones potential by a pre-factor where h 1 and h 2 are the angles between the atoms COH and OHN respectively. The large hard spheres centered on the C a atoms ensure that the orientation factor is maximum only for angles close to p. Apart from rotations around the dihedral angles w 1 and w 2 ( Figure S3), the backbone is rigid. We have verified that this model can indeed reproduce typical protein motifs such as alpha helices and beta sheets, depending on the amino-acid sequence.
Folding
To sample the conformations of the protein chains anchored on the trans ring, we use two basic Monte-Carlo moves: branch rotation and an improved version of the biased Gaussian step [25], while for the translocating alpha helix we allow only translation moves and rotation around the center of mass. Figure S1 Root mean square displacement of the C a atoms of the equatorial chains compared to the initial condition. The time scale starts from 7 ns and goes all the way to 11 ns. The plateau demonstrates that the dynamics reached equilibrium. Found at: doi:10.1371/journal.pcbi.1000006.s001 (0.15 MB EPS) Figure S2 Schematic representation of the model used for the GROMACS full atomistic simulations. The part of the protein that was kept constrained in space is shown in grey. The chains that were free to fluctuate are shown in light blue. The water molecules that fully solvated the protein complex are not shown. The axes are drawn to indicate the coordinate system used in the calculation of the filament density profiles. Figure S5 Plot of translocation free energy F (Q,Q s ) as a function of the number of Helix-chains contacts Q and of the number of translocated residues Q s . We plot F (Q,Q s ) for an attractive (20.2 kT) interaction between the alpha helix and the chains. In this scenario the alpha helix is pulled towards the middle of the hole, and it is subject to two choices, either to stay there surrounded by the chains (low values of Q S and high values of Q) or directly translocate (low values of Q S ). The small barrier (,2 k B T) separating these two states suggests that the translocation can occur in one step (all the way down) or in two steps (first trapped for a while in the hole and then escaping). Found at: doi:10.1371/journal.pcbi.1000006.s005 (0.21 MB EPS) Figure S6 Plot of translocation free energy F9 (Q s ) = F (Q s )20.5Q S where Q s is the number of translocated residues, and F (Q s ) is the translocation free energy with repulsive (0.2 kT) helix-chains interactions. The correction added to the free energy comes from the fact that the protein feels a gradient towards a a folded and translocated state. We extrapolated the coefficient 20.5 k B T per amino acids translocated, from previous work on lattice proteins [6]. | 5,987.8 | 2008-02-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
RETRACTED ARTICLE: Involvement of NORAD/miR-608/STAT3 axis in carcinostasis effects of physcion 8-O-β-glucopyranoside on ovarian cancer cells
We, the Editors and Publisher of the journal Artificial Cells, Nanomedicine, and Biotechnology, have retracted the following article: Xiaohong Yang, Yimin Yan, Yuhuan Chen, Jingwen Li & Jing Yang (2019) Involvement of NORAD/miR-608/STAT3 axis in carcinostasis effects of physcion 8-O-β-glucopyranoside on ovarian cancer cells. Artificial Cells, Nanomedicine, and Biotechnology, 47(1), 2855–2865, DOI: 10.1080/21691401.2019.1637884 Since publication, concerns have been raised about the integrity of the data in the article. When approached for an explanation, the authors have been unable to verify their original data or answer our questions about research ethics and the informed consent process. We are therefore retracting this article and the corresponding author listed in this publication has been informed. We have been informed in our decision-making by our policy on publishing ethics and integrity and the COPE guidelines on retractions. The retracted article will remain online to maintain the scholarly record, but it will be digitally watermarked on each page as “Retracted”.
Introduction
Ovarian cancer turns into one of the most lethal gynaecological tumours [1]. The prognosis of ovarian cancer is poor, which is characterized by only a just 40% of 5-year survival rate [2,3]. Relative absence of early symptoms and the high percentage (>60%) of patients that are diagnosed at an advanced stage result in the high mortality of this disease [4]. Thereby, it is of huge significance to deep grab the underlying biology of ovarian cancer and to explore new therapeutic paradigms.
Long noncoding RNAs (lncRNAs), a group of transcripts without protein-coding potential, have gained massive attention due to their roles in the regulation of diverse physiological and pathological processes [5,6]. In recent years, lncRNAs have been identified to have oncogenic or tumour suppressor roles in tumorigenesis [7]. Plenty of lncRNAs are involved in the process of ovarian cancer, such as CCAT1 [8], GAS5 [9], HOTAIR [10], tumour protein translationally controlled 1 antisense RNA 1 (TPT1-AS1) [11] and pro-transition associated RNA (PTAR) [12]. Moreover, various studies pointed out that lncRNAs are some new prognostic biomarkers or targets for the detection or treatment of ovarian cancer patients [13]. Therefore, identification of crucial lncRNAs involved in ovarian cancer has great significance for the diagnosis and treatment of this disease.
Recently, the functions of a novel cytoplasmic lncRNA, non-coding RNA Activated by DNA damage (NORAD) have been investigated [14]. NORAD is found to function as an oncogene and is relevant to adverse prognosis in various cancers, such as pancreatic cancer [15] and bladder cancer [16], and oesophagal squamous cell carcinoma [17]. In addition, existing evidence has pointed out that lncRNAs could bind RNA/miRNAs and thereby regulate their translation on the identity of competing for endogenous RNAs (ceRNAs) [18,19]. In patients with pancreatic cancer, NORAD is reported to severe as a ceRNA for miR-125a-3p to regulate the expression of the small GTP binding protein RhoA, thereby promoting epithelial-to-mesenchymal transition to facilitate invasion and metastasis [15]. Also, NORAD is believed to reduce the level of miR-608 as a ceRNA and consequently upregulate FOXO6 expression in gastric cancer cells, thus promoting the tumour growth [20]. However, the potential impact of NORAD in ovarian cancer remains incomplete investigated, let alone its underlying mechanism.
Rumex japonicus Houtt is a perennial herbal plant widely distributed in China, has been applied in folk medicine because of its antimicrobial, anti-inflammatory and antineoplastic activities [21,22]. Physcion 8-O-b-glucopyranoside (PG) is one of main active ingredients. PG has been pointed out to exert anti-tumour effects in diverse cancers, such as hepatocellular carcinoma [23], melanoma [24], breast cancer [25], oral squamous cell carcinoma [26] and clear-cell renal cell carcinoma [27]. However, the effect of PG on ovarian cancer development has not been clarified. Whether NORAD is a key regulatory principle in mediating the effects of PG on ovarian cancer o remains unclear.
Currently, the impacts of PG on the cell biological performances of ovarian cancer were evaluated. The level of NORAD both in ovarian cancer tissues and in cells was determined, followed by detection of the impacts of abnormal expression of NORAD on the cell biological performances of SKOV3 cells. Moreover, we explored whether NORAD modulated the level of STAT3 by competitively sponging miR-608, thus mediating the anti-tumour effects of PG on ovarian cancer cells. Our findings will offer novel view for the treatment of ovarian cancer.
Patients
In total, 56 ovarian cancer patients who underwent surgery in our hospital between April 2012 and April 2018 were enrolled in this study. The mean age of patients was 53.4 ± 12.2 years. The inclusion criteria were as follows: (1) patients' age between 18 and 78 years; (2) histologically confirmed ovarian cancer; (3) patients with adequate cardiac function: all patients underwent an electrocardiogram, and an echocardiogram with a left ventricular ejection fraction > 50% was performed in patients with a history of heart disease or abnormal electrocardiogram; (4) pulmonary function tests displayed forced expiratory volume in 1 s (FEV 1) up to 1.2 L, FEV1% higher than 50%, and carbon monoxide diffusing capacity (DlCO) more than 50%; and (5) patients with complete liver function (total bilirubin < 1.5 Â normal level (ULN); aspartate transaminase (AST) and alanine transaminase (ALT) < 1.5 Â ULN). Exclusion criteria were as follows: (1) other malignant diseases metastasized to the ovaries; (2) combined with other diseases, such as serious lung diseases, heart diseases (e.g., symptomatic coronary artery disease or myocardial infarction within 12 months) and significant dysfunction of bone marrow, liver and kidney; (3) pregnant or lactating women; and (4) allergic to any drugs.
Cell culture
A human ovarian epithelial cell line HOEpiC and five human ovarian cancer cell lines SKOV3, Caov3, A2780, HO-8910 and OVCAR3 were purchased from Chinese Type Culture Collection, Chinese Academy of Sciences. These cells were then grown in RPMI 1640 medium (TaKaRa, Japan) containing 100 mg/ml streptomycin sulfate, 100 mg/ml streptomycin sulfate and 10% fetal bovine serum (TaKaRa) and cultured in a 37 C humidified air atmosphere with 5% CO 2 .
Cell treatment and transfection
PG was gained from Chengdu Xunchen Biological Technology Co., Ltd. (Chengdu, China). For detecting the effect of PG on ovarian cancer cells, SKOV3 cells were disposed with various concentrations of PG (0, 2 0, 50 and 100 lM) for 24 h, 48 h and 72 h, respectively, according to the study of Xie et al. [28].
MTT assay
SKOV3 cells were seeded into each well of 96-well plate with the density of 2 Â 10 3 cells each well. Following different treatments at the indicated time, MTT solution (5 g/L, 20 lL/ well) was added to each well and used to incubate the cells at 37 C for another 4 h. DMSO (150 lL/well) was then mixed with cells for dissolving crystals for 10 min. Absorbance values at 492 nm were then measured for assessing cell viability using the microplate reader (BioTek, VT, United States).
Flow cytometric analysis
After different treatments, SKOV3 cells were harvested and washed with ice-cold PBS. Cell apoptosis was then analyzed
Transwell assays
After different treatments, SKOV3 cells (5 Â 10 4 cells) were harvested and then placed into the top chamber of Transwell assay inserts (8-mm pore size; Millipore, Billerica, MA, USA) in 200 lL of serum-free RPMI 1640 medium. Unlike migration assay, the top chamber of an insert was precoated with Matrigel for the invasion assay. RPMI 1640 containing 10% FBS was then added to the bottom chamber as a chemoattractant. Afterwards, any remaining cells on the top layer of the inserts were swept by a sterile cotton swab. The migrated or invaded cells through the membrane were fixed with methanol, stained 0.1% crystal violet, and counted and imaged using digital microscopy (Olympus, Tokyo, Japan).
Quantitative PCR (qPCR) test
We isolated the total RNA from cells using Trizol kit (TaKaRa, Japan); afterwards, the isolated and purified RNA was chosen for synthesizing cDNA using an M-MLV Reverse Transcriptase kit (TaKaRa). The expression levels of target genes were then detected by qPCR using the SYBR V R Green kit (Takara) on an ABI StepOne PCR instrument. With b-actin as an internal control for RNA and U6 as a reference for miRNA, comparative Ct (2 ÀDDCT ) method was used for relative quantification of target genes.
Western blot test
After different treatments, SKOV3cells were lysed using the mammalian protein extraction reagent RIPA (Beyotime, Beijing, China). After quantitation, protein extraction (50 mg per lane) was divided by 10% SDS-PAGE and then the divided proteins were transferred to PVDF membrane (Sigma, USA). The membranes were then incubated with specific primary antibodies (1:1000) at 4 C overnight. After incubation with the recommended secondary antibodies (1:5000) at 37 C for 2 h, the reacted protein signals were revealed using ECL detection kit (Pierce, Rockford, IL, USA). Primary antibodies to STAT3, Bcl-2, Bax, pro-caspase-3, cleaved caspase-3, pro-caspase-9, cleaved caspase-9 and b-actin were all gained from Abcam (Cambridge, UK). b-actin was chosen as the control.
Dual-luciferase reporter test
The pMIR-REPORT-STAT3-wt/mut (Sangon Biotech, Shanghai, China) were produced and then were transfected into SKOV3 cells, together with miR-608 mimic or mimic NC. Dual-Luciferase Reporter Assay System (E1910, Promega, WI, USA) was chosen for the evaluation of luciferase activity of reporter vectors after 48 h of transfection
Statistical analysis
We carried out all experiments independently with three times repeats. The SPSS 16.0 software (SPSS, Chicago, IL) was chosen for statistical analysis. The obtained data were displayed as the mean ± standard error (SD). The statistical differences were calculated using a Student t-test for two groups or one-way analysis of variance (ANOVA) for more than two groups. p < .05 was chosen as statistically significant.
PG inhibits cell viability, migration and invasion but promotes apoptosis of SKOV3 cells
To grab the impact of PG in ovarian cancer development, we used different concentrations of PG to treat SKOV3 cells for different times and then detect the effects of PG on cell behaviours. As shown in Figure 1 (Figure 1(B)). Moreover, the results of Transwell assays displayed that 48 h of PG treatment prominently suppressed the migration (Figure 1(C)) and invasion (Figure 1(D)) of SKOV3 cells in a dose-dependent way (p < .05). These findings proved to us that PG might exhibit an anti-tumour effect on ovarian cancer.
NORAD is upregulated in ovarian cancer and PG decreases the expression of NORAD
There were studies reporting the role of NORAD in several cancers [15][16][17]28], but the impact of NORAD in ovarian cancer remains incomplete clarified. To grab if NORAD played a significant role in ovarian cancer, we assessed the levels of NORAD both in ovarian cancer tissues and in cells. The data uncovered that NORAD was markedly higher expressed in ovarian tumour tissues than that in non-tumour tissues (p < .01, Figure 2(A)). Moreover, NORAD was also remarkably up-expressed in five ovarian cancer cell lines, including A2780, Caov3, HO-8910, SKOV3 and OVCAR3, compared to that in normal ovarian epithelial HOEpiC cells (p < .05, Figure 2(B)). SKOV3 cells were selected for subsequent experiments due to the highest expression levels of NORAD. Subsequently, when transfecting with pc-NORAD and sh-NORAD, NORAD was dramatically up-expressed or depressed in SKOV3 cells, respectively (p < .001, Figure 2(C)), implying a high transfection efficiency. Moreover, silencing of NORAD by transfection with sh-NORAD significantly inhibited cell viability (p < .05, Figure 2(D)), induced apoptosis (p < 0.001, Figure 2(E)) and suppressed migration (p < .05, Figure 2
R E T R A C T E D
and invasion (p < .05, Figure 2(G)) of SKOV3 cells. Opposite effects on SKOV3 cell viability ( Figure 2D), migration ( Figure 2(F)) and invasion (Figure 2(G)) were observed after silencing of NORAD by transfection with sh-NORAD compared to transfection with sh-NC (all p < .05), but apoptosis did not exhibit significant change (Figure 2(E)). These data indicated that NORAD might play a tumour-promotion function in ovarian cancer. To further investigate whether PG exhibited antitumour effects on ovarian cancer by regulating NORAD, we determined the NORAD expression in PG-treated SKOV3 cells.
The results showed that 48 h of PG treatment remarkably decreased NORAD expression in a dose-dependent manner (Figure 2(G)). Based on this result, 100 lg/ml of PG was selected in the following experiments.
PG inhibits the cell biological performances of SKOV3 cells through decreasing NORAD expression
To explore whether the effects of PG on ovarian cancer via inhibiting NORAD expression, SKOV3 cells were transfected with pc-NORAD after treatment of 100 lg/ml PG for 48 h. we found that compared to PG þ pc-NORAD transfection, overexpression of NORAD by transfection with pc-NORAD significantly increased the cell viability (p < .05, Figure 3(A)), migration (p < .05, Figure 3(C)) and invasion (p < .05, Figure 3(D)) but inhibited apoptosis (p < .01, Figure 3(B)) of PGtreated SKOV3 cells, indicating that overexpression of NORAD reversed the impacts of PG treatment on the cell biological performances of SKOV3 cells.
NORAD promotes the growth of PG-treated SKOV3 cells by sponging miR-608
A recent study has confirmed that NORAD accelerates the growth of gastric cancer cells through sponging miR-608 [20]. We thus hypothesized that whether this kind of correlation between NORAD and miR-608 existed in ovarian cancer. As presented in Figure 4(A), miR-608 was dramatically enhanced after transfection with miR-608 mimic relative to transfection with mimic NC (p < .001), suggesting that miR-608 was successfully overexpressed. In comparison to PG þ pc-NORAD þ mimic NC group, miR-608 mimic concurrently in PG þ pc-NORAD þ miR-608 mimic group significantly changeover the impacts of NORAD high level on PG-treated SKOV3 cells by decreasing cell viability (p < .05, Figure 4(B)), migration (p < .05, Figure 4(D)) and invasion (p < .05, Figure 4(E)) but promoting apoptosis (p < .01, Figure 4(C)). We thereby deducted that NORAD might promote the growth of PG-treated SKOV3 cells by sponging miR-608.
STAT3 is tested as a target of miR-608, and the impacts of NORAD in PG-treated SKOV3 cells are through miR-608-mediated STAT3 We identified the target genes of miR-608 in ovarian cancer cells. Using Targetscan online tool, we found that STAT3 was targeted by miR-608 (http://www.targetscan.org/cgi-bin/ targetscan/vert_71/view). Subsequently, luciferase reporter test was carried out and the data displayed that STAT3-wt group obtains a very low luciferase activity after co-transfection with miR-608 mimic (p < .05), but the luciferase activity of STAT3-mut did not exhibit obvious change ( Figure 5(A)). Meanwhile, the levels of STAT3 were conspicuously underexpressed in miR-608 mimic group relative to mimic NC group (p < .01, Figure 5(B)). These data confirmed the target relationship between miR-608 and STAT3. Furthermore, we further overexpressed STAT3 to verify whether the role of NORAD in PG-treated SKOV3 cells was through regulation of miR-608-mediated STAT3. The results showed that STAT3 was successfully overexpressed by transfection of pEX-STAT3 (p < .001, Figure 5(C)). Our results further showed that, the impacts of pc-NORAD and miR-608 synchronously on PGtreated SKOV3 cells were dramatically changeover after pc-STAT3 transfection simultaneously, which were testified by the increased cell viability (p < .015, Figure 5(D)), decreased apoptosis (p < .01, Figure 5(E)) and the enhanced abilities of migration (p < .05, Figure 5(F)) and invasion (p < .05, Figure 5(G)) in PG þ pc-NORAD þ miR-608 mimic þ pEX-STAT3 group compared to PG þ pc-NORAD þ miR-608 mimic þ pEX group. We thereby deducted that the role of NORAD in PGtreated SKOV3 cells might be regulated by miR-608-mediated STAT3.
Discussion
Mounting evidence has revealed that deregulation of lncRNAs is involved in the development of human malignancies [29], providing insights into novel strategies for cancer treatment. We grabbed the antineoplastic activity of PG in ovarian cancer cells and explored whether NORAD was responsible for the antineoplastic impact of PG. And, we discovered that PG depressed cell viability, migration and invasion but enhanced apoptosis of SKOV3 cells, suggesting the potential of PG as a chemotherapeutic agent in the progress of ovarian cancer. One of other important findings of our study was that NORAD was upregulated in ovarian cancer tissues and cells, and silencing of NORAD resulted in depressions on cell viability, migration and invasion, but an enhancement on apoptosis of SKOV3 cells. Consistent with previous findings in other cancers [30][31][32], our results confirmed that NORAD may also play a tumour-promotion impact in ovarian cancer. However, we did not investigate the association between NORAD expression and overall patients, further studies are still required to evaluate the correlation between NORAD and patients' prognosis. Moreover, overexpression of NORAD reversed the impacts of PG treatment on the cell biological performances of SKOV3 cells; we thus speculate the PG may exert anti-tumour impact on ovarian cancer via targeting NORAD.
Accumulating evidence together pointed out that lncRNAs could bind miRNAs/RNAs and thereby running their regulator functions on the identity of ceRNAs in many diseases [33,34]. Strikingly, our results found that the role of NORAD in PGtreated SKOV3 cells were across the miR-608-mediated STAT3. There is a study that has confirmed the regulatory 2860 X. YANG ET AL.
R E T R A C T E D
relationship between NORAD and miR-608 [20], implying the reliability of our results. Moreover, miR-608 is shown to serve as a tumour suppressor in lung adenocarcinoma [35] and is associated with the prognosis of lung adenocarcinoma treated with tyrosine kinase inhibitors targeting EGFR [36]. Expect in lung adenocarcinoma, previous studies also confirmed the tumour suppressive role in some other types of human tumours, including glioma [37], colon cancer [38] and bladder cancer [39]. Notably, HOXD cluster antisense RNA 1 (HOXD-AS1), a cancer-related lncRNA, has been shown to promote the malignant behaviours of ovarian cancer cells through miR-608 [40]. Given the tumour suppressive role of miR-608 in various cancers, we deducted that NORAD promotes the progression of ovarian cancer via regulating miR-608. Moreover, STAT3 was tested as a target gene of miR-608 in ovarian cancer cells. It is shown that STAT3 plays a leading role in several processes such as inflammation and immunity in tumours by regulating numerous oncogenic signalling pathways, such as nuclear factor-jB and Janus kinase pathways [41]. In ovarian cancer ascites, elevated STAT3 expression is shown to promote tumour invasion and metastasis [42]. Moreover, STAT3 is abnormally highly expressed and is shown to be relevant to the poor outcome in patients with ovarian cancer [43]. Furthermore, scholars pointed out that STAT3 exhibits utility in both monotherapy and combination therapy and thereby may be a promising target in ovarian cancer models [44,45]. In this study, we found that the impacts of pc-NORAD and miR-608 mimic synchronously on PG-treated SKOV3 cells were dramatically changeover after overexpression of STAT3 synchronously. We thus deducted that NORAD could regulate STAT3 through mediating miR-608, thus regulating the antineoplastic activity of PG in ovarian cancer cells.
To sum up, our findings confirm the antineoplastic activity of PG in ovarian cancer cells. Moreover, NORAD/miR-608/ STAT3 axis is an important regulatory chain in mediating the antineoplastic impacts of PG in ovarian cancer cells. Our discoveries may offer a novel experimental basis for the explanation of the progression of ovarian cancer. Further in vivo tests including animal or human clinical trials are to carry out to verify our conclusion.
Disclosure statement
Authors declare that there is no conflict of interests. | 4,449.2 | 2019-07-12T00:00:00.000 | [
"Biology"
] |
A high‐temperature water vapor equilibration method to determine non‐exchangeable hydrogen isotope ratios of sugar, starch and cellulose
Abstract The analysis of the non‐exchangeable hydrogen isotope ratio (δ2Hne) in carbohydrates is mostly limited to the structural component cellulose, while simple high‐throughput methods for δ2Hne values of non‐structural carbohydrates (NSC) such as sugar and starch do not yet exist. Here, we tested if the hot vapor equilibration method originally developed for cellulose is applicable for NSC, verified by comparison with the traditional nitration method. We set up a detailed analytical protocol and applied the method to plant extracts of leaves from species with different photosynthetic pathways (i.e., C3, C4 and CAM). δ2Hne of commercial sugars and starch from different classes and sources, ranging from −157.8 to +6.4‰, were reproducibly analysed with precision between 0.2‰ and 7.7‰. Mean δ2Hne values of sugar are lowest in C3 (−92.0‰), intermediate in C4 (−32.5‰) and highest in CAM plants (6.0‰), with NSC being 2H‐depleted compared to cellulose and sugar being generally more 2H‐enriched than starch. Our results suggest that our method can be used in future studies to disentangle 2H‐fractionation processes, for improving mechanistic δ2Hne models for leaf and tree‐ring cellulose and for further development of δ2Hne in plant carbohydrates as a potential proxy for climate, hydrology, plant metabolism and physiology.
However, recent studies show the great potential of δ 2 H values of plant compounds to retrospectively determine hydrological and climatic conditions (Anhäuser, Hook, Halfar, Greule, & Keppler, 2018;Gamarra & Kahmen, 2015;Hepp et al., 2015Hepp et al., , 2019Sachse et al., 2012), as well as to disentangle metabolic and physiological processes (Cormier et al., 2018;Estep & Hoering, 1981;Sanchez-Bragado, Serret, Marimon, Bort, & Araus, 2019;Tipple & Ehleringer, 2018) such as the proportional use of carbon sources (i.e., fresh assimilates vs. storage compounds) for plant growth (Lehmann, Vitali, Schuler, Leuenberger, & Saurer, 2021;Zhu et al., 2020). Enabling the analysis of δ 2 H ne of NSC, especially sugar at the leaf level, will make it possible to study processes and environmental conditions which are shaping the 2 H-fractionation of carbohydrates at a much higher time resolution compared to the analysis of δ 2 H ne of cellulose. New routines and high-throughput analytical methods for δ 2 H ne values of NSC are thus needed to enable widespread application in earth and environmental sciences.
The difficulty of establishing reliable methods for δ 2 H ne values of NSC and cellulose is mainly caused by the presence of oxygen-bound hydrogen atoms (H ex ) that can freely exchange with hydrogen atoms of the surrounding liquid water and water vapor. The interference of H ex greatly affects the analysis of δ 2 H ne , which retains useful information on climate, hydrology, metabolism and physiology. The oldest method of measuring δ 2 H ne is to derivatize hydroxyl groups with nitrate esters, using a mixture of either H 2 SO 4 or H 3 PO 4 with HNO 3 (Alexander & Mitchell, 1949;Boettger et al., 2007;DeNiro, 1981;Epstein et al., 1976). However, the nitration process requires a large sample amount, is labour intensive, uses hazardous derivatization reactions and leads to thermally unstable products. A newer derivatization method to measure δ 2 H ne in sugars is using N-methyl-bistrifluoroacetamide to replace H ex with trifluoroacetate derivatives, which are measured by gas chromatography -chromium silver reduction/high-temperature conversion-IRMS (GC-CrAg/HTC-IRMS) (Abrahim et al., 2020). This method still relies on a large sample amount of >20 mg extracted NSC, a relatively long measuring time, and the limitation of measuring only one element per analysis. Potential alternative methods that work without derivatization and use smaller amounts of material are based on water vapor equilibration, which sets H ex to a known isotopic composition that allows the determination of δ 2 H ne by mass balance (Cormier et al., 2018;Filot et al., 2006;Sauer et al., 2009;Schimmelmann, 1991;Wassenaar & Hobson, 2000). However, established water vapor equilibration methods are mainly calibrated for analysis of δ 2 H ne values of complex molecules such as cellulose, keratin and chitin (Schimmelmann et al., 1986;Wassenaar & Hobson, 2000) and whether these methods can also be used for the analysis of δ 2 H ne in NSC remains to be shown. The main purpose of this study was therefore to establish a high-throughput hot water vapor equilibration method to determine δ 2 H ne of NSC, based on already established protocols for cellulose (Sauer et al., 2009 (Cernusak et al., 2016). The leaf samples were immediately transferred to gas-tight 12 ml glass vials ('Exetainer', Labco, Lampeter, UK, Prod. No. 738W), stored on ice until the harvest was complete (≤2 hr), and then at À20 C in a freezer until further use (Appendix 1).
The sample material was dried using a cryogenic water distillation method (West, Patrickson, and Ehleringer (2006), crumbled with a spatula (dicotyledon species) or cut with scissors (monocotyledon species) into small pieces, and 100 mg of the fragmented material was separated for cellulose extraction. The remaining leaf material was then ball-milled to powder (Retsch MM400, Retsch, Haan, Germany) for NSC extraction.
| Cellulose and starch nitration, and isotopic analysis of the nitrated products
Nitrates of cellulose and starch without exchangeable H were used as reference material to assess the δ 2 H ne values derived from the hot water vapor equilibration method. Nitration of cellulose and starch standards was performed following the method of Alexander and Mitchell (1949), using a mixture of P 2 O 5 and 90% HNO 3 . δ 2 H values of nitrated cellulose and starch were analysed with a TC/EA-IRMS system, using a reactor filled with chromium as described by Gehre et al. (2015). Reference materials for δ 2 H measurements of cellulose and starch nitrates were the IAEA-CH-7 polyethylene foil (PEF; International Atomic Energy Agency, Vienna, Austria) for a first offset correction and the USGS62, USGS63 and USGS64 caffeine standards (United States Geological Survey, Reston, Virginia, U.S.A.) (Schimmelmann et al., 2016) for the final normalization.
| Preparation of leaf cellulose and NSC for δ 2 H ne analysis
Every compound (i.e., sugars, starch and cellulose) was extracted once per sample. Cellulose (hemicellulose) was extracted from 100 mg of the fragmented leaf material in F57 fibre filter bags (made up of polyester and polyethylene with an effective pore size of 25 μm; ANKOM Technology, Macedon NY, U.S.A.). In brief, the samples were washed twice in a 5% sodium hydroxide solution at 60 C, rinsed with deionized water, washed 3 times for 10 hr in a 7% sodium chlorite solution, which was adjusted with 96% acetic acid to a pH between 4 and 5, and subsequently rinsed with boiling hot deionized water, and dried overnight in a drying oven at 60 C. The neutral sugar fraction ('sugar', a mixture of sugars, typically glucose, fructose, sucrose and sugar alcohol [Rinne, Saurer, Streit, & Siegwolf, 2012]) were extracted from 100 mg leaf powder and further purified using ion-exchange cartridges, following established protocols for carbon and oxygen isotope analyses (Lehmann et al., 2020;Rinne et al., 2012). This is needed to separate the sugar from other water-soluble compounds such as amino acids which would alter the resulting δ 2 H ne values (Schmidt, Werner, & Eisenreich, 2003). Starch was extracted from the remaining pellet of the sugar extraction via enzymatic digestion following the established method for carbon isotope analysis (Richter et al., 2009;Wanek et al., 2001). The same protocol was used to hydrolyse the commercial starch standards. Aliquots of the extracted sugar (including those derived from starch) were pipetted in 5.5 Â 9 mm silver foil capsules (IVA Analysentechnik GmbH & Co. KG, Germany, Prod. No. SA76981106), frozen at À20 C, freeze-dried, folded into cubes and packed into an additional silver foil capsule of the same type, folded again and stored in an exicator at low relative humidity (2-5%) until isotope analysis.
2.5 | δ 2 H ne analysis of cellulose and NSC using a hot water vapor equilibration method One microgram of commercial starch or cellulose standard was packed into 3.3 Â 5 mm silver foil capsules (IVA, Prod. No. SA76980506), which led to a total peak area between 20 and 30-V seconds (Vs) of each IRMS analysis. For sugar standards, one mg was transferred first into a 5.5 Â 9 mm silver foil capsule (IVA), and additionally packed in a second capsule of the same size and folded again. The reason for the double packing was the observation that sugar samples became liquefied and rinsed out of single-packed capsules during the hot water vapor equilibration, which led to a loss of sample and to negative impacts on the analysis of δ 2 H ne in sugars. Such rinsing was prevented by double packing and had no negative impact on the drying time of the sugars (Appendix 2). The double packing did not have a negative impact on the equilibration itself, as indicated by the high x e of the sugars (Table 1). All packed samples were stored in an exicator at low relative humidity (2-5%) until isotope analysis.
All samples were equilibrated in a home-built The inlet was connected to a stainless steel tube (i.e., feeding capillary, BGB, Switzerland), which was leading out of the oven where a santoprene pump tubing was fitted into a peristaltic pump (Appendix 6).
The end of the santoprene pump tubing was inserted into a 50 ml falcon tube containing the equilibration water. The peristaltic pump pro- For testing the reproducibility of the adapted method, triplicates of each type of cellulose and sugar samples were equilibrated independently on separate days following a standardized sample sequence (Appendix 7), in total three times with Water 1 (δ 2 H = À160‰) and three times with Water 2 (δ 2 H = À428‰). For starch and digested starch, triplicates were equilibrated only once with Water 1 and once with Water 2.
Subsequently, all samples (still hot) were immediately transferred into a Zero Blank Autosampler (N.C. Technologies S.r.l.), which was installed on a sample port of a high-temperature elemental analyser system. The latter was coupled via a ConFlo III interface to a Delta Plus XP IRMS (TC/EA-IRMS, Finnigan MAT, Bremen, Germany). It is crucial to transfer the samples as fast as possible and still hot from the equilibration chamber to the autosampler to avoid any isotopic reequilibration of the sample with air moisture and water absorption.
The autosampler carousel was evacuated to 0.01 mbar and afterwards filled with dry helium gas to 1.5 bar to avoid any contact with ambient water (vapor). The samples were pyrolysed in a reactor according to Gehre, Geilmann, Richter, Werner, and Brand (2004) 2.6 | Calculation of non-exchangeable hydrogen isotope ratio (δ 2 H ne ) According to Filot et al. (2006), the %-proportion of exchanged hydrogen during the equilibrations [x e , Equation (2)] can be calculated as: where δ 2 H e1 and δ 2 H e2 are the δ 2 H values of the two equilibrated samples, δ 2 H w1 and δ 2 H w2 are the δ 2 H values of the two waters used, α e-w is the fractionation factor of 1.082 for cellulose (Filot et al., 2006). While α e-w needs to be adapted for different compounds and fractions with different functional groups (Schimmelmann, 1991 Statistical analyses (one-way ANOVA and Tukey posthoc test) were performed using R version 3.6.3 (R. Core.Team, 2021 (Werner & Brand, 2001) is applied, that is, all samples are prepared and measured in the same way.
Besides, the calculated x e values of the IAEA-CH-7 reference material without any H ex were close to 0 throughout all measurements, denoting the absence of absorbed water on the surface of each compound, as well as the analytical reproducibility for all δ 2 H ne values of cellulose, was high as indicated by a standard deviation of 0.8 to 1.9‰ for three repetitions.
The same method was also applied to analyse δ 2 H ne of NSC ( which is comparable to the precision of derivatization methods (Dunbar & Schmidt, 1984: 1.9‰;Augusti et al., 2008: 2 and 10‰; Abrahim et al., 2020: 0.4 and 3.6‰). As no nitrated sugars were available due to the safety problems with sugar nitration, we could not calculate the accuracy. We, however, can assume that the accuracies for sugars should be in a comparable range as those derived from digested starch (À8.0 and À2.0‰). The reproducibility of the results for all tested commercial sugars ranged between 4.0 and 8.6‰ for three repetitions. The x e of the different sugars ranged between 34.1 and 53.5% and was thus similar or very close to x e.pot , which gives further confidence in the reliability of the method for sugars. The smaller deviation of x e from x e.pot for sugars than for cellulose might be explained by the dissolution of the sugars during the hot water vapor equilibration, leading to a breakdown of the crystal structure of the sugars. This might have facilitated a complete exchange of H ex with the water vapor in sugars, that is, not feasible for cellulose (Sauer et al., 2009;Schimmelmann, 1991 (Leaney, Osmond, Allison, & Ziegler, 1985;Sternberg, Deniro, & Ajie, 1984). While the observed variation in δ 2 H ne of NSC and cellulose among the photosynthetic pathways are unlikely to be explained solely by differences in leaf water 2 H enrichment (Kahmen, Schefuß, & Sachse, 2013;Leaney et al., 1985), higher leaf water δ 2 H values might partially contribute to higher δ 2 H ne of NSC and cellulose in CAM plants compared to C 3 plants (Smith & Ziegler, 1990 isotope effects during metabolic processes (Cormier et al., 2018;Cormier, Werner, Leuenberger, & Kahmen, 2019). Our results are supported by a previous study (Luo & Sternberg, 1991;Schleucher et al., 1999), showing that nitrated starch was more 2 H-depleted than nitrated cellulose within the same autotrophic photosynthetic tissue, which can be interpreted as another proof for the reliability of the new method for δ 2 H ne values of NSC. The high variability in 2 Hfractionation in the sequence from sugars to starch to cellulose (Table 2) between all tested species indicates high variability in common 2 H-fractionation processes, which is also supported by recent studies (Cormier et al., 2018;Sanchez-Bragado et al., 2019). Thus, the variability in 2 H-fractionation may find application in future plant physiological studies, investigating stress responses or short-and long-term carbon dynamics. We assume that δ 2 H ne of NSC are susceptible to diel or seasonal changes in environmental conditions such as temperature and light intensity due to their short turnover time (Fernandez et al., 2017;Gibon et al., 2004). The variability in 2 Hfractionation between different species might also be important if multiple tree species are used during the establishment of tree-ring isotope chronologies in dendroclimatological studies (Arosio, Ziehmer-Wenz, Nicolussi, Schlüchter, & Leuenberger, 2020).
In conclusion, we show that a hot water vapor equilibration method originally developed for cellulose can be adapted for accurate, precise and reproducible analyses of δ 2 H ne in non-structural carbohydrates (NSC) such as sugar and starch. By applying the method for compounds from different plant species, we demonstrated that this analytical method can now be used to estimate 2 Hfractionation among structural and NSC and to distinguish plant material from plants with different photosynthetic pathways. It should be noted that the method presented herein enables analysis of δ 2 H ne of bulk sugar and sugar derived from digested starch and is therefore not compound-specific nor position-specific. Yet, our δ 2 H ne method allows us to measure NSC samples in high-throughput and we thus expect that it will help to identify important 2 Hfractionation processes. These findings could then eventually be studied in more detail with compound-specific methods (GC-IRMS [Abrahim et al., 2020]) or methods giving positional information (NMR [Ehlers et al., 2015]). We therefore expect that the method will find widespread applications in plant physiological, hydrological, ecological and agricultural research to study NSC fluxes and plant performance, and the beverage and food industry, to distinguish between sugars of different origins, which could be applied to check if a certain product is altered by the addition of low-cost supplements. We also expect that the method can help to improve mechanistic models for 2 H distributions in organic material (Roden, Lin, & Ehleringer, 2000;Yakir & DeNiro, 1990). The method may further help, in combination with other hydrogen isotope proxies (e.g., fatty acids, n-alkanes or lignin methoxy groups), researchers to better understand metabolic pathways and fluxes, shaping the hydrogen isotopic composition of plant material. | 3,843 | 2021-09-26T00:00:00.000 | [
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Optogenetic activation of heterotrimeric Gi proteins by LOV2GIVe— a rationally engineered modular protein
Heterotrimeric G-proteins are signal transducers that mediate the action of many natural extracellular stimuli as well as of many therapeutic agents. Non-invasive approaches to manipulate the activity of G-proteins with high precision are crucial to understand their regulation in space and time. Here, we engineered LOV2GIVe, a modular protein that allows the activation of Gi proteins with blue light. This optogenetic construct relies on a versatile design that differs from tools previously developed for similar purposes, i.e. metazoan opsins, which are light-activated GPCRs. To make LOV2GIVe, we fused a peptide derived from a non-GPCR protein that activates Gαi (but not Gαs, Gαq, or Gα12) to a small plant protein domain, such that light uncages the G-protein activating module. Targeting LOV2GIVe to cell membranes allowed for light-dependent activation of Gi proteins in different experimental systems. In summary, LOV2GIVe expands the armamentarium and versatility of tools available to manipulate heterotrimeric G-protein activity. GRAPHICAL SUMMARY
INTRODUCTION
Heterotrimeric G-proteins are critical transducers of signaling triggered by a large family of G-protein-coupled receptors (GPCRs). GPCRs are Guanine nucleotide Exchange Factors (GEFs) that activate G-proteins by promoting the exchange of GDP for GTP on the Gα-subunits (Gilman, 1987). This signaling axis regulates a myriad of (patho)physiological processes and also represents the target for >30% of drugs on the market (Sriram and Insel, 2018), which fuels the interest in developing tools to manipulate G-protein activity in cells with high precision.
Optogenetics, the use of genetically-encoded proteins to control biological processes with light (Warden et al., 2014) is well-suited for the non-invasive manipulation of signaling.
Metazoan opsins are light-activated GPCRs that have been repurposed as optogenetic tools (Airan et al., 2009;Bailes et al., 2012;Karunarathne et al., 2013;Oh et al., 2010;Siuda et al., 2015), albeit with limitations. For example, opsins tend to desensitize after activation due to receptor internalization and/or exhaustion of their chromophore, retinal (Airan et al., 2009;Bailes et al., 2012;Siuda et al., 2015). Exogenous supplementation of retinal, which is not always feasible, is required in many experimental settings because this chomophore is not readily synthesized by most mammalian cell types or by many organisms. Also, opsins are relatively large, post-translationally modified transmembrane proteins, which makes them inherently difficult to modify for optimization and/or customization for specific applications.
Here we leveraged the light-sensitive LOV2 domain of Avena sativa (Harper et al., 2003) to develop an optogenetic activator of heterotrimeric G-proteins not based on opsins. LOV2 uses ubiquitously abundant FMN as the chromophore and does not desensitize. It is small (~17 KDa) and expresses easily as a soluble globular protein in different systems (Lungu et al., 2012;Strickland et al., 2012;Zayner et al., 2013), making it experimentally tractable, easily customizable, and widely applicable. Our results provide the proof of principle for a versatile optogenetic activator of heterotrimeric G-proteins that does not rely on GPCR-like proteins.
Design and optimization of a photoswitchable G-protein activator
We envisioned the design of a genetically-encoded photoswitchable activator of heterotrimeric G-proteins by fusing a short sequence (~25 amino acids) from the protein GIV (a.k.a. Girdin) called the Gα-Binding-and-Activating (GBA) motif to the C-terminus of LOV2 (Fig. 1A, left). GBA motifs are evolutionarily conserved sequences found in various cytoplasmic, non-GPCR proteins that are sufficient to activate heterotrimeric G-protein signaling . We reasoned that the GBA motif would be "uncaged" and bind G-proteins when the C-terminal Jα-helix of LOV2 becomes disordered upon illumination (Harper et al., 2003) (Fig. 1A, right). We named this optogenetic construct LOV2GIV (pronounced "love-to-give"). We used two well-validated mutants that mimic either the dark (D) or the lit (L) conformation of LOV2 (Harper et al., 2004;Harper et al., 2003;Zimmerman et al., 2016) to facilitate LOV2GIV characterization (Fig. 1A, right).
Our first LOV2GIV prototype had a suboptimal dynamic range, as it bound the G-protein Gαi3 in the lit conformation only ~3 times more than in the dark conformation (Fig. 1B). Based on a structural homology model of LOV2GIV (Fig. 1C), we reasoned that it could be due to the relatively high accessibility of Gαi3-binding residues of the GIV GBA motif within the dark conformation of LOV2GIV. In agreement with this idea, fusion of the GBA motif at positions of the Jα helix more proximal to the core of the LOV2 domain tended to lower G-protein binding, including one variant ("e", henceforth referred to as LOV2GIVe) in which binding to the dark conformation was almost undetectable (Fig. 1D). Consistent with this, a structure homology model of LOV2GIVe showed that amino acids required for G-protein binding are less accessible than in the LOV2GIV prototype (Fig. 1E). In contrast, the lit conformation of LOV2GIVe displayed robust Gαi3 binding, thereby confirming an improved dynamic range compared to the LOV2GIV prototype (Fig. 1F).
LOV2GIVe binds and activates G-proteins efficiently in vitro only in its lit conformation
Concentration-dependent binding curves revealed that the affinity of Gαi3 for the dark conformation of LOV2GIVe is orders of magnitude weaker than for the lit conformation ( Fig. 2A), which had an equilibrium dissociation constant (Kd) similar to that previously reported for the GIV-Gαi3 interaction . We also found that LOV2GIVe retains the same G-protein specificity as GIV. Much like GIV (Garcia-Marcos et al., 2010;Marivin et al., 2020), LOV2GIVe bound robustly only to members of the G i/o family among the 4 families of Gα proteins (G i/o , G s , G q/11 and G 12/13 ), and could discriminate within the G i/o family by binding to Gαi1, Gαi2 and Gαi3 but not to Gαo (Fig. 2B). Next, we assessed if LOV2GIVe can activate Gproteins in vitro, i.e. whether it retains GIV's GEF activity, by measuring steady-state GTPase activity as done previously for GIV and other related non-receptor GEFs (Garcia-Marcos et al., 2010;Maziarz et al., 2018). We found that LOV2GIVe in the lit, but not the dark, conformation led to a dose-dependent increase of Gαi3 activity (Fig. 2C), which was comparable to that previously shown for GIV (Garcia-Marcos et al., 2009). These findings indicate that LOV2GIVe recapitulates the G-protein activating properties of GIV in vitro, and that these are effectively suppressed for the dark conformation.
LOV2GIVe activates G-protein signaling in cells in its lit conformation
To investigate LOV2GIVe-dependent G-protein activation in cells, we initially used a humanized yeast-based system (Cismowski et al., 1999;DiGiacomo et al., 2020) in which the mating pheromone response pathway has been co-opted to measure activation of human Gαi3 using a gene reporter (LacZ, β-galactosidase activity) (Fig. 3A, left). When we expressed LOV2GIVe dark (D), lit (L) or wt in this strain, only very weak levels of β-galactosidase activity were detected (Fig. 3A, right). We reasoned that this could be due to the subcellular localization of the construct, presumed to be cytosolic in the absence of any targeting sequence, because we have previously shown that GIV requires membrane localization to efficiently activate G-proteins (Parag-Sharma et al., 2016). Expressing LOV2GIVe (L) fused to a membrane targeting sequence (mLOV2GIVe) led to a very strong induction of β-galactosidase activity, which was not recapitulated by expression of mLOV2GIVe (D) or wt (Fig. 3A, right).
LOV2GIVe-mediated activation levels (several hundred-fold over basal) are comparable to those previously reported for the endogenous pathway in response to GPCR activation by mating pheromone (Hoffman et al., 2002;Lambert et al., 2010). Next, we tested mLOV2GIVe in mammalian cells. Instead of using a downstream signaling readout as we did in yeast, we used a Bioluminescence Resonance Energy Transfer (BRET) biosensor that measures G-protein activity directly (Fig. 3B, left). Expression of mLOV2GIVe (L), but not (D), led to an increase of BRET proportional to the amount of plasmid transfected (Fig. 3B, right). At the highest amount tested, mLOV2GIVe (L) caused a BRET increase comparable to that observed upon agonist stimulation of the M4 muscarinic receptor, a Gi-activating GPCR. Introducing a mutation that precludes G-protein activation by GIV's GBA motif (F1685A (FA), (Garcia-Marcos et al., 2009)), in mLOV2GIVe (L) also abolished its ability to induce a BRET increase ( Fig. 3-figure supplement 1). These results indicate that the lit conformation of LOV2GIVe activates Gprotein signaling in different cell types.
LOV2GIVe activates G-protein signaling in cells upon illumination
Next, we tested whether LOV2GIVe can trigger G-protein activation in cells in response to light. For this, we used two complementary experimental systems. The first one consisted of measuring G-protein activity directly with the mammalian cell BRET biosensor described above upon illumination with a pulse of blue light. We found that, in HEK293T cells expressing mLOV2GIVe wt, a single short light pulse resulted in a spike of G-protein activation that decayed at a rate similar to that reported for the transition from lit to dark conformation of LOV2 (T 1/2 ~1 min) (Fig. 4A). This response was not recapitulated by a mLOV2GIVe construct bearing the GEF-deficient FA mutation (Fig. 4A). For the second system, we turned to a spot growth assay with the humanized yeast stain described above. In this system, conditional histidine prototrophy controlled by FUS1 promoter is used as a signaling readout downstream of Gprotein activation under conditions of homogenous and continued illumination (Fig. 4B, left).
Yeast cells expressing mLOV2GIVe wt grew in the absence of histidine only when exposed to blue light (Fig. 4B, right). This effect was specifically caused by light-dependent activation of mLOV2GIVe wt because cells expressing the light-insensitive mLOV2GIVe (D) construct failed to grow under the same illumination conditions, whereas mLOV2GIVe (L) grew the same regardless of illumination conditions (Fig. 4B, right). Taken together, these results show that mLOV2GIVe activates G-protein signaling in cells upon blue light illumination.
Conclusions and future perspectives
Here we have presented proof-of-principle evidence for a photoswitchable G-protein activator that does not rely on opsins, i.e., light-activated GPCRs. This tool is based on a modular design that combines the properties of the light-sensitive LOV2 domain with a motif present in a non-GPCR activator of G-proteins. We propose that the versatility of the LOV2GIVe design could help overcome some limitations of currently available optogenetic tools that are based on GPCR-like proteins. For example, our results indicate that LOV2GIVe requires targeting to membranes where the substrate G-protein localizes (Fig. 3A), a feature that could be leveraged in the future to target it to different membranous organelles to study how activation of G-proteins in discrete subcellular compartments triggers different signaling events (Stoeber et al., 2018;Tsvetanova et al., 2015). Our results in yeast also support that LOV2GIVe can activate G-protein signaling ( Fig. 4C) even in cell types where there is not sufficient synthesis of retinal to support opsin-based activation (Scott et al., 2019). A limitation of LOV2GIVe is that it only acts on one subset of heterotrimeric G-proteins, those containing Gαi subunits.
Nevertheless, potential applications are still broad, as Gi proteins control processes as diverse as inhibitory neuromodulation, opioid action, or heart rate, among many others.
Reagents and Antibodies
Unless otherwise indicated, all chemical reagents were obtained from Sigma or Fisher Scientific. Fluorescein di-β-D-galactopyranoside (FDG) was from Marker Gene Technologies, and the protein inhibitor mixture was from Sigma (catalog no. S8830). Leupeptin, pepstatin, and aprotinin were from Gold Biotechnology. All restriction endonucleases and E. coli strain BL21(DE3) were from Thermo Scientific. E. coli strain DH5α was purchased from New England Biolabs. Pfu ultra DNA polymerase used for site-directed mutagenesis was purchased from Agilent. [γ-32 P]GTP was from Perkin Elmer. Mouse monoclonal antibodies raised against αtubulin (T6074), FLAG tag (F1804) or His tag (H1029) were from Sigma. Mouse monoclonal antibody raised against hemagglutinin (HA) tag (clone 12CA5, #11583816001) was obtained from Roche. Mouse monoclonal antibody raised against MYC tag (9B11, #2276) was from Cell Signaling. Rabbit polyclonal antibody raised against Gαs (C-18, sc-383) was purchased from Santa Cruz Biotechnology. Goat anti-rabbit Alexa Fluor 680 (A21077) and goat anti-mouse IRDye 800 (#926-32210) secondary antibodies were from Life technologies and LiCor, respectively.
Plasmid Constructs
The parental LOV2GIV sequence was obtained as a synthetic gene fragment from GeneScript and subsequently amplified by PCR with extensions at the 5' and 3' ends that made it compatible with a ligation independent cloning (LIC) system (Stols et al., 2002). For the bacterial expression of GST-LOV2GIV constructs, we inserted the LOV2GIV sequence into the pLIC-GST plasmid kindly provided by J. Sondek (UNC-Chapel Hill) (Cabrita et al., 2006). Two Information. For the yeast expression of LOV2GIVe constructs, we inserted the LOV2GIVe sequence into two different versions of a previously described pLIC-YES2 plasmid (Coleman et al., 2016). Both versions contain a MYC-tag sequence cloned upstream of the LIC cassette between the HindIII and KpnI sites, but in one of the two plasmids it was preceded by a sequence encoding the first 9 amino acids of Saccharomyces cerevisiae Gpa1, a previously validated membrane-targeting sequence (Parag-Sharma et al., 2016;Song et al., 1996). For the mammalian expression of LOV2GIVe constructs, we inserted the LOV2GIVe sequence into a modified pLIC-myc plasmid ( (Cabrita et al., 2006), kindly provided by J. Sondek (UNC-Chapel Hill, NC)), in which a sequence encoding the first 11 amino acids of Lyn, a previously validated membrane-targeting sequence (Inoue et al., 2005;Parag-Sharma et al., 2016), was inserted in the AflII/KpnI sites upstream of the MYC-tag. The resulting sequence preceding the first amino acid of LOV2GIVe is MGCIKSKGKDSGTELGSMEQKLISEEDLGILYFQSNA (bold= Lyn11, underline= MYC-tag). Cloning of the pET28b-Gαi3 and pET28b-Gαo plasmids for the bacterial expression of rat His-Gαi3 or rat His-Gαo (isoform A), respectively, have been described previously (Garcia-Marcos et al., 2010;Garcia-Marcos et al., 2009). Plasmid pLIC-Gαi1(int.6xHis) for the bacterial expression of rat Gαi1 with an internal hexahistidine tag at position 120 was generated by PCR amplification from pQE6-Gαi1(int.6xHis) ( (Dessauer et al., 1998), kindly provided by Carmen Dessauer, University of Texas, Houston) and insertion at NdeI/BglIIsites of the pLIC-His plasmid (Stols et al., 2002). The plasmid for the bacterial expression of rat His-Gαi2 (pET28b-Gαi2) was generated by inserting the Gαi2 sequence into the NdeI/EcoRI of pET28b. Plasmids for expression of FLAG-Gαi3 (rat, p3XFLAG-CMV10-Gαi3, N-terminal 3XFLAG tag) or Gαs (human, pcDNA3.1(+)-Gαs) in mammalian cells were described previously (Beas et al., 2012;Garcia-Marcos et al., 2009). Plasmids for the expression of Gαq-HA (mouse, pcDNA3-Gαq-HA, internally tagged) or Gα12-MYC (mouse, pcDNA3.1-Gα12-MYC, internally tagged) in mammalian cells were kindly provided by P. Wedegaertner (Thomas Jefferson University) (Wedegaertner et al., 1993) The plasmid for the expression of M4R was obtained from the cDNA Resource Center at Bloomsburg University (pcDNA3.1-3xHA-M4R, cat# MAR040TN00).
Steady-state GTPase Assay
This assay was performed as described previously
Yeast Strains and Manipulations
The previously described (Cismowski et al., 1999) Saccharomyces cerevisiae strain (Cismowski et al., 2002;Cismowski et al., 1999;Maziarz et al., 2018). Plasmid transformations were carried out using the lithium acetate method. CY7967 was first transformed with a centromeric plasmid (CEN TRP) encoding the LacZ gene under the control of the FUS1 promoter (FUS1p), which is activated by the pheromone response pathway. The FUS1p::LacZ-expressing strain was transformed with pLIC-YES2 plasmids (2µm, URA) encoding each of the LOV2GIV constructs described in "Plasmid Constructs". Double transformants were selected in synthetic defined (SD)-TRP-URA media.
Individual colonies were inoculated into 3 ml of SDGalactose-TRP-URA and incubated overnight at 30°C to induce the expression of the proteins of interest under the control of the galactoseinducible promoter of pLIC-YES2. This starting culture was used to inoculate 20 ml of SDGalactose-TRP-URA at 0.3 OD600. Exponentially growing cells (~0.7-0.8 OD600, 4-5 hours) were pelleted to prepare samples for "β-galactosidase Activity Assay" and "Yeast Spot Growth Assay" described below and for preparing samples for immunobloting as previously described Maziarz et al., 2018). Briefly, pellets corresponding to 5 OD600 were washed once with PBS + 0.1% BSA and resuspended in 150 µl of lysis buffer (10 mM Tris-HCl, pH 8.0, 10% (w:v) trichloroacetic acid (TCA), 25 mM NH 4 OAc, 1 mM EDTA). 100 µl of glass beads were added to each tube and vortexed at 4°C for 5 min. Lysates were separated from glass beads by poking a hole in the bottom of the tubes followed by centrifugation onto a new set of tubes. The process was repeated after the addition of 50 µl of lysis buffer to wash the glass beads. Proteins were precipitated by centrifugation (10 min, 20,000 g) and resuspended in 60 µl of solubilization buffer (0.1 M Tris-HCl, pH 11.0, 3% SDS). Samples were boiled for 5 min, centrifuged (1 min, 20,000 g) and 50 µl of the supernatant transferred to new tubes containing 12.5 µl of Laemmli sample buffer and boiled for 5 min.
Enzymatic activity was calculated as arbitrary fluorescent units (a.f.u.) per minute (min).
Bioluminescence Resonance Energy Transfer (BRET)-based G Protein Activation Assay-
BRET experiments were conducted as described previously (Maziarz et al., 2018) . HEK293T cells (ATCC®, CRL-3216) were seeded on 6-well plates (~400,000 cells/well) coated with gelatin and after one day transfected using the calcium phosphate method with plasmids encoding for the following constructs (DNA amounts in parenthesis): Venus(155-239)-Gβ 1 (VC-Gβ 1 ) (0.2 μg), Venus(1-155)-Gγ 2 (VN-Gγ 2 ) (0.2 μg) and Gαi3 (1 μg) mas-GRK3ct-Nluc (0.2 μg) along with mLOV2GIVe constructs in the amounts indicated in the corresponding figure legends. Approximately 16-24 hours after transfection, cells were washed and gently scraped in room temperature PBS, centrifuged (5 min at 550g) and resuspended in assay buffer (140 mM NaCl, 5 mM KCl, 1 mM MgCl 2 , 1 mM CaCl 2 , 0.37 mM NaH 2 PO 4 , 20 mM HEPES pH 7.4, 0.1% glucose) at a concentration of 1 million cells/ml. 25,000-50,000 cells were added to a white opaque 96-well plate (Opti-Plate, Perkin Elmer) and mixed with the nanoluciferase substrate Nano-Glo (Promega cat# N1120, final dilution 1:200) for 2 min before measuring luminescence signals in a POLARstar OMEGA plate reader (BMG Labtech) at 28 ºC. Luminescence was measured at 460 ± 40 nm and 535 ± 10 nm, and BRET was calculated as the ratio between the emission intensity at 535 ± 10 nm divided by the emission intensity at 460 ± 40 nm. For the activation of M4R in Fig. 3B, cells were exposed to 100 µM carbachol for 4 min prior to measuring BRET. For measurements shown in Fig. 3B and Fig. 4B, BRET data are presented as the difference from cells not expressing LOV2GIVe constructs. For kinetic BRET measurements shown in Fig. 4A, luminescence signals were measured every 0.24 seconds for the duration of the experiment. The illumination pulse was achieved by switching from the luminescence read mode to the fluorescence read mode of the plate reader, with the following settings: 485 ± 6 nm filter, 200 flashes (~1.5 s). After the pulse of illumination, measurements were returned to luminescence mode with the same settings as prior to illumination. BRET data was corrected by subtracting the BRET signal baseline (average of 30 seconds pre-light pulse) and then subjected to a smoothening function (second order, four neighbors) in GraphPad for presentation. At the end of some BRET experiments, a separate aliquot of the same pool of cells used for the luminescence measurements was centrifuged for 1 min at 14,000 g and pellets stored at -20 °C. To prepare lysates for immunoblotting, pellets were resuspended in lysis buffer (20 mM HEPES, pH 7.2, 5 mM Mg(CH3COO) 2 , 125 mM K(CH3COO), 0.4% Triton X-100, 1 mM DTT, and protease inhibitor mixture). After clearing by centrifugation at 14,000 g at 4 °C for 10 min, protein concentration was determined by Bradford and samples boiled in Laemmli sample buffer for 5 min before following the procedures described in "Immunoblotting".
Yeast Spot Growth Assay
Cells bearing LOV2GIVe constructs growing exponentially in SDGalactose media were pelleted as described above ("Yeast Strains and Manipulations"), and resuspended at equal densities. Equal volumes of each strain were spotted on agar plates in 4 identical sets. Two of the sets were seeded on SDGalactose-TRP-URA plates with histidine and the other two sets were seeded on SDGalactose-TRP-URA-HIS (supplemented with 5 mM 3-amino-1,2,4-triazole).
From each one of the two pairs of sets, one of the plates was exposed to a homemade array of blue LED strips positioned approximately 12 cm above the plates (~2,000 Lux as determined by a Trendbox Digital Light Meter HS1010A) whereas the other one was incubated side by side under the same light but tightly wrapped in aluminum foil. Plates were incubated simultaneously under these conditions for 4 days at 30 °C and then imaged using an Epson flatbed scanner.
Immunoblotting
Proteins were separated by SDS-PAGE and transferred to PVDF membranes, which were blocked with 5% (w:vol) non-fat dry milk and sequentially incubated with primary and secondary antibodies. For protein-protein binding experiments with GST-fused proteins, PVDF membranes were stained with Ponceau S and scanned before blocking. The primary antibodies used were the following: MYC (1:1,000), His (1:2,500), FLAG (1:2,000), α-tubulin (1:2,500), HA (1:1,000) and Gαs (1:500). The secondary antibodies were goat anti-rabbit Alexa Fluor 680 (1:10,000) and goat anti-mouse IRDye 800 (1:10,000). Infrared imaging of immunoblots was performed using an Odyssey Infrared Imaging System (Li-Cor Biosciences). Images were processed using the ImageJ software (NIH) and assembled for presentation using Photoshop and Illustrator softwares (Adobe). . Right, diagram depicting the design principle of the photoswitchable G-protein activator LOV2GIV. In the dark conformation (D, which is mimicked by the LOV2 C450S mutant), the C-terminus of LOV2 forms an α-helix (Jα) and the GBA motif is not readily accessible for G-proteins. In the lit conformation (L, which is mimicked by the LOV2 mutant I539E), the C-terminus of LOV2 is more disordered and the GBA motif becomes accessible for G-proteins, which in turn are activated upon binding to the GBA motif. (B) LOV2GIV (L) binds Gαi3 better than LOV2GIV (D). Approximately 20 µg of the indicated purified GST-fused constructs were immobilized on glutathione-agarose beads and incubated with 3 µg (~300 nM) of purified His-Gαi3. Resin-bound proteins were eluted, separated by SDS-PAGE, and analyzed by Ponceau S-staining and immunoblotting (IB) with the indicated antibodies. Input= 10% of the total amount of His-Gαi3 added in each binding reaction. (C) Ribbon representation of a LOV2GIV structure homology model generated using the I-TASSER server. On the left, the model is colored blue for LOV2 and red for GIV, whereas on the right it is colored according to solvent accessibility. Selected GIV residues known to be important for G-protein binding (L1682, F1685, L1686) Kalogriopoulos et al., 2019) Approximately 20 µg of the indicated purified GST-fused constructs were immobilized on glutathione-agarose beads and incubated with the lysates of HEK293T cells expressing the indicated G-proteins (FLAG-Gαi3, Gαs, Gαq-HA and Gα12-MYC on the left panels) or purified His-tagged proteins (3 µg, ~300 nM of His-Gαi1, His-Gαi2, His-Gαi3 and His-Gαo on the right panels). One representative experiment of at least three is shown. (C) LOV2GIVe (L), but not LOV2GIVe (D), increases Gαi3 activity in vitro. Steady-state GTPase activity of purified His-Gαi3 was determined in the presence of increasing amounts (0-2 µM) of purified GST-LOV2GIVe (L) (black) or GST-LOV2GIVe (D) (red) by measuring the production of [ 32 P]Pi at 15 min. Results are the mean ± S.D. of n=2. In the engineered strain, no pheromone responsive GPCR (Ste3) is expressed, the yeast G-protein Gpa1 is replaced by human Gαi3, and downstream signaling does not lead to growth arrest but promotes the activation of a reporter gene (LacZ) under the control of the pheromonesensitive, G-protein-dependent promoter of FUS1 (FUS1p). Right, membrane-anchored LOV2GIVe (mLOV2GIVe), but not its untargeted parental version, leads to strong G-protein activation only in the lit conformation. Yeast strains expressing the indicated LOV2GIVe constructs ((D), (L) or wt) or an empty vector (-) were lysed to determine β-galactosidase activity using a fluorogenic substrate (mean ± S.E.M., n=3) and to prepare samples for immunoblotting (one experiment representative of 3 is shown). (B) Left, diagrams showing the principle for the G-protein activity biosensor used in this panel. Upon action of a GPCR (top) or mLOV2GIVe (bottom), G-protein activation leads to the release of YFP-tagged Gβγ from Gαi, which then can associate with Nluc-tagged GRK3ct and results in the subsequent increase in BRET. Right, mLOV2GIVe (L) but not mLOV2GIVe (D), leads to increased G-protein activation similar to a GPCR as determined by BRET. BRET was measured in HEK293T cells transfected with the indicated amounts of mLOV2GIVe plasmid constructs along with the components of the BRET biosensor and the GPCR M4R. M4R was stimulated with 100 µM carbachol. Results in the graph on the top are expressed as difference in BRET (∆BRET) relative to cells not expressing mLOV2GIVe (mean ± S.E.M., n=3). One representative immunoblot from cell lysates made after the BRET measurements is shown on the bottom. . BRET was measured in HEK293T cells transfected with 2 µg of the indicated mLOV2GIVe plasmid constructs along with the components of the BRET biosensor described in Fig. 3. Results in the graph on the top are expressed as difference in BRET (∆BRET) relative to cells not expressing mLOV2GIVe (mean ± S.E.M., n=5). One representative immunoblot from cell lysates made after the BRET measurements is shown on the bottom. | 5,765.2 | 2020-08-19T00:00:00.000 | [
"Engineering",
"Biology"
] |
Numerical investigation of plasmonic bowtie nanorings with embedded nanoantennas for achieving high SEIRA enhancement factors
This paper presents the numerical investigation of several complex plasmonic nanostructures — bowtie nanoring and crossed-bowtie nanoring nanoantennas with embedded bowtie nanoantennas and crossed-bowtie nanoantennas — for surface enhanced infrared absorption (SEIRA) spectroscopy-based substrates. The proposed nanostructures exhibit substantially large SEIRA enhancement factor (∼8.1 × 105) compared to previously reported enhancement factor values for bowtie nanoantennas or nanoring antennas. The plasmonic properties of the proposed nanostructures have been studied by the numerical evaluation of the near-field electromagnetic enhancement at resonant plasmon mode excitation wavelengths in the mid-IR spectral regime. The highest SEIRA enhancement of ∼8.1 × 105 occurs at a wavelength of ∼6800 nm (6.8 μm). A substantial electric field enhancement as large as ∼375, corresponding to SEIRA EF of ∼1.4 × 105 is noted even when the minimum gaps between the plasmonic nanostructures is as large as 10 nm, which can easily be fabricated using the conventional nanolithography techniques. The occurrence of several electric field hotspots due to the presence of plasmonic nanoantennas embedded inside the nanorings was observed, as the electric fields are enhanced in the vicinity of the plasmonic nanostructures being proposed. The multiple electric field hotspots in the proposed nanostructures can lead to larger average electric field enhancement as well as the average SEIRA enhancement for these substrates. Moreover, by embedding plasmonic nanoantenna structures inside the bowtie nanorings and crossed-bowtie nanorings, large spectral tunability of plasmon resonance wavelengths is achieved in the spectral regime from 4 μm to 8 μm. This is done by varying a larger number of spectral parameters that are present in these complex nanostructures. This paper also reports a novel configuration of crossed-bowtie nanoring plasmonic structure exhibiting less polarization dependence of the SEIRA enhancement factor. This structure also exhibits tunability of hotspot positions when the direction of the polarization of the incident light is rotated. The proposed structures in this paper can be fabricated by the state-of-the-art nanofabrication technologies. The proposed structures could find potential applications in chemical and biological sensing and biochemical detection of analyte molecules.
Light is manipulated through the phenomena of surface plasmon resonance (SPR) and localized surface plasmon resonance (LSPR) in engineered plasmonic nanostructures. This results in strong confinement of light beyond the diffraction limit into subwavelength dimensions and strong local field enhancements. SPR and LSPR are optical phenomena based on resonant oscillations of conduction electrons at the interface of metals and dielectrics that are induced by incident optical radiation. These phenomena have been extensively applied for the detection of molecular binding interactions. These electron oscillations could be propagating in nature on a planar interface, also known as surface plasmon polaritons (SPPs), or confined to subwavelength dimensions of the engineered plasmonic nanostructures, known as localized surface plasmons (LSPs).
Excitation of plasmon resonances in plasmonic nanostructures has been employed for applications pertaining to surface enhanced spectroscopies (SES) and predominantly for surface enhanced Raman scattering (SERS). The demand of the existing biomedical research relies on substantially improving analytical sensitivity of detecting single molecules under native physiological conditions. SERS is an efficient technique for the detection of pathogens and other molecules present at extremely low concentrations. Such sensitive detections can help in early-stage detection of cancers and other diseases. A complementary spectroscopic technique to SERS is surface enhanced infrared absorption (SEIRA), which was first observed by Hartstein et al [32]. Over the past few years, SEIRA has been applied for observing the dipole-active vibrational modes of organic chemical or biological molecules that characteristically absorb in the mid-infrared spectral regime. Hence, it has found immense applicability in sensing of chemical and biological molecules of interest.
Detailed investigations of the technique by Osawa et al [33] revealed that the enhancement mechanism in SEIRA is similar to that in SERS and that the SEIRA enhancement arises mainly due to electromagnetic enhancement. In SEIRA, the molecular vibrational modes with dipole moments perpendicular to the substrate surface are preferentially enhanced [34]. While the electromagnetic enhancement factor (EF) in SERS is approximately proportional to the fourth power of the electric field enhancement (which is the ratio of the magnitude of the optical electric field in the vicinity of the analyte molecules to the magnitude of the electric field of the incident radiation), the SEIRA enhancement factor is proportional to the electric field intensity enhancement, i.e. the square of the electric field enhancement [35]. However, while the typical Raman scattering cross-section is within the range ∼10 -27 -10 -30 cm 2 [36], the infrared absorption cross-section is many orders higher, typically∼10 -20 cm 2 [37] suggesting robust efficacy of the technique even for subtle SEIRA enhancements. The latest advances and principle mechanisms of SERS and SEIRA have been enumerated in the latest report by Wang et al [24].
The intrinsic property of substantially enhanced electromagnetic field intensity in plasmonic active metalbased nanostructures due to the resonant plasmon mode excitation has immensely been exploited for enhanced light-matter interactions. Bowtie nanoantennas (BNAs) have been employed for several plasmonic applications due to the characteristic electric field confining geometry which is known to provide higher intensities of localized electromagnetic field and better spatial confinement compared with other nanoantenna designs, and thus is promising for high performance photonic devices and applications such as optical tweezers [38], single molecule fluorescence enhancement [39], bowtie driven Yagi-uda nanoantenna [40] and graphene photonic devices [41].
The past decade has seen several reports based on experimental and theoretical investigations of BNA-based surface enhanced spectroscopies such as SERS [42][43][44][45][46], which is widely studied and is comparatively a more mature spectroscopic technique than SEIRA. Further, nanoring or nanocavity based plasmonic nanostructures have also been extensively studied and employed for relevant photonic applications other than SES [47][48][49][50]. The transmittance properties of contour BNAs in mid-infrared have been studied by Yang et al [51]. Plasmonic nanoantennas are highly desirable in many biomedical optics and biophotonic applications. More specifically, they have been applied for enhanced absorption spectroscopy in the near-infrared (NIR) and mid-infrared (MIR) spectral regimes for detection of biomolecules of interest. As the resonant wavelengths of nanoantennas are directly proportional to antenna lengths, nanoantennas operating in the NIR or MIR spectral regimes can have dimensions in the micrometer scale, which makes it difficult to incorporate these nanoantennas into nanoscale devices [52]. Hence, several research groups have strived to red-shift plasmon resonances of the plasmonic nanoantennas to the NIR or MIR spectral regimes to ensure their application to enhanced NIR and MIR absorption spectroscopy while keeping the dimensions of the nanoantennas in the nanometer scale.
Although the resonance wavelengths of the nanoantennas can be slightly shifted by changing the dielectric constant of the material around the nanoantennas [52], engineering the geometry of the plasmonic nanoantennas has been found to be effective means of tuning the plasmon resonance wavelengths of the nanoantennas over a wide range of wavelengths. Contour BNAs with tunable optical response with variable contour thickness was proposed by Sederberg and Elezzabi [3]. Further, a novel Sierpinski fractalization concept in plasmonic BNAs for optimizing the nanoantenna optical response in NIR or MIR spectral regimes was proposed in [52,53]. S Cakmakyapan et al [54] have demonstrated localized near-infrared electromagnetic field enhancement by incorporating the same concept of Sierpinski fractalization in a conventional BNA. These works were intended to optimize the plasmonic response of nanoantennas for operation over a wide spectral range including the NIR and MIR spectral regimes. Hui-Hsin Hsiao et al [55] have modified the contour bowtie antenna geometry to further red-shift the nanoantenna optical response. The resonant mode excitation and plasmon hybridization of plasmonic gold contour BNAs were investigated by Nien et al [56]. Hu et al in their work [57] have shown by FEM calculations, that plasmonic BNAs support both bonding and anti-bonding modes and exhibit enhanced electric field intensities in the bowtie gaps. Hollow cavity contour plasmonic BNAs can align electric dipoles on both outer and inner surfaces of the nanoring, thereby effectively funneling the incident electric field inside the nanoscale gap regions.
There have been several interesting reports on metallic nanostructures and nanoantennas based SEIRA substrates for mid-infrared to far-infrared sensing. For instance, aluminium cross-antennas have been employed for detection of vibrational resonances in mid-infrared region of the electromagnetic spectrum [58]. Yin et al [59] have employed high aspect ratio gold nanonails for investigating the SEIRA enhancement mechanism in these nanostructures. They have reported a high enhancement factor of 108000 in the far infrared spectral range. Neubrech et al [60] have enhanced molecular infrared vibrations using periodic array of gold nanoantennas. Enhanced SEIRA enhancement factor of the order of∼10 5 using nanoantenna dimers with nanoscale gaps of∼3 nm has been reported by Huck et al [61]. Aouani et al [62] have also reported broadband detection of molecular vibrational modes in the mid-infrared spectrum using log-periodic antennas with large SEIRA enhancement. Further, Wu et al [63] have demonstrated distinct absorption of infrared vibrational modes in Fano-resonant metamaterial in the wavelength range∼4.7-7.5 μm. There are several other reports for SEIRA sensing in mid-infrared spectral regime using distinct shapes of nanostructures such as fan-shaped gold nanoantennas [64], linear rod nanoantennas [65], and gold cross nanoantennas [66].
In general, there has always been an immense interest in infrared absorption spectroscopy and detection and quite recently, several intriguing reports on infrared detection have also surfaced. Fathi et al [67] in their work have developed CuS nanocrystals as infrared photoactive material which exhibits enhanced detectivity and responsivity by manipulating optical and electrical properties of the developed nanocrystals. In another report by Huang et al [68], the authors have demonstrated that the quantum mechanical effects due to subnanometer nanocavity gaps in nanoantennas elevates infrared absorption detection of molecular moieties by blue-shifted plasmonic resonances due to the inherent quantum tunnelling effects in such subnanometer gap nanoantennas. Further, Gao et al [69] have developed a generalized classical model of SEIRA spectroscopy that shows that mid-IR surface plasmon resonances can be used for detection of vibrational bands. SEIRA spectroscopy has also been incorporated to monitor protein denaturation in the infrared regime using engineered plasmonic metasurfaces [70]. In a latest report by Najem et al [71] enhanced infrared sensing is demonstrated using aluminium nanostructured bowties on a metal-insulator-metal (MIM) platform.
In this work, we have demonstrated by numerical calculations several configurations of novel geometries of bowtie nanorings and crossed-bowtie nanorings with embedded bowtie nanoantennas and crossed-bowtie nanoantennas based SEIRA substrates that exhibit substantially large SEIRA electromagnetic EF (∼8.1×10 5 ) compared to previously reported conventional bowtie nanoring (∼2025) [72]. This value of SEIRA electromagnetic EF is larger than what has been in previous reports of theoretical calculations of the SEIRA enhancement factor. By employing numerical Finite Difference Time Domain (FDTD) simulations, we have investigated the near field electric field enhancement and electric field distribution patterns at resonant plasmon mode excitation wavelengths and have found that the electric fields are significantly enhanced in the vicinity of the plasmonic nanostructures being proposed in this paper. The electric field intensity enhancement values were obtained in the vicinity of the bowtie nanoring and crossed-bowtie nanoring nanoantennas with embedded bowtie nanoantennas and crossed-bowtie nanoantennas for near-IR and mid-IR spectral regimes (2000 nm to 8000 nm) for detection of chemical and biological molecules of interest. Further, we demonstrate the occurrence of several electric field hotspots present in the proposed plasmonic nanostructures compared to the conventional bowtie nanoring nanoantenna [72] in which the electric field hotspot is only confined between the central gap of the nanoantenna. The proposed structures demonstrate multiple spatial regions of enhanced electric field hotspots owing to the presence of embedded plasmonic nanostructures in the nanoring cavity. The multiple spatial regions of enhanced electric field hotspots in the proposed nanostructures with embedded plasmonic nanoantennas can lead to larger average electric field enhancement as well as the average SEIRA enhancement for these substrates. Embedding plasmonic nanoantennas inside the nanoring bowtie nanoantennas, leads to a change in the polarizing ability of the nanoantennas, which allows tunability of the plasmon resonances of the consolidated structures. Moreover, by embedding plasmonic nanoantenna structures inside the bowtie nanorings, we can achieve large spectral tunability of plasmon resonance wavelengths by varying a larger number of spectral parameters that are present in these complex nanostructures. We also report crossed-bowtie nanoring plasmonic structures that have significantly less polarization dependence compared to bowtie nanoring structures. Moreover, the crossed-bowtie nanoring plasmonic structures allow the tunability of the hotspot positions when the direction of the polarization of the incident light is rotated. The proposed bowtie nanoring and crossed-bowtie nanoring plasmonic nanoantennas with embedded nanostructures can be fabricated by employing electron beam lithography [73] followed by electron beam deposition of gold and subsequent lift-off. These nanostructures can also be fabricated by first employing DC sputter deposition to deposit thin gold films on silica substrates, followed by focused ion beam milling [74] or transmission electron beam ablation lithography [75] for milling out these complex plasmonic nanostructures. A possible fabrication process flow is shown in Fig. S1 (available online at stacks.iop.org/MRX/ 9/096201/mmedia) in the supplementary section.
Numerical finite difference time domain simulations
The numerical analysis of the proposed bowtie nanoring and crossed-bowtie nanoring structures with embedded complex nanostructures has been performed using Finite Difference Time Domain (FDTD) simulations. FDTD [76]is a numerical analysis technique based on Yee's algorithm and is employed in computational electromagnetics for solving differential forms of discretized coupled Maxwell's equations by updating electric and magnetic field vectors to obtain stabilized electromagnetic field behaviour [61]. The details of the equations employed for FDTD modeling of the plasmonic nanostructures proposed in this work are given in the Note S1 of the supplementary section.
The numerical simulations were performed using a commercial FDTD software known as FDTD Solutions from Lumerical Solutions. The software incorporates several dispersion models such as Drude, Lorentz-Drude and Debye models. The simulations are performed with periodic boundary conditions in x and y directions and PML boundary condition along the z-direction. In order to estimate the SEIRA enhancement due to the bowtie nanoring and crossed-bowtie nanoring structures, the electric field intensity enhancement values were obtained in the vicinity of these nanostructures for near-IR and mid-IR spectral regimes (2000 nm to 8000 nm) for detection of chemical and biological molecules of interest. The incident plane wave was polarized in the direction of the axis of the plasmonic nanoring bowtie nanostructures.
FDTD simulations were performed after convergence of mesh size was achieved. After carrying out convergence testing, a mesh size of 1 nm was used in x and y directions, while a mesh size of 4 nm was used in the z-direction. The material constants for gold and SiO 2 were obtained by the dispersion relations provided by Palik [77]. For simulations carried out for larger structures in higher wavelength range, the optical properties of gold were taken from Olmon et al [78]. The FDTD simulations of the proposed structures were carried out by considering a curvature at the bowtie tip [79] as fabrication of extremely sharp bowtie tips is limited by the capabilities of present nanofabrication techniques. The time step in FDTD simulations was selected such that the Courant stability condition [80] was satisfied. The time step stability factor was set to 0.99 corresponding to a time step of 0.00229134 fs. Further, the autoshutoff threshold was set to the Lumerical default value of 10 −5 for ensuring negligible residual energy in the simulation domain. The autoshutoff threshold estimates the energy contained in the simulation domain as a fraction of input power and the autoshutoff criteria was satisfied for all simulations. Once the energy in the simulation region falls below the threshold, it is ensured that the fields completely decay and thus reliable solutions are obtained.
Results and discussion
In this section, we analyze the results obtained by the numerical simulations of the proposed bowtie nanoring and crossed-bowtie nanoring geometries to be employed as efficient SEIRA substrates for applications pertaining to identification of molecular signatures and biochemical detection. The near-field electric field enhancement and electric field distributions in the proposed structures at several resonance wavelengths were numerically investigated. We mainly investigate four nanoring geometries, namely, bowtie nanostructures Fig. S2 of the supplementary section. We note transmission dips around the peak electric field enhancement wavelength, which do not necessarily coincide with the peak electric field enhancement resonance wavelengths. This is possible since the far-field spectral characteristics determine the consolidated behavior of the periodic array. Thus the far-field resonances may deviate from the near-field resonances due to the additional pronounced effect of the periodic coupling in the nanoantenna array.
The specific spectral regime of interest for SEIRA sensing lies in the wavelengths of infrared absorption typically larger than 2.5 μm (specifically between 4 μm and 8 μm). Thus, we investigate spectral tunability for SEIRA sensing applications with variation of L and t, the results of which are shown in figure 2. We note that, as the length of the antenna L is increased (see figure 2(A)), the plasmon modes redshift, which is understandable because there is an increase in the distance between the charges at the opposite interfaces with increase in L, leading to a lower restoring force. From the electric field enhancement spectra at point O, for variation of L, we also note the occurrence of fundamental dipolar resonances at 4900 nm, 5740 nm, 6810 nm and 7930 nm for L values 700 nm, 850 nm, 1000 nm and 1150 nm respectively. This shows a large tunable SEIRA spectrum, with broad resonance peaks of great interest for SEIRA detection. Additionally, from the electric field enhancement spectra at point M for variation of L (see figure 2(B)), we note a distinct asymmetric lineshape at the quadrupolar resonance wavelength. As L increases, this mode is distinctively broadened which could intuitively be explained by the plasmon hybridization. For instance, for L=1150 nm, although we note that the peak electric field enhancement is less compared to that noted at point O, a distinctively broad resonance peak having a spectral width of∼2.7 μm is noted with an electric field enhancement of∼50. . From these spectra we note that the variation of t can also profusely alter the spectral properties and is therefore an important geometric parameter for SEIRA spectral tuning. It can be observed from figures 3(A) and (B) that the plasmonic modes undergo a red-shift as t (the thickness of the nanoring) decreases, which could be explained based on plasmon hybridization [81].
The primary focus of this work is to demonstrate the applicability of the proposed nanostructures for applications pertaining to surface enhanced infrared sensing. We have accordingly designed our structures to work in the desired infrared spectral regime with fundamental resonant mode between∼5000 to∼8000 nm, at which peak electric field enhancement is noted in figures 2 and 3. However, besides the fundamental mode, we also note the occurrence of several higher order modes at lower wavelengths. Harnessing multi-wavelength response on a single plasmonic substrate could find interesting applications in surface enhanced infrared spectroscopy based biosensing. The far-field reflection and transmission spectra of the structure are shown in Fig. S2 of the supplementary section.
We next study the effect of variation of gaps between the bowtie nanorings G on the electric field enhancement spectra at the center of the bowtie nanorings (point O) and at the center of the embedded nanoantennas (point M). The parameters kept constant are l=60 nm, H=40 nm, L=350 nm, and t=20 nm. The electric field enhancement spectra for variation of G at location O and location M are shown in figures 4(A) and (B), respectively. We note that as the gap decreases, the peak electric field intensity increases. With a decrease in the gap G between the arms of the nanoring bowtie nanoantennas, the near-field coupling between the tips of the bowtie nanorings increases. This increases the E-field enhancement in the gap region of the nanoring bowtie nanoantennas. The resonant wavelength at which the peak electric field intensity is noted exhibits a very subtle red-shift with decrease in G (see figure 4(A)).
This red-shift could again be attributed to the fact that when the spacing between the nanostructures reduces, there is a reduction in the restoring force of the conduction electrons. This results in a decrease in the resonance frequency, which implies a spectral red-shift. We can observe from figure 4(A) that as the gap between the bowtie nanorings G decreases from 16 nm to 3 nm, there is a substantial increase in the peak E-field enhancement (∼6800 nm) from∼175 (for G=16 nm) to∼900 (for G=3 nm). This implies that the SEIRA enhancement factor increases from 30635 to 810000 as the gap between the bowtie nanorings G decreases from 16 nm to 3 nm. We observe from figure 4(B) that the E-field enhancement at point M is almost independent of the variation of the gap between the bowtie nanorings G. We also observe that even for a small change in the gap between the bowtie nanorings G from 6 nm to 5 nm, there is a substantial increase in the peak E-field enhancement (∼6500 nm) from∼480 to∼650, which implies a substantial increase in the SEIRA enhancement factor from 230400 to 422500.
From figures 1(D) and (E), we note that there are multiple hotspots present in a plasmonic bowtie nanoring antenna containing embedded nanoantennas (due to redistribution of energy to the gaps in the embedded bowtie nanoantennas). Hence, the proposed structure (plasmonic bowtie nanoring antenna containing embedded nanoantennas) could be preferred for SEIRA detection applications as it produces multiple electromagnetic hotspots in the desired spectral regime and this property is useful not only due to a larger average SEIRA enhancement for the SEIRA substrate but also because it could find useful applications in detection of biomolecules since the target molecules would have more locations of enhanced electric field hotspots. It should be mentioned that a substantial electric field enhancement at the fundamental mode M1 (peak electric field enhancement as large as∼375, corresponding to SEIRA EF∼140625) is noted (see figure 2(A))), while a comparatively less electric field enhancement is noted between the gaps in the embedded bowtie nanostructures (peak electric field enhancement∼70, corresponding to SEIRA EF∼4900). It has to be noted that the gap between the bowtie nanorings G in these nanostructures is 10 nm. It should be noted that when the gap between the bowtie nanorings G decreases to 3 nm, there is a substantial increase in the SEIRA enhancement factor (EF) to 810000 (E-field enhancement being∼900) for the fundamental mode (see figure 4(A)). It has to be noted that while 10 nm gaps between nanostructures can be fabricated easily using the conventional nanolithography techniques (such as E-Beam lithography as shown in Fig. S1), sub-5 nm gaps can also be fabricated using certain fabrication techniques such as extreme UV lithography and TEBAL (transmission electron beam ablation lithography). The effects of varying all other structural parameterssuch as the embedded bowtie nanoantenna gap g, embedded bowtie triangle side length l, and the structure height H are given in the supplementary section (see Figs. S3-S6 of the supplementary section).
Further, to understand the noted behavior of spectral E-field enhancement, we compare our results with the case of solid bowtie geometry case. We note from the enhancement spectra for the solid bowtie geometry (See Fig. S7 of the supplementary section) that there is only one plasmonic mode. This mode is substantially blueshifted compared with that of the fundamental modes of B-B NA. Several modes in B-B NA could occur due to the plasmon hybridization in the bowtie nanoring [82,83] resulting in higher order multipolar response. Although the solid bowtie structure seems to confine higher electric field in the nanogap region of the antenna compared with the proposed B-B NA, we must note that the structure is rather a conventional nanoantenna with only a single electric field hotspot. The proposed structure B-B NA has multiple hotspots of enhanced electric field which can be employed for detection of chemical and biological sensing of target analytes as these molecules have a larger probability for accessing multiple regions of enhanced electric field hotspots and thus enhanced detection and SEIRA sensing. Additionally, with the proposed bowtie nanoring structures, we obtain an extra degree of spectral tunability by varying the contour bowtie nanoring thickness and thus suitable resonances can be utilized in the spectral regime of interest for SEIRA detection. We evaluated the electric field enhancement spectra for bowtie nanoantennas with larger length spans (lying between 800 nm and 1100 nm) for a constant periodicity of 2500 nm and found that the peak electric field enhancement reduces as L is increased (see Fig. S7 of the supplementary section). Although the peak E-field enhancement obtained for these lengths is larger for the case of solid bowtie nanoantennas as compared to ring bowtie nanoantennas (B-B NAs), the peak resonance occurs around 3500 nm in the case of solid bowtie nanoantennas (see Fig. S7 of the supplementary section). Moreover, the E-field enhancement drops below 200 between 5000 and 7500 nm in the case of solid bowtie nanoantennas. As the spectral region of interest for the detection of certain molecules using SIERA lies in the spectral region between 5000 to 7500 nm, the ring bowtie nanoantennas (B-B NAs) are more suitable as they have their peak E-field enhancements (>300) in this spectral region (see figure 2(A)). Moreover, we can observe from figure 2(A) that the peak resonance wavelength in B-B NAs can be tuned from 5000 nm to 8000 nm by varying the value of L.
We next investigate the spectral properties of crossed-bowtie nanostructures embedded bowtie nanoring antenna (CB-B NA) shown in figure 5. The schematic perspective representation of the structure in periodic array arrangement and a single unit cell with labelled key dimensions are shown in figure 5(A) and 5(B) respectively. The electric field enhancement spectra at locations O and M are shown in figures 5(C) and (D). We observe from figure 5(E) that there are several spatially distributed electric field hotspots at the fundamental resonance mode of the structure. This could be due to the several sub-20 nm nanogaps that are present inside the nanoring bowtie nanoantennas besides the nanogap 'G' between the tips of the nanorings of the nanoring bowtie nanoantennasi.e. nanogaps between the arms of the crossed bowtie nanoantennas as well as those between the crossed bowtie nanoantennas and the inner walls of the nanorings of the nanoring bowtie nanoantenna. However, we note that the peak electric field intensity is reduced in a CB-B NA compared with that of a B-B NA (see figure 5(D)). This could be explained on the basis that energy is redistributed and funneled into the several nanogaps present in the CB-B NA structure. The maximum value of electric field enhancement noted in the CB-B NA structure is∼370, corresponding to a SEIRA EF ∼136900.
The spectral properties of bowtie nanostructures embedded crossed-bowtie nanoring antenna (B-CB NA) are shown in figure 6. Figures 6(A) and (B) represent the schematic of the structure in perspective periodic array and unit cell representation respectively. The electric field enhancement spectra at locations O, M, and M * for the structure with embedded nanostructures are shown in figures 6(C) and (D). Further, from the electric field enhancement spectra, we note the existence of several higher order multipolar modes at lower wavelengths which could be ascribed to the plasmon hybridization in the structure due to the bowtie nanorings and the additional crosswise bowtie nanorings with embedded complexes. The peak electric field intensity noted at the fundamental resonance mode at 6400 nm for B-CB NA is∼140 corresponding to a SEIRA EF∼19600 (see figure 6(E)). We note that the electric field is distinctively less in the nano-gaps of the embedded small bowtie structure inside the orthogonal crossed bowtie arms. This is because the structure is simulated for incident light which is horizontally polarized and thus only the horizontal bowtie arms and their embedded antennas confine stronger electric field compared to the crossed-bowtie arms and their embedded antennas. To further demonstrate the effect of polarization of the incident light with color maps of electric field distributions, we have also evaluated a less polarization dependent structure containing crossed-bowtie nanostructures embedded crossed-bowtie nanoring antenna (CB-CB NA) in periodic array which is discussed next. We note from figure 6(E) that there are multiple hotspots present in a plasmonic crossed-bowtie nanoring antenna containing embedded bowtie nanoantennasthese hotspots are present not just between the tips of the plasmonic crossed-bowtie nanoring antennas but also in the gaps at the center of the bowtie nanoantennas.
Next, we study the spectral properties of crossed-bowtie nanostructures embedded crossed-bowtie nanoring antenna (CB-CB NA), which is a less-polarization sensitive structure and can be employed for the relevant applications. The schematic of the structure in perspective periodic array and unit cell representation are shown in figures 7(A) and (B). The spectral lineshapes of the electric field enhancements show the existence of several higher order multipolar resonances again attributed to the plasmon hybridization in the structure (see figures 7(C) and (D)). The electric field distributions of the nanostructure at the resonance wavelength of 6400 nmfor x-and y-polarizations of the incident radiationare shown in figures 7(E) and (F), respectively. The peak electric field intensity enhancement (noted at the fundamental resonance mode at 6400 nm) for CB-CB NA is∼140 corresponding to a SEIRA EF∼19600 for both the x-polarization and the y-polarization of the incident radiation, showing the less polarization dependence of these nanostructures. We note from figures 7(E) and (F) that there are multiple hotspots present in a plasmonic crossed-bowtie nanoring antenna containing embedded crossed-bowtie nanoantennasthese hotspots are present not just between the tips of the plasmonic crossed-bowtie nanoring antennas but also in the gaps at the center of the crossed-bowtie nanoantennas. Moreover, we can observe from figures 7(E) and (F) that the positions of the hotspots inside the crossed-bowtie nanoring plasmonic structures can be tuned by rotation of the direction of the polarization of the incident radiation.
The variations of the structural parameters of the different designs of the plasmonic nanostructuresbowtie nanoring antennas and cross-bowtie ring nanoantennas with embedded bowtie nanostructureswere carried out to obtain the maximum possible SEIRA enhancement factor, to obtain tunability of the plasmon resonance wavelengths from 2 μm to 8 μm spectral regime, to obtain a large number of hotspots for SEIRA based sensing, and to obtain less polarization dependence of the structures. A comparison of SEIRA EF evaluated for the proposed structures in this work with that published in literature is shown in table 1.
Conclusions
In this paper, we have numerically evaluated several novel plasmonic nanostructures of bowtie nanorings and crossed-bowtie nanorings with embedded complex nanostructures which could be employed as SEIRA substrates demonstrating a large SEIRA enhancement factor (∼8.1×10 5 ) compared to previously reported nanostructures including conventional bowtie nanorings without embedded complexes. By the numerical FDTD simulations carried out in the spectral regime from 2 μm to 8 μm, we have investigated the plasmonic properties of the proposed structures such as near-field electric field enhancement and electric field distribution patterns at resonant plasmon mode excitation wavelengths. The highest SEIRA enhancement of∼8.1×10 5 Spatial distribution of the electric field enhancement (in log scale) at the main resonance λ=6474 nm noted from electric field enhancement spectra for x-polarization of the incident radiation. The dimensions of the structure are L=1000 nm, t=20 nm, l=300 nm, G=10 nm, g=20 nm, and H=40 nm. occurs at a wavelength of∼6800 nm (6.8 μm). A substantial electric field enhancement as large as∼375, corresponding to SEIRA EF of∼1.4×10 5 is noted even when the minimum gaps between the plasmonic nanostructures is as large as 10 nm, which can easily be fabricated using the conventional nanolithography techniques. From the numerical calculations, we also noted the occurrence of several higher order modes at lower wavelengths. Further, we have found that the proposed plasmonic structures exhibit the occurrence of several electric field hotspots. This is due to the presence of embedded complexes in the nanoring cavities and substantially enhanced electric fields noted in the vicinity of the plasmonic nanostructures being proposed. The spectral tunability of the plasmon resonance (in the spectral regime from 4 μm to 8 μm) is noted upon variation of structural dimensions of both the nanorings and the embedded complexes in the nanoring cavities. Embedding plasmonic nanoantennas inside the nanoring bowtie nanoantennas also leads to a change in the polarizing ability of the nanoantennas. This allows tunability of the plasmon resonances of the consolidated structures. We also report a novel configuration of crossed-bowtie nanoring plasmonic structure exhibiting less polarization dependence less polarization dependence of the SEIRA enhancement factor as well as tunability of hotspot positions when the direction of the polarization of the incident light is rotated. The proposed structures in this paper can be fabricated by the state-of-the-art nanofabrication technologies. The proposed multiple hotspots based novel configurations of plasmonic bowtie and crossed-bowtie nanorings could be useful for chemical and biological sensing and in the detection of molecular fingerprints.
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
Conflicts of interest
There are no conflicts to declare. (D) Spatial distribution of the electric field enhancement (in log scale) at resonance λ=6400 nm for x-polarization of the incident radiation. (E) Spatial distribution of the electric field enhancement (in log scale) at resonance λ=6400 nm for y-polarization of the incident radiation. The dimensions of the structure are L=1000 nm, t=20 nm, l=300 nm, G=20 nm, g=20 nm, and H=40 nm. | 7,779.8 | 2022-09-08T00:00:00.000 | [
"Physics"
] |
The homogeneous structure in a Cartan space
The homogeneus almost product structure on the Finsler space have Lieviu Popescu studied. In this paper we study the integrability conditions for the homogeneus product structure in Cartan space with Miron connection.
Let (T * M, π, M ) be the contangent bundle, where M is a differentiable, real n-dimensional manifold. If (U, ϕ) is a local chart on M , then the coordinates of a point u = (x, y) ∈ π −1 (U ) ∈ T * M will be denotes (x i , y i ), i, j, . . . = 1, 2, . . . , n. The natural basis of the module X(T * M ) is given by ( ∂ ∂x i , ∂ ∂yi ) = (∂ i , ∂ i ). Given H(x, y) -the metrical function T * M → R, the 2-homogeneus with respect to y i , g ij = 1 2 ∂ i ∂ j H -the metrical tensor, then pair (T * M, g) is Cartan space. The linear connection are given by respectively, such that δ i = ∂ i − L ik ∂ k , and (δ i , ∂ i ) is a local basis of X(T * 0 M ) = X(T * M )\{0} which is called the adapted basis to L ij . The vector fields δ i and ∂ i are 1 and 0 -homogeneous with respect to y i . The tensor of curvature of linear connection L are given by then differential geometric object (L ij , Γ k ij , 0) will be called the Miron connection of Cartan spaces [1].
Let L : (T * M, g) → R a differentiable function which is 1-homogeneous with respect to y i , r > 0 is a constant. We define linear mapping P : Proposition 1.
(a) P is an almost product structure, P 2 = I, (b) P preserves the property of homogeneity of vector fields from X(T * 0 M ). The proof is evident.
Proposition 2. Obtain such the identity
The almost product structure are integrable, if and only if is Nijenhuis tensor equal zero.
there ∇ -the operator of covariant derivative respect the Miron connection.
Proof. Let N be the Nijenhuis tensor of the homogeneous almost product structure P In the adapted basis we have where Analogous where Also where Let N = 0, from (8) follow (4) and (5). The calculation give, as from (4) and (5) follow Eqs. (10) and (12).
Then the following class of Riemanian metrics may be considered on X(T * 0 M ) where Dy i = dy i − L ik dx k . We have We define the two form Φ by the class of almost product structure P [2]: for all vector fields X, Y on X(T * 0 M ).
Theorem 2. The class of almost product structures P is Kähler, if and only if, L is constant.
Proof. From the (16) we have local expression the 2-form Φ: As From the Ricci identity follows R ikh dx k ∧ dx h ∧ dx i = 0, from Eqs. (1) and (2) follows Γ h ik = Γ h ki , and Γ h ik dx k ∧ Dy h ∧ dx i = 0. Then As r > 0, then 1 r + r L 2 = 0. There for DΦ = 0, if and only if, δ k L = 0, ∂ k L = 0.
Corollary 1.
If the almost product structure P is Kähler, it is integrable. | 848.4 | 2013-01-01T00:00:00.000 | [
"Mathematics"
] |
Crafting Adversarial Examples for Neural Machine Translation
Effective adversary generation for neural machine translation (NMT) is a crucial prerequisite for building robust machine translation systems. In this work, we investigate veritable evaluations of NMT adversarial attacks, and propose a novel method to craft NMT adversarial examples. We first show the current NMT adversarial attacks may be improperly estimated by the commonly used mono-directional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks. Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the source and degrades the translation dramatically, would naturally lead to a semantic-destroyed round-trip translation result. We then propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures. Comprehensive experiments demonstrate that the proposed metrics could accurately evaluate the attack effectiveness, and the proposed WSLS could significantly break the state-of-art NMT models with small perturbation. Besides, WSLS exhibits strong transferability on attacking Baidu and Bing online translators.
Introduction
Recent studies have revealed that neural machine translation (NMT), which has achieved remarkable progress in advancing the quality of machine translation, is fragile when attacked by some crafted perturbations (Belinkov and Bisk, 2018;Cheng et al., 2019Cheng et al., , 2020Wallace et al., 2020). Even if the perturbations on inputs are small and imperceptible to humans, the translation quality could be degraded * The four authors contributed equally. † Corresponding author: Kun He.
Input x
John Biden just win the election Trans. y 约翰·拜登刚刚赢得了大选 Ref.
Input x
John Biden just lost the election Trans. y 约翰·拜登刚刚赢得了大选 Table 1: A real example of adversarial generation for Google translation with antonym substitution (i.e., win to lost) which reverses the semantics on the source but preserves the same translation exactly (reported in October, 2020).
dramatically, raising increasing attention to adversarial defenses for building robust machine translation systems as well as its prerequisite researches on building effective NMT adversarial attacks. As character level perturbations usually lead to lexical errors and are easily corrected by spell checking tools (Ren et al., 2019;Zou et al., 2020), in this work, we focus on crafting word level adversarial examples that could maintain lexical and grammatical correctness and hence are more realistic.
An essential issue of crafting NMT adversarial examples is how to define "what is an effective NMT adversarial attack". Researchers have provided an intuitive definition that an NMT adversarial example should preserve the semantic meaning on the source but destroy the translation performance with respect to the reference translation (Michel et al., 2019;Niu et al., 2020). Correspondingly, the attack criteria are proposed as the absolute degradation or relative degradation against the reference translation (Ebrahimi et al., 2018;Michel et al., 2019;Niu et al., 2020;Zou et al., 2020). To craft a perturbation that maintains the semantics as well as grammatical correctness following the above definition and evaluation, a variety of methods to impose word replacements have been proposed in recent studies (Michel et al., 2019;Cheng et al., 2019Cheng et al., , 2020Zou et al., 2020), making it a commonly used paradigm for NMT attacks.
Reference Sentence
Chinese→English Translation Ref.: The chairperson of the conference expressed in a speech that high and new technologies have promoted the development of the nations in asia, europe, and america.
x: 会议主席在发言中认为, 高新技术促进了亚洲和欧 美国家的发展。 y: In his speech, the chairman of the meeting held that high and new technologies have promoted the development of asian and european countries. Ref. × : The chairperson of the conference expressed in a speech that the high-level leadership has promoted the growth of the nations in asia, europe, and america.
x × : 会议主席在发言中称, 高层促进了亚洲和欧美国 家的成长。 y × : In his speech, the chairman of the meeting said that the high-level leadership has promoted the growth of asian and european countries. Ref. √ : The chairperson of the convention expressed in a speech that the high-level leadership has promoted the development of the nations in asia, europe, and america.
x √ : 代表大会主席在发言中称, 高层促进了亚洲和欧 美国家的发展。 y √ : In his speech, the chairman of the npc standing committee said that the high-level leadership has promoted the development of asian and european countries. However, there exist potential pitfalls overlooked in existing researches. First, it is possible to craft an effective attack on the NMT models by reversing the semantics on the source, as illustrated in Table 1 1 . Meanwhile, since the antonyms are potentially in the neighborhood of the victim word in the embedding space, just as the same as the synonyms, it is entirely possible to produce opposing semantics when replacing a word with its neighbors, making the proposed attack method break the definition.
Furthermore, there is a risk of evaluating the attacks directly using the reference translation. Differs to the classification tasks, even if the perturbation is small to be synonymous with the original word in the source, the actual ground-truth reference may be changed due to the substitution. Table 2 illustrates a typical failing adversarial example x × and a successful example x √ , where x × could be falsely distinguished as effective due to the missing of ground-truth reference Ref. × 2 . Obviously, x would be correctly distinguished if we have the actual ground-truth reference of x . However, the actual ground-truth reference of the perturbed input is notoriously difficult to be built beforehand, making the NMT attack hardly to be evaluated veritably.
In this work, in order to craft appropriate NMT adversarial examples, we introduce new definition and metrics for the machine translation adversaries by leveraging the round-trip translation, the process of translating text from the source to target language and translating the result back into the source language. Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the input and degrades the translation dramatically, would naturally lead to a semantic destroying round-trip translation result. Based on our new definition and metrics, we propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures, e.g. RNN and Transformer.
Our main contributions are as follows: • We introduce an appropriate definition of NMT adversary and the deriving evaluation metrics, which are capable of estimating the adversaries only using source information, and tackle well the challenge of missing ground-truth reference after the perturbation.
• We propose a novel black-box word level NMT attack method that could effectively attack the mainstream NMT models, and exhibit high transferability when attacking popular online translators.
NMT Adversary Generation
Let X denote the source language space consisting of all possible source sentences and Y denote the target language space. Given two NMT models, the primal source-to-target NMT model M x→y aims to learn a forward mapping f : X → Y to maximize P (y ref |x) where x ∈ X and y ref ∈ Y, while the dual target-to-source NMT model M y→x aims to learn the backward mapping g : Y → X . After the training, NMT can correctly reconstruct the source sentencex = g(f (x)). In the following, we first give the definition of NMT adversarial examples, then introduce our word substitution based blackbox adversarial attack method.
Definition on NMT Adversarial Examples
Given a subset of (test) sentences T ∈ X and a small constant , we summarize previous works (Belinkov and Bisk, 2018;Ebrahimi et al., 2018;Michel et al., 2019) and give their conception of NMT adversarial examples as follows.
Definition 1 (NMT Adversarial Example). An NMT adversarial example is a sentence in , and S t (·, ·) is a metric for evaluating the similarity of two sentences, and γ (or γ , γ < γ) is threshold we can accept (or refuse) for the translation quality .
A smaller γ indicates a more strict definition of the NMT adversarial example.
In contrast to the adversarial examples in image domain (Szegedy et al., 2014), we argue that taking y ref as the reference sentence for x is not appropriate because the perturbation might change the semantic of x to some extent, causing that Definition 1 is not appropriate. To address this problem, we propose to evaluate the similarity between the benign sentence x and the reconstructed sentencê x, as well as the similarity between the adversarial sentence x and the reconstructed adversarial sentencex . We introduce a new definition of NMT adversarial example basing on the round-trip translation.
Definition 2 (NMT adversarial example). An NMT adversarial example is a sentence in is defined as the adversarial effect for NMT. And, the reconstructedx andx are generated with round-trip translation:x = g(f (x)),x = g(f (x )).
A larger E indicates that the generated sentence x can not be well reconstructed by round-trip translation when compared with the reconstruction quality of the source sentence x. Here α is a threshold ranging in [0, 1] to determine whether x is an NMT adversarial example. A larger α indicates a more strict definition of the NMT adversarial example. In this work, we use the BLEU score (Papineni et al., 2002) to evaluate the similarity between two sentences.
Based on Definition 2, we further provide two metrics, i.e., Mean Decrease (MD) and Mean Percentage Decrease (MPD) to estimate the translation adversaries appropriately. MD directly presents the average degradation of the reconstruction quality, and MPD reduces the bias of the original quality in terms of the relative degradation. The proposed MD is defined as: where N is the number of victim sentences, D i is the decreasing reconstruction quality of the adversarial example x i , denoted as: (2) Similarly, MPD is defined as: where P D i is denoted as: In practice, except for the constraints in Definition 2, adversarial examples should also satisfy the lexical and syntactical constraints so that they are hard for human to perceive. Therefore, the correct word in the source sentence must be replaced with other correct words instead of misspelled word to meet the lexical constraint. Besides, to keep the grammatical correctness and syntax consistency, the modification should not change the syntactic relation of each word in the source sentence.
To meet all the above constraints, we propose a novel NMT adversarial attack method by substituting words with their neighbors selected from the parser filter to generate reasonable and effective adversarial examples.
WSLS Attack
There are two phases in the proposed Word Saliency speedup Local Search (WSLS) attack Figure 1: Illustration of the proposed WSLS attack method. For a source sentence x, we first generate the valid victim locations, substitution candidates, and saliency scores to prepare the attack, then craft an initial adversarial example x by the Greedy Order Greedy Replacement (GOGR) followed by the Word Saliency speedup Local Search (WSLS) to promote the adversarial quality. method. At the first phase, we design initial strategies to obtain an initial example x . At the second phase, we present a local search algorithm accelerated by word saliency to optimize the perturbed example.
Initialization Strategy
Candidates. For a word w i in the source sentence x = {w 1 , . . . , w i , . . . , w n }, where i denotes the position of word w i in the sentence, we first build a candidate set W i ∈ D where D is the dictionary consisting of all the legal words. In this work, we build the candidate set by finding the k closest neighbors in the word embedding space: Then we filter the candidates based on the parsing, as shown in Part A of Figure 1 3 . Note that the combination of them can impose minor shifting on the source so as to meet the lexical and semantic constraints, as discussed in Section 2.1. In our experiments, we use the pretrained mask language model (MLM) to extract the embedding space to follow the black-box setting.
Greedy Substitution. For each position i, we can substitute word w i with w j i ∈ W i to obtain an adversary x = {w 1 , . . . , w j i , . . . , w n }, and evaluate the adversarial effect E(x, x ) by reconstruction. Then we select a word w * i that yields the most significant degradation: It is straightforward to generate an initial adversary through a Random Order Greedy Replacement (ROGR) method, which is to randomly select positions expected to make substitutions, then iteratively replace the word with its neighbors by Eq. 5 on the selected positions in a random order.
However, the initial result has a significant impact on the final result of the local search. If the local search phase starts with a near-optimal solution, it is likely to find a more powerful adversary after the local search process. Therefore, we design a greedy algorithm called Greedy Order Greedy Replacement (GOGR) for the initialization, which is depicted in Part B of Figure 1.
In the GOGR algorithm, at each step we enumerate all possible positions we haven't attacked yet, and for each position we try to substitute word w i ∈ x with word w * i ∈ W i according to Eq. 5, then we choose the best w * among the possible positions, and iteratively substitute words until we substitute enough words.
Word Saliency
To speed up the local search process, we adopt the word saliency, used for text classification attack, to sort the word positions in which the word has not been replaced yet. In this way, we can skip the positions that may lead to low attack effect so as to speedup the search process. For text classification task, Li et al. (2016) propose the concept of word saliency that refers to the degree of change in the output of text classification model when a word is set to the "unknown" token. Ren et al. (2019) incorporate the word saliency to generate adversarial examples for text classification. To adopt the concept of word saliency for NMT, we regard the output of a MLM for the word as a more general concept of word saliency, which is independent of the specific tasks.
. . , w n } and "mask" means the word is masked in the sentence.
Through Definition 3, the higher word saliency represents the lower context-dependent probability, which can be caused by numerous reasonable substitutions or rare syntax structure, indicating weaker word positions that are easier to be attacked.
In this work, as shown in Part C of Figure 1, we calculate the word saliency S(x, w i ) for all positions before the local search phase, making the local search efficiently inquire the word saliency.
Local Search Strategy
In the local search phase, as shown in Part D of Figure 1 and detailed in Figure 2, there are three types of walks, namely saliency walk, random walk and certain walk, used to update x to promote the attack quality.
To explore and exploit the search space, we define some basic operations and walks to evolve the adversaries. A mute operator is to restore an executed perturbation w * i to its original word w i to mutate the adversary. A prune operator is to exclude a portion of candidate locations where the perturbations will not be imposed to narrow down the search area. A tabu operator indicates that the last perturbed location is forbidden to be manipulated in the current iteration. As illustrated in Figure 2, the three operators are utilized in the local search walks (Part D). We interpret the three walks as follows.
Saliency Walk. We first design an efficient walk for the search, called the saliency walk (SW), to make a balanced exploration and exploitation in the neighbourhood of the well initialized solution generated by the aforementioned GOGR algorithm. During the saliency walk, as shown in Figure 2a, at the current iteration (t), we mute each perturbed word to generate a set of partial solutions, sorted in the ascending order of the saliency score, so as to give higher priority to the perturbations with higher word saliency on the locations. Then we prune other unperturbed words according to the descending order of the saliency score, and query candidate substitutions for each of the remaining words. Then candidate adversaries, consisting of the concatenation of each partial solution with each candidate substitution, are evaluated by Eq. 2 iteratively.
To accelerate the saliency walk, we have an early stop strategy: if the current best adversarial effect in the enumeration of the candidate adversaries at the present iteration (t), denoted as pbest (t) = E * , is better than pbest (t−1) (the best adversarial effect at the previous iteration (t − 1)), i.e. pbest (t) ≥ pbest (t−1) , then we terminate the enumeration of the candidates and pass the state of pbest (t) as well as the tabu operator to the next walk, otherwise the state of pbest (t−1) will be passed to the next walk and the tabu location is expired.
Random Walk. To avoid the current adversarial example get trapped in a local optimum, we design an effective mutation walk, called the random walk (RW), to mutate the current solution. During the random walk, as shown in Figure 2b, we randomly mute a perturbed word to generate a partial solution, and query the candidate substitutions for each of the unperturbed words as in saliency walk. Then we concatenate the partial solution with each candidate substitution to build the candidate adver-saries, among which the best solution is used to update pbest (t) . After that, the tabu operator will be forcibly passed to the next walk, reinforcing the exploration ability of the WSLS algorithm.
Certain Walk. To do a sufficient exploitation after the random walk as a mutation, we design the certain walk (CW). As shown in Figure 2c, certain walk is similar to saliency walk but it removes the prune operation to enlarge the neighborhood space.
To trade off the efficiency and search time, we adopt one saliency walk followed by random walk, certain walk, random walk and certain walk, to construct one round of local search, denoted as {SW, RW, CW, RW, CW}, as shown in Part D of Figure 1. Besides, we bring an early-stop-finetune mechanism to the WSLS method. For any walk in WSLS, if there exists an adversarial candidate that updates the historically best adversarial effect, this adversarial candidate will be immediately set as the initial solution to start a new local search. Otherwise, the WSLS will stop after the ending of the current round 4 .
Experimental Setup
We conduct experiments on the Chinese-English (Zh-En), English-German (En-De), and English-Russian (En-Ru) translation tasks. For the Zh→En translation task, we use LDC corpus 5 consisting of 1.25M sentence pairs, and use NIST (MT) datasets 6 to craft the attacks. Following the preprocessing in Zhang et al. (2019), we limit the source and target vocabulary to the most frequent 30K words, remove sentences longer than 50 words from the training data, and use NIST 2002 as the validation set for the model selection. For this translation task, we implement our attacks on two state-of-art word-level NMT models. 1) RNNsearch (Bahdanau et al., 2015) has an encoder consists of forward and backward RNNs each having 1000 hidden units and a decoder with 1000 hidden units. Denote this model as "Rnns." for abbreviation. 2) Transformer comprises six layers of transformer with 512 hidden units and 8 heads in both encoder and decoder, which mimics the hyperparameters in (Vaswani et al., 2017). Denote this model as "Transf." for abbreviation. For the or-acle back-translation (En→Zh), we use a sub-word level transformer as our oracle model which was trained with LDC datasets and then finetuned with the NIST datasets.
For the En→De and En→Ru translation tasks, We use WMT19 test sets to craft the adversaries, and implement our attacks on the winner models of the WMT19 En→De and En→Ru sub-tracks 7 . Specifically, the En→De model and En→Ru model are both subword-level transformer, where a joint byte pair encodings (BPE) with 32K split operations is applied for En→De, and separate BPE encodings with 24K split operations is applied for each language in En→Ru (Ng et al., 2019). We denote these two models as "BPE-Transf." for abbreviation. For the oracle back-translation (De→En, Ru→En), the best submitted NMT models in WMT19 are used as our oracle models which are further finetuned with 90% of the previous WMT test sets and validated with the remaining sets.
As for the reference result, Table 3 and Table 4 show the case-insensitive BLEU scores for forwardtranslation, back-translation, and round-trip translation on the selected language pairs. We observe that the word-level victim models (Rnns. and Transf.) achieve an average BLEU score of 36.71 and 41.55 for Zh→En translation respectively, demonstrating the accuracy of these two models on translating the original Chinese sentences. For the backtranslation, the oracle models achieve an average BLEU score of 82.9 for En→Zh translation, as well as a BLEU score of 54.83 and 57.24 for De→En and Ru→En translations respectively, indicating that the oracle models are reliable enough in the back-translation stage for the source reconstruction. Besides, the reconstruction quality of the victim models are reported in Table 3 and Table 4, where the source sentences are back-translated by the oracle models in the round-trip translation, showing that the source language is reconstructed well enough by the cooperation of forward-translation and oracle back-translation.
Furthermore, to enhance the authenticity of the attack performance, we removed the noisy data, which could not be correctly identified as the corresponding language sentences by online translators, and we also excluded sentences longer than 50 words in the NIST datasets, ensuring that the attack Table 3: Case-insensitive BLEU scores (%) for forward-translation (Zh→En), back-translation (En→Zh), and round-trip translation (Zh→En→Zh) on Zh-En language pair. "AVG" represents the average score of all datasets. results are credible 8 . As for the parameter settings of the attack methods, we use pyltp 9 as the parser checking tool and generate the top 10 nearest parser-filtered words to construct the candidate sets for each word. To generate the word saliency, two state-of-art whole word masking BERT are utilized as the MLM for the Chinese 10 and English 11 languages respectively. And the prune operators implemented in SW and RW will reserve the highest five word saliency locations and their word candidates. Finally, the adversaries are crafted by substituting 20% words.
Attack Results
To demonstrate our proposed WSLS method, we implement AST-lexcial (Cheng et al., 2018) as a black-box baseline, wherein AST-lexcial shares the same idea of random order random replacement. Besides, the naive ROGR method can be considered as another black-box counterpart of the white-box kNN method in Michel et al. (2019) that randomly selects the word positions and greedily selects the neighbor words based on the gradient loss.
As shown in Table 5 and Table 6, both GOGR and WSLS have the MD scores close to the original reconstruction scores for Rnns., Transf., and BPE-Transf., and their attack results are much better than that of AST-lexical as well as ROGR. It shows that both WSLS and GOGR can effectively attack various NMT models under the standard of Definition 2. WSLS is superior to GOGR, indicating that the local search phase can further promote the attack quality. Specifically, the MPD score of WSLS is almost 1.5 higher than that of GOGR, which is more obvious as compared to the MD metric, revealing the rationality of MPD also.
Ablation Study
We do ablation study on the WSLS algorithm in Table 7. Here "Init" is for the method used for initialization, WS indicates whether we use word saliency to speedup the local search, LS indicates whether we use local search or other variants of walk sequence for the local search.
From Table 7 we observe that: 1) The initialization of GOGR exhibits significantly better results than ROGR, and also converges faster than ROGR; 2) WSLS without word saliency speedup, denoted as WSLS 1 , exhibits slightly higher attack results but the running times are much longer than WSLS. Thus, we choose WSLS to have a good tradeoff on attack quality and time.
Transferability
To test the transferability of our method, we transfer our crafted adversarial examples on NIST 2002 dataset to attack the online Baidu and Bing translators. As shown in Table 8, the attack effectiveness is significant. It degrades the reconstruction quality of Baidu and Bing with more than 20 BLEU points, demonstrating the high transferability.
In addition, we provide two adversarial examples in Table 9, generated by WSLS on the Rnns. model, that can effectively attack the online Bing
Related Work
In recent years, adversarial examples have attracted increasing attention in the area of natural language processing (NLP), mainly on text classification (Jia and Liang, 2017;Ren et al., 2019;Wang et al., 2021). For neural machine translation (NMT), there are also some adversary works emerging quickly (Belinkov and Bisk, 2018;Ebrahimi et al., 2018;Michel et al., 2019;Cheng et al., 2019;Niu et al., 2020;Wallace et al., 2020). On the character level, a few adversarial attacks by manipulating character perturbations have been proposed since 2018. Belinkov and Bisk (2018) confront NMT models with synthetic and natural misspelling noises, and show that character-based NMT models are easy to be attacked by character level perturbation. Ebrahimi et al. (2018) propose to attack the character level NMT models by manipulating the character-level insertion, swap and deletion. Similarly, Michel et al. (2019) perform a gradient-based attack that processes words in source sentences to maximize the translation loss. To attack against production MT systems, Wallace et al. (2020) imitate the popular online translators and manipulate the perturbations based on the gradient of the adversarial loss with the imitation models. The above four works also incorporate adversarial training to improve the robustness of NMT.
However, the character level perturbations are hard to be applied into confronting practical NMT models, as these perturbations significantly reduce Baidu: In his speech, the president of the National People's Congress said that high-level leaders have promoted the growth of asian and european countries.
x : : Peterson reiterated that the WHO's main concern is the challenge of preventing outbreaks such as disease and dysentery, these patients may cause thousands of deaths.
Bing: Peterson reiterated that the WHO's main concern is to prevent outbreaks such as disease and dysentery , which can cause thousands of deaths. the readability and also could be easily corrected by spell checkers (Ren et al., 2019;Zou et al., 2020). On the other hand, word level adversaries could maintain lexical and grammatical correctness, which are more realistic but more challenging to generate. Cheng et al. (2018) craft the adversaries with randomly sampled perturbed positions, and then replace the words according to the cosine similarity of the embedding vectors between the original word and the neighbors. Cheng et al. (2019) propose a gradient-based attack method that replaces the original word with the candidates generated by integrated language model. Michel et al. (2019) generate adversaries by substituting the word with its nearest neighbors, which are informed by the gradient of the victim models. (Zou et al., 2020) introduce a reinforced learning based method to craft the attacks following Michel et al. (2019) to define the reward and substitution candidate set.
Existing word level translation attacks are mainly white-box, wherein the attacker can access all the information of the victim model. Besides, there is a risk of guiding the attacks to directly use the degradation of reference translation, since the actual references may be changed by word substitution. Thus, there exists few study on the effective word level attack for NMT, especially in the black box setting. This study fills this gap and sheds light on black-box word level NMT attacks.
Conclusion
We introduce an appropriate definition of adversarial examples as well as the deriving evaluation measures for the adversarial attacks on neural machine translation (NMT) models. Following our definition and metrics, we propose a promising blackbox NMT attack method called the Word Saliency speedup Local Search (WSLS), in which a general definition of word saliency by leveraging the strong representation capability of pre-trained language models is also introduced. Experiments demonstrate that the proposed method could achieve powerful attack performance, that effectively breaks the mainstream RNN and Transformer based NMT models. Further, our method could craft adversaries with strong readability as well as high transferability to the popular online translators. (Devlin et al., 2019), have achieved a powerful initialization for the NMT encoder models. MLM pre-trains the encoder for a better language understanding on the encoded language by randomly masking some tokens in continuous monolingual text streams and predicting these tokens. To predict the masked tokens, the language model pays attention to the relative language parts, which encourages the model to have a better understanding on the language. Inspired by the powerful language understanding ability of the pre-trained language models, and following the black-box setting, we use the pre-trained MLM to estimate the word saliency and build the word embedding space for adversarial attacks.
Back-Translation. There are a lot of works for improving the NMT performance by leveraging the back translation, which uses not only parallel corpus but also monolingual corpus for training the NMT models (He et al., 2016;Lample and Conneau, 2019). Previous works on back-translation demonstrate the ability of the dual NMT models to reconstruct the language. In this work, we observe that the back-translation technique makes it possible to evaluate NMT adversarial attacks without ground-truth references for the perturbed sentences, and we propose to evaluate the proposed NMT attack method basing on the reconstruction results of the original inputs and the perturbed examples. | 6,909 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Biogeography of telomere dynamics in a vertebrate
1 –––––––––––––––––––––––––––––––––––––––– © 2020 The Authors. Ecography published by John Wiley & Sons Ltd on behalf of Nordic Society Oikos This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. Subject Editor: Ken Kozak Editor-in-Chief: Jens-Christian Svenning Accepted 19 November 2020 44: 1–3, 2020 doi: 10.1111/ecog.05286 44 1–3
2 raises immediate questions about its potential implications in eco-evolutionary processes that are prototypical of the integrative phenotype, such as dispersal.
Dispersal is a key process in ecology and evolution, which can influence -and be influenced by -the evolution of virtually all the features of an organism, including behaviour, morphology, physiology and genetics (Clobert et al. 2012, Canestrelli et al. 2016). Here, we ask for the first time whether dispersal-driven processes of historical biogeographic relevance, such as range expansions, can affect telomere dynamics. We hypothesize that, if individual variation in telomere dynamics translates into heritable variation in the capacity to cope with the new demographic and/or environmental conditions encountered during a range expansion, a spatial structure in telomere dynamics might emerge along the range of a recently expanded population. We explore this hypothesis using the Tyrrhenian tree frog Hyla sarda as study species. This is a small, cryptically coloured amphibian endemic to the Tyrrhenian islands, which colonized the Corsica island (i.e. the northern portion of its current range) from the Sardinia island, during a Late Pleistocene range expansion (Bisconti et al. 2011a, b, Fig. 1A). Using a long-term longitudinal common garden experiment (12 months) during which animals have been housed under identical conditions, we compared telomere length and its change over time between individuals sampled in the source area in Sardinia and individuals sampled in the expansion range in Corsica (Supporting information).
We found significant effects of geographic area (F = 10.25, p = 0.002), sampling time (F = 21.59, p < 0.001) and of their interaction (F = 6.60, p = 0.014). Post-hoc analyses showed that tree frogs from both Corsica and Sardinia had similar telomere length at the beginning of the experiment (p = 0.549), while tree frogs from Corsica had longer telomeres than those from Sardinia at the end of the experiment (coeff. estimate ± SE: 0.218 ± 0.057, p = 0.002) (Fig. 1). Telomere length did not change significantly over the experiment in Sardinia tree frogs (p = 0.229), while it increased significantly from the beginning to the end of the experiment in Corsica tree frogs (−0.215 ± 0.050, p = 0.0006; 31.2% elongation on average) (Fig. 1B). Results do not change if four females out of 47 frogs are removed from the models (data not shown). Individual telomere length measured from samples taken at the beginning and at the end of the experiment was significantly repeatable (coefficient of 0.42 with an associated variance of 0.03), meaning that 42% of variance in telomere length during an individual's life could be explained by within-individual consistency. All frogs included in this study survived until the end of the experiment, indicating that there was no selective bias owing to individuals with shorter telomeres having lower chances of survival.
In recent years, telomere dynamics have been studied in a wide range of species (Monaghan et al. 2018). The species, however, might not be the appropriate unit of analysis. Indeed, our study showed, for the first time, that different co-specific populations can show striking differences in telomere dynamics, and that these differences can hardly be explained by distinct environmental conditions experienced by individuals through time (Supporting information). We also did not observe any differences between populations in telomere length at the beginning of the study; rather, these differences emerged along a longitudinal common-garden experiment, indicating that point estimates (e.g. as routinely used when comparing sexes or developmental stages) might provide blurred pictures of interindividual variation in telomere dynamics. Telomeres shorten with age in most organisms studied to date (Tricola et al. 2018), but not in the Tyrrhenian tree frog, as in some other species (Hoelzl et al. 2016, Spurgin et al. 2017). Our longitudinal data covered a non-negligible portion of the expected lifespan of an individual tree frog (one out of approximately three years), so that telomere attrition was a plausible expectation. Instead, we observed no differences between time points in the Sardinia population, and substantial lengthening of telomeres in the Corsica population. Since Corsica population was founded during a recent range expansion from Sardinia (Bisconti et al. 2011a, b), the evolution of the observed differences in telomere dynamics between the two populations should have occurred during the expansion process or later. Previous data showed that levels of both genetic diversity and bioclimatic suitability did not differ appreciably between the studied populations (Bisconti et al. 2011a, b; see also Supporting information). Accordingly, founder events, genetic drift and environmental adaptation following the expansion, appear unlikely as drivers of the observed divergence in telomere dynamics. Hence, we suggest that adaptive processes occurred during the range expansion event have promoted the evolution of this geographic pattern in telomere dynamics.
In vertebrates, telomere length is maintained and restored by the enzyme telomerase (Monaghan and Haussmann 2006). Telomerase activity appears to be particularly relevant for the regulation of telomere dynamics in ectotherms (Olsson et al. 2018). Telomere elongation in Corsica but not Sardinia under common garden conditions suggests differential expression (i.e. re-activation or upregulation) of the telomerase in Corsica in response to these novel conditions. This might imply that the range expansion event promoted plasticity in telomere dynamics, adding perspective to the burgeoning focus on the role of plasticity in evolution (Schwander and Leimar 2011). Since telomere attrition may also be linked to behavioural phenotype (Bateson et al. 2015), it might also be that frogs with a specific behavioural type, related to exploration or boldness, are more common in the sink population (i.e. Corsica; Canestrelli et al. 2016). Whether such increased plasticity is an adaptation to the unpredictability of the novel environments, variation in developmental conditions across populations, life-history/behavioural strategies, or to the non-equilibrium demographic dynamics encountered during the expansion process, is a further intriguing subject for future research. | 1,440.6 | 2020-12-15T00:00:00.000 | [
"Biology",
"Geography",
"Environmental Science"
] |
Celebrating Professor Rajeev K. Varshney's transformative research odyssey from genomics to the field on his induction as Fellow of the Royal Society
Summary Professor Rajeev K. Varshney's transformative impact on crop genomics, genetics, and agriculture is the result of his passion, dedication, and unyielding commitment to harnessing the potential of genomics to address the most pressing challenges faced by the global agricultural community. Starting from a small town in India and reaching the global stage, Professor Varshney's academic and professional trajectory has inspired many scientists active in research today. His ground‐breaking work, especially his effort to list orphan tropical crops to genomic resource‐rich entities, has been transformative. Beyond his scientific achievements, Professor Varshney is recognized by his colleagues as an exemplary mentor, fostering the growth of future researchers, building institutional capacity, and strengthening scientific capability. His focus on translational genomics and strengthening seed system in developing countries for the improvement of agriculture has made a tangible impact on farmers' lives. His skills have been best utilized in roles at leading research centres where he has applied his expertise to deliver a new vision for crop improvement. These efforts have now been recognized by the Royal Society with the award of the Fellowship (FRS). As we mark this significant milestone in his career, we not only celebrate Professor Varshney's accomplishments but also his wider contributions that continue to transform the agricultural landscape.
Introduction
Professor Rajeev Kumar Varshney (Box 1, Figure 1), popularly known by his colleagues as the "Genomics Guru", is globally recognized leader for his work on genome sequencing, harnessing genetic diversity, genomics-assisted breeding, seed systems, and capacity building in developing countries (Figure 2).Over his extensive research career spanning more than two decades, he has made significant contributions to improving food security in Asia and Africa by creating genomic resources and utilizing them for crop improvement in some major "orphan" tropical crops.Professor Varshney is deeply committed to equipping breeders with the tools and resources for genomics-assisted breeding, including the provision of training in cutting-edge techniques and enabling access to affordable genotyping and sequencing technologies.Furthermore, he has been the spearhead for major international initiatives that have delivered genomic resources and superior crop varieties for chickpea, pigeonpea, and groundnut to some of the world's poorest farmers.
Recognizing these efforts Professor Varshney was recently (May, 2023) elected as a Fellow of the Royal Society (UK).In the esteemed 366-year history of the Royal Society, he is the fourth individual from India to be elected from the field of Agricultural Sciences and Forestry, and one of just 13 worldwide.Additionally, he is the fifth from Western Australia to be elected from any discipline.As a tribute to Professor Varshney's and his career, this article traces his remarkable journey from the early days of resolving genetic polymorphism on gels, to cataloguing genome diversity and advancing plant biology, and finally to his recent accolades, thereby celebrating a life committed to pushing the boundaries of scientific exploration.As we, some of his former PhD students, post-docs, colleagues, and collaborators, look back over his career, we hope not only to appreciate the scientific accomplishments of Professor Varshney but also to draw inspiration from the spirit of perseverance and vision he embodies.
Early influences and beginnings
Growing up in the agrarian landscapes of Bahjoi, Uttar Pradesh, India, Professor Varshney's education at Aligarh Muslim University in Uttar Pradesh, India, shaped his understanding of Botany, emphasizing genetics, plant breeding, and molecular biology.During his childhood and early student life, Professor Varshney was inspired by observations of smallholder farmers battling climatic and agronomic challenges, and generated an interest to do research for improving agriculture.Given his early interest in plant genetics, Professor Varshney pursued his PhD under the mentorship of distinguished geneticist Professor PK Gupta.Professor Gupta's example of hard work, commitment, and dedication established the foundations for Professor Varshney's future success (Figure 3).
Following the completion of his PhD, Professor Varshney started his post-doctoral research at IPK, Gatersleben (Germany) under the mentorship of Professor Graner who instilled a passion for research, and the importance of creating a nurturing and supporting environment for all colleagues and collaborators, qualities that later became inherent to Professor Varshney's working style.It was during this time that Professor Varshney attended a scientific conference in Bologna (Italy) in 2003, where he heard a speech by Professor Norman Borlaug.Professor Borlaug's compelling message urging the next generation of scientists to create innovative and sustainable solutions to safeguard food security for the world's poor resonated deeply with Professor Varshney.Inspired by Professor Borlaug's vision, and motivated by his interactions with stalwarts of the Indian Green Revolution, including Professor MS Swaminathan, CIMMYT, Mexico (2007-2013), Global Research Program Director -Grain Legumes (2013-2016), Global Research Program Director -Genetic Gains (2016-2021), and Global Research Program Director Accelerated Crop Improvement (2021-2022).Professor Varshney's significant contributions reached a plateau when he developed invaluable genomic resources for tropical crops.His exceptional work made him a sought-after scientist, and he was headhunted by several universities/institutes from the USA, the UK, Australia, Canada, and China.Professor Swaminathan imparted critical insights on the importance of interdisciplinary research and collaboration in tackling the multifaceted challenges inherent in agriculture.These pivotal experiences and insights played a decisive role in shaping Professor Varshney's approach during his 17-year tenure at ICRISAT and continue to inspire his work at Murdoch University, Australia.
Genomic resources in wheat and barley
Staple crops like wheat and barley, play a pivotal role in providing energy and protein for the rapidly growing global population.Professor Varshney's doctoral studies under the supervision of Professor PK Gupta primarily focused on enhancing the genetic and genomic resources i.e. development of microsatellte markers and their use for trait mapping in wheat to support markerassisted breeding programs (Gupta and Varshney, 2000;Kota et al., 2001;Varshney et al., 2000aVarshney et al., ,b, 2001)).During his time at IPK with Professor Graner, he played a significant role in the development of various genomic resources for barley, including expressed sequence tags (ESTs), molecular markers such as EST-SSRs, EST-SNPs, high-density genetic maps, and a transcript map (which involved integrating genes or transcripts with the genetic map) (Thiel et al., 2003;Varshney et al., 2004Varshney et al., , 2005aVarshney et al., ,b, 2006a,b, c),b, c).Additionally, he contributed to the generation of a physical map of barley (Varshney et al., 2005a).These genomic resources were utilized for mapping Quantitative Trait Loci (QTLs) for agronomic traits through linkage mapping and association mapping approaches, conducting comparative studies in cereals and functional genomics studies for drought tolerance and malting quality in barley.These resources provided a springboard for sequencing the barley genome at IPK.
In 2012, Professor Varshney decoded the genome sequence of pigeonpea (Asha genotype).This landmark effort marked pigeonpea as the first orphan legume crop genome to be sequenced, second only to soybean in food legumes.By integrating the next-generation sequencing technologies with traditional bacterial artificial chromosome end sequences (BAC) based resources and a genetic map, his team mapped ~605.78Mb of the 833.07 Mb genome and identified 48 680 putative genes.The availability of the draft genome sequence heralded pigeonpea improvement initiatives, setting a foundational step for future precision breeding efforts in the crop (Varshney et al., 2012).Building on this momentum, Professor Varshney continued his trailblazing contributions to genomics by leading sequencing initiatives for other important crops like chickpea (Varshney et al., 2013), and the A-progenitor of groundnut (Chen et al., 2016).In 2017, the genome of pearl millet was decoded by his team, providing a resource to develop modern genetic solutions to support over 90 million farmers spanning sub-Saharan Africa, India, and South Asia.The pearl millet genome (Tift 23D2B1-P1-P5) spanned approximately 1.79 Gb and was estimated to harbour 38 579 protein-coding genes.The expansion/contraction analysis of the predicted genes highlighted substantial enrichment for wax biosynthesis genes, which may contribute to heat and drought tolerance in pearl millet (Varshney et al., 2017a).Beyond the initial sequencing efforts, his team also utilized the state-of-the-art Hi-C sequencing approach to add another layer of refinement to these genomes, thereby increasing the accuracy and applicability of these genomes (Garg et al., 2022).Professor Varshney stands at the forefront of implementing advanced technologies.In one of his collaborative endeavours, near-gapless genome assemblies of two widely used soybean cultivars (Williams82 and Lee) were reported using long-read sequencing technologies (Garg et al., 2023a).All the genomic resources generated under Professor Varshney's leadership are publicly available, serving as a rich resource for the global scientific community.These resources have been integrated into breeding programmes worldwide, leading to the development and deployment of crop varieties with improved yields, better disease resistance, and greater adaptability to diverse environmental conditions.
Genome diversity catalogues
Following the successful decoding of reference genomes, Professor Varshney focused his efforts on resequencing projects, firmly establishing himself at the forefront of legume genomics.These efforts were aimed not only to unearth the rich genetic makeup of these important crops but also to understand how genetic variation could be harnessed to address pressing agronomic challenges.In the case of pigeonpea, whole genome resequencing (WGRS) of 292 diverse Cajanus accessions, including breeding lines, landraces, and wild species, led to the identification of 17.2 million variations (15.1 million SNPs, 0.9 million small insertions, and 1.2 million small deletions).By demonstrating the Central Indian state of Madhya Pradesh as the likely centre of origin of pigeonpea, the study helped resolve the long-standing ambiguity surrounding the centre of origin of the crop.The GWAS analysis of these variations highlighted the genomic regions affected by domestication and pinpointed loci associated with phenotypic variation for agronomically important traits relevant to pigeonpea breeding (Varshney et al., 2017b).Turning their attention to chickpea, Professor Varshney and his group re-sequenced 429 lines from 45 countries to generate a variation map of 4.9 million SNPs.This study provided: (a) discernible signs of selection during chickpea breeding, (b) Eastern Mediterranean as the primary centre of origin of chickpea, and (c) marker-trait associations and candidate genes, including TIC, REF6, aspartic protease, cc-NBS-LRR, and RGA3, for drought and heat tolerance (Varshney et al., 2019a).Subsequently, Professor Varshney led a team of 57 researchers from 41 organizations across 10 countries to further investigate the genetics of chickpea.The outcome of this work was a chickpea pangenome (592.28Mb), developed from the genome sequencing data from 3366 diverse chickpea lines (encompassing 3171 cultivated and 195 wild chickpea accessions) spanning 60 countries.The analysis provided details on chickpea's origin and migration routes to various parts of the world, the divergence time of different chickpea species, and the genetic loads/burdens responsible for lowering crop performance.The study also provided superior haplotypes for agronomic traits for undertaking haplotype-based breeding and laid a foundation for genomic prediction and optimal contribution selection for developing superior varieties (Varshney et al., 2021a).This landmark study not only received recognition within the scientific community, but received broader attention from an interested public, featuring in prominent international media outlets, e.g., The New York Times, The Economist, and the BBC.
High-throughput and cost-effective genotyping platforms
The deployment of genomic resources within breeding programmes has been mostly constrained by technological limitations and high costs associated with genotyping.To harness the extensive genomic resources for breeding purposes, there was a critical need for cost-effective, high-throughput genotyping platforms to develop dense genetic maps and conduct QTL analysis.SNP genotyping platforms serve multiple purposes, including estimating genetic diversity, fine mapping, association mapping, foreground selection, genomic selection, and evolutionary research.Professor Varshney and his team at ICRISAT addressed this need by developing low-, medium-, and highdensity SNP arrays in three legume crops: chickpea, pigeonpea, and groundnut.Notable examples include the development of Affymetrix arrays like the AxiomâCicerSNP Array for chickpea (Roorkiwal et al., 2018), AxiomâCajanusSNP Array for pigeonpea (Saxena et al., 2018), and the AxiomâArachisSNP for groundnut (Pandey et al., 2017a), encompassing up to 58 000 SNPs.As a snapshot of the genome-wide variations, these arrays have laid a robust foundation for high-throughput genotyping, proving vital for foundational research and practical breeding applications.Additionally, to conduct cost-effective genotyping for implementing Genomic Selection (GS) in legume breeding, 5000 SNP panels were developed for chickpea and groundnut through a targeted sequencing methodology.To incorporate target SNPs into breeding programmes, GoldenGate and VeraCode assays were also devised to genotype several hundred SNPs in crops such as chickpea and pigeonpea (Roorkiwal et al., 2013;Varshney, 2016).
Genotyping with high-density arrays can incur significant costs, especially when genotyping large breeding populations for purposes like marker-assisted selection (MAS), backcross breeding, and QC (quality control) of parental analysis.In applications where only a few markers are needed, cost-effective and highthroughput genotyping platforms are more suitable.To address this need, Professor Varshney, in collaboration with the International Maize and Wheat Improvement Center (CIMMYT) and International Rice Research Institute (IRRI), formulated a proposal that successfully secured $4 million in funding from the Bill & Melinda Gates Foundation.The High Throughput Genotyping (HTPG) project offered low-cost genotyping services at ~$1.5 per sample for genotyping with 10 SNPs, including DNA extraction (https://cegsb.icrisat.org/high-throughput-genotyping-projecthtpg/).This service now encompasses over 1000 SNPs for 100+ traits and 500+ QC SNPs.In partnership with Intertek, the project extended its services to eight CGIAR centres, 30+ NARS, and four private companies in more than 28 countries, covering over 18 crops (Bohar et al., 2020).Notably, the project significantly reduced costs for CGIAR and NARS by generating genotyping data at only a fraction (approximately one-third to one-fourth) of the original price.Furthermore, Professor Varshney also spearheaded efforts to develop cost-effective, 10 SNP panels in chickpea, pigeonpea, and groundnut to facilitate earlygeneration selection in breeding programs (Varshney et al., 2018).For instance, single-plex, cost-effective platforms like Kompetitive Allele Specific PCR (KASP) assays are key for SNP genotyping in MAS applications.KASP assays have been developed for several thousand SNPs in chickpea, pigeonpea, and groundnut (Hiremath et al., 2012;Khera et al., 2013;Saxena et al., 2012).Some of these markers are currently being utilized as diagnostic indicators for critical traits, such as drought tolerance in chickpea (Barmukh et al., 2022a,b), high oleic acid and foliar fungal disease resistance in groundnut (Pandey et al., 2023) within breeding programmes.
Gene expression atlases for legume crops
Under the guidance of Professor Varshney, advanced technologies like transcriptome sequencing and bioinformatics tools have been adeptly utilized to study the intricate patterns of gene expression across various tissues, developmental stages, and biotic/abiotic conditions (Garg et al., 2023b).Analysis of these patterns, by Professor Varshney's team has unravelled the intricate genetic interactions that underpin vital physiological processes in legumes such as chickpea, pigeonpea, and groundnut.Focusing on chickpea, his team developed the Cicer arietinum Gene Expression Atlas (CaGEA) using the droughtresistant cultivar ICC 4958.A total of 816 million raw reads were obtained from 27 tissues spanning five major developmental stages.The transcriptome data analysis pinpointed genes that play a role in flowering, nodulation, and the development of seeds and roots (Kudapa et al., 2018).Similarly, in pigeonpea, he led the development of the Cajanus cajan gene expression atlas.This atlas facilitated the identification of two regulatory genes, a pollen-specific SF3 and a sucrose-proton symporter, with implications for improving agronomic traits, e.g., seed production and yield (Pazhamala et al., 2017).To understand the development process of groundnut, his team developed the Arachis hypogaea gene expression atlas (AhGEA) for the world's widest cultivated subsp.fastigiata.AhGEA unveiled mechanisms behind complex regulatory networks, including gravitropism and photomorphogenesis, seed development, and oil biosynthesis in groundnut.Additionally, the analysis identified candidate genes associated with allergens, which following functional validation, might be used to develop allergy-free, consumer-friendly groundnut varieties (Sinha et al., 2020).
Multi-omics for understanding stress responses
Understanding the intricate layers of biological systems demands a multi-modal approach.The integration of multi-omics approaches allows researchers to achieve a holistic perspective, unlocking the synergies between genes, transcripts, proteins, and metabolites.In this direction, Professor Varshney and his team employed multi-omics approaches to delineate the stress response in different crops.As an example, his team employed transcriptome, small RNA, and degradome sequencing to understand the resistance mechanism of chickpea against Ascochyta blight (AB), a significant factor limiting global chickpea production.Transcriptome sequencing identified 6767 DEGs, primarily linked to disease resistance, pathogenesis-related proteins, and cell wall biosynthesis.Small RNA sequencing pinpointed 651 miRNAs, with 297 exhibiting differential expression across genotypes and conditions.Integrating small RNA and transcriptome data revealed 12 contrasting miRNA-mRNA interaction pairs in resistant and susceptible genotypes, shedding light on the genes potentially involved in AB infection (Garg et al., 2019).Similarly, his team used transcriptomics, proteomics, and metabolomics to unravel complex mechanisms regulating drought response in chickpea.The integrated root-omics data identified key proteins (encoding isoflavone 4 0 -Omethyltransferase, UDP-d-glucose/UDP-d-galactose 4-epimerase, and delta-1-pyrroline-5-carboxylate synthetase) and metabolites (fructose, galactose, glucose, myoinositol, galactinol, and raffinose) linked to various pathways crucial for drought response (Kudapa et al., 2023).
Professor Varshney's group has also led and collaborated with partners from USA to develop mitigation approaches to reduce aflatoxin contamination in groundnut, using transcriptomics to explore both host-pathogen interactions and the responses of Aspergillus flavus to drought-related stresses.For example, Professor Varshney's group examined the transcription responses of seven groundnut lines with reduced aflatoxin contamination to A. flavus infection along with one highly susceptible line.This work identified several differentially expressed genes potentially associated with reduced aflatoxin including transcription factors, pathogenesis-related proteins, glutathione-S-transferases, resveratrol synthase, and chitinase (Soni et al., 2021).Additional transcriptome studies on the A. flavus pathogen were also carried out with Professor Varshney's group in collaboration with the USDA-ARS and the University of Georgia.These efforts resulted in the transcriptome sequencing of 62 RNA-seq libraries from six different A. flavus isolates with varying levels of aflatoxin production capabilities describing their responses to drought-related oxidative stress (Fountain et al., 2016a(Fountain et al., ,b, 2020a)).This identified a major role of fungal secondary metabolism, including aflatoxin production, in oxidative stress responses of this pathogen.This and other follow-up studies also informed the selection of isolates for use in the development of new chromosome-arm reference genome assemblies for comparative analyses in A. flavus (Fountain et al., 2020b).
Genetics maps and marker-trait associations
Over the past few decades, there has been a notable surge in the development of extensive genomic resources in legumes such as chickpea, pigeonpea, and groundnut.This progress can be attributed to the dedicated efforts of Professor Varshney and his team at ICRISAT.The group was crucial in generating several thousand SSRs and diversity array technology (DArT) markers for each of the three legume crops (Varshney, 2016).Additionally, in recent years, a substantial number of SNP markers, which were previously lacking in these legume crops until 2005, have been successfully developed, numbering in millions.This large-scale marker information, combined with the development of cost-effective marker assays (as mentioned above), was subsequently employed for trait mapping and gene discovery in the targeted crops.
To maximize the utility of genomic resources in breeding efforts, Professor Varshney and his team conducted extensive trait mapping and developed over 50 genetic maps in various legume crops, particularly chickpea, pigeonpea, and groundnut.Using linkage mapping and association mapping approaches, they mapped more than 30-50 traits in each of the three legume crops.For example, to unravel the intricate nature of drought tolerance in chickpea and pinpoint markers associated with drought tolerance-associated traits, Professor Varshney led efforts in identifying a "QTL-hotspot" region on linkage group 4, harbouring 12 main-effect QTLs for drought tolerance-related traits, accounting for up to 58.20% of phenotypic variation (Kale et al., 2015;Varshney et al., 2014).Further, Professor Varshney and colleagues successfully identified genomic regions/candidate genes for resistance to sterility mosaic disease (Gnanesh et al., 2011), Fusarium wilt (Saxena et al., 2017), and various abiotic stresses that pose substantial yield constraints in pigeonpea.In the case of groundnut, his team successfully mapped QTLs associated with resistance to rust and late leaf spot (Pandey et al., 2017b), oil content (Shasidhar et al., 2017), and yield-related traits (Pandey et al., 2020;Vishwakarma et al., 2017).
Novel concepts and genomic innovations for agricultural metamorphosis
Professor Varshney has been at the forefront of introducing and advancing several pivotal concepts and methodologies that have fundamentally transformed the landscape of crop improvement through genomics.His contributions have significantly enriched the field of agricultural genetics and breeding, ushering in new paradigms and approaches.
Genomics-assisted breeding
In 2005, Professor Varshney introduced the concept of "Genomics-assisted breeding" (GAB) in the 10th Anniversary Issue of Trends in Plant Science, titled "Feeding the World: Plant Biotechnology Milestones" (Varshney et al., 2005b).GAB revolutionized crop improvement by harnessing cutting-edge genomic tools, such as functional molecular markers, advanced bioinformatics, and enhanced knowledge of statistics and inheritance patterns.This approach significantly increased the efficiency and precision of crop improvement across the world.It was envisioned that GAB would be a transformative force in developing and disseminating improved crop varieties, including those with high yields and resilience against pests, diseases, and environmental stresses.The present-day success stories based on the improved crop varieties resulting from GAB are a testament to this vision.GAB has accelerated the pace of breeding progress across a diverse range of crop species, developing over 130 publicly bred cultivars worldwide (Varshney et al., 2021b).Notably, GAB has been pivotal in producing improved cultivars with heightened resistance to key diseases like bacterial blight and blast in rice, rust in wheat, and fusarium wilt in chickpea.Significant progress has been made in improving abiotic stress adaptation (such as tolerance against submergence, salinity, and drought) and enhanced nutritional quality (e.g., wheat varieties with higher grain protein content, groundnut varieties with elevated oleic acid content, intermediate amylose content in rice varieties, as well as quality protein maize cultivars) in different crops using GAB.The next chapter: Genomics-assisted breeding 2.0 Recent innovations in genome sequencing, precise phenotyping, genetic diversity analysis, and genome editing technologies offer significant potential for identifying and aggregating superior alleles for target traits in crop improvement.Acknowledging this, in the 25th Anniversary Issue of Trends in Plant Science, themed "Feeding the World: The Future of Plant Breeding", Professor Varshney introduced a comprehensive approach, referred to as "genomics-assisted breeding 2.0" (GAB 2.0) or "genomic breeding", for shaping future crops (Varshney et al., 2021b).It involves fine-tuning crop genomes by accumulating advantageous alleles and removing detrimental ones to design future crops.In the years ahead, GAB 2.0 will help devise efficient strategies to breed climate-resilient crop varieties with high nutritional value while upholding sustainability and environmental conservation.
Super-pangenome
A pangenome represents the complete set of genes found within a particular species, capturing the full spectrum of its genetic diversity.This vast genetic reservoir is pivotal for crop improvement as it allows researchers to identify beneficial alleles or allele combinations that can be harnessed to enhance crop traits.Traditionally, pangenomes were constructed mainly from the cultivated gene pools of a specific species, occasionally incorporating two or three closely related species.However, recognizing that a crop's gene pool includes a multitude of species, especially wild relatives with diverse genetic makeup, Professor Varshney and his team proposed extending the pangenome approach beyond cultivated pool by including accessions from all available species within a genus.The resulting "Super-Pangenome" traverses the full landscape of crop diversity for rapid and transformative crop improvement (Khan et al., 2020).Within a short span of time, this new approach has spurred a growing number of published literatures on building super-pangenomes in various crops including tomato, potato, and rice (Raza et al., 2023).
5Gs for crop genetic improvement
Professor Varshney and his collaborators coined the 5G paradigm that calls for the need to make the best use of genome assemblies, germplasm characterization, gene function elucidation, genomic breeding, and gene editing for crop improvement (Varshney et al., 2020).Highlighting the imperative of robust genome assemblies and in-depth germplasm characterization, he advocates for sequencing-based identification of accurate breeding targets to enable optimized genomic breeding and geneediting methodologies.Although elements of the 5Gs are sporadically employed in global crop improvement initiatives, comprehensive 5G integration remains elusive, particularly in developing world.Professor Varshney and his collaborators underscored the potential of emerging technologies in sequencing, phenotyping, and data science to catalyse the global adoption of the 5G strategy and suggested that a fully realized 5G approach can significantly bolster breeding precision, yielding climate-resilient, nutritious varieties with accelerated genetic gains.
Fast-forward breeding
To meet the demands of a rapidly growing global population, agricultural systems around the world must increase their outputs in a sustainable manner.Considering this challenge, Professor Varshney and colleagues introduced the "Fast-forward breeding" framework for accelerated crop improvement (Varshney et al., 2021c).This framework offers a comprehensive strategy for incorporating cutting-edge technologies in crop genome sequencing, high-throughput phenotyping, and systems biology, together with efficient trait mapping and genomic prediction, including machine learning and artificial intelligence, to expedite the availability of advantageous traits for breeding and research purposes.Approaches like haplotype-based breeding, genomic prediction, and genome editing, outlined in this framework, are anticipated to hasten the targeted integration of superior genetic traits into future cultivars.Additionally, emerging breeding techniques, such as optimal contribution selection, can enrich the genetic diversity of breeding programs while accelerating genetic advancements.Combining speed breeding with stateof-the-art genomic breeding technologies has the potential to overcome the longstanding bottleneck of protracted crop breeding cycles.The methods outlined in this framework and their integration are poised to accelerate the breeding process for enhanced crop improvement, ultimately contributing to a more food-secure world.
Transformation of smallholder agriculture
With an objective to implement GAB in breeding programmes of chickpea, pigeonpea, and groundnut, Professor Varshney trained >500 breeders from 36 countries from Asia, Africa and South America and provided them access to high throughput and cost effective genotyping platforms.This collaboration enabled several Asian and African breeding programmes to successfully incorporate GAB, leading to development of numerous improved lines.Following stringent agronomic evaluations, many of these improved lines were recommended and delivered to farmers and several are in the advanced stages of the varietal release processes in countries like India, Ethiopia, Kenya, Tanzania, Ghana, Mali, and others.For example, in the case of chickpea, introducing the "QTL-hotspot" region through the GAB approach led to the development and release of several drought-tolerant varieties for cultivation in India and Ethiopia over the past few years.This list of improved varieties includes Pusa Chickpea 10216, Pusa Chickpea 4005, IPC L4-14, Pusa Chickpea Shyam, and Geletu, among others (for details see Roorkiwal et al., 2020).Particularly noteworthy is "Pusa Chickpea 10216", the first GABled chickpea variety in India with enhanced drought tolerance and an 11% yield advantage over the recurrent parent (Bharadwaj et al., 2021).Similarly, several Fusarium wilt resistant chickpea varieties, namely, Super Annigeri 1, Samriddhi, Pusa Manav, were delivered to chickpea farmers in India.In pigeonpea, Professor Varshney and team contributed to the development of low-cost and rapid molecular marker assays to test the genetic purity of hybrids and parental lines along with genomic prediction tools to obtain high heterotic combinations, thus paving the way for a full-scale commercial hybrid breeding technology in pigeonpea (Bohra et al., 2020;Saxena et al., 2021).In the case of groundnut, high oleic groundnut varieties, specifically "Girnar 4" and "Girnar 5", featuring nearly 80% oleic acid content compared to the typical varieties with 40-50% oleic acid content, were developed through a collaboration between ICAR-Directorate of Groundnut Research in Junagadh and Professor Varshney's team at ICRISAT (Pandey et al., 2020).Furthermore, Professor Varshney also played a key role in the marker-assisted ª 2024 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1504-1515 improvement of popular groundnut varieties for resistance to rust and late leaf spot.Several of these lines are in advanced stage of public release in India and many countries in Africa.These highyielding lines, with enhanced biotic/abiotic stress tolerance, underscore the potential of integrating genomics with breeding endeavours to develop superior, climate-resilient varieties.
With an objective of creating a positive difference in the livelihoods of smallholder farmers in Sub-Saharan Africa and Asia, Professor Varshney led "Tropical Legumes III" (TL III, https://tropicallegumeshub.com/) project as a Principal Investigator, with a total budget of USD 25 million from the Bill & Melinda Gates Foundation (Varshney et al., 2019b).Through collaboration with scientists from three CGIAR Centres (ICRISAT, IITA, CIAT) and 15 national programmes in target countries, TL III strategically targeted four cropscowpea, common bean, groundnut, and chickpeaacross seven regions, including Burkina Faso, Ghana, Mali, Nigeria, Ethiopia, Tanzania, Uganda, and the Indian state Uttar Pradesh.Professor Varshney played a key role in providing strategic guidance and oversight for the development and accessibility of high-yielding varieties for farmers.TL III and its predecessor (TL II Phase I and Phase II projects), led directly by Professor Varshney, played a pivotal role in releasing 266 varieties, producing 497 901 tons of certified seeds that were planted on about 5.0 million ha in the 15 countries and beyond, producing about 6.1 million tons of grain worth USD 3.2 billion (Ojiewo et al., 2019).The project outputs have benefitted >23 million lives, especially women and children.As per an independent economic analysis by Evans School Policy Analysis and Research of the University of Washington, the average benefit-to-cost ratio for this project is 16 indicating massive social returns the project delivered.In recognition of this remarkable work and its positive effects on the livelihoods of smallholder farmers in 13 African countries, Professor Varshney's previous organization, ICRISAT was honoured with the Africa Food Prize in 2021.2012) and Genomics-Assisted Breeding workshop annually in PAG USA and many more.These workshops/conferences have served as platforms for fostering collaboration, exchanging ideas, and disseminating the latest advancements in genomics-assisted breeding and crop improvement.His unwavering dedication to nurturing talent and providing access to cutting-edge techniques has empowered countless individuals to contribute to the field of genomics and agriculture.
Growing the crop science community by mentoring and empowering researchers
Notably, Professor Varshney has also supervised nearly 50 PhD students and 70 post-docs many of whom have gone on to establish successful careers in their respective fields.His mentorship has not only shaped careers but has also been instrumental in strengthening the global research community's capacity to address pressing agricultural challenges.His legacy of mentorship and capacity building continues to inspire and guide emerging scientists in their pursuit of scientific excellence.
By serving several organizations in the science leadership/management role for over 15 years, Professor Varshney has supervised large international teams representing a range of diversity in ethnicity, nationality, and gender.In these roles, Professor Varshney offered extensive proficiency in R&D management, product development, fostering innovative ideas, and championing an inclusive culture.He also actively encouraged an interdisciplinary and collaborative research environment, empowering team members to achieve high-performance and unlock their full potential as both team contributors and individual achievers.
Due to his outstanding science contributions and his impactoriented research, Professor Varshney has remained a soughtafter research leader to be invited to various committees and agencies across the globe.
Inspiring recognitions and honours throughout an illustrious journey
Professor Varshney's pioneering research contributions have been recognized and celebrated nationally and internationally throughout his illustrious career.His accolades are a testament to his dedication, innovation, and work's profound impact (Figures 4 and 5).His esteemed standing is evident from his election as a fellow to more than 10 esteemed science and agriculture academies/societies spanning India, Germany, the USA, the UK, Italy, and Africa.His work has earned him over 40 prestigious
Disseminating scientific knowledge and communication to society
With an objective of advancing plant biology and agricultural science, Professor Varshney, since his master's degree, has been proactive in publishing scientific articles.He is a prolific scientific author and has published a large number of papers (>600) in various high-impact journals including Nature, Nature Genetics, Nature Biotechnology, Plant Biotechnology Journal, Trends in Plant Science, Proceedings of the National Academy of Sciences, USA, Molecular Plant, New Phytologist, Plant, Cell & Environment, and Journal of Experimental Botany.As of December 16, 2023, Professor Varshney boasts an impressive h-index of 124, with his work cited over 66 000 times.Due to the substantial impact of his research, Clarivate Analytics has consistently recognized him as a "Highly Cited Researcher" since 2014.
Professor Varshney is not only a luminary in the scientific world of genomics and agriculture but also an ardent advocate for bridging the divide between scientific research and the public.Acknowledging the significance of making science both accessible and relatable, he has persistently strived to convey intricate scientific concepts in ways that resonate with a broader audience.A prime example of his outreach is his presentation on the TEDx stage, where his talk illuminated the complexities of genomics and its agricultural implications, making the topic both captivating and comprehensible to those outside the scientific community.Beyond public speaking, Professor Varshney's research has been featured in numerous international print and electronic media outlets, including The New York Times, The Economist, Forbes, BBC, ABC, Cosmos, Down to Earth, Mongabay, Doordarshan, and many others.His insights are highly sought after, and he utilizes these platforms to highlight the latest genomic advancements, their potential applications, and the forthcoming challenges.Embracing the digital era, Professor Varshney effectively utilizes online platforms to engage a wider audience.He maintains an active YouTube channel (https://www.youtube.com/@rajeevvarshney6803) to highlight videos that explore various facets of genomics and his research initiatives.Additionally, his commitment to effective communication is evident in his written contributions to his blog (https://rajeevkvarshney.wordpress.com/),where he regularly articulates insights into the scientific realm.Through his videos and articles, he imparts recent research developments and shares personal reflections on the odyssey of discovery.These platforms serve not merely as knowledge repositories but as catalysts that ignite curiosity, inspiring young enthusiasts to delve into the world of scientific discovery.
Personality beyond the sequences
While Professor Varshney's scientific accomplishments place him among the luminaries in legume genomics and agriculture, his persona is rich and multifaceted beyond academic achievements.
Beyond the image of the distinguished researcher is a man with diverse interests, deeply engaged in his work and the wider world.A fervent reader, Professor Varshney's curiosity is not limited to his laboratory.His office shelves are adorned not only with scientific journals but with a myriad of books spanning across genres.This reading habit undoubtedly nourishes his vast appetite for knowledge and broadens his understanding of diverse topics.However, his inquisitiveness is not just confined to books.A movie buff at heart, Professor Varshney enjoys delving into the world of cinema, appreciating the art of storytelling from various cultures and perspectives.This love for movies offers him a refreshing escape, a way to momentarily disconnect from the scientific rigours and immerse himself in narratives far removed from the world of genomics.Despite the demands of his profession, he never misses an opportunity to revel in moments of fun and leisure.His fun-loving nature ensures that the atmosphere around him, whether at conferences or team meetings, is light-hearted, promoting creativity and camaraderie among his colleagues and students.In considering these dimensions, we perceive Professor Varshney holistically: as a trailblazing scientist, an avid reader, a cinema aficionado, and a source of joy in the often-demanding field of research.This equilibrium between professional dedication and diverse passions positions Professor Varshney as an inspiration inside and outside the laboratory.
Figure 1
Figure 1 Professor Rajeev K Varshney in his element at WA State Agricultural Biotechnology Centre/Centre for Crop & Food Innovation, Murdoch University in 2023.
Figure 3
Figure 3 Key moments from Professor Varshney's esteemed career in the presence of distinguished individuals who have contributed to his professional journey.The snapshots showcase interactions with mentors and globally renowned scientists, reflecting the camaraderie and inspiration he has drawn throughout his distinguished research career.Professor Varshney with (a) Professor Wazahat Hussain (teacher during Bachelor's and Master's course); (b) Professor PK Gupta (PhD Supervisor); (c) Professor Andreas Graner (Post-doc Supervisor); (d) Dr William Dar (former Director General, ICRISAT); (e) Dr Peter Carberry (former Director General, ICRISAT); (f) Professor Peter Davies (Pro Vice Chancellor & Director, Food Futures Institute, Murdoch University); (g) Professor Norman Borlaug, Nobel Laureate for Green Revolution; and (h) Professor MS Swaminathan, father of Indian Green Revolution.
Figure 2
Figure 2 Professor Varshney's pioneering contributions in agricultural genomics applied to crop improvement.Professor Varshney's global leadership is underscored by his extensive work on genome sequencing, genetic diversity cataloguing, genomics-assisted breeding, seed systems enhancement, capacity-building, and empowering national programmes in developing countries.His research journey embodies collaborative excellence, rigorous genetic evaluations, and innovative breeding strategies, all aimed at transforming scientific breakthroughs into tangible advancements by delivering superior crop varieties to farmers worldwide.
ª
2024 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1504-1515
ª
2024 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1504-1515
ª
2024 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1504-1515 awards from different countries, including the Shanti Swarup Bhatnagar Prize and the Rafi Ahmed Kidwai, India's pinnacle of science and agricultural science awards.Professor Varshney has been invited to share his expertise at several esteemed international conferences, delivering presentations at high-profile meetings, including Plenary talk at the 30th Plant and Animal Genome (PAG) conference, USA (the world's largest genomics conference) in 2023; G8 International Conference on Open Data for Agriculture on Open Data in Genomics and Modern Breeding for Crop Improvement, organized by US and UK
Figure 4
Figure 4 Recognition of Professor Varshney at select prestigious award ceremonies.(a) Signing the 366-year-old Royal Society Charter during his induction as a Fellow of the Royal Society (2023); (b) receiving the Research Excellence India Citation Award from Thomson Reuters (2015); (c) in the group photo with the Prime Minister of India, Mr Narendra Modi, along with some other awardees of the Shanti Swarup Bhatnagar Prize (2015); and (d) being honoured by the Chief Minister, the Government of Uttar Pradesh (home state), India, Mr Yogi Adityanath (2018).
Figure 5
Figure 5 A timeline highlighting select accolades and fellowships presented to Professor Varshney for his significant research contributions.
Box 1. Life sketch Born on 13 July 1973 in Bahjoi, Uttar Pradesh, India, Professor Varshney has established himself as a prominent figure in the realm of plant genomics, genetics, and transformative agriculture.He began his educational pursuit at Aligarh Muslim University, Aligarh, Uttar Pradesh, India, where he completed his B.Sc. Honours in Botany in 1993 and furthered his education with an M.Sc. in Botany, specializing in Genetics, Plant Breeding, and Molecular Biology in 1995.Professor Varshney pursued his doctoral studies under the aegis of Professor PK Gupta and Professor PC Sharma at Chaudhary Charan Singh University, Meerut, Uttar Pradesh, India.In 2001, he earned his PhD in Agriculture (Molecular Biology) for his notable work on a Wheat Biotechnology Project supported by the Department of Biotechnology, Government of India, with his thesis titled "A Study of Microsatellites in Hexaploid Wheats".After completing his doctoral studies, he undertook the role of Wissenschaftlicher Mitarbeiter (Research Scientist) at the Leibniz Institute of Plant Genetics & Crop Plant Research (IPK), Gatersleben, Germany, under the mentorship of Professor Andreas Graner.During his stay at IPK for 5 years, he was deeply engaged in barley genomics research and comparative genomics across cereal crops.His exceptional skill set caught the attention of the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), India, leading to his appointment as Senior Scientist for Applied Genomics in 2005.With a vision to catalyse genomics research in dryland crops, Professor Varshney, as Founding Director, spearheaded the Center of Excellence in Genomics in 2007 with the support of the Department of Biotechnology, Government of India.In 2017, this centre, under leadership of Professor Varshney, evolved into the Center of Excellence in Genomics & Systems Biology.
In 2022, Professor Varshney's illustrious career took him to Murdoch University, Australia, where he now holds multiple directorial positions (Director of the Western Australian State Agriculture Biotechnology Centre (SABC), Centre for Crop & Food Innovation (CCFI), and International Chair in Agriculture and Food Security with the Food Futures Institute) that emphasize advancements in agricultural biotechnology and fortify food security initiatives.Alongside his professional endeavours, Professor Varshney finds joy and support in his personal life with his wife, Monika Varshney, and their two children, Prakhar Varshney and Preksha Varshney.
He has been instrumental in shaping scientific discourse through his longstanding editorial roles in numerous prestigious journals.With over a decade at the Plant Biotechnology Journal, including more than 4 years as a Senior Editor, and editorial positions in key journals such as Trends in Plant Sciences, Plant & Cell Physiology, Theoretical and Applied Genetics, Plant Breeding, Frontiers in Plant Science, Molecular Genetics and Genomics, The Plant Genome, and Plant Genetic Resources, he has helped to steer the direction of plant science research.
Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1504-1515 governments in the World Bank, 2013; brainstorming session on Digital Revolution for Agriculture at Bill & Melinda Gates Foundation in 2012, chaired by Mr Bill Gates, co-chair of Bill & Melinda Gates Foundation; FAO's international conference on Agricultural Biotechnology in Developing Countries in Mexico in 2010 and FAO's Regional Conference on Agricultural Biotechnologies in Sustainable Food Systems and Nutrition in Malaysia in 2017. | 9,396.8 | 2024-01-11T00:00:00.000 | [
"Agricultural and Food Sciences",
"Biology",
"Environmental Science"
] |
Combining Image Space and q-Space PDEs for Lossless Compression of Diffusion MR Images
Diffusion MRI is a modern neuroimaging modality with a unique ability to acquire microstructural information by measuring water self-diffusion at the voxel level. However, it generates huge amounts of data, resulting from a large number of repeated 3D scans. Each volume samples a location in q-space, indicating the direction and strength of a diffusion sensitizing gradient during the measurement. This captures detailed information about the self-diffusion, and the tissue microstructure that restricts it. Lossless compression with GZIP is widely used to reduce the memory requirements. We introduce a novel lossless codec for diffusion MRI data. It reduces file sizes by more than 30% compared to GZIP, and also beats lossless codecs from the JPEG family. Our codec builds on recent work on lossless PDE-based compression of 3D medical images, but additionally exploits smoothness in q-space. We demonstrate that, compared to using only image space PDEs, q-space PDEs further improve compression rates. Moreover, implementing them with Finite Element Methods and a custom acceleration significantly reduces computational expense. Finally, we show that our codec clearly benefits from integrating subject motion correction, and slightly from optimizing the order in which the 3D volumes are coded.
Introduction
With the development of new medical imaging techniques, and constant refinement of existing ones, the associated storage requirements have been reported to grow exponentially each year [1]. This explains why medical image compression is an active area of research.
Our work belongs to the family of compression algorithms that are based on Partial Differential Equations (PDEs). The general idea behind this approach is to store a sparse subset of the image information, and to reconstruct the remaining image via PDE-based inpainting [2]. PDE-based compression has a long tradition for the lossy compression of natural images [2,3] and videos [4][5][6]. The benefit of PDE-based approaches relative to transform-based codecs like JPEG [7] and JPEG2000 [8] has often been most pronounced at high compression rates [3]. Even though this strategy for lossy compression has also been transferred to three-dimensional images [9], in medical imaging, lossless compression is often preferred to ensure that all diagnostically relevant details are preserved. In some cases, it is even legally forbidden to apply lossy compression for medical image archival [10,11].
PDE-based Lossless Compression of Diffusion MR Images
We recently introduced a PDE-based codec for 3D medical images that stores the residuals between the original image and an intermediate PDE-based reconstruction to ensure that the final reconstruction is lossless, and we demonstrated that this strategy led to competitive compression rates [12]. In our current work, we extend this idea for the specific use case of image datasets from diffusion MRI.
Diffusion MRI (dMRI) [13,14] is a variant of Magnetic Resonance Imaging in which diffusion sensitizing gradients are introduced into the measurement sequence. If the hydrogen nuclei that generate the MR signal undergo a net displacement along the gradient direction during the measurement, the signal is attenuated. Assuming that these displacements result from (self-) diffusion, comparing diffusion-weighted to nonweighted measurements permits computation of an apparent diffusion coefficient.
Taking measurements with different gradient directions captures the directional dependence of the diffusivity. It results from interactions between water and tissue microstructure and therefore carries information about structures that are much smaller than the MR image resolution. Important applications of dMRI include the detection of microstructural changes that are related to aging or disease, and the reconstruction of major white matter tracts, which is referred to as fiber tracking or tractography [15].
The large number of repeated measurements in diffusion MRI leads to large amounts of data. In practice, resulting image datasets are often compressed using GZIP [16]. In our previous work [12], we demonstrated that, compared to this, PDEbased lossless compression can further reduce the memory requirement of individual dMRI volumes by more than 25%. However, applying our codec to each 3D volume independently does not exploit the fact that measurements for nearby gradient directions are usually similar. Moreover, it is relatively time consuming.
In our current work, we address both of these limitations by combining the previous idea of lossless compression via image-space inpainting with a novel approach of PDE-based inpainting in qspace, which is the space spanned by diffusion sensitizing gradient directions and magnitudes. We find that predictions from linear diffusion in q-space can be made with low computational effort, and are strong enough to further improve compression rates.
The remainder of our work is organized as follows: Section 2 provides the required background and discusses prior work on 4D image compression. Section 3 introduces the components of our proposed codec. Section 4 demonstrates that the resulting compression rates exceed those of several baselines and investigates the effects of specific design choices. Section 5 concludes with a brief discussion.
Background and Related Work
We will now introduce the main ideas behind diffusion PDE-based image inpainting and compression (Section 2.1), clarify the foundations of diffusion MRI and q-space (Section 2.2), and briefly review the literature on 4D medical image compression (Section 2.3).
Diffusion PDE-based Inpainting and Compression
Inspired by their use for modeling physical phenomena, Partial Differential Equations (PDEs) have a long tradition for solving problems in image processing. In particular, the PDE describing heat diffusion has provided a framework for image smoothing and inpainting [17][18][19][20][21][22][23]. The heat equation captures the relationship between temporal changes in a temperature ∂ t u and the divergence of its spatial gradient ∇u, where D is the thermal diffusivity of the medium. In a homogeneous and isotropic medium, the diffusivity D is a constant scalar. In a nonhomogeneous isotropic medium, D would still be a scalar, but depend on the spatial location. In an anisotropic medium, heat dissipates more rapidly in some directions than in others. In that case, D is a symmetric positive definite matrix that is referred to as a diffusion tensor. In image processing, the gray value at a certain location is interpreted as a temperature u, and Equation (1) is coupled with suitable boundary PDE-based Lossless Compression of Diffusion MR Images 3 conditions. For image smoothing, where Ω is the image domain, and n is the normal vector to its boundary ∂Ω. The original image f : Ω → R is used to specify an initial condition u = f at t = 0. For increasing diffusion time t, u will correspond to an increasingly smoothed version of the image. In image inpainting, values are known at a subset of pixel locations, and unknown values should be filled in. For this, a Dirichlet boundary condition is introduced, which fixes values at a subset K of pixel locations [2,20] and a steady-state is computed at which ∂ t u ≈ 0. The ability of PDEs to reconstruct plausible images even from a very sparse subset of pixels made them useful for image compression [2][3][4].
Different choices of diffusivity D introduce considerable flexibility with respect to shaping the final result. Fixing D = 1 turns Equation (3) into second-order linear homogenous (LH) diffusion with Lu = ∆u, where ∆ denotes the Laplace operator, and the steady state satisfies the Laplace equation ∆u = 0. Even though the resulting reconstructions suffer from singularities [2] and can often be improved by the more complex models discussed below, they have been used to design compression codecs for cartoon-like images [24], flow fields [25], and depth maps [26][27][28]. Its simple linear nature and fast convergence to the steadystate also make LH diffusion an attractive choice for real-time video compression [5,6]. Compared to LH diffusion, decreasing the diffusivity as a function of image gradient magnitude permits a better preservation of salient edges [18,29]. This is referred to as nonlinear diffusion, since the results are no longer linear in the original image f . Rather than just decreasing the overall diffusivity close to edges, modeling D as an anisotropic diffusion tensor permits smoothing along edges, while maintaining or even increasing the contrast perpendicular to them. One widely used model is referred to as Edge-Enhancing Diffusion (EED) [30].
All PDEs that have been discussed up to this point are of second order. Fourth-and higherorder extensions have also been studied, both for smoothing [31][32][33][34][35] and for inpainting [36,37]. In the simplest case, setting Lu = −∆ 2 u in Equation (4) leads to the biharmonic (BH) equation. In two and three dimensions, it does not suffer from the singularities that are present in the results of LH diffusion [2,38], while preserving a simple linear nature. For this reason, BH has been considered for the design of compression codecs [38][39][40][41]. However, it no longer satisfies a min-max principle [33] and it increases running time and sensitivity to quantization error.
Our own previous work [37] proposed an anisotropic fourth-order PDE in which a fourthorder diffusion tensor is constructed from the image gradient in a similar way as in second-order EED. We thus refer to it as Fourth-Order Edge-Enhancing Diffusion (FOEED). It was shown to result in more accurate inpainting results than second-order EED, and higher PDE-based compression rates, in several examples [12].
Our current work is concerned with compressing data from diffusion MRI, which is similar to hyperspectral images in that it contains a large number of values (channels) at each location [40]. However, the channels in hyperspectral images have a one-dimensional order, while our channels correspond to positions on a spherical shell in q-space, which we will now introduce.
Diffusion MRI
The signal in Magnetic Resonance Imaging is generated by the hydrogen atoms within water molecules. Their heat motion is referred to as self-diffusion, since it takes place despite a zero concentration gradient. The extent to which this motion is restricted in a cellular environment correlates with microstructural parameters such as cellular density or integrity. Moreover, in the white matter of the human brain, which contains the tracts that connect different brain regions, self-diffusion can occur more freely in the local direction of those tracts than perpendicular to it [42]. Therefore, measuring the apparent diffusion coefficient in different directions provides relevant information about small-scale structures that are below the image resolution of in vivo MRI.
This motivates the use of diffusion MRI. It goes back to the idea of measuring diffusion by introducing a pair of diffusion sensitizing magnetic field gradients into a nuclear magnetic resonance sequence [43]. Integrating it with spatially resolved Magnetic Resonance Imaging permits diffusion measurements at a voxel level [13]. Repeating the measurements with differently oriented gradients reveals a biologically relevant directional dependence in various tissue types, including muscle and the white matter of the brain [44].
Several key parameters of the diffusion sensitization can be summarized in the gradient wave vector where γ is the gyromagnetic ratio of hydrogen nuclei in water, δ is the duration of the diffusion sensitizing gradients, and g corresponds to their direction and strength. The normalized MR echo magnitude |E(q, τ )| additionally depends on the time τ between the pair of gradient pulses. It is computed as the ratio between the corresponding diffusion-weighted measurement and an unweighted measurement with q = 0. It is antipodally symmetric, |E(−q, τ )| = |E(q, τ )|. The relevance of this q-space formalism derives from a Fourier relationship between |E(q, τ )| and the ensemble average diffusion propagator P (R, τ ), which specifies the probability of a molecular displacement R within a fixed diffusion time [45]. An alternative parameterization of the diffusion gradients is in terms of their direction and a factor b = 4π 2 q 2 (τ − δ/3), which also accounts for the fact that the diffusion weighting increases with the effective diffusion time Due to practical constraints on the overall duration of dMRI measurements, the sampling of q-space is usually limited to one or several reference measurements with q = 0, as well as one or a few shells with constant q , and thus constant b. This is illustrated in Figure 1. Such setups focus on the directional dependence of the signal, and typically strive for a uniform distribution of gradient directions on these shells [46]. Our codec assumes dMRI data with such a "shelled" structure, an assumption that is shared by well-established algorithms in the field [47].
4D Medical Image Compression
Many medical imaging modalities, including Magnetic Resonance Imaging, Computed Tomography, and ultrasound, can be used to image volumes repeatedly, in order to capture time-dependent phenomena such as organ motion, perfusion, or blood oxygenation. Considerable work has been done on lossless and lossy compression of the resulting 4D (3D plus time) image data. Much of it has borrowed from video coding, and has often involved motion compensation [48,49], which is combined with wavelet transforms [48,[50][51][52][53] or hierarchical vector quantization [54] for compression.
Almost all of these works have compared their compression rates to codecs from the JPEG family. We will also compare our codec to JPEG-LS, lossless JPEG, and JPEG2000. Additionally, we compare compression rates against GZIP [16] which, in conjunction with the Neuroimaging Informatics Technology Initiative (NIfTI) file format, is currently most widely used to compress diffusion MRI data in practice. To make this comparison fair, we also use Huffman coding or Deflate within our own codec, as opposed to computationally efficient alternatives that might further improve compression rates [55,56].
Even though the volumetric images in diffusion MRI are also taken sequentially, their temporal order is less relevant than the q-space structure that was described above: Measuring with the same diffusion sensitizing gradients, but in a different order, should yield equivalent information, even though it permutes the temporal order. To the best of our knowledge, no codec has been proposed so far that exploits this very specific structure. There has been extensive work on compressed sensing for diffusion MRI (see [57,58] and references therein), but with a focus on reducing measurement time, rather than efficient storage of the measured data.
Recent work has demonstrated the potential of deep learning for lossless compression of 3D medical images [59]. Extending this specifically for diffusion MRI is an interesting future direction. However, our PDE-based approach has the advantage of not requiring any training data. Since medical data is a particularly sensitive type of personal data, obtaining diverse large-scale datasets can be difficult, and the potential of model attacks that could cause data leakage is concerning [60,61].
Proposed Lossless Codec
Traditional PDE-based image compression [2,3,12,37] performs inpainting in image space, which relies on piecewise smoothness of the image. A key contribution of our current work is to additionally exploit the smoothness in q-space. As it can be seen in Figure 1, dMRI signals that are measured with similar gradient directions are correlated.
Our codec uses a spatial PDE for the first few volumes, which is described in more detail in Section 3.1. Once sufficiently many samples are available so that a q-space PDE, described in Section 3.2, produces stronger compression than the spatial PDE, we switch to it.
The q-space PDE assumes that all volumes are in correct spatial alignment, which might be violated in practice due to subject motion. For this reason, our codec includes a mechanism for motion compensation, described in Section 3.3.
Our overall compressed file format is specified in Section 3.4.
Lossless 3D Spatial Codec
The initial few volumes are compressed with an image space PDE-based codec that follows our recent conference paper [12]. To make our current work self-contained, we briefly summarize the most relevant points, focusing on the forward, i.e., encoding direction. The decoding process just mirrors the respective steps. The codec is composed of three main parts: Data sparsification (initial mask selection), prediction (iterative reconstruction), and residual coding.
Initial Mask Selection: As an initial mask, our codec simply stores voxel intensities on a sparse regular grid. More specifically, for a given 3D input image of size n 1 × n 2 × n 3 , the initial mask is chosen as a hexahedral grid consisting of vox- Most lossy PDE-based codecs select a mask adaptively [2,3], which better preserves important image features such as edges and corners [24]. However, this introduces the need to store the locations of the selected pixels, which can be avoided by the use of fixed grids [27,62]. In the context of lossless compression, we achieved higher compression rates by combining the latter strategy with iterative reconstruction.
Iterative Reconstruction: Making PDE-based compression lossless requires coding the differences between the original image and the PDEbased reconstruction, and is beneficial in terms of compression rates to the extent that those residuals are more compressible than the original image. In general, residuals become more compressible the more accurate the reconstruction is. Therefore, the overall compression rate can be increased by iteratively coding residuals of some pixels, and refining the remaining ones based on them.
Our previous work [12] explored different iterative schemes. The variant that is used here codes the residuals in all remaining face-connected neighbors of the current mask voxels, i.e., up to six voxels per mask voxel. Those neighbors become part of the mask for the next iteration, and the process continues until all voxels have been coded.
Among the PDEs that have been explored for inpainting, we currently consider the two that worked best in [12], i.e., traditional edgeenhancing diffusion (EED) [20] and our recent fourth-order generalization (FOEED) [37].
Residual Coding: Residuals are computed in modular arithmetic, so that they can be represented as unsigned integers. The final compression of the initial mask and the residuals is either done via a Huffman entropy encoder or the Deflate algorithm, depending on which gives the smaller output file size.
In cases where medical images contain a substantial amount of empty space, e.g., a background region with exactly zero image intensity, our previous work [12] found that coding it separately using run length encoding (RLE) can provide an additional benefit. Unfortunately, in dMRI, the background is perturbed by measurement noise, which renders this approach ineffective. Therefore, our current work does not include any dedicated empty space coding.
PDE-based q-Space Inpainting
The general idea of q-space inpainting is illustrated in Figure 2: Once a certain number of diffusion-weighted images with different gradient directions are known, we can use them to predict images that correspond to a new direction. This happens at the voxel level, so that the prediction at a given location is entirely determined by values at the same location in the known images.
This can be understood as "flipping" the setup from Section 3.1, where the mask consisted of pixel locations, and the inpainting was repeated with an identical mask for each channel. Instead, the mask now specifies the known channels, and inpainting is repeated for each voxel in the volume.
Compressing Diffusion-Weighted Images
Since we assume that diffusion-weighted measurements are on spherical shells in q-space (Section 2.2), we inpaint with second-order linear homogeneous (LH) diffusion ∂ t u = ∆u (6) or fourth-order biharmonic (BH) smoothing on the sphere, where ∆ is the Laplace-Beltrami operator. Given that our samples do not form a regular grid, we numerically solve these equations using Finite Element Methods (FEM) [63,64]. For this, we first construct a 3D Delaunay tessellation from the set of all gradient vectors g i and their antipodal counterparts −g i , and then extract a triangular surface mesh from it. Figure 3 shows an example of the given vectors (left), and the resulting triangular mesh (right).
Similar to PDE-based inpainting in the image domain, we fix the known values by imposing Dirichlet boundary conditions at the vertices corresponding to the previously coded diffusionweighted images, again accounting for antipodal symmetry so that each known image determines the values of two vertices. Once a steady state has been reached, the values at locations corresponding to diffusion-weighted images that are yet to be coded can serve as predictions. Similar as before, we compute residuals with respect to those predictions in modular arithmetic, and apply Huffman coding or Deflate to them.
We found that, once a sufficient number of diffusion weighted images are available as a basis of q-space inpainting, its residuals become more compressible than those from iterative image space inpainting. Our codec adaptively determines a suitable point for switching from spatial to q-space predictions. After the first diffusion-weighted volume, it starts comparing the sizes of compressing subsequent volumes with the spatial codec (Section 3.1) to the size when using q-space inpainting and switches to it on the first volume where it is beneficial. To limit computational effort, the spatial codec is no longer tried for subsequent volumes.
Accelerated Computation
Since q-space inpainting happens at a voxel level, it should be repeated for each voxel of the 3D image. However, the computational cost of running the FEM solver for each voxel separately is extremely high. Fortunately, linearity of the PDEs and the fact that the Dirichlet boundary conditions are imposed on the same vertices for each voxel permit a significant speedup.
Formally, we can consider one time step of numerically solving Equation (6) or (7), at time t, as applying a discrete linear differential operator D, which is determined by the vertices and connectivity of our triangular mesh, on a discrete input u (t) ∈ R c , where c is the number of channels (q-space samples per voxel). The boundary conditions ensure that u kj at positions k j that correspond to the fixed (previously coded) channels.
The inpainting result is obtained as the fixed point u (FP) as t → ∞. It can be approximated by iterating D a sufficient number of times, resulting in an operator D FP that directly maps D FP is still linear, and we observe that its kernel is the subspace corresponding to the unknown q-space samples, so that their initialization in u (0) does not influence the steady state [4]. Therefore, we can rewrite Equation 9 as where e kj are the indicator vectors of the known samples k j .
In other words, by computing w kj = D FP [e kj ], we can obtain weight vectors that specify how the known values are combined to predict the unknown ones. They are analogous to the "inpainting echoes" that have been computed in previous work [65] for the purpose of optimizing tonal data. Omitting the irrelevant initialization of the unknown values from the input u (0) , and the known values from the output u (FP) yields a weight matrix W of shape m × n for n known and m unknown values.
We compute the coefficients of W by running the FEM n times. In the jth run, we set the value corresponding to k j to one, all remaining values to zero. After numerically solving the Laplace or Biharmonic PDE, the values at the m unique vertices that correspond to unknown DWIs yield the jth column of W.
Finally, W allows us to make efficient predictions in each voxel, by simply multiplying it to a vector that contains the intensities in that voxel from the previously coded diffusion gradients.
Implementation Details and Running Times
We numerically solve Equations (6) and (7) via the open-source FEM solver package FEniCSx [64]. For implementation details, we refer to its tutorials [66]. Applying this solver to each voxel of a 104 × 104 × 72 volume takes close to two and four hours, respectively, for LH and BH PDEs on a single 3.3 GHz CPU core. The acceleration from the previous section reduces this to only 1.6 s and 2.4 s per volume, respectively. This includes the time for building a Delaunay tessellation, which is computed with SciPy [67], and extracting a surface mesh from it using the BoundaryMesh method from FEniCSx.
Compressing b = 0 Images
Our general approach simplifies for unweighted volumes with b = 0. Again, the first of them is compressed using the spatial codec. If multiple b = 0 images were acquired to increase the signal-to-noise ratio, our codec compresses the remaining ones by taking the respective residuals with respect to the first b = 0 volume, as illustrated in Figure 4.
Motion Correction
Subject motion commonly occurs during the lengthy dMRI acquisitions, and is typically accounted for by applying motion correction based on image registration [68]. We also include this step in our codec, since inpainting in q-space requires a correct spatial alignment of all 3D volumes so that predictions are based on information from the same location within the brain.
We implement motion correction as follows: 1. We perform affine registration of each volume to the same b = 0 volume, which is used as a common reference. This yields a transformation matrix T X→b0 which aligns DWI volume X to the b = 0 reference. 2. When predicting a DWI volume P , we transform all known volumes X via the affine transformation T −1 P →b0 • T X→b0 , which can be computed from the transformations in Step 1.
3. In addition to resampling each known volume X, we re-orient its diffusion gradient direction g X according to the rotational part of the transformation in Step 2. Omitting this step would lead to incorrect relative orientations of diffusion gradient directions [69], which could again reduce accuracy of q-space inpainting.
Transforming images via a common reference allows us to align them without having to perform image registration on all pairs of volumes. This saves considerable computational effort. Combining the two transformations and applying them in a single step also reduces computational effort, and simultaneously reduces image blurring compared to a two-stage implementation that would involve interpolating twice.
In addition to the computational expense, motion correction incurs the cost of having to store the affine matrices T X→b0 along with the compressed data. Experimental results in Section 4.4 will demonstrate that this storage cost is outweighed by the increase in compression rate when q-space inpainting properly accounts for motion.
Subject movement correction and B-matrix reorientation are done using the freely available FSL tools [68] and the DIPY imaging library [70], respectively. A practically relevant implementation detail concerns boundary effects. As illustrated in Figure 5, missing information can enter the field of view when applying image transformations. We found that q-space inpainting near the boundary of the domain works more reliably if we resolve these cases with nearest neighbor extrapolation, rather than with zero padding.
Compressed File Format
In our current implementation, the relevant data is spread over multiple files whose sizes are added when computing compression rates.
The volumes that are compressed with the 3D lossless codec (Section 3.1) are stored with the same header as in [12]. Stated briefly, it contains the original minimum and maximum voxel values (4 bytes), sizes of the compressed data streams for zero voxel binary mask and mask intensities (8 bytes), the diffusivity contrast parameter (4 bytes), the type of PDE (2 bits), the dilation mode (1 bit), and the types of encoding for mask intensities and residuals (2 bits).
For each volume that is compressed with qspace inpainting, the header contains the original minimum and maximum voxel values (4 bytes), the type of PDE (2 bits), the type of encoding for the residuals (1 bit), and the volume number in the original order (2 bytes).
Mask and residual values themselves are stored after compression with pure Huffman coding or Deflate, depending on what gave a smaller file size.
In addition, we store the NIfTI header (348 bytes) as well as files containing b values and gradient vectors in their original ASCII formats. For simplicity, affine transformations for motion correction are also kept in the ASCII format generated by FSL FLIRT [68].
Data
We evaluate our codec on two dMRI datasets that were made publicly available by Koch et al. [71], and are specifically suited to investigate the impact of subject motion compensation. Both datasets have been collected from the same subject (male, 34 years) in the same scanner, a 3T MAGNETOM Skyra (Siemens Healthcare, Erlangen, Germany), with an identical measurement sequence. For the first scan, the subject received the usual instruction of staying as still as possible during the acquisition. For the second scan, the subject was asked to move his head, to deliberately introduce motion artifacts.
From these datasets, we use the five nondiffusion weighted (b = 0) MRI scans each, as well as 30 diffusion weighted images (b = 700 s/mm 2 , diffusion gradient duration δ = 334 ms, spacing τ = 445 ms). Each image consists of 104×104×72 voxels with a resolution of 2×2×2 mm 3 . The data, and the effects of subject motion, are illustrated in Figure 6.
DTI Baseline
We compare the signal predictions from our qspace PDE to a simple baseline, which is derived from the Diffusion Tensor Imaging (DTI) model. DTI is widely used in practice, due to its relative simplicity and modesty in terms of scanner hardware and measurement time.
It rests on the assumption that the diffusion propagatorP (R, τ ) is a zero-mean Gaussian whose covariance matrix is proportional to the diffusion tensor D, a symmetric positive definite 3 × 3 matrix that characterizes the local diffusion [14]. The signal model in DTI relates the diffusion-weighted signal S(ĝ, b) for a given bvalue and gradient vector directionĝ = g/ g to the unweighted signal S 0 according to Fitting D requires at least one reference MR image S 0 , plus diffusion-weighted images in at least six different directions, which are usually taken with a fixed non-zero b-value. Equation (11) can then be used to predict the diffusion-weighted signal in any desired direction. In our experiments, Fig. 6 Example images from our two dMRI datasets, without deliberate head motion (left) and with strong motion artifacts (right). In each case, six corresponding sagittal slices from different diffusion weighted images (DWIs) are shown. Note that subject motion leads to spatial misalignments between DWIs, but also to artifacts within individual images. Table 1 Compressed file sizes from separate PDE-based compression of each 3D scan (baseline), from different variants of our proposed lossless codec, as well as from GZIP and lossless codecs from the JPEG family. For hybrid codecs, the split indicates the number of volumes coded with q-space or spatial inpainting, respectively.
Comparing Lossless Codecs for Diffusion MRI
A comparison of file sizes that can be achieved on our two test datasets with different lossless codecs is provided in Table 1. As a baseline, the first two rows show results from coding each 3D volume independently with our recently proposed PDE-based codec [12], using second-order (R-IEED-1) and fourth-order anisotropic diffusion (R-IFOEED-1). Additional savings of other codecs with respect to R-IEED-1 are given in percent. The second block in Table 1 shows results from several variants of our proposed new codec, which adaptively combines inpainting in q-space and image space. Highest compression rates were achieved when combining linear homogeneous (LH) diffusion in q-space with R-IFOEED-1 in image space, closely followed by R-IEED-1. Biharmonic (BH) diffusion in q-space also produced useful, but slightly weaker results.
Both q-space diffusion approaches achieved better compression than predictions from DTI (Section 4.2). This could be due to the fact that the quadratic model of diffusivities in Equation (11) is known to be an oversimplification in many parts of the brain [72], and the PDE-based approaches provide more flexibility.
DTI requires independent coding of at least seven 3D volumes, which led us to fix this split in our experiments. PDE-based imputation makes it possible to switch to q-space inpainting earlier, and our adaptive selection does so after four volumes in the low-motion data, after five volumes in the data with strong motion.
Switching to q-space inpainting also speeds up our codec. Our implementation of R-IEED-1 and R-IFOEED-1 requires approximately 478 s and 6185 s, respectively, for one volume on a single 3.3 GHz CPU core. Even though it would be possible to further optimize this, exploiting linearity in qLH and qBH, as described in Section 3.2, significantly lowers the intrinsic computational complexity, so that even a straightforward implementation only requires 1.64 s and 2.4 s per volume, respectively.
It can be seen in Figure 6 that subject motion during different phases of the acquisition leads to different types of artifacts. Results in Table 1 include the motion correction described in Section 3.3, which compensates spatial misalignments of different scans. However, motion can also lead to signal dropouts or to distortions within scans, which our current codec does not explicitly account for. This explains why q-space inpainting is less effective on the second as compared to the first scan. However, even on this challenging dataset that exhibits unusually strong artifacts, qspace inpainting still provides a benefit compared to all other alternatives.
Finally, Table 1 shows results from several other lossless codecs for comparison. GZIP is most widely used in practice, but the resulting files are more than 35% larger than those from our proposed codec. Among the lossless codecs from the JPEG family, JPEG2000 is the only one that outperforms R-IEED-1 for per-volume compression, and only by a small margin. Our new hybrid methods that combine image space and q-space inpainting always performed best. Table 2 investigates the benefit of motion correction (Section 3.3) by showing file sizes when removing motion correction from our codec, and Fig. 7 Given a set of previously coded DWIs, the closest strategy (left) selects the volume whose gradient vector has the smallest angular distance from the known ones, to maximize expected prediction accuracy. The furthest strategy (right) maximizes the angular distance, aiming for a more uniform coverage of the sphere for subsequent steps. The sketch shows the directions selected in the first three steps as black double arrows, the fourth direction as a red dot.
Benefit from Motion Correction
comparing the results to ones with motion correction (Table 1), indicating the benefit in percent.
Even on the first scan, in which the subject tried to keep his head still, compensating for small involuntary movements yields a slight benefit. The effect is largest when imputing via qBH and DTI. This might be explained by the fact that qLH satisfies the min-max principle, which makes it more robust against inaccuracies in its inputs, and provides another argument in its favor.
When strong head motion is present (second scan), restoring a correct voxel alignment via motion correction becomes essential for qspace inpainting. Without it, the switch to q-space imputation happens much later, and the overall file size is larger than when coding each volume independently. This is explained by the fact that our codec always applies difference coding to the b = 0 images, and that this becomes detrimental when those images are strongly misaligned.
Effect of Re-ordering DWIs
Since q-space imputation relies on the previously (de)coded diffusion weighted images, its accuracy depends on the order in which we process the gradient directions.
Two contradictory greedy strategies are illustrated in Figure 7: Always selecting the closest gradient direction, i.e., the one with the smallest angular distance from the already known ones, can be expected to result in the most accurate prediction, in the same spirit as our spatial codec Table 2 Compressed file sizes when omitting motion compensation, and the relative benefit from motion correction. Table 3 Compressed file size for scan 1 (without strong motion) when ordering the diffusion-weighted images differently. This affects the accuracy of q-space imputation.
On the other hand, the spatial codec starts with a seed mask that covers the full domain sparsely, but uniformly. Achieving something similar motivates selecting the gradient direction that is furthest from any of the known ones. Even though this strategy can be expected to lead to lower accuracy, and therefore to less compressible residuals in the first few iterations, later iterations might benefit from the more uniform coverage of the overall (spherical) domain. Table 3 presents the effect of these two selection strategies on final file sizes. The results are from the first scan, without strong motion. Overall, greedily selecting the furthest gradient vector gives slightly smaller overall file sizes. Therefore, this is the strategy that we followed in all other experiments.
Conclusion
In this work, we introduced a PDE-based lossless image compression codec that explicitly exploits both the spatial and the q-space structure in diffusion MRI. To our knowledge, it is the first codec that has been tailored to this type of data.
We demonstrated a clear improvement over PDEbased codecs that treat each volume separately, and over other established baselines including GZIP and spatial codecs from the JPEG family.
We evaluated several variants of our codec, and found that q-space predictions with linear homogeneous diffusion permitted the highest compression rates among them. With our proposed method for accelerated computation, it could also be applied at a very reasonable computational cost. We further demonstrated the importance of including motion correction, and propose an efficient implementation that is based on affine image transformations via a common reference. Finally, we found that the order of coding the diffusionweighted volumes had a relatively minor effect, but that a greedy strategy that strives to cover the sphere as uniformly as possible provides a small benefit.
In the future, one might attempt to replace the switching between image space and q-space inpainting with a PDE that jointly operates on the product space. However, this is likely to substantially increase the computational effort, and introduces the issue of properly balancing image space and q-space diffusion. Similarly, employing nonlinear PDEs for q-space predictions might further increase compression rates, but is likely to cause a high computational cost. | 8,708 | 2022-06-14T00:00:00.000 | [
"Computer Science"
] |
Exploring mixture estimators in stratified random sampling
Advancements in sensor technology have brought a revolution in data generation. Therefore, the study variable and several linearly related auxiliary variables are recorded due to cost-effectiveness and ease of recording. These auxiliary variables are commonly observed as quantitative and qualitative (attributes) variables and are jointly used to estimate the study variable’s population mean using a mixture estimator. For this purpose, this work proposes a family of generalized mixture estimators under stratified sampling to increase efficiency under symmetrical and asymmetrical distributions and study the estimator’s behavior for different sample sizes for its convergence to the Normal distribution. It is found that the proposed estimator estimates the population mean of the study variable with more precision than the competitor estimators under Normal, Uniform, Weibull, and Gamma distributions. It is also revealed that the proposed estimator follows the Cauchy distribution when the sample size is less than 35; otherwise, it converges to normality. Furthermore, the implementation of two real-life datasets related to the health and finance sectors is also presented to support the proposed estimator’s significance.
Introduction
Sampling is a procedure of selecting a representative fraction of a population so that one may observe and estimate something about the characteristics of interest for the entire population [1,2].A primary objective of survey sampling is to achieve a practical design for the survey to attain an adequate sample size for estimating the parameters of interest for the population under study.Survey sampling has several advantages over a full population study, including lower resource consumption, shorter turnaround times, and lower costs [3].Moreover, it also provides a basis for acquiring precise and useful parameter estimations.
Additionally, survey sampling makes generating accurate and efficient parameter estimates easier.Survey statisticians can improve an estimator's efficiency by enhancing the sampling technique, boosting the sample size, or employing auxiliary data.Auxiliary information about the population can consist of a known variable to which the study variable is approximately related.Typically, this auxiliary information is easy to quantify, whereas measuring the study variable itself can be costly [4].By using this additional data, which includes characteristics and variables that are highly correlated with the variable of interest, the estimation process can improve the accuracy of the study variable's mean [5,6].While measuring the study variable can often be expensive, the auxiliary information is usually easy to quantify.The i th element of Y may be correlated with one or more auxiliary variables (X i ) in addition to the study variable Y.The average elevation, area, and type of vegetation in a cattle field are examples of auxiliary variables that may be included if the study variable is the number of animals in the field.Auxiliary data is used in survey sampling for three main purposes: pre-selection, selection (i.e., selecting units that correspond to the study variable based on the auxiliary variable), and estimation (i.e., creating estimators of the ratio, product, and regression type).
In many studies, survey statisticians have been keenly interested in estimating parameters for the heterogeneous population.Therefore, for the heterogeneous population, Neyman [7] introduced Stratified Random Sampling (StRS), in which the population is partitioned into groups called "strata."Then a sample is chosen by some pattern within each stratum, and independent selections are made in different groups.Stratification is a probability sampling design used to increase the precision of estimation [8].StRS's sampling methodology ensures that every demographic group is adequately represented in the sample.For small sample sizes, adequate precision and accuracy may be achieved by using StRS's procedure.In sample selection, the StRS design helps to minimize bias.From an organizational point of view, stratified sampling is very convenient.Moreover, it is also recommended that auxiliary information be used in the StRS population parameter estimation process, which is useful for comparing estimates among several population groups.For more recent studies on StRS, see studies [9][10][11] and references therein.
Historically, numerous researchers have independently utilized auxiliary variables and attributes, proposing diverse estimators within the StRS framework to enhance efficiency [12].Cochran [13] developed classical ratio and regression methods to calculate the study variable's population mean.Graunt [14] was the first known user of the ratio estimator.When estimating using a ratio, Cochran first used auxiliary information.To calculate the population mean, Robson [15] suggested product type estimators by incorporating the ancillary data.Kadilar and Cingi [16] proposed the estimator when the population coefficient of skewness and kurtosis are unknown in stratified random sampling.In the estimation of population mean three cases of using auxiliary information have been suggested by Samiuddin and Hanif [17], such as no information, full information, and partial information.Ahmad, Hanif [18] suggested a modified and efficient estimator of the population mean using two auxiliary variables in survey sampling.Ahmad, Hanif [19] established a generalized multi-phase multivariate regression estimator with the help of several auxiliary variables.To find the population mean, Moeen, Shahbaz [20] suggested mixture estimators by simultaneously utilizing auxiliary variables and attributes.Double-phase and multi-phase sampling of a vector of variables of interest are used to calculate the population mean.Malik and Singh [21] also worked on the stratified sampling estimator.In single-phase sampling, Verma, Sharma [22] suggested some modified regression-cum-ratio, and exponential ratio type estimators.The improved version of the exponential estimator of the mean under StRS, initially proposed by Zaman [23], is presented by Singh, Ragen [24], and its properties for big sample approximation.Singh, Ragen [24] also suggested a class of estimators of population variance along with their asymptotic properties.Zaman [25] established a class of ratio-type estimators with the help of auxiliary attributes to calculate the population's mean.Under the StRS design, mixture regression cum ratio estimators in a single-phase scheme were established by Moeen [26].Yadav and Zaman [27] suggested a class of estimators using both conventional and non-conventional auxiliary variable parameters.Under StRS design, using an auxiliary attribute, Zaman and Kadilar [28] suggested exponential ratio estimators for stratified two-phase sampling.Lawson and Thongsak [29] introduce a novel set of population mean estimators designed for use in stratified random sampling.Their study examines the bias and mean square error of these estimators using Taylor series approximation.Through simulation and application to air pollution data in northern Thailand, they evaluate the performance of the estimators.Results from the air pollution data show that the proposed estimators outperform others in terms of efficiency.
Many sampling survey investigations seek to devise an efficient estimator for the population mean.Numerous studies have been designed to pursue this aim, incorporating various adaptations to classical ratio, product, and regression estimators utilizing SRS and StRS.Kadilar and Cingi [16] and Zaman [25] employed auxiliary variables and attributes, respectively, proposing different estimators within the StRS framework.However, these estimators prove beneficial only in specific scenarios, such as when auxiliary variables or attributes are solely utilized to estimate the population mean of the study variable.Consequently, a gap exists in the literature concerning the simultaneous utilization of auxiliary attributes and auxiliary variables alongside the study variable to estimate the population parameter.For example, in a household survey, income, expenditures, family size, number of employed, and number of literate persons are related variables that can be used as auxiliary information for estimating any characteristic.So, from the above example, we can use income (a quantitative variable) and family size (a qualitative variable) simultaneously to estimate the expenditure (a study variable).Therefore, the current article aims to propose a class of generalized mixture ratio estimators to estimate the population mean of the study variable by simultaneously incorporating the auxiliary attributes (qualitative) and variables (quantitative) in stratified random sampling (StRS).Therefore, the suggested estimators could be used in various sampling surveys.
The subsequent sections of the paper are organized as follows: Section: "Notations under Stratified Random Sampling" offers an overview of stratified random sampling.Section: "A Family of Proposed Estimators under Stratified Random Sampling" introduces the proposed estimators.In Section: "A Simulation Study", a comparative analysis is conducted based on the simulation study.Section: "Illustrative Examples" showcases two real-life examples.Finally, Section: "Conclusion and Recommendations" outlines the conclusions drawn from the study and provides recommendations for future research.
Notations under stratified random sampling
Let N denote the size of the population, s say that an attribute j = 1,2,3,. ..,m is dichotomized in the population based on its presence or absence; the attribute's values should be "0" and "1" correspondingly.
if h th unit of the population possesses attribute otherwise ( Under stratified random sampling, we take into account the following notations in order to determine the biases and mean square error (MSE) of the suggested estimator:
Relevant estimators under StRS
This section discusses some existing estimators and their bias and mean square error (MSE).Kadilar and Cingi [16] proposed the following estimators: In stratified random sampling by using the auxiliary attribute, Zaman [25] proposed the following estimators:
A family of proposed estimators under stratified random sampling
Within this section, we introduce a novel class of generalized mixture estimator, building upon the framework proposed by Zaman [25]; this estimator will be suitable for the estimation of the population mean � Y , incorporating the concurrent utilization of auxiliary variables and attributes within a stratified random sampling context. where Given that where a and K are constant, with a taking on the values 0 and 1 and K 2 R. Consequently, K 1 ,K 2 ,K 3 ,and K 4 may consist of any real number.Eq (19) can be rewritten as follows using Eq (20): However, for the h th stratum, let T KM h be a mixture estimator of the population mean given as follows:
� �
Similarly, we can rewrite Eq (23) as follows by using � y h as a common factor: Using notations from preceding section, the bias of T KM h is derived as follows, and after simplification, we obtain, Similarly, using notations from preceding section and after simplification, we get, Hence, the Bias expression of T KM st is obtained as follows: and : ð27Þ Moreover, the following simplification can be used to determine the maximum value of " a ": Adding the value of a in Eq (27) and after simplification, we obtained, Hence, the mean square error (MSE) expression of T KM st is obtained as follows: The complete derivation of BiasðT KM h Þ and MSEðT KM h Þ given in Eqs (25)(26)(27)(28)(29) are provided in S1 Appendix.
Unique cases of the proposed mixture estimator
In this section, we will delve into specific cases of the proposed mixture estimator, examining various combinations of constants.While Tables 1 and 2 highlight certain special cases of the estimator, exploring additional combinations of constants can reveal further special cases not covered in the tables.
A simulation study
We conducted a comprehensive simulation study to assess the proposed estimator's effectiveness.This involved comparing the performance of our estimator with that of several alternative estimators under StRS conditions.The percent relative efficiency (PRE) served as the criterion for evaluating estimator performance.We followed the steps outlined below to compute the PREs for our proposed estimator under StRS.
1.A simulated population comprising 1500 observations is established for the study variable Y. X serves as an auxiliary variable, while P represents an auxiliary attribute.Random selections are made from Normal and Uniform (symmetric), Gamma, and Weibull (asymmetric) distributions across 10,000 samples of specified sizes.Table 3 outlines the methodology for utilizing the Bernoulli distribution with specific parameter values to generate the auxiliary attribute.Moreover, population size is considered as N = N 1 + N 2 = 800 + 700 = 1500.
2. The given proportional allocation formula is used to determine the sample size from the stratum.
Using StRS, the population was divided into two strata: 800 and 700.Further, 10,000 random samples of size 20 are drawn by taking 12 units from stratum one and 8 units from stratum two.Next, 10,000 random samples of size 50 are drawn by taking 30 units from stratum one and 20 units from stratum two.Similarly, using the proportional allocation scheme, the next 10,000 random samples of size 80 are obtained by selecting 50 units from stratum one and 30 units from stratum two.Next, 10,000 random samples of size 200 are drawn by taking 120 units from stratum one and 80 units from stratum two.Further next, 10,000 random samples of size 400 are drawn by taking 280 units from stratum one and 120 units from stratum two.The results are shown in Table 4.
1.The PREs of estimators are calculated by using the following expression: where ây; and "i" stands for the estimator, whose performance needs to be compared.To check the efficiency of proposed estimator the expression of MSE given in Eq (30) has been utilized.Tables 5 and 6 present the PREs of the proposed estimator in comparison to those of competing estimators.We also assessed the impact of sample size on MSEs and employed sample sizes of 20, 50,80, 200, and when compared to the other estimators under consideration.Additionally, it was noted that the percent relative efficiency (PRE) values demonstrated an increase as sample sizes increased, as illustrated in Fig 1.
Exploring the Best-Fitted distribution
This section explores the most appropriate probability distribution for the proposed generalized mixture estimator across various sample sizes using EasyFit [30].EasyFit employs two methods, namely the Kolmogorov-Smirnov and Anderson-Darling tests, for this purpose.The auxiliary variables are generated from a normal distribution, while the auxiliary attribute is generated from a binomial distribution using R language (version 4.2.2), with parameters specified in Table 7.We select one thousand samples of sizes n = 20, n = 50, and n = 80 and then construct the sampling distribution of the proposed estimator.As depicted in Fig 2, the proposed generalized mixture estimator conforms to the Normal distribution, with the scale parameter and location parameter computed as 3.13 and 5.30, respectively.
Exploring the distribution of the proposed estimator for different sample sizes
This section examines how the suggested generalized mixture estimator's distribution behaves for different sample sizes.
Table 8 displays the probability distribution of the estimator for n = 20, 50, and 80.When the initial data is derived from the Normal distribution for a sample size of n = 20, the proposed estimator follows the Cauchy distribution, with the cutoff point being n = 35, for which the proposed estimator's probability distribution is converged to be Normal.Hence, the proposed estimator follows the normal distribution for n = 50 and 80. Table 9 illustrates the probability distributions of the proposed estimator against each value of n.The Kolmogorov-Smirnov results indicate that the p-values exceed 5%, supporting the hypotheses (H 0 ) that the data adhere to the specified distribution.
Illustrative examples
In this section, we present two real-life examples to illustrate the practical application of the proposed estimator.The distribution of study and auxiliary variables in each of the two data sets is discovered using EasyFit version 5.5 professional.The details of each of the data sets are given below.
Data-I: Tumor data
Data-I has been taken from Andersen, Borgan [31].We are interested in estimating the average thickness of the tumor by including auxiliary information.The data consists of 205 entities.StRS has been used, and the given variables and attributes have been considered.Y: Thickness of tumor (mm), X: Age of patient (at operation time), and P: Gender (0 = Male, 1 = Female) are used as auxiliary variable and attribute.The variable 'whether a patient was ulcerated or not' has been used for stratification purposes.Those patients who are not ulcerated are placed in stratum 1, while stratum 2 consists of the remaining patients.In Table 10, the parameters of the data have been presented.The population of size 205 is split into two strata, with sizes 115 and 90, respectively, as Table 10 illustrates.The proportional allocation scheme has been utilized to select random samples of sizes 12, 30, 50, and 8, 20, 30, and PREs are computed.In stratum 1, the average thickness of the tumor is 1.8113, while in stratum 2, its value is 4.3433.The variations in the tumor thickness in strata 1 and 2 are 4.7364 and 10.4244, respectively.The average age is 52.463, and the average attribute gender is 0.385.The variation in age is 277.95, and in gender, it is 0.238.
Table 11 shows that the original data set contained 28155 observations.The observations are divided into two strata of sizes 25631 and 2524, respectively.The proportional allocation scheme has been utilized to select random samples of sizes 12, 30, 50, and 8, 20, 30, and PREs is computed.In stratum 1, the average wage is 640.2, while in stratum 2, the average wage is 233.73.In strata 1, the variation in wage is 197379.9,and in strata 2, its value is 139839.9.The average number of years of education is 13.067, and the average SMSA is 0.7435.Finally, the variation in the number of years of education is 8.408, and in SMSA, the variation is 0.1907.
Tables 12 and 13 shows the necessary calculations (estimated means and PRE's) for proposed and comparative estimators with respect to StRS for those two real-life datasets mentioned.We used 20, 50, and 80 sample sizes for each data set.In dataset I, T KM st with n = 20, has the highest PRE, coming in at 105.42.In a similar vein, for n = 50 and 80, T KM st additionally displays 105.98 and 105.99 as dominant PRE values, respectively.Similarly, the suggested generalized mixture estimator exhibits the highest PREs in dataset II.This implies that when compared to comparative estimators, the suggested estimator performs remarkably well and efficiently.Furthermore, it is clear that the PREs rise in tandem with larger sample sizes.In Fig 3, the suggested estimator's performance is further illustrated graphically.
Conclusion and recommendations
In different scenarios, using auxiliary information is a very useful strategy to improve the estimator's efficiency.For this purpose, we have used auxiliary variables and attributes simultaneously.In this study, we introduce a generalized family of mixture estimators under Stratified Random Sampling (StRS), inspired by the work of Zaman [25], aimed at enhancing efficiency we derive the Mean Squared Error (MSE) expressions for the proposed estimator, supported by simulation results.Notably, our proposed class of estimators outperforms competing estimators in terms of Percent Relative Efficiency (PRE).Through simulation studies and real-life applications in the health and finance sectors, we demonstrate that our proposed estimator consistently delivers superior results compared to competitors across Normal, Uniform, Weibull, and Gamma distributions.Ultimately, we conclude that the efficiency of the suggested estimator holds both theoretically and in practical settings.Moreover, we suggest extending the scope of this study to include other types of estimators, such as ratio, product, power, difference, exponential, and regression estimators, under stratified random sampling.
Fig 2 .
Fig 2. Probability distribution of generalized proposed mixture estimator.https://doi.org/10.1371/journal.pone.0307607.g002 across symmetric and asymmetric distributions for estimating finite population means.Additionally, we analyze the estimator's behavior across various sample sizes regarding its convergence to the Normal distribution.Our findings indicate that the proposed mixture estimator adheres to the Normal distribution for sample sizes greater than or equal to 35.Furthermore,
Fig 3 .
Fig 3. A comparison between the proposed estimator and competitive estimators for real-life datasets.(A) Data-I and (B) Data-II.https://doi.org/10.1371/journal.pone.0307607.g003 where the number of units in stratum h is represented by N h , n = n 1 + n 2 + n 3 +. ..‥ +n h +. ...+n k , where the number of sampling units in stratum h is represented by n h , Y stands for the study variable, X refers to the auxiliary variable, P is the population proportion auxiliary attribute, and X and xh are the variances of stratum h, C yh and C xh are the coefficients of variations in stratum h.r xy h is the correlation coefficient between X and Y in stratum h, β 2(xh) and β 1(xh) are the kurtosis and skewness in stratum h and C xyh ¼ r xyh C xh C yh Let 0 400.Under Normal distribution, for n = 20,50,80,200 and 400,the proposed generalized mixture estimator T KM st the highest PRE's are 102.98,103.35, 104.46,
Table 3 . Simulated auxiliary variables and attributes.
, and 107.31 respectively.Similarly, under the Uniform distribution, for n = 20,50,80,200 and 400, the proposed generalized mixture estimator T KM st the highest PREs were reported as 102.92, 103.56,103.91,105.56, and 106.23 respectively.Moreover, similar results are evident in Table 6, particularly under the Gamma and Weibull distributions, the proposed generalized mixture estimator T KM st demonstrates significantly superior PRE values https://doi.org/10.1371/journal.pone.0307607.t003106.21
Table 12 . Estimated sample means and proposed generalized mixture estimators' percent relative efficiencies (PREs) with comparative estimators' PREs for data set I. Estimators Estimated Sample Means Data I
https://doi.org/10.1371/journal.pone.0307607.t012 | 4,706.8 | 2024-09-17T00:00:00.000 | [
"Mathematics"
] |
The Geologic Process of The Saka River area: Related to the History Woyla Elevated Ocean in The South Sumatra Island Region, Republic of Indonesia
The lithological and earth structures which compose the geologic process space are terribly fascinating to study. elaborated investigation of pre-tertiary rock subduction at the Woyla web site is rarely carried out. the variability of rocks derived from the Woyla oceanic plate, that folded on the West Sumatra continental plate within the Age of Reptiles era, illustrates the magnitude of the subduction impact mirrored in the structures that are still reflected in the abandoned rocks. The ways want to discover this subduction event are elaborated field observations, skinny section, XRD, and earth science structure measurements, supported by drones and satellite imagery. The lithology of basalt, flint, serpentine, marble, and arenaceous rock is vital to the presence of the Intraoceanic Arch of Woyla within the Saka phase. elaborated structural calculation show that the Saka segment went through several tectonic stages from the Mesozoic to Recent, that is mirrored in the Saka fault and therefore the Penanggungan fault.
Introduction
A geological process that happens in the Mesozoic is that the results of the encounter of the continent part of West Island with the interoceanic arc of Woyla [1]. West island consists of inferior metamorphic rocks like phyllite, slate, native slate, quartzite, and marble, that were another to the Tarap Formation consistent with [3]. The Tarap Formation correlates with the Tarantam and Kuantan Formations in Central Sumatra and therefore the Gunung-Kasih complicated in Lampung [2]. The Interoceanic Arch of Woyla consists of a volcanic mixture, particularly basalt, and andesite, whereas the oceanic aggregate consists of ocean rock within the type of flint and serpentine. The administrative position of the Saka phase is found on the Saka stream in the space from Buay Rujung Agung to Buay Runjung ( Figure 1).
The interpretation of the study case
The Saka segment is found within the western a part of the Garba advanced and runs from the NW-SE. The Saka stream localized in an exceedingly steep basement rock-type depression with a lithology of granite, basalt, and andesite with inserts of flint and snakelike marble. The local rock has a similar fracture scheme that runs parallel to the flow scheme. This flow is one among the foremost ideal manifestations of the dissemination of structural information during this study, there's a displacement of the fault block headstream, characterised by the existence of boudins developing within the scheme of the injury zone. supported field observations, there are alternative lithological entities additional down the Woyla Ocean Accretion Block, similar to basalt, full of spar veins and shells, and snakelike ( Figure 2).
Method of research
Elaborate the observed field is key to the present investigation. Rock samples, data of structural geology, and therefore the connection between the rocks were disbursed thoroughly in the Saka River area. The analysed of the sample are afterwards transferred through thin sampled section technique and might be employed in microstructure analysis. The steps of the sampling method embrace rock lifting, sample marking, and panel cutting, that are then withdraw thin sections for analysed by petrographic method. a number of the samples were analysed by geochemical tool -XRD, that was disbursed at the middle for earth science Studies, to see the mineral and element composition. Aerial picture analysis, therefore the use of drones within the field are used to support the structural lines.
The result and discussion
Localities of the Saka's lithology characterized as the rocks of the marine accretion rocks, that is include igneous rocks, farrago rocks, and matter rocks. The headstream a part of the Saka stream consists of igneous rock, consisting of take a look at rock compass point 1A Monzo-Gabro, SW 1B Gabbro, SW 1C mafic rock, IFFG 8B igneous rock Basalt, IFFG 8C Diorite, IDW part disturbed Andesite. The results of the microscopic analysis show that the dimensions of the Monzo-Gabro crystal & lt; 1 2.5 mm, holocrystalline, fine phaneritic, with non-uniform allotriomorphic grain relationships consisting of biotite, pyroxene, opaque ( Fig. 3) minerals, quartz, in some places some oligoclase blood vessel calcite, chlorite, sericite, feldspar, augite, and diopside. igneous rock is created of plagioclase, clay minerals, opaque minerals, and quartz. The volcanic rock andesite with crystallinity grade as holocrystalline, irregular-grained porphyrite, fine phaneritic within the sample of IFFG 8B consists of sericite, plagioclase, biotite, opaque mineral, stone spar, and diopside. The analysis of the petrographic sample shows that some plates have a structural segmentation. igneous rock is formed like AN eedric anhedral crystal made from quartz, calcite, plagioclase, opaque minerals, sericite, chlorite, chemical compound minerals, and some clay substance found within the centre of the path. The omnium-gatherum rocks here as marble, flint, curved and red limestone. Marble contains parted shells, flint, and serpentine are the keymarks of this area, that is during a melange complex. The serpentine minerals are extracted from the skinny section in IFFG eight N. The results of the XRD test confirmed that the existence of Faujasite, antigorite, quisotile, and nacrite minerals counsel the existence of serpentines within the Saka watercourse (Figure 4). Table 1 depicts the power load that will be applied to the planned system, as well as how the power will vary before and after the filter is installed, with an emphasis on reactive power (Q). It is well known that passive filters can create reactive power owing to the existence of a capacitor component in the passive filter, so that reactive power withdrawals at PLN can be reduced to avoid fines due to excessive reactive power withdrawals when PLN has a set norm at low PF. The total design power required may be computed as follows: The next step is to enter the data into the ETAP program using the data that has been designed, Table 4 shows the outcome of the system load flow that was created. Silicon dioxide during this space is understood as flint within the Insu Bukit Situlanglang area. Dated fint on Situlanglang ridge that is an element of the geological period Garba Complex [6]. At the bottomstream of the Saka riverarea, there are matter rocks in the sort of lithic wacke to feldspathic wacke ( Figure 5). The Saka part is processed by a horizontal fault structure, specifically, the Saka Fault trending NW-SE and several other native horizontal faults absolute to the NE-SW associate degreed NE-SE that forms a pull-apart scheme [7]. The aspect of the faults throughout this phase tend to be characterised by an oversized steps structural pattern on the watercourse wall, forming trenches or pull apart with upright dips of fault (87-900) and further expanding to the east. This scheme indicates a relative movement, exactly a sinistral shift with the pointsou'-sou'-west pattern. The fault could be an element of the fault system that's understood via the lower lineament of the digital elevation model Saka Fault. The sediment quality is alleged to be the history of the sediment formation and its natural science processes among the past [8].
The observed structure that has localized at 3 points, specifically IWD and SW1, those regions included to the deformation changes scheme of the Penanggungan Fault. Generally, the plate is processed by a merge of harm zone and parting system trending N-S and SW-NW, the native metamorphic folded structure within the serpentinite certain N940-1000E (Fig 6). Then the region was layered with the earlier analysis to connect the structural continuity and Poly-history deformation within the Saka segment. The result of Poly-history analysis showing the sinistral fault scheme at the observation site could also be a product of an after deformation from the Saka Fault throughout the Pleistocene epoch period. this is often often shown from the assorted fault patterns and rates, where the Penanggungan Fault cuts the Saka Fault as approach as ± 547 m and relatively tend to ENE-WSW (N700E / N2500E). In geometric terms, this fault isn't capsulate among the conjugate / completely different of the Saka Fault, although the sort is relatively sinistral. this is often often a results of the two faults that don't kind academic degree {oblique associate degreegle|angle} but an oblique angle >900 (α=100 and 10). This fault as the second advanced order (D2) that's processed by the deformation method of the natural action Line. supported this, if it' involving regional or earlier research, the Penanggungan Fault is enclosed within the Glacial epoch Garba difficult Transtension product. The structural activity is alleged to be the tectonic setting that's succeeding its structural pattern [9], [10]. The intrusions of Granite that exist across the Saka stream show that the granite as the results of melting rocks due to the geologic process that occurred at intervals the Mesozoic. The granite distribution as a part of the igneous rock in Garba to the Liki, Sui, Gilas, Meniting, Pisang areas to the Lubar space [5]. chemical analysis generated on granite exploitation the K / unit methodology as the Late Cretaceous age [4].
Inference
The Saka line concluded as a component of the Woyla Oceanic increasing is; 1) Igneous rock with characterized as the mid oceanic ridge (MOR). 2) Marble, chert, curving, and matter rocks indicated as lithic wacke to feldspathic wacke that these rocks originate from oceanic increasing at intervals the geologic process zone. 3) The observed sinistral fault might be an after-deformation product of the Saka Fault throughout the Glacial epoch and Penanggungan Fault has been surrounded at intervals the Pleistocene Garba advanced Transtension product. | 2,200.8 | 2021-11-01T00:00:00.000 | [
"Geology"
] |
Convolution Properties of p-Valent Functions Associated with a Generalization of the Srivastava-Attiya Operator
LetAp denote the class of functions analytic in the open unit disc U and given by the series f(z) = z p + ∑ ∞ n=p+1 anz . For f ∈ Ap, the transformationI p,δ : Ap → Ap defined byI λ p,δ f(z) = z + ∑ ∞ n=p+1 ((p + δ)/(n + δ))λanz , (δ + p ∈ C\Z 0 , λ ∈ C; z ∈ U), has been recently studied as fractional differintegral operator by Mishra and Gochhayat (2010) . In the present paper, we observed thatI p,δ can also be viewed as a generalization of the Srivastava-Attiya operator. Convolution preserving properties for a class of multivalent analytic functions involving an adaptation of the popular Srivastava-Attiya transform are investigated.
Introduction and Preliminaries
Let A be the class of functions analytic in the open unit disk U := { : ∈ C, || < 1} . ( Suppose that and are in A. We say that is subordinate to (or is superordinate to ), written as if there exists a function ∈ A, satisfying the conditions of the Schwarz lemma (i.e., (0) = 0 and |()| < 1) such that () = ( ()) ( ∈ U) .
By making use of the following normalized function: Srivastava-Attiya [2] introduced the linear operator L , : A 1 → A 1 by the following series: where the function ∈ A 1 is, respectively, by The operator L , is now popularly known in the literature as the Srivastava-Attiya operator.Various basic properties of L , are systematically investigated in [6][7][8][9][10][11].
For a function ∈ A and represented by the series (8), the transformation defined by has been recently studied as fractional differintegral operator by the authors [12].We observed that I , can also be viewed as a generalization of the Srivastava-Attiya operator (take = 1, = , = in ( 14)), suitable for the study of multivalent functions.(Also see [13] for a variant.)Furthermore, transformation I , generalizes several previously studied familiar operators.For example taking = 0 we get the identity transformation; the choices = −1, = 0 yield the Alexander transformation and = 1, a negative integer, = 0 give the S ȃ l ȃ gean operator.Some more interesting particular cases are also pointed out by the authors in [12] (also see [14]).
Using (14) it can be verified that For the functions () ∈ A given by their Hadamard product (or convolution) is defined by Observe that when = ∈ N, the operator I , given by ( 14) can be represented in terms of convolution as follows: where In the sequel to earlier investigations, in the present paper we find a convolution result involving I , is also presented.With a view to state a well-known result, we denote by ℘() the class of functions as follows: where The result is the best possible for 1 = 2 = −1.
Proof.Suppose that each of the functions () ∈ A ( = 1, 2) satisfies the condition (22).Set Then, by making use of the identity (15) in (26) we get Therefore, a simple computation, by using ( 24) and ( 27), shows that where The proof will be completed by finding the best possible lower bound for ℜ( 0 ()).A change of variable also gives Since () ∈ ℘( ) ( = 1, 2), where = ((1 − )/(1 − )) ( = 1, 2), it follows from a result in [15] that and the bound 3 is the best possible.An application of Lemma 1, in (30), yields In order to show that is the best possible in the assertion (23) when 1 = 2 = −1, we consider the function () ∈ A given by It is readily checked that satisfies (22) with = −1.Since it follows from (30) and Lemma 1 that This completes the proof of Theorem 2. | 915.6 | 2013-02-07T00:00:00.000 | [
"Mathematics"
] |
EXCITATION OF AUTOIONIZING STATES IN POTASSIUM BY ELECTRON IMPACT
The ejected-electron excitation functions for the 3 p3d4s P3/2, DJ lowest quartet and 3p3d4s(P)P3/2 upper doublet autoionizing states in potassium hav e been measured with an energy resolution of 0.2 eV over t h electron impact energy region from the lowest excitation threshold up to 500 eV. The detailed picture of the excitation dynamics of the P3/2, Dj and P3/2 levels in the near-threshold impact energy region has been achieved due to the use of a small increment of the incident electron energy. An analysis of the data, including the cons ideration of resonance and cascade excitation processes, has been performed on th e base of available experimental and theoretical data on the 3p -excitation in potassium atoms.
INTRODUCTION
The pioneering experimental studies of electron impact ionization of alkali atoms performed by Aleksakhin and Zapesochny [1] have revealed an important role of autoionization in this process.As ionization processes play the dominant role in the different kinds of plasma, including laser plasma [2], their intense experimental and theoretical studies have been started in the middle of seventies (see e.g.[3,4] and references therein).However, only a few works are known on measuring the electron impact excitation functions of core-excited autoionizing states in potassium.First such data were obtained for the (3p 5 4s3d) 4 F and (3p 5 4s4p) 4 D metastable states at the energy resolution of 0.3 and 0.08 eV, respectively [5,6].The strong resonances revealed in the threshold impact energy region were preliminarily attributed by the authors to the negative-ion states.Also, the role of cascade processes has been discussed in the excitation of the quartet states.The total excitation cross-section for the (3p 5 4s 2 ) 2 P 1/2,3/2 lowest autoionizing doublet has been measured recently in [3,7] with the energy spread of the incident electrons equal to 0.7 and 0.25 eV, respectively.Using the results of extended R-matrix calculations, a strong near-threshold structure has been attributed by the authors also to the negativeion resonances.Later, these data have been analyzed with the aim to find the role of cascade processes in excitation of the (3p 5 4s 2 ) 2 P levels [8].
In the present work, we have studied the ejected-electron excitation functions for the (3p 5 3d4s) 4 P 3/2 , 4 D J lowest quartet and 3p 5 3d4s ( 1 P) 2 P 3/2 upper doublet autoionizing states in potassium atoms in the impact energy region from the excitation threshold of levels up to 500 eV.By using an improved incident electron energy resolution of 0.2 eV and a small incremental step of the incident electron beam energy, the dynamics of electron impact excitation of the 4 P 3/2 , 4 D J and ( 1 P) 2 P 3/2 states was studied in detail, with analyzing the role of resonance and cascade excitation processes.
EXPERIMENTAL
Current measurements were performed on apparatus which consisted of a 127 o electrostatic cylindrical monochromator, electron analyzer, and an atomic beam source (fig.1).The monochromator with the mean radius of electron trajectory of 30 mm allowed obtaining an electron beam in the 16 ÷ 100 eV energy region with the intensity of 0.1 mkA and energy spread of 0.2 eV (FWHM) [9].The ejected-electron analyser with the mean radius of 12.7 mm and the energy resolution of 0.15 eV was located at the observation angle of 75 o .The resistively heated source of potassium vapours [10] provided their density in the interaction region of about 10 12 at⋅cm -3 .The ejectedelectron excitation functions have been obtained as an incident-electron energy dependence of line intensities corresponding to the electron decay of the 4 P 3/2 , 4 D J and ( 1 P) 2 P 3/2 autoionizing states.Note an essential peculiarity of such measurementsinstead of measuring the intensity of a single line (as it is done in traditional optical measurements [11]), the whole ejectedelectron spectrum (or its part) is measured for particular impact energy value.In the present work the spectra were measured step-by-step for different values of the incident electron energy.The intensity of the spectra was automatically normalized on the corresponding incident electron beam intensity by a "current-to-frequency" converter.All data acquisition procedure has been automatically controlled by the computerized detection system [12].An uncertainty in estimation of the line intensity depended on the detected signal level providing the necessary statistics of the data acquisition.In present measurements, it did not exceed 20% for the lowest line intensities in potassium spectra.The incident-electron and ejected-electron energy scales were calibrated by using photoabsorption data [13] for the excitation threshold of the (3p 5 4s 2 ) 2 P 3/2 state at 18.722 eV.The uncertainties of both energy scales were estimated as ± 100 meV and ± 50 meV, respectively.
RESULTS
In figure 2, the ejected-electron excitation functions for the (3p 5 3d4s) 4 P 3/2 , 4 D J and 3p 5 3d4s( 1 P) 2 P 3/2 autoionizing states in potassium atoms are shown in an incident electron energy region from the excitation thresholds of levels up to 500 eV.The excitation thresholds for these states are known to be 19.79;21.42 and 22.42 eV, respectively [14] (marked in figure 2 by dashed lines).Note, as it follows from the comparison of ejected-electron spectra measured at low impact energies [15], the 4 P 3/2 line dominates in the spectra.Therefore, the measured excitation function of this level has a minimal influence from the close lying fine-structure components.The same is valid for the ( 1 P) 2 P 3/2 state.The excitation function for the 4 D J states, on the other hand, reflects the summary excitation of all J-components, due to their much closer energy position and comparable excitation cross sections.As it can be seen, the excitation functions for the quartet states possess a resonance-like shape with main maxima at 27.7 eV ( 4 P 3/2 ) and 30.6 eV ( 4 D J ).Both functions start with well resolved near-threshold resonances at 21.1 and 21.5 eV, respectively.On the other hand, the excitation function of the 3p 5 4s3d( 1 P) 2 P 3/2 doublet state starts with a step-like rise of the cross section just above the excitation threshold (see arrow).Above 25 eV there is a smooth rise of the cross section up to the maximum value at about 140 eV.The data in the region 40÷50 eV were omitted from the present consideration due to the strong influence from the energy loss-spectra.
As it follows from the comparison of functions, their general shape agrees well with the character of corresponding excitation transition -spin-exchange for the 4 P 3/2 , 4 D J quartet states and dipole for the ( 1 P) 2 P 3/2 state.Hence, process (1) determines completely the excitation of the (3p 5 3d4s) 4 P 3/2 , 4 D J and ( 1 P) 2 P 3/2 states over the whole impact energy region studied in the present work.In this case processes ( 2) and ( 3) may be considered only as some contributions to the main direct excitation process.
It is well known that in the excitation functions the cascade transitions manifest themselves as a distinct rise of the cross section located close to the excitation threshold of cascading levels.In accordance with our previous analysis [6,8], the contribution from cascade transitions (process 2) to excitation of the low-lying quartet states in potassium may be expected only from the radiative decay of the 3p 5 4s4p quartet levels lying between 20 eV and 20.6 eV.Due to the high excitation energy of the (3p 5 3d4s) 4 D J level at 21.42 eV, such transitions can not influence the excitation of this level.The present data also do not reveal any remarkable cascade features at higher impact energies (see figure 2).The cascade contribution into the (3p 5 3d4s) 4 P 3/2 level, on the other hand, could be revealed in the initial part of its excitation function, in particular in the low-energy wing of the nearthreshold resonance at 21.1 eV.However, in order to reveal this process, the additional measurements should be performed with a The presence of the resonance features in all measured excitation functions just confirms the earlier observations [5][6][7] of the important role of negative-ions in electron impact excitation of the 3p 6 -subshell in potassium.The threshold resonance at 21.1 eV, observed in the excitation function for the (3p 5 3d4s) 4 P 3/2 , level may reflect the formation of a negative-ion state with a tentative configuration 3p 5 3d4s 2 .The same negative-ion configuration could also be responsible for the presence of the threshold resonance at 21.5 eV in the excitation of the (3p 5 3d4s) 4 D J level.
The near-threshold behaviour of the excitation cross section for the 3p 5 3d4s ( 1 P) 2 P 3/2 high-lying level points out the competitive role of the resonance and cascade processes.Indeed, the observed resonance-like rise of the cross-section may be attributed both to the mixture of closelying negative-ion resonances and to the cascade transitions, including those of resonance type, lying in the energy region 23÷30 eV.Unfortunately, at present there are no data on such processes in potassium at these energies.
CONCLUSIONS
In the present work, the electron impact excitation of the (3p 5 3d4s) 4 P 3/2 , 4 D J and ( 1 P) 2 P 3/2 autoionizing states in potassium atoms has been studied over the impact energy region from the lowest excitation threshold up to 500 eV at the incident electron energy resolution of 0.2 eV.The analysis of the data has shown that for all states considered the direct excitation process dominates over the whole impact energy region.The resonance features observed in the near-threshold region of the excitation functions point out the presence of strong negative-ion resonances in the excitation of all considered levels.In case of the (3p 5 3d4s) 4 P 3/2 , 4 D J quartet levels, these resonances can be tentatively attributed to the 3p 5 3d4s 2 configuration of negative potassium ion.For the 3p 5 3d4s( 1 P) 2 P 3/2 level the cascade transitions may also influence its excitation at threshold energies.
This work was supported, in part, by the INTAS under grant 03-51-4706.
Figure 1 .
Figure 1.Electron spectrometer for measuring the ejected-electron spectra of metal vapours: M -incident electron monochromator; A -analyzer of ejected electrons; BS -vapour source; FC -Faraday cup; CH -secondary electron multiplier; T -vapour trap.
Figure 2 .
Figure 2. The ejected-electron excitation functions for the 3p 5 4s3d autoionizing states.Vertical dashed lines mark the excitation thresholds of the states.
arb.u.) smaller incremental step of the electron impact energy. | 2,466.8 | 2008-01-01T00:00:00.000 | [
"Physics"
] |
Anisotropic SmFe10V2 Bulk Magnets with Enhanced Coercivity via Ball Milling Process
Anisotropic bulk magnets of ThMn12-type SmFe10V2 with a high coercivity (Hc) were successfully fabricated. Powders with varying particle sizes were prepared using the ball milling process, where the particle size was controlled with milling time. A decrease in Hc occurred in the heat-treated bulk pressed from large-sized powders, while heavy oxidation excessively occurred in small powders, leading to the decomposition of the SmFe10V2 (1–12) phase. The highest Hc of 8.9 kOe was achieved with powders ball-milled for 5 h due to the formation of the grain boundary phase. To improve the maximum energy product ((BH)max), which is only 2.15 MGOe in the isotropic bulk, anisotropic bulks were prepared using the same powders. The easy alignment direction, confirmed by XRD and EBSD measurements, was <002>. Significant enhancements were observed, with saturation magnetization (Ms) increasing from 59 to 79 emu/g and a remanence ratio (Mr/Ms) of 83.7%. (BH)max reaching 7.85 MGOe. For further improvement of magnetic properties, controlling oxidation is essential to form a uniform grain boundary phase and achieve perfect alignment with small grain size.
Introduction
The SmFe 12 -based compounds with a tetragonal ThMn 12 structure (space group I4/mmm) are considered to be promising candidates for new high-performance permanent magnets due to their intrinsic magnetic properties, high Fe content, and the relatively inexpensive rare-earth element Sm [1,2].It possesses higher theoretical magnetic performance compared to Nd-Fe-B sintered magnets.For example, the Sm(Fe 0.8 Co 0.2 ) 12 compound with the ThMn 12 -type structure has been realized in thin films, achieving a high M s of 1.78 T and a high anisotropy field of 12 T [3].However, there are no reports of SmFe 12 -binary compounds with high performance in the bulk magnet.Two main reasons account for this: Firstly, the SmFe 12 -binary phase is unstable and can only be produced in thin films.To form a stable SmFe 12−x M x phase, partial substitution of Fe with other stabilizing elements, such as Ti, V, Cr, Mo, W, Al, Si, and Ga, is necessary [4].Second is the low H c in the Sm(Fe 0.8 Co 0.2 ) 12 compound, which impedes the development of SmFe 12 -based bulk magnets [5].
In order to obtain stable bulk magnets with high coercivity H c , researchers have developed various methods to modify the microstructure.One effective approach is to decrease the grain size to the single domain size, which has been proven to enhance magnetic performance significantly.A common method for achieving nano-scale grain sizes is the melt spinning technique.There are many reports of obtaining the ThMn 12 -type magnets with good performance [6][7][8][9].The grain size can be controlled by heating the amorphous ribbons or by melt-spinning at specific speeds.Qian et al. [10] successfully 2 of 11 produced high-density Sm(Fe 0.8 Co 0.2 ) 11 Ti bulk magnets with average grain sizes ranging from 30 to 74 nm.An average grain size of 30 nm was achieved in melt-spun ribbons with enhanced H c , as reported by Zhao et al. [11].Additionally, the formation of a nonferromagnetic grain boundary phase (GBP) is effective in improving H c by providing magnetic decoupling among the main magnetic phase and hindering the movement of the domain wall.Elements such as Cu, Ga, B, and V have been found beneficial in forming the GBP in SmFe 12 -based magnets [8,[12][13][14].We successfully formed a Sm-Cu-Ga-rich GBP in heat-treated bulks with nano-scale grain sizes, resulting in enhanced H c [14].Liu et al. reported achieving a H c of 6 kOe in Sm(Fe 0.8 Co 0.2 ) 11 TiB 0.25 melt-spun ribbons with an average grain size of 150 nm [8].However, since an amorphous phase was obtained as an intermediate during this process, it is challenging to achieve oriented bulk magnets, which are crucial for improving (BH) max .Unlike Nd-Fe-B magnets, the hot-deformation process has been found to be ineffective in aligning SmFe 12 -based bulks [15,16].In V-substituted SmFe 12 -based compounds, high H c can be obtained even with a large grain size.High H c Sm-Fe-V-based oriented bulk magnets have been achieved with H c above 10 kOe by forming the Sm-rich GBP after hot-compacting jet-milled powders with a grain size of several microns [17,18].
In our previous work, high H c was successfully achieved in fully dense SmFe 10 V 2 isotropic bulk magnets using the jet-milling process [19].However, the magnetically aligned green bodies were too weak to retain their shape due to the good sphericity of the jet-milled powders.This resulted in a failure to press and sinter anisotropic bulks, as well as a high loss of powders during the jet-milling process.In this study, we used a ball milling process to prepare SmFe 10 V 2 powders.It is well known that ball-milled powders have an irregular shape, and the powder loss during this process is negligible.We investigated the influence of powder size, controlled by milling times, on the microstructure and magnetic properties.Subsequently, we prepared oriented bulks to study the impact of microstructure on their magnetic properties.
Experiments
The SmFe 10 V 2 ingots were synthesized through arc-melting using high-purity Sm (99.9%),Fe (99.95%), and V (99.9%) pieces.An additional Sm was introduced to compensate for Sm loss due to evaporation during the fabrication process and to facilitate the formation of a Sm-rich GBP.The following homogenization process was carried out at 1000 • C for 20 h under an Ar atmosphere.Subsequently, the homogenized ingots were crushed through hydrogen decrepitation at 250 • C for 5 h and manually ground into powders below 150 µm in size.The low-energy ball milling process was then used to further reduce the particle size.ZrO 2 milling balls with a size of 5 mm were used during the milling process.The weight ratio of the milling balls and powders is 20:1.Milling times ranged from 3 to 50 h to produce particles of different sizes, allowing the investigation of their influence on the microstructure and magnetic properties of the final heat-treated bulks.The 1.5 g as-milled powders (BM3-BM50) were pressed under 0.4 GPa using a stainless steel mold to obtain round, low-density green bodies with a diameter of 10 mm.Then, the green bodies were pressed again under 3.5 GPa, restricted by the same-thickness iron rings, to form high-density green bodies, followed by heat treatment at 1140 • C for 30 min under an Ar atmosphere.The heat-treated bulks were named from BM3-HT to BM50-HT depending on the milling time, as shown in Table 1.Furthermore, in order to improve the magnetic performance, the powders were initially pressed under a 10 kOe magnetic field with a pressure of 0.4 GPa and then pressed again under higher pressures.The powders inside the green bodies were arranged using the external magnetic field to obtain magnetic anisotropic bulks.The influence of pressure on the anisotropic bulks was investigated using stepped high pressures of 0.8, 1.0, 1.5, 2.6, 3.5, and 4.0 GPa.Finally, the anisotropic green bodies were heat-treated under the same conditions as the isotropic ones.Table 1 shows the sample names to distinguish all the samples during the experiment.The phase and crystalline structure were measured using X-ray diffraction (XRD, D/Max-2500VL/PC, Rigaku, Tokyo, Japan) analysis with Cu-Kα (λ = 1.5406Å) radiation.The magnetic properties of the bulk samples were examined using a vibrating sample magnetometer (VSM, MicroSense EZ9, KLA, Santa Clara, CA, USA), which can provide a maximum magnetic field of 27.5 kOe with a sample space of 10 mm at room temperature.Square samples weighing 20-100 mg, with side lengths smaller than 3 mm, were measured in a magnetic field ranging from −25 kOe to 25 kOe to obtain the hysteresis loops.Specimen density was measured with the Mettler Toledo Balance XPE205 using the Archimedes method.Microstructures were observed using scanning electron microscopy (SEM, JSM-6610LV, JEOL Ltd., Tokyo, Japan) and field emission SEM (FE-SEM, JSM-7800F, JEOL Ltd., Tokyo, Japan).Electron backscatter diffraction (EBSD, Oxford, Symmetry, JEOL Ltd., Tokyo, Japan) in FE-SEM (JSM-7800F) was used to investigate the crystal orientation of the 1-12 grains in the anisotropic bulks with a step size of 0.2 µm.The tetragonal SmFe 10 V 2 structure was used to index during the measurement.
Microstructures and Magnetic Properties of the Ball-Milled Powders
The H 2 decrepitated powder and the ball-milled powders with different milling times, ranging from 3 to 50 h, were analyzed using SEM, as shown in Figure 1.After grinding the H 2 decrepitated ingots, the powders exhibited a wide range of sizes, from several microns to 100 microns.The powder surface was smooth.There were some winding crucks in the large-size powders.The fractures made by H 2 decrepitation on the 1-12 grain boundaries limited the formation of multi-grain particles after the ball milling process.The powder size decreased with longer milling times.The maximum and minimum sizes of the powders with the same ball milling time varied significantly.The detailed distribution of powder size is shown in Figure S1 in the Supplementary File.The size of the powders in the same sample varies widely, ranging from less than 1 µm to 20 µm.The average powder size decreased as the ball milling time increased.A summary of the powder sizes is shown in Figure 2a according to Figure S1.Initially, the powders sharply decreased in size to nearly 3.5 µm.From 10 to 50 h of milling, there was a slight further decrease in powder size.According to Figure S2 in the Supplementary File, the average 1-12 grain size in the homogenized ingot is nearly 5 µm, along with smaller secondary grains.In the early stages of ball milling, the sharp decrease in powder size was due to the easy breakage of grain boundaries.The multi-grain powders were milled to the single-grain powders.However, as the milling continued, the powders, now consisting of single grain, were more resistant to breakage by ball milling.The break of 1-12 grains during prolonged milling accumulated stress.Then, the hysteresis loops of the powders, depending on the ball milling time, were measured using VSM, as shown in Figure 2b,c.The powders were magnetically aligned under the magnetic field of 25 kOe.Figure 2d summarizes the values of M r /M s and H c calculated using the hysteresis loops.M r /M s and H c increased first shapely and then slowly because of the tendency of the powder size.The high value of M r /M s in the BM7-50 powders indicated that most powders were single-grain particles.Since the powder size became finer, the number of single-grain particles and the smaller grain size improved M r /M s and H c .
Microstructure and Magnetic Properties of Heat-Treated Bulks Influenced by Ball Milling Time
After preparing powders of different sizes, the green bodies pressed from these powders were heat-treated to obtain the bulk magnets.Figure 3a shows the XRD patterns of the heat-treated bulks.The bulks included the main 1-12 phase, along with secondary SmO and α-(Fe 4 V) phases.The intensity of the peaks indicates that the amounts of the SmO and α-(Fe 4 V) phases significantly increased as the ball milling time extended.In order to clearly explain the changes in each phase with different ball milling times, the XRD refinements were calculated and summarized in Figure S3 and Figure 3b.The SmO phase increased rapidly with short ball milling times and continued to increase at a slower rate beyond 10 h.The trend of the SmO phase change was similar to that of the powder size.From Figures 1 and 2c, longer ball milling times resulted in finer particle sizes, leading to higher surface energy in the powders.Consequently, heavy oxidation occurred during the pressing and heat-treatment processes.The significant amount of SmO phase consumed free Sm, reducing the availability of Sm to form the Sm-rich GBP and the 1-12 phase.Conversely, the content of the α-(Fe 4 V) phase exhibited the opposite trend.It remained at a low level in the bulks from BM3-HT to BM7-HT but increased sharply from the BM10-HT sample and became a dominant phase with longer milling times.According to the analysis for the ball-milled powders, the accumulated stress in the powders decreased the stabilization of the ThMn 12 crystal structure, which is the main reason for the significant presence of the α-(Fe 4 V) phase when ball milling times were too long.In order to investigate the microstructure of the heat-treated bulks in-depth, we examined the phase distribution using FE-SEM measurements.Figure 4 shows the FE-SEM images of the BM5-HT, BM10-HT, and BM20-HT bulks.In these images, three distinct phases can be observed, each with different colors.The main gray area represents the 1-12 phase.Most of the white areas are the SmO phase, with some regions being the Sm-rich phase on the 1-12 grain boundaries, as indicated by the yellow arrows.The black areas correspond to the α-(Fe 4 V) phase, which increased with longer ball milling times.It is noteworthy that the α-(Fe 4 V) phase appeared at the grain boundaries, and the number of small α-(Fe 4 V) grains increased instead of their growth as ball milling time extended.This indicates that in bulks made from long-time ball-milled powders, for example, BM10-HT and BM20-HT bulks, the Sm-rich GBP was replaced by small α-(Fe 4 V) grains.However, due to the strong ferromagnetic properties of the α-(Fe 4 V) phase, achieving effective magnetic decoupling of the 1-12 grains is challenging [20].Figure 5a shows the demagnetization-corrected hysteresis loops of the heat-treated bulks, depending on the ball milling time.The summarized data is presented in Figure 5b.The magnetic properties were strongly influenced by the ball-milled powder size.According to reference and our previous works [19,21], the enhanced Hc resulted from the formation of the non-ferromagnetic Sm-rich GBP, which can promote magnetic decoupling of the 1-12 grains and hinder the domain wall motion.In relation to the XRD patterns in Figure 3 and SEM images in Figure 4, only the BM3-HT, BM5-HT, and BM7-HT Figure 5a shows the demagnetization-corrected hysteresis loops of the heat-treated bulks, depending on the ball milling time.The summarized data is presented in Figure 5b.The magnetic properties were strongly influenced by the ball-milled powder size.According to reference and our previous works [19,21], the enhanced H c resulted from the formation of the non-ferromagnetic Sm-rich GBP, which can promote magnetic decoupling of the 1-12 grains and hinder the domain wall motion.In relation to the XRD patterns in Figure 3 and SEM images in Figure 4, only the BM3-HT, BM5-HT, and BM7-HT bulks, which contained few α-(Fe 4 V) and SmO phases, exhibited high H c .As the ball milling time increased, heavy oxidation and decomposition of the 1-12 phase during the pressing and heat treatment processes led to the disappearance of the Sm-rich GBP.Consequently, H c sharply decreased due to the presence of a significant amount of α-(Fe 4 V) phase at the grain boundaries.Correspondingly, M s increased with the content of the α-(Fe 4 V) phase in the BM10-HT-BM50-HT samples.
bulks, which contained few α-(Fe4V) and SmO phases, exhibited high Hc.As the ball milling time increased, heavy oxidation and decomposition of the 1-12 phase during the pressing and heat treatment processes led to the disappearance of the Sm-rich GBP.Consequently, Hc sharply decreased due to the presence of a significant amount of α-(Fe4V) phase at the grain boundaries.Correspondingly, Ms increased with the content of the α-(Fe4V) phase in the BM10-HT-BM50-HT samples.
Anisotropic Bulks Modified by the Pressure
In Section 3.2, we successfully fabricated isotropic bulk magnets with a high Hc.However, due to their low magnetization and the low squareness of the hysteresis loops, (BH)max of the isotropic samples was very low, reaching only 2.15 MGOe in the BM5-HT sample, which had the highest Hc.Therefore, we attempted to fabricate anisotropic bulks to improve (BH)max.
Figure 6a shows the demagnetization-corrected hysteresis loops of the anisotropic bulks AP-04-AP-40, depending on the pressures applied during the high-pressure process.The detailed data are shown in Table S1 of the Supplementary File.High pressure strongly influences the microstructure and magnetic properties.Hc of the AP-04 sample pressed under 0.4 GPa is only 0.13 kOe, characteristic of a near-soft magnet.As the pressure increased to above 1.5 GPa, Hc remained between 9 and 10 kOe, with the highest value being 9.86 kOe in the AP-35 sample.It is evident that the Hc of anisotropic bulks was higher than that of the isotropic one, which was 8.91 kOe.Additionally, Ms at 23 kOe increased significantly from 59 emu/g to 73-79 emu/g, primarily due to the incomplete magnetic saturation in the isotropic bulks.Figure 6b summarizes the calculated value of Mr/Ms and (BH)max based on the hysteresis loops in Figure 6a.The value of Mr/Ms indicates the level of orientation in the anisotropic bulks.Higher pressure improved the orientation of the bulks up to 3.5 GPa.Beyond this point, at 4.0 GPa, the orientation decreased slightly.It is well known that high pressure leads to the significant deformation of green bodies.In our work, the pressure direction was perpendicular to the orientation direction, causing part deformation parallel to the orientation direction.Therefore, orientation improved with pressure up to 3.5 GPa, but excessive deformation at 4.0 GPa slightly reduced orientation.The highest Mr/Ms value, 83.7%, was achieved in the AP-35 bulk.High orientation is beneficial for (BH)max.In the anisotropic bulks, the highest (BH)max of 7.85 MGOe was achieved in the AP-35 bulk by the highest Ms, Mr/Ms, and Hc.
Anisotropic Bulks Modified by the Pressure
In Section 3.2, we successfully fabricated isotropic bulk magnets with a high H c .However, due to their low magnetization and the low squareness of the hysteresis loops, (BH) max of the isotropic samples was very low, reaching only 2.15 MGOe in the BM5-HT sample, which had the highest H c .Therefore, we attempted to fabricate anisotropic bulks to improve (BH) max .
Figure 6a shows the demagnetization-corrected hysteresis loops of the anisotropic bulks AP-04-AP-40, depending on the pressures applied during the high-pressure process.The detailed data are shown in Table S1 of the Supplementary File.High pressure strongly influences the microstructure and magnetic properties.H c of the AP-04 sample pressed under 0.4 GPa is only 0.13 kOe, characteristic of a near-soft magnet.As the pressure increased to above 1.5 GPa, H c remained between 9 and 10 kOe, with the highest value being 9.86 kOe in the AP-35 sample.It is evident that the H c of anisotropic bulks was higher than that of the isotropic one, which was 8.91 kOe.Additionally, M s at 23 kOe increased significantly from 59 emu/g to 73-79 emu/g, primarily due to the incomplete magnetic saturation in the isotropic bulks.Figure 6b summarizes the calculated value of M r /M s and (BH) max based on the hysteresis loops in Figure 6a.The value of M r /M s indicates the level of orientation in the anisotropic bulks.Higher pressure improved the orientation of the bulks up to 3.5 GPa.Beyond this point, at 4.0 GPa, the orientation decreased slightly.It is well known that high pressure leads to the significant deformation of green bodies.In our work, the pressure direction was perpendicular to the orientation direction, causing part deformation parallel to the orientation direction.Therefore, orientation improved with pressure up to 3.5 GPa, but excessive deformation at 4.0 GPa slightly reduced orientation.The highest M r /M s value, 83.7%, was achieved in the AP-35 bulk.High orientation is beneficial for (BH) max .In the anisotropic bulks, the highest (BH) max of 7.85 MGOe was achieved in the AP-35 bulk by the highest M s , M r /M s , and H c .
Following the VSM measurements in Figure 6, the magnetic properties of the anisotropic bulks were studied in detail.Subsequently, the microstructure and phase were examined to explain the origin of these magnetic properties.Figure 7 shows the BSE-SEM images of anisotropic bulks pressed under different pressures.A large number of voids were present in the AP-04 and AP-08 bulk due to the low pressure.The 1-12 grains were either isolated or directly connected with each other.The low H c was attributed to the absence of the Sm-rich GBP.As the pressure increased, both the size and number of voids in the heat-treated bulks decreased.When the pressure was above 1.5 GPa, the white GBP, identified as the Sm-rich Fe-V-lean phase, became distinguishable among the main 1-12 grains.The number and area of voids significantly decreased with increasing pressure.This observation correlates with the density measurements of the bulks, taken using the Archimedes method.The densities measured were 7.456, 7.538, 7.545, 7.579, 7.674, 7.709, and 7.716 g/cm³ for the AP-04-AP-40 bulks, respectively.Higher pressure during the pressing process resulted in higher density, which in turn improved (BH) max .S1 in the Supplementary File.
Following the VSM measurements in Figure 6, the magnetic properties of the anisotropic bulks were studied in detail.Subsequently, the microstructure and phase were examined to explain the origin of these magnetic properties.Figure 7 shows the BSE-SEM images of anisotropic bulks pressed under different pressures.A large number of voids were present in the AP-04 and AP-08 bulk due to the low pressure.The 1-12 grains were either isolated or directly connected with each other.The low Hc was attributed to the absence of the Sm-rich GBP.As the pressure increased, both the size and number of voids in the heat-treated bulks decreased.When the pressure was above 1.5 GPa, the white GBP, identified as the Sm-rich Fe-V-lean phase, became distinguishable among the main 1-12 grains.The number and area of voids significantly decreased with increasing pressure.This observation correlates with the density measurements of the bulks, taken using the Archimedes method.The densities measured were 7.456, 7.538, 7.545, 7.579, 7.674, 7.709, and 7.716 g/cm³ for the AP-04-AP-40 bulks, respectively.Higher pressure during the pressing process resulted in higher density, which in turn improved (BH)max. Figure 8a shows the XRD patterns of the anisotropic bulk AP-35, which is pressed under 3.5 GPa.The upper pattern was measured from the bulk in the direction perpendicular to the alignment direction, while the lower pattern was measured from powders ground from the same bulk.The sample includes the main 1-12 phase, SmO, and a small Following the VSM measurements in Figure 6, the magnetic properties of the anisotropic bulks were studied in detail.Subsequently, the microstructure and phase were examined to explain the origin of these magnetic properties.Figure 7 shows the BSE-SEM images of anisotropic bulks pressed under different pressures.A large number of voids were present in the AP-04 and AP-08 bulk due to the low pressure.The 1-12 grains were either isolated or directly connected with each other.The low Hc was attributed to the absence of the Sm-rich GBP.As the pressure increased, both the size and number of voids in the heat-treated bulks decreased.When the pressure was above 1.5 GPa, the white GBP, identified as the Sm-rich Fe-V-lean phase, became distinguishable among the main 1-12 grains.The number and area of voids significantly decreased with increasing pressure.This observation correlates with the density measurements of the bulks, taken using the Archimedes method.The densities measured were 7.456, 7.538, 7.545, 7.579, 7.674, 7.709, and 7.716 g/cm³ for the AP-04-AP-40 bulks, respectively.Higher pressure during the pressing process resulted in higher density, which in turn improved (BH)max. Figure 8a shows the XRD patterns of the anisotropic bulk AP-35, which is pressed under 3.5 GPa.The upper pattern was measured from the bulk in the direction perpendicular to the alignment direction, while the lower pattern was measured from powders ground from the same bulk.The sample includes the main 1-12 phase, SmO, and a small Figure 8a shows the XRD patterns of the anisotropic bulk AP-35, which is pressed under 3.5 GPa.The upper pattern was measured from the bulk in the direction perpendicular to the alignment direction, while the lower pattern was measured from powders ground from the same bulk.The sample includes the main 1-12 phase, SmO, and a small amount of α-(Fe 4 V) phase, similar to the isotropic sample.As seen in the bulk XRD pattern, the main aligned direction is <002>, which corresponds to the easy magnetic alignment direction and the shortest c-axis in the crystal structure of Figure 8b.However, there is also a secondary alignment direction, <202>, rotated 45 • from the main direction.To further investigate the orientation of each grain in the bulks, EBSD images were taken in the oriented direction, as shown in Figure 8c,d.The measurement surface normal was perpendicular to the oriented direction of the AP-35 bulk.The mean band contrast is 120.26.During the measurement, only the SmFe 10 V 2 structure was loaded for indexing.Therefore, the black areas represent secondary phases and voids in the image.Most of the 1-12 grains appear red or in similar colors, indicating that the majority of the grains were aligned in a similar direction of <001>.As shown in Figure 8e, the polar figures in three different directions were calculated based on the EBSD image, confirming that the main orientation direction is parallel to <001>.pendicular to the oriented direction of the AP-35 bulk.The mean band contrast is 120.26.During the measurement, only the SmFe10V2 structure was loaded for indexing.Therefore, the black areas represent secondary phases and voids in the image.Most of the 1-12 grains appear red or in similar colors, indicating that the majority of the grains were aligned in a similar direction of <001>.As shown in Figure 8e, the polar figures in three different directions were calculated based on the EBSD image, confirming that the main orientation direction is parallel to <001>.Some grains were oriented in similar directions due to insufficient alignment, corroborating the XRD pattern results.Although we successfully fabricated anisotropic bulks with high H c , there are some imperfections.The main issue is the defective squareness of the hysteresis loops, which held back the further improvement of (BH) max .Several factors contribute to these phenomena.One issue is that the external magnetic field was not strong enough to align all the powders in the same direction, as it needed to overcome the significant repulsive forces between the powders.The presence of a secondary alignment direction, detected in the XRD patterns in Figure 8a, proved this problem.Another possible reason is that not all powders were single-grain particles.Although the hydrogen decrepitation process weakened the grain boundaries, some small powders with multiple grains remained during the short-time ball milling process.These multi-grain powders compromised the level of orientation during the pressing and heat treatment processes.According to the FE-SEM images in Figure 4, the Sm-rich GBP was detected in the heat-treated bulks.However, it was not uniformly distributed among all grains.Some 1-12 grains lacked the white GBP due to inadequate
Figure 1 .
Figure 1.SEM images of the H2 decrepitated powder (0 h) and the ball-milled powders with di ent ball milling times from 3 to 50 h (BM3-BM50).The yellow arrows show the intergranular f ture made by the H2 decrepitation.The powder size distribution becomes finer with longer mil times, with maximum and minimum sizes varying severalfold within the same milling time.
Figure 1 . 4 Figure 1 .
Figure 1.SEM images of the H 2 decrepitated powder (0 h) and the ball-milled powders with different ball milling times from 3 to 50 h (BM3-BM50).The yellow arrows show the intergranular fracture made by the H 2 decrepitation.The powder size distribution becomes finer with longer milling times, with maximum and minimum sizes varying severalfold within the same milling time.
Figure 2 .
Figure 2. Average size (a), hysteresis loops (b,c), and the tendency of M r /M s and H c (d) of the H 2 decrepitated powder (0 h) and the ball-milled powders with different ball milling times, from 3 to 50 h (BM3-BM50).
Figure 3 .
Figure 3. XRD patterns (a) and phase fractions (b) of the heat-treated bulks BM3-HT-BM50-HT.The fractions of α-(Fe 4 V), SmO, and 1-12 phases were summarized from the refinement of XRD patterns calculated by the FullProf program.
Figure 5 .
Figure 5. Demagnetization-corrected hysteresis loops (a) and the tendency of Ms and Hc (b) of the heat-treated bulks (BM3-HT-BM50-HT) depending on the ball milling time.Ms and Hc were calculated by the hysteresis loops.
Figure 5 .
Figure 5. Demagnetization-corrected hysteresis loops (a) and the tendency of M s and H c (b) of the heat-treated bulks (BM3-HT-BM50-HT) depending on the ball milling time.M s and H c were calculated by the hysteresis loops.
Nanomaterials 2024 , 12 Figure 6 .
Figure 6.Demagnetization-corrected hysteresis loops (a), values of Mr/Ms, and (BH)max (b) of the anisotropic bulks depending on the pressure (AP-04-AP-40).The value of Mr/Ms represents the orientation level.The values of Ms, Mr, and (BH)max are shown in TableS1in the Supplementary File.
Figure 7 .
Figure 7. BSE-SEM images of the anisotropic heat-treated bulks (AP-04-AP-40) to check the density and phase influenced by the pressure.The black part represents the void.
Figure 6 .
Figure 6.Demagnetization-corrected hysteresis loops (a), values of M r /M s , and (BH) max (b) of the anisotropic bulks depending on the pressure (AP-04-AP-40).The value of M r /M s represents the orientation level.The values of M s , M r , and (BH) max are shown in TableS1in the Supplementary File.
Figure 6 .
Figure 6.Demagnetization-corrected hysteresis loops (a), values of Mr/Ms, and (BH)max (b) of the anisotropic bulks depending on the pressure (AP-04-AP-40).The value of Mr/Ms represents the orientation level.The values of Ms, Mr, and (BH)max are shown in TableS1in the Supplementary File.
Figure 7 .
Figure 7. BSE-SEM images of the anisotropic heat-treated bulks (AP-04-AP-40) to check the density and phase influenced by the pressure.The black part represents the void.
Figure 7 .
Figure 7. BSE-SEM images of the anisotropic heat-treated bulks (AP-04-AP-40) to check the density and phase influenced by the pressure.The black part represents the void.
oriented in similar directions due to insufficient alignment, corroborating the XRD pattern results.
Figure 8 .
Figure 8.(a) XRD patterns of the sample AP-35, including bulk and ground powders; (b) crystal structure of the composition of SmFe10V2; (c) band contrast map; (d) inverse pole figure (IPF) map in the AP-35 sample's surface normal (ND), in which the measuring face is perpendicular to the aligned direction; (e) polar figures of three different directions, calculated based on the EBSD image in (d).
Figure 8 .
Figure 8.(a) XRD patterns of the sample AP-35, including bulk and ground powders; (b) crystal structure of the composition of SmFe 10 V 2 ; (c) band contrast map; (d) inverse pole figure (IPF) map in the AP-35 sample's surface normal (ND), in which the measuring face is perpendicular to the aligned direction; (e) polar figures of three different directions, calculated based on the EBSD image in (d).
Table 1 .
Samples depending on the ball milling times (Group 1) and pressure (Group 2). | 7,005.6 | 2024-08-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Interaction of the Cyprus/Tethys slab with the mantle transition zone beneath Anatolia
MTZ is thickened to ~270 km as result of the cool slab in the MTZ influencing the 660-km discontinuity and includes an arrival at ~520-km depth likely from the top of a flat lying slab or a discontinuity related to a solid-solid phase transition in the olivine component of the mantle. We find evidence for low-velocity zones both above and below the 410-km discontinuity and above the 660-km discontinuity. The low velocity zones around the 410-km discontinuity might be the result of hydration of the MTZ from the slab and upward convection of MTZ material into the upper mantle. The origin of the low velocity zone around the 660-km discontinuity is less clear and could be related to sedimentation of subducted mid-ocean ridge basalts. The small footprint of the seismic array provides accurate information on the structure of the MTZ in an area influenced by subduction and shows small-scale changes in MTZ structure that might be lost in studies covering larger areas with sparser sampling.
Summary
The interaction of subducted oceanic lithosphere with the discontinuities of the mantle transition zone (MTZ) provides insight into the composition and temperature of the subducted slab as well as potential melting of the slab or the surrounding mantle and loss of volatiles from the slab.Detailed mapping of the structure of the MTZ will help to better understand how slabs transport material and volatiles into the mantle and how phase transitions affect the slab dynamics.Here we use a dense network of seismic stations in northern Anatolia to image the structure of the MTZ discontinuities in detail using P-wave receiver functions.With a station spacing of about 7 km and a surface footprint of ~35 km by ~70 km, analysing receiver functions calculated from teleseismic earthquakes that occurred during an ~18 month deployment produced clear images of where the mantle transition zone interacts with the Tethys/Cyprus slabs that either lie flat on the 660-km discontinuity or pass into the lower mantle.We observe an undulating 660-km discontinuity depressed by up to 30 km and a slightly depressed (1 -2 km) 410-km discontinuity, apparently undisturbed by the slab.The Introduction Subducted slabs are a major pathway for oceanic lithosphere, continental sediments, and volatiles to be transported into the lower mantle.Constraining this process and the interaction of the slab with the mantle at different depths is essential for our understanding of the flux and storage of elements such as carbon and water into the lower mantle and has an impact on life and the long-term habitability of Earth (Dasgupta and Hirschmann, 2010;Schmandt et al., 2014).The upper mantle beneath the eastern Mediterranean has long been influenced by interactions with subducted material due to the closure of the Tethys ocean and the on-going subduction at the Hellenic and Cyprus trenches (Cavazza et al., 2004;Faccenna et al., 2006;Jolivet et al., 2013;Robertson and Dixon, 1984;Stampfli, 2000).It is therefore an ideal location to study the interaction of subducted material with the upper mantle.
The mantle transition zone (MTZ) is bounded by two global seismic discontinuities located at depths of approximately 410 km and 660 km.These sharp increases in seismic velocity are typically attributed to solid-solid phase transitions in the olivine component of the mantle.The 410-km discontinuity (herewith "the 410") marks the transition from -olivine to wadsleyite, whilst the 660 km discontinuity (herewith "the Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 660") is due to a transition from ringwoodite to bridgmanite and magnesiowüstite (Frost, 2008).The depths of the 410 and 660 transitions are dependent on pressure and temperature (e.g.Helffrich, 2000), allowing MTZ thickness to be used to infer the thermal state of the mantle (Flanagan and Shearer, 1998;Schmerr and Garnero, 2006;Shearer and Masters, 1992).The 410 transition occurs at lower pressures (and hence at shallower depths) in the presence of lower temperatures, while the opposite Clapeyron slope of the 660 means it will occur deeper in regions of lower temperature (e.g.Helffrich, 2000).Increased temperatures will conversely result in a deeper 410, and a shallower 660, resulting in a thinned MTZ.Theoretical and experimental studies on the petrology of upper mantle material have also shown that the composition and water content of the mantle strongly influence the structure and depth of these discontinuities (Bolfan-Casanova et al., 2006).
An additional discontinuity at a depth of about 520 km (the 520) has also been observed at several locations (Shearer, 1990) and might be related to the wadsleyite to ringwoodite phase transition (Sinogeikin, 2003) or the formation of CaSiO 3 bridgmanite from garnet (Saikia et al., 2008).The 520 has been found to occur over a diffuse depth range and therefore might not be sharp enough to observe seismically and observations remain controversial (Bock, 1994).The 520 might be observable in subduction zones (Gilbert et al., 2001) where the wadsleyite-ringwoodite phase transition may occur over a small pressure interval due to variations in olivine content (Gu et al., 1998), hydrated mantle or slab material (Inoue et al., 1998).
Mineral-physical studies have shown that water content influences the properties of the olivine-wadsleyite phase transition (Ohtani, 2005;Smyth and Frost, 2002).Water is preferentially incorporated into wadsleyite rather than olivine, so the transition zone has a larger water storage capability than the upper or lower mantle.This could lead to dehydration melting in material moving upwards through the 410, or sinking through the 660 (Bercovici and Karato, 2003;Revenaugh and Sipkin, 1994;Schmandt et al., 2014).It is possible that the related partial melting could be observed as zones of low seismic velocity bounding the transition zone discontinuities (Revenaugh and Sipkin, 1994).
The eastern Mediterranean has a long and complex tectonic history currently characterized by differential plate motions between Arabia/Eurasia and by active subduction at the Hellenic and Cyprus trenches (Pichon and Angelier, 1979;Pichon et al., 1981).It has been proposed that the material being subducted at these trenches is Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 some of the oldest oceanic lithosphere on the planet and might be a remnant of the closure of the Neo-and Paleo-Tethys oceans (Granot, 2016;Hafkenscheid et al., 2006).
Tomographic images of the upper mantle in the region show the location of the subducted material from the active trenches to the top of the lower mantle (Berk Biryol et al., 2011;Bijwaard et al., 1998;Fichtner et al., 2013aFichtner et al., , 2013b;;Goes et al., 1999;Paul et al., 2014;Piromallo and Morelli, 2003;Salaün et al., 2012;Zhu et al., 2015), with evidence for a long lasting tear in the slab at the Isparta angle influencing the dynamics of the region (Berk Biryol et al., 2011;Bijwaard et al., 1998;Jolivet et al., 2013Jolivet et al., , 2009)).Although the tomographic images show that the flow of the eastern Mediterranean slabs into the lower mantle could be impeded by the 660 beneath Anatolia (Berk Biryol et al., 2011), the Tethys slab has been detected in the lower mantle beneath India (van der Meer et al., 2009;Van der Voo et al., 1999).It therefore remains unclear exactly how the slabs interact with MTZ discontinuities beneath Anatolia and the location where it penetrates into the lower mantle.
This study utilises P-wave receiver functions (e.g.Langston, 1979;Ammon, 1991) from a dense network of seismometers deployed in north-western Turkey to image the MTZ discontinuities beneath Anatolia (DANA, 2012).Our results show that the MTZ is thickened to ~270 km and contains a strong 520 km discontinuity, with evidence for multiple low velocity layers around the discontinuities.This provides conclusive evidence for the presence of the slab and indicates a complex interplay of processes as the slab interacts with the MTZ beneath Anatolia.
Data and Analysis
A temporary network (Fig. 1a) of 73 medium-and broad-band seismometers was deployed between May 2012 to October 2013 across the North Anatolian Fault zone in the rupture zone of the 1999 Izmit earthquake (DANA, 2012).The stations were deployed on a semi-regular rectangular grid of 6 lines East-West and 11 rows North-South.The nominal station spacing of the regular grid is 7 km.In addition to the main grid, a further 7 stations were deployed in a half circle towards the east, with a larger station spacing.
We use instrument-response deconvolved P-waveforms from 160 high-quality teleseismic events (Fig. 1b) to compute P-wave receiver functions (PRF), isolating Pto-S wave conversions from seismic discontinuities beneath the array.The waveform Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 data were initially filtered using a second-order Butterworth band-pass filter between 0.04 Hz and 3.0 Hz.We used an iterative time domain deconvolution method (Ligorría and Ammon, 1999) with a Gaussian pulse width of 0.625 to deconvolve the vertical component P waveform from both the radial and transverse components to isolate P-to-S wave conversions from the MTZ discontinuities.This approach leads to a dominant receiver function frequency of ~0.3 Hz, which is the ultimate limit on the spatial resolution of this study.Whilst the nature of our array (station spacing of ~7 km) provides us with dense sampling within the MTZ (Fig. 1b), our horizontal resolution is limited by the size of the Fresnel zone radii within the MTZ (approximately 60-80 km).Visual inspection led to 2346 high-quality receiver functions sampling an area of ~3° by 3° where slabs from the Mediterranean subduction zones (Hellenic and Cyprus slabs) have been imaged within the MTZ (Berk Biryol et al., 2011;Fig. 1b).This dataset was further trimmed to 1505 of the highest quality PRFs following an automated signal-to-noise ratio procedure (Cornwell et al., 2011;Hetényi et al., 2009).The final dataset of PRFs is dominated by ray paths that sample the MTZ to the north and east of the array (Fig. 1b).This ray geometry provides sampling in the region where the Hellenic and Cyprus slabs interact with the 660 discontinuity, while not sampling the transition of the slab through the 410 to the south of the region sampled by our dataset.Figures for the full dataset are shown in the supplemental material with images produced with the quality selected dataset shown in the main part of the manuscript.
PRFs were then migrated using three different approaches to firstly characterise the broad features and then examine their variation in space and depth.The first method is a 1D time migration (Stoffa et al., 1981) (Fig. 2) without corrections for crust and upper mantle velocity variations (Thompson et al., 2011) assuming the 1D Earth model ak135 (Kennett et al., 1995).We then perform a 2-D common conversion point (CCP) migration (Sheehan et al., 2000) in depth again using the ak135 velocity model (Fig. 3).This migration collapses all the data from our small conversion point footprint into a 2D N-S profile, with a minimum of 10 conversion points in an individual bin.Finally, we perform a 3D CCP migration (Fig. 4) through the EU60 velocity model of Zhu et al. [2015] by binning into 15 x 15 x 2 km voxels and then accounting for the Fresnel zone size by applying a Gaussian smoothing operator over Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 5 (i.e.75 km) neighbouring bins in the horizontal directions and 3 (i.e. 6 km) in the vertical direction (e.g.Hetényi et al., 2009).
Results
The PRF results clearly show P-to-S conversions from the upper mantle discontinuities.Figure 2a shows stacked PRFs as a function of slowness.An Nth-root (N=4) slant stack for a reference slowness of 5.59 s/deg showing the observed seismic arrivals is also included in Fig. 2b.The P-to-S conversions from the 410 and 660 are clearly visible arriving at ~45 s and ~65 s, slightly later (~2-3 s) than predicted by ak135.This could be due to the low S-wave velocities in the upper mantle beneath Anatolia (Fichtner et al., 2013a;Salaün et al., 2012;Zhu et al., 2015), but our dataset cannot rule out the possibility that both the 410 and 660 are located slightly deeper than average in this location.Notably, the traveltime anomalies are consistent between the 410 and 660 arrival times, which suggests that the majority of this travel time residual can be accounted for through velocity variations in the upper mantle above the 410 along the S-leg of the P-to-S conversion.Another positive arrival is located at times between 50 and 60 s, between the 410 and 660.The arrival time of this arrival varies as a function of slowness, arriving earlier at low slowness, and later at slownesses of greater than 6.5 s/deg.This arrival is most likely a P-to-S conversion from a discontinuity at approximately 520 km depth, potentially related to the wadsleyite to ringwoodite transition.We also observe a lower amplitude positive arrival at 75-85 s, which corresponds to a lower mantle depth of ~800 km (Figs. 2 and 4).In addition to these positive arrivals, there are several negative arrivals present in the PRFs that suggest velocity decreases with depth (i.e. a low velocity layer).These negative arrivals are particularly prominent at ~40 s and ~50 s, bounding the 410 discontinuity.A further localised negative arrival can be seen above the 660 at ~65 s, most evident at a slowness of larger than +6.5 s/deg.The moveout characteristics of these arrivals indicate that they are not multiples from shallower discontinuities (Fig. 2b).
The results of the 2D and 3D CCP depth migrations are shown in Figs. 3 and 4, overlain onto the EU60 tomography model (Zhu et al., 2015).Vertical and horizontal resolution of these migrated receiver function images are estimated to be +/-5 km and 30 km, respectively, based on the first Fresnel Zone at the relevant frequency.The 410 shows little depth variation along the 2D radial PRF profile whereas the 660 shows minor undulations on the order of 10 km (Fig 3).The maximum thickness of the transition zone in this transect is ~270 km: thicker than the global average of 242 km (Gu and Dziewonski, 2002).This thickening is the result of topography and an overall deepening of the 660, which is largest close to the subducted slab indicated by high seismic velocities in the tomographic model.A conversion from a depth of ~520 km is visible in the centre of the profile (Fig. 3), close to the velocity gradient interpreted as the top of the slab in the EU60 tomography model (Zhu et al., 2015).
The 3D CCP depth migration using the EU60 tomography model (Fig. 4) generally displays similar structures to those observed in the 2D migrated profile (Fig. 3).
Migration through the 3D tomography model appears to remove part of the traveltime anomaly visible in the 1D/2D migrations, giving further support to an upper mantle source of this traveltime anomaly.Some slight delay remains in the 3D migrated images, indicating that the tomography model may underpredict the magnitude of the upper-mantle velocity anomalies in the region.The 410 is enclosed by negative arrivals, which can also be seen above the 660.A positive arrival from ~520 km depth occurs at various locations, with a depth variation of +/-20 km.This arrival is strongest in the centre of the profiles coinciding with the top of the high velocity anomaly in the tomographic models.Negative arrivals around the 410 and 660 show clear lateral variations.The negative arrival above the 660 seems to be absent from the southern part of the profile, while the arrival from below the 410 is present almost everywhere.The arrival above the 410 appears more complex but is clearly visible in both the south and north of the profile.
Transverse PRFs were calculated using the deconvolution of the vertical from tangential components (see supplemental material) to examine evidence for radial seismic anisotropy or P-S conversions from dipping structures.Although some energy is present, the data distribution might prevent any conclusive interpretation of this energy and there seems to be little consistent structure throughout the volume in these transverse PRFs.We therefore conclude that there are no major and robust radially anisotropic signals present in the MTZ in this region.
Discussion
Our PRF results provide high-resolution images of the MTZ in the vicinity of a subducted slab.The tomographic models (shown in the background of Figures 2, 3 and 4) indicate that a slab (defined by the high velocity anomaly) is lying flat on the Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 660, although there is alternative evidence that it penetrates the lower mantle (Bijwaard et al., 1998;van der Meer et al., 2018;Zhu et al., 2015).Our densely sampled receiver function images allow new detailed insight into the physical processes occurring during the interaction of the slab with the MTZ (Fig. 5).
In general, the main features of the dataset are consistent between all migration approaches (Figs.2,3 and 4), though some lateral variations in the MTZ structure are apparent in the 3D dataset.The amplitude of the 410 conversion is quite variable in the 3D migration, and is not clearly visible in the far west and east of the image (Fig. 4).In addition, the low velocity zone below the 410, that is prevalent throughout the 1D and 2D migrations, is notable only in the northern parts of the 3D migration (Fig. 4).Only the most eastern 3D migration profile shows evidence of a low velocity zone beneath the 410 in the south.Conversely, the 660 discontinuity and its associated low velocity zone are spatially consistent between both the 2D and 3D migrations.The comparison between the two sets of migrated images allows us to identify the more robust features of our data set (e.g. the location of the 660 discontinuity), alongside those features that show spatial variability that is unclear in the 1D migration, such as the low velocity zone associated with the 410.
We find strong correlation between the structure of the MTZ discontinuities and tomographic images that map broad-scale velocity variations associated with the subducted slab.The 410 and 660 both appear slightly deeper than average in the 3D migrations.With our current dataset, it is difficult to determine whether the depression of the 410 and 660 can be attributed solely to a reduction in S-wave velocity above the 410, a true depression of both discontinuities, or a combination of these two.There is little evidence for significant topography of the 410 that exceeds more than a few kilometres.This is in agreement with the tomographic models of the region, which indicates that the slab impinges on the 410 to the south of our study area (Berk Biryol et al., 2011;Piromallo and Morelli, 2003;Zhu et al., 2015).
Therefore, it appears that the deepening of the 660 due to the presence of the subducted slab is the most likely cause of the increase in MTZ thickness to 270 km.
Experimental values for the Clapeyron slope for the transition from ringwoodite to bridgmanite and magnesiowüstite vary from -1.0 MPa/K (Fei, 2004;Katsura, 2003) to -3.0 MPa/K (Ito and Takahashi, 1989).If we attribute the observed thickening of the MTZ purely to the ringwoodite phase transition, this would indicate a temperature reduction in the range of 1000 K to 330 K. Temperatures at the higher end of this Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 range seem unrealistic (Peacock, 1996;Stern, 2002).However, recent studies suggest that a disassociation of ringwoodite into akimotoite and periclase might alter the Clapeyron slope of the 660 to values of -4 to -6 MPa/K (Hernández et al., 2015;Yu et al., 2011).This would reduce the required temperature difference between the (most likely cold) slab and ambient mantle at a depth of 660 km to explain the deepening of the 660 to a more realistic ~160 to 245 K (e.g.Cottaar & Deuss, 2016).P-to-S conversions from a depth of 520 km are observed in the centre of the profile and seem to be co-located with the top of the subducted slab.The high amplitude of this conversion would imply a strong and sharp discontinuity.The wadsleyite to ringwoodite phase transition is thought to take place over a wide depth range of 25 to 60 km (Akaogi et al., 1989) which would not be detectable in the frequency range of our data.The depth interval for the wadsleyite to ringwoodite transition has been found to be dependent on temperature (Xu et al., 2008), iron and water content (Mrosko et al., 2015).The co-location of the subducted slab and the arrivals from about 520 km depth (Fig. 3 and 4) makes it difficult to differentiate the source of these P-to-S conversions from a discontinuity formed due to a solid-solid phase transition or a compositional effect due to the top of the slab or the subducted oceanic Moho (Sinogeikin et al., 2003).The detection of an interface at this depth supports the existence of the slab in the transition zone in both scenarios, either by sharpening the phase transition through a temperature or water related effect; or by representing a direct detection of a compositional interface such as the top of the slab or the subducted Moho.The detection of a slab related interface (paleo-surface, paleo-Moho) seems to be a more suitable explanation.
Negative P-S converted energy that surrounds the 410 and occurs above the 660 is a prominent feature of this dataset and most likely indicates the presence of low velocity zones.Several studies have detected a low velocity zone above the 410 (Revenaugh and Meyer, 1997;Song et al., 2004;Tauzin et al., 2010) which has been interpreted as partial melting due to dehydration of hydrated material convecting upwards through the 410 (Bercovici and Karato, 2003).We observe a similar low velocity zone throughout our study region, although there is significant depth variation, most prominently a deepening from south to north.An intriguing feature of our PRFs is a negative arrival below the 410, which is interpreted as a low velocity zone at the top of the MTZ.Hydrated wadsleyite has been found to be more buoyant than dry wadsleyite (Karato, 2006), meaning hydrated material could rise to the top of Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 the MTZ.Hydrated wadsleyite "underplating" the 410, as suggested by Schmerr and Garnero [2007] beneath South America, is a candidate for the origin of this low velocity zone.The 410 is observed throughout the profile even in the presence of this deeper negative arrival so the low velocity hydrated wadsleyite does not obscure the conversion from the 410 in this location (Schmerr and Garnero, 2007).The hydration of transition zone wadsleyite may occur when the slab enters the transition zone and contains hydrous mineral phases and sediments (Kuritani et al., 2011).The detection of the low velocity layer beneath the 410 along the whole profile could be a remnant of this hydration process as the rollback of the Hellenic arc (currently 25-30 mm/yr towards the south west) (McClusky et al., 2000) would have placed the slab further to the north-east in the past, perhaps within our study region.Additionally, the 410 has a lower amplitude in the centre of the profile (Fig. 3, 4), in an area that coincides with a low velocity zone in the tomographic model (Zhu et al., 2015).This low velocity zone, along with the decreased amplitude of the 410 transition could also indicate an increased level of hydration in this area (Helffrich and Wood, 1996).However, Frost & Dolejš (2007) suggest that this effect can only occur where temperatures are significantly below ambient mantle and where water contents are at or approaching saturation.The subducted slab and the long subduction history in the region might be able to provide these necessary conditions.
The PRFs migrations also show a negative conversion above the 660.Dehydration melting, due to the larger water capacity of transition zone minerals compared to the lower mantle (Hirschmann, 2006;Pearson et al., 2014;Schmandt et al., 2014), would create a low velocity zone below the 660 in regions of down-welling.In contrast, we observe low velocities above the 660.Although the phase transition in the olivine system (from ringwoodite to bridgmanite and magnesiowüstite) can explain the large scale structure of the 660, other phase transitions from garnet to calcium-rich bridgmanite have been detected in a similar pressure and temperature range (Vacher et al., 1998).This could lead to a more complicated structure of the 660.Such complexity has been inferred from seismic observations in other subduction zones around the planet (Simmons and Gurrola, 2000;Thomas and Billen, 2009).In contrast to these studies, our observations of the 660 beneath Anatolia do not show any potential splitting in the P-to-S conversions, but instead a negative velocity layer, which cannot be explained by the proposed phase transitions.Shen et al. [2008] and Shen and Blum [2003] attribute a low velocity zone at these depths to the Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018 sedimentation of former subducted oceanic crust onto the 660, forming a layer dominated by majorite garnet.Similar proposals have been made for structures beneath western North America by Tauzin et al. [2013].This mechanism requires that the crustal part of the slab is stripped from the mantle component due to differential buoyancy (Karato, 1997), and could explain the presence of a low velocity anomaly above the 660.However, seismic evidence shows that some crustal material enters the lower mantle (Bentham et al., 2017;Kaneshima, 2016;Rost et al., 2008).Here, we observe a low velocity zone over the area of the profile where the slab seems to be lying flat on the 660.It remains unclear whether the slab is potentially overlying an older layer that contains crustal material, or if the velocity structure of the harzburgitic slab can be responsible for the low velocity anomaly atop the 660.A further explanation for low seismic velocities within the subducted slab in the lower MTZ is the presence of dense hydrous magnesium silicate phases.Low seismic velocities were observed by Brudzinski and Chen (2003) within the Tonga slab at mid to lower MTZ depths, and while this was interpreted as being associated with a metastable olivine wedge, more recent experimental work has postulated that the presence of superhydrous Phase B (ShyB) +/-Phase D could also explain the results (Rosa et al., 2015).These phases are only stable along cold slab geotherms and have been previously linked to rapid subduction of old oceanic lithosphere along the western Pacific.The material being subducted in the Cyprus trench is believed to be comprised of ~95Ma back-arc material formed during slab roll-back in the Late Cretaceous (Maffione et al., 2017;van der Meer et al., 2018).In addition to this, the age of eastern Mediterranean oceanic crust has been suggested to be as old as 340Ma, potentially making it a remnant of the Neotethys Ocean (Granot, 2016).These ages would likely be sufficient to produce the thick and cold oceanic lithosphere required to stabilise ShyB within the subducted slab at MTZ depths (e.g.Litasov & Ohtani, 2010), making this a viable hypothesis for the observed low velocity layer within the Cyprus Slab.
Summary and Conclusions
This work studied a region of the upper mantle beneath north-western Anatolia, where the Cyprus slab passes through the MTZ.The dense imaging of the transition zone with P-to-S conversions from receiver functions allows accurate observations of the interaction between the MTZ and the subducted slab.The "410" and "660" are found to be slightly deeper than the global average, though this is potentially due to low velocities in the upper mantle (Fichtner et al., 2013a;Zhu et al., 2015).The cold thermal anomaly of the subducted slab appears to deepen the 660, resulting in a mantle transition zone thickness of approximately 270 km.We estimate that this depression of the 660 translates to a negative temperature anomaly of 160 to 240 K at 660 km depth compared to ambient mantle.The 410 shows little topography and we thus conclude that the slab likely passes through the 410 to the south of our study area.The 410 is bounded by strong negative arrivals above and below the main conversion.A low velocity zone above the 410 might be related to melting of hydrated wadsleyite flowing from the transition zone and transitioning to dry olivine (Bercovici and Karato, 2003), while the low velocity zone below the 410 can be explained by buoyant hydrous wadsleyite rising to the top of the transition zone (Karato, 2006;Schmerr and Garnero, 2007).A conversion observed at a depth of ~520 km could be due to either a sharpening of the wadsleyite to ringwoodite phase transition due to the presence of the cold slab (Xu et al., 2008), the presence of water extracted from the slab (Kuritani et al., 2011), or simply a compositional signature from the subducted Moho or the top of the crust.A low velocity zone directly above the 660 might be related to mid-ocean ridge basalt that has been removed from the top of subducted slabs in the transition zone (Thomas and Billen, 2009), or could indicate the presence of dense hydrous phases within the subducted slab (e.g.ShyB +/-Phase D).The dense nature of this P-receiver function study allows the detection and characterization of a wide variety of features in the MTZ beneath Anatolia.The results suggest a complex suite of processes, involving both thermal and chemical variations, occurs within and around the MTZ during the transit of a subducted slab into the lower mantle.(Zhu et al., 2015).The location of the slab is indicated by high seismic velocities.The positive conversions from the 410 and 660 are observable across the profile and there is evidence for a conversion from a depth of about 520 km.The PRFs show evidence for 3 low velocity layers (LVL1, LVL2, LVL3) marked by strong negative arrivals.
The depth migrated section of the full dataset is shown in Suppl.Figure 1 41.37 ; and longitudes of 30.63 , 30.90 , and 31.17
Figure 1
Figure 1: a) DANA station locations.Stations of the regular grid are shown as black triangles and include 3 permanent stations of the KOERI national seismic network (red triangles).Yellow triangles indicate the eastern girdle of stations with larger station spacing.Background shows elevations from the Shuttle Radar Topography mission (Farr et al., 2007).b) Receiver function piercing points at 410 km (blue) and 660 km depth (red).Station locations indicated by small triangles.Slab depths from tomographic models are indicated by black lines and are from Berk Biryol et al., [2011].
Figure 2
Figure 2: a) Slowness section for the DANA network using the full dataset, highlighting the arrivals from the 410-km and 660-km discontinuities and waveform variations from velocity variations.Predicted traveltimes for conversions fromvarying depths (dashed lines) have been calculated through the 1D Earth model ak135(Kennett et al., 1995).Also visible are conversions from an apparent depth of 520 km in some regions of the study area.Dashed lines are 2 confidence limits.b) Slant stack of receiver functions with a reference slowness of 5.59 s/deg.We observe strong arrivals from the 410 and 660 with strong low velocity zones around the 410 and a weaker conversion from 520 km depth.A diffuse arrival from depths of about 800 km can also be observed.The 410 arrival shows strong negative arrivals from shallower and larger depths.Similarly, a lateral varying negative arrival can be observed for the 660.
Figure 3 :
Figure 3: Common conversion point depth migrated receiver functions of the qualitycontrolled dataset projected onto a single NS profile using the 1D ak135 velocity model.Background shows P-wave velocity variation of tomography model EU60 and shows only subtle differences.Only bins containing greater than 10 traces were included in the 2D migration.Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018
Figure 4 :
Figure 4: 3D common-conversion point (CCP) migration of the 1505 highest quality receiver functions using the EU60 regional tomography model [background colour fill, Zhu et al., 2015].Color scale for tomography shown in Fig. 3. Slices through the 3D model are shown in E-W and N-S direction (at latitudes of 40.82 , 41.09 and ). PRFs are shown as lines with positive arrivals filled in black and negative arrivals grey.Only bins containing more than 5 receiver functions are included, with a mean hit count of ~15 and maximum of 89 in the center of the study region.Downloaded from https://academic.oup.com/gji/advance-article-abstract/doi/10.1093/gji/ggy514/5232311 by Cardiff University user on 12 December 2018
Figure 5 :
Figure 5: Interpretative sketch of the detected structure.The 410 shows little depth variation while we observe a depressed 660 due to the cooling effect of the slab.A P-S conversion from 520 km depth can be seen which could be the top of the slab or a sharpened wadsleyite to ringwoodite transition.The 410 is surrounded by low velocity zones which are related to hydration from the subducting slab.The origin of the low velocity zone above the 660 is unknown but could be related to mid-ocean ridge basalt material. | 7,476 | 2017-12-01T00:00:00.000 | [
"Geology"
] |
Investigation of ferroptosis-associated molecular subtypes and immunological characteristics in lupus nephritis based on artificial neural network learning
Background Lupus nephritis (LN) is a severe complication of systemic lupus erythematosus (SLE) with poor treatment outcomes. The role and underlying mechanisms of ferroptosis in LN remain largely unknown. We aimed to explore ferroptosis-related molecular subtypes and assess their prognostic value in LN patients. Methods Molecular subtypes were classified on the basis of differentially expressed ferroptosis-related genes (FRGs) via the Consensus ClusterPlus package. The enriched functions and pathways, immune infiltrating levels, immune scores, and immune checkpoints were compared between the subgroups. A scoring algorithm based on the subtype-specific feature genes identified by artificial neural network machine learning, referred to as the NeuraLN, was established, and its immunological features, clinical value, and predictive value were evaluated in patients with LN. Finally, immunohistochemical analysis was performed to validate the expression and role of feature genes in glomerular tissues from LN patients and controls. Results A total of 10 differentially expressed FRGs were identified, most of which showed significant correlation. Based on the 10 FRGs, LN patients were classified into two ferroptosis subtypes, which exhibited significant differences in immune cell abundances, immune scores, and immune checkpoint expression. A NeuraLN-related protective model was established based on nine subtype-specific genes, and it exhibited a robustly predictive value in LN. The nomogram and calibration curves demonstrated the clinical benefits of the protective model. The high-NeuraLN group was closely associated with immune activation. Clinical specimens demonstrated the alterations of ALB, BHMT, GAMT, GSTA1, and HAO2 were in accordance with bioinformatics analysis results, GSTA1 and BHMT were negatively correlated with the severity of LN. Conclusion The classification of ferroptosis subtypes and the establishment of a protective model may form a foundation for the personalized treatment of LN patients. Supplementary Information The online version contains supplementary material available at 10.1186/s13075-024-03356-z.
Background
SLE is a chronic autoimmune disease involving multiple organs, characterized by intolerance to autonomic antigens, lymphoid hyperplasia, the production of autologous polyclonal antibodies, immune complex disease, and various tissue inflammation [1,2].Lupus nephritis (LN) is a critical complication of SLE and a vital risk factor for mortality in SLE patients.It has been reported that approximately 10-20% of patients with LN will develop end-stage renal disease (ESRD) within 5 years of diagnosis [3,4].The current treatment strategy targeting lupus nephritis is mainly based on immunomodulation.However, lack of understanding of the molecular mechanisms underlying LN has hindered the application and development of specific targeted therapies for this progressive disease.Moreover, the complexity of the pathophysiology and genetic diversity result in a significant proportion of patients not responding to the current targeted therapies.Therefore, there is an urgency to explore novel molecular subtypes for early diagnosis and individualized therapy of LN patients.
Ferroptosis is a novel form of iron-catalyzed lipid peroxidation-induced cell death characterized by the disruption of the lipid repair system involving glutathione and GPX4 synthesis [5].With the continuous expansion and deepening of research, a growing number of genes and signaling pathways that regulate ferroptosis have been gradually discovered [6][7][8].Accumulating evidence suggests that ferroptosis is closely related to the onset and progression of diverse kidney diseases, including acute kidney injury (AKI) and chronic kidney disease (CKD) [9][10][11][12], revealing that targeting ferroptosis might be a hopeful therapeutic strategy for kidney diseases.On the basis of bioinformatics analysis, the latest study has determined several ferroptosis-related biomarkers as inhibitors or drivers during the progression of LN [13].Moreover, another study demonstrates that iron imbalance in the proximal tubules enables to the promotion the accumulation of lipid hydroperoxides, and these ironcatalyzed oxidants can subsequently activate protein-and autoantibody-induced inflammatory transcription factors, ultimately leading to the production of stromal cells and cytokine/chemokine, and enhanced immune cell infiltration levels [14].However, little is known about the molecular subtypes based on ferroptosis and the molecular mechanisms behind how ferroptosis causes kidney damage in LN patients.
The current study systematically evaluated the differentially expressed ferroptosis-related genes (FRGs) in LN and clarified the correlation between the FRG signature and the immune microenvironment of LN patients.Two ferroptosis subtypes were classified on the basis of the expression profiles of the 10 differentially expressed FRGs.Subsequently, we established a 9-gene protective model based on the artificial neural network machine learning, aiming to provide innovative ideas for precise diagnosis and individualized therapy of LN patients at the gene level.Finally, we performed immunohistochemical (IHC) analysis to validate the role of ferroptosis subtype-specific protective genes in patients with LN.
Data download and pre-processing
Two LN-related microarray datasets (GSE32591, GSE127797) were downloaded from the Gene Expression Omnibus (GEO) website database using the R package "GEOquery" [15].The GSE32591 dataset (GPL14663 platform) contains a total of 93 samples (14 healthy and 32 LN glomeruli tissues vs. 15 healthy and 32 LN tubulointerstitium tissues).The GSE127797 dataset (GPL24299 platform) contains 42 LN glomeruli tissues and 46 LN tubulointerstitium tissues.Batch removal of the original glomerular and tubular gene expression profiles in these two datasets was performed using the ComBat function on the basis of R package of "sva" [16].Subsequently, a total of 14 healthy and 74 LN glomeruli tissues were utilized to conduct further analysis, and 15 healthy and 78 LN tubulointerstitium tissues were applied for verifying the repeatability of ferroptosis-related molecular subtypes.The analysis of differential expression was conducted using the R package of "limma" [17].The the screening criteria was |log2 (fold change)|> 1 and adjusted p-value < 0.05.
Unsupervised clustering for ferroptosis-related genes
Initially, 259 FRGs were downloaded from the FerrDb website database (http:// www.zhoun an.org/ ferrdb/).Afterward, 10 differentially expressed FRGs related to LN were selected based on the above screening criteria.On the basis of 10 FRGs expression profiles, 74 LN glomeruli tissues were classified into different subtypes via the unsupervised clustering method using the R package "ConsensusClusterPlus" [18].The optimal subtype number was identified using the cumulative distribution function (CDF), CDF delta area curves, and a value of consistent cluster score more than 0.9.The k-means algorithm with 1000 cycles were conducted to confirm the stability of the unsupervised clustering analysis.In addition, LN tubulointerstitium tissues were used for to validate the precision of the clustering.
Assessment of immune cell infiltration in LN
The CIBERSORT deconvolution algorithm, which is based on the "CIBERSORT" R package, was used to assess the relative proportions of 22 types of immune cells in each LN glomerulus tissue [19].The CIBERSORT algorithm could obtain the inverse fold product p-value for each glomerular sample by the Monte Carlo sampling method using the LM22 signature matrix.The sum of the estimated percentages of the 22 immune cell types in each sample was 100%, and a CIBERSORT p-value less than 0.05 was considered significant for immune cell fractions in the sample.
Evaluation of the correlation between genes and infiltrated immune cells
The Pearson correlation analysis was performed to assess the correlation between differentially expressed FRGs and immune cell properties using the R package of "psych".Correlation coefficients with a p-value less than 0.05 were considered to be significantly correlated.The results of correlation analysis were presented using the R package of "corrplot".
GSVA and GSEA analysis
GSVA enrichment analysis, a non-parametric unsupervised algorithm, was utilized to assess the variations of enriched gene sets between distinct ferroptosis subtypes using the "c2.cp.kegg.v7.4.symbols" and "c5.go.bp.v7.5.1.symbols" files downloaded from the MSigDB online database.Subsequently, the R package "limma" was applied for screening enriched pathways and biological functions via comparing GSVA scores between distinct ferroptosis subtypes.The p-value < 0.05 and |t value of GSVA score|> 2 were considered to be remarkably enriched.
GSEA is a calculation algorithm to clarify the distribution differences of a priori defined gene set between two groups.The R package of "clusterProfiler" was utilized to identify the significantly enriched pathways and biological functions between high and low NeuraLN group.the "c2.cp.kegg.v7.4.symbols" and "c5.go.bp.v7.5.1.symbols" files were selected as the reference gene lists.A p-value less than 0.5 was considered statistically significant.
Construction of a diagnostic model based on artificial neural network machine learning
The R package "Boruta" was applied for screening important variables associated with LN subtypes.Subsequently, a total of 74 LN glomeruli tissues were selected for constructing the neural network model using the R package "neuralnet" [20] based on the expression profiles of these important variables.The 74 LN glomeruli samples were randomly divided into a training set (70%, N = 53) and validation set (30%, N = 21).The number of hidden neuron layers were set as two-thirds of the number of the input layer.As a result, eight hidden layers were chosen as the best model parameter for developing a LN classification model based on predicted gene weight information.Subsequently, the constructed neural network model in the training set was verified in the validation set.The "ROSE" R package was used to demonstrate model classification performance and draw the ROC and Precision-Recall (P-R) curves.Finally, the weight values of important genes were utilized to calculate the disease classification score (NeuraLN): Σ i weight values i × Expression level of gene i .On the basis of the mean NeuraLN, the 74 LN glomeruli samples were classified into high and low score groups.
Construction and validation of a nomogram model
The nomogram model containing NeuraLN and immunescore information was built to assess the occurrence of LN subtypes using the R package of "rms".Each factor has a corresponding score, and the "total score" exhibits the aggregation of the scores of these factors.A calibration curve was applied for estimating the predictive performance of the nomogram model.
Immunohistochemical staining
Formalin-fixed, paraffin-embedded glomeruli tissues were collected from 13 control and 19 LN patients with LN diagnosed at the Fujian Provincial Hospital.This study was approved by the Fujian Provincial Hospital Ethics Committee.Briefly, samples were fixed in 4% buffered formalin for more than 24 h, embedded in paraffin and placed at room temperature until sectioning.Subsequently, the samples were cut into 2-3um slices and flattened on the warm water of the spreading machine.After being paraffinized with xylene.slices were hydrated using a streptavidin-biotin-peroxidase conjugate and were subjected to immunohistochemical analysis.To determine immunoreactivity, the slices were heating in 0.01 M citrate buffer for antigen repair.After washing, slices were incubated in PBS containing 10% normal goat serum to eliminate nonspecific staining and then incubated with the following primary antibody: rabbit anti-DPYS antibody (1:200, proteintech), rabbit anti-PAH antibody (1:200, proteintech), rabbit anti-HAO2 antibody (1:200, proteintech), rabbit anti-BHMT antibody (1:200, proteintech), rabbit anti-GAMT antibody (1:200, proteintech), rabbit anti-CUBN antibody (1:200, proteintech), rabbit anti-GSTA1 antibody (1:200, proteintech), rabbit anti-SLC27A2 antibody (1:200, proteintech), rabbit anti-ALB antibody (1:200, proteintech) at 4℃ overnight.After washing, these slices were incubated for 120 min at room temperature with HRP-labelled goat antirabbit secondary antibody (1:200, Thermo Fisher).Finally, the sections underwent counter-staining with Mayer's hematoxylin and were dehydrated and mounted for further analysis.The images of sections were acquired with a fluorescence microscope with 200 × magnification, and the fluorescence intensity was exhibited by the integrated option density (IOD).
Statistical analysis
The differences between the two groups were compared using the Wilcoxon test or t test.The correlation tests were performed using the Spearman analysis with the R package "psych".In comparisons between groups, a value of p less than 0.05 was considered to be remarkably significant.The R software (version: 4.1.0)was applied for data processing.
Identification of differentially expressed ferroptosis regulators and assessment of the immune cell infiltration in LN patients
The whole gene expression landscapes of two GEO datasets (GSE32591 and GSE127797) including 14 healthy (LD) and 74 LN glomeruli tissues, were downloaded from the GEO online database.A detailed flow chart of the current study is presented in Fig. 1.Samples from distinct platforms exhibited significantly different clustering before batch effect correction (Fig. 2A), whereas they clustered together after batch removal (Fig. 2B).To clarify the role of ferroptosis regulators in the progression of LN, we first obtained a total of 339 differentially expressed genes (DEGs) (241 upregulated and 98 downregulated genes) related to LN.When we combined these LN-related DEGs with 259 ferroptosis-related signatures, we eventually identified 10 differentially expressed ferroptosis regulators (Fig. 2C).Among them, the expression levels of NCF2, CD44, CYBB, GCH1, HMOX1, NNMT, and RRM2 genes were remarkably higher, whereas ALB, DUSP1, and TSC22D3 genes exhibited markedly lower expression levels in LN glomeruli tissues than those in LD (Fig. 2D,E).Subsequently, spearman's correlation analysis between these differentially expressed FRGs was utilized to elucidate whether ferroptosis functioned essentially in LN.Genes with correlation coefficients greater or less than 0.5 were displayed in the gene relationship network graph (Fig. 2F,G), indicating a fairly close association among these differentially expressed FRGs.These results demonstrated that the interactions among FRGs may play a critical role in the progression of LN.
To investigate the differences in immune microenvironment between the LN and non-LN glomeruli tissues, we analyzed the percentages of 22 infiltrated immune cells between LN patients and control individuals based on the CIBERSORT algorithm (Fig. 3A).The results revealed that LN patients displayed higher infiltration levels of naïve B cells, plasma cells, activated NK cells, monocytes, M2 macrophages, and resting dendritic cells, suggesting that the progression of LN was clearly accompanied by the alterations in immune response (Fig. 3B).We performed a correlation analysis to further understand the relationship between FRGs and significantly infiltrated immune cells in LN patients and discovered that CD8 + T cells, resting memory CD4 + T cells, Regulatory T cells (Tregs), resting NK cells, monocytes, and resting mast cells were significantly correlated with these 10 differentially expressed FRGs (Fig. 3C), implying that feroptosis regulators may act as key factors in regulating immune infiltration levels.
Identification of ferroptosis subtypes in LN
To illustrate the expression patterns related to ferroptosis in LN, we grouped the 74 LN glomeruli tissues using unsupervised clustering analysis based on the expression profiles of 10 differentially expressed FRGs.Relatively crisp boundaries were displayed in the consensus clustering matrix when the cluster numbers were set to two (k = 2) (Fig. 4A).In addition, the CDF curves exhibited the relative lower range ability at consensus index 0.2-0.6 when k = 2 (Fig. 4B).The differences in the area under the CDF curves were presented between the two CDF curves (k and k-1) (Fig. 4C), Moreover, the consensus score for each subgroup was more than 0.9 when k = 2 (Fig. 4D).We eventually grouped 74 LN patients into two ferroptosis-related subtypes, including Subtype 1 (n = 26) and Subtype 2 (n = 48).The analysis of t-Distributed Stochastic Neighbor Embedding (tSNE) also exhibited the distinct clusters between these two subtypes (Fig. 4E).We next utilized the 78 LN tubulointerstitium tissues to verify the repeatability of the clustering.Consistently, two distinct subtypes were also clearly identified (Figure S1A-E), further demonstrating that there are two ferroptosis-related subtypes in LN patients.
Identification of biological functions of distinct ferroptosis subtypes
To clarify the functional differences in different ferroptosis subtypes, we performed GSVA analysis and found that metabolism-related signaling pathway, oxidative phosphorylation, and the TCA cycle were significantly upregulated in Subtype 1, while DNA regulation, chemokine signaling, cytokine receptor interaction, autoimmune diseases, various immune cells pathways such as T cell receptor, B cell receptor, Nod-like receptor, natural killer cells activation, and Toll-like receptor pathway were upregulated in subtype 2(Fig.5A).Additionally, functional enrichment results indicated that Subtype 1 was remarkably associated with amino acid biosynthesis, fatty acid beta-oxidation, NADH oxidation, TCA cycle, mitochondrial protein processing, and metabolic processes.Immune-related biological functions such as B cell differentiation, neutrophil-mediated immunity, B and T cell activation, gamma delta T cell activation, and disease-related immunity, on the other hand, were significantly enriched in subtype 2. Otherwise, Subtype 2 also enriched in inflammatory responses such as leukocyte aggregation, cytokine production, lymphocyte activation, and IL1 and IL6 production (Fig. 5B).These results revealed that ferroptosis Subtype 2 might be implicated in various immune responses.
Identification of immune infiltration and immune checkpoint characteristics between ferroptosis subtypes
To explore the differences in immune microenvironment between distinct ferroptosis subtypes, we first comprehensively assessed infiltrated immune cells using the CIBERSORT algorithm.The altered immune infiltration levels were found between ferroptosis Subtype 1 and Subtype 2 (Fig. 6A).Subtype 1 exhibited greater proportions of naïve B cells, CD8 + T cells, resting memory CD4 + T cells, and Tregs, whereas the abundance of memory B cells, M0 macrophages, and neutrophils were markedly higher in Subtype 2 (Fig. 6B).To further elucidate the levels of immune infiltration between the two subtypes, we calculated the immune score using the ESTIMATE algorithm.Consistently, Subtype 2 also exhibited a greater immune score (Fig. 6C), indicating that ferroptosis Subtype 2 had noticeably increased infiltration of immune cells.
Subsequently, we further evaluated the expression levels of classical immune genes and immune checkpoints in LN patients with distinct ferroptosis subtypes.We found the extraordinarily enhanced expression levels of immunosuppression, MHC, and immunostimulatory-related genes in Subtype 2 when compared with Subtype 1 (Figure S2A-C), indicating that ferroptosis Subtype 2 presented stronger immune responses than ferroptosis Subtype 1.Moreover, we also found that the expression of immune checkpoints had the ability to distinguish between ferroptosis subtypes.The expression levels of immune checkpoint inhibitor-related genes (ICOS, CTLA4, CD86, CD70, CD40, CD27, and PDCD1), for example, were significantly higher in ferroptosis subtype 2 than in ferroptosis Subtype 1 (Fig. 6D), indicating that Subtype 2 may be more effective in immunotherapy.
Construction and evaluation of a predictive model
To further validate the molecular subtypes based on FRGs, we first identified the subtype-specific DGEs by intersecting the DEGs related to LN with the subtype-related DEGs (Fig. 7A).We eventually acquired a total of 12 subtypespecific DEGs, all of which were significantly downregulated in Subtype 2 (Fig. 7B).Subsequently, we identified 9 important variables closely associated with subtypes using the Boruta feature selection algorithm as follows: ALB, BHMT, CUBN, DPYS, GAMT, GSTA1, HAO2, PAH, and SLC27A2 (Fig. 7C).Afterward, the training cohort was utilized to establish an artificial neural network model based on the expression profiles of these 9 important features.According to the output results of the model (Fig. 7D), the intact training was conducted in 2300 steps, and the weight values of the neural network model ranged from -1.62 to 1.08.The weight predictions were as follows: -1.04961 (ALB), − 0.24042 (CUBN), -0.57845 (HAO2), -0.33385 (GSTA1), 0.003948 (GATM), 0.146428 (BHMT), 1.077421 (PAH), 0.042221 (DPYS), and -1.61736 (SLC27A2) (Supplementary Table 1 and Figure S3).In addition, we assessed the classification performance of the neural network model in the validation cohort by calculating the areas under the ROC and Precision-Recall (P-R) curves, and the value was 0.8571 and 0.9019, respectively (Fig. 7E), suggesting that this model was capable of distinguishing the subtypes of LN.Finally, these 9 feature genes were applied for constructing predictive scores (NeuraLN) according to their weight values and expression levels.We also developed a nomogram model for predicting the risk of LN subtypes, and the results suggested that the nomogram had the ability to predict the classification of LN subtypes according to the NeuraLN and immunescore (Fig. 7F).The calibration curves used for the predictive probability demonstrated the accuracy of a nomogram model (Fig. 7G).The results of the heatmap revealed a significant difference in the expression levels of 9 model genes between the high and low-NeuraLN groups (Fig. 8A).Consistently, the high-NeuraLN group also exhibited a relatively higher immune score (Fig. 8B).These results indicated that the high-NeuraLN group may be closely associated with immunity and may have the ability to predict the ferroptosis subtypes in LN patients.In addition, the abundances of 22 kinds of infiltrated immune cells were further elucidated in the low and high NeuraLN groups.The proportions of infiltrated immune cells such as memory B cells and M0 macrophages were higher in the low NeuraLN group, while high NeuraLN group presented higher infiltration levels of naïve B cells and CD8 + T cells (Fig. 8C).Later, we assessed the correlation between these 9 model genes and significantly infiltrated immune cells and found these genes were markedly correlated with CD8 + T cells and M0 macrophages (Fig. 8D).
To further clarify the functional differences between different NeuraLN groups, we conducted GSEA analysis and found the high NeuraLN group was primarily related to immune-related signaling pathways and biological functions (Fig. 9A,B).The high NeuraLN group was found to be significantly enriched in the following signaling pathways:KEGG_B_CELL_RECEP-TOR_SIGNALING,KEGG_INTESTINAL_IMMUNE_NET-WORK_FOR_IGA_PRODUCTION,KEGG_NOD_LIKE_ RECEPTOR_SIGNALING_PATHWAY,KEGG_TOLL_ LIKE_RECEPTOR_SIGNALING_PATHWAY.We have noted these pathways as they are associated with various aspects of immune response and signal transduction.And the high NeuraLN group predominantly demonstrates the following biological functions:ACTIVATION_OF_IMMUNE_
External validation of clinical specimens
Subsequently, we further analyzed the expression of 9-gene signature between control and LN glomerular tissues.The clinical information of 13 control and 19 LN patients were presented in Table 1.The immunohistochemistry images of ALB, BHMT, CUBN, DPYS, GAMT, GSTA1, HAO2, PAH, SLC27A2 between control and LN glomerular tissues were shown in Fig. 10A.Among them, the glomerular tissues from the LN group exhibited an increased integrated option density (IOD) of CUBN, DPYS, PAH, and SLC27A2.However, the IOD of ALB, BHMT, GAMT, GSTA1, and HAO2 was markedly curtailed in LN tissues when compared with the control group (Fig. 10B).These five downregulated genes (ALB, BHMT, GAMT, GSTA1, and HAO2) were consistent with our above results.We therefore further evaluated the correlation between these genes and the clinical traits of LN patients.Interestingly, all of the genes exhibited a significant negative correlation with SLE scores (SLEDAI), suggesting the critical role of these five genes in alleviating the progression of SLE.In addition, BHMT was negatively correlated with creatinine (Cr) and ds-DNA, respectively, and GSTA1 was negatively correlated with 24 h urine protein quantification (24 h-UTP) (Fig. 10B), indicating that high expression of GSTA1 and BHMT may serve as protective factors for renal function.
Discussion
The diversity of pathophysiological and clinical manifestations leads to a poor prognosis in patients with LN.In addition, untimely diagnosis and treatment at the early stage also serve as critical factors contributing to the deterioration of LN.Therefore, it is urgent to screen more accurate molecular subtypes and establish appropriate diagnostic models for early identification and guidance of individualized treatment of LN.
Ferroptosis is a newly reported Fe 2+ -dependent cell death mode evidenced by lipid peroxide overload caused by intracellular oxidative stress [21].It has been reported that several major pathways are currently involved in the regulation of ferroptosis.First, inhibition of the Xc − system has been shown to disrupt GSH and GPX4 synthesis by inhibiting cysteine uptake, lowering the activity of antioxidant products and increasing the expression of lipid peroxides, eventually leading to ferroptosis [22,23].Additionally, signaling pathways related to ferritin metabolism, such as the ATG5-ATG7-NCOA4and p62-Keap1-NRF2 axis, are thought to act as critical mechanism in regulating intracellular Fe 2+ metabolisms and ROS formation, which are closely associated with ferroptosis [24,25].Moreover, phosphatidylethanolamine (PE) and polyunsaturated fatty acid (PUFA) generated from ACSL4 and p53-SAT1-ALOX15-dependent lipid pathways are another primary factors contributing to the activation of ferroptosis [26,27].A growing body of research demonstrates that ferroptosis has been implicated in multiple autoimmunity diseases, including SLE [28], multiple sclerosis [29] and cutaneous diseases [30].For example, lower expression of intracellular GSH and GPX levels are associated with the severity of SLE patients, whereas reversal of GSH deletion alleviates the progression of lupus in mouse models [31,32].Recent studies found the activation of ferroptosis is also considered as the basis of chronic kidney diseases, including LN [14].Bioinformatics analysis based on microarray datasets indicates that ferroptosisrelated genes were closely related to the onset of LN [13].However, the latest study only disclosed the relationship between ferroptosis-related genes and LN, the potential molecular mechanisms by which ferroptosis regulates LN progression still remain obscure.
In this study, we first systematically evaluated the expression profiles of FRGs between LN and control glomerular tissues and identified 10 differentially expressed FRGs in LN patients, most of which presented distinct interactions, indicating that ferroptosis may exert a critical role in the progression of LN.Immune infiltrating analysis suggested that the proportions of immune cells were markedly distinct between controland LN glomerular tissues as evidenced by a greater abundances of naïve B cells, plasma cells, activated NK cells, Monocytes, M2 macrophages and resting dendritic cells in patients with LN, which were consistent with the previous studies [33][34][35][36][37]. Correlation analysis revealed that CD8 + T cell, resting memory CD4 + T cells, Tregs, resting NK cells, Monocytes, and resting mast cells had the most apparent correlation with ferroptosis DEGs, indicating that the interaction of FRGs with immune cells may be the potential pathological mechanism leading to the onset and progression of LN.Subsequently, two distinct molecular subtypes of LN glomerular tissues were determined on the basis of the expression profiles of 10 differentially expressed FRGs.
Interestingly, LN tubulointerstitium tissues were also classified into two molecular subtypes, demonstrating the repeatability of the clustering.It has been widely recognized that the dysfunction of immune cells is a notable hallmark of LN [34,38].Nevertheless, whether the mechanisms underlying the poor prognosis in LN patients are closely associated with the differences in the immune cell ratios needs further elucidation.Therefore, we next evaluated immunological features between two distinct molecular subtypes and found Subtype 2 was primarily enriched in cytokine receptor interaction, immune-related pathways.Additionally, Subtype 2 exhibited significantly higher infiltration levels of memory B cells, M0 macrophages, and neutrophils, indicating that ferroptosis might be associated with the dysfunction of both innate immunity and adaptive immunity.Moreover, immune scores and immune checkpoints, including ICOS, CTLA4, CD86, CD70, CD40, CD27, and PDCD1 also exhibited higher levels in ferroptosis Subtype 2 compared to Subtype 1. Combined with these results, we can infer that ferroptosis Subtype 2 may efficiently activate immunity response and may benefit from immunotherapy.
Subsequently, we established a 9-gene protective model based on neural network machine learning for predicting the progression of LN, composed of ALB, BHMT, CUBN, DPYS, GAMT, GSTA1, HAO2, PAH, and SLC27A2.A scoring model (NeuraLN) exhibited a high AUC and P-R value in the validation set.More importantly, patients with Subtype 2 displayed higher NeuraLN when compared to patients with Subtype 1.In addition, higher immune infiltrating levels and immune scores were observed in high-NeuraLN patients.At the same time, correlation analysis revealed that NeuraLN was significantly correlated with memory B cells, CD8 + T cell, and M0 macrophages.These results demonstrated that the NeuraLN is a crucial indicator of progress in LN patients.
It is worth mentioning that we performed the clinical specimen validation and found five down-regulated genes, including ALB, BHMT, GAMT, GSTA1, and HAO2, which were consistent with the above bioinformatics analysis findings.Serum albumin (ALB) is a nutritional indicator that could largely reflect the living conditions of patients [39].The decreased level of ALB has been proven to be associated with the severity of kidney diseases [40,41].Another protective gene BHMT is a novel prognostic biomarker for various diseases.The BHMT-dependent methylation pathway, for example, has been shown to play a role in regulating oligodendrocyte maturation [42].Deficiency or inhibitation of BHMT may be the vital factor leading to poor prognosis of hepatocellular carcinoma [43].The creatine biosynthetic pathway is crucial for phosphate-related cellular energy production and storage, especially in tissues with high metabolic demands [44].GAMT is a key enzyme involving the endogenous pathway of creatine biosynthesis and is more expressed in the liver and kidney [45].Furthermore, glutathione transferase A1 (GSTA1) is a phase II conjugating enzyme that detoxifies electrophilic compounds such as carcinogens, therapeutics, environmental toxins, and oxidative stress products [46].Recent studies have demonstrated that GSTA1-mediated ROS and Ca 2+ signaling pathways are mainly implicated in the enhancement of aldosterone secretion, and low-activity of GSTA1 is associated with iron overload-induced kidney injury [47,48].Bioinformatics analysis suggests that increased HAO2 enable to promote lipid catabolism and prevent lipid accumulation, eventually delaying the progression of clear cell renal cell carcinoma [49].These results were consistent with the findings in our current study that the down-regulation of ALB, BHMT, GAMT, GSTA1, and HAO2 may be predictive of poor prognosis in LN patients.Except for ALB, other genes are rarely reported in LN.In the current study, we identified these ferroptosis subtype-specific signature genes would as being capable of providing novel insights into the prognosis of LN.Some shortcomings need to be emphasized in our study.Firstly, results based on existing databases would exhibit some deviation from reality due to confounding factors such as individual differences.Secondly, the clinical specimen for external validation is small, a larger and multi-center LN cohort need to be utilized for further research.Moreover, the underlying mechanisms of FRGs and ferroptosis-subtype related signature genes in LN require further exploration in more experiments.Though the results where connected to personalized medicine, a connection to current standard of care was not made.
Conclusions
In summary, we systematically evaluated the expression of FRGs in LN and revealed an innovative molecular classification related to ferroptosis.We established a 9-gene protective model on the basis of neural network machine learning, which would be capable of accurately predicting the progression of LN patients.Importantly, external validation of clinical specimens confirmed that ALB, BHMT, GAMT, GSTA1, and HAO2 were associated with the prognosis of LN patients.The protective model based on multiple signature genes would provide a novel strategy to predict the progression of LN.
Fig. 1
Fig.1The flow chart of this study
Fig. 2
Fig. 2 Identification of differential expressed ferroptosis regulators in LN patients.A, B The PCA plot exhibiting the expression profiles of GSE32591 and GSE127797 before (A) and after (B) correction of batch effect.C Venn diagram exhibiting 10 differentially expressed FRGs in LN patients.D Volcano plot depicting the mRNA expression levels of FRGs between healthy individuals and LN patients.E Heatmap exhibiting the differentially expressed FRGs between the aforementioned groups.F, G Correlation plot (F) and network diagram (G) of 10 differentially expressed FRGs.Positive correlations were marked in blue and negative correlations were marked in red.The size of the rectangle showing the specific value of the correlation coefficients
Fig. 3
Fig. 3 Immunological characteristics of normal subjects and LN patients and correlations between the characteristic FRGs and immune cells.A Stack chart exhibiting the relative proportions of 22 infiltrated immune cells from LD and LN samples.B Box plots exhibiting the alterations in infiltrated immune cells between LD and LN groups.C Heatmap exhibiting the correlations between characteristic FRGs and distinct immune cell compositions.*p < 0.05, **p < 0.01, ***p < 0.001.Positive correlations were marked in red and negative correlations were marked in blue.The size of the circle showing the specific value of the correlation coefficients
Fig. 4
Fig. 4 Characterization of molecular subtypes based on characteristic FRGs.A Consensus matrix in LN patients (k = 2).Both the rows and columns of the matrix represent samples.The consistency matrix ranges from 0 to 1, from white to dark blue.B Consensus CDF when k = 2 to 6. C Delta area under CDF curve.D Consensus clustering score exhibiting the subtype score when k = 2-6.E t-SNE exhibiting that LN patients were classified into two distinct ferroptosis subtypes
Fig. 5
Fig. 5 Identification of distinct biological functions and signaling pathways between subtypes.A, B GSVA exhibiting distinct biological processes (A) and signaling pathways (B) between subgroups
Fig. 6
Fig. 6 Association between ferroptosis subtypes and immunological features.A Stack chart exhibiting the abundances of 22 immune cell subpopulations from Subtype 1 and Subtype 2. B Box plots exhibiting the differences in the relative abundance of infiltrated immune cell types between the two ferroptosis subtypes.C Box plots exhibiting the differences in the immune score between the two ferroptosis subtypes.D Box plots exhibiting the mRNA expression of immune checkpoints between two ferroptosis subtypes
Fig. 7
Fig. 7 Construction of a protective model for prediction of LN patients.(A, B) Venn diagram (A) and heatmap (B) exhibiting 12 ferroptosis subtype-specific feature genes in LN patients.C A total of 9 characteristic genes were obtained using the using the Boruta feature selection algorithm.D The visualization of the neural network machine learning model.E Assessment of the classification performance of the neural network machine learning model in the validation cohort.F Representative nomogram based on NeuraLN and immune score for predicting LN progression.G Representative calibration curves for assessing the diagnostic accuracy of the nomogram
Fig. 8
Fig. 8 Association between NeuraLN scoring model and immune microenvironment.A Heatmap exhibiting the expression profiles of the 9 model genes in the validation cohort.B Box plots exhibiting the differences in immune score between the low-and high-NeuraLN group.C Box plots exhibiting the differences in the relative abundances of infiltrated immune cell types between the low-and high-NeuraLN group.D Heatmap exhibiting the correlations between model genes and distinct immune cell compositions.*p < 0.05, **p < 0.01, ***p < 0.001
Fig. 9
Fig. 9 Identification of distinct biological functions and signaling pathways between low-and high-NeuraLN group.A, B GSEA exhibiting distinct signaling pathways (A) and biological processes (B) between low-and high-NeuraLN group
Fig. 10
Fig. 10 External validation of the alterations in model genes based on clinical specimens.A Representative immunohistochemistry images of the 9 model genes including ALB, BHMT, CUBN, DPYS, GAMT, GSTA1, HAO2, PAH, and SLC27A2.B Violin plots exhibiting the quantitative results of 9 model genes expression between controland LN patients.C Correlation analysis between the clinical characteristics and the expression of ALB, BHMT, GAMT, GSTA1, and HAO2 in LN glomerular tissues
Table 1
Basic information of controland LN patients in clinical specimens | 7,578.2 | 2024-07-03T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
An evaluation of irreversible electroporation thresholds in human prostate cancer and potential correlations to physiological measurements
Irreversible electroporation (IRE) is an emerging cancer treatment that utilizes non-thermal electric pulses for tumor ablation. The pulses are delivered through minimally invasive needle electrodes inserted into the target tissue and lead to cell death through the creation of nanoscale membrane defects. IRE has been shown to be safe and effective when performed on tumors in the brain, liver, kidneys, pancreas, and prostate that are located near critical blood vessels and nerves. Accurate treatment planning and prediction of the ablation volume require a priori knowledge of the tissue-specific electric field threshold for cell death. This study addresses the challenge of defining an electric field threshold for human prostate cancer tissue. Three-dimensional reconstructions of the ablation volumes were created from one week post-treatment magnetic resonance imaging (MRIs) of ten patients who completed a clinical trial. The ablation volumes were incorporated into a finite element modeling software that was used to simulate patient-specific treatments, and the electric field threshold was calculated by matching the ablation volume to the field contour encompassing the equivalent volume. Solutions were obtained for static tissue electrical properties and dynamic properties that accounted for electroporation. According to the dynamic model, the electric field threshold was 506 ± 66 V/cm. Additionally, a potentially strong correlation (r = −0.624) was discovered between the electric field threshold and pre-treatment prostate-specific antigen levels, which needs to be validated in higher enrollment studies. Taken together, these findings can be used to guide the development of future IRE protocols.
Irreversible electroporation (IRE) is an emerging cancer treatment that utilizes non-thermal electric pulses for tumor ablation. The pulses are delivered through minimally invasive needle electrodes inserted into the target tissue and lead to cell death through the creation of nanoscale membrane defects. IRE has been shown to be safe and effective when performed on tumors in the brain, liver, kidneys, pancreas, and prostate that are located near critical blood vessels and nerves. Accurate treatment planning and prediction of the ablation volume require a priori knowledge of the tissue-specific electric field threshold for cell death. This study addresses the challenge of defining an electric field threshold for human prostate cancer tissue. Three-dimensional reconstructions of the ablation volumes were created from one week post-treatment magnetic resonance imaging (MRIs) of ten patients who completed a clinical trial. The ablation volumes were incorporated into a finite element modeling software that was used to simulate patient-specific treatments, and the electric field threshold was calculated by matching the ablation volume to the field contour encompassing the equivalent volume. Solutions were obtained for static tissue electrical properties and dynamic properties that accounted for electroporation. According to the dynamic model, the electric field threshold was 506 6 66 V/cm. Additionally, a potentially strong correlation (r ¼ À0.624) was discovered between the electric field threshold and pre-treatment prostate-specific antigen levels, which needs to be validated in higher enrollment studies. Taken together, these findings can be used to guide the development of Irreversible electroporation (IRE) is a non-thermal, soft tissue ablation modality 1 that has been used to treat a variety of tumors in the liver, 2 kidneys, 3 pancreas, 4,5 and prostate. [6][7][8][9] This technique involves generating pulsed electric fields between electrodes that are inserted into or around the region of interest. If the electric field is of sufficient strength and duration, cells die through the creation of long-lasting nanopores in the plasma membrane. 10 Because the mechanism of cell death does not rely on extreme temperatures, IRE can be performed safely near major blood vessels 11 and nerves. 12 Therefore, it is gaining momentum as a viable treatment option for surgically inoperable tumors and an alternative to functionally limiting resection procedures.
The first human IRE trials in the prostate were performed by Onik and Rubinsky. 13 Following the placement of 4 electrodes in a square grid, a NanoKnife V R generator was used to apply IRE pulses between all pairs, including the diagonals (six treatments). Immediately after treatment, continence and potency were preserved, even when ablations involved the urethra and ejaculatory ducts. Additionally, biopsies taken within the treatment zone showed no evidence of cancer and exhibited characteristics encountered in previous animal studies, including a well-demarcated ablation and vascular and nervous preservation. 14,15 Recent clinical trials have demonstrated similar levels of success in a two-center study by Valerio et al. after a median follow-up of six months. 7 Specifically, 75% (18/24) of patients exhibited no signs of residual disease, 100% (24/24) were continent, and potency was preserved in 95% (19/20) of those potent before treatment. Also, no patients had a recto-urethral fistula or urethral stricture. Prospective studies with higher enrollment are currently underway to confirm efficacy. 8,9,16 Despite the initial encouraging results, relatively little is known about the electrical response of prostatic tissue to IRE. For a given set of pulse parameters (pulse duration, number, and repetition rate), the electric field, which is controlled by the applied voltage and electrode spacing, is the primary factor in defining the spatial distribution of cell death. The macroscopic electric field controls the microscopic increase in transmembrane potential (TMP) and the induction of nanopores. The electric field threshold for cell death has been characterized in several tissue types, including normal porcine liver (423 V/cm), 17 normal canine kidneys (575 V/cm), 18 normal canine brain (495-510 V/cm), 19 normal canine prostate (948 V/cm), 20 and normal human prostate (1135 V/cm). 20 Additionally, Neal et al. determined a dynamic conductivity function specific to normal canine prostate based on intrapulse voltage and current measurements. 20 A significant challenge remains in defining the electric field threshold for human prostate tumors. Here, we utilize contrast-enhanced magnetic resonance imaging (MRI) data from a human clinical trial 7 to reconstruct ablation volumes and predict the electric field threshold. Specifically, a numerical model was created for the electrodes embedded within the ablation volume, and the dynamic conductivity function was varied until the calculated current matched the current measured during the clinical trial. Then, the field threshold was determined by matching the ablation volume to the volume encompassed by a specific electric field contour.
Our results indicate that the average electric field threshold predicted by the dynamic model was 506 V/cm. Additionally, we found a correlation (r ¼ À0.624) between the field threshold and pre-treatment prostate-specific antigen (PSA) levels. Taken together, this information could be used to improve the accuracy of IRE treatment planning algorithms for prostate cancer.
II. RESULTS AND DISCUSSION
Treatment of prostate cancer involves a variety of strategies ranging from active surveillance to radical prostatectomy. Focal therapy with IRE may serve an excellent medium by achieving local oncological control with prostatic tissue preservation. Further, patient morbidity may be reduced, and functional outcomes may be improved compared to other techniques, such as brachytherapy, cryoablation, and high-intensity focused ultrasound, whose results have been summarized in Ref. 21. Refer to Sec. IV for a description of the steps for treating patients using a NanoKnife V R (Fig. 1) and simulating the treatments using finite element modeling (Fig. 2).
Determining the electric field threshold for IRE is a vital step in the treatment planning process. It ensures complete destruction of the targeted zone while preserving as much of the healthy tissues as possible. In practice, this improves patient outcomes. Our results indicate that the IRE threshold ranges from 412 to 614 V/cm (Table I) It should be noted that the electric field thresholds determined for the clinical protocols utilized here in cancerous tissue (506 V/cm) are significantly lower than those previously reported in benign prostate parenchyma (1135 V/cm, an average of two trials). 20 In addition to the tissue type, this discrepancy is likely due to a number of compounding factors pertinent to the studies.
The thresholds in Ref. 20 were derived from numerical simulations calibrated to pathologic lesions three and four weeks post-IRE versus radiologic lesions one week post-IRE. The additional time between the IRE treatment and lesion evaluation permits a significant lesion resolution, 14 which would require calibrating the electric field threshold to a smaller volume, resulting in a significantly higher threshold. Furthermore, Ref. 20 used a two-needle electrode array with a single treatment, and the lesions calibrated in this study used a four-needle electrode array with six total treatments. The overlap between successive treatments likely reduced the electric field threshold for cell death, as those regions experienced an elevated number of pulses. 22 Statistical correlations between IRE parameters and PSA measures are summarized in Table II. Pre-treatment PSA demonstrated a potentially strong correlation with the electric field threshold (r ¼ À0.624, p ¼ 0.054). Even though this relationship did not reach statistical significance, the analysis was under-powered, most likely due to the sample size (Table II). Pretreatment PSA demonstrated a strong significant relationship with the change in the current (r ¼ 0.694; p ¼ 0.026) and ablation volume (r ¼ 0.730; p ¼ 0.017). Additionally, the change in PSA demonstrated a strong significant relationship with the ablation volume (r ¼ 0.843; p ¼ 0.004), and the electric field threshold demonstrated a strong significant relationship with the ablation volume (r ¼ À0.896; p ¼ <0.001). The latter may be due to larger regions of overlap between successive treatments.
Perhaps, the two most interesting correlations are between the pre-treatment PSA and the electric field threshold and between the pre-treatment PSA and the change in current (Fig. 5). Being able to modify the aggressiveness of the treatment protocol according to a priori knowledge of PSA safeguards against under-treatment, which can lead to recurrence. One possible where the maximum transmembrane potential (TMP) across a spherical cell is a function of the applied electric field (E) and cell radius (r). Larger cells are able to reach a critical TMP for IRE (e.g., 1 V) with a lower electric field (i.e., larger cells have a lower IRE threshold). Previous studies have shown that the PSA level is correlated strongly with tumor diagnosis and aggressiveness. 24 As malignancy is multifactorial, it is not possible to generalize that more aggressive tumors are comprised of larger cells. Oftentimes, it depends on the type of tumor, and the prostate literature focuses on links between the cancer progression and Gleason score, 25 nuclear morphology, 25 and centrosome size. 26 The well-studied DU145 and PC3 cell lines do show a dependence of the cell size on aggressiveness, with the larger PC3 (Ref. 27) cells having a higher metastatic potential. 28 This hypothesis needs to be tested in future work, but it could validate treatment planning algorithms based on the diameter of cancer cells from biopsy. 29 Additionally, it may be advantageous to use the biopsy to create finite element models of multicellular clusters that account for cell packing density, 30 as it has been shown that the electroporation threshold increases with higher cell packing density. 31 These types of biopsy guided treatment planning procedures may alleviate concerns associated with patient-to-patient variability in response to therapy. The significant correlation between the pre-treatment PSA and the change in current during IRE pulse delivery may be explained by the fact that there is also a significant correlation between the pre-treatment PSA and the ablation volume. Therefore, tumors with a higher pretreatment PSA experienced a greater amount of electroporation spatially, which corresponds to a greater change in current. During IRE of the pancreas, researchers have shown that the change in tissue resistance was significant in predicting local failure or recurrence, but not overall disease free survival. 32 For constant voltage electroporation systems, such as the NanoKnife, the change in current should be inversely proportional to the change in resistance (current ¼ voltage/resistance). Also of interest is the fact that the change in PSA (pre-treatment minus post-treatment) was significantly correlated with the ablation volume. Lower posttreatment PSA scores have been associated with longer disease-free survival following highintensity focused ultrasound. 33 There are several limitations associated with the numerical models that can be attributed to either the nature of the clinical procedure or a lack of available data. As mentioned in Sec. IV, treatments were designed to completely destroy the tumor, and the boundary of the ablation volume extended into normal prostate tissues. Therefore, the reported electric field thresholds were likely influenced by healthy cells but provide at the least a conservative estimate for cancerous tissues. For a separate study to overcome this limitation, the tumor would need to be purposely undertreated, which may be possible in treat and resect procedures. Another limitation of this study is that the ablation volumes were measured at one week post-treatment using MRI. Tissue pathology is required to validate that the measurements are correlated with the volume of dead cells. It is possible that the ablation volume changes over time and an earlier or later time point would be more representative of the electric field threshold for cell death. Finally, major blood vessels, 34 nerves, and ducts may distort the electric field. 20 These heterogeneities should be included in future image reconstructions and simulations to study their impact on the IRE threshold.
III. CONCLUSION
This investigation provides valuable data regarding the electric field threshold for IRE in human prostate cancer tissues. Volumetric reconstructions were created from one week posttreatment MRIs and incorporated into numerical models designed to simulate the electric field distribution during electroporation. The average electric field threshold that was determined (506 6 66 V/cm) is comparable to that of other tissue types [e.g., normal porcine liver (423 V/ cm), 17 normal canine kidney (575 V/cm), 18 normal canine brain (495-510 V/cm) 19 ], as well as experiments performed on three-dimensional in vitro tumor models of pancreatic cancer (500 V/ cm). 35 It was also discovered that there is a potential correlation between the electric field threshold and the pre-treatment PSA, with higher PSA scores having a lower lethal threshold. This warrants future investigations into this relationship, as it opens the door for using a physiologic measurement to guide treatment planning.
A. Clinical workflow
This is a retrospective analysis of men who underwent therapeutic IRE, and no ID numbers are assigned to studies granted retrospectively according to local regulations. The results of previous clinical studies, which were performed with patients' consent, IRB approval, and Good Clinical Practices, can be found in Ref. 7. Briefly, patients were treated at Princess Grace Hospital in London/UK or St. Vincent's Prostate Cancer Centre in Sydney/Australia. All patients initially underwent a multi-parametric MRI in addition to demonstrating clinically significant prostate cancer through an analysis of their histology. Clinically significant cancer consists of a Gleason pattern !4 and/or a cancer core length !4 mm. Patients were treated while under general anesthesia with deep muscle paralysis using pancuronium bromide. A NanoKnife generator was used to deliver the IRE treatment. 4-6 needle electrodes were positioned not more than 2 cm apart transperineally at the margin of the cancer lesion under transrectal ultrasound guidance, and the electrode exposure length was set to either 1.5 cm or 2 cm depending on the size of the tumor. 90 pulses with a pulse length of 70 ls were applied between each electrode pair. 10 test pulses were initially delivered in order to verify proper electrode placement and parameter selection. If the current was determined to be in the proper range (20-40 A), then the remaining 80 pulses were delivered. An example of the NanoKnife clinical workflow is shown in Fig. 1 for patient P2. The operator enters the electrode spacing using a graphical user interface, and the system calculates the required voltage to achieve a constant voltage-to-distance ratio of 1500 V/cm. The order of electrode pair activation is determined from the highest voltage (i.e., largest spacing) to the lowest voltage, and the remaining pulse parameters can be modified (pulse length and pulse number). The target volume was defined by MRI and histopathology with a safety margin of 3-5 mm. All treatments were performed within a single session. One week post-treatment, a contrast-enhanced MRI was performed to evaluate the local effect of the treatment. Additionally, PSA levels were evaluated every three months, and MRIs were repeated at six months and one year.
B. IRE lesion reconstruction
Three-dimensional reconstructions of the ablation volume were created for 17 patients who participated in the clinical trial using the one week post-treatment MRIs. The ablation area is easily visible on post-treatment early scans as the non-enhancing zone on the contrast-enhanced MR sequence. Additionally, the locations of the electrodes and the apex and base of the prostate were labeled by the study clinicians. The location of the electrodes was determined by matching the grid reference recorded during the procedure and the margins of the ablation zone. It is important to note that the prostate gland (pre-and post-treatment) and tumor (pretreatment) were also reconstructed but excluded from the simulations. This simplification was made for several reasons, including the fact that the ablation volume extended beyond the margin of the tumor into the prostate gland and a lack of available data on the differences in electrical properties between the tumor and the prostate gland.
Of the 17 reconstructions, 7 were omitted from the analysis for various reasons. 2 patients exhibited large protrusions in their ablation volume which might have resulted from underlying heterogeneous features (e.g., urethra, bowel, and neurovascular bundle) not included in the simulation; 3 patients exhibited non-contiguous ablations; 2 patients received an extra treatment for which the electrode marker data were unavailable. The 10 patients who were analyzed all received an equivalent energy dose. 9 of the patients received 90 pulses per treatment with a pulse duration of 70 ls, and 1 patient (JC01) received 70 pulses per treatment with a pulse duration of 90 ls. As mentioned, based on the electrode spacing, the voltage-to-distance ratio was 1500 V/cm. An exception was patient P1, who had one treatment at 1364 V/cm. In this case, the electrode spacing (2.2 cm) required a larger voltage than available on the NanoKnife (3000 V maximum) to reach 1500 V/cm (see Table I).
C. Numerical simulations
A 3D finite element model was developed using COMSOL Multiphysics (version 5.2, Burlington, MA) to calculate the electric field distribution during IRE therapy. This methodology has been validated as a means to predict the electric field threshold for reversible electroporation and IRE. 36 The ablation volume was placed inside a cube (7.5 cm), which was large enough to encompass the surrounding prostate gland (Fig. 2), and the four electrodes were modeled as stainless steel cylinders [r ¼ 2 Â 10 6 S/m (Ref. 37)] with a height set according to the exposure length used during treatment. Initially, the electrical conductivity of the tissue (ablation volume and cube) was set to a static, baseline value (r 0 ¼ 0.284 S/m), which was determined by lowvoltage (50 V) pre-pulse current measurements in normal canine prostate. 20 It was not possible to perform similar measurements from the human clinical data included herein, as the NanoKnife generator does not report pre-pulse currents, and the voltage delivered during the first pulse was high enough to induce electroporation. Boundary conditions (electric potential and ground) were applied to the outer surfaces of electrode pairs to mimic the clinical procedure. The electric field distribution within the tissue (E ¼ Àr/) was obtained for each electrode pair by solving where / is the electric potential. The overall electric field distribution from all six treatments was defined as a separate variable by taking the maximum values from each pair-based combination. Then, the electric field threshold was calculated by performing a volume integration of the maximum electric field distribution above a certain electric field contour. The value of the field contour was varied in 1 V/cm increments until the volume matched the actual ablation volume. A second set of simulations were performed with a dynamic electrical conductivity function to account for the electroporation-induced conductivity increase from the formation of nanopores r Á r Ej j Þr/ ð Þ¼ 0: ð This has been shown to be more accurate than the static model in predicting the permeabilized volume of tissues. 38 The conductivity function, r(jEj), was defined as a step function that increased from r 0 to r max over a transition zone of 800 V/cm centered at 500 V/cm with a continuous second derivative (Fig. 3). The characteristics of the transition zone were chosen to mimic the function developed for normal canine prostate. 20 A parametric study was run on r max for each patient to match the calculated current to the maximum measured current during the last set of ten pulses from the first electrode pair activation (see Fig. 1). The first electrode pair was evaluated to avoid compounding effects from multiple treatments. The current was calculated by integrating the normal current density across a cut plane between the electrodes. Following the determination of the patient-specific conductivity function, the solution for the electric field distribution was obtained under these conditions for each electrode pair combination. Then, an additional variable was created for the overall electric conductivity distribution from all six treatments by taking the maximum values from each pair-based simulation. Finally, the electric field threshold was calculated in the same fashion as described earlier.
D. Statistical analysis
Correlations were determined between all variables of interest, including the pre-treatment PSA, change in current during treatment, change in conductivity during treatment, and the electric field threshold. Additionally, the correlation between the ablation volume and the change in PSA was also determined. All statistical analyses were conducted using JMP Pro 11 (Cary, North Carolina, USA) with a significance level of p 0.05. Due to a low sample size (n ¼ 10), the statistical power for most relationships was very low (<60%). Despite this low power, significance relationships were still found. | 5,054 | 2017-10-09T00:00:00.000 | [
"Physics",
"Biology"
] |
Acoustic transfer of protein crystals from agarose pedestals to micromeshes for high-throughput screening
An acoustic high-throughput screening method is described for harvesting protein crystals and combining the protein crystals with chemicals such as a fragment library.
Acoustic droplet ejection (ADE) is an emerging technology with broad applications in serial crystallography such as growing, improving and manipulating protein crystals. One application of this technology is to gently transfer crystals onto MiTeGen micromeshes with minimal solvent. Once mounted on a micromesh, each crystal can be combined with different chemicals such as crystal-improving additives or a fragment library. Acoustic crystal mounting is fast (2.33 transfers s À1 ) and all transfers occur in a sealed environment that is in vapor equilibrium with the mother liquor. Here, a system is presented to retain crystals near the ejection point and away from the inaccessible dead volume at the bottom of the well by placing the crystals on a concave agarose pedestal (CAP) with the same chemical composition as the crystal mother liquor. The bowl-shaped CAP is impenetrable to crystals. Consequently, gravity will gently move the crystals into the optimal location for acoustic ejection. It is demonstrated that an agarose pedestal of this type is compatible with most commercially available crystallization conditions and that protein crystals are readily transferred from the agarose pedestal onto micromeshes with no loss in diffraction quality. It is also shown that crystals can be grown directly on CAPs, which avoids the need to transfer the crystals from the hanging drop to a CAP. This technology has been used to combine thermolysin and lysozyme crystals with an assortment of anomalously scattering heavy atoms. The results point towards a fast nanolitre method for crystal mounting and highthroughput screening.
Introduction
Acoustic droplet ejection (ADE) is an automated and keyboard-driven technology for growing protein crystals (Yin et al., 2014;Villaseñ or et al., 2012), improving the quality of protein crystals and transferring protein crystals onto data-collection media (Soares et al., 2011) such as pin-mounted micromesh sample holders. ADE transfers momentum from a sound pulse to move liquids and suspended crystals from the source location through a short air column to an arbitrary destination (Ellson et al., 2003;Fig. 1) with a trajectory precision of $1.3 for solutions of $30 mm crystals.
High-throughput screening of chemical libraries (such as fragment libraries) using X-ray crystallography requires a fast and flexible crystal-mounting technology. Acoustic crystal mounting is an attractive choice for high-throughput screening applications (Table 1). Since ADE is automated, its success is not dependent on the manual dexterity or physical aptitude of the experimenter. ADE is gentle in that no tools (for example pipette tips) touch the source medium or the destination medium. This prevents contamination, chemical leaching, mechanical stress on crystals or loss of specimen owing to surface adhesion (McDonald et al., 2008). Transfer is fast (500 transfers s À1 for multiple transfers to the same micromesh and 2.33 transfers s À1 for transfers to different micromeshes), which simplifies serial applications such as distributing crystals onto different micromeshes and combining each crystal with a different chemical. Acoustically transferring both a crystal and a screened chemical and soaking them together on the same micromesh minimizes the use of protein, chemicals and time.
The ejection trajectory is highly accurate, which allows crystals and screening chemicals to be individually passed from wells in a source plate (described in x2) through a small (1 mm diameter) aperture and onto a micromesh that is secured in a sealed pin platform box that contains mother liquor (Fig. 2). At present, pin-mounted micromeshes are manually snapped into the pin platform, where they are secured in a fitting that mechanically compresses the metal pin (all components are printed by a three-dimensional printer and print files are available on request). Each micromesh is then individually targeted by our acoustic system, so that crystals and screening chemicals can be transferred from a source plate and combined on the micromesh. The pin platform box ensures that the micromesh is in vapor equilibrium with the mother liquor before, during and after the transfer of crystals and screening chemicals. This means that each crystal can be soaked with its screening chemical on a micromesh for as long as desired without the crystal dehydrating. It is also possible to co-crystallize proteins and chemical fragments (or other screened chemicals) in situ directly on micromeshes using a similar technique (Yin et al., 2014).
This study uses the Echo 550 liquid-handling instrument (Labcyte Inc., Sunnyvale, California, USA) to transfer suspended crystals and chemicals from source wells containing CAPs onto micromeshes. Innovations in the Echo line of instrumentation have decreased the 'dead volume' (an inaccessible region for ejection) at the bottom of each source well to <4 ml (Harris et al., 2008). However, crystallization experiments tend to yield few crystals, many of which then disappear into this 4 ml region. Consequently, acoustic crystal transfer is only practical if the crystals are suspended at or near the ejection region.
Here, we describe the use of agarose gels to construct concave pedestals that support protein crystals at a suitable location for acoustic ejection; crystals and chemicals are ejected onto each micromesh for high-throughput screening. Acoustic droplet ejection (ADE) from a concave agarose pedestal (CAP). ADE uses sound energy to transfer 2.5 nl microdroplets of liquids (such as chemical libraries) or suspended solids (such as mother liquor containing small protein crystals) from a source well, through a short air column (1-10 mm) to a micromesh. Sound energy from the transducer is channeled to the focal point (i.e. ejection zone), displacing the surface, where a controlled ejection occurs. The droplet size is governed by the wavelength of the sound emitted; we used a fixed wavelength to eject chemicals and crystals in 2.5 nl increments. Chemicals are ejected from unmodified source wells. Protein crystals are ejected from source wells that have a CAP with the same chemical composition as the mother liquor of the crystals, ensuring that the crystals remain intact and viable for transfer. Agarose, being acoustically transparent, allows the transfer of most suspended solids (such as crystals) with very high precision onto a standard micromesh. Protein crystals in mother liquor are sequestered in the concave basin and suspended above the dead volume. A 2% agarose solution in the random-coil phase (at 100 C) was mixed in a 1:1 ratio with crystallization conditions for lysozyme, thermolysin, stachydrine demethylase and photosystem II. Wells of a 384-well polypropylene source microplate were overfilled with 70 ml of the agarose and precipitant mixture. To create the concave topography of the pedestal, 40 ml were removed from the wells after a 3 s cooling period. Table 1 Characteristics of different crystal-harvesting techniques.
We used acoustic methods to mount crystals on micromeshes and then to add chemicals that soak into the already mounted crystals [Le Maire et al. (2011) refer to soaking chemicals with crystals that were ready for data collection on plates as in crystallo soaking]. Characteristics of robotic crystal-harvesting techniques such as the universal manipulation robot (UMR) are shown in the column headed 'Robotic' (Viola et al., 2011) and manual crystal-harvesting characteristics are shown in the column headed 'Hand mount'. The time needed to mount crystals was measured for acoustic mounting with the Echo 550 (see x2). The transfer speeds using other techniques were obtained from published videos (http:// www.ruppweb.org/cryscam/umr_small.wmv) or personal communications. The remaining characteristics were obtained from published data (Deller & Rupp, 2014 The concave pedestals consist of acoustically transparent hydrogels (polymerized matrix materials with high water content). The pedestals suspend protein crystals above the dead volume and sequester them precisely at the ejection zone, where the acoustic ejection pulse occurs. Many types of hydrogels are transparent to acoustic energy. We chose agarose to create concave agarose pedestals (CAPs) for this study because agarose is a safe and common laboratory reagent. In contrast, gelatin pedestals require overnight refrigeration and acrylamide pedestals are made with toxic substances. Crystals can be pipetted onto CAPs for serial transfer onto micromeshes. The pedestal is impermeable to protein crystals but is permeable to mother-liquor chemicals (so the crystals retain the same chemical composition as the mother liquor). Agarose is acoustically transparent, which facilitates easy and rapid serial transfer of crystals from the CAP to micromeshes. In some cases it may be advantageous to serially transfer microcrystals onto micromeshes in this way, both to save time and to minimize the X-ray background contribution from solvent. However, we believe that the largest utility for acoustic crystal mounting will derive from its ability to readily combine just-mounted crystals with chemicals such as heavy atoms, cryoprotectants, additives that improve crystal quality and of course fragment libraries (Table 2). We have also grown crystals directly on CAPs to avoid manual transfer (http:// www.youtube.com/channel/UCtCiMjlzBnq5VYZzrEi3EiQ). When growing crystals directly on CAPs, agarose is a better choice than agar because the impurities in agar cause the matrix to acquire a yellow tint that can make crystals harder to see.
Methods
We used a commercially available Echo 550 liquid-handling instrument (Labcyte Inc) to transfer two standard crystal samples (lysozyme and thermolysin), a metalloprotein sample (stachydrine demethylase) and membrane-protein crystals (photosystem II) from a 384-well polypropylene microplate (Labcyte Inc) source plate onto pin-mounted micromeshes that were secured in a pin platform box (Fig. 2). The temperature inside the acoustic transfer chamber was tightly controlled at 22 C. The crystals used in this experiment were selected to represent a broad range of crystallization conditions and physical properties, such as fragile rod-shaped thermolysin, rigid cuboidal lysozyme and plate-shaped stachydrine demethylase crystals. The concave agarose pedestals (CAPs) contained the same chemical environment as the crystal mother liquor, including cryoprotectants. Cryoprotection of lysozyme and stachydrine demethylase was with mother liquor plus 15% glycerol (10 ml mother liquor plus 1.5 ml glycerol), thermolysin was soaked in mother liquor plus 20% ethylene glycol (10 ml mother liquor plus 2.0 ml ethylene glycol) and photosystem II crystals were stage-soaked to mother liquor plus 30% glycerol (10 ml mother liquor plus 1, 2 and 3 ml increasing concentrations of glycerol) 1 .
To enable the ejection of all crystals, we pre-loaded the source plate with $30 ml CAPs (Fig. 1). Each CAP was composed of 1% agarose and mother liquor containing cryoprotectant. Each type of protein crystal was separately grown on a cover slip in a standard hanging-drop preparation. The crystals were manually pipetted from their hanging drop onto the CAPs. Each pedestal suspended the crystals above the dead volume that is inaccessible for transfer by the Echo 550. Furthermore, the concave shape of the pedestal concentrated the crystals in the ejection zone (the middle of each source well). Crystals were acoustically transferred from the CAP onto a pin-mounted micromesh (Fig. 2) and cryocooled for X-ray data collection (cryocooling is described in x3.3). Subsequent to each crystal-ejection event, the concave shape ensured that the remaining crystals descended to the ejection zone. The concentration of crystals on the CAP determines the average number of crystals ejected with each 50 nl drop (approximately five crystals per micromesh for lysozyme and thermolysin and one crystal per every two micromeshes for stachydrine demethylase and photosystem II; see x2.3). We used thermolysin crystals to measure the time needed to harvest our specimens. The crystal-harvesting rate was found by averaging 15 timed trials of 495 crystal transfers to five distinct locations on 99 micromeshes, which required an average of 212.4 s to complete (Table 1).
Since each micromesh contains one or a few crystals, additional chemicals can be acoustically added to the already mounted crystals on the micromesh. For example, chemicals from a fragment library can be rapidly distributed so that one or a few crystals on each micromesh are soaked with each chemical fragment. This system allows easy and fast exploitation of protein crystals for high-throughput screening (or Table 2 Time needed for typical serial crystallography applications. The transfer rate for the Echo 550 is 500 transfers s À1 from a single location and 2.33 transfers s À1 when moving between source locations or between destination locations. Approximately 1 min is needed to exchange plates. We assume that there are sufficient pin platform boxes pre-loaded with micromeshes for each experiment. The time to complete the two first tasks was measured (x3.2 and x3.3), while for the two last tasks it was simulated using water in place of the chemical library (we have not yet acquired a large chemical library). $53 min 20 s serial crystallography) applications such as fragment library screening, cryo-condition search, heavy-atom screening, crystal improvement with additives and fast screening for diffraction quality.
Screening of compatible crystallization conditions
To demonstrate the general applicability of this crystalmounting method for samples grown using standard crystallization conditions, we surveyed the chemical compatibility of agarose crystal supports with commercial crystallization screens. 15 ml of crystallization conditions from 96 deep-well commercial crystallization plates, JBScreen Cryo HTS L (Jena Biosciences), Additive Screen (Hampton Research), MemGold (Molecular Dimensions) and MCSG-4 (Microlytic), were dispensed into a 384-well Poly Pro source microplate and centrifuged at 1216g for 60 s. The volume in each well was measured using the Echo 550 WellPing software and adjusted until all wells contained 15 AE 5 ml. The 384-well polypropylene plate was placed into a hot water bath (shallow enough to keep water out of the wells) and maintained at $70 C. 20 ml of a 2% solution of agarose in distilled water was prepared in an Erlenmeyer flask and maintained at $100 C on a hotplate until the agarose dissolved. The agarose solution was then cooled to 70 C. It is important to maintain the Erlenmeyer flask with the agarose solution at 70 C, because higher temperatures lead to bubbles and melt the pipette tips, while lower temperatures cause the concave basin to cool asymmetrically. 15 ml of the agarose solution were manually dispensed into each well of the heated 384-well polypropylene plate. Any observed bubbles were ruptured using the pipette tip. The 384well polypropylene plate was removed from the bath and (after cooling) centrifuged (1216g for 60 s). Each CAP was examined for air bubbles, firmness (as verified by probing with a toothpick) and visual evidence of precipitation (owing to incompatibility between mother liquor and agarose). We inspected the concave shape of each CAP. Finally, we added 10 ml of water to each CAP and attempted to eject droplets of this water using the Echo 550.
Protein crystallization and plate preparation
A 2% agarose solution was heated (100 C for $10 min) in a water bath until it reached a random-coil state (a polymer conformation where monomer subunits are randomly oriented but are still bound to adjacent subunits). The agarose solution was then cooled to 70 C and mixed in a 1:1 ratio with the following mother-liquor solutions: 0.2 M sodium acetate, 8% NaCl for lysozyme, 0.05 M NaOH, 15% ammonium sulfate for thermolysin, 10% glycerol, 10% PEG 3350, 25 mM hexammine cobalt chloride, 100 mM HEPES pH 7.0 for stachydrine demethylase and 40% PEG 5000 for photosystem II.
In order to achieve a concave basin, the wells must be over-filled with tacky agarose (when cooled to $70 C, agarose becomes somewhat adhesive) and mother-liquor solution so that the agarose adheres to the walls of the source well, resulting in a bowl-shaped Pin platform box. Crystals and screened chemicals can be transferred in the acoustic transfer chamber of the Echo 550 (a) using sound pulses generated by a transducer (b) to eject crystals and chemicals contained in a source plate (c) into a pin platform box (d) (shown without the lid for clarity; Yin et al., 2014). The pin platform lid (e) isolates the pin platform ( f ) to prevent dehydration. The internal environment is governed by mother-liquor solution that is secured in 1% agarose and is deposited into a moat (g) in the pin platform. The window (h) is used to view specimens and to add components through apertures (i) in the lid. After all of the crystals are mounted, tape is used to seal the apertures (j). The pin platform box contains 96 sockets for securing pin-mounted micromeshes (k). The crystals are transferred onto the pin-mounted micromeshes. Once mounted, the crystals can be combined with cryoprotectants, heavy atoms, crystal-improving additives or with a fragment library; these chemicals are acoustically transferred from the same source plate (c) or from a different source plate. The pin platform box is in equilibrium with the mother liquor before, during and after the crystals and chemicals are transferred onto the micromeshes. The inset (l) shows a magnified view of a photosystem II crystal that was transferred onto the micromesh, where it was combined with a chemical. All components of the pin platform box are three-dimensionally printed (print files are available on request). surface when excess agarose is removed from the center of each well. The wells of a 384-well polypropylene source microplate were overfilled with 70 ml of the agarose and mother-liquor mixture using a pipette. After allowing 3 s for the agarose to adhere to the sides of the well, 40 ml were aspirated out of the well from the center. This created a concave basin in the agarose gel (Fig. 1). A custom-made positioning tool secured the pipette tip in the center of each well to ensure a symmetric bowl shape.
Crystals of lysozyme (50 mg ml À1 ), thermolysin (50 mg ml À1 ) and stachydrine demethylase (20 mg ml À1 ) were grown by standard hanging-drop protocols (4 ml of protein solution combined in a 1:1 ratio with mother-liquor solution over a 500 ml reservoir). The photosystem II crystals were donated. Crystals were manually pipetted from each hanging drop onto the agarose pedestal, where gravity led them to accumulate in the center. The plate was sealed with adhesive plastic. Using the Echo 550, the supernatant above the crystals was removed in 1 ml increments (by serial ejection onto the plastic adhesive that sealed the source plate; no pin platform box was present) until crystals were observed in the ejecta using a light microscope (supernatant removal). A 1 ml volume was chosen because the emergence of crystals from the CAP was observed to be gradual, so the number of crystals lost in the 1 ml supernatant-removal procedure was small compared with the total number in the well. The adhesive plastic was peeled off after the supernatant was removed. 50 nl of crystal suspension was then acoustically transferred from the CAP to each micromesh (Fig. 2). In cases where the crystal concentration was high (lysozyme and thermolysin), each micromesh contained an average of approximately five crystals. In cases where the crystal concentration was low (stachydrine demethylase and photosystem II), only mother liquor was ejected onto some of the micromeshes. If crystals were not observed on each micromesh (using a Leica microscope) then additional transfers were made.
Each micromesh that contained crystals was cryocooled. When cryocooling many crystals on pin-mounted micromeshes, the entire pin platform was manually dropped into liquid nitrogen (see x3.3). When cryocooling only a few crystals on pin-mounted micromeshes, each crystal was individually cooled by hand. Diffraction data were collected on beamlines X12C and X29 at the National Synchrotron Light Source (NSLS). Data sets were processed with HKL-2000 (Otwinowski & Minor, 2001) and further processed using CTRUN-CATE in the CCP4i suite (Winn et al., 2011). Structures were obtained by molecular substitution from published models and were refined using REFMAC (Winn et al., 2003) and ARP/ wARP (Perrakis et al., 2001) (starting models: lysozyme, PDB entry 1lyz; thermolysin, 4tln; stachydrine demethylase, 3vca; photosystem II, 1fe1; Diamond, 1974;Holmes & Matthews, 1981;Daughtry et al., 2012;Zouni et al., 2001). Each atomic model was further screened for binding to agarose (ZINC database 87496095) using AutoDock Vina (Trott & Olson, 2010), confirming that the tightest predicted binding pose for agarose monomers has zero electron density 2 (we could not find any electron density for sugar molecules that might have originated from the agarose gel).
Preparing crystals for screening against a heavy-atom library
Thermolysin and lysozyme crystals were obtained as described in x2.2. Crystals were manually transferred from the thermolysin and lysozyme hanging drops to a CAP containing thermolysin mother liquor and to a CAP containing lysozyme mother liquor as described in x2.1. Additionally, eight watersoluble heavy-atom salt solutions (cupric sulfate, iron chloride, nickel sulfate, hexammine cobalt chloride, potassium iodide, sodium iodide, sodium bromide and copper nitrate) and three insoluble suspensions (platinum chloride, nickel chloride and molybdenum chloride) were added to discrete locations on the same polypropylene source plate. Hence, the same source plate contained all of the building blocks for our screening experiment (the protein crystals and the screened chemicals). To assemble the experiments using these building blocks, two pin platform boxes were loaded with pins and mother liquor (thermolysin mother liquor for the thermolysin crystals and lysozyme mother liquor for the lysozyme crystals; Fig. 2).
Assessing the acoustic transparency of hydrogels
Agarose is one example of a class of materials termed hydrogels, most or all of which we predicted to be functionally transparent to the types of sound waves (frequencies, waveforms etc.) used for acoustic crystal mounting. To determine the acoustic transparency of various hydrogels, three wells of a 384-well polypropylene microplate were prepared with Table 3 Crystallization conditions and agarose compatibility screening.
Four commercially available crystallization plates (each containing 96 different conditions) were screened for incompatibility between the components of the commercial kits and 2%(w/v) agarose. 20 ml of each crystallization condition from the commercially available plates were added to the wells of a 384-well polypropylene plate with 20 ml 2% agarose in the random-coil state. Conditions that resulted in precipitation were recorded. All conditions formed a hardened gel, so this information was not recorded in the table. After cooling, 10 ml of water were added to the wells and 2.5 nl drops of this water were ejected using the Echo 550. Wells from which a drop was not ejected were also recorded. Crystallization, CAP and transfer of thermolysin, lysozyme, stachydrine demethylase and photosystem II. Proteins were crystallized using the hangingdrop method (a, d, g, j). Wells were preloaded with a 1% agarose and mother-liquor pedestal. Crystals were transferred manually with a pipette into wells of an acoustically transparent 384-well polypropylene plate (b, e, h, k). The concave basin of the CAP caused crystals to concentrate at the ejection zone under the force of gravity. Crystals (indicated by arrows) were transferred onto MiTeGen micromeshes for X-ray diffraction analysis (c, f, i, l) (see Supplementary Fig. S2).
pedestals of gelatin (3% unflavored gelatin; commercial gelatin), agarose (2% agarose; Sigma-Aldrich catalog No. A6877) and acrylamide [16%(w/v) 29:1 acrylamide; Sigma-Aldrich catalog No. A7802]. For each hydrogel, we used the Echo 550 WellPing software to send five acoustic pulses (11.5 MHz) through the material and to listen to the resulting reflected acoustic signal. The five reflected acoustic profiles from each material were then averaged.
Agarose pedestals are compatible with most crystallization conditions
To test the compatibility of agarose pedestals with common crystallization conditions (Table 3), 20 ml of each crystallization condition was mixed with 20 ml 2% agarose at 70 C and allowed to cool into a gel. Once hardened, each gel was tested for (i) firmness, (ii) acoustic ejection and (iii) the presence of precipitate. 10 ml mother liquor was added to the gel and 2.5 nl were ejected out of each well onto a plastic cover using the Echo 550. Transfer success was observed under a Leica microscope. A high percentage of wells were both firm enough to support a distinct layer of mother liquor and able to eject this mother liquor (Table 3). Each well was also examined for precipitation using the Leica microscope. Any solution (agarose and mother liquor) that appeared to form a precipitate was recorded (CAPs were examined with a light microscope and any discoloration was noted as a precipitate). In cases where the initial agarose preparation has a precipitate, adjustment of the agarose concentration and/or the precipitant concentrations usually allowed an effective CAP (data not shown). Crystallization cocktails that stubbornly inhibit gel formation {for example, ammonium sulfate ! 30%(w/v) [30%(w/v) = 40% saturation] or PEG 5000 ! 50%(w/v)} and prevent droplet ejection can be soaked in the mother liquor after the gel has hardened. We therefore believe that this method is generally applicable to most common protein crystallization conditions.
CAPs eliminate dead volume and reduce loss of crystals
Lysozyme, thermolysin, stachydrine demethylase and photosystem II crystals were transferred from their hangingdrop crystallization plates (Figs. 3a, 3d, 3g and 3j, respectively) and suspended on CAPs in a source plate. The concave basin assured that many crystals remained in the ejection zone of the wells (Figs. 3b, 3e, 3h and 3k). After supernatant removal, the crystals on the CAP were acoustically transferred onto micromeshes (Figs. 3c, 3f, 3i and 3l). Diffraction data from acoustically mounted crystals of lysozyme and thermolysin were comparable to diffraction from manually mounted control crystals, demonstrating that acoustic ejection from a concave agarose pedestal does not adversely affect the quality of the data (Table 4) or of the resulting electron density (see Supplementary Fig. S2). In all cases, the quality of recorded data was compatible with data from manually mounted crystals. AutoDock Vina was used to predict the best binding location between each protein structure and agarose monomers; inspection showed that there was no electron density in these areas. Each electron-density map was also visually examined using Coot (Emsley & Cowtan, 2004) Table 4 Data-collection and model-refinement statistics.
Diffraction data from acoustically mounted crystals of lysozyme and thermolysin were comparable to diffraction from a manually mounted control crystal (left columns). Diffraction from acoustically mounted crystals of stachydrine demethylase and photosystem II were typical of these crystals (private communication). In the case of lysozyme and thermolysin, each of the ten data sets from acoustically mounted crystals and each of the ten data sets from hand-mounted crystals was obtained from a single crystal. Where appropriate, average values and standard deviations are shown for each group of ten data sets from similar crystals. In the case of stachydrine demethylase and photosystem II, diffraction data from multiple acoustically mounted crystals were combined into a single data set. there was no large contiguous difference density that could correspond to a sugar molecule.
Crystals can be combined with chemicals directly on a micromesh
The thermolysin and lysozyme crystals described in x2.3 were transferred onto pin-mounted micromeshes that were secured in two pin platform boxes (as described in x3.2). For each type of protein crystal, 50 nl of crystal suspension were transferred (through apertures) onto each of 36 micromeshes (on average each micromesh contained approximately five crystals). Once all of the crystals were distributed to micromeshes, each heavy-atom solution described in x2.3 was acoustically transferred (through apertures) onto three different crystal-containing micromeshes of thermolysin and three of lysozyme. Three controls with no heavy atoms were included for each type of protein crystal. Each micromesh with crystals plus heavy atoms was soaked for 1 h. The two pin platform boxes were in equilibrium with the mother liquors of thermolysin and lysozyme (Fig. 2), so the crystals were soaked without dehydrating. After soaking, the adhesive tape was detached from the back of each pin platform and the lid was removed. Each pin platform (filled with crystal-containing micromeshes) was dropped 'face down' into liquid nitrogen, so that the cryocoolant flowed through the window of the pin platform and flash-cooled each of the crystals. Under liquid nitrogen, the pin platform was rotated to face up and each pinmounted micromesh was manually inserted into a MiTeGen Reusable Base (model B1A-R) 4 . X-ray data were obtained from all 36 thermolysin heavyatom soaks and from all 36 lysozyme heavy-atom soaks. The data revealed anomalous signal for some known lysozymebinding heavy atoms (nickel sulfate and the iodide salts) but not for sodium bromide (which is a lysozyme ligand in a Protein Data Bank structure). Surprisingly, copper sulfate also yielded a detectable anomalous signal when soaked with lysozyme (the PDB did not previously contain a copper derivative of lysozyme). The structure of this derivative was readily solved (PDB entry 4p2e) using the anomalous diffraction from three bound Cu atoms (one at a twofold position near Leu129, another coordinated by His15 and Glu7, and a third discreetly disordered copper near Asp52; similar to Teichberg et al., 1974). None of the insoluble salts yielded anomalous data when soaked with lysozyme or thermolysin. For both thermolysin and lysozyme, all of the heavy atoms with accessible white lines (excluding iodine and iron) were confirmed to have been transferred by observing a fluorescence peak at the expected energy using a monochromator excitation scan. Table 2 summarizes the time needed for the Echo 550 to perform soaking experiments of this type.
Hydrogels are acoustically transparent
This study reports the use of agarose gels to support protein crystals at a suitable location for automatic crystal transfer using ADE. Since hydrogels are composed principally of water, we hypothesized that they are likely to be acoustically transparent. Other materials tested for acoustic transparency include gelatin and cross-linked polyacrylamide gels. All of the tested hydrogels were shown to be completely acoustically Acoustic transparency of hydrogels. Many hydrogels are acoustically transparent to the waveform and frequency used to transfer crystals (or other materials) onto X-ray data-collection micromeshes (11.5 MHz). The intensity of the reflected sound is shown for hydrogels of acrylamide (green), agarose (red) and gelatin (blue). In all hydrogel cases, a concave pedestal was deposited to a height of 4.4 mm in one well of a 384-well polypropylene plate. Water was then added to a height of 6.7 mm. Five acoustic pings were then transmitted through each well using the Echo 550 and the reflected intensities were recorded as a function of time. The five pulses were averaged for each substance and the averaged values were plotted on a single graph; the horizontal axis is the measured reflected intensity (arbitrary units) and the vertical axis is time. In our control (purple), a Thermanox cover slip was placed on an agarose support to show an example of a material that is acoustically semitransparent (see Supplementary Fig. S1). Because the speed of sound in all of these substances is virtually identical to that in water, the vertical axis is displayed as a distance (in millimetres). The expected location of the interface between the hydrogel and the water is indicated. Acoustic transfer of crystals from a support matrix to micromeshes can only occur if the largest reflection is from the air-water interface. In the case of the three classes of hydrogels tested, the observed acoustic reflection from the gel-water interface was zero, indicating that all of these materials are possible candidates for positioning specimens at the acoustic focus point. transparent (Fig. 4). Cross-linked acrylamide (green) demonstrated no visible reflection at the gel-water interface, but did show noticeable attenuation in the reflected intensity at the water-air interface. This may indicate that some scattering or absorption occurred in the body of the gel, although no reflection was visible at the surface. The scattering/absorption from the 2% agarose gel was much smaller (red), but like the acrylamide there was zero reflection from the gel-water interface. The gelatin (blue) showed no attenuation of the surface reflection and no reflection from the gel-liquid interface. Agarose was selected for this study because it is a common laboratory material, it hardens faster than gelatin and it is safer to work with than acrylamide. However, both gelatin and polyacrylamide were found to be suitable supports for protein crystal transfer using acoustic methods (data not shown). Noncrystallographic applications that could benefit from acoustic touch-less ultralow-volume specimen preparation (such as SAXS and electron microscopy) may be incompatible with agarose supports. In cases where the properties of agarose are found to be unsuitable, other hydrogels may offer an acoustically compatible solution. In cases where the objective is not to eject crystals but rather to monitor crystal growth, an acoustically semi-transparent medium such as Thermanox may be suitable. Recently, 1%(w/v) agar was used to fabricate a coupling 'plug' that conducted sound energy from an acoustic transducer to a crystal suspension at the Linear Coherent Light Source (LCLS; Roessler et al., 2014). The sound pulses were used to inject crystal containing droplets into the LCLS at a rate of 60 crystal injections per second, matching the LCLS pulse frequency in order to achieve a 60% 'hit rate' of X-ray pulses that yielded diffraction patterns.
Discussion
Full automation of the high-throughput macromolecular crystal structure determination pipeline would increase productivity in conventional structural biology, as well as enable novel discovery-based solutions to stubborn problems in structural biology (particularly using high-throughput screening of chemical libraries). This goal has been frustrated by the difficulty involved in automating fast transfer of crystals from growth plates onto supports suitable for X-ray data collection. In cases where very high speed is not required, robotic solutions (Viola et al., 2007), laser tweezer-assisted mounting (Wagner et al., 2013) and laser-assisted recovery on thin films (Cipriani et al., 2012) are promising alternatives to manual mounting of individual crystals. For fast serial mounting of crystals of a particular protein, investing time to prepare a CAP allows rapid mounting using acoustic methods. We have demonstrated that acoustic crystal mounting from CAPs will sustain a high rate of 2.33 transfers s À1 . Combined with automated protein production (Banci et al., 2006;Grä slund et al., 2008), crystallization (Bolanos-Garcia & Chayen, 2009) and end-station automation (Snell et al., 2004), this will accelerate the output of crystallization facilities to match the data-collection speeds available at next-generation synchrotrons.
Presently, acoustic transfer technology is an advanced method for small-volume liquid transfer. Compared with conventional methods, the acoustic transfer method does not require high-level hand coordination or dexterity. Automated crystal mounting at speeds of several transfers per second prevents loss of crystal viability owing to desiccation, and allows the crystals to be soaked in crystallo (on a micromesh) with chemical libraries such as chemical fragments, heavy atoms, cryoprotectants etc. Acoustic ejection also eliminates contact between specimens and pins, tips and nozzles, which reduces the risk of cross-contamination with laboratory compounds and contamination by chemicals that leach out of the plastic tubing (McDonald et al., 2008).
Acoustic micro-mounting with no dead volume (and no lost volume per transfer) is particularly advantageous when purified protein is in limited supply. Advances in protein expression and purification have significantly relaxed the sourcematerial bottleneck in crystallography, but stubborn cases with poorly expressing proteins still occur. Acoustic ejection of protein crystals from CAPs saves scarce purified protein resources by ensuring that all or most of the available protein crystals are rapidly dispensed to micromeshes, where each can be individually combined with chemicals in a high-throughput manner. Acoustic transfer also economizes on chemicals, such as fragment libraries, which are difficult to obtain in large quantities. Thus, acoustic transfer from CAPs allows highthroughput screening of chemical libraries even in cases of crystals of poorly expressing proteins.
Acoustic crystal handling accelerates the rate of specimen preparation to match the rate at which specimens might be examined at modern synchrotron X-ray sources. Automation also has other advantages in addition to speed. A fully automated structure-determination pipeline (including crystal handling) allows a researcher with no laboratory access to orchestrate cutting-edge science by linking the capabilities of automated protein production and purification, automated crystal growth and automated crystal handling and data collection. Full automation will also preserve the intact flow of machine-generated metadata for the full project lifecycle. Most importantly, the automation of specimen handling will make available to all researchers the utility of centrally archived chemical libraries (including fragment libraries, heavy atoms, crystal-improving additives and cryoconditions) because the Echo 550 will be located at a central facility so that chemical acquisition costs can be pooled among a community of users.
Using the strategies outlined here, high-throughput screening can be accomplished rapidly and using limited quantities of protein and chemicals. By sequestering crystals into the ejection zone in a concave basin, most of the crystals in the well can be ejected onto micromeshes. Pre-loaded concave agarose pedestals simplify acoustic crystal transfer and increase yields for easy access to serial crystallography techniques such as ligand screening, cryo-search, heavy-atom screening and crystal improvement.
research papers
Personnel for this study were recruited largely through the 2013 spring and summer session and 2014 spring session of the Science Undergraduate Laboratory Internships Program (SULI) and Graduate Research Internship Program (GRIP) supported through the US Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS). Major ongoing financial support for acoustic droplet ejection applications was through the Brookhaven National Laboratory/US Department of Energy, Laboratory Directed Research and Development Grant 11-008 and from the Offices of Biological and Environmental Research and of Basic Energy Sciences of the US Department of Energy, and from the National Center for Research Resources (P41RR012408) and the National Institute of General Medical Sciences (P41GM103473) of the National Institutes of Health. Additional support was provided by a Louis Stokes Alliances for Minority Participation fellowship. Data for this study were measured on beamlines X12C, X25 and X29 of the National Synchrotron Light Source. We thank Jan Kern and Rosalie Tran for kindly supplying photosystem II crystals and supporting our efforts to determine that these specimens are suitable for serial crystallography with acoustic methods. We thank Labcyte Inc., and particularly Joe Olechno, Richard Stearns and Richard Ellson, for their support and guidance. We thank the coeditor and the reviewers of the manuscript for taking the time to help us to address areas that were outside our expertise. Author contributions: ASS designed the experiment and wrote the paper. CMC, DLE, AS, CGR, ET, OC and ASS grew crystals, obtained data and analyzed data. ASS and KJ designed and built the labware. ASS, CGR and RMS trained and supervised student interns. RA and AMO provided the expressed protein. MA, CGR, AMO and ASS designed a related fragment-screening system that supports the current effort. | 8,824.2 | 2015-01-01T00:00:00.000 | [
"Chemistry",
"Biology",
"Materials Science"
] |
Tumor suppressor XAF1 induces apoptosis, inhibits angiogenesis and inhibits tumor growth in hepatocellular carcinoma.
X-linked inhibitor of apoptosis (XIAP)-associated factor 1 (XAF1), a XIAP-binding protein, is a tumor suppressor gene. XAF1 was silent or expressed lowly in most human malignant tumors. However, the role of XAF1 in hepatocellular carcinoma (HCC) remains unknown. In this study, we investigated the effect of XAF1 on tumor growth and angiogenesis in hepatocellular cancer cells. Our results showed that XAF1 expression was lower in HCC cell lines SMMC-7721, Hep G2 and BEL-7404 and liver cancer tissues than that in paired non-cancer liver tissues. Adenovirus-mediated XAF1 expression (Ad5/F35-XAF1) significantly inhibited cell proliferation and induced apoptosis in HCC cells in dose- and time- dependent manners. Infection of Ad5/F35-XAF1 induced cleavage of caspase -3, -8, -9 and PARP in HCC cells. Furthermore, Ad5/F35-XAF1 treatment significantly suppressed tumor growth in a xenograft model of liver cancer cells. Western Blot and immunohistochemistry staining showed that Ad5/F35-XAF1 treatment suppressed expression of vascular endothelial growth factor (VEGF), which is associated with tumor angiogenesis, in cancer cells and xenograft tumor tissues. Moreover, Ad5/F35-XAF1 treatment prolonged the survival of tumor-bearing mice. Our results demonstrate that XAF1 inhibits tumor growth by inducing apoptosis and inhibiting tumor angiogenesis. XAF1 may be a promising target for liver cancer treatment.
INTRODUCTION
Hepatocellular carcinoma (HCC) is involved in multiple gene alterations including tumor suppressor inactivation, oncogene activation and apoptosis-related gene dysregulation [1].Many studies have shown that inhibition of apoptosis plays an important role in tumor growth and drug resistance [2].Inhibitor of Apoptosis (IAP) is identified as a family of endogenous inhibitors of caspases [3,4].IAPs are characterized by highly conserved Baculoviral IAP Repeats (BIR) that inhibit apoptosis and include 8 members [5].X-linked IAP (XIAP) is the most potent member of human IAPs that inhibit the role of caspases [6].XIAP directly binds to caspase-3, -7, -9 and prevents their activities to initiate or execute apoptotic pathways [7].XIAP has been shown to be overexpressed in most human cancer cell lines and cancer tissues including HCC tissues.Its overexpression has been demonstrated to be the independent factor for predicting the poor prognosis of HCC patients after liver transplantation [8].Studies have shown that XIAP antisense nucleic acid and small molecule inhibitors of XIAP induce apoptosis and inhibit tumor growth in HCC cells [9], indicating that targeting inhibition of XIAP may be a new approach for HCC therapy [10].
HCC has been recognized as a hypervascular cancer, but their vasculature type is not uniform.Smallsized and well-differentiated HCC generally show very few tumor vessels, whereas advanced HCC exhibit rich blood vessels [11].Several factors participating in the development of microvasculature have been identified.Vascular endothelial growth factor (VEGF) is well established as one of the key regulators of angiogenesis [12].VEGF activates VEGF-receptor (VEGFR), resulting in triggering network of VEGFR signaling pathways that promote endothelial cell growth, migration, and survival from pre-existing vasculature.VEGF also mediates vessel permeability that has been shown to be associated with malignant effusions and mobilizes endothelial progenitor cells from the bone marrow to distant sites of neovascularization [12].Studies have shown that VEGF are strongly involved in the development of liver tumor neovascularization and the infiltration of cancer cells into the tumor capsule in HCC [13].VEGF overexpression is correlated with the clinicopathological features of HCC [14].The well-established role of VEGF in promoting tumor angiogenesis has led to the development of agents that selectively target VEGF pathway [15].Therefore, the suppression of VEGF expression may provide a novel strategy in the treatment of HCC [11].XIAP-associated factor 1 (XAF1) is identified as a XIAP-binding protein and can directly bind preferentially to XIAP BIR2 and antagonize the anti-caspase activity of XIAP to induce apoptosis [16].XAF1 triggers the re-localization of XIAP from the cytosol to the nucleus, then sequester XIAP.Unlike the overexpression of XIAP in most human cancer tissues, XAF1 is ubiquitously expressed in normal and fetal tissues but weakly expressed or even undetectable in most human cancer cell lines [17] and human cancer tissues including gastric [18], colon [19] and pancreatic cancer [20] .Loss of XAF1 expression is correlated strongly with tumor staging and progression in human cancers [20,21].Loss of heterozygosity of XAF1 gene has been reported in human colorectal cancers [22] and promoter CpG hypermethylation of XAF1 has been found in several human malignant tumors such as gastric [18,23], colon [23], melanoma [24] and urogenital tumor [25][26][27].A recent report showed that a significantly low XAF1 expression in poorly differentiated HCC was related to the resistance to apoptosis [28].
Our previous studies have shown that XAF1 induced apoptosis through intrinsic and extrinsic apoptosis pathways in gastric and colon cancer cells [29,30].We have found that XAF1 induces autophagy by upregulating beclin 1 and inhibiting AKT pathway [31].Recent studies have shown that p53 can suppress the transcription of XAF1 by interacting with a high affinity responsive element within XAF1 promoter in gastrointestinal cancer cells [32].Studies have shown that XAF1 inhibits the migration of endothelial cells [33] and degrades survivin [34].Previous studies have shown that overexpression of survivin is associated with angiogenesis [35,36].Our and other have demonstrated that inhibition of suvivin suppress tumor angiogenesis [37,38].However, whether XAF1 inhibits tumor angiogenesis remains unknown.In this study, we investigated the effects of adenovirusmediated XAF1 expression on liver tumor growth and tumor angiogenesis.We found that XAF1 could induce apoptosis and inhibit VEGF expression, tumor angiogenesis and tumor growth.Therefore, the restoration of XAF1 expression may be a new approach for liver cancer treatment.
The XAF1 is weakly expressed in HCC tissues and HCC cell lines
The mRNA and protein expressions of XAF1 were determined in 3 HCC cancer cell lines SMMC-7721, Bel-7404 and Hep G2, as well as 30 primary HCC cancer and paired non-HCC tissues.The result showed that the mRNA and protein expressions of XAF1 were lower in three HCC cell lines SMMC-7721, BEL-7404 and Hep G2 cancer cells compared non-cancer tissue of liver (Fig. 1A).
We further determined the expression of XAF1 in 30 human liver cancer tissues and paired adjacent noncancer liver tissues.Western blot showed that XAF1 expression was lower in liver cancer tissues than that in the paired non-HCC tissues (Fig. 1B).IHC showed that XAF1 was expressed in non-HCC tissues but not in cancer tissues (Fig. 1C).XAF1 was localized in both cytoplasm and nucleus (Fig. 1C).Quantitative analysis of XAF1 expression showed that 66.7% (20/30) of non-HCC tissues strongly expressed XAF1, whereas only 16.7% (5/30) of liver tissues expressed XAF1 (Fig. 1D) (X 2 =15.43,P< 0.01).The results suggest that HCC tissues weakly expressed XAF1.
Restoration of XAF1 expression inhibits proliferation and induces apoptosis of HCC cells
We determined the effect of restoration of XAF1 expression on proliferation of HCC cells in vitro.We established XAF1 stable transfectants in SMMC-7721 and BEL-7404 cells.Western blot confirmed that the XAF1 stable transfectants overexpressed XAF1 compared to the control transfectants (Fig. 2A).Quantitative analysis showed that the number of cells in the wells of SMMC-7721/XAF1 and BEl-7404/XAF1 stable tranfectants cultured were significantly lower than those in the wells of SMMC-7721/Vector and BEl-7404/Vector stable tranfectants cultured (Fig. 2B-2C).The results suggest that constitutive overexpression of XAF1 inhibits cell proliferation of liver cancer cells.
To determine the effects of transient expression of XAF1, we infected SMMC-7721 cells with Ad5/ F35-XAF1 virus.We found that the mRNA and protein expressions of XAF1 were increased in a dose-dependent manner (Fig. 2D) and a time-dependent manner (Fig. 2E).MTT assay showed that infection of Ad5/F35-XAF1 virus resulted in inhibition of cell proliferation in dose-and time-dependent manners in SMMC-7721 cells, compared to Ad5/F35-Ctrl (Fig. 2F).These results show that the transient expression of XAF1 also inhibits cell proliferation of liver cancer cells.
To determine the mechanisms of XAF1-induced apoptosis, we detected the expressions of apoptosisrelated proteins 48 hours post-infection of Ad5/F35-XAF1 in SMMC-7721 cells by Western blot.Ad5/F35-XAF1 infection consistently induced the cleavage of caspase-9, 8, 3 and PARP and the release of mitochondrial cytochrome c into the cytosol in dose-and time-dependent manners (Fig. 3E-3F).However, Ad5/F35-Ctrl had no such effect (Fig. 3E-3F).The results indicate that Ad5/F35-XAF1 treatment induces apoptosis of HCC cells through activating the
Ad5/F35-XAF1 inhibits tumor angiogenesis by downregulating VEGF expression
Previous results suggest that XAF1 decreased migration and tube formation of mouse endothelial cells [33].VEGF plays a critical role in endothelial cells.We determined the effect of XAF1 on VEGF expression and found that Ad5/F35-XAF1 virus treatment markedly decreased mRNA and protein expression of VEGF in SMMC7721 and Hep3B cells (Fig. 5A-5B).RT-PCR result showed that mRNA expression of VEGF was significantly decreased in the tumor tissues treated with Ad5/F35-XAF1 compared to that in tumor tissues treated with Ad5/F35-Ctrl (Fig. 5C).IHC showed that protein expression of VEGF was much lower in the tumor treated with Ad5/ F35-XAF1 than that in the tumor treated with Ad5/F35-Ctrl (Fig. 5D, left panel).The ratio of VEGF positive staining was significantly lower in the tumor treated with Ad5/F35-XAF1 than that in the tumor treated with Ad5/ F35-Ctrl (0.91% ± 0.17% vs 8.12 % ± 0.74%, P < 0.01, Fig. 5D, right panel).
Ad5/F35-XAF1 treatment prolongs survival time of tumor-bearing mice
We finally assessed the effect of Ad5/F35-XAF1 on the long-term survival of tumor-bearing mice.The mice bearing tumor were treated with Ad5/F35-XAF1 and Ad5/ F35-Ctrl, respectively.The survival time was observed for 4 months.The experimental end point was defined as the time when an entire group of mice died.Death was defined as natural death if tumor burden or tumor size was over 2 cm 3 .All mice died of natural death in Ad5/F35-Ctrl group, and no mouse died in Ad5/F35-XAF1 group by the end time point (110 days).Ad5/F35-XAF1-treated mice survived significantly longer than Ad5/F35-Ctrl-treated mice (P < 0.01).Our results suggest that Ad5/F35-XAF1 can prolong the survival time of tumor-bearing mice (Fig.
6A).
We also evaluated the safety of Ad5/F35 virus treatment in vivo by detecting the pathologic alterations of these four important organs from the mice 4 weeks after the treatment with Ad5/F35 virus.Histology analysis showed that the tissues of heart, liver, lung and kidney in all mice did not exhibit obvious pathologic changes 4 weeks after treatment (Fig. 6B).These results demonstrate the safety of Ad5/F35-XAF1 gene therapy.
DISCUSSION
XAF1 is a tumor suppressor gene identified by two-hybridization in yeast [16].The restoration of XAF1 expression has been shown to induce cell apoptosis in gastric and colorectal cancer cell lines and strengthen the apoptotic effects of chemotherapeutic drugs and TNF Related Apoptosis Inducing Ligand (TRAIL) [29,30].Gene therapy for recombinant adenovirus vector mediated XAF1 significantly suppressed tumor growth in gastric and colon cancer in vitro and in vivo [29][30][31].Qi et al [40] also reported that XAF1 had potent antitumor activity when it was delivered by conditionally replicated adenovirus vector ZD55.In this study, we have shown, for the first time, that the restoration of XAF1 expression inhibited tumor growth and suppressed tumor angiogenesis in HCC both in vitro and in vivo.
Our previous studies have shown that XAF1 is weakly expressed in human gastric, colon and pancreatic cancer tissues [20,23,30].A recent result showed that weak expression of XAF1 was associated with androgen deprivation resistance in prostate cancer [41].The weak expression of XAF1 has been shown to be associated with portal vein tumor thrombi (PVTT), preoperative AFP level, tumor size, and recurrence of liver cancer [42].The low expression of XAF1 was linked to apoptosis resistance of liver cancer [28].Similarly, in this study, we also found that mRNA and protein expression levels of XAF1 were very low or undetectable in three HCC cell lines and HCC tissues compared to those in the adjacent non-cancer tissues.These results suggest that weak expression of XAF1 may play a role in the development of HCC [43].
In this study, we further demonstrated that the restoration of XAF1 could inhibit cell proliferation and induce apoptosis in vitro and in vivo in HCC cells.XAF1 is identified as XIAP binding protein and inhibits the function of XIAP anti-apoptosis.We found that Ad5/F35-XAF1 treatment induced the cleavage of caspase-3,-8,-9 and PARP but also increased the release of cytochrome c.Caspases are major proteins which execute cell apoptosis.Caspase-8 and caspase-9 are initial factors in extrinsic and endogenous apoptotic pathways.Cytochrome c is an important factor involved in mitochondria apoptotic pathway (intrinsic pathway).Our results suggest that Ad5/ F35-XAF1 induces apoptosis of liver cancer cells through both endogenous and exogenous pathways, supporting our previous reports that XAF1 induces apoptosis in gastric and colon cancer cells [29,30].
In this study, one new finding is that XAF1 could inhibit tumor angiogenesis in HCC.HCC has been shown to be a highly vascular tumor, and increased vasculature can lead to tumor rupture [44].Angiogenesis (of new microvessel formation) is essential for the growth and progression of HCC because it enables the delivery of oxygen and nutrients [45].Angiogenesis is regulated by angiogenic factors, such as VEGF and angiopoietins, which can be secreted by some tumor cells [46].VEGF has been demonstrated as a central regulator of the angiogenic process in physiological and pathological conditions [47].VEGF, also known as vascular permeability factor, stimulates the proliferation of endothelial cells through specific tyrosine kinase receptors [47].VEGF is an important factor to evaluate the angiogenetic degree of tumor tissue.Previous results have shown that VEGF is strongly expressed and localized predominantly to cancer cells in HCC tissues, and its expression was strongly correlated with MVD and tumor size [13].Therefore, the inhibition of angiogenesis has become a novel therapeutic choice [11,48,49].In this study, we found that Ad5/F35-XAF1 treatment significantly decreased the expression of VEGF in HCC cell lines and tumor tissues.Furthermore, Ad5/F35-XAF1 treatment significantly inhibited MVD in HCC xenograft tissues.This result suggests that XAF1 suppressed angiogenesis in vivo.A study reported that XAF1 delivered by ZD-55 vector suppressed proliferation of mouse endothelial cells in vitro [33].Our results demonstrate that XAF1 inhibits HCC cancer growth via suppressing VEGF expression and angiogenesis.
In this study, we used a recombinant adenovirus Ad5/F35 vector to mediate XAF1 expression.Ad5/ F35 is a chimeric adenoviral vector which the fiber type 5 is replaced by fiber type 35 in adenovirus type 5 [50,51].The infective efficiency of Ad5/F35 virus is higher than adenovirus type 5 in most human cell lines such as hematopoietic cells, stem cell, lymphocytes and cancer cells [52,53].Although a previous study showed that adenoviruses may has a problem of delivery [54], we found that the infective efficiency of Ad5/F35-XAF1 virus was high in HCC cells.Therefore, Ad5/F35-XAF1 may also be considered as useful tool to study the role of XIAP.Many drugs have been developed to inhibit XIAP.For example, co-treatment of PI-103 or 17-AAG and TRAIL decreased XIAP and enhanced apoptosis [55].Isorhapontigenin downregulates XIAP [56].CDKI-73 decreased XIAP and synergizes with fludarabine for cancer inhibition [57].The specificity of these drugs may be further confirmed by experiments with Ad/F35-XAF1 in the future.
In summary, our study demonstrates that the restoration of XAF1 expression induces tumor cell apoptosis and inhibits tumor angiogenesis.The XAF1 may be a promising candidate for HCC gene therapy.
Cell lines and tissue samples
Human HCC cell lines SMMC-7721, BEL-7404, BEL-7402 (Shanghai Institutes for Biological Sciences, Shanghai, China), HepG2, Hep3B and human embryonic kidney cells (HEK293T) (ATCC, Manassas, USA) were maintained in DMEM medium (Gibco, CA, USA) containing 10% fetal bovine serum(FBS), (Gibco, CA, USA), 100U/ml penicillin and 100µg/ml streptomycin.All cell lines were maintained at 37°C and 5%CO 2 .The 293T cell line was used for the construction and amplification of Ad5/F35 vectors.Paired tumor and normal liver tissues were obtained from 30 patients who underwent surgical treatment in Ruijin Hospital.All cell lines were tested for mycoplasma by a PCR method (Stratagene) and all cell lines were mycoplasma negative.Tissues were snap frozen in liquid nitrogen and stored in -80°C.The frozen sections were examined to ensure that tumor specimens contained more than 70% malignant cells and normal specimens were free of tumor before proceeding to RNA and protein extraction.The study was approved by the Ethics Committee of Ruijin Hospital.
Construction and infection of recombinant adenovirus
A 740bp fragment of the XAF1 CDS was cut with EcoR I and BamH I and then subcloned into adenovirus vector p-shuttle plasmid PDC316 to generate PDC316-XAF1.PDC316-XAF1 plasmid was co-transfected with framing plasmid pBHG-fiber5/F35 into 293T cells.Recombinant adenovirus Ad5/F35-XAF1 was generated by screening and purification according to the method described previously [31].The control adenovirus Ad5/ F35-Control (Ctrl) was also generated using similar methods as described above.HCC cell lines SMMC-7721, BEL-7404, BEL-7402, Hep G2 and Hep 3B were infected with these three types of recombinant virus at different multiplicity of infection (MOI) and at different time points, respectively.
Cell proliferation assay
Cancer cells were seeded into 96-well plates at 1×10 4 cells/per well.After 24 hours, cells were infected with Ad5/F35-XAF1, Ad5/F35-Ctrl at different MOIs (10, 50 and 100) and at different time intervals (24, 48 and 72 hours).Non-infected cells were the control group.MTT (1 mg/mL) (Sigma, St. Louis, Mo) was added to the cells and incubated for additional 4 hours at 37°C.The supernatant fluid was removed, and 100 µL of dimethylsulfoxide (DMSO) (Sigma) was added to each well.The absorbance (OD value) was measured using a micro enzyme-linked immunoabsorbent assay reader (Thermo Scientific, Waltham, MA) at the wavelength of 570 nanometers (nm).The results were presented as cell viability.The inhibition rate of cell proliferation was calculated according to the following equation: Inhibition rate of cells = (1-OD Ad5/F35 /OD control ) ×100%.
Apoptosis assay
HCC cells were seeded in 12-well plates at a density of 1×10 5 cells/well and infected with different MOIs of adenovirus for 48 hours or with different time points (for 12, 24, 48 and 72 hours).Hep G2, BEL-7402 and SMMC-7721 cells were infected with Ad5F/35 virus (MOI 100) alone or combined with 5-Fluorouracil (5-FU) at 25 µg/mL for 48 hours.Cell apoptosis was determined by Flow cytometry (FCM) using Annexin V-FITC/PI double staining (BD Bioscience, San Jose, CA).Apoptosis was also determined by a terminal deoxynucleotidyl transferase biotin-dUTP nick end labeling (TUNEL) (Roche, Mannheim, Germany) assay according to the manufacturer's instructions.The xenograft tumor tissue sections from nude mice were fixed into 10% formalin.After deparaffinized and rehydrated, the tissue slides were added into the TDT enzyme and label solution (1:9) for an hour.Then, the tumor sections were incubated with POD (anti-fluorescein antibody).Staining was visualized using diaminobenzidine.Cells with brown nuclei staining were defined as apoptotic cells by light microscopy.The percentage of apoptotic cells was assessed in 5 randomly selected fields viewed at 400 × magnification.The apoptotic index (AI) was calculated as number of apoptotic cells/total number of nucleated cells ×100%.
Adenovirus Mediated XAF1 Gene Therapy in Xenograft Mice Model of HCC
Five-to six-weeks old female BALB/c nude mice (Special Pathogen Free, SPF) were bred in the Animal Experimental Centre of Shanghai Institutes for Biological Sciences (Shanghai, China).SMMC-7721 cells (1×10 7 SMMC-7721) were injected subcutaneously (s.c.) into the right flanks of mice.When the size of tumor reached 100 -150 mm 3 approximately, mice were randomly assigned to experimental group in which Ad5/F35-XAF1 were injected into tumor masses at 3 sites with 1×10 9 plaque forming units (PFU) and to control group that received injection of Ad5/F35-Ctrl into tumors.Each group had 5 mice.The virus was injected into tumor every other day with a total of seven times.The experiment was repeated 3 times.Tumor size was measured using a caliper every three days after injection until experimental end time points.Tumor volume (V) was calculated according to the following formula: V (mm 3 ) =1/2 ab 2 (a: relatively shorter diameter, b: relatively longer diameter).Animals were euthanized 28 days after treatment, and their tumors were weighed and harvested for histology analysis, immunohistochemistry (IHC) and Western blot analysis.To evaluate the safety of Ad5/F35 virus treatment, four important organs, heart, liver, lung and kidney, were harvested from the mice injected with Ad5/F35 virus.To investigate the survival time of the xenografted mice treated with Ad5/F35-XAF1, other two groups of xenografted mice (5 mice/group) were also treated with Ad5/F35-Ctrl and Ad5/F35-XAF1 respectively as the same methods above.The end point was until an entire group of mice died.
XAF1 and VEGF staining were recorded as the ratio of positively stained cells to all tumor cells in five different areas at × 400 magnification.MVD was evaluated according to the method described previously [39].MVD was the average of the vessel counts (CD31 positive staining) obtained in the three sections.Areas of the highest neovascularization were chosen, and microvessel counting was performed at ×200 magnification in three chosen fields.Any immunoreactive endothelial cell or endothelial cell cluster that had been distinctly separated from adjacent microvessels was considered a single countable vessel.The results regarding angiogenesis in each tumor were expressed as the absolute number of vessels/0.74mm 2 (× 200 field).In all assays, matched isotype control antibodies were used and found to be www.impactjournals.com/oncotargetunreactive in all cases.
Statistical analysis
Data are presented as the means ± SD.The significance of the difference between groups was evaluated with the Student's t-test or one way variant analysis (ANOVA) by using SPSS15.0software.P < 0.05 was considered significant.The chi-square test was used to analyze the difference in expression of XAF1 in human HCC samples.The Kaplan-Meier method was used to analyze survival time of tumor-bearing mice.
Figure 1 :
Figure 1: Expression of XAF1 in HCC cells and HCC tissues.(A) The mRNA and protein expression of XAF1 in three human HCC cell lines and human non-cancer liver tissue detected by RT-PCR and Western Blot, respectively.(B) The expression of XAF1 was determined in human liver cancer tissues by Western Blot.T: liver cancer tissue; N: paired non-cancer liver tissue.β-actin was internal control.(C) XAF1 was expressed in the paired non-HCC tissues but not in HCC tissues.The HCC tissues and paired non-HCC tissues were suffered from H&E staining (upper lane)and XAF1 immunostaining (bottom lane) (magnification × 400).Representative images were shown.(D) Low expression of XAF1 in HCC tissues.The presented data are XAF1 positive expressing cells from 30 human HCC cases and non-cancer liver tissues.p < 0.004.
Figure 2 :
Figure 2: Overexpression of XAF1 inhibited cell proliferation of HCC cells.(A) The overexpression of XAF1 protein in stable SMMC-7721/XAF1 transfectants and BEL-7404/XAF1 transfectants compared to the stable control transfectants detected by Western Blot.(B-C) Overexpression of XAF1 inhibited cell proliferation in stable SMMC-7721/XAF1 transfectants (B) and BEL-7404/ XAF1 transfectants (C).The stable transfectants were cultured and counted in indicated time points.Data represent the means ± SD of three independent experiments.*p< 0.01, compared to control stable transfectants.(D-E) Ad5/F35-XAF1 virus infection increased XAF1 expression in HCC cells.SMMC-7721 cells were infected with Ad5/F35-XAF1 virus at indicated MOI for 48 hours.The mRNA expression of XAF1 was detected by RT-PCR.(F) Ad5/F35-XAF1 virus infection inhibited cell proliferation.SMMC-7721 cells were infected with Ad5/F35-XAF1 and control virus Ad5/F35-Ctrl at indicated MOI for 24, 48 and 72 hours.Cell proliferation was determined by MTT assays.The data are means ± SD of three independent experiments.
Figure 4 :
Figure 4: Ad5/F35-XAF1 inhibits tumor growth in vivo.(A) SMMC-7721 cells were subcutaneously injected into the right back of female nude mice.When growing to approximately 100 mm 3 -150 mm 3 , tumor received the injection of Ad5/F35-XAF1 and Ad5/F35-Ctrl virus at 1×10 9 PFU/ at 3 sites for 5 days.Photos were taken from representative mice 4 weeks after treatment.(B) Tumor volume presented is means ± SD of five mice from (A). (C) Tumor weight was measured 4 weeks after treatment.The data presented are means ± SD of five tumors each group.*p < 0.05, compared to the Ad5/F35-Ctrl group.(D-E) Ad5/F35-XAF1 treatment increased XAF1expression and induces apoptosis in vivo.Tissue sections were suffered to XAF1 immunostaining (D) and TUNEL assay (E).(Original magnification × 400).Quantification of XAF1 expression and apoptotic index were described in "Materials and Methods."Data presented are means ± SD of five mice.*p < 0.05, compared to the Ad5/F35-Ctrl group.
Figure 5 :
Figure 5: Ad5/F35-XAF1 inhibits VEGF expression and tumor angiogenesis.(A-B) Ad5/F35-XAF1 inhibited VEGF expression in HCC cells.SMMC-7721 cells and Hep 3B cells were treated with Ad5/F35-XAF1 for 48 hours.The mRNA (A) and protein (B) expression of VEGF was determined by RT-PCR and Western Blot, respectively.(C) Ad5/F35-XAF1 inhibited mRNA expression of VEGF in xenografted tumors determined by RT-PCR.(D) Ad5/F35-XAF1 inhibited protein expression of VEGF in xenografted tumors determined by IHC.The data presented are ± SD of five mice.*p < 0.05, compared to the Ad5/F35-Ctrl group.(E).Ad5/F35-XAF1 tumor angiogenesis in vivo.Tumor angiogenesis was assessed by IHC with CD31 antibody on sections of tumors from mice treated with Ad5/ F35-XAF1 or Ad5/F35-Ctrl.(original magnification × 400).Quantification of angiogenesis was described in "Materials and Methods."The MVD was the average of the vessel counts obtained in the five sections of each group.Data presented are means ± SD of five mice.*p < 0.05, compared to the Ad5/F35-Ctrl group. | 5,637.8 | 2014-06-18T00:00:00.000 | [
"Biology",
"Medicine"
] |
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers. We introduce HotpotQA, a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems’ ability to extract relevant facts and perform necessary comparison. We show that HotpotQA is challenging for the latest QA systems, and the supporting facts enable models to improve performance and make explainable predictions.
Introduction
The ability to perform reasoning and inference over natural language is an important aspect of intelligence. The task of question answering (QA) provides a quantifiable and objective way to test the reasoning ability of intelligent systems. To this end, a few large-scale QA datasets have been proposed, which sparked significant progress in this direction. However, existing datasets have limitations that hinder further advancements of machine reasoning over natural language, especially in testing QA systems' ability to perform multi-hop reasoning, where the system has to reason with information taken from more than one document to arrive at the answer. [5] The band was active from 1987 to 1990.
[6] Frontman Andrew Wood's personality and compositions helped to catapult the group to the top of the burgeoning late 1980s/early 1990s Seattle music scene.
[7] Wood died only days before the scheduled release of the band's debut album, "Apple", thus ending the group's hopes of success.
[8] The album was finally released a few months later. Q: What was the former band of the member of Mother Love Bone who died just before the release of "Apple"? A: Malfunkshun Supporting facts: 1, 2, 4, 6, 7 Figure 1: An example of the multi-hop questions in HOTPOTQA. We also highlight the supporting facts in blue italics, which are also part of the dataset.
First, some datasets mainly focus on testing the ability of reasoning within a single paragraph or document, or single-hop reasoning. For example, in SQuAD (Rajpurkar et al., 2016) questions are designed to be answered given a single paragraph as the context, and most of the questions can in fact be answered by matching the question with a single sentence in that paragraph. As a result, it has fallen short at testing systems' ability to reason over a larger context. TriviaQA (Joshi et al., 2017) and SearchQA (Dunn et al., 2017) create a more challenging setting by using information retrieval to collect multiple documents to form the context given existing question-answer pairs. Nevertheless, most of the questions can be answered by matching the question with a few nearby sentences in one single paragraph, which is limited as it does not require more complex reasoning (e.g., over multiple paragraphs).
Second, existing datasets that target multi-hop reasoning, such as QAngaroo (Welbl et al., 2018) and COMPLEXWEBQUESTIONS (Talmor and Berant, 2018), are constructed using existing knowledge bases (KBs). As a result, these datasets are constrained by the schema of the KBs they use, and therefore the diversity of questions and answers is inherently limited.
Third, all of the above datasets only provide distant supervision; i.e., the systems only know what the answer is, but do not know what supporting facts lead to it. This makes it difficult for models to learn about the underlying reasoning process, as well as to make explainable predictions.
To address the above challenges, we aim at creating a QA dataset that requires reasoning over multiple documents, and does so in natural language, without constraining itself to an existing knowledge base or knowledge schema. We also want it to provide the system with strong supervision about what text the answer is actually derived from, to help guide systems to perform meaningful and explainable reasoning.
We present HOTPOTQA 1 , a large-scale dataset that satisfies these desiderata. HOTPOTQA is collected by crowdsourcing based on Wikipedia articles, where crowd workers are shown multiple supporting context documents and asked explicitly to come up with questions requiring reasoning about all of the documents. This ensures it covers multi-hop questions that are more natural, and are not designed with any pre-existing knowledge base schema in mind. Moreover, we also ask the crowd workers to provide the supporting facts they use to answer the question, which we also provide as part of the dataset (see Figure 1 for an example). We have carefully designed a data collection pipeline for HOTPOTQA, since the collection of high-quality multi-hop questions is nontrivial. We hope that this pipeline also sheds light on future work in this direction. Finally, we also collected a novel type of questions-comparison questions-as part of HOTPOTQA, in which we require systems to compare two entities on some shared properties to test their understanding of both language and common concepts such as numerical magnitude. We make HOTPOTQA publicly available at https://HotpotQA.github.io.
1 The name comes from the first three authors' arriving at the main idea during a discussion at a hot pot restaurant.
Data Collection
The main goal of our work is to collect a diverse and explainable question answering dataset that requires multi-hop reasoning. One way to do so is to define reasoning chains based on a knowledge base (Welbl et al., 2018;Talmor and Berant, 2018). However, the resulting datasets are limited by the incompleteness of entity relations and the lack of diversity in the question types. Instead, in this work, we focus on text-based question answering in order to diversify the questions and answers. The overall setting is that given some context paragraphs (e.g., a few paragraphs, or the entire Web) and a question, a QA system answers the question by extracting a span of text from the context, similar to Rajpurkar et al. (2016). We additionally ensure that it is necessary to perform multi-hop reasoning to correctly answer the question.
It is non-trivial to collect text-based multi-hop questions. In our pilot studies, we found that simply giving an arbitrary set of paragraphs to crowd workers is counterproductive, because for most paragraph sets, it is difficult to ask a meaningful multi-hop question. To address this challenge, we carefully design a pipeline to collect text-based multi-hop questions. Below, we will highlight the key design choices in our pipeline.
Building a Wikipedia Hyperlink Graph. We use the entire English Wikipedia dump as our corpus. 2 In this corpus, we make two observations: (1) hyper-links in the Wikipedia articles often naturally entail a relation between two (already disambiguated) entities in the context, which could potentially be used to facilitate multi-hop reasoning; (2) the first paragraph of each article often contains much information that could be queried in a meaningful way. Based on these observations, we extract all the hyperlinks from the first paragraphs of all Wikipedia articles. With these hyperlinks, we build a directed graph G, where each edge (a, b) indicates there is a hyperlink from the first paragraph of article a to article b.
Generating Candidate Paragraph Pairs. To generate meaningful pairs of paragraphs for multihop question answering with G, we start by considering an example question "when was the singer and songwriter of Radiohead born?" To answer this question, one would need to first reason that the "singer and songwriter of Radiohead" is "Thom Yorke", and then figure out his birthday in the text. We call "Thom Yorke" a bridge entity in this example. Given an edge (a, b) in the hyperlink graph G, the entity of b can usually be viewed as a bridge entity that connects a and b. As we observe articles b usually determine the theme of the shared context between a and b, but not all articles b are suitable for collecting multihop questions. For example, entities like countries are frequently referred to in Wikipedia, but don't necessarily have much in common with all incoming links. It is also difficult, for instance, for the crowd workers to ask meaningful multihop questions about highly technical entities like the IPv4 protocol. To alleviate this issue, we constrain the bridge entities to a set of manually curated pages in Wikipedia (see Appendix A). After curating a set of pages B, we create candidate paragraph pairs by sampling edges (a, b) from the hyperlink graph such that b ∈ B.
Comparison Questions. In addition to questions collected using bridge entities, we also collect another type of multi-hop questionscomparison questions. The main idea is that comparing two entities from the same category usually results in interesting multi-hop questions, e.g., "Who has played for more NBA teams, Michael Jordan or Kobe Bryant?" To facilitate collecting this type of question, we manually curate 42 lists of similar entities (denoted as L) from Wikipedia. 3 To generate candidate paragraph pairs, we randomly sample two paragraphs from the same list and present them to the crowd worker.
To increase the diversity of multi-hop questions, we also introduce a subset of yes/no questions in comparison questions. This complements the original scope of comparison questions by offering new ways to require systems to reason over both paragraphs. For example, consider the entities Iron Maiden (from the UK) and AC/DC (from Australia). Questions like "Is Iron Maiden or AC/DC from the UK?" are not ideal, because one would deduce the answer is "Iron Maiden" even if one only had access to that article. With yes/no questions, one may ask "Are Iron Maiden and AC/DC from the same country?", which re-Algorithm 1 Overall data collection procedure Input: question type ratio r1 = 0.75, yes/no ratio r2 = 0.5 while not finished do if random() < r1 then Uniformly sample an entity b ∈ B Uniformly sample an edge (a, b) Workers ask a question about paragraphs a and b else Sample a list from L, with probabilities weighted by list sizes Uniformly sample two entities (a, b) from the list if random() < r2 then Workers ask a yes/no question to compare a and b else Workers ask a question with a span answer to compare a and b end if end if Workers provide the supporting facts end while quires reasoning over both paragraphs.
To the best of our knowledge, text-based comparison questions are a novel type of questions that have not been considered by previous datasets. More importantly, answering these questions usually requires arithmetic comparison, such as comparing ages given birth dates, which presents a new challenge for future model development.
Collecting Supporting Facts. To enhance the explainability of question answering systems, we want them to output a set of supporting facts necessary to arrive at the answer, when the answer is generated. To this end, we also collect the sentences that determine the answers from crowd workers. These supporting facts can serve as strong supervision for what sentences to pay attention to. Moreover, we can now test the explainability of a model by comparing the predicted supporting facts to the ground truth ones.
The overall procedure of data collection is illustrated in Algorithm 1. their questions into the train-easy set if an overwhelming percentage in the sample only required reasoning over one of the paragraphs. We sampled these turkers because they contributed more than 70% of our data. This train-easy set contains 18,089 mostly single-hop examples.
We implemented a question answering model based on the current state-of-the-art architectures, which we discuss in detail in Section 5.1. Based on this model, we performed a three-fold cross validation on the remaining multi-hop examples.
Among these examples, the models were able to correctly answer 60% of the questions with high confidence (determined by thresholding the model loss). These correctly-answered questions (56,814 in total, 60% of the multi-hop examples) are split out and marked as the train-medium subset, which will also be used as part of our training set.
After splitting out train-easy and train-medium, we are left with hard examples. As our ultimate goal is to solve multi-hop question answering, we focus on questions that the latest modeling techniques are not able to answer. Thus we constrain our dev and test sets to be hard examples. Specifically, we randomly divide the hard examples into four subsets, train-hard, dev, test-distractor, and test-fullwiki. Statistics about the data split can be found in Table 1. In Section 5, we will show that combining train-easy, train-medium, and trainhard to train models yields the best performance, so we use the combined set as our default training set. The two test sets test-distractor and testfullwiki are used in two different benchmark settings, which we introduce next.
We create two benchmark settings. In the first setting, to challenge the model to find the true supporting facts in the presence of noise, for each example we employ bigram tf-idf (Chen et al., 2017) to retrieve 8 paragraphs from Wikipedia as distractors, using the question as the query. We mix them with the 2 gold paragraphs (the ones used to collect the question and answer) to construct the distractor setting. The 2 gold paragraphs and the 8 distractors are shuffled before they are fed to the model. In the second setting, we fully test the model's ability to locate relevant facts as well as reasoning about them by requiring it to answer the question given the first paragraphs of all Wikipedia articles without the gold paragraphs specified. This full wiki setting truly tests the performance of the systems' ability at multi-hop reasoning in the wild. 5 The two settings present different levels of difficulty, and would require techniques ranging from reading comprehension to information retrieval. As shown in Table 1, we use separate test sets for the two settings to avoid leaking information, because the gold paragraphs are available to a model in the distractor setting, but should not be accessible in the full wiki setting.
We also try to understand the model's good performance on the train-medium split. Manual analysis shows that the ratio of multi-hop questions in train-medium is similar to that of the hard examples (93.3% in train-medium vs. 92.0% in dev), but one of the question types appears more frequently in train-medium compared to the hard splits (Type II: 32.0% in train-medium vs. 15.0% in dev, see Section 4 for the definition of Type II questions). These observations demonstrate that given enough training data, existing neural architectures can be trained to answer certain types and certain subsets of the multi-hop questions. However, train-medium remains challenging when not just the gold paragraphs are present-we show in Appendix C that the retrieval problem on these examples are as difficult as that on their hard cousins.
Dataset Analysis
In this section, we analyze the types of questions, types of answers, and types of multi-hop reasoning covered in the dataset.
Question Types. We heuristically identified question types for each collected question. To identify the question type, we first locate the central question word (CQW) in the question. Since HOTPOTQA contains comparison questions and yes/no questions, we consider as question words WH-words, copulas ("is", "are"), and auxiliary verbs ("does", "did"). Because questions often involve relative clauses beginning with WH-words, we define the CQW as the first question word in the question if it can be found in the first three tokens, or the last question word otherwise. Then, we determine question type by extracting words up to 2 tokens away to the right of the CQW, along with the token to the left if it is one of a few common prepositions (e.g., in the cases of "in which" and "by whom").
We visualize the distribution of question types in Figure 2, and label the ones shared among more than 250 questions. As is shown, our dataset covers a diverse variety of questions centered around entities, locations, events, dates, and numbers, as well as yes/no questions directed at comparing two entities ("Are both A and B ...?"), to name a few.
Answer Types. We further sample 100 examples from the dataset, and present the types of answers in Table 2. As can be seen, HOTPOTQA covers a broad range of answer types, which matches our initial analysis of question types. We find that a majority of the questions are about entities in the articles (68%), and a non-negligible amount of questions also ask about various properties like date (9%) and other descriptive properties such as numbers (8%) and adjectives (4%). Multi-hop Reasoning Types. We also sampled 100 examples from the dev and test sets and manually classified the types of reasoning required to answer each question. Besides comparing two entities, there are three main types of multi-hop reasoning required to answer these questions, which we show in Table 3 accompanied with examples.
Most of the questions require at least one supporting fact from each paragraph to answer. A majority of sampled questions (42%) require chain reasoning (Type I in the table), where the reader must first identify a bridge entity before the second hop can be answered by filling in the bridge. One strategy to answer these questions would be to decompose them into consecutive single-hop questions. The bridge entity could also be used implicitly to help infer properties of other entities related to it. In some questions (Type III), the entity in question shares certain properties with a bridge entity (e.g., they are collocated), and we can infer its properties through the bridge entity. Another type of question involves locating the answer entity by satisfying multiple properties simultaneously (Type II). Here, to answer the question, one could find the set of all entities that satisfy each of the properties mentioned, and take an intersection to arrive at the final answer. Questions comparing two entities (Comparison) also require the system to understand the properties in question about the two entities (e.g., nationality), and sometimes require arithmetic such as counting (as seen in the table) or comparing numerical values ("Who is older, A or B?"). Finally, we find that sometimes the questions require more than two supporting facts to answer (Other). In our analysis, we also find that for all of the examples shown in the table, the supporting facts provided by the Turkers match exactly with the limited context shown here, showing that the supporting facts collected are of high quality. Aside from the reasoning types mentioned above, we also estimate that about 6% of the sampled questions can be answered with one of the two paragraphs, and 2% of them unanswerable. We also randomly sampled 100 examples from train-medium and train-hard combined, and the proportions of reasoning types are: Type I 38%, Type II 29%, Comparison 20%, Other 7%, Type III 2%, single-hop 2%, and unanswerable 2%.
Model Architecture and Training
To test the performance of leading QA systems on our data, we reimplemented the architecture described in Clark and Gardner (2017) as our baseline model. We note that our implementation without weight averaging achieves performance very close to what the authors reported on SQuAD (about 1 point worse in F 1 ). Our implemented model subsumes the latest techni-cal advances on question answering, including character-level models, self-attention (Wang et al., 2017), and bi-attention (Seo et al., 2017). Combining these three key components is becoming standard practice, and various state-of-the-art or competitive architectures (Liu et al., 2018;Clark and Gardner, 2017;Wang et al., 2017;Seo et al., 2017;Pan et al., 2017;Salant and Berant, 2018;Xiong et al., 2018) on SQuAD can be viewed as similar to our implemented model. To accommodate yes/no questions, we also add a 3-way classifier after the last recurrent layer to produce the probabilities of "yes", "no", and span-based answers. During decoding, we first use the 3-way output to determine whether the answer is "yes", "no", or a text span. If it is a text span, we further search for the most probable span.
Supporting Facts as Strong Supervision.
To evaluate the baseline model's performance in predicting explainable supporting facts, as well as how much they improve QA performance, we additionally design a component to incorporate such strong supervision into our model. For each sentence, we concatenate the output of the selfattention layer at the first and last positions, and use a binary linear classifier to predict the probability that the current sentence is a supporting fact. We minimize a binary cross entropy loss for this classifier. This objective is jointly optimized with the normal question answering objective in a multi-task learning setting, and they share the same low-level representations. With this classifier, the model can also be evaluated on the task of supporting fact prediction to gauge its explainability. Our overall architecture is illustrated in Figure 3. Though it is possible to build a pipeline system, in this work we focus on an end-to-end one, which is easier to tune and faster to train.
Results
We evaluate our model in the two benchmark settings. In the full wiki setting, to enable efficient tfidf retrieval among 5,000,000+ wiki paragraphs, given a question we first return a candidate pool of at most 5,000 paragraphs using an inverted-indexbased filtering strategy 6 and then select the top 10 paragraphs in the pool as the final candidates using bigram tf-idf. 7 Retrieval performance is shown in 6 See Appendix C for details. 7 We choose the number of final candidates as 10 to stay consistent with the distractor setting where candidates are 2 Table 5. After retrieving these 10 paragraphs, we then use the model trained in the distractor setting to evaluate its performance on these final candidate paragraphs.
Following previous work (Rajpurkar et al., 2016), we use exact match (EM) and F 1 as two evaluation metrics. To assess the explainability of the models, we further introduce two sets of metrics involving the supporting facts. The first set focuses on evaluating the supporting facts directly, namely EM and F 1 on the set of supporting fact sentences as compared to the gold set. The second set features joint metrics that combine the evaluation of answer spans and supporting facts as follows. For each example, given its precision and recall on the answer span (P (ans) , R (ans) ) and the supporting facts (P (sup) , R (sup) ), respectively, we calculate joint F 1 as Joint EM is 1 only if both tasks achieve an exact match and otherwise 0. Intuitively, these metrics penalize systems that perform poorly on either task. All metrics are evaluated example-byexample, and then averaged over examples in the evaluation set.
The performance of our model on the benchmark settings is reported in Table 4, where all numbers are obtained with strong supervision over supporting facts. From the distractor setting to the full wiki setting, expanding the scope of the context increases the difficulty of question answering. The performance in the full wiki setting is substantially lower, which poses a challenge to existing techniques on retrieval-based question answering. Overall, model performance in all settings is significantly lower than human performance as shown in Section 5.3, which indicates that more technical advancements are needed in future work.
We also investigate the explainability of our model by measuring supporting fact prediction performance. Our model achieves 60+ supporting fact prediction F 1 and ∼40 joint F 1 , which indicates there is room for further improvement in terms of explainability.
In Table 6, we break down the performance on different question types. In the distractor setting, comparison questions have lower F 1 scores gold paragraphs plus 8 distractors. Table 6: Performance breakdown over different question types on the dev set in the distractor setting. "Br" denotes questions collected using bridge entities, and "Cp" denotes comparison questions.
than questions involving bridge entities (as defined in Section 2), which indicates that better modeling this novel question type might need better neural architectures. In the full wiki setting, the performance of bridge entity questions drops significantly while that of comparison questions decreases only marginally. This is because both entities usually appear in the comparison questions, and thus reduces the difficulty of retrieval. Combined with the retrieval performance in Table 5, we believe that the deterioration in the full wiki setting in Table 4 is largely due to the difficulty of retrieving both entities. We perform an ablation study in the distractor setting, and report the results in Table 7. Both selfattention and character-level models contribute notably to the final performance, which is consistent with prior work. This means that techniques targeted at single-hop QA are still somewhat effective in our setting. Moreover, removing strong supervision over supporting facts decreases performance, which demonstrates the effectiveness of our approach and the usefulness of the supporting facts. We establish an estimate of the upper bound of strong supervision by only considering the supporting facts as the oracle context input to our Table 7: Ablation study of question answering performance on the dev set in the distractor setting. "-sup fact" means removing strong supervision over supporting facts from our model. "-train-easy" and "-trainmedium" means discarding the according data splits from training. "gold only" and "sup fact only" refer to using the gold paragraphs or the supporting facts as the only context input to the model. model, which achieves a 10+ F 1 improvement over not using the supporting facts. Compared with the gain of strong supervision in our model (∼2 points in F 1 ), our proposed method of incorporating supporting facts supervision is most likely suboptimal, and we leave the challenge of better modeling to future work. At last, we show that combining all data splits (train-easy, train-medium, and train-hard) yields the best performance, which is adopted as the default setting.
Establishing Human Performance
To establish human performance on our dataset, we randomly sampled 1,000 examples from the dev and test sets, and had at least three additional Turkers provide answers and supporting facts for these examples. As a baseline, we treat the original Turker during data collection as the prediction, and the newly collected answers and supporting facts as references, to evaluate human performance. For each example, we choose the answer and supporting fact reference that maximize the F 1 score to report the final metrics to reduce the effect of ambiguity (Rajpurkar et al., 2016). As can be seen in Table 8, the original crowd worker achieves very high performance in both finding supporting facts, and answering the question correctly. If the baseline model were provided with the correct supporting paragraphs to begin with, it achieves parity with the crowd worker in finding supporting facts, but still falls short at finding the actual answer. When distractor paragraphs are present, the performance gap between the baseline model and the crowd worker on both tasks is enlarged to ∼30% for both EM and F 1 .
We further establish the upper bound of human performance in HOTPOTQA, by taking the maximum EM and F 1 for each example. Here, we use each Turker's answer in turn as the prediction, and evaluate it against all other workers' answers. As can be seen in Table 8, most of the metrics are close to 100%, illustrating that on most examples, at least a subset of Turkers agree with each other, showing high inter-annotator agreement. We also note that crowd workers agree less on supporting facts, which could reflect that this task is inherently more subjective than answering the question.
Related Work
Various recently-proposed large-scale QA datasets can be categorized in four categories.
Single-document datasets. SQuAD (Rajpurkar et al., 2016(Rajpurkar et al., , 2018 questions that are relatively simple because they usually require no more than one sentence in the paragraph to answer. Multi-document datasets. TriviaQA (Joshi et al., 2017) and SearchQA (Dunn et al., 2017) contain question answer pairs that are accompanied with more than one document as the context. This further challenges QA systems' ability to accommodate longer contexts. However, since the supporting documents are collected after the question answer pairs with information retrieval, the questions are not guaranteed to involve interesting reasoning between multiple documents.
KB-based multi-hop datasets. Recent datasets like QAngaroo (Welbl et al., 2018) and COM-PLEXWEBQUESTIONS (Talmor and Berant, 2018) explore different approaches of using pre-existing knowledge bases (KB) with pre-defined logic rules to generate valid QA pairs, to test QA models' capability of performing multi-hop reasoning. The diversity of questions and answers is largely limited by the fixed KB schemas or logical forms. Furthermore, some of the questions might be answerable by one text sentence due to the incompleteness of KBs.
Free-form answer-generation datasets. MS MARCO (Nguyen et al., 2016) contains 100k user queries from Bing Search with human generated answers. Systems generate free-form answers and are evaluated by automatic metrics such as ROUGE-L and BLEU-1. However, the reliability of these metrics is questionable because they have been shown to correlate poorly with human judgement (Novikova et al., 2017).
Conclusions
We present HOTPOTQA, a large-scale question answering dataset aimed at facilitating the development of QA systems capable of performing explainable, multi-hop reasoning over diverse natural language. We also offer a new type of factoid comparison questions to test systems' ability to extract and compare various entity properties in text.
A.1 Data Preprocessing
We downloaded the dump of English Wikipedia of October 1, 2017, and extracted text and hyperlinks with WikiExtractor. 8 We use Stanford CoreNLP 3.8.0 (Manning et al., 2014) for word and sentence tokenization. We use the resulting sentence boundaries for collection of supporting facts, and use token boundaries to check whether Turkers are providing answers that cover spans of entire tokens to avoid nonsensical partial-word answers.
A.2 Further Data Collection Details
Details on Curating Wikipedia Pages. To make sure the sampled candidate paragraph pairs are intuitive for crowd workers to ask high-quality multi-hop questions about, we manually curate 591 categories from the lists of popular pages by WikiProject. 9 For each category, we sample (a, b) pairs from the graph G where b is in the considered category, and manually check whether a multi-hop question can be asked given the pair (a, b). Those categories with a high probability of permitting multi-hop questions are selected.
Bonus Structures. To incentivize crowd workers to produce higher-quality data more efficiently, we follow Yang et al. (2018), and employ bonus structures. We mix two settings in our data collection process. In the first setting, we reward the top (in terms of numbers of examples) workers every 200 examples. In the second setting, the workers get bonuses based on their productivity (measured as the number of examples per hour).
A.3 Crowd Worker Interface
Our crowd worker interface is based on ParlAI (Miller et al., 2017), an open-source project that facilitates the development of dialog systems and data collection with a dialog interface. We adapt ParlAI for collecting question answer pairs by converting the collection workflow into a systemoriented dialog. This allows us to have more control over the turkers input, as well as provide turkers with in-the-loop feedbacks or helpful hints to help Turkers finish the task, and therefore speed up the collection process.
Please see Figure 4 for an example of the worker interface during data collection.
B Further Data Analysis
To further look into the diversity of the data in HOTPOTQA, we further visualized the distribution of question lengths in the dataset in Figure 5. Besides being diverse in terms of types as is show in the main text, questions also vary greatly in length, indicating different levels of complexity and details covered.
C.1 The Inverted Index Filtering Strategy
In the full wiki setting, we adopt an efficient inverted-index-based filtering strategy for preliminary candidate paragraph retrieval. We provide details in Algorithm 2, where we set the control threshold N = 5000 in our experiments. For some of the question q, its corresponding gold para- graphs may not be included in the output candidate pool S cand , we set such missing gold paragraph's rank as |S cand | + 1 during the evaluation, so MAP and Mean Rank reported in this paper are upper bounds of their true values.
C.2 Compare train-medium Split to Hard Ones Table 9 shows the comparison between trainmedium split and hard examples like dev and test under retrieval metrics in full wiki setting. As we can see, the performance gap between trainmedium split and its dev/test is close, which implies that train-medium split has a similar level of difficulty as hard examples under the full wiki setting in which a retrieval model is necessary as the first processing step. Table 9: Retrieval performance comparison on full wiki setting for train-medium, dev and test with 1,000 random samples each. MAP and are in %. Mean Rank averages over retrieval ranks of two gold paragraphs. CorAns Rank refers to the rank of the gold paragraph containing the answer. | 7,635.6 | 2018-09-25T00:00:00.000 | [
"Computer Science"
] |
THE ORIGIN OF NUMBER AND THE ORIGIN OF GEOMETRY: ISSUES RAISED AND CONCEPTIONS ASSUMED BY EDMUND HUSSERL A ORIGEM DO NÚMERO E A ORIGEM DA GEOMETRIA: QUESTÕES LEVANTADAS E CONCEPÇÕES ASSUMIDAS POR EDMUND HUSSERL
The objective of this article is to present the concept of origin, as presented in Husserl’s initial studies, and the same concept as it appears in his final work. The views he assumed in the different phases of his life are addressed: in Halle, when he follows Brentanian psychology to support the origin of number; in Göttingen, where he remains until 1916, when his thinking about reduction matures; and the final stage in Freiburg. The article presents ways through which he understood and explained the origin of number, considered in the first phase and his first studies as central for the clarification of the fundaments of mathematics. It further presents how he explained the origin of geometry, under the dimension of the lifeworld, as well as his conception of knowledge and reality, already understood in the 1930s and throughout the years nearing his death as life-world.
He was a mathematician living through the movement that had been taking place from the beginning of the 19th century that displayed a growing concern about placing Analysis on solid arithmetic bases. In the growing climate of discussions in the academic mathematical environment, the lack of a theory of real numbers became evident, and it was understood that such theory could provide a more correct or at least complete basis for technical demonstrations. Thus, that the arithmetization of Analysis was shown as a promising theory of the real line on purely arithmetic foundations.
Following this objective, Husserl pursued the task of investigating philosophically the fundaments of arithmetic, seeking to clarify the "original bases" of the edifice of mathematics 7 . At the beginning of his academic life, he had not yet conducted a phenomenology of the sciences. By that time, he merely took the crisis of the fundaments of mathematics that was being contemplated, and advanced in search for clarification of the basic concepts of that science. So, he worked with the origin of number, looking into the psychological acts which originate it, as he then understood it. He performed a transcendental phenomenological analysis regarding the sciences, in general, in the works he presented from around 1930 until his death in 1937, published in the Crises of European Sciences and Transcendental Phenomenology 8 . This analysis is important for the comprehension of what was presented in The Origin of Geometry .
In 1882 Husserl presented his doctoral thesis Beiträge zur Theorie der Variationsrechnung whose objective, according to Prada (1986, p. 53), was simplifying the method of proof in the calculation of variations so as to reduce the problems regarding differential equations. In his research, he realized that the very concept of number was taken as a given, without being questioned: What is a number? How is it constituted?
This matter absorbs him to such an extent that it comprises his post-doctoral thesis (Habilitationsschrift), entitled Concerning the Concept of Number: Psychological sets. He proved, also, that Rational Numbers -Q are countable and that Real Numbers IR are continuous, that is, IR are bigger than Q. 7 As I understand from the context of Husserl's work, mathematics is viewed as a Western science founded in the theorizing thought of the Greeks. 8 This translation to English of Logische Untersuchungen -LU was conducted by David Kerr, who was, at the time of the translations, assistant professor of philosophy at Yale University.
Study ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative In that work Husserl focused on three main issues: 1. What is number itself? 2. In what kind of cognitive act is number itself truly present to our minds? (HUSSERL, 2003, p. ix) and, in Husserl's words, But how can one speak of concepts which one does not genuinely [eigentlich] have? And how is it not absurd that upon such concepts the most secure of all sciences, arithmetic, should be grounded? The answer is: Even if we do not have the concept given in the authentic [eigentlicher] manner, we still do have it given -in the symbolic manner. The discussion of this essential distinction, and the psychological analysis of the symbolic number representations, are to form the task of the following chapters (HUSSERL, 2003, p. 203).
As stated in the title of the work itself, it is a psychological analysis. This title already exposed the connection of Husserl with Weierstrass, a mathematician, and Brentano, an important psychologist in German academic circles of that time, for assuming descriptive psychology, and whose classes he attended between 1884 and 1886.
In that aspect he differed from the prevailing school of psychologists of his time who worked with genetic psychology 10 . He studied with Brentano, who exerted great influence on him, especially his early work.
The psychological analysis of number and the theoretical impediments detected
Husserl inherited from Brentano the method for working with the description of experiences and the notion of intentionality. Both experience and intentionality, persist throughout the development of his work, however, they were modified in the different phases of his life. These can be understood as those in which he worked more predominantly with Brentanian psychology, in Halle; the phase when he worked in Göttingen, when he developed the bases of eidetic phenomenology and performed a metaunderstanding of the work carried out around 1910, aiming to understand the method that was used in the research developed, culminating, in 1913, with the publication of Ideas -General introduction to pure phenomenology (1972), which inaugurated his third phase, 9 This article is based on a translation to the English language by Dallas Willard, from the School of Philosophy, University of Southern California, Los Angeles, edited by Springer Science+Business Media, B.V., 2003. This is a publication of "Edmund Husserl -Collected works; editor Rodof Bernet, volume X. 10 Genetic Psychology, as Brentano says, has the purpose of explaining psychology by formulation of laws of coexistence and succession; the method where these methods are discovered is that of natural sciences. So, it follows an inductive procedure to stablish general laws as "general facts" (De Boer, 1978, p. 52). Descriptive psychology describes the psychical phenomena to explain them. The main point is that he assumes those phenomena as intentional.
In the period he was in Halle he worked with the origin of numbers and started to shift towards self-criticism, heeding the criticism of colleagues, especially Frege and, deepening rigorous philosophical thinking, dedicating himself to exposing insights on ways to overcome the obstacles detected and visualizing possibilities not for working with psychical phenomena, but with the eidos or essence (Wesen) 11. . This indicates his theoretical shift in relation to aspects of Brentano's descriptive psychology. This is because "The essence (Eidos) 12 is an object of a new type. Just as the datum of individual or empirical intuition is an individual object, so the datum of the essential intuition is a pure essence" (HUSSERL, 1972, p 49). And further affirmed "Thus the essential insight is intuition, and if it is insight in the pregnant sense of the term, and not a mere, and possibly vague, representation, it is a primordial dator" (HUSSERL, 1972, p. 49).
Next, I will focus on Brentano's vision, to explain the important notions that are revisited by Husserl, initially following them, and subsequently, making significant theoretical modifications.
Brentano sees reality as being divided into physical phenomena and psychical phenomena. The former refers to the observable reality in its empiricism. Psychical phenomena are characterized by their properties, among which the most important is intentionality. Thus, he assumes that intentionality is in the external reality because it is a property of psychical phenomena, which belong to reality.
Intentionality is a characteristic that focuses on the difference between genetic psychology, predominant at the time, and descriptive psychology, assumed by Brentano.
This concept is derived from Middle Ages scholasticism. According to De Boer (1978), the scholastics of that period characterize psychical phenomenon as intentional and immanent to the mind. That is, it is an intentional and mental phenomenon, thus, separated from external reality. The being is within the sphere of consciousness, where the cognitive process that operates with images of external objects and representations of species occurs. For example, red is the color present in the external object and the intentional red 11 According to Ales Bello (2000, p. 37), he understands that the use he used to make of the word "idea" in Logische Untersuchungen -LU, could have been equivocal and confused with the Kantian concept of idea, thus he prefers to substitute it by Wesen, a German word which means essence, or eidos, a foreign, Greek word. 12 It is important to reiterate that his conception of essence or eidos as assumed by Husserl differs from that explained in the theory of Ideas of Plato. ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, ed. especial. 2020 392 Special Edition: Philosophy of Mathematics color that represents it. It is represented in the mind. Cognitive acts take place in the mind and what is presented in the mind is considered reality. It must be highlighted that, according to this view, intentionality remains within the scope of consciousness.
Study
When understanding reality divided into physical and psychical phenomena, Brentano makes an important change regarding the worldview of scholastics, since even psychical phenomena are not circumscribed to the mind. This aspect concerns ontology.
As for intentionality, which also affects epistemology, he views it as an act of pointing to or addressing a phenomenon, both physical and psychical. This is an understanding of intentionality as "intent", which is not confined within consciousness, but connects it with the outside world 13 . This way of understanding intentionality remains in Brentano's and Husserl's philosophy, albeit with modifications, resulting from the development of the latter.
In the Philosophy of Arithmetic Husserl follows Brentano. Thus, he seeks the origin of number in psychological acts, which, as we have seen, belong to the phenomena of reality. Following his master, he describes internal psychological experiences. And, like him, Husserl believes that the task of philosophy is to clarify the concepts of science.
Thus, he seeks to clarify the origins of numbers within the scope of arithmetic which includes natural numbers or whole numbers. Thus, numbers (seen either as natural or whole numbers) are the foundation of the theory of real numbers with its continuity property. Thus, arithmetic is the base for the construction of reals which ground mathematics. When applying himself to the study of the origin of number, Husserl tackled a central issue for constituting mathematics, and, faithful to his views regarding the task of philosophy, as previously stated, sought to clarify an important concept for that science.
In addition to the issue of the origin of number, he also sought to clarify the symbolic methods present in the modes of operation in this area. For him, the analysis of the whole number "[...] by no means merely serves arithmetical purposes. The correlated concepts of unity, multiplicity and numbers are concepts fundamental to human knowledge on the whole, and, as such, they lay claim to a distinctive philosophical intent [...]" (HUSSERL, 2003, p. 14). ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Psychology is important for the classification of such concepts, as it encompasses the interests that concern the clarification of arithmetic, as well as the psychical acts which form the basis of the origin of numbers In the analysis that he develops in PA, Husserl states that concepts in the context of mathematical theories are not important, although mathematicians always seek to define all the terms with which they work. The concept of number, for example, is sometimes defined by way of the concept of equivalence. But this is an artificial construction whereby something familiar is obscured by something remote and strange.
Study
Euclid, for example, defines "the number is a multiplicity of units" (HUSSERL, 2003, p. 15). This definition itself leads to a successive return to it, as it does not explain what unit or multiplicity are. Then, one might question how to understand number, while avoiding this circularity in the exposition of such understanding, thus avoiding the tautological character of the concepts, or the lack of meaning of definitions given by mathematicians, or even the recourses which they use to resolve the exposition; resources that cause other difficulties either of mathematical nature or otherwise.
In PA he exposed extensive studies he conducted on ways mathematicians present the concepts of number. As an example, to clarify the depth of his studies, I list some flashes of what he presented regarding F.A. Lang in PA, page 35, and others. Following Kant's idea who, more generally, Lang stresses "the temporal 'form of intuition as the foundation of the concept of number', which came from the well-known Aristotle's conception Time is the number of movement in respect to earlier or later" Husserl (2003, p. 33). Husserl stated that Lang supposes "that everything that time achieves for Kant can be derived with far greater simplicity and certainty from the representation of space" (HUSSERL, 2003, p. 35).
On pages 36 and 37 of PA Husserl stated "the synthesis upon which the concept of number is founded (the collective combination in our language) is, for him (Lang), a synthesis of spatial intuitions. In quite the same manner as geometry, accordingly, arithmetic is supposed to rest upon spatial intuition" (HUSSERL, 2003, p. 37) 14 . He argued that Lang reduces mathematical thinking and logic to spatial intuition. By acting this way, the representation of space becomes, in his view, a psychological prerequisite for the origin of the concept of number. Therefore, instead of clarifying such fundaments, it obscures them. Husserl's intention is to explain the origin of number, without resorting to explanations which are external to the very actions it actualizes. He saw such concepts as originating in intuition, which realizes acts that encompass the phenomenon in point through intentionality. Wherefore they originate from the intended objects, while "engulfed or encircled" by intuition or perception.
He defines number as "a specification of the concept of plurality, and he believes he has found the origin of the concept of plurality in the inner perception of collecting acts" (DE BOER, 1978, p. 64). When given a multiplicity, the question "how many?" is posed and the answer is given indicating the appropriate number. In Philosophy of Arithmetic, Husserl expresses the concrete bases of the abstraction from the concrete phenomena that form the foundations of multiplicity, and thus, of number. He distinguishes the authentically represented multiplicities from those represented symbolically. The former concerns the psychological characterization of the abstraction.
The abstraction is carried out in the perception of a collection of objects from which special contents are separated, because attention is focused on them, from other objects that do not capture the attention. In this phase of Husserl's professional life, considerably reliant on Brentano's psychology, abstraction is derived from the attention present in the perception act. Abstraction creates a separation in the aspects of content of the object that are present in perception. This can result in both negative and positive aspects. From a negative standpoint, this leads to a distancing from something, or results in aspects being ignored. From a positive standpoint something can be particularly emphasized.
Husserl believes that abstraction is at the core of the formation of concept, which cannot be seen without a concrete intuition. He states, "Hence, even when we represent the general concept of multiplicity, we always have in consciousness the intuition of some concrete multiplicity by means of which we abstract the general concept" (HUSSERL, 2003, p. 83) and he argues that if it were not so, we would fall into the nominalism that sees in the concept only the signs, but not the meaning of what it is said. If it is so, then, what is the basis of the general concepts?
This question comes up especially in the case of a priori concepts, which form the basis of laws 'grounded purely in concepts. Yet they are the only general concepts in the full sense of the world, for the empirical concepts that rest on induction are not universal but are limited in their extension to the perceived instances and contingent state of research. According to Husserl, the general concept is abstracted from that which a several objects have in common. Abstraction is an act of attention in which we disregard the difference and However, the general concept is not formed directly from the apprehension and respective collection of content. As the interest that affects the act of isolating characteristics is not directed to the content of phenomena, but exclusively to the linkage among content, intentionally separated and perceived in thought.
The grasp of contents, and the collection of them, is of course the precondition of the abstraction. But in that abstraction the isolating interest is not directed upon the contents, but rather exclusively upon their linkage in thought -and that linkage is all that is intended (HUSSERL, 2003, p. 83).
Thus, the concept of oneness derives from the abstraction of content perceived in objects, separated from the totality of their content, due to the attention being focused on their aspects, and actualized in intentional linkage, which is effected through thinking.
The analysis of the origins, thus conducted and understood, enables the elimination of ambiguities that can obscure what is mentioned in concepts, to the extent that it allows the recovery of the path of abstraction, reaching the intuited object. Such procedure of retroactively resuming the path taken in the constitution of concepts, persisted in his thinking, even though he increasingly reached the understanding of the complexity that surrounds the trajectory of such constitution. Within the scope of PA, this path goes from concept to intuition of the object, considering the connections of abstracted aspects that occur in thought 15 .
In PA Husserl highlights the genuine symbolic concepts, based in original intuition. They show the concept in symbolic terms. Such matter became the focus of his investigation as he believed the logic of his time to be flawed. This is an important study as it brought up the issue of thinking about logic and acting technically with logical laws.
This questioning founded his criticism of the sciences, which operate following the laws of logic. Such criticism was revisited under another guise in The Crisis when he conducted a phenomenological investigation of modern science.
15 Even though this issue may give rise to discussions, which have occupied the minds of mathematicians throughout history, about mathematical notions being discovered, meaning they are outside human capability for thinking about them, or if they are constructed by the human mind; for Husserl all human knowledge and, therefore, the sciences and in them mathematics, are constituted and produced by the human being, understood as a person who, in the living-body (understood in the concreteness of flesh and bone), is constituted in its organic totality by physical, psychical and spiritual dimensions. The former also encompasses sensations, the latter emotional and reasoning states, the spiritual, states of judgment. He raised the following questions: can the justifications of symbolic concepts, such as the concept of number, be assessed intuitively? What is logical reasoning based on, both in scholastic and algebraic logic?
While working on the issues that he had raised regarding symbolic concepts, Husserl was more concerned with philosophical aspects. His criticism of logic was directed to calculations which, in general, are conducted within the realm of reasoning that relies more significantly on articulations based on rotely applied rules, than in their underlying theoretical bases. He pointed to a significant issue; the justification of the implicit affirmation of symbolic concepts, such as in the case of numbers greater than twelve. According to Husserl, these affirmations cannot be verified through the only method he considered pertinent: concrete intuition, which enables the comprehension of the concept of number. He further explained it is the case of symbolic concepts of numbers, formed by repetitive addition of units. These are represented by signs. Such as: 1+1+1+1..., whose series of natural numbers is then transposed onto systematic series, as explained in De Boer (1978).
Symbolic concepts carried out only in thought can be found in this intuitive process. He further explained that as a result of our ability to string together thoughts maintained with symbolic representation, we move further and further away from the original act of intuition, and end up operating solely with calculation rules, stripped of genuine meaning, except for their own rules. The operations conducted become mechanical, successful only with the aid of signals which, in turn, are based on symbols.
This procedure is widely known in mechanically performed counting.
Thus, it is understood that mathematics, treated for its particularities of arithmetic and logic, distances itself from reflective thought about its fundaments, and rather becomes highly technical. This is so because concepts and rules are substituted by signs that simplify complex combinations. Thus, on one hand, energy is saved and, on the other, meaning is lost. For instance, in the decimal system, 10+1+1 is substituted by 12; 10+10, by 20, etc. We operate with signals, according to the rules of the game. So, mathematics can be seen as a game of signs that may even be words.
How, therefore, could one conjoin number concepts in operations, since each remains identically what is; and since each concept in and for itself, is only a single one, how are we ever to conjoin identical concepts? The answer is obvious; the arithmetician absolutely does not operate with the number as such, but rather with the generally represented objects of those concepts. The signs which he combines in calculating have the character of general signs, signs formed with number concepts as their basis (HUSSERL, 2003, p. 191 While manifesting his comprehension regarding the fundamental activities of numbers, Husserl stated that they are directly formed solely by the enumeration process; enumeration of things or, recognizing another extension, enumeration of units. However, numbers can also be formed in an indirect way, through calculation operations, which are understood as the fundamental activities of addition and division. He understood addition as "To add is to form a new number by collective combinations of units from two or more numbers" (HUSSERL, 2003, p.193).
Nonetheless, this is a significant issue for arithmetic. It regards each number appearing, in its internal constitution, as one additive conjunction of units or ones. In logic, however, addition of units is not a logic specification or addition in general but refers to one and zero.
Husserl argued that the definition that number is an answer to the question 'how many?' is in tune with the investigations presented in PA, regarding the original intuition of number. Nonetheless, he stated that not all answers to that question are admissible, except for those that may be positive. In the case of a negative answer, a no-many or no multiplicity will result, which is a special case of many. Arithmetically one and none are two possible answers to how many. "Linguistically one and none function just like numbers and therefore the grammarian is at liberty to regard them as numerical determination. But logically they are not that" (HUSSERL, 2003, p.138).
Nominally, the concept of number one must be distinguished from the concept of unit or the concept of one. One as answer to "how many?" does not coincide with multiplicity, from the standpoint of the concept. Along with the concept of number, seen as multiplicity, the concept of unit is an inseparable given. However, this is not true for the concept of number one, as a subsequent result to technical development. According to Husserl, "even if such distinctions are a matter of indifference to the arithmeticianand justly is sothey nevertheless must be such for the logician" (HUSSERL, 2003, p.142). To the question 'how many apples?' we do not obtain as answer a unity, but one apple, for instance. Thus, the expression of multiplicity of units generally does not mean the same of multiplicity of numbers one. "To identify the two is to attach to the term 'unit', in addition to the many equivocations which it possesses anyway, a new one, from which it is still free in common linguistic usage" (HUSSERL, 2003, p. 143 Even though many mathematicians and philosophers accused him of being opposed to logic in PA, and not recognizing its value, this was not the case. According to De Boer, Husserl was a competent expert in algebraic logic. Through his studies regarding such logic he came to understand that there is no fundamental difference between logic in arithmetic and logic in algebra. 17 Logic is the theory of correct thought; logic calculation is a method that aims to spare us difficulty while thinking. Thus, calculating is not specified as thought, but as a substitute for thought. That is why operating with signals can become a game of signals devoid of meaning.
His considerations regarding algebraic logic are also valid for scholastic logic that, in his view, had degenerated into a technique for calculating and drawing conclusions.
He believed that Logic and algebra are completely different theoretical fields. Thus is possible for one and the same person to be a good algebraic technician but only moderately successful, as a philosophy of algebra: the devising and application of a calculus might well be accompanied by a lack of insight into the essence and its cognitive value (DE BOER, 1978, p. 67).
This understanding moved on to the criticism of sciences, brought about in an explicit and concise manner in The Crisis of European Sciences.
The objective of the present section was to explain the ways through which Husserl understood the origins of number in Philosophy of Arithmetic, as well as indicate that, at that time, the criticism towards sciences, especially mathematics and, within its realm arithmetic, was already occurring. The criticism he endured from mathematicians, mainly Frege, regarding founding arithmetic in psychological acts was scathing. For Husserl himself it became challenging to account for the logic that underlies the theory of arithmetic. In the 1890s, his investigations focused on logic, resulting in the publication of Logic Investigations (Logische Untersuchungen -LU) 18 , in 1900/1901. 19 The objective 16 It is interesting that this idea brought up by Husserl in PA will be revisited by Martin Heidegger in the article Discourse on Thinking (1962), when he develops the argumentation regarding calculative and meditative thinking. 17 Husserl would further elaborate the arguments regarding this matter in Philosophy of Arithmetic, with "higher operations" and Logic Investigations, volumes V and VI (HUSSERL, 2008 of that work was clarifying the concepts of pure logic. Pure in the sense that it is founded in the essences 20 or eidos. In the section dealing with the origin of geometry, I will focus on his understanding of essence, aiming to show that he had already theoretically distanced himself from Brentano.
Husserl believed that knowledge and specifically the science with which he worked, should not be based on theories that offered assumptions to support deductions and inferences to produce new theories, and conducted his work accordingly.
On the contrary, the attitude proposed by Husserl is to work from the bottom up (Von unten). The absolute principle, to be understood in this case as the genesis of essential knowledge and, therefore, as a foundation, notwithstanding only in the sense of valid knowledge, can be expressed by the well-known formula 'principle of all principles' (emphasis added by the author of this essay), according to which 'all originally offering view is a legitimate source of knowledge, that is, all that is originally presented in intuition (in the flesh, so to speak) must be accepted as presented, but also solely within the limits in which it is presented (ALES BELLO, 2000, p.38/39, author's translation ) In Prolegomena -LU, (HUSSERL, 2008) Husserl already presented concrete collectivesregarding categorial perception that offered examples of specific cases of ideals.
The changes conducted regarding the origin of numbers appear to be a significant insight, that I understand as the germ of the later developed eidetic phenomenology. The origins of number no longer lie in the act of collecting, but in the correlated eidos, as this phenomenology recognizes that such act only spawns the concept of collecting.
According to De Boer (1978) Through the doctrine of numbers as ideal entities, Husserl honors Frege's criticism of his 'psychologism'. Although Frege did not distinguish the various forms of psychologism and unjustly accused Husserl of reducing contents to acts, Husserl must have felt that here Frege's criticism really did apply to his philosophy of arithmetic. Arithmetic concerns itself not with psychical facts but with ideal entities. The mathematical and the psychological belong to 'such different worlds that the very thought of interchange between them would be absurd (DE BOER, 1978, p. 280 A relentless thinker at the same time concerned with his affirmations as well as with the method and procedures to make such affirmations, Husserl travelled a long journey between these two works. His concern with method lead him to incessantly revisit what had been realized, exposing objections, and explaining his insights to solve them. This shows his dissatisfaction with what had been obtained and his insistence in declaring important work such as Ideas I (HUSSERL, 1931), and Cartesian Meditations (HUSSERL, 1977), as "introduction to phenomenology". 21 3 The origin of geometry, the critique of european sciences and the concept of history In the first paragraph in Appendix VI, The Origen of Geometry, of the Crisis of European Sciences , Husserl set out his goal for that text. I particularly consider it a sui generis essay, as it represents a synthesis of his thinking and understanding of science from the perspective of the way it is constituted and produced.
To this end, he uses geometry as an example to show his understanding. In the core of his phenomenological thinking, synthesis was always understood as being transitional, since it does not solve the questions and doubts pursued in the arguments articulated to answer them. Rather, in the very movement of thinking, and by thinking, he opened himself to other questions, other doubts, he pointed to obscure passages and established the urgency of pursuing the investigation.
In the following paragraphs he explained his objective: Rather, indeed above all, we must inquire back in the original meaning of the handed-down geometry, which continues to be valued with this very same meaningcontinued and at the same time was developed further, remaining simply 'geometry' in all its new forms. Our considerations will necessarily lead to the deepest problems of meanings, problems of science and of the history of sciences in general and indeed in the end to the problems of universal history in general; so that our problems and expositions concerning Galilean geometry take on an exemplary significance (HUSSERL, 1970, p.353 Why geometry and why Galileo to start this "archaeological excavation" work in search of the depth of the problems of the meaning of "origin", now focused on the physics of modern times? The difference in terms of the way of viewing and seeking to understand origin already stood out when mentioning physics, a modern science, whereas in the PA he sought the origin of the number in the subjectivity of psychological acts performed by the subject. We have seen that many of Husserl' critics fixated on the importance that, in that Clarifying the scientific concepts is the appeal to which Husserl responded and which remained alive and active throughout his professional life. He wanted to understand how and why science is produced, using, as previously mentioned, geometry as an example. By committing to this undertaking, he highlighted the importance of descriptive psychology as a methodological resource, since it contributes to the description (Aufklärung) of the origin (Ürsprung) of concepts of logic with the descriptive method; moreover, it opens horizons for a radical move from descriptive psychology to phenomenology. However, his critique of psychologism, which intends to support the foundations of pure or formal logic, persisted. When he referred to psychologism or historicism, he referred to the vision of both, psychology, and history, taken as factual and in line with positivism. Throughout his journey, from Philosophy of Arithmetic to We have seen in PA that the psychical phenomenon is intuited in the act that apprehends its content through perception. It is an act resulting from the subject focusing his attention on the phenomenon. In LU, it is the object that appears to consciousness as perceived, as givenness-sense from the perceived thing. It is the fruit of an intentional act; therefore, no longer derived from psychological attention. Thus, the object is viewed as intentional. This is a significant change because consciousness stops dealing with the content of the object and starts dealing with the essence or eidos and, thus, performs the acts of knowledge with ideas.
How does this change take place? Husserl no longer understood consciousness as a psychical phenomenon, therefore belonging to the external reality of the subject, just as he conceived while working with the Brentanian view; rather he understood it as internal to the subject, as one of the subject's regions. Consciousness targets the phenomenon (object of the external world) and, through intentionality, perceives the phenomenon, which now comes as an idea (essence) and no longer as content of the object. This is the noesis-noema 22 movement, which is so striking in Husserlian phenomenology.
The change in perspective regarding reality and knowledge from descriptive psychology to essential phenomenology and transcendental phenomenology is a movement that extended from the 1890s to 1913, culminating with the publication of Ideas (HUSSERL, 1972). It covers the epoché, also called Reduktion and Ausschaltung, which allows the realization of eidetic reduction, typical of essential phenomenology, and transcendental reduction, typical of transcendental phenomenology. Reality is no longer regarded as physical phenomena and psychical phenomena; it is merely a reality external to the subject. Husserl did not propose to tackle it, because he understood that this was not possible and, at that stage, he did not even consider studying it. His goal was to investigate the ways through which knowledge takes place.
One cannot avoid considering that knowledge is about reality, but about which reality? The question I am asking will be explained throughout his work. I will return to it in this essay, when I present the understanding of the origin of geometry. 22 Noesis concerns the acts performed by the consciousness, while noema concerns the thing focused by the intentional look. Reduction is a voluntarily act performed by the subject, in an action that does not intend to build or explain something, but only to let oneself be carried away by something, intentionally focused on seeking to understand it beyond what can be seen immediately.
It is a search that takes place as a silent response to what torments the subject, impelling him to go on, to unravel the mystery that is announced beyond sight. It is a movement focused on the givenness-sense of the thing in its ways of givenness. These ways of showing itself in its characteristics, such as the coldness of ice, the redness of the color red that shows itself in an object in varying luminosities, the rough sound of a stone falling onto another, or the shrill of a piece of metal hitting another, the roughness of a thorny surface, etc. This givenness can be perceived by the senses of the subject who perceives, in different nuances, depending on the moment the subject experiences it, as well as according to the greater or lesser acuity of possibilities for feeling of the organism of the subject themselves. 23 By means of reductionwhich may either be eidetic or transcendentalattention is shifted from factuality to essentiality. It is a movement to place in suspension what one seeks to know and forsake a priori beliefs and concepts that postulate about what is investigated. In eidetic reduction, the reality of the factual world is forsaken, as if it were disconnected from the focus to which the subject steers their intentional gaze.
Husserl did not deny this reality; moreover, he stated that nothing is lost in this reduction.
He saw the residue that remained in this epoché as "[...] is not reality itself but a psychological sense to which we retreat because true reality is inaccessible" (DE BOER, 1978, p. 472). The reality dealt with by the subject in the acts of consciousness is not a representation of the factual and objective object; but rather an ideal presentation, as it is derived from the movement of noesis-noema that brings it as its essence. The idea or essence understood in this way is not a fantasy; it has its own objectivity. It brings with it the thing, the noema, perceived as a unity of meaning apprehended in the ways it offers itself. The consciousness we are talking about was understood by Husserl, in that phase of his professional life, as a region of the subject. 23 In Idee vol. II, Husserl explains the understanding of the living-body, explaining in detail the intertwining of the different sensationstactile, visual, auditory, gustatory, olfactory, of kinesthetic movement, as they are always intentionalamong themselves and with other cognitive and judgmental acts which occur in the carnality of such body. The comprehension of the living-body and its respective experiences are important contribution made by Husserl that affected philosophy, psychology as well as contemporary social sciences. The residues left in the eidetic reduction remain as a question that, in a silent way, inhabited his investigations and, little by little, came to be outlined as an answer that came with his understanding of the world and that, in the works conducted during his more mature phase, were included in the conception of life-world (Lebenswelt), when they no longer remained simply as residues. In addition to the residues that remain with this epoché, the understanding of consciousness as one of the regions of a person also populates his theoretical concerns. This question became clearer, arising in an insight that led to the collapse of the naturalistic world view. These concerns became evident as he realized a metacomprehension of the methodological procedures present in his investigations. As previously mentioned, his goal was to understand how he realized that, what his investigative procedures were and, thus, also be able to validate a possible and, for him, crucial perspective, which is that of critical, i.e., radical reflection. When making efforts in this direction, emerging from a deep dive, he clearly realized that consciousness could not be considered as a region of the person as it is assumed in eidetic reduction, but that it had to be understood as non-regional, fluid, moving consciousness, as an originator of meaning, so that it could reflect about itself. Here we see the radical change brought about by transcendental phenomenology that causes the collapse of the naturalistic conception of the world which is understood as an objective reality and external to the subject. Radical reflection concerns the consciousness taking itself as a phenomenon, placing itself in epoché and reflecting about itself. This change in his phenomenology, which boomed from 1913 to 1916 in Göttingen, caused dissatisfaction among his disciples. This dissatisfaction of his disciples 24 revolved around their understanding that, with the views about transcendental reduction, especially regarding consciousness as an absolute principle of meaning, Husserl was changing the perspective of phenomenology with which he had been working up to that point. According to them, he was heading towards an idealism that indicated, as they saw it, a trend towards a solipsistic closing of consciousness. 25 Back to consciousness. To an unsuspecting glance, consciousness seems to be revolving around itself, working with contents devoid of signs of the external world.
However, this is not the case. How and why? What does it realize? The living experiences 24 Among such disciples, who drifted away from Husserl and followed different theoretical paths are: Martin Heidegger, Edith Stein, who in turn indicates, according to Ales Bello (2000, p.50), Max Scheller, Alexander Pfänder, Adolf Reinach, Hedwig Conrad-Martius and Jean Hering. 25 It is worth pointing out that this type of criticism is still repeated to this day by readers with superficial knowledge of his work. In a continuous flow, consciousness records experiences, realizing them. To do so, it can no longer be understood as a region of the person. It should be in all regions of the living-body, recording occurrences, operating as a center for meaning and as a 26 The living-experience (Erlebnisse) speak, at first glance, about the life that flows, as it is lived. We live experienced acts in motion for the duration of their temporality. At each moment we live the present moment of the act taking place. Psychical acts, such as perceiving, imagining, fantasizing, remembering, reflecting, which are inherent to human beings, even if they occur uniquely in separate individuals. Living experiences flow, slips from now to what has been, making room for other living experiences. We know we are living, but only by an act of consciousness do we realize what we are experiencing. This act is to perceive the experience as being lived and Husserl calls it "Erlebniss". 27 Living-body is a temporal and spatial complexity because it is always here and now, in its carnality, although we understand that by remembering one can refer to past experiences, living them through remembrances. One can imagine, fantasize, and wonder through the spaces thus opened to the activities of thought. As it is temporal, it lives in the historicity, temporality that extends through its acts in actualization, that is, in action. Actions that are performed and expanded in spatiality. It is the incarnated body, the intertwined totality of physical (relative to the organism), psychical and spiritual activities. Physical activities display the structure and operation of different organs, their neurological and chemical aspects, the sensorial organs; psychical activities refer to emotions, pleasure, pain, as well as cognitive and spiritual reasoning; they speak about acts of judgment, such as greater, smaller, more beautiful, uglier. Its characteristic is to be an intentional movement, which sets the tone for their occupation of space and actions: the subject is always in the process of doing something, responding to something, going towards, revealing an attitude, a way of being present. It is the presence of the person in flesh and blood. At the same time, the person "exposes" herself in it and brings the external world into it, including the other, into their internal reality. I understand that an abyss is opened when we realize the complexity of the living-body. What is internal brings what is external and the self, and acts of consciousness... the possibility of knowledge. The connection with the external world and the pole of a dialectic "me -other", in which the intersubjectivity constitutes "me" (my living-body) -the other (their living-body) just like me because they can also see me as an "other", feel, expose ... and different from me. This pole, in all the complexity of this living-body, opens to the other primarily through this intropathic act, as well as language. 28 Intropathy is knowledge of the other that occurs directly in the experiences in which the other is given (brought, exposed) to the self in its corporeality. It is a constituent perception of intersubjectivity. It is not, therefore, a theoretical concept or a predicatively constructed statement. 29 Intersubjectivity is the reality constituted by the subject (living-body) being with another subject (livingbody) in the life-world in which they are. It is in this dimension of reality that social organization is produced, as well as science and all the socio-historical materiality that is produced and that is there to bring sense for a subject who is willing to understand it. In Cartesian Meditation -An Introduction to Phenomenology (HUSSERL, 1977), in the Fifth Meditation, Husserl exposes his thinking about the way in which the other is constituted for the subject. Following his conception and proceedings, phenomenology seeks to understand how the object is constituted by the subject, to make sense for them. In the case of the other human being, this procedure is also performed. 30 "Pure ego" is not the same as human personality. Nor is it given in the order of manifestations in relation to phenomenic circumstances. It is an absolute ipseity and takes place in its unity deprived of evidence; it can be appropriately apprehended through a reflexive conversion of the view that highlights it as a center of functions. While pure ego, it does not hide inner secrets and wealth itself, it is utterly simple, it is utterly enlightened; all wealth is in the cogito and in the modality of its functions and, in that it can be appropriately understood (HUSSERL, 2002, p. 109, author's translation) Study ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, knowledge, then becomes absolute as it turns itself into a generator of sense to the living being, while actualizing the acts it performs. These are acts that occur in the carnality of the living-body, which is intentional and always moves towards something to be done. This is understood here not only as a physical act, but as any intended activity; thinking, fantasizing, imagining, etc.
The concern imposed to Husserl that led him to carry out this whole movement of constitution of phenomenology, was of an explicitly epistemological nature. He wanted to know how the meaning of the world was constituted for the subject, highlighting, even when he exposed transcendental reduction, the reality brought about in the acts of consciousness. As explained before, it regards ideal reality, as, when perceived, reality is entwined in intentionality as eidos, thus giving way to idealization.
The search that placed him in this movement was for understanding the constitution and production of science. His goal was to realize the concept of philosophy which he understood, at the beginning of his work, as making scientific concepts clear.
Initially, as previously explained in this article, he focused on mathematics; within mathematics he focused on Analyses of arithmetic, and within it he focused on number.
In this movement he was launched beyond science, entering the tortuous paths of understanding the person, the person's development, the ways in which the reality of the world makes sense to the person, how this sense is expressed and shared with others with whom they are, and even the reality itself in which science remains in historical-cultural production.
He is aware of the crisis affecting modern science. In the 1880s, 1890s and 1900s, he specifically placed in epoché the crisis of the fundaments of mathematics. In the 1920s and 1930s, he focused on the crisis shaking Europe, in social and historical terms, but he continued to worry about science. At that time, his conception of philosophy was more comprehensive, and he was led by it: "[...] philosophy is 'to teach us how to carry out the eternal work of humanity'. It must not only enlighten man about actual states of affairs, it must also provide leadership in ethical and religious matters" (DE BOER, 1978, p. 497 .2020.v.8.n.18.337 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, ed. especial. 2020 407 Special Edition: Philosophy of Mathematics already expressed this concern in the article published in the Japanese journal Kaizo (HUSSERL, 2006). Therefore, it was about conceiving this crisis as teleologicalhistorical. In footnote number 1, on page 3 of Crisis, he expressed it as follows: The work that I am beginning with the present essay, and shall complete in a series of further articles in Philosophia, makes the attempt, by way of a teleological-historical reflection upon the origins of our critical scientific and philosophical situation, to establish the unavoidable necessity of a transcendental-phenomenological reorientation of philosophy. Accordingly, it becomes, in its own right, an introduction to transcendental phenomenology (HUSSERL, 1970, p. 3). This is a task that led him into the crisis then installed in Europe, searching for its origin, understanding that it is within the sciences themselves, exact and natural, but also in the humanities. His analyses led him to understand that these sciences, as much as they have developed on the basis of the exactness of their theories and the certainty that supports their statements, to a lesser degree present in psychology, have nothing to offer to man about his rational or irrational ways of being, of being free and of living in the dimension of rationality and freedom. He stated "Scientific, objective truth is exclusively a matter of stablishing what the world, the physical as well the spiritual world, is in fact" (HUSSERL, 1970, p. 6). To the sciences which assume this perspective, the historical occurrences are understood as an unending concatenation of illusory progress and bitter disappointment.
Therefore, the shift in his investigative concern is clear, leaving behind the origin of the concepts studied in the dimension of the subjective psychical acts and starting to focus on the expression of the articulations of sense understood by the subject and communicated in the intersubjective sphere, going towards the life-world (Lebenswelt) and its teleological/historical reality. Within this understanding of reality, he investigated the origin of the crisis, whose core he believed could be found in the metaphysical view of the positivist sciences that postulate about the reality and the way man should be. He believed that the philosophy of positivist sciences that was established in the movement of revival of metaphysical philosophy of the Renaissance destroys philosophical thinking.
It was established at that time, and persisted in the following centuries, especially the 18th century, self-styled "century of lights", when it becomes evident: [...] the ardent desire for learning, the zeal for a philosophical reform of education and of all humanity's social and political forms of existence, which makes that much abused Age of Enlightenment so admirable. We possess an undying testimony of this spirit in the glorious 'Hymn to Joy' of Schiller and Beethoven. It is only with painful feelings that we can understand this hymn today. A greater contrast with the present situation is unthinkable" (HUSSERL, 1970, p.10). ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, He argued that the new sciences undoubtedly seemed initially successful when they exposed their favorable results by applying their theories. However, he considered that this initial impetus was giving way to a feeling of failure. This is the beginning of "a long period extending from Hume and Kant to our time, of passionate struggle for a clear, reflecting understanding of the true reasons for this centuries world failure" (HUSSERL, 1970, p.11). He stated that sciences dissolve internally because they do not understand the meaning of their original foundation when it appears as a branch of philosophy. That means that when they separate themselves from philosophy, they stop thinking philosophically about their meaning, what they say about the world, mankind, and life itself. Thus, a crisis is set, initially latent and later acute, of the European community that speaks of the lack of meaning of its cultural life, viewed in terms of its total Existenz.
Study
While reflecting on this problematic situation, he focused his views on human history, looking at the present. He stated "we can gain self-understanding, and thus inner support, only by elucidating the unitary meaning which is inborn in this history from the origin through the newly established task (of the Renaissance), the driving force of all [modern] philosophical attempts" (HUSSERL, 1970, p. 14).
He believed that it was necessary to revisit, in a critical manner, the historical path following the trail initially sought by philosophy. Thus, he put in epoché the origin of the new idea of the universality of science in the reshaping of mathematics. In carrying out this investigation, he explained the basic transformation of the idea of universal philosophy that arose at the beginning of the Modern Age, from Descartes onwards. The view that prevailed was that of an infinite world obtained by a rational, coherent, and systematic method. With such view, an infinite horizon was opened to mathematics. It supported the sciences, both through the possibility of applying their theories and by serving them as a methodical and ontological basis. Husserl called the presence of the ontology of mathematics in science the mathematization of nature. In § 9 of Crisis of European Sciences, he provided a magnificent exposition of this mathematization, by presenting the way in which Galileo theorized modern physics, as he sees it.
He questioned how did a shift in gaze, and practices, theoretical or not, occur, which deviates from the daily living-experience in the life-world about the reality which is given subjectively, to the way of viewing the world through the filter of geometric exactness? Going backwards from the present to the point of radical change in terms of the epistemological and metaphysical perspective that paved the way for philosophical Study ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, ed. especial. 2020 409 Special Edition: Philosophy of Mathematics and scientific knowledge since ancient Greece, he found the turning point in Galileo. He put Galileo and his philosophy and practice into epoché and, through a full analysis, clarified the ideas that Galileo used. What Galileo had at hand as pre-data knowledge available and that reached his time and his culture by tradition was Euclidean geometry.
He understand tradition as carrying acquisitions historically sedimented and offered to those who are interested in them; they can be revisited and improved in the interior of the whole itself, which is thus brought about. In the specific case of Galileo considered herein, the acquisitions offered to him refer to mathematics and its respective praxis.
Galileo assumes mathematics in terms of Euclid's geometrical theory which deals with space understood as an exact space and not one uncertain which comes from the appearance of worldly objects. When he conducts this shift in the concept of space, then the space with which the science of physics deals becomes an exact space. Thus, mathematics provides the accuracy that underpins certainty. Geometry brings the exactness concerning geometric space, allowing applications directly to the reality of the physical world. Galileo, as Husserl interprets his way of dealing with Geometry, opened the horizon of practice, without reflection on the original intuition that is at the core of the production of knowledge that has been established. In this blind application, with no reflection on its meaning, natural sciences (and, as a result, sciences in general) gain accuracy and can work with the invariance of geometricized space. Then, they move away from the imperfection of shapes perceived by people in empirical practice, as well as from the inconstancy of a reality that is always in motion, just as in motion is the subject that perceives it. Pure geometry becomes "applied geometry, a means for technology, a guide in conceiving and carrying out the task of systematically constructing a methodology of measurement for objectively determining shapes in constantly increasing 'approximation' for the geometrical ideals, the limits-shapes" (HUSSERL, 1970, p. 28-29). The direct application of geometric shapes and formulas to physics leads the subject to distance themselves from the world of living-experience; moreover, it fosters distrust and non-acceptance of the original intuition, replaced by the accuracy and certainty of science.
I believe that here there is a forced cut in the realm of positivist sciences. The truth is dictated by scientific theory, which postulates about mundane reality. It is beyond experienced reality and the experiences of subjects who, in their sensitivity, perceive nuances that are discarded by scientific theory. This cut entails a schizophrenic vision, because the subject needs to deny their sensitivity and perception and impose the Study ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/ RPQ.2020.v.8.n.18.337 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, ed. especial. 2020 410 Special Edition: Philosophy of Mathematics "scientific" truth upon themselves. Beyond this understanding, the question remains silent: how is science generated and who generates it? Husserl believed that the meaning of mathematization of nature: does not lie in the pure interrelations between numbers (as if were formulae in the pure arithmetical sense; it lies in what the Galilean idea of universal physics, with its (as we have seen) highly complicated meaning-content, gave as task to scientific humanity, and in what the process of its fulfillment through successful physics results ina process of developing particular methods, and mathematical formulae and 'theories' shaped by them (HUSSERL, 1970, p. 41).
With exactness, obtained through measurement, the conception of number obtained in a scale is originated. Now, there is a change in the very structure of mathematics. The effects of algebra and its way of thinking, which is widespread in modern times, are revealed. Arithmetic thought, along with algebra, as Husserl had already clarified in the PA, moves away from all original intuition regarding numbers, from numerical relations and numerical laws. This reduction of thinking rapidly extends to all geometry, to pure mathematics, to formalized algebra itself.
Galilean spirit expanded from physics to natural sciences, emptying itself of its original meaning and leading to an all-embracing universalization. It expanded from physics to the other natural sciences and from those to spiritual sciences, among which psychology and history, with which Husserl had struggled since the beginning of the 20th century. Galileo's idea consisted in seeking validation in knowledge of proven hypotheses, according to established procedures. Thus, the knowledge of science would advance from hypothesis to hypothesis and remain as an endless chain of verifications.
Newton stated: "[...] the ideal of exact natural scientists, says 'hypothesis non fingo', and implied in this is the idea that he does not miscalculate and make errors of method" (HUSSERL, 1970, p. 42) When Husserl interpreted analytically and reflexively this phenomenological analysis, he exposed his understanding of what lies beyond it: the recurring prediction, which extends to infinity 31 . He also believed that the erroneous understanding of mathematization brought about at least two conceptions that have developed and persist to this day. One regarding the subjective character of the sensitive qualities that led Hobbes to formalize it as a doctrine of the subjectivity of the concrete phenomenon of an intuitive nature and of the world in general. The other conception regards nature as seen, ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, ed. especial. 2020 411 Special Edition: Philosophy of Mathematics in its own being, such as mathematics, from which derives the certainty that the knowledge produced by natural sciences is valid and speaks of the totality. Mathematics, in turn, assumes itself as the way nature is, and envisages itself as powerful.
The meaning of origin of geometry
Husserl understood the origin of geometry based on the original intuition of a subjectivity and on the historical a priori. It is in the interweaving of these two conceptions that resides the singularity of the way he viewed origin in that period.
Given the theme of this article, I will focus on history, which brings about the idea of horizon and of life-world, as well as enables questions about his phenomenological vision, including transcendental phenomenology.
In the 1930s, he turned intensely to the question of history, studying it from the perspective of transcendental phenomenology. The origin, in this final phase of his life, is understood within the framework of a complexity in which the reality of the world, understood as life-world, historical a priori, horizon, language, and tradition are intertwined. At the same time, he was concerned about the crisis of European philosophy 32 and his understanding of the renovation of this philosophy, already expressed in Kaizo, in 1922(HUSSERL, 2006. He believed that in the investigation of this origin the following elements intertwine: [...] understandings and expressions/understandings among subjects, who, through language and tradition, remain present in the historical-cultural world, which can also be understood as the world of the historical a priori where we live within the surroundings of what is there as a given (BICUDO, 2016, p.22). Throughout his work, the origin, was always understood as an originating intuition (Ürsprung), an act in which clear evidence of the intentional object is provided.
This is brought to consciousness as the presence (presentation of the idea) that occurs in the very moment. In this instant, the intentional object is presented to consciousness as idea (essence) without intermediation of the sign that can point to and express the intuition. "This perception, or intuition of oneself in presence, would not only be the instance in which "signification" in general could occur, it would also ensure the 32 "The spiritual shape of Europe" -what is it? It means showing the immanent philosophical idea of the history of Europe (spiritual Europe) or, similarly, its immanent teleology, which makes itself known, from the point of view of universal humanity as such, as a rupture and the beginning of the development of a new age of man, the age of humanity, which from now on no longer can live nor wants to live except in the free formation of its existence, of its historical life, from ideas of reason, from infinite tasks. (HUSSERL, 2008, p. 322). ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, ed. especial. 2020 412 Special Edition: Philosophy of Mathematics possibility of an originating perception or intuition in general, that is, the nonsignification as a "principle of principles" (DERRIDA, 1994, p.70) 33 . Therefore, the origin occurs in a subjective dimension, in the movement of constitution of knowledge. In the subjectivity of the subject is in progress the understanding of the sense of what is perceived and its articulation in a comprehensible unity that can be expressed by the subject for himself and for the other. It is in the expression act that language is present, bringing its signs and symbols, as well as its logical form, enabling both the communication among subjects and the socio-cultural production of knowledge. Such communication demands, besides language, the entropathic perception, both constitutive of intersubjectivity. According to Derrida, through the example of geometry, Husserl wanted to show that the present moment offers: "[...] the very ideal and absolute certainty that the universal shape of all experience (Erlebiniss), and therefore all life, has always been and will always be the present. There is and there will only be the present. The being is presence or modification of presence" (DERRIDA, 1994, p.63).
Study
I believe that in the now the subject has a clear vision of what comes to him in intuition. But the present is only an instant that slides into the past and brings the yet to come (future). In this flow of intentional experiences, consciousness, as understood in transcendental phenomenology, registers them. What is intuited or perceived can be revisited by the subject, who experienced it, in an intentional way, in response to a decision to bring it, through remembrances to the then present moment. This flow is dynamic, it is a movement that is always occurring in the living-body, therefore in its carnality. This movement of constitution of knowledge is pre-predicative, it occurs in the hyletic dimensions, therefore sensory, psychical, and spiritual.
To become a form that exposes the unity of meaning actualized in this movement, when knowledge is so constituted requires language. It is used to express the sense that was made for the subject in this articulated movement, given the functional character of consciousness. However, it brings with it the signs, the symbols conveyed by words and the logic of the grammar of the language of the culture of the world which the subject (where this movement takes place) inhabits. The polysemy of words is rooted in this 33 It is worth pointing out that, in this excerpt, Derrida mentions "principle of principles". I wondered if he was mentioning more than one "principle of principles" in Husserl's philosophy. But I realized that intuition is the actualization of an act in which evidence takes place immediately, without mediation of signs. This is so, because it is direct, and brings into perception what is perceived in the flow of experiences recorded. in historicity by tradition, as it can be written or recorded through different mediatic technologies. Moreover, and more significantly for the production of sciences, language sustains logical activity, since it is specifically linked to language, as well as to the ideal cognoscitive configuration that is specifically generated within it.
The explicated judgement becomes an ideal object capable of being passed on. It is this object exclusively that is meant by logic when it speaks of sentences or judgements. And thus, the domain of logic is universally designated; this is universally the sphere of being to which logic pertains insofar as it is the theory of sentences [or propositions] in general (HUSSERL, 1970, p. 364).
Then, the act can objectively be consummated.
It becomes ideally objective and, as such, capable of being transmitted and resumed passively by consciousness or in active production mode, when it is possible for consciousness to intentionally reactivate the original spiritual act, and the iteration of the ideality that then would occur within the subjective sphere, now extended to the intersubjective sphere, moving into the chain of repetitions of what is identical (BICUDO, 2016, p.40).
Thus, language transposes the judgments, in the form of propositional logic and the respective grammar, allowing premises and what they say to be linked in a rational connection, deductive or inductive. The ideas, now formally conveyed, remain in their historicity within the culture of the surrounding world, and can be reactivated in search of comprehension of the original evidence, if those who reactivate them choose to do so.
However, it must be pointed out that Husserl, in The Origin of Geometry (1970), stated that this is not the common way for people to get acquainted with scientific theories. The most common is for people to take their statements as given and repeat such statements without questioning what they convey about the world. By doing this, they may even develop scientific theories, in the direction given by logical progression, or apply them.
The progress of deduction follows formal-logical self-evidence; but without the actually developed capacity for reactivating the original activities contained within its fundamental concepts, i.e., without the 'what' and the 'how' of its prescientific materials, geometry would be a tradition empty of meaning; and if we ourselves did not have this capacity, we could never even know whether geometry had or ever did have a genuine meaning, one that really be 'cashed in' (HUSSERL, 1970, p. 366).
Modern sciences are constituted and produced through this conduct. They come to our present through historical-cultural tradition which also brings about "the world of (HUSSERL, 2008, p. 522). This is the world of historical a priori. This is the a priori of the being of humanity and of the world which surrounds it, which is valid for its experiences, thought and actions. It concerns the way of being of man and the world, as the individually understood subject is always in the world, that is, as a living-body who in their carnality lives here (in space) and now (in time), which stretch through a horizon of past events and that are announced in what is to come, therefore in the surrounding-world-of-things-and-culture.
Which is [...] the world of transmitted products, acquisitions of previous activities and transmitted forms of acting with meaning, as an objective cultural happening. However, people and the entire personal horizon for each person belonging to the surrounding world are correlated, and who are in it with personal spirituality configured in the action and from it (as essentially determinant spiritual estate), and that in the present act continues to develop today (HUSSERL, Anexo XXVI, 2008, p. 522).
Such finding led Husserl to assert that "Historicity in this broader sense has always been in progress and, in its course, is precisely a universal that belongs to human existence" (HUSSERL, 2008, p.523). As humanity, we belong to this world and, each one of us, are in it and are with it, being with others, seeking to understand it. It is an intuited world, a living-world in which we are both as ordinary people and as scientists, whatever the case. "The life-world is always there for mankind before science, then, just as it continues its manners of being in the epoch of science". (Husserl, 1970, p. 123) By focusing on geometry, Husserl took it, from the present, as a guiding thread to pursue its historical tradition, as he understood that when geometry is given in the present life-world, it brings with it all the historical-cultural tradition that, through the sedimentation of idealizations and theoretical formalization, came to be consolidated into a theoretical body, called geometry. It is in this present; being in its own manner of being, nonetheless, it cannot be understood in an isolated and transient way, but in the horizon of its historicity.
[…] Carried out systematically, such self-evidence results in nothing other and nothing less than the universal a priori of history with all its highly abundant component elements. We can also say now that history is from the start nothing other than the vital movement of the coexistence and the interweaving of original formations and sedimentations of meaning (HUSSERL, 1970, p. 371).
Thus, the issue of historical a priori that accompanies the movement of lifeworld and that is also an a priori of history which, in turn, is the a priori of scientific knowledge. But, as explained above, that is a chain that involves spiritual acts and psychical acts, therefore of individual and corporeal subjectivity. That is, of a subjectivity that performs acts, which are historical in double measure: because historicity is a universal belonging to human existence and because this subjectivity is incarnated and is in the life-world that has its own historical horizon. This is a complex issue. Husserl himself posed the question: "Meanwhile, let us recapitulate once more that historical facts (as well as the present fact, that we exist) are only objective based on a priori. But does the a priori, in turn, presuppose history?" (HUSSERL, Anexo II to § 9 a, 2008, p.367).
I believe that by posing this question, Husserl is being faithful to his thinking.
Even though this was one of his final works, he still presented it as an introduction, still not at peace with the path travelled. His doubt: "Does historical a priori presuppose what is historical?" indicates the complexity in which we find ourselves while seeking to understand reality. 34 Apparently, here the entirety of Husserl's philosophy faces a background problem. In order to explain that I resort to the words of David Kerr who translated Crisis from German to English. If all theoretical activity presupposes the structure of the life-world, then, he argues, this must also apply to phenomenology, which, in this case, could not be brought about without assumptions, one of its bases. Kerr arguments that Husserl needs to show that phenomenology can effectuate the telos of every theory without being "caught" in its "arché".
Was he being caught?
Residues of phenomenological reduction, the point of contention for his disciples and which became problematic for himself, no longer remain with the life-world. This issue arose because Husserl stated that nothing from reality was outside of this reduction. Now, reality can no longer be conceived as objectively given; a prevailing vision in modern positivism. It is a complexity that is present in the historical a priori that involves us all and, thus, the very questions about it. Putting it in suspension to understand it is a possible, laborious investigation. Nothing prevents transcendental phenomenology from putting it in epoché as it did geometry itself. Derrida (1994, p.117) criticized Husserl, as he did not fulfill what he promised: to know the thing itself always. I do not see it that way, as from the beginning of his work, Husserl made statements regarding the perception of the thing, not the thing itself. The motto go-to-the-thing-itself is related to what Derrida Study ISSN 2525-8222 DOI: http://dx.doi.org/10.33361/RPQ.2020 Qualitative Research Journal. São Paulo (SP), v.8, n.18, p. 387-418, ed. especial. 2020 416 Special Edition: Philosophy of Mathematics himself said in the excerpt above: look at it and perceive it without intermediaries. The thing-in-itself is perceived, that is it. Point.
Through the historical a priori of the life-world we can understand the origin of geometry as historically investigable in the wake of its constitution and production, in the reality of the present instant.
What was treated in the present article
While conducting the work contained in this article, I understood that the concept of origin can point to one of the tips of the many threads through which one can begin to comprehend the meaning of Husserl's work. However, beyond this finding, the meaning that is evidenced is the origin regarding the principle of principles of phenomenology, as it maintains the understanding, including in The Origin of Geometry, that every originally offering vision must be accepted as it presents itself, but only within the limits in which it is presented. It persists in the concrete realization of its method, as stated by Ales Bello (2000, p.49), which is based on adherence to what is essentially, that is, originally presented.
Husserl was challenged by modern science in terms of understanding it; how it is constituted and produced. Initially, he focused on arithmetic and its foundation and, in a crescendo, encompassed mathematics, modern sciences and took geometry as exemplary to pace the guiding thread that can be pulled in the present, go backwards, aiming to clarify the way through which this science was constituted.
To do so, he considered, the origin of the idea in the subjective act of evidence that occurs in the now, and advanced, conducting rigorous investigations regarding the understanding of psychical acts; the hyletic dimension of experiences, as well as their intertwining in the unity of the living-body, in which consciousness itself moves in its fluid lightness, functioning as a source of meaning and a conductor of reflection; of the constitution of the sphere of intersubjectivity and objectivity, explaining entropathy and dedicating itself to language.
His investigation explicitly focused on knowledge. He stated that the reality of the external world is not viewed as a priority in its theme. He even separated it from what he wanted to investigate, in the movement of epoché. However, in my opinion, the issue of reality remains as intriguing residue. He became aware of that while experiencing the glaring crisis that took hold of Europe in the aftermath of World War I, about which he talked in 1922. He set out to investigate the origin of that crisis, with the rigor of phenomenological procedures, always seeking to understand it in the context of the basic underlying issues brought about by the vision of modern science that dominates the logic of the academics who assume it.
Not only that, but also dominate the manner ordinary people act, involved in and by the reality of the life-world, given the practical applications of the theories of these sciences.
He thematized the life-world. He understood it as a historical a priori that embraces us all, and transcends us all, constituting itself as a ground where we live individually with others, in our finitude, and collectively as mankind. A priori that precedes science, constituting its a priori. However, it is not an objectively given reality, simply external to the subject who gets to know it and who can know it objectively and accurately. It is rather a complex reality, which brings in itself the vivacity of the life of the world, thus Lebenswelt, and in the movement of being, the world. Therefore, the residues concerning reality that remain in the eidetic reduction, in the movement of epoché, are no longer left out, as this reality is also embodied in the living-body of each of us, individually and together, with our co-subjects, all present in the production and in life within the lifeworld.
Is it a closed system? I do not believe so. I see it as a horizon, open to understandings. I believe that Husserl was aware of this, and that is why, to the end of his days he referred to his work as introduction to phenomenology. He assumed his philosophical position: philosophy is to teach us how to carry out the eternal task of humanity. It must not only enlighten men about actual states of affairs, but also provide leadership in ethical and religious matters. | 17,812.2 | 2020-10-07T00:00:00.000 | [
"Mathematics",
"Philosophy"
] |
QuickRNASeq lifts large-scale RNA-seq data analyses to the next level of automation and interactive visualization
Background RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. Results By combing the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analyses results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilites with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. Conclusion The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the analyses findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation, and is mature for public release and adoption.
Background
RNA sequencing (RNA-seq) has emerged as a powerful technology in transcriptome profiling [1][2][3]. Our previous side-by-side comparison between RNA-seq and microarray in investigating T cell activation demonstrated that RNA-seq analysis has many advantages over microarray analysis [4]. In contrast to hybridization-based microarray analyses, RNA-seq has the extra benefits of obtaining transcription start and stop sites, alternative spliced isoforms, and genetic variants in addition to gene expression levels. One apparent shortcoming of early nonstranded (standard) RNA-seq protocols is that a sequence read loses the strand origin information, thus making it difficult to determine accurately the expression levels of overlapping genes transcribed from opposite strands. A comparison of stranded with non-stranded RNA-seq led us to conclude that stranded RNA-seq provides a more accurate estimation of gene expression levels than nonstranded RNA-seq [5].
Short reads generated by RNA-seq experiments must first be aligned, or mapped, to a reference genome or transcriptome assembly. The general objective of mapping or aligning a collection of sequence reads to a reference is to discover the true location (origin) of each read with respect to that reference. Although a large number of read mapping algorithms have been developed in recent years [6][7][8][9][10], the accurate alignment of RNA-seq reads is still a challenge. Indeed, some features of a reference genome such as repetitive regions, assembly errors, and assembly gaps render this objective impossible for a subset of reads. Furthermore, because RNA-seq libraries are constructed from transcribed RNA, intronic sequences are not present in exon-exon spanning reads. Therefore, when aligning the sequences to a reference genome, reads that span exon-exon junctions have to be split across potentially thousands of bases of intronic sequence. Many of the RNA-seq alignment tools, including STAR [11], GSNAP [12], MapSplice [13], and TopHat [14], use reference transcriptomes to inform the alignment of junction reads. The benefits of using a reference transcriptome to map RNA-seq reads have been demonstrated clearly in our previous RNA-seq analyses [15,16].
The second important step in most RNA-seq analyses is gene or isoform quantification. A common method to estimate gene or transcript abundance from RNA-seq data is to count the number of reads that map uniquely to each gene or transcript. RPKM (reads per kilobase per million reads) is widely used to represent the relative abundance of mRNAs for a gene or transcript. A number of algorithms have been developed to infer gene and isoform abundance [17,18], including RSEM [19,20], Cufflinks [21], IsoEM [22], featureCounts [23], and HTSeq [24]. A gene can be expressed in one or more transcript isoforms; accordingly, its expression level should be represented as the sum of its isoforms. However, estimating the expression of individual isoforms is intrinsically more difficult because different isoforms of a gene typically have a high proportion of genomic overlap. Accordingly, a simpler union exon-based approach has been proposed, in which all overlapping exons of the same gene are first merged into union exons, and the total length of the union exons is taken to represent the gene length. We carried out a side-by-side comparison between union exon-based approach and transcriptbased method in RNA-seq gene quantification [25], and found that gene expressions were significantly underestimated when the union exon-based approach was used. Therefore, we strongly discourage using the union exonbased approach in gene quantification despite its simplicity.
Although the time and cost for generating RNA-seq data are decreasing, the analysis of massive amounts of RNA-seq data still remains challenging. Numerous software packages and algorithms for basic data quality control (QC) and analyses have been developed, which has led to the need to apply these tools efficiently to obtain results within a reasonable timeframe, especially for large datasets. Based on our own experience with inhouse analyses of multiple RNA-seq datasets of varying size using open source tools, the main challenges, gaps, and bottlenecks for large-scale RNA-seq data analyses can be summarized as follows: 1. Selecting appropriate software packages and setting software-specific parameters. Making the right or best choice can be difficult because many similar tools are available. Setting software parameters is even harder if not impossible, because it often requires both an in-depth understanding of the algorithms and sufficient hands-on experience, which disadvantages researchers new to this field. 2. Writing scripts to make different components work seamlessly in a pipeline. A variety of algorithms have been designed to perform different tasks, but they have been developed (and/or maintained) independently by different research groups and often use different programming languages. Moreover, those algorithms do not understand each other well, and the output(s) from one algorithm often cannot be used as input(s) for another algorithm. As a result, additional bridging scripts are necessary, which ideally requires a data analyst who is familiar with a number of programming languages, including Shell script, Perl, Python, Java, C/C++, and R. 3. Integrating and summarizing analyses results from individual samples. In general, most algorithms are implemented to process an individual sample. Consequently, the results of primary RNA-seq data analyses have to be further processed, integrated, and summarized for reporting, presentation, and downstream analysis. Usually, data integration and summarization are tedious and not easy to execute efficiently. 4. Identifying RNA-seq sample outliers. It is not uncommon that some samples have low quality and often substitute samples are not available, especially for RNA-seq of clinical specimen. RNA-seq is a complicated multistep process that involves sample collection/stabilization, RNA extraction, fragmentation, cDNA synthesis, adapter ligation, amplification, purification, and sequencing. Any mistake in this complex sequence of protocols can result in biased or even unusable data. Therefore, it is necessary to establish stringent RNA-seq data quality metrics to identify outliers that should be excluded from further downstream data analysis. 5. Detecting sample swapping and mislabeling. For large-scale RNA-seq studies in which hundreds or even thousands of RNA samples are sequenced and analyzed, it is not unusual that some samples are mishandled and appear to be swapped or sequenced more than once. Such errors can become a serious problem for downstream data analyses and interpretation of results, especially for longitudinal sample analyses. It is difficult to identify such mistakes based only on RNA-seq QC metrics and/or gene expression profiles. To confirm whether samples are from the same subject, it is more reliable to compare genetic markers among samples, such as single nucleotide polymorphisms (SNPs). 6. Sharing the results of RNA-seq data analyses with experimental scientists. Nearly all RNA-seq data analyses are performed using Linux clusters or workstations; however, analyses results in Linux are often inaccessible to most experimental scientists. RNA-seq data analyses typically generate a large number of files and large amounts of data that are difficult to comprehend or digest directly by experimental scientists. Therefore, easily accessible interfaces are needed that not only provide a quick and easy way for non-expert users to obtain high-level visualizations of the main RNA-seq analyses outputs (e.g., QC results), but also allow them to drill down further or export the results into additional analysis applications of their choice. To the best of our knowledge, very few RNA-seq related open source packages provide all these options.
To address these challenges, we have implemented a new pipeline named QuickRNASeq to advance the automation and visualization of RNA-seq data analyses results, and have constantly improved and refined its implementation since its inception. QuickRNASeq significantly reduces data analysts' hands-on time, which results in a substantial decrease in the time and effort needed for the primary analyses of RNA-seq data before proceeding to further downstream analysis and interpretation. Additionally, QuickRNASeq provides a dynamic data sharing and interactive visualization environment for end users. All the results are accessible from a web browser without the need to set up a web server and database. The rich visualization features implemented in QuickRNASeq enable non-expert end users to interact easily with the RNAseq data analyses results, and to drill down into specific aspects to gain insights into often complex datasets simply through a point-and-click approach.
Implementation
QuickRNASeq is designed for simplicity and visual interactivity. A few important principles dictate its implementation. First, all components of the pipeline are freely available in the public domain. Second, it is easy to deploy and use. Third, all analyses results including RNA-seq QC metrics, sample correlations, and gene quantifications are accessible via a web browser and can be further explored interactively. An overview of QuickRNASeq ( Fig. 1) illustrates its three main steps.
Step #1 performs RNA-seq read mapping, counting, aligned read QC, and SNP calling.
Step #1 processes each sample completely independently of each other, and is computationally intensive. Therefore, all samples can be processed in parallel in a high performance computing cluster (HPC), or in a serial fashion on a standalone workstation.
Step #2 merges the results from the individual sample and generates an integrated and interactive project report for data interpretation in Step #3.
Input files
In addition to raw sequence reads in FASTQ format, the only other required inputs are a reference genome file in FASTA format and a corresponding gene annotation file in GTF (gene transfer format). QuickRNASeq can be applied to any species as long as its genome and gene annotations are available; for example, human, mouse, rat, and cynomolgus or rhesus monkeys. A gene annotation file can exist in many formats, but GTF has become the de facto standard; however, not all tools accept gene annotation files in GTF format as input. For example, RSeQC (RNA-seq quality control package) [26] accepts gene annotation only in BED (browser extensible display) format, though the majority of gene annotations in the public domain are not available in BED format. To ensure that the exact same annotations are used by the different components in QuickRNASeq, we wrote Perl scripts to convert gene annotation files from GTF to BED format. This avoids any discrepancy or inconsistency among gene annotations that are available in different formats.
Step #1: single sample processing This step consists mainly of read mapping, counting, aligned read QC, and SNP calling, and the corresponding algorithms used to perform these tasks are STAR [11,27], featureCounts [23], RSeQC [26], and VarScan [28] respectively. STAR aligns spliced sequences of any length with moderate error rates, provides scalability for emerging sequencing technologies, and generates output files ready for transcript/gene expression quantification [27]. The algorithms featureCounts [23] and HTSeq [24] are comparable in terms of counting results, but featureCounts is considerably faster than HTSeq by an order of magnitude for gene-level summarization and requires far less computer memory. Read mapping and counting typically are very time consuming, and we chose STAR and featureCounts in QuickRNASeq mainly because of their high speed and accuracy.
The RSeQC [26] package provides a number of modules that can comprehensively inspect sequence quality, nucleotide composition bias, PCR bias, GC bias, mapped reads distribution, coverage uniformity, and strand specificity. All such QC metrics are valuable for outlier detection. VarScan [28] is a platform-independent software tool that can detect variants in RNA-seq data. It employs a robust heuristic/statistic approach to call variants that meet desired thresholds for read depth, base quality, variant allele frequency, and statistical significance. To verify samples from the same subject, it is unnecessary to call SNPs across all chromosomes. In practice, it is sufficient to use only SNPs from the chromosome that contains the major histocompatibility complex (MHC) genes. For human, mouse, and rat, these are chromosomes 6, 17, and 20, respectively. As mentioned earlier, numerous software packages that can perform similar tasks are freely available; however, we found that the Step #1 is computationally intensive, and processes individual samples independently.
Step #2 integrates RNA-seq data analysis results from the individual samples in Step #1 and generates a comprehensive project report.
Step #3 offers interactive navigation and visualization of RNA-seq data analyses results combination of STAR, featureCounts, RSeQC, and VarScan represents one of the best toolsets.
Computational algorithms for RNA-seq analyses are continuously being improved, including STAR, feature-Counts, RSeQC, and VarScan. Therefore, we designed our pipeline to be independent of its underlying software version and ensured that it can handle RNA-seq samples from a variety of species. To decouple the dependence of QuickRNASeq pipeline upon underlying computational algorithms and species, we introduced a plain text configuration file that can store project, species, and software-specific parameters. This configuration file also improves the reproducibility of RNA-seq data analyses and simplifies the command lines in QuickRNASeq. For the convenience of QuickRNASeq users, a configuration file template has been provided for easy customization.
Step #2: data integration, QC, and summary Step #2 aims mainly to merge results generated in Step #1 for each individual RNA-seq sample. Additionally, it runs many across-sample calculations, such as correlationbased QC and a SNP correlation matrix. As shown in Fig. 1, the second step performs the following tasks: Each individual task listed above is performed by a corresponding Bash, Perl, or R script, and a master script coordinates the execution of all these tasks. The main scripts and their functions are listed in Table 1. As shown in Table 1, the primary RNA-seq data analyses can be performed by as few as two shell command lines (starfc-qc.sh and star-fc-qc.summary.sh). All the plots generated in Step #2 are ready for presentations, and the gene counting table can feed downstream differential analysis algorithms. The highly automated features in Step #2 make QuickRNASeq an efficient tool for typical standard RNA-seq analyses, and our pipeline substantially reduces the hands-on time (not the computational time) that data analysts have to spend on primary RNA-seq data analyses.
We implemented a correlation-based QC to detect potential outliers in the RNA-seq data by calculating a MADScore for each sample. In general, an outlier appears to deviate markedly from other samples in a RNA-seq study, and thus its correlation with other samples will be relatively low. The MADScore is calculated as follows. For each sample, calculate the correlation difference, which is simply the difference between the average of all the pairwise correlations that involve the sample and the average of all the pairwise correlations that do not involve the sample. If a sample is an outlier, then the difference will be negative. Accordingly, there will be a vector of values (one for each sample). Then this vector of difference is converted to MADScores (robust Z-scores) by subtracting the medians and dividing it by median absolute deviations (MAD). A standard MADScore cutoff (e.g., −5) is set to determine the outliers.
Step #3: interactive data visualization Primary RNA-seq data analyses results are represented by a standard file folder structure, and an integrated supports a large number of plotting types and offers sample grouping, data transformation, and many other features that are usually only seen in commercial software. SlickGrid [32] is a powerful web-based spreadsheet component that supports searching, sorting, and pagination of tabular datasets, and can be scaled to handle millions of data points. Nozzle [33] is an R package that provides an API (application programming interface) to generate HTML reports with dynamic user interface elements. Nozzle is designed to facilitate summarization and rapid browsing of complex results in data analysis pipelines where multiple analyses are performed frequently on big datasets. By combining these visualization libraries with RNA-seq analyses results, we created multiple dynamic HTML pages to present the RNA-seq QC metrics, and to present gene expression profiles in boxplot and heat map formats dynamically and interactively.
Results and discussion
Test run of QuickRNASeq on a publicly available dataset GENCODE annotation [34,35] is based on Ensembl [36] but with improved coverage and accuracy, and thus is used by the ENCODE consortium [37] as well as many other projects (e.g., 1000 Genomes [38]) as the reference gene set. Therefore, we chose the GENCODE annotation for our test run. GENCODE Release 19 was downloaded from the GENCODE web site [35]. An analysis of RNA-seq data from 1641 samples across 43 tissues of 175 individuals in the Genotype-Tissue Expression (GTEx) project [39,40] revealed the landscape of gene expression across tissues, and catalogued thousands of tissue-specific genes. For our test run, we selected 48 GTEx samples from five donors. The sample identifiers, annotations, and RNA-seq mapping summaries for all 48 samples are listed in Table 2. Note that a sequence read can be aligned uniquely to a reference genome, or mapped to multiple locations. Some reads cannot be mapped to the reference genome at all. The percentages of reads that were uniquely mapped, mapped to multiple locations, or unmapped are given in Table 2. The complete report for our test run of the GTEx dataset can be downloaded directly from the QuickRNASeq project home page, and is briefly described below.
All analyses results accessible from a single entry webpage
A screenshot of the entry webpage for the results of the test run is shown in Fig. 2. The page uses Noozle's presentation template, which collates sections into a single neat web page with functionalities to expand or collapse individual or whole sections. In the "QC Metrics" section, both static images and interactive plots are provided for a variety of QC measures, including read mapping summaries, read counting statistics, SNP correlations among samples, number of expressed genes at various RPKM cutoffs, and correlations among gene expression profiles. All static QC plots can be enlarged into a new window by clicking on the iconized image, and the corresponding more dynamic and interactive plots are accessible by clicking the pointing hand icon. The interactive plots of QC measures offer many interactive features over static images, such as zooming in and zooming out. The raw data that was used to generate these figures can be accessed simply by clicking the corresponding hyperlinked text. The "Parallel Plot" and "Expression Table" SNP correlation to detect mishandled samples SNP correlation plots help to verify whether samples are from the same subject or not. By definition, SNP concordance among samples from the same subject will be much higher than those samples from different subjects. In the first case, typical examples may be samples of different tissues from the same subject or longitudinal samples from the same subject. For simplicity, we selected samples from three donors to illustrate the usefulness of the SNP concordance plot (Fig. 3). As we expected, the SNP correlation plot in Fig. 3a is clustered by the donors. The corresponding correlation plot after the swap of SRR598044 and SRR608096 is shown in Fig. 3b where the correlation pattern indicates that the two samples are wrongly labeled. The true identifiers for the two swapped samples are indicated on the right of the plot. Sample swapping is typically very difficult to detect when it occurs. We have tried different methods to rectify mislabeled or swapped samples and found that a SNP correlation-based approach gave the best results (data not shown).
Integrated QC metrics for individual sample
The parallel plot in Fig. 4 is a common way to visualize high-dimensional data and it is used widely in multivariate data analysis. We implemented the parallel plot to link all related QC measurements for all samples into one plot. Each axis within the plot represents a sample feature or a QC measurement. There are multiple ways users can control the look and feel of the plot, such as selecting a subset of samples to view, changing the order of the axes by drag-and-drop, and removing unwanted axes for a clearer view by dragging them off the plot to either side. The linked table is searchable, and for any selected sample in the table, its corresponding QC measures are highlighted simultaneously on the plot with tooltips showing the measurement values. MAD, an alternative and more robust measure of dispersion has been proposed to detect outliers [41]. We extended MAD to implement a correlation-based QC to detect potential outliers. The MADScore was calculated as described above, and is listed in the table in Fig. 4. To determine whether a potential outlier identified from the correlation-based QC is a true outlier, we recommend that the corresponding QC report is also checked. The comprehensive QC report for an individual sample can be accessed by clicking the corresponding sample identifier in the table in Fig. 4. For example, some representative RNA-seq QC metrics for SRR603068 (highlighted in Fig. 4) are shown in Fig. 5. The metrics correspond to reads duplication rate, distribution of reads versus percentages of GC content, nucleotide composition bias, distribution of read quality score, plot of junction saturation, and characteristics of the splicing junction sites.
Two strategies are used to determine the read duplication rate, as indicated in Fig. 5a. For the sequence-based strategy, reads with exactly the same sequence content are regarded as duplicated reads, whereas, for the mapping-based strategy, reads mapped to the same genomic location are regarded as duplicated reads. For spliced reads, reads mapped to the same starting position that splice the same way are regarded as duplicated reads. SRR603068 is a brain sample, and its nucleotide composition is biased towards A/T, as indicated in Fig. 5c. For RNA-seq data, we often want to know whether the sequencing depth is enough for the analyses, and the saturation plot shown in Fig. 5d is very valuable for this. For a well annotated organism, the number of expressed genes in a particular tissue is almost fixed so the number of splice junctions is also fixed. These numbers should be rediscovered from saturated RNA-seq data. The plot in Fig. 5d indicates that more reads should be sequenced for performing alternative splicing analyses. In Fig. 5f, all multiple splicing events spanning the same intron have been consolidated into one splicing junction, and a novel junction is considered as complete_novel if neither of the two splice sites can be annotated by a gene model. Otherwise, it is partial_novel, meaning that one of the splice site (5′SS or 3′SS) is new, while the other splice site is annotated (known). While the majority of junctions in Fig. 5f are annotated, over 20 % are either complete_novel or partial_novel.
Interactive visualization of gene expression profiles
One of the most important objectives in many RNA-seq studies is to estimate gene expression levels under certain biological or disease conditions. With the help of the visualization tools shown in Fig. 6, differences in gene expression levels across samples under different conditions can be highlighted easily by a few mouseclicks either in the boxplot (Fig. 6b) or heat map view (Fig. 6c). A keyword search box at the top of the table (Fig. 6a) provides an easy way to look at related genes such as kinases and interleukins. Gene expression profiles can be grouped and split on the fly according to the sample annotations, such as tissue type, visiting time, and treatment arms. Moreover, the look and feel of a plot, such as font size, color, plot type, and scales for x-axis and y-axis, can be customized by right clicking on the plot and selecting relevant options from the dropdown menu. An annotated heat map (Fig. 6c) is informative in comparing gene expression profiles across different conditions, and can help reveal the relationships between gene expression levels and corresponding biological conditions. Detailed instructions on how to use advanced visualization features of the interactive plot are described in the QuickRNAseq user guide that is bundled with the QuickRNASeq package.
Scalability of QuickRNASeq
All samples can be processed in parallel in Step #1 of the QuickRNAseq pipeline (Fig. 1). In principle, there is no limitation to the number of RNA-seq samples, as long as enough storage is available. For easy data sharing, the web 2.0 visualization tools allow user to interact with the analyses results without the need of a web server and/or database. Therefore, in QuickRNAseq we pack all the data into JavaScript objects within a HTML document. For a RNA-seq project with 1000 samples, the number of gene expression data points can exceed 20 million, assuming that more than 20,000 genes are expressed. As a result, most browsers such as Internet Explorer, Safari, Firefox, and Chrome fail to load such huge datasets because they surpass the memory limit allocated to these web browsers. To solve this problem, we used pako [42], a web-based compression technique, to significantly reduce the number of objects to be created without compromising the end user experience.
Limitations and running of QuickRNASeq
QuickRNASeq is presumed to be executed in a HPC environment, which can process multiple samples in parallel. The out-of-the-box QuickRNASeq pipeline has been fully tested in a HPC computing environment using the IBM Platform's Load Sharing Facility (LSF) [43], a powerful workload management platform for demanding, distributed HPC environments. The IBM Platform's LSF provides a comprehensive set of intelligent, policydriven scheduling features that enable users to utilize all the computing infrastructure resources and ensure optimal application performance. In addition to LSF, many other notable job scheduling software are available [44]. For a cluster that uses a job scheduler other than LSF, star-fc-qc.sh (implementation of Step #1 in Fig. 1) needs to be customized accordingly. The only required change in the script is the way of job submission, and this command is dependent the job scheduling software. For researchers with no access to a HPC computing environment, we implemented star-fc-qc.ws.sh, a customized script that runs on a standard Linux workstation. Of course, analyzing large RNA-seq datasets on a single workstation is not typical and not recommended.
For gene quantifications, QuickRNAseq requires a complete genome sequence and well-annotated genes as inputs. The pipeline is not intended for the discovery of novel isoforms. QuickRNASeq is designed for use by bioinformaticians, experimental biologists, and geneticists in the fields of genome-scale analysis, functional genomics, and systems biology; however, downloading, installing, and running the QuickRNASeq pipeline in a Linux environment will require some basic computerbased expertise. A README.txt is provided along with the QuickRNASeq package, which explains step-by-step Fig. 6 Interactive visualization of gene expression. a Gene expression levels of selected genes are displayed in a searchable table. b Boxplot view of the expression levels of CKM (creatine kinase, muscle). c Heat map view of gene expression levels of selected genes. Expression values can be grouped or split according to the sample annotations, such as tissue type. Each plot is highly customizable on the fly by right clicking on the plot and selecting relevant options from the dropdown menu how to run QuickRNASeq. In addition, users can examine the configuration and sample annotation file under the test_run folder in the QuickRNASeq package. QuickRNASeq can be run without a sample annotation file, but it is strongly recommended that users provide meaningful annotations for all samples. A proper annotation file should be tab delimited, and QuickRNASeq requires that the first and second columns correspond to sample and subject identifiers, respectively. Sample names should start with a letter, and should not contain any white spaces.
In QuickRNASeq, we selected FeatureCounts, a union exon based approach, for gene quantification. According to our own most recent research [25], union exon based approach is discouraged. Unfortunately, there is still a long way to go for the switch from union exon based approach to transcript-based method in estimation of gene expression levels because of the inaccuracy of isoform quantification [25], especially for those isoforms with low expression, and gene-based annotation databases. Traditionally, functional enrichment analyses rely upon annotation databases such as Gene Ontology (GO) [45], Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways [46] and other commercial knowledge systems. All such annotations have been recorded and centered on genes, not transcripts or isoforms. In practical RNAseq data analyses, the switch from gene to isoform in quantification should ideally go with the switch in annotation hand by hand.
The current version of QuickRNASeq focuses on the automation of primary processing steps in RNA-seq data analyses, and these steps are in general biological question independent. We plan to expand QuickRNA-Seq to downstream analyses in the future, including differential analysis and pathway enrichment. Downstream analyses are usually driven by biological questions and experimental designs and thus different from project to project. How to automate such analyses in a user friendly manner remains a challenge for our practical implementation.
QuickRNASeq versus QuickNGS
While this paper was in preparation, Wagle et al. [47] published QuickNGS, a new workflow system to analyze data from multiple next-generation sequencing (NGS) projects at a time. QuickNGS uses parallel computing resources, a comprehensive backend database, and the careful selection of previously published algorithmic approaches to build fully automated data analysis workflows. An overview of our comparison of the QuickRNASeq pipeline with the QuickNGS workflow is provided in Table 3. In summary, compared with QuickNGS, QuickR-NASeq is more tailored to RNA-seq data. In QuickRNA-Seq, we developed scripts to perform RNA-seq-specific data integration and to generate integrated and interactive project reports in a fully automated manner. All the results from QuickRNASeq can be shared easily and further explored from a web browser on a personal computer even without internet access. Our pipeline QuickRNASeq provides a noticeable advancement of RNA-seq data analyses by incorporating a high degree of automation together with interactive visualizations.
Conclusions
By combing the best open source tool sets developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we implemented the QuickRNASeq pipeline, which significantly reduces the efforts involved in primary RNA-seq data analyses and generates an integrated project report for data sharing and interactive visualization. The dynamic visualization features enable end users to explore and digest RNA-seq data analyses results intuitively and interactively, and to gain deep insights into RNA-seq datasets. The configuration file contains project, species, and software related parameters, and thus improves the reproducibly in RNA-seq data analyses. We have already applied QuickRNASeq to in-house large scale RNA-seq projects, and its current version is stable and mature for public release and adoption. | 7,382.2 | 2016-01-08T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Extracellular reduction of solid electron acceptors by Shewanella oneidensis
Shewanella oneidensis is the best understood model organism for the study of dissimilatory iron reduction. This review focuses on the current state of our knowledge regarding this extracellular respiratory process and highlights its physiologic, regulatory and biochemical requirements. It seems that we have widely understood how respiratory electrons can reach the cell surface and what the minimal set of electron transport proteins to the cell surface is. Nevertheless, even after decades of work in different research groups around the globe there are still several important questions that were not answered yet. In particular, the physiology of this organism, the possible evolutionary benefit of some responses to anoxic conditions, as well as the exact mechanism of electron transfer onto solid electron acceptors are yet to be addressed. The elucidation of these questions will be a great challenge for future work and important for the application of extracellular respiration in biotechnological processes.
Introduction
Exactly 30 years ago in 1988 Ken Nealson and Charles Myers published a report in which they describe bacterial manganese reduction and growth with manganese as the sole electron acceptor. Their model organism was a bacterium at this point named Alteromonas putrefaciens MR-1 (Nealson and Myers, 1988). Since then, the genus of this strain has been renamed Shewanella to honour the Scottish microbiologist James M. Shewan (MacDonell and Colwell, 1985) and the species is called oneidensis, since the organism was isolated from lake Oneida in Upstate New York (Venkateswaran et al., 1999). The genus Shewanella shows a very high respiratory versatility. Most of its representatives can reduce a variety of inorganic and organic electron acceptors that can be soluble (e.g. dimethylsulfoxide (DMSO), fumarate, nitrate, nitrite, trimethylamine-N-oxide (TMAO), oxygen, humic acids) or in a solid state (ferrihydrite, hematite, birnessite, electrodes). Their niches seem to be redox stratified environments in which the electron donor is not the limiting factor (Nealson and Scott, 2006;Fredrickson et al., 2008). To date, Shewanella oneidensis MR-1 is the best understood model to study extracellular electron transfer processes. Its strategy is to use c-type cytochromes as electron transfer proteins and flavins to facilitate the electron transfer process. In fact, the use of c-type cytochromes is a widespread solution to transfer electrons to the cell surface within various bacterial genera and a high number of c-type cytochrome encoding genes is characteristic for many dissimilatory metal reducers (Heidelberg et al., 2002). S. oneidensis is a Gram-negative organism and respiratory electrons will have to pass two membranes and the periplasm in order to get into contact with the solid electron acceptor at the cell surface. The final electron transfer step for the reduction of insoluble electron acceptors in S. oneidensis seems to be rather unspecific. Consequently, outer membrane cytochromes can reduce a wide range of substrates ranging from different insoluble minerals and electrodes to soluble compounds like humic acids or metal complexes (Richter et al., 2012); Table 1). The reduction of toxic metals that become insoluble upon reduction like uranium or chromium and the reduction of electrodes attracted the interest of applied microbiologists and engineers to the physiology of the model organism S. oneidensis. Moreover, its ability to reduce insoluble iron minerals is of high importance from an environmental science perspective. Iron is the fourth most abundant element in soil. Hence, its reduction has widespread implications for biogeochemical cycling. The reductively dissolved ferrous iron is an important trace nutrient and can initiate a number of environmentally relevant abiotic redox transformations as for instance the reduction of nitroaromatic compounds and azo dyes (Rügge et al., 1998;Elsner et al., 2004).
In this review, we will follow the path of the electrons from the cytoplasm to the cell surface (illustrated in Fig. 1). We will summarize the achievements of many groups working with S. oneidensis and will highlight novel results and research directions. Even after 30 years of research, S. oneidensis has still secrets that are not understood and we will formulate several open research questions at the end of the review.
The central carbon metabolism in S. oneidensis
Shewanella oneidensis can use only a limited number of carbon sources under anoxic conditions. Here, growth was so far reported with lactate, pyruvate, N-acetyl-glucosamine and DNA (Lovley et al., 1989;Pinchuk et al., 2008;Hunt et al., 2010;Brutinel and Gralnick, 2012a). Still, under oxic conditions growth is possible with a wider variety of substrates including different dipeptides, amino acids and short organic acids (Table 2). While some Shewanella strains are able to grow on glucose, S. oneidensis lacks the ability to import and phosphorylate glucose to glucose-6-phosphate. Nevertheless, Howard and colleagues could show a rapid adaptation of S. oneidensis to aerobic growth on glucose (Howard et al., 2012). This ability is due to a deletion in a genomic region that includes nagR, the gene for the regulator of the N-acetyl-glucosamine catabolism. This leads to the constitutive expression of the N-acetyl-glucosamine permease and kinase genes. The corresponding enzymes both have a promiscuous activity toward glucose (Chubiz and Marx, 2017). Other researchers achieved growth on glucose also under anoxic conditions by the heterologous expression of a glucose facilitator and a glucokinase and established a glucosedependent current production in a bioelectrochemical system (Choi et al., 2014;Nakagawa et al., 2015). An adaptation strategy similar to the experiments conducted by Howard lead recently to the development of an S. oneidensis strain that can use xylose as carbon and electron source under oxic and anoxic conditions (Sekar et al., 2016). Now, with the design of xylose and glucoseconsuming strains, lignocellulose hydrolysates could be used as a sustainable carbon source for biotechnological conversions catalysed by S. oneidensis.
Shewanella oneidensis uses the EntnerDoudoroff (ED) pathway for sugar (N-acetyl-glucosamine) catabolism (Scott and Nealson, 1994;Serres and Riley, 2006; Yang et al., 2006). Contrary to what was believed for a long time, the ED pathway is wide spread within aerobic or facultative anaerobic heterotrophic organisms (Chen et al., 2016). It has a lower ATP yield compared to glycolysis but it is thermodynamically more favourable, which results in considerably less costs for the amount of enzymes required to sustain the flux through the pathway (Flamholz et al., 2013). In other words, the ED pathway could be advantageous under conditions where ATP is not the limiting factor, which might be the case in respiratory organisms. Still, under anaerobic respiratory growth conditions ATP from substrate level phosphorylation seems to be the only or at least major available energy pool, while electron transfer to the terminal electron acceptors is not accompanied with an oxidative phosphorylation-based ATP production. Consequently, a deletion mutant lacking all genes of the ATP synthase showed almost no phenotype under anoxic growth conditions with fumarate as electron acceptor, while it was highly affected under oxic conditions (Hunt et al., 2010).
In line with the apparent anaerobic substrate level phosphorylation based ATP production is a common downregulation of the citric acid cycle under anoxic conditions, which mainly serves for the production of precursor molecules for biomass growth. Uncommon is that the organism uses a longer oxidative branch to produce succinyl-CoA instead of a reductive branch leading form oxaloacetate to succinyl-CoA (Brutinel and Gralnick, 2012b). Also uncommon is the apparent use of the 2-methylcitrate synthase under anoxic conditions instead of the canonical citrate synthase for the conversion of oxaloacetate and acetyl-CoA into citrate (Brutinel and Gralnick, 2012b).
Respiratory electrons enter the quinone-pool under anoxic conditions via NADH oxidation mainly by Nqr1 or the oxidation of carbon compounds by D-or L-lactate dehydrogenases or formate dehydrogenases (Myers and Myers, 1993;Saffarini et al., 2002;Myers et al., 2004;Pinchuk et al., 2010Duhl et al., 2018). Recent results indicate that the redox potential of the terminal electron acceptor influences the percentage to which the NADH-and formate-dehydrogenases contribute to the reduction of the quinone pool. Using a bioelectrochemical system and a working electrode poised to either +0.5 V, +0.2 V or 0 V vs. SHE (Standard Hydrogen Electrode), Hirose and colleagues could elucidate that a deletion mutant in all four NADH dehydrogenase encoding gene clusters is almost completely unable to produce current at +0.5 V (Hirose et al., 2018). In contrary, the difference in current production was not significant compared to the wild type at the lower potentials tested. This observation, that higher redox potentials of anoxic electron acceptors lead to a higher percentage of electron flux via NADH dehydrogenases could also be verified using fumarate (E 0 ′ = -0.03 V), nitrate (E 0 ′ = -0.43 V) and MnO 2 (E 0 ′ = -0.53 V). The NADH dehydrogenase mutant showed a decreased ability to use nitrate and MnO 2 but showed a similar growth rate compared to the wild type with fumarate. The regulation of the contribution of these two ways to reduce the intracellular quinone pool is advantageous to the cell, as NADH dehydrogenasede-pendent quinone reduction leads to a higher production of proton motif force per electron transferred compared to formate dehydrogenase. Not only the contribution of membrane bound dehydrogenase but also that of inner membrane quinols change with the potential of the electron acceptor. Low potentials lead predominantly to a reduction of the cellular menaquinone pool while higher potentials also trigger the use of ubiquinone-8 as electron carrier (Hirose et al., 2018). Still, the dominant quinone under anoxic conditions seems to be menaquinone-7 (MQ-7, (Venkateswaran et al., 1999)), which has a dual function for the transfer of respiratory electrons onto the cell surface, as it is also a specific cofactor for the menaquinol oxidase and tetraheme c-type cytochrome CymA (cytoplasmic membrane protein A) located on the outer leaflet of the cytoplasmic membrane (McMillan et al., 2012). Interestingly, the dominant catalytic mode of CymA under in vitro conditions is the reduction of MQ-7 and hence the reverse of its natural role (McMillan et al., 2013). Nevertheless, it was also shown in vivo that it is possible to use CymA for the import of electrons into the quinone pool if the organism is supplied with a cathode poised to a suitable potential as electron donor and either fumarate or oxygen as electron acceptor (Ross et al., 2011;Rowe et al., 2018).
CymA distributes electrons to a variety of electron transfer pathways, with terminal electron acceptors that are reduced either within the periplasm (nitrate, nitrite, fumarate, hydrogen peroxide) or at the cell surface (e.g. metal oxides, quinone analogues, dimethyl sulfoxide (DMSO)) (Myers and Myers, 2000;Schwalb et al., 2002; . The table lists only compounds that were metabolized similarly to lactate.
Acetate Nealson and Myers (1990) 3C
Propionate Scott and Nealson (1994) a Mixed results, see also Serres and Riley, (2006). b contrary to Biolog data by Rodrigues et al. (2011). 2003; Gralnick et al., 2006;Schuetz et al., 2011). Since CymA is also necessary for nitrate and MnO 2 reduction and Hirose and colleagues observed also the use of ubiquinone-8 as electron carrier under anoxic conditions, it must be possible to oxidize with this enzyme menaquinol and ubiquinol. The redox potential window of CymA is between -0.3 and 0 V (Hirose et al., 2018). Hence, the oxidation of ubiquinol-8 would most probably necessitate the completely oxidized state of CymA to render this reaction thermodynamically feasible.
Electron transfer in the periplasmic space of Shewanella
The periplasm of S. oneidensis spans approximately 235 Å and for this reason was proposed to be too wide for direct electron transfer between CymA and the decaheme metal reducing protein A (MtrA) bound to the outer membrane complex MtrCAB Fonseca et al., 2013;Edwards et al., 2018). Multiheme cytochromes predicted to be localized to the periplasm of S. oneidensis and present in high amounts during anaerobic growth may receive electrons from CymA and transfer them to the inner face of the outer membrane, including to MtrA (Richardson et al., 2012). The most abundant periplasmic proteins found in anaerobically grown Shewanella cells are the monoheme cytochrome ScyA, the fumarate reductase FccA and the small tetraheme cytochrome STC (Tsapin et al., 2001;Meyer et al., 2004). Although the genes of these cytochromes are up-regulated during extracellular respiration (Rosenbaum et al., 2012), individual gene deletion mutants suggested that none of these proteins per se play a critical role in metal reduction (Schuetz et al., 2009;Gao, Barua, et al., 2010). While ScyA was identified as a mediator of electron transfer between CymA and the diheme c-type cytochrome peroxidase CcpA , FccA was shown to be the only respiratory fumarate reductase in S. oneidensis MR-1 (Myers and Myers, 1997). FccA is a soluble 64 kDa unidirectional fumarate reductase (Pealing et al., 1992) that is composed of an N-terminal domain with four c-type hemes which are homologous to STC, a C-terminal flavoprotein domain with a non-covalently bound FAD group and a flexible clamp domain that may control the access of the substrate to the active site (Taylor et al., 1999). The four hemes in FccA are arranged in a quasi-linear architecture that allows an efficient conduction of electrons across the length of the N-terminal domain to the active site of the protein (Taylor et al., 1999). Thermodynamic and kinetic studies have shown that the reduction of the heme domain occurs through hemes I and II of FccA, that receive the electrons from its physiological partner (Fonseca et al., 2013) and distribute them by intramolecular electron transfer to the other hemes according to their reduction potential (Pessanha et al., 2009;Paquete et al., 2014). Since electron exchange among the hemes is faster (> 10 5 s -1 ) than electron transfer to the FAD group (~100 s -1 ) (Jeuken et al., 2002;Pessanha et al., 2009), two electrons are always available for the reduction of fumarate. Besides fumarate reduction, FccA also functions as an electron transfer shuttle between CymA and MtrA for the reduction of extracellular substrates (Schuetz et al., 2009). It was demonstrated that the degree of reduction of FccA controls the activity of this moonlighting protein (Paquete et al., 2014). At low electron flux from the cell metabolism, FccA receives the electrons from CymA and transfers them to the outer-membrane complexes for the reduction of insoluble electron acceptors (Schuetz et al., 2009;Fonseca et al., 2013). As the electron flux increases, FccA will become fully reduced, which enhances the catalytic efficiency of fumarate reduction, offering another option for discharging the electrons to prevent metabolic arrest. This switching mechanism allows Shewanella to quickly alternate between reduction of soluble and insoluble electron acceptors, without the production of new enzymes (Paquete et al., 2014). The physiological function of the 12 kDa tetraheme cytochrome STC was for several years unclear, mainly due to a lack of a phenotype under conditions of metal or DMSO reduction Gao, Barua et al., 2010). Recent studies have shown that this protein is involved in several anaerobic respiratory processes (Fonseca et al., 2013;Alves et al., 2015) including the reduction of solid electron acceptors at the cell surface. STC functions as an electron transfer hub, receiving electrons from CymA and distributing them to a number of terminal oxidoreductases (Alves et al., 2015). It is able to interact with MtrA for the reduction of metal compounds (Fonseca et al., 2013), DmsE for the reduction of DMSO, octaheme tetrathionate reductase (OTR) for the reduction of nitrogen compounds, but not with c-type cytochrome Nir for nitrite reduction (Alves et al., 2015). This study, together with Fonseca et al. (2009), also revealed that STC does not operate as a molecular wire, and that it functions like a cul-de-sac that forces electrons to enter and leave the protein by the same heme (Fonseca et al., 2009;Alves et al., 2015). This enables Shewanella to transfer electrons within the periplasm in a controlled and efficient manner, preventing the risk of diverting electrons to side redox pathways or production of radical species that would damage the cell.
A double deletion mutant of S. oneidensis in both fccA and cctA (the gene encoding STC) has shown that these proteins share a functional redundancy (Sturm et al., 2015), being both involved in extracellular respiration. These proteins have an overlapping activity and at least one is necessary for coupling respiratory oxidation of CymA to efficient electron transfer to ferric citrate, DMSO and nitrate (Sturm et al., 2015). Interestingly, in vitro studies have shown that STC and FccA do not interact with each other (Fonseca et al., 2013), which suggests the coexistence of two non-mixing redox pathways to transfer electrons across the periplasmic gap to terminal reductases, one involving FccA and the other involving STC (Fonseca et al., 2013).
Electron flux across the outer membrane
In S. oneidensis the electron flux across the outer membrane can occur via one of four porin cytochrome conduits; the MtrCAB complex, (Richardson et al., 2012); the MtrFED complex the DmsEFA DMSO reductase system (Gralnick et al., 2006) and the SO_4359SO_4360 system (Schicklberger et al., 2013). Of these the MtrCAB complex, which is constitutively expressed, is the best characterized and has a clear role in dissimilatory metal reduction. The MtrFED and SO4359-60 complexes have no clear phenotype, but allow extracellular respiration on soluble Fe(III)chelates when exogenously expressed in mtrCAB deletion strains Schicklberger et al., 2013). Both MtrCAB and MtrFED porin cytochrome complexes contain a transmembrane barrel that forms a putative channel in the outer membrane. Two multiheme cytochromes enter the channel from opposite sides of the membrane and bind close enough so that electrons are capable of hopping between hemes of adjacent cytochromes, forming a functional electron conduit across the outer membrane (Hartshorne et al., 2009). This electron conduit can be isolated as a stable complex with a length of approximately 170 Å and inserted into proteolipsome models to show bidirectional electron transfer across the lipid bilayer, as well as rapid reduction of different types of insoluble iron oxide (White et al., 2013;Edwards et al., 2018).
The genes mtrC, mtrA and mtrB are expressed within the same operon and the synthesized peptides are transported through the Sec pathway to the periplasm in an unfolded state (Shi et al., 2008). Both MtrC and MtrA are folded by the S. oneidensis cytochrome c maturation pathway (ccm) in the periplasm and MtrC is transported to the cell surface by the Type II secretion pathway while MtrA remains in the periplasm (DiChristina et al., 2002). MtrB is transported through the cytoplasmic membrane and across the periplasm before assembling as a porincytochrome complex in the outer membrane with MtrA (Schicklberger et al., 2011). Extracellular MtrC, which is anchored to the outer membrane by a covalently attached lipid, binds to the MtrAB complex on the surface of the cell to generate the fully functional MtrCAB complex (Hartshorne et al., 2009;Edwards et al., 2018).
The interactions between the three proteins indicate that MtrA interacts more tightly with MtrB, and is essential for correct folding of MtrB. Different knockout studies have shown that correct MtrB and folding in the outer membrane requires the presence of MtrA (Hartshorne et al., 2009;Schicklberger et al., 2011) and it is possible that MtrA also functions as a chaperone for MtrB, by acting as a scaffold for MtrB assembly.
The molecular structure of MtrB is not yet known, but topology predictions suggest that MtrB is a transmembrane β-barrel with 28 antiparallel β-strands (Beliaev and Saffarini, 1998). Consistent with the structure of other outer membrane cytochromes, the loops connecting the β-strands on one side of the predicted β-barrel are longer than the other. These extended loops are typically exposed on the cell surface, allowing them to interact with extracellular ligands. They also play an important role in the maturation of transmembrane barrels, as they fold inside the barrel during membrane insertion and then interact with the charged groups on the membrane exterior. Surprisingly, the extended loops of MtrB are not on the surface but appear to interact with MtrA, rather than MtrC, suggesting that the folding mechanism for incorporation of MtrB into the outer membrane of Shewanella is different to that of other transmembrane β-barrels (White et al., 2013).
The soluble N-terminal domain of MtrB is approximately 16 amino acids long, and contains a CXXC motif that is present within the Shewanella MtrB family, but not within paralogs of MtrB in other strains. The first of the two cysteines (cysteine-42 in S. oneidensis) was shown to be essential for Fe(III) reduction in S. oneidensis and it has been suggested that these cysteines may be involved in MtrB transport across the periplasm and correct insertion into the outer membrane (Wee et al., 2014).
The locus containing the mtrCAB operon also contains omcA, which encodes for a second outer membrane cytochrome that is expressed independently of the mtr-CAB operon. In contrast to the essential mtrA and mtrB genes, deletion of either mtrC or omcA only causes a partial loss of Fe(III) reduction . Deletion of both mtrC and omcA had a cumulative effect resulting in a near complete loss of Fe(III) reduction, suggesting that both OmcA and MtrC are capable of accepting electrons from the MtrAB transmembrane electron conduit. This is surprising as the only interactions shown in vitro have been between MtrC, MtrA and MtrB (Ross et al., 2007). In vivo crosslinking studies are still the only evidence for an MtrCABOmcA interaction, and this might have been the trapping of a transitional interaction rather than a stable complex (Myers et al., 2004;Shi et al., 2006). The X-ray crystal structures of both MtrC and OmcA have been solved to atomic resolution and show significant structural similarity (Edwards et al., 2014;. Both structures contain hemes arranged in two chains that intersect, forming a 'staggered-cross' pattern with hemes II, V, VII and X at the termini of each chain. Hemes V and X are exposed at opposite ends of the structure, while hemes II and VII are close to two β-barrel domains that flank the heme-containing domains. The hemes in both MtrC and OmcA are superposable with the exception of heme V, which is displaced and in a different orientation in each structure (Edwards et al., 2015). The crosslike arrangement of hemes gives four possible electron ingress/egress routes, with hemes V and X suggested as potential sites for direct metal oxide reduction and hemes II and VII as responsible for flavin reduction. Heme VII has been shown to be most likely involved in flavin interaction through both molecular dynamic simulations on MtrC and mutagenesis studies on axial ligands of OmcA hemes (Babanova et al., 2017;Neto et al., 2017).
Recent results indicate that the outer membrane of S. oneidensis undergoes dynamic structural rearrangements that are triggered by an electron acceptor limitation. These structures were first reported in 2010 and were called nanowires according to conductive extracellular structures that were observed in G. sulfurreducens first . Meanwhile we know that the structures are not pili as they are in G. sulfurreducens but chains of outer membrane vesicles that are filled with periplasmic proteins (Pirbadian et al., 2014;Subramanian et al., 2018). It is therefore still under debate whether these structures can be called nanowires. Although they are not composed of pili-subunits, they can serve for the same purpose, which is the transfer of respiratory electrons beyond the dimensions of an individual cell (El-Naggar et al., 2010). It seems as if they can catalyse electron transfer along their length by electron hopping between outer membrane cytochromes and diffusion of cytochromes along the surface of the vesicles (Subramanian et al., 2018). The formation of these structures might be of special importance for growth of S. oneidensis in biofilms in which only certain layers of the cells can be in direct contact with the electron acceptor and other cells might depend on electron transfer over micrometre distances to reach the terminal electron acceptor.
The role of flavins
Two studies reported independently the excretion of flavins by S. oneidensis cells grown under batch conditions (Canstein et al., 2008;Marsili et al., 2008). FAD is transported through the cytoplasmic membrane via the bacterial FAD exporter Bfe (Kotloski and Gralnick, 2013). Thereafter the 5′-nucleosidase UshA processes FAD to AMP and FMN which is the major flavin molecule in the culture supernatant (Canstein et al., 2008;Covington et al., 2010). Experiments with a mutant in the FAD transporter revealed that electron transport was accelerated 4-fold in the presence of extracellular flavin (Kotloski and Gralnick, 2013). Moreover, a potential flavin binding site was observed in the structure of the MtrC analogue MtrF (Clarke et al., 2011) and the Mtr-pathway was additionally revealed to be necessary for the reduction of flavins . There are still two hypotheses regarding the role of flavin molecules. The first hypothesis is, that these molecules act as freely diffusible shuttle molecules (Canstein et al., 2008;Marsili et al., 2008) while the other opinion is that these flavins are in fact cofactors of outer membrane cytochromes facilitating one electron transport via the formation of semiquinones (Okamoto et al., 2013;Xu et al., 2016). Recent results from differential pulse voltammetry conducted in different groups strongly support the second hypothesis and emphasize the importance of outer membrane cytochrome bound semiquinones for extracellular electron transfer in S. oneidensis (Okamoto et al., 2013;Xu et al., 2016). Moreover, keeping the flavins in close proximity to the cell surface in the form of cofactors would also minimize the risk that these compounds could be lost via diffusion or be used as growth-supporting substrate by other microorganisms of the respective ecosystem. Of note, Oram and Jeuken proposed recently, that flavin-independent electron transfer by S. oneidensis is in fact not direct but mediated by soluble iron (Oram and Jeuken, 2016). This soluble iron could be released via partial cell lysis and is necessary for electron transfer in the high potential range between 0 and 0.2 V. Evidence for this model stems from experiments with the iron siderophore and chelator deferoxamine. The addition of this substance almost completely disabled electron transfer at the high potential range in the chosen setup.
Secreted flavins will have an additional role as shuttling molecules especially under laboratory batch conditions that allow the accumulation of flavins in the medium. This role is accentuated for instance by the study of Jiang et al. in which the removal of spent medium in a bioelectrochemical system was shown to lead to a drastic decrease in current while the re-addition of this medium to the system increased the current to 80% of the original level. Interestingly, it was almost irrelevant if the cells could directly contact the electrode or if the electrode was masked by a nonconductive material with nanoholes that hamper direct microbe-electrode-interaction but allow for flavin diffusion (Jiang et al., 2010). Furthermore, an overproduction of flavins can be used as a tool to enhance current production in bioelectrochemical systems. This increased flavin production can either be achieved by the synthetic overproduction of flavin molecules or by adding limiting amounts of oxygen as co-electron acceptor to the working electrode. Addition of oxygen increases cell growth and flavin production (Teravest et al., 2014;Yang et al., 2015). Moreover, an increased flavin production was also observed, when DMSO was added to ferric iron reducing cultures. Although the effect of adding a second electron acceptor might have several points of actions and not only flavin production, it becomes clear that it can be advantageous although the initial thought would be that the breakup of electron transfer routes might decrease the efficiency of ferric iron or anode reduction (Cheng et al., 2013).
Regulation of extracellular respiration
Compared to model organisms like E. coli, S. oneidensis uses a differing way to establish its anaerobic physiology. E. coli uses cAMP and the cAMP receptor protein (CRP) for the hierarchical usage of different carbon sources. The two different master regulators ArcAB (aerobic respiration control protein) and FNR (fumarate and nitrate reduction regulatory protein) are used to sense the redox potential and to adapt to anoxic conditions (Postma et al., 1993;Escalante et al., 2012;Förster and Gescher, 2014). In contrast, S. oneidensis mainly deduces the availability of oxygen from cAMP levels and uses CRP as the master regulator to switch on the expression of proteins involved in anaerobic energy generation. Consequently, crp mutants are unable to thrive with Fe 3+ , Mn 4+ , nitrate, fumarate or DMSO (Saffarini et al., 2003). In contrary, neither deletion of the arcA nor the fnr analogue etrA resulted in mutants with a growth difference compared to the wild type with ferric iron or MnO 2 as electron acceptor (Beliaev et al., 2002;Gao, Wang, et al., 2010;Cruz-García et al., 2011). The deletion of arcA leads to a decreased expression of cytochromes linked to anaerobic respiration like cymA, omcA and dmsAB. Nevertheless, an impact on anaerobic growth with ferric iron or manganese oxide was not detectable, while a growth defect could be observed for DMSO as electron acceptor. Gao et al., 2008). Recent results indicate that the Arc-system might be involved in sensing the extracellular redox status and that it uses this as a trigger to regulate the expression of several proteins in the cytoplasm and the cytoplasmic membrane. Therefore, it was speculated that the Arc-system might be involved in the observed metabolic shift (see above) from anoxic conditions with a low redox potential electron acceptor to a high redox potential electron acceptor (Hirose et al., 2018). Still, as growth with nitrate and MnO 2 does not seem to be affected by an arcA deletion, it is not clear how important the Arc-system is for a redox potentialbased metabolic shift under anoxic conditions (Gao et al., 2008;Hirose et al., 2018).
The production of cAMP is catalyzed by an adenylate cyclase. S. oneidensis contains three genes for putative adenylate cyclase (Charania et al., 2009). Nevertheless, only a double deletion of the gene for the cytoplasmic membrane bound enzyme CyaC and the predicted soluble enzyme CyaA lead to a phenotype that was similar to the crp deletion strain (Charania et al., 2009). Expression of cyaC alone from a plasmid was sufficient to complement a triple mutant in all adenylate cyclase genes, while cyaA expression lead to an incomplete suppression of the phenotype. CRP is directly involved in the regulation of key genes for extracellular respiration as CRP binding sites were detected upstream of omcA, mtrC, mtrA, cymA and cctA (Gao, Wang, et al., 2010;Kasai et al., 2015;Barchinger et al., 2016). Moreover, CRP is involved in D-lactate oxidation by activating the expression of the lldP-dld operon encoding the genes for a lactate permease and a novel membrane bound D-lactate oxidase. Consequently, the Δcrp mutant can only grow on D,L-or L-lactate (Kasai et al., 2017). Interestingly, a CRP binding site could not be detected upstream of the fccA gene. Hence, the effect of the crp deletion on fumarate reduction seems to be indirect, possibly since CRP also positively regulates heme synthesis and c-type cytochrome maturation (Charania et al., 2009;Barchinger et al., 2016). Of note, the concentration of cAMP influences the expression of many more genes than CRP alone, since a cyaC deletion mutant showed a defect in the up-or downregulation of 1255 genes, compared to only 359 genes for the crp strain (Barchinger et al., 2016). So far, it is not clear how the adenylate cyclases are regulated and what the phosphate donating factor for cAMP production might be.
Another regulatory factor that was recently revealed to play a role is the extracytoplasmic function sigma factor RpoE. So far it was known that RpoE is essential for growth under suboptimal conditions (high or low temperatures, high salinity, oxidative stress (Dai et al., 2015) but a corresponding mutant is also negatively affected in growth on minimal medium as it reaches less than 50% of the final optical density compared to the wild type. Targets for RpoE are genes involved in outer membrane lipoprotein transport and folding, lipopolysaccharide production and periplasmic proteases (Dai et al., 2015). Moreover, RpoE binding sites were detected upstream of the genes for the outer membrane cytochromes OmcA and MtrC as well as genes involved in c-type cytochrome maturation and heme biosynthesis (Barchinger et al., 2016). Expression of the rpoE gene itself is upregulated twofold in the absence of oxygen (Barchinger et al., 2016) and the presence of metals and thiosulfate (Beliaev et al., 2005). Further analysis will have to reveal what the overall impact of RpoE on extracellular respiration is. Still, it will be difficult to distinguish this direct function on extracellular respiration from its general involvement in stress response.
The production of c-type cytochromes is also affected by the availability of iron. Iron depletion leads to a downregulation of mtr-gene expression. This process relies to a major extend on the ferric uptake regulator (Fur) that binds ferrous iron and is important for iron homeostasis . The impact of iron-sensing by Fur on the expression of some c-type cytochrome encoding genes seems to be mediated by the small RNA RyhB. Iron depletion leads to an upregulation of this sRNA and its interaction with certain mRNAs causes an accelerated degradation of these target mRNAs. RyhB targets are for instance cymA, cctA as well as mtrABC (Meibom et al., 2018).
Open questions
Extensive efforts have been made by numerous groups across the world to unravel the mechanism of extracellular electron transport by S. oneidensis. The results have already clarified tremendously our understanding of the process. Still, there are several aspects that remain unclear and that are relevant for our fundamental understanding of extracellular electron transfer and also for the implementation of knowledge based strategies to harness this interesting metabolism for biotechnological applications: Why c-type cytochromes? The vast majority of the organisms known to perform extracellular electron transfer contain c-type cytochromes and these play a central role in the process (Koch and Harnisch, 2016). However, it is so far not understood what prevents other redox proteins, for example containing ironsulfur clusters, which cover an even broader range of potentials (Liu et al., 2014) from fulfilling this role. Model organisms are what they are also because they are easy to grow under laboratory conditions. Hence, we may be in a situation like the story of 'looking where the light is better' and find more and more c-type cytochrome using organisms because we are blind for other solutions that evolved in the environment. It might just not be easy to find these different solutions since bioinformatic approaches do not help without model organisms and since the organisms that use these solutions might not be easy to cultivate so far.
Why so many c-type cytochromes? Again, the research shows that the contribution of cytochromes does not rely on a single cytochrome, but that at least dozens to more than (in the case of Geobacter sulfurreducens) cytochromes are encoded in the genomes of the organisms and that multiple of them are simultaneously expressed. The question why so many, even though genome surveys show that very few are conserved across species (Gao, Barua, et al., 2010;Liu et al., 2014) is so far not answered although we begin to understand that some of the cytochromes evolved to fulfil their role only under certain redox potential conditions.
Why multiheme proteins? It is still unclear what advantage is accrued from the process being mediated by cytochromes with multiple hemes. While on one hand, none of these multiheme cytochromes are long enough to span the width of the periplasm and therefore establish a fixed electrically conducting wire, on the other hand, the arrangement of the multiple cytochromes associated with the outer membrane porins is clearly excessive if the objective was simply to span the thickness of the outer membrane to deliver electrons to the cell surface. | 8,219.4 | 2018-07-31T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
A Novel Teaching-Learning-Based Optimization with Error Correction and Cauchy Distribution for Path Planning of Unmanned Air Vehicle
Teaching-learning-based optimization (TLBO) algorithm is a novel heuristic method which simulates the teaching-learning phenomenon of a classroom. However, in the later period of evolution of the TLBO algorithm, the lower exploitation ability and the smaller scope of solutions led to the poor results. To address this issue, this paper proposes a novel version of TLBO that is augmented with error correction strategy and Cauchy distribution (ECTLBO) in which Cauchy distribution is utilized to expand the searching space and error correction to avoid detours to achieve more accurate solutions. The experimental results verify that the ECTLBO algorithm has overall better performance than various versions of TLBO and is very competitive with respect to other nine original intelligence optimization algorithms. Finally, the ECTLBO algorithm is also applied to path planning of unmanned aerial vehicle (UAV), and the promising results show the applicability of the ECTLBO algorithm for problem-solving.
Introduction
Global optimization is a universal issue to the entire scientific community. It has been applied widely in many different fields such as chemical engineering [1], molecular biology [2], the training of neural networks [3], job shop scheduling [4], and network design [5]. However, in most cases, global optimization problems are nonlinear and nondifferentiable and, hence, gradient-based methods cannot be used. In recent years, a lot of effective optimization algorithms have been developed and used to successfully solve global optimization problems that are nonlinear and nondifferentiable. Typical algorithms include particle swarm optimization (PSO) [6] proposed by Kennedy and Eberhart in 1995 and inspired by swarm behavior of fish schooling and bird flocking, differential evolution (DE) [7] which mimics Darwinian evolution, group search optimizer (GSO) [8] which is inspired by animal searching behaviors, artificial bee colony (ABC) [9] which simulates the foraging behavior of honey bees, water cycle algorithm (WCA) [10] which is based on the observation of water and cycle processes and how rivers and streams flow to the sea in the real world, cuckoo search (CS) [11] which mimics the brooding behavior of some cuckoo species, backtracking search algorithm (BSA) [12] which is developed from the differential evolution algorithm, differential search algorithm (DSA) [13] which is inspired by the migration of superorganisms utilizing the concept of stable motion, and interior search algorithm (ISA) [14] which is inspired by interior design and decoration.
Recently, Rao et al. [15] proposed the teaching-learningbased optimization (TLBO) algorithm inspired by the teaching-learning process in a classroom. e algorithm simulates two fundamental phases of learning consisting of the "Teacher Phase" and the "Learner Phase." One of the remarkable advantages of the TLBO algorithm is its simple computation. e other important advantage of the TLBO algorithm is that it does not require specific controlling parameters (the crossover and mutation probability, etc.) except for the common controlling parameters (the size of population and the problem dimensional), which makes the TLBO algorithm easy to implement and more quickly convergence speed. Hence, it has been extended to engineering optimization [15], physics-biotechnology optimization [16], multiobjective optimization [17], heat exchanger design [18], dynamic economic emission dispatch [19], and so on.
Although the TLBO algorithm has some advantages, it has some undesirable dynamical properties that degrade its searching ability. One of the most important issues is that there exists the lower exploitation ability and the smaller scope of solutions in the later stages of evolution. Another issue is regarding the ability of the TLBO algorithm to balance exploration and exploitation [20]. Exploration is the ability that the TLBO algorithm develops global solution space, and the exploitation is the ability that the TLBO algorithm searches the approximately optimization solution in local solution space. Overemphasize of exploration process prevents the population converging, while too much emphasis on the exploitation process tends to cause the premature convergence of the population. In practice, the exploration and exploitation processes contradict each other and, in order to achieve good solutions, the two processes should be properly trade-off. To improve the performance of the TLBO, modified or improved algorithms are proposed in recent years, such as an elitist teaching-learning-based optimization (ETLBO) algorithm [21], teaching-learningbased optimization with neighborhood search (NSTLBO) [22], and teaching-learning-based optimization with dynamic group strategy (DGSTLBO) [20].
Although the modified TLBO algorithms have better performance than the TLBO for some classical problems, some important issues are not considered such as there are still the lower exploitation ability and the smaller scope of solutions in the later stages of evolution. To address these issues, this paper proposes a novel version of TLBO that is augmented with error correction and Cauchy distribution (ECTLBO) in which the Cauchy distribution is utilized to expand the searching space and error correction to avoid detours to achieve more accurate solutions. e rest of this paper is organized as follows. Section 2 first briefly introduces the TLBO algorithm and the details of its implementation. Section 3 presents TLBO with error correction and Cauchy distribution (ECTLBO). Section 4 analyzes the results of ECTLBO and several related optimization algorithms via a comparative study. Section 5 applies ECTLBO algorithm to path planning of unmanned aerial vehicle (UAV). Finally, the work is summarized in Section 6.
Teaching-Learning-Based Optimization
e teaching-learning-based optimization algorithm is a nature-inspired algorithm analogous to the teachinglearning process in a class between a teacher and learners. e process of implementing TLBO consists of two phases, "Teacher Phase" and "Learner Phase." e "Teacher Phase" stands for learning from the teacher while the "Learner Phase" denotes learning through the interaction between learners.
Teacher Phase.
During the Teacher Phase, the updating formula of the learning for a learner X i (i � 1, 2, N, where N is the number of learners), X i is a vector of a learner which includes x ij consisting of various subjects such as literature, mathematics, and English (j � 1, 2, D, X i (x i1 , x i2 ,, x iD , where D is the number of subjects which a learner X i studied)), is where X i,new is a newly generated individual according to X i , X t is the best individual of current population, X mean is the current mean value of all individuals, r is a vector whose elements are distributed randomly in [0, 1], and T F is a teaching factor deciding the value of the X mean to be changed. e value of T F is either 1 or 2, indicating the learner learns something or nothing, respectively, from the teacher. e value of T F is decided randomly with equal probability: (2)
Learner Phase.
During the Learner Phase, each learner interacts with other learners to improve his or her knowledge. A learner X i learns something new if the other learner X j has more knowledge than him or her. f(X i ) is the summary of all the scores of subjects for the ith learner, and the updating formula of the learning for a learner X i is
A Novel Teaching-Learning-Based Optimization
In this study, a novel version of TLBO that is augmented with an error correction strategy and Cauchy distribution (ECTLBO) is proposed.
Error Correction Strategy.
Some learners who have a bad result because of the bad study method should be guided correctly. Because the study method of some learners toward the teacher is wrong, this time if this is not corrected in time, there will be a detour phenomenon. e study method has problems even wrong, and this leads to opposite. Although each learner spends a lot of efforts, the effect is not too obvious. So, it must have correction function to avoid detours to achieve faster convergence speed and the precision of the optimization as long as the learner who has back phenomenon is corrected in a timely manner. e updating equation of the Teacher Phase is
Cauchy Distribution.
Cauchy distribution is a common distribution in probability theory and mathematical statistics, and the probability density function in dimension is as follows: 2 Computational Intelligence and Neuroscience It is the standard Cauchy distribution when the parameter t equals 1. Figure 1 is the probability density curves of standard Gauss distribution, standard Cauchy distribution, and standard uniform distribution, respectively. As can be seen from Figure 1, the peak of Cauchy distribution at the origin is the smallest of three different distributions, while the velocity of the long flat shape near to zero is the slowest. So, if the mutation strategy of Cauchy distribution is used in the Teacher Phase and Learner Phase, its disturbance ability or self-adjustment ability is the strongest of three different distributions, and the basic TLBO algorithm is more likely to jump out of the local optimum and improve the search speed. e updating equation of the Learner Phase is
Flowchart of Distribution of ECTLBO Algorithm.
As explained above, the flowchart of an error correction strategy and Cauchy distribution (ECTLBO) is shown in Figure 2.
Test Benchmark Functions.
To evaluate the performance of the ECTLBO algorithm, 6 contest benchmark functions [23] are used in a set of experimental studies. e definition of these functions is given in Table 1.
Experimental Platform and Termination Criterion.
All experiments are conducted on the same computer with a Celoron 2.26 GHz CPU, 2 GB memory, and windows XP operating system with MATLAB 7.9. For the purpose of decreasing statistical errors, all experiments are repeated 25 times for all 6 test functions of 30 dimensions. Also, 300,000 function evaluations (FEs) [24] are used as the stopping criterion.
Performance Metric.
e mean value (F mean ) of the function error value f(X) − f(X * ) is recorded to evaluate the performance of each algorithm, where f(X) and f(X * ) denote the best fitness value for solution and the real global optimization value of the test problem, respectively. e standard deviation (SD) indicates robust of various optimization algorithms on F 1 -F 6 test functions on 30 dimensions. To verify whether the overall optimization performance of various optimization algorithms is significantly different, statistical analysis is used to compare the results obtained by the algorithms for the same kind of problems. erefore, the statistical tool Wilcoxon's rank sum test [25] at a 0.05 significance level is adopted. e Wilcoxon's rank sum test assesses whether the mean value (F mean ) of two solutions from any two algorithms is statistically different from each other.
Comparison of ECTLBO with Relevant TLBO Algorithms.
e ECTLBO algorithm is compared to four different relevant TLBO algorithms: TLBO, ETLBO, NSTLBO, and DGSTLBO. e parameters for four relevant TLBO algorithms are taken from their references listed above. Each algorithm runs independently 25 times, and the statistical results of F mean and SD are provided in Table 2, the last three rows of which show the experimental results, and the best results are shown in bold. e evolution plots of NSTLBO, TLBO, ECTLBO, ETLBO, and DGSTLBO are illustrated in Figure 3. In addition, the semilogarithmic convergence plots are used to analyze the relationship of the mean errors of the functions.
In this section, the ECTLBO algorithm is compared with four relevant TLBO algorithms. From the statistical mean value (F mean ) given in Table 2, the overall performance of the ECTLBO algorithm is significantly better than that of other algorithms. e ECTLBO algorithm outperforms NSTLBO, TLBO, ETLBO, and DGSTLBO on six, six, five, and six test functions out of six test functions, respectively. As can be seen from the statistical mean value (F mean ) in Table 2, the ECTLBO algorithm is better than the other four algorithms for functions F 1 , F 2 , F 4 , F 5 , and F 6 . For function F 3 , the ECTLBO algorithm performs the same as the ETLBO algorithm in terms of the statistical mean value (F mean ). erefore, it is interesting to note that the overall performance of the ECTLBO algorithm is significantly better than the original TLBO, ETLBO, NSTLBO, and DGSTLBO algorithms.
Considering the above situations, the main reason is that the Cauchy distribution can expand the searching space and error correction to avoid detours to achieve more accurate solution, helping to identify a more promising solution. at is to say, exploration and exploitation are balanced better in the ECTLBO algorithm. erefore, it can be concluded that the ECTLBO algorithm performs most effectively for accuracy among the five relevant TLBO algorithms. Computational Intelligence and Neuroscience
Comparison of ECTLBO Algorithm with Nine Original
Intelligence Optimization Algorithms. In this section, the ECTLBO algorithm is compared with nine original intelligence optimization algorithms including PSO [6], DE [7], GSO [8], ABC [9], WCA [10], CS [11], BSA [12], DSA [13], and ISA [14]. From the statistical mean value (F mean ) of Random select a X j for every X i in the class
No Yes
No Yes Table 3, the ECTLBO algorithm is better than nine original intelligence optimization algorithms on functions F 1 , F 2 , and F 6 . is indicates that robustness of the ECTLBO algorithm is better than that of nine original intelligence optimization algorithms on functions F 1 , F 2 , and F 6 .
Path Planning of Unmanned Aerial Vehicles (UAV)
Problem. UAV is a rather complicated global optimum problem in mission planning. is problem aims to look for or figure out an optimal or suboptimal flight route from the starting point to the target under specific complex combat field environment in time [26]. Figure 4 shows that the problem of UAV is considered as D-dimensional function optimization problem in essence. e original coordinate system Oxy is converted to a new coordinate system Ox′y′. e axis x′ is divided equally into D parts, the y′ coordinates of each node on the vertical line are optimized, and a set of points composed of vertical coordinates of D points is obtained. By connecting these points in sequence, we get a path connecting the starting point and the destination.
ere are two main goals of UAV path planning: avoiding threats and minimizing fuel costs. erefore, before studying UAV route optimization, we must determine the performance indicators of each path. e following cost equations [27] are used to describe the minimum threat cost and minimum fuel safety performance: where L is the length of the route, J is a generalized cost function, w t is a threat cost, w f is the cost of oil consumption, and the coefficient k ∈ [0, 1] is the trade-off factor of the threat factor and the performance of fuel consumption.
According to the path planning performance index formula, the cost weight of each edge in the network diagram is calculated. For the feasible UAV route of section i, the cost weight value can be expressed as It is assumed that all radars in the enemy defense area are the same and not interconnected, and the radar threat model is simplified, and the radar signal is proportional to 1/d 4 (d indicates the distance between the UAV to the enemy radar and the missile threat position), so the threat cost between the two nodes when the UAV is flying along the line i of the network map. e approximation is considered to beproportional to the integral of 1/d 4 along this edge (as shown in Figure 5). In simulation studies, it is usually simplified to divide the segment into five segments within the threat: where L ij represents the length of the connection point; d 0.1,k , d 0.3,k , d 0.5,k , d 0.7,k , and d 0.9 show distances from the center k of the threat source at 0.1, 0.3, 0.5, 0.7, and 0.9; t k represents the threat weight of k threats; and N t is the number of threat positions. Table 4 gives the threat points of the UAV and the coordinates of the starting point and the target point. By selecting the appropriate parameters, the classical algorithms such as ECTLBO and TLBO, NSTLBO, ETLBO, DGSTLBO, PSO, and DE are applied to the UAV path
Analysis of Simulation Results and Comparisons.
"−," "+," and " ≈ "denote that the performance of the corresponding algorithm is significantly worse than, significantly better than, and similar to that of ECTLBO, respectively.
Computational Intelligence and Neuroscience 5 "−," "+," and "≈"denote that the performance of the corresponding algorithm is significantly worse than, significantly better than, and similar to that of ECTLBO, respectively.
Computational Intelligence and Neuroscience 7 planning, and the simulation results are shown in Figure 6, respectively. As can be seen from Figure 6, TLBO, NSTLBO, ETLBO, and DGSTLBO all fall into local optimal. Although PSO and DE do not fall into the local optimal, the optimal route map generated by these two algorithms is obviously longer than that of the algorithm ECTLBO. At the same time, it can be seen from Figure 6 that the unmanned aerial vehicle (UAV) route obtained by the ECTLBO algorithm has successfully avoided all the threat sources and successfully reached the task end. By comparing with TLBO, NSTLBO, ETLBO, DGSTLBO, PSO, and DE algorithms, the experimental results show that the ECTLBO algorithm can get higher quality navigation trace and higher quality convergence and better avoid the threat route. And the convergence speed is faster than other classical algorithms.
Summary and Conclusions
is paper presents a new version of the TLBO algorithm (ECTLBO), in which error correction and Cauchy distribution are introduced. e performance of the ECTLBO algorithm is evaluated compared with that of other variant TLBO algorithms, nine original intelligence optimization algorithms. e experimental results verify that the ECTLBO algorithm has overall better performance than that of other variant TLBO algorithms and is very competitive among them. Besides that, we also applied it to UAV, and the simulation results show that the path planning stability and path quality by the proposed approach are much more smooth, and shorter and more optimal than those by other well-known algorithms.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 4,215.4 | 2018-08-01T00:00:00.000 | [
"Computer Science"
] |
Special Collection on advanced practices in aerospace and energy engineering
Numerous significant and interdisciplinary technological advances happen in aerospace end energy engineering nowadays. They span the increased use of computers in every stage of product design, over different modeling, computation, optimization, and artificial intelligence methods, to contemporary (unconventional, non-invasive) experimental techniques and enhanced measuring equipment, alongside dramatic breakthroughs in material science as well as highprecision and additive manufacturing methods. Not to mention the idea of electrification of flow machines together with the increased use of unconventional fuels. Stringent requirements for prolonged working life at optimal performance and smallest environmental impact call for enhanced, innovative, and multidisciplinary approaches in the areas of aerospace and energy engineering. This special collection covers a wide range of topics related to the state-of-the-art numerical and experimental research conducted worldwide with the main purposes of: encouraging further excellent research and industrial advancement, enabling scientific collaboration and knowledge exchange as well as pointing to future trends in aerospace and energy engineering. Even though this SC was open for submission in the second half of 2021, while the pandemic was still raging, the authors found strength and willingness to present their research for which we are truly grateful. It includes 12 papers by more than 35 authors affiliated to 10 different scientific institutions (universities and research institutes worldwide). The journal AiME performed the complete, rigorous review processes and provided great support to the authors. All the papers were selected on the basis of quality and scientific excellence of their topics and content. This collection aims to provide a fundament for the most contemporary research in mechanical engineering and beyond. In particular, problems addressed come from the areas of microfluidics, flow measurements (including both measuring techniques and equipment), turbomachinery, aircraft propulsion, supersonics, flow control, multiphase flows, additive manufacturing, and engineering design. Methods used in presented research studies encompass analytical, numerical, and experimental approaches. Furthermore, strong relations and comparison to real applications and operation are made. For instance, incompressible and compressible flow through microtubes, that also takes into account gas rarefication, applicable in bioengineering and MEMS (that is being increasingly employed), is investigated analytically in Guranov et al. and Milićev and Stevanović. The presented results (pressure, velocity, and temperature profiles) match well with other results from literature and are easily applicable. On the other hand, Bikić et al. numerically and experimentally investigates the benefits (mostly visible in lower energy consumption) of a novel design of flow meters, typically employed in industry due to their simplicity, reliability, and ease of maintenance. Another industrial application, a transient analysis of hydropower plants, is presented in Svrkota et al. From a data analysis performed on 270 hydropower plants with crossflow turbines, a simple mathematical model that estimates the turbine performance characteristics is formed. Its accuracy of 5%–10% is demonstrated on three case studies. On a more abstract level, this study also illustrates how important it actually is to relate empirical and theoretical data, and constantly improve our starting assumptions. That is where high-quality, reliable experimental studies also come into play, particularly in the field of fluid mechanics that is very hard to describe purely theoretically and where the governing equations are not yet closed and many flow phenomena remain unresolved, turbulence being just one of them. Experimental investigation of the turbulent swirl flow in a piping system, a highly complex transient, 3D flow, is presented in Čantrak and Janković It was performed by PIV, a contemporary, noninvasive measuring technique, and it provides insight into the turbulence structure as well as abundant validation data. A special kind of turbomachinery, a jet engine, or more precisely, its core part is investigated in Davidović et al. Four different configurations of a tubular combustion chamber were
Special Collection on advanced practices in aerospace and energy engineering
Numerous significant and interdisciplinary technological advances happen in aerospace end energy engineering nowadays. They span the increased use of computers in every stage of product design, over different modeling, computation, optimization, and artificial intelligence methods, to contemporary (unconventional, non-invasive) experimental techniques and enhanced measuring equipment, alongside dramatic breakthroughs in material science as well as highprecision and additive manufacturing methods. Not to mention the idea of electrification of flow machines together with the increased use of unconventional fuels.
Stringent requirements for prolonged working life at optimal performance and smallest environmental impact call for enhanced, innovative, and multidisciplinary approaches in the areas of aerospace and energy engineering. This special collection covers a wide range of topics related to the state-of-the-art numerical and experimental research conducted worldwide with the main purposes of: encouraging further excellent research and industrial advancement, enabling scientific collaboration and knowledge exchange as well as pointing to future trends in aerospace and energy engineering.
Even though this SC was open for submission in the second half of 2021, while the pandemic was still raging, the authors found strength and willingness to present their research for which we are truly grateful. It includes 12 papers by more than 35 authors affiliated to 10 different scientific institutions (universities and research institutes worldwide). The journal AiME performed the complete, rigorous review processes and provided great support to the authors. All the papers were selected on the basis of quality and scientific excellence of their topics and content.
This collection aims to provide a fundament for the most contemporary research in mechanical engineering and beyond. In particular, problems addressed come from the areas of microfluidics, flow measurements (including both measuring techniques and equipment), turbomachinery, aircraft propulsion, supersonics, flow control, multiphase flows, additive manufacturing, and engineering design. Methods used in presented research studies encompass analytical, numerical, and experimental approaches. Furthermore, strong relations and comparison to real applications and operation are made.
For instance, incompressible and compressible flow through microtubes, that also takes into account gas rarefication, applicable in bioengineering and MEMS (that is being increasingly employed), is investigated analytically in Guranov et al. 1 and Milic´ev and Stevanovic´. 2 The presented results (pressure, velocity, and temperature profiles) match well with other results from literature and are easily applicable. On the other hand, Bikic´et al. 3 numerically and experimentally investigates the benefits (mostly visible in lower energy consumption) of a novel design of flow meters, typically employed in industry due to their simplicity, reliability, and ease of maintenance. Another industrial application, a transient analysis of hydropower plants, is presented in Svrkota et al. 4 From a data analysis performed on 270 hydropower plants with crossflow turbines, a simple mathematical model that estimates the turbine performance characteristics is formed. Its accuracy of 5%-10% is demonstrated on three case studies. On a more abstract level, this study also illustrates how important it actually is to relate empirical and theoretical data, and constantly improve our starting assumptions. That is where high-quality, reliable experimental studies also come into play, particularly in the field of fluid mechanics that is very hard to describe purely theoretically and where the governing equations are not yet closed and many flow phenomena remain unresolved, turbulence being just one of them. Experimental investigation of the turbulent swirl flow in a piping system, a highly complex transient, 3D flow, is presented in Č antrak and Jankovic´5 It was performed by PIV, a contemporary, noninvasive measuring technique, and it provides insight into the turbulence structure as well as abundant validation data. A special kind of turbomachinery, a jet engine, or more precisely, its core part is investigated in Davidovic´et al. 6 Four different configurations of a tubular combustion chamber were experimentally tested to establish the range of its operability. The proposed design and experimental methodology can easily be applied to similar structures. Another example of combined numerical and experimental studies of a multifunctional bulkhead separating the cold (compressor) and hot (turbine) sections of a gas generator can be found in Kolarevic´et al. 7 Through coupled flow and thermo-structural analyses, a novel, innovative gas generator bulkhead design is proposed and validated. Although every experiment poses some difficulties, both in preparing and conducting, measuring the quantities of supersonic flows is particularly challenging, due to highly sensitive flow field, extremely high pressures and temperatures, and very small timescales. A novel technique suitable for wind tunnel measurements, with additional points of contact between the model and the support sting, is proposed in Vukovic´et al. 8 It is reported that transient starting and stopping loads can be reduced by more than 50%, which enables achieving higher Mach numbers in the working section. Another interesting example of an enhanced wind tunnel measuring technique is described in Xue et al. 9 on a case of a jet controlled air vehicles. The peculiarities here are that the model is completely free, not constrained by any additional support, because a small volume high-pressure gas cylinder is incorporated directly into the model of the air vehicle, which enables the investigation of changes in model's attitude and orientation with jet both on and off. Further possibilities of different kinds of flow control (both passive and active, in both low-and high-speed flows), and various benefits that can be achieved in terms of increased efficiency, lift-to-drag ratio, stabilization of the boundary layer, separation or stall delay, even aircraft control, etc. are covered in Svorcan et al. 10 The paper also points to some directions of further development and references numerous computational and experimental studies alike. On the other hand, this collection also addresses multiphase flows. Density, thermal conductivity, and viscosity of dispersions of agricultural biomass particles in ionic liquid are experimentally investigated in Radojcˇin et al. 11 Dispersions with different mass concentration of particles were studied at different temperatures to investigate the possibility of creating a new, enhanced heat transfer fluid. Continuing onto novel structural materials, research presented in Vorkapic´et al. 12 investigates the possibilities of enhancing mechanical properties of 3D printed thermoplastic polymers. The starting material is widely available and affordable, but with poor mechanical properties partially due to the anisotropy that accompanies additive manufacturing. In other to employ 3D printed structures in energy and aerospace applications, their characteristics must be improved and made more reliable.
It may be observed that this special collection, as it is focused on flow machines, covers numerous and diverse fluid dynamics topics, but also some structural ones. All possible investigative approaches (analytical, numerical, experimental) are employed and their usability and importance are once again demonstrated, particularly their combination and comparative analyses. State-of-the-art research directions in natural and technical sciences, such as: increased reliability and capabilities of numerical simulations that greatly shorten and economize the design process, novel materials and manufacturing methods, data analysis in aerospace and energy engineering, efficiency improvement, novel and unconventional technical solutions, environmentally friendly materials and processes, etc. are pursued. At the same time, the importance as well as complexity and high cost of experimental research in aerospace and energy engineering is accentuated and a special focus is given to the most contemporary measurement methods that provide (usually so scarce and unavailable) usable experimental data.
This collection aimed to join together, intertwine but also compare/contrast: numerical and experimental methods, together with all their respective advantages and disadvantages, established, well-proven versus novel, contemporary practices and research trends, conventional versus renewable energy sources, engineers and scientists specializing in fields of mechanical, aerospace, energy, electrical and civil engineering and technology, and provide them a place to present, discuss, and improve their ideas and achievements, and the Editors believe these goals have been achieved. The explored topics are multidisciplinary and current. They stimulate further research and incite international collaboration. In the end, they propose ways to efficiently answer to the growing energy demands and increase the quality of everyday life in the best possible way. We hope the readers will feel the same way.
It has been our pleasure and honor to organize and participate in this SC. We are sincerely grateful to all the authors as well as the managing office of the journal that has been very helpful during the entire submission and publication process. We are also grateful to the Ministry of Education, Science, and Technological Development of Republic of Serbia that supports most of the presented research studies.
The complete SC is available online on. link to the page Lead Guest Editor: | 2,826.6 | 2022-10-01T00:00:00.000 | [
"Engineering"
] |
Epigenetics: The New Frontier in the Landscape of Asthma
Over the years, on a global scale, asthma has continued to remain one of the leading causes of morbidity, irrespective of age, sex, or social bearings. This is despite the prevalence of varied therapeutic options to counter the pathogenesis of asthma. Asthma, as a disease per se, is a very complex one. Scientists all over the world have been trying to obtain a lucid understanding of the machinations behind asthma. This has led to many theories and conjectures. However, none of the scientific disciplines have been able to provide the missing links in the chain of asthma pathogenesis. This was until epigenetics stepped into the picture. Though epigenetic research in asthma is in its nascent stages, it has led to very exciting results, especially with regard to explaining the massive influence of environment on development of asthma and its varied phenotypes. However, there remains a lot of work to be done, especially with regard to understanding how the interactions between immune system, epigenome, and environment lead to asthma. But introduction of epigenetics has infused a fresh lease of life in research into asthma and the mood among the scientific community is that of cautious optimism.
Introduction
Asthma, a chronic and recurrent disease of the airways, has over the years continued to attract the attention of the scientific community due to its widespread prevalence and associated morbidity and mortality, irrespective of age and social bearings. Even in the present day and age, the mortality figures continue to remain high [1]. Overall expenditure associated with asthma far exceeds that incurred with tuberculosis and human immunodeficiency virus (HIV) infection/ acquired immunodeficiency syndrome (AIDS) [1], put together. Despite the presence of a wide variety of therapeutic options, there are none that can provide an effective cure for asthma. In light of this, the research into obtaining a better understanding of the pathophysiology and development of therapeutic options that might offer a chance at curing asthma has never let up.
Recent scientific explorations into the pathogenesis of asthma have revealed it to possess a very complex and multitiered foundation. Despite possessing a genetic component, the asthma phenotypes are not predestined or predetermined. This plasticity in asthma pathophysiology has often been held responsible for the variable phenotypes seen among asthmatics [2].
The reasons for the variability in the asthma phenotypes had often confounded the researchers. It was considered that a comprehension of the reason for variability in the asthma phenotypes could lead to a better grasp of its pathophysiology and, subsequently, newer therapeutic options. This paved the way for entry of epigenetics in asthma. However, the explorations made by the field of epigenetic research in obtaining an understanding of asthma are still in their infancy, especially in comparison to cancer. However, the mounting scientific experimental data emerging from various studies points to a growing interest in this domain [3][4][5].
In light of the ever burgeoning appeal of epigenetics in asthma, it is pertinent that we try to comprehend the line of thinking that indicates a possible role of epigenetics in asthma pathogenesis.
Genetics in Asthma: A False Dawn or the Stepping Stone
It had to be first ascertained that asthma had a significantly determinable genetic component in its pathophysiology. A massive study aimed at investigating the development of asthma among twins revealed that asthma development rate was about 4 times higher in monozygotic twins as compared to dizygotic twins [6]. The twin studies proved to be the ideal stepping stone for further research to be conducted and aimed at establishing a genetic angle to asthma pathophysiology. Aided by the fact that asthmatic intermediate phenotypes are highly heritable and are found to be clustered in families, extensive research into genetics in asthma was carried out. The familial inheritance of the variable asthma phenotypes was pegged at an astounding 60% [7]. The reason for the heritability has been attributed to the presence of nucleotide variants. Hence, in an effort to determine the various nucleotide variants, initially genome-wide linkage studies were carried out. These revealed a handful of genes, that is, ADAM33 [8], DPP10 [9], PHF11 [10], GPRA [11], CYFIP2 [12], HLAG [13], and PTGDR [14], to be closely associated with asthma. However, only ADAM33 and GPRA were associated with an increased incidence of development of asthma [8,11]. Due to lack of convincing results and the limitations of genome wide linkage studies, the researchers changed course and focussed on candidate gene methods for identifying asthma associated single nucleotide polymorphisms (SNP). It is interesting to note here that this tactic yielded 300 genes containing SNPs associated with asthma [7]. The SNPs identified using candidate gene approach could lead to an increased risk in asthma development, but the actual possibility of development of asthma due to these SNPs was not found to be significant [7]. During the period of 2007-2010, about eight genome-wide association studies (GWAS) were carried out [15][16][17][18][19][20][21]. These GWAS have yielded information about the various new pathways that may be implicated in asthma pathophysiology and further examination could potentially throw up new candidates for drug development. Though strong associations were established between various genes identified here, the odds ratio (OR) was always within 0.5-1.5.
Despite the wealth of scientific information obtained from the linkage studies, candidate gene approach, and GWAS, most of the nucleotide variants identified till date could only be associated with a small increment in the risk of development of asthma. Additionally, careful scrutiny of the various GWAS revealed various limitations, among which neglecting the gene-environment interactions that may contribute to asthma pathogenesis has been rated as one of the major pitfalls. The results of these studies could only explain a miniscule part of the "issue" of heritability of asthma phenotypes. Hence, as a countermeasure, it was suggested that future studies should lay special focus on examining the environmental influence on the asthma phenotypes [22]. This recommendation gains significance in light of the fact that the childhood and adult onset asthma's incidence has increased substantially in the last few decades [23], which in turn suggests the possibility that gene-environment interaction may have a significant role in development of asthma. This is further substantiated by the results of the study conducted on twins [6], as mentioned earlier here. As per the study in focus here, about 19% of monozygotic twins developed asthma in concordance with one another. Now, ideally speaking, as monozygotic twins bear the same genetic constitution, they should develop asthma concordantly. However, the rate of concordance being lower than 20% in monozygotic twins is suggestive of nongenetic factors at play in the development or rather the lack of development of asthma.
Environment-Gene Interactions: The Bedrock of Epigenetics in Asthma
While the role of genetic factors in determining the susceptibility of individuals towards development of asthma is unquestionable, increasing evidence suggests a significant role of environment in shaping up of the asthmatic phenotypes. The results of a GWAS examining the effect of exposure to toluidine-diisocyanate (TDI) in Korean population go a long way in backing up this claim [17]. The results of this study, exhibiting a strong association between the gene CTNNA3 (Catenin alpha 3, alpha-T Catenin) and TDI induced asthmatic phenotype, with an OR = 5.84, hinted that the inclusion of gene-environment interactions could solve the issue of "missing heritability" in asthma. Similarly, many GWAS and preclinical and clinical studies have highlighted the potential role of environment in determination of asthma phenotypes. The first GWAS that established a strong association between asthma and 17q21 locus also revealed that there was an increased risk in the development of asthma in offspring in the families with passive exposure to environmental tobacco smoke early on in their life (OR = 2.5) as compared to families with no prior exposure to tobacco smoke (OR = 1.38). This was attributed to variants of ORMDL3 gene at rs8076131 [24,25]. Another retrospective nested case control study suggested that prenatal exposure to tobacco smoke led to a much higher risk of development of asthma, be it childhood onset asthma or persistent asthma. The possibility of gene-environment interaction modulating the asthmatic phenotype was boosted further by an increased risk of asthma development in offspring born to maternal grandmothers with a history of smoking despite no smoking history noted in the mother's case [26]. Airway particulate matter is another major environmental factor that has been greatly studied for its impact on asthmatic phenotypes. An in vitro study involving human bronchial epithelial cells exhibited that exposure to diesel exhaust particulate matter could potentially bring about chromatin modifications, which in turn could produce a significant impact on the phenotype [27]. Further, analysis of a separate cohort involving elderly male subjects in the Normative Aging Study showed that level of particulate matter at work sites could possibly be correlated with various epigenetic mechanisms that in turn could modulate the respiratory phenotype of the subjects [28].
Besides environmental pollutants, dietary factors, for instance vitamin D [29], vitamin E [30], and Mediterranean diet [31], have also been examined for their effects on development of asthmatic phenotype. However, it is folic acid that has been studied most extensively for its consequences on asthma phenotypes. By supplementing folate during the period of pregnancy and weaning, airway hyperresponsiveness as well as chemokines and immunoglobulin E (IgE) production was found to be increased in animal studies [32]. Folate supplementation is strongly correlated with an increased risk of development of asthma in children [33,34]. However, these results are in stark contrast to the results of another study that reported a positive correlation between folate intake during the gestational period and reduction in the risk of atopy and wheezing in children 2 years and older [35]. The status of folate supplementation as a major player in modulating phenotypes was further solidified when it was discovered that folate supplementation was associated with a decrease in the birth weight of newborns. The gene implicated here is insulinlike growth factor 2 (IGF-2) [36]. The recent introduction of various gut and airway microbes into the picture has complicated the interaction between various environmental factors and genes in development of asthma.
The astounding increase observed in the incidence, prevalence, and the severity of asthma in the past few decades strongly substantiates the claim that environmental exposure plays a titular role in the pathogenesis of asthma, especially via their interactions with the genetic variants. However, the rate at which the environmental exposure brings about changes can hardly be accounted for by the alterations in native DNA (deoxyribonucleic acid) sequences. An alternative explanation of this can be provided by the field of epigenetics. It deals with the study of various epigenetic marks that may be introduced into the human genome either prenatally or during various susceptible periods in the life of children, especially during newborn stage or adolescence. These epigenetic marks of the human genome can be modified more readily and rapidly by exposure to environmental factors. This can subsequently bring about modifications in the manifestations and the variability of the phenotypes in this disease. In this paper, we shed some light on the epigenetic mechanisms that could possibly vary the asthma phenotype.
Epigenetic Mechanisms in Asthma: The Means to an End
Exploration of epigenetics in the field of asthma is in its nascent stages. The paucity of studies at this stage has led to the epigenetic epidemiological data about asthma trickling in slowly. The question that arises now is how epigenetic mechanisms bring about various alterations in the asthma phenotype leading to the wide variability in this disease. The possible epigenetic mechanisms could involve DNA methylation, posttranslational histone alteration, and noncoding RNA (ribonucleic acid) dysregulation. However, the most commonly implicated mechanism in most of the studies has been DNA methylation induced genetic dysregulation. There are various theories that have been propounded in a bid to explain the potential role of DNA methylation in the development of asthma. Among them, the commonly explored one is when exposure to particulate matter leads to demethylation or rather hypomethylated state of long interspersed nucleotide elements (LINE-1) [28,37] and Arthrobacter luteus (Alu) repeated elements [37], which subsequently leads to the activation of various promoter genes in these genetic segments and increased incidences of genomic alterations and instability and transcriptional dysregulation [38,39]. It has been hypothesised that exposure to air particulate matter through catalytic redox cycling may increase the production of reactive oxygen species (ROS) [40]. The oxidative damage produced by these ROS prevents the interaction between DNA and methyltransferases enzyme, leading to hypomethylated CpG sites [41]. Besides altering the interaction between DNA and methyltransferases enzymes, metal exposure could induce crucial alterations in the DNA methylation machinery itself [42]. One group of researchers observed that cadmium could inhibit the activity of DNA methyltransferases by attaching to the methyltransferases binding site on DNA and subsequently interfering with the DNA and methyltransferases interaction [43]. Cellular stores of methyl groups tend to undergo depletion on exposure to arsenic, possibly aiding in the hypomethylation of DNA [42]. The hypomethylation of various repetitive elements and subsequent transcriptional dysregulation has been shown to be associated with cellular stress [44] and alveolar inflammation [40].
In case of asthma specific candidate gene for inducible nitric oxide synthase (iNOS), exposure to particulate matter [37] could lead to demethylation of iNOS gene. Consequently, this may lead to increased expression of proinflammatory iNOS leading to respiratory and cardiovascular inflammatory states. Though any study is yet to shed any light on the mechanism by which particulate matter can demethylate iNOS gene, there is a high probability that it might be mediated via the ROS action as suggested in a study [45].
A first of a kind proof of concept study conducted by the Columbia Center for Children's Environmental Health (CCCEH) was able to draw out a correlation between exposure to polycyclic aromatic hydrocarbons (PAHs) and methylation of acyl-CoA synthetase long chain family member 3 (ACSL3) gene [46]. ACSL3 gene, associated with fatty acid metabolism, primarily expressed in lung and thymus, was identified as an original potential epigenetic marker for environmentally induced asthmatic state. Acyl-CoA synthetase is essential for production of acyl-CoA that can be used for both of intracellular lipids and, simultaneously, their degradation through beta oxidation yielding energy [47,48]. Additionally, acyl-CoA synthetase is critical to phospholipid modification in the burgeoning T cells [49]. It has been hypothesised that hypermethylation of this singular gene could potentially have far-reaching effects in the pathophysiology of asthma. Prenatal exposure of PAH was associated with increased methylation status of ACSL3 gene, which in turn has been associated with increased incidences of development of childhood asthma [46]. The question about how various functional alterations brought about by hypermethylation of this gene impact asthma remains unanswered. Further mechanistic studies are needed to delineate out the role of ACSL3 in the development of asthma, especially childhood asthma. However, the recent reports on ACSL3 as a potential epigenetic marker for prediction of PAH associated asthma provide the first yet important step in this research domain.
Development of asthma also involves a key transcription factor modulating the activity of T regulatory cells; that is, Forkhead box transcription factor 3 (Foxp3). T regulatory cells are involved in the initial steps of sensitization to allergens and IgE production consequent to exposure to allergens [50]. It has been suggested that exposure to air pollutants may bring about methylation of promoter regions involving the Foxp3 gene, which decreases the expression of Foxp3 and subsequently the development and functioning of T regulatory cells [51]. Expression of chemokine receptors CCR4 and CCR8 on T regulatory cells is governed by Foxp3 as well [52]. These chemokine receptors may be critical for guiding the movement of T regulatory cells to the bronchial epithelium [53]. This is substantiated by the reduced number of T regulatory cells observed in the bronchoalveolar lavage fluid of asthmatic subjects [54], reduced number of circulating T regulatory cells [55], and impairment in chemotaxis of T regulatory cells to respiratory epithelial cells [56]. Consequently, Foxp3 methylation hints towards worsening of asthma pathology due to impaired production and functioning of T regulatory cells, thus providing a very interesting mark that can be targeted while exploring potential epigenetic therapeutic options against asthma.
There is a heap of evidence to suggest that the adaptive immune programming seen in the pathogenesis of asthma may be amenable to epigenetic modifications [57][58][59][60][61]. It has been observed that, at its resting state, the IL-4, that is, T helper 1 (Th1) cytokine, and IFN-, that is, T helper 2 (Th2) cytokine genes in a naïve CD4 T cell, are methylated [62]. However, exposure to an allergen can tilt the balance between Th1 and Th2 responses in favour of proallergic Th2 cytokine responses by demethylating the IL-4 promoter [62]. The demethylation of IL-4 locus is strongly associated with the IL-4 cytokine expression, subsequently leading to STAT6 phosphorylation resulting in the activation of the master regulatory GATA-3 gene and, finally, increasing the expression of IL-4 [63]. Further, GATA-3 activation has also been shown to repress TBET expression, which is the Th1 differentiation master regulator. GATA-3 has been shown to increase the methylation status of IFN-locus as compared to their naïve state [64]. Thus, DNA hypomethylation of the CpG promoter sites of IFN-could activate the expression of IFN-. This counterregulatory cytokine could provide a protective cover against the proallergic cytokines [61]. However, insights into how the DNA methylation or hypomethylation of the IL-4 and IFN-promoter sites is carried out are lacking.
Besides DNA methylation, studies on histone modifications have also revealed interesting marks that could serve as potential therapeutic targets. Experimentation with HDAC inhibitors, for instance, Trichostatin A, has been associated with Th2 skewing and enhanced GATA-3 expression, which is suggestive of the crucial role that HDAC may play in maintaining the Th1 versus Th2 balance [65]. The deliberations over the potential protective role of HDAC against proallergic Th2 cytokine production received a shot in the arm when it was observed by the same group of researchers that histone acetyl transferase (HAT) activity was significantly elevated and HDAC expression was reduced in atopic asthmatics as compared to atopic nonasthmatics. The levels of HAT and HDAC were correlated strongly with bronchial hyperreactivity in this study, thus establishing pioneering work in drawing an association between epigenetic modifications and a phenotype that can be observed clinically and measured [65]. However, the role of HAT and HDAC in the pathogenesis of asthma is not so straightforward. An interesting study exploring the protection provided by the Gramnegative bacterium Acinetobacter lwoffii F78 has attributed it to increased H4 acetylation of the IFN-promoter site, which was found to be missing on treatment with a HAT inhibitor [66]. Another experiment demonstrated a strong correlation between T regulatory cell induction and HDAC inhibition by various bacterial metabolites which in turn was associated with decreased proinflammatory cytokine expression in dendritic cells [67]. But in cumulation, via the results emerging from these studies and many others in this domain of epigenetic research, it has been established beyond reproach that both DNA methylation and histone modifications are two extremely influential and flexible tenets of determination of T helper cell lineages, which may find application in prevention of asthma.
Finally, our discourse on the epigenetic mechanisms that have been speculated to contribute to asthma pathogenesis would remain incomplete without a mention of the microR-NAs (miRNAs). Though not as extensively researched compared to DNA methylation and histone modification, recent animal and even human studies have revealed a potentially significant role of miRNAs in asthma pathogenesis. It has been suggested that the robust nature and tissue specific distribution of miRNAs could find utility in identification of "at-risk" individuals for asthma [68]. As reported in a few murine studies, miRNA profiling here has helped identify a few potential biomarkers in asthma pathogenesis [69,70]. In addition to these animal studies, abnormal profiles of miRNA have been examined in humans too. A study showed that there were dramatic alterations in the expression patterns of 200 and above miRNAs among asthmatics [71]. A large majority of these remain uncorrected or modestly corrected despite treatment with steroids [71]. Another recent study revealed that about 26 miRNAs are aberrantly expressed in the bronchial smooth muscle cells of asthmatic cells [72]. Besides, the target mRNAs of these aberrantly regulated miRNAs were found to play important roles in asthma pathogenesis in the form of cellular proliferation through phosphatase and tensin homolog and phosphoinositide 3kinase/Akt signaling pathways. Thus, these mRNAs which were being looked at as potential antiasthma targets have now led us to multiple miRNAs that could be manipulated to control asthma [72]. These study results are not in isolation. Similarly, evidences have also emerged from various other studies that have shed light on the potential therapeutic role of miRNAs in asthma. An animal study pointed to a role of miR-223 in granulocyte production and inflammatory response [73]. Further, as per another murine study, another miRNA, miR-126's blockade, was found to be associated Scientifica 5 with diminished Th2 cellular responses in the lung, leading to decreased inflammation, eosinophilic recruitment, and hypersecretion of mucus [74]. An elegant in silico study revealed that targeting miR-9 could potentially find utility in treatment of steroid resistant asthma [75]. The authors have hypothesised that miR-9 through its role in the glucocorticoid signaling pathway could serve as a novel target for treatment of steroid resistant asthma [75]. These encouraging evidences from the preliminary studies point to a potential diagnostic, prognostic, and therapeutic role of miRNAs in asthma.
Perspectives and Conclusion
Despite the many theories like the hygiene hypothesis and triggers like allergens, diet, and so forth that have been identified to play a significant role in the development of asthma, none of them come close to providing a unified welldeveloped mechanism that can account for the majority of these triggers. Epigenetic mechanisms not only have opened up a new and a different field for exploration in asthma but also strive to explain most of the existing theories concerned with asthma. Further, reversibility of potential epigenetic therapies lends an added advantage to it. Epigenetics also attempts to understand the complex gene-environment interactions that have often confounded the researchers for so long. Utilising epigenetic principles, various critical marks that are linked closely to asthma and interact with environmental factors can be identified and can serve as the template for development of potential therapeutic options. In addition, obtaining an understanding of the complex epigenetic-environment interactions can aid in formulating interventions for at-risk individuals and help prevent the development of asthma in the first place.
However, the complexity of asthma as a disease poses a significant challenge towards filling in the missing pieces of the puzzle of asthma pathogenesis. Though the contribution of epigenetics in clarifying the earlier confounding aspects of asthma pathogenesis has been significant, the real challenge that lies ahead is in understanding how the genetic variability, epigenetic marks, environment, transcriptome, and the adaptive immune system interact with one another to produce different types of asthma phenotypes. Besides this, there too remains significant work to be done in understanding the effects of epigenetic regulatory mechanisms in different cells. There is also the additional aspect of limitations of the designs of various studies that have been conducted till date. The cohorts examined in various studies have been small and have not been sufficient to extrapolate the results to clinical settings. However, epigenetics in asthma promises potentially exciting rewards and it is hoped that the research work that has been limited to only the laboratories until now can soon take a leap forward and find application in the clinical settings. | 5,475.4 | 2016-05-11T00:00:00.000 | [
"Medicine",
"Biology"
] |
ON THE SENSITIVITY OF THE TUNNELING CURRENT TO ELECTRIC FIELD IN A MOSFET WITH TWO GATES
A theoretical model to evaluate the sensitivity of the tunneling current to the electric field in an n-channel MOSFET with two gates is proposed. This sensitivity is calculated in a real situation.
INTRODUCTION
It is well-known that MOSFETs with control gate and floating gate play an important role in EEPROM for low-voltage microcontrollers.This role arises mainly from the physical electronics involved in the above devices; in particular, the tunneling current through the oxide layer constitutes a relevant feature of these devices.This current is very sensitive to the electric field in the oxide layer so that evaluation of this sensitivity is very useful in order to estimate quantitatively the performance of the devices.The aim of this paper is to establish a parameter to estimate the above sensitivity; at this respect, we think that our mod- el improves really the state of the art since to date not much has been done with respect to theoretical research on MOSFETs performance.At any rate, with respect to this research, we can mention Refs.[1][2][3].*Corresponding author.M.A. GRADO-CAFFARO AND M. GRADO-CAFFARO
THEORETICAL MODEL
First of all, we will consider the mathematical expression of the tunneling current density through the oxide layer of an n-channel MOSFET with two gates: the control gate and the floating gate.We will assume an A1-SiO2-Si device so that the magnitude of the current density in question is given by the Fowler-Nordheim model, namely [1][2][3][4] where e is the electron charge, E denotes strength of the electric field in the oxide layer, mo stands for the electron rest-mass, m, is the tun- neling electron effective mass, h is the reduced Planck's constant and Eg is the barrier height of Si to SiO2 [1-4].Now we define the fol- lowing quantity: s j-ldJ/dE so that by formula (1) one obtains: By using the numerical values of the parameters involved in Eq. ( 2), including m, .1.1mo and Eg 4.35eV (room temperature) [3,5], it follows: s ,.E-l(2 + 6.48 x 101E-1) (3 where E is expressed in V/m and s in m/V; we can conceive s as a sen- sitivity parameter which measures the sensitivity of J to E (in Fig. 1, s is depicted as a function of E for a range of E-values of interest).
Notice that s decreases as E increases; a typical value of E is 3 x 108V/m which corresponds to Vox 3 V and tox 100 , where Vo, stands for the voltage drop across the oxide layer and to is the oxide thickness (see previous references).The above numerical values correspond obviously to a uniform electric field E, Vo,/to which in practice constitutes a reasonable approximation.
FIGURE
Plot of s versus E for a range of interest.
CONCLUDING REMARKS
The model described previously represents a useful approach to estimate the sensitivity of J to E; our method may be regarded as a technique extrapolable to other situations in the context of high-speed electronics.In particular, by examining the E-dependence of s, it is very easy to see that for E<< 3 x l08V/m s varies sharply with E although the situation corresponding to E << 3 x 10 8 V/m is irrelevant in practice; in contrast, E-values near 3 x 108 V/m are relevant with a relatively remarkable variation of s in terms of E; on the other hand, values between 3 x 108V/m and 12 x 108V/m present some interest. | 805 | 1999-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
REMOVAL OF METHYLENE BLUE BY ADSORPTION ONTO RETAMA RAETAM PLANT : KINETICS AND EQUILIBRIUM STUDY
The feasibility of using medicinal plants species Retama raetam as a low cost and an eco-friendly adsorbent for the adsorption of cationic dye methylene blue from simulated aqueous solution has been investigated. Adsorption kinetics of methylene blue onto Retama raetam plants was studied in a batch system. The effects of pH and contact time were examined. The methylene blue maximum adsorption occurred at pH 8 and the lowest adsorption occurred at pH 2. The apparent equilibrium was reached after 120 min. Optimal experimental conditions were determined. Adsorption modelling parameters for Freundlich and Langmuir isotherms were determined and, based on R, various error distribution functions were evaluated as well. Adsorption isotherm was best described by linear Freundlich isotherm model. Thermodynamic studies show that adsorption was spontaneous and exothermic. For determining the best-fit-kinetic adsorption model, the experimental data were analyzed by using pseudo-firstorder, pseudo-second-order, pseudo-third-order, Esquivel, and Elovich models. Linear regressive and non-linear regressive method was used to obtain the relative parameters. The statistical functions were estimated to find the suitable method that fit better the experimental data. Both methods were appropriate for obtaining the parameters. The linear pseudo-second-order (type 9 and type 10) models were the best to fit the equilibrium data. The present work showed that plant Retama raetam can be used as a low cost adsorbent for the removal of methylene blue from water.
Introduction
The textile industry is one of industrial waste water source.This contaminated water is very toxic for the humans and animals [1].Methylene blue is used in colouring paper, dyeing cottons, wools, silk, leather and coating for paper stock.Although methylene blue is not strongly hazardous, it can cause some harmful effects, such as heartbeat increase, vomiting, shock, cyanosis, jaundice, quadriplegia, and tissue necrosis in human organisms [2].
Chemical coagulation-fl occulation [3], different types of oxidation processes [4], biological process [5], membrane-based separation processes [6] and adsorption [7] were the treatments used in the purifi cation of waters.The most effi cient method used for the quickly removal of dyes from the aqueous solution is the physical adsorption [8].Biosorbents, such as wood sawdust [9], waste-biomass [10], delonix regia [11], agricultural solid waste [12], are able to remove effi ciently the colour from water.
Retama raetam plants can be used as biosorbent.This species belonging to the Fabaceae family has a very productive vertical and horizontal root system, which can reach 20 m.This, in turn, increases substantially the stabilization of the soil.Moreover, the Retama species contributes to the biofertilisation of poor grounds, because of their aptitude to associate with fi xing nitrogen bacteria Rhizobia.Therefore, the genus of Retama is included in a re-vegetation program for degraded areas in semi-arid Mediterranean environments [13].
Retama raetam is a common plant in the North African and East Mediterranean region.In Algeria, it is located in Sahara and Atlas regions and is used in folk medicine under the common name "R'tam" to reduce the blood glucose and skin infl ammations, while in Lebanon it is used as folk herbal medicine against joint aches and in Morocco against skin diseases.Previous pharmacological studies on the plant have revealed its various medicinal properties: antibacterial, antifungal, antihypertensive, antioxidant, antiviral, diuretic, hypoglycaemic, hepatoprotective, nephroprotective and cytotoxic effects.Retama species have been reported to contain fl avonoids and alkaloids [14].
However, there are no reported studies on the adsorption of cationic dyes by Retama raetam.This work aims to understand the potential of Retama raetam for removal of methylene blue dye from simulated aqueous solution in batch mode.The adsorption effi ciency of methylene blue was investigated in order to optimize the experimental parameters.The statistical functions were used to estimate the error deviations between experimental and theoretically predicted adsorption values, including linear and non-linear method.The optimization procedure required a defi ned error function in order to evaluate the fi t of equation to the experimental data.
Materials
Methylene blue (3,7-bis (Dimethylamino)-phenazathionium chloride tetramethylthionine chloride, C 16 H 18 N 3 SCl•3H 2 O, Mw =373.9 g/mol, Figure 1) used in the present study was purchased from Merck (Germany), being selected from the list of dyes normally used in Algeria.Retama raetam plants were collected in Mostaganem region (Algeria), washed several times with deionized water to remove the color and dried at 105°C for 5 h in a convection oven.The residual organics and lipids were respectively removed by methanol and petroleum ether.After this procedure, Retama raetam was washed again with distilled water.
Methods
The Retama raetam was characterized by pH measurement of the pH PZC (point of zero charge).The pH PZC of an adsorbent is a very important characteristic that determines the pH, at which the adsorbent surface has net electrical neutrality [16].
The pH PZC of Retama raetam was measured by pH drift method: 0.1 mg of Retama raetam is added to 100 mL of water with varying pH from 2 to 12 and stirred for 24 h.Final pH of the solution is plotted against initial pH of the solution and shown in Figure 2 [17].The value of pH PZC for Retama raetam was determined as pH 6. Adsorption isotherms are important for the description of how adsorbates interact with an adsorbent being also critical in optimizing the use of adsorbent.Thus, the correlation of equilibrium data using either a theoretical or empirical equation is essential for interpretation of the adsorption data and prediction, as well.Several mathematical models can be used to describe experimental data of adsorption isotherms.Two famous isotherm equations, the Langmuir and Freundlich, were employed for further interpretation of the obtained adsorption data.
Adsorption kinetics of methylene blue onto Retama raetam was studied in a batch system.The effects of pH and equilibrium time were examined.The adsorption parameters were optimized.In each experiment pre weighed amount of adsorbent (0.04 g) was added to 200 mL of dye solution (20 mg/L) taken in a conical fl ask of 250 mL and 0.1 M NaOH or 0.1 M HCl were added to adjust the pH value.This solution was agitated at 300 rpm and centrifuged.The methylene blue concentration in solution was determined at λ max = 665 nm by using UV-1700 PHARMA SPEC SHIMADZU spectrophotometer.The adsorbed amount of methylene blue per mass unit of adsorbent at time t, q (mg/g), (Eq.( 1)) and the dye removal effi ciency (R, %) (Eq.( 2)) were calculated as: where C 0 is the initial concentration of methylene blue (mg/L), C is the dye concentration at time t, V is the solution volume (L) and M is the adsorbent mass (g) [18].
D. Badis et al. / Chem.J. Mold. 2016, 11(2), 74-83 The effect of pH was evaluated by mixing 0.2 g of adsorbent with 1 L of methylene blue simulated aqueous solution of 20 mg/L.The pH value of solution was varied from 2 to 13, by adding 0.1M NaOH or 0.1M HCl solutions.The suspension was shaken for 24h at 25°C.Kinetic experiments were performed by mixing 200 mL of dye solution (20 mg/L) with 0.04 g of adsorbent for different time (5, 10, 30, 60, 90, 120, 150, and 180 min).The initial pH for each dye solution was set at 8. Methylene blue concentration in the supernatants was determined and the adsorbed amount of methylene blue was calculated.
Results and discussion
For studying the effect of every parameter, it is necessary to fi x the values of other ones.The elimination of pollutant from simulated aqueous solution by adsorption is extremely infl uenced by the medium of solution, which affects the nature of the adsorbent surface charge, the ionization extent, the aqueous adsorbate species speciation and the adsorption rate.The adsorptive process through functional groups dissociation on the adsorbate and adsorbent were affected by a pH change [19].The adsorption of methylene blue augments with increasing the pH of the solution.According to the data presented in Figure 3, the best value of adsorption capacity, q e = 9.938 mg/g, was recorded at pH 8. From this study, it is obvious that in the basic medium, the negatively charged species tend to dominate leading to a more negatively charged surface.In this case, the adsorbent surface is negatively charged.The methylene blue adsorption increases due to the enhancement of electrostatic attractions between the negative charge of Retama raetam particles and the positive charge of methylene blue species.The experimental data for methylene blue adsorption on Retama raetam were analyzed with the Freundlich and Langmuir equations.Equations of these models [20] are presented in Table 1, where q is the equilibrium dye concentration on adsorbent (mg/g), q m is the monolayer capacity of the adsorbent (mg/g), C is the equilibrium dye concentration in solution (mg/L), K L is the Langmuir adsorption constant representing the energy constant related to the heat of adsorption, n and K F are Freundlich constants related to adsorption intensity of the adsorbent and adsorption.A non-linear and linear fi tting procedure using Excel and Origin software were used, respectively.The constants of all models were given in Table 2.
Table 1
Adsorption isotherms models and their linear and non linear forms [20].
Applied model
Non linear form Linear form The coeffi cient of correlation indicated that Freundlich isotherm fi tted the experimental data better than Langmuir isotherm.Good agreement between the experimental isotherms and the Freundlich model was found in the case of systems: pentachlorophenol/(M)Al-MCM-41 [21], and toluene/activated carbon [22].
The optimization procedure required a defi ned error function in order to evaluate the fi t of equation to the experimental data.The best-fi tting equation is determined using the well-known special functions to calculate the error deviation between experimental and predicted data.The mathematical equations of these error functions were illustrated in Table 3. [30] where n is the number of experimental data points, q calc is the predicted (calculated) quantity of methylene blue adsorbed onto Retama raetam, q exp is the experimental data, p is the number of parameters in each kinetic model, ARED is the average relative error deviation (dimensionless parameter), ARE is the average relative error (dimensionless parameter), ARS is the average relative standard error (dimensionless parameter), HYBRID is the hybrid fractional error function (dimensionless parameter), MPSD Marquardt's is the percent standard deviation (dimensionless parameter), MPSED Marquardt's is the percent standard deviation (dimensionless parameter), SAE=EABS is the sum of absolute error (mg/g), SSE is the sum of the squares of the errors (mg/g) 2 , and Δq(%) is the normalized standard deviation (mg/g).The constants of all error analysis are represented in Table 4.The data of adsorption isotherm are essentially required for designing the adsorption systems.In order to optimize the design of a specifi c sorbate/sorbent system for removal of methylene blue from aqueous solution, it is important to establish the most appropriate correlation for the experimental kinetic data.Applicability of some statistical tools to predict the optimum adsorption isotherms of methylene blue onto Retama raetam after linear regression analysis showed that the highest R 2 value and the lowest ARED, ARE, SAE, ARS, MPSD, Δq, SSE, MSPED and HYBRID values can be suitable and meaningful tools to predict the best-fi tting equation models.
The best fi tting is determined based on the use of these functions for calculation of the error deviation between experimental and predicted equilibrium adsorption isotherm data, after linear analysis.Hence, according to Table 4, it seems that the linear Freundlich model was the most suitable mode to describe satisfactorily the studied adsorption phenomenon.Therefore, based on the mentioned results, the best useful error estimation statistical tools point out the non linear Freundlich model, followed by linear Freundlich model, as the best-fi tting models.
In order to better understand the effect of temperature on the adsorption of methylene blue onto Retama raetam, the free energy change (ΔG °, J mol -1 ), enthalpy change (ΔH °, J mol -1 ) and entropy change (ΔS °, J K -1 mol -1 ) were determined (such parameters refl ect the feasibility and spontaneous nature of the process) using Eqs.( 3)- (5).
The combination of Eqs.( 3) and (4) gives Eq.( 5): where R is the universal gas constant (8.314J K -1 mol -1 ), T is the absolute temperature (Kelvin) [31].Experiments were performed using 20 mg/L dye solutions with 0.2 g of Retama raetam for 24 h at various temperatures.The apparent equilibrium constant K c of the adsorption is defi ned as Eq.( 6) [20]: The enthalpy and entropy can be obtained from the slope and intercept of the linear plot of lnK c versus 1/T.The obtained thermodynamic parameters are given in Table 5.A negative enthalpy value of −4.51110 5 kJ/mol indicates that adsorption was exothermic.A negative entropy value of -62.687J/mol and a negatively decreasing Gibbs free energy indicates the increase in the randomness in the solid-liquid interface and adsorption spontaneity [32].
Figure 4 illustrates the effect of contact time on decolorization (dye adsorption) with Retama raetam.The plot (simulated aqueous solution) can be divided in three zones: (i) 0-30 min, which indicate the fast adsorption of methylene blue, suggesting a rapid external diffusion and surface adsorption; (ii) 30-60 min, show a gradual equilibrium, and (iii) 60-180 min, indicate the plateau of the equilibrium state.The adsorption was rapid at the initial stage of the contact, but it gradually slowed down until the equilibrium.The fast adsorption at the initial stage can be attributed to the fact that a large number of surface sites are available for adsorption.After a lapse of time, the remaining surface sites are diffi cult to be occupied.Adsorption is a complex process that is infl uenced by several parameters related to adsorbent and to the physicochemical conditions, under which the process is carried out [33].For understanding the mechanism of the adsorption process, the following equations: pseudo-fi rst order (Lagergren Model) [2], pseudo-second order [34], Esquivel [35], pseudo-third order [36], and Elovich [37] were selected to fi t the experimental kinetic data.Equations of these models are presented in Table 6.
Table 6
Adsorption kinetics models and their linear and non linear forms.
Applied model Non Linear form Linear form Reference Pseudo-fi rst order
Pseudo-fi rst order (type 1) ) 1 (
Pseudo-second order
Pseudo-second order (type 9) [39] Pseudo-second order (type 10) where k 1 is pseudo-fi rst order rate constant (min -1 ), k 2 is pseudo-second order rate constant (g/(mg min)) , k 3 is pseudo-third order rate constant (g 2 /(mg 2 min)), K E is Esquivel rate constant (min), k 4 is Elovich rate constant (mg/(g min)), k 5 is extent of surface coverage and activation energy of the process (g/mg), k 6 is extent of surface coverage and activation energy of the process (g/mg), k 7 is Elovich rate constant (mg/(g min)), q e is amount of adsorption at equilibrium (mg/g), and θ is dimensionless parameter (=q/q e ).For the non-linear and linear fi tting procedures Excel and Origin software were used, respectively.The constants of all models were given in 4.167 0.899 q = 0.240*ln(t) + 8.756 Table 7 shows that q e , k 2 and R 2 values obtained from the two linear forms of pseudo-secondorder expressions were the same.The value of q e and k 2 were calculated to be, respectively, 9.932 mg g -1 and 1.926 g mg -1 min -1 for linear pseudo-second-order and 9.908 mg g -1 and 1.97 g mg -1 min -1 for non linear pseudo-second order biosorption.The constants of all error analysis are represented in Table 8.
Adsorption kinetic data are the basic requirements for the design of adsorption systems.In order to optimize the design of a specifi c sorbate/sorbent system to remove methylene blue from aqueous solution, it is important to establish the most appropriate correlation for the experimental kinetic data.Applicability of some statistical tools to predict optimum adsorption kinetics of methylene blue onto Retama raetam after linear regression analysis showed that the highest R 2 value and the lowest ARED, ARE, SAE, ARS, MPSD, Δq, SSE, MSPED, and HYBRID values could be suitable and meaningful tools to predict the best-fi tting equation models.
The best fi tting is determined based on the use of these functions to calculate the error deviation between experimental and predicted equilibrium adsorption kinetic data, after linear analysis.Hence, according to Table 4, it seems that the linear pseudo-second order type 9 and type 10 models were the most suitable models to describe satisfactorily the studied adsorption phenomenon.Therefore, based on these mentioned results, the best useful error estimation statistical tools should point out the linear pseudo-second order type 9 followed by linear pseudo-second order type 10 as the best-fi tting models.D. Badis et al. / Chem. J. Mold. 2016, 11(2), [74][75][76][77][78][79][80][81][82][83] In the most studied adsorption systems, the pseudo-fi rst-order model does not fi t well over the entire adsorption period and is generally applicable over the fi rst 20-30 min of the sorption process.The pseudo-second-order model is based on the biosorption capacity of the solid phase and it generally predicts the "chemisorption" behaviour over the whole time of adsorption [20].
Obtained results, presented in Table 4 show that the pseudo-fi rst-order model data do not fall on straight lines indicating that this model was less appropriate.In contrast, the pseudo second order kinetics have shown very low ARED, ARE, SAE, ARS, MPSD, Δq, SSE, MSPED, and HYBRID and high R 2 values for type 9, 10 linear pseudo-secondorder, and non linear pseudo-second-order expressions suggest that it is appropriate to use the pseudo-second-order model, suggesting that it is applicable to the adsorption kinetics.This suggests that, the biosorption of methylene blue onto Retama raetam is a chemisorption process involving exchange or sharing of electrons mainly between the dye ions and the sorbent functional groups [20].Using linear method it was found that a theoretical pseudo-second order model represents well the experimental kinetic data of adsorption of methylene blue onto Retama raetam based on a Type 9 and 10 pseudo-second-order kinetic expression.
Studies regarding the use of Retama raetam as biosorbent are in progress.More technical and experimental optimisations and treatments should be realised to improve the adsorption capacity of Retama raetam.For example, use of more effective pre-treatment methods and reduction in particle size (larger specifi c adsorption area, m 2 /g) may further improve the rate and the extent of adsorption of methylene blue onto Retama raetam.Besides, the methylene blue -loaded biomass itself has to be treated, in order to avoid a pollution transfer.Indeed, one of the more common questions aroused by biosorption processes involves the fate of the biosorbent after the process.Care must be taken that solving one problem, not to create another.The sorbed methylene blue can be recovered by extraction from the biomass in order to be concentrated and then stored, reused, or eliminated.Also, the decontamination of the methylene blue -loaded biomass by biodegradation is a very interesting approach.
Conclusions
Retama raetam plant was used for the adsorption of methylene blue in simulated aqueous solution.In batch mode, the adsorption was highly dependent on two operating parameters (pH, contact time).The obtained results revealed the following optimal conditions: pH value of 8 and 120 min of contact time, which lead to 90.38 % methylene blue removal.
Kinetics data correlated well with the pseudo second order kinetic model (type 9 and type 10), whereas equilibrium study was best described by non linear Freundlich isotherm model.
Figure 2 .
Figure 2. Point of zero charge (pH PZC ) of the Retama raetam used for the adsorption experiments.
Figure 3 .
Figure 3.Effect of the initial pH of solution on equilibrium adsorption capacity of Retama raetam. | 4,513.6 | 2016-12-01T00:00:00.000 | [
"Chemistry"
] |
The Application of Artificial Intelligence in the Field of Transportation
: Urban traffic is the lifeblood of urban economic activities and plays a very important role in the development of the urban economy and the improvement of people's living standards. Although the automobile industry brings convenience, it also brings a heavy burden to urban traffic. The imbalance of urban traffic supply and demand has become a serious problem faced by major cities, especially in large cities, the phenomenon of traffic congestion. This not only affects the normal operation of the city, but also reduces the daily work efficiency of people. This paper is based on mature scientific principles and physical devices, such as dynamic Bayesian network, machine vision, machine learning, pattern recognition technology, temperature sensor, optical fiber sensor, etc. The application of artificial intelligence technology in highway traffic is discussed in detail, which lays a theoretical foundation for the development of intelligent traffic in the future.
INTRODUCTION
Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can respond in a similar way to human intelligence.Research in this field includes robotics, language recognition, image recognition, Natural language processing and expert systems, etc.Since the birth of artificial intelligence, the theory and technology have become more and more mature, and the application field has also continued to expand.It is conceivable that the technological products brought by artificial intelligence in the future will be the "container" of human intelligence.This paper discusses the application of artificial intelligence technology in traffic flow management and highway maintenance.The aim is to improve the convenience of commuting through the feasibility of artificial intelligence technology, at the same time, reducing the probability of traffic accidents, give people more a convenient, stable, safe travel environment.
Introduction to Artificial Intelligence
Artificial Intelligence (AI), as an important branch of computer science, was formally proposed by McCarthv at the Darhnouth Society in 1956, and is currently known as one of the three cutting-edge technologies in the world.
Professor Nilson (Nilson), the famous artificial intelligence research center of Stanford University in the United States, defines artificial intelligence as "artificial intelligence is the subject of knowledge -the subject of how to express knowledge and how to obtain and use knowledge.Another famous American university Professor Winston believes that "artificial intelligence is the study of how to make computers do intelligent jobs that only humans could do in the past.In addition, there are many definitions of artificial intelligence, which have not yet been unified, but these statements all reflect the basic ideas and basic content of artificial intelligence.From this, artificial intelligence can be summarized as the study of the laws of human intelligent activities.Construct artificial systems with certain intelligent behavior.The word intelligence comes from Latin and literally means to gather, collect, gather, and choose from.It is generally believed that intelligence refers to the ability of human beings to be manifested by mental work in the activities of understanding the world and transforming the world.That is, the individual's comprehensive ability to reasonably analyze, judge and act purposefully on objective things and effectively deal with the surrounding environment [1].
Development History of Artificial Intelligence
Since ancient times, human beings have tried to use machines to replace part of the human brain work according to their own level of knowledge and the technical conditions at that time, in order to improve the ability to conquer nature.In 850 AD, there is a legend in ancient Greece that robots were created to help people work.In more than 900 BC in China, there are also records of the legend of singing and dancing robots, which shows that the ancients had the illusion of artificial intelligence.As history progressed, by the end of the twelfth century to the beginning of the thirteenth century, the Spanish theologian and logician Romen Luee attempted to create a general-purpose logic machine that could solve a variety of problems.In the seventeenth century, French physicists and mathematicians made the world's first mechanical adder that could calculate and obtained practical applications.Then the German mathematician and analyst G.W. Leibniz developed and made a calculator for all four operations on the basis of this adder.He also put forward the design idea of a logic machine, which is to reason about the characteristics of objects through the symbol system.This idea of "universal symbols" and "inference calculation" is the bud of modern "thinking" machines, so he is praised by later generations.The first founder of mathematical logic.Then, British mathematician and logician Boole initially realized LeibniZ's idea of symbolization and mathematization of thinking, and proposed a brand-new algebraic system, which is Boolean algebra widely used in computers later.At the end of the 19th century, British mathematician and mechanist C. Babbage devoted himself to the research of different engines and analytical engines.Although it could not be fully realized due to the limitation of conditions, his design idea deserved to be the highest achievement of artificial intelligence in that year.
Predicting traffic flow to improve traffic congestion
With the development of China's automobile industry, the problem of congestion on urban roads and expressways is becoming more and more serious.Based on traffic flow data mining and the establishment of a traffic flow prediction model, it can effectively predict traffic congestion and guide vehicles to choose reasonable travel routes [1].
The evolution prediction of traffic congestion and the determination of congestion nodes are of great significance to government congestion and passenger travel.Traditional research often uses mathematical formulas or simulation software to analyze the traffic conditions of the road network.Most of these research methods require certain assumptions and a very complicated model revision process.Traffic volume continues to increase as smart hardware grows [2].Traffic congestion prediction based on large data volume is derived.In this research method, the congestion state of the road network is abstracted as a high-dimensional matrix which changes with time, and a large amount of data is integrated and sorted by mathematical means to establish an effective mathematical model, at last, we can predict the road traffic condition more accurately by learning the historical trend among the matrix elements.
Intelligent traffic lights dynamically adjusted according to traffic flow
With the rapid development of computer technology, the detection technology based on machine vision has been applied to the traffic monitoring system, through video detection technology can be real-time detection of vehicles in the intersection of the queue length.The data collected through video detection are arranged and integrated to generate the distribution of the waiting queues at intersections, so that a real-time and dynamic time allocation control scheme is adopted to alleviate the vehicle queuing situation at intersections through dynamic control of traffic lights, to a certain extent, solve the city morning and evening rush hour traffic disease [3].This scheme is divided into three parts: Video Image Acquisition, digital image processing, traffic lights and signal control.First, real-time images of vehicle queue length are collected by cameras installed in four directions at intersections, and the image data are stored and transmitted; then real-time external processing is carried out by digital signal processor, the queue length is calculated by image preprocessing, image segmentation and setting up virtual box in real time Finally, the vehicle queue length is used as the decision variable of the pedestrian crossing time to allocate the traffic light time in real-time.
Traditional traffic lights waste a lot of green time, for the intersection of two-way traffic flow inconsistent situation is very unfavorable.The above-mentioned scheme can make the best use of the green time, avoid the waste of the green time and increase the waiting time of vehicles, and can effectively alleviate the traffic congestion at the intersection.
Detection of roads using pattern recognition and image recognition (1) Road icing detection technology
In all kinds of traffic accidents, the proportion caused by road slippage and icing is as high as 70%.The traditional method generally relies on human detection, but due to the rapid weather changes and the short validity period of the data, it cannot meet the safety needs of drivers.Therefore, a road icing detection system that can realize real-time monitoring is extremely important.It can provide online, real-time information of road condition for drivers and relevant departments, and reduce accidents [4].
The road icing detection system mainly includes optical fiber sensor, piezoelectric film Sensor, temperature sensor and image sensor and so on, which can detect and report the road condition in cold weather.In the detection system, the optical fiber sensor is mainly used for ice detection in the range of 2-10 mm [5].And the piezoelectric film sensor is mainly used for ice thickness detection before 0-2 mm [6].At the same time, we should use the temperature sensor to monitor the temperature in real time.When the temperature is higher than a critical value, we can judge the ice-free situation, and then shut down the whole system.The image sensor in the system is a high-definition camera with an adjustable set-distance.It can collect real-time images of the road surface and reduce the error rate of the detection system.After sending back the data of each sensor of the road surface detection system, the critical values of different road icing conditions are set, the data are analyzed and classified to establish the model, and finally the concrete road conditions are judged by the combination of pattern recognition technology and real-time data, to improve the accuracy of road icing detection, thus, road icing detection system will be more accurate and timely than the traditional way of data transmission to the driver and related departments, to achieve the greatest possibility of information exchange of road information, the invention ensures the safety of the driver and reduces the occurrence of road accidents.
Road crack detection With the increase in maintenance work, real-time, fast and efficient non-destructive testing technology of road surface has become the basis of road management and road maintenance operations.
For the detection of pavement cracks, the damage is mainly judged from the width, length and depth of pavement cracks.However, there are not many online detection devices for pavement cracks at present, and no similar mature products have come out in China.At present, most of them are detected by manual visual inspection, which has the advantage of being accurate, but has shortcomings such as low efficiency, requiring more manpower and material resources, and affecting road traffic.If the road surface is photographed with a vehiclemounted camera, a driving recorder or a drone, machine vision technology is used instead of manual detection of road marking damage, combined with GPS data to achieve accurate damage positioning, and then local maintenance can be performed, which can not only reduce road marking maintenance.cost, improve the maintenance efficiency of road markings, and also make up for some defects of manual detection [7].Therefore, the use of machine vision technology for road crack detection and maintenance has very important significance and broad application prospects.
Optimal selection of construction time for damaged roads
In the event of an emergency traffic accident or an emergency around the road surface, artificial intelligence technology is used to predict the impact of the road near the construction site in order to give drivers advance notice to avoid the area, the best route for emergency vehicles to reach the scene is very important.The traditional solution is to choose fewer working days for vehicles, control the construction site, limit the traffic density by reducing the width of the traffic lane, and maintain the construction space, although this can avoid the impact of the accident site on traffic, however, there are drawbacks of low efficiency and even increasing the burden of road traffic congestion [8].A combination algorithm based on association rule mining and Dynamic Bayesian network is introduced to construct causal tree from congestion when using artificial intelligence technology to make prediction, the probability of their propagation is estimated based on the information of time and space [9].The frequent sub-structures of these causal trees not only reveal the repeated interaction between space-time congestion, but also reveal the potential bottleneck or defect of the existing traffic network design.Using the new algorithm to automatically choose the construction time of the damaged road, can maximize the reduction of the impact on traffic, to facilitate the travel of citizens.
System Life prediction based on fault curve
There is an inherent fault curve in traffic equipment, the fault rate is high at the beginning of the equipment's use, stable in the middle of the use, low fault rate, at the end of the equipment's life, the fault rate starts to rise again.Equipment Life prediction in intelligent maintenance is based on fault curve.Firstly, the service life of the equipment is obtained based on the method of high acceleration test.During the working process, the use time of traffic equipment is accumulated, and based on artificial intelligence, the load under different working conditions is normalized, calculated and the weighted working time is formed.
Based on the weighted working time and the high acceleration test, the prediction of the service life of the equipment is obtained.
CONCLUSION
The application of artificial intelligence technology in traffic management can help traffic management work in an efficient and orderly manner, and maximize the utilization efficiency of the transportation field.With continuous expansion, artificial intelligence will be used in many fields in the future to improve people's production and living standards and strengthen my country's comprehensive national strength.This research still has the limitation of implementation, and has not considered the construction cost and maintenance cost of intelligent transportation.If we want to solve this problem, we need to carry out experiment and technology upgrade to control the cost, do not blindly put them into mass production.
In the future, artificial intelligence technology will be more widely used in the field of transportation, as long as we have better technology and more reasonable planning. | 3,204.6 | 2022-01-01T00:00:00.000 | [
"Computer Science"
] |
Modulating the Electrical and Mechanical Microenvironment to Guide Neuronal Stem Cell Differentiation
Abstract The application of induced pluripotent stem cells (iPSCs) in disease modeling and regenerative medicine can be limited by the prolonged times required for functional human neuronal differentiation and traditional 2D culture techniques. Here, a conductive graphene scaffold (CGS) to modulate mechanical and electrical signals to promote human iPSC‐derived neurons is presented. The soft CGS with cortex‐like stiffness (≈3 kPa) and electrical stimulation (±800 mV/100 Hz for 1 h) incurs a fivefold improvement in the rate (14d) of generating iPSC‐derived neurons over some traditional protocols, with an increase in mature cellular markers and electrophysiological characteristics. Consistent with other culture conditions, it is found that the pro‐neurogenic effects of mechanical and electrical stimuli rely on RhoA/ROCK signaling and de novo ciliary neurotrophic factor (CNTF) production respectively. Thus, the CGS system creates a combined physical and continuously modifiable, electrical niche to efficiently and quickly generate iPSC‐derived neurons.
DOI: 10.1002/advs.202002112
the cerebral cortex in health and disease have been hindered by the availability of model systems. [2,3] Because induced pluripotent stem cells (iPSCs) are derived from an individual's own cells, they provide an exciting opportunity to create more effective model systems. While efficient strategies using inhibitor cocktails of SMAD and Wnt signaling have been established to promote differentiation of human iPSCs or embryonic stem cells into neural precursor cells (NPCs) rapidly, further maturation into functional neurons in vitro is a lengthy process and has largely lacked the mechanical and electrical cues seen during development. [1,[4][5][6][7] Additionally, current techniques to develop post-mitotic cortical neurons such as exposure to trophic factors or neurogenin 2 regulation may not fully recapitulate the myriad of pathways that naturally trigger differentiation. [8,9] Bioengineered scaffolds offer a unique platform to begin to address these limitations. [10] During neural development, a combination of chemical, mechanical and electrical signals guide stem cell fate. [11][12][13][14] To date, inert polymers such as soft hydrogels whose interactions rely on the inherent polymer properties and cannot be continuously modulated have largely focused on the mechanical properties to interact with these neural precursor cells. [15][16][17] Cells are able to transduce mechanical perturbation via a signaling cascade involving downstream effectors such as yes-associated protein (YAP), transcriptional coactivator with PDZ binding motif (TAZ), and small worm phenotype and mothers against decapentaplefic proteins (SMAD) to maintain stem cell pluripotency. [18][19][20] Electrical stimulation has also been found to play an important role in developing neurons and plays a role in epigenetic reprogramming and gene signaling. [21,22] This electrical activity during early neuronal development suggests an essential role in cell differentiation and maintenance of phenotype. In embryonic cells, for example, voltage-gated sodium and calcium channels generating electrical signals are critical for the development of neuronal precursors and differentiating neurons. [23] Additionally, in mice hippocampal precursor cells, excitation of cultured cells with voltage-gated calcium channels represses glial fate genes and induces expression of neural fate genes. [12] Owing to the importance of activity-dependence in early neural development and physiological maintenance, electrical stimulation is also known to be important in developing neurons and can be expected to play a role in determining cell fate. Indeed, while previous studies have demonstrated the enhancement of neuronal differentiation by the application of electrical fields, [24] only a limited set used a combination with mechanical influences. We asked whether applying both mechanical forces and electrical stimulation may affect the speed in which pro-neuronal differentiation occurs in culture. Conductive polymers provide a novel platform to interact with stem cells even after seeding and provide a microenvironment with tunable electrical stimulation.
Here, we explore utilizing a new conductive graphene scaffold (CGS) to enhance direct-differentiation of iPSCs to a human cortical neuronal fate. By altering mechanical properties of CGS through a new method of nanoconfinement of carbon nanofibers (CNFs) utilizing chemical reduction of graphene oxide (GO). We were able to alter the stiffness of the scaffold by controlling the ratio of CNF to graphene oxide. Moreover, an exposure to electrical stimulation provided a rapid procedure to obtain iPSC-derived neurons with more mature molecular signature and electrophysiological characteristics. To evaluate the underlying molecular mechanism for such effect, we also performed augmentation or reduction experiments and found Ras homolog family member A (RhoA) and ciliary neurotrophic factor (CNTF) were important for CGS-mediated iPSC conversion to neurons.
3D CGS Promotes Neuronal Conversion of iPSCs
Electrical activity plays an important role in modifying neural activity [25,26] along with other environmental factors modulating stem cell reorganization, migration, proliferation, and differentiation. [27][28][29][30] To harness these effects, we prepared a unique 3D macroporous and mechanically soft CGS capable of electrical stimulation using chemical reduction of graphene oxide (rGO) combined with CNFs ( Figure 1A-C; Figure. S1A-D, Supporting Information). In contrast, a 2D CGS demonstrated a less porous structure ( Figure S1E,F, Supporting Information).
An additional benefit of the CGS platform is that it is made via a straightforward process using readily available products. Modeling the distribution and gradient of the electrical field across the CGS ( Figure 1C) found a relatively uniform electric field (±800 mV; 100 Hz). By simply varying the ratio of CNFs, we were able to alter the stiffness of the scaffold, with higher concentrations of CNFs resulting in softer scaffolds (1:1 of CNF:GO ≈3 kPa) ( Figure 1D). The conductivity of the CGS did not vary across different stiffness with the stiff CGS containing the least amount of CNF and soft CGS containing the highest concentration of CNF ( Figure S1A, Supporting Information). On the crosssectional plane, the CGS morphologically was aligned and rGO was intertwined with CNFs ( Figure S1B, Supporting Information).
Mechanotransduction is a key parameter for cell migration, proliferation, and differentiation. [31][32][33][34] It has been established that the Hippo/YAP pathway of pluripotent stem cells on different stiffness of polymeric hydrogels is a regulator for pluripotent stem cell differentiation into neurons. [19] Traditional iPSC culture techniques use dual SMAD inhibitors, resulting in promoting neuronal conversion. [1] Moreover, it has been recently observed that RhoA, a cytoskeletal dynamics regulator, is a main effector on neuronal differentiation of murine pluripotent stem cells via SMAD downregulation. [35] Because of this, we evaluated if changes in stiffness of the CGS increased the rate of neuronal differentiation on our CGS and if this was related to changes in the YAP/p-SMAD pathway through RhoA downregulation in human iPSCs.
Preconditioning of iPSCs under N2B27 media supplemented with dual SMAD inhibitors (Dorsomorphin and SB431542) generated PAX6 + /Nestin + neural precursor cells (Figure 2A and Figure S2A-D, Supporting Information). To determine whether these precursor cells undergo selective differentiation on stiffer CGSs, we tested CGSs with varying elasticity (≈3 to 12 kPa). The soft CGS substrate was more efficient at increasing expression of early and mature neuronal markers (TUJ1 + and MAP2 + ) than the stiff CGS or glass substrates ( Figure 2B-D). Without the 3D structure, the 2D CGS produced less early and mature neuronal markers ( Figure S2E,F, Supporting Information). Human iPSCs initially preconditioned with N2B27 media supplemented with dual SMAD inhibition (Dorsomorphin and SB431542) for 7d were passaged onto CGSs. After passaging, the cells were maintained with N2B27 media without dual SMAD inhibitor. ICF, immunocytofluorescence (green arrow). Bar plots showing percentages of B) TUJ1 + and C) MAP2 + at 7d on CGSs of varying elasticity. D) Representative immunocytofluorescence analysis of RhoA (green)/YAP (red) and TUJ1 (green)/MAP2 (red) in iPSCs cultured on glass, stiff, and soft CGS. Cell nuclei were counterstained with DAPI (blue). Scale bars indicate 25 for RhoA/YAP and 50 µm for TUJ1/MAP2, respectively. E) qRT-PCR analysis of Rhoa, an intracellular signal mediator, in iPSCs cultured on glass, stiff or soft CGS. Expression levels are normalized to GAPDH. F) Bar plot showing rigidity-dependent nuclear co-localization of YAP in iPSCs cultured for 2d on different substrates. B,C,E,F) Analyzed using a one-way ANOVA, followed by Tukey's HSD post hoc test with * p < 0.05 and ** p < 0.01. Values represent the mean of independent experiments (n = 4); error bars, SD.
Next, we examined if the RhoA and YAP/p-SMAD pathways were involved with these changes as observed in previous studies. [19,35] The expression level of Rhoa from iPSCs on both glass (control) and stiff CGS were similar ( Figure 2E). Notably, iPSCs on soft CGS had decreased Rhoa transcription. Given that Rhoa-dependent F-actin polymerization occurs at the cell periphery, [35] we expected differences between the peripheral stress fibers in cells cultured on substrates of varying elasticity. Quantification of F-actin in cells of each group estimated the difference in cytoskeletal stress ( Figure S2G,H, Supporting Information). Peripheral stress fibers, visualized by staining for Factin, were decreased in neural precursor cells cultured on soft CGS compared to the cells on glass and stiff CGS, consistent with the reduction of Rhoa activity in the soft CGS.
Immunocytofluorescence analysis revealed that RhoA and YAP/p-SMAD expression were regulated by substrate stiffness ( Figure 2D-F and Figure S2G,I, Supporting Information). A decrease in F-actin polymerization associated with less Rhoa transcription has been linked to reduced transcriptional regulatory activity of YAP and p-SMAD. [19,20] In our experiments, YAP and p-SMAD are localized in the nucleus on both the glass and stiff CGS, whereas it is mainly excluded from the nucleus on the soft CGS ( Figure 2D,F and Figure S2G,I, Supporting Information). Previously, YAP/p-SMAD have been implicated in signaling pathways elicited by mechanical stimuli. [18][19][20] Cell cultures on the soft CGS exhibited efficient p-SMAD and YAP sequestering with significant decreases in the proportion of cells with co-localization in the cell nucleus ( Figure 2D,F and Figure S2I, Supporting Information).
Taken together, our experiments indicate that the mechanical stimuli conferred by the CGS platform are more efficient at promoting pro-neuronal markers than the glass substrate alone. Additionally, YAP/p-SMAD sequestration coincides with RhoA downregulation in rigidity-dependent, neuronal differentiation of iPSCs ( Figure S2J, Supporting Information). We found that decreasing the stiffness of the CGS augmented early neuronal conversion of iPSCs and that RhoA is implicated in this regulation which is consistent with previous results. [19] Further experiments are required to demonstrate a causal relationship, but our findings demonstrate the importance of this pathway in the CGS system.
Electrical Stimulation Augments the Generation of iPSC-Derived Neurons
Electrical stimulation is able to alter the transcriptome of stem cells. [22,36,37] Due to the lack of conductivity in traditional culture systems and material limitations (i.e., inert hydrogel or scaffold), the combination of electrical and mechanical cues on guiding iPSC differentiation remains largely unexplored. Utilizing the properties of our CGS platform, we simultaneously applied mechanical and electrical stimulation to iPSCs to determine the effect on neuronal differentiation. Previous studies demonstrated that a single 1 h period of electrical stimulation results in sustained alteration of progenitor cell gene expression. [22,38] Utilizing these parameters for duration of stimulation, the voltage for electrical stimulation of the iPSCs was optimized by assessing E) The morphology of a whole-cell patch-clamp recorded and surrounding eGFP-expressing iPSCderived cells by two-photon imaging. Scale bar indicates 10 µm. F) Traces of multiple action potentials triggered in an iPSC-derived cell cultured on soft CGS Stim for 14d (black). Tetrodotoxin (TTX) completely prevented the generation of action potentials (red). G) Traces of membrane current in response to step voltage-clamp from −100 to +10 mV. H) Representative membrane potentials upon step current injections of iPSC-derived cells cultured 14d on soft CGS and soft CGS Stim . I) The summary results of averaged spike amplitude from iPSC-derived cells without or with electrical stimulation. J,K) Maximum spike number and percentage of cells with indicated firing frequencies at 14d of differentiation on soft CGS without or with electrical stimulation. B) Cell nuclei were counterstained with DAPI (blue). Scale bars indicate 100 µm. C) Analyzed using a one-way ANOVA, followed by Tukey's HSD post hoc test with * p < 0.05 and ** p < 0.01. Values represent the mean of independent experiments (n = 4); error bars, SD. I,J) Analyzed using a paired Student's t-test with * p and ** p < 0.01 and 0.05, respectively. N = 7 and 9 for soft CGS and soft CGS Stim from four batches of independent cell cultures. K) Analyzed using a Fisher exact test with ** p < 0.01. neuronal differentiation from iPSCs for CGS stiffness (elasticity ranging from 3 to 12 kPa) and voltage (applied voltages ranging ±100 mV to ±3 V) ( Figure 3A; Figure S3 and Table S1, Supporting Information). Exposure to ±800 mV at 100 Hz for 1 h on the soft CGS (≈3 kPa) significantly increased the generation of TUJ1 + neurons without adverse cytotoxicity after 7d culture on the CGSs (Figure S3A-D and Table S1, Supporting Information). These studies show that stimulation with a voltage of greater than 1000 mV increases cell death ( Figure S3A,B, Supporting Information) while reducing the percentage of cells expressing immature neuronal markers ( Figure S3C, Supporting Information). To investigate the impact of the frequency of stimulation on the cells, we evaluated immature and more mature neuronal markers (TUJ1 and MAP2, respectively) across several frequencies (i.e., direct current (DC), alternating current (AC) 10, 50, and 100 Hz). We found that stimulation with AC frequencies promotes TUJ1-positive neurons ( Figure S3E, Supporting Information). Furthermore, the higher frequency stimulation (100 Hz) caused an increase in more mature MAP-2 positive neurons. Based upon these results, we chose the optimum stimulation parameters of ±800 mV at 100 Hz in the subsequent studies to better understand how the pro-neuronal effect may be conferred through the use of the mechanically and electrically interactive CGS system. With these stimulation parameters, the expression of TUJ1 and MAP2 were highly increased by electrical stimulation relative to unstimulated groups ( Figure 3B,C and Figure S3F, Supporting Information).
To confirm that iPSC-derived neurons using soft CGS Stim differentiate into electrophysiologically active neurons, we used a fluorescence-guided approach to perform whole-cell patch-clamp recordings ( Figure 3D) and confirm the morphology of the differentiated neurons with two-photon imaging ( Figure 3E). After 7d of culture on the CGS, a small spike-let was observed from iPSC-derived neurons in the soft CGS Stim condition, whereas no distinct spike-let was found in soft CGS without an exposure to electrical stimulation ( Figure S3G, Supporting Information). After 14d of culture on the scaffolds, full action potentials were repetitively induced in response to step current injection. The action potentials were sensitive to the blockage of voltage-gated sodium channels by tetrodotoxin (TTX, 1 × 10 −6 m; Figure 3F). Typical biphasic inward-and outward-currents from voltage-gated sodium and potassium channels were also observed ( Figure 3G), suggesting the iPSC-derived neurons are electrophysiologically functional on the soft CGS Stim . Furthermore, the average spike amplitude was significantly larger with soft CGS Stim (p < 0.01 compared with soft CGS without electrical stimulation, Figure 3H-K). Utilizing the soft CGS Stim , 100% of the iPSC-derived neurons were capable of firing, with 20% neurons showed more mature firing patterns ( Figure 3J,K). This time frame is comparable or faster than that achieved by some previous pluripotent stem cell-based neuronal differentiation methods and does not require specialized mechanobiology or genome transfection (Table S2, Supporting Information).
To determine how the CGS platform compared to a standard small molecule technique, immunofluorescent and electrophysiological characterization of iPSC-derived neurons using a standard technique for neural differentiation media was performed. [5] The electrophysiological and neural markers were not as mature as those seen with the soft CGS with electrical stimulation at 21d ( Figure S4, Supporting Information).
Electrical Stimulation Promotes Neurotrophic Factor Signaling
To explore the mechanisms by which an exposure to electrical stimulation promotes neuronal conversion, we evaluated how gene expression of neuronal markers differed between unstimulated and electrically stimulated iPSCs (Figure 4A,B). Gene expression analysis confirmed downregulation of the neuroectodermal stem cell marker Nestin and induction of neural markers including Tubb3, Map2, and Syn1 in soft CGS and soft CGS Stim . In addition, the exposure to electrical stimulation drastically induced the efficiency of neural markers in stiff CGS Stim (Figure S5A, Supporting Information). This indicates that electrical activity-associated transcription factors increased, resulting in neuronal differentiation.
Given the induction of neural markers with electrical stimulation, genes of interest were identified from previous work showing transcriptome changes in neural stem cells from electrical stimulation. [22] Candidate neurotrophic genes were evaluated with real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR). CNTF changed significantly with electrical stimulation on both stiff and soft CGS ( Figure 4C and Figure S5b, Supporting Information). Genes such as Mmp14, Mmp9, nNos, Vegfa, Bdnf, and Nt3 did not change significantly with electrical stimulation on the CGS. To assess if similar increases were observed in protein production, enzyme-linked immunosorbent assay (ELISA) results confirmed that CNTF levels were elevated by electrical stimulation independent of substrate stiffness ( Figure 4D and Figure S5c, Supporting Information). Additionally, immunocytofluorescence verified the increased expression of the CNTF protein after electrical stimulation of iP-SCs on stiff and soft CGS ( Figure 4E and Figure S5D, Supporting Information). Immunocytofluorescence analysis to further characterize iPSCs on the scaffolds revealed that after 7d of differentiation (or 14d from iPSCs), the differentiated neurons on soft CGS and soft CGS Stim did not express Nestin; but the majority of cells did express TUJ1 ( Figure S5E, Supporting Information). Staining for the proliferation marker, Ki67, revealed more proliferation on glass and stiff CGS compared to the soft CGS, suggesting further maturation of cells on the soft CGS ( Figure S5F, Supporting Information). However, glial marker (GFAP) positive cells were not significantly different between groups within this timeframe ( Figure S5G, Supporting Information). Additionally, qRT-PCR quantification of the CNTF release at 5d after stimulation from the NPCs demonstrates that the single time point of electrical stimulation causes sustained increase in CNTF expression for multiple days ( Figure S6, Supporting Information).
Altering Combined Pathways Involved in Mechanical and Electrical Stimulation Orchestrates Neuronal Conversion
The soft CGS Stim combines mechanical and electrical cues and was the most effective stimulation paradigm to generate iPSCderived neurons (Figure 3). Compared to published differentiation protocols, the combined mechanical and electrical stimulation is as efficient for neuronal differentiation, if not more so, than those using glass substrate with exogenous factors (Table S2, Supporting Information). Consequently, we postulated that modulating the specific pathways altered by soft CGS Stim as described above and utilizing similar mechanisms as previously described would also influence neuronal differentiation of iPSCs cultured on a traditional glass substrate. To recapitulate changes in the RhoA/Rho-associated protein kinase (ROCK) expression seen with the soft CGS, we applied the RhoA/ROCK inhibitor, Thiazovivin (TV). To determine if the autocrine feedback from neurotrophic factors modulated by electrical stimulation including VEGFA, BDNF, NT3, and CNTF was critical, we applied exogenous factors to the iPSCs cultures ( Figure S7A, B, Supporting Information). It was found that the addition of CNTF plays a critical role in the generation of neurons as compared to others. Then, we examined if the combination of main pathways altered by mechanical (RhoA) and electrical (CNTF) stimulation produced greater conversion to mature neuronal cultures than affecting either pathway in isolation (Figure 5A,B).
iPSCs plated on glass were treated with TV alone, CNTF alone, or both TV and CNTF (TV+CNTF). The similarity between the cells propagated on the glass surface with TV+CNTF treatment and soft CGS Stim is striking. Interestingly, expression of the mature neuronal genes Map2 and Syn1 were significantly increased in TV+CNTF culture ( Figure 5C). Consistent with this result, seven days after TV+CNTF treatment, the iPSCs-derived neurons exhibit mature neuronal formation with significant increases in the number of TUJ1 and MAP2 neurons relative to the control glass substrate ( Figure 5D).
Next, we tested if the presence of stiffness-and electricalstimulation associated factors, TV and CNTF, are sufficient to promote the emergence of the neuronal electrophysiological properties in a standard culture system. The presence of TV or CNTF alone slightly increased the averaged action potential amplitudes but did not reach a significance (p = 0.57 and 0.67 for TV and CNTF compared with control, respectively, Figure 5E,F). Using combined treatment (TV+CNTF), the iPSC-derived neurons exhibited multiple action potentials in response to step current injections ( Figure 5G). Moreover, the action potential amplitudes were significantly larger (p < 0.05 compared with control, Figure 5F). With the combined treatment (TV+CNTF), most of the iPSC-derived neurons were capable of firing, with ≈30% of neurons showing significantly more mature firing patterns (Figure 5H). These results demonstrate that combination treatment by two different pathways (i.e., mechanical and electrical stimulation) can dramatically improve the efficacy of in vitro iPSC neuronal differentiation.
To more deeply explore whether soft CGS Stim and the combined TV+CNTF promoted earlier conversion of iPSC-derived neurons as opposed to altering electrophysiological characteristics, we assessed for immature and more mature neuronal markers (Tuj1 and MAP2, respectively) at an earlier time point (day 9 from iPSCs). We found that at the earlier time, more of the iPSC-derived neurons had mature characteristics suggesting that the new methods promote accelerated maturation of the iPSCderived neurons ( Figures S8 and S9, Supporting Information).
CNTF Mediates Electrical Stimulation-Enhanced iPSC Neuronal Differentiation
To address CNTF's causative role in the rapid appearance of iPSC-derived neurons after electrical stimulation, Cntf expression was reduced by Cntf-shRNA knockdown (KD) (CNTF KD ) with or without stimulation and compared with controls (scrambled-shRNA, Scramble KD , Figure 6A,B and Figure S10, Supporting Information). Subsequent ELISA studies also reveal that CNTF production decreased in CNTF KD with or without an exposure to electrical stimulation ( Figure 6C). iPSCs with scrambled-shRNA (Scramble KD ) did not alter the expected increase in TUJ1 + cells after electrical stimulation ( Figure 6D,E). However, the proportion of TUJ1 + cells in both CNTF KD groups (CNTF KD and CNTF KD+Stim ) were significantly decreased from the scramble plus electrical stimulation group (Scramble KD+Stim ) and similar to the unstimulated scramble group (Scramble KD ). These results demonstrate that the enhanced neuronal differentiation of iPSCs seen with electrical stimulation was not observed without CNTF, indicating CNTF as a possible mechanism.
We next investigated whether CNTF is also essential for the iPSC-derived neurons to express neuronal electrophysiological properties ( Figure 6F-I). Indeed, selective knockdown of CNTF with shRNA (CNTF KD+Stim ) prevented the induction of robust and repetitive induced action potentials upon current injection, whereas in the scrambled shRNA control (Scramble KD+Stim ) the spiking activity was preserved ( Figure 6F,G). In Scramble KD+Stim , iPSC-derived neurons were capable of firing (≈50% of cells), with ≈20% neurons showed more mature firing patterns (Figure 6H,I). However, iPSC-derived neurons in CNTF KD+Stim were not capable of firing (≈0%) ( Figure 6I). These results suggest that CNTF upregulated by an exposure to electrical stimulation is necessary to trigger the differentiation of iPSCs to electrophysiologically active neurons. By blocking CNTF expression, we do not see the rapid differentiation into mature neurons which supports our above conclusions that CNTF is an essential pathway for the proneuronal changes of electrical stimulation. These results further support prior studies which indicate that electrical stimulation can alter gene expression in progenitor cells and are a key mechanism for downstream effects. [22,38,39]
Characterization of Maturation of the iPSC-Derived Neurons
Mature post-mitotic markers were evaluated to further assess the maturity of the iPSC-derived neurons. Because the dual SMAD differentiation protocol favors anterior dorsal fates, markers associated with the telencephalon (FOXG1) and cortical layers were evaluated. During corticogenesis, neurons of the adult cortex form cortical layers, such as TBR1 + (Layers VI, V, Layer 1 Cajal-Retzius cells and subplate), CTIP2 + (Layers VI and V), SATB2 + (Layer II-IV), and BRN2 (Layers II-IV). [1,4] During development, the deeper cortical layers are generated first. We used these markers to identify more mature post-mitotic differentiation from iPSCs in our system using markers that are expressed in rodent brain during different times of cortical development (Figure 7). The enrichment of TBR1 + and FOXG1 neurons in the glass group suggest the generation of markers commonly associated with neurons that are generated earlier in development ( Figure 7A). However, soft CGS culture and to a greater extent with electrical stimulation (soft CGS Stim ) enabled generation of more mature neurons expressing post-mitotic markers CTIP2 + , SATB2, and BRN2 ( Figure 7A,B). We observed that a much higher fraction of the cells become immature neurons (Tuj1) at the earlier time point. The fraction of cells that also acquire markers expressed only in more mature post-mitotic neurons also increases (Satb2, Ctip, Brn2). Thus, our data demonstrate particularly efficient induction of more mature neurons within 14d of iPSC differentiation and introduce an approach to accelerate derivation of these neurons using mechanical and electrical cues.
To further characterize the iPSC-derived neurons obtained by inhibiting RhoA and adding CNTF, the same post-mitotic markers were analyzed. Addition of TV, CNTF, and TV+CNTF enabled the generation of neurons expressing CTIP2 + (Figure 7C,D). However, there was no generation of SATB2 + neurons or increase The summary result of averaged spike amplitude of Scramble KD+Stim and CNTF KD+Stim conditions. H,I) Maximum spike number and quantification of percentage of cells with indicated firing frequencies at 14d of differentiation on soft CGS with electrical stimulation. CNTF KD was tested and Scramble KD was utilized as a control group. B,C,E) Analyzed using a one-way ANOVA, followed by Tukey's HSD post hoc test with ** p < 0.01. NS indicates nosignificance between CNTK KD and CNTF KD+Stim (p > 0.99, p = 0.99, and p = 0.77, respectively). Values represent the mean of independent experiments (n = 4); error bars, SD. G,H) Analyzed using a paired Student's t-test with * p < 0.05, respectively. N = 6 and 4 for Scramble KD+Stim and CNTF KD+Stim , respectively from four batches of independent cell cultures.
in BRN2 + neurons. Although neuronal differentiation by the addition of both factors may induce more mature post-mitotic neurons, it appears mechanical and electrical stimulation likely affect multiple pathways to further accelerate the induction of neuron maturation.
Conclusion
This work developed a CGS platform modified with carbon nanofibers to apply mechanical and electrical stimulation to provide a method of efficient stem cell neuronal differentiation. A 3D, soft CGS (≈3 kPa) exposed to electrical stimulation rapidly generates iPSC-derived neurons with a more mature, post-mitotic identity and that have more mature electrophysiological properties by 14d of differentiation on the CGS. The described method has the additional advantage of being less reliant upon soluble factors (i.e., BDNF and NT3), [1,5] inhibitor cocktails (i.e., SMAD, Notch, or Wnt pathway inhibitors) [4,6] or viral vectors (i.e., NeuroD1, Ascl1 or Brn2) [9,40,41] (Table S2, Supporting Information), which have inherent drawbacks (i.e., prolonged procedure duration, low yields, and viral contamination). Utilizing mechanical and electrical cues via our CGS platform appears to efficiently produce more mature neurons assessed both by markers expressed and through electrophysiological measurements of the iPSC-differentiated neurons than many of the current protocols.
Notably, our results suggest two distinct pathways by which the combination of mechanical and electrical activity greatly affects human iPSC differentiation. The first mechanism involves stiffness-dependent neuronal differentiation associated with the RhoA pathway. Varying the stiffness of the CGS affected cytoskeletal polymerization and regulated the RhoA signaling cascade. Mechanical feedback from intrinsic forces through cell-cell or cell-matrix junctions controls RhoA expression by simultaneously activating ROCK. [42] The optimal stiffness used in these studies range from 2 to 3 kPa (soft CGS), similar to developing cortical tissue. [43] Increased stiffness in CGS by lowering CNF content (1:10, ≈12 kPa) limited neuronal conversion of iPSCs. Its downstream pathways such as YAP and SMAD, which regulate the pluripotency of stem cells, were also controlled, [18][19][20] further elucidating the importance of RhoA in stem cell differentiation and in a cell's response to mechanical stress. Future development of materials that have tunable stiffness is a valuable tool to optimize stem cell therapies and will help to further delineate how to modulate important differentiation pathways.
The second mechanism by which electrical stimulation enhanced neuronal conversion of iPSCs involves promoting trophic factor release (CNTF) and enables the transcriptional changes necessary for neuronal differentiation. To date, most of the literature on neuronal differentiation of iPSCs has concentrated on exogenously delivered neurotrophic factors (i.e., BDNF and NT3). [1,5] However, electrical activity plays an essential role in early development of the nervous system as the stimulation directly or indirectly regulates endogenous neurotrophic factor gene expressions including VEGFA, NT3, and BDNF. [12,25] Our work demonstrates that iPSCs respond to the electrical activity of their microenvironment, increasing endogenous CNTF release to enhance neuronal differentiation. If the CNTF pathway was inhibited, the rapid conversion of iPSCs was not observed. Our work adds to the prior literature which uses exogenously delivered CNTF to develop motor neurons and induce neuronal differentiation in retinal cells. [6,44,45] The use of CNTF as described above shows to our knowledge a new efficient method to differentiate neuronal cortical cells using CNTF or electrical cues. The ability to identify important pathways from stem cell biology and act on those pathways is a powerful tool to develop new strategies for neuronal regeneration applications.
Cortical development occurs with deep layer neurons produced first, followed by upper layer neurons expressing a variety of cortical markers. [1,4] The capability to rapidly create iPSCderived, mature post-mitotic neurons indicates effective neuronal maturation due to combined modalities of stimulation. Although animal in vivo models have been developed to investigate the regulatory role of neurons in various disease states including amyotrophic lateral sclerosis, spinal muscular atrophy, and addiction; [46][47][48] such models do not necessarily represent human pathophysiology. By deriving iPSCs from patients with these disease states in our 3D, soft CGS, in vitro disease modeling and drug screening by more accelerated directed differentiation of iP-SCs is possible. Further biological characterization is required to determine the exact characteristics of the post-mitotic neurons that were formed, but our study demonstrated the 3D, soft CGS can be used to generate neurons with markers often observed in more mature cortical neurons.
The 3D CGS can be perturbed at various time points with electrical stimulation to observe the response to better understand neurologic disease states. This provides an advantage over traditional 2D, inert polymeric, and organoid systems to allow for continuous interactions with in vitro stem cells. [10] These findings reinforce the concept that the combination of mechanical and electrical cues concurrently affects rapid neuronal conversion and emphasizes the need to often alter multiple pathways to enact change upon the nervous system. The 3D, conductive CGS allows for manipulation of the microenvironment of stem cells for regenerative and disease modeling strategies.
Experimental Section
Fabrication and Characterization of the CGS-Preparation of GOs: GOs were fabricated using the modified Improved Hummers method with modification. [49] Briefly, 1 g of graphite flakes (Micro890, high purity graphite flake with a D50 particle size between 7-11 µm, 99%+ carbon purity) supplied from Ashbury Carbons (Asbury Carbon, NJ) was added into acidic solution (9:1, H 2 SO 4 :HNO 3 , Sigma-Aldrich), and the solution was stirred without heat for 30 min. 6 g of potassium permanganate (KMnO 4 ) (Sigma-Aldrich, St. Louis, MO) was slowly added into the solution, and the solution was covered with tin foil, heated to 50°C, and incubated overnight. 5 mL of H 2 O 2 ice-cold solution was added and stirred for 2 h. The solution was centrifuged at 4500 rpm for 45 min. The purification with HCl (0.1 m) and three consecutive washes with DI H 2 O were applied to remove unreacted carbon and metal ions. Collected GOs (2 mg mL −1 ) were stored at 4°C.
Fabrication and Characterization of the CGS-Preparation of 3Dnanoconfined conductive graphene scaffold (3D and 2D CGS): CNFs (Sigma Aldrich) had a diameter ≈100 nm and length ≈20-200 µm (manufacturer's specifications). Varying ratios of CNFs (0, 1:10, 1:2, and 1:1) were suspended in GO solution and were sonicated using a bath sonicator (Branson Ultrasonic, Danbury, CT) for 1 h at 60 W followed by centrifugation at 800 rpm for 5 min (Allegra 25R, Beckman Coulter, Indianapolis, IN) to sediment CNF bundles. The concentrated GO:CNF suspensions were degassed to remove any bubbles. Reducing agents including sodium iodide and ascorbic acid, at a concentration of 10 wt% to induce selfassembly of CGS, were added into suspensions; and it was poured into cylindrical molds. The suspensions formed the CGS at 80°C within 24 h. The CGS had dimensions of 6 × 2 mm (diameter × height). To form a 2D CGS, the same steps were used to form 3D CGS. Subsequently, the CGS was placed between glass slides and then weight of 1 kg was applied for 1 h. The CGSs were neutralized by washing with deionized water until pH of supernatant equilibrated to 7. Collected CGSs were autoclaved and stored at 4°C.
Fabrication and Characterization of the CGS-Characterization and measurement: Morphological properties were assessed by scanning electron microscopy (Zeiss Sigma FESEM). Specimens were sputter coated with Au-Pd, and attached to aluminum stabs with double-sided copper tape. The morphological status was detected using In-Lens secondary electron detector with accelerating voltage 10 kV. Rheological properties of the CGS were measured using a stress-controlled mechanical analyzer (AR-G2, TA instrument, New Castle, DE). Static compression tests were performed at 37°C in the strain ramp mode with a ramp rate of 5 µm s −1 . The dimensions of the tested samples were 6 × 2 mm (D × T, D: diameter; t: thickness). The compressive stress of the CGS was derived from the force divided by the cross-sectional area of the scaffolds. The bulk electrical conductivities of cylindrical CGSs were measured by the four-probe method with metal electrodes attached to the ends of samples. The electric field behavior in the CGS was simulated by ANSYS Maxwell static 3D electromagnetic finite element simulation of the electric field distribution (ANSYS HFSS, ANSYS Corp., Canonsburg, PA). In particular, an AC conduction analysis was performed to plot the electric field distribution in the CGS. The simulator conducts the Maxwell equations in a defined region of space with diameter (OD: 6 mm), height (H: 2 mm), and square electrodes (1 × 1 mm). The simulator calculated electric field distribution (mV mm −1 ) while the current density ranging from 0 to ±2 V gradually increased.
Neuronal Differentiation of iPSCs on CGS: All stem cell procedures were approved by Stanford's Stem Cell Research Oversight committee (SCRO: 616). Culture of the human iPSC line (human iPSC was generated from BJ fibroblasts using mRNA reprogramming factor sets leading to the overexpression of OCT4, SOX2, KLF4, and c-MYC as described previously) was carried out on a matrigel-treated 6-well plate in mTeSR. [50] Cells were incubated at 37°C in 5% CO 2 , and passaged every 5-7d with Accutase (Innovative Cell Technologies, San Diego, CA). iPSCs from passage 51-55 were used in these studies.
Day (0): Human iPSCs at ≈90% confluency were first washed with room temperature 1× DPBS without Ca 2+ and Mg 2+ once. Wash was aspirated and cells were primed by the treatment with NPC differentiation base medium for 7d (4 mL per 6-well plate) under standard cell culture condition (37°C, 5% CO 2 ). Fresh medium was replenished every 24 h. Day (7): After the induction procedure, NPCs were washed with DPBS once. The cells were then detached from the plates with Accutase (1 mL per well) and placed in incubator (37°C). After 5 min, the side and bottom of the plate was gently rubbed to dislodge the cells from the plate surface. Then cells were collected into a 15 mL conical tube using a 10 mL serological pipette and 9 mL of DMEM/F12 containing RhoA/ROCK inhibitor, TV (2 × 10 −6 m) was added. Cells were centrifuged at 1200 rpm for 5 min at room temperature. After centrifugation, supernatant was aspirated and the cell pellet was resuspended in NPC maintenance medium + TV (2 × 10 −6 m). Cells were re-plated on CGS (ranging from 3 to 12 kPa) (100 000 cells cm −2 ) previously coated with 10 µg mL −1 poly-l-ornithine solution Day (1; or day 8 from iPSCs)-Electrical Stimulation: The media was aspirated and the scaffold was transferred to electrical stimulation chamber. In vitro iPSC-electrical stimulation was applied by means of an alternating current-electrical stimulation, custom built cell culture chamber (Figure 1a). The chamber consists of parallel indium tin oxide (ITO) patterned electrodes, separated by a distance of 3 mm. The electrodes were connected to a waveform generator (Keysight, Englewood, CO). The cells cultured on CGS were exposed to electrical stimulation for 1 h. After the stimulation was applied, the scaffolds were carefully transferred to a 24well plate with 1 mL of fresh medium. Day (2; or day 9 from iPSCs)-Cell viability after electrical stimulation: To optimize the cell viability after the stimulation, the metabolic rate of reasazurin (Life Technologies, Carlsbad, CA) and Live/Dead staining (Life Technologies, Carlsbad, CA) were performed. For the metabolic assay, the media containing 0.5% of reasazurin was added to the cell culture. After a 6 h incubation, the media (10 µL) was collected and mixed with 990 µL of DPBS. The fluorescence intensity of solution was recorded using a multiplate reader (Ex: 535 nm; Em: 585 nm). Lysed cells were utilized as a positive control. The same density of cells cultured on a TC plate coated with the same coating substances including PLO/Laminin were used as a negative control. For the Live/Dead assay, the supernatant was aspirated and the cells were incubated with Live/Dead solution. After staining, the cells were visualized by fluorescence microscopy (Keyence All-in-One Fluorescence Microscope (BZ-X700) (Keyence corp., Itasca, IL) (Live, Ex: 485 nm; Em: 535 nm and Dead, Ex: 535 nm; Em: 565 nm).
Immunocytofluorescence: On day 7 (or day 14 from iPSCs), cells were fixed with 4% paraformaldehyde (electron microscopy sciences) for 1 min and then permeabilized and blocked with blocking buffer (0.1% Triton X-100, 1% BSA (Fisher BioReagents, Santa Clara, CA), 5% normal goat serum (NGS, Invitrogen, Waltham, MA) for 1 h at room temperature. Primary antibodies (listed in Key Resource, Table S3, Supporting Information) were incubated in blocking buffer at 4°C overnight, followed by three 15 min PBS washes and detected by secondary antibodies (Alexa Flour 488, 555, or 647, Life Technologies). Samples were counter-stained with DAPI (Sigma-Aldrich, St. Louis, MO) to visualize nuclei and mounted with Fluoromount Aqueous Medium (Sigma-Aldrich, St. Louis, MO) before imaging. Samples were imaged on a Keyence All-in-One Fluorescence Microscope (BZ-X700) (Keyence corp., Itasca, IL) using 20× or 60× objectives. Image of cells on CGSs are presented as maximum intensity projections of z-stacks generated from BZ-X Analyzer. The neuronal differentiation efficacy of iPSCs was quantified by counting the total number of TUJ1-positive cells with neuronal morphology. The number of TUJ1-positive cells was divided by the total number of cell nuclei (DAPI-positive) to demonstrate the percentage of neuronal differentiation. Calculations were performed using randomly selected four different locations from four individual samples at 20× magnification. In addition, co-localization of proteins such as YAP and p-SMAD in cell nucleus was quantified by counting the number of co-positive for DAPI. The overall of pattern of those staining was categorized as nuclear only or cytoplasmic only through use of z-stacks. Z-stacked microscopy images of two different channels for YAP/p-SMAD and DAPI were merged into one image to determine overlap. All image quantification for percentages of marker-positive cells was performed by a blinded individual via ImageJ and manual counting.
RNA Isolation and Quantitative Real-Time PCR (qRT-PCR) Analysis: Total RNA was extracted from cells using a Qiagen RNeasy Plus Micro Kit (Qiagen, Germantown, MD). After accomplishing first-strand cDNA synthesis by iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA), quantitative real-time polymerase chain reaction (qRT-PCR) was performed with Taqman-polymerase and primers (Qiagen, Germantown, MD) for gene expression analysis. qRT-PCR was carried out on a QuantStduio 6 Flex Real-Time PCR System (ThermoFisher, Waltham, MA). The Delta-Delta CT method was utilized for relative expression levels with GAPDH as a housekeeping gene and iPSCs grown on glass as references. Taqman primers used in these studies are listed in Key Resource, Table S3 (Supporting Information).
ELISA Analysis: For CNTF ELISA, the conditioned media was collected at 24 h after exposure to electrical stimulation. Samples were assayed by CNTF Development kit from Peprotech (Peprotech, Rocky Hill, NJ) according to the manufacturer's instructions.
Factor Addition/KD Study: Day 1 (or day 8 from iPSCs):To perform the factor addition study, human NPCs cultured on a precoated glass substrate (PLO/Laminin with 100 000 cells cm −2 ) were conditioned with RhoA/ROCK inhibitor (Thiazovivin, 2 × 10 −6 m), CNTF (1 ng mL −1 ), or a combination of both. Cell cultures were maintained for 7d with a medium change every other day. After the culture, cells were utilized for subsequent analysis including immunocytofluorescence and qRT-PCR analysis.
Generation of CNTF KD iPSCs:To conduct the KD study, CNTF expression in human iPSCs was inhibited using CNTF shRNA (SC-41921-V) and control scrambled shRNA lentiviral particles (SC-108080) purchased from Santa Cruz Biotechnology (Santa Cruz Biotechnology, Santa Cruz, CA). Briefly, iPSCs were cultured in mTeSR on matrigel-coated 6-well plates until 30% confluent then treated with 20 µL concentrated viral particles containing shCNTF or the nontargeting control (scrambled, SC) overnight (day 0). The viral particle-containing medium was removed and cells were allowed to recover (day 1) before selecting with 0.5 µg mL −1 puromycin (day 2). Cells were cultured continuously in mTeSR with puromycin for 7d total. Cell samples were either harvested for qRT-PCR, immunocytofluorescence, and in-cell western blot analysis to monitor CNTF knockdown efficiency. After the selection was completed, CNTF knockdown cells were utilized for subsequent study and analysis as described above. For the in-cell western assay, the subcellular level of CNTF was quantified in situ using infrared (IR) intensity. After the inhibition of CNTF, the cells were plated in a 96-well plate (20 000 cells per well) and were immunolabeled with an IR-conjugated secondary antibody using the standard immunocytofluorescence protocol. After the completion of the staining procedure, the plate was imaged using an Odyssey Fc IR imaging system (LiCor, Lincoln, NE). CNTF intensity was normalized to GAPDH expression by using the Odysset CLx Image Studio Analysis software. After thorough analysis, CNTF KD iPSCs were primed to NPC differentiation medium for 7d and then the cells were further utilized for the study of mechanical and electrical impact on www.advancedsciencenews.com www.advancedscience.com neuronal differentiation on the CGS by following the protocol as described above.
Electrophysiology: On day 7 and 14 (or 14 and 21d from iPSCs), electrophysiological properties of the differentiated neurons on the CGS were investigated. The cultured iPSC-derived neurons were transferred from a 37°C incubator to the recording chamber, superfused with artificial cerebrospinal fluid (ACSF) containing 125 × 10 −3 m NaCl, 2.5 × 10 −3 m KCl, 2 × 10 −3 m CaCl 2 , 1.25 × 10 −3 m NaH 2 PO 4 , 1 × 10 −3 m MgCl 2 , 25 × 10 −3 m NaHCO 3 , and 15 × 10 −3 m d-glucose (300-305 mOsm) at a rate of 2-4 mL min −1 . All solutions were saturated with 95% O 2 and 5% CO 2 . The iPSCderived neurons were recorded at room temperature (20-22°C) within 1 h after transferring. Due to the opaque property of the CGS substrate, which prevented using regular differential interference contrast (DIC) optics, epi-fluorescence signal-guided whole-cell patch-clamp recording was performed. eGPF-expressing iPSCs were identified with a water-immersion objective lens (60×, NA = 1.1; Olympus, Japan) mounted on an upright microscope (Olympus BX-51) equipped with a mercury light source, and appropriate filter sets For voltage clamp recording, the series and input resistance of the iPSC-derived neurons was measured by injection of hyperpolarizing pulses (−5 mV, 100 ms). The initial series resistances were <20 MΩ. In current clamp recording mode, a bridge balance was applied to compensate series resistance. Resting membrane potential was adjusted to −70 mV via somatic current injection. Current steps (800 ms, 5-60 pA with 5 pA step size) were injected to the iPSC-derived neurons to test the membrane properties and to evoke action potentials. Recordings were obtained with a Multiclamp 700B (Molecular Devices, USA). Signals were filtered at 2.2 kHz and digitized at 10 kHz with NI PCIe-6259 card (National Instruments). In some patch-clamp recordings, two-photon imaging was performed to reveal the morphology of the Alexa Fluor 488 (2 ×10 −6 -5 ×10 −6 m) filled iPSCs (Figure 7b,c) with a custom built twophoton laser-scanning microscope equipped with a mode-locked tunable (690-1040 nm) Ti:sapphire laser Mai Tai eHP (Spectra-Physics, USA) tuned to 925 nm. The electrophysiology and imaging data were acquired with custom-made software written in Matlab (Mathworks) described previously. [51] The individual making the measurements was blinded from the groups.
Quantification and Statistical Analysis: All the data are presented as the mean ± standard deviation (SD) of four independent experiments. The n values indicate the number of independent experiments conducted or the number of individual experiments. An analysis of variance (ANOVA) test was used for multicomponent comparisons (n > 3 independent variables) after the normal distribution was confirmed using Shapiro-Wilk normality test. Tukey post hoc analysis was performed to investigate the differences between variables. Image analysis, cell counting, and electrophysiological analysis were blinded and performed by independent investigators. The statistical parameters are summarized in Table S4 in the Supporting Information.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 10,532.2 | 2021-02-18T00:00:00.000 | [
"Engineering",
"Materials Science",
"Medicine"
] |
Nanocarriers for Diagnosis and Targeting of Breast Cancer
Breast cancer nanotherapeutics is consistently progressing and being used to remove the various limitations of conventional method available for the diagnosis and treatment of breast cancer. Nanoparticles provide an interdisciplinary area for research in imaging, diagnosis, and targeting of breast cancer. With advanced physicochemical properties and better bioavailability, they show prolonged blood circulation with efficient tumor targeting. Passive targeting mechanisms by using leaky vasculature, tumor microenvironment, or direct local application and active targeting approaches using receptor antibody, amplification in the ability of nanoparticles to target specific tumor can be achieved. Nanoparticles are able to reduce cytotoxic effect of the active anticancer drugs by increasing cancer cell targeting in comparison to conventional formulations. Various nanoparticles-based formulations are in the preclinical and clinical stages of development; among them, polymeric drug micelles, liposomes, dendrimer, carbon nanotubes, and nanorods are the most common. In this review, we have discussed the role of nanoparticles with respect to oncology, by particularly focusing on the breast cancer and various nanodelivery systems used for targeting action.
Introduction
The development and inventions of various nanoscale technologies have provided new field of research among chemistry, biology, toxicology, medicine, material science, engineering, and mathematics. Nanotechnology is the manipulation of cellular and molecular components of matter. Nanotechnology yields the incredibly small particles of size ranging between tens to hundred nanometers. These small particles are known as nanoparticles, which are considered as the engineered materials mainly cluster of molecules, atoms, and molecular fragments. These innovations are referred to as nanomedicines by the National Institute of Health and have the potential of carrying chemotherapeutic agents to the targeted site. Being a nanotechnologically engineered material, nanocarriers must have four characteristics of its own such as size of the material should be in nanometer, properties of the material should be in nanoscopic dimensions, behavior of the material should be displayable with suitable mathematical expression, material should be man-made [1]. Rationale behind the development on nanocarriers is that polymeric particles, metal, and semiconductors have unique structural, magnetic, optical, and electronic properties which make them a suitable drug delivery carrier for targeting [2].
Nanoscale devices form the concept of biodegradable self-assembled nanoparticles which can be targeted to the cancer-affected area and can be used as contrast imaging agents [3]. Breast cancer is a major ongoing public health problem, and at present, there are less curative options for the patients suffering from breast cancer, while emerging nanotechnologies give a promising new approach for the early detection and treatment of breast cancer. Nanoparticles provide an interdisciplinary area for research in imaging, diagnosis and targeting of breast cancer ( Figure 1).
Human Breast Cancer
It is also called prostate cancer which originates from breast tissue. Breast cancer is most frequently diagnosed in women; approximately up to 7% of breast cancers are being diagnosed in women having their age below 40 years and less than 4% in women below the age of 35 years [4]. In young women, the breast cancer is uncommon [5]. Breast cancer is a heterogeneous disease and has different subtypes, which are based on the expression level of progesterone receptor, estrogen receptor, and HER-2/neu receptor (human epidermal growth factor receptor 2) [6]. Breast cancer stem cells play a major role in growth and formation of metastastic breast cancer. Breast cancer stem cells have a potential for undergoing self-renewal and side by side give rise to daughter cells which results in the formation of tumor cells in bulk having self-replicating potential. Breast cancer stem cells make small-small part of most tumors, whereas in others like in melanoma it comprises 25% of total mass [7]. Based on the TNM (tumor nodes metastasized) system, breast cancer can be divided into four stages: based on the size of tumor (T), whether the tumor has spread to the lymph nodes (N) in the armpits or not, and whether the tumor has metastasized (M).
Types of stages in breast cancer are as follows.
(1) Stage 0: it consists of three types of breast carcinoma. (2) Stage I: it is divided into two stages Ia and Ib.
(a) Stage Ia: tumor is 2 cm or small and not found outside the breast. (b) Stage Ib: small clusters are found in the lymph nodes and either tumor is 2 centimeters or not found in breast.
(3) Stage II: it is divided into Stage IIa and Stage IIb.
(a) Stage IIa: tumor is found to be larger than 2 cm but not larger than 5 cm. Cancer has not spread to the lymph nodes. (b) Stage IIb: tumor is larger than 2 cm but not greater than 5 cm. Cancer spreads to 1 to 3 axillary lymph nodes or to the lymph node near the breastbone.
(4) Stage III: it is divided into IIIa and IIIb.
(a) Stage IIIa: the tumor is larger than 5 cm and cancer spreads to 1 to 3 axillary lymph nodes. (b) Stage IIIb: the tumor spreads to 9 axillary lymph nodes. (c) Stage IIIc: tumor may be of any size causing swelling or ulcer and has spread to chest wall. Cancer has spread to 10 or more axillary lymph nodes.
In treatment point of view, Stage IIIc is divided into operable and inoperable.
(5) Stage IV: cancer has spread to other parts of the body, mostly to lungs, bone, or liver. Now nanotechnology comes up with a bright way to overcome the problem related to breast cancer. Many researchers focus on different types of nanotechnology-based drug delivery system and their mechanism of action in such type of carcinoma. Various types of nanoparticles are used for the detection of breast carcinoma; among these, carbon nanorods (gold nanorods [8]), nanowires (Au nanowires [9]), and nanobarcodes are the most common. Semiconductor quantum dots (QDs) are a new advancement in nanotechnology; these are small nanoscale light-emitting particles and are better in comparison to fluorescent protein and organic dyes. Unique electronic and optical properties of semiconductor quantum dots make them suitable agents for cellular and in vivo biomolecular imaging [10]. Yu et al. had synthesized cadmium oxide-selenium powdered QD of 2-nanometer diameter approximately that produces a blue emission and a 7-nanometer diameter quantum dots showing red light emission [11]. Superimposed optical images and Xray have shown high resolution and high sensitivity for the location of not only bigger breast cancer but also for small abnormal tumor daughter cells. Chemotherapy in the form of nanoparticles can be delivered by active and passive pathway. Nanotechnology is used in molecular cancer diagnosis by employing biomarker and nanoparticles probes. Multiple ligands can be conjugated on tiny single nanoparticles and provide a multivalent effect for increased specificity and binding affinity, hence used as suitable diagnostic agent.
Chemotherapeutic Nanoparticles
Chemotherapeutic drugs are "cytotoxic" in nature, which means cell-killing drugs. They play a vital role in the management and treatment of both initial-stage breast cancer and advanced breast cancer. Cytotoxic chemotherapy is essential for palliation of women with hormone-insensitive or hormone-refractory breast cancer and is administered into human body by taking therapeutic goals into consideration such as relief from pain, disease progression, relief from symptoms, prolonged life of patient, and improvement in mood disturbances of suffering women [12]. It is administered orally or by intravenous injection. It works systemically by killing cancer cells throughout the body along with normal cells, which leads to various short-term and long-term side effects. Mostly chemotherapy is used in advanced breast cancer, but may also be used to treat early-stage breast cancer. By using nanoparticles as carrier, cytotoxic side effects may be reduced and targeting may be achieved.
Even the most advanced chemotherapeutic agents do not differentiate between normal cells and cancerous cells efficiently, which leads to nonspecific distribution of drug in the body and causes systemic toxicity and adverse effects [13]. The maximum allowable dose of the drug gets limited; in order to achieve the desired therapeutic effect in the tumor tissue, large quantity of drug has to be administered in order to achieve anticancer effect, but this is not economical and undesirable toxicity may also appear [13,14]. Nanoparticles are the promising carrier system for the targeted delivery of chemotherapeutic agents by using both active and passive targetings, and systemic toxicity or normal cell toxicity can be avoided [14].
Active and Passive Drug Delivery
Mostly the nanoparticles accumulate in tumor cells as expected because of pathophysiological characteristics of tumor blood supplying vessels. There is an increased demand of oxygen and nutrients to the tumor cells or tissue as it is increasing in size as well as in its shape. In order to supply nutrients and oxygen, new blood capillary system is being developed which is not developed properly and hence becomes permeable to some particles of specific size [15]. Types of targeting are shown in Figure 2.
Passive Targeting by Nanoparticles.
Passive targeting can differentiate between normal and tumor tissues and has the advantage of direct permeation to tumor tissue ( Figure 3). Drug administered passively in the form of prodrug or inactive form, when exposed to tumor tissue, becomes highly active. Nanoparticles that are expected to show localization on specific tissues or at specific sites of disease follow the biological mechanisms such as ERS (enhanced retention system) or EPS (enhanced permeation system) effect. To prolong the circulation and to achieve increased targeting efficiency, the size should be below 100 nanometers in diameter and the surface of the nanoparticles should be hydrophilic in nature in order to circumvent clearance by macrophages. The hydrophilic surface of the nanoparticles provides protection against plasma protein adsorption to the surface, and this can be made possible by using hydrophilic polymer coating, like polyethylene glycol (PEG), polysaccharides, poloxamines, or poloxamers or by using block or branched amphiphilic copolymer [16,17]. Passive targeting system is further classified into (a) leaky vasculature, (b) tumor microenvironment, and (c) local drug application.
(1) Leaky Vasculature. Maeda and Matsumura had first displayed the enhanced permeation and retention effect by using polymer to form nanoparticles. Concept of enhanced permeability and retention is based on two factors [18].
(a) The capillary endothelium system in malignant tissue shows more permeation to macromolecules in comparison to normal tissue endothelium; this makes circulating polymeric nanoparticles permeable into the tumor. (b) Tumor lacks lymphatic drainage; hence, more drug gets accumulated in side tumor tissue. By using a suitable biodegradable polymer, the concentration of drug gets 10 to 100 times higher than that of free circulating drug.
(2) Tumor Microenvironment. Tumor microenvironment provides the advantage of the passive drug targeting. Active state of chemotherapeutic agent is conjugated with tumorspecific material and administered into the body. When this drug-polymer conjugate reaches its desired destination, tumor environment converts into more active form. This phenomenon is called tumor activated prodrug therapy. Mansour et al. had developed an albumin-bound form of doxorubicin and showed in an in vitro study, that doxorubicin was efficiently cleaved by matrix metalloproteniase-2 [19].
Active
Targeting. By conjugating the nanoparticles with a drug to desired target site, an active targeting may be achieved ( Figure 4). Active targeting allows the increased accumulation of the drug in cancer tissue. Directing the nanoparticles to the cancer cell can be done by the following ways. This approach is basically based on the specific interactions, like lectin carbohydrate, ligand receptor, and antibody-antigen [22].
(1) Carbohydrate-Directed Targeting. An excellent example of active drug targeting is lectin carbohydrate. Carbohydrates present on the surface of tumor cell are different than those in the normal cell. Lectin is a nonimmunological protein, which is capable of binding and recognizing the glycoproteins which are present on the surface of the cell. Certain carbohydrates interact with lectins to form the cell-specific binding moieties. These carbohydrates moieties can be used for target drug delivery system for lectins (lectin direct targeting); similarly, lectins can also be used for the targeting of the surface carbohydrates (reverse lectin targeting). Specific carbohydrate present on tumor can be targeted and anticancer effect may be achieved.
(2) Receptor Targeting. Endocytosis plays a major role in this type of active targeting. Ideally drug is being conjugated to polymer carrier; this carrier gets incorporated into the cell and localized at the cell surface. Once the drug-polymer conjugate reaches the tumor intracellular environment, dissociation of drug takes place and anticancer effect is being achieved. Three essential molecules can be delivered by this targeting system: (a) antigen or receptors, (b) drug-polymer conjugates, and (c) ligands or antibodies.
(3) Antibody Targeting. Kirpotin et al. had described the evidence of monoclonal antibody mechanism for targeting nanoparticles to solid tumor tissue in vivo. Prepared formulation was targeted towards the HER-2 (human epidermal growth factor receptor 2) cancer and was prepared by conjugating the anti-HER-2 monoclonal antibody fragments with liposomal-grafted polyethylene glycol chain. Increased cellular uptake of the drug was observed; hence, antibody targeting provides the new opportunities for drug delivery system in breast cancer [22].
Types of Nanodelivery System
Different types of nanodelivery system having different physicochemical properties with different materials have been formulated so far in order to cure diseases. Most commonly studied among these are polymeric micelles, dendrimers, liposomes, carbon nanotubes, and nanorods (Table 1).
Polymeric-Based Drug Carrier.
The drug is either covalently bound or physically entrapped in polymer matrix, depending on the method of preparation [23]. Polymers can be divided into two groups: natural and synthetic polymers. Polymers like chitosan, albumin, and heparin occur naturally and have been a choice of material for the delivery of DNA, protein, and oligonucleotides as well as drug. Gradishar et al. had formulated conjugate of paclitaxel with serum albumin to form nanoparticles formulation. This drug-polymer conjugate has been applied for the treatment of metastatic breast [24].
Polymeric Micelles.
Micelles are generally colloidal particles having a size range usually in between 5 and 100 nanometers in diameter. Micelles mainly consist of surface active agents (surfactant) or amphiphiles, which are made up of two different regions, hydrophobic tail and mostly hydrophilic head. Amphiphiles exist as monomers in aqueous medium at low concentration as a true solution. By increasing the concentration of amphiphiles, self-assembled aggregations are being formed called micelles within the narrow concentration window [27]. The concentration above which the micelles formation takes place is called CMC (critical micelles concentration). Above the CMC, the micelles are being formed by the dehydration of the hydrophobic tails with favorable entropy. Van der Waals bonds are responsible for the formation of micelle core by combining hydrophobic polymers in symmetrical way. Conventional oral administration of anticancer drugs showed reduced absorption and reduced bioavailability [27].
Polymeric micelles provide an excellent advantage of smaller size in comparison to liposomes. Polymer selection plays an important role in the formation of micelles, and the selection for the micelles formation is based on the characteristics of both hydrophobic and hydrophilic block polymers. Hydrophilic outer shell of the micelles gives steric stability and prevents rapid uptake of formulation by reticulo endothelial system and provides longer duration of circulation time inside the body [28]. Hydrophobic and hydrophilic polymers are the block polymers for the formation of the micelles which assemble themselves in an aqueous environment to form hydrophobic core which is being stabilized by hydrophilic shell. By arranging these block polymers, different patterns of micelles are being formed; hence, these polymers are called diblock copolymer (A-B type copolymers), triblock copolymer (A-B-C type copolymer), and grafted polymers [29].
Xue et al. had developed biodegradable diblock amphiphilic copolymer (mPEG-b-p(LA-CO-MCG) having carboxylate group for platinum chelation. The cytotoxicity of the drug-polymer conjugate towards breast cancer was lower than of cisplatin but comparable to that of oxaliplatin. This polymer conjugate showed the potential use as a targeted carrier vehicle due to its reduced side effect [30]. Zhang et al. had developed a combination of salinomycin and octreotidemodified paclitaxel-loaded PEG-B-PCL polymer micelles. This combination therapy showed improved treatment of breast cancer. Combination was designed in order to eradicate both breast cancer stem cells and breast cancer cells which cannot be eradicated by conventional chemotherapy. Elimination of cancer cell is based on the mechanism of receptor-mediated endocytosis [31]. Octreotide-modified paclitaxel follows the active targeting mechanism, whereas salinomycin follows the passive targeting mechanism. Liu et al. had formulated curcumin-loaded biodegradable selfassembled polymeric micelles called as curcumin polymeric micelles which showed good water solubility and had met the intravenous administration requirements. Sustained release and lower cytotoxicity of curcumin polymer micelles may serve as candidate for antimetastasis agent for breast cancer [32].
Polymer-based imaging with near-infrared (NIR) fluorophores provides efficient advantages for tumor imaging, such as improved plasma half-lives, large surface area, less toxicity, stability, and improved targeting. For in vivo imaging of tumor, NIR fluorophores are increasing its hold [33]. Along this, NIR fluorophores do not require expensive instruments, a local cyclotron, or incontinent radionuclidelabeling step [34]. Kim et al. have developed NIR Cy5.5labeled hydrophobically modified glycol chitosan nanoparticles (HGC-Cy5.5) with molecular weight ranging from 20 to 250 kDa. In vivo biodistribution study revealed that lowmolecular-weight HGC-Cy5.5 showed faster clearance from the body in comparison to high-molecular-weight HGC-Cy5.5, whereas high-molecular-weight HGC-Cy5.5 had high tumor targeting capacity than low-molecular-weight HGC-Cy5.5. These probes provide promising imaging agents, which are used to detect solid tumor [35]. Kim et al. have developed NIR fluorescent-activatable polymeric nanoparticles (Cy5.5) linked effector caspase-specific peptide having efficient biocompatibility and cell permeability. These nanoparticles were specifically apoptosis sensitive nanoparticles (80-100 nm). This probe could be used as an imaging agent for apoptosis in single cells [36].
5.1.2.
Dendrimer. Nanosized branched structures are called dendrimer. The name comes from the Greek word "dendron" which means tree-like structures. With various architectural variations, uniformity in size, branching length, shape and increased surface area can be achieved. Dendrimers show higher biocompatibility and certain changes in the structure of dendrimers; pharmacokinetic parameters can also be predictable. Hence, dendrimers can be optimal and unique carrier system for anticancer drug [37,38]. Dendrimer can be grown towards outward direction from the central core; this process is known as divergent method designed by Newkome and Tomalia [39][40][41], or it may be formulated by the Frechet's method, in which the dendrimers are made toward inside direction, that is, from the periphery to inner core [42]. Dendrimers are also described on the basis of the branching unit they consist of, like dendrimer with central branch core molecule is considered as generation 0 (G0) and with each successive addition of increased branching point they may be considered as G1, G2 and so forth. Dendrimers may be categorized by terminal generation, like G6 consists of polymer with five generations of branching points. Dendrimers form the globular shape and attain higher diameter with increasing branching generation [43]. Dendrimers and dendrons are monodispersed and usually highly symmetric, spherical compounds. Dendrimer can be used as carrier system for the treatment of diseases like AIDS, cancer, malaria, and so forth.
Wang et al. had synthesized G4 polyamidoamine dendrimer (G4 PAMAM-D) conjugate with antisense oligodeoxynucleotides (ASODN). The conjugate showed more stability less toxicity, and increased bioavailability. In vivo studies on xenograft mice model showed that the conjugate has more accumulating efficiency to inhibit tumor vascularisation of breast tumor than naked ASODN [44]. Gupta et al. had conjugated doxorubicin (DOX) to polypropylene imine (PPI) as well as folic acid to fifth-generation polypropylene imine. The conjugated ligands DOX-PPI-FA and PPI-FA have less haemolytic activity, thus more stable and less toxic [45]. Fluorescence studies showed higher cellular uptake by tumor cell of the formulated conjugate ligand. Results of the study revealed that folic-acid-conjugated PPI dendrimers may be a better choice for anticancer drug targeting in the future.
Samuelson et al. have developed translocator protein (TSPO) dendrimer imaging agent with significantly increased targeting and imaging characteristics. The reported study revealed that TSPO can be used as an imaging agent in brain, breast, and ovarian cancer as well as in prostate carcinoma. The main synthesizing material used to produce TSPO dendrimer was 1-(2-chlorophenyl) isoquinoline-3-carboxylic acid (ClPhIQ acid). Hence, TSPO targeted dendrimer is a real-time imaging agent for breast cancer [46].
5.1.3.
Liposomes. Liposomes drug delivery system can change the biodistribution and pharmacokinetics of the drug in such a way that it shows overall improvement in the pharmacological properties of chemotherapeutic agents [47][48][49]. Due to the success achieved by the liposomal-based chemotherapeutic agents in clinical trials, liposomal formulations are currently used for the treatment of the breast cancer like Doxil liposomal preparation [50]. Liposomes consist of lipid bilayer membrane which surrounds the aqueous core. Depending on the solubility of active pharmaceutical ingredient, either it is loaded to lipid membrane or to the hydrophobic core. On the basis of lamellarity and size, liposomes are classified into three: small unilamellar vesicles, large unilamellar vesicles, and multilamellar vesicles [51]. At present, various kinds of cancer drugs have been loaded to this lipid-based system by using different preparation methods. Liposomes are the potential carrier system for anticancer drugs due to the following three pharmacological parameters.
(a) Liposomes provide slow and sustained release. (b) Liposomes are able to reduce cytotoxicity of chemotherapeutic agents by altering the biodistribution of entrapped drug. (c) Liposomes enhance the drug accumulation.
Doxil, a liposomal-based formulation which consists of cholesterol and high phase-transition temperature phospholipid hydrogenated soy phosphatidylcholine (HSPC) gives a stable drug delivery system with enhanced biocompatibility, efficacy and reduced cytotoxic effects [52]. Anthracycline doxorubicin, an active cytotoxic agent, when encapsulated inside the aqueous core of the liposome, significantly shows decrease in the cardiotoxicity [53]. Hence, higher dose of the chemotherapeutic agents can be given to the patient as in the form of liposomal drug delivery system, which can transfer significant amount of the anticancer drug to the desired targeted site.
Shahun et al. had formulated liposomes of doxorubicin (DOX) which is actively targeted to breast cancer by using engineered peptide ligands, P18-4. The effect of the peptide ligand on breast cancer with respect to accumulation cytotoxicity and growth inhibition was studied by varying the molar ratio of P18-4. It was found that the engineered P18-4 peptide can improve the antitumor efficacy by using optimum density [54]. Urbinati et al. had incorporated histone deacetylase inhibitors (HADCi) which belong to class 1 trichostatin and PXD 101 into liposome in large amounts. Phosphatidylcholine, cholesterol, and distearoyl phosphoethanolaminepolyethylene glycol were used to make liposomes and were used in a ratio of 64 : 30 : 6. Liposomes were checked for their toxicity and were measured in MCF-7, T47-D, SKbr 3, and MDA-MB-231 breast cancer cell lines. Formulation made by Urbinati et al. showed improvement in drug accumulation not only in breast cancer but other cancers also get eradicated [55]. Park had prepared pegylated liposome as a suitable drug carrier for doxorubicin. The study revealed the substantial efficacy towards the breast cancer and reduced toxicity of anticancer drug. Pegylated liposomal doxorubicin can be used either in combination with other chemotherapeutics or as monotherapy for breast cancer. Pegylated liposomal formulation can further be used for molecular targeting [56].
Dagar et al. had developed vasoactive intestinal peptide receptors (VIP-R) as a breast cancer targeted imaging with increased pharmacokinetics, biodistribution and with a better imaging ability. VIP-R, a 28-amino-acid mammalian neuropeptide, was attached covalently to the surface of the sterically stabilized liposomes (SSL) which was further encapsulated to a radionuclide (Tc99 m-HMPAO). Presented study revealed that VIP-R is 5 times more expressive in human breast cancer in comparison to other imaging probes.
BioMed Research International 7 SSL without VIP showed significantly less accumulation than Tc99 m-HMPAO-encapsulated SSL with VIP [57].
Carbon
Nanotubes. The allotropes of carbon with a cylindrical nanoshape structure are called carbon nanotubes. Carbon nanotubes belong to the fullerene structure. Representation of the carbon nanotubes is similar to the rolled sheets of graphene rings. A carbon nanotube provides the variety of promising biomedical applications in comparison to other nonmaterials. Carbon nanotubes are more dynamic and are used potentially not only in cancer cell imaging but are also used for drug delivery system. The unique biological and chemical properties, hollow monolithic structure, nanoneedle shape, and the ability of carbon nanotubes to incorporate any functional group make them a suitable carrier system for chemotherapeutic agents. This allows a passive diffusion of carbon nanotubes across the lipid bilayer, or it may attach to the surface of the cell and subsequent endocytosis (engulfing by cells) takes place [58,59].
Carbon nanotubes can be categorized into two as follows.
SWCNTs consist of one layer of graphene sheet with diameter of 1-2 nm and length varies from 50 to several hundred nanometers. On the other hand, MWCNTs are multiple layers of SWCNTs which are coaxially arranged with diameter variation of 5 to 100 nm. SWCNTs and MWCNTs have different mechanisms of cell penetration. By using confocal microscopy imaging, it has been observed that SWCNTs have the ability to incorporate inside the cells, whereas MWCNTs are not incorporated into the cells. Size of the carbon nanotube also affects the cellular uptake; due to this, SWCNTs show localized effect in cell and prolonged distribution [60]. Drug can either be loaded into the carbon nanotubes or be attached to the surface of the carbon nanotube. Attachment of anticancer drug can be done by either noncovalent bonding or covalent bonding, which includes electrostatic interactions -stacking and hydrophobic interactions [61]. Research has been done by Wu et al. to deliver an anticancer drug, 10hydroxyl camptothecin (HCPT), by covalent attachment on the outer surface of the MWCNT. Similarly, succinic anhydride was reacted with HCPT to obtain carboxylic groups on its surface; amino acids were then incorporated to the MWC-NTs. Carbon nanotubes coated with HCPT and amino group were functionalized by carboxylic group. This enhances the cell uptake of MWCNTs-HCPT and increased blood circulation with high drug accumulation to the tumor [62]. Liu et al. had conjugated paclitaxel (PTV) to branched polyethylene glycol chain on SWNTs. SWNTs-PTX conjugate exhibited higher drug accumulation, higher bioavailability, and little toxicity. Murine 4T1 breast cancer model shows suppression in tumor growth, enhanced permeation and retention. SWNTs-PTX delivery is the promising treatment for cancer therapy in the future, with higher efficacy and minimum cytotoxic effect [63]. Chen et al. had developed nanocarbon tube by chemical functionalization of SWNTs (f-SWCNTs) with DSPE-PEG-amine. The conjugate bounded to small interfering RNA (SiRNA) was targeting towards breast cancer. Disulfide bond was used for siRNA-mediated gene targeting. Resulting study shows that there is increase in the uptake of SWNTs-SiRNA by 83.55% into the breast carcinoma Bcap-37. Proliferation inhibition was found to be 44.53% for 72 hours in B-cap-37 cells. This novel strategy of chemical functionalization is effective carrier system and is a very advanced or significant therapy for breast cancer in the future [64].
Avti and Sitharaman have developed europium-catalyzed single-walled carbon nanotubes (Eu-SWCNTs) as cellular imaging probe for breast cancer cells. These probes, when excitated at 365 nm and 458 nm wavelengths, showed bright red luminescence. Mechanism of uptake of Eu-SWCNTs is endocytosis, and it was demonstrated in the study that Eu-SWCNTs showed 95%-100% labeling efficiency. The study revealed that Eu-SWCNT is an excellent cellular imaging probe for breast cancer, having excitation value with invisible range [65].
Nanorods.
Morphologically the nanorods are nanoscale materials in nanotechnology. Their dimension varies from 1-100 nm and they can be synthesized chemically. Nanorods have a high surface area and are biocompatible, hence, a promising approach for breast cancer. For gold nanorods, because of their special physicochemical properties, they are widely used for imaging, biosensing, photothermal therapy, and for drug delivery system. Inert and nontoxic nature of gold nanorods makes them a suitable nanomedicines carrier system applicable in biomedical field [66,67]. Different cellular uptake patterns are being followed by the single gold nanoparticle and aggregated gold nanoparticles, and during their uptake these particles interact with the compartments of cellular membrane [68,69]. Eghtedari et al. had functionalized gold nanorods (GNRs) for in vivo targeting to breast cancer which was grown on athymic nude mouse. Herceptin (HER), a monoclonal antibody, was used to functionalize the gold nanorods by molecular recognition of tumor cells of breast along with PEG (polyethylene glycol). Eghtedari et al. revealed the in vitro stability study of fabricated herceptin-PEG-gold nanorods in blood and in vivo study for breast cancer in nude mice model for breast carcinoma. To achieve a successful targeting to in vivo cancer cell, extra engineering efforts are required to make them stable inside the microenvironment of the cancer cells, biocompatible, have prolonged circulation in the blood to reach targeted site, and able to search cancer cells and bind to them. To prolong the circulation time, gold nanoparticles must be protected from reticuloendothelial system, and for this polyethylene glycol (PEG) has shown a promising effect [70].
Connor et al. had studied the cytotoxic effect of gold nanoparticles as noncytotoxic under suitable experiment condition. Small size of nanorods makes them potentially useful for drug delivery and gene therapy, hence, provides drug delivery system with lower cytotoxicity towards normal cell and increased chemotherapeutic efficiency towards abnormal cancer cell [71]. Xiao et al. had developed multifunctional water-soluble gold nanorods (GNRs) as a nanocarrier for tumor targeting. pH-sensitive behaviour of GNRs causes the release of drug, by minimizing the cytotoxic nonspecific systemic distribution of anticancer drug, during circulation inside the human body side by side increasing the efficiency of anticancer drug to targeting tumor [72].
Likewise, zinc oxide nanorods (ZnO) also provide a promising approach in cancer for imaging and drug delivery system for cancer therapy. ZnO nanorods are self-organizing nanomaterials which can be grown on any substance with high quality of crystalline and amorphous properties. This provides ZnO nanorods with large surface area to volume ratio and higher efficiency for photoimaging. Generally, white light is being observed in photonic device and potentially used in photodynamic therapy. Photosensitizers are being taken by cancer cell in photodynamic therapy for cancer followed by exposure to white light [73]. Zhang et al. had fabricated zinc oxide (ZnO) nanorods as a drug carrier for the anticancer drug daunorubicin (DNR) in photodynamic therapy, by using simple one-step solid-state reaction at a normal room temperature in the air. The investigation revealed that the combination of ZnO-nanorods-DNR has induced remarkable decrease in cytotoxicity of anticancer drug and considerable increase in the cancer cell targeting mediated by reactive oxygen species (ROS) in human hepatocarcinoma cells (SMMC-7721 cells) [74]. Kishwar et al. had conjugated developed ZnO nanorods (ZnO-NRs) with protoporphyrin dimethyl ester (PPDME) and used it in the treatment of breast cancer. ZnO nanorods were developed on borosilicate glass capillaries tip by using aqueous chemical growth technique. Developed PPDME-conjugated ZnO-NRs have induced cell localized toxicity indicating potential application in necrosis of breast carcinoma.
Wang et al. have developed multifunctional nanoparticles of gold and pearls consisting of single amine-modified gold nanorod, and Fe 3 O 4 "pearls" were used to give final touch with the help of carboxyl group. Reported study demonstrated the effectiveness of the gold nanorod in breast cancer photothermal ablation and dual-mode imaging of breast cancer [75].
Conclusion
Human breast cancer is still an extremely complex and dangerous disease with multiple questions. Nanotechnology is a fast emerging area of science with potential for imaging, monitoring, diagnosing, and delivery of drug to specific targeted tumor cells. Nanoparticles offer the advanced methods of tumor targeting with improved efficacy and decreased toxicity. Many nanoparticle formulations are already in clinical practices. Ongoing efforts by researchers, scientists, and other medical personnel in the field of nanotechnology will consistently produce the new platform for nanoparticles. In the near future, nanotechnology will not only show a greater application in oncology, but the discipline of medicines will also be benefitted. | 7,299.6 | 2013-06-24T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Hospital crowdedness evaluation and in-hospital resource allocation based on image recognition technology
How to allocate the existing medical resources reasonably, alleviate hospital congestion and improve the patient experience are problems faced by all hospitals. At present, the combination of artificial intelligence and the medical field is mainly in the field of disease diagnosis, but lacks successful application in medical management. We distinguish each area of the emergency department by the division of medical links. In the spatial dimension, in this study, the waitlist number in real-time is got by processing videos using image recognition via a convolutional neural network. The congestion rate based on psychology and architecture is defined for measuring crowdedness. In the time dimension, diagnosis time and time-consuming after diagnosis are calculated from visit records. Factors related to congestion are analyzed. A total of 4717 visit records from the emergency department and 1130 videos from five areas are collected in the study. Of these, the waiting list of the pediatric waiting area is the largest, including 10,436 (person-time) people, and its average congestion rate is 2.75, which is the highest in all areas. The utilization rate of pharmacy is low, with an average of only 3.8 people using it at the one time. Its average congestion rate is only 0.16, and there is obvious space waste. It has been found that the length of diagnosis time and the length of time after diagnosis are related to age, the number of diagnoses and disease type. The most common disease type comes from respiratory problems, accounting for 54.3%. This emergency department has congestion and waste of medical resources. People can use artificial intelligence to investigate the congestion in hospitals effectively. Using artificial intelligence methods and traditional statistics methods can lead to better research on healthcare resource allocation issues in hospitals.
Medical resources are limited, and the demand for medical resources continues to increase 1 . Various countries have varying degrees of shortage of medical resources, which is more severe in developing countries [2][3][4] . Rational allocation of macro-level health resources involves regional health planning, the layout of health resources, and so on. And the micro-level involves the rational use of inventory resources, which in medical institutions includes such things as the distribution of human resources, the setting of departments, and the layout of buildings. The direct manifestation of the distribution of medical resources in a hospital is the congestion situation in the hospital 5 . In recent years, the number of daily visits in urban hospitals, especially large general hospitals, has continued to increase 1 . And various public health emergencies in cities have occurred from time to time, resulting in an increasing frequency of congestion in hospitals 6 . This not only seriously affects the quality of medical services and reduces the satisfaction of patients, but also increases the possibility of medical disputes and aggravates the conflict between doctors and patients 7 . Therefore, in order to reduce and alleviate the congestion of the hospital and improve the patient experience, the hospital is trying to reform and optimize the allocation of hospital resources 8 .
Crowding occurs when the medical services required by patients exceed the capacity that the hospital can provide. Generally, Hospital design, department layout and resource allocation affect the congestion in the hospital 9 . Investigating the congestion in the hospital and studying the specific relationship between congestion and hospital design and system planning could provide managers with effective decision-making opinions on the allocation of medical resources.
Artificial intelligence is the study of how to make computers do intelligent tasks that only humans could do. In recent years, artificial intelligence has been widely used in the medical field as a new technology that plays an important role in medical practice such as imaging assisted diagnosis and pharmaceutical exploitation 10,11 . However, there is a lack of high-tech applications related to artificial intelligence in health management. At the same time, artificial intelligence has played a huge role in the field of public management and public security. For example, monitoring the crowd gathering through surveillance video to prevent stampedes and other gathering accidents 12 .
Similar to the need to monitor crowd aggregation and prevent congestion in the field of public safety, hospital management also needs to manage congestion. In this field, the management of congestion and allocation of resources in the existing hospitals usually relies on the personal experience of the administrators, without quantitative data analysis. Manual collection of crowd data for quantitative study in hospitals requires a lot of manpower and time. When dealing with resource allocation problems, AI can have better adaptability and responsiveness than manual scheduling 13,14 . Therefore, similar to use AI to monitor crowd aggregation and prevent congestion in the field of public safety, the use of advanced computer technology to manage hospital congestion is a feasible method.
The emergency department (ED), which has a diagnosis and treatment room of general clinics, internal medicine, surgery, gynecology, pediatrics, and other specialized departments, is a relatively independent unit. And the emergency department often requires receiving a mass patient, easily forming a short peak of visits 15 . Therefore, the emergency department is a perfect research object for congestion research. In order to provide an early alert for the manager and a reference for solving the congestion in the hospital, this research aims to use artificial intelligence technology to intelligently monitor and analyze the time and location characteristics of the hospital congestion, the relationship among different medical treatment processes and its' influencing factors.
Methods
Study design and research object. This research is based on the existing hospital's treatment process and big data of the crowd to analyze the congestion of the hospital and the rationality of their resource allocation. This research focus on the spatial and temporal distribution of the density of the crowd through artificial intelligence identification, collection and analysis. We collect data from a tertiary hospital. Tertiary care institution settings are adopted in China, and tertiary hospitals being the largest hospitals with the highest technology levels. Tertiary hospitals are similarly the most congested providers while having the most complete treatment link. Studying the congestion of tertiary hospitals is beneficial to study the congestion law of each diagnosis and treatment link in medical institutions. We randomly selected a tertiary hospital. We communicate with the emergency department of this hospital and conduct an on-site inspection, so as to understand the emergency procedures in the actual operation and the areas with large people flow. And we randomly selected a week of monitoring video records from the emergency department with 168 h of video, from 26 October to 1 November 2020, including the registered hall, the waiting area of Pediatrics, the waiting area of Pharmacy, the waiting area of the Internal medicine and Surgery, the waiting area of Inspection department. In addition, we get disease diagnosis-related information, including patient's age, gender, disease diagnosis information and visit times. All information extracted does not involve basic information, such as individual names.
Data collection. In this study, we create a model of patient flow in the hospital to distinguish the various areas of the hospital. The research divides the whole process of medical treatment into three parts: Registration, Visits and Examinations, and the Post-diagnosis process. Registration is divided into online registration and onsite registration. Only the patient's arrival in the hospital is considered in the study. In this part, we collect the data from the registration hall of the hospital. The diagnosis part includes diagnostic rooms of all departments. Usually, the doctor arranges an examination for the patient after the initial consultation. And then, the patient returns to the clinic for further diagnosis after the examination. In this study, we focus on the time of initial diagnosis. Therefore, we collect the data from waiting areas of various departments in the hospital. The post-diagnosis process refers to the process after diagnosis until leaving the hospital. This part includes inspection, payment, taking medicine, etc. Based on this model, we collect three categories of information: (1) Image information: monitoring information at various nodes; (2) medical visit information, i.e. the visitor records of this emergency department; (3) Architectural information, i.e. the architectural situation in this emergency department.
Therefore, we focus on the actual scene of the emergency department and collect surveillance videos of crowded areas in the emergency department. The acquisition area to be collected includes emergency registration, the waiting area of the department of Pediatrics (the Department of Pediatrics in the emergency department is in an independent area), the department of internal medicine and Surgery, the inspection waiting area and an independent pharmacy. For each area, we collect the surveillance video of that area in one week in batches. The daily collection time of each area is the working time of the department. Of these, Pediatrics and internal medicine and Surgery are 24-h jobs, and registration offices, inspection departments, and pharmacies are from 8:00 a.m. to 6:00 p.m. Every half hour, we intercept a one-minute surveillance video. Therefore, the Pediatrics www.nature.com/scientificreports/ department and the department of internal medicine and Surgery collect 48 videos per day, while the other areas collect 21 videos per day. We collect 1113 videos in total. The videos are divided according to the acquisition area, and the videos of different areas are saved in chronological order. Because all the areas we collect are waiting areas. Most people in the video are stationary. In order to reduce the total amount of data and accelerate the data processing, we choose a relatively low sampling rate. The area of each scene is large, the shortest length of the corridor is about 8 m. We assume that the walking speed through pedestrians is 1.5 m/s, so the sampling rate of 0.2 FPS may not miss the people who are passing through. Video frame extraction is performed at Frames Per Second (FPS) = 0.2, i.e., 1 image is collected every 5 s. 10 images are collected for each video. The images are divided according to the acquisition area and chronologically deposited into different folders. Finally, we get 11,130 images for crowd counting. The data collection at this stage provides a realistic basis for the next project design.
On the other hand, we also collate the records of emergency visits. This record contains the number of patients, gender, age, outpatient diagnosis, the earliest time of receipt, the earliest time of diagnosis, the earliest time of payment, and the earliest time of dispensing medication. This record can be combined with video data to provide more detail on studying the congestion in this emergency department.
Image process based on convolutional neural network. We use a convolutional neural network (CNN) to measure the number of people. CNN is one of the most popular AI models in the field of image recognition right now. Compared with the traditional artificial neural network method, the structure of CNN is simpler and can save more computing resources. At present, CNN models can identify people with masks very well 16 .
We annotated 1000 images from the collected images as the training dataset for training our model. At first, the MATLAB 2020a has been used to annotate the training dataset. We annotated each head in the image and generated a map which has the same size as the original image. In the annotated map, the pixel value at the annotation is one and others pixel value is zero. Then, we use a Gaussian kernel with a kernel size is 15 and σ is 4 to perform Gaussian filtering on the annotated map to obtain the density map as the ground truth of our dataset. The total pixel value of the density map is the total number of people in the image.
We build a Multi-fusion convolutional neural network (MFCNN) based on the ResNet 17 and U-Net 18 to predict the number of people in the images. Our MFCNN uses the encoder-decoder structure. In the encoder, our MFCNN has an input block and four downsampling blocks. The input block condenses the image to a quarter of its original size. This can reduce the amount of memory consumed by the model at runtime. The feature image then passes through four consecutive downsampling blocks. In order from shallow to deep, these blocks gradually include more base blocks to enhance the feature extraction ability. The base block is built based on the bottleneck layer of ResNet. In decoder, the feature map in the depth would be upsampled and fused with the feature map with larger size. Finally, fine-grained regressor refines the feature map and generates a density map to predict the number of people.
In the fine-grained compressor, the input feature map is cut into strips from the width and height dimensions. Each strip is sent to a base block for fine-grained local feature extraction. This step can increase the ability of extracting the correlation between features to obtain a more complete feature image. Then, these strips are spliced into a feature image through the width and height dimensions. Feature maps with fine-grained feature enhancement in the two dimensions are fused as the final feature map. The regression layer composed of a global average pooling layer and a convolutional layer predicts the density map through the final feature map. The predicted density map is enlarged to the size of the input image by a dilated convolutional layer. We use global average pooling layer and 1 × 1 convolutional layer to replace the fully connection layer. This enables our model to adapt to various input images of different sizes.
The training process is shown in Fig. 1. In each epoch, the system records training errors and corrects the network in the next epoch. After training, the system compares the errors from each epoch and picks the network with the best results. Finally, the best network for crowd counting can help us get the number of people from images. Our experiments are based on a Python environment. Batch size is set to 8, Adam is the optimizer, and 50 epochs are learned at a fixed learning rate of 1e−5.
Statistics. Statistics on CNN model. We use two indexes to evaluate the prediction results to predict the number of people. The mean counting error (MCE), which measures instance counting accuracy for R images: where C i gt represents the ground truth number of people in the i-th picture, and C i pred is the number of people predicted by the generator. MCE can numerically represent the average number of false identifications per image. A smaller value of MCE means a smaller amount of counting error per image on average.
The root mean squared error (RMSE) can reflect the deviation value of the prediction error of each instance: where R is the number of responses, C i gt represents the ground truth number of people in the i-th picture, and C i pred is the number of people predicted by the generator. RMSE is a commonly used model performance www.nature.com/scientificreports/ evaluation metric in the field of object counting. A smaller value of RMSE means better accuracy of the model prediction.
Statistics on numbers of people. Based on the obtained crowd data, from the two perspectives of different areas at the same time and the same area at different times, the crowd information, which is collected from the images, is sorted by date. First, we identified the number of people in every image by CNN for each sampling time point. The number of people from all pictures at this sampling time point is then averaged to get the number of people at this sampling time point in one area. The average number of individuals at each site is finally counted into a table in chronological order.
Statistics on time spent. The emergency department visit records are processed to calculate the length of visit time after diagnosis (T1) and the length of diagnosis time (T2). T1 is the time of the earliest drug delivery time or the earliest payment time minus the time of pickup. T2 is the earliest time to diagnosis minus the time of pickup. After that, we extracted the influencing factors that might be associated with T1 and T2 from the records of emergency department visits: gender, age, number of diseases, and kind of diseases.
Unit congestion rate. According to Hall's Personal Space Theory of Social Psychology, personal distance is usually defined as 4 feet, or 1.2 m 19 . Therefore, we can estimate the theoretical number of people awaiting treatment in an area based on the area of this location and the personal distance, which makes the waiting crowd feel psychological discomfort, as shown in Eq. (3). Then take the ratio of the actual number of people to the theoretical capacity as the unit congestion rate in the area, as shown in Eq. (4).
Based on the unit congestion rate, it fits the curve graph of the congestion rate change over time in each position within a day; draws the fluctuation diagram of the unit congestion rate change along the medical process at each time point; draws the fluctuation diagram of the unit congestion rate change on each medical process at the same time on different days of a week.
Influence factor. In addition, we use t-test, ANOVAs test and Kruskal Wallis test to investigate the correlation of changes in visit length or diagnosis time with several influencing factors. Explored whether these influencing factors could contribute to hospital congestion by affecting the length of a visit or diagnosis time.
All analyses are performed using the social science statistics package (SPSS) version 25 (SPSS, Chicago, USA). First, the descriptive statistics (frequency, percentage, average and standard deviation [SD]) are calculated. We use T-test to compare the 2 groups. And ANOVAs test is performed for multi-group comparison to evaluate the difference of continuous variables, when the data is the normal distribution and conforms to the assumption of homogeneity of variance. In addition, the non-parameter method is adopted. When the test of variance shows heterogeneity of variance, we use the Kruskal-Wallis H test to find the relationship between groups. The probability value of P < 0.05 means statistically significant.
Ethical statement. All methods are performed in accordance with the relevant guidelines and regulations by including a statement in the "Methods" section. The acquisition of surveillance video and building data is approved by the hospital and approved by the ethics committee. The acquisition of visit records is approved by the participating hospitals and all patients. Informed consent is obtained by telephone from the participants. For patients aged < 18 years, consent is obtained from a legal guardian. The data are all processed for desensitization, and low-resolution images are used for video images. The research has approval from the ethics committee of the Shanghai Jiaotong University School of medicine.
Result
The hospital record has 4717 records, of which 1042 records can not count the length of diagnosis time or the length of visit time after diagnosis, and 3675 records are useful, accounting for 77.9% of the total.
Validation of multi fusion convolutional neural network. We use 1000 annotated images to train our multi fusion convolutional neural network. 600 images, of which 600 are used for model training and 400 for model testing. The division of test and training sets is random. Mall dataset contains 2000 images. We randomly choose 1600 images for model training and 400 for model testing.
We compared our MFCNN with some classical CNN model on our dataset and Mall dataset, which is a public dataset and has a similar scenario to our dataset. In Table 1, the experimental results show that our MFCNN has the lowest mean counting error (MCE) and root mean squared error (RMSE), which means our MFCNN has the better counting accuracy than the classical CNN model. Distribution of waiting people in surveillance video. Using a convolution neural network, we calculate the number of people waiting in the waiting area of each department, and the results are recorded in Table 2 (3) N theoretical = area For Registration, the maximum peak of waiting number in the morning occurred on Monday and it in the afternoon occurs on Friday; There is a small peak almost every morning at 11:00; The maximum peak of waiting people occurs later on weekends than it on weekdays.
For Pediatrics, the number of people waiting for treatment increase rapidly from 8:30 am to 9:30 and maintained a high level throughout the day, with a smaller decrease in waiting numbers at midday and afternoon rest times; Usually, the number of people waiting for treatment on weekend nights is smaller than that on weekday nights. For the waiting areas of Internal medicine and Surgery department, the number of people waiting for treatment is large and fluctuating.
In inspection, the first peak of waiting people on each day comes earlier than it in other departments, usually around 8:30; And the waiting number has a rapid upstroke before the emergency department knocking off at 18:00. There is not a large gap in the total reception number per day at the pharmacy, but waiting numbers in various periods of the day fluctuated greatly. Although the average waiting number in the registered area is smaller than that of Pediatrics, its maximum waiting number is close to that of Pediatrics. Crowdedness is noted in all areas except testing departments and pharmacies, where crowdedness is greatest in Pediatrics waiting areas. Table 3 and Fig. 3, based on the available data, we have listed 4 factors that may affect the length of visit time after diagnosis and diagnosis time: age, gender, number of diagnoses, disease types. Of the patients, 66% are children under ten years of age. We make separate comparisons between groups of children. The period under 1-year-old is called infancy, the period from 1 to 3 years old is called early childhood, the period from 3 to 6 years old is called preschool, and the period from 6 to 12 years old is called school age. Only one disease has been diagnosed in most patients. The most common comes from respiratory problems, accounting for 54.3%. Table 4, we obtain the architectural data for this emergency in the diagnostic area through building drawings and field surveys and we calculate the Optimal number of people waiting for treatment for the area based on Personal Space Theory 19 . Among them, the registered lobby has the largest building area. And Monday and Friday have the highest number of doctors arranged.
Discussion
It is important to optimize the allocation of health resources. Current tertiary hospitals in China also commonly suffer from the problems of 'three long and one short' (long registration time, long waiting time, long time to pick up drugs, and short visiting time). Through the study of hospital congestion, it is helpful for managers to understand the flow of patients in the hospital. Thus, it can improve the configuration of resources and the visiting service of hospitals. www.nature.com/scientificreports/ The survey shows that the burden in Pediatrics is substantial. Pediatrics had the largest waiting list and the greatest patient density. The waiting number of Pediatrics rose rapidly during the morning hours from 8 to 9 am and is maintained around 40 thereafter. In terms of the congestion rate, the average congestion rate of Pediatrics is 2.75 19 . In fact, the pediatric environment is also chaotic, and the affected children often cry loudly. This environment would aggravate the anxiety of patients and their families and thus reduce the patient's visiting experience and satisfaction 22 . Additionally, the emphasis placed on the next generation in Chinese traditional culture has been easily magnified by the parents' anxiety from the malaise of child visits, which further worsens the diagnosis and treatment experience. Poor experience of patient may lead to decreased medical effectiveness and even causes Doctor-patient Conflict 23,24 . Increasing the number of physicians to reduce the peak waiting number might be an effective way to improve their experience. However, at present, pediatrics is a subject that doctors are reluctant to choose. The lack of availability of pediatricians has become increasingly significant. Government and hospitals should improve the treatment of pediatrics to increase the number of pediatricians. Additionally, providing possible length of waiting time can help to enhance the visitor's experience 25,26 . At the same time, expanding the area of the waiting region to reduce the density of the waiting people may improve the patient's experience. Moreover, improving the environment of pediatric, including recreational imaging, wallpaper color, children's activity areas and animation to divert the child's attention, might reduce the crying of the child and relieve the anxiety of the child's family 22 .
The medical examination is an important link in the modern medical process and inspection reports are the important basis for physicians to initiate diagnosis and treatment. It should be noticed that the distribution of waiting people in the Inspection shows a dumbbell type distribution: high at both heads and low in the middle. The first peak of waiting number in the Inspection comes earlier than it in other departments, which is usually at 8:30, while the others are at 9:30. In addition, the number of waiting people at the Inspection is rise before offhours, which may indicate that patients are eager to complete the medical test before off hours to avoid dragging the test to the next day. Previous studies have shown that sampling examinations are required for the majority of www.nature.com/scientificreports/ patients, and the waiting time for sampling examinations is long 27 . In Table 3, concurrence of multiple diseases does not result in a longer diagnostic time for physicians, but significantly increases the length of time used after diagnosis. Because multiple diseases are possible, the patient is referred for multiple medical tests to further define the type and circumstances of the disease. So doctors need to rely on laboratory examinations to detect the patient's condition, and complex diseases spend significantly more time on inspection to be determined like Oncology cancer 28 . The speed of examination constrains the speed of the whole visit process. The main reason for the patients waiting in the queue at the Inspection department is waiting for samples collection. Too long waiting time could affect the fluency of all visit progress and cause patient dissatisfaction 22,29 . Faced with the above problems, on the one hand, hospitals may implement a shift by improving the scheduling mechanism, changing the scheduling time of the laboratory department, increasing the number of workers in the morning and offhours, or by adding a shift to reduce the waiting time of patients in the queue and avoid the centralized outbreak of inspection needs. On the other hand, the hospital could improve testing devices 30,31 . The use of more advanced equipment can reduce the time required for a single inspection. In addition, by conducting a health economics analysis of devices' effectiveness, hospitals can comprehensively judge how much unit time inspection efficiency and hospital operation efficiency have been improved by the introduction of high-efficiency equipment 32 . Meanwhile, pharmacies with very low crowding rates also deserve attention. As can be seen in Table 4, the independent emergency pharmacy is less frequently used but occupies a significant amount of space. Other studies have shown that pharmacies are indeed a vulnerable, wasteful and inefficient link in hospitals 33 . The emergency pharmacy could be combined with the outpatient department pharmacy. In this way, the hospital carries on the unified planning to the pharmacy, reduces resources waste, and improves the working efficiency 34 . On the other hand, hospitals can cooperate with outside drugstores. So that some patients can choose to buy drugs in drugstores. This is helpful in shortening the total time of patients in the hospital and reduce hospital expenses (Supplementary Information).
In Fig. 3, there is a clear positive correlation trend between patient age and length of diagnosis time. This may be related to disease complexity in elder patients 35 . Expressive skills and memory are relatively poor in the elderly compared to the young 36,37 . Physicians may spend more time communicating to determine the type of condition and disease development when faced with an elder patient. For the elder patients, we can strengthen the role of the family doctors, let them join the consultation and make use of well-established case records so that the patient's appeal can be more clearly expressed 38,39 . Nevertheless, younger patients generally spend more time after diagnosis than elder patients. Because most elder patients are Geriatric disorders or Chronic diseases patients, and they regularly visit the hospital for diagnosis 40 . Therefore, these patients are generally more familiar with the department setting and visit process, while young patients may not be familiar enough. In response to the issue of inexperienced access for young people, we suggest that an introduction to the visit process can be provided on the appointment software to help young people understand the visiting process in the hospital and the location of each department. On the other hand, the hospital can set some striking signals or electronic navigation, which facilitates patients to find destinations quickly 41 .
In addition, respiratory diseases occupy more than 50% of the patients in the emergency department and most of them are influenza patients. This leads to congestion in the emergency department and a waste of medical resources. Most influenza patients could choose to go to community hospitals rather than tertiary hospitals. From other studies, we found that this is a common phenomenon that patients are more inclined to seek treatment in tertiary hospitals rather than community hospitals 15 . We believe that the local government also needs to strengthen the construction of community hospitals, strengthen the training of family doctors, and improve the hierarchical diagnosis and treatment system 39,42 . Through community hospitals, family doctors and other means, to relieve the crowdedness of central hospitals and to reduce the time-consuming of patients 43 .
Conclusion
This study presents a summary analysis of congestion at a tertiary hospital through a survey based on crowd counting via CNN. We propose a process of medical treatment based on hospital visit flow to divide the regions of the hospital. Meanwhile, we propose the unit congestion rate, which is used to calculate the crowdedness of a location. To count the number of waiting people, we propose the multi fusion convolutional neural network. Through the statistical analysis of the monitoring data by artificial intelligence, we validate the congestion existing at this hospital. Based on this, we propose improvements in the allocation of medical resources to the hospital. In addition, this study finds associations of age, disease category with the length of diagnosis time, and length of time after diagnosis. And we also find a relation between the number of diagnoses and length of time after diagnosis. These empirical data also help us to make improved opinions on the allocation of medical resources in this hospital and local medical policies for the local government.
However, the sample size is not large enough due to the short period of data acquisition. Some possible periodic features may fail to be detected. And our failure to access the appointment registration to face-to-face visits, so we are not able to conduct a more complete study of the whole visit flow. In the future, we will conduct more research to better play the role of AI in medical management.
Data availability
The datasets use and/or analyse during the current study are available from the corresponding author on reasonable request. | 7,336 | 2023-01-06T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
“Impact of HR practices on corporate image building in the Indian IT sector”
HR practices have been always significant for researchers and industries because of the large expansion of the companies in various fields, which requires leaders who can identify and introduce innovative and effective HR practices to utilize and retain human resources for the long period. In the 21st century, there has been tremendous growth in the IT sector all around the world. The Indian IT sector has become glob- ally competitive and high-tech in the rapidly developing Indian economy. Hence, this study was undertaken in the Indian IT sector to identify the impact of HR practices on corporate image building. This paper applied a quantitative research method. Data from 100 respondents were collected with the help of a structured questionnaire to test the hypotheses. Pearson’s correlation coefficient and SPSS 20 software were used for data analysis. The findings of the regression analysis proved that there is a significant relationship between employer branding and corporate image building. The analysis of this study indicates that the IT companies implementing the HR practices based on employer branding and corporate social responsibility get benefits in building the corporate image and give positive results to them. HR practices have a significant rela- tionship with corporate image building.
INTRODUCTION
The foundation of corporate image building is new in the field of human resource management, as previously it was applied in marketing management. Corporate image has always been the key topic in the context of advertisement and marketing. Unfortunately, there are not enough studies investigating the relationship between HR practices and corporate image building. The relationship with employees and customers will lead to the creation of customer satisfaction, corporate image, and customer loyalty (Giao et al., 2020;Alam & Noor, 2020). In addition, corporate image is identified as organization reputation, impressions, and ideas of leaders and customers' beliefs (Richard & Zhang, 2012). To explain the knowledge of HR practices and their impact on corporate image building, this study has been undertaken in the Indian IT sector.
Service-based organizations put too much effort into attracting and retaining their talented employees (human resources) within the organization. Due to changing scenarios of globalization, privatization, liberalization, and several economic reforms, organizations are becoming attentive and internationally strategic to choose the best workforce and utilize their best talents for the expansion and growth of the companies. In the IT sector, the role of HR professionals has become challenging and tough. Now, it becomes the responsibility of leaders to nurture, motivate, and effectively manage their workforce to achieve the targets. Customer satisfaction and corporate image have a reciprocal relationship with customer loyalty (Zaid et al., 2021). IT sector in India has been growing with very high speed and contributed to the growth and development of the country. Randstad, one of the renowned HR firms, was surveyed in 2013, by which Microsoft has confirmed that the Indian IT sector is one of the most talented employers in the world. As per the report submitted by NASSCOM, in 2015 IT sector in India contributed largely to the Indian economy as it shifted from 1.2% to 7.5% in the GDP. In addition, the Indian IT sector generated 2.5 million employments and became one of the largest and top IT capital in the world. However, in India, IT industries facing a lot of challenges due to rapid technological change and competition in the IT market.
The world sets its eyes on India with a lot of anticipation and considering it as the epicenter of growth. The competitive advantage rate shows that India among others has a low-cost advantage over others and it is 5-6% less expensive than the U.S. India has become the topmost off-shoring destination of IT companies across the globe. Indian talents have effectively proven their capabilities by delivering both off-shore and on-shore services across the world, the latest technologies have offered a new gamut of opportunities to the top IT firms in India. As per the data released by DPIIT (2018) the computer hardware and software sector in India has attracted FDI (Foreign Direct Investment) cumulative inflows worth US$ 37.23 billion from April 2000 to March 2019 and stands second in the inflow of FDI. The leading IT firms in India like Infosys, TCS, Tech Mahindra, and HCL have large IT projects for their employees. They are announced safe in terms of retention and can offer good salary packages for them. Because of this, it becomes a necessity to maintain their image in this competitive world and HR is the major source of organization. It is vital to understand the role of HR practices and their impact on corporate image building. The HR practices are the organizational tool that can be utilized for the betterment of the organization in 360 degrees. In this scenario, this study examined the impact of HR practices on corporate image building in the Indian IT sector.
LITERATURE REVIEW
The impact of HR practices on corporate image building in the IT sector has come into the limelight and presently become a subject that requires more attention in the context of human resources. Syed-Ikhsan and Rowland (2004) stated that the employees are the most important assets for an organization; however, they can be well utilized by only a few of them. HR practices are those organizational activities that help in managing and motivating human resources to attain the goals of the organization (Tiwari & Saxena, 2012). Tran et al. (2015) indicated that time and experiences help in creating a corporate image more consistent reputation, which includes five main variables: awareness, favorability, familiarity, advocacy, and trust. HR practices (Harel & Tzafrir, 1999) are dependent upon the external and internal factors of variables of the organization like employee behavior, employer-employee relations, employee productivity, company image, financial performance, etc. Many pieces of research have been conducted on HR practices and there are many best practic-es concluded by each of them. Some of them are the security of a job, teamwork, training and development, effective communication channel, effective hiring process (Pfeffer, 1994), efficient remuneration, employee involvement, performance reward system, and flexible working environment (Redman & Mathews, 1998).
There have been various experimental studies that define the influence of HR practices on many variables like HR practices -competitive advantage (Hendry & Pettigrew), HR practices -employer-employee relationship and trust (Tzafrir et al., 2004) HR practices -organizational performance, HR practices -financial performance, HR practices -innovative climate and HR practices -job satisfaction (Petrescu & Simmons, 2008). However, to identify the impact of HR practices on corporate image building, it is necessary to analyze the HR practices and the variables that may be correlated with corporate image building.
Employer branding refers to the image of the companies as the best place to work in the minds of present employees and the key stakeholders of the external market. The competitive scenario in the IT industry introduces employer branding that helps IT firms in retaining talented employees for a long period. It became essential for the organization to do employer branding at a large scale to face the competition in the service market. The best employer is the one who gives the best for creating a good brand image in the mind of employees, customers, clients, and stakeholders, confirming as the best place to do the job. Therefore, employer branding is considered as the process of engaging, attracting, and retaining the present and future employees who will enhance the company brand.
The corporate image and employer image are proved to be an important factor that affects organizational attractiveness and specifies the role of social identity in the organization (Younis & Hammad, 2021). Employer branding needs some attributes within the organization to make it the best and attractive place for those people who can positively give their best performance within it. Depending upon the functionality, employer branding is divided into internal and external branding. Internal employer branding refers to the development of policies and programs and building corporate culture for the betterment of internal individuals. External employer branding making a positive relationship with professionals, clients, customers, stakeholders by using an effective medium of communication and providing the best services to them. Corporate image is directly associated with brand love, which also leads to purchase intentions (Trivedi, 2020). However, other important determinants of employer branding are competitiveness and transparency, which an organization has to consider in their branding policies and plans for an effective result (Jenner et al., 2008). Building an employer brand includes practices where each employee should be a star ambassador of the organizational brand. The role of HR leaders in employer branding is to raise the effectiveness of the employer brand and make it a routine experience to build the company image.
Corporate social responsibility is an organized practice of the balanced integration of environmental and social considerations into business and operations decisions. A large number of or-ganizations realize that having a social corporate image is an asset to them. Researchers and marketing practitioners emphasize that CSR plays a vital role in the customer decision-making process. On a global scale, CSR has become popular among all types of organizations (Monfort et al., 2021). CSR provides a competitive advantage in designing the attractive corporate image of an organization. Corporate image and customer satisfaction partially mediate the association between corporate social responsibility and financial performance.
CSR is a dynamic factor in constituting a corporate image. Lindgreen and Swaen (2010) indicated that a maximum number of companies have started the social responsibility activities, which helps them understand the consumer attitude towards environmental protection, social responsibility, ecology, and the nature of consumption habits. These force organizations to think about the new alternatives of social activities with principles of CSR specifically (Uygun & Gupta, 2020).
David and Gallego (2009) illustrated that socially responsible companies try hard to meet the legal needs of CSR and do more than expected in the area of human resource management introducing all the latest technologies for environmental protection. These CSR activities provide a competitive advantage to the companies. According to the World Business Council-Sustainable Development, the most popular definition of CSR is that it is a commitment and contribution towards business ethics and economic development. Companies that care for the environment and wellbeing of society seem favorable in comparison to the companies that do not care stated that a large number of companies improve their CSR practices to build their corporate image and alternatively corporate image enhances company reputation (Rasheed & Gupta, 2021).
The concept of corporate social performance is an extension of corporate social responsibility and its main emphasis on results is achieved. CSP concept has somehow developed parallel to the CSR concept. Both concepts go hand in hand, the difference lies between what companies "perform" and what companies have as their "responsibilities". It has clearly illustrated the concept of CSP. The CSP model was elaborated and three CSP dimensions were introduced, namely principles, processes, and policies. Further, argued that CSP is mainly a business organization configuration of all principles of SR, the overall processes of social responsiveness, and programs and policies in the context of the firm relationship with the society.
CSP is the long-term goal set by the organizations to create long-term shareholders relationships by taking the opportunities and managing the risk of environmental, social, and economic performance (Carroll, 2015). As the attitude of customers toward the organization is the function of corporate image (Gürlek et al., 2017), organizations, such as retailers, with a better corporate image are preferred by the customers.
AIMS AND HYPOTHESES
This study aims to test the exact relationship of employer branding, corporate social responsibility, and corporate social performance with corporate image building. These three relevant factors have been identified from the literature review and have been found relatable and genuine to study their impact on the corporate image building. Hypotheses are formulated as follows: H1: There is a significant relationship between employer branding and corporate image building.
H2: There is a significant relationship between corporate social responsibility and corporate image building.
H3: There is a significant relationship between corporate social performance and corporate image building.
METHODOLOGY
This study is a quantitative approach to understand the impact of HR practices on corporate image building in the IT sector. Data is collected through a questionnaire that contains 12 close-ended questions based on the variables used in this study. This study includes data from both primary as well as secondary data sources. The primary data is the first-hand information that is collected from the employees of top IT companies in India. Whereas the secondary data includes online and offline publications, journals, and textbooks.
Data were collected through a questionnaire circulated to the employees of IT companies in India specifically from top IT leaders. The questionnaire is structured and includes questions that are intended to test the hypotheses. The questionnaire is divided into two sections; the first part contains questions with a nominal scale to collect the general information about the respondents (demographic information). The other part of the questionnaire contains multiple-choice questions with 5 options to select the answer.
The IT companies that were chosen for this study include IBM, TCS, HCL, Infosys, and Accenture. The sampling technique used to select the companies was convenient sampling methods. Data were collected from 108 employees where 8 were rejected due to incompletion and filling errors in the questionnaire. Thus, the total sample size was 100 and the data was collected from the managerial staff of these companies. IT sector was chosen for this study because a large number of a higher educated workforce is working in this sector and the IT sector in India is one of the top recruiters nowadays.
RESULTS
The collected data were analyzed with the statistical analysis tool SPSS (Statistical Package for Social Sciences), version 21. This software is one of the analysis tools that help performing descriptive statistics like correlation and regression analysis, factor analysis, and many other tests to check the hypotheses and the validity of the data. Pearson's correlation coefficient was used to analyze the data in this study. This method was used to test the relationship between the variables of HR practices (employer branding, CSR, and CSP) with corporate image building.
The total data reveals that 42% of females and 58% of males filled the questionnaire. It means that there is a good distribution of employees in the IT sector in terms of gender. Out of 100, 83% are below 30 whereas the rest are between 30-60 years, simply defines that IT sector has more of youngsters working with them. The first few questions in the questionnaire include the name of the respondent, their age and gender, organization name, and their designation. The language used in the questionnaire is simple and short to make it easy to understand and took lesser time to fill. The rest of the questions is based on the variables of the study to check the hypotheses to test the relationship between HR practices and corporate image building. The three main variables that arise from the data were employer branding, corporate social responsibility, and corporate social performance.
H1: There is a significant relationship between employer branding and corporate image building. Note: a means dependent variable: corporate image building; b means independent variable: employer branding. Table 1 shows that the significance value is 0.202. The calculated value of significance is greater than 0.202, much higher than the minimum acceptance value i.e. 0.5. This shows that the relationship between employer branding and corporate image building is significant. This result signifies that there is a positive relationship between employer branding and corporate image building, which means that if HR practices are based on the employer branding introduced in the Indian IT sector, then it will give positive results and help in building the corporate image. The regression equation y = b1x1 + A in Table 2 shows that corporate image building = 0.117 (employer branding) + 1.575.
The result signifies that corporate image increases by 0.117 for every one unit increase in employer branding. The beta coefficient value in Table 2 is .129 positive, which shows that employer branding has a 12.9% influence on corporate image building. Hence, H1 is significant and accepted.
H2: There is a significant relationship between corporate social responsibility and corporate image building. H3: There is a significant relationship between corporate social performance and corporate image building. Table 5 shows that the value of significance for the third variable (corporate social performance) is 0.197, which means there is a significant relationship between corporate social performance and corporate image building but the beta coefficient value is negative -.0.130. Table 6 shows that the regression equation y = b1x1 + A will be -0.79 (corporate social performance) + 2.017.
The result depicts that the influence of corporate social performance on corporate image building is low and negative. Hence, the relationship between corporate social performance and corporate is non-significant and H3 is not accepted.
CONCLUSION
This study investigated HR practices and their impact on corporate image building in the IT sector in India. After reviewing the secondary data, three main variables were identified (employer branding, corporate social responsibility, and corporate social performance). Based on this, a questionnaire was prepared to collect the data from IT professionals of IT companies in India. The collected data were analyzed with SPSS software to check the interrelationship between the dependent and independent variables. Three hypotheses were formed and tested with a regression correlation coefficient. The findings of the regression analysis proved that there is a significant relationship between employer branding and corporate image building. H1 is proved as significant and is accepted. It was found that the other variable i.e. corporate social responsibility also has a significant relationship with corporate image building. Hence, H2 is accepted. Whereas corporate social performance (third variable) does not have a significant relation with corporate image building, which means that H3 is rejected.
The analysis of this study indicates that the IT companies implementing the HR practices based on employer branding and corporate social responsibility get benefits in building the corporate image and give positive results to them. HR practices have a significant relationship with corporate image building.
There are three limitations of this study. First, this study is limited to five IT companies in India, so the results are restricted to these IT companies (IBM, TCS, HCL, Infosys, and Accenture). The results may vary for other IT companies. Second, the sample size for this study is only 100 respondents and Note: a means dependent variable: corporate image building; b means independent variable: corporate social performance. | 4,326 | 2021-07-06T00:00:00.000 | [
"Business",
"Computer Science"
] |
Experimental investigation of exterior reinforced concrete beam - column joints strengthened with hybrid FRP laminates
In the present study, an experimental and theoretical investigation is carried out on the reinforced concrete exterior beam-column joints strengthened with the hybrid fibre reinforced polymer (HFRP). The effect of reversible distress that develops in the joint region due to seismic force is determined experimentally by applying reverse cyclic loading on the tip of the beam. In theoretical analysis, the shear strength of strengthened joints was determined, and satisfactory correlations with experimental results were established. Hence, the proposed physical model provides valuable insight into the strength behaviour of the joints.
Introduction
Most old structures have been designed and constructed for gravity loads. In reinforced concrete framed structures, the most critical component is the exterior beam-column joint which is susceptible to severe environmental conditions. Many researchers have made an extensive study of past earthquakes, with the focus on the causes and reasons for failure in the beam-column joint. Kaushik and Jain [1], Saatcioglu et al. [2] observed that during the Sumatra earthquake (2004) the damage caused to the reinforced concrete structure was due to a lack of proper seismic design and detailing. In Bhuj Earthquake (2001) there was severe damage to the exterior beam-column joints due to the instability of the column. The instability was caused due to disorganization and insufficient longitudinal and shear reinforcements. Two major modes of failure at the joints are (a) joint shear failure, and (b) end anchorage failure (Ghobarah and Said,[3]). From the survey of past earthquakes, it can be observed that beam-column joints in reinforced framed structures are crucial zones for an effective transfer of load between structural connections (i.e. beams and columns). For the gravity load design practice, the design check for joints is not necessary as they are not critical. The failure of reinforced framed structures is due to heavy distress caused by the joint shear resulting in failure of the building. One of techniques that are used to rehabilitate structures damaged in earthquakes is by means of repair and retrofitting. Past research shows that Fibre Reinforced Polymers (FRP) can be used for retrofitting the exterior, interior, and corner beam-column joints due to the ease of application, cost-effectiveness and high corrosion resistance, low unit weight, high tensile strength to stiffness ratio and excellent fatigue behaviour (Ozcan et al. [4]). Antonopoulos and Triantafillou [5] pointed out that the presence of the transverse beam strongly affects the FRP influence on beam-column joints. Attari et al. [6] studied the effect of the external strengthening of beam-column joints using different types of fibre reinforced composites. The shear strength and ductility could be improved with the combination of carbon and glass fibre reinforced polymers. The external bonding of FRP sheets with epoxy resin is an easy retrofitting technique for the reinforced concrete beam-column joint subjected to seismic loads (Engindeniz et al. [7]). Mosallam and Banerjee [8], Parvin and Granata [9], Said and Nehdi [10], Mukherjee and Joshi [11], and Parvin and Wu [12] have noted that strengthening of connections can increase the moment capacity, ductility, initial stiffness, energy dissipation capacity, and reduce joint rotations and stresses in both concrete and reinforcement. Mahini and Ronagh [13] tested seven scaled-down plain / FRP-retrofitted RC exterior joints of a typical ordinary moment resisting frame under monotonic/ cyclic loads. Their test results show that the method is also effective for enhancing strength of the system. Zou et al. [14] investigated a 3-storey frame strengthened with FRP around its columns. It was noted that there was only a marginal increase in stiffness after strengthening. Slightly enhanced stiffness adds to the overall stability of the frame as stiffer columns lead to higher seismic forces. In addition, the failure mode of the frame was shifted from a column side-sway mechanism to an acceptable storey deformation level with weak beam-strong column behaviour. Mahini and Ronagh [15] and Niroomandi et al. [16] studied the peak strength of plain FRP-retrofitted joints and compared the result with those of the same frame retrofitted with steel braces. The results indicated that the FRP retrofitted RC frame is better than the steel braced retrofitted frame. Studies were also conducted on theoretical capacity models (Ghobarah [17], Priestley et al. [18], Fave and Kim [19], Park and Mosalam [20], and Masi et al. [21]) in order to predict the strength capacity of beam-column joints and the sub-assembly failure sequence. The reliability of capacity models was also assessed by the definition of theoretical and experimental joint shear. The extensive literature survey necessitated carrying out an experimental study on beam-column joint strengthened with repairing techniques and laminations under reversed cyclic loading. A comparison of the joint shear strength of exterior reinforced concrete beam-column joints before and after retrofit was also conducted. The scope of present work is limited to preliminary experimental investigations aimed at comparing behaviour of exterior beam-column joints retrofitted with Hybrid FRP with banana fibres in mat form and chopped form. With the proposed retrofit method, this research work aims to avoid joint shear failure and hence to promote beam flexural hinging.
Research significance
The HFRP laminations (mat banana fibre and chopped banana fibre) and GFRP wrapping are effective and economical retrofit techniques for improving seismic performance of the exterior beam-column joint. In this work, the experimental verification focuses on the load -drift envelope behaviour, ultimate load, load -drift hysteretic loops, cumulative energy dissipation, stiffness, and ductility. The theoretical validation was carried out by determining the horizontal shear force and shear strength at the joints. GRAĐEVINAR 73 (2021) 4, 365-379 Experimental investigation of exterior reinforced concrete beam -column joints strengthened with hybrid FRP laminates
Analysis of RC building and design of beamcolumn joint
An eight storeyed reinforced concrete building located in Chennai, India, in Seismic Zone III (design acceleration coefficient (Sa/g) = 2.5 as per IS 1893 (Part 1) [22] on medium -Type II soil ( Figure 1
Detailing of beam-column joint scaled model
The experiment was carried out in the laboratory; the specimen size was designed according to the testing machine to the onethird scale of the prototype. The dimensions and reinforcements of the scaled model (1/3 rd ) following Cauchy's law of similitude (Carvalho [25]) and the beam-column joint region were detailed as per SP34 [24] (cf. Figure 2). Beam and column elements were extended between points of contraflexure (assumed to be at midspan in the beam and at midheight in the columns) where hinge connections were introduced. The height of the specimen (L c ) is taken as the distance between the point of contraflexure in the column. The dimension and the area of the scaled model with prototype are shown in Table 1.
Specimen description
Specimens were designated as S2BRF and S3BRF (specimen before retrofitting). The specimens after retrofitting were designated as S2ARF and S3ARF (strengthened with HFRP laminations with eight layers of GFRP wrapping). The retrofitted beam-column joint laminations are shown in Figure 3.
Retrofitting materials
Retrofitting of the exterior beam-column joint was carried out by removing and replacing concrete in the regions of damaged joints. Special attention was given to ensuring a good bond between the new and existing concrete during the process of retrofitting. The Experimental investigation of exterior reinforced concrete beam -column joints strengthened with hybrid FRP laminates damaged region was replaced by new concrete. The strength of the mix which was used to repair the joint had to have the same strength as the existing concrete (39 MPa). The maximum size of aggregate to be used for making repair concrete was less than 10 mm. The mix proportion was 1:1.51:2.53 by weight, and the water -cement ratio was 0.45. Forging slag was used (20 %) as partial replacement of fine aggregate and an elastomer material SBR (Styrene butadiene rubber) was used in such proportion (20 %) by volume of water that improves bonding with the cement paste. Super-plasticizer (1 %) was also used as a water reducing agent to get the required workability. Epoxy resin (3:1) was used as a bonding layer between the existing concrete and new concrete and it also ensured that most of the visible cracks were fully filled with epoxy resin.
Hybrid fibre reinforced polymer (HFRP) laminate
Two types of HFRP laminate strengthening systems were used in this research. The first system consisted of banana fibre in mat form (bidirectional) with glass fibre (bidirectional 610g/m 2 ). The second system consisted of banana fibre in chopped form (12 mm length and 100-125 microns diameter) as reinforcement and epoxy as matrix. The method of fabrication adopted was the resin transfer method. The liquid resin mix comprising the epoxy resin (Araldite LY 556) and hardener (HY 951) in the proportion of 10:1 by weight was used for both lamination systems. For the first system (Mat form) the volume fraction was 60 % hybrid reinforcement and 40 % matrix, whereas for the second system (chopped form) the volume fraction was 48 % hybrid reinforcement and 52 % matrix. Detailed characteristics of the two types of HFRP laminate and the epoxy resin used in this work are given in Table 2.
Glass fibre reinforced polymer (GFRP) For strengthening the two specimens (S2ARF and S3ARF) by GFRP wrapping, the thickness must be at least 35 % greater than that of the sheet thickness to prevent rupture (Granata and Parvin [27]). Eight layers of GFRP (total thickness of 1 mm) are utilized for wrapping both specimens. The tensile strength, modulus of elasticity, and ultimate strain are 81 MPa, 20 GPa, and 4 %, respectively.
Adhesive
The adhesive used in bonding laminate to concrete was Sikadur 330 2 mm in thickness. The resin and hardener were mixed in the ratio 3: l (by weight). A uniform mid-grey colour indicates sufficient mixing of the white resin and black hardener with silica used as filler. 2mm thickness of the Sikadur 330 was used in the present work. The specimens were cured for 7 days before testing. The average strength and modulus were 28.4MPa and 8.6 GPa, respectively.
3.4.2.Retrofitting procedure
The preparation of adhesion surfaces is essential before the lamination could be bonded to concrete. Grit blasting was conducted on damaged concrete surfaces with 180-mesh alumina at an average pressure of 207 kPa (30 psi) in a pressurefed re-circulating machine, and then clean air was blown to remove dust. The adhesive was applied to both surfaces in order to prevent formation of air bubbles by the spread of adhesive from one surface to the other. Ballotini (Glass spheres) was used to get the desired glue line thickness of 2 mm. The bonding pressure was applied by weights to achieve a uniform distribution of load over the plan area of each lamination as shown in Figure 4.
Test setup
The exterior beam-column joint specimens (scaled to 1/3) were tested in the well-equipped set-up of 200 Tonne capacity in a steel loading frame in Structural Dynamic Laboratory, Division of Structural Engineering, Anna University, Chennai, India. To stimulate the test model, the column bottom end was provided with hinged support and the column top end was provided with roller support. 10 % of constant axial load (5 tonnes) was applied at the column top end as this will in turn allow the joint to dissipate higher energy during cyclic loading. The load was kept constant during the entire loading procedure for all specimens. The steel support base was properly fixed to the strong reaction floor. Load cells and dial gauges were used to record the load and displacement of specimens. Two 10 tonne hydraulic jacks were used to apply the reverse cyclic loading at the top and bottom ends of the beams. Displacement control was adopted for the model. The loading protocol considered a series of three cycles at increasing levels of drift. Both the dial gauges and load cells were instrumented at a distance of 50mm from the tip of the beam. The dial gauge was used to control that displacement occurs according to the loading protocol. The corresponding loads were noted using a load cell (Push and Pull). The experimental test setup presented in Figure 5 shows: a) test setup in laboratory-S2ARF and b) test setup in laboratory-S3ARF. The position of monitoring instrument is shown in Figure 6.
Experimental results and discussion
Test results obtained before retrofitting and after retrofitting are described in this section.
Ultimate Load
The ultimate load measured from the experimental investigation for the specimens before and after retrofitting for push and pull direction of loading is shown in Table 3. The maximum ultimate load was observed for S3ARF specimen and was found to be 67.5 % higher than S3BRF specimen. S3ARF specimen showed higher ultimate load compared to S2ARF specimen and the percentage increase is 4.5 %.
Load -Drift Hysteretic loops
The load -drift hysteresis loop was plotted for both systems of laminations, as shown in figures 8 and 9. The pinching effect induced due to bond stress-slip and shear sliding was also considered in hysteretic loops. For retrofitted specimens, the areas of hysteresis loops gradually increased as the drift cycle increased, with better energy dissipating capacity for the S2ARF specimen compared to S3ARF specimen. The load -drift envelope curve is shown in Figure 10.
Energy dissipation
Energy dissipation capacity is an important criterion for assessing performance of a component when subjected to seismic loading, with the assessment depending mainly on the rates of stiffness degradation and strength degradation in each cycle during hysteresis response.
Figure 11. Load -drift hysteresis loop for retrofitted specimen
The cumulative energy dissipation of the specimens obtained from the experiments is shown in Figure 11. The energy dissipation of S2ARF and S3ARF specimens was found to be by 68.3 % and 44.4 % higher compared to that of S2BRF and S3BRF specimens, respectively, at 2.5 % drift ratio, as shown in Figure 12. At higher drift ratios, the specimens with higher joint shear stresses showed higher energy dissipation. An increase in joint shear stress resulted in greater damage to the joint, which led to greater energy dissipation.
Stiffness Degradation
The stiffness of beam-column joints is estimated by using the slope of the peak-to-peak line for each loop at each drift ratio (ACI 318 [29]). The structure has an enhanced ductility due to a lower rate of degradation. At a 2.5 % drift ratio, the stiffness of strengthened S2ARF specimens is by 63.3 % higher compared to that of S2BRF specimens, as shown in Figure 13. The stiffness of a structure is its resistance to deformation and the strengthened S2ARF specimens show greater stiffness than the S3ARF specimens, as shown in Table 4 and Figure 14. GRAĐEVINAR 73 (2021) 4, 365-379 Experimental investigation of exterior reinforced concrete beam -column joints strengthened with hybrid FRP laminates
Ductility
The displacement ductility is defined as the ratio of the ultimate displacement (δ u ) to the yield displacement (δ y ). Yield load (P y ) and δ y are determined as per Figure 15. P max and δ u are the peak load and the corresponding displacement on the load-displacement curve, respectively. The displacement ductility of the specimens indicates that retrofitted specimens behave in a ductile manner for two types of systems. The enhancement in displacement ductility for S2ARF was observed to be by 79.74 % higher compared to S2BRF specimen. For S3ARF the percentage of increase is 68 % only, as given in Table 5. The yield displacement is computed by plotting the load-drift envelope curve from hysteretic loops based on the equivalent elastic-plastic yield model. The ultimate displacement is calculated as the point of maximum force that the specimen could withstand based on peak load, [30].
Specimens S2BRF and S2ARF
In S2BRF specimen, a crack was initiated from the column face towards the beam during the third cycle of loading (0.8 % downward drift). In the fourth and fifth cycles of loading (1 % and 1.5 % of drift), it was observed that a series of flexural and flexural-shear cracks were formed along the beam length. These cracks grew wider during the sixth and seventh cycles of loading (2 % and 2.5 % of drift) ( Figure 16).
Figure 16. Crack pattern of specimen (S2BRF)
For S2ARF specimen, cracks were initiated only during the fourth cycle of loading (1 % drift) and were observed to further develop (3 % drift). The signs of rupture were observed at the top of the column wrap located at the beam-column junction, during the ninth cycle of loading (3.5 % drift). The development of rupture, along with peeling of the column wrap, was observed at the edge nearer to the beam during the tenth cycle of loading (4 % drift). Also, initial de-bonding was observed at the same location of rupture at 4 % drift. During the eleventh cycle of loading (4.5 % drift), full debonding with severe rupture developed at the same location. The specimen was found to attain full failure at beam-column joint during the twelfth cycle of loading (5 % drift) ( Figure 17).
Specimens S3BRF and S3ARF
In S3BRF specimen, during downward drift of the third cycle (0.8 % drift), a crack was initiated at the column face and propagated towards the beam during the third cycle of loading (0.8 % drift). The flexural and flexural-shear cracks developed during the fourth and fifth cycle of loading (1 % and 1.5 % drifts), respectively. Cracks become wider during the sixth and seventh cycles of loading (2 % and 2.5 % of drift), as shown in Figure 18. In S3ARF specimen, as the drift gradually increased, there were no signs of a crack in concrete. The signs of rupture were observed at the top of the column wrap located at the beam-column junction, during the eighth cycle of loading (3.0 % drift). Furthermore, development of rupture was observed along with peeling of the column wrap at the edge nearer to the beam, and initial de-bonding started at the beam-column connection during the ninth cycle of loading (3.5 % drift). Development of deep de-bonding occurred at the connection at the tenth cycle of loading (4 % drift). During the eleventh cycle of loading (4.5 % drift), new de-bonding occurred in the middle of the bottom lamination, at the bottom of the beam, and at the beam-column connection point. De-bonding developed deeper and so the lamination split further from the concrete surface. Complete failure of the beam-column joint specimen occurred during the fourteenth cycle of loading (6 % drift), as shown in Figure 19: a) failure at 5 %, b) failure at 6 %, c) Complete failure at 6 %.
Theoretical analysis
In framed structures, large shear force is generated in the exterior joint region due to seismic force. Some of the internal forces, generated in the concrete will combine to develop a diagonal strut. The truss mechanism depends on the effectiveness of the bond between the concrete and the steel bars (beam and column bars). The transverse reinforcement in the joint confines the concrete diagonal strut in the joint core and contributes to an increase in joint strength. Forces that developed at the proposed beam-column joint are shown in Figure 20. Joint shear forces generated due to external forces are shown in Figure 21. The labels in Figures 20 and 21 have the following meanings: C cb , C c1 , C c2 -compressive force in the beam and the column concrete, C sb , C s1 , C s2 -compressive force in the beam and the column reinforcement, h b , L b -depth and the length of the beam concrete, h c , L c -depth and the height of the column concrete, h b ' , h c ' -depth of the beam and the column reinforcement, M b , M c -moment in the beam and the column P -axial load, T b , T c1 , T c2 -tensile force in beam and the column reinforcement, V b i V c -vertical and horizontal shear force of the beam and the column, V jv , V jh -vertical and horizontal joint shear forces, Z b , Z c -lever arm of the beam and the column.
Joint shear force
The maximum horizontal shear force at the joint (V jh ) from theoretical analysis can be calculated using the equilibrium of forces acting at the connection just before failure, as shown in Eq. (1).
where α -stress multiplier for longitudinal reinforcement at a joint-member interface. (3) Hence (V b ) from theoretical analysis is determined from Eq. (3). The equilibrium of external forces, from which V c (theoretical study) can be computed, is given in Eq. (4). Further maximum horizontal shear force from theoretical analysis (V jh ) is obtained from Eq. (1).
For the HFRP retrofitted specimens, the joint shear force (V jh retrofitted ), corresponding to the maximum beam flexural capacity, can be calculated by assuming that the tensile force in the longitudinal reinforcement of the beam (T b ) does not change in the retrofitted specimens, as shown in Eq. (4). For computing the maximum horizontal shear force from the experimental investigation (V jh ), vertical shear force (V b ) of the beam has been taken as the experimental ultimate load and the horizontal shear force (V c ) in the column is computed from Eq. (4). Hence, (V jh ) is calculated using Eq. (1). Similarly, V jh retrofitted is computed from Eq. (5).
where: T HFRP = ε f A f E f -tensile force in the hybrid fibre reinforced polymer ε f -strain in the HFRP A f -cross section area of the HFRP E f -elasticity of the HFRP.
Joint shear strength
The design ultimate shear capacity of joint before failure (V n ), from experimental and theoretical study, can be computed using Eq. (6).
V n = V c + V s , and V s = 0,87f y A s (6) where: GRAĐEVINAR 73 (2021) 4, 365-379 Sumathi Marimuthu, Greeshma Sivasankara Pillai V s -design link shear force resistance V c -design shear force resistance of concrete in a joint f y -yield stress of reinforcement A s -cross section area of the reinforcement.
While for retrofitted joints, the total shear resistance (V n, retrofitted ) consists of the concrete resistance, the resistance of the ties and the resistance provided by the composite lamination, is given by Eq. (7).
where: V HFRP -design shear force resistance of hybrid fibre reinforced polymer in a joint (Hadi and Tran [31]) r f -FRP reinforcement ratio ε fe -effective strain level in FRP reinforcement E f -elastic modulus of (FRP).
Design guidelines
The increased flexural strength of the section (M b ), calculated as per ACI-ASCE Committee 352 [32] where: d -distance from extreme compression fibre to centroid of tension reinforcement b -width of the concrete beam a -depth of equivalent rectangular compression block α -stress multiplier for longitudinal reinforcement at jointmember interface.
The shear in column (V c ) is calculated based on the nominal flexural strength of the section (11) where L c -the height of the column.
The horizontal shear force (V jh ACI ) can be obtained from Eq.
(1). ACI-440.2R-08 [33] requirements for the nominal flexural strength of the joint retrofitted (M n ) should be satisfied as per Eqs. (12) to (16) where: M ns -contribution of steel reinforcement to the nominal flexural strength M nf -contribution of HFRP reinforcement to the nominal flexural strength A f -the area of external HFRP reinforcement Ψ f -FRP strength reduction factor (0.95).
Flexural strength of the beam is calculated using sectional analysis as presented in Figure 22. β -ratio of depth of equivalent rectangular stress block to non-linear distribution of stress c -distance from extreme compression fibre to the neutral axis b f -effective width of the FRP flexural reinforcement, ε c , ε sc -reffective compressive strain level in the concrete and the steel, ε ct, ε st ε f -effective tensile strain level in the concrete, steel and FRP.
ε' c = 1,7f c / E c ε' c strain corresponding to f c , and E c is elastic modulus of concrete.
where α 1 multiplier on f c to determine intensity of an equivalent rectangular stress distribution for concrete, and f fe effective stress in the FRP stress level attained at section failure.
The maximum tensile force (T n ) that can be carried by the horizontal FRP layer along the beam can be calculated as per the guidelines.
where d f effective depth of the HFRP flexural reinforcement.
Comparison of experimental and theoretical results
The horizontal joint shear force, shear strength, and joint shear ratio, were evaluated for the experimental and theoretical study in Table 6. The detailed comparison is given in Table 7. It can be observed from Table 7 that a reasonably good agreement exists between theoretical and experimental values of V jh . The ACI guidelines values are higher particularly for S2ARF and S3ARF specimens. Note: V jh exp is the experimental joint shear force, V jh Theo is the theoretical joint shear force, and V jh ACI is the ACI guidelines joint shear force. V n exp is the experimental joint shear strength, V n Theo is the theoretical joint shear strength, and V n ACI is the ACI design guidelines joint shear strength.
Conclusions
The following conclusions can be made based on the experimental and theoretical analysis: -The ultimate load was observed to be matching (< 5 % variation) for both types of retrofitted specimens (S2ARF and S3ARF). The experimental ultimate load carrying capacity for retrofitted specimen (S2ARF) is by 4.5 % higher compared to S3ARF specimen. The load carrying capacity of S2ARF was observed to be by 25.7 % higher than that of S2BRF whereas, for S3ARF, the variation is within 6.3 % at 2.5 % drift level.
-A slight increase in the displacement ductility was observed for S2ARF (< 5 %) compared to S3ARF in the retrofitted specimens. It was found that S2ARF specimen had 4.3 % more than the S3ARF. However, there is an enormous increase in the displacement ductility for S2ARF (79.7 %) and S3ARF (68 %) when compared to S2BRF and S3BRF.
-Spindle-shaped hysteretic loops exhibit large energy dissipation capacity for both types of retrofitting systems (S2ARF and S3ARF). The experimental results were compared with the backbone envelope curve of hysteretic load -drift loops (ultimate load and stiffness) -The stiffness shows an increase of 63.3 % for S2ARF specimen compared to S2BRF specimen, and S3ARF is by 41.4 % greater than S3BRF at a 2.5 % drift level. Thus the variation is 16.3 % compared to S2ARF and S3ARF from experimental investigation.
-The un-retrofitted specimen develops diagonal shear failure at the top and bottom of the beam that contributes to the failure of the specimen when tested under reverse cyclic loading at a 2.5 % drift. However, the retrofitted specimen exhibited cracking at the edge, creep at the GFRP wrapping, rupture of HFRP lamination, and suffered bent reinforcement when retested under reverse cyclic loading at a 5 % drift.
-The joint shear force and shear strength based on available design guidelines (ACI) were compared with the experimental value of joint shear strength. Good correlation was observed between the theoretical and experimental results.
-Comparing the seismic performance of the HFRP retrofitted beam-column joints with different strengthening retrofitting methods, it is concluded that both retrofitting schemes have the comparable capability to increase the ductility factor and strength. In specimen S2ARF, the use of HFRP lamination (Mat banana fibre) prevents de-bonding from the concrete surface by up to Table 6. Experimental and theoretical evaluation of joint forces Table 7. Comparison of experimental and theoretical evaluation of joint forces 5 % drift ratio. The HFRP reached a strain up to ultimate strain without failure in both tension and compression. The retrofitted joints exhibited 67.5 % higher loading capacity and dissipated four times the energy dissipated by the un-retrofitted specimens (S2BRF). | 6,517 | 2021-05-12T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
The Notched Stick, an ancient vibrot example
An intriguing simple toy, commonly known as the Notched Stick, is discussed as an example of a “vibrot”, a device designed and built to yield conversion of mechanical vibrations into a rotational motion. The toy, that can be briefly described as a propeller fixed on a stick by means of a nail and free to rotate around it, is investigated from both an experimental and a numerical point of view, under various conditions and settings, to investigate the basic working principles of the device. The conversion efficiency from vibration to rotational motion turns out to be very small, or even not detectable at all, whenever the propeller is tightly connected to the stick nail and perfectly axisymmetrical with respect to the nail axis; the small effects possibly observed can be ascribed to friction forces. In contrast, the device succeeds in converting vibrations into rotations when the propeller center of mass is not aligned with the nail axis, a condition occurring when either the nail-propeller coupling is not tight or the propeller is not completely axisymmetrical relative to the nail axis. The propeller rotation may be induced by a process of parametric resonance for purely vertical oscillations of the nail, by ordinary resonance if the nail only oscillates horizontally or, finally, by a combination of both processes when nail oscillations take place in an intermediate direction. Parametric resonance explains the onset of rotations also when the weight of the propeller is negligible. In contrast with what is commonly claimed in the literature, the possible elliptical motion of the nail, due to a composition of two harmonic motions of the same frequency imposed along orthogonal directions, seems unnecessary to determine the propeller rotation.
Introduction
Screw loosening is a common practical experience; less common may be a spectacular case as the rotation of an ancient Egyptian statue in the Manchester museum [1] in its glass container without any apparent external action. In both cases it is possible to guess that the environmental vibrations contribute to the apparently autonomous movement of the object along a circular pathway centered on its rotation axis.
In the scientific literature there are at least two topics related to this same basic physical phenomenon. The first one is the "vibrot", a term recently introduced in [ from undesired environmental vibrations to a rotation movement [3][4][5][6][7][8][9][10] (e.g., to insert and tighten a screw). The second one is an old mechanical toy, known with various names: Notched Stick, Hui game, Girigiri-Garigari, Gee-haw whammy diddle, Bozo-bozo, whose typical appearance is shown in Fig 1. This one will be the object of the present paper.
Common mechanical toys may have a great education value, often allowing to easily explain both simple and complex physical concepts involved in their working principle. For this reason, they are not only intriguing for children but also for scientists.
The Notched Stick generally consists of a notched stick made of wood on which a propeller loosely fitting a nail and free to spin, is fixed; upon moving a dowel back and forth across the notches, the propeller rotates (see Fig 1).
Of course, some details may be different: the section of the notched stick may be squared, rectangular or circular, the materials used for it may be wood, plastic or metal, the notches may be produced along one side or on the edge. These features do not change the basic working mechanism but however may affect many details of the device behaviour. A short summary of the main literature is given in the following.
To our knowledge the first scientific paper devoted to this toy has been published in 1937 [11]. The author, R.W. Leonard, using a rectangular section notched stick, proposed both a mechanism for the rotation, depending on the combination of two perpendicular linear harmonic vibrations, and a mechanism to control the rotation sense, based on the edge used for the stroking action. 20 years later J. S. Miller, in 1955[12], observed in a short note that the stroking modality controls both the speed and the direction of the propeller rotation, which stops when an out of phase vibration is produced. Thus, he concluded that: "the rotation is clearly a matter of resonance and forced vibrations" [12].
In the same year, in another short note E.R. Laird [13] described a patented "Indian mistery stick" "with a square cross section and notches across one of the edges"; he also proposed to use the finger position on one or the other edge to change the rotation direction. Laird writes that "the action in this case is not due to the resonance in the ordinary sense of the term".
In the following year G. D. Scott [14] in a more detailed analysis uses a stick with a circular section; he accepts the fact that it is possible to control the direction of rotation but he poses again the two basic questions: why does the rotor turn at all and what is the mechanism for controlling the direction? In his opinion "the direct cause of the rotation of the rotor is most certainly a circular or elliptical motion of the nail which serves as its axle"; he excludes the role of an off-center or elliptical hole in the rotor, as well as that of a resonance effect.
He claims that the circular motion is caused by the oscillation of the nail determined by the notched stick and distorted by the action of the thumb of hand or by fingers in a circular or elliptical motion. This same mechanism allows the direction control. 20 years later S. S. Welch [15] relaunched the question. Welch writes that a square section stick will not rotate if a finger is not used to act a pressure on one side of stick, while rectangular stick will.
In a further paper published 10 years later G. J. Aubrecht II [16] repeats Leonard's claim that the Notched Stick is a by-product of native American culture. Moreover, he indicates a wrong reference but for an interesting idea: "Scott suggests that the propeller is driven in the same way keeps a hula-hoop turning". In fact such a suggestion is not present in Scott's short note, but we believe this is a very interesting hint, reconsidering the importance of the shape of the propeller hole where the nail is placed through. Aubrecht assumes that the spin of the propeller originates from the presence of an elliptical motion of the nail, imparting a torque to the propeller every time the two bodies are in contact. In 1988 H. J. Schilchting and U. Backhaus [17] also have provided an analytical description of the phenomenon proposed by Scott. In 1992 a paper by Scarnati and Tice [18] was devoted to an analysis of the building details of the device, although without any original novelty on the analysis.
More recently a very interesting paper has been published [19] that describes the use of a metal stick and a robotized device to act on the stick in a repeatable way. The conclusions of the work are quoted below: "(1) The revolution of the propeller is caused by the elliptical motion excited at the end of the rod.
(2) The vibration system can be treated as a lumped-constants system.
(3) Two factors of phase difference between two vibration directions exist. One is the shift in the resonance frequency, and other is waveform difference of the driving forces.
Furthermore, simulation results indicated the validity of the model composed of lumped constants." The above state-of-the-art review suggests that the Notched Stick is commonly believed to successfully convert vibrations into rotational motion thanks to a composition of two rod movements, one along the horizontal direction and one along the vertical direction, thus resulting in an elliptical motion of the nail that drags the propeller. The aim of the present study is to show that, in spite of the common belief, the occurrence of an elliptical motion of the nail is not really necessary to induce the rotation of the propeller in this kind of device. The result is preliminarily suggested by a qualitative discussion of some simple analytical models of the toy, then experimentally demonstrated by a suitable device vibrating in a controlled way, and finally substantiated by some numerical simulations based on a more realistic analytical model. The preliminary models point out that the working principle of the device can be explained by the occurrence of parametric and/or non-parametric resonance phenomena, recognizing a mechanical analogy between the toy and a suitably excited oscillator, and removing the need for an elliptical motion of the nail. The experimental evidence is provided by tests performed, for the first time, imposing acoustic or controlled piezoelectric vibration to the stick. A further support to the conclusions comes from numerical modelling within the framework of multibody dynamics simulations. The study of the way the Notched Stick converts a vibrational stress into a rotary motion allows to speculate about the development of new technologies able to transform vibrational noise into mechanical energy.
Features of the device and phenomena possibly involved in its running
The device as a master-slave dynamical system To a satisfactory degree of approximation the nail-propeller system can be regarded as a master-slave dynamical system: owing to the very small mass of the propeller, one may assume that the motion of the nail (master) is not significantly affected by the motion of the propeller (slave). This means that for the propeller the nail simply behaves as a time dependent constraint, that moves in a given way.
Possible kinds of nail motion
At rest the nail is assumed to be placed horizontally. The nail may undergo different kinds of motion: (1) a horizontal sinusoidal motion; (2) a vertical sinusoidal motion; (3) a sinusoidal motion in a plane inclined with respect to the vertical direction, with a horizontal and a vertical component of the same frequency and with no phase difference; (4) an elliptical motion, with a horizontal component and a vertical component of the same frequency, possibly unlike amplitudes and different phases. In the case of equal amplitudes and a phase difference of ¼ a period the nail motion is circular and uniform.
In any case the frequency of the motion is that of vibration of the stick that holds the nail, induced by moving the dowel back and forth across the notches. The assumption that the two possible components of the nail motion are sinusoidal, in both the horizontal and the vertical direction, constitutes an approximation, though it seems reasonable.
Coupling between the nail and the propeller
The contact between the moving nail and the hole at the center of the propeller may be: -loose, like a ring hanging on a stake, or a horizontal axis hula-hoop. The clearance between the outer surface of the nail and the inner surface of the propeller hole is relatively large compared to the diameter of the nail; -tight, which means that the external boundary of the hole basically does not slip on the nail because of the static friction forces. The rotation of the propeller takes place owing to a ball bearing set up at its center: the inner ring of the ball bearing adheres to the nail, whereas the external one moves along with the propeller.
In the first case, if along the edge of the hole the propeller remains in contact with the nail during the motion, one may assume that the propeller rolls without slipping along the edge of the nail; such a pure rolling is made possible by the static friction.
The device may be conceptualized as a horizontal axis hula-hoop in which the friction forces due to the contact between the axis and the propeller allow the rotation around the axis and may act by chance along the axis (moving the propeller back and forth) and are not balanced by the gravity as in a vertical axis hula-hoop.
Role of gravity
The role played by gravity seems to be significant in the starting stage, the onset of the propeller motion, particularly when the oscillation amplitudes of the nail are small.
Such a role stems from the eccentric position of the propeller center of mass relative to the center of the nail, position that: -certainly occurs in the case of a loose propeller-nail coupling; -requires some asymmetry feature of the propeller in the case that the propeller-nail coupling is tight. If the propeller were perfectly axisymmetrical with respect to the axis of the nail, the propeller would be statically equilibrated and the effect of weight on the device dynamics would be negligible or even theoretically null.
In practice, in these conditions the propeller behaves as a physical pendulum with an oscillating point of suspension.
Simple modelling. Tight nail-propeller coupling
In the case of a tight coupling between the nail and the propeller a simple model of the system consists of a flat rigid body P that can freely rotate around a point C of it in a fixed plane; the point C is representative of the nail position and is forced to move according to an assigned time-law. One may introduce an inertial reference frame Oxy in the same plane, with horizontal and vertical axes Ox and Oy and unit vectors e x and e y , respectively. The point C will move according to a law of the form: where ξ(t) and η(t) are given functions of time. If G denotes the center of mass of P, possibly different from C due to slight asymmetries of the device, a is the distance between G and C, and φ stands for the angle that the segment CG forms with the vertical straight line drawn downwards through C, assuming ideal constraints the Lagrangian of the system in the reference Oxy takes the form: where I is the moment of inertia of the plate P around the axis through G orthogonal to Oxy, m the mass of P, g the gravity acceleration and the dot denoted time derivative (see Fig 2). This is the Lagrangian of an ideal holonomic system with time-dependent constraints, whose equation of motion can be written as: i.e., explicitly: on having introduced a simple viscous terms D φ ¼ À b φ _ , with friction constant β, to account for energy dissipation due to rotation in air.
In the case that G coincides with C, that is a = 0, the equation of motion reduces to Iφ __ ¼ À b _ φ and the system simply becomes a perfectly equilibrated rotor with a viscous damping; all the configurations of the plate are stable rotational equilibria and the nail motion is not effective in inducing a rotational motion of the propeller. Noticeably, gravity plays no role.
If G is different from C but gravity can be neglected, the motion is described by: and corresponds to an unbalanced rotator parametrically excited with viscous damping, already discussed in the literature as a dynamical model of hula-hoop [20]. When G does not coincide with C but gravity is significant, the full Eq 4 must be considered and the system appears as a compound pendulum parametrically excited and subjected to a viscous damping. If x __ ¼ Z __ ¼ 0, i.e. the system is not excited by the nail motion, there are precisely two rotational equilibria of the plate, corresponding to φ = 0 and φ = π, stable and unstable respectively. As far as the possible onset of rotational motions is concerned, three remarkable cases can be distinguished according to the kind of nail motion: (i) for Z __ ¼ 0 but x __ 6 ¼ 0 which means that the nail C only moves horizontally, both the rotational equilibria of the unexcited system are deleted. Necessary condition for the onset of rotations, small oscillations around φ = 0 may be amplified by a mechanism of ordinary resonance, since for φ = 0 the equation of motion admits the linear approximation: and can be regarded as the equation of a damped harmonic oscillator with a forcing À max __ ;
Fig 2. Simple model with tight nail-propeller coupling.
A simple model of the device in the case of a tight nailpropeller coupling. C denotes the nail position in the vertical plane Oxy, whose coordinates (ξ,η) vary according to a given time law. G is the projection on the same plane of the center of mass of the propeller P. Finally, a stands for the (typically small) distance of G from the nail axis and φ is the rotation angle of the propeller. Constraints are assumed to be ideal, but allowance is made for energy dissipation by a viscous term (ii) if Z __ 6 ¼ 0 and x __ ¼ 0 which corresponds to a purely vertical motion of the nail, the configurations φ = 0 and φ = π are still rotational equilibria of the system. However the configuration φ = 0 may become unstable owing to phenomena of parametric resonance [21], because the motion is governed by an exact equation of Hill's type with damping: In particular, for η = ε cosωt, with ε and ω some positive constants, the result is a Mathieu equation with damping; (iii) finally, whenever both x __ and Z __ are nonzero, thus describing the case where the nail oscillates in the horizontal as well as the vertical direction, the rotational equilibria φ = 0 and φ = π of the unexcited system disappear and the onset of the rotation may arise from both kinds (parametric and ordinary) of resonance.
Simple modelling. Loose nail-propeller coupling
In the case where the coupling between the nail and propeller is loose, the propeller can be modelled as a possibly inhomogeneous circular ring Γ, of mass m, that can roll without slipping, in a fixed plane, on the outer edge of a circular disk D representative of the nail. The disk undergoes an assigned, purely translational motion which corresponds to the imposed motion of the nail. Such a translational motion can be simply described by the motion of the centre A of the disk. Denoted with R the radius of the disk, and with r (r>R) and C the radius and the center of the ring Γ, respectively, the angular velocity of Γ can be easily expressed in terms of the angle φ that the segment from A to C forms with the vertical straight-line drawn through A downwards (see Fig 3): and is a vector orthogonal to the plane of the motion. The center of mass G of the ring may possibly differ from the centre C due to slight asymmetries of the propeller, and its position relative to G in the rest frame of the ring can be specified by the constant distance a = CG and the constant angle α shown in Fig 3. As before, an inertial reference frame Oxy may be introduced in the plane of the motion, with horizontal and vertical axes Ox and Oy and unit vectors e x and e y , respectively. The centre A of the disk will move according to a given time law of the form: with appropriate functions of time ξ(t) and η(t). If the constraints are assumed to be ideal, the Lagrangian of the system in the reference Oxy is given by: and the equation of motion takes the form: where I denotes the moment of inertia of the ring relative to its centre of mass G and, as before, allowance is made for energy dissipation by means of a viscous term If the disk is at rest, so that x __ ¼ 0 and Z __ ¼ 0 (fixed nail), the two rotational equilibria of the ring Γ are derived from the obvious equation ðr À RÞsinφ þ asinðφ þ aÞ ¼ 0. Such equilibria persist if x __ ¼ 0 and Z __ 6 ¼ 0 (purely vertical motion of the disk), and the amplification of small motions around the stable equilibrium should be attributed to processes of parametric resonance able to destabilize it. In contrast, rotational equilibria are removed in the case of purely horizontal motion of the disk (i.e., x __ 6 ¼ 0 and Z __ ¼ 0) and the possible growth of oscillation amplitude is due to ordinary resonance. In the general setting (x __ 6 ¼ 0 and Z __ 6 ¼ 0), parametric and ordinary resonance processes are expected to coexist.
Noticeably, in the typical case where α = 0 and/or a = 0 Eq 11 takes the same form of Eq 4 already derived for the previous model of tight nail-propeller coupling.
If additionally the effect of gravity is small and the oscillations of the disk can be assumed harmonic, another typical condition for common Notched Stick devices, the existence of almost uniform and stable rotational motions can be analysed and justified [22]. A light pitchless propeller loosely mounted at the end of the rod by a nail to rotate freely. Another model of the device more appropriate for the case of a loose nail-propeller coupling. In the vertical plane Oxy the nail is represented by a rigid disk D of center A and radius R, animated by a purely translational motion whose description is given in terms of the varying coordinates ξ(t), η(t) of A. The inner edge of the propeller hole, represented by the rigid circular ring Γ of radius r and center C, is assumed to roll without slipping on the outer profile of the nail D. Pure rolling requires static friction between Γ and D, but it does not invalidate the assumption of ideal constraints. The moment of inertia of the propeller with respect to the axis passing through its center of mass and orthogonal to Oxy is supposed to be known. The orthogonal projection G of the center of mass on the plane Oxy may not coincide with the center C of the ring Γ, as described by the distance a and the angle α. The propeller rotation is parametrized by the angle φ. The following Table 1 resumes all the cases.
Effect of the nail motion
As a consequence of the previous discussion, the effect of the nail motion on the propeller can be easily described in a qualitative way. If the propeller is tightly connected to the nail and turns out to be perfectly axisymmetrical with respect to the axis of the nail (the propeller center of mass lies on the symmetry axis of the nail), the nail motion does not trasfer any significant rotation to the propeller, but possible small effects due to friction forces.
In contrast, if the propeller center of mass is not aligned with the nail axis, the propeller behaves as a parametrically excited (and damped) compound pendulum and the following situations may occur: (1) if the nail only oscillates in the vertical direction, the trivial equilibrium position of the propeller survives, but it could be made unstable by a process of parametric resonance; (2) when the nail only oscillates in the horizontal direction, the trivial equilibrium position is removed and the oscillation amplitudes of the propeller may grow, up to the possible onset of a rotation, as a result of a process of ordinary resonance; (3) if the nail oscillates along an intermediate direction between the horizontal one and the vertical one, both the effects previously described take place; (4) if, finally, the nail describes an elliptical motion, as a composition of two harmonic motions of the same frequency, arbitrary amplitudes and a phase difference which is not an integer multiple of π, a condition similar to case (3) occurs.
Moreover, a similar behaviour would be observed also in the case where the weight of the propeller were completely negligible. In that case, however, all the configurations would be of rotational equilibrium in the absence of nail motion.
Nevertheless, introducing an oscillation of the nail along a given direction would destroy all the positions of rotational equilibrium but those where the propeller center of mass is aligned with the direction of the nail oscillation; such configurations, however, could be (or could not be) unstable as a consequence of parametric resonance phenomena and give rise to a rotational motion of the device [22].
The possible oscillation of the nail along an elliptical path would remove all the positions of rotational equilibrium, thus allowing the onset of rotations of the propeller by analogous mechanisms. Generally speaking, the onset of a rotational motion of the propeller DOES NOT NECES-SARILY REQUIRE that the nail must follow an elliplical motion, but it may occur also in the case of a purely linear motion of the nail along a fixed direction: along the vertical (rotation triggered by parametric resonance), along the horizontal (rotation induced by ordinary resonance), or along an intermediate direction (rotation induced by a combination of both kinds of resonance).
The previous models, although useful to illustrate qualitatively the possible onset of a rotational motion of the propeller as a consequence of a nail oscillation along a straight-line, suffer however of a considerable level of approximation to be completely useful in the quantitative description of the real device. In the case of a loose propeller-nail coupling, and particularly in the presence of very strong vibrations of the nail, one may expect temporary losses of contact and subsequent collisions between the propeller and the nail, involving impulse forces that would make the dynamics of the device hardly predictable, particularly in the initial stage of onset of the possible rotation. Another phenomenon to take into account is the possible sliding of the propeller hole edge on the outer surface of the nail, which makes the system eventually affected by dynamic friction forces. Last, but certainly not least, the assumption that the motion of the propeller takes place in a fixed vertical plane is typically unrealistic for a practical device, since the propeller may slide back and forth along the nail surface, parallel to the nail axis.
The above remarks suggested two lines of investigation: (1) the design and implementation of an appropriate device, to carry out experimental observations under conditions as controlled as possible; (2) the development of a more realistic (and complex) numerical model to reproduce, as far as possible, the results of the experiments. Obviously, as we will see, the price to pay was a growth of the number of degrees of freedom and an adequate modeling of impulse and sliding friction forces.
Materials
The used sticks were made of chestnut or beech, the rotating part of raft wood or polyurethane. The cross-section of sticks was 9x9 mm or 9x21mm. The rotating axis simply consisted of a slightly conic nail of mean diameter (1.64mm) fixed on the top of the stick. The size of different specimens was chosen as close as possible.
In order to impose a controlled vibration to the stick a loudspeaker and a piezoelectrical device (piezo) were exploited. These devices are currently used in our laboratory for experiments in wettability and partly described in previous literature [23,24]; the piezo movement PI 601 with a 505 PI power source moves along one direction only with a maximum elongation of 300 μm. For both devices a self-developed Labview software has been used to manage the vibration through a computer card or an external amplifier. In the case of the piezo the effective exciting voltage was checked through a parallel electric connection (manual feedback control).
The length of the stick was evaluated with respect to the vibrating support, with the same length protruding from the support and fixed with a very rigid rubber band in the case of the loudspeaker, or with a metallic constraint blocked with screws in the case of the piezo support.
It is noticeable that to simulate the presence or the absence of a mechanical backlash between the rotation axis and the rotating part of the stick we have used a microbearing on which the rotating propeller has been mounted; the microbearing could be fixed to the nail or not. The presence of a microbearing fixed to the nail corresponded to the absence of the backlash, because the propeller was fixed to the exterior of the bearing. In the absence of a microbearing stuck to the nail, the rotation of the propeller around the nail was affected by the microbearing inner diameter, inevitably larger than the nail diameter and thus associated to a backlash of a fraction of a millimeter. In this second case the presence of a microbearing avoids that the soft material of the propeller could be damaged or modified by the experimental procedure and keeps constant the backlash size.
All the experiments have been carried out on an antivibrating table. This condition is very important: runs performed on a common laboratory table do not give the described results. This is probably due to the fact that, in absence of an antivibrating support, the vibrations generated by the loudspeaker or by the piezo are in fact dissipated by the support. The experiment has been mounted on a Newport VH 3030-OPT antivibration table.
Experimental method
The apparatus described above has been used in different ways; the sinusoidal vibration has been imposed to the stick put transversally on the loudspeaker or fixed to the piezo base.
The main difference between these two vibration sources is that in the case of the loudspeaker some vibration on the horizontal axis cannot be avoided, even if the main contribution is reasonably in the vertical direction. In contrast, for the piezo source one can impose a purely vertical or horizontal vibration, or even at a 45˚inclined direction, combining the effect of horizontal and vertical vibration (in phase).
The experiments have been performed using a propeller without or with backlash, the first case corresponding to the use of the microbearing fixed to the nail, and the second to a propeller without the microbearing or, better, endowed with a microbearing not fixed to the nail.
In all these cases short movies have been recorded from which it was possible to extract some useful information such as the rotation direction, the speed and acceleration of the propeller and those of the nail head.
To collect the movies an EXILIM EX-FH25 Casio camera has been used, able to collect movies until 1000 fps. This kind of camera is very cheap and thus has some intrinsic limitation, reducing the pixel number of the image at the highest speed. The reduced size allowed however to capture the system image with an acceptably high resolution. The Casio Exilim FH-25 has a powerful macro objective; the movies have been taken at different distances from the rotating propeller; in the case of 1000 frames/sec images the pixel number is only 228x64. In the movies used for the calculation of rotation frequency the resolution was of about 3pixel/mm, but to detect the nail oscillation the pixel/mm ratio was modified to at least 10pixel/mm; therefore the final image resolution is of the order of 0.1mm. The minimum focus distance in supermacro mode is 1cm only, thus probably it is possible to achieve even a better pixel/mm ratio than that obtained in the present study.
A picture of the experimental device is shown in Fig 4.
Experimental results
In a first stage of the experiments the vibration-to-rotation tests have been carried out using a loudspeaker as a generator and without any bearing. It was apparent that some excitation frequencies are more efficient than others in the transformation of vibrational to rotational motion. It seemed useful, however, to reduce the oscillation amplitude of the propeller by means of a microbearing NOT fixed on the nail, simply to impose a fixed backlash between the nail and the bearing with a very small lateral oscillation. In these conditions it was possible to obtain the results shown in Fig 5. The Notched Stick was fixed to the loudspeaker through a very tight, although elastic connection. The plot shows the number of rotations per second versus the strain frequency (red line) on the horizontal axis. It is possible to detect a couple of resonance frequencies, whose value is about 75 Hz and 130 Hz, probably related by a harmonic correlation.
These experiments showed however that it was difficult to exclude a horizontal component due to the relative weakness of the elastic bonding and to the loudspeaker behaviour. For this reason the experiments have been repeated using the piezoelectric generator as a vibration source; in this case it was possible to induce a purely vertical strain of the stick.
The effect observed on the same device with the same protruding length and a very rigid mechanical blocking is illustrated in Fig 6A. In this condition (blue line) the frequency of 156Hz has been found as the main resonance frequency.
In order to improve statistics, compute standard deviations and check repeatability, the same measurements have been carried out four times, also in a larger interval of excitation frequencies than that in Fig 6A. The results are shown in Fig 6B, where vertical error bars represent one standard deviation of uncertainty, while the error on the imposed piezo frequency turns out to be negligible. Notice the similarity with the simulated results of Fig 6A. The same experiments has been repeated by two different methods: applying the mechanical strain along the horizontal axis and eliminating the backlash, with the fixation of the microbearing on the nail (same diameter). The resulting effect is that while the horizontal strain does not change the result in a significant way, the elimination of the backlash prevents the rotation; the propeller simply moves erratically in both directions without any regular rotation. At this point we have decided to apply the mechanical strain along an intermediate direction, so that we may consider the result as a combination of a vertical and a horizontal excitation; this has a positive effect in the presence of a backlash, but yields no rotation effect in the case where backlash is absent.
Nail head motion tracking
In order to understand the effect of the vibration imposed by the loudspeaker and the piezoelectric devices on the motion of the Notched Stick components, experiments performed at different excitation frequencies were recorded by means of the previously described highspeed camera. In particular, the displacement of the head of the nail both along the vertical (y) and the horizontal (x) axes was measured from collected movies by means of a tracking algorithm exploiting OpenCV [25] routines. More precisely, a contrast-based criterion was used to define the nail head and the position of its center of mass was recorded at each frame.
For the experiment performed adopting the piezo vibration source and a strain frequency of 156Hz, Fig 7 reports a portion of the resulting nail head trajectory fitted by a sinusoidal functions. Although the piezo apparatus should in principle transfer the vibration along a single axis, a displacement of the nail head along both the x and the y direction was measured. The Notched Stick However, it should be pointed out that the x component is probably due to the not perfectly rigid connection between the piezo and the Notched Stick devices and that it shows a much smaller amplitude than the y one, thereby not strongly affecting the motion. Fig 6A and 6B are probably due to the circumstance that, for technical reasons, between the two sets of measurements the apparatus had to be dismounted and the restored again (re-use of the antivibrating table for other purposes). The unavoidable aging of wood and polyurethane components, induced by the many operations carried out on the device, may also have played a minor role.
Materials and methods
In addition to the experiments, multibody dynamics simulations of the Notched Stick were performed within the framework of the Msc.Adams software [26]. This numerical simulation technique is a useful tool for the investigation and the design of mechanical systems comprising of several moving bodies, providing with a clear picture of their dynamic and kinematic response under different conditions and allowing to test any possible configuration [27].
In simulations the Notched Stick was reduced to the nail, the propeller and the microbearing, as shown in Fig 8. Each element was modeled as a rigid body with geometrical sizes and physical properties of the experimentally-tested prototype, described in the Materials section. A small cubical block with wood characteristics, mimicking the terminal portion of the stick, was also added to prevent the propeller to slide out of the nail during the motion.
The propeller and the bearing were rigidly joined while the tip of the nail was fixed to the ground reference frame through a cylindrical shaft.
The interactions among the Notched Stick components were modeled by adding a contact force, F c , to the equations of motion upon the occurrence of a collision (continuous contact model, [28,29]). Among the possible formulations of F c , the MSC.Adams hard-coded impact function [26] was exploited in this work, expressing this force as a combination of a non-linear spring in parallel with a damper where u and u ' are the relative displacement and velocity of the interacting bodies, k the spring generalised stiffness parameter and n the non-linear power exponent. c denotes instead the damping coefficient and, to properly represent the energy dissipation rate and prevent numerical instability, it varies gradually from 0 to a maximum value c max depending on the relative displacement of the interacting bodies, with c max applied when u � u � . Parameters c max and u � , as well as k and n are to be defined by the user and several strategies can be adopted for their assessment. Particularly, in this work, c max and u � were tuned so to get a match between the number of revolutions of the propeller measured in simulations and in experiments performed with the piezoelectric oscillator for a 156 Hz frequency.
An estimate to the scale of the stiffness parameter k was deduced instead from the Hertz theory of contact (assuming n = 3/2) [30]. Also friction at the contact location was accounted for by adopting the MSC.Adams hard-coded formulation of the Coulomb model [26], with the friction coefficient smoothly varying from the static, μ s , to the dynamic friction value, μ d , as a function of the tangential relative velocity of the colliding bodies [31].
The interaction between the nail and the bearing, both made of steel, was modeled assuming c max = 0.6 kg/s, u � = 0.01 mm, k = 1e 5 N/mm 3/2 , μ s = 0.74, μ d = 0.6, with the latter two coefficients corresponding to the steel-steel dry friction values suggested by literature [32]. For the contact between the propeller and the wood-like block, parameters c max = 5 kg/s, u � = 0.01 mm, k = 5e 3 N/mm 3/2 were imposed.
The Lagrangian equations of motion of each modeled body were solved using the Hilber-Hughes-Taylor (HHT, [33,34]) integrator with automatic step tuning and maximum numerical error of 10 −6 .
To simulate the motion of the Notched Stick under piezoelectric vibration, the displacement of the nail head was imposed (to the nail head centre) by the sinusoidal law mimicking the oscillation along the vertical axis of the nail that is induced in the experiments (see the Experimental results section) by the vibration transmitted from the stick. Amplitude, A, and frequency, ω, were deduced by fitting the above equation to the displacement measured from the high-speed video recordings of the 156 Hz experiment by tracking the motion of the nail head (see Fig 7). Also the motion given by the loudspeaker can in principle be simulated by applying simultaneously two sinusoidal displacements to the nail head, imposing the oscillation of the nail along both the vertical and the horizontal axes. However, since the vertical component has generally a predominant influence on the motion of the propeller, in this work numerical simulations focus on the simpler case of a single vertical oscillation.
For this nail motion condition, simulations lasting 8s were performed varying the excitation frequency and compared with experimental results. The model was then further exploited to test the effect of the backlash between the microbearing and the nail, Δr, by gradually modifying the microbearing radius, r, between the two limit conditions r = 0 (no microbearing) and r = r nail (no backlash).
Numerical results
Previously described numerical simulations and experimental tests performed with the piezoelectrical vibration source provided the revolution frequency of the Notched Stick propeller as a function of the tested oscillation frequencies. Fig 6A compares the results and shows a certain resemblance between the two data sets, both in terms of trend and average number of revolutions.
Additionally, simulations provided the angular displacement, θ, and the angular velocity, dθ/dt, of the propeller. The plot of these quantities is depicted in Fig 9 for two different excitation frequencies and some affinities with the phase portrait of the simple pendulum should be Phase portrait of the propeller. As in the case of the simple pendulum, when the propeller does not rotate, e.g. at 65 Hz (left), the diagram shows the trend of a closed curve. In contrast, the trend of the diagram at 156 Hz is that of an open curve (right), due to the revolution of the propeller; the final stage of the run appears rather noisy, probably owing to nail/propeller shocks and relative slip, that sometimes may suddenly enhance friction forces and angular speed variations. Unfortunately, the backlash between the nail and the propeller turns out to be necessary to the onset of rotations, but it also makes the dynamics of the system far form being trivial. https://doi.org/10.1371/journal.pone.0218666.g009 The Notched Stick pointed out. Indeed, when the propeller oscillates but does not rotate-e.g. at 65 Hz (Fig 9, left)-a closed displacement-velocity diagram (phase plot) is obtained. Instead, as the spinning starts-e.g. at 156 Hz (Fig 9, right)-an open phase diagram is derived.
Besides the effect of the excitation frequency, modelling was further used to analyse the influence of the backlash between the bearing and the nail radii, Δr, a variable that previously described experiments suggested to be crucial for the revolution of the propeller. Bearings with internal radius between 0.82 mm (bearing radius equal to the nail radius, Δr = 0, no backlash) and 1.25 mm were introduced in the Notched Stick model and each of them was tested at 126, 143, 156, 169 Hz. Primarily, simulations enforced the experimental observation that Δr = 0 prevents any rotational motion of the propeller. For all the tested frequencies, revolution was then shown to take place for Δr ranging between 0.09 and 0.28 mm.
Simulation outputs confirmed therefore that the Notched Stick requires a loose connection between the nail and the propeller to convert the transmitted vibration into spinning of the propeller. Although imperfect, the propeller of our experimental device is relatively well-balanced: its axis of symmetry passes through its center of mass. If no backlash were present, the center of mass of the propeller would almost coincide with the center of rotation of the propeller about the center of the nail; the forces applied to the propeller (friction and inertial forces in the reference frame where the nail head is at rest) would have no way to yield a significant moment on the propeller, that hardly could be forced to rotation. However, results highlighted that not every backlash ensures the revolution and provided a range of eligible Δr values for the investigated Notched Stick configuration.
It is finally worth noting that for the Δr = 0 case, simulations were performed by applying not only the motion of the nail along the vertical axis (defined by Eq 13), but also (i) a single sinusoidal oscillation of the nail along the horizontal axis and (ii) a vertical and a horizontal vibration simultaneously. Nevertheless, for all these cases no rotation of the propeller was observed, thus strengthening the conclusion that a backlash is necessary to develop such a kind of motion.
Discussion
As shown by the scientific literature on the Notched Stick discussed in the Introduction section, two main mental models have been developed to explain how the device works: (a) combination of harmonic motions and (b) hula hoop description. Moreover no experimental approach has been developed different from the mere manual or robotized use of the original device.
We stress that the term vibrot is used here in a broader sense, i.e. it is intended to mean generically a "device able to convert linear oscillatory motion into rotational motion", never mind the basic mechanism responsible for such a conversion, while in a more strict sense a vibrot is a device specifically designed to this purpose, in order to obtain: (1) a preferred onesided orientation of the rotational motion, and (2) a reasonable (although however low) efficiency in the conversion. Nevertheless the set of experiments, whose results are shown in Fig 6B, has allowed to confirm that the Notched Stick may at least partially satisfy the requirement (1) of the "strict" definition, since in most cases (80-90%) the rotation was counterclockwise (front view), in 5 to 10% cases clockwise and in less than 5% changed its sense during the experiment.
We have introduced a simpler experimental but well repeatable method which is based on the use of a common vibration generator (a loudspeaker or a piezoelectric oscillator). This different experimental approach probably simplifies the phenomenon but allows to focus our attention on the intrinsic mechanism of transformation of the original vibration into a rotation.
The device may certainly be considered as a pendulum, whose support is not completely fixed but may move in a well defined zone (delimited by the backlash size) along the vertical or horizontal direction.
In general a pendulum may acquire energy and overcome the upper dead point approaching a rotation condition and this holds true also for the Notched Stick. However, the possibility to vary periodically the position of the support along the two directions may explain the onset of rotational motion in terms of parametric and non-parametric resonance. This same possibility introduces the alternative description in terms of a horizontal hula-hoop. The classical hula-hoop rotates about a vertical link (commonly the human body or a part of it) and its movement can be described as resulting from the balance of the weight of the hula-hoop and the friction force due to the contact with the body. In this way the hula-hoop may rotate, the rotation generates the friction and the friction opposes to the weight; the motion has ben interpreted as a combination of parametric and non-parametric resonance, as discussed in a classical paper [34].
What happens if the hula-hoop rotates about a horizontal link (i.e., an arm)? Such a problem has been analysed in literature, even if the most interesting papers are in Japanese [35]; however this provides an equivalent and alternative description framework of the Notched Stick, as previously discussed. In this case the horizontal axis of the Notched Stick has the role of opposing to the gravity force and the effect of the rotation produces, through the friction, a component force along the nail, eventually resulting in a horizontal displacement.
Qualitatively, static friction helps the onset of rotational motion because (1) it makes possible the momentum trasfer from the nail to the propeller and (2) it yields no energy dissipation, since there is no sliding of the propeller on the nail surface. In contrast, dynamic friction may oppose the onset of rotational motion, due to the dissipative nature of the force. So, it is conceivable that the best performances can be obtained by some intermediate values (neither too large nor too small) of static and dynamic dry friction coefficients. The experimental investigation of the role played by friction would have involved the use of different materials for the nail and the propeller hole, but such analysis was beyond the scope of the present work.
Conclusions
Experimental and numerical results prove that the Notched Stick may act as a vibrot, a device able to convert an oscillatory motion (that of the stick) into a rotational one (that of the propeller). The effect turns out to be negligible, or does not occur at all, whenever the propeller is tightly connected to the stick nail and perfectly axisymmetrical with respect to the axis of the nail, so that the propeller center of mass lies on the symmetry axis of the nail. The small effects possibly observed can be probably attributed to friction forces. In contrast, the device succeeds in converting vibrations into rotations if the propeller center of mass is not aligned with the nail axis, a condition occurring when either the nail-propeller coupling is not tight or the propeller is not completely axisymmetrical relative to the nail axis. The propeller can be thought as a damped parametrically excited compound pendulum whose rotation may be induced by a process of parametric resonance for purely vertical oscillations of the nail, by ordinary resonance if the nail only oscillates horizontally or, finally, by a combination of both processes when nail oscillations take place in an intermediate direction (a conclusion that reasonably still holds in the more general case where a composition of two harmonic motions of the same frequency along orthogonal directions is imposed to the nail). Parametric resonance induces the onset of rotations also when the weight of the propeller can be regarded as negligible. The possible elliptical motion of the nail seems anyway unnecessary to determine the rotation of the propeller, since rotations are detected also in the case of a purely linear oscillation of the nail along any fixed direction. As an alternative description the device may be conceptualized as a horizontal axis hula-hoop (the propeller) moving around a horizontal body (the nail), but without any need of complex elliptic movements.
As a conclusion, although poorly efficient, the Notched Stick can be definitively regarded as a significant example of vibrot. | 11,634 | 2019-06-26T00:00:00.000 | [
"Engineering",
"Physics"
] |
Localized surface plasmon enhanced cathodoluminescence from Eu 3 +-doped phosphor near the nanoscaled silver particles
We elucidate that the luminescence from Eu 3+ -doped phosphor excited by the electron collision can be modified on location near the metallic nanoparticles. The Eu 3+ -doped phosphor was fabricated on the nanoscaled Ag particles ranging of 5 nm to 30 nm diameter. As a result of the cathodoluminescence measurements, the phosphor films on the Ag particles showed up to twofold more than that of an isolated phosphor film. Enhanced cathodoluminescence originated from the resonant coupling between the localized surface plasmon of Ag nanoparticles and radiating energy of the phosphor. Cathodoluminescent phosphor for high luminous display devices can be addressed by locating phosphor near the surface of metallic nanoparticles. ©2011 Optical Society of America OCIS codes: (250.5403) Plasmonics; (250.1500) Cathodoluminescence. References and links 1. S. M. Lee and K. C. Choi, “Enhanced emission from BaMgAl10O17:Eu 2+ by localized surface plasmon resonance of silver particles,” Opt. Express 18(12), 12144–12152 (2010). 2. J. R. Lakowicz, “Plasmonics in biology and plasmon-controlled fluorescence,” Plasmonics 1(1), 5–33 (2006). 3. E. Ozbay, “Plasmonics: merging photonics and electronics at nanoscale dimensions,” Science 311(5758), 189– 193 (2006). 4. B. Moine and G. Bizarri, “Rare-earth doped phosphors: oldies or goldies?” Mater. Sci. Eng. B 105(1-3), 2–7 (2003). 5. T. Hayakawa, K. Furuhashi, and M. Nogami, “Enhancement of D0FJ emissions of Eu 3+ ions in the vicinity of polymer-protected Au nanoparticles in solgel-derived B2O3SiO2 glass,” J. Phys. Chem. B 108(31), 11301– 11307 (2004). 6. R. Reisfeld, M. Pietraszkiewicz, T. Saraidarov, and V. Levchenko, “Luminescence intensification of lanthanide complexes by silver nanoparticles incorporated in sol-gel matrix,” J. Rare Earths 27(4), 544–549 (2009). 7. J. Zhu, “Enhanced fluorescence from Dy owing to surface plasmon resonance of Au colloid nanoparticles,” Mater. Lett. 59(11), 1413–1416 (2005). 8. X. Fang, H. Song, L. Xie, Q. Liu, H. Zhang, X. Bai, B. Dong, Y. Wang, and W. Han, “Origin of luminescence enhancement and quenching of europium complex in solution phase containing Ag nanoparticles,” J. Chem. Phys. 131(5), 054506 (2009). 9. Y. Wang, J. Zhou, and T. Wang, “Enhanced luminescence from europium complex owing to surface plasmon resonance of silver nanoparticles,” Mater. Lett. 62(12-13), 1937–1940 (2008). 10. K. Y. Yang, K. C. Choi, and C. W. Ahn, “Surface plasmon-enhanced spontaneous emission rate in an organic light-emitting device structure: cathode structure for plasmonic application,” Appl. Phys. Lett. 94(17), 173301 (2009). 11. K. Y. Yang, K. C. Choi, and C. W. Ahn, “Surface plasmon-enhanced energy transfer in an organic light-emitting device structure,” Opt. Express 17(14), 11495–11504 (2009). 12. W. A. Murray and W. L. Barnes, “Plasmonic materials,” Adv. Mater. (Deerfield Beach Fla.) 19(22), 3771–3782 (2007). 13. K. H. Cho, S. I. Ahn, S. M. Lee, C. S. Choi, and K. C. Choi, “Surface plasmonic controllable enhanced emission from the intrachain and interchain excitons of a conjugated polymer,” Appl. Phys. Lett. 97(19), 193306 (2010). 14. J. H. Kang, M. Nazarov, W. B. Im, J. Y. Kim, and D. Y. Jeon, “Characterization of nano-size YVO4:Eu and (Y,Gd)VO4:Eu phosphors by low voltage cathodoand photoluminescence,” J. Vac. Sci. Technol. B 23(2), 843– 848 (2005). 15. C. C. Wu, K. B. Chen, C. S. Lee, T. M. Chen, and B. M. Cheng, “Synthesis and VUV photoluminescence characterization of (Y, Gd)(V, P)O4:Eu 3+ as a potential red-emitting PDP phosphor,” Chem. Mater. 19(13), 3278– 3285 (2007). #145138 $15.00 USD Received 19 Apr 2011; revised 13 Jun 2011; accepted 14 Jun 2011; published 23 Jun 2011 (C) 2011 OSA 4 July 2011 / Vol. 19, No. 14 / OPTICS EXPRESS 13209 16. A. K. Levine and F. C. Palilla, “A new, highly efficient red-emitting cathodoluminescent phosphor (YVO4:Eu) for color television,” Appl. Phys. Lett. 5(6), 118–120 (1964). 17. T. Hayakawa, S. T. Selvan, and M. Nogami, “Field enhancement effect of small Ag particles on the fluorescence from Eu-doped SiO2 glass,” Appl. Phys. Lett. 74(11), 1513–1515 (1999). 18. V. Bulović, V. Khalfin, G. Gu, P. Burrows, D. Garbuzov, and S. Forrest, “Weak microcavity effects in organic light-emitting devices,” Phys. Rev. B 58(7), 3730–3740 (1998). 19. K. Matsubara, H. Tampo, H. Shibata, A. Yamada, P. Fons, K. Iwata, and S. Niki, “Band-gap modified Al-doped Zn1xMgxO transparent conducting films deposited by pulsed laser deposition,” Appl. Phys. Lett. 85(8), 1374– 1376 (2004). 20. T. Hayakawa, S. Tamil Selvan, and M. Nogami, “Enhanced fluorescence from Eu owing to surface plasma oscillation of silver particles in glass,” J. Non-Cryst. Solids 259(1-3), 16–22 (1999). 21. M. S. Elmanharawy, A. H. Eid, and A. A. Kader, “Spectra of europium-doped yttrium oxide and yttrium vanadate phosphors,” Czech. J. Phys. 28(10), 1164–1173 (1978). 22. N. Noginova, Y. Barnakov, H. Li, and M. A. Noginov, “Effect of metallic surface on electric dipole and magnetic dipole emission transitions in Eu doped polymeric film,” Opt. Express 17(13), 10767–10772 (2009). 23. B. J. Lawrie, R. F. Haglund, Jr., and R. Mu, “Enhancement of ZnO photoluminescence by localized and propagating surface plasmons,” Opt. Express 17(4), 2565–2572 (2009). 24. K. Okamoto, I. Niki, A. Shvartser, Y. Narukawa, T. Mukai, and A. Scherer, “Surface plasmon enhanced super bright InGaN light emitter,” Phys. Status Solidi C 2(7), 2841–2844 (2005). 25. K. Okamoto, I. Niki, A. Shvartser, Y. Narukawa, T. Mukai, and A. Scherer, “Surface-plasmon-enhanced light emitters based on InGaN quantum wells,” Nat. Mater. 3(9), 601–605 (2004).
Introduction
Localized surface plasmon resonance has attracted immense interest from many material science researchers of rare-earth ions such as europium (Eu), terbium (Tb), and dysprosium (Dy) [1].Although rare-earth ions construct a luminescent center in biological sensing, fluorescence imaging analysis, and display phosphors, the efficacy of their luminescence has not yet been addressed [2][3][4][5].The localized surface plasmon induced by a metallic nanostructure can enhance the intensity of the photoluminescence of rare-earth ions [1].Some studies have examined the enhanced photoluminescence of rare-earth ions by using the localized surface plasmon induced by metallic particles [6][7][8][9].Reisfeld et al. showed that the luminescence of Eu complexes can be increased when the electronic levels of the Eu complex interact with the radiation field of silver (Ag) nanoparticles [6].Zhu showed that the enhanced fluorescence of Dy 3+ ions is due to the localized surface plasmon of Au colloidal nanoparticles [7].In addition, Fang et al. [8] and Wang et al. [9] reported that the luminescence of Eu 3+ ions dispersed in a solution could be enhanced when Ag nanoparticles are mixed in the solution.However, most of the existing works are limited to the emission intensity of pure rare-earth ions in the solution phase.Moreover, studies on enhanced luminescence mainly involve the use of rare-earth ions under optical excitation of ultraviolet light.The results of those studies fail to show the challenges of utilizing rare-earth ions in commercial display devices [9][10][11].
We demonstrate how metal-induced plasmon can enhance the cathodoluminescence of a rare-earth ion doped phosphor system that is used in commercial display devices, such as a field emission display or a carbon nanotube backlight unit.We used Ag nanoparticles as the plasmon inducer and introduced a dielectric spacer to prevent luminescence quenching.Up to twofold enhancement factor was obtained when the Ag nanoparticles were evaporated to a thickness of 3.5 nm and spaced 20 nm from the light emitter.In addition, the distance dependency on the plasmon-enhanced luminescence was investigated by varying the spacer thickness in a range of 0 nm to 80 nm.The fact that the emission level is intensified as a result of the excitation caused by a collision with an electron beam is an important finding since the first report in self-emissive display technology.
Experimental
We used the test sample for the cathodoluminescence (CL) measurement as shown in Fig. 1.The multilayer test sample consists of a phosphor layer (top), a dielectric spacer, Ag nanoparticles, and a glass substrate coated with indium tin oxide (ITO, bottom).The ITOcoated glass was selected for the adhesion of Ag particles and the front structure of field emission displays.To induce localized surface plasmon resonance near the phosphor, we fabricated the Ag nanoparticles (up to several dozen nanometers in size) below the phosphor layer by thermal evaporation method.The Ag nanoparticles were evaporated to a deposition thickness ranging from 0 nm to 3.5 nm at a constant rate of 0.1 Å/s.The Ag nanoparticles were fabricated randomly as a way of ignoring the resonant interaction between the metallic particles.As a dielectric spacer, magnesium oxide (MgO) was inserted between the phosphor layer and the Ag particles.In this work, the dielectric spacer physically separates the light matter and the Ag particles.We deposited the MgO spacer by means of e-beam evaporation.The MgO spacer was evaporated to a deposition thickness ranging from 0 nm to 80 nm at a constant rate of 0.3 Å/s.The phosphor material used in this work is YVO 4 :Eu 3+ ; it was manufactured in the form of a transparent thin film.A 100 nm thick layer of YVO 4 :Eu 3+ was deposited on the spacer, Ag particles, and ITO-coated glass by means of a RF magnetron sputter at 150°C.The thickness measurements presented in this paper were obtained by using a quartz crystal oscillator as a deposition monitor.The surface morphology of the evaporated Ag nanoparticles was determined via scanning electron microscopy (SEM) with a FEI (Netherlands) Sirion microscope.In addition, the influence of Ag nanoparticle on the surface roughness of thin film phosphor was determined with the aid of Atomic Force Microscope with a NanoMan AFM (Veeco, USA).The CL measurements were taken by a SEM with a Gatan MonoCL3 system.The phosphor samples were excited by an electron beam from an electron gun under an acceleration voltage of 5 kV and an electron beam current of 10.1 µA.The localized surface plasmon resonance can be resulted in extinct of incident electromagnetic wave.Thus, the extinction spectra enabled us to study the plasmonic resonance caused by metallic nanoparticles [12].An extinction spectrum of the Ag particles was obtained with a Shimadzu spectrophotometer (UV-2550, Japan).The surface status of randomly fabricated Ag nanoparticles in the present work was confirmed through the SEM images shown in Fig. 2. The Ag particles were randomly deposited on the substrate to a thickness of 0.5 nm to 3.5 nm after thermal evaporation at a constant deposition rate.At a deposition thickness of 0.5 nm, the fabricated Ag particles had a near-spherical shape with an average of 5 nm diameter.When the deposition thickness reached 3.5 nm, the diameter of the Ag particles expanded to an average of 30 nm.The distribution of the Ag particles also became dense, and the voids between the particles disappeared.At this range of deposition thickness, the Ag particles failed to form a continuous film; instead, they formed a cluster of isolated islands [13].Figure 3 shows the extinction spectra of randomly fabricated Ag nanoparticles in relation to the evaporation thickness.The extinction spectra were measured under the fully fabricated sample structure: the Ag particles were surrounded by a dielectric spacer (MgO) and ITOcoated glass.The localized surface plasmon resonance can be modified according to the dielectric materials around the metal particles.The extinction peak of the Ag nanoparticles inserted between the MgO and ITO layers moved from 470 nm to 560 nm as the deposition thickness of the Ag nanoparticles increased.The extinction spectrum of the Ag particles was red-shifted and the half-bandwidth became broad; this behavior is remarkably in line with the increase in size and distribution of the Ag particles.Moreover, the extinction spectra that show an increase in the thickness of the Ag particle deposition deeply overlap the emission area of the Eu 3+ -doped phosphor.
Results and discussion: CL spectra of Eu 3+
-doped phosphor Figure 4 depicts the CL spectra of phosphor films measured under the excitation of a collision with an energetic electron beam and wavelength-dependent enhancement factors.Among a few of radiation peaks of the Eu 3+ ions, we focused on two radiation peaks: a magnetic dipole transition at 590 nm and an electric dipole transition at 620 nm [14].The Eu 3+ ion is radiated by the transition process of 4f-4f orbital electrons [15,16].Since 4f electrons are electrically shielded by electrons on the 5s and 5p orbit, the luminescent spectrum of Eu 3+ ions is not dominantly governed by the host matrix.The CL intensity in Fig. 4(a) was plotted in relation to the thickness of the Ag particle deposition at a distance of 20 nm from the dielectric spacer; the reference for the plotting was isolated from the Ag particles.The emission intensity of phosphor deposited on the Ag particles increased as the thickness of the Ag particle deposition increased.The increased part of the emission wavelength was mainly observed at around 620 nm.Furthermore, although the emission intensity showed a significant difference on the existence of Ag particles, the spectral position of the radiation peaks remained constant.This behavior implies that the emission wavelength of phosphor was not modified by the insertion of Ag particles.Figure 4(b) shows that the enhancement at 620 nm by electric dipole transition is higher than that at 590 nm by magnetic dipole transition.At 3.5 nm of Ag evaporation, the difference in wavelength-dependent enhancement gets larger.To explore the possibility that this enhancement originates from the localized surface plasmon resonance of Ag nanoparticles, we calculated the asymmetric ratio by dividing the integrated intensity around 620 nm by the integrated intensity around 590 nm.The asymmetric ratio of Eu 3+ ions that is reflected by the increase or decrease in luminescence can be changed by coupling it with the plasmon resonance [17].Therefore, this can be used as good evidence for detecting the localized plasmon medicated luminescence.As shown in the inset figure of Fig. 4(b), the asymmetric ratio of up to 1.5 nm of Ag nanoparticles is the same as the reference, while the asymmetric ratio increased after 2.5 nm of Ag nanoparticles that the CL intensity increased significantly.Therefore, the enhanced cathodoluminescence is due to the increased asymmetric ratio from the Ag nanoparticles. Figure 5 shows the enhancement factor of the emission intensity of Eu 3+ -doped phosphor films as a function of the thicknesses of the Ag nanoparticle deposition and the MgO dielectric spacer.The enhancement factor was obtained by dividing the integrated emission intensity of a sample with Ag nanoparticles by the integrated emission intensity of a reference sample without Ag nanoparticles and a dielectric spacer.Figure 5(a) shows how the thickness of the Ag nanoparticle deposition affects the enhancement factor of the emission intensity when the thickness of the dielectric spacer is restricted to 20 nm.The enhancement factor increases as the thickness of the Ag nanoparticle deposition increases.At a deposition thickness of 3.5 nm, the enhancement factor increased up to 1.92 times.Figure 5(b) shows how the dielectric spacer influences the emission enhancement when the thickness of the Ag nanoparticle deposition is 3.5 nm.In other words, the enhancement factor in Fig. 5(b) depends on the distance between the light emitter and the Ag nanoparticles.Initially the enhancement factor of the sample without Ag nanoparticles (square) is constant, regardless of the MgO thickness.The thickness of the dielectric spacer does not change the emission intensity of the Eu 3+ -doped phosphor.In contrast, the enhancement factor of the sample with Ag nanoparticles (circle) reaches its maximum level at an MgO thickness of 20 nm; after that, the enhancement factor decreases as the MgO thickness increases.The enhancement effect by the localized surface plasmon became weaker for MgO thicknesses of up to 80 nm.Note that there was no increase in the emission of the sample with Ag nanoparticles when the particles and light emitter were not separated by the dielectric spacer.That is the reason the luminescence can be quenched in the case of direct contact between the Ag nanoparticles and the light emitter.The enhancement factor calculated in Fig. 5 can potentially change; owing to the Ag nanoparticles, the potential change is more likely to be caused by the surface morphology than the localized surface plasmon.To observe the surface morphology of the sample structure, we used an atomic force microscope (AFM) to measure the roughness of a three-dimensional cross section.The roughness of a 1 µm 2 area is calculated by the root mean square.Figure 6 shows that the phosphor films used in this work have a rough surface.Because of the difficulty of distinguishing surface morphologies in three-dimensional AFM images, we present the root mean square values of the roughness in Table 1.The reference sample without the Ag nanoparticles or the dielectric spacer had a roughness of about 4.6 nm due to the phosphor deposition.When the thickness of the MgO and Ag nanoparticle deposition is increased, the differences of roughness are just 3.15 nm and 1.15 nm, respectively.The difference in surface roughness can be negligible in relation to the total thickness of the prepared sample.To understand the local field enhancement of electromagnetic near-fields due to Ag nanoparticles, we carried out Finite Deference Time Domain (FDTD) calculation.Figure 7(a) shows a schematic diagram for numerical analysis that uses the FDTD [1].For an approximate, Eu 3+ ion doped phosphor was modeled as a 620 nm emitting point dipole.Also, an Ag nanoparticle with a diameter of 30 nm based on SEM images shown in Fig. 2, which is assumed to have a spherical shape, was placed 2 nm from the point dipole.The Ag nanoparticle was located on the border between MgO and ITO layers.The point dipole source was perpendicularly oriented to the metal surface along the Z axis. Figure 7(b) shows the electric field distribution of only the isolated dipole source without any Ag nanoparticles, while Fig. 7(c) shows the electric field distribution in the presence of an Ag nanoparticle.Finally, Fig. 7(d) shows the ratios of the enhanced or quenched electric field intensity in the X-Z plane around the Ag nanoparticle.Here, the local field enhancement factors were estimated by dividing the field intensity in the presence of an Ag nanoparticle by the field intensity of an isolated phosphor.All of the figures were represented in a logarithmic scale (base 10) for clarity.As shown in Fig. 7(d), the near-field distribution of the emitting dipole in the presence of Ag nanoparticles increased drastically.It is noteworthy that the enhancement occurred when the Ag nanoparticle was embedded in MgO and ITO.This result means that the Ag nanoparticle surrounded by a higher refractive index of MgO (n = 1.74) and ITO (n = 1.8) than air (n = 1) had a resonance with a red emission centered at 620 nm [18,19].
Therefore, we believe that the Ag nanoparticle used in the present experiment can have a plasmon resonance with a Eu 3+ -doped phosphor, compared with the theoretical calculation.In this work, we revealed that Ag nanoparticles can modify the CL of Eu 3+ -doped phosphor in close proximity to the surface of the Ag nanoparticles.When the phosphor is deposited on Ag nanoparticles surrounded by a 20 nm of MgO dielectric spacer, the emission intensity of the sample with a 3.5 nm of Ag nanoparticles is 1.92 times larger than that of the reference sample.We believe that this enhancement originates from the localized surface plasmon resonance caused by the Ag nanoparticles.The extinction spectra show that the plasmon resonance peak of the Ag nanoparticles is moved toward a long wavelength.This behavior is due to an increase in the size of the Ag nanoparticles and a decrease in the voids among the Ag nanoparticles.At an Ag nanoparticle deposition thickness of 3.5 nm, the plasmon band of Ag nanoparticles deeply overlaps with the red emission of the Eu 3+ -doped phosphor, which means there is strong resonant coupling between the Ag nanoparticles and the excited species under irradiation from the energetic electron beam.When Eu 3+ ions are irradiated by the electron beam, the spontaneous emission of Eu 3+ ions is mainly composed of 590 nm peak ( 5 D 0 7 F 1 ) and 620 nm peak ( 5 D 0 7 F 2 ) transitions.The radiation peaks at 590 nm and 620 nm contribute to the orange and red emission, respectively [5,17,20,21].The CL of Eu 3+ ions depends on the magnitude of the transition probability of luminescent ions.Although there are several types of transitions, such as an electric quadrupole and a magnetic dipole, the electric dipole transition is the most dominant.The electric dipole transition is hypersensitive to the polarizability of the ligand and to the site asymmetry caused by the charge distribution.In addition, it can be modified by incorporating an electromagnetic wave around the atom [22].The 590 nm peak ( 5 D 0 7 F 1 ) and 620 nm peak ( 5 D 0 7 F 2 ) of the Eu 3+ ions are governed by the magnetic dipole and electric dipole transitions, respectively [1,5].Modifying the probability of the electric dipole transition at 620 nm is a novel way to enhance the red emission intensity of Eu 3+ -doped phosphor.Figure 4 shows that the enhancement at 620 nm is larger than the enhancement at 590 nm, a fact that supports our belief.The localized surface plasmon caused by the Ag nanoparticles induces the concentration and enhancement of the local electric field around the metal nanoparticles, which increases the polarizability of the ligand and the site asymmetry [23].Figure 4(b) provides the evidence that the asymmetric ratios of samples with Ag nanoparticles increased compared to the reference.Therefore, the enhanced emission of Eu 3+ -doped phosphor can be attributed to the resonant coupling between the localized surface plasmon and the electric dipole transition and also to the site asymmetry caused by the local field enhancement.Remarkably, the CL intensity of the samples with Ag nanoparticles increases as the thickness of the Ag nanoparticle deposition increases.It is also strange that this enhancement only occurs when a dielectric spacer is inserted between the light emitter and the Ag nanoparticles.If the light emitter is in direct contact with the Ag nanoparticles, the luminescence of the light emitter can be quenched.To prevent luminescence quenching, we deposited an MgO dielectric spacer onto the Ag nanoparticles [24,25].In our results, the CL intensity of the Eu 3+ -doped phosphor reaches a maximum when the MgO thickness is 20 nm and then decreases when the MgO thickness exceeds 20 nm.This behavior explains why the localized surface plasmon of the Ag nanoparticles exerts an influence within a few dozen nanometers and why the resonant effect decays exponentially [24,25].From these results, we predict that the surface plasmon enhanced cathodoluminescence can be useful for a display device with the high luminous efficacy.In addition, the surface plasmon mediated the Eu 3+ -doped phosphor has the advantage in sight of color purity.Since the radiation at 590 nm shows the orange emission, relatively strong emission intensity at 620 nm can contribute the pure red emission.
Conclusion
In conclusion, this paper is the first to report that the red emission from Eu 3+ -doped phosphor under the pumping of an electron collision can be enhanced by coupling the localized surface plasmon resonance with the radiation energy of CL phosphor.In addition, the enhancement only occurs within a few dozen nanometers from the surface of the Ag nanoparticles, depending on the thickness of the dielectric spacer.It is greatly significant that the CL intensity can be enhanced by the localized surface plasmon caused by the metallic particles.Further work is needed to implement high luminous display devices with metallic particles, such as field emission display or a carbon nanotube backlight unit.
Fig. 1 .
Fig. 1.Graphical representation of the multilayer test sample: the phosphor layer, the dielectric spacer (MgO), the Ag nanoparticles, and the ITO-coated glass substrate.
Fig. 4 .
Fig. 4. (a) CL spectra taken at a dielectric spacer distance of 20 nm between the phosphor layer and the Ag nanoparticles for the following deposition thicknesses: 0 nm, 0.5 nm, 1.5 nm, 2.5 nm, and 3.5 nm, (b) partially integrated enhancement factors at magnetic and electric dipole transitions.(The inset figure shows asymmetric ratios, which are calculated by 02 01
Fig. 5 .
Fig. 5.The enhancement factors of the integrated CL intensity in relation to (a) the thickness of the evaporated Ag and (b) the thickness of the dielectric spacer; and comparison of this ratio with that of the reference sample without Ag nanoparticles.
Fig. 6 .
Fig. 6.The three-dimensional AFM images of the surface roughness of the fully fabricated samples for the following thicknesses of Ag nanoparticles deposition: (a) 0 nm, (b) 0.5 nm, (c) 1.5 nm, (d) 2.5 nm, and (e) 3.5 nm, all samples have an MgO spacer of 20 nm.The measured area is 1 µm 2 .
Fig. 7 .
Fig. 7. Numerical analysis of the localized field enhancement on the resonance of an Ag nanoparticle and the emitting dipole under the sample structure, (a) simulation configuration for the FDTD calculation and the near-field distribution around this, (b) an isolated 620 nmradiating dipole, (c) the diameter of 30 nm of an Ag nanoparticle placed 2 nm from the radiating dipole, and (d) the enhanced/quenched electric field intensity.
Table 1 . The Calculated Surface Roughness from AFM Measurements of the Phosphor Film as the Following Thicknesses of Ag Nanoparticles and an MgO Deposition*
*The roughness values are derived from the root mean square. | 5,784.8 | 2011-07-04T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing
The increasing technology of high-resolution image airborne sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amountds of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The actual tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms simple classifiers used for the combination and some combined strategies, including a scheme based on the fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm.
Introduction
Nowadays the increasing technology of airborne sensors with their capabilities for capturing images, including those on board the new generations of Unmanned Aerial Vehicles, demands solutions for different image-based applications. Natural spectral signature classification is one of such applications because of the high image spatial resolution. The areas where the identification of spectral signatures are suitable include agricultural crop ordination, forest areas determination, urban identification and damage evaluation in catastrophes or dynamic path planning during rescue missions or intervention services also in catastrophes (fires, floods, etc.), among others. This justifies the choice of the images with different spectral signatures as the data where the proposed approach is to be applied, providing an application for this kind of sensors.
All classification problems need the selection of features to be classified and their associated attributes or properties, where a feature and its attributes describe a pattern. The behaviour of different features has been studied in texture classifications [1][2][3]. There are two categories depending on the nature of the features used: pixel-based [4][5][6] and region-based [2,[7][8][9][10]. A pixel-based classification tries to classify each pixel as belonging to one of the clusters. The region-based identifies patterns of textures within the image and describes each pattern by applying filtering (laws masks, Gabor filters, wavelets, etc.), it is assumed that each texture displays different levels of energy allowing its identification at different scales. The aerial images used in our experiments do not display texture patterns. This implies that textured regions cannot be identified. In this paper we focus on the pixelbased category. Taking into account that we are classifying multi-spectral textured images, we use as attributes the three visible spectral Red-Green-Blue components, i.e., the RGB colour mapping. The RGB map performs better than other colour representations [11]; we have verified this assertion in our experiments, justifying its choice.
An important issue reported in the literature is that the combination of classifiers performs better than simple classifiers [1,[12][13][14][15][16]. Particularly, the studies in [17] and [18] report the advantages of using combined classifiers against simple ones. This is because each classifier produces errors on a different region of the input pattern space [19].
Nevertheless, the main problem is: what strategy to choose for combining individual classifiers? This is still an open issue. Indeed in [13] it is stated that the same method can work appropriately in one application and produce poor results in another. Hence, our goal is to find a combined strategy that works conveniently for classifying spectral signatures in images. In [15] and [20] a revision of different approaches is reported including the way in which the classifiers are combined. Some important conclusions are: 1) if only labels are available, a majority vote should be suitable; 2) if continuous outputs like posterior probabilities are supplied, an average or some other linear combinations are suggested; 3) if the classifier outputs are interpreted as fuzzy membership values, fuzzy approaches, such as aggregation operators, could be used; 4) also it is possible to train the output classifier separately using the outputs of the input classifiers as new patterns, where a hierarchical approach can be used [1].
We propose a new approach which combines two individual classifiers: the probabilistic parametric Bayesian (BP) approach [21] and the fuzzy clustering (FC) [21,22]. The following two phases are involved during any classification process: training and decision. Really, the combination of the outputs provided by the two individual classifiers is carried out during the decision phase, as we will explain later. Given a set of training data, scattered through the tri-dimensional RGB data space and assuming known the number of clusters and the distribution of the samples into the clusters, both BP and FC individual classifiers estimate their associated parameters. Based on these estimated parameters, during the decision phase, each individual classifier provides for each pixel to be classified, a support of belonging to a cluster, BP provides probabilities and FC membership degrees, i.e., continuous outputs.
Because the number of classes is known, we build a network of nodes net j for each class w j , where each node i in the net j is identified as a pixel location i ≡ (x, y) in the image which is to be classified. Each node i is initialized in the net j with the output probability, provided by BP, that the node belongs to the class w j . This is the initial state value for the node i in the net j . Each state is later iteratively updated through the Deterministic Simulated Annealing (DSA) optimization strategy taking into account the previous states and two types of external influences exerted by other nodes on its neighbourhood. The external influences are mapped as consistencies under two terms: regularization and contextual. These terms are clique potentials of an underlying Markov Random Field model [23] and they both involve a kind of human perception. Indeed, the tri-dimensional scenes are captured by the imaging sensor and mapped in the bi-dimensional space, although the third dimension is lost under this mapping, the spatial grouping of the regions is preserved, and they are visually perceived grouped together like in the real scene.
The above allows the application of the Gestalt principles of psychology [24,25], specifically: similarity, proximity and connectedness. The similarity principle states that similar pixels tend to be grouped together. The proximity principle states that pixels near to one another tend to be grouped together. The connectedness states that the pixels belonging to the same region are spatially connected. The proximity and connectedness principles justify the choice of the neighbourhood for defining the regularization and contextual terms and the similarity establishes the analogies in the supports received by the pixels in the neighbourhood coming from the individual classifiers. From the point of view of the combination of classifiers the most relevant term is the regularization one. This is because it compares the supports provided by the individual classifier FC as membership degrees and the states of the nodes in the networks, which, as aforementioned, initially are the probabilities supplied by the individual classifier BP as supports. Therefore, this is the term where the combination of classifiers is really carried out making an important contribution of this paper.
The choice of BP and FC as the simple classifiers for the combination is based on their well tested performance in the literature and also in the possibility of combining continuous outputs during the decision phase under a mechanism different from the classical one used in [15]. Nevertheless, different classifiers providing continuous outputs or some others where this can be obtained could be used. As mentioned before, we have focused the combination on the decision phase; this implies that other strategies that apply the combination based on the training one are out of the scope of this paper. One of them is proposed in [26], which has been used in various classification problems. In this model, a selector makes use of a separate classifier, which determines the participation of the experts in the final decision for an input pattern. This architecture has been proposed in the neural network context. The experts are neural networks, which are trained so that each network is responsible for a part of the feature space. The selector uses the output of another neural network called the gating network [15].
The input of the gating network is the pattern to be classified and the output is a set of outputs determining the competences for each expert. These competences are used together during the decision with the classifier outputs provided by the experts. Under the above considerations we justify the choice of BP and FC as the base classifiers for the proposed combined strategy.
We have designed similar combined strategies. The first one is based on the fuzzy cognitive maps (FCM) framework [27] and the second in the analog Hopfield neural network (HNN) paradigm [28], where in the latter an energy minimization approach is also carried out. The best performance achieved, considering both strategies, is about an 85% success. After additional experiments with the HNN, we have verified that this is because the energy falls some times in local minima that are not global optima. This behaviour of HNN is reported in [29]. The DSA is also an energy optimization approach with the advantage that it can avoid local minima. Indeed, according to [23] and reproduced in [29], when the temperature involved in the simulated annealing process satisfies some constraints (explained in the section 2.2) the system converges to the minimum global energy which is controlled by the annealing scheduling instead of the nonlinear first-order differential equation used in HNN. This is the main difference of the proposed DSA technique with respect to the HNN approach. The FCM does not work with energy minimization, but because it does not improve the results of HNN, we think that it is unable to solve this problem. Hence, we exploit the capability of the DSA for avoiding local minima, making the main contribution of this paper. The DSA outperforms the FCM and HNN combined strategies, also the classical ones and the simple classifiers.
The paper is organized as follows. In Section 2 we give details about the proposed combined classifier, describing the training and decision phases, specially the last one where the DSA mechanism is involved. In Section 3 we give details about the performance of the proposed strategy applied to natural images displaying different spectral signatures. Finally, the conclusions are presented in Section 4.
Design of the Classifier
The system works in two phases: training and decision. As mentioned before, we have available a set of scattering patterns for training, partitioned into a known number of classes, c. With such purpose, the training patterns are supplied to the BP and FC classifiers for computing their parameters. These parameters are later recovered during the decision phase for making decisions about the new incoming samples, which are to be classified.
Training Phase
During the training phase, we start with the observation of a set X of n training samples, i.e., where d is the data dimensionality, which is set to 3 because the samples represent the R,G and B spectral components of each pixel. Each sample is to be assigned to a given class w j , where the number of possible classes is c, i.e., j = 1, 2,…,c.
a) Fuzzy Clustering (FC)
This process receives the input training patterns and computes for each i X x at the iteration t its membership grade j i and updates the class centres, as follows [20,22]: x v is the squared Euclidean distance between x i and v j and equivalently 2 ir d between x i and v r . The number m is called the exponent weight [22,30]. The stopping criterion of the iteration process is achieved when or a number t max of iterations is reached, set to 50 in our experiments; has been fixed to 0.01 after experimentation. Once the fuzzy clustering process is carried out, each class w j has associated its centre v j .
b) Bayesian Parametric (BP) estimation
Assuming known the distribution (Gaussian) for each class w j , the probability density function is expressed as follows: where the parameters to be estimated are the mean m j and the covariance C j , both for each class w j with n j samples. They are estimated through maximum likelihood as given by equation (3): where T denotes transpose. The parameters v j , m j and C j are stored to be recovered during the next decision phase.
Decision Phase
Given a new sample x i , the problem is to decide which the cluster it belongs is. We make the decision based on the final state values after the DSA optimization process. As mentioned before, the DSA is an energy optimization based approach with the advantage that it can avoid local minima. Indeed, in accordance with [23] and reproduced in [29], when the temperature involved in the simulated annealing process satisfies some constraints, explained below, the system converges to the minimum global energy which is controlled by the annealing. The minimization is iteratively achieved by modifying the state of each node through the external influences exerted by other nodes and its own state on the previous iteration.
As mentioned during the introduction, for each cluster w j , we build a network of nodes, net j . Each node i in the net j is associated to the pixel location i ≡ (x, y)in the image, which is to be classified; the node i in the net j is initialized with the probability where j ik s is the symmetric weight interconnecting two nodes i and k in the net j and can be positive or negative ranging in [-1,+1]; j k p is the state of the neighbouring node k in the net j . Each j ik s determines the influence that the node k exerts on i trying to modify the state j i p . According to [21] the self-feedback weights must be null (i.e., 0 j ii s ). The DSA approach tries to achieve the most network stable configuration based on the energy minimization. From equation (4) The term j ik s is a combination of two coefficients representing the mutual influence exerted by the k neighbours over i, namely: a) a regularization coefficient which computes the consistency between the states of the nodes and the membership degrees provided by FC in a given neighbourhood for each net j ; b) a contextual coefficient which computes the consistency between the class labels obtained after a previous classification phase. Both consistencies are based on the similarity Gestalt's principle [24,25], as explained in the introduction. The neighbourhood is defined as the m-connected spatial region, m i N , where m is set to 8 in this paper and allows the implementation of the proximity and connectedness Gestalt's principles [24,25], also explained in the introduction The regularization coefficient is computed at the iteration t according to the equation (5): where j k is the membership degree, supplied by FC, that a node (pixel) k with attributes x k belongs to the class w j , computed through the equation (1). These values are also mapped linearly to range in [−1,+1] instead of [0,+1]. From (5) we can see that ( ) j ik r t ranges in [−1,+1] where the lower/higher limit means minimum/maximum influence respectively.
The contextual coefficient at the iteration t is computed taking into account the class labels l i and l j as follows, where values of 1 and +1 mean negative and positive influence respectively: Labels l i and l k are obtained as follows: given the node i, at each iteration t, we know its state at each net j as given by the next equation (8), initially through the supports provided by BP; we determine that the node i belongs to the cluster w j if j , so we set l i to the j value which identifies the cluster, j = 1,..., c. The label l k is set similarly. Thus, this coefficient is independent of the net j , because it is the same for all networks. Both coefficients are combined as the averaged sum, taking into account the signs: Note that ( ) ik c t after a previous decision phase.
The simulated annealing process was originally developed in [31,32] under a stochastic approach. In this paper we have implemented the deterministic one described in [21,33] because, as reported here, the stochastic is slow due to its discrete nature as compared to the analogue nature of the deterministic. Following the notation in [21], let ( ) be the force exerted on node i by the other nodes Where, as always, t represents the iteration index. The fraction ( , ) f depends upon ( ) j i u t and the temperature T at the iteration t.
The equation (8) f . This modification represents the contribution of the self-support from node i to its updating process. This implies that the updated value for each node i is obtained by taking into account its own previous state value and also the previous state values and membership degrees of its neighbours.
The introduction of the self support tries to minimize the impact of an excessive neighbouring influence. Hence, the updating process tries to achieve a trade-off between its own influence and the influence exerted by the nodes j by averaging both values. One can see from equation (7) that if a node i is surrounded by nodes with similar state values and labels, should be high. This implies that the ( ) j i p t value should be reinforced through equation (8) and the energy given by equation (4) is minimum and vice versa. Moreover, at high T, the value of ( , ) f is lower for a given value of the forces ( ) j i u t . Details about the behaviour of T are given in [21]. We have verified that the fraction ( ) ( ) [23,30,33], the following annealing schedule suffices to obtain a global minimum: T(t) = T 0 /log(t+1), with T 0 being a sufficiently high initial temperature. T 0 is computed as follows [34]: 1) we select four images to be classified, computing the energy in (4) for each image after the initialization of the networks; 2) we choose an initial temperature that permits about 80% of all transitions to be accepted (i.e., transitions that decrease the energy function), and the temperature value is changed until this percentage is achieved; 3 order of magnitude as that reported in [33]). We have also verified that a value of t max = 200 suffices, although the expected condition T(t) = 0, t → +∞ in the original algorithm is not fully fulfilled. The assertion that it suffices is based on the fact that this limit was never reached in our experiments as shown later in the section 3, hence this value does not affect the results. The DSA process is synthesized as follows The decision about the classification of a node i with attributes x i as belonging to the class w j is made as follows:
Comparative Analysis and Performance Evaluation
To assess the validity and performance of the proposed approach we describe the tests carried out according to both processes: training and classification. First, we give details about the setting of some free parameters involved in the proposed method.
Setting Free Parameters
We have used several data sets for setting the free parameters; these are: 1) nine data sets from the Machine Learning Repository [35]: (bupa, cloud, glass, imageSegm, iris, magi4, thyroid, pimaIndians and wine); 2) three synthetic data sets manually generated with different numbers of classes and 3) four data sets coming from outdoor natural images, also with different numbers of classes. The use of these data, some of them different from the images with different spectral signatures, is justified under the idea that the values of the parameters to be set must have so much general validity as it is possible.
a) Parameters involved in the FC training phase
They are the exponential weight m in equation (1) and the convergence parameters ε and t max used for its convergence. The number of classes and the distribution of the patterns on the clusters are assumed to be known. We apply the following cross-validation procedure [21]. We randomly split each data set into two parts. The first (90% of the patterns) is used as the training set. The other set (validation set) is used to estimate the global classification error based on the single FC classifier. We set m = 2.0 (which is a usual value) and vary from 0.01 to 0.1 in steps of 0.015 and estimate the cluster centres and membership degrees for each training set. Then, we compute the error rate for each validation set. The maximum error was obtained with ε = 0.1 for 10 iterations and the minimum with ε = 0.01 and 47 iterations. Fixed those values, we vary m from 1.1 to 4.0 in steps of 0.1 and estimate once again the cluster centres and the membership degrees with the training set. Once again the validation sets are used for computing the error rates, the minimum error value is obtained for m = 2.0. The settings are finally fixed to m = 2.0, ε = 0.01 and t max = 50 (expanding the limit of 47).
b) DSA convergence
The ε used for accelerating the convergence in the DSA optimization approach is set to 0.01 by using the validation set for the four data sets coming from the outdoor natural images mentioned above. Verifying, that t max = 20 suffices.
Training Phase
We have available a set of 36 digital aerial images acquired during May in 2006 from the Abadin region located at Lugo (Spain). They are images in the visible range of the spectra, i.e., red-green-blue, 512 512 pixels in size. The images were taken during different days from an area with several natural spectral signatures. We select randomly 12 images from the set of 36 available. Each image is down sampled by two, eliminating a row and column of every two; so, the number of training samples provided by each image is the number of pixels. The total number of training samples is n = 12 256 256 = 786,432.
We have considered that the images have four clusters, i.e., c = 4. Table 1 displays the number of patterns used for training and the cluster centres estimated by the individual classifiers, which are v i for FC and m i for BP, equations (1) and (3) respectively.
Decision Phase and Comparative Analysis
The remaining 24 images from the set of 36 are used as images for testing. Four sets, S0, S1 S2 and S3 of six images each, are processed during the test according to the strategy described below. The images assigned to each set are randomly selected from the 24 images available.
a) Design of a test strategy
In order to assess the validity and performance of the proposed approach we have designed a test strategy with two purposes: 1) to verify the performance of our approach as compared against some existing strategies (simple and combined); 2) to study the behaviour of the method as the training (i.e., the learning) increases.
Our proposed combined DSA (DS) method is compared against the base classifiers used for the combination (BP and FC). It is also compared against the following classical combiners that apply the decision as described immediately after [15,20]. Consider the pixel i to be classified. BP and FC provide the probability j i p and membership degree j i respectively, that the pixel i belongs to the class w j . After applying a rule, a new support j i s is obtained for that pixel of belonging to w j as follows: a) Mean rule (ME) . These rules have been studied in terms of reliability [36]. Yager [37] proposed a multi-criteria decision making approach based on fuzzy sets aggregation. It follows the general rule and the scheme of the combiners described in [21]. So, DS is also compared against the fuzzy aggregation (FA) where the final support that the pixel i belongs to the class w j is given by the following aggregation rule: The parameter a has been fixed to 4 by applying a cross-validation procedure as the described in section 3.1a). Given the supports, according to each rule, the decision about the pixel i is made as follows: Finally, and what it is more important, DS is compared against the optimization strategy based on the Fuzzy cognitive Maps (FM) [27] and the Hopfield neural Network (HN) [28] paradigms. Both are based on the same network topology like the used in this paper and compute the regularization and contextual coefficients similarly to the proposed in this paper through the equations (5) and (6), but using the membership degrees provided by FC for the networks initializations. Nevertheless, for comparison purposes, we have changed the roles in the experiments carried out here, so that the nodes in both FM and HN are initially loaded with the probabilities as in the proposed DS approach.
In order to verify the behaviour of each method as the learning degree increases, we have carried out the experiments according to the following three STEPs described below STEP 1: given the images in S0 and S1, classify each pixel as belonging to a class, according to the number of classes established during the training phase. Compute the percentage of successes according to the ground truth defined for each class at each image. The classified pattern samples from S1 are added to the previous training samples and a new training process is carried out (Section 2.1) with the same number of clusters. The parameters associated to each classifier are updated. The set S0 is used as a pattern set in order to verify the performance of the training process as the learning increases. Note that it is not considered for training. STEPs 2 and 3: perform the same process but using the sets S2 and S3 respectively instead of S1; S0 is also processed as before.
As one can see the number of training samples added at each STEP is 6 512 512 because this is the number of pixels classified during the STEPs 1 to 3 belonging to the sets S1, S2 and S3.
To verify the performance for each method we have built a ground truth for each image processed under the supervision of expert human criteria. Based on the assumption that the automatic training process determines four clusters, we classify each image pixel with the simple classifiers obtaining a labelled image with four expected clusters, and then we select the image with the best results, always according to the expert.
The labels for each cluster, from the selected labelled image, are manually touched up until a satisfactory classification is obtained under the human supervision. This implies that each pixel has assigned a unique label in the ground truth, which serves as the reference one for comparing the performances. Figure 1(a) displays an original image belonging to the set S0; Figure 1(b) displays the correspondence between clusters and labels, in the left column the colour according to the values of the corresponding cluster centre and in the right column the artificial colour labels, both in the tridimensional RGB colour space; (c) labelled image for the four clusters obtained by our proposed DS approach.
The correspondence between labels and the different spectral signatures is: 1. (a) (b) (c) Figure 2 displays the distribution of a representative subset of 4,096 patterns from the image of the Figure 1(a), obtained by down sampling the image by eight, into the clusters in the tri-dimensional RGB colour space, where the centres of the classes, obtained through the BP classifier during the training phase, are also displayed; they are the four m j cluster centres, displayed in the same colour as the labels in the Figure 1(b). As one can see, there is no a clear partition into the four clusters because the samples appear scattered in the whole space following the diagonal. Hence, the classification of the borders patterns becomes a difficult task because they can belong to more than one cluster depending on their proximity to the centres. Table 2 shows the percentage of error during the decision for the different classifiers. For each STEP from 1 to 3, we show the results obtained for both sets of tested images S0 and either S1 or S2 or S3. The average error rate for the set SN at each STEP is given by: In the Table 2 square brackets indicate the rounded and averaged number of iterations required by DS, HN and FM for each set (S0, S1, S2 and S3) at each STEP (1, 2 and 3). Figure 3 displays the ground truth image for the one in Figure 1(a) which has been manually rectified from the results obtained through the BP classifier. As in the image of Figure 1(c), each colour identifies the corresponding label for the four clusters represented in Figure 1. Figure 1 have been manually rectified. Table 2. Average percentages of error and standard deviations at each STEP for the four sets of tested images S0, S1, S2 and S3.
c) Discussion
Based on the error rates displayed in Table 2, we can see that in general, the proposed DS approach outperforms the other methods and achieves the less error rates for STEP 3 in both sets S0 and S1. All strategies achieve the best performance in the STEP 3. Of particular interest is the improvement achieved for the set S0 in STEP 3 with respect the results obtained in STEPs 1 and 2 for that set. Based on the above observations, we can conclude that the learning improves the results, i.e., better decisions can be made as the learning increases. A detailed analysis for groups of classifiers is the following: 1) Simple classifiers: the best performance is achieved by BP as compared to FC. This suggests that the network initialization, through the probabilities supplied by BP, is acceptable.
2) Combined rules: the mean and product rules achieve both similar averaged errors. The performance of the mean is slightly better than the product. This is because, as reported in [38], combining classifiers which are trained in independent feature spaces result in improved performance for the product rule, while in completely dependent feature spaces the performance is the same. We think that this occurs in our RGB feature space because of the high correlation among the R, G and B spectral components [39,40]. High correlation means that if the intensity changes, all the three components will change accordingly.
3) Fuzzy combination: this approach outperforms the simple classifiers and the combination rules. Nevertheless, this improvement requires the convenient adjusting of the parameter a, with other values the results get worse. 4) Optimization and relaxation approaches: once again, the best performance is achieved by DS, which with a similar number of iterations that HN obtains better percentages of successes, the improvement is about 3.6 percentage points. DS also outperforms FM. This is because DS avoids satisfactorily some minima of energy, as expected.
For clarity, in Figure 4(a) the performance of the proposed DS approach for the set S0 is displayed against HN, because both are optimization approaches based on energy minimization; ME which is the best method of the combination rules and BP, the best method of simple combiners. Figure 4(b) shows the energy behaviour for the four sets (S1, S2, S3 and S0 in STEP 3) against the averaged number of iterations required to reach the convergence. The energy decreases as the optimization process increases, as expected according to the equation (4). Similar slopes can be observed for the sets S0, S2 and S3. On the contrary, the slope for S1 is smoother; this explains the greater number of iterations required for this set during the convergence.
Overall, the results show that the combined approaches perform favourably for the data sets used. The MA and ME fusion methods also provide best results than the individual ones. This means that combined strategies are suitable for classification tasks. This agrees with the conclusion reported in [13] or [15] about the choice of combined classifiers. Moreover, as the learning increases through STEPs 1 to 3 the performance improves and the number of iterations for S0 decreases, because part of the learning has been achieved at this stage. This means that the learning phase is important and that the number of samples affects the performance.
The main drawback of the DS, as well as also for the HN and FM approaches, is its execution time, which is greater than the methods that do not apply relaxation processes. This is a general problem for all kind of relaxation or optimization approaches.
All tests have been implemented in MATLAB and executed on an Intel Core 2 Duo, 2.40 GHz PC with 2.87 GB RAM operating under Microsoft Windows XP service pack 3. On average, the execution time per iteration and per image is 10.1 seconds.
Conclusions
During the decision phase, we have proposed a combined strategy under the DSA framework performing favourably as compared against other existing combined strategies including those with similar design and based on optimization and also against the individual classifiers. The application of the similarity, proximity and connectedness Gestalt's principles allows combining probabilities and membership degrees, supplied by the BP and FC classifiers respectively, by means of the regularization and contextual coefficients. The probabilities supplied by BP are used as initial states in a set of neural networks, which are specifically designed with such purpose. These states are iteratively updated under the DSA optimization process through the external influences exerted by the nodes in the neighbourhood thanks to the application of the Gestalt's principles.
In future works the updating through the DSA of both probabilities and membership degrees could be considered. With the proposed combined approach, we have established the bases to be able for combine more than two classifiers. This can be made by re-defining the regularization coefficient.
Also, if we try to combine classifiers providing outputs in different ranges always it should be possible to map all outputs in the same range. This allows the combination of different kinds of classifiers including self-organizing maps or vector quantization with BP or FC by example. | 8,103.2 | 2009-09-08T00:00:00.000 | [
"Computer Science"
] |
Observation-based analysis of ozone production sensitivity for two persistent ozone episodes in Guangdong, China
. An observation-based method (OBM) is developed to investigate the sensitivity of ozone formation to precursors during two persistent elevated ozone episodes observed at 77 stations in Guangdong. Average OH concentrations derived at the 77 stations between 08:00 and 13:00 local time stay within a narrow range of 2 . 5 × 10 6 to 5 . 5 × 10 6 cm − 3 with a weak dependence on the NO x . These values are in good agreement with OH values observed at a rural station in the Pearl River Delta (PRD). They also agree well with a box model constrained by the ambient conditions observed during the two episodes. The OBM has been used to evaluate the ozone production efficiency, ε (NO x or volatile organic compound, VOC), defined as the number of O 3 molecules produced per molecule of NO x (or VOC) oxidized. Average values of ε (NO x ) and ε (VOC) determined by the OBM are 3.0 and 2.1 ppb ppb − 1 , respectively, and both compared well with values in previous studies. Approximately 67 % of the station days exhibit ozone formation sensitivity to NO x , and approximately 20 % of the station days are in the transitional regime sensitive to both NO x and VOC, and only approximately 13 % of the station days are sensitive to VOC. These results are in semi-quantitative agreement with the ozone formation sensitivity calculated by the box model constrained by ambient conditions observed during the two episodes. However, our OBM results differ from those of most previous investigations, which suggested that limiting the emission of VOC rather than NO x would be more effective in reducing ozone reduction in Guangdong.
Introduction
Increases in surface ozone (O 3 ) can have serious adverse impacts on human health and ecological systems Song et al., 2017;Lin et al., 2018). In addition, tropospheric ozone is a significant greenhouse gas (IPCC, 2013). With a high rate of urbanization and industrialization, and the increasing use of motor vehicles, Guangdong has been suffering from severe O 3 pollution . The primary pollutant in Guangdong has switched from particulate matter to O 3 since 2015, thanks to a stringent emission control policy that has effectively reduced other air pollutants (Department of Ecology and Environment of Guang-dong Province, 2016). In fact, the number of days with O 3 as the primary pollutant is 68.7 %, far exceeding that of PM 2.5 (15.8 %) and PM 10 (8.3 %) in 2020 (Department of Ecology and Environment of Guangdong Province, 2021).
O 3 is a secondary pollutant produced from photochemical reactions involving nitrogen oxides (NO x ) and volatile organic compounds (VOCs) (Trainer et al., 2000;. The sensitivity of O 3 production is nonlinearly dependent on precursor concentrations and is usually categorized into photochemical regimes such as NO x -limited or VOC-limited (Kleinman et al., 1994;Sillman et al., 1998). There have been a number of studies on the sensitivity of O 3 production to NO x and VOC by photo-K. Song et al.: Observation-based analysis of ozone production sensitivity chemical air quality models (Sillman et al., 2003;Lei et al., 2004;Tang et al., 2010), as well as observation-based methods (OBMs) (Thielmann et al., 2002;Zaveri et al., 2003;Shiu et al., 2007). Several modeling approaches have been used to evaluate the O 3 production sensitivity, including the L N /Q method, where L N is the radical loss via the reactions with NO x and Q is the total primary radical production (Kleinman et al., 2001;Kleinman, 2005;Mao et al., 2010); the relative incremental reactivity method (RIR) (Shao et al., 2009;Cheng et al., 2010;Lu et al., 2010a;Xue et al., 2014;; and the Empirical Kinetics Modeling Approach (EKMA) (Dodge, 1977). These model-based studies usually have large uncertainties in their input parameters, particularly in the emission inventories and photochemistry of VOC (Chang et al., 2020). Observation-based methods can avoid some of the uncertainties by using observations to constrain the analysis (Thielmann et al., 2002;Zaveri et al., 2003;Shiu et al., 2007).
In this study, we adopt the approach proposed by Shiu et al. (2007) and develop an OBM to evaluate the O 3 production sensitivity during two multi-day O 3 pollution episodes in Guangdong. In this OBM, the concentration of OH is derived from observed NO x and CO in a new approach as described in the methodology section. The OBM is then used to evaluate the ozone production efficiency, ε(NO x or VOC), defined as the number of O 3 molecules produced per molecule of NO x (or VOC) oxidized. Finally, 3D-EKMA plots are generated basing on the OBM. The rest of the paper is organized as follows: Sect. 2 describes the data sources and analysis methods, Sect. 3 presents the results and discussions, and Sect. 4 presents a summary and conclusions.
Data
Hourly surface O 3 , PM 2.5 , CO and NO 2 concentration data at 77 out of a total 102 stations in Guangdong operated by the China National Environmental Monitoring Centre (CNEMC) during the period 2018-2019 are used in this study (available at http://www.cnemc.cn/en/, last access: 10 November 2021). The 77 stations (Fig. 1a) are chosen for their completeness of data. It can be seen in Fig. 1a that polluted stations are mainly located in the PRD, while clean stations are located in the northeast of Guangdong. In this study, we choose two persistent O 3 pollution episodes to perform the OBM analysis, specifically 2 to 8 October 2018 and 24 September to 1 October 2019. Figure 1b is the same as Fig. 1a except it shows the average ozone concentrations of all ozone-exceeding days in Guangdong in 2018 and 2019. One can see clearly that the ozone distribution during the two episodes in autumn is representative of and even slightly higher than the ozone concentrations during ozone pollution days in Guangdong in the entire 2 years. In fact, the monthly peak ozone concentrations in Guangdong tend to occur in September and October because Guangdong is usually under heavily overcast conditions with southerly winds bringing clean moist air from the South China Sea in the summer which tends to suppress the ozone formation.
where k 1 represents the reaction rate constant for the reaction of NO with O 3 . This equation neglects the reactions of NO reactions with HO 2 and RO 2 . The uncertainty due to this neglection is around 20 %, which is acceptable as discussed in Sect. 3.5. The value of k 1 is taken from Seinfeld and Pandis (1998): k 1 (1 ppm min −1 ) = 3.23 × 10 3 exp (−1430/T ) .
Derivation of VOC
In this study, we use CO as a tracer to estimate VOC. This tracer method has been widely used in previous studies (Heald et al., 2003;Hsu et al., 2010;Shao et al., 2011;Yao et al., 2012;Tang et al., 2013). Individual VOCs at 08:00 local time (LT) are calculated by multiplying the freshly emitted CO at 08:00 LT with the ratio of VOC/CO in the emission inventories of Huang et al. (2021), according to the equations listed in the Supplement. The freshly emitted CO is assumed to be the difference in CO between 08:00 and 13:00 LT as shown in Fig. 2 (Eq. S1 in the Supplement). The CO at 13:00 LT is considered to be the leftover CO for the following day and is used to evaluate the leftover VOCs (Eq. S2). Oxidized VOCs (OVOCs) are estimated from the observed ratios of CH 2 O, CH 3 CHO and ketone to CO (Wang et al., 2016;Wu et al., 2020
Derivation of OH concentrations
The ratio ethylbenzene / m,p-xylene has been suggested to be a good measure of the photochemical processing by OH (Calvert, 1976;Singh, 1977;Shiu et al., 2007). Following a Lagrangian trajectory, the ratio can be shown as where E and X represent concentrations of ethylbenzene and m,p-xylene at time t, respectively. E 0 and X 0 are their corresponding initial concentrations, k x and k e are their reaction rate constants with OH, and k x and k e equal to 2.17 × 10 −11 and 7.0×10 −12 cm 3 s −1 , respectively (Atkinson, 1990). With a known value of E 0 /X 0 , [OH × t] can be evaluated from observed E/X at time t. This provides an OBM-derived density of OH.
In the real atmosphere, the Lagrangian condition rarely exists due to turbulence mixing as well as atmospheric advection. Nevertheless, Eq. (4) tends to hold because atmospheric transport affects the two species similarly. This is a key advantage of the OBM. In this study, due to limited measure-8406 K. Song et al.: Observation-based analysis of ozone production sensitivity ments of VOC, we use CO and NO x to replace ethylbenzene and m,p-xylene, respectively.
Calculation of oxidized VOC and NO x
In this study, we consider the reaction of NO 2 with OH as the only removal process for NO x and assume the removal of NO x is pseudo-first order as shown below. In this case, following the Lagrangian trajectory, we have the following: where k is the reaction rate constant of NO x with OH. The reaction rate constant for NO 2 and OH is 1.04×10 −11 cm 3 s −1 at 25 • C and 1 atm pressure according to Sander et al. (2003). Since NO 2 is part of NO x , the value of k should be scaled down by the ratio NO 2 /NO x . The average of the NO 2 /NO x ratio is about 0.6, thus k for NO x is prescribed at 6.0 × 10 −12 cm 3 s −1 . Similarly, we have the following: where k VOC and k co are the reaction rate constants of VOCs and CO with OH, respectively. K VOC values of individual VOCs are listed in Table S1 in the Supplement, and k co is prescribed at 1.4 × 10 −13 cm 3 s −1 (Atkinson et al., 2006). Since the Lagrangian condition is sometimes not observed, it is necessary to select the time periods during which the quasi-Lagrangian condition as shown in Fig. 2 is approximately valid. The selection criterion is that the ratio of CO concentrations between 08:00 and 13:00 LT lies within 50 % of 1 standard deviation (vertical bars) of the ratio of CO shown in Fig. 2, which is assumed to be in the Lagrangian condition. This criterion usually filters out about 60 % of data; i.e., about 40 % of the days satisfy approximately the Lagrangian condition. We have tested this selection criterion by parameterizing it between 30 % and 80 % of 1 standard deviation and found our major results are robust within this range.
Dilution effect
Diurnal variations of pollutants averaged over all stations and the two episodes are shown in Fig. 2. Previous studies have shown that part of the early-morning rise in O 3 is due to O 3 entrained from the residual layer above the boundary layer during the development of the boundary layer in the morning (Shiu et al., 2007;Zhao et al., 2019). We adopt the approach proposed by Shiu et al. (2007) to account for the dilution effects. Specifically, the reduction of CO concentrations from 08:00 to 13:00 LT (approximately 20 %) is assumed to be the dilution effect and used for all other species. The uncertainty due to this assumption is discussed in Sect. 3.5.
2.2.6 Emissions of NO x , CO and VOCs between 08:00 and 13:00 LT Equations (5), (6) and (7) do not account for the emissions of NO x , CO or VOC during the period of 08:00-13:00 LT. Inclusion of these emissions would affect the value of OH derived from Eq. (5) as well as the dilution effect. We estimate the emission of NO x by taking advantage of the fact that NO x reaches a quasi-steady state around 13:00-16:00 LT as evident in Fig. 2. We believe that the quasi-steady state is maintained by the balance between the oxidation of NO x and its emission. This is based on the notion that oxidation of NO 2 by OH is the predominant sink of NO x in 13:00-16:00 LT, of which the integration over the mixed or boundary layer should be balanced by the emission flux of NO x according to the continuous equation of NO x . Assuming the oxidation loss rate of NO x in the mixed layer is uniform with height, we obtain that the divergence of the hourly NO x emission rate is equal to the oxidation loss rate of NO x at 13:00-16:00 LT.
Using the average OH of 5 × 10 6 cm −3 at noon derived from Eq. (5) (Fig. 4) and mean NO x at 13:00-16:00 LT (Fig. 2), a value of approximately 1.8 ppb h −1 can be obtained. This value is assumed to be the hourly NO x emission rate between 08:00 and 13:00 LT. The emissions of CO and VOC are calculated using their ratios to NO x in the emission inventories of Huang et al. (2021).
Box model
A photochemical box model with a carbon bond mechanism (PBM-CB05) (Yarwood et al., 2005;Coates and Butler, 2015; is used to simulate the O 3 production rate and OH radical. Unlike emission-based models, the PBM-CB05 used in this study is based on observed concentrations of air pollutants and meteorological parameters (Y. . In the CB05 module, VOCs are grouped according to carbon bond type and the reactions of individual VOCs are condensed using the lumped structure technique (Yarwood et al., 2005;Coates and Butler, 2015). In this study, the pollution indicators (O 3 , NO, NO 2 , CO and VOC) and meteorological parameters (temperature, relative humidity, pressure) observed during the two episodes are utilized as input parameters for the model. There are 37 VOC species considered in our case. The model simulation starts from 07:00 and ends at 18:00 LT with hourly input data based on observed concentrations of air pollutants and meteorological parameters during the two episodes.
Results and discussion
3.1 Air quality and meteorological conditions Figure 3 shows the time series of hourly concentrations of air pollutants. The time period covers the two ozone episodes and extends to 2 d before and 2 d after. Mean maximum daily 8 h average (MDA8) O 3 concentrations in episode 1 was 88.7 ppb, and in episode 2 was 99.6 ppb. The average daily concentration of CO of the two episodes was 0.74 and 0.85 ppm, respectively. The corresponding time series of key meteorological parameters are shown in Fig. S1 in the Supplement. As O 3 is formed through photochemical reactions involving precursors NO x and VOC, strong solar radiation, high temperature and low wind speed have been identified to be common conditions conducive to the formation of ozone . During both O 3 episodes, the weather in Guangdong was dominated by high-pressure systems with warm and cloudless conditions, and northeasterly winds. In particular, the average maximum temperature for episode 1 was 28 • and for episode 2 was 30 • . The general patterns of O 3 concentrations of the two episodes were similar. Relatively high O 3 concentrations with northerly or northeasterly winds appeared at least 2 d before the episode in both episodes. Afterward, the high O 3 kept increasing or stayed at a high level until the prevailing northeasterly wind shifted away and the surface pressure dropped. Starting on 22 September 2018, a precipitation event occurred which obviously ended the first episode. The heavier cloud cover greatly reduced the intensity of solar radiation and O 3 photochemical formation reactions. The disappearance of high O 3 in the second episode is believed to be related to a shift to southerly winds that brought in clean moist air from the South China Sea. Figure 4 shows the hourly OH concentrations between 08:00 and 13:00 LT derived from Eq. (4) based on the concentrations of NO x and CO observed at the 77 stations. Average OH concentrations derived at the 77 stations between 08:00 and 13:00 LT stay within a narrow range of 2.5 × 10 6 cm −3 to 5.5 × 10 6 cm −3 with a weak dependence on the NO x . The mean OH concentrations and their 1 standard deviations derived by the OBM (black dots and black vertical bars, respectively) are approximately 30 % higher than the mean OH concentrations and 1 standard deviations observed at a rural station in PRD in October-November 2014 (blue line and blue shade, respectively) (Tan et al., 2019). Nevertheless, there is a complete overlap of the 1 standard deviations of the two data sets (blue shade and black vertical bars), which indicates a good agreement between our OBM OH values and those observed by Tan et al. (2019). In another comparison with a previous investigation, our OH concentrations are approximately 40 % lower than the OH calculated by a box model constrained by observed air pollutants during an experiment at a remote island site in the PRD from August to November 2013 (red line and red shade) (Y. . There is also a nearly complete overlap of the 1 standard deviations of the two data sets (red shade and black vertical bars). Figure 4 also includes the noon OH concentrations calculated by the box model described above. The box model is constrained by the ambient conditions observed during the two episodes. The average modeled OH concentration is approximately 3.2 × 10 6 cm −3 with a 1 standard deviation of 0.6 × 10 6 cm −3 (red cross and red vertical bar, respectively). This value of OH is approximately 40 % less than the OH values of 5.5 ± 4.3 × 10 6 cm −3 derived by the OBM at noon. Again, there is a good overlap of the 1 standard deviations of the two data sets. The agreement among the OH concentrations derived by the OBM, the box model and field observations gives credence to our observation-based analysis, at least in terms of the derived OH concentration which plays a critical role in the O 3 formation.
OH concentrations derived from OBM
Nevertheless, we acknowledge that the OH concentrations derived here are approximately a factor of 3 to 5 lower than the OH concentrations observed at Backgarden (a suburban site about 70 km downwind of Guangzhou) during an intensive campaign in 2006, in which the OH reached daily peak values of 15-26 × 10 6 cm −3 (Lu et al., 2012). This discrepancy remains unresolved.
Ozone production efficiency
Ozone production efficiency (ε) is defined as the number of O 3 molecules produced per molecule of NO x (or VOC) oxidized photochemically (Liu et al., 1987;Trainer et al., 2000). ε can be calculated by the following equations: where [O 3 ] represents the amount of ozone generated from 08:00 to 13:00 LT which is equal to the observed difference in O 3 between 08:00 and 13:00 LT, after adjustment to the dilution factor.
[NO x ] ( [VOC]) represents the consumption and oxidation of NO x (VOC) between 08:00 and 13:00 LT. Figure 5a shows the relationship of ε as a function of the average NO x concentration between 08:00 and 13:00 LT. As expected ε is greater at lower NO x ; i.e., the O 3 production efficiency is greater in rural and suburban environments than urban conditions, in agreement with previous findings (Liu et al., 1987;Kleinman et al., 2002). The value of ε(NO x ) converges to a narrow range of about 1.0 ± 0.5 when NO x is greater than 70 ppb. This range of ε(NO x ) in Fig. 5a is consistent with previous investigations in urban environments (Sillman et al., 1998;Daum et al., 2000) as well as in rural environments (Chin et al., 1994;Trainer et al., 1995). Compared to previous investigations in PRD areas, values in Fig. 5a at NO x higher than 20 ppb are in good agreement with the ε(NO x ) values of 2.1-2.5 found at urban stations in PRD by Yu et al. (2020) and Lu et al. (2010b). However, ε(NO x ) values of 6.0-13.3 were found at rural stations in PRD (Lu et al., 2010b;Wei et al., 2012;Xu et al., 2015;Yang et al., 2017), which are about a factor of 2 higher than our values at low NO x . Considering that our values are derived for two ozone pollution episodes in which the ε(NO x ) should be higher than non-episode periods, this discrepancy is puzzling. Figure 5b is the same as Fig. 5a except that the x axis is changed to [NO x ] or the oxidized NO x . Figure 5b shows a relatively smoother distribution compared to Fig. 5a, most likely because the oxidized NO x , rather than NO x itself, is more closely related photochemically to [O 3 ]. As [NO x ] increases beyond 30 ppb, ε[NO x ] levels off linearly to a nearly constant value around 1.0 when [NO x ] approaches 80 ppb (Fig. 5b). ε [VOC] is also greater at lower [VOC] and has an asymptotic value of about 1.0 ± 0.5 when [VOC] becomes greater than 50 ppb (Fig. 5c). Figure 5b and c have some useful implications for the ozone control strategy. For instance, ε[NO x ] = 1.7 when [NO x ] = 50 ppb can be interpreted as in a highly polluted ambient environment in Guangdong where [NO x ] equals 50 ppb, approximately 1.7 ppb of ozone is produced for each ppb of NO x oxidized. The overall average value of ε[NO x ] is about 3.0 (Fig. 5b), which implies on average 3.0 ozone molecules are produced for each NO x molecule oxidized. The overall average value of ε[VOC] is approximately 2.1 (Fig. 5c), which implies 2.1 ozone molecules are produced for each VOC molecule oxidized, which is about 50 % less efficient than that of NO x .
Photochemical oxidation of a VOC molecule under common ambient urban conditions produces approximately two or more peroxyl radicals -one HO 2 and more than one RO 2 (Seinfeld and Pandis, 1998;Jacob, 1999). Because there is abundant NO x in the ambient atmosphere in Guangdong, nearly all peroxyl radicals are expected to react with NO to produce NO 2 and then O 3 . Jacob (1999) suggested an ozone formation rate of 2 [VOC] in the urban atmosphere. This is in excellent agreement with the overall value of 2.1 [VOC] found here by the OBM. This agreement, as well as the consistency with previous investigations on the ε[NO x ], provides credence again to the observation-based analysis of this study.
Ozone sensitivity to precursors
The sensitivity of ozone formation ( O 3 ) to ozone precursor NO x is examined in Fig. 6a, in which O 3 (right-hand side in red) and the oxidized VOC (left-hand side in black) are plotted as a function of the oxidized NO x . Similarly in Fig. 6b O 3 (right-hand side in red) and the oxidized NO x (left-hand side in black) are plotted as a function of the oxidized VOC. It can be seen in Fig. 6a that O 3 increases with the value of oxidized NO x . The increase first has a very sharp slope of about 2.0 ppb ppb −1 when oxidized NO x is below 30 ppb, indicating a strong sensitivity of ozone formation to oxidized NO x . The slope flattens out quickly to around 0.2 ppb ppb −1 when oxidized NO x gets greater than 30 ppb, suggesting other factors such as VOC and the VOC/NO x ratio may become more important in controlling the ozone formation rate. Figure 6b shows that O 3 increases with the value of oxidized VOC with a slope of about 0.4 ppb ppb −1 . However, this slope is much smaller than that of NO x , especially in the low oxidized NO x regime (< 30 ppb). In a brief summary for Fig. 6a and b, the ozone formation is most sensitive to the oxidized NO x in relatively clean regimes of oxidized NO x < 30 ppb. In more polluted regimes, other factors such as the initial VOC and/or the VOC/NO x ratio appear to have a significant impact on the ozone formation. Additional evidence in support of these points is elaborated below. Figure S2 presents a three-dimensional EKMA-like depiction of ozone formation rates ( O 3 , black dots, 471 points) plotted as a function of the oxidized NO x (x axis) and oxidized (VOCs + CO) (y axis). The colored plane is a linear regression to the ozone formation rates (black dots), and the green and red bars denote positive and negative deviations of individual dots from the plane, respectively. Different color shades from blue to red denote different concentrations of O 3 in ppb. The equation for [ O 3 ] represents the plane as a function of the oxidized NO x ( NO x ) and oxidized VOC ( VOC). The coefficients in front of NO x and VOC in the equation are the ozone sensitivities to NO x and VOC, respectively. The plane fits the black dots (ozone formation rates) reasonably well with an R 2 value of 0.423. The coefficient of NO x is 0.755 which is about 3 times of that of VOC (0.247), indicating the ozone formation rate is about 3 times more sensitive to NO x than VOC when considering all data at the 77 stations in Guangdong during the two episodes. This is consistent with the findings from Fig. 6a and b.
Some uneven congregations of red and green bars appear; e.g., a large number of red bars have low values of NO x , while many green bars tend to have moderate values of NO x and high values of VOC. This suggests that there is a need to divide Fig. S2 into different congregations or regimes. Figure 7 is the same as Fig. S2 except it is divided into four quadrants of different levels of oxidized ozone precursors: panel (a) shows low NO x and low VOC ( NO x < 20 ppb, VOC < 25 ppb), panel (b) shows high NO x and high VOC ( NO x > 20 ppb, VOC > 25 ppb), panel (c) shows low NO x and high VOC ( NO x < 20 ppb, VOC > 25 ppb), panel (d) shows high NO x and low VOC ( NO x > 20 ppb, VOC < 25 ppb). In total, 39 % of all data points (184 out of 471 points) lie in panel (a), the slope of O 3 against NO x (coefficient of NO x in the equation) is approximately 1.54 ppb ppb −1 (p value < 0.01), while the slope of O 3 against VOC (coefficient of VOC) has a value of 0.28 ppb ppb −1 (p value = 0.021). These values of slopes imply that the ozone formation at stations in panel (a), a relatively clean environment, is about 5 times more sensitive to NO x than VOC; i.e., the ozone formation is NO x -limited. This is in good agreement with the conclusion reached based on Figs. 6a, b and S2. Panel (b) contains about 20 % of the data points. The coefficient of NO x is 0.3 ppb ppb −1 (p value < 0.01), while the coefficient of VOC is 0.29 ppb ppb −1 (p value = 0.043), suggesting that the ozone formation is sensitive to both VOC and NO x . This quadrant belongs to the transitional regime. Panel (c) has 28 % of the data points, and the coefficients of NO x and VOC are 2.25 ppb ppb −1 (p value < 0.01) and 0.04 ppb ppb −1 (p value = 0.785), respectively. Here again the ozone formation is NO x -limited. Panel (d) has 13 % of the data points, the coefficients of NO x and VOC are 0.18 ppb ppb −1 (p value = 0.126) and 0.91 ppb ppb −1 (p value = 0.037), respectively. These values of coefficients Figure 7. Three-dimensional depiction of ozone formation rate ( O 3 , z axis) plotted as a function of oxidized NO x (x axis) and oxidized VOC (y axis). The black dots denote values of O 3 , the colored plane is the best linear fit to the black dots, and the green and red bars denote positive and negative deviations from the plane, respectively. The equation listed represents the surface as a function of oxidized NO x and oxidized VOC. R 2 is the square of correlation coefficient of the linear regression. Four quadrants: (a) low NO x and low VOC (NO x < 20 ppb, VOC < 25 ppb), (b) high NO x and high VOC (NO x > 20 ppb, VOC > 25 ppb), (c) low NO x and high VOC (NO x < 20 ppb, VOC > 25 ppb), (d) high NO x and low VOC (NO x > 20 ppb, VOC < 25 ppb).
indicate that the ozone formation is more sensitive to VOC than NO x ; i.e., the ozone formation is VOC-limited.
The analysis above provides an observation-based method for evaluating the ozone-precursor sensitivity. This method has the potential to provide quantitative information on the ozone control strategy for individual regions. In theory, the quadrants can be further divided into, for example, a specific region represented by individual stations, such that an ozone control strategy suitable to the region could be developed. In practice, this is limited by the data available for making the three-dimensional plot like Fig. 7.
We have compared the OBM results to those of the box model constrained by the observed ambient environment in this study. Figure 8 shows the traditional 2D-EKMA plot calculated by the model. To facilitate the comparison, the x axis and y axis in Fig. 8 are changed to hourly oxidized NO x and oxidized VOC, respectively, rather than the usual early-morning concentrations of NO x and VOC. The modeled results are shown in colored isopleths of ozone increments between 06:00 and 16:00 LT, while results of the OBM are shown in colored dots for ozone increment or formation between 08:00 and 13:00 LT. The difference in the length of time has a negligible effect on the ozone increment as evident in Fig. 2. The OBM values agree with the model results semiquantitatively. For instance, the colored dots of OBM shift from blue (20 ppb) to green (60-80 ppb) consistently with the colored isopleths, but the OBM dots rarely turn yellow when modeled isopleths become greater than 90 ppb. Two red lines (left red and right red) are added to Fig. 8 to facilitate the assessment of the sensitivity of ozone formation. There are 127 points located to the left of left red line, which clearly belongs to the NO x -limited regime according to the modeled ozone isopleths. There are 141 points located to the right of right red line, which clearly belongs to the VOClimited regime according to the modeled ozone isopleths. In between the two red lines there are 203 points, which are in the transitional regime sensitive to both NO x and VOC. These three regimes overlap and agree in ozone formation sensitivity with panels (a) and (c), panel (d) and panel (b) of the OBM results, respectively. However, the numbers of points in the three regimes deviate significantly from those of the four OBM quadrants. For example, panel (b) has only 97 points compared to the 203 points in the transitional regime of Fig. 8; panel (d) has only 60 points compared to the 141 points in the VOC-limited regime of Fig. 8; while panels (a) and (c) have 314 points compared to the 127 points in an NO x -limited regime of Fig. 8. In terms of ozone sensitivity, the modeled results show a nearly equal number of points in the NO x -limited regime as the VOC-limited regime, while the OBM results show five to one in favor of the NO x -limited regime. A quantitative agreement between the OBM results (dots) and the modeling results (isopleths) would require shifting the dots in Fig. 8 leftward by approximately 0.5-1 ppbv h −1 , which would mean a reduction of OH by approximately 30 %-50 %. Interestingly this requirement matches well with the fact that modeled OH is approximately 40 % less than the OH value derived by the OBM at noon as shown in Fig. 4.
Comparing with previous studies, we notice that almost all previous researches suggested that limiting the emission of VOCs in Guangdong would have a positive role in reducing ozone reduction (Zhang et al., 2008;Jiang et al., 2018), but different results may appear in different places and time. Yu et al. (2020) found that NO x reduction in Shenzhen has led to higher ozone production from 2015 to 2018 given the nearly constant VOCs. However, the ozone mitigation would be benefit from further NO x reduction under the conditions of 2018. Yang et al. (2019) analyzed the relationship between ozone and precursors in PRD from 2007 to 2017 and found that the northeastern PRD was NO xlimited and the southwest VOC-limited. Obviously, these findings are in general different from our results except in a highly polluted environment like panel (b). Some of the difference can be explained by the fact that most of the previous studies were focused on urban regions, while many rural stations are included in our OBM analysis. Finally, we acknowledge that our results are based on the analysis of only two multi-day ozone episodes which maybe not representative of the general ambient environment in Guangdong. A comprehensive regional and temporal OBM analysis is needed to make a definitive comparison with previous findings.
In summary of Sect. 3.4, the sensitivity of ozone formation to its precursors is complex and highly dependent on the ambient conditions of the station day. Our OBM shows that approximately 67 % of the station days exhibit ozone formation sensitivity to NO x , approximately 20 % of the station days are in the transitional regime sensitive to both NO x and VOC; only approximately 13 % of the station days are sensitive to VOC. These findings are different from results of most previous studies, which favor ozone formation sensitivity to VOC.
Uncertainty analysis
Significant uncertainties and limitations exist in our OBM analysis. First and foremost is the uncertainty involved with the Lagrangian air mass assumption, which does not take into account mixing, entrainment or surface deposition effects. Omitting the mixing of NO x emitted between 08:00 and 13:00 LT into the Lagrangian air mass can lead to an underestimate of the OH concentration, while omitting the mixing of CO emission can underestimate the dilution effect. We account for the mixing of NO x emission by assuming that NO x reached a quasi-steady state around 13:00-16:00 LT (Sect. 2.2.6), and in turn the mixing of CO and VOC emissions are calculated using their ratios to NO x in the emission inventories of Huang et al. (2021). However, no surface deposition effect is included. The selection criterion defined by 50 % of 1 standard deviation (1.0 ± 0.5σ ) from the mean CO distribution works well in filtering out those data deviating significantly from the Lagrangian condition. However, the criterion filters out about 60 % of the data, thus limit-ing the representativeness of the OBM analysis. This limitation has been evaluated by relaxing the selection criterion to 1.0 ± 0.8σ , which filters out only about 30 % of the data. No significant difference has been detected, suggesting the results of the OBM analysis are representative of the majority of the data. Another source of uncertainty is that one single dilution factor is adopted for all air pollutants, including O 3 , CO, PM 2.5 and NO x . In this context, it is reassuring to find out that the dilution factors derived independently from CO and PM 2.5 agree within 10 % with each other. In a brief summary, we estimate the uncertainty involved with the Lagrangian assumption to be in the range of 20 %-40 %.
The second largest source of uncertainty is the evaluation of VOCs. Individual VOCs, including OVOCs, are calculated based on the observed concentration of CO and the ratio of VOC/CO in the emission inventories as discussed in Sect. 2.2.2. We have evaluated the VOCs and OVOCs derived this way by comparing their contributions to the OH reactivity observed by Tan et al. (2019) in PRD in autumn 2014. There is a reasonable agreement between our estimates of the contributions of NO x , CO, OVOCs and VOCs to the OH reactivity and those of Tan et al. (2019) except for a 35 % underestimation of VOCs. Hence we estimate the uncertainty in the evaluation of VOCs to be in the range of 30 %-50 %.
Another source of uncertainty may come from the neglection of heterogeneous reactions in this study. The largest impact of neglecting heterogeneous reactions is most likely to involve NO x between 08:00 and 13:00 LT, during which the OH is derived. Since the effect of heterogeneous reactions is included in the observations, the neglection of any heterogeneous removal of NO x (e.g., deposition of NO x on aerosols in the humid conditions in Guangdong) can lead to an overestimate of OH concentrations by the OBM. This would have a significant impact on the outcome of this study, as OH plays a critical role in the photochemistry of NO x , VOCs and ozone. On the other hand, presence of significant natural sources of NO x such as biogenic emission and/or lightning source in 08:00-13:00 LT would lead to an underestimate of OH concentration.
Finally, another source of uncertainty is attributable to the coarse resolution of CO measurements which is reported at 0.1 ppm intervals. As a result, many hourly CO data would show identical values and lose their value as a tracer.
Summary and conclusions
In this study, two persistent elevated ozone episodes in Guangdong (77 stations) that occurred on 2-8 October 2018 and 24 September-1 October 2019 were analyzed to investigate the sensitivity of ozone generation to precursor concentrations at the 77 stations. An OBM is developed by modifying the approach suggested by Shiu et al. (2007). Specifically, NO x and CO are used in this OBM to substitute for the two hydrocarbon species utilized in Shiu et al. (2007).
Major outputs from the OBM include the OH concentrations, O 3 production efficiency and the sensitivity of ozone formation to the precursors at the 77 stations during the two ozone episodes. The average OH concentrations between 08:00 and 13:00 LT agree well with the OH values observed at a rural station in PRD in October-November 2014 by Tan et al. (2019). The OH values derived from the OBM are also in good agreement with a box model constrained by the ambient conditions observed during the two episodes. On the other hand, the OH concentrations derived here are approximately a factor of 2 to 4 lower than the OH concentrations observed at Backgarden, a suburban site about 70 km downwind of Guangzhou (Lu et al., 2012).
The O 3 production efficiency against NO x , ε(NO x ) = [O 3 ]/ [NO x ], is greater at lower NO x (Fig. 5a), in agreement with previous findings (Liu et al., 1987;Kleinman et al., 2002). The value of ε converges to a narrow range of about 1.0 ± 0.5 when NO x is greater than 70 ppb. This range of ε(NO x ) is consistent with previous investigations in urban environments (Sillman et al., 1998;Daum et al., 2000) as well as in rural environments (Chin et al., 1994;Trainer et al., 1995). Compared to previous investigations in PRD areas, our values of ε(NO x ) at NO x higher than 20 ppb are in good agreement with the values of 2.1-2.5 found at urban stations in PRD by Yu et al. (2020) and Lu et al. (2010b). However, ε(NO x ) values of 6.0-13.3 were found at rural stations in PRD (Lu et al., 2010b;Wei et al., 2012;Xu et al., 2015;Yang et al., 2017), which are about a factor of 2 higher than our values at low NO x . Considering that our values are derived for two ozone pollution episodes in which the ε(NO x ) should be higher than non-episode periods, this discrepancy is puzzling. The overall average value of ε[NO x ] is about 3.0 (Fig. 5b), which implies on average three ozone molecules are produced for each NO x molecule oxidized. The overall average value of ε[VOC] is approximately 2.1 (Fig. 5c), which implies 2.1 ozone molecules are produced for each VOC molecule oxidized, about 50 % less efficient than that of NO x . Jacob (1999) suggested an ozone formation rate of 2 [VOC] in the urban atmosphere. This is in excellent agreement with the value of 2.1 [VOC] found here by the OBM. This agreement, as well as the consistency with previous investigations on the ε[NO x ] and OH concentrations, provides credence to the observation-based analysis (OBM) of this study.
The sensitivity of ozone formation to its precursors is complex and highly dependent on the ambient conditions of the station day. Our OBM shows that approximately 67 % of the station days exhibit ozone formation sensitivity to NO x , approximately 20 % of the station days are in the transitional regime sensitive to both NO x and VOC, and only approximately 13 % of the station days are sensitive to VOC. These findings are different from results of most previous studies, which favor ozone formation sensitivity to VOC. Some of the difference can be explained by the fact that most of the previous studies were focused on urban regions, while many rural stations are included in our OBM analysis. Finally, we acknowledge that our results are based on the analysis of only two multi-day ozone episodes which may not be representative of the general ambient environment in Guangdong. A comprehensive spatial and temporal OBM analysis is needed to make a definitive comparison with previous findings. | 9,884 | 2022-01-24T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Fluorescent Nanocomposite of Embedded Ceria Nanoparticles in Crosslinked PVA Electrospun Nanofibers
This paper introduces a new fluorescent nanocomposite of electrospun biodegradable nanofibers embedded with optical nanoparticles. In detail, this work introduces the fluorescence properties of PVA nanofibers generated by the electrospinning technique with embedded cerium oxide (ceria) nanoparticles. Under near-ultra violet excitation, the synthesized nanocomposite generates a visible fluorescent emission at 520 nm, varying its intensity peak according to the concentration of in situ embedded ceria nanoparticles. This is due to the fact that the embedded ceria nanoparticles have optical tri-valiant cerium ions, associated with formed oxygen vacancies, with a direct allowed bandgap around 3.5 eV. In addition, the impact of chemical crosslinking of the PVA on the fluorescence emission is studied in both cases of adding ceria nanoparticles in situ or of a post-synthesis addition via a spin-coating mechanism. Other optical and structural characteristics such as absorbance dispersion, direct bandgap, FTIR spectroscopy, and SEM analysis are presented. The synthesized optical nanocomposite could be helpful in different applications such as environmental monitoring and bioimaging.
Introduction
Cerium oxide nanoparticles (ceria NPs) attract great research and commercial interests due to their observable capability to capture radicals and dissolved oxygen [1,2]. These promising properties are applicable in many applications related to the medical, environmental, and sustainable energy fields [3][4][5]. Depending on the oxidation state of cerium ions, ceria could exist in two forms: active with associated charged oxygen vacancies (Ce 3+ ) and non-active with no formed O-vacancies (Ce 4+ ) [6]. The formed O-vacancies act as probes to scavenge some charged objects such as radicals and dissolved oxygen. The conversion of solid oxide from a lower oxidation state (Ce 3+ ) to a higher one (Ce 4+ ) occurs in a fast manner while the opposite reaction is difficult to achieve [7]. Therefore, it is a challenge to keep ceria NPs in active form, containing Ce 3+ ions to be fitted into the previously mentioned applications.
Results
In this presented work, two different methods are introduced to produce fluorescent electrospun nanofibers, which are electrospun PVA nanofibers created with the embedded in situ ceria nanoparticles technique and the deposited ceria nanoparticles within a crosslinked PVA nanofiber technique. The embedded in situ ceria nanoparticle concentration can be controlled in PVA electrospun nanofibers with less ceria agglomeration inside the fiber [18].
The new contribution is depositing ceria nanoparticles in a crosslinked electrospun PVA nanofiber surface. However, this technique shows some difficulty in controlling the ceria nanoparticle concentration due to the losses that happen during the deposition technique; also ceria nanoparticle agglomerations were difficult to control, leading the presence of florescence quenching mechanism compared to the first technique, though it can be used in some applications such as drug delivery that deal with the crosslinked nanofibers mat as ceria nanoparticle carriers or a cohesive media. Despite the used deposition technique, ceria nanoparticles show that they are still active, as discussed in the next section.
Optical Characterization of Ceria NPs Embedded inSitu with PVA Nanofibers
The absorbance curves of PVA NFs embedded in situ in ceria NPs with concentrations of 0.25% and 0.75% are shown in Figure 1a. This absorbance raise to 400 nm is correlated to the embedded ceria NPs, as the PVA nanofibers alone, whether crosslinked or not, show no absorbance at all over the detected range of wavelengths based on experimental testing. The optical allowed direct bandgap can be calculated directly from the obtained absorbance curves using the following Equation (1) [19].
αpEq " ApE´E g q 1{2 (1) where α is the absorbance coefficient, A is a constant that depends on the effective masses of electrons and holes in ceria NPs, E is the absorbed photon energy, and E g is the allowed direct bandgap. Figure 1b shows the relation between (αE) 2 versus E, and the intersection of the extrapolation of the linear part of the (αE) 2 curve with the E (x-axis) is equal to the allowed direct bandgap E g is the particular composition of ceria NPs.
Nanomaterials 2016, 6, 102 3 of 10 where α is the absorbance coefficient, A is a constant that depends on the effective masses of electrons and holes in ceria NPs, E is the absorbed photon energy, and Eg is the allowed direct bandgap. Figure 1b shows the relation between (αE) 2 versus E, and the intersection of the extrapolation of the linear part of the (αE) 2 curve with the E (x-axis) is equal to the allowed direct bandgap Eg is the particular composition of ceria NPs. Fluorescence intensity measurements were obtained for different concentrations of ceria NPs which were embedded in situ in PVA NFs. Studied concentrations of ceria NPs as examples were <1 wt %, 5 wt %, and 10 wt %. As shown in Figure 2, the fluorescence emission appears at a wavelength approximately equal to 520 nm under 430 nm excitation, which is one of the embedded ceria NPs' optical characteristics [20], with no fluorescence peak emitted from the non-optical mat of PVA-only nanofibers based on our experimental measurements. The TEM image of ceria nanoparticles and the SEM image of PVA nanofibers with embedded ceria nanoparticles in situ are shown in Figure 3a,b. The average grain size of ceria nanoparticles is ~6 nm and the formed nanofibers' mean diameter is ~200nm. From Figure 3b, it can be shown that some ceria nanoparticles are agglomerated on the electrospun nanofibers. Fluorescence intensity measurements were obtained for different concentrations of ceria NPs which were embedded in situ in PVA NFs. Studied concentrations of ceria NPs as examples were <1 wt %, 5 wt %, and 10 wt %. As shown in Figure 2, the fluorescence emission appears at a wavelength approximately equal to 520 nm under 430 nm excitation, which is one of the embedded ceria NPs' optical characteristics [20], with no fluorescence peak emitted from the non-optical mat of PVA-only nanofibers based on our experimental measurements. The TEM image of ceria nanoparticles and the SEM image of PVA nanofibers with embedded ceria nanoparticles in situ are shown in Figure 3a,b. The average grain size of ceria nanoparticles is~6 nm and the formed nanofibers' mean diameter is~200 nm. From Figure 3b, it can be shown that some ceria nanoparticles are agglomerated on the electrospun nanofibers.
where α is the absorbance coefficient, A is a constant that depends on the effective masses of electrons and holes in ceria NPs, E is the absorbed photon energy, and Eg is the allowed direct bandgap. Figure 1b shows the relation between (αE) 2 versus E, and the intersection of the extrapolation of the linear part of the (αE) 2 curve with the E (x-axis) is equal to the allowed direct bandgap Eg is the particular composition of ceria NPs. Fluorescence intensity measurements were obtained for different concentrations of ceria NPs which were embedded in situ in PVA NFs. Studied concentrations of ceria NPs as examples were <1 wt %, 5 wt %, and 10 wt %. As shown in Figure 2, the fluorescence emission appears at a wavelength approximately equal to 520 nm under 430 nm excitation, which is one of the embedded ceria NPs' optical characteristics [20], with no fluorescence peak emitted from the non-optical mat of PVA-only nanofibers based on our experimental measurements. The TEM image of ceria nanoparticles and the SEM image of PVA nanofibers with embedded ceria nanoparticles in situ are shown in Figure 3a,b. The average grain size of ceria nanoparticles is ~6 nm and the formed nanofibers' mean diameter is ~200nm. From Figure 3b, it can be shown that some ceria nanoparticles are agglomerated on the electrospun nanofibers.
Effect of Crosslinking with Ceria NPs "in Situ" or Spin-Coated
As discussed before, the esterification method is used as a chemical crosslinking method to convert the generated NFs from being a soluble mat to being a hydrophobic one that can resist dissolving in water. Here, the optical characterizations of crosslinked PVA NFs with in situ embedded ceria NPs at concentrations of 1 wt % and 5 wt % are studied, as are ceria NPs spin-coated on the mat of crosslinked PVA NFs as a post-addition step of the nanoparticles over the nanofibers, as shown in both Figures 4 and 5. Both figures show the absorbance and bandgap for in situ embedded ceria NPs and spin-coated ceria NPs on a mat of crosslinked PVA NFs, respectively. Regarding fluorescence intensity, as shown in Figures 6a and 7a, the fluorescence intensities of crosslinked PVA NFs embedded with in situ ceria NPs at 1 wt % and ceria NPs spin-coated on crosslinked PVA NFs have the same behavior as discussed in non-esterified NFs. That gives a promising conclusion that our electrospun nanocomposite can be optically fluorescent in addition to being hydrophobic. Figure 6b shows the reduced fluorescence intensity.
Effect of Crosslinking with Ceria NPs "in Situ" or Spin-Coated
As discussed before, the esterification method is used as a chemical crosslinking method to convert the generated NFs from being a soluble mat to being a hydrophobic one that can resist dissolving in water. Here, the optical characterizations of crosslinked PVA NFs with in situ embedded ceria NPs at concentrations of 1 wt % and 5 wt % are studied, as are ceria NPs spin-coated on the mat of crosslinked PVA NFs as a post-addition step of the nanoparticles over the nanofibers, as shown in both Figure 4 and Figure 5. Both figures show the absorbance and bandgap for in situ embedded ceria NPs and spin-coated ceria NPs on a mat of crosslinked PVA NFs, respectively. Regarding fluorescence intensity, as shown in Figure 6a and Figure 7a, the fluorescence intensities of crosslinked PVA NFs embedded with in situ ceria NPs at 1 wt % and ceria NPs spin-coated on crosslinked PVA NFs have the same behavior as discussed in non-esterified NFs. That gives a promising conclusion that our electrospun nanocomposite can be optically fluorescent in addition to being hydrophobic. Figure 6b shows the reduced fluorescence intensity.
Effect of Crosslinking with Ceria NPs "in Situ" or Spin-Coated
As discussed before, the esterification method is used as a chemical crosslinking method to convert the generated NFs from being a soluble mat to being a hydrophobic one that can resist dissolving in water. Here, the optical characterizations of crosslinked PVA NFs with in situ embedded ceria NPs at concentrations of 1 wt % and 5 wt % are studied, as are ceria NPs spin-coated on the mat of crosslinked PVA NFs as a post-addition step of the nanoparticles over the nanofibers, as shown in both Figures 4 and 5. Both figures show the absorbance and bandgap for in situ embedded ceria NPs and spin-coated ceria NPs on a mat of crosslinked PVA NFs, respectively. Regarding fluorescence intensity, as shown in Figures 6a and 7a, the fluorescence intensities of crosslinked PVA NFs embedded with in situ ceria NPs at 1 wt % and ceria NPs spin-coated on crosslinked PVA NFs have the same behavior as discussed in non-esterified NFs. That gives a promising conclusion that our electrospun nanocomposite can be optically fluorescent in addition to being hydrophobic. Figure 6b shows the reduced fluorescence intensity. Figure 7b presents the SEM image of the chemically crosslinked PVA nanofibers due to esterification and Figure 8 shows the FTIR spectrum of the crosslinked synthesized nanocomposite. Figure 7b presents the SEM image of the chemically crosslinked PVA nanofibers due to esterification and Figure 8 shows the FTIR spectrum of the crosslinked synthesized nanocomposite.
Discussion
From Figure 1b, the resulting allowed bandgap range matches other literature which confirms the formation of active ceria with some quantity of tri-valiant cerium ions with corresponding Ovacancies [10,21]. From the fluorescence graphs in Figure 2, in the range of <1 wt % of ceria NPs, the fluorescence intensity peak increases with increasing the concentration due to more formed optical trivaliant states of cerium ions with corresponding electron transitions of 5d-4f levels [22]. However, the increase of a higher ceria NP concentration above 1 wt % leads to a decrease in the fluorescence emission intensity peaks. This may be due to a static fluorescence quenching effect which can dominantly appear at higher ceria NP concentrations greater than 1 wt %, leading to this decrease in the fluorescence emission. From these experimental results, we can conclude that optimal concentrations of ceria NPs embedded in situ in PVA NFs that give a higher fluorescence intensity are less than 1 wt %.
As the optically effective materials are ceria NPs, the absorbance and bandgap curves in both Figures 4 and 5 are slightly different in each method; however, all the resulting bandgap values are in the accepted ranges of active ceria NPs, around 3.5 eV, which was shown before for noncrosslinked nanofibers with embedded ceria. All of these experimental results indicate that optical properties of ceria NPs are not affected much by the crosslinking technique. However, it could be estimated that ceria nanoparticles are more optically active within the in situ embedding technique than in the spin-coating one, due to the smaller obtained direct allowed bandgap and the higher value of the fluorescence intensity peak. Figure 7b shows the reduction in the average nanofibers' mean diameter, compared to the nonesterified case, in range of 150 nm but with less porosity and a higher possibility of beads, compared to the non-esterified PVA electrospun nanofibers as shown earlier in Figure 5b. In FTIR spectroscopy as shown in Figure 8, most of the original peaks of both PVA and malic acid are not affected by adding ceria NPs. That gives evidence that the polymeric host of ceria keeps its original chemical bonds such as OH alcohol, free hydroxyl, C-H alkane, C-O carboxylic acid, ester and C=O carboxylic bonds.
Chemicals
All chemicals are used as received without further purification. Cerium (III) chloride heptahydrate (99.9%), Mowiol 10-98 Poly(vinyl alcohol), Malic acid are purchased from Sigma-Aldrich (St. Louis, MO, USA). Methanol, ethanol and hydrochloric acid were purchased from an Egyptian local chemical agent (Alexandria, Egypt) as commercial grade solutions.
Discussion
From Figure 1b, the resulting allowed bandgap range matches other literature which confirms the formation of active ceria with some quantity of tri-valiant cerium ions with corresponding O-vacancies [10,21]. From the fluorescence graphs in Figure 2, in the range of <1 wt % of ceria NPs, the fluorescence intensity peak increases with increasing the concentration due to more formed optical tri-valiant states of cerium ions with corresponding electron transitions of 5d-4f levels [22]. However, the increase of a higher ceria NP concentration above 1 wt % leads to a decrease in the fluorescence emission intensity peaks. This may be due to a static fluorescence quenching effect which can dominantly appear at higher ceria NP concentrations greater than 1 wt %, leading to this decrease in the fluorescence emission. From these experimental results, we can conclude that optimal concentrations of ceria NPs embedded in situ in PVA NFs that give a higher fluorescence intensity are less than 1 wt %.
As the optically effective materials are ceria NPs, the absorbance and bandgap curves in both Figure 4 and Figure 5 are slightly different in each method; however, all the resulting bandgap values are in the accepted ranges of active ceria NPs, around 3.5 eV, which was shown before for non-crosslinked nanofibers with embedded ceria. All of these experimental results indicate that optical properties of ceria NPs are not affected much by the crosslinking technique. However, it could be estimated that ceria nanoparticles are more optically active within the in situ embedding technique than in the spin-coating one, due to the smaller obtained direct allowed bandgap and the higher value of the fluorescence intensity peak. Figure 7b shows the reduction in the average nanofibers' mean diameter, compared to the non-esterified case, in range of 150 nm but with less porosity and a higher possibility of beads, compared to the non-esterified PVA electrospun nanofibers as shown earlier in Figure 5b. In FTIR spectroscopy as shown in Figure 8, most of the original peaks of both PVA and malic acid are not affected by adding ceria NPs. That gives evidence that the polymeric host of ceria keeps its original chemical bonds such as OH alcohol, free hydroxyl, C-H alkane, C-O carboxylic acid, ester and C=O carboxylic bonds.
Chemicals
All chemicals are used as received without further purification. Cerium (III) chloride heptahydrate (99.9%), Mowiol 10-98 Poly(vinyl alcohol), Malic acid are purchased from Sigma-Aldrich (St. Louis, MO, USA). Methanol, ethanol and hydrochloric acid were purchased from an Egyptian local chemical agent (Alexandria, Egypt) as commercial grade solutions.
Nanoparticles Synthesis
Undoped ceria nanoparticles are prepared using chemical precipitation technique similar to Chen et al., but with some modifications [23]. Cerium (III) chloride heptahydrate of 0.5 g is inserted into a beaker with adding 40 mL of distilled water, and the solution is stirred using a magnetic stirrer at rate of 500 rpm for 24 synthesis process. The solution is heated to 50˝C in a hot water bath for two hours with added 1.6 mL of ammonium hydroxide. Then, it is stirred for 22 h at room temperature. The long period of stirring fractures any remaining nanorods into nanoparticles. The solution is then centrifuged, washed with de-ionized water and ethyl alcohol to remove any unreacted cerium chloride and ammonia.
Polymers Preparation with Embedded Nanoparticles
A concentration of 10 wt % PVA solution is prepared by mixing 10 g PVA pellets with distilled water of 90 mL. The solution is heated to 100˝C for 30 min then it was stirred overnight. This paper introduces different methods of producing PVA nanofibers with embedded ceria NPs. First, ceria NPs with different weight percentages of (0.25%, 0.5%, 0.75%, 1%, 5%) are added in situ to the PVA solution. The mixture is stirred for 30 min before it is entering the electrospinning process. Further, it would be shown another way to add ceria as post-synthesis treatment by spin-coating on the electrospun nanofibers.
Electrospinning Process
The electrospinning setup consists of high voltage power supply (Spellman High Voltage Electronics corporation model CZE1000R (Hauppauge, New York, NY, USA), a syringe pump (NE1000-Single Syringe Pump, (New Era, Farmingdale, New York, NY, USA) which is used to regulate the pumping rate of polymer solution, a 5 mL plastic syringe with 18 gauge metallic needle to store the polymer solution, and a circular metallic collector of radius 10 cm covered with aluminum foil is used as a target. A schematic of electrospinning setup is shown in Figure 9. The voltage power supply is connected to the needle while the collector is grounded. The distance between the needle tip and the collector is fixed at 15 cm. The voltage difference between the needle and target is 25 kV, with a flow rate of the polymer solution at 2 mL/h for 30 min running time per sample. Undoped ceria nanoparticles are prepared using chemical precipitation technique similar to Chen et al., but with some modifications [23]. Cerium (III) chloride heptahydrate of 0.5 g is inserted into a beaker with adding 40 mL of distilled water, and the solution is stirred using a magnetic stirrer at rate of 500 rpm for 24 synthesis process. The solution is heated to 50 °C in a hot water bath for two hours with added 1.6 mL of ammonium hydroxide. Then, it is stirred for 22 h at room temperature. The long period of stirring fractures any remaining nanorods into nanoparticles. The solution is then centrifuged, washed with de-ionized water and ethyl alcohol to remove any unreacted cerium chloride and ammonia.
Polymers Preparation with Embedded Nanoparticles
A concentration of 10 wt % PVA solution is prepared by mixing 10 g PVA pellets with distilled water of 90 mL. The solution is heated to 100 °C for 30 min then it was stirred overnight. This paper introduces different methods of producing PVA nanofibers with embedded ceria NPs. First, ceria NPs with different weight percentages of (0.25%, 0.5%, 0.75%, 1%, 5%) are added in situ to the PVA solution. The mixture is stirred for 30 min before it is entering the electrospinning process. Further, it would be shown another way to add ceria as post-synthesis treatment by spin-coating on the electrospun nanofibers.
Electrospinning Process
The electrospinning setup consists of high voltage power supply (Spellman High Voltage Electronics corporation model CZE1000R (Hauppauge, New York, NY, USA), a syringe pump (NE1000-Single Syringe Pump, (New Era, Farmingdale, New York, NY, USA) which is used to regulate the pumping rate of polymer solution, a 5 mL plastic syringe with 18 gauge metallic needle to store the polymer solution, and a circular metallic collector of radius 10 cm covered with aluminum foil is used as a target. A schematic of electrospinning setup is shown in Figure 9. The voltage power supply is connected to the needle while the collector is grounded. The distance between the needle tip and the collector is fixed at 15 cm. The voltage difference between the needle and target is 25 kV, with a flow rate of the polymer solution at 2 mL/h for 30 min running time per sample.
Crosslinking Procedure
Vapor phase esterification process was done in oven on two subsequent steps similar to the procedure mentioned in Reference [24], but with some changes. In the first step, electrospun nanofibers of PVA only or PVA with embedded ceria were placed in a container along with a small amount of malic acid (1-2 g) with added few droplets of HCl. The container was sealed from the ambient moisture and placed in an oven at 80 °C for 15 min. Esterification was produced via heterogeneous reaction during the first stage of 15 min. In the second step, the sample was cured for
Crosslinking Procedure
Vapor phase esterification process was done in oven on two subsequent steps similar to the procedure mentioned in Reference [24], but with some changes. In the first step, electrospun nanofibers of PVA only or PVA with embedded ceria were placed in a container along with a small amount of malic acid (1-2 g) with added few droplets of HCl. The container was sealed from the ambient moisture and placed in an oven at 80˝C for 15 min. Esterification was produced via heterogeneous reaction during the first stage of 15 min. In the second step, the sample was cured for 20 min in an oven at 120˝C. Through the discussed esterification technique, the produced NFs become crosslinked and more resistive to solubility as a hydrophobic material. Then, we have ceria nanoparticles hosted by electrospun crosslinked PVA nanofibers in two forms. Firstly, ceria NPs were added in situ with the PVA solution before being electrospun and esterified. Secondly, ceria nanoparticles were added as a post-synthesis step on the crosslinked electrospun PVA-only nanofibers through spin-coater, MTI-100 model (Richmond, CA, USA), at a speed of 500 rpm.
Characterization of the Synthesized Nanocomposite
PVA nanofibers with embedded ceria NPs, whether in situ added or spin-coated, are optically characterized by measuring its optical absorbance, and fluorescence intensity curves. Optical absorbance in a wavelength within range from 300 to 700 nm was measured by using ultra violet-visible (UV-Vis) spectrophotometer of model (PG T92+) (Beijing, China). From absorbance curves, corresponding band gap of the formed nanocomposite can be determined, as will be discussed in next section. Fluorescence intensity measurements have been detected by a hand-made fluorescence spectroscopy setup, as shown in Figure 10. The used fluorescence setup is composed of UV light emitting diode (LED, Thorlab) (Newton, NJ, USA) with 430 nm excitation wavelength, Monochromator (Newport cornerstone 130) (Newport, Irvine, CA, USA) which was set to obtain fluorescence intensity at wavelength from 500 to 700 nm, Oriel photomultiplier tube (PMT) (Newport PMT77340) (Newport, Irvine, CA, USA) as a fluorescence intensity detector, and a power meter (Newport 1918-R) (Newport, Irvine, CA, USA) to display PMT detection readings. Fluorescence intensity was measured by positioning NFs solid sample holder inclined by 45˝between UV-LED and the input port of the monochromator, so that the input optical signal to the monochromator is perpendicular to the initial LED excitation signal for minimum scattering effect [10]. The output port of the monochromator was directly connected with PMT, which was directly connected to 1918-R power meter. 20 min in an oven at 120 °C. Through the discussed esterification technique, the produced NFs become crosslinked and more resistive to solubility as a hydrophobic material. Then, we have ceria nanoparticles hosted by electrospun crosslinked PVA nanofibers in two forms. Firstly, ceria NPs were added in situ with the PVA solution before being electrospun and esterified. Secondly, ceria nanoparticles were added as a post-synthesis step on the crosslinked electrospun PVA-only nanofibers through spin-coater, MTI-100 model (Richmond, CA, USA), at a speed of 500 rpm.
Characterization of the Synthesized Nanocomposite
PVA nanofibers with embedded ceria NPs, whether in situ added or spin-coated, are optically characterized by measuring its optical absorbance, and fluorescence intensity curves. Optical absorbance in a wavelength within range from 300 to 700 nm was measured by using ultra violetvisible (UV-Vis) spectrophotometer of model (PG T92+) (Beijing, China). From absorbance curves, corresponding band gap of the formed nanocomposite can be determined, as will be discussed in next section. Fluorescence intensity measurements have been detected by a hand-made fluorescence spectroscopy setup, as shown in Figure 10. The used fluorescence setup is composed of UV light emitting diode (LED, Thorlab) (Newton, NJ, USA) with 430 nm excitation wavelength, Monochromator (Newport cornerstone 130) (Newport, Irvine, CA, USA) which was set to obtain fluorescence intensity at wavelength from 500 to 700 nm, Oriel photomultiplier tube (PMT) (Newport PMT77340) (Newport, Irvine, CA, USA) as a fluorescence intensity detector, and a power meter (Newport 1918-R) (Newport, Irvine, CA, USA) to display PMT detection readings. Fluorescence intensity was measured by positioning NFs solid sample holder inclined by 45° between UV-LED and the input port of the monochromator, so that the input optical signal to the monochromator is perpendicular to the initial LED excitation signal for minimum scattering effect [10]. The output port of the monochromator was directly connected with PMT, which was directly connected to 1918-R power meter. The mean synthesized nanoparticle size was observed by transmission electron microscopy (TEM, JEOL) (Tokyo, Japan), with accelerating potential of 80 kV. Surface morphology and diameter size of electrospun nanofibers before and after esterification were investigated using scanning electron microscopy (FEI Quanta 200) (Hillsboro, OR, USA). After sputter-coating with gold, the fiber size distribution of randomly selected SEM micrograph was measured using Image-J software. The formed nanocomposites have been characterized using FTIR spectroscopy (Varian 670-IR) (Santa Clara, CA, USA).
Conclusions
This paper presents the study of adding the property of optical fluorescence to electrospun PVA nanofibers through embedding ceria nanoparticles using different techniques. In this work, ceria nanoparticles with a greater Ce 3+ ionization state have been added in situ to a PVA solution and then generated together as nanofibers via the electrospinning technique. In addition, to make the electrospun fiber hydrophobic mat, PVA nanofibers have been esterified using malic acid. In this case, The mean synthesized nanoparticle size was observed by transmission electron microscopy (TEM, JEOL) (Tokyo, Japan), with accelerating potential of 80 kV. Surface morphology and diameter size of electrospun nanofibers before and after esterification were investigated using scanning electron microscopy (FEI Quanta 200) (Hillsboro, OR, USA). After sputter-coating with gold, the fiber size distribution of randomly selected SEM micrograph was measured using Image-J software. The formed nanocomposites have been characterized using FTIR spectroscopy (Varian 670-IR) (Santa Clara, CA, USA).
Conclusions
This paper presents the study of adding the property of optical fluorescence to electrospun PVA nanofibers through embedding ceria nanoparticles using different techniques. In this work, ceria nanoparticles with a greater Ce 3+ ionization state have been added in situ to a PVA solution and then generated together as nanofibers via the electrospinning technique. In addition, to make the electrospun fiber hydrophobic mat, PVA nanofibers have been esterified using malic acid. In this case, ceria nanoparticles are added, whether in situ or on the esterified nanofibers as a post-synthesis treatment using spin-coating. In all synthesized nanofibers, the active ceria nanoparticles still keep their direct bandgap value around 3.5 V, which indicates that it is still optically active. All the synthesized nanofibers are shown to be visibly fluorescent under 430 nm optical excitation. The crosslinked nanofibers using esterification show a smaller mean diameter with greater number of beads compared to the non-esterified fibers. This new optical biodegradable nanofiber mat could be applicable in a wide variety of applications, including environmental sensors and cancer treatment. | 6,623.4 | 2016-06-01T00:00:00.000 | [
"Materials Science"
] |
A DSM-Based CCPM-MPL Representation Method for Project Scheduling under Rework Scenarios
Rework risks caused by information flow interactions have become a major challenge in project scheduling. To deal with this challenge, we propose a model integrating the critical chain project management method, design structure matrix method, and max-plus method. Our model uses a start-to-start relationship of activities instead of the traditional finish-to-start relationship, which also allows overlaps between activities. We improve the accuracy of the rework safety time in two ways: (1) the overall overlapping effect is taken into consideration when calculating the rework time of an activity arising from the information flow interaction of its multiple predecessors overlapped with it; (2) the rework time arising from activity overlaps, the first rework time, and the second rework time are calculated as components of the rework safety time in our model, while the last one is ignored in traditional methods. Furthermore, the accuracy of time buffers is improved based on the improved rework safety time. Finally, we design the max-plus method to generate project schedules and appropriately sized time buffers. The empirical results show that the project schedule generated by the proposed method has a higher on-time completion probability, as well as more appropriately sized project buffers.
Introduction
Rework is defined as "the unnecessary effort of redoing a process or an activity implemented incorrectly the first time." [1] Regarded as inevitable and epidemic in projects [2], rework has become a major impediment that adversely affects project performance. Many studies have shown the impact of rework risks on cost overrun and project delay. Love investigated 161 construction projects in Australia and concluded that rework was the main cause of overrun cost, which can reach 52% increased cost on average [3]. Similarly, a study by Barber et al. reported that rework costs can be as high as 23% of contract values [4]. Hwang et al. surveyed 381 projects in Singapore and found that more than 80% of corporations and 59% of projects have experienced client-related rework, resulting in a cost increase of 7.1% and a schedule delay of 3.3 weeks on average [5].
Considerable research studies have explored the causes of rework as it significantly impacts project performance.
Love argued that design and construction errors, as well as omissions and changes, are the main reasons of rework [3]. Ye et al. proposed several other reasons for rework involving contract management and management scope, etc. [6]. However, the most widely held view is that information flow interactions between activities should be responsible for rework [7,8]. To reveal and model rework relationships among activities and other factors, several approaches have been developed such as cognitive mapping (CM) [9,10] and system dynamics (SD) [11,12].
Despite the awareness of the causes of rework risks, tools to manage rework and mitigate its adverse impact are still insufficient. Traditional project scheduling tools, such as the critical path method (CPM), project evaluation and review technique (PERT), and the Gantt chart (GC), have been widely used in project management for decades but are not applicable under rework scenarios, as they cannot deal with rework appropriately [13,14]. For example, in the CPM network, the forward pass assumes that the maximum value is propagated to as its successor start, and the backward pass assumes that it finishes as late as possible [15]. In this way, the so-called time difference can be treated as float to deal with rework or delays. Yet, such approach can only deal with part of delays. Due to these limitations, rework has not been managed or controlled effectively in practice, which commonly results in low efficiency of traditional project schedule management tools. is brings challenges for both practice and theory, thus appealing for novel methods to deal with such issue and predict the project schedule with consideration of rework scenarios. erefore, the need exists for a new project scheduling approach to model rework relationships and establish reliable schedules under rework scenarios for effective project schedule prediction and control. e design structure matrix (DSM), first proposed by Steward, proved to be a powerful tool in modeling rework relationships [16]. A DSM-based discrete-event Monte Carlo simulation approach was then presented by Browning and Eppinger [17] and expanded in later studies [18][19][20] to predict project duration. However, this approach is highly computationally demanding and time-consuming, also inapplicable for large projects. It is also unreasonable that an activity could not begin until all its precedent activities have been completed in the simulation process, which greatly limits its application. e critical chain project management (CCPM) method, first proposed by Rand and Goldratt [21], which applies the theory of constraints to project management, is another emerging and potential tool to manage information uncertainty by setting resource and time buffers in project schedules. Based on prior studies, Zhang et al. first combined CCPM with DSM to model rework relationships between activities and to address rework risks by setting time buffers that consider rework safety times [8]. However, this method calculates the rework safety time between an activity and its each predecessor, respectively. e overall effect of overlaps between an activity and its multiple predecessors on rework safety times as well as a learning curve is not considered, which yields the total rework safety time of an activity longer than required in practice. In addition, the second rework times are overlooked when calculating rework safety times. e activities in the generated project schedule meet FTS � 0 relationship, which cannot effectively reflect the activity overlaps. Ma et al. developed a critical chain design structure matrix method to calculate the rework time [22]. However, all the rework times between an activity and its predecessors are added, which created a big project buffer. erefore, further meaningful research is needed to integrate and refine current CCPM methods to better address rework risks. To sum, this study aims to predict a more accurate project schedule with consideration of rework scenarios and provide informative insights for practice.
In this study, we develop a quantitative model integrating a DSM into the CCPM-max-plus linear (CCPM-MPL) framework by considering various rework factors. e model considers the second rework time and the overall overlapping effect between a critical activity and its multiple predecessors when calculating rework safety time. e formulas in the max-plus method to calculate time buffers using the root square error method (RSEM) are built and adjusted to generate project schedules with a start-to-start (STS) relationship. Project schedules generated with the proposed method absorb rework risks and other uncertainties, thereby enabling project practitioners to predict project duration more accurately. e remainder of this paper is organized as follows. Section 2 reviews the relevant literature. Section 3 explains the proposed method in detail, including the project scheduling process with the max-plus method, the calculation model of rework safety times, and a refined buffer determination method. Section 4 evaluates the proposed method using an empirical analysis.Section 5 concludes this paper.
Buffer Determination in the CCPM.
Ever since the CCPM method was first proposed, it has not only been increasingly applied in project scheduling but also been continually extended and refined [23]. ere are three main types of buffers-the resource buffer (RB), the feeding buffer (FB), and the project buffer (PB)-which are distinguished by their respective locations and functions. RBs, which function as warnings and consume no time, are placed before critical activities to protect the chain from the risk of critical resource tightness [24]. FBs are placed at the convergence of the critical chain and noncritical chains to protect the chain from activity variants on noncritical chains [25]. PBs are placed at the end of schedules to protect against exceeding project durations [26]. e determination of buffer sizes depends on various factors, such as risk preferences of the project team, project complexity, personnel, and equipment capacities [27]. e cut-and-paste (CAP) method and RSEM are widely used among a number of buffer-sizing methods. e CAP method retains half of activity duration as safety time, estimates half of the sum of safety times on the critical chain to be the PB, and that of the noncritical chain to be the FB in order to guarantee project duration. However, as this procedure is linear, the buffer size increases linearly, which may cause an unnecessarily large amount of protection [28]. In order to improve the effectiveness of the CAP method, Shi and Gong proposed the RSEM, which is based on the large number law and the central limit theorem [29]. e RSEM calculates the square root of the total squares of the difference between the safety estimate time and the average estimate time for each activity on the critical chain as the PB and that on the noncritical chain as the FB. Zhang et al. argued that the RSEM outperformed the CAP method for large projects because the former was less affected by critical chain length [8]. Furthermore, considering that various project characteristics and attributes could influence buffer sizes, several modified buffer-sizing methods have been proposed. Tukel et al. [30] proposed the adaptive procedure with resource tightness (APRT) and the adaptive procedure with density (APD) methods to determine time buffers-both of which take resource tightness and network complexity into consideration. Shi et al. further calculated resource tightness using a fuzzy method and determined time buffers taking into account network complexity and risk project managers' preference [31]. In addition, other researchers have also developed different kinds of buffer-sizing methods that consider other project attributes, including degree of project uncertainty [32], project network characteristics [33], and project teams' risk preferences [25]. In recent years, there has been some research focused on objective and quantitative float/buffer allocation to critical activities. With inspiration from voting theory, Su et al. created and validated a functioning method for float allocation that fairly mitigates risk [15]. e same authors further employed voting methods to apportion contract float to correlated activities in network schedules to projects from delays [34]. Table 1 lists the aforementioned buffer calculation methods.
Although great advances have been made on buffersizing methods, they all assume that project activities are independent and neglect the codependency of activities, as well as the effect of rework risks arising from information flow interactions on project duration. ese factors resulted in smaller estimated project durations than necessary [8]. As previously mentioned, Zhang et al. proposed a refined DSMbased CCPM method, which took the codependency of activities and rework risks into account when determining buffer sizes [8], but this method still has several limitations on modeling rework and determining rework safety times, which deserves further research.
DSM as a Process Management
Technique. DSM as a modeling tool for process architecture has been widely accepted and adopted in different application areas, such as project scheduling and product design [22,38]. Process DSM uses matrix formation to visualize and analyze activities and logic relationships or dependencies between activities that include sequential, parallel, and coupled activities. In process DSM, an off-diagonal element A ij represents information flow from activity j to activity i or dependency between the two activities. For a binary DSM, if the dependency between activity i and activity j exists, A ij is often filled with symbol "×" or "·"; otherwise, it will be null [39]. However, binary DSMs cannot effectively express the strength of information flows or dependencies between activities and are therefore limited to a qualitative analysis of process architecture. In order to overcome the deficiency, a numerical DSM, in which the bigger value of elements means a higher degree of dependency between activities, was developed to express dependencies between activities more accurately in order to quantitatively analyze process architecture [40]. Diagonal elements A ii are used to represent a certain activity characteristic, such as duration of activity i, or are simply empty in most cases. e matrix-based expression of DSMs is applicable for rework analysis, as it can clearly display rework-related, information flow interactivity [38]. Regarding DSMs, much related research has been performed to address rework risks. Browning and Eppinger first proposed a DSM-based discrete-event Monte Carlo simulation model to estimate project duration and cost [17]. ey argued that processes with appropriate overlapping and rework, instead of processes with the fewest feedback marks in the DSM, may lead to less project durations and reduced costs [38]. is framework was adopted by a number of subsequent studies and extended by considering additional constraints in process scheduling. For instance, Cho and Eppinger proposed a heuristic to schedule stochastic and resource-constrained projects under rework scenarios [18]. Yassine et al. proposed two kinds of genetic algorithm-based approaches to find near-optimal project durations for resource-constrained, multiple-project scheduling problems [39]. Overall, prior studies have explored the possibility and efficiency of using a DSM in project scheduling to predict and control rework and have reported significant progress. However, the DSM-based expression is hard to transfer to a traditional network diagram, which is commonly used in practice by project managers [41]. To apply a DSM effectively in project planning and scheduling, this paper developed an accurate DSM-based calculation model of rework duration and a DSM-based CCPM-MPL representation method.
e Effect of Overlaps between Activities on Rework Times.
Zhang et al. introduced the concept of rework safety time, which is defined as the safety time caused by rework, to deal with rework risk [8]. e value of rework safety time is the total expected rework time of the activity. e initial safety time determined without considering rework is called activity safety time. e sum of activity safety time and rework safety time is called the total safety time.
Project teams face complex situations when an activity may be overlapped with multiple precedent activities, and information transmission from any one of them may cause its rework and affect its rework time. In prior studies [8,18], the rework times caused by information transmission from its multiple predecessors were simply summed up to be the default rework amount. e calculation method is highly questionable because the accumulation of rework times may exceed the longest individual overlap time. Such overlap time should be the upper limit of rework time and occurs in the worst scenario in which the completed work has to be reworked completely. In order to relieve multiple predecessors' effect on rework calculation times, a formula developed by Dehghan is introduced and explained in the following section [42].
Furthermore, reworking an activity requires less effort compared to the first time in most cases because of the learning curve effect and the activity participants' adaption [17]; learning curves are utilized to accurately measure effects of rework risks on activity duration. Further study by Osborne found that reworked activities exhibited little further learning-curve effect after an initial rework [43]. erefore, the learning curve for each activity is considered as a fixed value when calculating the rework safety time; it means that a fixed proportion of original activity duration will be taken for rework execution whether it is the first or subsequent rework.
Methodology
In this section, we present project scheduling using a startto-start relationship between activities with the max-plus method first. is part is the integration of CCPM, DSM, and max-plus method. DSM is used to represent the logic of activities, and the max-plus method is employed to decouple relationships among these activities to further determine the critical chain of the project. Based on these efforts, we then build a model that considers the learning curve effects and overlaps with multiple predecessors to calculate rework safety times accurately. Finally, we propose a refined buffer determination method that is expressed in max-plus algebra to generate appropriately sized time buffers and protect project schedules. Specifically, notations used in this section are listed in Table 2.
Project Scheduling with a Max-Plus Method.
Four major operators of max-plus algebra are defined as follows. For two (1) RSEM [36] Buffer � ������������ n i�1 (S i − A i ) 2 S i represents the duration with 90% completion probability A i represents the duration with 50% completion probability APRT [30] r(i, q) represents resource requirements of i for q Rav(q) represents the level of resource acquisition of q �������� � n i�1 VAR i represents the sum of duration variance of activity in the critical chain VAR i TOTPRE represents the number of activity relationships in the chain NUMTASK represents the number of activities in the network Buffer calculation considering resource tightness [37] Buffer �
�������������������� �
α i represents the physical resource tightness β i represents the information resource tightness σ iy represents the standard deviation of activities Buffer calculation considering resource tightness, network complexity, and risk coefficient [31] Buffer In addition, two basic unit elements for these operators are given by ε( ≡ − ∞) and e( ≡ 0), and ⊗ and ⊙ are prior to ⊕ and ∧. Previous studies on the CCPM-MPL framework have assumed that an activity could not begin until all its precedent activities were completed. at means, only the FTS relationship is taken into consideration in project scheduling, and no overlapping exists between activities. However, this assumption is highly questionable, considering the fact that precedent constraints between activities include, but are not limited to, the FTS relationship, and overlapping between activities is prevalent in actual projects [44]. erefore, multiple relationships need to be considered in project scheduling and integrated into the CCPM-MPL framework to develop project schedules closer to actual circumstances.
Our work takes the STS relationship between activities into account by transforming it into a finish-tofinish (FTF) relationship, which is more suitable for embedment in a max-plus method. As shown in Figure 1(a), a STS relationship exists between activities A and B with the value of a, which means that activity B cannot start until activity A has started for at least a days, at which time a can be transformed into a corresponding value of the FTF relationship which is b, as shown in Figure 1(b). Let matrix F ss represent the STS relationship between activities (see Figure 2(a)) and matrix F represent the corresponding FTF relationship (see Figure 2(b)). If the FTF relationship exists, the element is then filled with the corresponding value; otherwise, it will be filled with ε. e transforming process can be expressed as follows: where D is the matrix of activity durations and filled by the diagonal elements with each activity's duration and the nondiagonal elements with negative infinity. e CCPM-MPL framework adopted in this paper has been modified to make it more applicable to projects. First, as we consider the characteristics of projects, including uniqueness and one-time process, the start time of a certain project activity is regarded to be unaffected by completion of the same kind of activities in other projects previously completed. Second, for simplicity, external inputs and external outputs are not included in this study. Project scheduling processes in the CCPM-MPL representation are given in the following. e earliest finish time for each activity is calculated as where (F) * � e n ⊕ F⊕ · · · ⊕(F) l− 1 . e (i, j)th element of matrix (F) * means the largest deviation of the finish time of activities j and i if activities i and j are on one or more paths at the same time; otherwise, it is ε. For simplicity, we set x 0 � (e, e . . . , e) T √√√√ √ √√√√ √ n , which is an n × 1 matrix and means the beginning of activities in the project will not be affected by other projects with e representing zero. e earliest start time for each activity is calculated as the difference between its earliest finish time and duration according to the following equation: e corresponding output time, denoted as the maximum of the earliest finish time for all activities, is given by where C 0 � (e, e, . . . , e) √√√√ √√√√ n , which is a 1 × n matrix with e representing zero. e latest start time for each activity is calculated as the difference between the project output time and the sum duration of activity i, as well as the duration of subsequent critical activities, according to the following equation: e total floats in all activities are obtained as e critical chain is then determined by the set of activities α that satisfy α[TF] α � 0 .
Calculation of Rework Safety Times.
We define the rework safety time as the rework time caused by information flow interactions, including rework time arising from overlaps with precedent activities, the first rework times, and the second rework times. erefore, the rework safety time of an activity is the sum of the three kinds of rework times above. Ignoring rework times when estimating the safety times may lead to smaller time buffers than necessary.
Effects of Information Flow Interactions on Rework
Risks. As previously mentioned, information uncertainties are the main reasons for rework [7]. Such uncertain information may transmit or feedback from one activity to another, creating information flow. Information flow interactions may cause rework risks in projects and consequently cost more rework time. Although information transmission and feedback exist in both overlapping and nonoverlapping activities, information flow interactions in these two conditions are significantly different. For nonoverlapping activities, rework risks mainly arise from information feedback from successive activities and information transmission from precedent activities after completion of their rework. As depicted in Figure 3, A is a precedent activity of , and together, they perform sequentially according to relationships. After C has been completed, the performance information of C will be generated and transmitted from C to A, which may result in rework of A. Such a case is known as the first rework [45]. e first rework time is shown as the shadow area in Figure 3(a). After the completion of rework, A transmits or feeds back some revised information to B, which may cause rework of B. Such cases are called the second rework. e second rework times are shown as the shadow areas in Figures 3(b) and 3(c). Both the first rework and the second rework should be taken into consideration in project scheduling to avoid that the planned project duration is much less than the actual duration.
In addition to the aforementioned two reasons, information flow interactions between overlapping activities are another major cause of rework. It is noteworthy that only overlapping-dependent activities can generate this type of rework risk for projects. Two activities overlapping without a dependent relationship will lead to no rework. As shown in Figure 4, t A represents the completed time of activity A before A transmits preliminary information to its successive activity B. t B represents the execution time of activity B when B receives information from A, while t 1 represents the duration in which B continually receives information from A and transfers feedback information to A simultaneously until A is complete. Hence, the overlap between A and B is represented as the sum of t B and t 1 . e rework duration arising from information flow interactions between overlapping activities can be represented as RT 0 , which is a function of overlapping duration-i.e., t A and t 1 .
Determining Rework Times Arising from Activity
Overlaps. In order to quantify the effects of rework risks on activity and project duration, four matrices are introduced to Figure 1: (a) Activities with the STS relationship; (b) the transformation of STS to the FTF relationship. To calculate rework times between overlapping activities, the overlap times (O ij ) need to be determined first. e formula is shown as follows: Matrix F can then be achieved through equation (3), and the earliest start times of activities can be determined through equation (5). Based on the earliest start times of activities, the overlap duration (O ij ) between activities i and j can be obtained by erefore, the learning curve for each activity is modeled as a fixed value in this research, which means that a fixed proportion of original activity duration will be taken for rework execution whether it is the first or subsequent rework.
Based on the aforementioned definitions, the rework duration (RT 0 ) in Figure 4 can be calculated by where LC i represents the learning curve of activity i. Figure 5 shows a successor activity i with all of its entire predecessors (j � 1 to n), and each of the predecessors has an overlapping period with activity i (O ij : j � 1 to n). [RT 0 ] ij refers to the rework time of activity i caused by its predecessor j, and O i , which satisfies O i � max(O i1 , O i2 , . . . , O in ), represents the maximum overlapping period of activity i with all its predecessors, while [RT 0 ] i represents the rework time of activity i caused by its predecessors 1 to n.
According to probability theory, if two events (A and B) are independent of each other and can occur simultaneously, the occurrence probability of the union of the two events, denoted as P(A ∪ B), can be calculated by It should be noted that the probability principles are still valid when more than two events exist as well.
Since an activity can overlap with multiple predecessors simultaneously and the overlaps are not mutually exclusive, all of the reworks caused by individual overlaps between activity i and its different predecessors can be considered as nearly independent of each other. According to equation (11), the rework time resulting from individual overlaps can be calculated by is replaced with X ij to represent the calculation process more concisely. For any overlapping time (O ij ), at least X ij ′ exists that satisfies erefore, the calculation process of X ij ′ can be expressed with respect to individual rework duration ([RT 0 ] ij ) and the maximum individual overlapping period (O i ) as follows: e rework time can then be expressed as a function of the maximum individual overlapping period (O i ) and the union of X ij ′ (j � 1, 2, . . . , n), that is, Figure 4: Information flow interaction between overlapping activities.
Advances in Civil Engineering
When j � 2, according to equations (12) and (13), the rework time is calculated by Similar equations can be extended from equation (14) for the calculation of rework times in the case of j � 3, 4, . . . , n. When j � n, the calculation process is given as e rework time is more reasonable than that determined by traditional methods because of the following three reasons. First, the rework time does not exceed the longest individual overlapping period. Second, the result will never be less than the longest individual rework time, and third, the rework time will increase as the number of precedent activities increases.
Determining the Times of the First and the Second
Reworks. Based on the previously defined parameters, including rework probability ([RP] ij ), rework intensity ([RI] ij ), learning curve (LC i ), and activity duration ([D] ii ), the first rework times of activity i ([RT 1 ] ij ) caused by information feedback from activity j, shown as the shadow in Figure 3(a), can be obtained by [17] e total time of the first rework of activity i ([RT] i ) caused by information feedback from all its successive activities can be obtained by Similarly, the total time of the second rework of activity i ([RT 2 ] i ) caused by information feedback from all of its successive activities, shown as the shadow in Figures 3(b) and 3(c), can be obtained by ] i ), whose calculation process is shown as follows:
Activity Sequence Optimization to Reduce Information
Flow Interactions. As previously mentioned, information flow interactions cause the codependency between activity durations and bring about rework risks. e rework times of activities are highly correlated to the intensity of information flow interactions. Noticeably, compared to information transmission, information feedback is more likely to lead to a higher intensity of information flow interactions. erefore, we can reduce the intensity of information flow interactions and activity rework times by shortening the distance of the information feedback path through the optimization of the activity sequence [46]. It is introduced as the objective function to optimize the activity sequence: where [RP] i,j represents the (i, j)th value extracted from the rework probability matrix.
Determining Buffer Sizes with the Refined RSEM in a Max-Plus Representation.
In this section, we improve the traditional RSEM by adopting more accurately calculated rework safety times and express them in max-plus algebra to make the calculation process more effective.
Determining Project Buffers.
According to the central limit theorem, the activity safety times and rework safety times are first allocated at the end of the project, and the refined RSEM is then used to determine the project buffer. e principle of the refined RSEM is shown in Figure 6, in which d i represents the estimated duration of activity i with a 50% completion rate, and ST i denotes the total safety time of activity i. ST i is composed of the rework safety time (ST ri ) and the activity safety time (ST 0i ), which has a 90% completion rate. e project buffer can then be calculated with refined RSEM by To calculate PB effectively, a virtual activity, two vectors, and a matrix are defined, denoted as V 0 , ST 2 , ST 2 p , and T p , respectively. Virtual activity V 0 is added at the end of schedules and consumes no time. An assumption is made that a relationship exists that satisfies FTF � 0 between V 0 and each other activity. e ith element of ST 2 , denoted as [ST 2 ] i , means the square of total safety time of activity i, and the corresponding element of activity V 0 is e. Vector ST 2 p is transformed from ST 2 by else.
⎧ ⎨ ⎩ (24)
To better interpret matrix T p , a simple case example is shown in Figure 7. It is assumed that the square of total safety times of activities is [a, b, c], and activity B is not on the critical chain. We then achieve the vector ST 2 p as [a, e, c, e] and matrix T p . e PB can then be represented in max-plus algebra as where C T 0 � (e, e, . . . , e) √√√√ √√√√ n+1 , which is a 1 × (n + 1) matrix with e representing zero.
Determining
In preparation of the location and size of the feeding buffer, another two vectors are introduced, where α and β are the set of activities on the critical chain and those on the noncritical chain, respectively. Moreover, an adjacency matrix R is transformed from matrix F by Vector v ′ is then introduced to locate the feeding buffers as follows: where R βα is the adjacency matrix, which represents transitions from noncritical activities to critical ones, and can be obtained by R βα � diag (w) ⊗ R ⊗ diag (v). λ represents the noncritical activity, one of whose successors is critical activity. erefore, a feeding buffer should be inserted behind the activity c. e feeding buffer can then be determined by square rooting of the sum of [ST 2 ] η , where η is the set of activities on a certain noncritical chain. e formula is expressed in max-plus algebra as Advances in Civil Engineering 9 Furthermore, due to the constraint that the feeding buffer should not exceed its total float, the sizes of the feeding buffers should be adjusted by
Case Study
is section presents an empirical analysis to validate the feasibility and effectiveness of the proposed method on addressing rework risks in project schedule management. Section 4.1 presents the project data used in this study. Section 4.2 describes the implementation process of the proposed method in detail, and Section 4.3 compares the schedule generated by the proposed method with that generated by traditional CCPM methods.
Project Dataset.
e case used for the empirical analysis was derived from a feasibility study of a modular real estate construction first introduced by Sullivan [47] and further supplemented by Eppinger et al. [41]. It was a typical project with large-scale rework relationships and overlaps. e data were collected through interviews of individual team members and the group as a whole, and interviews to identify the information exchanges required to execute each task and to better understand how the tasks were completed. e project data are shown in Figures 8(a)-8(d) and Table 3. e figure represents the initial predecessor time-factor matrix P 0 , the initial successor time-factor matrix S 0 , the initial rework probability matrix RP 0 , and the initial rework impact matrix RI 0 , respectively. Table 3 shows activities in the case and corresponding activity parameters including an optimistic duration completion rate of 90%, a pessimistic duration completion rate of 50%, and the most likely duration and learning curves for each activity.
Implementation of the Proposed Method.
e case was implemented with the proposed method, and the steps are shown in Figure 9 and described in detail as follows: Step 1: optimize the activity sequence. A genetic algorithm (GA), which has proven to be a useful tool in addressing complex project scheduling problems by a number of prior studies [48], was developed in MATLAB to address the activity-sequencing problem. Specially, according to previous research [41], the parameters of the GA including population size, number of generations, crossover probability, and the mutation probability were set as 50, 150, 0.95, and 0.08, respectively. Equation (21) was used as an objective function to generate a near-optimal activity sequence. Figure 10 shows the GA convergence process and Step 2: update matrices P 0 , S 0 , RP 0 , and RI 0 into matrices P, S, RP, and RI according to the new activity sequence. e initial elements in the matrices were kept unchanged and transferred to the new places in the matrices in which the row and column represent corresponding original activity, respectively.
Step 3: generate matrices D and R. Matrix D was generated by filling the diagonal elements with a corresponding optimistic activity duration completion rate of 50%, as shown in Figure 11(a). Matrix R was derived from matrix P or S by replacing all of the nonnull elements with e and all of the null elements and upper-triangle elements with ε, as shown in Figure 11(b).
Step 4: generate matrices F ss and F. Matrix F ss was derived from matrices P, S, and D according to equation (9). Matrix F was derived from matrices F ss and D according to equation (3).
Step 5: determine the time parameters of activities including x + E , x − E , and x − L , as well as the critical chain and its length. x + E , x − E , and x − L were calculated according to equations (4), (5), and (7) based on matrices F and D. e total float vector TF was generated by equation (8) Advances in Civil Engineering critical chain was calculated by equation (6), with a result of 68.67 days.
Step 6: determine overlap durations between activities and rework safety times. e overlap durations between activities were obtained by equation (10) based on the vector x − E and matrix D. Rework safety times including rework times arising from the activity overlap, the first rework times, and the second rework times were determined by equations (11) [8]. It is noted that the comparison is based on the optimized sequence of the activities. Previously, to access the effectiveness of the proposed methods in project scheduling under the rework scenarios, the Monte Carlo simulation approach was executed 500 times in the case project in which the duration of each project activity adhered to a beta distribution. Figure 12 shows the simulated actual project durations. While all three methods employed the CCPM-MPL representation method proposed in the paper to generate project schedules, they differed in terms of determination of project buffer sizes. e original RSEM, as previously mentioned, disregarded rework safety times. Zhang's method overcame this problem, but overlooked the effects of the learning curve, overlapping with multiple predecessors, and the second rework on calculation results of rework safety times when determining the project buffer. Our model further factored in these considerations to generate a more accurate project buffer and reliable project schedule. e project buffers and schedules obtained by these three methods and comparison results are presented in Table 4 and Figure 13. e results showed that the estimated project schedule durations generated with Zhang's method and the original RSEM were 104.25 days, including the project buffer of 35.58 days, and 102.11 days including the project buffer of 33.43 days, respectively. When the estimated project durations were superimposed in the simulated project durations using the Monte Carlo approach as depicted in Figure 13, it can be seen that the on-time completion probabilities were 50.40% and 34.60%, respectively. e results show that the schedule generated with the proposed method, which had an on-time completion probability of 85.40%, significantly outperformed schedules generated with the other two methods in ensuring on-time completion probability of the case project.
Furthermore, the proposed method outperformed Zhang's method and the original RSEM by providing an appropriately sized project buffer with a proper buffer consumption rate on average. As illustrated in Table 4, the buffer consumption rate of schedules generated with the proposed method was 89.39%, while that of the other two methods was 100.57% and 107.04%, respectively. e oversized buffer consumption rate means that, in many cases, the calculated project buffer is undersized and unable to protect the project from schedule delay.
Conclusion
In this paper, through an integration of the DSM and CCPM methods, as well as a max-plus method, a new project scheduling method was developed to generate schedules of projects under rework scenarios. Accurate rework safety times of activities were calculated by factoring in rework time arising from activity overlaps, the first rework time, and the second rework time. Time buffers were then determined with the RSEM considering the rework risk. e refined max-plus method was utilized to transform the logic ties, rework relationships, and other parameters of the activity into simple matrix operations, and such parameters are uniformly calculated by the maxplus method to realize the effective scheduling of the project. Formulas in the max-plus algebra were created to calculate buffers with the refined RSEM and generate project schedules with the STS relationship between activities. e empirical results showed that project schedules generated with the proposed method have appropriately sized project buffers and can ensure a higher on-time completion probability compared to those generated with traditional CCPM methods. e paper contributes to the project schedule management research and practice in the following ways. 14 Advances in Civil Engineering We have greatly extended the research framework of the CCPM-MPL representation method. e original CCPM-MPL framework was based on the logical tie in that FTS is equal to 0, and we developed the formula applicability to the situation that STS is not equal to 0. e original CCPM-MPL framework employed the C&PM method to calculate times buffers, and we designed the formula expressed in max-plus algebra for calculation of time buffers with the RSEM and adjusted the max-plus method to generate project schedules with the STS relationship existing between activities. Our model considers the learning curve effects and the overall overlapping effect to calculate rework safety times accurately. DSMs have been integrated into the framework, enabling the generated project schedules to deal with rework risks effectively. We also developed an accurate calculation model of rework safety times by factoring various rework factors and improved the calculation accuracy of rework time.
e proposed method is a promising and powerful tool for project practitioners. Project schedules generated with this proposed method can absorb rework risks and various other uncertainties, enabling practitioners to predict project duration accurately and have sufficient time to deal with rework risks during the project process.
ere also exist several limitations of this study. Considering little attention to the STS relationship in the literature as well as the potential to incorporating overlaps of the STS relationship, this study makes an attempt to focus on the STS relationship, yet we also ignore other types of relationship, such as the STF relationship.
ere are many directions for future research efforts regarding the problem investigated in the current paper. Resource conflict continues to remain a critical challenge in project scheduling and deserves future research by extending the current CCPM-MPL framework. Additionally, the effect of activity overlap on the calculation of activity safety times was not considered when determining the project buffer and feeding buffers. is topic also merits future research. Besides, the current study validates the proposed method with a case study, and further application of this method should be investigated in different types of project and scenario.
Data Availability e data generated or analyzed during the study are available from the corresponding author upon request. Information about the journal's data-sharing policy can be found at http://ascelibrary.org/doi/10.1061/(ASCE)CO.1943-7862. 0001263.
Disclosure
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSFC.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 9,655.8 | 2021-01-21T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Rapid and selective concentration of bacteria, viruses, and proteins using alternating current signal superimposition on two coplanar electrodes
Dielectrophoresis (DEP) is usually effective close to the electrode surface. Several techniques have been developed to overcome its drawbacks and to enhance dielectrophoretic particle capture. Here we present a simple technique of superimposing alternating current DEP (high-frequency signals) and electroosmosis (EO; low-frequency signals) between two coplanar electrodes (gap: 25 μm) using a lab-made voltage adder for rapid and selective concentration of bacteria, viruses, and proteins, where we controlled the voltages and frequencies of DEP and EO separately. This signal superimposition technique enhanced bacterial capture (Escherichia coli K-12 against 1-μm-diameter polystyrene beads) more selectively (>99%) and rapidly (~30 s) at lower DEP (5 Vpp) and EO (1.2 Vpp) potentials than those used in the conventional DEP capture studies. Nanometer-sized MS2 viruses and troponin I antibody proteins were also concentrated using the superimposed signals, and significantly more MS2 and cTnI-Ab were captured using the superimposed signals than the DEP (10 Vpp) or EO (2 Vpp) signals alone (p < 0.035) between the two coplanar electrodes and at a short exposure time (1 min). This technique has several advantages, such as simplicity and low cost of electrode fabrication, rapid and large collection without electrolysis.
Dielectrophoresis (DEP) refers to the movement of polarizable particles in a non-uniform electric field 1,2 . This technique is an effective means of manipulating a specific type of biological particle, for example, particular species 3 , size 4 , or life state 5 in a heterogeneous particle mixture 6 . Coplanar electrodes such as interdigitated electrodes have been widely used to generate DEP due to their simple fabrication and ease of analysis. However, the DEP force over such electrodes generally decreases exponentially with the height above the electrode surface 7 ; hence it is usually effective close to the electrode surface 8,9 . Furthermore, it is not easy to manipulate nanometer-sized biological particles, such as viruses and proteins, rapidly by using DEP because the dielectrophoretic mobility decreases with the square of the particle diameter 6 .
On the contrary, alternating current (AC) electroosmosis (EO) is fluid motion induced by electrode polarization when AC electric potentials are applied to planar microelectrodes at intermediate characteristic frequencies 10 . This technique has been applied in several fluidic applications, such as micropumps with arrays of asymmetric electrodes [11][12][13] and micromixers for chemical species and electrolytes [14][15][16] with low applied electric potentials. As AC EO is exerted on fluids rather than particles, the use of AC EO alone may be limited in several applications such as sorting, separation, selective concentration, and focusing based on their electrical properties.
In this regard, DEP and EO need to be combined to enable the selective and rapid concentration of particles far from the electrodes as well as near the electrodes onto a particular spot, such as a sensing element, thereby increasing the sensitivity of a sensor to biological particles such as bacteria, proteins, and viruses 17,18 . Few studies have been conducted using both EO and DEP on planar electrodes; in those studies, two electrodes generating ScIENTIfIc RepoRtS | (2018) 8:14942 | DOI: 10.1038/s41598-018-33329-7 DEP were implemented in the gaps between two outer electrodes inducing EO 19,20 . Therefore, the shapes and sizes of the planar electrodes were limited, and more care needs to be taken to avoid electrical shorts between the electrodes.
Other studies in which a pair of sinusoidal signals between two planar electrodes were used have also been reported on; in those cases, both DEP and EO needed to occur at the same AC frequency 17,[21][22][23][24] or to be generated in alternating time intervals 25 (Table 1). However, AC EO generally occurs when the frequency is less than a few kHz 26 while DEP capture usually works effectively in higher frequencies, for example, kHz or MHz regions 27,28 ; therefore, applying a pair of sinusoidal signals to generate both DEP and EO may not work in many cases. Moreover, there are risks in this case that electrode damage or air bubble generation may occur by electrolysis at low frequencies 29 , when the electrical potential needs to be increased for better particle manipulation.
Here, we present a simple and effective method of concentrating bacteria, viruses, and proteins rapidly and selectively on two coplanar electrodes via superimposing AC DEP and EO signals for biosensor applications. Signal superposition techniques have been previously employed in several studies using two sets of sinusoidal waves [30][31][32] , and pulsed sinusoidal waves [33][34][35] ; however, they involved only electrical forces exerted on the particles suspended in fluids such as DEP, traveling wave DEP, and electrorotation for sorting, separation, trapping etc., and hence many of the particles located far from the electrode could not be manipulated rapidly. In the present study, we combine EO with DEP to enhance particle capture selectively and rapidly, which is critical for biosensors requiring rapid detection of biological particles. Although EO and DEP have been extensively studied (Table 1), this topic has not attracted much attention 36 .
Firstly, the frequencies for optimal EO and DEP generation were determined, and the two waveforms were superimposed and applied to the fabricated coplanar electrodes (Fig. 1). Selective concentration of Escherichia coli K-12 was conducted against 1-μm-diameter polystyrene (PS) beads from a bacteria-bead mixture. MS2 viruses and fluorescein isothiocyanate (FITC)-labelled troponin I antibody (cTnI-Ab) were also tested to determine whether this method would work for nanometer-sized particles. The particles concentrated within targeted areas on the electrodes were quantified with different treatments: no solution (negative control), no signal (positive control), lower frequency signals (EO), higher frequency signals (DEP), and superimposed signals (EO + DEP), and the effects of these treatments were analyzed.
Results and Discussion
First, the AC electrical potentials and frequencies for DEP and EO of the particles were determined. To determine these values for the optimal DEP-capture of the bacteria against the beads, the real parts of the Clausius-Mossotti (CM) factors for the bacteria and beads were plotted with respect to the AC frequency 37 (Fig. S1a). The optimal EO frequency for 0.01× phosphate buffered saline (PBS) was also determined by measuring its electrical conductivity and by using the reported Debye length 10 (Fig. S1b). For MS2 viruses and cTnI-Ab, no models on their dielectrophoretic responses have been reported on, as would be necessary to determine their CM factors; therefore, their DEP characteristics were experimentally investigated by varying the AC frequency from 8 kHz to 1 MHz for cTnI-Ab and from 10 kHz to 10 MHz for the MS2 viruses, and the frequency providing maximal capture was selected as the optimal DEP frequency 38,39 (Fig. 2a,b). As the Debye lengths for the salty stock solutions of the MS2 viruses and cTnI-Ab were not available, the optimal EO frequencies for the media were also experimentally determined by varying the EO frequencies of the superimposed signals, fixing the previously determined DEP signals [10 Vpp (peak-to-peak)], and finding the intermediate frequencies inducing maximal capture with low electric potentials (2 Vpp) 17 (Fig. 2c,d). Here, sharp changes in the fluorescence intensities owing to captured viruses and proteins were observed over narrow and low frequency ranges (500 to 2000 Hz and 300 to 800 Hz for viruses and proteins, respectively), which is typical of EO spectra rather than DEP as DEP behavior generally changes with wider frequency ranges 40,41 . Table 2 shows the obtained AC electrical potentials and frequencies for the particles and media.
The used cTnI-Ab had a weak fluorescence due to their small size (30 kDa), so camera exposure time was determined to be 2 s to enhance the fluorescence images. Exposure time of 1-9 s were reported for other proteins 42,43 to enhance the measured signal. Moreover, the protein concentration was kept to 500 ng/ml, under which fluorescence images were not clear, and few clumps of the protein were unavoidable in the solution. The vortex flows of the clumps were occasionally observed near the electrodes when using the lower frequency (EO) and superimposed (EO + DEP) signals (see video in the Supplementary Information), but those were not observed when using the higher frequency (DEP). AC electrothermal (ET) flows can also be considered an alternative to AC EO flows; however, they are commonly induced by Joule heating through salty media or by heating the substrate. AC ET flow is usually dominant at high frequencies (on the order of MHz) or high electrical conductivities (>1000 μS/cm), especially if the applied electric potential is high 27,29,44 . In the present study, AC ET flow was negligible because low conductivity media were used with relatively low electric potentials and no heat sources. In fact, the maximum measured ET flow velocity owing to Joule heating was reported to be ~7 μm/s under the applied electric potentials of 10 Vpp (at 200 kHz) and electrical conductivity of 10000 μS/cm on 60 μm gap coplanar electrodes 45 , which was considerably smaller than the measured flow velocities (~107 μm/s and ~135 μm/s for the bacteria and beads respectively) around the facing electrode edges due to AC EO in the present study. These flow velocities were calculated by measuring the moving distances of the particles around the facing electrode edges during time intervals between two frames of the recorded videos. Furthermore, positive DEP (pDEP) with low conductivity media is stronger and easier to use for particle trapping than negative DEP (nDEP) with high conductivity media 46 . However, it should also be noted that most of biological functionalities are not designed for low conductivity media, and pDEP and EO tend to drop in high conductivity media 46 ; hence pDEP and EO can be limited for certain biological applications. Using the obtained DEP and EO conditions, electrokinetic concentration experiments were conducted for the prepared biological particle solutions with single sinusoidal signals (either DEP or EO) and superimposed signals (DEP + EO). Figure 3 shows fluorescence images of the particles concentrated using different treatments. The superimposed signals concentrated Escherichia coli K-12, MS2 viruses, and cTnI-Ab more effectively than the single treatments, as demonstrated by the shiny lines on the facing edges of the electrodes. Regarding the bead experiments, nDEP occurred at the tested frequency, so the beads were not captured in the region of interest (RoI). We also observed noticeable particle movement under the superimposed signals in the videos, which was not observed under DEP bias only. Figure 4a shows the numbers of E. coli K-12 collected within the RoI over time for different electrical signals. More of the bacteria were captured using the superimposed signals than any of the single electrical signals. In fact, the numbers of E. coli K-12 captured by EO and DEP alone are 0.4% and 9.1%, respectively, of the number collected in the superimposed signal case at 30 s. This result can be ascribed to the fact that DEP is usually effective for the particles close to the electrode surface, and EO flow can drag particles over the electrode toward the region between the electrodes without capturing most of them. The bacterial capture with EO alone in this study was not significantly different from that with no signal treatment (p = 0.999), whereas the bacterial capture with DEP alone was significantly different from that with no signal treatment (p = 0.034). By contrast, the bacterial capture with the superimposed signals was significantly larger than those with the other two electrical treatments. The superimposed signals provided the advantages of both DEP and EO, first moving distant particles toward the region between the electrodes with EO flow, and then capturing the moved particles at a particular position, where the largest electric field occurs, against the flow with pDEP. This superimposition can be employed to enhance the sensor sensitivity and reduce the detection time when used with biosensors 18 . In fact, the amount of DEP-assisted attachment of cTnI (cardiac troponin I) after 1 min was less than that due to sedimentation for 1 h 18 in a cTnI sensor, because the proteins far from the electrode might not be attracted toward the electrode rapidly with DEP alone. Figure 4b shows the numbers of bacteria and beads captured after applying different treatments for 30 s. A moderate speed of ~119 (±6.3) μm/s around the facing electrode edges was adopted for the superimposed treatments, because it allowed many bacteria over the electrodes to be collected by pDEP while the beads were repelled from the electrode edges by nDEP without inertial attachment to the surface, making it possible to collect the bacteria selectively from the bacteria-bead mixture (Supplementary Video 1). Figure 5 shows the calculated net force fields for E. coli and PS beads using COMSOL Multiphysics ® 4.3, and hydrodynamic drag, gravitational, buoyant, and DEP forces were considered for the force field calculation (Supplementary Information). The simulation was conducted for actual experimental geometry (Fig. S2), but close views around the electrodes were demonstrated here to show the differences between the treatments effectively. DEP (pDEP for bacteria and nDEP for beads) was effective within several microns from the electrode edges, and AC-EO dragged flows containing the particles to the electrode edges. The superimposed signal shows the integrated effect of DEP and EO for the enhanced selective concentration of the bacteria against beads (Supplementary Video 2).
The purity of the concentrated bacteria against the beads, i.e., the separation efficiency, was kept more than 99%, where the separation efficiency was defined as the fraction of target particles with respect to all of the particles (target + non-target). High separation efficiencies (over 90%) by DEP for bacterial capture were previously reported using applied voltages of more than 20 Vpp and long electrical activation times (10 min-1 h) [47][48][49] . The superimposed signals enhanced the bacterial capture more selectively (>99%) and more rapidly (30 s) with a low electric potential of DEP (5 Vpp) and simultaneous use of EO (1.2 Vpp). Figure 4c,d show the quantities of MS2 viruses and cTnI-Ab, respectively, that were collected after applying the different types of signals for 1 min. Although nanometer-sized particles such as viruses and proteins are known to be difficult to manipulate by using DEP due to their small sizes, several studies have demonstrated the successful use of DEP for these particles 50,51 . In those studies, long exposure times (5-30 min) [52][53][54] or nanoscale electrode gaps (30-500 nm) 38,39,55 under applied voltages of 5-35 Vpp were applied to increase the electric field for capture. It was observed in the present study that significantly more MS2 and cTnI-Ab were captured using the superimposed signals than either DEP (10 Vpp) or EO (2 Vpp) signals alone (p < 0.035), with a gap of 25 μm between two electrodes and short electric field exposure time (1 min). The electric field gradient in the present electrodes was not high, compared to the previous DEP studies; however, many of the nanoparticles in the present study were continuously moved to the electrode edges by AC-EO, providing those nanoparticles with the chances to be affected by the DEP forces. That is, applying AC-EO corresponded to increasing virus concentration near the
Conclusions
We have demonstrated the rapid and selective electrokinetic concentration of bacteria, viruses, and proteins on two coplanar electrodes via superimposition of AC EO with DEP. The superimposed signals moved the particles distant from the electrodes toward the high electric field area with EO irrespective of the particle size, and then captured the particles selectively at a particular position, i.e., the highest electric field spot, against the flow with pDEP. Significantly more bacteria were captured using the superimposed signals than were collected using the other two treatments, EO and DEP. Moreover, the bacteria were selectively and rapidly concentrated with high purity against polystyrene beads from the mixture. The concentrations of collected nanometer-sized biological particles such as MS2 viruses and cTnI-Ab proteins were also enhanced by using this superimposition (EO + DEP) technique. This technique allowed for a relatively large gap between two electrodes and short electric field exposure time, and high capture efficiency. We believe that the superimposition of AC EO and DEP can be applied to many biosensors requiring the rapid detection of biological particles with simple coplanar electrodes 17,18 .
Microfabrication of Chips and Experimental Set-up.
Two 100-nm-thick indium tin oxide (ITO) coplanar electrodes were fabricated on a glass wafer (6 in. diameter) with 25 μm gaps, using the conventional photolithography and radio-frequency sputtering. The ITO electrodes were then annealed for 1 h at 400 °C in an oven for transparency and electrical resistance reduction. The wafer was diced into chips (1 × 1 cm 2 ) with two coplanar electrodes on each chip, which are shown in the bright field image in Fig. 1a. The detailed fabrication procedure is in the Supplementary Information. An inverted microscope (Eclipse Ti-U; Nikon, Japan) was used to observe the particle motion around the electrodes, maintaining the optical focus on the transparent electrode surface. A 30 μl PDMS well was located at the center of the chip, and 20 μl of the prepared solution was added into the well. Different electrical signals were then applied to the ITO electrodes for either 30 s (for the bacteria-beads) or 1 min (for the viruses and proteins).
Two dual-channel arbitrary function generators (AFG3022C; Tektronix, USA) were used to generate sinusoidal signals 180° out of phase. Four signals from the two function generators were superimposed by a lab-made voltage adder consisting of impedance buffers and frequency mixers, and the signals were monitored by an oscilloscope (DS2072A; RIGOL Technologies Inc., USA) (Fig. 1b).
Videos and images of concentrated fluorescent particles were taken by a cooled interline transfer charge-coupled device camera (ORCA-R2; Hamamatsu, Japan), and the quantities of particles collected in the fixed regions of interests (RoI; 120 × 360 pixels) between the two ITO electrodes were measured using ImageJ. The numbers of particles in the RoI were determined by dividing the total particle area by the single particle area for the bacteria and beads 56 and by measuring the integrated intensities for the viruses and proteins 38,55 . The exposure times for fluorescence imaging and video recordings were 100 ms, 40 ms, 500 ms, and 2 s for the bacteria, beads, viruses, and proteins, respectively, and the videos were recorded using the maximal frame per second setting for each experiment.
Preparation of Biological Particle Solutions with Fluorescence Labeling. Three types of biological particle solutions, bacteria-bead mixtures, viruses, and proteins, were prepared. For the bacteria, 10 μl of E. coli K-12 stock was added to 10 ml of LB broth solution, and the bacteria were grown at 37 °C and 160 rpm in a shaking incubator for 12 h. They were centrifuged at 4000 rpm for 10 min to remove the residual LB broth. The remaining sunk bacteria were suspended in DI water for DAPI labeling (excitation/emission: 360/460 nm) to distinguish them from the red fluorescent beads. The labeled bacteria were then centrifuged and re-suspended in 0.01× PBS 57 . The bacterial number concentration was determined by performing optical density measurements at 600 nm 58 , and the final bacteria concentration was 1 × 10 7 /ml. Red fluorescent (excitation/emission: 542/612 nm) PS beads 1 μm in diameter were suspended in 0.01× PBS buffer with a number density of 1 × 10 7 / ml, and the bacteria and bead solutions were mixed.
For the virus experiments, freeze-dried MS2 phages were dissolved in 1× PBS to obtain a viral mass concentration of 1 mg/ml. Then, 0.5 ml of the MS2 solution was added to 10 ml of E. coli C3000, the host bacterium for MS2 bacteriophages, and incubated at 37 °C and 160 rpm for 5 h. The mixture was then centrifuged at 3000 rpm for 10 min to remove the bacteria, and the MS2-laden supernatant was filtered using a membrane filter. The prepared MS2 stock was then labeled with red fluorescence dye Rh-B (excitation/emission: 562/583 nm in water) by coupling EDC and NHS, and the stock and dye were mixed and purified in a dialysis membrane for 1 week to remove the unbound dye 59 . The concentration of the labeled virus stock solution was approximately 10 7 plaque forming units (pfu)/ml, which was verified by a plaque assay, and the solution was 10,000-fold diluted in DI water. For the protein experiments, FITC-labelled cTnI-Ab was used (excitation/emission: 495/525 nm), and its stock solution was 400-fold diluted in DI water for a mass concentration of 500 ng/ml. The media conductivities of all three test solutions were measured using a conductivity meter (handylab pH/LF 12; SI Analytics GmbH, Germany) ( Table 2). Statistical Analysis. Each experiment in this study was performed at least three times. The average values are shown in the figures with their standard deviations indicated as error bars. Statistical analysis was performed using one-way analysis of variance (ANOVA) followed by the Tukey post hoc test (Table S1). Significantly different results (p < 0.05, 0.01, and 0.0001) are designated with asterisks (***, and ***, respectively). | 4,879 | 2018-10-08T00:00:00.000 | [
"Biology"
] |
Machine Translation System in Indian Perspectives
: Problem statement: In a large multilingual society like India, there is a great demand for translation of documents from one language to another language. Approach: Most of the state government works in there provincial languages, whereas the central government’s official documents and reports are in English and Hindi. Results: In order to have an appropriate communication there is a need to translate these documents and reports in the respective provincial languages. Natural Language Processing (NLP) and Machine Translation (MT) tools are upcoming areas of study the field of computational linguistics. Machine translation is the application of computers to the translation of texts from one natural language into another natural language. It is an important sub-discipline of the wider field of artificial intelligence. Conclusion/Recommendations: There are certain machine translation systems that have been developed in India for translation from English to Indian languages by using different approaches. It is this perspective with which we shall broach this study, launching our theme with a brief on the machine translation systems scenario in India through data and previous research on machine translation.
INTRODUCTION
As India is a large multilingual country, different states have different regional languages; hence for proper communication there is a need of machine translation. But in India the earliest efforts starts from the mid 80s and early 90s. In India several Institutes work on Machine Translation. The prominent Institutes are as follows: • The research and development projects at Indian Institute of Technology (IIT), Kanpur • National Centre for Software Technology (NCST) Mumbai ( Above Institutes co-operate imperative role in the field of machine translation from the years ago. Most of the machine translation systems have been developed by these Institutes by using various domains. Many of the domains have been identified for the development of domain specific translation systems; parliamentary questions and answers, pharmaceutical information, government documents and notice. Various machine translation systems have been developed in India using various systems for language translation from English to Indian languages.
Machine translation systems for Indian languages:
In India Machine Translation systems have been developed for translation from English to Indian Languages and from regional languages to regional languages. These systems are also used for teaching machine translation to the students and researchers. Most of these systems are in the English to Hindi domain with exceptions of a Hindi to English (Sinha and Thakur, 2005) and English to Kannada (Kumar and Murthy, 2006) machine translation system. English is a SVO language while Indian regional languages are SOV and are relatively of free word-order. The translation domains are mostly government documents, health, tourism, news reports and stories. A survey of the machine translation systems that have been developed in media for translation from English to Indian languages and among Indian languages reveals that the machine translation software is used in field testing or is available as web translation service. Indian Machine Translation system (Naskar and Bandyopadhyay, 2002) are presented below; these systems are used to translate English to Hindi language.
Anusaaraka systems among Indian languages: As Anusaaraka (1995) project which started at IIT Kanpur, by Prof. Rajeev Sangal. And is now being continued at IIIT Hyderabad, was started with the explicit aim of translation from one Indian language to another. It was funded by Technology Development in Indian Languages (TDIL) and financial support from Satyam Computers Private Limited.
Anusaaraka's have been built from Telugu, Kannada, Bengali, Punjabi and Marathi to Hindi. It is domain free but the system has been applied mainly for translating children's stories. Anusaaraka aims for perfect "information preservation". In fact, Asnusaaraka output follows the grammar of the source language (where the grammar rules differ and cannot be applied with 100% confidence).
For Example, a Bengali to Hindi Anusaaraka can take a Bengali text and produce output in Hindi which can be understood by the user but will not be grammatically perfect.
For example, for 80% of the Kannada words in the Anusaaraka dictionary (Bharati et al., 1997) of 30,000 root words, there is a single equivalent Hindi word which covers the senses of the original Kannada word. An e-mail server been established for the Anusaaraka's. To run the Anusaaraka on a given text, e-mail has to be sent with the name of the language in the subject line. For example, if 'Telugu' is put in the subject line, it involuntarily runs the Telugu to Hindi Anusaaraka. The focus in Anusaaraka is not mainly on machine translation, but on language access between Indian languages. Anusaaraka systems can be obtained from their website (http://www.iiit.net/ltrc/Anusaaraka/anu_home.html) they are currently attempting an English-Hindi Anusaaraka machine translation system. Anusaaraka mainly focus on language access between Indian languages, using principles of Paninian Grammar (PG) (Bharati et al., 1995) and exploiting the close similarity of Indian languages. Mantra machine translation system: MAchiNe assisted TRAnslation tool (MANTRA) (1999). It translates English text into Hindi in a precise domain of personal administration, specifically gazette notifications, office orders, office memorandums and circulars. Initially, the Mantra system was started with the translation of administrative document such as appointment letters, notification and circular issued in central government from English to Hindi. It is based on the Tree Adjoining Grammar (TAG) formalism from University of Pennsylvania. It uses Lexicalized Tree Adjoining Grammar (LTAG) (Bandyopadhyay, 2004) to represent the English as well as the Hindi grammar. Tree Adjoining Grammar (TAG) uses for parsing and generation.
It is based on synchronous Tree Adjoining Grammar and uses tree transfer for translating from English to Hindi. The system is tailored to deal with its narrow subject domain. The Mantra has become part of "The 1999 Innovation Collection" on information technology at Smithsonian institution's National museum of American history, Washington DC, USA.
This system can be obtained from the C-DAC website (http://cdac.in/html/aai/mantra.asp). About this system the contact person is Dr. Hemant Darbari and Dr. Mahendra Kumar Pandey. This project was funded by the Rajya Sabha Secretariat. The grammar is specially designed to accept, analyze and generate sentential constructions in "Officialese" domain. Similarly, the lexicon is suitably restricted to deal with meanings of English words as used in its subjectdomain. The system is ready for use in its domain. The system is developed for the RajyaSabha Secretariat, the Upper House of Parliament of India. It translate the proceedings of parliament such as study to be Laid on the Table, Bulletin Part-I and Part-II. This system also works on other language pairs such as English-Bengali, English-Telgu, English-Gujarati and Hindi-English and also among Indian languages such as Hindi-Bengali and Hindi-Marathi. The Mantra approach is general, but the lexicon/grammar has been limited to the sub-language of the domain.
MaTra system: The MaTra system (2004), developed by the Natural Language group of the Knowledge Based Computer Systems (KBCS) division at the National Centre for Software Technology (NCST), Mumbai (currently CDAC, Mumbai) and supported under the TDIL Project is a tool for human aided machine translation from English to Hindi for news stories.
It has a text categorization component at the front, which determines the type of news story (political, terrorism, economic and so on.) before operating on the given story. Depending on the type of news, it uses an appropriate dictionary. It requires considerable human assistance in analyzing the input. Another novel component of the system is that given a complex English sentence, it breaks it up into simpler sentences, which are then analyzed and used to generate in Hindi. They are using the translation system in a project on Cross Lingual Information Retrieval (CLIR) (Rao, 2001) that enables a person to query the web for documents related to health issues in Hindi.
Mantra machine translation:
The English to Hindi Anusaaraka system follows the basic principles (Bharati et al., 1997) of information preservation. The system makes text in one Indian language accessible in another Indian language. It uses XTAG based super tagger and light dependency analyzer developed at University of Pennsylvania for performing the analysis of the given English text. It distributes the load on man and machine in novel ways. The system produces several outputs corresponding to a given input. The simplest possible (and the most robust) output is based on the machine taking the load of lexicon and leaving the load of syntax on man. Output based on the most detailed analysis of the English input text, uses a full parser and a bilingual dictionary. The parsing system is based on XTAG (Bandyopadhyay, 2002) (consisting of super tagger and parser) wherein we have modified them for the task at hand. A user may read the output produced after the full analysis, but when he finds that the system has "obviously" gone wrong or failed to produce the output, he can always switch to a simpler output.
AnglaBharti technology: The AnglaBharti project was launched by Sinha et al. (2001) at the Indian Institute of Technology; Kanpur in 1991 for Machine aided Translation from English to Indian languages. Professor Sinha et al. (2001) has pioneered Machine Translation research in India. The approach and lexicon of the system is general-purpose with provision for domain customization. A machine-aided translation system specifically designed for translating English to Indian languages. English is a SVO language while Indian languages are SOV and are relatively of free wordorder. Instead of designing translators for English to each Indian language, AnglaBharti uses a (Dave et al., 2001) pseudo-interlingua approach. It analyses English only once and creates an intermediate structure called Pseudo Lingua for Indian Languages (PLIL).
In AnglaBharti they use rule based system with context free grammar like structure for English, A set of rules obtained through corpus analysis which is used to distinguish conceivable constituents. Overall, the AnglaHindi (Sinha and Jain, 2003) system attempts to integrate example-based approach with rule-based and human engineered post-editing.
AnglaBharti is a pattern directed rule based system with context free grammar (Sinha and Jain, 2003) like structure for English (source language) which generates a 'pseudo-target' (PLIL) applicable to a group of Indian languages (target languages). A set of rules obtained through corpus analysis is used to identify plausible constituents with respect to which movement rules for the PLIL is constructed. The idea of using PLIL is primarily to exploit structural similarity to obtain advantages similar to that of using Interlingua approach. It also uses some example-base to identify noun and verb phrasal's and resolve their ambiguities.
AnglaBharti-II: AnglaBharti-II (2004) (Sinha, 2004) addressed many of the shortcomings of the earlier architecture. It uses a Generalized Example-Base (GEB) for hybridization besides a Raw Example-Base (REB). During the development phase, when it was found that the modification in the rule-base was difficult and might result in unpredictable results, the example-base is grown interactively by augmenting it. At the time of actual usage, the system first attempts a match in REB and GEB before invoking the rule-base. In AnglaBharti-II, provision were made for automated pre-editing and paraphrasing, The purpose of automatic pre-editing module is to transform/paraphrase the input sentence to a form which is more easily translatable. Automated preediting may even fragment an input sentence if the fragments are easily translatable and positioned in the final translation Such fragmentation may be triggered by in case of a failure of translation by the 'failure analysis' module. The failure analysis consists of heuristics on speculating what might have gone wrong. The entire system is pipelined with various submodules. All these have contributed significantly to greater accuracy and robustness to the system. (2004) (Sinha, 2004) approach for machine-aided-translation is a hybridized example-based machine translation approach that is a combination of example-based, corpus-based approaches and some elementary grammatical analysis. The example-based approaches follow human-learning process for storing knowledge from past experiences to use it in future. In Anubharti, the traditional EBMT (Gupta and Chatterjee, 2003) approach has been modified to reduce the requirement of a large example-base. This is done primarily by generalizing the constituents and replacing them with abstracted form from the raw examples. The abstraction is achieved by identifying the syntactic groups. Matching of the input sentence with abstracted examples is done based on the syntactic category and semantic tags of the source language structure. Both of these system architectures, AnglaBharti and AnuBharti, have undergone a considerable change from their initial conceptualization. In 2004 these systems named as AnglaBharti-II and AnuBharti-II. AnglaBharti-II uses a generalized example-base for hybridization besides a raw example-base.and the AnuBharti-II to cater to Hindi as source language for translation to any other language, though the generalization of the example-base is dependent upon the target language.
Anubharti technology: Anubharti
Anuvaadak machine translation: Anuvaadak 5.0 system has been developed by super Info soft private limited, Delhi under the supervision of Mrs. Anjali Rowchoudhury for a general purpose English-Hindi Machine Translation. For specific domains it has inbuilt dictionaries. It has specific domains like Official, formal, agriculture, linguistics, technical and administrative. The meaning of any English word is not available in Hindi in dictionary then there is facility of translation is provided. In the windows family this software runs on any Operating system.
Tamil-Hindi machine aided translation system:
The system Tamil-Hindi Machine-Aided Translation system has been developed by Prof. C.N. Krishnan at Anna University at KB Chandrashekhar (AU-KBC) research centre, Chennai. The translation system is based on Anusaaraka Machine Translation System, the input text is in Tamil and the output can be seen in a Hindi text.
It uses a lexical level translation and has 80-85% coverage. Stand-alone, API and Web-based on-line versions are developed. Tamil morphological analyser and Tamil-Hindi bilingual dictionary are the byproducts of this system. They also developed a prototype of English-Tamil Machine-Aided Translation system. It includes exhaustive syntactical analysis. It has limited vocabulary (100-150) and small set of transfer rules. The system can be accessed at http://www.au-kbc.org/research-areas/nlp/demo/mat/.
English-Kannada machine-aided translation system:
English-Kannada MAT system is developed at Resource Centre for Indian Language Technology Solutions (RC-ILTS), University of Hyderabad by Dr. K. Narayana Murthy. The system is essentially a transfer-based approach and it has been applied to the domain of government circulars. English-Kannada machine translation system using Universal Clause Structure Grammar (UCSG) formalism. The system is funded by the Karnataka government.
UNL-based English-Hindi machine translation system:
The Universal Networking Language (UNL) used as Interlingua for English to Hindi translation, it was developed by the Indian Institute of technology, Bombay. Prof. Pushpak Bhattacharyya working on machine translation system from English to Marathi and Bengali using the UNL formalism.
Shiva and Shakti machine translation:
The system Shiva is an Example-based and the system Shakti is working for three target languages like Hindi, Marathi and Telgu. Shiva and Shakti are the two Machine Translation systems from English to Hindi and are developed jointly by Carneige Mellon University USA, international institute of information technology, Hyderabad and Indian institute of science, Bangalore, India. The system is used for translating English sentences into the appropriate language. Shakti machine translation system (Bharati et al., 2003) has been designed to produce machine translation systems for new languages rapidly. Shakti system combines rulebased approach with statistical approach whereas Shiva is Example-Based machine translation system. The rules are mostly linguistic in nature and the statistical approach tries to infer or use linguistic information. Some modules also use semantic information. Currently system is working for three languages (Hindi, Marathi and Telugu).
Anubaad hybrid machine translation system:
Anubaad a hybrid MT system is developed in the year 2004 for translating English news headlines to Bengali, developed by Bandyopadhyay (2000) at Jadavpur University Kolkata and. The current version of the system works at the sentence level. Hinglish machine translation system: Hinglish a machine translation system for pure (standard) Hindi to pure English forms developed by Sinha and Thakur (2005) in the year 2004. It had been implemented by incorporating additional level to the existing English to Hindi translation (AnglaBharti-II) and Hindi to English translation (AnuBharti-II) systems developed by Sinha.
The system claimed to be produced satisfactory acceptable results in more than 90% of the cases. Only in case of polysemous verbs, due to a very shallow grammatical analysis used in the process, the system is not capable to resolve their meaning.
English to (Hindi, Kannada, Tamil) and Kannada to Tamil language-pair example based machine translation system: English to {Hindi, Kannada and Tamil} and Kannada to Tamil language-pair example based machine translation system developed by Balajapally et al. (2006) in the year 2006. It is based on a bilingual dictionary comprising of sentencedictionary, phrases-dictionary, words-dictionary and phonetic-dictionary is used for the machine translation. Each of the above dictionaries contains parallel corpora of sentence, phrases and words and phonetic mappings of words in their respective files. Example Based Machine Translation (EBMT) has a set of 75,000 sentences most commonly spoken that are originally available in English. These sentences have been manually translated into three of the target Indian languages, namely Hindi, Kannada and Tamil.
Punjabi to Hindi machine translation system: Punjabi to Hindi machine translation system developed by Josan and Lehal (2008) at Punjabi University Patiala in the year 2007. This system is based on direct wordto-word translation approach. This system consists of modules like pre-processing, word-to-word translation using Punjabi-Hindi lexicon, morphological analysis, word sense disambiguation, transliteration and post processing. The system has reported 92.8% accuracy. Hindi to Punjabi machine translation system: Main-Hindi to Punjabi Machine translation System developed by Goyal and Lehal (2010) at Punjabi University Patiala in the year 2009. This system is based on direct word-to-word translation approach. This system consists of modules like pre-processing, word-to-word translation using Hindi-Punjabi lexicon, morphological analysis, word sense disambiguation, transliteration and post processing. The system has reported 95% accuracy.
The overall conclusion of machine translation systems in Indian perspectives that from the year 1995 to 2009 the MT systems developed have achieved lots of success in translating languages. Still work has been carried out to achieve better than previous study.
Overview of machine translation system: Machine
Translation is the application of computers to the translation of texts from one natural language into another natural language. It is an important subdiscipline of the wider field of artificial intelligence. The conclusion of machine translation systems that have been developed in India for translation from English to Indian languages is shown in the Table 1 and 2. It is domain free but the system has been applied mainly for translating children's stories. It aims for perfect "information preservation". It mainly focuses on language access between Indian languages.
Mantra
The system is developed for the Rajya Sabha Secretariat, the upper house of parliament of India, but now it also works on Indian language pairs. Matra It is a text categorization component it breaks up the complex English Sentences into Simpler sentences which then analyzed and used to produce in Hindi.
Angla Bharti
The approach and Lexicon of the system is general Purpose with provision for domain customization. Hinglish machine It had been implemented by incorporating additional level to the existing translation system English to Hindi translation The System claimed to be produced satisfactory acceptable results in more than 90% of the cases.
Shiva and Shakti
The Shiva machine translation system is used for translating English sentences into the appropriate language. Machine Translation Shakti machine translation system has been designed to produce machine translation systems for new System languages Hindi, Marathi and Telugu.
With the above details and information, an important feature of the MT system on this task is the correct manipulation of the terms and concepts of the domain. The main goal of MT systems is correctly identify and process them with high quality.
CONCLUSION
Machine translation is relatively new in Indiaabout two decades of research and development efforts. the goal of TDIL project and the various resource centres under the TDIL project works on developing machine translation systems for Indian languages. There are governmental as well as voluntary efforts under way to develop common lexical resources and tools for Indian languages like pos tagger, semantically rich lexicons and word nets. The NLP association of India, regular international conferences like International National Conference on Natural Language processing (ICON) and lexical resource E groups like<EMAIL_ADDRESS>are consolidating and coordinating NLP and MT efforts in India. | 4,667.6 | 2010-07-30T00:00:00.000 | [
"Computer Science"
] |
Sistematic Study of Benzo [ a ] Pyrene in Coffee Samples
Neste trabalho foi avaliado e otimizado um método para extração e quantificação de benzo[a]pireno (B[a]P) em amostras de café das espécies conillon (Coffea canephora) e arabica (Coffea arabica), verde e torrado em pó. A influência do processo de torração na formação do B[a]P também foi estudada. Estas amostras foram extraídas com acetona, seguidas por saponificação, extração com ciclohexano e purificação em coluna de silica-gel. A quantificação foi feita por CLAE por eluição isocrática em fase reversa com detector de fluorescência. Os limites de detecção e quantificação foram de 0,03 e 0,10 μg kg respectivamente. O intervalo de recuperação foi de 76 a 116% para concentrações entre 1,00 e 3,00 μg kg. Os resultados obtidos para o B[a]P foram de 0,47 a 12,5 μg kg para amostras de café em pó torradas, e não foi detectado a presença de B[a]P em amostras de café verde. Portanto, o controle dos parâmetros de torração é fundamental na obtenção de um produto de boa qualidade, sem prejuízos à saúde da população.
Introduction
Polycyclic aromatic hydrocarbons (PAHs) are classified as a chemical group of more than one hundred different organic compounds containing two or more condensed aromatic rings.They are formed by the incomplete combustion or thermal decomposition (pyrolysis) of organic material and are widely distributed in the environment. 14][5][6] Benzo[a]pyrene is the most commonly studied and measured PAH compound and has served as an indicator of total PAHs contamination. 7,8everal studies have estimated that dietary intake represents around 97% of the total daily non-occupational human exposure to B[a]P and have indicated that of all possible means of human exposure to PAHs, ingestion is the most predominant. 9,10Due to the difficulty of extrapolating toxicity data from animals to humans, although the carcinogenic properties of PAHs and B[a]P have been demonstrated, it has not yet been possible to establish which PAH and B[a]P rates constitute a health risk. 11However, the food contamination studies are included in the JECFA (Joint FAO/WHO Expert Committee on Food Additives) priority list. 12In addition, European Commission has just published the Regulation (EC) No. 208/2005, establishing maximum level of benzo[a]pyrene in some foods. 13ifferent ways of B[a]P contamination in food have been suggested, such as the process of drying seeds, cooking, and roasting, among others.In the specific case of coffee samples, the drying and roasting beans steps may be responsible for B[a]P contamination.The concentrations of B[a]P in roasted coffee stayed in the range of 0.3-15.8][16] B[a]P is chemically inert and hydrophobic, 17 however, its solubility in aqueous solution may be increased by the presence of caffeine, causing caffeine-B[a]P complex formation. 14,18This complex may be responsible for the occurrence of B[a]P in coffee brew.
The objective of this research was to develop an extracting and quantifying B[a]P method in green and roasted ground coffees, including the evaluation of the influence of the roasting process in B[a]P formation.
Experimental
Samples Twenty-four coffee samples were analyzed: four samples of Conillon and twenty of Arabic coffees.Different kinds of green and roasted ground coffees were selected and analyzed: Conillon, Arabica dura, Arabica mole, Arabica rio, Arabica riada, Arabica rio zona, with American, conventional and express roasting.The coffee samples were specially roasted at Embrapa/CTAA for this experiment.The end of the roasting process was determined by visual and instrumental measure of colour parameters using the Hunter L a b system.
Chemicals and reagents
The following reagents were used for B[a]P extraction: silica Gel 60 -particle 0.063-0.200mm (70-230 mesh ASTM), potassium hydroxide (pellets), anhydrous sodium sulphate, cyclohexane, acetone, and methanol.These reagents were analytical grade (Merck).For liquid chromatography, HPLC grade acetonitrile (Merck) and water from the Milli-Q system was used; both were filtered through 0.45 μm Millipore membranes of 47 mm diameter before use.The benzo[a]pyrene standard (Sigma) had a minimum of 97% purity.
Extraction and clean-up
The extraction and clean-up procedures were adapted from those described by Kruijf et al. 7 according to the steps described in Scheme 1.A 20 g portion of ground homogenized coffee samples were extracted in a Soxhlet apparatus over a period of 6 hours with 250 mL of acetone.The solvent was removed under reduced pressure at 40 o C. The residue was saponified with 1.4 g of potassium hydroxide in 50 mL of methanol-water (9:1, v/v) under reflux.After completion the saponification (30 minutes), 120 mL of distilled water was added slowly through a condenser.The mixture was transferred to a separatory funnel and was first shaken with 40 mL of cyclohexane for 2 minutes.After separating the layers, the aqueous layer was twice extracted with fresh portions of 30 mL cyclohexane for 2 minutes.The combined cyclohexane extracts were dried over 50 g of anhydrous sodium sulphate prewashed with 20 mL of cyclohexane, then concentrated in a rotary evaporator at 40 o C to approximately 4 mL.
The concentrated extract was applied to the top of a 5 g silicagel column, which was deactivated with water 15% (m/m).The column was prewashed with 20 mL of cyclohexane, and eluted with 100 mL cyclohexane.The first 10 mL of cyclohexane eluate was discarded; the rest of the eluate was collected and concentrated in a rotary evaporator at 40 o C to about 4 mL and dried using nitrogen stream and gentle heating.The residue was dissolved in 3 mL of HPLC acetonitrile, filtered through a nylon acrodisc, and analysed by HPLC.
HPLC analysis
The benzo[a]pyrene extract from these samples was quantified by a HPLC Shimadzu device coupled with a fluorescence detector.Separations were achieved on a 5 μm Lichrosphere 100 RP18 (4.6 mm i.d. by 15 cm length) operated at room temperature.The mobile phase acetonitrile -water (80:20, v/v) was used at a flow rate of 1.0 mL min -1 .Aliquots of 20 μL green and roasted coffee sample extracts were injected in the HPLC column.The fluorescence detector was operated with an excitation wavelength at 295 nm and the measurement at 405 nm.The chromatographic process was done in 15 minutes.
B[a]P was identified by comparing the time retentions of standard solutions and samples under the same experimental conditions of the chromatography analysis immediately after spiking the samples.Quantification was performed by comparison samples peak areas with those obtained using standard solutions in a calibration curve.
Analytical quality control
In order to verify the accuracy and precision of the analytical procedure, recovery experiments were carried out by spiking the starting solvents (n = 4) with 2 μg kg -1 (ppb) of B[a]P and the selected sample of Conillon conventionally roasted (n = 3) with 1.00 to 3.00 μg kg -1 (ppb) of B[a]P.This coffee sample was chosen because it showed the lowest contamination of B[a]P.
The detection and quantification limits were defined as three and ten times the noise, respectively.The noise values were determined by measuring the amplitude signal by fluorescence analysis of a blank reagent (n = 6).The noise value was considered to be half of the amplitude baseline signal in the retention time region. 19he ground green and roasted coffee samples were analyzed in triplicate.
The recovery results from the blank reagent were in the range of 100 to 105.5% (Table 1), and the recovery values of the spiked conillon conventional roasted coffee samples were found to be between 76 to 116% (Table 2).The detection and quantification limits were 0.03 μg kg -1 and 0.10 μg kg -1 , respectively.
All B[a]P extract determinations in green coffee samples were below detection limit.The ground roasted coffee samples stayed in the range between 0.47 to 12.5 μg kg -1 (ppb) (Figure 1, Table 3).
The coffee samples analyzed were specially roasted for this experiment to evaluate the influence of the roasting process in the formation of benzo[a]pyrene.Different The extraction of B[a]P from coffee samples had been made first by Grimmer and Böhnke 20 methodology modified by Lintas et al. 21However, this process was found to be time and solvent consuming and the recovery values were not as good as expected.
The method described by Kruijf et al. 7 was evaluated, and applied as described originally.However, the benzo[a]pyrene extracts obtained from coffee samples using the Kruijf et al. method 7 had natural unwanted compounds, which have retention times and fluorescence absorbances similar to benzo[a]pyrene.In order to eliminate these compounds, a clean-up step using a silicagel column was added to the procedures of the experiment.
This clean-up step was essential to become the experimental conditions selective for benzo[a]pyrene, as shown in Figure 1, eliminating the unwanted compounds.In these chromatograms, the unwanted compounds were absent from the ground green, roasted and roasted spiked coffee samples during the B[a]P retention time.The recovery values had good yield.
Despite using this new clean-up step, this method should be simple, with fewer experimental steps in comparison to the others, because extraction, solvent evaporation and extract saponification can be perfomed in the same flask.This procedure permited the reduction of both experimental time and solvent quantities, minimizing analyte loss.
Detection and quantification limits were 0.03 μg kg -1 (ppb) and 0.10 μg kg -1 (ppb), respectively, lower than the B[a]P results from the coffee samples; the last were in the range between 0.47 to 12.5 μg kg -1 (ppb) for ground roasted coffee samples (Table 3).The recovery values were 76% to 116% for concentrations of 1.00 to 3.00 μg kg -1 (ppb) (Table 2).These results proved that the present method is useful in B[a]P analyses in ground roasted coffee samples.(nd) = no detected, detection limit was 0.03 μg kg -1 .
Scheme 1 .
Scheme 1. Schematic representation of experimental extraction and purification procedures to obtain benzo[a]pyrene extracts from green and roasted coffee samples.
Figure 1 .
Figure 1.HPLC chromatograms with fluorescence detection of the B[a]P extracts from ground green and roasted Conillon coffee samples (conventional roasted with and without spiking).
Table 2 .
Recovery values (n=3) of the spiked conventionally roasted Conillon coffee sample
Table 1 .
Recovery results to the analysis of blank reagent sample kinds of coffees were used: Conillon, Arabica dura, Arabica mole, Arabica rio, Arabica riada, Arabica rio zona, with American, conventional and express roasting.
Table 3 .
Results of the benzo[a]pyrene extracts (μg kg -1 ) from ground green and roasted coffee samples (n = 3) | 2,360.4 | 2006-10-01T00:00:00.000 | [
"Chemistry",
"Environmental Science"
] |
Smart university implementation in higher education to improve the graduates’ competitiveness
.
INTRODUCTION
The rapid development of digital technology has led to many changes in daily life that cover almost all aspects of life, including university services. The application of digital technology requires universities to adapt to the times and operate more eff ectively and effi ciently (Dong, Zhang, Yip, Swift, & Beswick, 2020, p. 46). This result is in line with Kariapper (2020, p. 4622), which states that the strategic role of digitalization technology is to solve the gap in the quality of educational resources and learning models that are less relevant so that universities can improve the quality of their competitive graduates. Eff orts to utilize this digital technology must be supported by the accessibility of its users to connect to the internet access. According to data from Nuzzaci and La Vecchia (2013, p. 19), internet usage penetration in Indonesia is 176 million people or 64 percent of the total population of Indonesia. The growth population accessing the internet has seen signifi cant growth in the last year, reaching 25.3 million or 17 percent of new internet assessors. This fi gure shows Indonesia's position in the top three, behind China and India. Observing the number of internet users who have increased signifi cantly, one can easily carry out routine activities in real-time without being hindered by place, time, and space. It indicates that geographical boundaries are disappearing due to the existence of the internet (Glisson & Chowdhury, 2020, p. 159).
Along with technological development, people's lifestyles and working habits have undergone a tremendous transformation and ways of learning. The gradual change in the learning environment and the increasing demand for personalized and adaptive learning have pushed the reform and development in education. As the high-end form of a smart education system, the smart university came into reality and has received more and more attention worldwide (Siswanto, Kartanagara, & Chuan, 2021, p. 79). Smart university creates a smart learning environment for the citizens by transforming them into a smart workforce, making it an integral part of the smart city framework. The development and popularity of smart universities also support the knowledge economy. The global smart education market is forecast to grow at a compound annual growth rate of 15.96% between 2018 and 2022. There is a pressing need to perform active research in such a fast-changing domain and obtain a clear understanding of the smart university and its attributes (Bayani, Leiton, & Loaiza, 2017, p. 17751). The development in technology calls for a revolution from the traditional education strategy with predominantly face-to-face teaching/learning into a more innovative way that promotes new education paradigms. Several terminologies to conceptually describe innovative education have been raised, such as smart classrooms, smart learning environments, smart e-learning, blended learning, and ubiquitous learning (Alwi, Dwiningrum, Suyanto, Sunarto, & Surono, 2021, p. 116). Literature defi nes and envisages the smart university concerning the smart education revolution. For instance, a vision for developing an intelligent university is provided. The design and development of an innovative learning environment are explored. The blended learning concept and its applications in the smart university are covered (Nachandiya, Gambo, Joel, & Davwar, 2018, p. 5).
Information and Communication Technology (ICT) development encourages various educational institutions, especially universities, to use the internet to manage education. With the internet, universities can improve the quality of graduates who can compete at national and international levels (Baptist, Utami, Subali, & Aloysius, 2020, p. 62).
Higher education is a continuation of secondary education to prepare students to become community members with academic and professional abilities in applying, developing, and creating science, technology, and the arts. To achieve this goal, legally-formally, universities do not only act as teaching centers because the teaching and learning process carried out in the classroom without being supported by relevant research results will experience setbacks and not develop (Villegas-Ch, Palacios-Pacheco, & Luján-Mora, 2019, p. 24). Higher education as a scientifi c community must play an active, positive role in solving problems faced by society by producing knowledge that is ready to use, in the sense of a problem fi nder. Thus, development can use the knowledge gained through research to explain and predict events in people's lives, the business world, and the industrial world (Damayanti, Santyasa, & Sudiatmika, 2020, p. 94). The universities must produce outputs with solid personalities, superior abilities, intelligence, and creativity to compete with other nations in the face of globalization. Therefore, the existence of universities has an important position and function in developing society. The social change process is such immediate demands that the position and function of the university be realized in a fundamental role. The role of universities is contained in implementing the Three Principles of Higher Education: Education Dharma, Research Dharma, and Community Service Dharma (Salvioni, Franzoni, & Cassano, 2017, p. 17-18) is shown in Figure 1.
Smart university refers to university facilities that support all activities of the academic community in carrying out the obligations of the Three Principles of Higher Education, using information technology as the backbone of support. Smart university implementation is not easy because it involves many facilities. The miniature implementation of smart university technology has fi nally emerged, such as smart classrooms, smart laboratories, smart buildings, smart departments, and smart faculty. Implementing a smart university is needed as a development from a conventional or usual university management situation and then switches to implementing a system using technology (Petrovskiy & Agapova, 2016, p. 2527. A well-developed university can implement the obligations of the Three Principles of Higher Education as a responsibility to science, society, and the environment. The Three Principles of Higher Education obligation to provide education, research, and community service (Marzuki, Zuchdi, Hajaroh, Imtihan, & Wellyana, 2019, p. 283). One of the domains of the Three Principles of Higher Education that may improve service and effi ciency using technology in a smart university environment is education. The use of technology helps create creative and innovative students who will lead them to become excellent graduates who are ready to face challenges in today's digital era (see Figure 2). Applying technology systems in the management of the education sector will increase effi ciency and stakeholder satisfaction. Education is a conscious and planned eff ort to create a learning atmosphere and learning process, so the students can actively develop their potential to have religiousspiritual strength, self-control, personality, intelligence, noble character, and skills needed by themselves, society, and nation-state (Siregar, Lumbanraja, & Salim, 2016, p. 9).
Figure 1
Three principles of higher education Several component processes must be carried out when implementing e-learning, such as content relevant to the learning objectives, using learning methods, and using media elements such as sentences and pictures to distribute content and learning methods. Learning can be done directly with the instructor (synchronous) or individual learning (asynchronous) and by building new insights and techniques related to learning objectives (Daniel, 2015, p. 911). Blended learning is an e-learning strategy that combines several learning strategies with environmental conditions and learning facilities that allow learning objectives to be achieved optimally. For teachers, lecturers, and practitioners in the fi eld of education, there will be many combinations that can be done, especially by utilizing existing telecommunications tools such as the internet, mobile phones, and other information technology (Agnes, Jola, & Gaspersz, 2018, p. 32). Blended learning can also be viewed as a continuum from conventional face-toface to entirely online thus there are several forms of blended learning continuum, including entirely online, where there is no face-to-face at all; entirely online, but there is an option to do face-to-face even though it is not required; mostly full online, but certain days are done face to face either in class or in the lab or the workplace directly; mostly entire online, but students still learn conventionally in class or lab every day; primarily conventional learning in the classroom or lab, but students are required to take part in certain online activities as enrichment or addition; entirely conventional learning, although there are online activities even though it is not required for students to follow them; and entirely conventional learning (Prima, Ganefri, Krismadinata & Saputra, 2019, p. 4).
Internet of things (IoT) is a development that can optimize human life with the help of sensors and artifi cial intelligence that uses the internet network to carry out commands and connect humans with devices. A concept that aims to expand the benefi ts of continuously connected internet connectivity. An object is the IoT if it is contained in an electronic object or any equipment connected to a local and global network through embedded sensors and is permanently active (Winarti, Rahmini, & Almubarak, 2019, p. 181). IoT works by utilizing Jurnal Kependidikan, 6(2), 173-188
Figure 2
The roadmap of graduates competitiveness expectation a programming argument in which every command of an argument produces an interaction and communication between machines that are connected automatically, and the media that connects these devices is the internet (Uskov, Bakken, Karri et al., 2018, p. 5).
IoT can create a complete internet environment and make it easier for people to access various smart technologies integrated with automation that can be used anytime, anywhere. IoT has three main characteristics: objects with devices/measuring devices, interconnected autonomous terminals, and intelligent services.
IoT is implemented in education, which plays a role in teaching institutions and updating the learning system into m-learning and e-learning. The application of IoT in the learning system starts with the help of various devices such as gadgets, tablets, e-book readers, and social media. With the help of the internet, students get information and knowledge through devices connected to the internet (Tenekeci & Uzunboylu, 2020, p. 13). The development of the learning system aims to improve the quality of student learning and the ease of learning, for example, in general, if students cannot keep up with the pace of learning or are unable to attend classes, students will be counted behind in lessons. Applying IoT in the learning system allows students to get lessons quickly (Tenekeci & Uzunboylu, 2020, p. 14).
The lecturers' duties are professional educators and scientists with the main task of transforming, developing and disseminating science, technology, and art through education, research, and community service. In carrying out their professional duties and simultaneously as scientists, lecturers must have knowledge, skills, and attitudes that must be lived and mastered. In addition to having knowledge, skills, and attitudes as professional educators and scientists in the higher education environment, lecturers must have the competence and carry out their duties, including: professional competence with the breadth of academic insight and the depth of lecturers' knowledge of the scientifi c material they are engaged in; pedagogic competence with the mastery of lecturers in various approaches, methods, class management, and evaluation of learning following the characteristics of the material and student development; personal competence, namely the ability of lecturers to present themselves as role models and show enthusiasm and love for their profession; social competence, the ability of lecturers to appreciate diversity, be active in various social activities and work together (Uskov, Bakken, Howlett, & Jain., 2018, pp. 56-57).
Research conducted by Mbombo and Cavus (2021) entitled Smart University: A University In the Technological Age showed that big data provides teachers with a large and necessary amount of information about their students to follow up on each student. With detailed mass information about students, the teacher will be able to examine the evolution and engagement of the student in a precise and concrete way in order to help him/her according to his/her shortcoming in terms of participating in courses, completing homework, and the time spent researching in the online library for example. We can also see the benefi ts that the students will be able to draw, such as the fl exible courses allowing them to access it at any place and at any time to his courses. Students can maintain realtime communication with other students or teachers and have the freedom to participate in discussions or scientifi c forums. The student also has free and unlimited access to study materials and can log in at a time suitable for them. Students experiencing diffi culties will benefi t from personalized follow-ups that will allow them to improve their intellectual level. The advantages at the educational institute level are not negligible because, with this mass information, it will monitor the evolution of each student individually and closely monitor how teachers take care of their student's learning. By having accurate information that can be accessed as quickly as possible, it is possible to analyze each student's progress quickly, precisely, and intelligently. The institute can also spend less than a traditional institute in terms of infrastructure.
Research conducted by Ryu, Kim, and Yun (2015) entitled Integrated Semantics Service Platform for the Internet of Things: A Case Study of a Smart Offi ce shows the results of creating an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domain on a smart campus. Specifi cally, the authors address three main issues for providing integrated semantic services and IoT systems including semantic discovery, dynamic semantic representation, and semantic data storage for IoT resources. Then develop a prototype service for smart offi ces using ISSP, which can provide a personalized offi ce environment by interpreting the user's text input via a smartphone. The IoT technology has had a signifi cant impact on universities, which not only changed traditional teaching practices but has also brought changes in the infrastructure of educational institutions. A smart university has many features like smart parking, inventory, lighting, tracking, and smart corridors with data centers for processing all data types to enhance communication and improves smart-university learning. At present, the design and construction of the smart university are still in the exploration stage. Based on the background of the problem above, this study aims to analyze the implementation of the smart university in optimizing the three principles of higher education so that university support programs in implementing smart universities become more comprehensive. The thing that needs to be prepared is integration in implementing the program, ideally to be replicated more easily by the dean in each faculty (Alsheikh, 2019, pp. 32-33).
METHOD
This type of research is exploratory qualitative with interview methods and literature study in data collection. This method was used to identify the implementation of a smart university in optimizing the three higher education principles. The population of this research was all universities in Semarang, with the sampling method using purposive sampling. The criteria determined are the management study program at a private university in Semarang that implements a smart university in implementing the three principles of higher education. The data sources were used to consist of primary data and secondary data. The analysis technique of the smart university implementation program was carried out through data collection, data reduction, data presentation, and conclusion because this method can explain, assess, and visualize the modeling used. Guided interview questions asked to include: How do you fi nd out about smart universities?; What do you do after you get enough information about smart universities?; Are you immediately interested in implementing it?; What made you interested and then decided to implement a smart university?; What are the stages of making a smart university?; and What are the benefi ts of implementing a smart university?
The concept of a smart university refers to university facilities that support all activities of the academic community in carrying out the obligations of the three principles of higher education using information technology as its primary basis. Smart university implementation is not easy because it involves many facilities that must be realized (Nadeem, Abedin, Cerpa, & Chew, 2018, p. 3). The implementation of smart university technology in miniature has fi nally emerged, such as smart students, lecturers, administrators, classrooms, laboratories, smart buildings, smart departments, and faculty (see Figure 3). At several universities abroad, the concept of technology in smart universities is expected to support advanced learning and research and streamline the process of delivering administrative services (Bayani et al., 2017, p. 17750). In addition to improving the quality of on-university education services to students, the smart university concept is also related to eff orts to improve the quality of education off ered by each university. This is because the smart university concept demands direct implementation of the knowledge learned in university so the success of the smart university concept should be in line with the quality of education delivered in lecturer materials (Komariah, Sofyan, & Wagiran, 2019, p. 211). Digital transformation of the university will direct their services based on technology so the three higher education principles can be achieved more optimally. Many internet-based systems can be applied to universities, and it is possible to implement them for educational purposes (Maciá Pérez, Aciá-Pérez, Berna-Martínez, & Lorenzo-Fonseca, 2021, p. 14).
Since the invention of internet technology, almost anything has become possible in the world of education. Currently, students can learn anywhere and at any time with the existing electronic learning system facilities. E-learning is now increasingly recognized to solve education and training problems in developed countries and developing countries, especially Indonesia. Many people use diff erent terms for e-learning, but in principle, e-learning uses electronic services as a tool (El Firdoussi et al., 2020, p. 6). E-learning can be used as an innovative approach to distributing well-designed, learner-centered, interactive learning and facilitating a learning environment for everyone by using the attributes and resources of various digital technologies during the learning material. Suitable for open learning, a fl exible learning environment (Ramsden, 2018, p. 366). E-learning also off ers new opportunities for instructors and students to enrich the learning and teaching experience through a virtual Amron et al.: Smart university implementation ...
Figure 3
Smart university implementation model environment that supports the delivery, exploration, and application of information (Rico-Bautista et al., 2021, p. 49).
The learning strategy used in e-learning is a blended learning strategy. Blended learning was a learning process that utilizes various approaches. The approach taken was able to utilize a variety of media and technology. With blended learning, the learning process was able to combine various physical and virtual sources. The blended learning strategy can be applied according to the agreed conditions. Blended learning should be seen as a pedagogical approach that applies various learning approaches rather than how big the delivery system is between face-to-face and online (Jubran & Sumiyana, 2016, p. 131). Blended learning should combine wisely, relevantly, and precisely between the potential of face-to-face with the potential of information and communication technology which is rapidly developing, to allow: a shift in the learning paradigm from what was previously more teacher-centered to a new learner-centered paradigm (student-centered learning); an increase in interaction or interactivity between students and teachers, students and students, students/teachers with content, students/teachers with other learning resources; and the occurrence of convergence between various methods, media, learning resources, and other relevant learning environments (Reyna, Hanham & Meier, 2018, p. 42). The schema of technology implementation in developing smart university is described in Figure 4.
FINDINGS AND DISCUSSION
Lecturer performance assessment for the implementation of lectures is based on the success of carrying out lectures according to the schedule set by the study program and Jurnal Kependidikan, 6(2), 173-188
Figure 4
Technology implementation to develop smart university the number of courses taught. The study results show that lecturers can carry out lectures according to their schedules. When implementing a smart university when teaching, the main obstacles are low computer capacity for online learning and poor internet connection. The internet procurement at the lecturer's home is carried out by the lecturer himself so that the internet capacity of each lecturer varies. Besides that, other obstacles are the occurrence of power outages, the lack of examples of learning aids, and the limitations of lecturers in mastering online learning applications. However, the positive side of smart university schemes, namely lecturers save transportation costs and work time effi ciently. Lecturers can do household work between academic working hours, increase lecturer creativity in developing learning media and mastery of information technology for online learning, and work in a safe, more comfortable, relaxed working atmosphere.
The smart-university scheme has changed student academic guidance and fi nal project exams from conventional face-to-face to online. The media used for the fi nal project guidance process are whatsapp group, e-mail, google meet, and zoom. Meanwhile, the fi nal project implementation is used by google meet or zoom. Overall, the process of academic guidance and student fi nal project exams runs according to the schedule determined by the study program. The obstacle that arises is the smooth internet connection between students and lecturers. The positive thing about the smart university scheme for academic guidance is that students consult more intensively with their supervisors because there are no distance, place, or time constraints. Likewise, the fi nal assignment exam makes students more comfortable during exams and safe. Students like online learning because it makes them more disciplined in doing assignments and more familiar with information technology.
The successful implementation of lecturer research in the smart university scheme depends on the nature of the data, location, and type of research. Research carried out as planned is in the laboratory, secondary data is obtained online, and experimental research with treatments that do not involve many people. However, research in the fi eld with surveys and data collection with interview techniques to research subjects can be carried out properly because the implementation of research is carried out with health protocols and data collection with the help of colleagues at the research site. As for the research that must be modifi ed, the time and method of data collection due to some problems is research with surveys, and primary data is taken by interviewing resource persons. Field research will experience diffi culties, especially in collecting fi eld data. Based on the study, the obstacles that arise in the lecturer's research are that the implementation of health protocols will hinder lecturers' mobility, the limited laboratory use schedule, and the limited collection of data by interview. In contrast, the retrieval of secondary data online is constrained by the speed of internet access to download big data. The positive side of the smart university scheme for lecturer research raises the creativity to overcome fi eld data collection methods safely and eff ectively. The online methods make secondary data collection more intensive, and discussions between research members are fl exible in terms of time, eff ort, and cost. Lecturers focus more on reading, exploring various sources of information, analyzing secondary data, and writing research reports, and lecturers are encouraged to explore new ideas for solving research problems.
Lecturer's scientifi c publications can be done in seminar forums, scientifi c journals, patents, intellectual property rights, designs/models, or textbooks. The performance indicators of a lecturer's scientifi c publications can be seen from the quality and quantity of his scientifi c publications, but the quantity of scientifi c publications is a general indicator to assess the performance of a lecturer's scientifi c publications. The positive thing about the smart university scheme in lecturers' scientifi c publications is to bring up research studies topics and publications to fi nd solutions to problems in the digital era in a more varied manner; webinar model seminars with effi cient time, place, and cost; become more focused on writing research reports and publication manuscripts to reputable scientifi c journals.
The forms of community service include occupying leadership outside the institution, developing educational and research results that are used by the community or industry, training or counseling or upgrading or lectures to the community, and serving the community to support the implementation of development, making unpublished community service works or published in community service journals and play an active role in the management of scientifi c journals. Based on the existing forms of community service, community service must have direct interaction with the community, and community service does not require direct interaction with the community. In implementing the smart university scheme, some community service has been successfully implemented by lecturers, but some have not been implemented. The lecturers have successfully carried out community service that does not require direct interaction with the community, for example, the involvement of lecturers as reviewers of scientifi c journals or managers of scientifi c journals, carrying out the development of educational and research results that are utilized by the community or industry and making community service works that not published or published in community service journals. The positive side of the smart university scheme in community service is to stimulate the creativity of lecturers to look for forms of community service that do not have to interact directly with the community; stimulate the creativity of lecturers to learn to use various social media to do community service.
Education is systemically oriented to graduate competence formulated in a quality loop in which all components can be interrelated in educational activities. The systemic review is described in Figure 5. It covers four scopes of activities. The cycle begins with carefully identifying market desires, which is then followed by the determination of competency standards, which are then used to develop curriculum; the implementation stage of education is planning the teaching and learning process, including determining the qualifi cations of teachers following competence; the learning stage studies and practice constantly until a certifi cate of competence is issued and widely circulated to users of educational services; reviewing the suitability of graduates with the competencies required by the market, then taking corrective actions against discrepancies. The on the job assignment is a university that collaborates with industry or companies that can provide real work to its graduates so that the learning process can run and standard work competencies can be met. Attribution theory is concerned with the analysis of the occurrence of interactions in the classroom. In the context of the learning process and in order to improve the ability or competence of students, what needs to be considered is individual diff erences in potential, such as intelligence, interests, talents, and motivation, and various types of student learning.
In the implementation of policy, several strategies and eff orts have been taken that refer to the phenomena that occur in universities in the implementation of the three principles of higher education (see Figure 6).
Strategy-1: Eff orts to procure/higher education facilities. The university provides a library, lecturer room, hall, prayer room/prayer room, meeting room/trial room, room for student activities, restroom, pantry, and parking lot. Universities form learning centers that develop the competence and abilities of the academic community. Universities build knowledge management systems and tools to increase the knowledge and insight of leaders, structures, staff , lecturers, and employees. Universities provide work practice facilities to fulfi ll essential competencies through learning abilities. The university has a database of research results from lecturers and students. Universities have access to scientifi c journals and digital library materials nationally, so conducting training and socialization for librarians and academics is necessary. The college completes the needs of classrooms, laboratory equipment, workshops, and libraries, including living laboratories. Universities have an eff ective and effi cient management system for facilities and infrastructure by utilizing information technology, including a complete inventory system. The management system also includes a pattern of regular reporting from the implementing unit to the management and can be used as information for users (students and lecturers).
Figure 5
The systemic review covers four scopes of activities
Figure 6
The several strategies and eff orts to implementing smart university Strategy-2: Eff orts to improve the lecturer performance and education personnel. The Ministry of National Education and higher education and related elements are more selective in the recruitment of educators and staff with the required standards of educators and education personnel, namely having academic qualifi cations and competencies as learning agents, being physically and mentally healthy, and having the ability to realize educational goals national. Universities and related elements organize training that is relevant to the needs of educators and education staff so that they have competencies and abilities according to their fi elds of work that they can improve the quality of performance of educators and education staff so that they have competencies and abilities according to their fi elds of work so that they can improve their performance. The quality of the performance of educators and education personnel. Universities and related elements empower educators and education staff by following their competencies and qualifi cations. Universities and related elements assist with further study costs for educators and education personnel who excel in their work but are economically incapable. Universities give awards to educators and education personnel who excel in their work. Universities have benchmarks for determining professional abilities. Universities review rules/policies that are more fl exible to encourage educators and education staff to develop their creativity.
Srategy-3: Eff orts to arrange higher education management. Universities streamline the management and utilization of campus facilities and infrastructure. Universities have clear policies, guidelines, and regulations regarding the security and safety of facilities and infrastructure at the institutional level. Evidence of policy implementation must be traceable from more detailed and applicable regulations and periodic reports at the laboratory/studio/ library level and other places where activities are carried out. Universities follow the development of information technology so that all academics are skilled and agile in using information technology.
Strategy-4: Eff orts to improve the quality of higher education graduates. Universities and related elements streamline and synergize curriculum content by taking into account the interests and comparative advantages of the region and the development of science and technology. Universities precisely determine local content curricula according to comparative advantage and regional development. Universities develop student programs that are directed so that graduates have a leadership spirit, are highly dedicated, have physical and mental resilience, and always become creatures who serve and are devoted to god almighty. Universities create a climate familiar with information technology to support the progress of the business and industrial worlds. Universities continue to improve and develop their reputation and competitiveness of universities as centers of excellence in the world of Indonesian higher education and overseas. Universities improve the quality of the learning process so that students and graduates have the competence, knowledge, & skills to contribute signifi cantly to the development of the nation and state. In carrying out the learning process, universities must equip students with cognitive aspects and holistically complement them with moral aspects and social responsibility.
CONCLUSION
Building a smart university requires understanding conceptual design and structural analysis from the perspectives of smart technologies. This becomes possible if big data technology elements for storage and analysis are integrated into decision-making within the confi ned of an intelligent environment. The construction of a smart university also requires smart devices, networks, smart applications, and cloud computing technology to produce information services and management for an eff ective university information system. Integrating these related technologies to design a smart university to achieve a sustaining university Information system becomes necessary as many things, including humans, tend to communicate with smart technologies to create a smart and intelligent environment for eff ective communication and decision-making in a learning environment.
Facilities and infrastructure are supporting elements in implementing the three principles of Higher Education, including buildings, furniture, equipment, and asset and campus security systems. The hardware and software infrastructure condition in universities is still inadequate to support a quality learning process. The performance of educators and education staff has not been optimal due to educators' professionalism and welfare level, and education personnel is not following the challenges of quality improvement. The ability to research lecturers still varies due to the diff erent levels of education achieved by educators and educational staff . Higher education facilities and infrastructure also do not support research activities for lecturers. Higher education management has not been well organized because of the weak commitment of bureaucrats and higher education managers to achieve excellence. In addition, there is a lack of higher education management skills with an increasingly complex spectrum of tasks and educational problems, and there are still higher education managers who do not have educational disciplinary backgrounds. The quality of higher education graduates has not been optimal because there is no synchronization between education policies, the quality of graduates, the industries, and the curriculum for the initiative, creativity, art appreciation, and normative abilities or comprehensive intelligence (holistic), so that morals, character, and tolerance among students decrease and the younger generation. It is rare in tertiary institutions to fi nd workshops that lead to the quality of graduates desired by the user.
This research proposed architecture for manageability and security. In the future, a smart university will be developed based on these layers to understand the applicability and practicability of smart-university. Universities need to improve the performance of educators and education staff , which can be done by allocating research funds so that educators are motivated by research. Managers of higher education are advised to improve their knowledge, attitudes continuously, and skills as well as master the science of education management, change the mindset in the form of new governance following the concept of quality assurance in higher education, prepare adaptive management, and be oriented to the needs of the academic community. Produce outputs and outcomes that meet the community's need for the provision of skilled workers, and universities need to review the curriculum formulation, conduct information gathering from stakeholders, conduct networking with other educational institutions that have implemented quality loops, synergistically collaborate with the industry to produce graduates who become prospective workers who have high professionalism. | 7,830.2 | 2022-11-06T00:00:00.000 | [
"Education",
"Computer Science",
"Business"
] |
Revolutionizing Research Methodologies: The Emergence of Research 5.0 through AI, Automation, and Blockchain
This integrative literature review (ILR) explores the significant impact of incorporating artificial intelligence (AI), automation, and blockchain technology into research methodologies, collectively known as Research 5.0. The study addresses the shortcomings of traditional research methods, which need help managing the complexities and demands of modern scientific inquiry, thereby affecting the reliability and efficiency of research across various fields. The ILR aims to critically assess how these advanced technologies can enhance research processes, guided by a conceptual framework centered on AI, automation, and blockchain. The research method involved a comprehensive literature review and the analysis of qualitative data to identify patterns, challenges, and opportunities for implementing these technologies. The findings reveal that while AI significantly improves research efficiency and accuracy, it also introduces challenges such as algorithmic bias and transparency issues, which can be mitigated through Research 5.0 Explainable AI (RXAI) framework and comprehensive researcher training. Automation enhances consistency but risks reducing human oversight, necessitating hybrid systems that blend human expertise with automated precision. Blockchain strengthens data integrity and transparency yet faces complexity and energy consumption challenges, underscoring the need for scalable and sustainable solutions. The study concludes that while Research 5.0 technologies offer substantial potential, their successful integration requires careful consideration of ethical, technical, and operational challenges. Future research should focus on developing transparent AI systems, hybrid automation models that retain human judgment, and scalable blockchain solutions to advance research methodologies effectively.
Introduction
The evolution of research paradigms has been driven by continuous technological innovations prompted by the need to address increasingly complex societal and scientific challenges.Traditionally, research paradigms have focused on enhancing data collection and analysis techniques, with the primary goal of improving the research process's accuracy, reliability, and efficiency through more refined methodologies and tools [1].However, the growing demands on researchers now call for more comprehensive and advanced approaches that integrate cutting-edge technologies and interdisciplinary methods to address increasingly complex and multifaceted scientific challenges.Research 5.0 has emerged as a transformative paradigm aiming to revolutionize the research process by integrating cutting-edge technologies such as AI, automation, blockchain, and other digital tools [2].This paradigm represents a significant leap forward, fundamentally changing how research is conducted to enhance efficiency, accuracy, and transparency and foster innovation across various types of research.Research 5.0 plays a crucial role in its application across various research domains, with each domain benefiting uniquely from integrating these advanced technologies.In fundamental (pure) research, AI can analyze vast datasets, identify patterns, and generate new hypotheses, thereby driving groundbreaking theoretical advancements [3].Automation streamlines experimental procedures, allowing researchers to focus more on theory development rather than the logistical aspects of research.Research 5.0 leverages advanced technologies in applied research to effectively translate theoretical knowledge into practical solutions [4].AI aids in designing and testing new applications on research methodologies by automating experimental designs, optimizing data collection processes, and simulating research scenarios to refine methods and improve the reliability of outcomes.At the same time, blockchain technology ensures the transparency and integrity of research outcomes, providing a secure foundation for implementing these applications in real-world scenarios [5].Exploratory research can significantly benefit from the integration of Research 5.0, as AI-powered tools can efficiently identify emerging trends and areas of interest within large datasets, enhancing the focus and effectiveness of the exploration process while uncovering insights that might be missed through traditional methods [6].Automation further enhances this process by speeding up data collection and preliminary analysis, allowing researchers to quickly narrow their focus and delve deeper into promising areas of inquiry.Descriptive research also sees significant advantages, with AI uncovering correlations and trends that might not be easily detected through manual analysis [7].Automation supports this by efficiently collecting extensive data from diverse sources, ensuring a thorough research process.Moreover, in explanatory research, AI's ability to simulate complex cause-and-effect relationships gives researchers a deeper understanding of the underlying mechanisms that drive observed phenomena [8].That makes AI an invaluable tool for testing and refining hypotheses, as it allows researchers to analyze complex datasets, identify patterns, and simulates various scenarios with high precision, leading to more accurate and insightful conclusions about the underlying mechanisms of the phenomena under study.AI's ability to manage vast datasets and discover patterns may improve the evaluation of complex causal links between research and its outcomes [9].Implementing blockchain technology in this context further strengthens the evaluation process by ensuring the integrity and transparency of data, which are crucial for establishing the trustworthiness of research findings [10].Quantitative research significantly benefits from AI's ability to manage large datasets, perform complex statistical analyses, and develop predictive models [11].Automation complements this by streamlining data collection through sensors, online surveys, and other digital methods, reducing the likelihood of human error and enhancing overall efficiency.In qualitative research, AI tools like natural language processing (NLP) are used to analyze interview transcripts, focus group discussions, and other qualitative data, helping to identify themes and patterns [12].Automation further supports this process by organizing and coding the data, leading to a more systematic and efficient analysis.Mixed-methods research combines AI and automation's strengths to seamlessly integrate qualitative and quantitative data, resulting in a comprehensive study that bridges different data types and provides deeper insights [13].
From an approach-based perspective, experimental research significantly benefits from automation, enhancing precision and consistency.Automation allows for precise control over experimental settings and ensures effective management of variables across repeated trials [14].AI's real-time analysis capabilities enable the dynamic adjustment of variables, optimizing outcomes as experiments progress.In correlational research, AI's ability to analyze extensive datasets surpasses traditional methods, uncovering correlations that might otherwise go undetected using conventional techniques [15].Longitudinal research is also improved by automation's ability to consistently collect data over extended periods without human intervention, while AI monitors temporal changes and predicts future trends based on historical data [16].In cross-sectional research, AI quickly identifies patterns and anomalies in data collected from various sources [17].Automation facilitates the simultaneous collection of diverse data points from multiple populations.In case study research, AI enhances analysis by efficiently processing large amounts of contextual data, providing deep insights essential for understanding complex case dynamics [18].Additionally, AI enhances the dynamism and responsiveness of action research by offering real-time feedback and suggestions, thereby improving the effectiveness of the planning, acting, observing, and reflecting cycle.The application of Research 5.0 technologies, such as AI and digital tools, has significantly enhanced ethnographic research [19].These advanced technologies streamline the analysis of cultural and behavioral data, allowing researchers to efficiently identify patterns and themes that would have been challenging to detect manually.The integration of Virtual Reality (VR) and Augmented Reality (AR) adds a new dimension to this field, enabling researchers to create immersive simulations of cultural environments that offer deeper and more nuanced insights into the dynamics within the communities being studied [20].VR allows researchers to fully immerse themselves in and interact with recreated cultural settings, enhancing their understanding of these contexts.Meanwhile, AR overlays digital information onto real-world environments, enriching the exploration of cultural phenomena.These technologies democratize ethnographic research by making it more accessible and fostering collaboration.They enable the creation of virtual field sites that can be shared with scholars, students, and the public, leading to a more comprehensive and inclusive exploration of cultural diversity [21].Research 5.0 fundamentally transforms research methodologies by redefining the role of timing [22].Integrating real-time data analysis and predictive modeling enhances the immediacy and accuracy of research insights.In prospective research, AI analyzes current data to generate predictions about future scenarios, guiding the research process toward more informed conclusions [23].Automation supports this process by continuously monitoring and adjusting variables and optimizing future outcomes.In retrospective research, AI's ability to analyze past data deepens the understanding of historical events by uncovering patterns and cause-and-effect relationships [24].Concurrently, blockchain technology ensures the integrity and immutability of historical data, thereby reinforcing the accuracy and reliability of research findings [25].AI's ability to simulate complex theoretical models has significantly advanced data analysis, allowing theories to be tested in virtual environments before being applied in real-world scenarios [26].Automation further enhances this process by handling repetitive tasks, enabling researchers to focus on the more creative aspects of theoretical exploration.Empirical research also benefits from these innovations, as AI enhances data collection and analysis, resulting in more accurate and reliable findings [27].Automation speeds up the empirical testing process, enabling faster iteration and improvement of research methodologies.This combination of AI and automation streamlines the research workflow.It improves the precision of experimental designs and the robustness of empirical results, ultimately driving more effective and innovative solutions to complex research problems [28].Controlled research benefits from AI's ability to optimize variable management in experimental setups, ensuring precision and reducing human error [29].Automation maintains consistent conditions across multiple experiments, enhancing the reproducibility of research outcomes.In uncontrolled research, AI's capacity to analyze uncontrolled variables allows researchers to identify patterns and understand complex data, while digital tools assist in monitoring and documenting uncontrolled research environments [30].This combination of AI and automation improves the accuracy and reliability of research findings and facilitates a deeper understanding of the complex dynamics within uncontrolled settings, leading to more nuanced insights and robust conclusions.Research 5.0 also impacts the research scale, whether focused on micro or macro levels.In micro research, AI provides detailed analysis at a small scale, uncovering insights that broader methodologies might miss [31].Automation handles complex data collection and processing tasks, enabling researchers to focus on specific details.In macro research, AI is utilized to manage and analyze large-scale datasets, revealing trends and patterns at a broader level [32].Automation supports this by efficiently gathering data from large populations or extensive geographic areas, ensuring comprehensive coverage.Together, these technologies enhance the depth and breadth of research, allowing for more nuanced understanding and more extensive analysis across various scales [33].Research 5.0 has a profound impact on research guided by philosophical methodologies.In positivist research, AI provides objective, data-driven insights, reinforcing the positivist focus on observable and quantifiable phenomena, while automation ensures consistency in data collection and analysis [34].Interpretive research benefits from AI technologies like natural language processing (NLP), which assist in analyzing subjective experiences and interpretations, offering more profound insights into human behavior [35].Automation supports this process by organizing and coding qualitative data, enhancing the systematic nature of the interpretivist approach.In critical research, AI examines power dynamics and inequalities within datasets, uncovering systemic issues [36].Additionally, digital tools facilitate participatory research, empowering marginalized voices and fostering inclusivity.Research 5.0 technology significantly enhances certain research forms, such as meta-analyses and systematic reviews [37].AI efficiently interprets and evaluates data from multiple studies, identifying overarching patterns and trends.Automation supports this process by streamlining the collection and organization of data from various sources.Blockchain technology ensures transparency and reduces bias in the review process, thereby increasing the credibility of the findings [38].Integrative reviews benefit from AI's ability to synthesize findings from different approaches, leading to a more comprehensive understanding of a research topic [39].Automation aids in the seamless integration of qualitative and quantitative data.In phenomenological research, AI analyzes qualitative data and uncovers common themes and experiences among participants, thereby enhancing the phenomenological approach [40].With its efficiency in organizing and coding data, automation further supports this by streamlining the research process and saving time.In grounded theory research, AI is crucial in pattern recognition and theory generation based on data [41].Automation facilitates iterative data collection and analysis, enabling more dynamic and responsive theory development.Together, these technologies refine the processes of identifying core themes and constructing grounded theories, making research more effective and adaptable to emerging insights.Research 5.0 represents a significant leap forward in the research paradigm, offering innovative methods to enhance research efficiency, accuracy, and transparency across various fields.By integrating AI, automation, blockchain, and other digital tools into traditional research methodologies, Research 5.0 has the potential to revolutionize the research landscape, enabling it to address the complex challenges of the modern world more effectively [42].However, successfully implementing Research 5.0 requires careful consideration of ethical issues and a commitment to continuous innovation and adaptation.To fully realize the potential of Research 5.0, it is essential to balance technological advancements with ethical research practices [43].This approach will advance scientific knowledge and contribute to creating a more equitable, inclusive, and sustainable global society.
Background
The evolution of research methodology has been a dynamic and continuous process driven by the need to address increasingly complex scientific, social, and technological challenges.Historically, research paradigms have transitioned from foundational models such as positivism and interpretivism to more comprehensive approaches that recognize the value of integrating diverse perspectives and methods [44].These shifts have been prompted by the limitations of traditional techniques in effectively tackling the multifaceted nature of real-world problems, necessitating the development of more inclusive and flexible research frameworks.Within this evolving landscape, Research 5.0 has emerged as a groundbreaking approach that integrates advanced technologies like AI, automation, blockchain, and other cutting-edge tools into every phase of the research process, representing a significant leap forward in research methodology [45].Research 5.0 builds upon the advancements of earlier paradigms, particularly those that emphasized interdisciplinary collaboration and the integration of diverse research methods.What sets Research 5.0 apart is its focus on embedding digital technology throughout the entire research process, fundamentally transforming how research is conceived, executed, and applied [46].This integration goes beyond mere enhancements; it represents a profound shift that delivers unprecedented levels of accuracy, efficiency, and scalability, previously unattainable capabilities.As a result, Research 5.0 is uniquely equipped to address the complex and interconnected challenges of the modern world [47].The problem lies in the increasing inadequacy of traditional research methodologies in addressing these modern scientific challenges, necessitating the integration of advanced technologies like AI, automation, and blockchain within the Research 5.0 paradigm.However, this paradigm currently lacks comprehensive implementation and governance frameworks, particularly regarding ethical considerations and the preservation of human elements in research.The emergence of AI has profoundly influenced Research 5.0, fundamentally transforming how data is analyzed and interpreted across various fields [48].AI is a powerful tool, enhancing researchers' ability to manage large and complex datasets, identify patterns, and generate insights that traditional methods might overlook.AI-powered algorithms can efficiently process vast amounts of data, uncovering correlations, patterns, and anomalies that form the foundation for new ideas and theories [49].This capability is particularly beneficial in fields such as healthcare, environmental research, and economics, where the scale and complexity of data can be overwhelming.By streamlining data processing, AI allows researchers to focus on more creative and interpretative tasks, accelerating the pace of discovery and innovation [50].Automation and robotics are vital elements of Research 5.0, as they enhance the consistency and reliability of experimental research [51].Traditional research methods have often involved laborintensive and repetitive tasks that are prone to human error.Automation addresses these challenges by optimizing workflows, ensuring that experiments are conducted under uniform conditions, and capturing data with the highest precision [52].Robotics can streamline laboratory processes, such as sample preparation and data collection, reducing variability and improving the repeatability of results [53].This level of experimental precision is particularly critical in fields like chemistry, biology, and engineering.Moreover, automation enables researchers to conduct large-scale experiments that would be impractical or impossible with manual methods, thereby expanding the scope and scale of scientific inquiry [54].Blockchain and other digital ledger technologies are crucial in enhancing transparency and accountability in the research process [55].A persistent challenge in research is maintaining data integrity and ensuring the reproducibility of results.In recent years, data manipulation, selective reporting, and other unethical practices have undermined the credibility of scientific research [56].Blockchain technology addresses these issues by providing a secure and transparent platform for managing research data, intellectual property, and collaborative efforts [57].By creating immutable records of data provenance and research outcomes, blockchain ensures the complete traceability and verifiability of the research process, which is essential for maintaining the credibility and trustworthiness of scientific findings.Transparency is especially vital in collaborative environments where multiple stakeholders are involved in data collection and analysis, making blockchain an invaluable tool for safeguarding the integrity of scientific inquiry [58].However, integrating these advanced technologies into research also brings significant ethical challenges.AI, automation, and blockchain use introduce new concerns about data privacy, algorithmic bias, and the potential misuse of research findings.AI algorithms, often trained on large datasets that may contain inherent biases, can inadvertently reinforce or even amplify existing prejudices, raising questions about the fairness and objectivity of AI-generated research outcomes [59].Additionally, the automation of research processes may reduce human oversight, increasing the risk of errors or unethical practices going unnoticed [39].While blockchain offers enhanced security and transparency, it also presents challenges related to data governance and intellectual property management in a decentralized environment [57].These ethical considerations must be carefully addressed to ensure that the benefits of Research 5.0 are realized without compromising the integrity and fairness of the research process.The purpose of this paper is to explore and critically analyze the implementation of Research 5.0 by integrating AI, automation, and blockchain into traditional research methodologies, providing a framework for their application across various research types while addressing the ethical challenges and implications for the future of scientific inquiry.The global implementation of Research 5.0 technologies is underway, with numerous sectors and organizations recognizing their transformative potential.Research 5.0 is being applied in medicine, environmental science, and social policy to address critical global challenges like climate change, public health, economic development, and social equity [1].AI is used to model the spread of diseases, predict the impacts of climate change, and develop personalized medical treatments.Simultaneously, blockchain technology ensures the integrity of clinical trial data and protects patient privacy [60].These examples demonstrate the ability of Research 5.0 to drive innovation and enhance decision-making across multiple disciplines, highlighting its crucial role in addressing the complex issues facing society today.While Research 5.0 holds the potential to drive significant advancements, its implementation comes with challenges.Integrating modern technologies into the research process requires substantial investments in infrastructure, training, and education [61].Researchers need specialized skills and deep knowledge to effectively use these new tools, necessitating a reevaluation of research education.That includes developing curricula incorporating advanced technologies, fostering interdisciplinary collaboration, and cultivating continuous learning and adaptability.Additionally, the successful implementation of Research 5.0 depends on creating supportive policies and institutional frameworks that encourage innovation while maintaining ethical standards [28].Furthermore, beyond the practical challenges, Research 5.0 raises more profound philosophical questions about the impact of these technologies on the very essence of research [14].As automation and data-driven approaches become more prevalent, there is a risk of diminishing the critical human elements of creativity, intuition, and critical thinking [4].That raises important questions about the role of the researcher within the Research 5.0 paradigm and how to ensure that technology enhances, rather than replaces, the human aspects of research.It underscores the need for a balanced approach that integrates traditional research methods with advanced technologies, ensuring that the research process remains comprehensive, inclusive, and attuned to the complexities of the world we seek to understand.This study is significant for several reasons.First, it addresses the urgent need for innovative research methodologies that meet the demands of contemporary scientific inquiry, such as handling large datasets, rapid and accurate analysis, and the growing emphasis on transparency and reproducibility.The study raises awareness within the academic and scientific communities about the appropriate integration of new technologies into research processes, offering practical insights into their application across various sectors.Additionally, the study is essential for exploring the ethical challenges associated with Research 5.0, including concerns about data privacy, algorithmic bias, and the potential for technology to overshadow the human elements of research.Finally, this study is valuable to policymakers and organizations as it provides a framework for regulating these emerging technologies, ensuring they are used to advance scientific knowledge while upholding ethical standards.The main research question addressed by this paper is: How can the integration of AI, automation, and blockchain in the Research 5.0 paradigm effectively address the limitations of traditional research methodologies while maintaining ethical integrity and preserving the human elements of scientific inquiry?
Theoretical Framework
This study investigates the integration of advanced technologies within the Research 5.0 framework, which is structured around three key concepts: AI, automation, and blockchain.It focuses on their transformative potential across various research fields.These technologies are essential for addressing the growing complexities of contemporary scientific inquiry by enhancing the efficiency, accuracy, and ethical integrity of research processes [51;60].AI has revolutionized traditional research methodologies by enabling the analysis of large datasets, uncovering patterns, and generating novel insights.AI applications in research are diverse, ranging from automating labor-intensive tasks such as literature reviews and predictive modeling to conducting complex data analyses [61].These capabilities significantly accelerate the research process while improving the reliability and comprehensiveness of the findings.The importance of AI in this context goes beyond speed and scale, as it allows for in-depth analysis that can reveal correlations and trends that might otherwise be overlooked in traditional research methods [49].Automation enhances AI by systematically managing the repetitive and time-consuming tasks inherent in many research methodologies [53].This approach ensures that experiments are conducted under consistent conditions, which are vital for maintaining accuracy and reproducibility-core tenets of scientific research.The role of automation in research extends beyond mere task execution; it enables extensive experimentation and continuous data monitoring, allowing researchers to conduct more thorough and detailed studies than those possible through manual methods [14].Precision in experimental conditions is especially critical in fields like biology, chemistry, and engineering, where it directly influences the validity of research outcomes.By reducing human error and increasing the scalability of research activities, automation boosts the efficiency and reliability of the research process, ultimately leading to more robust scientific findings [19].Blockchain technology enhances transparency and accountability in the research process by providing a secure and decentralized platform for managing research data and intellectual property [57].In an environment where the credibility of scientific research is often questioned due to issues like data manipulation and biased reporting, blockchain offers a solution that ensures the traceability and verifiability of all research activities.This technology is particularly beneficial in collaborative research settings, where maintaining trust and transparency among various stakeholders is crucial [5].Blockchain ensures the integrity of scientific outputs by creating immutable records of data provenance and research findings, thereby fostering a culture of transparency and accountability.In a research landscape where replicability and data validation are increasingly under scrutiny, blockchain plays a vital role in safeguarding the authenticity of research data, ensuring that scientific advancements are built on a foundation of trust and reliability [58].The study is grounded in a theoretical framework integrating complexity theory, systems theory, ethical frameworks in technology, and innovation diffusion theory.Complexity theory is particularly relevant as it provides valuable insights into the interactions between AI, automation, and blockchain within the complex ecosystems that define modern research environments [62].This theory highlights these technologies' interconnected and dynamic nature, emphasizing how their integration can be optimized to enhance research outcomes.By understanding the intricate relationships within research processes, researchers can effectively leverage these technologies to address the multifaceted challenges of contemporary scientific inquiry.Systems Theory offers a holistic view of the research process, considering it an interconnected system where integrating advanced technologies can systematically improve efficiency, accuracy, and scalability [63].This perspective is crucial for exploring the potential applications of AI, automation, and blockchain across various research domains to create a cohesive and effective research process.Considering the ethical implications of integrating modern technologies, ethical frameworks in technology are essential for establishing a solid theoretical foundation [64].Deontological ethics and utilitarianism are key frameworks for assessing the ethical consequences of using AI, automation, and blockchain in research.Deontological ethics emphasizes the importance of adhering to ethical principles and ensuring these technologies' responsible and transparent implementation [65].That is particularly vital in research contexts where risks such as data privacy breaches and algorithmic bias pose significant challenges.Utilitarianism, which focuses on maximizing overall happiness or well-being for the most significant number of people, offers a valuable lens for evaluating the broader societal impacts of these technologies [66].By incorporating these ethical frameworks, the study ensures that the integration of advanced technologies into research enhances outcomes and aligns with broader social values and ethical standards.Innovation diffusion theory is crucial for understanding AI, automation, and blockchain adoption and implementation in research settings.This theory examines the various factors influencing the uptake of these technologies, including perceived benefits, compatibility with existing research methods, and the impact of societal and institutional norms [67].Innovation diffusion theory provides valuable insights into the challenges and barriers that may hinder the widespread adoption of these technologies [68].It also offers strategies for overcoming resistance and facilitating their integration into mainstream research practices.By applying this theory, practical approaches can be developed to promote the broad acceptance and use of AI, automation, and blockchain technologies, ensuring their transformative potential is fully realized within the research community.The study's theoretical framework integrates complexity theory, systems theory, ethical frameworks in technology, and innovation diffusion theory to develop a comprehensive understanding of how AI, automation, and blockchain can be effectively incorporated into the Research 5.0 paradigm.This multidisciplinary approach is essential for evaluating emerging technologies' benefits, challenges, and ethical implications, offering valuable insights into their potential to revolutionize research methodologies and enhance scientific knowledge in a responsible and impactful way.The study seeks to bridge the gap between technological innovation and ethical research practices to ensure that the adoption of Research 5.0 technologies contributes positively to the advancement of scientific inquiry.
Research Method and Design
This integrative literature review (ILR) seeks to synthesize theoretical and empirical research to comprehensively understand how modern technologies, including AI, automation, and blockchain, are integrated into the Research 5.0 paradigm.The ILR approach allows for an in-depth analysis of how these technologies are transforming research practices across various fields, focusing on their impact on efficiency, accuracy, and ethical considerations [69].The primary objective of this ILR is to consolidate findings from multiple studies, theories, and perspectives to develop a robust conceptual framework that can guide future research in this emerging domain.The review draws from a wide range of sources, including peer-reviewed articles, books, conference papers, reports, and reputable online publications, to identify patterns, common themes, and significant gaps in the literature [70].Ultimately, it aims to provide a nuanced understanding of how AI, automation, and blockchain are shaping the landscape of scientific inquiry within the Research 5.0 framework.The methodology employed in this ILR is particularly well-suited for exploring the complexities of integrating AI, automation, and blockchain into research processes.The researchers began by identifying key advancements and emerging trends related to these technologies, with a focus on their potential to fundamentally transform traditional research methods [71].The ILR approach emphasizes a meticulous and systematic process for collecting and analyzing data, ensuring that the review is both comprehensive and unbiased.This ILR method includes specific sampling criteria to ensure that the selected literature accurately represents the current state of knowledge in the field, with particular attention to sources that address the implications of these technologies for research practices, ethical considerations, and future developments [72].
The structured data collection phase of the ILR is aligned with the study's core objectives, enabling researchers to critically evaluate the quality and relevance of the selected studies.Data collection should be systematic and comprehensive, ensuring all relevant literature is thoroughly reviewed to provide a solid foundation for analysis and synthesis [73].This approach is essential for constructing a coherent and comprehensive narrative that deepens our understanding of the impact of AI, automation, and blockchain within the Research 5.0 paradigm.By adopting this methodology, the review not only provides a thorough examination of the literature but also contributes to the development of a more nuanced understanding of how these technologies are reshaping the landscape of scientific inquiry.The ILR approach effectively synthesizes diverse perspectives and research findings from various academic and industry sources [74].It is particularly well-suited for studying the integration of AI, automation, and blockchain into research processes.These technologies are inherently interdisciplinary, and the ILR provides a comprehensive view of their adoption and integration by drawing on insights from various fields, including technology, ethics, and research methodologies.The goal is to analyze patterns, identify barriers, and highlight opportunities related to adopting these technologies, ultimately offering a detailed understanding of how they can enhance research outcomes and address the challenges of modern scientific inquiry.The primary objective of this ILR is to investigate the key factors influencing the successful integration of AI, automation, and blockchain into research practices while also exploring specific use cases, ethical considerations, and the potential impact of these technologies on research methodologies.By thoroughly analyzing and synthesizing existing literature, the ILR aims to identify recurring themes, emerging trends, and knowledge gaps essential for advancing our understanding of Research 5.0 [75].The integrative approach allows for comparing diverse theories and evidence, fostering a holistic comprehension of the issues [76].Carefully selected criteria guide the review, considering the central research question, stakeholders, technologies, and desired outcomes.This methodology supports the development of a robust theoretical foundation and analytical framework, crucial for guiding future research in the evolving landscape of Research 5.0.The ILR's methodological framework follows a systematic and comprehensive approach, encompassing five key stages: 1) defining the problem, 2) gathering data, 3) assessing the data, 4) analyzing and interpreting the data, and 5) presenting the findings [77].This review began by clearly articulating the study's objectives, scope, and focus, explicitly targeting the integration of AI, automation, and blockchain into research methodologies and identifying the main challenges and opportunities associated with this integration.Relevant keywords and phrases, such as "Artificial Intelligence," "Automation," "Blockchain," and "Research 5.0," were carefully selected and combined using logical operators to create complete search strings.These search queries were then used to extensively explore scholarly databases, journals, digital libraries, and repositories.The data collection process was meticulously aligned with the study's central research question, ensuring the systematic and comprehensive capture of relevant material.After gathering the data, the selected literature was meticulously examined and categorized based on themes, methodologies, key findings, challenges, and opportunities for implementing AI, automation, and blockchain in research.This analysis identified patterns and insights crucial for understanding the current state of Research 5.0 and its implications for the future of scientific inquiry.The final phase of the ILR involved synthesizing the findings to provide a comprehensive overview of the current role of these technologies within the research landscape.A thorough citation search was conducted in both directions to ensure the review's completeness.Detailed documentation of the search process was maintained to ensure the integrity and replicability of the ILR.
A key concern for the credibility of this ILR is the potential for discrepancies between the selected studies and the broader framework of Research 5.0.To mitigate this risk, several strategies were implemented: 1) a comprehensive data collection approach, 2) detailed documentation of data sources, including publication years and keywords used, and 3) careful consideration of selection bias during the literature evaluation process [78].The search strategy covered a wide range of scholarly databases and search engines, including Google Scholar, IEEE Xplore, ACM Digital Library, PubMed, Web of Science, and Scopus, ensuring that the literature reviewed was both extensive and representative of the most relevant research in the field.Search terms were deliberately combined to cover a broad spectrum of literature, and additional targeted searches were conducted in specialized databases to focus on specific aspects of AI, automation, and blockchain within the context of research 5.0.In instances where recent research specifically focused on integrating these technologies within the Research 5.0 paradigm was lacking, the review utilized related literature to provide context and insights.The ILR approach was chosen for its ability to synthesize information from various sources, allowing for a deep understanding of the complex aspects of AI, automation, and blockchain integration in research.
The ILR method provides a comprehensive and detailed analysis of patterns, trends, and gaps in the existing literature [79], making it particularly well-suited for examining the intricate dynamics of these technologies in the context of Research 5.0.This methodology ensures that the findings of this study contribute significantly and meaningfully to the ongoing discourse about the future of research and the role of advanced technologies in shaping scientific inquiry.Tables 1, 2, and 3 provide a concise summary and ranking of the selected articles based on their citation counts.This ranking gives readers an understanding of the relative influence and significance (as indicated by rank) of the arguments presented in the current literature concerning integrating AI, automation, and blockchain technologies within the Research 5.0 paradigm.These tables are valuable tools for assessing the importance and credibility of the contributions made by various studies, allowing readers to gauge the impact of each article in shaping the discourse on new technologies in research methodology.The rankings highlight the significance of each study in the field, helping readers identify which works have had the most substantial impact on the ongoing conversation about the transformative potential of Research 5.0 technologies.
Findings of the Study Enhanced Research Efficiency and Accuracy through AI Integration
Integrating AI into research methodologies marks a significant leap forward in pursuing enhanced efficiency and precision in scientific inquiry.AI's ability to rapidly and accurately process large datasets transforms the research landscape, particularly in fields where managing complex and voluminous data is a primary challenge [11].Traditional research methods, which depend on human data processing, need to be revised to meet modern scientific investigation demands.AI addresses these limitations by automating complex data analysis tasks, enabling researchers to uncover patterns and insights that would otherwise remain hidden.This automation accelerates the research process and improves the reliability of findings by minimizing human error and introducing a level of consistency that manual approaches need help to achieve.However, while the benefits of AI in boosting research productivity are clear, a key challenge lies in ensuring that the outputs generated by AI systems are not only accurate but also ethically sound and understandable to human researchers [64].The potential for algorithmic bias and the opacity of AI-driven results necessitate careful consideration of how AI tools are designed, trained, and implemented in research contexts.Furthermore, AI's role in enhancing research accuracy is vital in predictive modeling and hypothesis testing.By leveraging machine learning algorithms, researchers can develop models that highly accurately predict outcomes based on existing data [51].These models can be continuously refined as new data becomes available, leading to progressively more precise predictions and insights.This iterative model refinement process is a powerful tool for advancing scientific knowledge, especially in fields like genomics, climate science, and epidemiology, where predictive accuracy is crucial.However, reliance on AI-driven models introduces new challenges, particularly the "black box" problem, where researchers cannot easily interpret the internal workings of AI models [50].This lack of transparency can undermine trust in AI-generated research findings, especially in critical areas such as healthcare and public policy, where the implications of research are far-reaching.Therefore, while AI has the potential to enhance research accuracy significantly, it also necessitates the development of additional tools and frameworks that ensure transparency, interpretability, and ethical accountability in AI-driven research [19].
The current body of literature highlights the profound impact of AI on research methodologies, particularly in enhancing efficiency and precision.AI's ability to manage extensive datasets and perform complex statistical analyses has revolutionized research, leading to faster and more accurate conclusions across various fields [49].For instance, healthcare research has benefited from AI's capacity to analyze vast amounts of patient data, leading to earlier disease diagnosis, personalized treatment plans, and improved patient outcomes.In environmental research, AI models have been used to predict climate change patterns with unprecedented accuracy, facilitating better-informed decision-making and policy formulation.These examples underscore AI's critical role in advancing research precision, demonstrating that AI is not merely a data analysis tool but a pivotal catalyst for innovation and discovery in modern science [23].However, the literature also underscores the challenges associated with AI integration, particularly the ethical implications of relying on algorithmic decision-making in research.Recurring concerns about algorithmic bias and the interpretability of AI models highlight the need for ongoing research into developing and applying AI tools that uphold scientific integrity and ethical responsibility [24].Moreover, the analysis of existing studies reveals a consensus on the need for greater transparency and accountability in AI-driven research.While AI enhances research productivity by automating routine tasks and accelerating data analysis, the opaque nature of many AI systems poses a significant challenge [2].The difficulty in understanding how AI models arrive at specific conclusions can lead to a lack of trust in the results, especially in fields where the research has far-reaching implications for public policy or human welfare.To address these challenges, the literature advocates for the development of explainable AI (XAI) approaches, which aim to make AI models more transparent and interpretable for human researchers [6].Additionally, there is a growing emphasis on the importance of interdisciplinary collaboration in the development and implementation of AI tools.This collaborative approach, which brings together experts from various fields, ensures that the ethical, technical, and practical aspects of AI integration are thoroughly considered.It is seen as essential for fully harnessing AI's potential in research while minimizing the associated risks, ultimately leading to more robust, reliable, and ethically sound research outcomes [7].Within the framework of Research 5.0, incorporating artificial intelligence into research methodology offers a significant chance to improve efficiency and accuracy.Nevertheless, incorporating AI presents obstacles that must be resolved to harness its capabilities thoroughly.Algorithmic bias is a significant risk that can distort outcomes if AI models are not meticulously constructed and trained [64].In order to reduce this danger, it is crucial to incorporate inherent bias detection models into AI systems and guarantee that models are trained using diverse datasets to prevent biases that originate from limited or unrepresentative data.This strategy will ensure the preservation of the authenticity of research conclusions, guaranteeing that AI-driven findings are both precise and morally justifiable.Furthermore, the lack of transparency in AI-generated outcomes, commonly known as the "black box" issue, is a substantial obstacle [17].To tackle this issue, it is essential to create Research 5.0 Explainable AI (RXAI) frameworks.These frameworks would facilitate AI systems delivering explicit and comprehensible justifications for their decision-making procedures, fostering transparency and responsibility in AI-powered research.
Another crucial element of AI integration in Research 5.0 is the intricacy associated with predictive modeling and the difficulties of effortlessly integrating AI into current research operations.In order to surmount these obstacles, it is crucial to cultivate cooperation between experts in artificial intelligence and researchers.This partnership guarantees that artificial intelligence technologies are customized to address the distinct requirements of many research fields, enabling seamless and efficient integration [43].Moreover, to ensure that researchers can effectively manage and utilize AI technologies in Research 5.0, it is essential to mandate comprehensive AI training certification from reputable companies or institutions in the field.This need will provide researchers with the necessary abilities to effectively navigate the intricacies of AI, hence improving the overall efficiency and accuracy of research procedures in Research 5.0.
Automation's Role in Streamlining Research Methodologies
Automation is crucial in optimizing research procedures, significantly enhancing the efficiency and consistency of research processes.By automating repetitive and labor-intensive tasks, researchers can significantly reduce the time and effort required to conduct experiments, collect data, and perform analyses [4].Automation tools, such as robotic systems and software automation platforms, ensure these tasks are executed with exceptional accuracy and consistency, thereby minimizing human errors and variability.Consistency in experimental conditions is vital in research, as it directly impacts the reliability and reproducibility of results [28].However, while automation improves operational efficiency, it raises concerns about the potential reduction of human oversight in critical aspects of the research process.The challenge lies in the risk of overlooking the nuances of human judgment and expertise due to over-reliance on automated systems [39].Therefore, it is essential to carefully manage the integration of automation into research procedures to ensure that it enhances rather than replaces the critical cognitive skills of human researchers.Moreover, automation extends beyond task execution to include the optimization of research workflows.Advanced automation systems can dynamically adjust experimental parameters in real time based on incoming data, thereby improving the adaptability and responsiveness of research procedures [54].This capability is precious in biotechnology and materials science, where experimental conditions often need continuous adjustment to achieve optimal outcomes.Additionally, the literature underscores that automation facilitates the execution of large-scale studies and high-throughput experiments that would be impractical or impossible to conduct manually [53].These technological advancements allow researchers to explore broader and more complex research questions, driving innovation and discovery at an unprecedented pace.However, the increasing complexity of automated systems also introduces challenges related to their maintenance, calibration, and validation, requiring specialized skills and infrastructure.Integrating human expertise into the automation process is crucial to ensure the accuracy and reliability of automated systems, continuous monitoring and updating are necessary to balance machine efficiency and human oversight [52].The analysis of current literature underscores the significant impact that automation has on optimizing research procedures, highlighting its potential benefits and associated challenges.Numerous studies have documented how automation significantly reduces the time required for data collection, analysis, and experimental processes, leading to substantial gains in overall research efficiency [30].For example, automated sequencing technologies in genomics have revolutionized the efficiency and accuracy of genetic data analysis, enabling researchers to conduct large-scale studies that were previously unimaginable.Similarly, automated synthesis machines facilitate the rapid and precise production of complex compounds in chemical research, thereby accelerating the exploration and development of new materials and potential drug candidates [67].These examples illustrate the profound effect of automation on advancing scientific research, notably by expanding the scope and scale of investigations.However, the literature also emphasizes the importance of carefully assessing automation's drawbacks and potential risks, particularly in ensuring that these technologies complement human expertise rather than replace it [59].
Further analysis of the literature reveals a broad consensus on the need for a balanced approach to automation in research processes.Scholars emphasize that while automation enhances efficiency and consistency, it also requires robust monitoring and validation systems to mitigate the risks of overreliance on automated tools [47].There is a growing recognition of the importance of interdisciplinary collaboration in the development and implementation of automation technologies, ensuring that these tools are integrated into research workflows in ways that preserve the integrity and creativity of the research process.Moreover, the literature highlights the critical role of training and education in equipping researchers with the necessary skills to effectively manage and utilize automated systems [36].As research processes become increasingly automated, the roles of researchers are evolving, requiring a blend of technical expertise, critical thinking, and ethical awareness.This shift underscores the ongoing need for continuous learning and adaptation within the research community, ensuring that the benefits of automation are fully realized without compromising the quality and rigor of scientific inquiry.Embracing this need for continuous learning and adaptation is crucial in the dynamic field of scientific research.Automation is a vital component of Research 5.0, significantly enhancing the efficiency and consistency of research processes.However, technological progress in research also introduces challenges, particularly the potential reduction of human oversight in critical decision-making areas [7].To address this issue, it is essential to implement hybrid systems that combine the expertise of human researchers with the precision and speed of automated technologies.By integrating human judgment with machine efficiency, these systems ensure that important decisions are made effectively.Investing in a comprehensive audit methodology tailored to Research 5.0 is crucial.This approach enables continuous monitoring and fine-tuning of automation processes, maintaining a balance between human intuition and automated accuracy, thereby preserving the cognitive input of researchers while benefiting from the advantages of automation.Another significant challenge in Research 5.0 is the risk of over-reliance on automated systems, which could lead to the undervaluation of essential human judgment.To mitigate this, researchers must undergo extensive training and education in automation technologies, ensuring they are well-equipped to effectively manage and apply these tools [50].Certification in automation technology should be a prerequisite for researcher validation, underscoring the importance of human expertise in the automated research environment.Furthermore, the complexity of automated systems necessitates a collaborative approach involving close cooperation between automation technology experts and specialists in specific research domains.This collaboration ensures the seamless integration of automation tools into research workflows, maintaining operational integrity and enhancing overall research outcomes [33].It also fosters a sense of inclusion and teamwork among all involved, making everyone feel integral to the research process.
Blockchain's Contribution to Data Integrity and Transparency
Blockchain technology, a critical tool for enhancing the reliability and transparency of research methodologies, also presents challenges in its integration [57].Its immutable nature ensures data integrity, making it impossible to alter or tamper with recorded data.This robust protection against data manipulation and fraud is particularly crucial in fields where data quality and authenticity are paramount, such as clinical trials, environmental studies, and social sciences.The creation of a decentralized and secure ledger by blockchain allows researchers to maintain a transparent and verifiable record of all data transactions, accessible to any interested party at any time [5].This high level of transparency not only bolsters trust in the research process but also enhances the replicability of scientific studies, a cornerstone of credible research.However, the integration of blockchain into research methodologies is not without its challenges.The complexity of blockchain systems, along with the significant computational costs and energy consumption associated with their operation, poses practical barriers to widespread adoption [38].Additionally, the decentralized nature of blockchain, while advantageous for transparency, can complicate data management and accountability, particularly in collaborative research environments involving multiple stakeholders.Blockchain technology, beyond its role in data integrity, also plays a crucial role in managing intellectual property and facilitating open science.It provides a transparent and secure platform for recording intellectual property claims, ensuring that researchers receive appropriate recognition for their contributions and reducing the potential for disputes over authorship and ownership [58].This is particularly important in transdisciplinary research, where ideas and data are frequently shared across various teams and institutions.Furthermore, blockchain supports the principles of open science by enabling the transparent and secure sharing of research data and findings with the broader scientific community.Researchers can use blockchain to publish their data openly and verifiably, ensuring that others can access, verify, and build upon their work without concerns about data manipulation or misrepresentation [42].However, the use of blockchain in these contexts raises ethical and logistical questions.For example, the immutability of blockchain records means that once errors or sensitive information are recorded, they cannot be easily corrected or removed, which could have significant implications for privacy and data protection [60].Additionally, integrating blockchain technology into existing research infrastructures requires considerable technical expertise and resources, which may be beyond the reach of smaller institutions or researchers in developing regions.The analysis of existing literature underscores the transformative impact of blockchain technology on the reliability and transparency of research methodologies.Numerous studies highlight blockchain's ability to provide a secure and immutable record of data transactions, which is essential for ensuring the trustworthiness and replicability of scientific research [25].For instance, research in the healthcare sector demonstrates that blockchain technology can effectively manage patient data and clinical trial results, safeguarding data integrity and facilitating future verification.In environmental research, blockchain has been employed to track and verify data on carbon emissions and other environmental metrics, creating a transparent and verifiable record that can be audited by regulators and other stakeholders.These examples illustrate the critical role of blockchain in enhancing data reliability across various research fields, suggesting that it is not merely a data management tool but a foundational technology for ensuring the credibility of research outcomes [10].However, the literature also points to challenges associated with incorporating blockchain into research methodologies, particularly regarding scalability and the potential for increased energy consumption, which may limit its broader adoption [26].Further analysis reveals a growing consensus on the need for a balanced approach when integrating blockchain technology into research practices.While blockchain offers significant advantages in terms of data integrity and transparency, its successful implementation requires careful consideration of both technological and ethical implications [74].Experts argue that to fully leverage the benefits of blockchain, there must be a concerted effort to develop structured frameworks and guidelines that specifically address issues related to data governance, privacy, and the environmental impact of blockchain systems.Additionally, there is an increasing recognition of the need for multidisciplinary collaboration in the development and deployment of blockchain technologies, ensuring that the diverse needs and perspectives of different research communities are adequately addressed.The literature also emphasizes the importance of education and capacity building, particularly in equipping researchers with the necessary skills and knowledge to effectively integrate blockchain into their work [55].As blockchain technology continues to evolve, ongoing research and innovation are essential to overcoming these challenges, ensuring that blockchain can be incorporated into research methodologies in a way that enhances the reliability and accessibility of scientific data.
To effectively leverage blockchain technology in Research 5.0, addressing the inherent challenges is crucial for ensuring data integrity and transparency.One of the primary obstacles is the complexity of blockchain systems, which can hinder widespread adoption among researchers [5].Developing userfriendly interfaces and tools is essential to simplify blockchain integration into research workflows, making the technology more accessible to a broader range of researchers.Additionally, the high computational costs and energy consumption associated with blockchain operations can be mitigated by investing in green energy solutions, which would minimize the environmental impact of the technology.By focusing on the development of scalable blockchain solutions, the technology can be adapted to handle the growing volume of research data without compromising performance, thus ensuring its sustainability in the long term [55].
Moreover, the decentralized nature of blockchain offers significant benefits for transparency, but it also introduces challenges related to data management and accountability [58].To address these challenges, it is crucial to establish clear roles, responsibilities, and protocols for data management.This ensures that the benefits of decentralization are realized without undermining the integrity of research processes.Additionally, creating side chains where updates or corrections can be securely recorded can address the issue of blockchain's immutability, thereby maintaining the accuracy and reliability of research data.To further enhance the effective use of blockchain in Research 5.0, investing in capacity building and education is critical.Researchers should be required to obtain certification from a recognized blockchain organization, ensuring they have the necessary skills and knowledge to effectively use the technology.Furthermore, fostering interdisciplinary collaboration between researchers and qualified technologists should be mandated as a condition for validating research, promoting a holistic approach that integrates technological expertise with academic rigor.
Critique of the Extant Literature to Identify the Future of Practice and Policy
Integrating advanced technologies such as AI, automation, and blockchain into research methodologies-coined as Research 5.0-represents a significant evolution in scientific inquiries.As outlined in this paper, this paradigm shift promises enhanced efficiency, accuracy, and transparency across various research domains, addressing the limitations of traditional methodologies.However, a critical review of the existing literature reveals both the transformative potential of these technologies and the challenges they pose.While AI, automation, and blockchain are poised to revolutionize research practices, their integration requires careful consideration of ethical, technical, and operational factors to ensure these advancements genuinely enhance the research process without compromising its integrity [60].AI's role in improving research efficiency and accuracy is well-documented.The literature consistently highlights AI's capacity to manage large datasets, identify patterns, and generate predictive models, significantly accelerating research processes [2].That is particularly evident in fields such as genomics, where AI enables the analysis of complex genetic data, leading to discoveries at a pace previously unattainable.However, the literature also underscores significant concerns regarding the opacity of AI systems, commonly referred to as the "black box" problem [80].This issue raises questions about the interpretability and trustworthiness of AI-generated results, especially in high-stakes research areas like healthcare and public policy.Addressing these concerns requires developing Research 5.0 Explainable AI (RXAI) frameworks that enhance transparency and ensure that AI-driven research findings are understandable and ethically sound.Automation is critical in streamlining research methodologies by reducing the time and effort required for data collection, analysis, and experimentation.The literature highlights how automation enhances consistency and precision in experimental conditions, leading to more reliable and reproducible research outcomes [12].For example, in chemical and biological research, automation allows for the precise control of experimental variables, reducing human error and enabling large-scale studies that would be unfeasible manually.However, there is a growing recognition of the need to balance automation with human oversight [59].More reliance on automated systems may lead to the undervaluation of human judgment and the potential for overlooking critical insights that only a human researcher can provide.To mitigate these risks, the literature advocates for hybrid systems that combine the strengths of automation with the cognitive abilities of human researchers, ensuring that automation enhances rather than replaces human expertise [53].Blockchain technology offers significant advantages in ensuring data integrity and transparency in research.Its decentralized and immutable nature provides a robust platform for managing research data, safeguarding against manipulation, and ensuring the verifiability of research findings [42].That is particularly important in collaborative research environments where multiple stakeholders are involved, and the integrity of the data is paramount.However, the literature also identifies challenges associated with blockchain systems' complexity and energy consumption, which may hinder their broader adoption [10].To fully realize the potential of blockchain in research, it is essential to develop scalable and energy-efficient blockchain solutions and user-friendly interfaces that make the technology accessible to researchers across disciplines.Additionally, the literature emphasizes the need for clear protocols and guidelines to manage the decentralized nature of blockchain, ensuring that data management and accountability are maintained [38].
Emerging from this study is the recognition that integrating AI, automation, and blockchain into research 5.0 practices-while transformative-requires a multidisciplinary approach to address the ethical, technical, and operational challenges.There is a consensus in the literature that successfully implementing such technologies depends on fostering collaboration between technologists, researchers, and ethicists to develop tools and frameworks that enhance research outcomes without compromising ethical standards [19].Moreover, this paper calls for significant investment in education and capacity building to equip researchers with the necessary skills to use these technologies effectively.Certification and continuous training in AI, automation, and blockchain are suggested as prerequisites for validating research, ensuring that researchers are competent in navigating the complexities of these advanced tools.While this integrative literature review highlights the transformative potential of Research 5.0, it also underscores the need for careful implementation and governance frameworks to address ethical and technical challenges.The future of AI-powered research practice and policy will likely be shaped by how effectively these challenges are addressed [50].By developing scalable, transparent, and userfriendly technological solutions and ensuring that researchers are adequately trained and equipped to use them, Research 5.0 can fulfill its promise of revolutionizing scientific inquiry.The new knowledge generated from this review suggests that successfully integrating AI, automation, and blockchain into research practices will require a holistic approach that balances technological innovation with ethical integrity and human oversight, ultimately leading to more robust, reliable, and ethically sound research outcomes.
Discussion and Implications of the Integrative Literature Review
This ILR on the transformative potential of Research 5.0 technologies-AI, automation, and blockchain-highlights significant advancements in research methodologies while exposing critical challenges that require careful consideration.The findings of this ILR align well with existing research and theory, demonstrating that these technologies can revolutionize the research landscape by enhancing efficiency, accuracy, and transparency.However, the paper also reveals potential discrepancies and unexpected results that merit further discussion, particularly regarding the ethical implications and practical challenges of integrating these technologies into research practices.One of the most consistent themes in this ILR is the recognition that AI, when appropriately applied, significantly enhances research efficiency and accuracy.That is particularly evident in fields that involve large datasets, such as genomics and climate science, where AI's ability to process and analyze data rapidly can lead to faster and more precise conclusions.However, this study also identifies the "black box" problem, where the inner workings of AI models are often opaque, posing challenges to the interpretability and trustworthiness of AI-driven research outcomes.This issue is well-documented in existing literature, and this review suggests that developing Research5.0Explainable AI (RXAI) frameworks is crucial to addressing these concerns [35].The alignment of these findings with existing research underscores the importance of transparency and interpretability in AI applications, which are essential for ensuring that AI-driven research outcomes are reliable and ethically sound.
In contrast, the discussion around automation in research reveals a more nuanced picture.While automation is widely recognized for its ability to streamline research processes and reduce human error, the review also highlights concerns about the potential reduction of human oversight in critical decisionmaking areas [54].This concern is less prominently discussed in existing literature, suggesting that the impact of automation on human cognition and expertise in research contexts may be underexplored.This ILR's findings indicate a need for hybrid systems that integrate human judgment with automated processes, ensuring that automation enhances rather than replaces human cognitive skills.This divergence from existing literature points to a potential gap in current research, highlighting the need for more studies that explore the balance between automation and human expertise in research methodologies.
As discussed in this paper, Blockchain technology presents a clear opportunity to enhance data integrity and transparency in research.The findings are consistent with existing research, which emphasizes blockchain's role in creating immutable and transparent records of data transactions, thereby safeguarding against data manipulation and enhancing the replicability of scientific studies.However, the study also points to practical challenges, such as the complexity and energy consumption associated with blockchain systems, which may limit their broader adoption.These challenges are the complexity and energy consumption associated with blockchain systems well-documented in the literature [5], yet this ILR suggests that developing scalable and energy-efficient blockchain solutions could mitigate these issues, making blockchain a more viable option for widespread use in research.This alignment with existing research reinforces the importance of ongoing innovation in blockchain technology to fully realize its potential in enhancing research methodologies.The interpretation of these findings must consider several factors influencing their applicability and relevance.For instance, the extent to which AI, automation, and blockchain can be effectively integrated into research practices depends on various factors, including the availability of resources, technical expertise, and institutional support [48].These factors could influence the success of implementing Research 5.0 technologies, particularly in resource-constrained environments or institutions lacking the necessary infrastructure.Additionally, ethical considerations, such as data privacy and algorithmic bias, play a significant role in shaping the interpretation of these findings.Addressing these concerns requires a holistic approach that combines technological innovation with ethical oversight, ensuring that the benefits of Research 5.0 technologies are realized without compromising the integrity of the research process.
The findings of this ILR contribute new knowledge to the existing literature by providing a comprehensive analysis of the challenges and opportunities associated with integrating AI, automation, and blockchain into research methodologies.These findings address the study problem by offering insights into how these technologies can overcome the limitations of traditional research methods, thereby enhancing the efficiency, accuracy, and transparency of scientific inquiry.Additionally, the paper contributes to advancing research practices by identifying areas where further innovation and development are needed, particularly in developing Research 5.0 Explainable AI frameworks, hybrid automation systems, and scalable blockchain solutions.This new knowledge is crucial for guiding future research and policy-making, ensuring that the integration of Research 5.0 technologies is effective and ethically responsible.The business and managerial implications of this ILR study are significant, particularly for organizations and institutions that rely on research and development (R&D) to drive innovation.Adopting Research 5.0 technologies can improve research productivity, accuracy, and data integrity, which are critical for maintaining a competitive edge in a rapidly evolving technological landscape.For businesses, investing in AI, automation, and blockchain technologies could streamline R&D processes, reduce costs, and accelerate time-to-market for new products and services.However, the study also highlights the importance of ensuring that these technologies are implemented in a way that preserves human expertise and ethical standards, which is crucial for maintaining trust and credibility in the eyes of stakeholders.Moreover, the new knowledge resulting from this ILR study contributes to promoting positive social change, particularly by aligning with several of the United Nations' 17 Sustainable Development Goals (SDGs).For example, integrating AI, automation, and blockchain in research practices supports SDG 9 (Industry, Innovation, and Infrastructure) by fostering innovation and building resilient research infrastructures.Additionally, the focus on ethical considerations and transparency aligns with SDG 16 (Peace, Justice, and Strong Institutions) by promoting the integrity and accountability of research processes.Furthermore, the potential of these technologies to accelerate research in critical areas, such as healthcare and environmental science, supports SDG 3 (Good Health and Well-being) and SDG 13 (Climate Action), respectively.By advancing these goals, the findings of this ILR study contribute to creating a more equitable, sustainable, and inclusive global society.
The discussion and implications of this ILR study highlight the transformative potential of Research 5.0 technologies while also underscoring the challenges and ethical considerations that must be addressed to ensure their successful integration into research practices.The findings contribute to advancing research methodologies by identifying critical areas for innovation and development, particularly in creating explainable AI frameworks, hybrid automation systems, and scalable blockchain solutions.These advancements have significant business and managerial implications, offering tangible improvements in research productivity and data integrity.The study's alignment with the United Nations' SDGs underscores its contribution to promoting positive social change, demonstrating how Research 5.0 technologies can be harnessed to create a more sustainable and inclusive future.While the findings offer valuable insights, it is essential to approach their application with caution, ensuring that the benefits of these technologies are realized without compromising the integrity and ethical standards of scientific inquiry.
Future Recommendations for Practice and Policy
This integrative literature review (ILR) reveals several critical recommendations for enhancing research practices under the Research 5.0 paradigm, focusing on AI, automation, and blockchain.It identifies the urgent need for transparency and interpretability in AI-driven research to address the "black box" problem.It is recommended that research institutions and policymakers prioritize the development of Research 5.0 Explainable AI (RXAI) frameworks to ensure AI models are transparent and trustworthy.Such frameworks will bridge the gap between AI-generated insights and human understanding, enhancing AI-driven research's reliability and ethical integrity.That will build trust in AI-generated findings and ensure that AI tools are used responsibly across various research disciplines.While automation has significantly improved research efficiency, it also presents the risk of reducing human oversight in decision-making [28].The ILR emphasizes the importance of preserving human cognition in research processes, recommending the incorporation of hybrid automation systems.These systems, which combine the strengths of automation with human cognitive abilities, should be designed to integrate human judgment into automated processes.This approach ensures that automation complements rather than replaces human decision-making, mitigating the risk of over-reliance on automation.Moreover, it underscores the value of human cognition in research processes, leading to more nuanced and contextually relevant research outcomes.
Blockchain technology offers significant advantages in ensuring data integrity and transparency, but its adoption is often hindered by complexity, scalability, and energy consumption concerns [57].Based on the findings of this ILR, it is recommended that research institutions and policymakers invest in developing scalable and energy-efficient blockchain solutions that are tailored to the specific needs of research environments.These solutions should focus on reducing the computational costs associated with blockchain operations while maintaining the security and transparency benefits that make blockchain a valuable tool in research.Addressing these challenges will make blockchain more accessible to a broader range of researchers, fostering more widespread adoption of this technology in the academic and scientific communities [25].
Integrating AI, automation, and blockchain into research methodologies requires a high level of technical expertise [80].To ensure that researchers are adequately prepared to leverage these technologies effectively, it is recommended that research institutions implement comprehensive training and certification programs focused on AI, automation, and blockchain.These programs should be designed to equip researchers with the necessary skills to navigate the complexities of these technologies, ensuring they can apply them ethically and effectively in their work.This investment in education will enhance researchers' capabilities and ensure that these advanced technologies are integrated into research practices in a manner that upholds the highest ethical standards.
Given the ethical challenges associated with AI, automation, and blockchain, it is crucial to establish comprehensive ethical guidelines and governance frameworks that address these issues within the Research 5.0 paradigm.These frameworks should include clear protocols for managing data privacy, algorithmic bias, and the potential misuse of research findings.The findings of this ILR indicate that without such guidelines, adopting these technologies could lead to ethical breaches that undermine research integrity.Establishing robust ethical frameworks will help safeguard research outcomes' credibility and societal impact while fostering public trust in scientific advancements [64].
Integrating AI, automation, and blockchain into research practices requires collaboration between researchers, technologists, and ethicists [4].Research institutions should actively promote interdisciplinary collaboration to ensure that these technologies are implemented in a way that meets the diverse needs of different research fields.Such collaboration is essential for addressing the technical, ethical, and operational challenges associated with Research 5.0 technologies.Fostering these crossdisciplinary partnerships will ensure that the implementation of these technologies is both innovative and aligned with the broader goals of the research community [27].
Given the limitations of this ILR, particularly in terms of the scope of literature reviewed and the evolving nature of Research 5.0 technologies, future research should focus on several key areas.First, more empirical studies are needed to explore the practical implementation of Research 5.0 Explainable AI (RXAI) frameworks in various research contexts.Additionally, future research should investigate the impact of hybrid automation systems on research efficiency and accuracy, focusing on how these systems can be optimized to balance human and machine contributions effectively.This focus on empirical evidence will help refine theoretical models and ensure they are grounded in real-world applications, enhancing their relevance and effectiveness [61].Furthermore, research is needed to examine the scalability and sustainability of blockchain technology in research environments, particularly in terms of energy consumption and operational costs.Such studies help identify the most viable blockchain solutions for different research contexts, ensuring blockchain can be widely and effectively adopted.
Conclusions
This ILR explored the transformative potential of integrating advanced technologies-AI, automation, and blockchain-into research methodologies, collectively known as Research 5.0.The primary problem addressed in this study was the increasing inadequacy of traditional research methodologies in handling the complexities of modern scientific challenges, necessitating the adoption of more sophisticated tools and approaches.The purpose of this study was to critically analyze the implementation of Research 5.0 by integrating these technologies into existing research practices while addressing the associated ethical challenges.The significance of this research lies in its potential to revolutionize scientific inquiry by enhancing the efficiency, accuracy, and transparency of research processes, thus paving the way for more innovative and reliable scientific outcomes.The findings of this ILR support the conclusion that AI, automation, and blockchain hold immense potential for transforming research methodologies.AI's ability to process large datasets and generate predictive models can significantly enhance research efficiency and accuracy, particularly in dataintensive fields such as genomics and climate science.However, the study also highlighted the "black box" problem, emphasizing the need for Research 5.0 Explainable AI (RXAI) frameworks to ensure that AI-generated insights are interpretable and trustworthy.Similarly, automation was found to be highly effective in streamlining research processes and reducing human error, but it also raised concerns about the potential loss of human oversight.The study concluded that hybrid automation systems, which integrate human expertise with automated processes, are essential for maintaining the balance between efficiency and human cognition.Blockchain technology was identified as a powerful tool for ensuring data integrity and transparency in research [42].This study concluded that blockchain's immutable and decentralized nature makes it an ideal solution for managing research data, particularly in collaborative environments where data security and verifiability are paramount.However, the findings also revealed challenges related to the complexity, scalability, and energy consumption of blockchain systems, suggesting that future research and development should focus on creating more accessible and sustainable blockchain solutions.These conclusions underscore the importance of ongoing innovation and interdisciplinary collaboration to fully realize the potential of blockchain in enhancing research methodologies.
The study's findings also led to the conclusion that education and training are critical components in successfully implementing Research 5.0 technologies.Researchers must be equipped with the necessary skills to utilize AI, automation, and blockchain effectively, ensuring that these technologies are applied ethically and efficiently.The study recommended that research institutions develop comprehensive training and certification programs to prepare researchers for the complexities of these technologies.
Additionally, the study emphasized the need for robust ethical guidelines and governance frameworks to address the ethical challenges associated with integrating these technologies into research practices, ensuring that the benefits of Research 5.0 are realized without compromising research integrity.
In conclusion, this ILR has provided valuable insights into the potential and challenges of Research 5.0 technologies, offering a roadmap for their effective integration into research methodologies.The study's findings highlight the need for a balanced approach that combines technological innovation with ethical oversight and human expertise.As research methodologies evolve to meet the demands of an increasingly complex scientific landscape, the successful adoption of AI, automation, and blockchain will be crucial in driving scientific progress [6].This study concludes with a strong message: The future of research lies in seamlessly integrating advanced technologies with ethical and human-centered approaches, ensuring that scientific inquiry remains innovative and responsible.Embracing Research 5.0 unlocks new possibilities for discovery and innovation, ultimately contributing to a more sustainable, inclusive, and equitable world.
Finally, future research should explore the development and implementation of ethical guidelines and governance frameworks for Research 5.0 technologies.This research will be crucial in creating scalable, sustainable, and ethically sound solutions that can seamlessly integrate into diverse research environments, thereby promoting long-term sustainability and ethical integrity.The next logical step in this line of research is to conduct field studies and pilot projects that test the implementation of the recommendations outlined above.These studies should focus on real-world applications of Research 5.0 Explainable AI, hybrid automation systems, and blockchain solutions in various research domains.By gathering empirical data on the effectiveness of these technologies in practice, researchers can refine and improve upon the theoretical frameworks and recommendations proposed in this ILR.Additionally, future studies should explore the long-term implications of integrating Research 5.0 technologies, particularly their impact on research culture, ethics, and societal outcomes.This ongoing research will play a pivotal role in shaping the future landscape of scientific inquiry, ensuring that it evolves in a direction that is both innovative and aligned with the values of the research community. | 16,178.4 | 2024-08-15T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Synthesis and Vulcanization of Polymyrcene and Polyfarnesene Bio-Based Rubbers: Influence of the Chemical Structure over the Vulcanization Process and Mechanical Properties
The overuse of fossil-based resources to produce thermoplastic materials and rubbers is dramatically affecting the environment, reflected in its clearest way as global warming. As a way of reducing this, multiple efforts are being undertaken including the use of more sustainable alternatives, for instance, those of natural origin as the main feedstock alternative, therefore having a lower carbon footprint. Contributing to this goal, the synthesis of bio-based rubbers based on β-myrcene and trans-β-farnesene was addressed in this work. Polymyrcene (PM) and polyfarnesene (PF) were synthesized via coordination polymerization using a neodymium-based catalytic system, and their properties were compared to the conventional polybutadiene (PB) and polyisoprene (PI) also obtained via coordination polymerization. Moreover, different average molecular weights were also tested to elucidate the influence over the materials’ properties. The crosslinking of the rubbers was carried out via conventional and efficient vulcanization routes, comparing the final properties of the crosslinking network of bio-based PM and PF with the conventional fossil-based PB and PI. Though the mechanical properties of the crosslinked rubbers improved as a function of molecular weight, the chemical structure of PM and PF (with 2 and 3 unsaturated double bonds, respectively) produced a crosslinking network with lower mechanical properties than those obtained by PB and PI (with 1 unsaturated double bond). The current work contributes to the understanding of improvements (in terms of crosslinking parameters) that are required to produce competitive rubber with good sustainability/performance balance.
Introduction
More than 25 million tons of rubber are produced every year, most of which correspond to natural rubber (NR), or petroleum-based alternatives, such as butadiene-based rubbers alternatives (e.g., PB, NBR, and SBR) or isoprene-based rubbers (PI), among others [1]. There are several challenges from the current rubber alternatives. For instance, for NR, collection and processing are manual processes, and therefore, the production rate cannot satisfy the consumer demand (and the rubber industry is a growing market). On the other hand, due to the negative environmental impact of using petroleum as raw material, upcoming regulations are promoting the use of natural raw materials to produce biobased materials. An ideal rubber material would show a good sustainability/performance balance, allowing to tailor the molecular weight characteristics and the copolymerization with other monomers for specific applications [2].
A prominent interest has emerged for the β-myrcene and trans-β-farnesene as sustainable alternatives, which are two bio-based monomers belonging to the family of terpenes. These monomers are obtained from natural sources that share similar (chemically and in terms of electronic properties) conjugated dienes such as 1,3-butadiene and isoprene [3][4][5]. In analogy to 1,3-butadiene and isoprene, the 1,3-diene moiety of β-myrcene and trans-βfarnesene enables its free-radical [6,7], anionic [8][9][10], and coordination [11,12] polymerization. Additionally, their (co)polymerization leads to sustainable rubbers [4,13] (PM and PF), which can be crosslinked via sulfur [3,4,10,[14][15][16], thiol-ene [17], or other [18,19] types of crosslinking agents. Crosslinking of these bio-based rubbers is essential as all conjugated diene-based rubbers by themselves do not have the mechanical properties required for certain applications. Therefore, seen from an industrial point of view, the use of rubbers for engineering applications requires their transformation to a crosslinked state.
Several authors have studied the potential of terpene-based elastomers. For example, Zhang et al. prepared partially sustainable elastomers based on styrene and biogenic β-myrcene by living anionic polymerization and subsequently crosslinked them with sulfur. The presence of styrene in the copolymers improves the wet slip and rolling resistance properties [10]. On the other hand, Zhang et al. introduced β-myrcene into a polymerized solution of styrene-butadiene through anionic copolymerization to partially replace the butadiene in styrene-butadiene rubber. In addition, they carried out the crosslinking with sulfur. The results showed an improvement in the resistance to wet sliding with the incorporation of myrcene, in addition to an increase in tensile strength [16]. Although these reports support the interest in the synthesis of sustainable rubbers to generate materials, there are still very scarce scientific literature reports regarding the crosslinking and the evaluation of the mechanical properties of these bio-based terpenes. It is crucial to understand the optimum reaction conditions to produce performance-competitive bio-based rubbers, and benchmarking compared to readily commercial alternatives.
In this work, a neodymium-based (Ziegler-Natta type) catalytic system was used to synthesize the bioelastomers. This type of catalyst is highly active and leads to stereoregular polymers with high content of the 1,4-cis microstructure, which is needed for adequate elasticity, high fatigue and crack resistance, and low heat buildup in synthetic rubber. Vulcanized products obtain better mechanical properties (300% modulus, tensile strength, and elongation at breakage) with the content of the 1,4-cis microstructure [20]. The vulcanization of PM and PF was compared with PB and PI, using the same vulcanization process parameters and with/without carbon black, to elucidate the influence of the chemical structure on the crosslinking network and mechanical properties compared to conventional petroleum-based rubbers. Rubbers with variable molecular weights were also compared to obtain a wider panorama of their behavior during the vulcanization. Through this work, we aimed to set some guidelines toward the crosslinking of these bio-based rubbers and provide some insights into the performance of the resultant rubbers.
Synthesis of Elastomers
The synthesis of the rubbers (both biobased and non-biobased) was carried out using a ternary catalytic system consisting of Al(iBu) 2 H, NdV 3 , and Me 2 SiCl 2 . The polymerization was carried out until reaching conversions close to 100%, in a 1 L stainless-steel reactor under a nitrogen atmosphere. The preparation and aging of the catalytic system were carried out in a glove box under argon atmosphere. First, the cyclohexane and monomer were added into the steel reactor, and it was sealed and heated until reaching 70 • C. Then, the catalytic system was injected into the reactor to initiate the polymerization reaction. The polymerization was terminated with acidified methanol and an antioxidant (Irganox 1010) was added. The resultant rubber was washed with methanol and vacuum-dried until constant weight.
Formulation and Crosslinking
The different formulations were mixed in a two-roll mill Schwabenthan Polymix 40T (Maschienenfabric Fr. Schwabenthan & Co. Kg., Berlin, Germany) at a temperature of 50-60 • C with a nip gap of 0.5 mm. The rubber compounding was started by mastication of the rubber on the two-roll mill; then, the other ingredients were added until a homogenous compound was obtained in the following order: zinc oxide (enhances the effect of accelerators), stearic acid (as a softener and a filler-dispersing agent), sulfur (crosslinking agent), and MBTS (increases the speed of the process). Scorch time (t s2 ), optimum cure time (t c90 ), minimum torque (M L ), and maximum torque (M H ) were determined from the cure curve by a rubber process analyzer (RPA Elite) (TA Instruments, New Castle, DE, USA) (crosslinking parameters are described in the characterization section). The samples were crosslinked to 160 • C for the conventional (CV) and efficient (EV) vulcanization, at t c90 on a Carver auto four/30H-12 automatic hydraulic laboratory press (Wabash, IN, USA). The different formulations are shown in Table 1. Each system was also reinforced with carbon black (CB) to represent the compounds used in the manufacture of tires. The microstructure of the synthesized rubbers was determined via 1 H and 13 C Nuclear Magnetic Resonance (NMR) spectra in a 400 MHz Bruker Advance III NMR spectrometer (Billerica, MA, USA). The macrostructure of synthetized rubbers, molecular weights, and their distributions were calculated with an Agilent Gel Permeation Chromatograph (GPC) model PL-GPC50 (Santa Clara, CA, USA), equipped with a refractive index detector, at a flow rate of 1 mL/min of THF at 40 • C, calibrated using polystyrene standards.
Crosslinked Rubbers
The crosslinking parameters were obtained through an oscillating disc rheometer (ODR) using a rubber process analyzer from TA Instruments (RPA elite). A high-resolution solid-state Nuclear Magnetic Resonance 13 C CP-MAS study was carried out in 500 MHz Bruker Advance III equipment to determine the crosslinking bonds. The mechanical properties were determined using an MTS criterion model 43 (MTS System Corporation, Eden Praire, MN, USA), and tensile strength tests were performed at room temperature at a speed of 500 mm/min according to ASTM D412. The crosslinking density was determined by measuring the swelling balance by placing the previously crosslinked rubbers in toluene for 72 h at 30 • C. At the end of this period, the samples were weighed by removing the excess of toluene with paper and later placed in a vacuum oven at 60 • C to remove the solvent trapped within the sample and weighed again to constant weight. Afterward, the volume of the rubber fraction (V r ) was calculated with the following equation [3,21]: where W i is the weight of the dry sample after swelling, F is the fraction of the insoluble weight of the sample, W m is the weight of the sample before swelling, W s is the weight of the solvent absorbed by the sample, ρ s is the density of the solvent (0.867 g/cm 3 ), and ρ r is the density of rubber. The density of rubber was determined using the hydrostatic weighing method, where the apparent weight of the sample is measured in two different media, in water (W w ) and air (W a ), applying the equation [22]: where ρ w (0.9977 g/cm 3 at 22 • C) and ρ a (0.0012 g/cm 3 ) represent the density of water and air, respectively. The crosslinking density (v) was calculated with the Flory-Rehner equation [3,21]: where M c is the molecular weight between the crosslinks, V s is the molar volume of the solvent (106.29 mL/mol), and χ is the solvent-rubber interaction parameter; this was determined with the Hildebrand solubility parameters with Equation (5) [3,21]: where δ s (8.97 cal/cm 3 ) and δ r are the solubility parameters of the solvent and rubber, respectively, R is the ideal gas constant (1.987 cal/Kmol), T is the absolute temperature (K), and χ β is the entropic contribution (typically 0.34) [23,24]. The δ r was determined based on the chemical structure of the rubber, using the molar group attraction constants (G) with Equation (6) [25]: Hardness tests were made based on ASTM D2240 with a type A durometer. The compression set test was performed based on ASTM D395, Method B for 22 h at 70 • C. The abrasion resistance test was carried out under the ISO-4649:2017, method A. Dynamic Mechanical Analysis (DMA) was carried out in tension mode at 3 • C/min with a frequency of 1 Hz and strain of 10 µm in DMA Q800 equipment (TA Instruments, New Castle, DE, USA). The temperature range was −120 to 20 • C for PB and −90 to 20 • C for PI, PM, and PF.
Synthesis of Rubbers
The synthesis of PB, PI, PM, and PF was carried out via coordination polymerization using a ternary neodymium-based catalytic system comprising NdV 3 /Al(iBu) 2 H/Me 2 SiCl 2 . This catalytic system is known to polymerize dienes with high cis microstructure control [26]. NdV 3 is the catalyst, while Al(iBu) 2 H is an alkylaluminium acting as a co-catalyst acting as Lewis's acid abstracting alkyl groups from the Nd molecule, thus creating active sites, and Me 2 SiCl 2 is a halide donor that promotes catalytic activity and stereocontrol [27,28].
The macro-and microstructural properties of the rubbers (molecular weight characteristics, and cis microstructure) are shown in Table 2. Two different molecular weights of rubber were used for each material for subsequent vulcanization, denoted as H (high molecular weight) and L (low molecular weight). This was performed to investigate the influence of molecular weight on the final characteristics of the crosslinked network. All the synthesized rubbers exhibited a high cis content (>95%), characteristic of the employed catalytic system, and molecular weights ranging from 292 to 1161 kg/mol (see Table 2). The chemical structure of the monomers and the resultant high-cis rubbers are shown in Figure 1.
Vulcanization of Rubbers
The crosslinking of the resultant rubbers was performed via sulfur vulcanization. The sulfur-crosslinked systems are described in the literature in three types: CV, EV, and semiefficient vulcanization (semiEV). The difference among them lies in the accelerator and sulfur ratio (A/S), being 0.1-0.6 for CV, 2.5-12 for EV, and an intermediate range for semi-EV. The final properties of the resultant rubbers depend on the amount of mono-(C-S-C), di-(C-S2-C), and polysulfides (C-Sx-C) bonds that crosslink the polymer chains. The network formed by the CV is composed mostly of di-and polysulfides (95%) bonds, while the EV is predominantly integrated with monosulfide (~80%) bonds, and the semi-EV is a mixture balanced by both [24,[29][30][31][32]. For the rubbers synthesized here, both CV and EV were used for the vulcanization process.
The crosslinking reactions were analyzed by measuring the torque (dNm) as a function of time, obtained from the cure curves at a temperature of 160 °C. The results are shown in Figures 2 and 3. Values of torque and other data obtained from cure curves are shown in Tables S1 and S2 of the supporting information. The influence of the rubber molecular weight was evident in the unreinforced and CB-reinforced rubbers, showing a higher torque at high molecular weights. Increasing the molecular weight and the linearity of the polymer chain structure (such as PB synthesized with Ziegler-Natta catalysts based on neodymium) increases the stiffness of the rubber, whereas when the rubber has a low molecular weight and a structure with long branches, the material exhibits lower stiffness [33]. Therefore, PB rubber showed the highest torque in both unreinforced and CB-reinforced compounds. This behavior was shown in the rubbers with either high (Figures 2a and 3a) or low molecular weight (Figures 2b and 3b). In general, the trend in the torque values for the synthesized rubbers is PB > PI > PM > PF for high-and low-molecular-weight rubbers, and following the same behavior if they are CB-reinforced. PF showed
Vulcanization of Rubbers
The crosslinking of the resultant rubbers was performed via sulfur vulcanization. The sulfur-crosslinked systems are described in the literature in three types: CV, EV, and semi-efficient vulcanization (semiEV). The difference among them lies in the accelerator and sulfur ratio (A/S), being 0.1-0.6 for CV, 2.5-12 for EV, and an intermediate range for semi-EV. The final properties of the resultant rubbers depend on the amount of mono-(C-S-C), di-(C-S 2 -C), and polysulfides (C-S x -C) bonds that crosslink the polymer chains. The network formed by the CV is composed mostly of di-and polysulfides (95%) bonds, while the EV is predominantly integrated with monosulfide (∼ 80%) bonds, and the semi-EV is a mixture balanced by both [24,[29][30][31][32]. For the rubbers synthesized here, both CV and EV were used for the vulcanization process.
The crosslinking reactions were analyzed by measuring the torque (dNm) as a function of time, obtained from the cure curves at a temperature of 160 • C. The results are shown in Figures 2 and 3. Values of torque and other data obtained from cure curves are shown in Tables S1 and S2 of the supporting information. The influence of the rubber molecular weight was evident in the unreinforced and CB-reinforced rubbers, showing a higher torque at high molecular weights. Increasing the molecular weight and the linearity of the polymer chain structure (such as PB synthesized with Ziegler-Natta catalysts based on neodymium) increases the stiffness of the rubber, whereas when the rubber has a low molecular weight and a structure with long branches, the material exhibits lower stiffness [33]. Therefore, PB rubber showed the highest torque in both unreinforced and CB-reinforced compounds. This behavior was shown in the rubbers with either high (Figures 2a and 3a) or low molecular weight (Figures 2b and 3b). In general, the trend in the torque values for the synthesized rubbers is PB > PI > PM > PF for high-and low-molecular-weight rubbers, and following the same behavior if they are CB-reinforced. PF showed the lowest torque values, thus less rigidity, especially at the low molecular weight at which they were unable to be measured, due to high fluidity (thus not shown in Figures 2b and 3b).
ness of the material, which was not presented when adding it to PM and PF, making it easier to the mixing process. According to Tables S1 and S2, CB had a positive impact on the ts2 and tc90, both decreasing in most cases. This is attributed to the reinforcing effect of CB, which restricts the mobility of the chains so that a greater amount of energy is accumulated (heat; this is discussed in greater detail in the section corresponding to the DMA) as it is not dissipated by molecular movement, promoting the breaking of the bonds in the accelerant and sulfur. The incorporation of the CB increases the stiffness of the material presumably due to electrostatic attractions (physical bonds) that increases the rigidity and reduce the flexibility of the crosslinking network.
Crosslinking for Each Vulcanization System
The bonds formed at the crosslinking network of PB and PI have been studied in several works, describing a variety of structures formed by mono-, di-, and polysulfides bonds, as well as cyclization, carbon-carbon crosslinks, and accelerator residues, as shown in Figure 4 [36,37]. Figures 5 and 6 show the crosslinking bonds that can be formed in cis-PI and cis-PB, respectively [38]. The possible crosslinking network for PM and PF has not been extensively studied as in the case of PB and PI. Therefore, it is important to elucidate the influence of the chemical structure (showing a branch with one or two isoprenic units, respectively, see Figure 7) of the polyterpenes over the rubber vulcanization. Higher torque was observed when using EV, suggesting a matrix mainly crosslinked via monosulfide bonds, limiting the network molecular mobility, and reducing the flexibility, as it has been previously described in the literature when compared to a flexible crosslinked network made up of a higher proportion of polysulfide bonds using CV [29,30,32]. In the rheometric curves, a slight reversion is observed in the rheometric curves of the rubbers with the EV system. In the literature, this phenomenon has been presented in this type of system due to the conversion of disulfide to monosulfide crosslinks [30,34]; however, in our case, this could be due to the absence of antioxidant [35].
On the other hand, mixing CB with PB and PI required more work due to the rigidness of the material, which was not presented when adding it to PM and PF, making it easier to the mixing process. According to Tables S1 and S2, CB had a positive impact on the t s2 and t c90 , both decreasing in most cases. This is attributed to the reinforcing effect of CB, which restricts the mobility of the chains so that a greater amount of energy is accumulated (heat; this is discussed in greater detail in the section corresponding to the DMA) as it is not dissipated by molecular movement, promoting the breaking of the bonds in the accelerant and sulfur. The incorporation of the CB increases the stiffness of the material presumably due to electrostatic attractions (physical bonds) that increases the rigidity and reduce the flexibility of the crosslinking network.
Crosslinking for Each Vulcanization System
The bonds formed at the crosslinking network of PB and PI have been studied in several works, describing a variety of structures formed by mono-, di-, and polysulfides bonds, as well as cyclization, carbon-carbon crosslinks, and accelerator residues, as shown in Figure 4 [36,37]. Figures 5 and 6 show the crosslinking bonds that can be formed in cis-PI and cis-PB, respectively [38]. The possible crosslinking network for PM and PF has not been extensively studied as in the case of PB and PI. Therefore, it is important to elucidate the influence of the chemical structure (showing a branch with one or two isoprenic units, respectively, see Figure 7) of the polyterpenes over the rubber vulcanization.
Crosslinking for Each Vulcanization System
The bonds formed at the crosslinking network of PB and PI have been studied in several works, describing a variety of structures formed by mono-, di-, and polysulfides bonds, as well as cyclization, carbon-carbon crosslinks, and accelerator residues, as shown in Figure 4 [36,37]. Figures 5 and 6 show the crosslinking bonds that can be formed in cis-PI and cis-PB, respectively [38]. The possible crosslinking network for PM and PF has not been extensively studied as in the case of PB and PI. Therefore, it is important to elucidate the influence of the chemical structure (showing a branch with one or two isoprenic units, respectively, see Figure 7) of the polyterpenes over the rubber vulcanization. Figure 5. Proposed structures that are produced through the vulcanization of cis-PI [38]. Adapted with permission from Ref. [38]. Copyright (2022) Elsevier Inc. Figure 5. Proposed structures that are produced through the vulcanization of cis-PI [38]. Adapted with permission from Ref. [38]. Copyright (2022) Elsevier Inc. Figure 6. Proposed structures that are produced through the vulcanization of cis-PB [38]. Adapted with permission from Ref. [38]. Copyright (2022) Elsevier Inc. However, the unsaturated branches present in the PM and PF may imply a greater availability for the formation of crosslinking bonds, which are not present in PB and PI. Various assumptions arise to understand the behavior of the crosslinking network in the PM and PF. Looking at the van der Waals forces and entanglement molecular weight (Me), it is possible to find a relationship with the stiffness and toughness of the rubbers. PB, being a linear polymer, has a greater number of electrostatic attractions between the atoms of the rubber chains, producing a material with poor fluidity and high melt viscosity by a low Me (1.8 kg mol −1 ). In contrast, the van der Waals forces and melt viscosity decrease However, the unsaturated branches present in the PM and PF may imply a greater availability for the formation of crosslinking bonds, which are not present in PB and PI. Various assumptions arise to understand the behavior of the crosslinking network in the PM and PF. Looking at the van der Waals forces and entanglement molecular weight (Me), it is possible to find a relationship with the stiffness and toughness of the rubbers. PB, being a linear polymer, has a greater number of electrostatic attractions between the atoms of the rubber chains, producing a material with poor fluidity and high melt viscosity by a low Me (1.8 kg mol −1 ). In contrast, the van der Waals forces and melt viscosity decrease with the size of the branching present in the main chain of PI (Me = 5.4 kg mol −1 ), PM (Me = 18 kg mol −1 ), and PF (Me = 50 kg mol −1 ), where PM and PF present more fluidity and therefore less rigidity [37][38][39][40].
On the other hand, if we suppose that PI presents n possibilities for the formation of the crosslinking network by presenting a single unsaturation, PM will have 2n possibilities when presenting two unsaturations, and, finally, the PF will have 3n possibilities when presenting three unsaturations. This extra number of possibilities gives PM and PF a better distribution of the crosslinking sites, which directly impacts the crosslinking network. That is, if we consider that the chemistry of the reaction is proportional to these possibilities, PM then needs a double amount of crosslinking agent to equal the properties obtained by PB and PI, while PF should require a triple amount of crosslinking agents to obtain the same properties. For this reason, it is necessary to carry out a series of experiments to confirm that the increase in the amount of crosslinking agents has positive effects on the properties of the crosslinking network. However, the literature reports that an increase in the amount of sulfur, or accelerator, has a negative effect for PB and PI, by reducing the mechanical properties [26].
To validate the possible configuration of the crosslinking network, 13C CP-MAS solidstate NMR of the aliphatic region (10 to 75 ppm) was performed for the PI-H, PM-H, and PF-H samples crosslinked with the CV system, which is shown in Figure 8. Analyzing the NMR spectra corresponding to the PI-H compound, we observe that at least four structures were determined for the formation of the crosslinking network, with chemical shifts at 53. 45 31.66, and 20.83 ppm), as it shows a chemical structure similar to PM-H, the formation of crosslinking bonds with the main chain seems to be also unlikely. Instead, crosslinking through the branches and cycles formation are also suggested.
The low-field signal at 42.68 ppm describes the presence of disulfides, polysulfides, and even crosslinking with the main chain, but the overlapping of signals does not allow the determination of what is the dominant crosslinking mechanism. The EV system theoretically presents the same structures proposed for the CV system, but mostly made up of monosulfide bonds. This shows that the crosslinking network is more open in proportion to the size of the branch, and it has a negative influence on the mechanical properties for PM and PF.
Crosslinking Density and Mechanical Properties of Rubbers
The mechanical properties of a vulcanized rubber also strongly depend on the crosslinking density. According to Tables 3 and 4, where mechanical properties and crosslinking density are reported, respectively, generally higher values for tensile strength and For PM-H, it was only possible to determine one crosslinking structure that fits the spectrum (38.95, 35.81, 34.06, 28.06, 25.85, 18.33, and 15.33 ppm), corresponding to crosslinking through branches by monosulfide bonds and cycle formation, showing a better distribution of the sulfur by reducing the presence of di-and polysulfide bonds (although the formation of other structures formed by di-and polysulfide bonds is not ruled out). It is unlikely that crosslinking occurs in the main chain due to low chemical shifts in comparison with the PI-H (signals less than 45.26 ppm). Finally, for the PF-H rubber (42.68, 39.38, 37.95, 31.66, and 20.83 ppm), as it shows a chemical structure similar to PM-H, the formation of crosslinking bonds with the main chain seems to be also unlikely. Instead, crosslinking through the branches and cycles formation are also suggested.
The low-field signal at 42.68 ppm describes the presence of disulfides, polysulfides, and even crosslinking with the main chain, but the overlapping of signals does not allow the determination of what is the dominant crosslinking mechanism. The EV system theoretically presents the same structures proposed for the CV system, but mostly made up of monosulfide bonds. This shows that the crosslinking network is more open in proportion to the size of the branch, and it has a negative influence on the mechanical properties for PM and PF.
Crosslinking Density and Mechanical Properties of Rubbers
The mechanical properties of a vulcanized rubber also strongly depend on the crosslinking density. According to Tables 3 and 4, where mechanical properties and crosslinking density are reported, respectively, generally higher values for tensile strength and elongation at break are observed in formulations with high-molecular-weight rubbers. This condition suggests that the high molecular weights offer a more effective crosslinking matrix because the chains tend to become entangled with each other, thus generating a more rigid matrix and requiring a greater force to break the entanglements and the linkage bonds of the network [40]. In contrast, low-molecular-weight rubbers tend to entangle less, and even short chains could be considered to function as a lubricant for longer chains. On the other hand, considering the above and visualizing the properties presented by the crosslinking network of PB, it could be assumed that it would present the best results for elongation at break. However, it maintains this parameter in a range of 662-825%, surpassed by PI with a greater elongation at break of 987-1262% (some exemplary stressstrain curves are shown in Figure S1 of the supporting information). This characteristic has already been studied by several authors who consider that the PI crosslinking network tends to form crystalline regions that are favored when the material is stretched and are not observed under other conditions [41][42][43][44][45]. This hypothesis guided the analysis of PM and PF, hoping that this condition would be replicated by the crosslinking network. However, such formulations exhibited lower elongations at break for PF and PM contrasting with PI and PB, indicating that these rubbers do not present any type of crystallization, determining that the chemical structure of PM and PF reduces the effectiveness of the crosslinking network in proportional ways to the size of the branch. On the other hand, the presence of CB reinforced the rubbers in terms of tensile strength and reducing the elongation at break of PB and PI. This is due to a reduction in the percentage of rubber when incorporating CB, agglomerates, and the increase in crosslinking density that makes a more rigid material. Furthermore, the presence of CB minimizes the stretch-induced crystallization in PI rubbers [46]. However, in PM compounds, there was a lower reduction in elongation at break than PB and PI rubbers, suggesting that PM allows the incorporation of CB without significantly affecting the tensile strength, and it is more compatible than PB and PI, possibly because the crosslinking network is more open than the PB and PI rubbers. This is corroborated with the increase in tensile strength and elongation at break values for the case of reinforced PF rubbers.
The moduli at 100%, 200%, and 300% elongations are related to the crosslinking density. When this property increases, the moduli value reached is greater (for both vulcanization systems) and there is a decrease in the elongation at break due to a greater restriction of the crosslink joints on the mobility of intercrosslink chains [47].
The hardness and compression set of the crosslinked rubbers are shown in Tables 3 and 4. It is generally observed that most materials crosslinked with the CV system have a higher hardness for high-molecular-weight rubbers, which is closely related to the increase in the crosslinking density because of the restriction of the crosslinking linkages on the mobility of the chains [47,48]. In the crosslinking rubbers with the EV system, the PM and PF show in the same way a higher hardness with the rubbers of higher molecular weight; however, this condition was not presented for PB and PI. The unreinforced compounds of PB with CV and EV had the highest hardness of the evaluated materials, suggesting that it is an intrinsic property of PB produced by its chemical structure and the formation of a more closed crosslinking network (linear chain). However, for reinforced and unreinforced rubbers, a clear trend was not found in relation to molecular weight, but a reduction in hardness of reinforced PI, PM, and PF rubbers was observed and evaluated in CV and EV. This proposes that the hardness depends on the chemical structure of PI, PM, and PF, which results in a more open crosslinking network (given by their branches), therefore being more flexible and with less hardness. In the case of CB-reinforced materials, all rubbers showed an increase in hardness. According to the literature, this is due to the influence of CB on the acceleration of the crosslinking kinetics, and the changes induced by CB in the crosslinking density, which is increased in all reinforced rubbers. The increase in hardness leads to a material with greater elasticity and less flexibility [49].
The compression set test was carried out to determine the ability of rubber to retain its elastic properties after prolonged compression loads at a certain temperature. The analysis of this parameter is essential to determine when the rubber material is suitable to be applied as parts of dampers and seals, which are subject to compression loads. A small compression set value refers to the ability of the sample to maintain original thickness and low damping properties, while a large value indicates lower stiffness with broader damping properties. It is also known that this parameter is another property that can be used to determine the degree of elasticity [50].
All the materials without reinforcement exhibited a lower compression set value at a higher molecular weight of the rubber, and according to the literature, this indicates that the presence of strong chemical bonds in the crosslinking network gives a better ability to recover the initial shape and undergo a less permanent deformation [24,51]. The PB compounds showed lower compressions set with the CV system of 33% and 62%, while the other compounds were deformed from 83% to 99%. On the other hand, the compounds evaluated with the EV system presented better resistance to permanent deformation in this order: PM-H > PB-H > PF-H > PB-L, and the other rubbers had percentages >98%. It should be noted almost all materials have high compression set values; therefore, they suffer a high permanent deformation. Furthermore, by adding CB as reinforcement in general, the compression set is reduced as it gives the material a certain rigidity to withstand the compression load for both CV and EV systems used, even being higher for compounds evaluated with EV. Figure 9 shows the abrasion resistance of CB-reinforced and -unreinforced composites at high molecular weight evaluated in EV. It is observed that the compounds PI-H, PM-H, and PF-H do not appear in Figure 9, because the materials were soft, and it was not possible to determine this characteristic. However, from this group of compounds, the PB-H could be assessed, showing an abrasion resistance of 182 mm 3 , which, in turn, was reduced to 40 mm 3 with the incorporation of CB, thereby losing a smaller amount of material. The terial. The reinforced compounds obtained resistance to abrasion in the following order: PB-H-CB > PI-H-CB > PM-H-CB > PF-H-CB, with this information again relating to the effect of the chemical structure of the evaluated rubbers, corroborating that the linear chains of PB form a closed crosslinked network that limits the detachment of particles by abrasion. Instead, the branched chains in PI, PM, and PF constitute an open crosslinked network that facilitates the loss of material by abrasion as follows: PF-H-CB > PM-H-CB > PI-H-CB.
Dynamic Mechanical Analysis (DMA) of Crosslinked Materials
In order to determine the viscoelastic properties of high-molecular-weight rubbers, an analysis using Dynamic Mechanical Analysis (DMA) was carried out; some values obtained are shown in Table 5, such as storage (E') and loss modulus (E") at −40 °C. For the samples unreinforced and vulcanized with the EV system, the storage modulus values reached are much higher compared to the CV system; this is attributed to the type of monosulfide bonds produced during crosslinking, as mentioned in Section 3.2. In addition, it is observed that the storage modulus is higher for the PI-H (EV system) compound, indicating that the mobility of the crosslinking network in the vitreous region is more limited than in the other compounds.
The loss factor (or damping) factor, also called Tan δ, which is the ratio E"/E', is an indicator of the dynamic behavior that occurs in the glass transition region and is related
Dynamic Mechanical Analysis (DMA) of Crosslinked Materials
In order to determine the viscoelastic properties of high-molecular-weight rubbers, an analysis using Dynamic Mechanical Analysis (DMA) was carried out; some values obtained are shown in Table 5, such as storage (E') and loss modulus (E") at −40 • C. For the samples unreinforced and vulcanized with the EV system, the storage modulus values reached are much higher compared to the CV system; this is attributed to the type of monosulfide bonds produced during crosslinking, as mentioned in Section 3.2. In addition, it is observed that the storage modulus is higher for the PI-H (EV system) compound, indicating that the mobility of the crosslinking network in the vitreous region is more limited than in the other compounds. The loss factor (or damping) factor, also called Tan δ, which is the ratio E /E , is an indicator of the dynamic behavior that occurs in the glass transition region and is related to the mobility of the polymer chains, as well as the amount of energy dissipated by the material [52]. The damping factor of PF and PM is as remarkably high as PI, indicating better interaction between polymer chains in the crosslinking network [52]. In addition, this is related to the length of the branch of the lateral alkene groups that would lead to a greater loss of energy under dynamic load [10].
The effect of carbon black as a reinforcer caused an increase in the storage modulus in all compounds, as can be seen in the Figure 10, but more markedly in the PI-H-CB (EV) and PF-H-CB (EV), indicating that these compounds maintain an excellent molecular interaction with carbon black [52,53]. All compounds show a drop in storage modulus as they approach the glass transition temperature (T g ) until they exceed it and enter in the rubbery region where the long-range coordinated movement of the chains makes the elastic modulus unable to be detected and reduces to zero [54]. The information obtained by the loss modulus is proportional to the storage modulus, thereby verifying the viscoelastic behavior of the material, which is characteristic of any elastomer.
Conclusions
In this work, we compared the properties of terpene-based bio-rubbers to conventional fossil-based rubbers (polyisoprene and polybutadiene), in order to set some guidelines toward the crosslinking of these materials, and thus the production of products with competitive performance. Using the herein processing conditions, polymyrcene and polyfarnesene produce materials with lower mechanical properties than those obtained by PB and PI, presumably because they form crosslinking networks mainly with the branches according to the RMN characterization, making more open networks that decrease the effectiveness of the crosslinked material. The crosslinking with branches and the absence The materials vulcanized with the CV and EV system ( Figure 10) with reinforcement present lower values of Tan δ and broader curves, indicating a reduction in the heat buildup and damping capability [10], which is because CB acts as a barrier to the mobility of the rubber chains, so they accumulate more energy (heat). The temperature of the Tan δ peak is related to the glass transition temperature (T g ). In this work, the incorporation of CB does not seem to overly displace or affect the T g , due the value of this parameter showing minimal changes in the samples with and without reinforcement.
Conclusions
In this work, we compared the properties of terpene-based bio-rubbers to conventional fossil-based rubbers (polyisoprene and polybutadiene), in order to set some guidelines toward the crosslinking of these materials, and thus the production of products with competitive performance. Using the herein processing conditions, polymyrcene and polyfarnesene produce materials with lower mechanical properties than those obtained by PB and PI, presumably because they form crosslinking networks mainly with the branches according to the RMN characterization, making more open networks that decrease the effectiveness of the crosslinked material. The crosslinking with branches and the absence of induction crystallization compared with PB and PI cause a decrease in some properties such as tensile strength.
Generally, the incorporation of carbon black generally improves the mechanical properties of rubber compounds and reduces the crosslinking time, elongation at break, and compression set, while the hardness and resistance to abrasion increase. However, PF compounds show an increase in elongation at break upon CB addition, indicative of better network interaction with the additive. Unlike unreinforced compounds with PI, PM, and PF, which dissipate energy better than compounds with PB, the addition of CB reduces the energy dissipation, thereby accumulating heat. It appears that the mechanical properties such as tensile strength, elongation at break, and compression set of crosslinked rubbers depend on molecular weight, and these can be improved by increasing the molecular weight, while the hardness and resistance to abrasion depend on the chemical structure, as, in linear rubbers, these properties improve, and in branched rubbers, they decrease as a function of the size of the branch. This work provides guidelines for future work, which can take the herein developed knowledge to optimize formulations that can compete with the current fossil-based rubbers.
Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/polym14071406/s1, Table S1: Data obtained from rheometric curves in the crosslinking process of polydienes with the CV system; Table S2: Data obtained from rheometric curves in the crosslinking process of polydienes with the EV system, Figure S1: Exemplary stress-strain curves of some of the obtained composites. | 9,406.6 | 2022-03-30T00:00:00.000 | [
"Materials Science"
] |
18 F-FDG PET-CT during chemo-radiotherapy in patients with non-small cell lung cancer: the early metabolic response correlates with the delivered radiation dose
Background To evaluate the metabolic changes on 18 F-fluoro-2-deoxyglucose positron emission tomography integrated with computed tomography (18 F-FDG PET-CT) performed before, during and after concurrent chemo-radiotherapy in patients with locally advanced non-small cell lung cancer (NSCLC); to correlate the metabolic response with the delivered radiation dose and with the clinical outcome. Methods Twenty-five NSCLC patients candidates for concurrent chemo-radiotherapy underwent 18 F-FDG PET-CT before treatment (pre-RT PET-CT), during the third week (during-RT PET-CT) of chemo-radiotherapy, and 4 weeks from the end of chemo-radiotherapy (post-RT PET-CT). The parameters evaluated were: the maximum standardized uptake value (SUVmax) of the primary tumor, the SUVmax of the lymph nodes, and the Metabolic Tumor Volume (MTV). Results SUVmax of the tumor and MTV significantly (p=0.0001, p=0.002, respectively) decreased earlier during the third week of chemo-radiotherapy, with a further reduction 4 weeks from the end of treatment (p<0.0000, p<0.0002, respectively). SUVmax of lymph nodes showed a trend towards a reduction during chemo-radiotherapy (p=0.06) and decreased significantly (p=0.0006) at the end of treatment. There was a significant correlation (r=0.53, p=0.001) between SUVmax of the tumor measured at during-RT PET-CT and the total dose of radiotherapy reached at the moment of the scan. Disease progression free survival was significantly (p=0.01) longer in patients with complete metabolic response measured at post-RT PET-CT. Conclusions In patients with locally advanced NSCLC, 18 F-FDG PET-CT performed during and after treatment allows early metabolic modifications to be detected, and for this SUVmax is the more sensitive parameter. Further studies are needed to investigate the correlation between the metabolic modifications during therapy and the clinical outcome in order to optimize the therapeutic strategy. Since the metabolic activity during chemo-radiotherapy correlates with the cumulative dose of fractionated radiotherapy delivered at the moment of the scan, special attention should be paid to methodological aspects, such as the radiation dose reached at the time of PET.
Background
About one third of patients with non-small cell lung cancer (NSCLC) present loco-regionally advanced disease at the diagnosis [1,2], and despite radical treatment with concurrent chemo-radiotherapy (chemo-RT), only 15% of patients will be long-term survivors and 15%-40% will develop loco-regional tumor recurrence [3,4]. A higher biologically effective dose of radiotherapy can improve loco-regional control and survival [5]: however, an escalating radiotherapy dose also results in increasing the risk of toxicity [6]. For this reason, it is important to carefully select patients for radiotherapy dose intensification. Currently, the response to radiotherapy is not determined until the therapy has been completed. If the individual response to radiotherapy could be evaluated earlier during treatment, a timely therapy modification could be accomplished to better adapt the cure. Molecular imaging offers the potential to characterize the nature of tissues on the basis of its biochemical and biologic features. 18 F-fluoro-2-deoxyglucose ( 18 F-FDG) positron emission tomography integrated with computed tomography ( 18 F-FDG PET-CT) is largely used in oncology, especially for monitoring the response to treatment. The imaging of changes in glucose metabolism, as reflected by cellular uptake and trapping of 18 F-FDG, can provide a response assessment that is both more timely and more accurate than that provided by standard morphological imaging [7]. Furthermore, the residual metabolic activity of tumors after radiotherapy, as measured by 18 F-FDG uptake, has been shown to correlate with the pathologic response [8], and to be a significant prognostic factor for survival in patients with NSCLC [9][10][11]. Many researchers recommend a delay of 6-8 weeks or longer after radiotherapy before performing the post-treatment PET study because of inflammatory changes with subsequent alterations in 18 F-FDG uptake [12]. Nevertheless, the confounding effect in the surrounding normal tissue due to the radiation-induced elevation of 18 F-FDG activity in the lung seems to be less relevant when PET is performed during radiotherapy [13]. The objectives of this study were: to evaluate the metabolic changes on serial 18 F-FDG PET-CT studies performed before, during and after concurrent chemo-radiotherapy in patients with unresectable or locally advanced NSCLC; to correlate the metabolic changes with the delivered radiation dose and with the clinical outcome.
Study population
Forty-three patients with unresectable or locally advanced NSCLC who were referred to our department from December 2005 to May 2008 were enrolled in this study. Eligibility criteria were good performance status (ECOG-performance status of 0 or 1), and a reasonable lung function (a forced expiratory volume in the 1 st second >50% of predicted value and a diffusing capacity of the lung for carbon monoxide >50%). Patients were not eligible if they had any other concomitant malignant disease, uncontrolled diabetes mellitus, or severe cardiac or cerebral diseases. Patients having undergone previous radiotherapy to the chest were not allowed to participate while those previously submitted to chemotherapy were accepted. Herein we describe only 25/43 patients (58%, 21 males and 4 females, mean age 64 years, range 43-78 years) who satisfied the eligibility criteria.
Treatment description
All patients underwent concurrent chemo-RT. In the case of previous chemotherapy, concurrent chemo-RT was started after a minimum of 30 days from the last chemotherapy course. Radiotherapy was administered to the involved field with a three-dimensional conformal technique. The median International Commission on Radiation Units and Measurements (ICRU) total referred dose was 50.4 Gy with classical (1.8 Gy/day) fractionation. The planned target volume (PTV) consisted of the primary tumor and the gross nodal volume. Elective nodal irradiation was not administered. The treatment was planned with computed tomography, applying the lung parenchyma correctional factors. A linear photon accelerator (nominal energy 6-10 MeV) was used for the treatment in all cases. All patients were immobilized by customized devices. Two different concurrent chemotherapy regimens were used. Cisplatin (CDDP), 20 mg/m 2 /day/bolus during days 1-4 of the first and last week of treatment, plus weekly gemcitabine 350 mg/m 2 /day, was administered to patients who had not received previous chemotherapy. Two hundred and fifty mg/m 2 /day weekly gemcitabine alone was delivered to patients who had undergone prior chemotherapy. A radiological and pneumological reassessment was performed 4 weeks from the end of chemo-radiotherapy. Patients judged operable underwent surgery, while those considered inoperable were treated according to the preference of the referring physician. 18 F-FDG PET-CT protocol: acquisition and reconstruction parameters, and image interpretation Three 18 F-FDG PET-CT studies were performed using the same acquisition and reconstruction protocol: before starting chemo-RT (pre-RT PET-CT), during the third week of treatment (during-RT PET-CT), and 4 weeks from the end of treatment (post-RT PET-CT). The pre-RT PET-CT was performed at a median time of 13 days (range 1-29 days) before starting chemo-RT; the during-RT PET-CT was performed after a median time of 17 days from the start of treatment (range 10-25 days) and after the delivery of a median radiotherapy dose of 23.4 Gy (range 14.4-34.2 Gy); the post-RT PET-CT was performed at a median time of 30 days (range 16-36 days) from the end of treatment. In case of previous chemotherapy, at least one month had to pass between the last administration and the acquisition of the 18 F-FDG PET-CT. Details of the study were explained to the patients and they provided written informed consent as established by our ethics committee. All studies were performed using an integrated 3D PET-CT device (GEMINI GXL distributed by Philips Medical Systems) combining a dedicated full-ring PET scanner with gadolinium-oxyortho-silicate (GSO) crystals and a multislice spiral CT scanner. Prior to 18 F-FDG injection, patients fasting for at least six hours were settled in a quiet room, checked to be normoglycemic (patients with a fasting glucose level >150 mg/dl were excluded), and intravenously hydrated with saline solution (500 ml). No oral or intravenous contrast agents were administered or bowel preparation applied for any patient in our series. Images were acquired one hour after intravenous injection of 259-407 MBq of 18 F-FDG, produced in the radio-pharmacy of our Centre, according to the body mass index from pelvis to neck. The CT scan, performed from neck to pelvis with a voltage of 120 KeV and tube current of 30 mA, was used for the anatomical localization, and for attenuation correction of PET emission data. PET emission scans were acquired in 3D mode, from pelvis to neck (multiple bed positions, 3 minutes for each bed position). Matched CT and PET images were reconstructed with a field-of-view of 50 cm. Iterative reconstruction and CT-based attenuation correction were used and attenuation-corrected PET images were reviewed in transverse, sagittal and coronal planes. PET data were also displayed in a rotating maximumintensity projection. To view the images, the PET and CT datasets were transferred to an independent computer workstation by DICOM (Digital Imaging and Communications in Medicine) transfer. PET-CT images were analyzed semi-quantitatively using the Syntegra Philips fusion program by two nuclear medicine physicians (M.L.C and M.G.S) with PET-CT experience. Regions of interest (ROIs) were manually drawn over the lesions (tumor and lymph nodes) showing an 18 F-FDG uptake higher than background activity. The Standardized Uptake Value (SUV) was measured in all voxels in the tumor ROI: SUV = (decaycorrected activity * ml tissue)/(activity injected * body weight). We used the maximum SUV (SUVmax) in order to minimize the partial volume effect. Therefore, we took into account the following parameters: 1. SUVmax of the tumor 2. Individual variation in SUVmax (ΔSUV) of the tumor, expressed as a percentage of the baseline 3. SUVmax of the lymph nodes 4. Individual variation in SUVmax (ΔSUV) of the lymph nodes, expressed as a percentage of the baseline value 5. Metabolic Tumor Volume (MTV), expressed in cc, obtained directly from the Philips workstation. Each tumor identified by the user was segmented automatically in three dimensions by the software using the following procedure. First, the voxel of maximum intensity along the selected projection line is used as the starting point for a region growing procedure. The algorithm then finds the voxel of local maximum intensity within a specified radius (default value of 1 cm) of the starting voxel. The region growing algorithm then defines the segmented volume as all voxels connected to the local maximum intensity voxel that have an intensity greater than a specified fraction of the maximum intensity. The threshold intensity value used in this study was 50% of the local maximum intensity. Once all of the hypermetabolic tumor foci are segmented, the software calculates the MTV, defined as the total volume of all tumors in the body in milliliters, as well as the maximum and average SUV within the MTV [14,15]. 6. Individual variation in MTV (ΔMTV), expressed as a percentage of the baseline value 7. Metabolic response, according to the EORTC criteria [16], comparing the three PET-CT studies.
Statistical analysis
The Systat 10.2 software (Systat inc., Point Richmond CA) was used for statistical analysis. Student's paired ttest was used to compare the SUVmax and MTV at different time points. A p-value <0.05 was considered to indicate statistical significance. A linear regression analysis was performed to check out a possible relation between the metabolic response and the cumulative radiation doses reached in individual patients at the time of the during-RT PET-CT study. The disease-free survival (DFS; time to local or distant event) 'time to event' curve was calculated with the Kaplan-Meier method and statistical significance of the difference was assessed with the log-rank test. When applying the EORTC criteria [16] at during-RT PET-CT, 13/17 patients (76.5%) showed a partial metabolic response, and 4/17 patients (23.5%) a stable disease; no complete metabolic response was observed. When applying the EORTC criteria at post-RT PET-CT, 8/21 patients (38%) showed a complete metabolic response, 12/21 patients (57%) a partial metabolic response, and one patient (5%) showed stable disease (Table 1). When analyzing only the subset of patients for whom all three PET-CT were available (n = 13), we observed that the metabolic modifications were similar to those obtained in all series (see Table 1, Figure 1 and Figure 6): SUVmax of the tumor and of the lymph nodes, as well as the MTV, significantly decreased during and at the end of treatment. A borderline statistically significant difference (p = 0.05) between non-responders (n = 7; 12.2 ±5.9) and responders (n = 6; 6.6 ± 2.3) was found only in SUVmax of the tumor measured at during-RT PET-CT (Figure 7). There was no significant correlation between SUVmax of the tumor measured at during-RT PET-CT and SUVmax measured at post-RT PET-CT, as well as between MTV of the tumor measured at during-RT PET-CT and MTV measured at post-RT PET-CT.
Clinical outcome and follow-up
Three out of twenty-five patients (12%) died: one patient during chemo-RT because of pulmonary infection, before completing the treatment; two patients died before undergoing post-RT PET-CT because of myocardial infarction and pulmonary infection. Thirteen out of twenty-five patients (52%) underwent surgery after a median time of 55 days from the end of treatment: 9/13 (69%) had pathological down-staging to stage 0 (no residual disease) or stage I, while 4/13 patients (31%) showed persistent loco-regionally advanced disease at a hystopathological examination. Tumors in 9/25 patients (36%) were judged unresectable and treated according to the preference of the referring physician.
After a median follow-up from the start of chemoradiotherapy of 24.7 months (range 7-28 months), 12/ 25 patients (48%) are still alive while 13/25 patients (52%) died with a median survival time of 14 months. Fifteen patients showed disease progression (7 locoregional failures, 4 distant metastases and 4 both locoregional and distant failures) during follow-up. Median disease progression free survival time was 13 months. No other significant correlation with survival endpoints (local and/or distant disease progression free survival and overall survival) was found for any other metabolic parameters evaluated at any different timepoints. In patients who underwent surgery, no significant association was observed between pathologic down-staging and metabolic response.
Discussion
18 F-FDG PET detects metabolic modifications which are well known to occur before morphologic ones; therefore functional imaging allows an evaluation of the tumor metabolic response during radiotherapy earlier than morphologic imaging [13,16,17]. Changes in therapy, such as dose-escalation or the addition of another treatment modality may be contemplated for patients who show poor response to the current radiotherapy regimen [18]. Furthermore, a disease reduction during radiotherapy detected by functional imaging might suggest a radiation dose escalation, whilst remaining within normal tissue constraints [19]. The role of 18 F-FDG in assessing the response to non-surgical treatment in patients with NSCLC has been extensively investigated. Many Authors performed an 18 F-FDG scan at least two months from the end of radiotherapy or chemotherapy [10,[20][21][22], while only a few have evaluated the potential role of repeating 18 F-FDG PET earlier, either during or after therapy [13,17,23,24]. Researchers from the University of Michigan [13] have performed repeated 18 F-FDG PET in 15 patients with NSCLC stages I to III before, during and after a course of radiotherapy (or chemoradiotherapy) with conventional fractionation. The Authors have observed a reduction of the peak tumor 18 F-FDG activity at approximately 45 Gy during radiotherapy, with a further reduction on PET performed three months from the end of treatment. The qualitative response during radiotherapy correlated with the overall response post-radiotherapy, and the peak tumor 18 F-FDG activity during radiotherapy correlated with that 3 months post-therapy. Subsequently, the same Authors [24] have observed that an adaptation of the radiotherapy plan based on 18 F-FDG PET at approximately 45 Gy during radiotherapy, might allow escalating the tumor dose without increasing the normal tissue complication probability. Van Baardwijk et al. [17] investigated changes in 18 F-FDG uptake in 23 patients with medically inoperable or advanced NSCLC who underwent serial 18 F-FDG PET-CT scans before treatment, 1 week and 2 weeks after the start of treatment and 70 days from the end of an accelerated (1.8 Gy twice a day) radiation treatment with radical intent. While 70 days after the end of radiotherapy, the metabolic activity of the tumor significantly decreased compared to the baseline, during the first 2 weeks of treatment the 18 F-FDG uptake within the tumor changed only moderately (a slight increase during the first week, and a slight decrease during the second week). More recently, Giovacchini et al. [25] performed four repeated 18 F-FDG PET-CT scans in 6 patients undergoing radical radiation treatment for either locally advanced or medically inoperable NSCLC. 18 F-FDG PET-CT scans were performed before, during radiotherapy at the delivered dose of 50 Gy, and after approximately one month and 3 months from the end of radiotherapy. Radiotherapy induced a progressive decrease in glucose metabolism that was greater 3 months after the end of treatment, but could even be detected during the treatment itself.
In this study, we have evaluated the metabolic changes on serial 18 F-FDG PET-CT performed before, during and after concurrent chemo-radiotherapy in patients with unresectable or locally advanced non-small-celllung-cancer (NSCLC). We have also correlated the metabolic changes with the delivered radiation dose, and with the clinical outcome. 18 F-FDG PET-CT studies were performed earlier both during treatment, at a median time of 17 days from the start (median dose of 23.4 Gy), and after treatment at a median time of 30 days. First of all, we observed that at pre-RT PET-CT the values of SUVmax of the tumors were much higher than those reported in literature [13,17], similar only to those reported by Giovacchini et al. [25]. The enhanced trapping of 18 F-FDG into the tumor cells can be due to either biological mechanisms, such as the up-regulation of glucose transporters and hexokinase enzymes, tumor aggressivity, hypoxia, etc., or to modifications induced by previous treatment [26][27][28][29]. Up to now, however, it is not known which of these mechanisms is responsible for the variable levels of 18 F-FDG uptake. In our study, the large tumor size, containing small areas of necrosis, and the relatively small number of patients who received previous treatment could explain the higher values of SUV at pre-RT PET-CT. From our data, we can observe that the tumor metabolic activity significantly decreased early during chemo-RT, and decreased even more at the end of treatment. The metabolic reduction was significant for all parameters, but more significant for the SUVmax. It can be argued that chemo-radiotherapy works better and faster in the cellular metabolism, when considering SUV values, and "relatively" less and more slowly in the tumor volume, based on MTV values. In fact MTV may be considered a "functional volume" and, as such, reduces its activity later in comparison with SUV. Therefore, SUVmax is the more sensitive parameter to show an earlier metabolic modification induced by the treatment. The difference in SUVmax reduction between pre-RT PET-CT and during-RT PET-CT was significantly higher (p = 0.0001) than that observed between during-RT PET-CT and post-RT PET-CT (p = 0.005). This interesting finding allow us to speculate that the tumor cells have a prompt response to treatment. On the contrary, the MTV reduction was similar between during-RT PET-CT and post-RT PET-CT (p = 0.002, respectively), supporting the concept that MTV is a functional volume and its response to therapy is slower. Regarding the lymph nodes, their metabolic activity tends to decline during chemo-radiotherapy, decreasing significantly only at the end of treatment. This finding suggests the stronger lymph node resistance to therapy: in fact it is well known that the neoplastic lymph nodes are a negative prognostic factor especially in patients with NSCLC [30].
Finally, the tumor metabolic activity significantly decreased after a cumulative radiotherapy dose of only 23.4 Gy, which is much lower than that reported in literature: 45 Gy and 50 Gy [13,25]. On the other hand, van Baardwijk et al. [17] did not observe any significant decrease in tumor metabolic activity after the delivery of approximately 37 Gy of accelerated radiotherapy (1.8 Gy twice a day). Differences in the radiotherapy fractionation schedule, treatment time, concurrent chemotherapy administration, tumor biology, and absolute pre-RT PET-CT SUVmax values, might have an impact on tumor 18 F-FDG uptake during radiotherapy. Similarly to Kong et al. [13] and van Baardwijk et al. [17], we observed a large heterogeneity in the changes in metabolic activity among individual patients during and after radiotherapy: this finding may somehow reflect the difference in radio-responsiveness between the individual tumors. Similarly to these Authors [13,17], we also observed a large heterogeneity in the metabolic activity among the individual patients before treatment, suggesting a large cellular heterogeneity in each tumor: for this reason we have also utilized ΔSUV and ΔMTV in order to take into account the "individual" variations during and at the end of treatment, rather than only the "absolute" values such as SUVmax and MTV. In fact, the individual variations allow a better measurement of the effects of the treatment "normalizing" the baseline values, especially when they are highly heterogeneuos.
Regarding the metabolic response after treatment, the EORTC criteria [16] classifies the metabolic response on the basis of SUV values. The classification into four categories using only a number as cut-off, may sometimes give an incorrect classification. In fact, the number does not take into account some scintigraphic features, such as the distribution and shape of the 18 F-FDG activity. From our data, the high number of partial metabolic responses (>50%) can be also attributed to the use of strict SUV criteria, as proposed by EORTC. Therefore, from a clinical point of view, we strongly support the necessity to integrate the SUV values with qualitative and morphological (CT) analyses of the images since the confounding effects such as inflammation may be present at any time after treatment. A wrong classification might therefore be avoided and a clinical significance could be given to the 18 F-FDG uptake. Both Kong et al. [13] and van Baardwijk et al. [17], observed an association between tumor metabolic response during radiotherapy and that post treatment, with different patterns of response during radiotherapy for patients with a complete metabolic response, and patients with a persistence of metabolic activity after the therapy. Our results do not confirm these findings. We found a borderline statistical significant difference (p = 0.05) only in SUVmax of the tumor during treatment: nonresponders showed higher value of SUVmax of the tumor at during-RT PET-CT when compared to responders patients. Unfortunately, our group sample is too small to speculate on this finding. Moreover the wide range in the cumulative radiation dose delivered until the moment of during-RT PET-CT acquisition (14.4-34.2 Gy) is likely to have contributed to the heterogeneity in metabolic response, and may represent a major drawback for this study. Indeed, we observed a significant correlation between SUVmax measured during-RT PET-CT and the cumulative dose of radiotherapy delivered at the moment of the scan acquisition. Although this finding is not surprising, and is in accordance with the well-known association between the radiation dose delivered and the probability of cure in NSCLC [31], this is the first time that it is clearly shown in a clinical setting. Further investigations of this association, i.e. describing tumor activity as a function of pre-treatment activity, radiotherapy dose delivered, and time since the beginning of radiotherapy, may prove to be very interesting. For example, dose-SUV curves could be elaborated using experimental data to extrapolate the total radiation dose required to obtain a complete metabolic response in each patient, thus making it possible to adapt the radiation dose prescription.
A significant correlation between the residual 18 F-FDG uptake within the tumor at the end of treatment (or change in 18 F-FDG uptake in respect to the baseline), and survival end-points has been described by several Authors [10,17]. Finally, also in our experience, patients with loco-regionally advanced NSCLC who showed a complete metabolic response at post-RT PET-CT had a longer disease-free survival when compared with those with a persisting 18 F-FDG uptake. The main limitation of this study is the small group of patients. More studies on a larger number of patients are necessary to confirm our findings.
Conclusions
18 F-FDG PET-CT is able to detect earlier metabolic modifications in patients with locally advanced NSCLC who underwent pre-operative concurrent chemoradiotherapy. SUVmax of the tumor is therefore a valuable parameter, much more so than MTV. While the metabolic changes of the tumor at the end of treatment seem to be of prognostic value for a progression diseasefree survival, the metabolic changes during radiotherapy did not correlate either with the metabolic response after treatment or with the clinical outcome. However, we cannot conclude that 18 F-FDG PET-CT during chemoradiotherapy does not provide any useful information. Indeed, our analysis has a major limitation, due to the wide range in radiation dose reached at the moment of the PET-CT scan, which may have impaired the interpretation of the results. Further studies exploiting the correlation between metabolic modifications during therapy and the clinical outcome are needed in order to optimize therapeutic strategy. Since the metabolic activity during chemo-radiotherapy correlates with the cumulative dose of radiotherapy delivered at the moment of the scan, special attention should be paid to methodological aspects, such as the radiation dose reached at the time of the PET scan. | 5,810 | 2012-07-10T00:00:00.000 | [
"Medicine",
"Physics",
"Biology"
] |
DNSSEC for cyber forensics
Domain Name System (DNS) cache poisoning is a stepping stone towards advanced (cyber) attacks. DNS cache poisoning can be used to monitor users’ activities for censorship, to distribute malware and spam and to subvert correctness and availability of Internet clients and services. Currently, the DNS infrastructure relies on challenge-response defences against attacks by (the common) off-path adversaries. Such defences do not suffice against stronger, man-in-the-middle (MitM), adversaries. However, MitM is not believed to be common; hence, there seems to be little motivation to adopt systematic, cryptographic mechanisms. We show that challenge-response do not protect against cache poisoning. In particular, we review common situations where (1) attackers can frequently obtain MitM capabilities and (2) even weaker attackers can subvert DNS security. We also experimentally study dependencies in the DNS infrastructure, in particular, dependencies within domain registrars and within domains, and show that multiple dependencies result in more vulnerable DNS. We review domain name system security extensions (DNSSEC), the defence against DNS cache poisoning, and argue that not only it is the most suitable mechanism for preventing cache poisoning but it is also the only proposed defence that enables a posteriori forensic analysis of attacks.
Introduction
During the recent decade, the Internet has experienced an increase in sophisticated attacks, subverting stability and correctness of many networks and services. The attacks target individuals as well as enterprises and exploit vulnerabilities in systems and services in the Internet, e.g., WEB and cloud, as well as in basic building blocks of the Internet, such as Domain Name System (DNS), [RFC882,1034], and routing. The attacks inflict economical losses to businesses and have a devastating impact on e-commerce, security and critical infrastructure.
In this work, we focus on DNS, whose correctness and availability are critical to the functionality of the Internet. We investigate one of the most significant threats to DNS infrastructure: cache poisoning. In a cache poisoning attack, the adversary causes recursive DNS resolvers to accept and cache a spoofed DNS response which contains malicious records. These records redirect the victim clients to incorrect (possibly malicious) hosts. DNS cache poisoning is detrimental to correct functionality of Internet services and can be used to distribute malware and *Correspondence<EMAIL_ADDRESS>1 Fachbereich Informatik Technische Universität Darmstadt Mornewegstraße 32, Darmstadt DE, Germany Full list of author information is available at the end of the article spam and can be applied for phishing attacks, credentials theft, eavesdropping and more.
To prevent DNS cache poisoning attacks, most systems adopt challenge-response mechanisms, [RFC6056, RFC5452], whereby a random challenge is sent within the request and a corresponding value is verified to have been echoed in responses. However, challenge-response authentication is not effective against man-in-the-middle (MitM) adversaries (Figure 1), which can inspect the challenges sent within the requests, and craft spoofed responses with valid challenge values. However, it is typically assumed that the common adversary in the Internet is off-path, which unlike a MitM, cannot observe nor modify legitimate packets exchanged between other parties.
Recently, a number of vulnerabilities were shown, allowing off-path attackers to predict values of challengeresponse defences, exposing the recursive DNS resolvers to off-path cache poisoning attacks [1][2][3][4][5][6]. Some of these vulnerabilities were already patched, e.g., [7,8]. These attacks in tandem with the recent revelations on the surveillance programs, such as the ones carried out by the National Security Agency (NSA) [9,10], raise the question whether the widely deployed challenge-response defences http://jis.eurasipjournals.com/content/2014/1/16 suffice to ensure security of services and networks that rely on the correctness and availability of DNS.
We explore DNS security and review the threats that stem from the ubiquitious Internet connectivity, from the cyber arms race, advances in adversarial capabilities and from vulnerable systems. Our review of the recently published vulnerabilities, shows that systems, services and clients are vulnerable and may be frequently attacked. Widely deployed defences against off-path adversaries do not provide the adequate level of security required to thwart attacks by the modern adversaries. Dependencies within the DNS infrastructure, [11,12], further exacerbate the risk of attacks.
We believe that the main factor to this unfortunate situation is that the attacks that do not break connectivity go undetected -which is indeed the case with advanced, cyber and Advanced Persistent Threat (APT) attacks. For instance, recent hijacking of google.rw by the Syrian hackers [13] would go undetected had it not broken access to the target domains, and the surveillance by the US agencies would not be unveiled had it not been exposed by a whistleblower.
Our main message in this work is that domain name system security extensions (DNSSEC) is the most suitable defence for DNS against cache poisoning attacks. As we show, the significance of DNSSEC is not only in preventing cache poisoning (and thus other advanced) attacks but also in its ability to enable detection of attacks a posteriori. In fact, DNSSEC is the only mechanism that facilitates forensic analysis of attacks and provides evidences, which can be presented to third parties and which allow detection of attacks even by very strong adveraries, such as goverment agencies.
As we show in our study, DNSSEC is essential and critical for detection and prevention of attacks and protection of systems in light of the prevalence of sophisticated attacks by modern adversaries.
We next summarise the topics presented in this work.
MitM is common
Contrary to folklore belief, MitM adversaries are common. We review the cache poisoning attacks in the common settings (below) and discuss the adversarial capabilities required to launch them.
Vulnerable name servers and registrars
Many of the DNS cache poisoing attacks occur by subverting a registrar or a name server. In contrast to the local nature of DNS cache poisoning attacks, which target a specific resolver, the impact of such attacks is global, i.e., any resolver receiving a response from the zone file hosted on a compromised server is a potential victim. We review recent attacks, along with the vulnerabilities that allowed them. We perform an evaluation of the vulnerabilities in domain registration interfaces (which registrars provide to customers). http://jis.eurasipjournals.com/content/2014/1/16
Dependencies in Domain Name System
We review the operational characteristics of the DNS infrastructure: transitive trust, coresidence and servers placement. We argue that these factors impact resilience, stability and security of the DNS services. The high coresidence rate can disrupt services to multiple domains during benign failures or attacks on a single name server and high concentration of name servers in certain geographical locations can facilitate censorship of (and attacks on) a high volume of DNS requests.
Domain Name System security extensions
We believe that DNSSEC is the most suitable defence not only for thwarting cache poisoning attacks but also for detecting them. DNSSEC [RFC4033-4035] is a standard cryptographic protection for DNS that authenticates records via digital signatures. Although proposed and standardised in 1997, DNSSEC is still not widely deployed: most zones are not signed and most resolvers do not validate DNSSEC-signed responses. Furthermore, early adopters experience failures and deployment problems. We review some notable failures and recommend automation of deployment as a mitigation. Our goal is to encourage deployment of DNSSEC, and we hope that our work will foster research efforts on the specific aspects which we identified as deterrents towards (correct) DNSSEC deployment.
We show that DNSSEC provides cryptographic evidences that can be used in forensic analysis and detection of attacks long after they occured, in particular even attacks launched by state entities, domain operators or MitM adversaries. This is in contrast to all other proposed defences for DNS, e.g., Eastlake cookies [16] or DNSCurve [17].
Contributions
In this work, we show that a critical system of the Internet, DNS, is vulnerable to attacks and that in contrast to folklore belief, strong attackers, such as MitM, are common. Attacks on DNS are detrimental for Internet clients and services.
We show that DNSSEC is the only standardised mechanism which can provide evidences for forensic analysis of attacks launched by the strong and sophisticated adversaries and can facilitate detection of attacks which would otherwise remain unnoticed.
Unfortunately, as our study indicates, the adoption of DNSSEC among zones in forward and reverse DNS trees is extremely low. Hence, further research is required to identify obstacles to DNSSEC deployment. We hope that our work will motivate such an investigation.
Organisation
We review DNS and DNS cache poisoning in Section 2. We discuss threats from (1) MitM adversaries and how attackers can obtain MitM capabilities (Section 3) and (2) vulnerabilities in name servers and zones hosting infrastructure (Section 4). We then review the dependencies within the DNS infrastructure in Section 5. We discuss DNSSEC and report on our measurements of DNSSEC adoption rate and deployment challenges. We then review application of DNSSEC for forensic analysis (Section 6). We conclude this work in Section 7.
Domain Name System and cache poisoning
In this section, we provide a background on DNS and provide a describe DNS cache poisoning. We discuss the phases of cache poisoning in detail.
Domain Name System
The Domain Name System (DNS), [RFC1024, RFC1025], is a distributed data base of Internet mappings (also called resource records (RRs)), from domain names to different values. For example, A type RRs map a domain name to its IPv4 address.
Domains are organised hierarchically; for every domain name α and each label or domain name x, the domain name x.α is considered a subdomain of α, i.e., part of the α domain name space. Namely, the rightmost label conveys the top-level domain.
Domains and their mappings are also administered hierarchically; the mappings of each domain foo.bar are provided by a name server, managed by the owner of the domain. The name server of a domain foo.bar is identified via a DNS mapping of type NS, from the domain name to the domain name of the name server, which could be subdomain, e.g,. ns1.foo.bar, or not, e.g., ns.goo.net. Mappings of a domain name, e.g., x.foo.bar, are trusted only if received from a name server of that domain or of a parent domain, e.g., the name server of foo.bar or of bar.
Clients use resolvers in order to find RRs for a domain. The resolvers query the name servers to locate the requested RRs. Upon query, a name server responds with the corresponding RR, or a non-existing domain response in case no matching RR exists. Resolvers cache the DNS responses; the caching time is specified in the time to live (TTL) field of a response, e.g., TTL of t seconds indicates that the resolver should store the record for t seconds. Subsequent requests for the same RRs are provided from the cache. A sample lookup process, initiated with a DNS request from a stub resolver, is depicted in Figure 2.
DNS cache-poisoning
In this section, we provide background on DNS cache poisoning. http://jis.eurasipjournals.com/content/2014/1/16 The resolvers accept only responses for which there are corresponding pending DNS requests; thus, the first step in a DNS cache poisoning attack is to trigger a DNS request. Then, after triggering the request, the second step is to inject a spoofed response (redirecting the clients to incorrect hosts) that will be accepted and cached by a victim resolver.
The attackers can use a number of techniques for triggering DNS requests; some techniques attack random victims while others are targeted against specific users.
When an attacker has a direct access to a victim DNS resolver it can repeat the cache poisoning attack and trigger arbitrary number of requests at will. This is possible if the attacker is on the same network with the victim resolver, e.g., it is one of the clients of an ISP whose resolver it wishes to poison.
Another option is to use an open DNS resolver. The amount of open resolvers on the Internet is constantly increasing, from 15 million in 2010 [18] to 30 million in 2013 [19]. Open resolvers provide recursive DNS services to any requesting client. This ability to trigger DNS requests makes open resolvers more vulnerable to attacks and, in particular, to DNS cache poisoning attacks.
Typically, DNS resolvers limit their recursive DNS service only to clients on their networks. However, there are techniques which the attackers can use to trigger requests even remotely.
A known technique is by controlling a malicious script, typically dubbed 'puppet' (sandboxed client) [20], such as a client running Javascript or presenting Flash content. The attacker can accomplish this, for instance, by purchasing an ad space from advertising web site. When clients surf to such web sites, their browsers are redirected, e.g., via images, or iframes, to retrieve objects from other domains (in this case from attacker's domain). This causes the browser to load the resource from a different (remote) domain. Once redirected, the browsers of the clients download and run the script and become puppets. The script runs automatically and without any interaction with the client; see Figure 3. The script can trigger DNS requests to domains and responses to which the (external) attacker wishes to poison.
An attacker can also trigger DNS requests via other, less known techniques, e.g., by sending email to a victim network whose resolver it wishes to poison, see Figure 4; this technique was first proposed in [21].
DNS cache poisoning by man-in-the-middle
Basic Internet protocols, such as DNS and routing, are not cryptographically protected against MitM adversaries, and current defences rely on challenge-response mechanisms which provide security only against off-path adversaries. Deploying cryptography is more difficult than merely configuring challenge-response mechanisms, and the incorrect belief is that since MitM is not common, such defences should suffice.
In this section, we argue that this belief is wrong and review a number of common scenarios where adversaries possess MitM capabilities. We show how easily the adversaries can exploit that ability and how devastating the results may be for the victims.
MitM in open access networks
Since the early days of the Internet and until recently, clients have accessed the Internet mostly from trusted networks, e.g., via their ISPs or enterprise networks. However, during the last decade, an increasing number of devices obtain Internet connectivity via public IEEE 802.11-based wireless (Wi-Fi) networks [RFC5416], e.g., hotels, airports, cafes or networks set up by individuals.
Figure 3 Triggering DNS requests via a malicious script (puppet).
Both, a malicious operator and a malicious client, can subvert correctness and availability of Internet services for their clients. The most effective attack is by spoofing DNS responses and redirecting the clients to incorrect (malicious) hosts, e.g., to download malware or block responses to launch a denial/degradation of service attacks. A malicious operator essentially has MitM capabilities on its network since the traffic of all the clients connected via its network traverse a router under its control. In particular, the network operator can block correct responses and craft spoofed responses from scratch or can inject spoofed records into DNS response packets.
A malicious client can gain MitM capabilities by spoofing DHCP responses for newly connecting clients. In spoofed DHCP responses, a malicious client can provide an incorrect IP/MAC address for local recursive DNS resolver, e.g., one that is assigned to its own network interface card (NIC), and thus will receive all the DNS requests sent by the victim client. But, MitM capabilities are not essential for launching attacks on public wireless networks. In particular, every connected device can receive all transmissions, no matter who the destination is. This enables malicious clients to inspect all DNS requests, i.g., packets sent to port 53, and to craft spoofed responses before authentic responses arrive. However, attacks on correctness and availability are not surprising and both are known threats. In what follows, we outline a less evident threat which appears to be gaining relevance during the recent couple of years. Specifically, we are referring to monitoring online user activities. The network operator, as well as other clients, can inspect the MAC address of all the clients connected to that network. MAC address (uniquely) identifies a network adapter and has the same value no matter which network the client connects to the Internet from. This enables tracking the users throughout the different networks that they use to connect to the Internet. Even benign network operators may pose a threat, e.g., the logs may be kept over a long time period, and may be shared with third parties.
Such logs enable different parties, e.g., security agencies, armies, content providers, to learn about the online behaviour and habits of the clients.
MitM on backbone links
Recent revelations [10] expose monitoring and censorship activities of National Security Agency (NSA) and Government Communications Headquarters (GCHQ) against Internet services and users. The NSA used its secret agreements with telecommunications companies to monitor communications channels in order obtain access to collect and analyse Internet traffic. The NSA uses hosts (code name QUANTUM) to inject spoofed DNS (and HTTP) responses: when observing a DNS request, a spoofed response is automatically crafted and returned to the victim client. Notice that since the QUANTUM servers are deployed on the backbone links, they can always respond before the legitimate server does. Since the first correct response is accepted and the subsequent ones are ignored, the attacker can redirect victim clients to the servers controlled by the NSA (code name FOXACID); the servers then install malware on clients hosts or tap on the communication.
MitM via route poisoning
The Internet consists of multiple autonomous systems (ASes) which are interconnected by means of routing protocols. To enable connectivity, the networks advertise their prefixes, i.e., address blocks, to the Internet via a BGP update messages, [RFC1771]. Every BGP update message is an indication of a routing change. Routers issue BGP update messages when routing information changes, e.g., link failures, topology changes, reconfigurations, updates of local policies. A BGP update contains an advertised prefix and an AS path. The last AS on the path is the originator of the prefix. Since BGP does not employ authentication mechanisms, originators of BGP routing announcements may claim prefixes belonging to other networks or may change routing path (by adding or removing links), e.g., due to benign failures or malicious attacks. Attackers can hijack prefixes by advertising invalid origin or invalid next hop [14]. There is a large body of research studying attacks on BGP routing, e.g., route hijacking and route injection that damage network operation or connectivity.
For instance, recently, a highly publicised route hijacking attack was exposed [15], which was launched over a period of a number of months, and the attackers routed a significant amount of traffic through Belarus and Iceland. Belarus Telecom was advertising a false route and thus managed to hijack traffic which was not directed to its prefix. Such route hijacks were believed to be theoretical prior to that attack, and it showed the feasibility of such massive MitM hijacking.
Route poisoning can be employed to leverage DNS cache poisoning attacks. By forcing the traffic to traverse a specific path, e.g., via a malicious network operator, the attacker can become a MitM for the communication to a target domain and can easily inject spoofed DNS responses into the traffic flow.
Cache poisoning by subverting hosting infrastructure
Many DNS cache poisoning attacks occur by subverting the hosting infrastructure of DNS, e.g., domain registrar or name servers. Indeed, there is an increasing number of attacks by compromising the hosting side of DNS which allows to take over victim domains. Subverting a registrar or a name server is a lucrative avenue for cache poisoning when the attacker is not a MitM. Attacks compromising the hosting infrastrucutre are frequently occurring. In 2013 alone, multiple domains were hijacked by compromising domain name servers or registrars; some of the notable attacks include a compromise of google.rw and even top level domains like, qa, ps, nl, be, my. Registrar, register.com was subverted, and as a result, many names related to security like metasploit.com and bitdefender.com were redirected.
Compromising registrars
Attackers can exploit vulnerabilities, most notably, in the user interface, provided by the registrars. In particular, attackers often exploit vulnerabilities in user interface, such as lack of (or insufficient) user input validation to perform injection attacks, e.g., buffer overflow, and obtain a shell on the victim host. This allows to manipulate DNS records in the zone file, resulting in cache poisoning attacks of the target domain. This serves as a stepping stone to multiple attacks, e.g., enables attackers to distribute spam while passing reputation-based spam filters, to perform phishing, malware distribution, credentials theft. http://jis.eurasipjournals.com/content/2014/1/16 For instance, a security hole within 123 REGISTRATION registrar management console resulted in the hijacking of 300 domains back in 2012; the problem was eventually tracked down to an open account control panel that had allowed changes to be made without adequate authentication.
We tested the interfaces of a number of popular registrars and found a vulnerability which may facilitate cache poisoning attacks, even without compromising the web interfaces: when registering a domain, the attackers can configure legitimate name servers that belong to other domains and are not under their control as their own.
This fact can be abused, e.g., for cache poisoining or denial of service attacks. For instance, consider an attacker that registers a domain under some top level domain, e.g., one-domain-to-rule-them-all.org, and registers a name server that belongs to another domains under org. Then, referral responses to requests for resource records within attacker's domain can be exploited to poison the records of the victim domain whose name server record the attacker used.
Unfortunately, detecting and preventing such attacks is challenging and would require the registries to validate the ownership over the records at registration time.
Compromising name servers
There is a long history of attacks exploiting vulnerabilities in the name servers, e.g., vulnerable operating system or vulnerable DNS software. This is a stepping stone to obtain unauthorised access to the system and to execute arbitrary code. Vulnerabilities were registered in popular operating systems and DNS software, e.g., MS server, Bind versions, PowerDNS.
Some of the known attacks exploited known vulnerabilities, such as (1) buffer overflow, which allows an attacker to obtain unauthorised access to the system and execution of arbitrary code, (2) inproper handling of input values, e.g., one attack exploited vulnerable error handling routine that would crash on invalid DNS transaction identifier values, (3) improper check of memory copy, e.g., would crashed the server allowing an attacker to gain root privileges on name server, and many others.
Dependencies in Domain Name System
Dependencies within the DNS infrastructure further exacerbate the impact of cache poisoning attacks or compromises of name servers or registrars [11]. We consider the following types of dependencies: (1) inter-domain dependencies via transitive trust and (2) zones coresidence due to name servers sharing and (3) dependencies via registrars.
Our study encompassed top 50 K Alexa domains and 568 TLDs. We also measured all the domains depending on these Alexa domains and TLDs via a transitive trust, which resulted in a total of 150 K domains. These domains are served by 48 K name servers; these 48 K name servers have 65 K different IP addresses, since sometimes a single name server is assigned a number of IP addresses.
Transitive trust
A transitive trust dependency can be twofold: (1) a name server can appear in a number of transitive trust chains (i.e., the number of domains that can be impacted by a failure of a specific name server) and (2) a domain can depend on multiple domains for its resolution (i.e., impact of a failure of a single server on the latency or availability of a domain). The former impacts the resilience of the DNS infrastructure and the latter the resilience of a specific domain. Ideally, both should be low.
Our study shows that on average, a domain in top 50 K Alexa depends on 43.5 other domains via transitive trust chains, and on average, a domain in TLD depends on 43.7 domains via transitive trust chains. The maximal number of transitive trust dependencies in Alexa domains is 220 and in TLDs is 183. For instance, domain sigcomm.com, ranked 373097 on Alexa, is hosted at dnsmadeeasy.com, coresiding with 400 other domains. Figure 5 plots the cumulative distribution function (CDF) F(x) = Pr[X ≤ x] of the number of name servers appearing in transitive trust chains of top 50 K Alexa domains and TLDs; this encompasses dependency (1)and increases traffic volume to name servers and resolution latency for clients. The CDF curves for the Alexa domains and TLDs represents the number of transitive trust chains that name servers in Alexa domains (resp. TLDs) appears in. Approximately, 50% of name servers appear in two or more transitive trust chains. More than 90% of the name servers appear in eight and less chains. Some name servers appear in more than 128 transitive trust chains. Figure 6 plots the CDF of the transitive trust dependencies of Alexa top 50 K domains and TLDs; this expresses dependency (2) -and increases resolution time for clients' . Approximately, 50% depend on 20 or more other domains for their resolution, and more than 90% depend on more than 128 domains.
Name servers with high dependencies via transitive trust, i.e., whereby multiple domains depend on them for their resolution, have two side effects: (1) they can become a lucrative target for attacks. For instance, recently, Herzberg and Shulman [3] showed how to launch a DNS cache poisoning attack using fragmented DNS responses to replace the authentic IP address of a victim name server with a spoofed one and ran this against sns-pb.isc.org name server, which appears in 69 transitive trust chains of other domains.
(2) Another notable side effect is that large transitive trust chains introduce more latency to resolution http://jis.eurasipjournals.com/content/2014/1/16 of records within domains depending on many other domains and increase the queries rate to name servers appearing in multiple transitive trust chains. Our study measured an increase of 50 ms for every transitive trust chain of three links, when measured with a cold (empty) cache. Resolutions of larger chains, e.g., 200, can often result in timeouts and unnecessary retransmissions, overloading the network and the name server, and increasing the latency for clients' queries.
Transitive trust dependencies also nullify effectiveness of DNSSEC, [RFC4033-RFC4035], and impede its adoption. In particular, if name servers or other resources of a signed zone are placed under unsigned domains, the DNS resolver will not be able to establish the security of the signed records, and the security will depend on the security of the weakest link in a transitive trust chain.
Coresidence
Hosting a number of zones files on the same name server enables DNS name server operators to optimise profit and reduce operational costs and management overhead. We measure and quantify the dependencies between zones, namely the fraction of zone files residing on the same physical server. We measure the coresidence among TLDs and top 50 K Alexa domains, including the coresidence between name servers appearing in their transitive trust dependencies. As our results, plotted in see Figure 7, indicate, the coresidence rate among the name servers are extremely high. We found that coresidence of multiple zones on the same name server is a common practice among Alexa domains and TLDs. In particular, more than 70% of the name servers of Alexa domains and more than 80% of the name servers of TLDs host multiple zones. Some name servers host more than 500 zone files, such as the name server pdns.ultradns.net.
The implication of high coresidence rate is that a failure or a DoS attack against the availability of the name server or the network hosting it impacts the availability of all of the zones hosted on it. An attack against a name server, e.g., exploiting a vulnerability in a DNS software or in the operating system can enable the attacker to take control over the name server and inject records into the zone files hosted on it. We also find that high coresidence increases loss and failures.
Dependencies via registrars
Many DNS cache poisoning attacks occur by subverting the hosting infrastructure of DNS, e.g., domain registrar or name servers. By exploiting social engineering or vulnerabilities in domains registration interface, an attack against an infrastructure, run by a registrar, may expose all of the domains using it to domain hijacking attacks. Subverting a registrar or a name server is a lucrative avenue for cache poisoning when the attacker is not a MitM. Indeed, there is an increasing number of attacks by compromising the hosting side of DNS.
We quantify the dependencies between domains and registrars. In our study, we measured dependencies between 1003 ICANN accredited registrars and 18 popular TLDs. Figures 8 and 9 depict the inter-dependencies between registrars and domains. In particular, Figure 8 shows dependencies of popular TLDs. On average, a domain depends on almost 600 registrars, with the maximum reaching over 1 K as in the case of the com TLD. Attack against any of these registrars, on which a single domain depends, may suffice to hijack its subdomains and even the domain itself. Figure 9 shows the fraction of TLDs under which registrars can register subdomains. The implication is that subverting a single registrar may allow the attacker to hijack or register subdomains under all the TLDs that the registrar is accredited to manage.
DNS security extensions
Due to the critical function that DNS fulfills, it is highly reactive to new threats and attacks, constantly responding with new defence mechanisms. In particular, to mitigate the detrimental damages of cache poisoning attacks, IETF designed and standardised a cryptographic defence for DNSSEC [RFC4033-RFC4035]. DNSSEC was designed to address the cache poisoning vulnerability in DNS by providing data integrity and origin authenticity via cryptographic digital signatures over DNS resource records. The digital signatures enable the recipient, e.g., resolver, that supports DNSSEC validation, to check that the data in a DNS response is the same as the data published within the target zone.
A secure DNS would be resilient to cache poisoning attacks and would facilitate a wide range of applications and systems, such as secure routing (with ROVER, [22]), secure email, (with PGP keys distribution [23]). For the protection of DNSSEC to kick in, the zones need to be signed starting with the trust anchor (the root zone) all the way down to the target domain to enable the resolver to establish a chain of trust. The name server has to serve signed responses, and the resolver has to validate the signatures and keys in the responses; see an illustration of the chain of trust establishment process in Figure 10.
DNSSEC defines new resource records (RRs) to store signatures and keys used to authenticate the DNS responses. For example, a type RRSIG record contains a signature authenticating an RR set, i.e., all mappings of a specific type for a certain domain name. By signing only RR sets, and not specific responses, DNSSEC allows signatures to be computed off-line, and not upon request; this is important, both for performance (since signing is computationally intensive) and security (since the signing key can be stored in a more secure location than the name server).
To allow clients to authenticate DNS data, each zone generates a signing and verification key pair (sk, vk). The signing key sk is used to sign the zone data and should be secret and kept offline. Upon queries for records in a domain, the name server returns the requested RRs, along with the corresponding signatures (in a RRSIG RRs). To prevent replay attacks, each signature has a fixed expiration date. The clients, i.e., resolvers, should also obtain the zone's public verification key vk, stored in a DNSKEY RR, which is then used by the clients to authenticate the origin and integrity of the DNS data.
Resolvers are configured with a set of verification keys for specific zones called trust anchors; in particular, all resolvers have the verification key (trust anchor) for the root zone. The resolver obtains other verification keys, which are not trust anchors, by requesting a DNSKEY resource record from the domain. To validate these verification keys obtained from DNSKEY, the resolver obtains a corresponding DS RR from the parent zone, which contains a hash of the public key of the child; the resolver accepts the DNSKEY of the child as authentic if the hashed value in DNSKEY is the same as the value in the DS record at the parent and that DS record is properly signed (in a corresponding RRSIG record). Since the DS record at the parent is signed with the DNSKEY of the parent, authenticity is guaranteed.
This process constructs a chain of trust which allows the resolver to authenticate the public verification key of the http://jis.eurasipjournals.com/content/2014/1/16 Figure 10 A (simplified) sample process of constructing a chain of trust from the root zone. For ease of presentation in this illustration, the RRs maintained by the name servers are enumerated, and we specify the exchanged RRs by indicating the corresponding numbers above the arrows. target zone. Specifically, the clients authenticate the public verification key of the zone by constructing a chain of trust starting at the root zone, or another trust anchor, and terminating at the target zone.
DNSSEC deployment
Although proposed in 1997, DNSSEC is still not widely deployed: less than 3% of the DNS resolvers validate DNSSEC records in DNS responses [24]; recent measurements of DNSSEC adoption on forward and reverse DNS trees show that less than 1% of the zones are signed, [25].
We summarise the results in Table 1. The first column contains the number of registered domains that were tested. This is mainly relevant in the reverse DNS tree, where a large fraction of the domains (that correspond to IPv4 address blocks) are not registered. The second column contains the number of name servers in each domain space. In forward DNS, 62% of the TLDs are signed and less than 1% (0.46%) of top million Alexa domains are signed.
In a reverse DNS, there are only 0.07% signed zones in total; this is surprising since the reverse DNS is commonly utilised by security mechanisms. Notice that very few of the reverse DNS domains, that correspond to classes B and C (i.e., x.y.in-addr.arpa and x.y.z.in-addr.arpa), are signed (only 0.4% and 0.06% respectively). This means that even if the lower domains decide to adopt DNSSEC, the resolvers will not be able to establish a chain of trust to them.
Such a low adoption of DNSSEC among domains may be an indication of deployment challenges that needs to be investigated and mitigated. We hope that our work will further motivate such study.
Operational challenges
There are a number of challenges related to deployment of DNSSEC, which we studied in our earlier work [26,27]. In this section, we discuss operational challenges and outages. According to our study, the outages are mainly related to errors in key rollover and to zone http://jis.eurasipjournals.com/content/2014/1/16 signingprocedure. For instance, in January 2012, Comcast (a large ISP) stopped serving responses for nasa.gov. This immediately incited speculations whether Comcast was blocking nasa.gov. In reality, nasa.gov served incorrect signatures over its DNS records, and the validating resolvers of Comcast discarded those 'invalid' responses; the resolvers of Comcast were functioning correctly since such incorrectly signed records could also constitute an attack.
In August 2013, a mistake in key rollover, whereby instead of signing with both the old key and the new one, only the new key was used, causes an outage of domains under gov TLD. The impact was that 18 million clients of comcast Internet provider, more than 70 customers of Google Public DNS and validating Internet providers all over the globe (e.g., Sweden, Czech, Brazil) could not access domains under gov TLD.
There were also a number of other publicised failures, which resulted in broken DNS functionality for victim networks. Most, if not all, of the failures are related to human errors, and operational challenges in DNSSEC could be mitigated by automating the signing and key rollover procedures.
Forensics, evidences and detection with DNSSEC
In this section, we show that DNSSEC can be useful for detection of attacks and in forensic analysis. The feature that makes this possible are digital signatures, which can be validated and verified by anyone with the possession of a public verification key. Signatures provide a valuable information for forensic analysis and can enable identification, e.g., of the exact time that the network was attacked and to which hosts the traffic was redirected.
In contrast to the relative time indicated in the TTL field in DNS records, the cryptographic signatures contain an absolute expiration date and the date the signature was generated. The signatures also contain the tag of the cryptographic material (algorithm, hash, key) that was used to produce the signature.
Notice that since the cryptographic signatures should be impossible to forge for efficient adversaries, only the entity in possession of a cryptographic signing key, or a very strong adversary with huge computational power, should be able to craft valid signatures. Thus, a forged signature is an indication of a very strong adversary, such as a government, or an indication that the zone, which provided the spoofed record with a valid signature, may be malicious or subverted. In what follows, we consider how to analyse attacks or detect breaches, performed by strong adversaries, a posteriori.
We propose to utilise DNSSEC to design a system that would enable analysis of attacks, provide evidences of attacks that took place and even enable detection of some attacks. The system would need to collect the DNS responses (along with the corresponding signatures and cryptographic keys), e.g., by configuring suitable rules in the firewall, and store them in a database for processing.
Forensic analysis
The time stamps on the signatures provide a valuable information, allowing to analyse when a certain mapping was considered valid, and when it constitutes an attack. For instance, consider an organisation that had a network block 1.2.3.0/24 and then moved to a different Internet provider and received a new address block 5.6.7.0/24. All the servers that once occupied a block 1.2.3.0/24 were also moved to address block 5.6.7.0/24. Thus, responses with mappings from the block 1.2.3.0/24 are no longer valid, and if a resolver on some network receives records with mapping from old block, this may be an indication of an attack.
The time fields in the signatures (over the DNS records) enable network operators to analyse when the spoofed records were supplied and if the records reflect the real mappings at the time that they were supplied.
Evidences
It may often be desirable to prove to a third party, e.g., judge, registrar or domain operator, that attack took place. For instance, consider a case where a customer's private data was breached via a redirection to a malicious host. The customer can present the malicious records (which were used in the course of the attack) along with the cryptographic signatures, to a third party, and any third party can be convinced by validating the signatures. Another example is of a stronger adversary, for instance a state, that forces com domain to redirect all traffic destined to one of its subdomains, e.g., a Chinese enterprise Huawei Huawei.com, to different servers. Since com signs the delegation records for Huawei.com, it can also produce valid signatures for those new servers. If the attack is detected, those signatures can indicate that the incident was not a benign failure or mistake, but a malicious attack, which involved resigning the delegation records belonging to Huawei.com.
Such evidences are not available with other cryptographic defences that were proposed for DNS, most notably Eastlake cookies [16] and DNSCurve [17].
Detection
DNS is a distributed infrastruture, and a single domain is often served by mutliple name servers. Furthermore, many name servers are also distributed, e.g., via the ANYCAST technology, [RFC1546], where a DNS request is rerouted to the topologically nearest name server. http://jis.eurasipjournals.com/content/2014/1 /16 The attacks that we discussed in Sections 3 and 4 were launched against a specific name server or a traffic that was exchanged with a specific instance of the name server, e.g., the attacker either subverted a name server, or injected spoofed responses into a communication flow with a name server. Indeed, it is much more difficult (if not impossible) to subvert all of the name servers of some target domain. This would require an attacker that can eavesdrop on multiple Internet links that belong to different autonomous systems (AS) or to compromise all the name servers. Since this should be impossible even for states and military organisations, we use this as a basic premise in the detection technique which we propose next.
The fact that the adversary can compromise only some of the links and servers means that the different name servers (and different instances thereof ) will return different responses to DNS requests.
Networks could establish trustworthiness and correctness of the DNS responses, by querying the different name servers (and instances thereof ), belonging to target domains. To query name servers instances distributed via an ANYCAST, the network could use proxies located in different parts of the Internet. The inconsistencies, if found, would be carefully checked to test for attacks.
Conclusions
A secure DNS is critical for stability and functionality of the Internet. In this work, we review DNS security. We show that current defences do not suffice against common attackers and the DNS infrastructure is vulnerable to cache poisoning. We also experimentally measure the dependencies within DNS. Our results indicate that attacks against a single weak link can impact multiple dependent domains.
A significant problem pertaining to many attacks is that it is impossible to detect them (unless they break connectivity or disrupt the 'expected' functionality). We show that DNSSEC could not only prevent many of the attacks but could also be used to enable detection of the attacks and as well as a posteriori forensic analysis of the attacks. DNSSEC also can be used to generate cryptographic evidences, which would enable victims to prove to the third parties, e.g., insurance organisations, judges, Internet operators, that attacks and breaches took place. To best of our knowledge, our work is the first to propose such applications of DNSSEC.
However, deployment and operation of DNSSEC are challenging; we discuss problems and recommend countermeasures. We hope that our work will raise awareness to the vulnerabilities and the potential of DNSSEC to facilitate forensic analysis of attacks. | 9,826.6 | 2014-10-22T00:00:00.000 | [
"Computer Science"
] |
Evolutionary GAN–Based Data Augmentation for Cardiac Magnetic Resonance Image
: Generative adversarial networks (GANs) have considerable poten-tial to alleviate challenges linked to data scarcity. Recent research has demonstrated the good performance of this method for data augmentation because GANs synthesize semantically meaningful data from standard signal distribution. The goal of this study was to solve the overfitting problem that is caused by the training process of convolution networks with a small dataset. In this context, we propose a data augmentation method based on an evolutionary generative adversarial network for cardiac magnetic resonance images to extend the training data. In our structure of the evolutionary GAN, the most optimal generator is chosen that considers the quality and diversity of generated images simultaneously from many generator mutations. Also, to expand the distribution of the whole training set, we combine the linear interpolation of eigenvectors to synthesize new training samples and synthesize related linear interpolation labels. This approach makes the discrete sample space become continuous and improves the smoothness between domains. The visual quality of the augmented cardiac magnetic resonance images is improved by our proposed method as shown by the data-augmented experiments. In addition, the effectiveness of our proposed method is verified by the classification experiments. The influence of the proportion of synthesized samples on the classification results of cardiac magnetic resonance images is also explored.
Introduction
Cardiac magnetic resonance imaging (MRI) is the gold standard for assessing cardiac function. Conventional cardiac MRI scanning technology has advanced over the years and plays a vital role in the diagnosis of disease. Currently, many cardiac magnetic resonance imageassisted diagnosis tasks that are based on deep learning [1] have achieved good results. A novel algorithm [2] was proposed by Renugambal et al. for multilevel thresholding brain image segmentation in magnetic resonance image slices. This approach requires expensive medical equipment to obtain cardiac magnetic resonance images and also needs experienced radiologists to label them manually. It is undoubtedly extremely time-consuming and labor-intensive. The privacy of patients in the field of medical imaging has always been very sensitive and it is expensive to obtain a large number of datasets that are balanced between positive and negative samples.
A significant challenge in the field of medical imaging based on deep learning is how to deal with small-scale datasets and a limited number of labeled data. Datasets are often not sufficient or the dataset sample is unbalanced, especially when using a complex deep learning model, which makes the deep convolution neural network with a huge number of parameters appear as overfitting [3]. In the field of computer vision, scholars have proposed many effective methods for overfitting, such as batch regularization [4], dropout [5], early stopping method [6], weight sharing [7], weight attenuation [8], and others. These methods are to adjust the network structure. Data augmentation [9] is an effective method to operate on the data itself, which alleviates to a certain extent the problem of overfitting in image analysis and classification. The classical data augmentation techniques mainly include affine transformation methods such as image translation, rotation, scaling, flipping, and shearing [10,11]. These approaches mix the original samples and new samples as training sets and input them into a convolutional neural network. The adjustment of the color space of samples is also a data augmentation method. Sang et al. [12] used the method of changing the brightness value to expand the sample size. These methods have improved solving the overfitting problem. However, the operation on the original samples does not produce new features. The diversity of the original samples has not been substantially increased [13], and the promotion effect is weak when processing small-scale data. Liu et al. [14] used a data augmentation method on the test set based on multiple cropping. Pan et al. [15] presented a novel image retrieval approach for small-and medium-scale food datasets, which both augments images utilizing image transformation techniques to enlarge the size of datasets and promotes the average accuracy of food recognition with deep learning technologies.
The generative adversarial network (GAN) [16] is a generative model proposed by Ian Goodfellow and others. It consists of a generator G and a discriminator D. The generator G uses noise z sampled from the uniform distribution or normal distribution as input to synthesize image G(z). The discriminator D attempts to judge the synthetic image G(z) as false as much as possible and judge the real image x as true. Next, it adjusts the parameters of each model through successive confrontation training. Finally, the generator obtains the distribution model of real samples and good performance of generating images close to real images. The specific structure of the GAN is shown in Fig. 1.
The entire GAN training process is designed to find the balance between the generative network and the discriminative network. This makes the discriminator unable to judge whether the samples generated by the generator are real so that the generative network can achieve the optimal performance. This process can be expressed as formula (1): The GAN generates new samples by fitting the original sample distribution. The new samples are generated from the distribution learned by the generative model, which makes it have new features that are different from the original samples. This characteristic makes it possible to use the samples generated by the generative network as new training samples to reach the goal of data expansion. The GAN has achieved good results in many computer vision fields. However, it has many problems in practical applications. It is very difficult to train a GAN. Once the data distribution and the distribution fitted by the generative network do not substantially overlap at the beginning of training, the gradient of the generative network can easily point to a random direction, which results in the problem of gradient disappearance [17]. The generator will try to generate a single sample that is relatively conservative but lacks diversity to make the discriminator give high scores which leads to the problem of mode collapse [18]. [19], which combines a convolutional neural network with GAN, and conditional GAN [20] which adds a precondition control generator to the input data. The triple-GAN [21] adds a classifier based on the discriminator and generator, which can ensure that the classifier and generator achieve the optimal solution for classification from the perspective of game theory. In this approach, however, it is necessary to manually label samples. The improved Triple-GAN [22] method solves this problem and avoids the situation of gradient disappearance and training instability. In addition, the least squares GAN (LSGAN) [23] and wasserstein GAN (WGAN) [24] have made great improvements to the loss function. The WGAN uses wasserstein distance to measure the distribution distance, which makes the GAN training process more stable to a large extent. However, Gulrajan et al. [25] found that the WGAN uses a forced phase method to make the parameters of the network mostly focus on −0.01, 0.01, which wastes the fitting ability of the convolutional neural network. Therefore, they proposed the WGAN-GP model, which effectively alleviated this problem. Li et al. [26] introduced the gradient penalty term to the WGAN network to improve the convergence efficiency. The evolutionary GAN proposed by Wang et al. [27] is a variant model of a generative adversarial network that is based on evolutionary algorithms. It will perform mutation operations when the discriminator stops training to generate multiple generators as adversarial targets and uses a specific evaluation method to evaluate the quality and diversity of the generated images in different environments (the current discriminator). This series of operations can reserve one or more generators with better performance for the next round of training. This method that overcomes the limitations of a single adversarial target is able to keep the best offspring all the time. It effectively alleviates the problem of mode collapse and improves the quality of the generator.
Recently, many scholars have used GANs to augment training data samples. GANs have been used to augment the data of human faces and handwritten fonts [28]. Ali et al. [29] used the improved PGGAN to expand the skin injury dataset and increased the classification accuracy.
Frid et al. [30] used a DCGAN and an ACGAN to expand the data of liver medical images and proved that the DCGAN has a greater improvement in the classification effect on this dataset. In contrast to affine transformation, a GAN can be used to generate images with new features by learning the real distribution.
The evolutionary GAN can improve the diversity and quality of generated samples. This study therefore uses an evolutionary GAN to perform data augmentation on cardiac magnetic resonance images. The main contributions of this study are as follows: (1) A cardiac magnetic resonance image data augmentation method based on an evolutionary GAN is proposed. This method generates high-quality and diverse samples to expand the training set and improves the value of various indicators of the classification results. (2) Linear interpolation of feature vectors is combined with the evolutionary GAN to synthesize new training samples and generate related linear interpolation labels. This not only expands the distribution of the entire training set, but also makes the discrete sample space continuous and improves the smoothness between domains, which better trains the model.
Evolutionary GAN
The training process of the evolutionary GAN can be divided into three stages: mutation, evaluation, and selection. The first stage is mutation where the parent generator is mutated into multiple offspring generators; the second stage is evaluation where the adaptive score is worked out for each offspring generator of the current discriminator using an adaptive function; the third stage is selection where the offspring generator with the highest adaptive score is selected by sorting. The basic structure of the evolutionary GAN is shown in Fig. 2.
Mutation
The evolutionary GAN uses different mutation methods to obtain offspring generators based on parent generators. These mutation operators are different training targets, the purpose of which is to reduce the distance between the generated distribution and the real data distribution from different angles. It should be noted that the best discriminator D* in formula (2) should be trained before each mutation operation.
Zhang et al. [31] proposed three mutation methods: 1) Maximum and minimum value mutation: Here the mutation has made little change to the original objective function, which provides an effective gradient and alleviates the situation of gradient disappearance. It can be written as formula (3): 2) Heuristic mutation: Heuristic mutation aims to maximize the log probability of the error of the discriminator. The heuristic mutation will not be saturated when the discriminator judges the generated sample as false and it still provides an effective gradient to get the generator continuously trained. It can be written as formula (4): 3) Least-squares mutation: Least-squares mutation can also avoid gradient disappearance. At the same time, compared with heuristic mutation, the least-square mutation does not generate false samples at a very high cost. It does not avoid punishment at a low cost, which can avoid mode collapse to a certain extent. It can be written as formula (5):
Adaptive Function
The evolutionary GAN uses the adaptive function to evaluate the performance of the generator and subsequently quantifies it as the corresponding adaptive score, which can be written as formula (6): F q is used to measure the quality of the generated samples, namely whether the offspring generator can fool the discriminator, which can be written as formula (7): F q measures the diversity of the generated samples. It measures the gradient generated when the parameters of the discriminator are updated again according to the offspring generator. If the samples generated by the offspring generator are relatively concentrated (lack of diversity), it is easier to cause a large fluctuation of the gradient when updating the discriminator parameters. This can be written as formula (8): γ (≥0) is a hyperparameter used to adjust the quality of samples generated and the weight of diversity, which can be adjusted freely in the experiment.
Method
In this study, we describe the design of a data augmentation model for cardiac magnetic resonance medical images based on an evolutionary GAN. This approach can generate highquality and diverse samples to expand the training set. The linear interpolation of related labels is generated by combining the linear interpolation of feature vector with the evolutionary GAN, which expands the distribution of the training set and makes the discrete sample space continuous to train the model better. The specific network structure is shown in Fig. 3:
DAE GAN
High quality and diversity of samples are needed when using a GAN for data augmentation. The evolutionary GAN is very suitable for data augmentation since it can be trained in a stable way and generates high-quality and diverse samples. The user can choose to focus on diversity or quality according to needs by adjusting the parameters in the adaptive function, which can make the process of data augmentation more operative. This study improves the evolutionary GAN and we name the improved model data augmentation evolutionary GAN (DAE GAN).
There is no difference in the input and output between the evolutionary GAN and vanilla GAN. The only exception is that after fixing the discriminator parameters multiple offspring generators are mutated based on the parent generator for training. The optimal one or more generators is selected as the parent generator in the next discriminator environment, after the evaluation by the adaptive function.
The evolutionary GAN greatly improves the diversity of generated samples. However, a certain number of training samples are required if we want to fully train the GAN model. In the case of too few training samples, the generator and discriminator are prone to reach the equilibrium point prematurely and also cause the problem of mode collapse in the generated data. This study uses the traditional affine transformation data augmentation methods before training the GAN to alleviate this problem, expanding the data by operations of horizontal flip, vertical inversion, translation, rotation, and others. The security of medical images was given careful consideration. We therefore did not add the original data with noise and avoided performing operations like cropping. In this way, we preserved the texture and edge features of the original data as far as possible. Traditional data augmentation only makes small changes to the original data and does not generate new features and the samples are also discrete. Thus, this study introduces linear interpolation.
Zhang et al. [31] proposed a data augmentation method that is irrelevant to the data described in their article. This method constructs virtual training samples from original samples, combines linear interpolation of feature vectors to synthesize new training samples, and generates related linear interpolation labels to expand the distribution of the entire training set. The specific formula is as follows (9): x i , x j are the original input vectors; y i , y j are the label codes; (x i , x j ) and (y i , y j ) are two samples randomly sampled from the original samples; λ ∈ Beta[α, α] is the weight vector; and α ∈ (0, +∞) is the hyperparameter that controls the interpolation strength between the feature and the target vector. The linear interpolation method makes the model have the characteristics of a linear model. Processing the area between the original samples and the training samples reduces the inadaptability of predicting test samples other than the training samples. This enhances the generalization ability, simultaneously making the discrete sample space continuous and improving the smoothness between domains.
The original input of the evolutionary GAN discriminator ought to be two samples after fixing the generator parameters: one is the generated sample (the discriminator tries to minimize the distance between the predicted label of this sample and "0"); the other is the real sample (the discriminator tries to minimize as much as possible the distance between the predicted label of this sample and "1"). The discriminator loss function of the original evolutionary GAN is described as follows in formula (10): The expanded expression can be written as formula (11): This study operates linear interpolation on the evolutionary GAN to modify the discriminator input from the original two pictures to one picture. The discriminator task is changed to minimize the distance between the predicted label of the fusion sample and 'λ'. The loss function of the discriminator is modified as formula (12):
Algorithm
Typically, the GAN uses the noise z that obeys the multivariate uniform distribution or multivariate normal distribution as the input of the model. Ben et al. [32] believed that multiple Gaussian distributions can better adapt to the inherent multimodal of the real training data distribution. They thus used multimodal distribution as an input in the GAN and demonstrated that this method can improve the quality and variety of generated images. The DAE GAN training process is shown in Tab. 1.
Data Set and Preprocessing
The magnetic resonance data in this experiment were obtained from a partner hospital. All samples are two-dimensional short-axis primary T1 mapping images. The spatial distance of these cardiac magnetic resonance images ranges from 1.172 × 1.172 × 1.0 mm3 to 1.406 × 1.406 × 1.0 mm 3 and the original pixel size is 256 × 218 × 1. The benign and malignant annotation and segmentation areas of these images are manually labeled and drawn by senior experts. The original image data is in the ".mha" for-mat. After a series of preprocessing operations, such as resampling, selection of regions of interest, normalization, and final selection of interest, we obtained a total of 298 images that consisted of 221 cardiomyopathy images and 77 non-diseased images. The size of the preprocessed image is 80 × 80 × 1. The preprocessed cardiac magnetic resonance image is shown in Fig. 4.
Figure 4: Cardiac magnetic resonance image region of interest
All samples were normalized in this experiment to ensure the consistency of training data. Weused affine transformations on the training set before training the GAN. This included horizontal flip, vertical flip, 90 • , 180 • , 270 • rotation, 0 • -20 • random rotation and amplification, 0%-2% random rescaling of the vertical and horizontal axes, the small and specific amplitude of rotation and amplification, and translation and amplification. The goal of these steps was not to lose the original image information from the data. After augmenting the training set once, we performed two kinds of operations on the data: the first was to put it into the classifier for training directly, and to subsequently use the test set to get the classification results; the second was to put it into different GANs for training and generating new samples to train the classifier again.
Training the DAE GAN
The original evolutionary GAN uses the structure of a DCGAN. In this study, we use the residual structure shown in Fig. 5 in the generator and discriminator since the residual structure [33] can alleviate the gradient vanishing problem and accelerate the convergence rate. The goal is to train the high-performance generator more quickly in the same training time.
After adding the self-attention module [34], the detailed structure of the generator and discriminator and the output size of each layer is shown in Tab. 2.
The DAE GAN experimental environment is as follows: Ubuntu 16.04.1 TLS, Tensorflow 1.14.0, two Nvidia Tesla M40 GPU with 12 GB video memory (used to train the generative models of diseased and non-diseased samples). The maximum storage capacity of the model is set to 4 to take into account the space occupation and accidental interruption.
The Generation Results of the DAE GAN
In this experiment, we use 5-fold cross-validation to dynamically divide the heart magnetic resonance image into a training set and a test set at a ratio of 0.8:0.2. We use only the training set in the DAE GAN training. Each model has been trained several times (≥ 5) in the experiment due to the uncertainty in the training process for the deep convolution model. The specific effect of the data augmentation method was verified by the average classification results.
The training set of the cardiac magnetic resonance image data is expanded after normalization and affine transformation. We train the DAE GAN model by following the steps of Algorithm 1. The effects of our approach on the samples generated in the training process of the generative model are shown in Fig. 6.
Classification Experiment and Analysis of Experimental Results
The observation method has strong subjectivity. In this experiment, data augmentation is performed on small sample medical images. Consequently, the observation method can only be used as a reference evaluation standard. Our study uses the ResNet50 model and the Xception model [35] as a classifier to evaluate the effect of data augmentation. The classification results are used to uniformly evaluate the effects of various data augmentation methods.
In addition to the conventional accuracy index, we also calculate the two medical image classification indexes: sensitivity and specificity. These indicators are briefly explained here.
The accuracy rate is the probability that the diseased sample and the non-diseased sample are judged correctly. The calculation formula is described as follows in formula (13): Sensitivity is the probability that a diseased sample is judged to be diseased. The calculation formula is described by formula (14): Specificity is the probability of judging a non-diseased sample as non-diseased. The calculation formula is described as in formula (15): TP stands for True Positive, which means that not only does the classifier judge it to be a diseased sample, but it is also a diseased sample. TN stands for True Negative. In this case, the classifier judges it to be a non-diseased sample, but it is in fact not a diseased sample. FP is short for False Positive, which means that the classifier judges it to be a diseased sample and it is a non-diseased sample. FN is short for False Negative. Here the classifier judges that the sample is not diseased but it is a diseased sample.
In this study, we use the Keras framework under the Ubuntu 16.04.1 TLS system environment for the classification experiment (version number 2.24). We use a Tesla M40 in the training process. The learning rate is set to 1e−4, and we use the RMSprop optimizer. We set the early stopping method to prevent overfitting and the fivefold cross-validation method is used to find the average classification result of the classifier. The average classification results of each augmentation method in the ResNet50 and Xception classification models are shown in Tab. 3. In the experiment, it is clear that the classification effect is not necessarily better as the number of generated samples increases. After adding a certain amount of data, the classification effect does not rise but decreases. At the same time, if only using generated samples without affine transformation data augmentation, the classification effect was not greatly improved compared with using affine transformation data augmentation alone. The specific experimental results are shown in Fig. 8. The experimental results show that we cannot completely obtain the original data distribution because the quality of the generated data remains poorer than the original data. The classification effect is slightly reduced by using only the generated data without the affine transformation data augmentation method. However, when the two methods are combined, the classification result of the classifier in-creases with the increase of the generated data and reaches the peak value when adding three times of the generated data. The addition of too much generated data leads to overfitting of the classification model and reduces the accuracy of classification.
Experiments were performed to compare the different models with the classification results without any data augmentation method. For the ResNet50 model, the classification accuracy increased from 0.7767 to 0.8478, the sensitivity increased from 0.9674 to 0.9772, and the specificity increased from 0.6964 to 0.7822. For the Xception model, the classification accuracy increased from 0.7953 to 0.8698, the sensitivity increased from 0.9765 to 0.9798, and the specificity increased from 0.6833 to 0.8116.
Conclusion
The DAE GAN model proposed in this paper can effectively expand the amount of cardiac magnetic resonance image data, alleviating the problem of the classification network not being fully trained due to the small amount of medical image data and uneven data. The classification accuracy of the DAE GAN in ResNet50 and Xception models was increased by 7.11% and 7.45%, respectively, compared with not using data augmentation methods. The method proposed in this paper increased the classification accuracy in ResNet50 and Xception by 3.85% and 4.19%, respectively, compared with affine trans-formation data augmentation, and the experimental results showed that the method is effective in different classification models. | 5,726 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Simulation and Performance Optimization of an Amperometric Histamine Detection System †
One of the most widely known biogenic amines is histamine, which plays an important role in the human immune system. Some people suffer from allergic reactions after a histamine-rich diet; this is called histamine intolerance. The aim of this work is to develop a quick and reliable method for the detection and quantification of histamine in food, based on an electrochemical approach. In presence of biogenic amines, a reduction cascade induces a current at the working electrode. Prior to chronoamperometric measurements, Finite Elemente simulations were performed. The results are presented in this work.
Introduction
The functions of biogenic amines in the human body are manifold.These include the regulation of body temperature, stomach pH and vasoactive effects [1].The consumption of food containing excess of biogenic amines is suspected to induce several symptoms such as diarrhea, headaches, rhinoconjunctival symptoms, asthma, hypotension, arrhythmia, urticaria, pruritus and flushing.Currently, there is a lack of reliable methods for the detection of biogenic amines Therefore interpretation of results from clinical studies leading to conclusive correlations between histamine and the afore mentioned symptoms is difficult.In our study we aim to quantify biogenic amines indirectly by inducing a redox cascade.The oxidation of biogenic amines by diamine oxidase (DAO) to aldehydes produces hydrogen peroxide (H2O2) as a by-product.In a second reaction H2O2 is used to oxidize 3,3′,5,5′-tetramethylbenzidin (TMB), catalyzed by horseradish peroxidase (HRP), which is then detected by amperometry.
Materials and Methods
Chronoamperometric detection is based on the chemical oxidation or reduction of the requested analyte.A step-voltage is applied between two electrodes and the current is monitored over time.Oxidation or reduction of an analyte causes rapid current response to the step-voltage.This can be measured by a high sensitive potentiostat.In this case, the oxidized TMB is reduced at the working electrode (see Figure 1) and the amount of induced current directly correlates with the concentration of biogenic amine in the sample (see Figure 1) [2,3].In a simulation study, a 3D model was generated in COMSOL Multiphysics 5.0 (COMSOL, Inc., Burlington, MA, USA), which also accounted for diffusion of the individual species.To determine the concentration of the oxidized TMB (TMBox) generated by the reaction cascade, the rate of product formation v was calculated using Michaelis Menten kinetics (Equation ( 1)).The variable substrate concentration s c , as well as the Michaelis constant M K and the maximum achieved reaction rate max V were taken from a previous experimental study [4].
In addition, electrochemical measurements were carried out using a potentiostat (Reference 600 TM , Gamry Instruments, Warminster, PA, USA).HRP, H2O2 and phosphate-buffered saline (PBS) were purchased from Merck (Merck KGaA, Darmstadt, Germany).A ready-to-use TMB solution and stop solution (2M H2SO4) were purchased from R&D Systems (R&D Systems, Inc., Minneapolis, MN, USA).A 1:1 mixture of HRP stock solution (0.2 U/mL) and the TMB ready-to-use solution was used to generate Solution A. Different amounts of hydrogen peroxide were added to reach the final concentrations of 3.8, 15.3, 61.2, 244.7 and 1000 µM.The reaction was stopped by addition of 10 µL of the stop solution after 5 min.Hundred microliters of the sample was pipetted onto the screen printed electrodes (Gwent Group, Pontypool, UK, [4]).
Simulations
The concentration of the generated oxidized TMB after 5 min of incubation with HRP and variable initial concentrations of H2O2 were simulated using a 3D finite element model.The results of the simulations are shown in Figure 2.
Experiments
Generated TMBox concentrations were calculated by simulation.Higher concentrations induced more negative current at the working electrode (Figure 3).According to the literature [5], current density is directly proportional to the inverse root of time, which is plotted in Figure 4.The slope (k) of the regression line depends on the initial concentration of TMBox.
In Figure 5, the slope of different TMBox concentrations (as shown in Figure 4) was plotted against the concentration of the generated TMBox after 5 min of incubation with H2O2.A logarithmic relationship is observed.Due to the bijectivity in the selected concentration range, one can conclude from the slope to the initial H2O2 concentration.The limit of detection is observed at about 0.1 µM of oxidized TMB.For comparison, a spectrophotometric measurement of the solutions after addition of the stop solution has been performed.This type of measurement offers a slightly higher limit of detection which accounts for about 0.15 µM of oxidized TMB (data not shown).
Conclusions
Chronoamperometry is a high promising instrument to detect small amounts of oxidized TMB, which can also be included into small hand-held devices for point of care application.The current setup is slightly more sensitive then photometry.Additionally, there are possibilities for further optimization such as different electrode material or sample volume.This will be regarded in the next steps of our study.
Figure 1 .
Figure1.A redox cascade is used for the indirect detection of biogenic amines.The oxidation of biogenic amine by DAO reduces O2 to H2O2.In a second reaction, the generated H2O2 is used to oxidize a co-substrate (TMB) catalyzed by horseradish peroxidase (HRP).Reduction of the co-substrate at a working electrode induces a current which directly correlates with the amount of biogenic amines in the sample.
Figure 2 .
Figure 2. Generated TMBox after 5 min in dependency on the initial H2O2 concentration.
Figure 3 .
Figure 3. Experimental Results.The chronoamperometric plots show the time-dependency of the current during the reduction of TMBox at the working electrode.
Figure 4 .
Figure 4. Experimental Results.The measured current is plotted against the inverse root of time and linear regression lines are calculated.
Figure 5 .
Figure 5. Concentration of oxidized TMB plotted against the slope of the regression lines (n = 2). | 1,306.2 | 2018-11-20T00:00:00.000 | [
"Computer Science"
] |
Hybrid Fault Diagnosis Method based on Wavelet Packet Energy Spectrum and SSA-SVM
—As one of the important components of mechanical equipment, rolling bearing has been widely used, and its motion state affects the safety and performance of equipment. To enhance the fault feature information in the bearing signal and improve the classification accuracy of support vector machine, a hybrid fault diagnosis method based on wavelet packet energy spectrum and SSA-SVM is proposed. Firstly, the wavelet packet decomposition is used to decompose vibration signals to generate frequency band energy spectrum, and the bearing characteristic information is constructed from the energy spectrum to extract and enhance the bearing fault characteristic information. Secondly, the penalty and kernel parameters are optimized globally by sparrow search algorithm to improve the classification accuracy of support vector machine, and then construct the WPES-SSA-SVM model. Finally, the proposed model is used to diagnose and analyze the measured signals. Compared with BP, ELM and SVM, the effectiveness and superiority of the proposed method are verified.
I. INTRODUCTION
With the deep integration of new generation information technology and manufacturing industry, the mechanical equipment is becoming more and more complex, accurate and intelligent. With the continuous operation of mechanical equipment, its running state and key parts will gradually degenerate, and the probability of failure and shutdown will gradually increase, which will affect the normal production and processing of enterprises. As one of the important components of machinery, rolling bearings are widely used because of their convenient use and maintenance, reliable operation and good starting performance [1]. Using the characteristics of bearings, the sliding friction between parts is transformed into rolling friction, which improves the production efficiency of the equipment. Once damaged, it will lead to problems in the operation of mechanical equipment, reduce the working efficiency, and even cause the functional failure of rotating machinery, resulting in serious economic losses and personal casualties [2][3]. Therefore, it is of great practical value to timely find and take corresponding measures for the faults of rolling bearings, and it has become a research hotspots in intelligent fault diagnosis.
In recent years, fault diagnosis methods for rolling bearings have been emerging and developing [4][5][6][7]. Fault diagnosis methods for rolling bearings have mushroomed and developed continuously. In general, the fault diagnosis techniques of rolling bearings include: based on vibration signal [8], acoustic signal [9], electrical signal [10] and temperature signal [11]. Among them, vibration signal is more widely used, more intuitive and simple, because it can best represent the fault characteristic information in the process of bearing operation.
As the rapid and continuous development of machine learning and artificial intelligence, more and more researchers combine bearing fault diagnosis with it, and the intelligent fault diagnosis methods and systems are gradually improved. The common fault identification methods include deep learning (DL) [12], artificial neural network (ANN) [13], decision tree (DT) [14] and support vector machines (SVM) [15,16]. Literature [17] proposed the improved BP neural network algorithm Levenberg-Maquardt algorithm in order to improve the diagnostic efficiency of BP neural network. Literature [18] proposed a fault extraction method based on modified Fourier mode decomposition (MFMD) and multi-scale displacement entropy, and combined with BP neural network. Experiments show that this method has high recognition accuracy for different types of faults. In literature [19], wavelet packet energy and decision tree algorithm are combined to extract faults using wavelet packet energy, and then faults are identified and classified using decision tree model. In view of the low fault diagnosis rate of rolling bearings, the method of wavelet packet decomposition and gradient lifting decision tree (GBDT) was proposed in literature [20], and the extracted fault feature data set was input into the classification model of gradient lifting decision tree for fault diagnosis. In literature [21], scale invariant feature transform (SIFT) and kernel principal component analysis (KPCA) were used to extract faults, and SVM classifier was combined to achieve fault classification. Literature [22] applied SVM to fault state identification of rolling bearings and achieved good results. Literature [23] proposed a rolling bearing fault diagnosis method optimized by simplex evolutionary algorithm and SVM. Literature [24] diagnoses fault types by reducing high-dimensional data and using LSSVM.
At present, various intelligent optimization algorithms have emerged one after another, such as particle swarm optimization (PSO), whale optimization algorithm (WOA), ant colony optimization (ACO), genetic algorithm (GA), sparrow search algorithm (SSA), etc., and the combination and improvement of other algorithms have also achieved good results [25]. In reference [26], PSO was used to optimize SVM to realize the identification of multiple fault states of rolling www.ijacsa.thesai.org bearings. In [27], gray wolf optimization algorithm (GWO) was used to optimize the kernel function parameters of SVM globally, so as to achieve the best classification performance of SVM and improve the accuracy of classification recognition. Aiming at the influence of mixed noise of bearing vibration signals on useful information extraction, a optimization classifier based on multi-scale permutation entropy and cuckoo search algorithm (CS) was proposed in literature [28], which used CS to optimize the global optimal solution of SVM. Literature [29] proposed a method based on quantum behavior particle swarm optimization algorithm (QPSO), multi-scale displacement entropy and SVM to construct fault feature sets to realize fault identification of rolling bearings. Compared with single method for fault diagnosis, the combinatorial optimization methods have higher accuracy, but at the same time, different optimization methods have different problems, for example, BP model must be learned through a large amount of sample data, even if has been optimized the BP network parameters globally by optimization algorithm, the model is still not ideal in a small sample environment. SVM parameters can be optimized by PSO and other optimization algorithms to improve the classification accuracy, but this algorithm is prone to fall into local extremum. Therefore, combining the advantages of each algorithm and joint application to improve the effectiveness of rolling bearing status identification and fault diagnosis is the current research trend.
To improve the accuracy of bearing fault diagnosis, this paper firstly uses wavelet packet energy spectrum to extract energy spectrum feature vectors of bearing vibration signals, which are used as the input of SVM. Meanwhile, SSA algorithm is used to optimize the parameters of SVM globally, so as to build a hybrid model. The feasibility and effectiveness of the model are verified by experiments.
The rest parts of this paper are given as lists: Section 2 presents the preliminaries. Section 3 describes of the proposed method. Section 4 details the experimental setup. Section 5 analyzes and discusses the experimental results. Finally, Section 6 outlines the main conclusions.
A. Wavelet Packet Energy Spectrum
Wavelet packet decomposition can decompose signals into different frequency bands without leakage and overlap according to any time-frequency resolution. After wavelet packet transform, the information is intact and all frequencies are retained, which provides strong conditions for extracting the main information in the signal. This decomposition can be performed as many times as needed to obtain the desired frequency. Fig. 1 shows the schematic diagram of orthogonal wavelet packet decomposition of a signal. The original signal was denoted as , and the two sub-bands and of layer 1 can be obtained after wavelet packet decomposition through filters H and G. Decompose the two sub-components of the first layer respectively to obtain the four sub-bands , , and of the second layer; By analogy, the sub-band of layer n can finally be obtained. As can be seen from Fig.1, wavelet packet decomposition decomposes the decomposed frequency band several times, and re-decomposes the high frequency part without subdivision in the wavelet decomposition. In addition, according to the characteristics of the signal to be decomposed, the corresponding sub-frequency band can be adaptively selected to match the frequency spectrum of the signal. After wavelet decomposition, all the characteristic information, including the low frequency part and the high frequency part, can be preserved, which provides strong support for the feature information extraction of the signal.
It can also be seen from Fig.1 that if there are too many decomposition layers, the dimension of the data to be processed will be increased and the unrestricted decomposition cannot continue. In practical application, it is necessary to select an appropriate decomposition level according to the actual situation.
Wavelet packet energy spectrum enhances the stability of wavelet packet decomposition coefficient by extracting the energy of sub-band to construct feature vector. The wavelet packet frequency band energy is defined as follows: Using wavelet packet to decompose the original signal in n level, and 2n sub-frequency band can be decomposed. The energy calculation formula of sub-frequency band is Formula 1.
where, is the coefficient of sub-frequency band , .
Therefore, the wavelet packet frequency band energy spectrum is defined as Formula 2.
, - B. Support Vector Machines SVM is a machine learning algorithm based on statistical learning theory, which can successfully deal with many data mining problems such as pattern recognition, classification and regression analysis. It shows many unique advantages in solving small sample, nonlinear and high-dimensional pattern recognition problems, and overcomes the problems of dimension disaster and over-learning to a large extent. www.ijacsa.thesai.org Based on the theory of minimum construction risk, support vector machine maximizes the distance between the elements closest to the hyperplane and the hyperplane. Its core is to establish the best classification hyperplane, so as to improve the generalization processing ability of learning classification machine.
Taking binary classification as an example, its basic idea can be summarized as follows: first map the input vector to a high-dimensional feature space through some prior selected nonlinear mapping such as kernel function, and then seek the optimal classification hyperplane in the feature space, enables it to as much as possible to separate two classes of data points correctly, at the same time to separate two classes of data point furthest distance classification surface, as shown in Fig. 2.
In Fig. 2, square and triangle represent two types of samples respectively. H is the optimal classification hyperplane; H 1 and H 2 are straight lines that pass through the boundary points of the two types of samples and are parallel to H, and the distance between them γ is the interval. The optimal classification line requires that the classification line can not only correctly classify the two categories, but also maximize the interval. The vector closest to the optimal classification hyperplane is called the support vector.
Assume the training sample set * + ; * +, where is the input index, is the output index, is the sample number, and is the characteristic dimension of the sample. In the case of linear divisibility, there is a hyperplane that separates the two types of samples completely, as shown in Formula 3.
where, ( ) is the weight vector of the training sample, which determines the direction of the hyperplane. is the input vector; is the distance between the hyperplane and the origin.
Solving the optimal classified hyperplane is to find the optimal and , therefore, it can be summed up as the following quadratic programming problem: In order to solve the quadratic programming problem of Formula 4, the Lagrange function is introduced and the duality principle is used to transform the original optimization problem into Formula 5: According to Formula 5, the optimal V is ( ) .Thus, the optimal weight vector and the optimal value can be calculated by Formula 6 and Formula 7.
Then the optimal classification hyperplane is ( ) , and the optimal classification function is obtained.
C. Sparrow Search Algorithm SSA realizes optimization based on the idea that swarm organisms in nature can obtain a better living environment through mutual cooperation [30]. The bionic principle is as follows: in order to obtain abundant food, the sparrow population is divided into explorers and followers in the process of foraging. The explorer in the sparrow population who finds abundant food sources is responsible for providing the foraging area and the direction of food sources for the population, and the followers is responsible for finding more food according to the location provided by the explorer. At the same time, individual sparrows will also monitor the behavior of other individuals and compete for supplies with highforaging peers. When the population is in danger, it will make anti predation behavior. The external sparrow will constantly adjust its position to move closer to an internal or adjacent partner to increase its own security. Therefore, the distribution of food in space can be regarded as the numerical value of function in three-dimensional space. The purpose of sparrow search is to find the global optimal value.
The specific implementation process of sparrow search algorithm is as follows. In the process of searching for food, the randomly generated position matrix X of n sparrows in the d dimensional space is shown as follows: where n represents the number of sparrows, d represents the dimension of the variable of the problem to be optimized, ( ) is the position of the j sparrow in i-dimensional space. www.ijacsa.thesai.org The fitness values are calculated and sorted to determine the finders and entrants, and 10% of randomly selected individuals are scouters. Obtain the current optimal sparrow individual position, and the best fitness value. For the first generation of sparrows, the initial optimal is obtained.
where f represents fitness values of individual sparrows.
In constant iterative optimization process, the explorers in the sparrow population have two main tasks: looking for food and guiding the movement of the population. When the scouters feel dangerous, will alert the populations and guide the followers to a safe area. The location of the explorers is updated as follows: where represents the position of the i-th sparrow in the j-th dimension of the t generation. is a random number in the range of [0,1]; T represents the maximum number of iterations; Q is a random number that follows normal distribution; L represents a matrix where each element is 1; and represents the alarm value and alarm threshold respectively, , -, , -. When means that there are no predators around foraging at this time and the explorer can conduct extensive foraging operation. Conversely, it indicates that some sparrows in the group have found predators and send Danger Warnings to the rest, thus ensure that all sparrows can quickly move to a safe area to forage.
Followers search for food by monitoring and following the explorers with the highest fitness. According to the sorting principle, when , the individual fitness value is low, and these followers need to search other locations to improve the individual fitness value. Conversely, the sparrow will randomly find a location near the current optimal location for feeding.
where represents the global worst position of the tth iteration; is the best position of the t+1 generation explorer.
is a dimensional matrix with each dimensional value randomly generated from 1 or -1.
Individual sparrows will move to the search circle or other companions when they encounter danger during the foraging process. The method of updating the position of individual sparrows in this process is shown in Formula (14).
where is the step size control parameter, and it follows the normal distribution with mean value 0 and variance 1; is the moving direction of the sparrow, and the value range is [-1, 1]; is the minimum constant to avoid zero denominator; represents the current global optimal location; represents the fitness value of sparrow i; and represent the current worst and best fitness values, respectively.
III. PROPOSED MODEL
To improve the fault diagnosis accuracy of bearing vibration signals, a hybrid fault diagnosis model is constructed by using wavelet packet energy spectrum, SSA and SVM, which is named WPES-SSA-SVM. In order to accurately extract features, wavelet packet energy spectrum is used to extract feature information from vibration signals, and the energy of reconstructed signals are calculated through wavelet packet decomposition and reconstruction, and the feature vector is established. Then, SSA is used to optimize the penalty parameter c and kernel parameter g globally to improve the learning ability and generalization ability of SVM classifier. The model consists of data feature extraction, SSA optimization and SVM recognition. The functions of each part and the information transmission between them are shown in Fig. 3.
1) Data feature extraction module:
Using wavelet packet decompose the bearing vibration signal, and the wavelet packet frequency band energy spectrum is generated according to the decomposition results. Taking the energy spectrum information as the fault diagnosis features, and divide it into training and test data set in proportion. Then, the training data is transmitted to the SSA optimization module, and the training and test data are transmitted to the SVM recognition module.
2) SSA optimization module: The SSA optimization module receives the training data from the data feature extraction module and the value range of penalty parameter c and kernel parameter g from the SVM recognition module respectively, uses SSA to find the best penalty parameter c and kernel parameter g, and returns them to the SVM recognition module.
3) SVM recognition module: The SVM recognition module first transmits the value range of penalty parameter c and kernel parameter g to the SSA optimization module for parameter optimization, then receives the optimized parameters, and carries out machine training using the training data received from the data feature extraction module. After that, the fault diagnosis on the test data is recognized to test the recognition effect. www.ijacsa.thesai.org The algorithm of the model is divided into nine steps, and the flow chart is shown in Fig. 4.
Step 1: The original vibration signal is decomposed by wavelet packet, and the frequency band energy spectrum is calculated, and then the data is randomly divided into test data and training data in proportion.
Step 2: Select the kernel function to construct SVM, mainly including linear kernel function, RBF kernel function, polynomial kernel function and Sigmod kernel function, and set the value range of penalty parameter c and kernel parameter g.
Step 3: Initialize sparrow population. Set the population size Size, the maximum number of iterations T max , the individual position X, where X is the multidimensional coordinate composed of penalty parameter c and kernel parameter g, the proportion E, F, S of explorers, followers and scouters, and the safety threshold ST.
Step 4: Using the classification accuracy as the fitness function value of SSA.
Step 5: Find the global optimal position. The fitness value f of individual position is obtained by using training data. The larger the value is, the better the position is, and the global optimal position is the position with the largest f. If multiple positions at the same f, the optimal position is the one with the smallest penalty parameter c.
Step 6: Update the population position and global optimal position.
Step 7: Iteration number condition judgment. If the current of iterations , return Step 6 to continue running; Otherwise, execute Step 8.
Step 8: Using SSA optimization to get the best parameters, and the SVM is trained through the training data.
Step 9: Input the test data into SVM, output the calculated bearing fault label value, identify the fault type, and compare it with the real fault type label in the original data to verify the diagnosis effect.
IV. EXPERIMENTATION
Feature extraction and fault diagnosis were performed using simulated fault data from the bearing experiment data provided by Case Western Reserve University (CWRU). The data set has been applied in many experimental studies and achieved good results. The time domain and wavelet packet characteristics of vibration signals are extracted from the official experimental data and fault diagnosis is carried out. The structure of the bearing test bench is shown in Fig. 5 [31]. The test bench is composed of three-phase induction motor, torque sensing device, electronic control unit, dynamometer and intermediate shaft. During the experiment, the motion state of the rolling bearing in the actual work is simulated. Single point defects with different widths such as 0.007, 0.014, 0.021, 0.028 and 0.040 inch are machined on different parts of the bearing by spark machining technology, so as to obtain the experimental data of different fault types, such as rolling element, inner race and outer race fault.
In this paper, the fault body diameter is 0.007 inch, the motor load horsepower is 1hp, the bearing model is SKF-6205-2RS-JEM, and the sampling frequency of acceleration sensor is 48KHz to collect the vibration signal data of normal bearing, inner race fault, outer race fault and rolling element fault at the driving end. Take 100 groups of data samples for each state, with a total of 400 groups of data samples. 100 samples are randomly divided into 70 training samples and 30 test samples after feature extraction by wavelet packet energy spectrum. The training samples are used to extract features for classification model training, and the test samples are used to test the effect of classification model. The parameters of rolling bearing are shown in Table I. The division and label setting of experimental data are shown in Table II.
V. RESULT AND DISCUSSION
The time domain waveform diagram can intuitively observe the waveform distribution and amplitude of the vibration signal in each state. The waveform will fluctuate with the fault location and size. The vibration signals of normal and different faults of bearings are shown in Fig. 6.
The wavelet packet decomposition with wavelet basis function as db3 is used to decompose the normal state, inner race, outer race and rolling element fault signals respectively, so as to obtain the decomposition coefficient and reconstruction coefficient, and then use the reconstruction coefficient to reconstruct, finally obtain 8 sub-band energy, and the energy proportion of each frequency band is analyzed. Due to space limitation, this paper only lists the wavelet packet components of reconstructed nodes in normal state, as shown in Fig. 7. The energy proportion of 8 sub-bands in different states is shown in Fig. 8.
It can be clearly seen from Fig. 8 that there are differences in normalized amplitude of wavelet energy spectrum in different frequency bands after reconstruction of each node. Among them, the energy spectrum of sub-band 1 and 2 is relatively large in the four states, followed by the energy spectrum of sub-band 3 and 4, and the energy spectrum of sub-band 5, 6, 7 and 8 is relatively small, but there are slightly different in different states. For example, when the bearing is in the normal state, the energy spectrum value of sub-band 4 is higher than that in the fault state. When the outer race fault occurs, the energy spectrum value of sub-band 1 is lower than that in other cases. In case of bearing inner race fault or rolling element fault, the energy spectrum graph is relatively close, but there is still a certain gap between the values of sub-band 4 and sub-band 6. The difference of wavelet packet energy spectrum graphics in different states reflects that the features extracted by wavelet packet transform are sensitive to the fault feature information of vibration signal. Therefore, the energy amplitude corresponding to each sub-band and the energy difference between frequency bands can be used to evaluate the different states of bearings.
To verify the feasibility and effectiveness of WPES-SSA-SVM, experiments were conducted on BP, ELM, SVM and WPES-SSA-SVM respectively. The diagnosis results are shown in Fig. 9, where 'o' stands for the fault category of the actual testing set, and '*' stands for the fault category predicted by the model. From Fig. 9, the BP model misjudged 18 faults in total, including 5 rolling element faults misjudged into 3 inner race faults and 2 outer race faults, 8 inner race faults misjudged into 3 outer race faults and 4 rolling element faults and 1 normal, 5 outer race faults misjudged into 2 inner race faults, 2 rolling element faults and 1 normal, and the diagnostic accuracy is 85%. The ELM model misjudged a total of 16 faults, of which 4 rolling element faults were misjudged as 1 inner race fault and 3 outer race faults, 6 inner race faults were misjudged as 3 rolling element faults and 3 outer race faults, 6 outer race faults were misjudged as 1 rolling element fault and 5 inner race faults, and the diagnosis accuracy was 86.67%. There are 14 wrong judgments in SVM model, including 3 wrong judgments of rolling element fault, 2 wrong judgments of inner race and 9 wrong judgments of outer race. The diagnosis accuracy is 88.33%. WPES-SSA-SVM model misjudged 4 faults in total, including 1 rolling element fault misjudged as inner race fault, 2 inner race faults misjudged as outer race fault and 1 outer race fault misjudged as rolling element fault. The number of misjudged in the four states has been well improved. WPES-SSA-SVM model has the best diagnostic effect for ELM model, SVM model and BP model, and the diagnostic accuracy is 96.67%. The experimental results show that using wavelet packet energy spectrum for feature extraction and SSA to optimize SVM model can improve the performance of fault diagnosis, and has obvious advantages over other non-optimized models. www.ijacsa.thesai.org
VI. CONCLUSION
In this paper, we proposed a hybrid fault diagnosis method based on wavelet packet energy spectrum, SSA, and SVM in rolling bearing. Aiming at the difficulty of feature extraction of bearing vibration signals, wavelet packet decomposition was used to extract the wavelet packet features of vibration signals, and the energy spectrum of wavelet components is calculated and normalized to form the feature vector set, which fully contained the fault feature information of vibration signals. To improve the accuracy of fault diagnosis, the penalty parameter c and kernel parameter g of SVM are optimized by using the good global optimization ability of SSA, so as to build a hybrid fault diagnosis model WPES-SSA-SVM. To verify the classification performance of WPES-SSA-SVM, the CWRU bearing vibration data set is used to extract fault features and diagnose faults. The results show that compared with BP, ELM, and SVM, the proposed method can accurately extract the feature information from the original vibration signals, and has higher diagnosis accuracy. SSA helps to optimize the parameters and improve the classification performance of SVM. In the future, we will use data from other industries and scenarios for diagnosis, and further investigate the improvement of model performance and diagnostic accuracy. | 6,119.4 | 2022-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
First description of the male of Volesusnigripennis Champion, 1899, with new records from Ecuador and Panama, taxonomical notes, and an updated key to the genera of Sphaeridopinae (Hemiptera, Reduviidae)
Abstract The genus Volesus Champion, 1899 is redescribed and the male of V.nigripennis Champion, 1899 is described for the first time and found to be similar to the female in both structure and coloration. The genus and the species are recorded from Ecuador and Panama for the first time. Notes on the taxonomic history of Sphaeridopinae and an updated key to the genera are provided.
Introduction
provided catalogs of Heteroptera, including Reduviidae, recorded from Ecuador and Panama, respectively. Further records of Reduviidae from Ecuador and Panama were provided by Maldonado (1990) and in papers describing or reviewing different taxa of this family (e.g. Dougherty 1995, Martin-Park et al. 2012, Zhang et al. 2016.
The cladistic analysis of Weirauch (2008) showed that Salyavatinae and Sphaeridopinae are a monophyletic group, while studies by Weirauch (2008) and Gordon and Weirauch (2016) provided evidence that Salyavatinae is paraphyletic and Sphaeridopinae is a sister group to the genus Salyavata Amyot & Serville, 1843 (Salyavatinae). Here we are considering Salyavatinae and Sphaeridopinae as separate subfamilies (following e.g. Weirauch et al. 2014, Gil-Santana et al. 2015.
In the present paper, notes on the taxonomical history of Sphaeridopinae are provided, clarifying some inconsistencies regarding nomenclature and taxonomical changes. Volesus is redescribed and the male of V. nigripennis is described for the first time. The genus and the species are recorded from Ecuador and Panama for the first time. Based on the results obtained here, an updated key to the genera of Sphaeridopinae is presented.
Material and methods
Photographs of the holotype of Volesus nigripennis Champion, 1899 (Figs 1-3), which is deposited at the Swedish Museum of Natural History (NRM), Stockholm, Sweden, were kindly provided by Dr Gunvi Lindberg (NRM).
Data on a female of V. nigripennis from Panama and deposited in the National Museum of Natural History (NMNH), Smithsonian Institution, Washington, DC, USA, were kindly provided by Dr Silvia A. Justi (The Walter Reed Biosystematics Unit, WRBU, Smithsonian Institution, Museum Support Center), with the support of Dr Thomas Henry and James N. Zahniser (NMNH).
Scanning electron microscopy images (12)(13)15,(17)(18)(19)(21)(22)(23)(29)(30)(31)(32)(33)(34)(36)(37)(38)(39)(40)46) were obtained by the second author (JO). A male of V. nigripennis and its external genitalia were cleaned in an ultrasound machine. Subsequently, the samples were dehydrated in alcohol, dried in an incubator at 45 ºC for 20 min, and fixed in small aluminum cylinders with transparent glaze. Sputtering metallization was then performed on the samples for 2 minutes at 10 mA in an Edwards sputter coater. After this process, the samples were studied and photographed using a high-resolution field emission gun scanning electron microscope (FEG-SEM; JEOL, JSM-7500F), as described by Rosa et al. (2010Rosa et al. ( , 2014. All remaining figures were produced by the first author (HRG-S). The fixed adults, microscopic preparations and genitalia were photographed using digital cameras (Nikon D5200 with a Nikon Macro Lens 105 mm, Sony DSC-W830). Drawings were made using a camera lucida. Images were edited using Adobe Photoshop CS6. Observations were made using a stereoscope microscope (Zeiss Stemi) and a compound microscope (Leica CME). Measurements were made using a micrometer eyepiece. The total length of the head was measured excluding the neck, for better uniformity of this measurement. Dissections of the male genitalia were made by first removing the pygophore from the abdomen with a pair of forceps and then clearing it in 20% NaOH solution for 24 hours. Following this procedure, the phallus was recorded without inflation . The endosoma was then everted (Figs 51,52) by carefully pulling on the endosoma wall, using a pair of fine forceps. The dissected structures were studied and photographed in glycerol.
General morphological terminology mainly follows Schuh and Slater (1995). The terminology of the genitalia structures follows Lent and Wygodzinsky (1979). However, the "vesi ca", as recognized by the latter authors, has been considered as absent in reduviids. The assumed equivalent structure in reduviids is a somewhat sclerotized appendage of endosoma (Forero and Weirauch 2012) but not the homologous vesica of other heteropterans, such as Pentatomomorpha (Rédei and Tsai 2011). Thus, this term is not used here. Yet, we adopted the denomination of paired membranous lobes on the endosoma, lateral to the dorsal phallothecal sclerite, from Weirauch (2008), to the flat paired expansions of the endosoma wall (Fig. 50). On the other hand, in order to maintain uniformity with the general terminology followed here, the basal plate bridge is named as such and not as ponticulus basilaris as in Weirauch (2008).
The specimens described here will be deposited in the Entomological Collection of the Oswaldo Cruz Institute ("Coleção Entomológica do Instituto Oswaldo Cruz"), Rio de Janeiro (CEIOC) and in the Dr Jose Maria Soares Barata Triatominae Collection (CTJMSB) of the São Paulo State University Julio de Mesquita Filho, School of Pharmaceutical Sciences, Araraquara, São Paulo, Brazil.
When citing the text on the labels of a pinned specimen, a slash (/) separates the lines and a double slash (//) different labels. All measurements are in millimeters (mm).
Sphaeridops was regarded as belonging to Acanthaspidinae (e.g. Stål 1872, Lethierry and Severin 1896), in which Veseris and Volesus Champion, 1899 were also included when described (Stål 1865, Champion 1899). Pinto (1927) established Sphaeridopidae as a new family, containing Sphaeridops and Limaia Pinto, 1927, described in the same paper. Interestingly, Pinto (1927) argued that he was adopting the opinion of Amyot and Serville (1843) that Sphaeridops should be part of a separate family sensu "Brevicipites", without mentioning the similarity between the etymology of Sphaeridopidae and "Sphéridopides", neither the references of Walker (1873a, b) to it. Pinto (1927) also claimed that the name "Brevicipites" could not prevail according to nomenclatural rules, because it was not based on a genus name, and instead included it as a synonym of the new family Sphaeridopidae. The group has been subsequently considered as a subfamily and most authors credited its authorship to Pinto (1927) (e.g. Costa Lima 1940, Wygodzinsky 1949, Maldonado 1990, Forero 2004), but Putshkov and Putshkov (1985) attributed the authorship of Sphaeridopinae to Amyot and Serville (1843) (referring to "Sphaeridopides"). Costa Lima (1940) in a general book on Brazilian Heteroptera, stated the synonym between Limaia ruber Pinto, 1927under Veseris rugosicollis (Stål, 1862, without giving any reasons for the proposed synonym. In order to review this synonymy, a search for the male type of L. ruber in the Entomological Collection of Oswaldo Cruz Institute, Rio de Janeiro, where it should be deposited (Pinto 1927) was performed (Gil-Santana et al. 1999), but it was not located. Nevertheless, although Maldonado (1990) had credited to Wygodzinsky (1949) the above-mentioned synonymy, it was undoubtedly firstly stated by Costa Lima (1940).
On the other hand, the synonym between Limaia and Veseris Stål, 1865 was in fact first recorded by Wygodzinsky (1949).
Therefore, Sphaeridopinae currently includes three exclusively Neotropical genera: Sphaeridops, Veseris and Volesus (Gil-Santana et al. 2015). Pinto (1927) provided the following diagnosis for Sphaeridopidae: a short head, without an anteocular portion; large antenniferous tubercles, clearly exceeding the anterior border of the head; eyes large, salient, almost touching each other on the ventral portion of the head; and the labium straight, with three [visible] segments. Maldonado and Santiago-Blay (1992) considered that Sphaeridopinae are characterized by two unique characters: the head mostly occupied by the very large eyes, and the antennifers raised on the vertex, close together, between the eyes. These authors were the first to argue that the Sphaeridopinae have a few other unusual characters: presence of sensory organs on the fore lobe of the pronotum (unknown function) and the fact that the dorsal and ventral components of connexivum are well separated by a vertical sclerite; these characteristics were recorded in Sphaeridops eulus Maldonado &Santiago-Blay, 1992. Maldonado andSantiago-Blay (1992) assumed that the smooth areas on the fore lobe of pronotum were sensory organs derived from SEM images of them. These authors also commented that they had observed "corresponding organs" in other two genera, without stating which ones. Gil-Santana et al. (1999) and Gil-Santana and Alencar (2001) recorded sensory organs on fore lobe of pronotum in both species currently included in Veseris. However, these latter authors based their conclusions only on the macroscopic aspect of similar smooth structures of fore lobe, without using SEM imaging. Schuh and Slater (1995) diagnosed Sphaeridopinae by the following set of characters: head projecting only slightly beyond the anterior margin of eyes; eyes large, nearly contiguous ventrally; antennae inserted on anteriorly projecting tubercles; rostrum straight; all tarsi three-segmented. Weirauch et al. (2014) considered that Sphaeridopinae are characterized by a large, robust body; large eyes almost covering the entire head; and a short, straight, thin labium. The keys to the genera provided by Gil-Santana and Alencar (2001) Type species. Volesus nigripennis Champion, 1899, by monotypy. Diagnosis. Volesus can be separated from other genera of Sphaeridopinae by the combination of characters presented in the key below, and additionally by the following characteristics: eyes medium-sized, not covering the head; interocular distance larger than the width of eye, dorsally, and approximately equivalent to it, ventrally; labium with only two visible segments.
Redescription. Body integument shiny, generally diffusely rugose, with linear irregular impressions more intensively and coarsely in thorax, except on lateral portions of mesosternum and median portions of some sternites, in which it is mostly smooth. Head subrectangular in dorsal view, moderately elongate in lateral view; transverse sulcus straight, moderately impressed meeting eyes at inner posterior angle; a midlongitudinal well-marked sulcus running from transverse sulcus to approximately level of anterior margin of eyes; antenniferous stout, cylindrical, diverging forward, straight apically; anteocular region curved downwards, not, or barely, visible in dorsal view; eyes medium-sized, interocular distance in dorsal view larger than width of an eye; labium with only two visible segments; first visible labial segment short, enlarged; sec-ond visible segment long, thin, straight. Thorax: pronotum trapezoidal; fore lobe much shorter and narrower than hind lobe of pronotum; transverse (interlobar) sulcus indistinct; median longitudinal sulcus ill defined, short, running on approximately basal fourth of hind lobe and separated from the median transverse depression of fore lobe by an irregular, curved carina. Prosternum somewhat depressed, with a pair of acute short, lateral processes, directed forward, median portion mostly occupied by stridulitrum, shortly prolonged posteriorly on midline, not surpassing the level of posterior margin of fore coxae and continuous with adjacent sclerite; meso-and metasternum flattened; fore coxae close, separate by a distance smaller than width of each coxa; middle and hind coxae separated from each other by a distance approximately equivalent to slightly more than twice width of each of them. Femora, tibia and tarsi slender, segments with similar width in all three pairs of legs; femora with a small ventral subapical protuberance; a small spongy fossa on apices of fore and mid tibiae. Tarsi three segmented. Abdomen enlarged at about middle portion; small scars of dorsal abdominal glands openings (dag) on medial anterior margins of tergites IV-VI; a vertical sclerite separating dorsal and ventral components of connexivum. Sternites with canaliculae (carinulate) on anterior margin of some segments.
Distribution. Colombia, Costa Rica, Ecuador (new record), Panama (new record). Notes. Volesus nigripennis was described based on a female from Costa Rica (Champion 1899). The female holotype is deposited at the Swedish Museum of Natural History (NRM), Stockholm, Sweden, and its photos are available on their website (Figs 1-3). Forero (2004Forero ( , 2006 recorded this species from Colombia, based on a unique female. These two females have been the only specimens of V. nigripennis known so far. Forero (2004) argued that the knowledge of the male of the species would be useful to a definition in relation to other members of Sphaeridopinae.
Coloration: general coloration blackish with reddish markings 14,20,26,28,35). Head generally blackish; neck mostly reddish; apices of antenniferous tubercles pale; antennal segment II brownish black; antennal segments III-IV brownish; labium brownish (9)(10)(11)14,20). Thorax blackish, brownish black on mesoand metasternum, with the following reddish thoracic markings: on anterior collar and their projections; on lateral and posterior margins of pronotum; on most of fore lobe of pronotum, except its median portion; on hind lobe of pronotum, a median and a pair of lateral converging bands, which are continuous with reddish posterior margin, ending approximately at mid and anterior thirds of hind lobe, respectively; and on postero-superior portion (approximately) of propleura and process of scutellum 14,16,20,26,28). Legs generally blackish; spongy fossa on fore and mid tibiae somewhat paler 20,(24)(25). Hemelytra black, somewhat paler, brownish, on approximately distal half of clavus, medially and about distal half of the membrane, except veins and area just surrounding them (Figs 4-5, 26). Hind wing generally brownish, with veins darkened (Fig. 27). Abdomen blackish to blackish brown; tergite VI with a median reddish spot just below anterior margin; tergite VII almost completely reddish, blackish on and just below anterior margin and with a pair of rounded blackish spots on mid-lateral portion (Fig. 28). Connexivum reddish on: extreme base of segment II, approximately basal third of segments III-V, and somewhat less than basal half of segment VI; connexival portion of segment VII almost entirely reddish with only posterior border of approximately distal half darkened; ventrally, marking on segment II is a small spot on external margin; on segments III-VI connexival reddish markings are prolonged dorsally to a short distance on lateral portion of respective tergite as a subtriangular marking, and ventrally, as a somewhat curved lateral marking, directed backwards, reaching spiracles, which are surrounded by reddish posterior margin; sternite II with anterior margin and median portion, on approximately distal half reddish to reddish brown; transverse median bands, on sternites III-VII, progressively larger, reddish brown in one specimen and pale brownish in other, joining lateral reddish markings described above in sternites V-VII, the latter almost completely reddish, with dark coloration restricted to anterior margin and adjacent to genital capsule 26,28,35). Exposed portion of pygophore and parameres blackish (Fig. 35).
Vestiture: body generally covered by sparse short, somewhat curved, adpressed, thin, golden to brownish setae. Head: eyes, ocelli and neck glabrous; region adjacent to insertion of labium with more numerous and somewhat longer setae; ventral surface of first visible labial segment and basal portion of second visible labial segment moderately setose, dorsal surface of correspondent portions with fewer setae; additionally, some sparse setae scattered on the proximal third of second visible segment, remainder glabrous. Antenna: segment I sparsely covered with setae similar to those of general vestiture but slightly longer, more numerous at apex; segments II-IV densely setose, covered with scattered longer, somewhat curved, brownish setae and much more numerous shorter, thinner, whitish setae (Figs 9, 10). Thorax. Some longer straight thin setae on posterior margin of pronotum adjacent to lateral portion of scutellar base; setae are sparser on ventral surface; smooth lateral areas of mesosternum glabrous. Hemelytra: small adpressed setae sparsely scattered on corium, more numerous at its apex; apical two thirds of clavus, respective adjacent area of corium and membrane glabrous. Legs generally with similar vestiture of the body; setae longer and thicker on tibiae, becoming more numerous towards apex; tarsi with stiff, pale, yellowish to golden-yellowish, oblique to curved setae, with variable lengths. Abdomen: tergites I-V almost completely glabrous, with some scattered small darkened or pale setae, almost imperceptible; tergite VI with some more numerous pale setae; tergite VII with scattered longer golden setae. Connexivum: lateral margins with numerous adpressed short curved darkened setae, forming a few irregular rows; these setae become somewhat longer and paler on distal margin of segment VII; segments II-VI dorsally glabrous; some sparse setae on dorsal surface of distal third of segment VII. Sternites generally covered with sparse thin golden to pale setae; somewhat longer and more numerous setae on median portion of segments VI-VII and on pygophore, except its middle portion.
Structure: Head. Anteocular portion slightly shorter than postocular portion (in lateral view); ocelli separated by a distance slightly larger than transverse width of each ocellus, positioned medially to level of inner posterior angle of eyes and close to transverse sulcus; antenniferous large; first antennal segment slightly longer than head, stout, somewhat curved, its approximately basal fourth slightly thinner; remaining antennal segments progressively thinner, cylindrical; labium reaching or surpassing the mid third of stridulitrum (Figs 6-14, 20-22). Thorax. Anterior collar inconspicuous; anterolateral angles rounded and small (Figs 15, 16); fore lobe with irregular areas with smooth and whitish integument; a median transverse depression on fore lobe present between medial margins of longer curved smooth areas (Figs 14-17); humeral angles acute, slightly prominent (Figs 14, 18); posterior margin of hind lobe slightly curved on middle third (Figs 14, 15). Scutellum sculptured, median depression shallow, process stout, horizontal, apex rounded (Figs 14, 19). Distance between acute prosternal processes: 0.7. Hemelytra generally dull; on extreme base of dorsal surface, laterally, and on lateral portion, basally, moderately shiny; not reaching tip of abdomen, ending somewhat apically to level of the mid third of seventh tergite (Figs 4-5, 26); in one specimen, the membrane has a small additional cell at approximately apical fourth of cubital vein (Fig. 26). Abdomen. Integument generally also rugose (Figs 28-34), except on median portions of sternites IV-VII, in which it is mostly smooth (Figs 34-38). Connexivum largely exposed, laterally to hemelytra (Figs 4-5); anterior margin of tergite I carinulate (Figs 29-31); tergite II with a mid-longitudinal keel and median third of posterior margin curved backwards (Figs 28-31). Sternites carinulate on anterior margin of segments III-V in one specimen and also on segment VI in the other; on sternite III, canaliculae are somewhat larger and extend more towards lateral portion, occupying approximately two thirds of anterior margin, except midline; on following segments canaliculae become progressively slightly smaller and occupy approximately median third of anterior margin, except midline; a median shallow keel on distal two thirds of segment II and somewhat more elevated in sternites III-VI (Figs 35-38). Segment VIII not visible externally, sclerotized on ventral portion, which becomes somewhat wider towards posterior margin; latter almost straight and with a few short setae; dorsal portion membranous and narrower; spiracles on dorsal margin of ventral portion (Figs 39-41).
The male specimens (Figs 4,5,20,35) described here seem to be generally similar to the female of the species in structure and coloration (Champion 1899, Forero 2006Figs 1, 2). However, only the examination of more specimens of V. nigripennis will make it possible to ascertain whether there is sexual dimorphism.
Smooth areas on the fore lobe of pronotum were recorded here in V. nigripennis (Figs 6, 15-17) but it was not possible to distinguish a paired sensory organ similar to that described in Sphaeridops eulus by Maldonado and Santiago-Blay (1992: figs 13, 14). These authors emphasized that the nature of the sensory organ of these areas could be seen in their SEM images. However, judging by the SEM images obtained in the present study (Figs 15,17), it is possible that the supposed sensory organ, also mentioned as present in both species of Veseris (Gil-Santana et al. 1999, Gil-Santana andAlencar 2001) may be in fact a portion of these smooth areas. Only future studies, preferably employing histological techniques will allow the evaluation of the existence and/or possible sensory functions of such portions in these species.
Although Champion (1899) had described that the labium would have the second and third visible labial segments equal in length, our studies, including the SEM images, made it clear that the labium is formed by only two visible segments, with the first visible segment short and enlarged and the other long, thin and straight (Figs 11,12). It is opportune to mention that, according to our request, Dr Dimitri Forero kindly reexamined the female recorded by him from Colombia, sent us photos and confirmed these same features on the labial segments. Similarly, Dr Silvia A. Justi, when examining the female specimen from Panama, also verified that it had only two visible labial segments, with the same characteristics.
Some of the portions of the male genitalia of V. nigripennis, such as the parameres and articulatory apparatus, including a basal plate bridge bent ventrally (Figs 40,45,46,50) seem similar to those recorded for species of Veseris (Gil-Santana et al. 1999, Gil-Santana andAlencar 2001). Weirauch (2008) recorded the presence of the basal plate bridge (=ponticulus basilaris) bent ventrad and a pair of membranous lobes on endosoma, lateral to the dorsal phallothecal sclerite in Sphaeridops amoenus and Salyavata nigrofasciata Costa Lima, 1935 (Salyavatinae). Judging by her drawings, these lobes are smaller in S. amoenus and somewhat larger but shorter in S. nigrofasciata, respectively, than those recorded here in V. nigripennis (Figs 50-52). It is noteworthy that Weirauch (2008) considered both characteristics (a basal bridge bent ventrad and the pair or membranous lobes on the endosoma) as synamoporphies of the clade Salyavatinae + Sphaeridopinae obtained in her cladistic analysis.
On the other hand, because all other structures, such as those of phallus and endosoma, were not adequately recorded by the above-mentioned authors, nor by others who included just partial or incomplete descriptions of the male genitalia of species of Sphaeridops (e.g. Maldonado andSantiago-Blay 1992, Gil-Santana et al. 2000), only future comprehensive studies of these structures among Sphaeridopinae will allow useful comparisons with the results obtained here.
The presence of smooth areas on fore lobe of pronotum in between a rugose integument was also recorded in Triatominae, in which its integument "varies from smooth to granular; in many cases, smooth and granular sections occur side by side, forming a characteristic pattern" (Lent and Wygodzinsky 1979). These smooth areas may seem more prominent in Sphaeridopinae, because the surrounding integument is generally much more coarsely rugose.
An unusual characteristic of the group according to Maldonado and Santiago-Blay (1992), the dorsal and ventral components of connexivum well separated by a vertical sclerite, was also recorded to Volesus nigripennis (Fig. 34).
However, as commented above, the other alleged unusual characteristic of Sphaeridopinae (Maldonado and Santiago-Blay 1992), i.e., sensory organs on fore lobe of the pronotum, were not seen here in V. nigripennis; therefore, the presence of this feature needs more comprehensive studies among species of this group.
On the other hand, although the eyes of Sphaeridopinae have been considered large, almost covering the entire head, nearly contiguous ventrally (Pinto 1927, Maldonado and Santiago-Blay, 1992, Schuh and Slater 1995, Weirauch et al. 2014), this is not the case in Volesus. In the latter, the eyes are medium-sized, not covering the head and distant from each other ventrally (Figs 1, 2, 4-14, 20). In fact, the interocular distance is larger than the width of eye, dorsally, and approximately the same of it, ventrally.
Yet, although in the Sphaeridopinae the head had been considered without an anteocular portion (Pinto 1927) or projecting only slightly beyond the anterior margin of eyes (Schuh and Slater 1995), the anteocular portion in Volesus is longer, visibly projecting beyond the anterior margin of eyes for almost the same distance as the length of the eye (Figs 8, 11). Lastly, the presence of only two visible labial segments in Volesus (Figs 11, 12) is striking.
These dissimilarities between Volesus and other genera of Sphaeridopinae suggest that future studies including other species and more specimens, preferably with a phylogenetic approach, should be done in order to ascertain the set of features diagnostic of Sphaeridopinae.
In this case, it is worth mentioning that none of the phylogenetic studies which suggested that Sphaeridopinae would be a sister group to the genus Salyavata (Salyavatinae) (Weirauch 2008, Gordon and had included Volesus in their analysis. Therefore, possible future taxonomic changes involving these subfamilies, besides being based on cladistics studies, should also include specimens of Volesus to clarify its systematic position within Reduviidae.
In any case, the study of the male of Volesus nigripennis allowed for a better definition of the diagnostic characteristics to separate the genera currently considered as valid in Sphaeridopinae. Thus, a revised key to the genera of Sphaeridopinae is presented below. | 5,560.8 | 2019-05-02T00:00:00.000 | [
"Biology"
] |
Time Operator, Real Tunneling Time in Strong Field Interaction and the Attoclock
: Attosecond science, beyond its importance from application point of view, is of a fundamental interest in physics. The measurement of tunneling time in attosecond experiments offers a fruitful opportunity to understand the role of time in quantum mechanics. In the present work, we show that our real T-time relation derived in earlier works can be derived from an observable or a time operator, which obeys an ordinary commutation relation. Moreover, we show that our real T-time can also be constructed, inter alia, from the well-known Aharonov–Bohm time operator. This shows that the specific form of the time operator is not decisive, and dynamical time operators relate identically to the intrinsic time of the system. It contrasts the famous Pauli theorem, and confirms the fact that time is an observable, i.e., the existence of time operator and that the time is not a parameter in quantum mechanics. Furthermore, we discuss the relations with different types of tunneling times, such as Eisenbud–Wigner time, dwell time, and the statistically or probabilistic defined tunneling time. We conclude with the hotly debated interpretation of the attoclock measurement and the advantage of the real T-time picture versus the imaginary one.
Introduction
Attosecond science (attosecond = 10 −18 s) concerns primarily electronic motion and energy transport on atomic scales. In previous works [1][2][3], we presented a tunneling model and a formula to calculate the tunneling time (T-time) by exploiting the time-energy uncertainty relation (TEUR), precisely that time and energy are a (Heisenberg) conjugate pair. Our T-time is in good agreement with the attosecond (angular streaking) experiment for He-atom [1] with the experimental finding of Eckle et al. [4][5][6], and for hydrogen atoms [7] with the experimental finding of Sainadh et al. [8]. Our model presents a real T-time picture or a delay time with respect to the ionization time at atomic field strength F a (see below, compare Figure 1). Our T-time model is also interesting for the tunneling theory in general because it relates T-time to the energy gap or the height of the barrier [1,2].
Indeed, the role of time has been controversial since the appearance of quantum mechanics (QM). The best known example is the Bohr-Einstein weighing photon box Gedanken experiment (BE-pb-GE) [9] and [10] (p. 132). Our T-time picture [1] shows an intriguing similarity to the BE-pb-GE, where the former can be seen as a realization of the later [1,3]. Concerning the time operator in QM, recently Galapon [11][12][13] showed that there is no a priori reason to exclude the existence of a self-adjoint time operator, canonically conjugate to a semibounded Hamiltonian, contrary to the famous objection of Pauli (known as Pauli theorem). The result is, as noted earlier by Garrison [14], for a canonically conjugate pair of operators of a Heisenberg type (i.e., uncertainty relation), the Pauli theorem does not apply, unlike a pair of operators that form a Weyl pair (or Weyl system.) Our tunneling model was introduced in [1] (see Figure 1). We take a one-dimensional model along the x-axis as justified by Klaiber and Yakaboylu et al. [15,16]. Hereafter, we adopt the atomic units (au), where the electron's mass and charge and the Planck constant are set to unity,h = m e = e = 1/(4π 0 ) = 1 au. In this model, the effective potential of the atom-laser system is given by where F (throughout this work) is the peak electric field strength (at maximum) of the laser pulse (quasistatic limit), and Z e f f is the effective nuclear charge that can be found by treating the (active) electron orbital as hydrogen-like, similar to the well-known single-active-electron (SAE) model [17,18]. The choice of Z e f f is easily recognized for many-electron systems and well-known in atomic, molecular, and plasma physics [19][20][21][22]. The active electron can be ionized by a short laser pulse with an electric field strength F, where ionization happens directly when F equals a threshold called atomic field strength (see below Equation (4)) F a = I 2 p /(4Z e f f ) [19,23], where I p is the ionization potential of the system (atom or molecule). However, for field strengths F < F a , ionization can happen by a tunneling mechanism, through a barrier built by the effective potential and the ionization potential, as seen in Figure 1. The barrier height at a position x is given by (see Figure 1) that is equal to the difference between the ionization potential and effective potential V e f f (x) of the system (atom+laser) at the position x. The crossing points x e,± of V e f f (x) with the −I p -line are given by [1] x e,± = x e,c = x e,+ + x e,− = I p F = d C .
d B is the barrier width, where δ z = δ z (F) is given below in Equation (5). x e,c is usually called the "classical" exit point; it is the intersection of the field line −xF with I p -line, which equals what is usually called the "classical" barrier width d C .
We obtain the maximum h B (x m ) from the derivative of Equation (2), ∂h B (x)/∂x = 0 ⇒ x m (F) = (Z e f f /F), x a = x m (Fa). From h B (x m ) = 0, see Figure 1 (the lower green curve), we get the atomic field strength F a Fortunately, Equation (4) can be generalized as the following. For a field strength F ≤ F a : the equality δ z = 0 occurs at F = F a . Indeed, δ z = δ z (F) = I 2 p − 4Z e f f F is a key quantity; it controls the tunneling process and determines the time "delay" due to the barrier (τ T,d , τ T,i ) as we will see.
In the (low-frequency) attosecond experiment, the laser field is comparable in strength to the electric field of the atom. Usually, intensities ∼10 14 W cm −2 are used. It is usual to characterize the strong-field approximation (SFA) by the Keldysh parameter [24], where ω 0 is the central circular frequency of the laser pulse and τ K denotes the Keldysh time. According to Keldysh or SFA, in Equation (6), at values γ K > 1, the dominant process is the multiphoton ionization (MPI). For γ K < 1 (precisely γ K << 1), the ionization (or field-ionization) happens by a tunneling process, which occurs for F < F a . SFA was developed and refined later by Faisal [25] and Reiss [26] and known under the term Keldysh-Faisal-Reiss (KFR) approximation, where the two regimes of multiphoton and tunneling are more or less not strictly defined by γ K [27][28][29]. In the tunneling regime (for F < F a ) at a quasistatic limit, the electron does not ionize directly. It tunnels (tunnel-ionizes) though adiabatically, and escapes the barrier at the exit point x e,+ to the continuum as shown in Figure 1 (a sketch for He-atom.) With this model, we derived the following relations of the T-time [1], As discussed in [1,2] τ T,d , τ T,i (or | − I p ± δ z |) correspond to the forward, backward tunneling, respectively. An intuitive picture is given by a physical reasoning of these relations [1] as the following: τ T,d is the time delay with respect to the ionization time at atomic field strength F a . The latter is undoubtedly real and not zero τ T,d (F a ) = 1/(2I p ). τ T,d is the time delay for a particle to pass the barrier region and escapes at the exit point x e,+ to the continuum [1]. τ T,i is the time needed to reach the entrance point x e,− from the initial point x i , compare Figure 1. The two steps of the model coincide at the limit F → F a (δ z → 0), and the total time becomes the ionization time at the atomic field strength F a , τ total = 1 I p , τ T,d = τ T,i = 1 2I p . For F ≥ F a , the BSI starts [30,31]. At the opposite side of the limit F → 0, δ z → I p and τ T,d → ∞, hence, nothing happens, i.e., the electron remains in its ground state undisturbed, indicating that our model is consistent; for details, see [1][2][3]. For the first term lim F→0 τ T,i = 1 4I p , see below Section 3.
Physical Reasoning
At first, we follow a reasoning in which the atomic potential plays a central role to determine the T-time. This is similar to our approach of deriving the T-time (Equation (7))) using the TEUR. The idea relies on the SFA approximation, where the electron escapes the tunnel exit with zero velocity and becomes free. It leads, in the adiabatic tunneling, to the fact that the (decreasing) atomic potential energy compensates for the dynamics directed by the electric field. Thus, in adiabatic tunneling, the momentum change −F τ (during the time interval τ) due to the barrier corresponds to the change of the atomic potential, or the potential gradient in the barrier region, which we can express as the following: x e,−
= −Fτ
In Equation (8), the factor (1/2) is considered because tunneling is a half scattering (half collision) [32,33], whereas the symmetry of the process is given by forward and backward scattering [1]. Both issues will be discussed later in Section 3, where the tunneling process in SFA is viewed as a scattering process. We will also see later (Section 2.2.3) that τ (a time interval) in Equation (8) is the time of arrival between two points ∆p/F = (p 2 − p 1 )/F. The constant factor c = 1 4I p is considered due to the dimensionality of the relation (note ∇V is a force field like F). Evaluating Equation (8) at the entrance/exit points x e,± as given above, leads to In this relation, it is easy to see that τ delt = 0 for F = F a because the barrier height and width δ/F vanish, δ z (F = F a ) = 0. In other words, we have to add the ionization time in the absence of the barrier, which is quantum mechanically (QMly) easily comprehensible. One tends to add T-time at atomic field strength τ a = 1/(2I p ). However, while τ a is the ionization time at F = F a , we expect that at F < F a an enhancement in the ionization time occurs. Such an enhancement can be also inferred from the result of Aharonov et al. [9], where they discussed the TEUR and the BE-pb-GE. In our case, we have the electric field F instead of the gravitational field, and we expect the enhancement to be proportional to F relative F a , where the BSI region starts and the ionization becomes a classically allowed process. The enhancement can then be expressed with a simple form τ a F a F . Hence, the T-time as a delay time is: which is the relation given in Equation (7). With the symmetry consideration, or changing exit, entrance points (forward, backward process), we carry out the same procedure and obtain τ T,i , and hence the total time τ total , see Equation (7). Regarding this result, the following points consider attention: • Time like position is a relative observable [9]. One never measures its absolute values but time intervals by clocks. Throughout the present work τ's denotes time intervals, to distinguish it from a time variable usually called t.
•
The delay time in Equation (10) is a twofold delay. The first term is a delay with a field strength F < F a relative to F a , and the second term is a delay due to the presence of the barrier itself, which disappears at F = F a , see below Equation (16).
•
The way we obtained τ delay = τ T,d in Equation (10) is similar to the Eisenbud-Wigner-Smith (EWS) [34,35] time delay, where one adds the barrier-free motion term of the incident particle to the delay caused by the Coulomb potential of the scattering ion itself τ EWS = 1 2 (∆d/v) (note also the factor 1/2), where ∆d is the barrier width and v the velocity of the particle (wave packet), see further below Section 4.
•
We note that our τ delt (Equation (9) fits exactly in the definition of the dwell time τ dwell , the total time spent by a particle inside the barrier [34][35][36], despite the fact that the dwell time is defined in terms of transmission and reflected probabilities or coefficients, gained from the wave function of the tunneled particle(s). This makes it possible, inter alia, to link our T-time to the statistical picture of the T-time, see Section 6.
A Dynamical Time Operator
Our goal is to introduce a time operator which is a quantum mechanical (QMal) counterpart of its classical limit by the virtue of Bohr correspondence principle.
Classically and naively one usually assumes a classical dynamics with t = p/F and ∆p = (p exit − p 0 ) = − 2I p = −F(t exit − t 0 ), which leads to the Keldysh time (t exit − t 0 ) = τ K , where according to SFA v exit = 0, v 0 = 2I p . Another classical way [2] is to take the (classical) barrier width d C over the (arithmetic) mean value of the velocity to escape the barrier region, p = (p 0 − p exit )/2 = 2I p /2, which again with t = x/p leads to the Keldysh time d C /p = (I p /F)/( 2I p /2) = 2I p /F = τ K . One notices that τ K = ∆p/(−F) = d C /p, which is important for the quantum counterparts, see Sections 3-5. However, it is well known that τ K is too large and cannot describe the T-time, simply because it neglects completely the effect of the atomic Coulomb potential. Our objective is to calculate the T-time taking into account the atomic Coulomb potential. In this section, we introduce the QMal correspondence to t = p/F, and, in Section 3, we encounter the correspondence to t = x/p.
With the Bohr correspondence principle, we take the classical limit p F , v = F t and make a transition to its QMal counterpart: This is similar to the construction of the well-known Aharonow-Bohm time operator from its classical limit x/p, see Section 3, and we assumedF † =F. The choice of the sign is related to symmetric (+) antisymmetric form (−), furthermore in Section 3. The suggested notation T FK Fujiwara-Kobe time operator (FKTO) [37,38] is explained below in Section 2.2.3. Our goal is to obtain the T-time form the time operator T FK in Equation (11).
First Approach
By following Razavy [39], we use the Hamiltonian's principle function W(q, τ 0 + t), where p = ∂W ∂q , ∂W ∂t = −H, − ∂H ∂τ 0 = H, ∂H ∂t = 0, τ 0 + t = τ(p, q), H(p, q) is the Hamiltonian and p, q, t are the the momentum, the coordinate, and the time variable; for details, see [39]. In our case, the Hamiltonian H = p 2 /2 − Z e f f /|x| − xF is not explicitly time-dependent, i.e., we take the quasistatic limit. We first show the antisymmetric form (−) (or δp/F = p x e,+ /(−F) − p x e,− /(−F))), we obtain τ del . Note that δp/F is the time of arrival discussed by Fujiwara [37] as explained in Section 2.2.3 (T Fuj ). Let W(p, x) be the Hamiltonian's principle function and S(p, Accordingly, applying a time operator of the type T =p/F (as in Equation (11)) with p = ∂W ∂x , we obtain (10). A constant x−independent term C is unimportant because it cancels out for time intervals in the adiabatic tunneling dynamics. The same holds for nonadiabatic effects, which are expected to be small, especially for a short T-time period. It should be noted that nonadiabatic effects due to the oscillatory field, which causes small shifts in x e,± and a momentum drift, are small [40] and below the error bars of the experimental results [41]. p is the momentum (or momentum operator) and F the scalar electric field of the laser pulse at maximum (quasistatic limit). To get the T-time, i.e., the time to pass the barrier region, we evaluate this expression at the entrance/exit points x e,± of the tunneling region, i.e., we take the momentum difference caused by the barrier with δp = −Fτ delt , and, with where in the last line we applied the same argument as in Equation (10). Clearly, ionization time τ a = τ T,d (F = F a ), δ z = 0 at atomic field strength is QMly real and not zero, and, for F < F a , δ z > 0, a twofold time delay emerges with respect to the ionization at atomic field strength, as we noticed in Section 2.1. Actually, we can write (see Equations (10) and (14)) where χ ≡ χ(F) can be considered as an enhancement factor, a delay caused by a field strength F (< F a ) and the presence of the barrier δ z > 0. Furthermore, the first term τ a (F a /F) in Equations (10) and (14) can be written in the form (τ a F a )/F) = p a /F. Hence, we can write Equations (10) and (14) in the form The significance of Equation (16) is obvious and its similarity to the τ EWS is evident, as mentioned in the previous paragraph. Indeed, Equations (15) and (16) and the twofold delay time have a substantial consequence for the tunneling in strong field and the tunneling theory in general, a detailed investigation follows in [41] and future works. Furthermore, we see from Equations (16) and (14), due to the forward and backward tunneling ±δp, that τ delt is eliminated and the FKTO time of Equation (11) equals a pure ionization (delay) time (no tunneling part), which according to Winful (see below Equation (32)) is the self-interference part [42], τ FK = p a F = τ dion = 1/2(τ T,d + τ T,i ) = (1/2)τ total , furthermore in Section 4. Finally, it is easy to show, by performing the same procedure as that of the symmetric form (+) of Equation (11), we obtain τ dion − 1/4I p , i.e., a constant term does not cancel out by (+) sign. However, this term is eliminated by the momentum representation as discussed by Allcock [43]; we came back to this later in Section 3.
Second Approach
We can obtain the same result by using what Allcock [43] called the momentum difference ∆ (V) p produced by the potential difference ∆V, in his discussion of TEUR, when he initiated the concept of time of arrival (arrival-time) in his seminal papers [43]. It actually goes back to Heisenberg original work on uncertainty relation with the Stern-Gerlach experiment [44]. Allcock defined it by to obtain TEUR of the form where L = (p/m) T, the distance traveled by the particle with the group velocity p, m the mass and T (actually ∆T) is the time of transit (or equivalently arrival time) spent by the particle corresponds to the deflection angle ∆φ. The momentum initiated by Allcock in Equation (17) is our dynamical observable, and we define the time operator as before in Section 2.2.1 p F , ∆T = ∆ (V) p/F, where we again use a scalar field F (quasistatic limit). The difficulty is in defining p −1 . However, because L is the distance traveled by the particle (in our case the barrier region) and due to the adiabaticity, we can take on the right-hand side of Equation (17) a mean value p (with m = m e = 1 au). As discussed in [45], see below Section 3, we have p = 4Z e f f and we take the potential difference in the barrier region to obtain the time T corresponding to the deflection angle ∆φ [43] while tunneling. For its interpretation as macroscopic quantity, see [43]. The only difference is that the deflection angle is caused by the atomic potential (scattering process), instead of the inhomogeneous field in Stern-Gerlach experiment when the atoms pass through the magnetic field. With Equation (17), we obtain Since ∆ (V) p < 0 (the electron is decelerated), the minus sign of F is canceled, whereas for backward scattering we have ∆ (V) p > 0, which leads to −τ delt -where, unlike the Stern-Gerlach experiment, this is not the measured angle in the attoclock experiment. The delay time τ T,d,i can be obtained with the same procedure, as we have done in Equations (10) and (14). One might argue that ∆V = δ z (the energy gap) and, by simply applying Equation (18) directly, one obtains T = 1 2δ z . However, this has been discussed in [2], and it leads again to the same result τ T,d,i . In the sense of the Allcock definition, our T-time, i.e., the time to pass the barrier region τ delt , uses the change of the atomic potential energy ∆V as a hand of the attoclock, switched on (off) at the entrance (exit) of the barrier that are themselves determined by the field strength F. However, F is quasistatic and is supposed to be known with a classical accuracy.
One notices that, in Equations (10), (14) and (19), we have calculated a time interval (∆p/F) from the evaluation on the two sides x e,+ , x e,− of the barrier. Therefore, we get τ delt , the time interval (time difference) caused by the barrier itself (see Equation (16)), which is not the measured quantity (due to the self-interference term) in the attoclock experiment, e.g., the measurement by Eckle and Landsman et al. [4][5][6], see [1]. On the other side, constant terms cancel each other out, unless their values strongly depended on F in a nonadiabatic manner. One expects that such an effect is small, as already mentioned, since the tunneling happens in short time elapse around the peak of the field, which permits the employment of the quasistatical limit [41].
Final Remarks
It is worthwhile to mention that such a type of time observable or a self-adjoint operator in Equation (11) has been introduced in the past by Busch et al. [46] for a free falling particle with mass m in a gravitational (or force field) field g of the form T m = 1 mg P. The operator T m was introduced as an example for a self-adjoint operator canonically conjugate to a Hamiltonian operator H = P 2 /2m + mgQ, and P(t) = mgt + P 0 , where Q is the position variable (distance) and Q 0 = P 0 = 0. It was our motivation to introduce the time operator in Equation (11), where in our case it is easy to show that the operator T is an observable that satisfies the commutation relation with the Since the potential V e f f (x) = −Z e f f /|x| is a scalar quantity [47], the commutation relation is reduced to the position momentum commutation relation We have to add that, after finishing the draft of the manuscript, we encountered earlier works [37,38] where the dynamical operator in Equation (11) and its commutation relation are discussed. Fujiwara [37] showed that a time operator of the formT Fuj = − p k is well defined. This operator corresponds to the Hamiltonian p 2 2 + q k and leads to time-energy uncertainty, where k is a constant uniform force field and q, p are the position and momentum. He also showed that between two points q A < q B is the arrival time. This is similar to our tunneling or delay time, compare Equation (8) Kobe et al. also showed that the Pauli argument does not apply in this case and that in general the energyÊ and tempusT satisfy the same commutation and uncertainty relations as do the operatorsp andq (the momentum and position.) The works of Fujiwara [37] and Kobe et al. [38] nicely confirm our claims of the present work. Thus, we called the time operator in Equation (11) Fujiwara-Kobe time operator T FK .
Aharonov-Bohm Time Operator
In Section 2.2, we have seen that classically one obtains the Keldysh time τ K from p/F (or ∆p/F) and similarly from x/p (or d C /p). In this section, we discuss a derivation of the T-time from the Aharonov-Bohm time operator (ABTO) the counterpart of classical form x/p or d C /p, as we have done for the counterpart of t = p/F in Section 2.2, the Fujiwara-Kobe time operator T FK of Equation (11). The well known ABTO for a free particle with a momentum p and position x [48] is where the second line is the momentum representation, and the third is the energy (E = p 2 2 ) representation, see Allcock [43], where a transformation of the wave function ψ(p) → ψ(E) leads to We first compare our T-time in Equation (7) with Equations (20) and (21). In the limit of F → 0, δ z → I p (ground states) It is clear that at this limit (ground state) the electron can not escape the atom τ T,d → ∞ [1], whereas τ T,i corresponds to the second term of ABTO in Equation (21). We will see later that τ T,d , τ T,i correspond (forward, backward) to i 2 ← → ∂ ∂E , the Olkhovsky time operator. Thus, the limit for F → 0 reveals a connection between our T-time and ABTO. In the general case F = 0, we use Equation (21) as the following. When the electron interacts with the field, we can assume the quasistatic (adiabatic) dynamics tunneling model (QSTM). The electron moves through the barrier region adiabatically with a mean momentum p = 4Z e f f , which, as discussed in [45], can be seen from τ T,d, The QMal correspondence to d C p = τ K (i.e., d C → d B ) leads to τ delt as the following. With d B = (x e,+ − x e,− ) = δ z F and p = 4Z e f f , and noting that the QM correspondence to x p is x e,+ p , x e,− p , the time to reach the exit and entrance point, respectively, and their difference is the time due to the barrier itself, we get The similarity between Equations (21) and (24) is obvious. It should be understood as a mean value due to the assumption p −1 = 1/(4Z e f f ). However, by comparing Equation (24) with Equation (20), we clearly see the difference in the minus sign. In Equations (24) and (25), the ABTO with the minus sign or τ sym − corresponds to the time of arrival time (or transit time) as we have seen in Section 2.2.2, Equation (18) (where L = x e,+ − x e,− = d B ), we discuss this later below (see Equation (29)).
Our total T-time is obtained by taking the plus sign. Then, with a mean momentum p = 4Z e f f , x e,± /p and with Equation (20), we obtain In fact, it has been already discussed in [1] that the symmetry of the tunneling process (forward, backward) leads respectively to τ T,d , τ T,i and the total time τ total , see Equation (7)-hence, the factor 1/2, which implies a relation of τ T,d , τ T,i to the ABTO. Equation (26) might be considered as the symmetric form of ABTO, whereas Equation (24) is the antisymmetric form. QMly a symmetrization leads to a real quantity, which is obvious in Equation (26) (or (27)) and it is the time delay τ dion given in Equation (16) (see also Equation (33)). However, this is also the case in Equation (25), since δ z is real which involves a tunnel ionization. Surprisingly, we obtain from ABTO a delay time τ sym + = 1 2 τ total = τ dion (F) (as given Equations (16) and (33)) of a process, in which τ delt the tunneling part (due to the barrier itself) cancels out, assigned a self-interference term by Winful [42]. The importance of this result especially to the nonadiabatic effects is discussed in [41].
Actually, as seen in Equation (27), the delay time due to the barrier itself is eliminated by the sum of the two process forward and backward in the immediate step (at second line), which invokes the question about the real, imaginary pictures of the T-time, whereas the antisymmetric form in Equation (25) eliminates the self-interference term and performs the delay time due to the barrier itself or the dwell time, compare Equation (32). A complex T-time picture can be obtained by the substitution τ delt → iτ delt or equivalently an imaginary barrier gap δ z → i|δ z |, which leads to the same result in Equation (26) Apart from the fact that τ T,i is classically allowed as discussed in [1] (see the physical reasoning of Equation (7)), which means no need to an imaginary part (i.e., τ C T,i is redundant), only the real part of the delay time τ C T,d (unlike τ T,d , which is real) can be compared with the experimental and the Feynman Path Integral (FPI) results in the adiabatic tunneling picture [1,4], see discussion later in Section 6. Hence, in our view, the complex delay time of Equation (28) can not satisfactorily describe the delay time in the attoclock, Section 6.
However, the choice of the sign allows a question opened namely about the symmetric and the antisymmetric forms given by Equation (26) or Equation (24). In the former (τ sym + ), we obtain the self-interference term, which eliminates the tunneling part, or in other words no tunneling is involved is this case. This is in accordance with the well known observation of the numerical methods that the velocity gauge the SFA (see also Reiss [29]), in which no tunneling time is encountered, as we will discuss in Section 6. The later (τ sym − ) corresponds to τ delt and we calculate τ T,i , τ T,d as done in Equations (10), (14) and (19). In this case, forward and backward processes eliminate the self-interference term and give the net time due to barrier itself, and leads to τ T,d with a good agreement with the experimental result [1] in the adiabatic tunneling [1,4], as already mentioned. In terms of a classical point of view, it is nothing but the time to reach the exit point The question is now which of the experimental data, He-atom [4], or H-atom [49] is the right choice to compare with. The factor (1/2) is of a QMcal origin as we see from Equations (24) and (26); see further below Equation (29). Thus, we think that the data of Landsman et al. is reliable. On the experimental side, the difference comes from the evaluation of the experimental data, i.e., how the final photoelectron momentum distribution and the streaking offset angle difference transformed to a tunneling delay time. Finally, the same factor was also introduced by Wigner in the scattering or atomic collision to calculate the Eisenbud-Wigner time delay, as we will see in the next section. Note that it is only when δ z is an imaginary quantity that we end with an imaginary T-time picture, the one supported by Sainadh et al. [49], see Sections 4 and 6.
In the energy representation, the T-time is given by the form of a bilinear operator of Equation (22), see Olkhovsky [50] where ← → means to the left, right (or forward, backward) applied forms f , (−i/2) ∂ ∂E g , (−i/2) ∂ ∂E f , g . The comparison of Equations (26) and (29) with Equations (20)-(22) (and the transition F → 0) makes it clear that ABTO can be used to derive the (or directly connected to our) T-time, though we take advantage of the QSTM. This is similar to the approach introduced by Allcock in his seminal work [43] to define the time of arrival (TOA) and obtain the TOA distribution or observable time after the classification of Busch [51] (chap. 3). In addition, it is again related to the similarity we encountered before with Fujiwara time of arrival T Fuj , see Section 2.2.3. It turns out that a unified picture of the tunneling time is reliable, as we will see in Section 4.
Note that Z e f f is a parameter with Z e f f = 1.6875 [52] for a small barrier when the electron escapes closer to the nucleus (large F). Z e f f = 1.375 is better for a large barrier width when the electron moves far from the nucleus (small F), see Figure 4 of [1]. In a more reliable approximation, Z e f f depends on the barrier width (or the distance to the nucleus). However, there is no such reliable approximation available so far.
Relation to Eisenbud-Wigner Delay Time
As we have seen in Section 3 and Equation (24), the barrier width can be used to obtain the delay time τ delt . Surprisingly, this is also useful to find a relation to Eisenbud-Wigner time or the more commonly used term τ EWS mentioned in Section 2.1. The τ EWS has a long history going back to atomic collision and particles scattering by a Coulombic potential. We find in [35,53] (and according to Wigner [54,55]) the following definition of the τ EWS : where ∆d = 2 ∂θ ∂k , v =hk/m are the barrier width and the velocity of the particle or the propagation velocity of the wave packet, respectively, and k is the wave vector of the monochromatic wave packet. Similar to Equation (24), we get for a tunneled electron (wave-packet) the time shift [35] due to the barrier (h = m = 1 au) 1 2 where again v (= p as mentioned above) is the mean velocity of the particle (propagation velocity of the wave-packet) in the tunneling region. Obviously, one has to add the barrier-free term to get the total EWS-time τ total EWS = τ f ree + τ EWS [35]. Thus, adding the self-interference term (see Equation (32)) or τ dion (Equation (16)) as we have done in Equations (10), (14), (19) and (26), we get immediately the time delay τ T,d , τ T,i . Clearly, τ T,d corresponds to the τ total EWS . Although this result is unexpected, we think it is not surprising. Then, from Equation (30), we see immediately the similarity to Equations (22) and (29) in the energy form ∂ ∂E , where the phase θ is related to the semiclassical action. We note that such a unified T-time picture was already found by Winful [42] for the quantum tunneling of a wave packet or a flux of particles scattering on a potential barrier. Winful showed in his work that the group delay or the Wigner time delay can be written in the form where τ dwel is the dwell time which corresponds to our τ delt as mentioned in Section 2, and τ si is a self-interference term which corresponds to our τ dion in Equations (16) and (33), see further below Section 6. Recently, Han et al. [56] tried also to find a unified T-time picture, but they claimed that the concept of T-time depends on the model used to derive it. We think that the unified picture brought by Winful is more reliable, especially since it fits well with our result. It is worthwhile to note that Bray et al. used a similar point of view with a semiclassical Keldysh-Rutherford (RK) model based on the Rutherford scattering [33,57]. They found a Coulombic origin of the attoclock offset angle (ACOA). Nevertheless, they claimed to confirm an imaginary T-time picture, which was justified by NITDSE with a short-range potential, see Section 6.
In the imaginary time picture as also done by Bray et al., one claims that experimentally measured ACOA is rooted to the tail of the potential, whereas the T-time is imaginary and its real part is zero (instantaneous tunneling) with a decomposition of the form t s = t i + i t T (t i = Re t s , t T = Im t s ), where t s , t i , t T are the saddle point solution, the exit time and the T-time, respectively, see [49,58]. According to [58], the exit time is the time at which the electron leaves the tunneling barrier, or the ionization time that is in the tunneling picture corresponds to the moment at which the electron emerges in the classically allowed region [58].
As we found in this section, τ EWS permits a scattering mechanism interpretation for the T-time where the ACOA corresponds to a real τ total EWS = τ T,d because δ z and hence τ delt are real quantities. An important point is that we now clearly see that the factor 1/2 in Equation (30) (and the same in τ T,d,i ) enters because the tunneling is half scattering. Taking a second 1/2 factor by symmetrization (anti-symmetrization) as done in Equations (24)-(27) (and (28)) corresponds to forward, backward scattering. In our view, this again shows that the result of Landsman et al. [4] for He-atom [1], is more reliable than the result of Sainadh et [49] for H-atom [7] (which lacks a factor 1/2), as already mentioned in Section 3.
Therefore, the adiabatic tunneling process can also be understood in term of scattering in which the ACOA is due to the scattering by Coulombic potential in the barrier region (not the tail of the potential) and corresponds to the T-time with real quantities (i.e., a real T-time), whereas Bray et al. [33,57] as mentioned above claims that their RK model confirms the imaginary time picture or that the ACOA is due to the tail of the Coulombic potential by comparing the model to a numerical result. In [57] by a comparison to experimental results for H-atom [49], they found that the latter confirms the I −0.5 = F −1 dependence of RK-scattering curve (where I is the intensity of the laser pulse), which is in accordance with an 1/F dependence of the T-time, e.g., Equation (15) with an additional nonlinear term δ z /I p , which is quite smaller than unity (the first term).
Thus far, we see that our real T-time picture is highly consistent and fundamental. However, we have to add in regard to imaginary tunneling picture that it is common (rather fundamental) in QM that two pictures exist for a QMal physical process or phenomena [7]. Indeed, the debate continues and a complete picture of the T-time is not reached so far.
Discussion
In Section 2.2, we introduced a time operator and calculated the T-time Equations (14) and (19). Our approach is supported by the works of Busch et al. [46], Fujiwara [37], and Kobe et al. [38]. We also have shown in Section 3 that the Aharonov-Bohm time operator leads to the same result Equation (24), by taking advantage of a QSTM or an adiabatic tunneling with a mean value of the momentum p = 4Z e f f . A similar conclusion is found in Section 4 with the Eisenbud-Wigner delay time τ EWS . It turns out that the specific form of the time operator is not crucial, and a dynamical variable such p of the system under consideration maintains the time observable through an operator form, which represents this time observable. This is similar to the classical case or limits x/p = x/(tṗ) = p/ṗ = p/F, which lead to the Keldysh time τ K , as mentioned in Section 3. Thus, time operators representing a dynamical variable (intrinsic time) are equivalent.
Furthermore, to our knowledge, this is the first time where one encounters a close relation between T-time delay and time of arrival, see Section 2.2 last paragraph and Section 3. Time of arrival is widely used to calculate T-time. However, the usual belief is that time delay and time of arrival are two different definitions of the time observable, i.e., they result in two time intervals (different τ's) for the same process in quantum systems. According to our present work, we argue that intrinsic time can be represented equivalently by different well defined (dynamical) operators. The most known types, among others, are the delay time and the time of arrival. It is worthwhile to explain what we mean with the above stated well defined. An example is the definition of the Mandelstam-Tamm time operator (MTTO) [59][60][61] (A, B are two operators of the system, and ψ is the wave function) Recently, Gray et al. [60] argued that the TEUR-version of MTTO depends on the states that are not eigenvectors of an observable. It means that the Mandlestam-Tamm time (MT-time) is the infimum of the ratio of static uncertainty to dynamic uncertainty per unit time (see discussion in [2]). Messiah [62] used MTTO to define "the characteristic time of evolution", whereas Gray et al. claimed that this formulation recommends itself since it does not promise too much. They also provided what they called a (mathematical) precise definition of MTTO [60]. However, following Kullie [2], the MTTO can be, in fact, brought in line with the time of arrival of ABTO and the time delay in Equation (7), by using p as a dynamical observable. Furthermore, this contrasts the claim of Orlando et al. [63] that the MT-time is equal to the Keldysh time. Our conclusion is that, under the same conditions, well-defined time operators by a dynamical observable equivalently represent one and the same intrinsic (dynamical) time or time observable of the system (called tempus by Kobe et al. [38]), a clock dynamically attached to the system. The remaining question concerns the equivalence between different time operators, which are defined on the basis of different dynamical observables of the system. In the present work, solely p was used as a dynamical observable.
Attoclock and Tunneling Time in Strong Field Interaction
Attosecond angular streaking experiments (termed attoclock) have triggered a hot debate about T-time, whether it is a real or an imaginary quantity. In adiabatic tunneling, for example [4][5][6] with a Helium (He) atom, we have found that T-time is real. It is a delay time with respect to the ionization at atomic field strength [1][2][3]. In an experiment with H-atoms, Sainadh et al. [49] argued using a short-range potential model and NITDSE that tunneling is instantaneous. Hence, they support the point of view that T-time, i.e., the time to traverse the barrier region, is an imaginary quantity. In [7], we found that the measured data in the experiment with H-atom of Sainadh et al. and the accompanied NITDSE [8,49] fits very well to our real T-time picture (Equation (7)). Solely the factor of (1/2) is still unclear; it is absent in the experimental data of Sainadh et al. [8,49], unlike the data of Landsman et al. [4] for the He atom, as discussed in Sections 3 and 4. Furthermore, according to the imaginary time picture, one also claims that time is a parameter in QM and not an observable. Hence, no time operator exists in QM in line with the well known Pauli theorem. However, the later point has been clarified and the list of references is too large to be specified; see, for example, [11][12][13][37][38][39]45,[64][65][66][67] and the references therein. The present work is a step in this direction.
Despite this, Rost et al. [68] argued that, thanks to the attoclock experiments with He-and H-atoms, one can be sure that a reasonably defined T-time is zero in these cases. Furthermore, Rost et al. concluded that, using classical back propagation of an ionized wave packet (determined QMly by Schrödiger equation), one arrives at the conclusion that the T-time defined at the exit point (as defined in their work) is zero for single-electron dynamics, see Ni et al. [69]. Tunneling is a QMal manifestation and a classical point of view is not appropriate anyway, a deterministic T-time is not expected. Furthermore, although they found a positive time at the exit point by back propagating with the position-based criterion, they claimed that the T-time is zero and the criterion can be considered as a misinterpretation of the attoclock experimental data based on models that do not take full and consistent account for nonadiabaticity, with the argument that they found a high non-tunneled fraction [70], we come back later to this important point. In the same work [69], they found that a static energy-based (again in their notation) adiabatic tunneling criterion fails completely.
The core point in the result (Equation (7)) of our tunneling model is the real T-time to overcome the barrier region as a delay time with respect to the ionization time at atomic field strength; the latter is undoubtedly real and QMly does not vanish. The issue becomes more evident by rewriting Equation (15) in the form (see also Equation (16)) where both terms are time delays and real, the second term τ delt is real because δ z ≥ 0 is a real quantity for F ≤ F a [1] . The relation to τ EWS , τ dwell , and τ sym − of ABTO that we have discussed in Sections 3 and 4 confirms our point of view, in particular that τ EWS provides a similar picture of a scattering particle on a Coulombic potential. However, it is not yet entirely clear whether the origin of the delay time is the Coulombic barrier, due to the similarity with τ EWS , or due to the tail of the Coulomb potential, as claimed by th imaginary T-time picture [49,58]. The latter enforces a Z e f f ≈ 1, which is not reflected in our comparison of the experimental data, especially for F values near the atomic field strength F a , where Z e f f = 1.6875 shows a good agreement with the experimental result [1]. Our model uses the length gauge or the dipole approximation of the interaction Hamiltonian [2] due to the Göppert-Mayer gauge-transformation [71]. However, to claim that it is a kind of a position criterion seems vague, apart from the fact that Equation (15) fits well with the experimental results, as shown in [1]. We think that the work of Ni et al. [69] indicates that numerical methods, such as back propagation including NITDSE imply a dependence on the applied concepts, e.g., the back propagation with the different criteria in the work of Ni et al. [69]. Hence, they end with inconclusive results and interpretations. A drawback of their approach is that they only consider backward propagation, which is not sufficient to give a clear tunneling picture (forward propagation is not considered). Actually, we have already encountered such a matter in the work of Orlando et al. [63], where they claimed to obtain Keldysh time as a lower limit of the T-time from the NITDSE and the MTTO. However, we found in [2] and discussed in the present work that applying the MTTO in our tunneling model leads to the T-time relation in Equation (7), which agrees well with the experimental results. Reiss [29] divided the strong field approximation methods into two categories, strong electric-field approximation (SEFA) and strong propagating-field approximation (SPFA). He argued that they are critically different. The SEFA employs a tunneling model that trends to an adiabatic limit at low frequencies; the SPFA does not invoke the tunneling concept. This could be one of the reasons why Ni et al. get different results by applying different criteria. It is worth noting that earlier works using NITDSE [72,73] have also reached a similar conclusions like Ni et al. As a consequence, the conclusion of Ni et al. of an imaginary T-time, though possible, is not a closing answer and does not tell much concerning the T-time and tunneling theory in general. The imaginary T-time picture is in line with the fact that the tunneling is classically forbidden. This is important; nevertheless, it is not instructive in the case it obscures an insight and otherwise accessible conceptual understanding (See Carver Mead, the Nature Of Light What Are Photons, www.cns.caltech.edu). It offers a picture of tunneling mechanisms among other pictures, a situation that is common in QM. A complete picture to the tunneling in strong field science is still not available, where certainly it is crucial to distinguish between adiabatic and nonadiabatic tunneling.
Our T-time concept for adiabatic tunneling opens a new insight for the understanding and of a fundamental importance to the tunneling theory in general, as mentioned in Section 1. Note that a complex time point of view, i.e., real and imaginary parts, such as the one brought by Torlina et al. [58], would not change our conclusion. The difference is that Torlina et al. [58] claim that the measurement of the attoclock, the real part, corresponds to the tail of the atomic Coulomb potential by using a short-range potential model, whereas in our picture the atomic potential energy at the exit point of a single-active electron model determines the uncertainty in the energy and, by the virtue of TEUR, we get the real delay time measured in the attoclock; see our detailed discussion in [3]. The utilization of TEUR in our model is important to compare with the statistical point of view of the T-time and to the tunneling theory in general, as we will see in the following.
The measurement data in both experiments with He-and H-atom are of a statistical nature. Landsman et al. have shown [4] a good agreement between the experimental result and the FPI. Demir et al. [74] came to the same result of the T-time using a statistical approach to the T-time (SATT) based on FPI. Camus et al. [75] claim that the agreement between theory and experiment in their work with Ar-, Kr-atom provides clear evidence for a nonzero T-time delay, however without clear details about their tunneling picture, i.e., adiabatic or nonadiabatic, which is certainly a crucial point. In our T-time model, we have found a good agreement with FPI of Landsman et al. for He-atoms [1], and with NITDSER of Sainadh et al. for H-atoms [7]. Thus, as far as adiabatic tunneling is considered, we think the question is rather how to compare our result and model with the FPI, NITDSE and other statistical and wave packet dynamics model calculations. We will now discuss a possibility to resolve this puzzle.
First, as known, a single electron model and statistical or probabilistic interpretation inherently coexist in the QM. One thinks about a single electron atom such as the H-atom, the Bohr model, the radial Schrödiger equation, and the Born interpretation of its wave function. Second, our model is not alone or a pure semiclassical single-active electron model because our T-time is derived by the virtue of the TEUR [1], and, in this sense, it is a QMal model with a direct relation to the statistical or probabilistic point of view, see Landsman et al. [4,35]. It is not the time spent by a single electron moving over or through a barrier having an energy less than the barrier height, which is a classical inadequate picture to offer an understanding to our result and the T-time in general. Having such a classical picture in mind does not help to find the correct answer or the right interpretation to the T-time.
In his work, Galapon [76] constructed a time of arrival operator for a square potential barrier and calculated its expectation value for a wave packet. By a comparison to the result of Eckle et al. [5,6] for He-atoms, he showed that only the above-the-barrier components of the momentum distribution of the incident wave packet contribute to the traversal time to cross the barrier region. In statistical methods or wave packet dynamics, one considers the momentum distribution which corresponds to a collective incident of a stream of particles (a statistical ensemble) upon a potential barrier. From this point of view, what Galapon found is that the particles partially tunnel in real time (traversal time). Thus, in terms of FPI [35], this is the part that corresponds to the amount of the transmitted particles or transmission amplitude. It is the part of the wave function that corresponds to a specific time τ that the electron spends inside the potential barrier [35]. In this sense, one can imagine that our T-time would correspond, in terms of a transmission amplitude, to the time τ of the part of the wave function that tunneled (transmitted) the potential barrier. For example, at a field strength F < F a , the ionization rate is the part of ionized atoms (transmission amplitude of the wave function) relative to the amplitude of the total wave function of the system, which corresponds to the total number of the atoms. In this respect, in our model, we quantify tunneling through the energy gap (potential barrier) and T-time using TEUR, which is equivalent to quantifying it through the transmission amplitude of the wave function. This follows in particular from the good agreement of our T-time with FPI [4,35] and SATT [74] for He-atoms. We see now why Ni et al. [69] have found a high non-tunneled fraction as mentioned above (using a position criterion). The tunneling probability is always less than unity and only at F = F a reaches unity, see Delone [77] (p. 72). They ignore the fact that tunneling unlike ionization depends on the probability amplitude or transmission and reflection amplitudes and coefficients. The tunneled fraction corresponds to the part of the transmission amplitude, which corresponds to the T-time τ T,d and τ, which agree with the experimental result in the adiabatic tunneling [1,4,35].
At the atomic field strength F a , the T-time is the ionization time τ a = 1/2I p (compare Equations (15) and (33)), at which the ionization rate equals unity [77] (p. 72), where the barrier disappears and the BSI region starts, which is a classically allowed process. Actually, we can express our views with a simple form in terms of relative probability amplitude, τ T,d = τ a W F a / W F = τ a / W F . Hence, we find that 1/ W F ≥ 1 is an enhancement factor similar to Equation (15). One notices that τ a W F a = τ a = 1/(2I p ) is the QMal limit and τ ≥ τ a for any time measurement through an interaction with a field F ≤ F a [9], see [1,2].
In a way, our approach is tied to the Heisenberg picture of the QM, where the dynamics are performed by the operators acting on the time-independent state vectors, which are represented by a basis in the phase space. The expectation values are then extracted information by virtue of the operators acting on the sates vectors, where the probability character of the expectation value corresponds to the probability amplitude or the factor W (F a )/W (F) = 1/W (F). However, certainly the calculation of W is a difficult task, whereas our approach offers a simple procedure to calculate the tunneling time by exploiting the TEUR [1].
Therefore, we think that our tunneling model is in line with the statistical, probabilistic point of view, e.g., such as FPI, the wave packet dynamics and wave function methods or NITDSE. It might explain the good agreement of our T-time [1] with FPI of Landsman et al. [4], and NITDSE of Sainadh et al. [8,49] (apart from the factor 1/2 as already discussed). This point of view is supported by the similarity between the delay time (interval) due to the barrier itself τ delt and the dwell time τ dwell , see Section 2.1. The latter is defined statically by the transmission and reflection coefficient or amplitudes of the wave function [35] (in our picture forward, backward scattering). In addition, similarly, it is supported by the agreement with the unified tunneling picture of Winful (Equation (32)), see Section 4. This way we showed that our real T-time conciliates with various interpretations and pictures, it is in this sense sturdy, thorough, and fundamental.
Finally, one might argue that the imaginary T-time picture is physically more reliable, see Rost et al. [68], but why then is the ACOA caused by the tail of the potential [49,58] if the energy gap is mainly covered by the nonadiabatic effects? There are several concerns with this picture. 1. It ignores the self-interference term, Equations (32) and (33), which also exists for a zero-range potential. 2. We have already seen from τ ESW in Equation (30) that the ACOA is likely caused by a scattering mechanism due to the Coulomb potential and not solely due to the tail of the potential. 3. It ignores that the time to reach the entrance point τ i (Equation (7), see Figure 1), is a classically allowed process. Furthermore, the electron becomes a free particle at the exit point, when it gains the energy through the nonadiabaticity. It is then less affected by the tail of the Coulombic potential, which is screened by the field [45]. Otherwise, we do have to rely on the adiabatic tunneling picture, where our real T-time model, together with the statistical or probabilistic time interpretation as discussed above, offers a reasonable picture in which the nonadiabatic effects are small and not crucial to the ACOA or the T-time. Where the nonadiabaticity due to the oscillatory field is smaller than the error bars, as already mentioned, and, in our opinion, it is reflected by the spread of the points in the experimental result, compare the figures given in [4]. Further discussion is given in [41], and we follow this issue in future works.
Conclusions
In this work we showed that our T-time can be obtained from a time operator. It is a delay time with respect to an ionization time at the atomic field strength; hence, it is an intrinsic dynamical time and is similar to different dynamical time definitions such as time of arrival, classified as an observable time by Busch [78]. Different types of time operators are discussed, where the conclusion is that, under the same conditions, dynamical time operators equivalently represent the intrinsic (dynamical) time observable of the system, termed as the tempus of the (evolving) system by Kobe et al. [38]. In this sense, the well known definitions, delay time, and time of arrival represent the same dynamical time observable of the system under consideration. A similar unified picture is found by Winful as discussed in Section 4. From the similarity with the EWS-time τ EWS and dwell time τ dwell , it should be noted that time intervals of the tempus of the system are or should be identical, despite resulting from different definitions. We found that the symmetric (antisymmetric) ABTO (likewise FKTO) is related to our total time τ total , τ sym + (τ delt , τ sym − ), where the forward and backward tunneling eliminate the delay time due to the barrier itself (the self-interaction term), respectively. We follow this issue in future works where adiabatic and nonadiabatic tunneling will be considered. Additionally, we discussed the attoclock and the real versus imaginary T-time picture and concluded with the remark that our tunneling model, which is based on the single-active electron approximation and the TEUR, is in line with the statistical or probabilistic picture of the T-time defined by the probability amplitudes of tunneling ionization such as FPI and SATT. | 14,120.4 | 2020-04-07T00:00:00.000 | [
"Physics"
] |
Developing the Corporate Management Model on the Basis of State-Private Partnership in Energy Supply to the State and Municipal Budget Organizations
The article presents the research results of one of the topical issues of corporate management – interaction between a corporation and state in the frameworks of state-private partnership for increasing the energy efficiency and energy saving in the budget sphere. The advanced model of corporate management offers ways of developing the state-private partnership mechanism in this sector.
Under the speeding globalization and post-industrial economic development, the institution of corporate management is rapidly developing both in Russia and abroad.The corporate management efficiency becomes one of the key factors of competitiveness of both a particular company and the national economy as a whole (Blackburn, 1994).The article presents the research results of one of the topical issues of corporate management -interaction between a corporation and state in the frameworks of state-private partnership for increasing the energy efficiency and energy saving in the budget sphere.The advanced model of corporate management offers ways of developing the state-private partnership mechanism in this sector.
In previous works we proved that corporate management must be based on systemic approach.The previous results allow to form a complex system of corporate management, to describe its elements and interaction between them (Engelen, 2002).The system management will use the cost (Othman & Rahman, 2009) and institutional approaches (Aguilera & Jackson, 2003).
The classical corporate management system is based on the division of owner and manager functions (Corbett, 1998;Fama & Jensen, 1983a, 1983b).The main role of the owners (shareholders) is to appoint the Board of Directors.The main role of the Board of Directors, which is the leading body in corporate management of an organization, is to appoint the top managers and further control their performance (Molz, 1985).The recent practice, both in Russia and abroad, shows that this mechanism is inefficient, as it does not allow the corporation authorities to make operative decisions and create an efficient system of risk management in corporations, which is extremely necessary during financial instability.
The new corporate management system should be based on cooperation and integration of management at all levels of the corporation.Besides, the corporate management system, as it was shown before, should include all stakeholders of the corporation (Freeman, Harrison, & Wicks, 2007;Hill & Jones, 1992;Jones, 1995;Post, Preston, & Sauter-Sachs, 2002).The post-industrial economy is characterized by the growth of significance of the corporations' intangible assets, their intellectual capital (Malhotra, 2001;Skyrme & Amidon, 1998).In many modern corporations the independent directors have no experience in the markets of these corporations' products (their major activities refer to other sector).This has a negative effect on the efficiency of the Board of Directors in general.Moreover, it is necessary to systemically manage the competences of the Board of Directors.
Taking the above factors into account, we can view the corporate management system as follows (see Fig. 1), ( ., , & , 2011).Unlike the traditional corporate management system, the proposed one broadens the shareholders' authorities (to simplify the model, we will here and further consider a corporation to be a joint-stock company).Corporate management should be based on the balanced satisfaction of all stakeholders' interests (Clarkson, 1995;Donaldson & Preston, 1995;Freeman & Edward, 1984;Jawahar & McLaughlin, 2001).Here lies the great and so far not fully implemented potential of the corporations' efficiency growth.Shareholders should take an active part in this sphere of corporate management.
Bearing in mind that stakeholders play an important role in corporate management, and interaction with them largely determines the efficiency of corporate management, stakeholders are one of the key elements of corporate management system (Ireland & Hitt, 1999;Jensen, 1991;White, 2009).The corporate management should include the mechanism of revealing and predicting the key stakeholders' interests and using them as the basis for balancing and satisfying the stakeholders' interests (Buck, Filatotchev, & Wright, 1998;Gourevitch & Shinn, 2005;Ho, 2005).To fulfil this task a number of intra-corporate institutions should be formed: 1) Institutions for the stakeholders' interests balancing; 2) System of standards of interactions with stakeholders (shareholders, employees, consumers, partners, etc.); 3) Formal and informal mechanisms of disambiguation.
The main directions of activity in accounting and satisfying the stakeholders' interests are: 1) Determining the company's key stakeholders; 2) Accumulation and systematization of information about the key stakeholders' interests; 3) Evaluating efficiency of a corporation's activity on satisfying the stakeholders' interests, both from the point of view of usefulness for a corporation and from the point of view of stakeholders; 4) Tracing the changes in the interests of corporation stakeholders; 5) Correction of the strategy of interaction with stakeholders and the corporate strategy as a whole.Thus, the dynamics of the corporate strategy development will be determined not only by the dynamics of the market conjuncture development and the priorities of the corporations' shareholders, but also the stability of the key stakeholders' interests, the degree of their influence on the corporation activity and the process of developing ambiguity in their relations with the corporation.
The stakeholders' interests are determined by the factors of their utility.For the further analysis of the corporate ISSN 2039-9340 (print) Mediterranean Journal of Social Sciences MCSER Publishing, Rome-Italy Vol 6 No 1 S3 February 2015 107 management system, they should be viewed in more detail.
The shareholders' utility factors are the level of their profitability and the degree of risk of no return of their investments.The employees' utility factors are the volume of their work, the level of awards for their work, and the working conditions.
The consumers' utility factors are the price of goods and services, their quality and convenience of use.For the corporate partners it is important that the corporation fulfils its contract obligations, and flexibly corrects the contract terms if necessary.
The creditors are interested in the transparency of the corporation's activity, the ability to duly and correctly trace the dynamics of its key financial indicators and the financial position as whole, in order to estimate the risks in the corporation's activity.
The local population is interested in preserving the favorable ecological conditions in the territory of the corporation's activity (level of noise, pollution, etc.).
The authorities of various levels are interested in budget income as a result of the corporation's activity, the growth of employment and incomes of the population, the increase of the social protection of the population, the creation of infrastructure, the minimization of negative consequences of the corporation's activity for the environment, etc.
The above examples allow to conclude that the utility factors are not always tangible.However, the majority of the above intangible factors can be evaluated by their cost.It is the cost approach that gives a common criteria for their evaluation, management and making other managerial decisions.
Taking everything above-mentioned into account, we should extinguish the following hierarchical levels of the corporate management system: 1) Factors of utility of the corporation and its stakeholders; 2) Competences of corporate management subjects; 3) System of managerial decisions.
Concluding the characteristics of the corporate management system, we should highlight that its main elements are the intra-corporate institutional environment and the external institutional and informational environment of a corporation.
The functioning of the above model can be illustrated by the example of energy servicing companies with the state.Let us view the prerequisites and mechanisms of the private companies' interaction with the state in the field of energy supply and energy saving in budget sphere, which are especially acute due to the requirement of the law to reduce energy consumption in budget organizations by 15% on the basis of quantity till 2014 (Law #261-"On energy saving and energy efficiency increase.", 2009) .
According to the materials of the European Bank for Reconstruction and Development, prepared for the session of Expert council on legislative on state-private partnership of the Russian State Duma Committee on economic policy and entrepreneurship, of April 18, 2011(EBRR, 2011), the financing of energy supply and other communal resources for state and municipal budget sphere, about 4-5% of the Russian consolidated budget are spent.It is necessary to note that this financing comes only from the budget sources of inefficient resulting consumption of the resources.Moreover, the efficiency of energy supply of budget-financed buildings in the Russian Federation is significantly lower than in Europe: our norms of energy consumption is 40-50% higher than the European ones, while the actual consumption of the budgetfinanced buildings in Russia is much higher than the normative one.
Inefficiency and high level of budget expenses for communal resources together with the growth of internal prices for energy leads to the need to increase the energy efficiency in the budget sphere.Very important are also the issues of reconstruction of the budget and social venues and providing the comfortable exploitation of these venues.These are, first of all, schools, kindergartens, hospitals and other establishments for the least protected groups of the Russian population.Solving these tasks demands large investments.At the same time, the possibilities of attracting the local and regional budget funds are limited; there are also limitations of non-budget financing.
A prospective direction of the set task is using the mechanism of state-private partnership2 in the sphere of energy efficiency increasing and budget sphere servicing ( & , 2008; , 2012).According to the expert estimations by the European Bank for Reconstruction and Development, the current energy consumption in the budget sphere can be reduced by 30-40% with the help of resources saving mechanisms.Such measures save a lot of energy and can be used for financing the energy saving.Elaborating the appropriate legislation for private energy servicing companies (ESC) (specialized energy servicing companies, equipment suppliers, energy suppliers, engineering companies, etc.) will create the conditions for investing into the techniques for energy efficiency in budget sphere.The 108 ESC investments will be returned from the budget economy during the action of energy servicing contract (the contract is signed by ESC and the municipality and the budget organization on the competitive basis).
It should be note that the budget organizations usually lack technical equipment, experience and competences for energy supply optimization.Attracting specialized ESC will allow them to focus on their main activity.
Since 2009 the institutional environment of energy servicing contracts has been rapidly developing (the development of legislation in energy servicing), which provides conditions for implementing the mechanisms of stateprivate partnership in the sphere of energy efficiency: the notion of "energy servicing" is introduced; the general requirements for the content and order of signing the energy servicing contracts; possibility is ensured to sign long-term (longer than the three-years budget cycle) energy servicing contracts; possibility to use the budget funds, saved by energy saving, for payments on energy servicing contracts; rules are determined to form the maximal price of energy servicing contract and carrying out the municipal auctions for signing the energy servicing contracts in compliance with the legislation on placing orders for state and municipal needs (94-).
The adopted legislative base makes a solid foundation for signing long-term energy servicing contracts and provides guarantees of investment return to energy servicing companies and banks, which are financing them, from budget economy.At the same time, there are legal restrictions and other problems to be solved in order to create conditions for competitive market of energy servicing in Russia.
As a result of thorough investigation of these issues, the European Bank for Reconstruction and Development specialists made the following proposals for improving the legislation in the above sphere, which are the most topical: 1. To make amendments in 94-to abolish the consumer's liability to require the contract execution if the initial (maximal) price of the contract exceeds 50 mln rubles.2. To change Article 56 of 94-to enable the increase of the maximal price of the contract by the part of exploitation costs for communal infrastructure servicing, which will be reduced as a result of energy servicing measures.3. To include the energy servicing into Article 149 of the Tax Code, containing the list of operations freed from taxation on added value.4. To make amendments in part 2 of Article 24 of 261-, aimed at preserving the existing amounts "communal services" financing for the budget/autonomous organizations and state enterprises who signed energy servicing contracts.The materials prepared by the European Bank for Reconstruction and Development also pose additional questions, which should be studied for the further stimulation of energy servicing business in Russia: 1. Risks of inappropriate services rendered by resource companies are imposed on ESCs. 2. It is necessary that a consumer monthly initiates a complex procedure of negotiating the volume of energy consumption reduction with the resources supplier, which can be combined with the significant for consumption reduction of energy and other resources.3.At present, the rules of signing the energy servicing contracts do not regulate the ESC participation in rendering servicing of equipment installation (which can require consumer's payment additional to the price of the energy-servicing contract).At the same time, to obtain the maximal economy of the energy-efficiency equipment introduction, the correct professional exploitation of this equipment is required.4.There are high political risks in cases, when the terms of consumer's liability exceeds the term of "political cycle".This fact is destimulating for banks when they open "long" credit lines for energy servicing companies.
It is necessary to reduce these risks by developing the mechanisms of state support in the form of state guarantees institutions, creating the refinancing funds of energy servicing contracts, etc.Thus, the institutional environment of energy servicing contracts in Russia is far from the ideal.It should be thoroughly reformed in order to reduce risks and transactional costs of all participants of such contracts.
The traditional energy service is not the only tool of energy saving in the budget sphere.There are other possibilities of attracting the private capital into the energy efficiency budget sector.Leasing is most popular among them.Besides, energy servicing, popular in developed markets is often discussed, i.e. energy servicing of the type of guaranteed economy.Here we should consider the long-term energy servicing contracts not only with the companies, independently implementing the energy-saving measures and obtaining the return investments during the contract period (called ESCon-1), but also with another type of companies (ESCon-2).ESCon-2 does not finance the project (delivery services, installation and maintaining are financed from the budget).However, unlike ESCon-1, such companies give a guarantee of economy ("performance bond") to the customer (state establishment, for example).
Thus, a customer, having funds for equipment and services, obtain an important advantage in comparison with the ordinary contracts for equipment delivery: the risks of the wrong choice of technical solution and its maintenance are
Fig. 1
Fig.1 Corporate management system in post-industrial economy | 3,354.4 | 2015-02-28T00:00:00.000 | [
"Economics"
] |
A NEW TECHNIQUE FOR ULTRAFAST VELOCITY DISTRIBUTION MEASUREMENTS OF ATOMIC SPECIES BY POST-IONIZATION LÁSER INDUCED FLUORESCENCE ( PILIF )
A new method for single shot velocity distribution measurement of metallic impurities of relevance for studies involving continuous sources, such as limiter experiments in fusion devises or sputtering experiments, based in the combination of Resonant Enhanced Multiphoton Ionization (REMPI) and Laser Induced Fluorescence (LIF) is proposed. High ionization yield and good time resolution are expected according to the numerical simulation of the experiment that has been run for several atomic species. Other possible applications of REMPI to plasma edge physics and to conventional techniques for velocity distribution measurements are briefly addressed.
1.INTR0DUCTI0N
Velocity distributions, of great relevance in many fields of physics and chemical physics, can be readily measured by Time of Flight ( TOF ) techniques "*, provided that free colusión conditions exist in the región between the source and the detector.The accuracy of the method is restricted by the finite sharpness of the gating function and the finite dimensions of the detection volume, so that long flight distances are needed for good resolution, in detriment of the signal to noise ratio.On the other hand, Doppler shifted LIF spectroscopy has been widely used for this purpose 1 , although the full sean on the láser wavelength required for the velocity distribution measurement implies a continuous character of the experiment and narrowing the láser bandwidth is made on expenses of pulse energy.
In plasma fusión research, velocity distribution of neutrals are important not only for the evaluation of impurity fluxes but also to determine the mechanism responsible for their ejection 2. The continuous character of the flow of sputtered partióles and the impossibility to chop it make time of flight (TOF) techniques not applicable to in situ velocity measurements in fusión plasma experiments so that Doppler shifted excitation in LIF detection of neutrals is the only method extensively used until now for this purpose^-In general these measurements require many, reproductible , plasma discharges and new methods based on fast scanning of the dye láser frecueney during a single discharge have been developed as alternative 4 .In any case velocity resolution has to be gained on expenses of the signal versus noise ratio and correction for the láser power at each wavelength is always needed due to the low saturation parameter required for these experiments .
In the present work a new method for single shot, in situ velocity distribution measurements based on REMPI in combination with Láser Induced Fluorescence (LIF) , of relevance for impurity flux determinations in fusión plasma research is proposed, altogether with some others applications.
PILIF EXPERIMENT
The proposed experiment consist on crossing two láser beams (the ionizing and probing ones) in the scattering volume and to record the time evolution of the LIF signal after complete ionization has taken place.After the detection volume has been depleted of neutral atoms by the REMPI process an spatial hole in terms of neutral density is formed.As the sputtered atoms start to fill it, the density in the observation volume will continuously increase and therefore the LIF signal when used as a density diagnostic.i.e.bandwith greater than the Doppler profile.This last requirement restricts the proposed experiment to those atomic systems where metastable levéis do not act as a sink of the laser-populated level , as in 3 level systems .
Assuming a well collimated atomic beam with a given velocity distribution f(v) and a scattering volume with dimensión parallel to the travelling direction / the density after a given time after ionization is given by : o ñ where n(0) stands for the density of neutrals before ionization takes place.The accurancy of the velocity distribution obtained with method will be limited by that of the scattering volume dimensions and its resolution by the dimensions themselves and the minimum sampling interval, ultimately limited by the lifetime of the excited level, providing that ionization takes place ¡n a large extend during the ionizing pulse.
The velocity distribution function,f(v),can be reconstructed from the time evolution of the LIF signal by applying the expression : where v in the case of a point source, far away from the scattering volume, ¡s simply given by //t.
The most appealing way to carry out the experiment would be by using the same experimental set-up as for LIF detection.This in many instances consists of a dye láser pumped by an excimer one, typically XeCI at 308 nm, so that a 4.03 eV high power photon source is readily available .This photon energy combined with that of the pumping photon is able to bring all the neutral atoms to the ionization continuum in most of the metáis typically monitored in limiter experiments.In order to completely ionize all the atoms ¡n the sampling volume , the right spiitting of the excimer power into direct (ionizing) beam and pumping one has to be made.That in principie will depend on the particular atomic system under consideration, but a simple calcuiation based on the rate equations for a three level system plus ionization 5 shows that if, for example, a 0.5 Jul/pulse XeCI láser and a ratio 1:1 between the power used to pump the dye láser and that of the ionizing beam are used, so that a modest 50 jaJui/pulse UV radiation is obtained after frecuency doubling ,still high enough to satúrate the resonant transition (S=150 for Be,S=100 for Fe.focussing in 2x2mm^ ), ionization will take place to a 100% in a time shorter than the láser pulse (15 to 20 nsec), even if the excimer radiation is focussed in an área several times larger ,thus minimizing alignment problems.As an example, Fig 1 shows the results for the Fe atom assuming an excimer láser square pulse of 20 nanoseconds and an ionization cross section, for the excited atom, of 10' 18 cm 2 • The conditions used for the simulation are similar to that used for REMPI detection of Fe atoms in sputtering experiments 6 .As it can be seen a high degree of ionization is expected even-at relatively low saturation parameter and modest excimer láser power.
Sampling of the LIF signal in a relevant time scale could be achieved by optically delaying successive reflections of the probing láser or by using a long pulse dye láser (several hundred nanoseconds ) and crossing the excimer at the beginning of the pulse, thus obtaining a continuous LIF signal.The first scheme implies long distances ( optical delay > pulse duration ) and correction for the láser divergence, so that the second scheme will be more feasible.Due to the short time required by the TOF experiment (see below ) synchronization of the two lasers should not be critical.
MODEL CALCULATION FOR EXTENDED SOURCES
The time evolution predicted by Eq. 1 is not directly applicable to extended sources as one has in limiter experiments.Convolution over all the emitter área and the different paths across the sampling volume depending on geometry, as well as attenuation through the plasma edge has to be taken into account.
The results of the calcuiation for a Be bar limiter, where a 0.6 cm diameter hole is used to look at the scattering volume, a prism of 2x2x4 mm placed at 1 cm from the limiter in this simulation (see Fig. 2) are displayed ¡n Fig. 3. Plasma edge temperature and density profiles as well as ionization and excitation rate constants are the same as in Ref 7 .No contribution to the refilling of the hole due to CX or electrón recombination is considered as they are expected to be not fast enough to effectively compete with the direct flux of sputtered atoms in the relevant time scale .A cosine distribution for sputtered partióles is assumed.
As it can be seen in Fig 3a, discrimination between sputtering (Thompson model ) and thermal distributions should be obvious even in a short time after the ionizing pulse (t=0).It must be recalled at this point that accurately measuring the Doppler profile for a thermal distribution requires an extremely narrow láser bandwidth, not always available .A higher resolution in this case, by PILIF will be easily achieved by simply enlarging /.A factor of two in the binding energy will be also distinguishable after several tens of nanosecond.Fig 3b shows the velocity distributions for these two cases (Eb= 3.32 eV and 1.66 eV respectively) as they would be measured by Doppler shift in the same geometry and plasma edge conditions.
EXPERIMENTAL SET-UP
The available experimental set-up, where the proposed experiment will be undertaken, has been previously described 8 .Basically, a DC Glow Discharge in an inert gas ( He, Ar) is produced , the stainless steel chamber acting as the cathode ( = 17% Cr ).The sputtered Cr atoms are detected by LIF at 429 nm ( two level system ) by using an excimer pumped ( XeCI ) dye láser ( Lambda Physik LPX 205¡-ibid.FL 3002E ).The available bandwidth can be reduced from 0.2 to 0.04 cm-1 by an intracavity etalon , so that a velocity resolution of = 0.5 Km/s can be obtained for the standard Doppler shifted LIF experiment ( see fig.3b) , providing that saturation of the transition is avoided ( S= 0.6 kW/cnr 2 s'" 1 ).Splitting of the excimer radiation at 308 nm will still allow 200 mJ/ pulse for the ionizing beam and > 20 mJ /pulse for pumping of the transition, so that the required conditions for the REMPI-LIF experiment can be easily fulfilled, even without focussing of either láser beam .A flashlamp-pumped dye láser ( 1 J/ pulse, pulse duration = 1 jas, bandwidth = 1 nm ), presently under construction, will be used for " long time" excitation under saturation conditions.The LIF time evolution will be recorded in a fast digital oscilloscope ( Tektronix DSA 601,1 GS/s ) and the data transferred to a PC.Synchronization of the excimer and the dye láser beams will be achieved by optically delaying part of the 429 nm radiation used to trigger the excimer láser via a fast photodiode.
OTHER EXPERIMENTS
The good spatial and temporal resolution of láser diagnostics could be used in the REMPI experiments to probé the plasma edge .Although no detailed calculations have been performed yet ,the screening properties of the plasma edge could,in principie , be tested without perturbing other plasma parameters in combination with neutral atomic beam diagnostics.Besides, REMPI could be used to créate a highly located.timeresolved high intensity pulse of single charged ions which propagation through the plasma, followed by optical methods,could yield information concerning particle transpon, among others.
In atomic beams experiments, the proposed ionization scheme could be used to optically chop the beam with an extremely narrow equivalent gate function , thus improving the velocity resolution in the conventional TOF experiment.Fig. 2 : Geometry assumed in the model calculation for a Be limiter .The interaction with the plasma in assumed to take place within a rectangle of 2 cm wide and 4 cm high.See text for the rest of the parameters used.¡ "Nueva técnica para la determinación ultrará-" pida de distribución de velocidades de especies ¡ atómicas por fluorescencia inducida por láser en ¡ la post-ionización".] "Nueva técnica para la determinación ultrará-1 pida de distribución de velocidades de especies i atómicas por fluorescencia inducida por láser en !la post-ionización".( Se presenta un nuevo método para la medida de distribución de velocidades de impurezas metálicas, de relevancia en estudios que Impliquen fuentes continuas i tales como los experimentos de interacción plasma-limitador en plasmas de fusión o , experimentos de spultering, basada en la combinación de las técnicas de Ionización • Mullifotónica Resonante y de Fluorescencia Inducida por Láser.De acuerdo con las i simulaciones hechas para diversas especies atómicas cabe espeiar un alto grado de ( ionización de las especies atómicas en estudio y una alta resolución temporal en la • medida de su distribución de velocidades.También se proponen otras posibles , aplicaciones del proceso de Ionización Multifotónica Resonante a estudios en el bordo de '
Fig. 3 :
Fig. 3 : Results from the model calculation : a) : Time evolution of the LIF signal.Thompson model for Eb (Be) =3.32 eV (top) and Eb=1.66 (middle) and thermal distribution (bottom) forT=2000 K. b) : Velocity distribution along the detection line of sight for the Eb=3.32 and Eb= 1.66 éV cases for the same geometry as above .
1 A
i A new method (or single shol velocity distribution measurement of metallic 1 impurilies of relevance for studies involving continuous sources, duch as limiler i experimente in fusión devices or spultering experiments , based in the combination of i Resonant Enhanced Multiphoton lonization (REMPI) and Láser Induced Fluorescence ¡ (LIF) is proposed.High ionization yield and good time resolution are expected 1 according lo the numerical simulation of the experiment that has been run for i several atomic species.Other possible applications of REMPI to plasma edge physics | and to conventional techniques for velocity distribution measurements are briefly .igacionesEnergéticas!, Medioambientales y Tecnológicaa.I Instituto de Investigación Básica.-Madrid.¡ "A new technique for ultrafast velocity dis-1 tribution measurements of atomic species by I post-ionization láser induced fluorescente ¡ (PILIF)".1 TASARES, F.L. (1992) 10 pp.;3 figs.;8refs. 1 new method for single shot velocity distribution measurement of metallic 1 impurilies of relevance for studies involving continuous sources, áuch as limiter i experimenls ¡n fusión devices or sputtering experiments , based in the combination of ¡ Resonant Enhanced Multiphoton lonization (REMPI) and Láser Induced Fluorescence 1 (LIF) is proposed.High ionization yield and good time resolution are expected i according to the numerical slmulation of the experiment that has been run for ¡ several atomic species.Other possible applications of REMPI to plasma edge physics 1 and to conventional techniques for velocity distribution measurements are briefly Energét iciis , Medioambient ales y Tecnológicas.Instituto de Investigación Básica.-Madrid."A new technique for ultrafast velocity dis-¡ tribution measurements of atomic species by ' post-ionization láser induced fluorescente < (PILIF)".¡ .TABARES, P.L. (1992) 10 pp.;3 figs.;Brefs.( !A new method for single shot velocity distribution measurement of metallic i impurities of relevance for studies involving continuous sources, áuch as limiter i experimenls in fusión devices or sputtering experiments , based In the combinalion of ' Resonant Enhanced Multiphoton lonization (REMPI) and Láser Induced Fluorescence i (LIF) is proposed.High ¡onization yield and good time resolution are expecled i according to the numerical simulation of the experiment that has been run for !several atomic species.Other possible applications of REMPI to plasma edge physics ' and to conventional techniques for velocity distribution measurements are briefly i addressed.¡ CIEMAT-686 ¡ Centro de Investigaciones Energéticas, Medioambientalea y Tecnológicas.' Instituto de Investigación Básica.-Madrid.' "A new technique for ultrafast velocity dis-! tribution measurements of atomic species by \ post-ionization láser induced fluorescente • (PILIF)".!TABARES, P.L. (1992) 10 pp.;3 figs.;flrefs.' A new method for single shot velocity distribution measurement of metallic ¡ impurities of relevance for sludies involving continuous sources, áuch as limiter ' experiments in fusión devices or sputtering experiments , based In the combinalion of i Resonant Enhanced Multiphoton lonization (REMPI) and Láser Induced Fluorescence , (LIF) ¡s proposed.High ionization yield and good time resolution are expected ¡ according to the numerical simulation of the experiment that has been run for ' several atomic species.Other possible applications of REMPI to plasma edge physics i and to conventional techniques for velocity distribution measurements are briefly , nridrosr.od-'« DOE CLASSIFICATION AND DESCRIPTORS: 700100.Time-of-fligh ¡ Method.Fluorescence Spectroscopy.Multi-photon Processes.i lonization.Simulation.Measuring ¡Methods.Velocity.¡ DOE CLASSIFICATION AND DESCRIPTORS: 700100.Time-of-t'ligh i Method.Fluorescence Spectroscopy.Multi-photon Processes. 1 Ionization.Simulation.Measuring Methods.Velocity. 1 DOE CLASSIFICATION AND DESCRIPTORS: 700100.Time-of-fligh ¡ Method.Fluorescence Spectroscopy.Multi-photon Processes., Ionization.Simulation.Measuring Methods.Velocity.• DOE CLASSIFICATION AND DESCRIPTORS: 700100.Time-o£-£1igh ¡ Method.Fluorescence Spectroscopy.Multi-photon Processes.Ionization.Simulation.Measuring Methods.Velocity. | 3,631.6 | 1992-01-01T00:00:00.000 | [
"Physics"
] |
Investigating the Impacts of Autonomous Vehicles on the Efficiency of Road Network and Traffic Demand: A Case Study of Qingdao, China
Rapid urbanization has led to the development of intelligent transport in China. As active safety technology evolves, the integration of autonomous active safety systems is receiving increasing attention to enable the transition from functional to all-weather intelligent driving. In this process of transformation, the goal of automobile development becomes clear: autonomous vehicles. According to the Report on Development Forecast and Strategic Investment Planning Analysis of China’s autonomous vehicle industry, at present, the development scale of China’s intelligent autonomous vehicles has exceeded market expectations. Considering limited research on utilizing autonomous vehicles to meet the needs of urban transportation (transporting passengers), this study investigates how autonomous vehicles affect traffic demand in specific areas, using traffic modeling. It examines how different penetration rates of autonomous vehicles in various scenarios impact the efficiency of road networks with constant traffic demand. In addition, this study also predicts future changes in commuter traffic demand in selected regions using a constructed NL model. The results aim to simulate the delivery of autonomous vehicles to meet the transportation needs of the region.
Introduction
Due to the continuous rapid development of China's socioeconomic conditions and the increasing demand for transportation, the number of automobiles has been growing rapidly.This increase in motor vehicle ownership has brought unprecedented pressure to the transportation environment, leading to issues such as traffic congestion and accidents.The development of autonomous vehicles will help alleviate traffic congestion.Human drivers take a long time to react, and communication between human-driven vehicles is also difficult.In traffic flow theory, this is the direct cause of road bottleneck effects.With the increasing density of vehicles in cities, limited environmental awareness and decision-making abilities of drivers will also worsen the levels of traffic congestion.With the application of the autonomous vehicle, with advanced on-board sensors and information and communication equipment, vehicle response time is expected to be improved, the distance between vehicles will be shortened, and braking time will also be reduced.These changes will bring a smoother ride experience and less congestion.The development of the autonomous vehicle will overturn the existing vehicle ownership mode and reduce the waste of Earth's resources [1,2].The global theoretical evaluation indicates that, even during the busiest times on urban roads, only 12% of vehicles are in motion at any given time, which means that at least 88% of vehicles are idle during everyday life [3,4].Furthermore, it is estimated that 95% of the average lifespan of a vehicle is spent idle, with only 5% of driving time still having a significant proportion of idle time [5].Over the years, a considerable amount of Earth's Sensors 2024, 24, 5110 2 of 18 resources have been wasted on storing and maintaining these idle vehicles.If vehicles achieve autonomous driving, passengers would not need to own their own vehicles.This means they would only have to pay for mileage services, rather than the cost of the vehicle itself.Establishing a car-sharing model would roughly equalize the demand for vehicles with the number of vehicles set aside, thus saving significant natural and social resources [6][7][8].
The integration of self-driving vehicles (AVs) and connected autonomous vehicles (CAVs) is poised to revolutionize how we use our roads, greatly enhancing both capacity and efficiency of traffic.Various studies, such as those by Lu et al. [9], Guériau and Dusparic [10], and Mavromatis et al. [11], have shown that, as the number of AVs increases, urban traffic flows more smoothly, with less congestion and higher speeds.
Moreover, transitioning to AVs and CAVs offers substantial environmental perks.Research by Kopelias et al. [12] indicates that CAVs can reduce CO 2 emissions by up to 94% and cut fuel consumption by as much as 90%.Stogios et al. [13] also found that AVs improve emissions and traffic flow, even under aggressive driving conditions.
Safety is another key benefit.Higher rates of CAVs on the road can enhance safety, as shown by Ye and Yamamoto [14], who found that cautious driving strategies of CAVs lower the risk of accidents.Yu et al. [15] suggested that creating AV-only lanes can further boost efficiency and safety, particularly when AV penetration is still growing.
The spread of AVs and CAVs will also profoundly impact urban planning and land use.Gavanas [16] highlighted that AVs demand a rethink of land use, infrastructure, and urban development.Soteropoulos et al. [17] noted that, while private AVs could lead to more miles traveled and less public transport use, shared AV fleets might reduce the need for parking spaces.
However, as AV and CAV technologies rapidly advance, robust policies and regulations are essential.Ahmed et al. [18] pointed out the need for updated laws to keep pace with technological developments, including advanced driving assistance systems (ADAS).Faisal et al. [19] emphasized that policymakers must prepare for the disruptive potential of AVs with comprehensive frameworks and interventions.
In terms of social equity, AVs also hold promise.Cohn et al. [20] found that scenarios involving high-occupancy AVs and improved transit can offer significant benefits, such as closing gaps in job accessibility and travel efficiency for disadvantaged populations.
Continuous research and simulation studies are crucial to understand and navigate the broader impacts of AVs and CAVs.Guériau and Dusparic [10], Fakhrmoosavi et al. [21], Chen et al. [22], and Gurumurthy et al. [23] underscored the importance of using simulation tools to model these impacts on traffic flow, safety, and efficiency, and to examine how different types of vehicles and driver behaviors interact.
In the field of contemporary research on travel behavior and demand forecasting, there is a strong emphasis on the utilization of advanced modeling techniques, the incorporation of comprehensive socioeconomic and trip-related data, the practical application of policy, and the continuous development of innovative methodological approaches [11,14,24].
Nested logit (NL) models are frequently employed in the analysis of travel behavior and the forecasting of demand [25][26][27] because they are effective at capturing complex correlation patterns and hierarchical decision-making processes that are challenging for standard multinomial logit (MNL) models to accommodate.For example, Hess et al. [28] employed a cross-nested logit model to investigate the correlations between vehicle and fuel type choices, while Dissanayake and Morikawa [29] used an NL model to examine the relationship between household vehicle ownership and mode choice in Bangkok.
The objective of researchers in this field is to develop and calibrate behavioral models that can be used to predict travel demand and to evaluate the impacts of transportation policies.These models frequently integrate revealed preferences (RP) and stated preferences (SP) data in order to enhance the accuracy of the results.For instance, Bierlaire and Thémans [30] constructed models to predict drivers' responses to real-time traffic information in Switzerland, while Ghader et al. [31] proposed a copula-based continuous cross-nested logit model for scheduling tours in activity-based travel models.
It is of great importance to incorporate socioeconomic factors, household characteristics, and trip specifics into the analysis, as these elements have a significant influence on mode choice and travel behavior.In their study of commuting trips in Xi'an, China, Ma et al. [32] incorporated factors such as household income, commuting distance, and occupation.Similarly, Shang and Zhang [33] employed a nested logit model to examine the travel mode choices of residents, taking into account a comprehensive range of influencing factors.These studies frequently extend to practical applications, including policy analysis and strategic planning for congestion reduction, environmental impact, and infrastructure development.For example, Dissanayake and Morikawa [29] employed their model to analyze congestion-reduction policies in Bangkok, while Elmorssy and Tezcan [34] introduced a novel modeling approach to enhance travel demand forecasting and its policy implications.
Notable advancements have been made in the field of traffic simulation platforms, particularly the Simulation of Urban Mobility (SUMO), which now offer robust support for autonomous vehicle (AV) testing.Kusari et al. [35] enhanced SUMO's user experience and traffic variability by calibrating the intelligent driver model (IDM) and integrating SUMO with OpenAI gym to create a Python 3.10 package for real-world simulations.In a further development, Li et al. [36] introduced a novel framework combining SUMO with CARLA to simulate complex environments, thereby enhancing the perception capabilities of AVs through realistic sensor outputs.In a related study, Shi et al. [37] investigated the integration of intelligent transportation systems (ITS) with AVs, demonstrating that increased AV penetration can significantly reduce average trip durations and delays.
A number of studies have concentrated on the effect of AVs and connected autonomous vehicles (CAVs) on traffic flow, safety, and efficiency in mixed traffic scenarios.In their review of traffic simulators, Vrbanić et al. [38] evaluated the suitability of various platforms, including VISSIM, AIMSUN, and SUMO, for use with network simulators.They found that AIMSUN was better suited to less complex models, while VISSIM demonstrated greater capabilities for more complex scenarios.Kavas-Torris et al. [39] employed SUMO to examine the impact of AVs on roadway mobility, demonstrating that augmented levels of autonomy enhance mobility for both AVs and non-AVs, albeit at lower speeds for the latter.In a recent study, Andreotti and Pinar [3] examined the transition from fully autonomous to partially autonomous traffic, highlighting improvements in safety and efficiency due to the introduction of AV features.
Furthermore, research has been conducted with the objective of optimizing traffic flow and energy efficiency through the implementation of various strategies.Wen et al. [40] developed a real-time control model for CAVs at signal-controlled intersections, which resulted in a notable reduction in energy consumption without any adverse effects on traffic efficiency.Mushtaq et al. [41,42] proposed a two-level approach combining platooning and collision avoidance strategies with the aim of managing AV traffic flow, and their findings demonstrated a considerable improvement in performance when this approach was simulated.
Therefore, it is found that a large amount of research is focused on the safety, cost, efficiency, and infrastructure aspects of autonomous driving vehicles.A few studies are based on the potential impact and outcomes of autonomous driving vehicles on urban land use and urban form.There is little research on whether autonomous driving vehicles can meet the transportation needs of different cities.
Materials and Methods
In this paper, a nested logit (NL) model was used to predict demand for unmanned transport.The nested logit model is a popular alternative to the traditional multinomial logit (MNL) model for modeling consumer preferences and choice behavior because the traditional MNL model falls under the umbrella of probabilistic choice models and offers an advantage by accounting for the correlation in unobserved factors that affect the utility of different alternatives.In addition, the traditional MNL model is based on the key assumption of 'independence from irrelevant alternatives' (IIA).This means that the odds of choosing one option over another are not affected by the introduction or removal of a third option [43].However, this assumption often fails in typical scenarios.The NL model addresses this issue by grouping similar alternatives.Therefore, the nested logit model is a valuable tool in transportation planning, market research, healthcare, and public policy due to its flexible structure, which provides greater flexibility than MNL models.Its ability to predict consumer behavior based on factors such as travel time, cost, comfort, and environmental impact, grouped under different 'nests', is essential in these areas.It is common for alternative modes of transport to have similarities, which can lead to correlation of the error terms of the utility functions in the choice set.For instance, the level of comfort when traveling by private car may be more similar to that of traveling by taxi than to that of traveling by bus.The NL model groups similar alternatives into 'nests' to account for correlations and produce more accurate estimates of travelers' choices than the traditional multinomial logit (MNL) model, which assumes that alternatives are independent.
Based on the utility maximization theory, this study assumes that the consumers in the NL model aim to achieve the highest level of satisfaction or benefit possible from the products or services they consume, given their limited resources.Consumer behavior is mathematically represented through utility functions that account for their preferences and trade-offs, even when there are resource constraints.In the literature, a standard NL model operates in two stages.During the initial stage, a consumer narrows down their options to a 'nest' which is a subset of alternatives.In the subsequent stage, the consumer makes a decision within this nested set of alternatives.Each stage reflects a set of conditions that influence the decision-making process.
This study focuses on the planned development of Qingdao City in 2030.Qingdao is situated in the eastern part of China, specifically the southeastern region of the Shandong Peninsula and the northern part of the Yellow Sea.The study area encompasses the Old Town District (the central part of Qingdao), the South District, the Central District, and the North District.
First, the population distribution in the Old Town area was projected.The Old Town area has a population of about 107,800 and is divided into 11 transportation districts, and the distribution of residential population in each transportation district is shown in Figure 1.Using an NL model, we can predict and compare driving patterns in two scenarios: (1) without autonomous vehicles and (2) with autonomous vehicles.In the scenario without autonomous vehicles, residents primarily use non-motorized vehicles, buses, and cars.In the scenario with autonomous vehicles, we examine residents' commuting patterns.Although utility maximization is the foundation of the theory, the model's accuracy and outcome are significantly influenced by the factors used to construct it, such as costs, preferences, and constraints.Therefore, it is crucial to carefully select these factors to accurately represent the set of alternatives and consumer behavior.Taking the 16th region on the map as an example, this region has a population of 10,293 residents.The residents in this region have 22 working area choices and 6 commuting mode choices, resulting in 132 commuting choices for the residents in this region.For the rest of the territory, the distribution of 127,200 residents can be inferred from the corresponding probabilities.
Using an NL model, we can predict and compare driving patterns in two scenarios: (1) without autonomous vehicles and (2) with autonomous vehicles.In the scenario without autonomous vehicles, residents primarily use non-motorized vehicles, buses, and cars.In the scenario with autonomous vehicles, we examine residents' commuting patterns.Travel modes mainly include non-motor vehicles, public transport, shared autonomous vehicle, private autonomous vehicle, and private cars.Private cars require parking fees.The utility of choosing mode i within nest j can be formulated as follows: where V ij is the deterministic component of utility for mode i within nest j, and ε ij is the random error term.In addition, the deterministic component includes various factors that influence the traveler's choice, typically modeled as: where C ij is the travel cost for mode i within nest j, T ij is the travel time for mode i within nest j, S ij represents socioeconomic characteristics influencing the utility (cost of housing, distance to public services, and other conditions).
Inclusive value (IV) for each nest j can be modeled as: where µ is the nesting parameter (0 < µ ≤ 1).Then, the probability of choosing mode i within nest j is calculated as: Sensors 2024, 24, 5110 6 of 18 where the conditional probability P (i|j) is (5) In addition, the marginal probability P (j) is where λ is the scale parameter (usually set to 1).Nesting parameter (µ) accounts for the degree of similarity among alternatives within the same nest.It ranges between 0 and 1, where values closer to 1 indicate higher similarity (more correlation) among the nested alternatives.Scale parameter (λ) is typically set to 1; this parameter ensures that the model is scale-consistent across different levels of the hierarchy [44].
( The model distinguishes between shared and private autonomous vehicles and captures the decision-making process within each category.Residents first decide whether to use motorized or non-motorized transport.Within motorized transport, they further decide between public transport, autonomous vehicles, and traditional cars. Below are the nested logit (NL) utility functions, considering that sublevels 1 and 2 are nests.The NL model groups similar alternatives into nests as follows: Top The utility of choosing mode i within nest j is determined in Equation ( 1) and the deterministic component includes various factors that influence the traveler's choice, typically modeled as: where C ij is the travel cost for mode i within nest j, T ij is the travel time for mode i within nest j, S ij represents socioeconomic characteristics influencing the utility (cost of housing, distance to public services, and other conditions), AV ij is a dummy variable indicating whether the mode is an autonomous vehicle.Other key components of the nested logit model are determined according to (2)-( 6).
Based on the population preferences and pattern selection, the NL model predicts the distribution of traffic and mode proportions in the given area under autonomous driving.The mode proportion or mode share refers to the percentage of travelers who choose a particular mode of transportation during their trips.By multiplying the vehicle occupancy rate of autonomous vehicles with the generated and attracted traffic flows in unit traffic volume, we can determine the share of vehicles on the road for each trip.In the absence of autonomous vehicles, non-motorized vehicles account for the largest proportion of commuting, followed by buses, with private cars accounting for 21.31% (Figure 2).
Based on the population preferences and pattern selection, the NL model predicts the distribution of traffic and mode proportions in the given area under autonomous driving.The mode proportion or mode share refers to the percentage of travelers who choose a particular mode of transportation during their trips.By multiplying the vehicle occupancy rate of autonomous vehicles with the generated and attracted traffic flows in unit traffic volume, we can determine the share of vehicles on the road for each trip.In the absence of autonomous vehicles, non-motorized vehicles account for the largest proportion of commuting, followed by buses, with private cars accounting for 21.31% (Figure 2).In the context of autonomous driving, the percentage of people who use cars as their commuting mode has increased to 45.18% (Figure 3).Among the group of people who choose autonomous driving vehicles as their commuting mode, private autonomous driving vehicles have become the preferred option, accounting for 30.04% of the vehicle share, while shared autonomous driving vehicles account for 13.26%.In the context of autonomous driving, the percentage of people who use cars as their commuting mode has increased to 45.18% (Figure 3).Among the group of people who choose autonomous driving vehicles as their commuting mode, private autonomous driving vehicles have become the preferred option, accounting for 30.04% of the vehicle share, while shared autonomous driving vehicles account for 13.26%.
distribution of traffic and mode proportions in the given area under autonomous driving.The mode proportion or mode share refers to the percentage of travelers who choose a particular mode of transportation during their trips.By multiplying the vehicle occupancy rate of autonomous vehicles with the generated and attracted traffic flows in unit traffic volume, we can determine the share of vehicles on the road for each trip.In the absence of autonomous vehicles, non-motorized vehicles account for the largest proportion of commuting, followed by buses, with private cars accounting for 21.31% (Figure 2).In the context of autonomous driving, the percentage of people who use cars as their commuting mode has increased to 45.18% (Figure 3).Among the group of people who choose autonomous driving vehicles as their commuting mode, private autonomous driving vehicles have become the preferred option, accounting for 30.04% of the vehicle share, while shared autonomous driving vehicles account for 13.26%.The changes in each commuting mode are illustrated in Figure 4. Due to the significant impact of parking fees on residents' choice of transportation, the increase in parking fees for private cars in the autonomous driving scenario has caused the percentage of private cars to decrease from 21.31% to 1.88%.In addition to the initial group of car commuters, autonomous driving vehicles will also attract a portion of the public transportation commuters, especially those who do not normally use cars for commuting.Approximately 15.69% of non-car commuters and 8.18% of public transportation commuters choose to use autonomous driving vehicles for their daily commute.The changes in each commuting mode are illustrated in Figure 4. Due to the significant impact of parking fees on residents' choice of transportation, the increase in parking fees for private cars in the autonomous driving scenario has caused the percentage of private cars to decrease from 21.31% to 1.88%.In addition to the initial group of car commuters, autonomous driving vehicles will also attract a portion of the public transportation commuters, especially those who do not normally use cars for commuting.Approximately 15.69% of non-car commuters and 8.18% of public transportation commuters choose to use autonomous driving vehicles for their daily commute.Road network and predicted vehicle ratios are imported into the traffic modeling software SUMO 1.19.0 for traffic assignment, completing the traffic demand prediction.Simulation of Urban Mobility (SUMO) is an open-source, highly portable, microscopic, and continuous road traffic simulation package designed to handle large road networks.It allows for the implementation of complex traffic management schemes, enabling the creation of a rich set of intermodal traffic management solutions.In this study, the traffic assignment in SUMO could be divided into four essential steps as follows [45,46]: Network import: Loading the geographic data of the area to be simulated involves using SUMO, which supports a variety of input formats such as XML and OpenStreetMap OSM files.In addition, it allows for defining roads, intersections, lanes, and traffic lights in the network.
2.
Demand modeling, which involves representing user behavior in the model.This could include different modes of transportation and routes.SUMO provides a range of tools for creating and managing various demand models.
3.
The simulation is then executed, enabling the user to observe the traffic as it develops over time.Following the simulation, a variety of metrics can be gathered for analysis, such as travel times, route choices, and emissions.SUMO offers various tools to aid analysis, such as plotting tools and export functions.
4.
Traffic assignment in SUMO is managed through its 'route choice models', which can be explained using the widely used nested logit model for network assignment.SUMO is a valuable tool for transport planning.Traffic assignment can provide planners with an understanding of the anticipated road usage under various conditions or transport policies.
The standard SUMO parameters have been adjusted to simulate a possible future scenario for autonomous vehicles.This study utilized the default car-following model known as the Krauss model.This parameter tuning specifically focused on the longitudinal dynamics involving acceleration, deceleration, and gap acceptance.These driving behaviors were defined and fine-tuned as key parameters within the car-following framework of SUMO (refer to Table 1).The modified model aimed to enable vehicles to travel at maximum safe speeds while ensuring constant safety measures are in place, ensuring avoidance of collisions by allowing for braking within predefined acceleration limits between the leading and following vehicles.Here is a breakdown highlighting the customizable parameters within the Krauss car-following model:
•
Mingap: the offset to the leading vehicle when standing in a jam (in m).• Accel: the acceleration ability of vehicles of this type (in m/s 2 ).
•
Decel: the deceleration ability of vehicles of this type (in m/s 2 ).
•
Emergency decel: the maximum deceleration ability of vehicles of this type in case of emergency (in m/s 2 ).
Road Network Modeling Results
To investigate the deployment of an appropriate number of unmanned vehicles to meet the traffic demand of selected regions, this study modeled the scenario of the penetration rate of unmanned vehicles and assumed that the morning peak traffic demand would remain unchanged.The construction of the modeling scenario was the first step.The road network used in this experiment is a component of the planned road network for the central city of Qingdao in 2030.The traffic flow represents the projected traffic demand for this planning area during the morning rush hour, from 8:00 to 9:00.While unmanned vehicles can transmit information wirelessly, traditional traffic signs and signal lights may still be necessary to consider for long-term coexistence with human-driven vehicles.The micro-simulation experiment primarily investigates the coexistence of unmanned and man-made vehicles on an urban scale.The experiment excludes irrelevant factors, such as the geometric design of the road in the unmanned vehicle scenario and the location of charging equipment.It only considers the impact of unmanned vehicles with varying permeability on traffic efficiency and their ability to meet traffic demand when controlling signal lights.The experiment's independent variable is the proportion of unmanned vehicles in traffic flow.The following values are used: 5%, 10%, 20%, 20%, 30%, 40%, 50%, 60%, 70%, 70%, 80%, 90%, and 100%.The dependent variable is urban transport efficiency, and the evaluation indicators include average kilometers traveled, average delay time, average waiting time, average travel time, and average speed (refer to Table 2).To increase the accuracy of the modeling results, this study conducted five simulations for different permeability scenarios, replacing the random speed each time.It should also be noted that data with large errors were removed and the average was calculated for better analyses.The road network and peak hour OD data for the selected areas were obtained from the 'Qingdao Urban Traffic Management Planning Project'.The peak hour OD data were obtained from the 2030 land use plan and local population projections for the selected areas.Analysis of the full simulation model results indicates that unmanned vehicles can effectively improve traffic efficiency and better meet local transport needs when operating in groups.As the penetration rate increased from 0% to 10%, the average delay, average waiting time, and average traveling time increased, while the average travel speed decreased.Observing the modeling process revealed that, as the proportion of unmanned vehicles in traffic flow increased from 0% to 10%, the number of human-driven vehicles remained high.Additionally, most unmanned vehicles appeared singly on the road network (see Figure 5a).This resulted in reduced traffic efficiency, as the unmanned vehicles cannot be utilized to improve the overall traffic flow and may even interfere with human-driven vehicles.
As the penetration rate increased from 10% to 20%, the number of unmanned vehicles operating in the same group on the road network also increased, as shown in Figure 5b.Currently, unmanned vehicles can communicate with each other using vehicle communication.This allows them to accurately adjust their speed based on the preceding vehicle, reducing the following distance and improving traffic efficiency.As a result, the delay time, waiting time, and travel time of unmanned vehicles tended to decrease, while their operating speed tended to increase; however, as human-driven vehicles still made up most vehicles on the road network, their performance has remained largely unchanged.certain extent.
As the proportion of unmanned vehicles in the traffic flow increased from 50% to 100%, the number of unmanned vehicles in the scene exceeded that of human-driven vehicles (refer to Figure 6b).This resulted in a decrease in average delay time, average waiting time, and average traveling time, and a significant increase in traffic speed.Please note that the yellow vehicles in the picture represent human-driven vehicles, while the red ones represent unmanned vehicles.When the proportion of unmanned vehicles in the traffic flow increased from 20% to 50%, the likelihood of multiple unmanned vehicles forming a group also increased (see Figure 6a).This resulted in longer average delay times, waiting times, and travel times for unmanned vehicles.The delay, waiting, and travel times of human-driven vehicles decreased as unmanned vehicles improved the traffic efficiency of the road network to a certain extent.However, unmanned vehicles had higher traffic efficiency than human-driven vehicles.When the proportion of unmanned vehicles in the traffic flow was below 20%, overall traffic efficiency did not significantly improve.When the proportion of unmanned vehicles in traffic flow exceeded 20%, overall traffic efficiency began to improve.When more than 50% of the vehicles in traffic were unmanned, the probability of group operation of these vehicles increased, leading to improved urban traffic efficiency.This study's results indicate that, if the number of unmanned vehicles launched in the early stages is small, the efficiency improvement of the entire system will be limited.Only when the proportion of unmanned vehicles launched exceeds 20% can overall traffic efficiency be greatly improved, and people's transport needs can be satisfied.
The Report on Development Forecast and Strategic Investment Planning Analysis of China's autonomous vehicle industry states that the current development scale of China's intelligent autonomous vehicles has exceeded market expectations [47].Therefore, in this scenario for Qingdao, the penetration rate of autonomous vehicles is 90%, and the lane occupancy rate of the road network during the simulation process is shown in Figure 7.The simulation results are shown in Table 3.As the proportion of unmanned vehicles in the traffic flow increased from 50% to 100%, the number of unmanned vehicles in the scene exceeded that of human-driven vehicles (refer to Figure 6b).This resulted in a decrease in average delay time, average waiting time, and average traveling time, and a significant increase in traffic speed.Please note that the yellow vehicles in the picture represent human-driven vehicles, while the red ones represent unmanned vehicles.
However, unmanned vehicles had higher traffic efficiency than human-driven vehicles.When the proportion of unmanned vehicles in the traffic flow was below 20%, overall traffic efficiency did not significantly improve.When the proportion of unmanned vehicles in traffic flow exceeded 20%, overall traffic efficiency began to improve.When more than 50% of the vehicles in traffic were unmanned, the probability of group operation of these vehicles increased, leading to improved urban traffic efficiency.This study's results indicate that, if the number of unmanned vehicles launched in the early stages is small, the efficiency improvement of the entire system will be limited.Only when the proportion of unmanned vehicles launched exceeds 20% can overall traffic efficiency be greatly improved, and people's transport needs can be satisfied.
The Report on Development Forecast and Strategic Investment Planning Analysis of China's autonomous vehicle industry states that the current development scale of China's intelligent autonomous vehicles has exceeded market expectations [47].Therefore, in this scenario for Qingdao, the penetration rate of autonomous vehicles is 90%, and the lane occupancy rate of the road network during the simulation process is shown in Figure 7.The simulation results are shown in Table 3.In the road network, there are eight distinct congested nodes, as shown in Figure 8. Nodes 1-7 are nodes connecting the area to the urban corridor, while Node 8 is an important node for north-south and eastward traffic.In our paper, Qingdao Districts 16 and 22 (see Figure 1) and the road connecting them were selected for detailed consideration.Therefore, the simulation results' data were compiled for intersections 1, 2, 3, and 8.In the simulation model, the average queue length for Simulation results indicate that, although the market share of autonomous vehicles in the traffic flow is 90% in the predicted scenario, the average speed of vehicles is lower.There are two reasons for this result: First, the increase in demand for vehicles.Autonomous vehicles not only replace human-driven vehicles but also attract some passengers who commute using other modes of transportation, increasing the traffic flow during commuting.The second reason is the increase in average commuting distance, which can be seen from the more congested intersections between regions during the simulation process.In the autonomous driving scenario, due to the higher efficiency of autonomous vehicles compared with human-driven vehicles, the accessibility of surrounding areas increases, and residents tend to choose residential buildings with excellent public services, resulting in increased travel distance.This leads to an increase in traffic flow and severe congestion on connecting roads between regions.
In the road network, there are eight distinct congested nodes, as shown in Figure 8. Nodes 1-7 are nodes connecting the area to the urban corridor, while Node 8 is an important node for north-south and eastward traffic.In the road network, there are eight distinct congested nodes, as shown in Figure 8. Nodes 1-7 are nodes connecting the area to the urban corridor, while Node 8 is an important node for north-south and eastward traffic.In our paper, Qingdao Districts 16 and 22 (see Figure 1) and the road connecting them were selected for detailed consideration.Therefore, the simulation results' data were compiled for intersections 1, 2, 3, and 8.In the simulation model, the average queue length for In our paper, Qingdao Districts 16 and 22 (see Figure 1) and the road connecting them were selected for detailed consideration.Therefore, the simulation results' data were compiled for intersections 1, 2, 3, and 8.In the simulation model, the average queue length for each entrance of several intersections was calculated.Additionally, under the same traffic demand, a scenario was simulated where the share of autonomous vehicles in the traffic flow was 0%, and a comparative analysis was conducted.
Intersection 1: Intersection 1 is located in the northern part of the designated area and is one of the important external exits in the area.The average queue length at the intersection is shown in Table 4.The names in the headings correspond to the direction of traffic on the lanes.In the scenario where autonomous vehicles account for 0% of the traffic flow, the queue length at each entrance exceeds the 90% scenario.In the scenario where autonomous vehicles account for 90% of the traffic flow, the queue length for left turns at the northbound entrance is 304.07 m, and the queue length for straight through traffic is 75.32 m.In the 0% scenario, the queue length for left turns at the northbound entrance is 791.40 m, and the queue length for straight through traffic is 218.69 m.The intersection is congested but still able to meet the traffic demand.Intersection 2 (Figure 9): The queue lengths at Intersection 2 are shown in Table 5.The names in the headings correspond to the direction of traffic on the lanes.In scenarios where autonomous vehicles account for 0% of the traffic flow, the queue lengths are longer compared with scenarios where autonomous vehicles account for 90% of the traffic flow.In the scenario where autonomous vehicles account for 90% of the traffic flow, the queue length for northbound left turns is the longest, approximately 105.86 m; the queue length for southbound through traffic is the longest, approximately 49.64 m.In the scenario where autonomous vehicles account for 0% of the traffic flow, the queue lengths for northbound and eastbound are the longest, as well as the southern direction.The names in the headings correspond to the direction of traffic on the lanes.In scenarios where autonomous vehicles account for 0% of the traffic flow, the queue lengths are longer compared with scenarios where autonomous vehicles account for 90% of the traffic flow.
In the scenario where autonomous vehicles account for 90% of the traffic flow, the queue length for northbound left turns is the longest, approximately 105.86 m; the queue length for southbound through traffic is the longest, approximately 49.64 m.In the scenario where autonomous vehicles account for 0% of the traffic flow, the queue lengths for northbound and eastbound are the longest, as well as the southern direction.Intersection queue lengths are shown in the table below.Intersection queue lengths are shown in the table below.Intersection 3: Intersection 3 is the outbound exit for the southbound area (Figure 10).The queue lengths at Intersection 3 are shown in Table 6.In the scenario where autonomous vehicles account for 90% of the traffic flow, the average queue length for the southbound entrance is 342.87 m, with a direct queue length of 42.02 m.The average queue length for the northbound entrance is 57.6 m.In the scenario where autonomous vehicles account for 0% of the traffic flow, the queue length for the left lane of the southbound direction is 280.1 m, the queue length for the straight lane is 36.07m, and the queue length for the southbound straight lane is 43.92 m.The average queue lengths for Intersection 3 are summarized in Table 6.The names in the headings correspond to the direction of traffic on the lanes.Intersection 3: Intersection 3 is the outbound exit for the southbound area (Figure 10).The queue lengths at Intersection 3 are shown in Table 6.In the scenario where autonomous vehicles account for 90% of the traffic flow, the average queue length for the southbound entrance is 342.87 m, with a direct queue length of 42.02 m.The average queue length for the northbound entrance is 57.6 m.In the scenario where autonomous vehicles account for 0% of the traffic flow, the queue length for the left lane of the southbound direction is 280.1 m, the queue length for the straight lane is 36.07m, and the queue length for the southbound straight lane is 43.92 m.The average queue lengths for Intersection 3 are summarized in Table 6.The names in the headings correspond to the direction of traffic on the lanes.The three lanes of traffic at the south entrance to Intersection 3 are one straight left lane and two straight lanes, and, with the increase in left-turning traffic at the south en- The three lanes of traffic at the south entrance to Intersection 3 are one straight left lane and two straight lanes, and, with the increase in left-turning traffic at the south entrance, these lanes are not set up properly, resulting in longer queues for left turns, with long queues of turning vehicles and left-turning vehicles waiting to enter the left-turn lane in the straight lane adjacent to the straight left lane.
Intersection 8: Intersection 8 (Figure 11) is the traffic hub for the entire area and has the highest traffic load.At the intersection of the eighth street, there are five north-south entrance lanes and four east-west entrance lanes.The average queue lengths for the north-south and eastwest entrances are shown in Table 7 and Table 8, respectively.In a scenario where selfdriving vehicles account for 90% of the traffic flow, the straight queue lengths for northbound and westbound directions will be longer.In a scenario where self-driving vehicles account for 0% of the traffic flow, the queue lengths for northbound straight, westbound straight, and eastbound left turns will be longer.The main conclusions drawn from this study are as follows: (1) The observed improvement in traffic efficiency resulting from the introduction of autonomous vehicles (AVs) in a range of contexts can be attributed to a number of key factors.The engineering of AVs is designed to optimize driving patterns, maintain consistent speeds and minimize abrupt stops and starts, which collectively contribute to a more fluid traffic flow.This optimization results in a reduction in overall congestion and an enhancement in average traffic speeds.Furthermore, AVs are equipped with the ability to communicate with one another and with traffic infrastructure, thereby facilitating more efficient coordination at intersections and during lane changes.The inter-vehicle communication system reduces the necessity for unnecessary braking and acceleration, which are typical of human drivers due to their delayed reactions and inconsistent driving behaviors.As a result, the introduction of AVs has resulted in a reduction in average waiting times and travel times.At the intersection of the eighth street, there are five north-south entrance lanes and four east-west entrance lanes.The average queue lengths for the north-south and east-west entrances are shown in Tables 7 and 8, respectively.In a scenario where self-driving vehicles account for 90% of the traffic flow, the straight queue lengths for northbound and westbound directions will be longer.In a scenario where self-driving vehicles account for 0% of the traffic flow, the queue lengths for northbound straight, westbound straight, and eastbound left turns will be longer.The main conclusions drawn from this study are as follows: (1) The observed improvement in traffic efficiency resulting from the introduction of autonomous vehicles (AVs) in a range of contexts can be attributed to a number of key factors.The engineering of AVs is designed to optimize driving patterns, maintain consistent speeds and minimize abrupt stops and starts, which collectively contribute to a more fluid traffic flow.This optimization results in a reduction in overall congestion and an enhancement in average traffic speeds.Furthermore, AVs are equipped with the ability to communicate with one another and with traffic infrastructure, thereby facilitating more efficient coordination at intersections and during lane changes.The inter-vehicle communication system reduces the necessity for unnecessary braking and acceleration, which are typical of human drivers due to their delayed reactions and inconsistent driving behaviors.As a result, the introduction of AVs has resulted in a reduction in average waiting times and travel times.Nevertheless, the relatively modest reduction in average delay times is predominantly attributable to the limited number of traffic lights at urban intersections, which continue to impose constraints on AVs.This highlights the necessity for further integration of AV technology with advanced traffic management systems.(2) It has been demonstrated that, when the proportion of AVs within the traffic flow is below 20%, there is no significant improvement in overall vehicle efficiency.This is due to the fact that a limited number of AVs are unable to significantly impact the prevailing traffic dynamics, which continue to be largely shaped by human-driven vehicles.Nevertheless, even at lower penetration rates, AVs demonstrate superior efficiency compared with human-driven vehicles, due to the optimized driving algorithms and enhanced adherence to traffic regulations inherent to their design.
As the proportion of AVs in the traffic flow exceeds 20%, their impact on traffic flow becomes more pronounced.The inherent communication and coordination capabilities of AVs begin to generate notable improvements in traffic efficiency, leading to a reduction in congestion and an enhancement in average travel speeds.As the proportion of AVs exceeds 50%, the benefits become even more significant.The increased presence of AVs leads to more consistent and predictable traffic patterns, which further improve traffic flow and better meet transportation needs.This threshold represents a critical tipping point, where the collective behavior of AVs can markedly alter traffic conditions, emphasizing the importance of reaching and surpassing this critical mass for optimal efficiency gains.(3) In scenarios where the proportion of autonomous vehicles (AVs) is significant, an increase in the population and commuting distances will consequently lead to an increase in traffic volume.The potential of autonomous vehicles (AVs) to enhance traffic flow and efficiency may be balanced by the prospect of higher road usage as commuting becomes more convenient and attractive.Such intensification of traffic may result in congestion at intersections along primary thoroughfares, particularly during periods of peak traffic volume.The capacity of the infrastructure to accommodate the increased traffic volume, and thus maintain high efficiency, constitutes a significant challenge in these scenarios.Despite the inherent efficiency of AVs, congestion at critical points can still result from the sheer volume of vehicles, thereby underscoring the necessity for comprehensive urban planning and infrastructure development to accommodate the increased demand.Furthermore, the integration of AVs with public transport and the promotion of alternative commuting methods can assist in addressing these challenges and ensuring sustainable traffic management.
Conclusions
This study focuses on modeling urban traffic flow using micro-traffic simulation based on autonomous vehicles.It investigates the variations in urban traffic efficiency under different proportions of autonomous vehicles in the traffic flow.Previous research on traffic modeling using autonomous vehicles has mainly focused on modeling independent intersections or individual road segments.However, there is currently a lack of research on the impact of different proportions of autonomous vehicles on traffic efficiency within cities, and the conclusions drawn are relatively limited.Microsimulation can provide more comprehensive information and can guide the future configuration of autonomous vehicles in urban areas.
Based on the results of the impact of autonomous vehicles on urban traffic efficiency, this study explores the changes in commuting demand in small and medium-sized cities under the background of autonomous vehicles from the perspective of population and travel mode.However, due to constraints in research time, technology, and the number of articles, this article also has the following limitations: This study also attempted to model the communication behavior of autonomous vehicles, specifically V2X.However, due to technological limitations, it was difficult to simulate the entire urban environment.Although the simulation included a large number of autonomous vehicles, they have not yet been implemented on a large scale in real-world roads, and people's transportation preferences are unpredictable, leading to some deviations.The spatial distribution of urban traffic and employment mutually influence and interact with each other, creating a dynamic process.However, this study only predicts static time points.In future work, exploring patterns of changing traffic demand through a series of feedback loops can enable autonomous vehicles to better meet transportation needs.In addition, autonomous vehicles are not just vehicles with transportation capabilities but also crucial carriers of big data.They play a key role in achieving smart mobility, intelligent transportation, and the development of smart cities.Therefore, utilizing autonomous vehicles to meet China's transportation needs is an important component of building smart cities in the future.
( 1 )Sub-level 2 . 1 :level 2 . 2 :➢ Nest 1 : 2 :Sub-level 2 . 1 : 2 :
Scenario 1: without autonomous vehicles In this scenario, residents primarily use non-motorized vehicles, buses, and cars.The nested logit model structure for travel mode choice can be represented as follows: Top Level: Travel Mode Choice ➢ Sub-level 1: Non-Motorized Vehicles: Cycling ➢ Sub-level 2: Motorized Vehicles Public Transport: Bus Sub-Private Transport: Car Travel mode choice structure captures the hierarchical decision-making process where residents first decide whether to use motorized or non-motorized transport.Within each category, they further choose specific modes.Below are the nested logit (NL) utility functions, considering that sublevels 1 and 2 are nests.The NL model groups similar alternatives into nests as follows: Top Level: Travel Mode Choice Non-Motorized Vehicles (M1): Cycling (M11) ➢ Nest Motorized Vehicles (M2) Public Transport (M21): Bus (M211) Sub-level 2.Private Transport (M22): Car (M221)
Figure 3 .
Figure 3. Driving modes in autonomous driving scenarios.
Figure 3 .
Figure 3. Driving modes in autonomous driving scenarios.
Figure 3 .
Figure 3. Driving modes in autonomous driving scenarios.
Figure 4 .
Figure 4. Changes in commuting patterns.Road network and predicted vehicle ratios are imported into the traffic modeling software SUMO 1.19.0 for traffic assignment, completing the traffic demand prediction.Simulation of Urban Mobility (SUMO) is an open-source, highly portable, microscopic, and continuous road traffic simulation package designed to handle large road networks.It allows for the implementation of complex traffic management schemes, enabling the creation of a rich set of intermodal traffic management solutions.In this study, the traffic
Figure 5 .
Figure 5. Visualizations of the simulation scenario for the rate of unmanned vehicles in the traffic flow of 10% (a) and 20% (b).
Figure 5 .
Figure 5. Visualizations of the simulation scenario for the rate of unmanned vehicles in the traffic flow of 10% (a) and 20% (b).
Figure 6 .
Figure 6.Visualizations of the simulation scenario for the rate of unmanned vehicles in the traffic flow ((a) 50% and (b) 80%).
Figure 6 .
Figure 6.Visualizations of the simulation scenario for the rate of unmanned vehicles in the traffic flow ((a) 50% and (b) 80%).
Scenario 2: with autonomous vehiclesIn this scenario, travel modes include non-motorized vehicles, public transport, shared autonomous vehicles, private autonomous vehicles, and private cars.The nested logit model structure can be represented as follows: Top Level: Travel Mode Choice ➢ Sub-level 1: Non-Motorized Vehicles: Cycling ➢ Sub-level 2: Motorized Vehicles Sub-level 2.1: Public Transport: Bus Sub-level 2.2: Autonomous Vehicles: Shared AV (SAV) and Private AV (PAV) Sub-level 2.3: Private Transport: Traditional Car
Table 1 .
Parameters of the driver model used in SUMO simulations in this study.
Table 2 .
Variables and evaluation indicators.
Table 3 .
Road network modeling results.
Type Travel Time (s) Travel Distance (m) Waiting Time (s) Delay Time(s) Travel Speed (m/s)
Table 3 .
Road network modeling results.
Table 4 .
Average queue length at Intersection 1.
Table 5 .
Average queue length at Intersection 2.
Table 5 .
Average queue length at Intersection 2.
Table 6 .
Average queue length at Intersection 3.
Table 6 .
Average queue length at Intersection 3.
Table 7 .
Average queue length for north-south direction.
Table 8 .
Average queue length for east-west direction.
Table 7 .
Average queue length for north-south direction.
Table 8 .
Average queue length for east-west direction. | 11,065.6 | 2024-08-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Generic diagonal conic bundles revisited
We prove a stronger form of our previous result that Schinzel's Hypothesis holds for $100\%$ of $n$-tuples of integer polynomials satisfying the usual necessary conditions, where the primes represented by the polynomials are subject to additional constraints in terms of Legendre symbols, as well as upper and lower bounds. We establish the triviality of the Brauer group of generic diagonal conic bundles over the projective line. Finally, we give an explicit lower bound for the probability that diagonal conic bundles in certain natural families have rational points.
Introduction
In our previous paper we proved that Schinzel's Hypothesis (H) holds for 100% of n-tuples of integer polynomials satisfying the usual necessary conditions [SS23, Thm.1.2].Here we give some improvements, complements and further applications of the results of [SS23] relevant to diagonal conic and quadric bundles over the projective line.
In the first section we prove a stronger form of [SS23, Thm.1.2] where the primes represented by the polynomials are required to satisfy additional conditions in terms of Legendre symbols, as well as upper and lower bounds, see Theorem 1.1.Using this result, in Corollary 1.3 we give a simplified proof of a weaker form of the Hasse principle for random diagonal conic bundles over the projective line [SS23, Thm.6.1], with a bound for the least solution.We prove an analogous statement for diagonal quadric bundles of relative dimension 2, see Corollary 1.5.(It is well-known that quadric bundles of relative dimension at least 3 over the projective line satisfy the Hasse principle [CTSS87,Prop. 3.9].) The absence of Brauer-Manin conditions in Corollary 1.3 is due to the triviality of the Brauer group of generic diagonal conic bundles mentioned in [SS23, Remark 6.2] and proved in the second section of this note, see Theorem 2.1 and Corollary 2.3.
In the last section we give an explicit lower bound for the density of pairs of integer polynomials P 1 (t), P 2 (t) of arbitrary fixed degrees such that the equation is soluble in Z.When the degrees of P 1 (t) and P 2 (t) are large, this density is close to one third.We also estimate the height of the smallest integer solution of (1).The authors have been partly supported by the EPSRC New Horizons grant "Local-to-global principles for random Diophantine equations" (EP/V019066/1).We are very grateful to the referees for their thorough reading of the paper and helpful comments.
1 Schinzel hypothesis on average with quadratic residue conditions Non-constant polynomials P 1 (t), . . ., P n (t) ∈ Z[t] are called a Schinzel n-tuple if the leading coefficient of each P i (t) is positive, and for every prime ℓ the product n i=1 P i (t) is not divisible by t ℓ − t modulo ℓ.The height of a polynomial P (t) ∈ Z[t] is defined as the maximum of the absolute values of the coefficients, and is denoted by |P |.The height of an n-tuple of polynomials P = (P 1 (t), . . ., P n (t)) ∈ (Z[t]) n is defined as |P| = max i=1,...,n (|P i |) The following result is [SS23, Thm.1.2] with additional properties (2), (3) and (6).The proof of (6) uses [SS23, Prop.6.5] based on Heath-Brown's bound for character sums [HB95,Cor. 4].
Theorem 1.1 Fix any (d 1 , . . ., d n ) ∈ N n , ε > 0, M ∈ N, m 0 ∈ Z/M and Q ∈ ((Z/M)[t]) n such that deg(Q i ) ≤ d i and gcd(Q i (m 0 ), M) = 1 for all i = 1, . . ., n.For every (i, j) ∈ (N∩[1, n]) 2 with i < j let ǫ ij ∈ {1, −1}.Then for 100% of Schinzel n-tuples P ∈ (Z[t]) n of respective degrees d 1 , . . ., d n such that P ≡ Q mod M, there exists a natural number m with the following properties: the Legendre symbol P i (m) P j (m) equals ǫ ij for all i < j. Proof.Define Let us write Ω = {(i, j) : 1 ≤ i < j ≤ n}.For any S ⊂ Ω define We follow the standard convention that a product indexed by the elements of an empty set is equal to 1.In particular, we have T ∅,P (x) = θ P (x).Assuming (5), the following function takes the value 1 if (6) holds and the value 0 otherwise: which leads to For the rest of the proof we restrict attention only to the range where A 1 , A 2 are arbitrary fixed constants satisfying n < A 1 < A 2 .By [SS23, Eq. (6.7)] the term corresponding to S = ∅ equals where θ P (x) is defined similarly to θ P (x) by dropping the condition that the primes P 1 (m), . . ., P n (m) are necessarily distinct.Let us define We next show that uniformly in the range (8) and for all S = ∅ one has Letting makes it plain that T S,P takes the form of the function η P from [SS23,Def. 6.4].This allows us to apply [SS23, Prop.6.5] to verify (10).Feeding (9)-( 10) into (7) yields Hence the number of P in Poly(H) Therefore, for 100% of P in Poly(H) one has which implies where we used [SS23, Eq. (4.10)] for the last deduction.Hence, there exists m ≤ x satisfying (4)-(6).To verify (3), we take which proves (3).To prove (2) we note that if the largest integer m ≤ x that satisfies (4)-( 6) is m 1 then For almost all P with |P| ≤ H we have shown that C P (x) > x log x .Combined with the upper bound ).Note that for almost all P with |P| ≤ H one has min i,j |c ij | ≥ H log log H , hence, for all i we have Hence, for all i one has In particular, min i P i (m 1 ) > |P|(log |P|) ε/2 , which concludes the proof of (2).
Generic diagonal conic bundles
As an application we give a simplified proof of [SS23, Thm.6.1] with an added value of a bound for the least solution.Finite search bounds for Diophantine equations are not well-studied but are nevertheless relevant to the theory as they are intimately related to Hilbert's 10th problem for Q.We give a search bound that is of polynomial growth in the size of the coefficients.We need the following special case of a theorem of Cassels [Cas55].
Proposition 1.2 (Cassels) If f 1 , f 2 , f 3 are non-zero integers such that the quadratic form 3 i=1 f i x 2 i represents zero in Q, then there exists a solution (x 1 , x 2 , x 3 ) ∈ N 3 such that For m = (m 0 , m 1 , . . ., m r ) ∈ Z r we write P (t, m) for the polynomial r k=0 m k t k .
Corollary 1.3 Let n 1 , n 2 , n 3 be integers such that n 1 > 0, n 2 > 0, and n 3 ≥ 0, and let n = n 1 + n 2 + n 3 .Let a 1 , a 2 , a 3 be non-zero integers not all of the same sign.Let d ij be natural numbers, for i = 1, 2, 3 and j = 1, . . ., n i .Define such that the n-tuple (P ij (t, m ij )) is Schinzel.Let M be the set of m ∈ P such that for each p|2a 1 a 2 a 3 the equation has a solution in Z p for which the value of each polynomial P ij (t) is a p-adic unit.
Then there is a subset M ′ ⊂ M of density 1 such that for every m ∈ M ′ the equation (11) has a solution The set M ′ has positive density in Z d+n ordered by height.
Proof.By absorbing primes into variables x, y, z we can assume that a 1 a 2 a 3 is square-free.Write M = 8a 1 a 2 a 3 .Local solubility of (11) with t = m at an odd prime p|M with an additional condition that the value of each P ij (m) is a p-adic unit depends only on the value of P ij (m) modulo p.For the prime 2 the same holds modulo 8. Thus M is a finite disjoint union of subsets given by the condition for which there exists an m 0 ∈ Z such that (11) with t = m 0 has a solution in Z p for each p|M, and gcd(Q ij (m 0 ), M) = 1.Let us fix such a Q and such an m 0 .Then for any P ≡ Q mod M and any m ≡ m 0 mod M the equation ( 11) with t = m is soluble in Z p for each p|M.Suppose that p ij := P ij (m), where i = 1, 2, 3 and j = 1, . . ., n i , are distinct primes, where The local solubility of (11) with t = m at the primes not dividing M and not equal to one of the p ij is clear, since the conic has good reduction modulo such a prime.It remains to show that we can choose m ≡ m 0 mod M so that ( 11) is solvable at each of the primes p ij = P ij (m).
For i = 1, 2, 3 define Here the middle term is the Legendre symbol and the right hand term is the Hilbert symbol.By global reciprocity we obtain Define λ ij ∈ F 2 as follows: • λ 1,j = λ 1j , for j = 1, . . ., n 1 ; ) mod M, we see from (12) that the λ ij depend only on Q and m 0 .Thus the same is true for the λ ij .
Lemma 1.4 We have i=1,2,3 Proof.By assumption, a 1 , a 2 , a 3 are not of the same sign.Since M = 8a 1 a 2 a 3 , from global reciprocity we obtain Local solubility of the conic (11) at a prime p is equivalent to so this holds for all p|M.Thus the product of From the definition of λ ij it is immediate that this product equals the product of (−1) λ ij over all i and j.
We continue the proof of Corollary 1.3.
Local solubility of the conic (11) at p ij is equivalent to the condition where If i > i ′ , we define x ij,i ′ j ′ to be equal to x i ′ j ′ ,ij .Thus we always have x ab,cd = x cd,ab .We observe that (13) is equivalent to the equation This is clear for i = 1.For i = 2 and i = 3 we prove (14) using which immediately follows from global reciprocity.Consider the system of n = n 1 +n 2 +n 3 linear equations (14) in n 1 n 2 +n 2 n 3 +n 1 n 3 variables x ij,i ′ j ′ .By assumption we have n 1 > 0 and n 2 > 0, so we have at least one variable, namely x 11,21 , and n ≥ 2 equations.The sum of the left hand sides of all equations ( 14) is zero, and it is easy to see that the matrix of this linear system has rank n − 1.Thus the linear map given by this matrix is surjective onto the subspace of vectors with zero sum of coordinates in (F 2 ) n .We conclude that the system ( 14) is solvable for arbitrary λ ij with zero sum.In our case this holds by Lemma 1.4, so (14) has a solution, say ε ij,i ′ j ′ ∈ F 2 , where (ij) and (i ′ j ′ ) are pairs as above.
Applying Theorem 1.1, for P ≡ Q mod M in a subset of density 1 we find a natural number m ≡ m 0 mod M, m ≤ (log |P|) n+1/2 , such that the numbers P ij (m) are distinct primes satisfying Then the conic (11) with t = m is everywhere locally solvable, and so is solvable in Z. Furthermore, Proposition 1.2 ensures the existence of a solution where
Generic diagonal quadric bundles of relative dimension 2
Let a 0 , a 1 , a 2 , a 3 be non-zero integers not all of the same sign and let a = a 0 a 1 a 2 a 3 .Let d 1 , . . ., d n be positive integers and let d = j=0 m ij t j be the generic polynomial of degree d i .Let S 0 , S 1 , S 2 , S 3 be subsets of {1, . . ., n}.The equation defines a family of quadrics Q t parametrised by the affine line with coordinate t over the field Q(m), where m = (m 1 , . . ., m n ).The generic fibre Q η is a quadric of dimension 2 over Q(t, m).We can multiply (15) by a non-zero element of Q(m) and absorb squares into coordinates x i without affecting the isomorphism class of Q η .This allows us to assume without loss of generality that each a i is square-free with no prime dividing more than two of the a i , and that no element of {1, . . ., n} belongs to more than two of the sets S i .Define Corollary 1.5 In the above notation assume that δ is not a square in Q(t, m).Let P be the set of m = (m i ) ∈ Z d+n such that the n-tuple (P i (t, m i )) is Schinzel.Let M be the set of m ∈ P such that for each p|2a the equation (15) has a solution in Z p for which the value of each polynomial P i (t) is a p-adic unit.Then there is a subset M ′ ⊂ M of density 1 such that for every m ∈ M ′ the equation (15) has a solution in Z.The set M ′ has positive density in Z d+n ordered by height.
Proof.We follow the beginning of proof of Corollary 1.3.Let M = 8a.Local solubility of (15) at t = m at an odd prime p|a with an additional condition that the value of each P ij (m) is a p-adic unit depends only on the value of P ij (m) modulo p.For the prime 2 the same holds modulo 8. Thus M is a finite disjoint union of subsets given by the condition P ≡ Q mod M, where Q is an n-tuple of polynomials in (Z/M)[t] for which there exists an m 0 ∈ Z such that (15) with t = m 0 has a solution in Z p for each p|M, and gcd(Q ij (m 0 ), M) = 1.Let us fix such a Q and such an m 0 .Then for any P ≡ Q mod M and any m ≡ m 0 mod M the equation (15) with t = m is solvable in Z p for each p|M.
Suppose that p i := P i (m), where i = 1, . . ., n, are distinct primes, where P i (t) ≡ Q i (t) mod M and m ≡ m 0 mod M. Thus p i ≡ Q i (m 0 ) mod M, hence p i does not divide M. The local solubility of (15) with t = m at the primes not dividing M and not equal to one of the p i is clear, since the quadric has good reduction modulo such a prime.
Suppose that every element of {1, . . ., r} belongs to exactly one of the sets S i , and every element of {r + 1, . . ., n} belongs to two of these sets.The condition on δ implies r ≥ 1, so the prime p 1 belongs to exactly one of the sets S i .The local solubility of (15) with t = m at the primes p i , where i = 1, . . ., r, is automatic: this follows from the fact that the conic obtained by setting x j = 0 in (15), where i ∈ S j , has good reduction modulo p i .It remains to show that for P ≡ Q mod M in a subset of density 1 we can choose m ≡ m 0 mod M so that ( 15) is solvable at each of the primes p r+1 , . . ., p n .
To apply Theorem 1.1 we define the values ǫ ij = p i p j for i < j, as follows.The values of ǫ 1,i = 1 for i = 2, . . ., r are of no importance and can be chosen arbitrary.For i = r + 1, . . ., n we define Since p i ≡ Q i (m 0 ) mod M, we see that this depends only on Q and m 0 .For k ≥ 2 we define ǫ k,l = 1 for all l > k.Then (15) with t = m is solvable in Z p i for i = r + 1, . . ., n, because up to multiplication by a square the product of all four coefficients is ap 1 . . .p r , which is a non-square modulo p i .This implies solubility in Z p i by [Cas78, Ch. 4, Lemma 2.6].An application of Theorem 1.1 finishes the proof since the resulting quadric Q m is everywhere locally solvable, hence solvable in Z.
The case when δ is a square in Q(m) can be reduced to the case of conic bundles.Indeed, let C → A 1 be the conic bundle defined by setting x 0 = 0 in (15).Then every smooth fibre Q t is isomorphic to C t × C t , see [CTS21, Prop.7.2.4 (c ′′ )], so Q t has a rational point if and only if C t has a rational point.Thus Corollary 1.3 gives a similar statement for generic diagonal quadric bundles for which δ is a square in Q(m).
When δ is a square in Q(t, m) but not a square in Q(m), prime values of polynomials seem to be insufficient to prove the analogue of Corollary 1.5.Indeed, consider the following particular case of (15): 2 Brauer group of generic diagonal conic bundles In this section we prove the triviality of the Brauer group of generic diagonal conic bundles as was mentioned but not proved in [SS23, Remark 6.2].
Let n 1 , n 2 , n 3 be non-negative integers such that n 1 and n 2 are positive.Suppose that we have positive integers d ij , where i = 1, 2, 3 and j = 1, . . .n i .Let m ijk be independent variables, where i = 1, 2, 3, j = 1, . . .n i , and k = 0, . . ., d ij .Write m ij = (m ijk ).Consider the generic polynomials of degree d ij : Let K be a field of characteristic zero, and let F be the purely transcendental extension of K obtained by adjoining all variables m ijk for i = 1, 2, 3, j = 1, . . .n i , and k = 0, . . ., d ij .For a 1 , a 2 , a 3 ∈ K × consider the subvariety X ′ ⊂ P 2 F × A 1 F given by (11).It is easy to see that X ′ is smooth and geometrically integral, and the projection to A 1 F is a proper morphism whose fibres are conics.There is a natural compactification of X ′ → A 1 F to a smooth projective surface X with a conic bundle structure X → P 1 K .The main result of this section is Theorem 2.1 Let n 1 > 0, n 2 > 0, n 3 ≥ 0. Then the natural map Br(F ) → Br(X) is an isomorphism.
We need to introduce some notation.Let F be an algebraic closure of F .Each polynomial P ij (t) is irreducible over F , thus is a field extension of F of degree d ij .We write N ij : F ij → F for the norm map.The norm map N ij gives rise to a homomorphism , which we shall denote also by N ij .
For distinct pairs (ij) and (rs) define R ij,rs ∈ F as the resultant of P ij (t) and P rs (t) considered as polynomials in F [t]. Since P ij (t) and P rs (t) have no common root in F , we have R ij,rs = 0. We note that each R ij,rs is an (absolutely) irreducible polynomial in the variables m ijk over K, see [GKZ94,p. 398].The polynomials R ij,rs and R i ′ j ′ ,r ′ s ′ differ by an element of K × if and only if either i = i ′ , j = j ′ , r = r ′ , s = s ′ or i = r ′ , j = s ′ , r = i ′ , s = j ′ .In the second case we have R ij,rs = (−1) d ij drs R rs,ij .
To simplify notation, we write p ij = m i,j,d ij for the leading coefficient of P ij (t).Write N ij (P rs ) for the norm of the image of P rs (t) in F ij .Then we have For i = 1, 2, 3 we define P i (t) = n i j=1 P ij (t) and write p i = n i j=1 p ij for the leading coefficient of P i (t).(When n 3 = 0, we write p 3 = 1 for the leading coefficient of the constant polynomial 1.) Let d i = n i j=1 d ij and let n = n 1 + n 2 + n 3 .Proof of Theorem 2.1.Let A ∈ Br(F (t)) be the class of the conic (11) over F (t). Equivalently, A is the class of the quaternion algebra associated to this conic.Each closed point M = Spec(F M ) of P 1 F gives rise to the residue of A at this point; this is an element of For any j ′ ≤ n i ′ this is a product of the irreducible polynomial R ij,i ′ j ′ and a rational function coprime to R ij,i ′ j ′ .In particular, the norm N ij (α ij ) is non-trivial, hence α ij is non-trivial.By assumption n 1 ≥ 1 and n 2 ≥ 1, thus each α ij is non-trivial.In particular, α 1,1 and α 2,1 are two non-trivial residues of A. It follows that A does not belong to the image of the natural map Br(F ) → Br(F (t)), hence the map Br(F ) → Br(X) is injective by [CTS21,Lemma 11.3.3].
The above calculation also shows that all fibres of X → P 1 F over the points of A 1 F are reduced and irreducible.The fibre at infinity is smooth if d 1 , d 2 , d 3 have the same parity.Let us first assume that this is the case.By [CTS21, Prop.11.3.4], the cokernel of Br(F ) → Br(X) is the homology group of the following complex: where the first map sends the generator 1 ∈ Z/2 to (1, . . ., 1), and the second map sends the generator of the (i, j)-summand Z/2 to N ij (α ij ) ∈ F × /(F × ) 2 .This sequence is a complex by Faddeev's reciprocity law [CTS21, Thm.1.5.2].
Let ε ij ∈ {0, 1}, for i = 1, 2, 3 and j = 1, . . ., n i , not all of them zero, be such that i=1,2,3 The factors given by pairs (ij) and (rs) contribute the irreducible element R ij,rs to the left hand side and no other factor does.Thus if ε ij = 1 for some i and j, then we must have ε rs = 1 for all r = i and all s = 1, . . ., n r .Repeating the argument, we obtain that ε ij = 1 for all possible values of i and j.This proves Theorem 2.1 in the case when d 1 , d 2 , d 3 have the same parity.Now suppose that d 1 , d 2 , d 3 do not have the same parity.Write {1, 2, 3} = {i, i ′ , i ′′ }, where d i and d i ′ have the same parity.Let α ∞ be the residue of A at infinity.We calculate that then this is simply divisible by p 2,1 .In all cases we conclude that α ∞ is non-trivial.Thus the fibre of X → P 1 F at infinity is singular, reduced and irreducible.Now [CTS21, Prop.11.3.4] says that the map Br(F ) → Br(X) is injective and its cokernel is the homology group of the following complex: Here the first map sends the generator 1 ∈ Z/2 to (1, . . ., 1), and the second map sends the generator of the (i, j)-summand Z/2 to N ij (α ij ) ∈ F × /(F × ) 2 and sends the generator of the extra copy of Z/2 (given by the point at infinity) to α ∞ .
Proof.Since the smooth proper surface X F is rational, we have Br(X F ) = 0 by the birational invariance of the Brauer group [CTS21, Cor.6.2.11].Thus we obtain a functorial exact sequence be the Zariski open subset given by the condition that the discriminants and the leading coefficients of all the polynomials P ij (t) and the resultants of all pairs of the polynomials P ij (t) are non-zero.We have U = ∅, e.g., because given by ( 11) is smooth and geometrically integral when m ∈ U(Q), because no polynomial in t divides more than one coefficient and if it divides some coefficient then it simply divides it.For m ∈ U(Q) we denote by X m → P 1 Q the conic bundle surface over Q which is a natural smooth compactification of X ′ m .Specialisation at m ∈ U(Q) preserves the degrees of the polynomials P ij (t), thus X m can be obtained as the specialisation of the conic bundle surface X → P 1 F considered above.
Corollary 2.3 For 100% of points m ∈ Z d+n ordered by height the natural map Br(Q) → Br(X m ) is an isomorphism.
Proof.100% of points m ∈ Z d+n are contained in U, and for such points the surface X m and thus the map Br(Q) → Br(X m ) are well defined.For m ∈ U(Q) we have compatible specialisation maps sp m : Pic(X) → Pic(X m ) and sp m,Q : Pic(X F ) → Pic(X The exact sequence (18) and Theorem 2.1 imply that the top horizontal map is an isomorphism.It follows that for m ∈ H the bottom map is also an isomorphism.This map fits into an exact sequence Proof.The probability for a given polynomial of positive degree to have a nonzero value at a given point of F ℓ is 1 − 1/ℓ.The product of n polynomials of positive degrees does not vanish at a given point of F ℓ with probability (1 − 1/ℓ) n , so it vanishes with probability 1 − (1 − 1/ℓ) n .If ℓ ≤ d i for all i, these events are independent, so the product of polynomials vanishes everywhere on F ℓ with probability Remark 3.2 For ℓ = 2 the assumptions of the lemma are always met, so we have δ n (2, d) = 1/2 n−1 − 1/4 n .In particular, the density of Schinzel n-tuples is always at most 1/2 n−1 , hence goes to 0 when n → ∞.
The case n = 2
Expression [SS23, Eq. (2.6)] is complicated to evaluate if 1 + min{d i } < ℓ ≤ d.We now give a more practical lower bound in the case n = 2.
Proof.In the counting function in the definition of δ 2 (ℓ, d) we can assume that neither polynomial is identically zero, thus Since P 1 (t)P 2 (t) vanishes everywhere on F ℓ , we see that which gives the desired bound.
The main goal of this section is to give an explicit lower bound for the density of pairs of integer polynomials P 1 (t), P 2 (t) of fixed degrees d 1 , d 2 , respectively, such that the above equation is solvable in Z.
We use Corollary 1.3 with n 1 = n 2 = 1, n 3 = 0, a 1 = a 2 = 1, and a 3 = −1.We only need to deal with the prime ℓ = 2.If a and b are odd integers, then the Hilbert symbol (a, b) 2 equals 1 if and only if a or b is 1 mod 4. Thus we need to calculate the probability σ 2 that a pair of polynomials P 1 (t), P 2 (t) ∈ (Z/4)[t] of degrees deg(P 1 ) ≤ d 1 , deg(P 2 ) ≤ d 2 satisfies the following conditions: (a) P 1 (0) and P 2 (0) are both odd, or P 1 (1) and P 2 (1) are both odd, and (b) P i (n) ≡ 1 mod 4 for some i ∈ {1, 2} and some n ∈ Z/4.Condition (a) is the Schinzel condition at 2, and condition (b) is the triviality of the Hilbert symbol at 2. Recall that d = d 1 + d 2 .We have where T is the set of pairs P 1 (t), P 2 (t) ∈ (Z/4)[t] of degrees deg(P 1 ) ≤ d 1 , deg(P 2 ) ≤ d 2 satisfying condition (a) and taking only values 0, 2, and 3 modulo 4.
There are 2 e−1 polynomials of degree at most e in F 2 [t] with given values at 0 and 1.Each of these can be lifted to exactly 2 e−3 polynomials of degree at most e in (Z/4) [t] with given values at the elements of Z/4 that are compatible with the values at 0 and 1 modulo 2. Thus there are 4 e−2 polynomials P (t) ∈ (Z/4)[t] of degree at most e with given values at the elements of Z/4 such that the values at 0 and 2 have the same parity, and similarly for the values at 1 and 3.
so by [CTS21, Remark 5.4.3 (2)] the last map in (18) is surjective.It remains to apply Theorem 2.1.Let K = Q.We can think of m = (m ijk ) as the coordinates of the affine space A d+n Q , where n = n 1 + n 2 + n 3 and d = ij d ij .Let U ⊂ A d+n Q Evaluating the product σ 2 ℓ≥3 δ 2 (ℓ, d) using Lemmas 3.1 and 3.3 gives the following bound.Proposition 3.4 Let d 1 and d 2 be positive integers.The density of pairs of integer polynomials P 1 (t), P 2 (t) of degrees deg(P 1 ) = d 1 and deg(P 2 ) = d 2 with positive leading coefficients, ordered by height, such that the equation P 1 (t)x 2 + P 2 (t)y 2 = z 2 from which we conclude that for m ∈ H the natural map Br(Q) → Br(X m ) is an isomorphism.The complement Q d+n \ H is a thin set [S97, Section 9.2, Prop.1].The classical theorem of S.D. Cohen (see, e.g.[S97, Section 13.1, Thm.1]) implies that H contains 100% of points of Z d+n , when they are ordered by height.Let ℓ be a prime.For P (t) ∈ F ℓ [t] we denote by Z P (ℓ) the number of zeros ofP (t) in F ℓ .For n ∈ N and d = (d 1 , . .., d n ) ∈ N n , let δ n (ℓ, d) := #{P ∈ F ℓ [t] n : deg(P i ) ≤ d i , Z P 1 •••Pn (ℓ) = ℓ} ℓ n+d 1 +...+dn be the density of Schinzel n-tuples modulo ℓ.Write d = d 1 + . . .+ d n .An explicit expression for δ n (ℓ, d) is given in [SS23, Eq. (2.6)], but when ℓ is large or small compared to the degrees d i it is easy to calculate δ n (ℓ, d) directly.For example, if ℓ > d, then Z P 1 •••Pn (ℓ) = ℓ if and only if each P i (t) is a non-zero polynomial, thus | 7,746.2 | 2022-12-30T00:00:00.000 | [
"Mathematics"
] |
Combined ostracod and planktonic foraminiferal biozonation of the Late Coniacian – Early Maastrichtian in Israel
The distribution and zonation of planktonic foraminifera and ostracods during the Late Coniacian – Early Maastrichtian succession in Israel was studied in detail from six surface sections. The combination of contemporaneous biozones led to a more accurate age determination of the local ostracod zones, according to the Tethyan planktonic foraminiferal zonation. The configuration of the biozones of both taxa presents more datum lines for stratigraphic correlation of the Senonian strata of Israel. Three new ostracod species were described from Campanian sediments: Cytherelloidea zinensis, Loxoconcha hebraica and Cristaeleberis ordinata.
INTRODUCTION
Late Coniacian -Maastrichtian marine formations of the Mount Scopus Group (Flexer, 1968) are widely distributed in Israel. They are mostly composed of chalks, mark, cherts and phosphorites. A renewed interest in Senonian rocks of Israel was evoked after the micropaleontological studies of Moshkovitz (1984; calcareous nannofossils) and Honigstein (1983Honigstein ( , 1984ostracods). Additional biostratigraphic data on ostracods are recorded in Lipson-Benitah et al. (1985; combined with foraminifera) and in Lifshitz et al. (1985). Reiss et al. (1985) summarised multiple bio-and chronostratigraphic data from the Senonian of Israel, based on ranges of indicative species of megafossils (mainly ammonites), planktonic and benthic foraminifera, calcareous nannoplankton, and ostracods. A modified biostratigraphic chart, on the base of ranges of 54 Globotruncanidae species, was presented in Almogi-Labin et al. (1986) and the results were compared with the general European zonation of Robaszynski et al. (1984). In this study, the local ostracod zones are correlated with the more general planktonic foraminiferal zonation.
The combined biozonation i s based on former results (Honigstein, 1983, Reiss et al., 1985 and on six additional surface sections (Table 1, coordinates in Israel grid; Fig. 1). These profiles were chosen to be representative for a detailed bio-, litho-and chronostratigraphic study. Studies on other microfossil groups (from the same "type-" sections) are in preparation. Both planktonic foraminifera and ostracods were studied from the same samples, except those of the Ein Fawwar section (see Fig. 1). The distribution of the planktonic foraminifera and their ranges were determined here by Almogi-Labin and the ostracods by Honigstein and Rosenfeld. The results are depicted in Figs. 3-12. Species with limited taxonomic and stratigraphic importance are omitted, such as Arcaeoglobigerina cretaceu and A . blowi (foraminifera), and Bythocrypris windhami, Cytherella cf. C. austinensis, Bunlonia? aff. 6 . cretacea, Buirdoppilata pondera and Spinoleberis megiddoensis (ostracods). The investigation of more than 400 samples led also to a modification of the general distribution chart of Senonian ostracods from Israel (Fig. 7 ) . A calibrated scheme of ostracod versus planktonic foraminiferal zones is given in Fig. 13.
The samples from the studied sections, their washed residues, as well as the picked foraminifera, are deposited in the Micropaleontological Collection of the Geological Department of the Hebrew University, Jerusalem, catalogued with the Laboratory prefix HU-. The ostracod material is stored at the Micropaleontological Laboratory of the Geological Survey of Israel. Jerusalem.
Planktonic foraminifera
The planktonic foraminiferal fauna which occurs in our material was discussed in detail in Reiss et al. (1985Reiss et al. ( , 1986 and Almogi-Labin et al. (1986), where also Globotruncanidae species were figured. In the present study, species of Heterohelicidae of stratigraphical importance in the Santonian -Campanian were recorded in the distribution charts (Figs. 3, 5 , 7, 11).
CORRELATION USING PLANKTONIC FORAMINIFERAL AND OSTRACOD BIOZONATION
The distribution of the ostracods and planktonic foraminiferal assemblages within the six studied sections (Figs. 3-12), as well as from previous works (Honigstein, 1983;Reiss et al., 1985), led to the following correlations of biozones, as presented in Fig. 13.
The Phyrocythere lata (S-1) assemblage zone of Late Coniacian age (Honigstein, 1984) was correlated in a northern Israel borehole section with the Marginotruncana angusticarenata zone (Lipson-Benitah, 1980). According to Lipson -Benitah (in press), at least the upper part of the S-1 zone, which was observed in Bar'am ( Fig. 6), Nahal Ya'alon ( Fig. 8) and Nahal Zin (Fig. lo), belong to the lower part of the Dicarinella concavata zone (Robaszynski et al., 1984). The planktonic foraminifera of this The Coniacian/Santonian boundary is problematic, but may be tentatively placed at the top of the S-1 zone. The Santonian succession is represented by high populations of ostracods and foraminifera. The Dicarinella concavata and Dicarinella 'asymetrica zones can be correlated with the Cythereis rosenfeldi rosenfeldi (S-2) and Limburgina miarensis (S-3) assemblage zones. Their biozone boundaries alternate (Ein el Qilt,Bar'am,Nahal Ya'alon,. The Santonian in the Nahal Zin section (Figs. 9-10) is reduced to about 5m; the Dicarinella asymetrica zone was probably therefore not observed because of the poor preservation of the foraminifera however, all ostracod zones were found.
The SantonianICampanian boundary is defined by the common base of the Leguminocythereis dorsocostatus (S-4) and Globotruncanita elevata zones (Figs. 3-10). The Globotrunicanita elevata zone, indicative for the Early Campanian period, correlates to the S-4 zone and it its top, to the base of the Brachycythere beershevaensis ( S -5 ) assemblage zone (Nahal Ya'alon, Fig. 8; Tarqumiya, Fig. 12). The diversity of planktonic foraminifera in the Tarqumiya section (Fig. 11) within the Early Campanian is much higher, the specimens are larger and contain a higher percentage of adults than in the Ein Fawwar exposure (Fig. 3). The ostracod diversity in these sections remains more or less constant, but the total ostracod content in the samples from Ein Fawwar is higher. These observations enhance the general W-E trend of planktonic foraminifera decrease and ostracod increase (Flexer & Honig-Honigstein el al. stein, 1984).
The S-5 zone can be compared with the Late Campanian Globotruncana rosetta and the latest Campanian Globotruncanita calcarata zones . Therefore, the former range of this ostracod zone, which can sometimes be subdivided into the subzones 5a and 5b (Honigstein, 1984: upper part of Early Campanian) must be extended into the Late and latest Campanian. The Late Campanian S-5* subzone was recognised only from southern Israel (Honigstein, 1984;present paper: Nahal Ya'alon, Fig. 8; Nahal Zin, a ;. falsos? Fig. 10). Two new ostracod species were found in the Nahal Zin section within this subzone, accompanying the usually rare and low diversity fauna. The S-5b subzone, contemporaneous with the Globotruncanita calcarata zone (Bar'am;Tarqumiya,, can be differentiated from the S-5a subzone by the first occurrence of Veeniacythereis tenyetensis and Cythereis ornatissirna (Fig. 2). Moreover, a higher ratio of pitted forms of Brachycythere and Protobuntonia versus the reticulated specimens of Ventrocythereis is found in the S-5b subzone. The Campanian/Maastrichtian boundary is not clear-The combination of contemporaneous occurrences of ly defined in the Israeli succession (Reiss et al., 1985, ostracod and planktonic foraminifera1 biozones enables 1986) and cannot precisely be dated by planktonic us to date the local ostracod biostratigraphy according foraminifera (disappearance of Globotruncanita cafcarto the regional Tethyan planktonic foraminiferal zonautu) and the rare ostracod fauna. The Early Maastrich-tion. The use of both taxa, ostracods and planktonic tian is determined in the present study with the foraminifera, provides more datum lines and allows a common range of the Giobotruncana falsostuarli and finer resolution of the Senonian stratigraphy. | 1,639.4 | 1987-11-01T00:00:00.000 | [
"Geology"
] |
Simultaneous adsorption of As(III) and Cd(II) by ferrihydrite-modified biochar in aqueous solution and their mutual effects
A simply synthetic ferrihydrite-modified biochar (Fh@BC) was applied to simultaneously remove As(III) and Cd(II) from the aqueous solution, and then to explore the mutual effects between As(III) and Cd(II) and the corresponding mechanisms. The Langmuir maximum adsorption capacities of As(III) and Cd(II) in the single adsorbate solution were 18.38 and 18.18 mg g−1, respectively. It demonstrated that Fh@BC was a potential absorbent material for simultaneous removal of As(III) and Cd(II) in aqueous solution. According to the XRF, SEM–EDS, FTIR, XRD, and XPS analysis, the mechanisms of simultaneous removal of As(III) and Cd(II) by Fh@BC could be attributable to the cation exchange, complexation with R-OH and Fe-OH, and oxidation. Moreover, the mutual effect experiment indicated that Cd(II) and As(III) adsorption on Fh@BC in the binary solution exhibited competition, facilitation and synergy, depending on their ratios and added sequences. The mechanisms of facilitation and synergy between Cd(II) and As(III) might include the electrostatic interaction and the formation of both type A or type B ternary surface complexes on the Fh@BC.
Materials and methods
Materials. Rape straw was collected from Ziyang City (Sichuan Province, China) and was used as feedstock for the production of pristine biochar (PBC). Fe(NO 3 ) 3 ·9H 2 O and KOH (AR grade) were selected to modify the PBC; NaAsO 2 and Cd(NO 3 ) 2 ·4H 2 O) (AR grade) were used to prepare the stock solutions for the batch experiment. All reagents were from Reagent Co. Ltd. All solutions were prepared with ultrapure water (18.2 MΩ).
Preparation of ferrihydrite-modified biochar. Rape straw was air-dried and ground to < 5 mm. The feedstock was pyrolyzed in a furnace in the N 2 atmosphere. The pyrolysis temperature was increased to 400 °C at a rate of 5 °C min −1 and then maintained at 400 °C for 2 h. After cooling to room temperature, the biochar was washed with the deionized water and filtered using a 300-mesh sieve. The biochar that passed through the sieve was collected and was referred to as PBC.
The ferrihydrite-modified biochar (Fh@BC) was synthesized according to a previously reported method 44 with some modifications. First, the dried PBC of one gram was submerged in the 0.1 M Fe(NO 3 ) 3 solution of 50 mL. The suspension with pH < 2 was vibrated in a thermostatic shaker (25 °C) at 180 rpm for 24 h. Then,1 M KOH solution was added into the suspension to adjust the pH to 7.0 ± 0.1, which is the same as that used in the synthesis of pure ferrihydrite. The suspension was stirred vigorously using a magnetic stirrer at 600 rpm for 30 min at room temperature (~ 25 °C). The modified biochar was continuously washed by the deionized water until the conductivity of the aqueous solution was less than 50 μS cm −1 , and then was separated using a 300-mesh sieve. Fh@BC and Fh were freeze-dried and stored at 4 °C in the dark for later experiments 42 . Batch experiments. Batch experiments were performed to evaluate the adsorption capacity and performance of the adsorbent for As(III) and Cd(II). The background electrolyte of the reaction system was 0.01 M NaNO 3 . Briefly, all batch adsorption experiments were performed with 50 ± 0.1 mg PBC or Fh@BC in a 20 mL solution. The suspension was placed into 50 mL centrifuge tubes and then vibrated in a thermostatic shaker at a velocity of 180 rpm at 25 °C. Afterward, the suspension was filtered through 0.45 µm disposable filters for subsequent determination.
The adsorption kinetics experiments included single and binary solutions. In the single adsorbate solution, the adsorption kinetics experiment was conducted in 10 mg L −1 As(III) or Cd(II) solution at pH= 7.0 ± 0.1 for 24 h of oscillation (denoted as "As" or "Cd"). In the binary solution, two approaches were adopted, as follows: (1) 20 mg L −1 As(III) solution and 20 mg L −1 Cd(II) solution were mixed in equal volume and the solution pH was regulated to 7.0 ± 0.1, and the theoretical concentrations of As(III) and Cd(II) in the mixed solution was 10 mg L −1 (denoted as "Cd and As"); and (2) As(III) or Cd(II) solution was added into the Cd(II) or As(III) solution successively after a 24 h pre-equilibrium reaction, and the concentration of Cd(II) and As(III) in the binary solution was about 10 mg L −1 (denoted as "Cd + As" for Cd(II) or "As + Cd" for As(III)). The sequential As(III) or Cd(II) in the binary solution was determined after a 24 h equilibrium and denoted as "Cd + As" for As(III) or "As + Cd" for Cd(II). At the appropriate time, the suspension was filtered for subsequent determination. In this study, pseudo-first-order (PFO) kinetic model and pseudo-second-order (PSO) kinetic model were adopted to describe the adsorption rates 33,46 , as shown in the Supplementary Information (SI. 1).
Adsorption isotherm experiments also evaluated single and binary solutions. For the single solution, the isotherm experiment was conducted with various concentrations of As(III) or Cd (II) solution (pH = 7.0 ± 0.1), ranging from 1 to 200 mg L −1 . For the binary solution, two approaches were considered: (1) various concentrations of As(III) or Cd(II) solution (pH = 7.0 ± 0.1), ranging from 1 to 200 mg L −1 , were mixed simultaneously (denoted as www.nature.com/scientificreports/ "Cd and As"); and (2) 5 mg L −1 As (III) or Cd(II) solution was successively added into the pre-equilibrium solution with the Cd(II) or As(III) concentration ranging from 1 to 200 mg L −1 (denoted as "Cd + As" or "As + Cd"). The Langmuir and Freundlich models were adopted for data fitting for the adsorption isotherm experiments of Cd(II) and As (III) 33,47 , as shown in SI. 2.
To further understand the adsorption performance and mechanisms of Fh@BC for Cd(II) and As(III), solution pH, competition ions, and oxidation capacity of Fh@BC for As(III) were also examined. The detailed methods are described in SI. 3, 4. Measurements and characterization. Element contents of PBC and Fh@BC before and after adsorption were determined by X-ray fluorescence spectroscopy (XRF; Cadence, XOS, USA). Morphology and element features of PBC and Fh@BC were characterized by a scanning electron microscope with energy dispersion spectrometry (SEM-EDS; SU8020, Hitachi, Japan). N 2 adsorption-desorption isotherms were determined using an ASAP 2460 analyzer (Micromeritics Instrument, USA). The specific surface areas of PBC and Fh@BC were calculated by the Brunauer-Emmett-Teller (BET) method. Surface functional groups of PBC and Fh@BC were evaluated according to the Fourier transform infrared (FTIR) spectra obtained with a Nicolet IS10 instrument (Thermo Fisher Scientific, USA) with a scanning range of 4000-400 cm −1 . X-ray diffraction (XRD) was used to identify the crystalline structures of PBC and Fh@BC at a scanning rate of 6° min −1 and a 2θ range of 10°-90°, using a BRUCKER D8 (Bruker, Germany). The chemical states of the elements were evaluated by X-ray photoelectron spectroscopy analysis (XPS) using a Thermo ESCALAB 250Xi (Thermo Fisher Scientific, USA). All spectra were calibrated with the binding energy of the C1s peak at 284.8 eV. The concentrations of total As or As(III) in solution were quantified using atomic fluorescence spectrometry (AFS; PERSEE, China). The concentrations of Cd(II) and iron (Fe) in the aqueous solution were quantified by flame atomic absorption spectrometry (AAS; PERSEE, China). Solution pH was measured using a pH meter (H160NP, Hach, USA).
Results and discussion
Characterization of Fh@BC. As illustrated in Fig. S1, the pore size of Fh@BC was about 10 μm. From Table S1, iron (hydr)oxides loaded on the Fh@BC. Moreover, the BET of Fh@BC increased from 3.76 to 4.13 m 2 g −1 during the synthesis process in the Fe (NO 3 ) 3 solution (pH < 2).
The FTIR analysis was shown in Fig. S2a. The broadband of Fh@BC near ~ 3340 cm −1 was strengthened and was attributed to the stretching vibration of FeO-H 33,48 or RO-H 32,49,50 , This indicated that -OH was induced on the PBC during the modified procedure. The peak at 1591 cm −1 (assigned to aromatic C=C or C=O groups of the carboxyl) weakened after interaction with Fe 3+ , implying that aromatic C=C or C = O groups of the carboxyl unit were consumed during the modification process 26 . Fh@BC showed a new characteristic peak at 1376 cm −1 resulting from the stretching Fe-OH vibrations 48 . As mentioned above, iron (hydr)oxides formed on the surface or in the pores of PBC.
The XRD pattern of Fh@BC is illustrated in Fig. S2b. A strong and broad peak at approximately 23° indicated that the crystal plane resulted from typical disordered glassy polymers of carbon 51 . The CaCO 3 diffraction peak of Fh@BC became weak and even disappeared after the modification, with a decreased peak at 873 cm −1 on FTIR; this result is consistent with the reduced calcium contents of Fh@BC in Table S1. In contrast, the related characteristic peaks of iron (hydr)oxides in the XRD patterns were not observed. As indicated by the Fe contents loaded on biochar (Table S1) and XRD patterns of the pure ferrihydrite 48,52 , iron-containing compounds on the surface or in the pores of Fh@BC were weakly crystalline iron (hydr)oxides 42 .
As(III) and Cd(II) adsorption in a simultaneous or sequential addition system. Adsorption capac-
ity in a single or binary solution. As illustrated in Fig. S3, the order of addition of the As(III) and Cd(II) solutions had a significant effect on their adsorption capacities for As(III) or Cd(II). Among the various treatments, Cd(II) adsorption in the "Cd + As" system and As(III) adsorption in the "As + Cd" system was significantly greater than the other groups (P < 0.05). Compared to the single solution, the subsequent addition of As(III) or Cd(II) improved the adsorption of Fh@BC for Cd(II) by 0.29 mg g −1 or for As(III) by 0.24 mg g −1 . However, the pre-equilibrated As(III) or Cd(II) exhibited a significant inhibition effect on the adsorption ability of Fh@ BC for subsequently added Cd(II) (− 0.71 mg g −1 ) or As(III) (− 0.37 mg g −1 ) (P < 0.05), indicating that As(III) and Cd(II) can compete for the same adsorption sites on the iron (hydr)oxides of modified biochar, which can adsorb the As(III) and Cd(II) simultaneously due to the complexation 9 . Compared to "Cd", "As" and "Cd and As", we found that As(III) inhibited the adsorption of Cd(II) in the binary solution (simultaneous addition) on account of the higher complexation constant between As(III) and iron (hydr)oxides than that of Cd(II) 42,53 .
Adsorption characteristics in a single or binary solution. The adsorption of As(III) and Cd(II) by Fh@BC attained equilibrium within 5 h and 10 h, respectively, as illustrated in Fig. 1a,b. When As(III) or Cd(II) solution was added to the pre-equilibrated Cd(II) or As(III) solution, the adsorption capacity of Fh@BC for Cd(II) or As(III) was improved, indicating that the sequential addition had a dramatic effect on the adsorption capacity and the adsorption rate of As(III) and Cd(II) 9 . The PSO with a higher R 2 value much more precisely described the adsorption process of Cd(II) and As(III) in all treatments than the PFO. These results indicated that the removal of Cd(II) and As(III) by Fh@BC was mainly due to the chemical reaction 10 . Further, it appears that the adsorption of Cd(II) and As(III) by Fh@BC was a multiple-step process, which may include external surface diffusion, intraparticle diffusion, and valence forces 5 . Similar results were also found in other studies 5,10 . As can be seen from Table 1, K2 (0.68) of Cd(II) in the "As + Cd" system and K 2 (1.29) of As(III) in the "Cd + As" system were much higher than that of Cd (0. 46 www.nature.com/scientificreports/ pre-equilibrium of As (III) or Cd(II) could improve the adsorption rate of the subsequently added Cd(II) or As(III) in the binary solution.
The results of the adsorption isotherm experiments in Fig. 1c,d indicated that the sequential addition of Cd(II) (5 mg L −1 ) or As(III) (5 mg L −1 ) improved the adsorption capacity of Fh@BC for As(III) or Cd(II). Combined with the results in Table 1, it appeared that the Freundlich model provided a better description of Cd(II) adsorption www.nature.com/scientificreports/ by Fh@BC in the single and binary solution, indicating that the adsorption by Fh@BC included monolayer adsorption and multi-layer adsorption on the biochar surface and iron (hydr)oxides. In contrast to Cd(II), the Langmuir model was a good fit for the adsorption process of As(III) in the single and binary solution, indicating a one-layer distribution on the surface of the Fh@BC 5 .
According to the Langmuir model, Fh@BC displayed a great adsorption capacity for As(III) and Cd(II) in Table 1. The Langmuir maximum adsorption capacity of As(III) and Cd(II) on Fh@BC in the single solution was 18.38 and 18.18 mg g −1 , respectively. For estimating the adsorption performance, the Langmuir maximum adsorption capacities of Fh@BC for Cd(II) and As(III) were compared with other modified biochars reported in previous studies, as shown in Table S2. Fh@BC, with a simple process of modification, exhibited much more excellent adsorption capacity for simultaneous removal of As(III) and Cd(II) than others 9,54 . This excellent performance of Fh@BC for As(III) and Cd(II) might be explained by the adsorption by PBC and the complexation of iron (hydr)oxides 14,42,55 .
Mutual effect between As(III) and Cd(II) in a binary solution. The mutual effects between As(III)
and Cd(II) in binary solution with simultaneous addition were considered in Fig. 2. The results indicated that the adsorption capacity of Fh@BC for Cd(II) was improved by the high concentration of As(III) in binary solution with simultaneous addition in Fig. 2a. However, the simultaneous addition of Cd(II) had little effect on the adsorption capacity of Fh@BC for As(III) in Fig. 2b. As discussed above, it can be concluded that the coexisting As(III) improves the adsorption capacity of Fh@BC for Cd(II) in the binary system. Compared with the "Cd" system, the sequential addition of 5 mg L −1 As(III) solution promoted the adsorption of Cd(II) by Fh@BC in the "Cd + As" system by 0.09-4.47 mg g −1 , as shown in Fig. 3a. In turn, compared to the "As" system, the pre-equilibrated Cd(II) changed the adsorption capacity of Fh@BC for the sequentially added As(III) by -0.30-0.59 mg g −1 . A synergy effect between As(III) and Cd(II) was observed when the concentration ) on adsorption capacities in "Cd + As" system (a) and "As + Cd" system (b) with sequential addition. www.nature.com/scientificreports/ ratio of Cd(II) to As(III) ranged from 75:5 to 200:5. As shown in Fig. 3b, compared with the "As" system, the addition of 5 mg L −1 Cd(II) solution also promoted the adsorption of Fh@BC for As(III) in the "As + Cd" system by 0.05-3.98 mg g −1 . Similar to the "Cd + As" system, a synergy effect between As(III) and Cd(II) in the "As + Cd" system was observed when the concentration ratio of As(III) to Cd(II) ranged from 5:5 to 200:5. Compared to the "Cd" system, the pre-equilibrated As(III) also changed the adsorption capacity of the sequentially added Cd(II) by − 0.11-0.45 mg g −1 . As(III) and Cd(II) mutually facilitated the adsorption of Fh@BC for Cd(II) or As(III) in binary solution (the "Cd + As" or "As + Cd" system), depending on the ratio of As(III) to Cd(II).
These results indicate that there were competition and promotion between As(III) and Cd(II) on their adsorption capacities on Fh@BC in the binary solution. The interactions depended on the addition sequence of As(III) and Cd(II), which determined the competition between As(III) and Cd(II) on adsorption sites of modified biochar. Wu et al. 9 demonstrated that the presence of As(III) facilitated Cd(II) adsorption, while the presence of Cd(II) suppressed As(III) adsorption on the modified biochar (calcium-based magnetic biochar). The speculated reason for the facilitation of As(III) on Cd(II) was on account of the electrostatic interaction and the formation of type B ternary surface complexes (=Fe-O-As-O-Cd), and the inhibition of Cd(II) on As(III) was due to the same adsorption sites (i.e., iron (hydr)oxides) on the modified biochar. Other studies and our study have also found competition and facilitation between Cd and As in the binary solution (i.e., the facilitation of As on Cd) 5,10 , and the potential reasons for the facilitation of As(III) on Cd(II) are the electrostatic interaction and the formation of ternary surface complexes (type B) 4,9,56 . Besides, we found an interesting and fresh phenomenon is that Cd(II) could facilitate the adsorption of Fh@BC for As(III) in the binary solution. The speculated reason for the facilitation of Cd(II) and As(III) might include the electrostatic interaction and the formation of type A ternary surface complexes based on the results of Wu et al. 9 and Liu et al. 4 . When the pH of the solution was 7, the dominated species of As(III) was H 3 AsO 3 during the batch experiment as shown in Fig. S4. This result demonstrated that electrostatic attraction was not responsible for the As(III) adsorption by Fh@BC in the binary solution; instead, surface complexation between H 3 AsO 3 and iron (hydr)oxides on Fh@BC likely occurred 4 and then determined the adsorption capacity of Fh@BC for As(III). Based on this, it further inferred that the facilitation of Cd(II) on As(III) might be a result of the formation of type A ternary surface complexes 56 , and the plan of quantitative evidence should be considered to clarify the contribution of type A in the next work.
Adsorption mechanism analysis. As mentioned above, Fh@BC could simultaneously adsorb Cd(II) and As(III) in the binary solution, and the potential mechanisms of Fh@BC adsorption for As(III) and Cd(II) are discussed as follows. First, the changes in the pH of the beginning solution (pH b ) and equilibrium solution (pH e ) might verify the existence of Fh@BC protonation and deprotonation process during the adsorption process ( Fig. S4a and b). The main species of Cd(II) and As(III) in aqueous solution at pH = 7.0 were Cd 2+ and H 3 AsO 3 , respectively ( Fig. S4c and d). The adsorption of Fh@BC for Cd 2+ in aqueous solution at pH = 7.0 might be due to electrostatic attraction between Cd 2+ and negatively charged biochar 26,57 , while the adsorption of Fh@BC for H 3 AsO 3 at pH = 7.0 could be attributed to the complexation of iron (hydr)oxides 36 . As shown in Fig. S5, ion exchange or electrostatic attraction for Cd(II) adsorption could be identified by the influence of the coexisting cations (Ca 2+ > Mg 2+ > K + > Na + ) 36 . This indicates that the electrostatic interaction influenced the adsorption performance of Fh@BC for Cd(II), depending on the ionic radius and charge 58 . The co-existence of anions (except H 2 PO 4 − ) presented a slight effect on the adsorption of Fh@BC for As(III). This can be attributed to the formation of inner-sphere complexation between As(III) and Fh@BC 52,59,60 . With an increasing H 2 PO 4 concentration, the adsorption of Fh@BC for As(III) significantly decreased due to the strong competition between H 2 PO 4 and As(III) 58,61 . Further, Fh@BC exhibited an oxidation capacity for As(III), because ferrihydrite is a natural Fenton reagent that can oxidize As(III) to As(V) 43,44 . The oxidation capacity in this study was about 8.38 mg g −1 (Fig. S6).
The FTIR spectrum was presented in Fig. 4a. The weakened characteristic peak of Fh@BC(200Cd) at ~ 3400 cm −1 was attributed to Cd(II) complexation with Fe-OH 33 or R-OH 32,49,50 . The reduction and shift in the characteristic peaks near 1700 and 1600 cm −1 indicated that aromatic C=C or C=O of the carboxyl unit were consumed during Cd(II) adsorption 26 , and Cd-π interaction may have occurred 33 . Moreover, the characteristic peak of Fh@BC(200As) at ~ 3400 cm −1 was larger as a result of -OH derived from H 3 AsO 3 , despite the existence of complexation between Fe-OH and H 3 AsO 3 60 . The XRD results of Fh@BC before and after adsorption were shown in Fig. 4b. The absence of well-crystallized minerals indicated that the iron (hydr)oxides on the Fh@BC were still amorphous after adsorption of Cd(II) and As(III). Moreover, the characteristic peaks of As or Cd in the XRD results were not observed. Similar results have been found in other studies that have applied iron-modified biochar for the adsorption of Cd or/and As in the single or binary solution 4,9 . C 1s, O 1s, Fe 2P, Cd 3d, and As 3d XPS spectra were used to analyze the evolution of the functional groups on Fh@BC, as illustrated in Fig. 5. The C1s XPS spectrum was divided into three characteristic peaks at the binding energies of 284.8, 286.1 and 288.8 eV, assigned to C-C/C=C, C-O and O-C=O, respectively 33,46 (Fig. 5b). After As(III) and Cd(II) adsorption, the percentage of C-C/C=C increased from 62.59% to 69.44% and 69.93%, respectively; the ratio of O-C=O decreased from 7.55% to 4.86% and 6.29%, respectively. The shift among the oxygen-containing functional groups indicated that the hydroxyl and carboxyl groups on Fh@BC were involved in the complexation with As(III) and Cd(II) during the adsorption process 26 .
The O1s XPS spectrum was classified into five peaks with binding energies of 530.2, 531.5, 532.6, 533.6 and 534.4 eV, representing metal oxide (M-O), quinone, C=O, C-OH/Fe-OH/C-O-C and -COOH, respectively 31,33,39,62,63 (Fig. 5c). After adsorption for Cd(II) and As(III), the ratios of C-OH and Fe-OH on Fh@ BC-Cd and Fh@BC-As dramatically decreased from 27.96 to 9.56% and 10.02%, respectively; the percentage of -COOH decreased from 21.70 to 3.48% and 0.78%, respectively. These findings indicated that Cd(II) and As(III) could form complexes with C-OH, Fe-OH and -COOH 33 . Moreover, after reacting with As(III) and Cd(II), the 64 (Fig. 5d). After the adsorption for Cd(II) and As(III), the percentage of Fe(III) on Fh@BC(200Cd) and Fh@BC(200As) changed from 81.46 to 81.83% and 68.31%, respectively. The shift of Fe 2p on Fh@BC(200As) indicated that Fe(III) on/in Fh@BC was reduced during the adsorption process, this process coupled with oxidation of As(III).
The Cd 3d3/2 and Cd 3d5/2 spectra indicated the existence of Cd-O 35 (Fig. 5f). As 3d was deconvoluted into two peaks (i.e., As 3d3/2 and As 3d5/2), the proportions of As(III) and As(V) on Fh@BC(200As) were 49.91% and 50.09%, respectively (Fig. 5e). This was supported by the oxidation capacity test of Fh@BC for As(III), which was up to 80.46.% (Fig. S6). Previous studies have demonstrated that iron (hydr)oxides have a strong oxidation capacity for As(III) 33,65 . In addition, As(III) and As(V) generally form bidentate complexes and monodentate complexes with ferrihydrite 42,48 .
Based on the above, it concluded that Fh@BC could adsorb Cd(II) and As(III) simultaneously in aqueous solution, according to the following potential mechanisms (Fig. 6): (1) cation exchange; (2) complexation of Cd(II) and As(III) with oxygen functional groups, including -COOH, C-OH and Fe-OH; (3) coordination between π-electrons and the C=C of the aromatic structure; and (4) oxidation for As (III). In this paper, the results further confirmed that As(III) could promote the adsorption of Fh@BC for Cd(II), and found a fresh result that the presence of As(III) or Cd(II) could promote the adsorption of Fh@BC to each other (synergy), depending on the concentration ratio and the addition sequence of As(III) and Cd(II). Synergy mechanism between Cd(II) and As (III) in aqueous solution might include the following aspects: (1) electrostatic interaction; (2) the formation of ternary surface complexes (type A or type B).
Conclusion
In this study, a simple synthetic ferrihydrite-modified biochar (Fh@BC) was applied for the simultaneous removal of As(III) and Cd(II) in aqueous solution. The Langmuir maximum adsorption capacity of Fh@BC for As(III) and Cd(II) in the single adsorbate solution was 18.38 and 18.18 mg g −1 , respectively. It demonstrated that Fh@BC had the potential for simultaneous removal of As(III) and Cd(II) in aqueous solution. The adsorption mechanisms of Fh@BC for Cd(II) or As(III) mainly included ion exchange and complexation. Moreover, the mutual effect experiment indicated that Cd(II) and As(III) adsorption on Fh@BC in the binary solution exhibited competition, facilitation and synergy, depending on the sequence and concentration ratio of Cd(II) and As(III). | 5,691 | 2022-04-08T00:00:00.000 | [
"Engineering"
] |
Fully solvable mathematical scheme in finding out the right mass and width values of f0(500) and ρ0(770) mesons
Abstract. Starting from the phase representations with one subtraction of the pion scalar-isoscalar and vector-isovector charged pion electromagnetic form factor and exploiting the most accurate information on the S-wave isoscalar and the P-wave isovector ππ scattering phase shifts, respectively, to be obtained from the existing inaccurate experimental data by means of the GarciaKamiński-Peláez-Yndurain Roy-like equations with an imposed crossing symmetry condition, in the framework of the so-called "fully solvable mathematical scheme" the most reliable values of the f0(500) and ρ0(770) meson mass and width are found.
Introduction
If there is a function F(t) to be analytic in the whole complex t-plane besides the cuts on the positive real axis from the lowest square root branch point t = 4 to +∞, fulfills the elastic unitarity condition ImF(t) = F(t)e −iδ sin δ in elastic region with δ to be some of the ππ scattering phase shifts, the asymptotic behavior F(t) |t|→∞ ∼ 1/t, the reality condition F * (t) = F(t * ) and it is normalized at t = 0 to one, then one can write down using the Cauchy formula a dispersion relation with one subtraction at the point t = 0, which together with the unitarity condition through the Omnés-Muskelishvili integral equation leads to the phase representation of F(t) to be the starting point for our further investigations. Under the "fully solvable mathematical scheme" [1] it is understood a procedure leading to a very simple form of F(t) in the variable q = [(t − 4)/4] 1/2 (2) by means of an explicit calculation of the integral in the phase representation (1). The function F(t) on the positive real axis for t > 4 is complex with the phase δ F to be defined by the relation which, however, due to the elastic unitarity condition, phenomenologically verified to be valid approximately up to 1 GeV, is identical with the ππ scattering phase shift δ.
Since the transformation (2) is in fact a conformal mapping of the two-sheeted Riemann surface in t variable into one q-plane, elastic cut, generated by the square root branch point t = 4, disappears.
Noticing the conformal mapping (2) in more detail, the first Riemann sheet in t variable, containing only branch points corresponding to opening various thresholds and zeros of F(t), is mapped on the upper half q-plane, whereby position of the branch point t = 4 and the normalization point t = 0 are mapped into q = 0 and q = +i, respectively, and the real axis from −∞ up to t = 4, on which F(t) is a real function, is mapped on the whole positive imaginary axis of the q-plane.
The second Riemann sheet in t variable, containing branch points, again some zeros and also complex conjugate pairs of poles, which control the shape of F(t), is mapped on the lower half q-plane.
If we restrict ourselves only to the elastic region and neglect contributions to F(t) of all inelastic branch points beyond 1 GeV, then there are only zeros in the upper and lower half q-plane to be accounted as roots of a polynomial in numerator and conjugate pairs of poles according to the imaginary axis exclusively in the lower half q-plane to be accounted as roots of a polynomial in denominator of F(t).
As a result one can represent F(t) in the form of the following rational function Multiplying the numerator and denominator by the complex conjugate denominator, Eq. (4) is changed to the form The reality condition F * (t) = F(t * ) results in the reality of (5) on the positive imaginary axis of the q-plane. Then it is straightforward to see that the expression actually fulfils the latter claim and through the definition of the phase of F(t) leads to the following parametrization of the ππ scattering phase shift δ δ = arctan with all coefficients to be real. This parametrization is derived directly from the basic principles like analyticity, unitarity and reality condition.
Pion scalar-isoscalar form factor and f 0 (500) meson parameters
The pion scalar-isoscalar form factor (FF) Γ π (t) is defined by the matrix element of the scalar quark density , 0 (201 E Web of Conferences https://doi.org/10.1051/e onf /201920401020 PJ pjc 9) 204 10 Baldin ISHEPP XXIV 20 with t = (p 2 − p 1 ) 2 andm = 1 2 (m u + m d ), and fulfils all properties of the function F(t) defined in Sect. 1. Even the normalization of physically nonmeasurable pion scalar FF Γ π (t) is equal to one as it is seen from the calculated pion sigma term value with error in the framework of the χPT [2], if the pion mass m π is taken to be one.
Then the phase representation of the pion scalar-isoscalar FF is where δ 0 0 (t) is now the S-wave isoscalar ππ scattering phase shift to be exactly equal to the parametrization (7) with the parameter A 1 to be identified with the S-wave isoscalar ππ scattering length a 0 0 . The number of nonzero parameters in (7) and their numerical values are found in a fitting procedure of the most precise data on δ 0 0 (t) with theoretical errors in Fig. 1 to be generated by Garcia-Kamiński-Peláez-Yndurain (GKPY) equations [3] for the S-wave isoscalar ππ scattering amplitude. Figure 1. The data on δ 0 0 (t) with theoretical errors to be generated by GKPY equations for the Swave isoscalar ππ scattering amplitude. Solid line represents our optimal fit of data with 5 nonzero coefficients A i in (7) The data in Fig. 1 have been analyzed by the relation (7) up to the moment, when the minimum of χ 2 /ndf was achieved. The latter has been found with the first 5 nonzero coefficients A i of the values and the final form of the S-wave isoscalar ππ scattering phase shift δ 0 0 (t) takes the form , 0 (201 E Web of Conferences https://doi.org/10.1051/e onf /201920401020 PJ pjc 9) 204 10 Baldin ISHEPP XXIV 20 Substitution of (11) into (10), however, leads to the expression which does not allow one to calculate the corresponding integral explicitly.
In order to carry out it practically, it is convenient to decompose the integral (14) into a sum of two integrals according to singularities to be placed in the upper or lower half q-plane, as it is sketched in Fig. 2.
Then the explicit form of is obtained in the straightforward way, if in the case of the first integral the contour of integration is closed in the upper half q-plane (see Fig. 2). In a such way one avoids complicated calculations of the cut-contributions to be automatically generated by branch points under logarithms, if the contours are drawn as it is demonstrated in Fig. 2.
The substitution of (17) into (13) leads to the explicit form of the pion scalar-isoscalar FF where P n (t) is any polynomial normalized at t = 0 to one, however, it has not violate the asymptotic behavior of the pion scalar-isoscalar FF. The pole q = q * 3 on the second Riemann sheet in t-variable corresponds to the f 0 (500) meson resonance, now with the mass and the width, m σ = (459 ± 29) MeV and Γ σ = (517 ± 77) MeV [7], respectively, which are compatible with the parameters obtained in [4,5].
Vector pion electromagnetic form factor and ρ 0 (770) meson parameters
The vector isovector charged pion electromagnetic (EM) form factor F EM,I=1 π (t) is defined by the matrix element of the pion EM current J EM µ as follows with e to be the electric charge and t = (p 2 − p 1 ) 2 the momentum transfer squared. The F EM,I=1 π (t) also fulfils all properties of the function F(t) defined in Sect. 1, including the normalization F EM,I=1 π (0) = 1, if the electric charge is put to be one. Then F EM,I=1 π (t) can be represented by the phase representation where δ 1 1 (t) is now the P-wave isovector ππ scattering phase shift and P n (t) is polynomial normalized at t = 0 to one, however, it has not violate the asymptotic behavior of the charged pion EM FF.
In this case the model independent parametrization (7), taking into account a threshold behavior of the δ 1 1 (t) |q|→0 ∼ a 1 1 q 3 , has to be adapted to the form where the parameter A 3 is identified with the P-wave isovector ππ scattering length a 1 1 . The number of nonzero parameters in (23) and their numerical values are again found in a fitting procedure of the most precise data on δ 1 1 (t) with theoretical errors in Fig. 4 to be generated by Garcia-Kamiński-Peláez-Yndurain equations [3] for the P-wave isovector ππ scattering amplitude.
The minimum of χ 2 /ndf was achieved by the first 4 nonzero values of coefficients in (23) Figure 4. Optimal description of the GKPY δ 1 1 (t) data. Solid line represents our optimal fit of data with 4 nonzero coefficients A i in (23) and the roots of the corresponding polynomials in the numerator and denominator of the equivalent to (23) logarithmic relation are q 1 = −2.5480 ± 0.0020 +i0.2752 ± 0.0016, q * 1 = −q 2 , q 2 = 2.5480 ± 0.0020 +i0.2752 ± 0.0016, q * 2 = −q 1 , A substitution of (25) with (26) into (22) leads to and the integral in (27) is calculated in the same way as it was done in the case of the scalarisoscalar pion FF. The pole q = q * 1 on the second Riemann sheet in t-variable corresponds to the ρ 0 (770) meson resonance, with the mass and the width, m ρ = (763.56±0.51) MeV and Γ ρ = (143.09± 0.82) MeV [7], respectively.
Conclusions
Taking into account phase representations with one subtraction of the pion scalar-isoscalar form factor and vector-isovector charged pion electromagnetic form factor and exploiting the , 0 (201 E Web of Conferences https://doi.org/10.1051/e onf /201920401020 PJ pjc 9) 204 10 Baldin ISHEPP XXIV 20 most accurate up to now information on the S-wave isoscalar and the P-wave isovector ππ scattering phase shifts, respectively, to be obtained from the existing inaccurate experimental data by means of the Garcia-Kamiński-Peláez-Yndurain Roy-like equations with an imposed crossing symmetry condition, in the framework of the so-called "fully solvable mathematical scheme" the most reliable values of the f 0 (500) and ρ 0 (770) meson mass and width are determined.
The work was supported by VEGA grant No.2/0153/17. | 2,515.8 | 2019-01-01T00:00:00.000 | [
"Physics"
] |
Brief communication : " Oldest Ice " patches diagnosed 37 km southwest of Dome C , East Antarctica
The presence of ice as old as 1.5 Ma is very likely southwest of Dome C, where a bedrock relief high makes the ice thin enough to prevent basal melting. Three-dimensional ice flow modelling is required to ensure that the basal ice is old enough above the bedrock, and that the age resolution of the ice archive is sufficient. A 3D ice flow simulation is led to calculate selection criteria that together locate patches of ice with likely old, well-resolved, undisturbed and datable ice. These patches on 5 the flanks of the bed relief are a balance trade-off between risks of basal melting and sufficient age resolution. The trajectories of the ice particles towards these sites are short and the ice flows over a smoothly undulating bed. Several precise locations of potential 1.5 Ma-old ice are proposed, to nourish the collective thinking on the precise location of a future drill site.
1 Introduction 10 Antarctic ice is an exceptional archive of the Earth's paleoclimates across all the glacial/interglacial periods, and the only one that contains direct samples of ancient atmospheres.The oldest available ice archive goes back 0.8 Ma in time (EPICA Dome C ice core, Jouzel et al., 2007), but is not old enough to study a main climatic transition that occurred between 1.2 Ma and 0.9 Ma, known from the temporal variations of the isotopic composition of benthic sediments (mid-Pleistocene transition, 35 ter that characterizes the vertical deformation through the ice column.We showed that the observed isochrones are compatible with high basal age and sufficient resolution.However, these two previous studies neglect horizontal motion of ice, whereas the trajectories of the ice particles are definitely not vertical.The travel time of these particles, and consequently their age, is strongly influenced by the shape of the bedrock and the ice surface.We are presently lacking the description of the local 3D 40 state of stress of the ice, where the geometry of the terrain and the strain history of the ice particles can be properly taken into account. To evaluate the quality and position of deep old ice, we here proceed to a steady state 3D ice flow simulation, at a regional scale.Whereas these two previous 1D modelling results could not help us determine precise oldest-ice targets, we here intend to provide objective criteria that together 45 delimit kilometer-scale patches of old, well-resolved, undisturbed basal ice.The bottom-most ice recovered should be older than the MPT, ideally as old as 1.5 Ma such that several climate cycles pre-MPT can be recorded.The vertical age resolution has to be better than 10 ka m −1 to detect high-frequency climatic variability in the ice core (Fischer et al., 2013).Moreover, basal ice will probably be disturbed, similarly to the deepest 60 m of the EPICA Dome C ice core (Jouzel et al., 50 2007;Tison et al., 2015).We have no better evaluation of the height of disturbed ice in the region, and we will similarly take 60 m as a security margin for the drilling minimum distance from the bedrock, but we should keep in mind this cutoff height could be an underestimation.One should look for places where the mechanisms responsible for stratigraphy disturbances (cumulated basal shear, bedrock roughness) should be minimal.Convergent flow should be avoided as well, because 55 it tends to thicken basal layers.This is defavourable to recovering oldest ice as it will shift older layers downwards, and makes dating process more complex (Tison et al., 2015).Finally, location of the future drill site should be above the highest subglacial lake detected by the radar survey (Young et al., 2017), otherwise the risk of basal melting could be drastically increased ("water limit").(Fretwell et al., 2013;Young et al., 2017) and basal melt rate (Passalacqua et al., 2017) (Fretwell et al., 2013), except on the bedrock relief southwest of Dome C, where a dense airborne radar survey has been recently collected (Young et al., 2017).The firn is accounted 65 for by considering a mass-equivalent layer of ice at the surface; the modelled ice thickness is then simply 30 m thinner than the observed thickness (Schwander et al., 2001).Horizontal resolution of the corresponding mesh is 1 km.This mesh contains 20 vertical elements, the deepest one being 25 times finer than the upper one, so that the velocity field is better resolved near the bottom.
Our domain sits in the center of the Antarctic plateau, and lateral boundary conditions do not 70 correspond to physical boundaries, but to virtual vertical surfaces.As the domain contains the dome summit, the ice flow is divergent and no input flow should be considered.We impose velocity boundary conditions corresponding to the shallow-ice approximation (SIA) (Hutter, 1983).The ice surface as observed today may not correspond to the chosen rheology and is relaxed for 50 years such that the present surface slope does not induce an unrealistic velocity field.The part of ice motion attributed 75 to basal sliding is not known precisely in the Dome C region, and depends on water circulation.
As we here focus on a region were basal melting is probably null, and for the sake of simplicity, a no-sliding condition is imposed at the bottom of the ice column.Vertical velocities at the base are equal to the basal melt rate output from previous modelling work (Fig. 7 in Passalacqua et al., 2017).As the main interest of this work focuses on what happens for deepest ice, mechanical anisotropy of the ice has to be accounted for.The relation between strain rates and stresses are described by the generalized orthotropic linear flow law (GOLF, Gillet-Chaulet et al., 2005), given a certain vertical profile of ice fabric.By introducing a dependence on the second invariant of the deviatoric stress, this 90 law can be extended to the non-linear case (Ma et al., 2010).The fabric profile is only known at the dome summit (Durand et al., 2009), and shows that the ice mainly undergoes vertical compression, but also longitudinal extension in the deep layers (Tison et al., 2015).However, there is no reason to use the very same fabric profile everywhere else, where shearing is more influent or the bed shape different.In this short study we will not discuss the influence of the chosen rheology, but we 95 first made sure that the computed surface velocities correctly simulated the horizontal velocity field measured at the surface by Vittuari et al. (2004) for different n values and fabric profiles.Hence, we decided to use the widely-accepted value of 3 for n, and a synthetic vertical fabric profile, for which the eigenvalues of the second order orientation tensor evolve linearly with normalized depth, from isotropy at the surface to a single maximum fabric at the bottom (see a similar treatment in Sun 100 et al., 2014).
Model outputs
Back trajectories are computed from the 3D velocity field using a Lagrangian scheme such that the age is known along the forward trajectory of the ice particles, and an age field is generated.The age resolution could be calculated from the vertical derivative of this age field, but we found it more 105 accurate to track the annual layer thickness λ from the ice surface (Whillans, 1979) using where ˙ zz is the vertical strain rate.This formulation neglects vertical rotation effects that tends to overturn internal layers.This assumption is reasonable if internal layers are mainly horizontal, which is the case over the studied bedrock relief.The age resolution is then computed as the inverse of λ, Beyond the computation of ages and age resolutions, the 3D simulation is also useful to detect where deep ice is more likely to be folded.Shearing will tend to amplify small wrinkles in the ice layers and so disturb the ice's basal stratigraphy, whereas longitudinal extension will tend to flatten them.Competition between shear and longitudinal stresses can be represented by a dimensionless 120 shear number (Waddington et al., 2001) where ˙ XZ is the shear strain rates along ice flow, ˙ XX and ˙ ZZ , the local longitudinal and vertical strain rates.Waddington et al. (2001) use this shear number as a criterium to detect if a given wrinkle can be amplified by shear.More simply here, we use it to predict the presence of undisturbed ice, 125 whereby the smallest shear number is best.
Selection of favourable locations
If the absolute value of the age, age resolution, or strain rates can be discussed regarding the choices of the model parameters, the outputs still keep their relevance when analysed relatively to themselves.The existence of older, better-resolved, less disturbed ice, is much less sensitive to the mag-130 nitude of ice velocity than to the shape of the bedrock or that of the ice surface elevation.As a consequence, this short study focuses on getting practical information for the decision-making process rather than on discussing the uncertainties or the influence of the chosen model parameters.
The five selection criteria (age, age resolution, bed curvature, shear number and bed height) are used to compute five masks, thresholded as follows.For bed curvature, 0 m −1 would have been the natural threshold but it was too much restrictive and we decided for a slightly higher value (2 10 −5 m −1 ).The shear number threshold appeared naturally by studying its spatial evolution (see §3.3).The bed height threshold correspond to the "water limit", at 480 m.Furthermore, our results show that most of the subglacial elevated bed relief southwest of Dome C is favourable to the existence of 1.5 Ma-old ice, so we adopted more conservative age and age resolution thresholds for The area identified as possibly hosting oldest ice is elongated along the bedrock relief, and stands at an intermediate bed elevation (mainly between 400 m and 550 m above seal level, Fig. 2).Neither the very top of the bedrock relief, nor its lowest foothills appear to be suitable for the archiving process of very old ice.The basal melt rates imposed as boundary condition are null on the upper part of the bed relief, therefore infinite age are calculated for the very basal ice.In that case, the age 150 of the ice standing 60 m above the bed is strongly dependent on the ice thickness.On the top of the bedrock relief, the ice is at its thinnest and the old ice then sits closer than 60 m above the bed.On the foothills of the bedrock relief, the ice is thick, basal melt rates are above zero, and the basal ice is therefore continuously being melted from the bottom.
Some places may host ice even older than 2 Ma, but they all stand below Young et al's ( 2017) water limit (Fig. 2, yellow line).The presence of very old ice in those areas is not impossible, but may also be the consequence of insufficient imposed basal melting.The transition between melting and frozen ice should stand somewhere on the flanks of the bed relief, but it is difficult to pinpoint the precise location of this threshold.Despite the promising thick ice in the region to ensure old ages and sufficient age resolution, the risk of basal melting is real.Age resolution for the deepest part of the ice column is influenced by two factors.First and obviously, the thicker the ice, the better the age resolution.As a consequence, the tops of the bedrock relief are rarely compatible with a sufficient age resolution of oldest ice.Bedrock flanks should be preferred, but some of the thickest ice areas on the flanks will be discarded as well because of an increased risk 165 of basal melting.
Second, for ice positioned close to the ice ridge, age resolution benefits from a thicknening effect of the deeper layers (so-called Raymond effect, Raymond, 1983).This results in a band of wellresolved ice, oriented along the ice ridge and perpendicular to the bedrock relief.No Raymond arche is visible in the radargramms that could argue for a strong Raymond effect time here.One 170 explanation is that the shape of the ice surface is much more rounded than at Dome C, and the produced along-ridge flow tend to dampen the amplitude of the Raymond arches (Martín et al., 2009).Moreover, the characteristic time for a Raymond arche to form here would be several 100 ka (Martín et al., 2009), during which the surface ridge probably moved, smoothing out the forming arches.Unfortunately, the past position and shape of the ridge are unknown, and drilling far from its
Stratigraphic disturbances
At a divide, the shear stress perpendicular to the divide is null, so that the shear number S of the deep ice close to the ridge is low (∼ 10).However, ice flow faces steep bed slopes with higher velocities than under the divide west side of the bedrock relief.As a consequence the shear number 180 increases very sharply and reaches much higher values (∼ 100), and this zone of high shear should be discarded for the oldest-ice challenge.All the areas for which the basal ice crossed this zone of high shear should be discarded as well, so the trajectories of the ice particles need to be represented.
Trajectories for oldest-ice spots
The best combination of age, age resolution, folding, convergence and melting criteria is shown in that could ensure a certain stability of the flow.Figure 3 shows the travelling of the ice particles towards this site, and the corresponding horizontal distance travelled does not exceed 10 km from the surface.However, as there is probably no basal melting here, the ice particle would be likely to closely follow the bed along several kilometers, in a depth range dominated by a strong vertical shear, enhanced by an unduling bed underneath.To minimize the bed influence, we could also consider a drill site located 3 km upstream, where the ice age would exceed 1.5 Ma (yellow cross in Fig. 2, red dashed line in Fig. 3).
Of course, locating a unique "best" drill site within one of these three boxes is not possible with this 3D-modelling approach only.However, it allows to define a restricted area where a new set of 200 observations will be the most valuable.We should focus on local bedrock summits or crest lines, because local troughs make the ice flow converge, but also heat flow (Van der Veen et al., 2007), increasing the risk of insufficient age and positive basal melt.Furthermore, the ice ridge probably moved laterally in the past, which is not accounted for here.As a consequence the true trajectory of an ice particle for a given drill site could even come from the northwest and turn left, or from
Figure 1 .
Figure1.Mesh, bedrock dataset(Fretwell et al., 2013; Young et al., 2017) and basal melt rate(Passalacqua used for the simulation.The red patch on the situation map (bottom left) show the hold of the domain used for calculation, and the blue outline corresponds to the hold of Fig 2.
description The Stokes equations are solved on a 83 × 114 km domain, approximately centered on the Dome C, using the finite element model Elmer/Ice.The surface and bed geometries are provided by the Bedmap2 data set
3
The Cryosphere Discuss., https://doi.org/10.5194/tc-2018-19Manuscript under review for journal The Cryosphere Discussion started: 26 February 2018 c Author(s) 2018.CC BY 4.0 License.As we know that the basal ice in the Dome C region is at or near the melting point (Passalac-80 qua et al., 2017; Van Liefferinge and Pattyn, 2013), the temperature profile measured in the EPICA Dome C borehole is a good representation of the thermal structure of the ice in the Dome C vicinity.Hence, we account for ice temperature by using the same normalized temperature profile on all the domain.Solving the coupled thermo-mechanical equations would require heavy computing resources, without radically changing the ice fluidity -which is mainly controlled by temperature -85 and the trajectories of the ice particles.
110 considered as the layer thickness for a single year.The way ice strains by flowing over a rough bed differs depending on the orientation of ice flow, and a same bedrock relief could be a convergence or a divergence area.Once the velocity field The Cryosphere Discuss., https://doi.org/10.5194/tc-2018-19Manuscript under review for journal The Cryosphere Discussion started: 26 February 2018 c Author(s) 2018.CC BY 4.0 License.isknown, a local coordinate system (X, Y, Z) can be defined at each point, for which the X-axis is oriented along flow.The curvature of the bed perpendicular to the flow is then computed everywhere, 115 and convergence areas are identified where the bed curvature is positive.
140
the selection of smaller suitable locations (1.8 Ma for the age and 8.5 ka m −1 for the age resolution, 60 m above the bed).A logic combination of the five masks delineate the patches fulfilling all our selection criteria.The Cryosphere Discuss., https://doi.org/10.5194/tc-2018-19Manuscript under review for journal The Cryosphere Discussion started: 26 February 2018 c Author(s) 2018.CC BY 4.0 License. 3 Results and interpretation 3.1 Age at 60 m above the bed 145
Figure 2 .
Figure 2.This figure focuses on the bedrock relief designed as such in Fig.1, located ∼ 40 km southwest of Dome C. The upper, small maps present the selection criteria thresholded as following : age > 1.8 Ma, age resolution < 8.5 ka m −1 , bed curvature < 2 10 −5 m −1 , shear number S < 40, and bed height > 480 m (simply shown here by a thick yellow contour).The bottom map shows the combination of the 5 selection criteria.Magenta boxes A, B and C correspond to patches that could be considered as our best oldest-ice targets.Dashed lines show trajectories of ice particles, the red one correspond to the profile presented in Fig. 3. Crosses locate possible drill sites, discussed in the text.Blue outlines show the best patches of Van Liefferinge et al. (2018).Projection: WGS84/Antarctic Polar Stereographic -EPSG:3031 (kilometers).7
185Fig. 2 (
Fig. 2 (bottom), revealing several spots of appropriate ice.The patches for which the trajectories are the shorter should be preferred, and two magenta boxes highlight our most promising patches.For magenta boxes A, B and C the ice originates from the divide, guided by the strong lateral divergence resulting from the shape of the ice surface.Locations within boxes C and A should be considered first for a future Oldest Ice drilling because of shorter trajectories, and less risks of stratigraphy 190
205
the southeast and turn right.By choosing a local bedrock summit, overhanging its environment, we minimize the risk that the true trajectory of basal ice met a bedrock obstacle.Considering that, only a few set of favourable drill sites remain in boxes A, B and C (crosses in Fig 2).
Figure 3 .
Figure 3. Trajectories of the ice particles from the ice surface towards the bedrock (red trajectory in Fig. 2).Red numbers indicate the age of the ice, in Ma.The bed shape is shown in grey.The thin red dashed line show one possible drill site (yellow cross in Fig.2).
3. 5
Comparison with results based on thermodynamical modelling Our results can be benchmarked against the ones of Van Liefferinge et al. (2018), who used a 210 transient thermodynamical model to compute the highest geothermal flux value that keep basal ice frozen.By comparison with available continental geothermal flux datasets, they locate patches where basal ice has likely remained frozen over 1.5 Ma.These authors also included a further mechanical constrain representing limited impact of bedrock roughness on the preservation of the bottom ice stratigraphy.They identified a 8 km-long patch covering the SE upper part of the bedrock relief, that 215 crosses our biggest, central patch, but both do not overlap pefectly (Fig. 2, bottom map, blue outlines and green areas).They also identified a site within our magenta box B, but no site in the box C.These comparative results highlights the complementarity of the two approaches.The 1D thermodynamical model of Van Liefferinge et al. (2018) has a better control on the thermal aspect of the problem than on its mechanical aspect, and selects sites that are more conservative from a heat budget point 220 of view, i.e. preferentially local bedrock heights.On the contrary, our 3D approach accounts for the horizontal strain of the ice, and select sites that are more conservative from a mechanical point of view.The upper part of the box A or the left part of the box B validate the constraints of both approaches.In our approach, the bedrock summit in box C is the safest mechanically; however, it was not selected by Van Liefferinge et al. (2018) because of a local bedrock roughness exceeding their 225 threshold of 20 m, despite the fact that their thermal criterium was fulfilled.9 The Cryosphere Discuss., https://doi.org/10.5194/tc-2018-19Manuscript under review for journal The Cryosphere Discussion started: 26 February 2018 c Author(s) 2018.CC BY 4.0 License.The Cryosphere Discuss., https://doi.org/10.5194/tc-2018-19Manuscript under review for journal The Cryosphere Discussion started: 26 February 2018 c Author(s) 2018.CC BY 4.0 License.The Cryosphere Discuss., https://doi.org/10.5194/tc-2018-19Manuscript under review for journal The Cryosphere Discussion started: 26 February 2018 c Author(s) 2018.CC BY 4.0 License. | 4,849.6 | 2018-02-26T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
EGFR-Mutated Non-Small Cell Lung Cancer and Resistance to Immunotherapy: Role of the Tumor Microenvironment
Lung cancer is a leading cause of cancer-related deaths worldwide. About 10–30% of patients with non-small cell lung cancer (NSCLC) harbor mutations of the EGFR gene. The Tumor Microenvironment (TME) of patients with NSCLC harboring EGFR mutations displays peculiar characteristics and may modulate the antitumor immune response. EGFR activation increases PD-L1 expression in tumor cells, inducing T cell apoptosis and immune escape. EGFR-Tyrosine Kinase Inhibitors (TKIs) strengthen MHC class I and II antigen presentation in response to IFN-γ, boost CD8+ T-cells levels and DCs, eliminate FOXP3+ Tregs, inhibit macrophage polarization into the M2 phenotype, and decrease PD-L1 expression in cancer cells. Thus, targeted therapy blocks specific signaling pathways, whereas immunotherapy stimulates the immune system to attack tumor cells evading immune surveillance. A combination of TKIs and immunotherapy may have suboptimal synergistic effects. However, data are controversial because activated EGFR signaling allows NSCLC cells to use multiple strategies to create an immunosuppressive TME, including recruitment of Tumor-Associated Macrophages and Tregs and the production of inhibitory cytokines and metabolites. Therefore, these mechanisms should be characterized and targeted by a combined pharmacological approach that also concerns disease stage, cancer-related inflammation with related systemic symptoms, and the general status of the patients to overcome the single-drug resistance development.
Introduction
Lung cancer is a leading cause of cancer-related deaths worldwide [1]. About 10-30% of patients with non-small cell lung cancer (NSCLC) harbor mutations, mainly in exons 18-21, of the EGFR gene, which is considered an oncogenic driver [2]. Among these, exon 19 deletion mutation (p. E746-A750del) and exon 21-point mutation (p.L858R) are the most frequent, accounting for more than 85% of EGFR mutations. Other EGFR mutations are considered uncommon [3].
EGFR tyrosine kinase inhibitors (TKIs) represent the first-line treatment and targeted therapy for patients with metastatic NSCLC harboring EGFR mutations [3]. However, drug resistance occurs in these patients, and due to the failure of the immune surveillance, cancer cell growth continues [4,5].
Immunotherapy with programmed cell death protein 1 (PD-1) and programmed death-ligand 1 (PD-L1) inhibitors is considered a promising treatment strategy for NSCLC. However, current evidence suggests that such immunotherapy is not beneficial for patients with NSCLC carrying EGFR mutations. Many preclinical and clinical studies, as well as a meta-analysis of five clinical trials (Checkmate 017 and 057, Keynote 010, OAK, and as well as a meta-analysis of five clinical trials (Checkmate 017 and 057, Keynote 010, OAK, and POPLAR), showed poor effectiveness of PD-1 inhibitors in patients with EGFR-mutated NSCLC [6,7]. Nonetheless, according to some studies, immunotherapy is effective in patients with uncommon EGFR mutations.
In this review, we will discuss the importance of the tumor microenvironment (TME) and immune response in lung cancer evolution and prognosis, the role of EGFR mutations in influencing the immune response and resistance to immunotherapy, dynamic changes in the TME and immune cells during TKI treatment, strategies for overcoming resistance to immunotherapy, and the rationale for combining TKI treatment and immunotherapy in EGFR-mutated NSCLC.
Role of TME in Lung Cancer Evolution and Prognosis
The TME is composed of cancer cells, immune cells such as T cells, B cells, dendritic cells (DCs), myeloid-derived suppressor cells (MDSCs), tumor-associated macrophages (TAMs), carcinoma-associated fibroblasts, tumor vasculature, lymphatic tissue, as well as adipocytes, cytokines, and exosomes [8,9]. TME plays a crucial role in cancer development, progression, and metastasis owing to the intricate crosstalk between its components ( Figure 1). T lymphocytes appear to mediate the switch from tumor immune surveillance to cancer immune escape through the recruitment of regulatory T-cells (Tregs) and upregulation of MDSCs [9]. In addition, it has been reported that macrophages and neutrophils are crucially involved in the mechanisms of immune escape and development of lung cancer. In particular, these cells create a proinflammatory background that strongly affects both carcinogenesis and immune-response efficiency [2]. Lung microenvironment changes from physiological state to NSCLC development and progression. Normal lung microenvironment includes epithelial cells, smooth muscle cells, fibroblast, endothelial cells, and immune cells such as dendritic cells, neutrophils, T cells, and alveolar macrophages. The latter contribute to maintain immunological homeostasis, but they can also promote inflammation, and thus, development of premalignant lung lesions and carcinogenesis. During NSCLC development, the tumor microenvironment (TME) changes, thus contributing to inflammation angiogenesis, immune modulation, and therefore promoting NSCLC progression, metastasis, and prognosis. In particular, the immune TME, through specific reprogramming and modulation of T cells, tumor-associated macrophages (TAMs) and myeloid cell populations exert crucial tumor-promoting or tumor-suppressing activities. The switch from tumor immune surveillance to Figure 1. Lung microenvironment changes from physiological state to NSCLC development and progression. Normal lung microenvironment includes epithelial cells, smooth muscle cells, fibroblast, endothelial cells, and immune cells such as dendritic cells, neutrophils, T cells, and alveolar macrophages. The latter contribute to maintain immunological homeostasis, but they can also promote inflammation, and thus, development of premalignant lung lesions and carcinogenesis. During NSCLC development, the tumor microenvironment (TME) changes, thus contributing to inflammation angiogenesis, immune modulation, and therefore promoting NSCLC progression, metastasis, and prognosis. In particular, the immune TME, through specific reprogramming and modulation of T cells, tumor-associated macrophages (TAMs) and myeloid cell populations exert crucial tumorpromoting or tumor-suppressing activities. The switch from tumor immune surveillance to cancer immune escape is characterized by the recruitment of regulatory T-cells (Tregs) and upregulation of MDSCs. In addition, TAMs (M1 and M2 polarized cells), and neutrophils play a key role in the mechanisms of immune escape. In particular, they contribute to create a proinflammatory TME that strongly affects the immune response efficiency. Abbreviations: MDSCs-myeloid-derived suppressor cells; IL-Interleukin; VEGF-vascular endothelial growth factor; PD-L-programmed death-ligand; TGF-tumor growth factor; CSF-colony-stimulating factor; M-CSF-macrophagecolony stimulating growth factor. Created with BioRender.com (https://biorender.com/, accessed on 17 May 2022).
In the normal physiological state, innate and adaptive immune cells catch and eradicate cancer cells through immunosurveillance [10]. However, cancer cells acquire the ability to escape from the antitumor immune response, and the dysregulated relationship between antagonistic effectors (i.e., CD8 + cytotoxic T-cells) and regulatory immune cells (i.e., Tregs) leads to a TME that can promote cancer development [11]. As part of the TME, TAMs play an active role in tumorigenesis. Macrophages constitute the majority of the cells in the immune infiltrate in tumors, and they have extremely varying effects on tumorigenesis, depending on their phenotype within the TME [12]. As key innate immune cells, they are involved in the immunological response against foreign cells and in tissue repair following an injury [13]. There are two types of activated macrophages: pro-inflammatory (M1) and anti-inflammatory (M2). M1 to M2 macrophage polarization is a plasticity phenomenon that occurs in various physiological situations, from bacterial infection to wound healing. When a wound fails to heal, it can become a chronic wound with persistent inflammation, which could create the basis for tumorigenesis [12]. It has been suggested that M1 macrophages eradicate cancer cells through reactive oxygen and nitrogen intermediates, and several proinflammatory cytokines, whereas M2 macrophages promote tumorigenesis and metastasis by secreting matrix-degrading enzymes, angiogenic factors, and immunosuppressive cytokines/chemokines, as well as by downregulating major histocompatibility complex class II (MHC II) and co-stimulatory ligands CD80 and CD86 [14][15][16]. Thus, M2 macrophages generally support tumor progression and metastasis. Interleukin (IL)-4, IL-10, and IL-13 secreted in response to the activation of signal transducer and transcriptional activator-3 (STAT3) and nuclear factor kappa B pathways contribute to M2 polarization. In addition, TAMs may cause immune suppression through immune checkpoint (PD-L1) induction and enhanced activation of specific metabolic pathways [13].
Consistently, TAM polarization in the TME has been reported to be associated with different prognoses of cancer patients, with the prevalence of M1 cells in the TME correlating with better progression-free and overall survival [17,18]. In contrast, the prevalence of M2 macrophages appears to be related to poor outcomes in lung cancer [19]. However, M1 macrophages, which produce reactive oxygen and nitrogen species, contribute to DNA damage, and may stimulate malignant transformation [13]. In lung cancer, M1 macrophages are found in cancer cell islets, and their presence is related to a good prognosis. In contrast, M2 macrophages are located in the tumor stroma and are associated with a poor prognosis. In the early stages of tumorigenesis, M1 macrophages consistently infiltrate tumor islets to prevent cancer progression. However, during cancer progression, M1 macrophages may acquire the M2 phenotype and start supporting tumor progression [15]. Additionally, TAM polarization leading to the prevalence of M2 macrophages in the TME can cause drug resistance [16]. Various mechanisms for the TAM-related drug resistance have been proposed. First, TAMs can promote the epithelial-to-mesenchymal transition by secreting transforming growth factor-β and tumor necrosis factor (TNF)-α and induce remodeling of the extracellular matrix through proteases such as matrix metalloproteinases (MMPs) [16,20,21].
Several studies have shown that TAM-secreted proteins such as MMPs, plasmin, urokinase-type plasminogen activator, vascular endothelial growth factor, IL-8, basic fibroblast growth factor, phosphatidylinositol-glycan biosynthesis class F protein, and gastrinreleasing peptide lead to therapeutic resistance and boost tumor angiogenesis [16,20]. Moreover, TAM-secreted immunosuppressive factors such as prostaglandin E2, IL-10, transforming growth factor-β, indoleamine-pyrrole 2,3-dioxygenase, chemokine C-C motif ligand (CCL) 17, CCL18, and CCL22 inhibit the Th1 immune response and also cause drug resistance [22][23][24]. Tumor-associated macrophages (TAMs) are essential elements during the initial phase of the immune response due to their phagocytic capacity, ability to synthesize interferon (IFN), and interactions with helper and cytotoxic T lymphocytes. However, their persistent activation with consequent chronicization of inflammation, oxidative stress, and changes in metabolic pathways lead to the impairment of effective T-cell responses by causing T-cell exhaustion, a condition in which lymphocytes, even when activated, are non-functional and subsequently undergo programmed cell death [25]. In this regard, Mascaux et al. [26] demonstrated that in lung squamous cell carcinoma, the adaptive immune response was the strongest at the earliest cancer stages, whereas at the most advanced invasive stages, they observed increased expression levels of co-inhibitory molecules and suppressive cytokines, such as PD-L1, IL-10, and IL-6.
To understand the role of TAMs in the efficacy of immune response and immunotherapy, it is necessary to clarify the contribution of Tregs to cancer immunosuppression. Tregs are a specialized T-cell lineage that express the transcription factor FOXP3, which is crucial for Treg stabilization and stimulation of the Treg-specific gene expression profile necessary to prevent autoimmune reactions in normal conditions [27,28]. However, Tregs can switch their fate and phenotype under certain circumstances, such as inflammatory perturbations of the microenvironment. This is possible owing to changes in their gene expression program, which is characterized by the loss of FOXP3 expression and production of pro-inflammatory cytokines, and IFN-γ, which convert these cells into effector T-cells (Treg reprogramming). In a recent study by Di Pilato et al., an increase in IFN-γ levels favored the expression of PD-1 on effector T-cells and synthesis of PD-L1 by cancer cells and macrophages [29], thereby turning off CD4 + CD25 − conventional effector T-cells, reactivating an immune escape mechanism and increasing macrophage activation [30]. Recently, Gallimore et al. highlighted an important therapeutic role of the induction of selective recruitment and modulation of different Tregs with different molecular profiles and functions in the TME [31]. Based on these observations, other authors observed that immunotherapy efficacy could be diminished by the inflammatory response caused by the reprogramming of Tregs [25]. Indeed, as already described above, various suppressive and counter-regulatory mechanisms may be involved in the lack of immunotherapy effectiveness, especially in the advanced stages of neoplastic disease. In patients with advanced cancer, specific changes in oxidative and glycolytic metabolic pathways during a chronic inflammatory response interfere with conventional T-cell activation and function and may be one of the reasons for the ineffectiveness of immunotherapy. Consequently, a combined strategy of modulating the activity of Tregs, pharmacological inhibition of chronic inflammation mediated by macrophages, and, at the same time, suppression of oxidative stress and positive regulation of metabolic imbalances could improve the effectiveness of modern immunotherapy. Reprogramming of Tregs has a dual effect: firstly, it immediately activates immunosurveillance, and secondly, it causes delayed, macrophage-mediated inflammation deleterious for the antineoplastic efficiency of the immune system response [32]. Furthermore, the loss of Treg activity shown in various in vivo and in vitro experimental models involving reprogramming of Tregs into IFN-γ-producing cells [33][34][35], is accompanied by M1 pro-inflammatory polarization of peritoneal macrophages with associated production of pro-inflammatory and immunosuppressive cytokines [36]. The persistent activation of macrophages does not favor sustained antitumor T-cell responses [37,38]. Following M1 macrophage polarization, other processes such as increased synthesis proinflammatory cytokines, production of reactive oxygen species, and changes in glucose and iron metabolism occur [39]. In particular, an iron-sequestering phenotype develops, which is characterized by intracellular iron accumulation and low iron release and availability (functional iron deficiency) [40]. Dysregulation of the iron metabolism impairs several vital cell processes, such as DNA and protein synthesis, enzyme activity, integrity of oxidative pathways, and cell proliferation, thereby resulting in a progressive loss of T-cell function when cancer advances [41].
Based on this evidence, a strategy that blocks chronic inflammation and Treg reprogramming should be considered in some patients depending on the cancer stage. Moreover, TAMs, specifically M1, are the main producers of IL-6, which plays a key role in modulating both tumor progression and immune escape through multiple mechanisms [25,42]. In particular, IL-6 is involved in lung cancer tumorigenesis, and its increased circulating levels have been associated with poor survival of patients with lung cancer [43]. In addition, IL-6 acts directly on lung epithelial cells via the nuclear factor kappa B signaling pathway when conditioned by exposure to carcinogens. Tobacco smoking is known to induce KRAS mutations and thereby stimulate IL-6 expression in the lung epithelium [44], promoting lung cancer cell proliferation and migration through the STAT3 pathway activation [45]. Exhausted tumor-associated CD8 + T lymphocytes are another source of IL-6 in lung cancer [46]. Moreover, IL-6 is one of the key cytokines driving the immunopathology caused by the prolonged non-specific inflammation contributing to the so-called "cytokine storm", with related systemic symptoms and impairment of immune response. Indeed, IL-6 influences the effectiveness of the immune system in multiple ways. IL-6 can act as an activator or inhibitor of T-cell responses, depending on the duration of its activity; moreover, by inducing systemically specific derangements of energy metabolism, nutritional status, and symptoms as anemia and anorexia, it significantly negatively affects T-cell functions [47]. Elevated levels of IL-6 are often observed in advanced lung cancer patients, which, at diagnosis, frequently present cachexia syndrome, an inflammatory driven severe condition characterized by involuntary weight-loss accompanied by chronic inflammation, fatigue, anorexia, and anemia, where IL-6 is actually one of the key pathogenetic mediators [47]. Thus, although IL-6 initially participates in the activation of the immune response, its prolonged, chronic release ultimately contributes to immunosuppression, severe cancer-related symptoms, and poor general patient status and prognosis. Consistent with the above evidence, blocking chronic inflammation, primarily driven by IL-6, may be fundamental in improving the efficacy of currently available immunotherapy, especially in advanced lung cancer patients [25].
Role of EGFR Mutations in Influencing TME, TAM Polarization, and Response to Immunotherapy
Preclinical and clinical studies have pointed out that the TME of patients with NSCLC harboring EGFR mutations displays peculiar characteristics and may modulate the antitumor immune response [6]. Several trials indicated that EGFR mutations are associated with immunosuppressive TME, lower tumor mutation burden (TMB), and increased PD-L1 expression [2,6]. TMB is defined as the total number of substitution, insertion, and deletion mutations per megabase of the coding region that encodes a tumor gene. Recent studies suggested that reduced TMB may predict a poor response to immune checkpoint inhibitors (ICIs) in patients carrying EGFR mutations [6,7]. Preclinical studies indicated that EGFR mutations lead to cancer immune escape through the PD-1/PD-L1 pathway [6]. In addition, it has been shown that EGFR mutations influence TME components, such as tumor-infiltrating lymphocytes (TILs), Tregs, MDSCs, TAMs, and immunoregulatory cytokines ( Figure 2).
A retrospective study observed that NSCLC tumors harboring EGFR mutations had low expression levels of PD-L1 and few CD8 + TILs. In contrast, other studies have detected high PD-L1 expression in this type of tumor [11]. Preclinical studies have demonstrated that EGFR activation upregulates intrinsic PD-L1 expression, inducing T-cell apoptosis and immune escape in EGFR-mutated NSCLC. In a genetically engineered mouse model of lung adenocarcinoma carrying an EGFR mutation, decreased macrophage MHC-II expression, enhanced macrophage IL1RA expression, and increased macrophage phagocytic activity have been observed and attributed to the M2 macrophage phenotype [48]. The presence of inflamed TME is considered a positive predictive factor for the response to immunotherapy. Although EGFR-mutated NSCLC typically is not associated with inflamed TME, characterized by low levels of CD8 + T cells and immune-suppressive cells, the numbers of Tregs and PD-L1 expression levels are increased in this cancer ( Figure 2). This leads to reduced effector T-cell activity and promotes TME favoring immune escape and cancer progression [48,49]. In EGFR-mutated cancers, TME displays high Treg infiltration without CD8 + T-cell infiltration. The recruitment of effector CD8 + T cells is prevented by the downregulation of CXCL10 through IRF1. In contrast, the stimulation of Treg infiltration is achieved through the upregulation of CCL22 through JNK/-JUN. Such immunological status may correlate with resistance to immunotherapy. Moreover, in EGFR-mutated cancers, the non-inflamed immunosuppressive TME (high levels of Tregs Figure 2. Role of tumor microenvironment in EGFR-mutated NSCLC in influencing resistance pathways to targeted TKI treatment and potential targets for immunotherapy. EGFR mutations are associated with immunosuppressive TME, lower tumor-mutation burden (TMB), and increased PD-L1 expression. EGFR mutations may promote cancer immune escape through modulation of the PD-1/PD-L1 pathway, which in turn determine T-cells inactivity and/or exhaustion. This also leads to EGFR-TKI resistance. In addition, EGFR mutations influence several TME components, such as tumorinfiltrating lymphocytes (TILs), Tregs, MDSCs, TAMs, and immunoregulatory/proinflammatory cytokines, i.e., IL-6. The latter, through the activation of the STAT-3 intracellular pathway, contribute to tumor growth and resistance to targeted therapies. Abbreviations: AKT-serine-threonine kinase; EGFR, epidermal growth factor receptor; ERK-extracellular signal-regulated kinase; IL-Interleukin; JAK-Janus kinase; MHC-major histocompatibility complex; MEK-mitogenactivated protein kinase; MDSC-myeloid-derived suppressor cells; NF-kB, nuclear factor kappa B; PI3K-phosphatidylinositol-4,5-bisphosphate 3-kinase; PD-1-programmed death; PD-L1-programmed death ligand-1; TKI-Tyrosine kinase inhibitors; Treg-regulatory T-cell; STAT3-signal transducer and activator of transcription 3; TCR-T-cell receptor; TMB-tumor mutational burden. Created with BioRender.com (https://biorender.com/, accessed on 17 May 2022).
A retrospective study observed that NSCLC tumors harboring EGFR mutations had low expression levels of PD-L1 and few CD8 + TILs. In contrast, other studies have detected high PD-L1 expression in this type of tumor [11]. Preclinical studies have demonstrated that EGFR activation upregulates intrinsic PD-L1 expression, inducing T-cell apoptosis and immune escape in EGFR-mutated NSCLC. In a genetically engineered mouse model of lung adenocarcinoma carrying an EGFR mutation, decreased macrophage MHC-II expression, enhanced macrophage IL1RA expression, and increased macrophage phagocytic activity have been observed and attributed to the M2 macrophage phenotype [48]. The presence of inflamed TME is considered a positive predictive factor for the response to immunotherapy. Although EGFR-mutated NSCLC typically is not associated with inflamed TME, characterized by low levels of CD8 + T cells and immune-suppressive cells, the numbers of Tregs and PD-L1 expression levels are increased in this cancer ( Figure 2).
This leads to reduced effector T-cell activity and promotes TME favoring immune escape and cancer progression [48,49]. In EGFR-mutated cancers, TME displays high Treg infiltration without CD8 + T-cell infiltration. The recruitment of effector CD8 + T cells is prevented by the downregulation of CXCL10 through IRF1. In contrast, the stimulation of Treg infiltration is achieved through the upregulation of CCL22 through JNK/-JUN. Such immunological status may correlate with resistance to immunotherapy. Moreover, in EGFR-mutated cancers, the non-inflamed immunosuppressive TME (high levels of Tregs and low levels of CD8 + T cells) diminishes the expression of EGFR by Tregs, leading to the development of the resistance to TKIs. In conclusion, in the TME of EGFR-mutated NSCLC, high Treg infiltration occurs in the context of the non-inflamed TME. Therefore, EGFR mutations play a crucial role in cell growth, survival, and development of immune escape mechanisms [48].
Dynamic Changes of the TME during TKI Treatment
Treatment with EGFR-TKIs alters the TME and decreases PD-L1 expression levels, which may also affect the response to immunotherapy. Additionally, EGFR-TKIs regulate the strength of the immune response through TME changes (Figure 3). In particular, EGFR-TKIs increase the presentation of MHC class I and II molecules and potentiate T-cellmediated tumor killing. Moreover, the numbers of tumor-infiltrating effector Tregs were significantly lower in patients treated with TKIs. The lung cancer TME contains CD8 + T cells and immune-suppressive TAMs expressing PD-L1. From a clinical standpoint, the presence of PD-L1 + TAMs may predict the effectiveness of ICIs. and low levels of CD8 + T cells) diminishes the expression of EGFR by Tregs, leading to the development of the resistance to TKIs. In conclusion, in the TME of EGFR-mutated NSCLC, high Treg infiltration occurs in the context of the non-inflamed TME. Therefore, EGFR mutations play a crucial role in cell growth, survival, and development of immune escape mechanisms [48].
Dynamic Changes of the TME during TKI Treatment
Treatment with EGFR-TKIs alters the TME and decreases PD-L1 expression levels, which may also affect the response to immunotherapy. Additionally, EGFR-TKIs regulate the strength of the immune response through TME changes (Figure 3). In particular, EGFR-TKIs increase the presentation of MHC class I and II molecules and potentiate Tcell-mediated tumor killing. Moreover, the numbers of tumor-infiltrating effector Tregs were significantly lower in patients treated with TKIs. The lung cancer TME contains CD8 + T cells and immune-suppressive TAMs expressing PD-L1. From a clinical standpoint, the presence of PD-L1 + TAMs may predict the effectiveness of ICIs. Figure 3. Dynamic changes of tumor microenvironment (TME) of EGFR-mutated NSCLC during tyrosine kinase inhibitor treatment. TME of EGFR mutated adenocarcinoma is typically characterized by prevalence of M2 polarized macrophage, low levels of CD8+ cells, increased number of Treg, and upregulation of PD-L1. The latter, especially if associated with macrophage-mediated inflammation particularly through IL-6 and increased ROS levels, contributes to T-cell exhaustion. Additionally, several factors secreted by M2 polarized TAMs (as TGFbeta, TNFalpha, MMPs, VEGF, IL-8, bFGF,PGE2, CCL22, and CCL18). These factors contribute to tumor progression and immunodepression. The TKI treatment has been associated with a decrease in PD-L1 expression, lowering of Treg, and promotion of TAM polarization from M2 to M1 phenotype. Abbreviations: EGFR-epidermal growth factor receptor; NSCLC-non-small cell lung cancer; JAK-Janus Kinase; PI3Kphosphatidylinositol-3 kinase; NF-kB-nuclear factor-κB; IRF1-interferon regulatory factor 1; ILinterleukin; TAM-tumor-associated macrophages; MDCS-myeloid-derived suppressor cells; PD-L1-programmed death-ligand 1; ROS-reactive oxygen species; IFN-interferon; TKI-tyrosine kinase inhibitor; CSF1-colony stimulating factor 1; TGF-tumor growth factor; TNF-tumor necrosis factor; MMP-matrix metalloproteases; VEGF-vascular endothelial growth factor; PGEprostaglandin E; CCL-C-C-motif ligand. Created with BioRender.com (https://biorender.com/). Although according to one clinical trial, pembrolizumab did not elicit a significant response in patients with EGFR-mutated lung cancer naïve for EGFR-TKI, even in the . Dynamic changes of tumor microenvironment (TME) of EGFR-mutated NSCLC during tyrosine kinase inhibitor treatment. TME of EGFR mutated adenocarcinoma is typically characterized by prevalence of M2 polarized macrophage, low levels of CD8+ cells, increased number of Treg, and upregulation of PD-L1. The latter, especially if associated with macrophage-mediated inflammation particularly through IL-6 and increased ROS levels, contributes to T-cell exhaustion. Additionally, several factors secreted by M2 polarized TAMs (as TGFbeta, TNFalpha, MMPs, VEGF, IL-8, bFGF, PGE2, CCL22, and CCL18). These factors contribute to tumor progression and immunodepression. The TKI treatment has been associated with a decrease in PD-L1 expression, lowering of Treg, and promotion of TAM polarization from M2 to M1 phenotype. Abbreviations: EGFR-epidermal growth factor receptor; NSCLC-non-small cell lung cancer; JAK-Janus Kinase; PI3K-phosphatidylinositol-3 kinase; NF-kB-nuclear factor-κB; IRF1-interferon regulatory factor 1; IL-interleukin; TAM-tumor-associated macrophages; MDCS-myeloid-derived suppressor cells; PD-L1-programmed death-ligand 1; ROS-reactive oxygen species; IFN-interferon; TKI-tyrosine kinase inhibitor; CSF1-colony stimulating factor 1; TGF-tumor growth factor; TNF-tumor necrosis factor; MMP-matrix metalloproteases; VEGF-vascular endothelial growth factor; PGE-prostaglandin E; CCL-C-C-motif ligand. Created with BioRender.com (https://biorender. com/, accessed on 17 May 2022).
Although according to one clinical trial, pembrolizumab did not elicit a significant response in patients with EGFR-mutated lung cancer naïve for EGFR-TKI, even in the presence of high PD-L1 expression, the efficacy of this ICI could be influenced by TME changes during the EGFR-TKI treatment [50,51]. Recently, IT effectiveness was shown to correlate positively with the number of CD8 + lymphocytes and negatively with the number of FOXP3 + tumor-infiltrating lymphocytes in patients who acquired resistance to EGFR-TKIs [52]. In another study, both mouse and human macrophages were demonstrated to prevent killing of cancer cells by CD8 + T-cells, thereby affecting the response to immunotherapy [53]. Nonetheless, according to lung cancer clinical data from case series and clinical trials, the presence of TAMs expressing PD-L1 apparently correlates with a good response to immunotherapy, owing to the negative effects on cytotoxic lymphocytes [19,[54][55][56]. A retrospective study evaluated the effectiveness of immunotherapy in patients with EGFRmutated NSCLC by assessing both PD-L1 expression and TME parameters, including numbers of CD8 + TILs [57]. On the basis of PD-L1 expression levels and abundance of CD8 + TILs, the TME was divided into four subtypes: type I-adaptive immune resistance (PD-L1 + /CD8 + ); type II-immune ignorance (PD-L1 − /CD8 − ); type III-intrinsic induction (PD-L1 + /CD8 − ); and type IV-immune tolerance (PD-L1 − /CD8 + ) [58]. The results of that study confirmed that TKIs alter PD-L1 and PD-L2 expression levels and affect the numbers of CD8 + TILs. High abundance of CD8 + TILs was shown to be associated with favorable outcomes in EGFR-mutated NSCLC. Furthermore, high levels of CD8 + TILs may affect the response to EGFR-TKI treatment [57]. Su et al. [59] reported a high proportion of PD-L1 + /CD8 + cases among patients with de novo resistance to first-line EGFR-TKIs for advanced NSCLC. These findings suggested that NSCLCs with high PD-L1 expression and large numbers of CD8 + TILs benefit less from TKI treatment despite EGFR mutations. In addition, many reports indicated that relatively high PD-L1 expression in EGFR-mutated NSCLC was related to lower response to EGFR-TKIs and worse progression-free survival [57,60,61].
In recent clinical trials, it has been observed that ICIs promote macrophage polarization from the M2 to M1 phenotype [19]. In a retrospective study evaluating the relationship between TAMs and response to EGFR-TKIs in treatment-naïve patients, irrespective of the EGFR mutation status, both univariate and multivariate analyses showed that TAMs and EGFR mutations were independent prognostic factors of survival. However, as proposed in the review by Biswas et al. [62], these parameters correlate with each other. Tumors carrying EGFR mutations had higher TAM counts than tumors with wild-type EGFR (90% vs. 38.5%). Hence, TAM counts may predict the response to EGFR-TKIs, as TAMs contribute to drug resistance induced by the activity of stromal fibroblasts, as previously demonstrated by Wang et al. in vitro and in vivo [63]. According to the data obtained by two studies that evaluated patients with early and advanced NSCLC, host immunosurveillance is unimpaired in the early stages of NSCLC, when the M1 macrophage phenotype prevails, but falls apart in the advanced stages due to M2 phenotype polarization [14]. Targeted therapy blocks specific signaling pathways, whereas immunotherapy stimulates the immune system to attack tumor cells that previously evaded immune surveillance [64]. Subgroup analysis of clinical trials showed no survival benefit from immunotherapy in patients carrying EGFR mutations [65,66]. EGFR activation increases PD-L1 expression in tumor cells, inducing T-cell apoptosis and immune escape [64]. EGFR-TKIs strengthen MHC class I and II antigen presentation in response to IFN-γ, increasing T-cell-mediated tumor killing [67,68]. These findings explain the potential synergistic effects of immunotherapy and TKIs. However, the clinical benefits of such a combination may be limited [64]. Non-inflamed tumors lack significant lymphocyte infiltration, exhibit low PD-L1 expression, and have elevated levels of immunosuppressive elements in the TME, so such tumors may not be particularly sensitive to immunotherapy [69,70]. EGFR-TKIs modify the TME, weakening the suppressive activity of Tregs and promoting activity of cytotoxic T-cells [64]. EGFR-TKIs were demonstrated to boost the levels of cytotoxic CD8 + T cells and DCs, eliminate FOXP3 + Tregs, and inhibit macrophage polarization to the M2 phenotype, albeit only for a short time. However, EGFR-TKIs also decrease PD-L1 expression in cancer cells. Therefore, a combination of TKIs and immunotherapy may have suboptimal synergistic effects. Tariq et al. observed that inhibition of the STAT6 pathway by gefitinib prevented M2 polarization, but no dynamic changes during TKI treatment were evaluated [71]. Nevertheless, after a long period of TKI treatment, IL-10 activated the STAT3 pathway in MDSCs, inducing Treg activity, inhibiting DCs, and increased M2 macrophage polarization. Thus, the initial effect of gefitinib was neutralized [64].
Rationale for Combining Immunotherapy and TKI Treatment to Overcame Resistance
Based on the above evidence, it can be hypothesized that a combined treatment with ICIs and EGFR-TKIs could increase anticancer activity. Some studies have consistently suggested a synergistic effect of a combination of PD-1/PD-L1 inhibitors with EGFR-TKIs in EGFR-mutated NSCLC with PD-L1 overexpression [72,73]. However, data regarding the combined use of PD-1/PD-L1 inhibitors and EGFR-TKIs are controversial [74]. Combined therapy using TKIs and ICIs is currently being explored, but the toxicity of these drugs may preclude their concomitant use [75][76][77][78]. However, in a small study, a combination of the TKI erlotinib and ICI nivolumab appeared to be safe and well tolerated [79]. Notably, erlotinib was shown to increase MHC I antigen presentation, rendering cancer cells vulnerable to T cells [80]. Furthermore, two studies showed that TKI downregulated PD-L1 expression and modulated T-cell-mediated immune responses [72,81,82]. EGFR TKIs also had immunostimulatory activities, as reported by Venugopalan et al. [83] and Jia et al. [64]. Moreover, in a study in mouse models of lung cancer with EGFR mutation, the TKI erlotinib was demonstrated to boost leukocyte infiltration and enhance antigen-presenting function [83]. Hence, TKI treatment is thought to re-establish a healthy immune microenvironment, inducing tumor regression. However, a combination of TKI therapy with immunotherapy does not prevent disease relapse [80].
Teng et al. [58] assessed the effectiveness of immunotherapy using a TME model based on TIL numbers and PD-L1 expression levels. Their findings suggest that tumors with the immunoinflammatory TME (PD-L1 + and TIL + ) may benefit greatly from ICI treatment. Further, lower CD8 + TIL levels were related to EGFR mutation. EGFR mutations can upregulate CD73 expression, increase Treg numbers, and stimulate conversion of ATP into immunosuppressive adenosine, contributing to cancer progression and metastasis [8,84,85]. Increased activation of Tregs mediated by adenosine and accumulation of MDSCs, along with lower anticancer activities of natural killer cells and DCs, increased macrophage polarization toward the M2-phenotype, and inhibition of the effector T-cell-mediated response collectively led to tumor immune escape [86]. Preclinical and clinical studies demonstrated that the application of EGFR-TKIs increases MHC expression and induces FOXP3 degradation, leading to reduced Treg activity and infiltration in the TME [87]. Furthermore, EGFR-TKIs boost T-cell-mediated anticancer function, reduce T-cell apoptosis, inhibit M2 polarization, and enhance IL-10, CCL2, and INF-γ levels. CCL2 promotes differentiation of T cells into Th2 cells that have anti-inflammatory functions. This, in turn, activates the STAT3 pathway in MDSCs, which migrate to the TME and exert immunosuppressive activity. Furthermore, MDSCs promote angiogenesis by stimulating the secretion of the vascular endothelial growth factor, and release of MMPs [8].
In conclusion, the influence of EGFR-TKIs on the TME in EGFR-mutated adenocarcinoma may play a critical role in the response to immunotherapy.
Overcoming Resistance and Potentiate Response to Immunotherapy in Patients with EGFR-Mutated NSCLC by Targeting TAM Related Inflammation and Cytokine Storm
TAMs have been highlighted as one of the major components of the immunosuppressive TME and, consequently, are an attractive target to improve responses to immunotherapy. Several strategies, such as TAM depletion, TAM reprogramming, and targeting of TAM functional molecules, have been proposed to enhance the efficacy of ICIs. Preclinical studies carried out in mouse models of several solid tumors, including lung cancer, sug-gest that a combination of these strategies with immunotherapy can enhance therapeutic responses, but all of these strategies need further investigation before they may be applied in clinical practice [88]. In lung cancer, ICIs targeting the PD-1/PD-L1 pathway have shown limited success in patients with NSCLC that harbored activating EGFR mutations, because activated EGFR signaling allows NSCLC cells to use multiple strategies to create an immunosuppressive TME, including recruitment of TAMs and Tregs, and at the same time, produce inhibitory cytokines and metabolites [89]. Therefore, some studies have explored novel mechanisms to reverse the suppressive TME and consequently improve the efficacy of ICIs in such patients. A recent study by Chen et al. [90] showed that activated EGFRsignaling induced ILT4 overexpression in NSCLC cells via the ERK1/2 and AKT signaling pathways and suppressed tumor immunity by recruiting M2-like TAMs, diminishing the T-cell response. EGFR activation observed in patients with NSCLC has two mechanisms: intrinsic activation caused by EGFR mutations and extrinsic activation by ligand recruitment in patients with wild-type EGFR [91]. The observations by Chen et al. [90] suggest that ILT4 inhibition prevents immunosuppression and tumor growth. In fact, they demonstrated that ILT4 inhibition reversed the immunosuppressive features of the TME, so this approach might be a promising strategy for the second-line treatment in both TKI-resistant EGFR-mutated and EGFR wild-type NSCLC. Furthermore, ILT4 inhibition may also help to overcome resistance to ICIs in this class of patients [90]. Recently, Tu et al., showed that in a xenograft mouse model of EGFR-mutated NSCLC, neither anti-PD-L1 nor anti-CD73 antibody alone inhibited tumor growth compared with the effect of the isotype control. However, a combination of both antibodies significantly inhibited tumor growth, increased the number of tumor-infiltrating CD8 + T cells, and enhanced IFN-γ and TNF-α production by these T cells. Consistently, the authors observed an increase in expression levels of genes involved in inflammation and T-cell function in tumors treated with a combination of anti-PD-L1 and anti-CD73 antibodies. These results indicate that a combination of anti-CD73 and anti-PD-L1 therapies may be effective in treating EGFR-mutated NSCLC [84].
Discussion
ICIs that block the PD-1/PD-L1 pathway have revolutionized the clinical care of patients with locally advanced or metastatic NSCLC [92]. However, patients with EGFRmutated NSCLC benefit less from the anti-PD-1/PD-L1 treatment than patients without the mutation [7]. However, the mechanism underlying the resistance to anti-PD-1/PD-L1 agents in EGFR-mutated NSCLC remains unclear. Some authors have demonstrated lower IFN-γ levels and decreased T-cell infiltration in EGFR-mutated NSCLC [69], which suggests decreased immunogenicity or suppression of the immune response in the TME. Given that PD-L1 expression in tumor tissue is an important biomarker that predicts clinical outcomes of the anti-PD-1/PD-L1 treatment [93], it is possible that tumors in EGFR-mutated NSCLC express low levels of PD-L1. A pooled analysis of 15 published studies and data from the Cancer Genome Atlas showed that patients with NSCLC harboring EGFR mutations have lower PD-L1 expression in their tumor tissue [69]. At the same time, other studies have demonstrated upregulation of PD-L1 in NSCLC with activating EGFR mutations [94,95]. Because reports on PD-L1 status in EGFR-mutated NSCLCs are conflicting [96], there may be other mechanisms at play that contribute to immunosuppression in the tumors of such patients. The TME plays a key role in regulating tumor progression and significantly affects the efficiency of immune response in patients with mutated EGFR. As we discussed in this review, evidence from the literature supports the notion that the TME in EGFR-mutated NSLC is immunosuppressive, with reduced TMB, low expression of PD-L1, low TIL numbers, and high Treg infiltration. Additionally, NSCLC tumors harboring EGFR mutations typically present with non-inflamed TME, which has been associated with a poor response to immunotherapy. In this regard, it remains unclear whether there are differences in the TME and ICI efficacy in NSCLC with different EGFR mutation subtypes. Two recent studies have indicated that from an immunological perspective, oncogenic mutations may be an important factor for cellular immune-suppression [97,98]. Several potential approaches to improve the response to immunotherapy have been tested. Targeting TAM and DC therapy may be an interesting future direction for patients with EGFR mutations. In addition, combining ICI with TKI may be another effective therapeutic strategy. In any case, when considering the mechanisms that modulate the effectiveness of immunotherapy, in addition to establishing the oncogenic mutations, the evaluation of the patient's general status is crucial. This is especially important in lung cancer patients who, at diagnosis, often present with a compromised general status, with cachexia, chronic inflammation, and consequent immunodepression. Indeed, these factors may be particularly important in explaining the inferior response to immunotherapy in some subsets of patients with advanced-stage cancer [25]. Immunotherapy may be combined with drugs that modulate chronic inflammation, counteract oxidative stress, and correct disturbances of energy and iron metabolism, which significantly impact the efficiency of the immune system. In this regard, several years ago, we tested the efficacy of a combination treatment with immunotherapy (recombinant IL-2), anti-inflammatory agent (medroxyprogesterone acetate), and antioxidants in patients with advanced cancer [99,100].
Conclusions
In conclusion, different immunosuppressive mechanisms, including-but not limited to-altered TME, are involved in the resistance to immunotherapy in EGFR-mutated NSCLC. These mechanisms should be properly characterized and targeted by a combined pharmacological approach. In this context, the specific processes that regulate the effectiveness of the immune response in relation to the disease stage, cancer-related inflammation with related systemic symptoms, and patient general status should not be underestimated. | 8,660.4 | 2022-06-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Lethal, Hereditary Mutants of Phospholamban Elude Phosphorylation by Protein Kinase A*
Background: Heterozygous mutations in the cytoplasmic domain of phospholamban cause lethal dilated cardiomyopathy. Results: The mutations alter phospholamban-protein kinase A interactions that are essential for substrate recognition and phosphorylation. Conclusion: Hereditary mutations in phospholamban that prevent phosphorylation by protein kinase A will lead to chronic inhibition of SERCA. Significance: Arginines in the cytoplasmic domain of phospholamban should be considered hot spots for hereditary mutations leading to dilated cardiomyopathy. The sarcoplasmic reticulum calcium pump (SERCA) and its regulator, phospholamban, are essential components of cardiac contractility. Phospholamban modulates contractility by inhibiting SERCA, and this process is dynamically regulated by β-adrenergic stimulation and phosphorylation of phospholamban. Herein we reveal mechanistic insight into how four hereditary mutants of phospholamban, Arg9 to Cys, Arg9 to Leu, Arg9 to His, and Arg14 deletion, alter regulation of SERCA. Deletion of Arg14 disrupts the protein kinase A recognition motif, which abrogates phospholamban phosphorylation and results in constitutive SERCA inhibition. Mutation of Arg9 causes more complex changes in function, where hydrophobic substitutions such as cysteine and leucine eliminate both SERCA inhibition and phospholamban phosphorylation, whereas an aromatic substitution such as histidine selectively disrupts phosphorylation. We demonstrate that the role of Arg9 in phospholamban function is multifaceted: it is important for inhibition of SERCA, it increases the efficiency of phosphorylation, and it is critical for protein kinase A recognition in the context of the phospholamban pentamer. Given the synergistic consequences on contractility, it is not surprising that the mutants cause lethal, hereditary dilated cardiomyopathy.
The sarcoplasmic reticulum calcium pump (SERCA) and its regulator, phospholamban, are essential components of cardiac contractility. Phospholamban modulates contractility by inhibiting SERCA, and this process is dynamically regulated by -adrenergic stimulation and phosphorylation of phospholamban. Herein we reveal mechanistic insight into how four hereditary mutants of phospholamban, Arg 9 to Cys, Arg 9 to Leu, Arg 9 to His, and Arg 14 deletion, alter regulation of SERCA. Deletion of Arg 14 disrupts the protein kinase A recognition motif, which abrogates phospholamban phosphorylation and results in constitutive SERCA inhibition. Mutation of Arg 9 causes more complex changes in function, where hydrophobic substitutions such as cysteine and leucine eliminate both SERCA inhibition and phospholamban phosphorylation, whereas an aromatic substitution such as histidine selectively disrupts phosphorylation. We demonstrate that the role of Arg 9 in phospholamban function is multifaceted: it is important for inhibition of SERCA, it increases the efficiency of phosphorylation, and it is critical for protein kinase A recognition in the context of the phospholamban pentamer. Given the synergistic consequences on contractility, it is not surprising that the mutants cause lethal, hereditary dilated cardiomyopathy.
In cardiac muscle, -adrenergic stimulation increases contractility and accelerates relaxation. These effects are due to the activation of PKA, which targets a variety of downstream contractile and calcium-handling systems. One such target is phospholamban (PLN), 3 a regulator of the sarcoplasmic reticulum calcium pump (SERCA) (1). Following an appropriate physiological cue, PKA phosphorylates PLN and increases calcium reuptake by SERCA into the sarcoplasmic reticulum (SR). Although the role of SERCA and PLN in muscle relaxation is clear, evidence from animal models suggests that most of the inotropic effects on contractility also originate from SR calcium handling (2). This is because dynamic control of myocardial contraction-relaxation involves fine tuning SERCA inhibition and SR calcium levels. SERCA function depends on the available pool of inhibitory PLN, which in turn depends on the cytosolic calcium concentration and the oligomeric and phosphorylation states of PLN (1,3). It is known that defects at any point in this pathway can result in heart failure (4), although it took almost three decades after the initial discovery of PLN to establish this link.
Dilated cardiomyopathy (DCM) is a major cause of cardiovascular disease, with ϳ30% of cases being of familial or hereditary origin (5). Many disease-causing mutations are found in genes encoding contractile or calcium-handling proteins, such as PLN, where defects in force transmission, endoplasmic reticulum stress, apoptosis, and biomechanical stress underlie the development and progression of DCM. In humans, abnormal SERCA to PLN ratios (6 -8) or mutations in PLN (9 -12) are associated with disease, whereas superinhibitory and chronically inhibitory PLN mutations can cause heart failure in mouse models (13)(14)(15). Two such examples of mutations include R9C and Arg 14 deletion (R14del) in the cytoplasmic domain of PLN (9,11), which have been linked to DCM in extended family pedigrees. In addition, R9L and R9H are newly identified mutations, although their linkage to heart failure has not been fully established (12). These hereditary mutations are somewhat surprising because Arg 9 and Arg 14 were not previously considered essential residues of PLN, and overall the cytoplasmic domain of PLN makes a small contribution to SERCA inhibition (16). Nonetheless, cysteine substitution of Arg 9 is thought to result in loss of inhibitory function and trapping of PKA, whereas deletion of Arg 14 alters the PKA recognition motif of PLN. The resultant effects on SERCA function and SR calcium stores are causative in the development and progression of DCM.
To gain mechanistic insight into R9C, R9L, R9H, and R14del, we created missense and deletion mutants in the cytoplasmic domain of PLN and characterized their effects on phosphorylation by PKA in the absence and presence of SERCA. For the disease-associated mutants, R14del resulted in a slight loss of inhibitory function and a complete loss of phosphorylation, R9H resulted in normal inhibitory function and a complete loss of phosphorylation, and R9L and R9C resulted in a complete loss of both inhibitory function and phosphorylation. Any changes to the PKA recognition motif of PLN (Arg 13 -Arg 14 -Ala 15 -Ser 16 ) (3, 17) eliminated phosphorylation, providing a simple explanation for the R14del mutant. That is, deletion of Arg 14 would be expected to render PLN unresponsive to -adrenergic stimulation and constitutively active as an inhibitor of SERCA. In contrast, mutagenesis of Arg 9 revealed multiple effects on SERCA regulation (18). All nonconservative mutations of Arg 9 significantly decreased the inhibitory activity of PLN, as well as its ability to be phosphorylated by PKA. Further insight was gained through the mutagenesis of PKA, which revealed that Glu 203 and Asp 241 were required for efficient phosphorylation of PLN. By virtue of an electrostatic interaction with Arg 9 of PLN, these PKA residues increase the efficiency of phosphorylation and allow PKA to recognize PLN in the context of the pentamer. To summarize our findings, Arg 9 of PLN plays a multifaceted role in cardiac contractility: it is important for SERCA inhibition, it increases the efficiency of PLN phosphorylation, and it allows PKA to recognize nonphosphorylated PLN monomers in the context of a partially phosphorylated pentamer.
EXPERIMENTAL PROCEDURES
Sample Preparation-SERCA1a was prepared from rabbit hind leg muscle (19,20), and recombinant human PLN was made using established procedures (18,21). SERCA and PLN were reconstituted for functional assays using established procedures (22,23) to obtain final molar ratios of 1 SERCA to 4.5 PLN to 120 lipids. ATPase assays were performed as previously described (18,24). For phosphorylation assays, a "fast" reconstitution was performed to increase the lipid to protein ratio of the co-reconstituted proteoliposomes (25). The final molar ratios were 1 SERCA to 4.5 PLN to 900 lipids. For all proteoliposomes used herein, the concentrations of SERCA and PLN were determined by quantitative SDS-PAGE (26).
Phosphorylation Assays-PLN was first phosphorylated in detergent solution by the catalytic subunit of PKA (PKA-c) (Sigma-Aldrich) in the absence of SERCA as previously described (22) with a molar stoichiometry of 1 PKA-c to 1000 PLN. PLN was phosphorylated with ATP spiked with [␥-32 P]ATP (ϳ0.1 Ci/l). All other components of the reaction were identical to published protocols (22). The reactions were stopped by the addition of TCA, incubated on ice for 10 min, washed several times with 10% TCA and water, and counted in 1 ml of liquid scintillant (Perkin-Elmer) for 1 min in a scintillation counter. All of the values were corrected by subtracting background counts per minute from samples containing no PKA. PLN was also phosphorylated by PKA-c in co-reconstituted proteoliposomes in the presence of SERCA (50 l) as previously described (22) with a molar stoichiometry of 1 PKA-c to 50 PLN. The ATP was spiked with [␥-32 P]ATP (ϳ0.1 Ci/l), and the samples were treated as described above. All of the values were corrected for PLN concentration in proteoliposomes as determined by gel quantitation (ImageQuant software; GE Healthcare).
Recombinant PKA Purification-The wild-type bovine PKA catalytic subunit cloned into the pET3a vector (EMD Chemicals, San Diego, CA) was purchased from Biomatik (Cambridge, Canada). Codons were optimized for expression in Escherichia coli and a six-histidine tag was added on the N terminus of the PKA gene. The plasmid was transformed into E. coli (DE 3 ) pLysS cells (Stratagene, Santa Clara, CA). Cultures were grown at 37°C in noninducible minimal media (MDAG-135) (27) until A 600 ϭ 0.6 and then induced with IPTG (0.5 mM) for 6 h at 22°C. Recombinant PKA was purified on a nickelnitrilotriacetic acid column (Qiagen) under native conditions according to the protocol provided in the Qiaexpressionist (Qiagen). After elution, recombinant PKA was concentrated (ϳ1 mg/ml) and dialyzed into 50 mM Tris-HCl, pH 7.5, 50 mM NaCl, 0.2% -mercaptoethanol, 50% glycerol, and 1 mM EDTA. This protocol was repeated for the E203A and D241A mutants of PKA. The purity and concentration of each mutant was assessed by SDS-PAGE, and all activity values were corrected for it.
Kemptide and PLN Peptide Phosphorylation-The kinetic values for the PKA proteins were acquired from a [␥-32 P]ATP phosphorylation filter binding assay that has been previously described (28). Kemptide was purchased from Promega, and PLN cytoplasmic peptides were synthesized by Biomatik Corporation. Kemptide and wild-type PLN peptide concentrations were varied from 1.0 to 400 M for wild-type PKA and 1 to 700 M for E203A and D241A PKA; R9C PLN peptide was only varied from 1 to 300 M for wild-type and mutant PKAs because of solubility problems. K m and V max were obtained by fitting the data to the Michaelis-Menten equation (v ϭ V max [S]/ ([S]ϩK m )), and all of the data were plotted as substrate concentration (M) versus activity (M/min).
Functional Properties of PLN Mutants Implicated in Heredi-
tary Cardiac Pathology-Although the root cause of DCM can be a single site mutation in PLN, heart failure is an incredibly complex process that impairs many aspects of calcium homeostasis and the cellular proteome, including decreased levels of SERCA (29,30). Mechanistically, it is important to separate initiating events from the complex array of secondary pathological consequences that define heart failure. Hereditary missense mutations, such as those found in PLN, provide valuable insights into disease-associated changes in calcium homeostasis. In the case of PLN mutants (R9C, R9H, R9L, and R14del), SERCA dysregulation accounts for the earliest stages of disease, which ultimately leads to reduced pumping force, cardiovascular remodeling, and heart failure.
For this reason, the goal of the present study was to mechanistically define the relationship between PLN mutation and the regulation of SERCA that underlies the development of DCM. To do this, we used reconstituted proteoliposomes containing SERCA and PLN under conditions that mimic native SR membrane (22,25). Functional characterization of the proteoliposomes relied on measurements of the calcium-dependent ATPase activity of SERCA in the presence of wild-type or mutant PLN. R9C and R9L resulted in complete loss of function, R9H was indistinguishable from wild type, and R14del resulted in partial loss of function ( Fig. 1, A Phosphorylation of PLN Mutants Implicated in Hereditary Cardiac Pathology-The cytoplasmic domain of PLN is the target of regulation via the -adrenergic pathway, and disruption of this process would be expected to influence the development and progression of DCM. Although no known mutations affect the site of phosphorylation by PKA (Ser 16 ), Arg 14 is part of the PKA recognition motif, and Arg 9 is a more peripheral, upstream residue that may also be involved in recognition by PKA (31). Contrary to the location of these residues, R14del was initially reported to be phosphorylated, whereas R9C was reported to abrogate phosphorylation (9,11). Therefore, our goal was to understand the relationship between disease-associated mutations and phosphoregulation of PLN. Under conditions that resulted in efficient phosphorylation of wild-type PLN (data not shown), we observed no detectable PKA-medi-ated phosphorylation of R9C, R9L, or R14del and minimal phosphorylation of R9H (Fig. 1C). This was confirmed with SERCA ATPase activity measurements, which revealed minimal changes in SERCA inhibition following PKA treatment of the PLN mutants (Table 1). This led us to consider whether the mutation of these particular residues (Arg 9 and Arg 14 ) or the nature of the mutations (Cys, Leu, or His substitution or deletion) was the key determinant for the defect in phosphorylation.
To address this, we first tested the PKA-mediated phosphorylation of alanine mutants of residues Lys 3 to Thr 17 in the absence of SERCA (Fig. 2). Under conditions where wildtype PLN rapidly reached complete phosphorylation, most of the alanine substitutions between residues 3 and 17 resembled the native protein. However, three mutants (R9A, R13A, and R14A) exhibited clear defects in phosphorylation with S16A
TABLE 1 Kinetic parameters from Hill plots and phosphorylation of diseaseassociated PLN mutations
a Percentage of phosphorylation compared with wild-type PLN of detergent-solubilized mutant PLN or wild-type/mutant mixtures of PLN. b The kinetic data were taken from Ceholski et al. (18) and are shown for comparison. c ph indicates that the PLN was treated with PKA prior to reconstitution. Table 1. C, PKA-mediated phosphorylation of wild-type and diseaseassociated mutants of PLN. Phosphorylation is shown as a percentage of wild- serving as a negative control. The results for Arg 13 and Arg 14 were anticipated given their placement in the PKA recognition motif and prior characterization by mutagenesis (32). The result for Arg 9 of PLN was unexpected, given that it was not previously reported to be a determinant for PKA-mediated phosphorylation (17).
PLN in the absence of SERCA allowed unhindered interaction with PKA for optimal phosphorylation, yet this did not take into account the SERCA-PLN interaction that normally occurs in cardiac SR. To examine this, proteoliposomes containing SERCA in the presence of wild-type or mutant PLN were phosphorylated by PKA. To distinguish between SERCA-specific effects on PLN phosphorylation versus molecular crowding that could limit the accessibility of PKA, the reconstitution method was altered to incorporate a higher lipid to protein ratio in the proteoliposomes (25). Under conditions where wildtype PLN rapidly reached complete phosphorylation, we again observed significant decreases in the phosphorylation of R9A, R13A, and R14A (Fig. 3). Surprisingly, the S10A mutation now exhibited reduced phosphorylation (67 Ϯ 4.2% of wild type) comparable with the reduction observed for the R9A mutant (63 Ϯ 2.8% of wild type). The reduced phosphorylation of S10A only occurred in the presence of SERCA, indicating that this residue may be important for PKA recognition and binding of SERCA-bound PLN. It should be noted that this reduced level of phosphorylation corresponded to more than one molecule of PLN per molecule of SERCA.
Mimicking Disease-associated Mutations in PLN-So far, the R9C, R9H, R9L, and R14del mutations have only been found in heterozygous individuals, where they exert a dominant negative effect on calcium reuptake. In particular, the R9C mutant was reported to trap PKA and block phosphorylation of wild-type PLN (11). The dominant negative effect of R9C on PKA-mediated phosphorylation prompted us to investigate equimolar mixtures of mutant and wild-type PLN in our phosphorylation assays (Table 1). To allow for the formation of a trapped, inactive complex with PKA, the assays were performed with or without preincubation of PKA with the R9C, R9L, R9H, or R14del mutants. Under these conditions, none of the mutants sequestered PKA and prevented the phosphorylation of wild-type PLN. This was supported by SERCA ATPase activ-ity measurements, where the phosphorylation of wild-type PLN had the expected effects on heterozygous mixtures with the mutants (R9C, R9L, R9H, and R14del; Table 1). Our conclusion was that a simple interaction between the PLN mutant and PKA did not result in trapping, suggesting that something more complicated may be occurring in DCM. SERCA, PLN, and PKA are part of a larger signaling complex that includes protein kinase A-anchoring protein AKAP18␦ (33), as well as additional regulatory components. It is reasonable to expect that one or more of these interactions may be perturbed in R9C-mediated DCM.
Nonetheless, the lack of phosphorylation of R9C, R9H, R9L, and R14del mutants by PKA prompted us to further investigate by generating a series of amino acid substitutions of and around Arg 9 and Arg 14 . The Arg 14 deletion mutant shortens the N-terminal cytoplasmic domain of PLN and changes the PKA recognition motif from Arg 13 -Arg 14 -Ala 15 -Ser 16 to Ile 12 -Arg 13 -Ala 15 -Ser 16 (Fig. 2). This change interferes with SERCA and PKA interactions necessary for the proper regulation of calcium reuptake. We have already shown that R14del cannot be phosphorylated by PKA and that R13A and R14A recapitulate this behavior. To confirm these observations, we mutated Arg 13 to isoleucine (R13I) to mimic the amino acid sequence change that results from deletion of Arg 14 (the sequence became Ile 13 -Arg 14 -Ala 15 -Ser 16 ) without shortening the cytoplasmic domain of PLN. Like R14del, we found that R13I could not be phosphorylated by PKA (Fig. 4). The combined results for R14del, R13A, R14A, and R13I indicated that any change to the PKA recognition motif of PLN would be expected to eliminate PLN phosphorylation as a means of regulating SERCA function.
We next investigated the physicochemical properties of R9C, R9H, and R9L that contribute to PLN dysregulation and DCM. The mutants tested included R9A, R9C, R9E (charge reversal), R9del (deletion of Arg 9 ), R9S (isosteric to R9C), R9K and R9H (conservative substitutions), R9Q (removal of charge), and R9L, R9V, R9I, and R9M (hydrophobic substitutions). All mutations of Arg 9 except for R9K and R9Q reduced phosphorylation by PKA (Fig. 4), whereas R9C, R9H, and all hydrophobic substitutions (R9L, R9V, R9I, and R9M) completely or nearly abolished phosphorylation. Surprisingly, the hydrophobic substitutions were the most effective mimics of the phosphorylation defects associated with R9C and R9L, mirroring the trend observed for the functional defects associated with these mutants (18). Additionally, R9H was found to cause a severe defect in phosphorylation, consistent with the potential linkage of this mutation to heart failure (12). Although histidine is a conservative substitution for arginine, the aromatic side chain likely makes it a poor substrate for PKA phosphorylation. We next investigated the positioning of the cysteine substitution at Arg 9 . We generated isosteric mutations of nearby residues Thr 8 and Ser 10 to cysteine (T8C and S10C). Neither mutant exactly mimicked R9C, yet T8C clearly resulted in a strong defect in phosphorylation. We concluded that Arg 9 is important for the recognition of PLN by PKA and that a hydrophobic mutation in this region of PLN is particularly detrimental for phosphorylation by PKA.
Arg 9 of PLN and Complementary Residues of PKA-Previous studies have suggested that PKA prefers peptide substrates with a basic residue upstream of the recognition motif at the P-6, P-7, or P-8 position (31,34), although the role of such distal residues in natural substrates like PLN has been less apparent. In model substrates, the upstream arginine is not required for phosphorylation, yet it plays a role in peptide positioning in the active site of PKA and increases the efficiency of phosphorylation. Herein, mutagenesis revealed that Arg 9 at P-7 of PLN appeared to fit this notion, where removal of the arginine side chain (R9A) decreased the efficiency of PLN phosphorylation, and substitution of particular side chains (hydrophobic substitutions) completely abolished phosphorylation (Fig. 4). The structure of the catalytic subunit of PKA bound to a cytoplasmic peptide of PLN has been determined (35), and it identifies two acidic residues in PKA (Glu 203 and Asp 241 ) that interact with Arg 9 of PLN (Fig. 5). To assess the importance of these residues, we produced recombinant bovine catalytic subunit of wild-type PKA, as well as Glu 203 to Ala (E203A) and Asp 241 to Ala (D241A) mutants of PKA. The activity of these PKA variants was confirmed using kemptide (sequence LRRASLG) (36), an ideal substrate for PKA based on the phosphorylation site of liver pyruvate kinase ( Table 2).
Phosphorylation of PLN by Recombinant PKA-Although model peptides are a facile system for studying phosphorylation, full-length, membrane-associated PLN is the natural substrate for PKA. However, the disease-associated Arg 9 mutants of PLN could not be phosphorylated over the time frame of our experiments. For this reason, the R9S mutant was chosen as a surrogate. R9S is isosteric to R9C, yet it resulted in sufficient phosphorylation for the study of PKA mutants (Fig. 4). Under the experimental conditions, recombinant wild-type PKA phosphorylated wild-type PLN with a half-time of ϳ7.5 min and an initial rate of 12.2 mol min Ϫ1 (Fig. 6A). Phosphorylation of R9S PLN with recombinant wild-type PKA resulted in a lower initial rate, and the time-dependent phosphorylation saturated but never reached complete phosphorylation. A similar trend occurred for the E203A mutant of PKA with both wild-type and R9S PLN (Fig. 6B). In fact, the progress curves for these three enzyme-substrate pairs (wild-type PKA with R9S PLN, E203A PKA with wild-type PLN, and E203A PKA with R9S PLN) were very similar to one another and did not reach complete phosphorylation. This suggested a common underlying effect on phosphorylation. Lastly, we tested the D241A mutant of PKA, which had a more severe effect on the phosphorylation of PLN (Fig. 6C). The data are consistent with interactions between Glu 203 and the side chain of Arg 9 and Asp 241 and the backbone amide of Arg 9 (Fig. 5), both of which are required for positioning of the substrate for efficient phosphorylation. The progress curves for the three enzyme-substrate pairs described above never reached complete phosphorylation, consistent with either enzyme inactivation or substrate depletion. Enzyme inactivation, perhaps by denaturation or product inhibition, seemed unlikely because the enzyme was limiting in the reactions. Nevertheless, a simple test for enzyme inactivation was to incubate the three PKA variants (wild type, E203A, and D241A) with wild-type PLN until saturation was reached, followed by the addition of fresh enzyme to test whether phosphorylation could proceed. We found that the addition of enzyme did not result in complete phosphorylation of wildtype PLN by the PKA mutants (Fig. 7A), indicating that enzyme inactivation was not the cause of this behavior. This same result was observed with R9S PLN, where the addition of fresh PKA (wild type or mutant) was unable to complete R9S phosphorylation (Fig. 7B). We then wondered how the substrate might change as a function of time in the progress curves. Because the substrate, wild-type or R9S PLN, was identical for all three PKA variants, it seemed improbable that true substrate depletion was occurring. Instead, it seemed more likely that the accessibility of the substrate to PKA changed as product accumulated in the phosphorylation reactions. The simplest way to envision how this might occur was to invoke phosphorylation in the context of the PLN pentamer (37). To take this into account, we replotted the progress curves for wild-type PKA and R9S PLN and E203A PKA and wild-type PLN as a function of the number of phosphorylated monomeric equivalents (Fig. 8A). For mutation of either PLN (Arg 9 ) or PKA (Glu 203 ), the progress curves stalled after the phosphorylation of two to three monomers per PLN pentamer. This led to the hypothesis that these residues of PLN and PKA function, at least in part, to recognize a nonphosphorylated monomer in the context of a partially phosphorylated pentamer.
If this hypothesis was correct, reducing the oligomeric state of PLN should increase the level of phosphorylation. For this, we returned to the disease-associated R9C mutant, which could not be phosphorylated by PKA in our assays and thus provided a rigorous test for our hypothesis. A full-length monomeric form of R9C (R9C-SSS) and an R9C cytoplasmic peptide (R9C 1-20 , amino acids 1-20 of PLN) were both tested for their ability to be phosphorylated by PKA. The monomeric form of PLN was generated by replacing the three transmembrane cysteine residues (Cys 36 , Cys 41 , and Cys 46 ) with serine. The PLN-SSS and R9C-SSS mutants were entirely monomeric by SDS-PAGE (Fig. 8B), and they possessed inhibitory properties comparable with wild-type and R9C PLN, respectively ( Table 1). As might be expected, R9C-SSS increased phosphorylation ϳ22-fold compared with R9C (to ϳ37% phosphorylation level of wild type), whereas R9C 1-20 increased phosphorylation 45-fold compared with R9C (to ϳ84% phosphorylation level of wild type) (Fig. 8C and Table 2). The increase in phosphorylation of the monomeric forms of R9C indicated that the quaternary structure of the pentamer is a limiting factor in the phosphorylation of individual subunits. Thus, the specific rec- ognition of Arg 9 by PKA helps to overcome this, reminiscent of the role of the pentamer in PLN dephosphorylation (37). In addition, the R9C monomer did not reach wild-type levels of phosphorylation, indicating that Arg 9 of PLN is important for the efficient phosphorylation of this longer, natural substrate of PKA.
DISCUSSION
Thus far, four mutations linked to heart disease have been identified in the cytoplasmic domain of PLN (9,11,12,38). The first to be identified was an R9C mutant, followed by R14del and newly identified R9H and R9L. Deletion of Arg 14 was found in two families with autosomal dominant DCM resulting in death at middle age (9,39), as well as another small family with late onset mild DCM (38). A heterozygous mouse model generated for this mutant suggested it was a superinhibitor of SERCA that was only partially reversible by PKA-mediated phosphorylation (9). By comparison, the R9C mutant results in multiple changes to PLN function. A heterozygous mouse model suggested that R9C is a loss of function form of PLN that also traps PKA and prevents the phosphorylation of other cellular targets including wild-type PLN (11). The effects of R9H and R9L on SERCA inhibition have been characterized (18), but their consequences on phosphorylation have yet to be examined. Given the link of R9C and R14del to PLN phosphorylation, we systematically examined the residues surrounding the phosphorylation site of PLN for their disease relevance. We wished to examine the aberrant interactions involving only PLN-SERCA and PLN-PKA, both of which would be considered initiating events in the development of DCM. At the cellular level, these initiating events could be distinct from observations made at later stages of disease development, where many affected processes can ultimately contribute to the observed suppression of SERCA function.
Mechanism of Disease-causing Mutations of PLN-First considering R14del, it was initially reported to be a partial inhibitor of SERCA under homozygous conditions and a superinhibitor under heterozygous conditions in HEK-293 cells (9). It was also observed that R14del may be phosphorylated, despite the change to the PKA recognition motif. A later study in mouse models revealed that under homozygous conditions, R14del was misdirected to the plasma membrane where it altered the activity of the sodium pump (40). In this latter study, R14del did not inhibit SERCA and was only weakly phosphorylated. However, only heterozygous R14del patients have been identified, and it has been shown that R14del is retained in the SR under heterozygous conditions (9). This suggests that the presence of both mutant and wild-type PLN underlies the development of DCM, perhaps via the reported superinhibition of SERCA. To investigate this, we isolated SERCA and R14del from all other cellular effectors and found it to be a partial inhibitor of SERCA (slight loss of function mutant) in both the absence and presence of wild-type PLN (18). We also observed that R14del could not be phosphorylated by PKA. Putting this in terms of initial stages of calcium dysregulation, R14del would result in partial inhibition of SERCA and lack of -adrenergic control by phosphorylation. This initial chronic inhibition of SERCA could lead to the development of other cellular sequelae, such as SERCA down-regulation (29,30) or changes in the SUMOylation of SERCA (41).
By comparison with R14del, the R9C mutant resulted in multiple changes to PLN function, including the purported trapping of PKA. Given that a cysteine residue replaced Arg 9 of PLN, we immediately considered the aberrant chemistry of a free sulfhydryl as the culprit for disease (42). However, examination of the structure of the PKA substrate-binding pocket revealed no obvious mechanism for the formation of a trapped complex (35). Further investigation revealed that a hydrophobic substitution at this position was enough to completely mimic R9C, and we anticipated that mutations like R9L might eventually be found in the human population (18). Of course, we now know that Arg 9 is a hot spot for disease-associated mutations, as can be seen with the recent identification of R9H and R9L in heart disease patients (12). Perhaps this is not surprising, because PKA prefers an arginine six to eight residues upstream of the recognition motif (31,43), and dynamic phos- phorylation of PLN is critical for normal cardiac function (44 -46). The available data suggest that the free and PKA-bound conformations of PLN are distinct (35) and that interconversion is more efficient with Arg 9 present. One can then speculate that hydrophobic substitution of Arg 9 is detrimental because it alters interactions necessary for the free or PKA-bound conformations of PLN (35,47,48). From the standpoint of establishing prediction models for human heart failure, any of the hydrophobic substitutions identified herein (such as T8C, R9I, R9M, R9V, and R13I) would be expected to mimic the disease development seen for R9C and R9L. One interesting mutant to note was R9H, recently identified in a Brazilian cohort of heart failure patients (12). The R9H mutation was found in a single patient with idiopathic DCM and was considered a low penetrant allele because several family members had the PLN mutation in the absence of disease. However, R9H resembles the disease-associated R14del mutation in that it is a functional inhibitor of SERCA (18), which cannot be phosphorylated by PKA (Fig. 4). As a result, R9H would be unresponsive to -adrenergic stimulation, leading to constitutive inhibition of SERCA. Although this by itself may not be causative in disease, we anticipate that individuals harboring the R9H mutation would be predisposed to heart failure.
Arg 9 Is Important for Proper Positioning of PLN in PKA Active Site-Arginine residues in the cytoplasmic domain of PLN appear to be hot spots for disease-associated mutations. At first glance this suggests a common underlying disease mechanism, although two of the mutants are partly functional (R9H and R14del), and two are nonfunctional (R9C and R9L). Although the hereditary PLN mutations do not have a common effect on the functional state of PLN (i.e. SERCA inhibition), all of the mutants appear to implicate PKA in disease (9,11). As part of the PKA recognition motif, deletion of Arg 14 was expected to have a major impact on PLN phosphorylation ( Fig. 4 and Refs. 9 and 32). In addition, it has been known for some time that PKA prefers model substrates with an arginine residue N-terminal to the recognition motif (31). The structure of PKA with an inhibitor (protein kinase inhibitor, PKI) shows that an upstream arginine interacts with Glu 203 , which is part of the peptide positioning loop of PKA (31,49). As a natural substrate, Arg 9 of PLN fits this notion of an upstream arginine, and the recent crystal structure of the PLN cytoplasmic domain bound to PKA clearly revealed interactions of Arg 9 with the peptide positioning loop of PKA (Fig. 5 and Ref. 35). As a natural PKA substrate, Arg 9 of PLN appears to be positioned by Glu 203 and Asp 241 of PKA, yet the functional implications of these interactions remain poorly elucidated. Herein we have shown that Arg 9 and Arg 14 of PLN and Glu 203 and Asp 241 of PKA are essential for phosphorylation, providing a possible shared mechanism for the diseaseassociated mutations. The presence of Arg 9 offers the advantage of increased substrate affinity and efficiency of phosphorylation (Table 2). If Arg 9 , Glu 203 , or Asp 241 is mutated, PKA can no longer discriminate between PLN and a model substrate such as kemptide. As for the role of each residue in the proper positioning of PLN in the active site of PKA, Arg 9 contributes to both binding affinity and catalytic efficiency, Glu 203 makes a larger contribution to binding affinity, and Asp 241 influences catalytic efficiency.
Role of Arg 9 of PLN in Efficient Phosphorylation of PLN Pentamer-Mutation of Arg 9 of PLN or Glu 203 or Asp 241 of PKA resulted in a plateau in phosphorylation at ϳ60% of total PLN (Fig. 6). The prospect of enzyme inactivation was eliminated when the addition of extra PKA failed to fully phosphorylate PLN (Fig. 7), and it was equally unlikely that substrate depletion was responsible. Instead, our results indicated that partial phosphorylation correlated with the oligomeric state of PLN. Disrupting the ability of PLN to form a pentamer, either by mutation (R9C-SSS PLN) or the use of a cytoplasmic peptide (R9C 1-20 ), markedly increased phosphorylation at Ser 16 (Fig. 8C). Because PLN phosphorylation occurs randomly, with each monomer within a pentamer having an equal chance at becoming phosphorylated (37), phosphorylation appeared to stall after two or three monomers within the pentamer were phosphorylated. Thus, we concluded that Arg 9 of PLN, along with Glu 203 and Asp 241 of PKA, are required for the phosphorylation of a monomer within the context of a partially phosphorylated pentamer. This observation may provide an explanation for the PKA trapping reported for the R9C mutation in lethal DCM (11).
It has been reported that the conformational dynamics of PLN are an important determinant for PKA-mediated phosphorylation (50) and that PKA recognizes substrates by conformational selection (35). Herein we find that Arg 9 plays a dual role: it increases the efficiency of phosphorylation of a PLN monomer, and it allows for recognition of a monomer within the context of the PLN pentamer. Although PLN is a dynamic molecule, it also possesses a well defined structure that is distinct from that in the PKA-bound state (35,48). Thus, in the selection of an appropriate substrate conformation by PKA, the presence of Arg 9 in PLN must offer an advantage for the recognition of a suitably structured substrate. The absence of Arg 9 in disease-causing mutants of PLN (such as R9C, R9H, and R9L) could alter the conformational selection by PKA, thereby creating a kinetic trap for PKA and affecting the phosphorylation of other cellular targets. Additionally, it is becoming clear that hydrophobic substitution of Arg 9 creates multiple defects in PLN function, including loss of phosphorylation and an abnormal interaction with PKA, as well as loss of inhibitory function and a dominant negative interaction with SERCA (18). In the case of R9C, this could be further exacerbated by disulfide bond formation between PLN monomers (42). Because the associated defects in calcium homeostasis appear to be causative in heart failure, arginine residues in the cytoplasmic domain of PLN should be considered functional hot spots for hereditary mutations. | 8,213.8 | 2012-06-15T00:00:00.000 | [
"Biology"
] |
Phylum barrier and Escherichia coli intra-species phylogeny drive the acquisition of antibiotic-resistance genes
Escherichia coli is a ubiquitous bacterium that has been widely exposed to antibiotics over the last 70 years. It has adapted by acquiring different antibiotic-resistance genes (ARGs), the census of which we aim to characterize here. To do so, we analysed 70 301 E. coli genomes obtained from the EnteroBase database and detected 1 027 651 ARGs using the AMRFinder, Mustard and ResfinderFG ARG databases. We observed a strong phylogroup and clonal lineage specific distribution of some ARGs, supporting the argument for epistasis between ARGs and the strain genetic background. However, each phylogroup had ARGs conferring a similar antibiotic class resistance pattern, indicating phenotypic adaptive convergence. The G+C content or the type of ARG was not associated with the frequency of the ARG in the database. In addition, we identified ARGs from anaerobic, non- Proteobacteria bacteria in four genomes of E. coli , supporting the hypothesis that the transfer between anaerobic bacteria and E. coli can spontaneously occur but remains exceptional. In conclusion, we showed that phylum barrier and intra-species phylogenetic history are major drivers of the acquisition of a resistome in E. coli .
INTRODUCTION
Escherichia coli is a ubiquitous bacterium found in the intestinal microbiota of vertebrates. In the human gut microbiota, E. coli is the dominant species of the phylum Proteobacteria with a mean 10 8 c.f.u. (g faeces) -1 [1,2]. It is also commonly found in the digestive tract of mammals and birds, including livestock and poultry [3]. Hence, during the last 70 years, E. coli strains have been highly exposed to antibiotics used in human and animal health, as well as those used in agriculture. In response, E. coli has adapted through the acquisition of multiple genes encoding antibiotic resistance referred to as antibiotic-resistance genes (ARGs). As a consequence, acquired resistance in E. coli is linked to human activities [4].
In the pre-next generation sequencing (NGS) era, an exhaustive characterization of ARGs in a given species was challenged by the necessity to use as many PCRs as the genes targeted and perform experiments in a high number of isolates. With the development of NGS and genomics after 2005, the number of bacterial genomes made available has increased exponentially and the identification of ARGs was made easier via the use of in silico tools. Efforts have been made to collect and organize known ARG sequences in dedicated databases, with the first antibiotic resistance database (ARDB) released as early as 2008 [5,6]. Since then, others such as ResFinder [6], CARD [7], ARG-ANNOT [8] and more recently AMRFinder [9] have followed. Typically, these databases include thousands of ARG nucleotide and/or amino-acid sequences previously identified in cultivable and/or pathogenic bacteria. However, their content is biased as they lack the ARGs found in bacteria that are difficult to culture and those of little relevance from a medical perspective, such as commensal, strict anaerobic bacteria from the gut microbiota. Nonetheless, we and others have found these bacteria do harbour a vast diversity of ARGs, and the latter actually differ from those found in the ARG databases [10,11]. These ARGs have been made available in specific databases, namely FARMEDB [12], ResFinderFG [13] OPEN ACCESS and Mustard [11], whose content slightly overlaps with that of conventional ARG databases (Fig. S1, available with the online version of this article). Indeed, very few observations support occurrence of a transfer of the ARG from intestinal commensals to Proteobacteria opportunistic pathogens such as E. coli [14]. Still, such transfer has proven to be possible. For instance, tetX, a gene encoding resistance to tetracyclines [15], was shown to be transferred from Bacteroidetes to Proteobacteria. Furthermore, some transfer events may have gone unseen due to the lack of genomic monitoring of a large number of strains and because of the lack of sampling.
While most of these ARGs are borne by mobile genetic elements such as integrons [16], plasmids [17,18] or more rarely phages [19], some associations between their presence and specific E. coli phylogroups have been evidenced in the past based on phenotypic and genetic markers [20][21][22][23]. More recently, genomic data have confirmed such associations and extended them to more specific phylogenetic lineages [24,25]. Some of these multidrug-resistant lineages are disseminating worldwide, such as clonal group A [sequence type (ST) 69 phylogroup D] [26] and more recently ST131 (phylogroup B2) [27]. In such clonal groups, strong associations have been evidenced between the within ST sub-clade, the plasmid type and the ARG content [28]. All these data argue for a complex cross-talk between the chromosomal background, the genetic support of the ARG and the ARG itself resulting from intergenic epistasis [29].
Since December 2015, EnteroBase [30], a public database including thousands of genomes from E. coli/Shigella and other species (Salmonella, other Escherichia species, Clostridioides, Vibrio, Yersinia, Helicobacter and Moraxella), has been available. Beyond genomes, EnteroBase includes (with varying degrees of completeness) metadata linked to the strain itself (name, source, location, laboratory, species, serotype, disease and entry/update date) and to its sequencing process (N50, coverage). Still, no data regarding antibiotic resistance are available. In this study, we leveraged the high number of E. coli genomes in EnteroBase to characterize the acquired ARGs in E. coli and, more precisely, (i) to test for specific associations between the phylogenetic background of the strains and the presence of ARGs, and (ii) to evidence ARG transfer between non-Proteobacteria species and E. coli using metagenomic databases ResFinderFG [13] and Mustard [11].
Genomic database, species classification and E. coli phylogroup determination
A total of 82 063 available genomes was downloaded from Escherichia/Shigella EnteroBase (as of 1 February 2019). The genomes were classified according to their genera and species (Shigella, Escherichia coli, Escherichia albertii, Escherichia fergusonii, Escherichia marmotae or unknown). First, Shigella and enteroinvasive E. coli (EIEC) genomes were identified using in silico PCR with primers of the ipaH3 gene [31]. Due to their specific, obligatory intracellular pathogenic trait, they were removed from the dataset. Then, using ClermonTyper [32], a tool that provides information about phylogroups (A, B1, B2, C, D, E, F and G) for E. coli and identifies nearest species in conjunction with Mash [33] (E. fergusonii, E. albertii and Escherichia clades including E. marmotae), all the genomes were classified as E. coli belonging to the aforementioned phylogroups, E. fergusonii, E. albertii and Escherichia clades I to V. Of note, in this study, we considered that phylogroup F included both F and G phylogroups [34]. When a discrepancy was observed between the ClermonTyper and Mash attribution (n=3734) the strain was classified according to Mash.
ARG identification, plasmid incompatibility group, chromosomal multilocus sequence type (MLST) determination and G+C content
Diamond [35] was used to identify all the ARGs in EnteroBase by aligning all genomes against the AMRFinder [9] (29 04 2019 version), Mustard [11] (30 09 2017 version) and the ResFind-erFG [13] (21 12 2016 version) databases (with a minimum coverage × identity value greater than 0.64 for nucleotidic sequences corresponding to 80 % identity and 80 % coverage). All redundancies (sequences sharing 100 % identity in nucleotides) between the databases were removed. The ARG families were selected according to the Mustard website (http:// mgps. eu/ Mustard). All ResFinderFG and Mustard ARGs originatng from non-Proteobacteria were further investigated. When a genome was found to include an ARG putatively originating from a non-Proteobacteria, contamination (i.e. the presence of multiple sequences of non-Proteobacteria along with that of E. coli) was assessed using Kraken [36]. PlasmidFinder [37] database together with Diamond [35] were used to determine the plasmid incompatibility groups found in each genome of
Impact Statement
We analysed a large set of Escherichia coli genomes and searched for antibiotic-resistance genes (ARGs) using various databases. We observed that ARGs distributed according to the phylogenetic background of the strains, supporting the observation that constraints were at play within E. coli. Moreover, we identified four instances of putative transfers of ARGs from a phylum other than that of E. coli, stressing the strong inter-phyla barrier for ARG exchange. However, the capacity of the acquired ARGs to provide resistance against the most used antibiotic families did not differ according to the phylogenetic background, stressing that the different lineages of E. coli adapted to the antibiotic pressure with the acquisition of ARGs their genetic background could accommodate. This research is to our knowledge the first of its kind to study the acquired resistome of E. coli, an intestinal, ubiquitous bacterium that has been exposed to antibiotics from their earliest use. In this connection, the results could help in better understanding how bacteria adapt to antibiotics by acquiring ARGs.
the EnteroBase database (98 % identity and a minimum of 95 % coverage). MLST was determined using mlst software based on the Warwick University or Pasteur Institute MLST schemes available from the pubMLST database (https:// github. com/ tseemann/ mlst) [38]. G+C content (%) deviation between E. coli and the acquired ARG was measured using the E. coli G+C defined previously [39]. ARG sub-family classification was obtained by clustering all the ARGs from the three databases (AMRFinder, Mustard and ResfinderFG) using cd-hit-est based on a 90 % identity threshold [40,41].
Statistical analysis and normalization
To circumvent the sequencing biases for the richness and diversity estimation, the data were normalized so that each phylogroup would include the same number of genomes (n=10 000) by re-sampling (for C, D and F phylogroups) or sub-sampling (for A, B1, B2 and E phylogroups) while maintaining the proportionality. The same protocol was applied to the STs. For all other statistical analysis, the complete dataset was used (without normalization). We tested the correlations between phylogroup, ST, plasmid incompatibility group and ARG using the corrplot package of R v3.4.2 and the corrmat function. The preferential distribution of some ARGs within phylogroups was tested using the Kruskal-Wallis test and Benjamini-Hochberg correction. The diversity of the ARGs in some STs was measured using the Shannon index in R (v3.4.2) with the vegan package. The number of distinct ARGs in phylogroups or STs was referred to as ARG richness.
Logistic regression was performed using R (v3.4.2) and the glm function. We first tested all variables in a univariate model and afterwards included in the multivariate model all variables that had shown a P value <0.01. The stepAIC function of the mass package was used to performs stepwise model selection by Akaike information criterion (AIC).
Most frequent ARGs found in E. coli
First, we used the AMRFinder database and identified a total of 314 091 ARGs in E. coli genomes. The mean number of ARGs by genome was around 4.5 and the median was 2, and we observed a minimum of 0 ARGs and a maximum of 47 ARGs. The first part (n=164 519) included genes matching to known genes with 100 % of identity and coverage. This corresponded to 381 ARGs out of the 4955 genes included in the AMRFinder database. The second part (n=149 572) was made of variants sharing a coverage × identity value greater than 0.64 for nucleotide sequences (corresponding to 80 % identity and 80 % coverage). This comprised variants for 328 genes (including 169 genes not previously detected when the 100 % identity and coverage parameters applied) in the AMRFinder database. A total of 550 genes out of 4955 (11.1 %) of the AMRFinder database were, thus, found at least once in E. coli genomes. The 20 most frequent ARGs sharing 100 % identity with genes from AMRFinder are depicted in Fig. 2. We predominantly found genes encoding β-lactamases and aminoglycoside-modifying enzymes (AMEs), the three most abundant genes being bla TEM-1 (n=16 766), aph(3′′)-Ib (n=15 481) and aph(6)-Id (n=12 845).
We also observed a high frequency of ARGs conferring resistance to antibiotics that are used to treat infections not caused by E. coli but rather caused by Gram-positive bacteria, such as rifampicin (arr, n=394/110, respectively, 100 % identity and variants) and macrolide-lincosamide [lnu (n=494/322), Unexpectedly, we identified a blaZ gene commonly found in Staphylococcus aureus in an E. coli strain. However, subsequent analysis of the genome revealed that 10 % of reads were assigned to S. aureus and 90 % to E. coli. This 10 % of reads from S. aureus evenly distributed in the genome of S. aureus strain CFSAN007851, strongly supporting the hypothesis of a contamination prior to sequencing.
Distribution of the resistance genes according to the strain phylogeny
We observed that the mean number of ARGs per genome differs according to the phylogroup: 5.3 for A, 3.6 for B1, 5.5 for B2, 7.6 for C, 7.1 for D, 2.2 for E and 7.4 for F phylogroup (P <0.001 with ANOVA test). We took an ecological approach by considering the richness (corresponding to the number of unique ARGs) and the diversity with the Shannon index (used to quantify specific biodiversity). We observed a variable distribution of the ARG richness in each phylogroup, with a richness of 254, 236, 213, 170, 178, 122 and 169 in phylogroups A, B1, B2, C, D, E and F, respectively. However, we observed an even distribution of the diversity with Shannon index equal to 3.63, 3.51, 3.15, 3.58, 3.40, 2.92 and 3.51 (Fig. S2). The three most represented ARGs in each phylogroup were bla TEM-1 , aph(3′′)-Ib and aph(6)-Id, except for in phylogroups D and F, in which mphA (a phosphotransferase conferring resistance to macrolides) and tetB (an efflux pump conferring resistance to tetracyclines) ranked second and third, respectively. However, specific ARGs were more frequently found in some phylogroups [referred to as phylogroup-predominant resistance genes (PPRGs), i.e. ARGs with a P value less than 0.001 with Kruskal-Wallis test and Benjamini-Hochberg PPRGs, respectively. In contrast, we observed 102 PPRGs in phylogroup A (n=12 469).
We tested the hypothesis that even if the distribution of ARGs differed according to the phylogroup, that of their function (i.e. the antibiotic families they encode resistance to) would not. We applied the same statistical approach and indeed found no specific association between the activity spectrum of ARG families and phylogroups (Fig. 4). For instance, we looked specifically at bla CTX-M genes, which encode resistance to third-generation cephalosporins. bla CTX-M was widespread between all phylogroups (and particularly in the phylogroup B2) except the phylogroup E where bla CTX-M-1 was prominent. However, the pattern of resistance they confer is very similar.
Global associations between strain phylogeny, plasmid type and resistance genes
We tested correlations between ARGs, phylogroups, STs and plasmid incompatibility groups. A total of 124 clusters were found with a correlation factor strictly higher than 0.30, each containing at least one ARG with other ARGs, STs, phylogroups or plasmid incompatibility groups. The bla CTX-M-15 gene strongly correlated with aac(6′)-Ib and bla OXA-1 (r=0.70), and also to a lesser extent correlated to aac(3)-IIa, mphA, aadA5, qacEdelta1 and ST131, as well as the plasmid incompatibility group IncFII (r=0.36). In contrast, we did not identify other ARGs, plasmid incompatibility groups or STs associated with the other common bla CTX-M genes, bla CTX-M-27 and bla CTX-M-1 . As for carbapenemase-encoding genes, a correlation was detected between bla NDM-1 and aph(3′)-VI, floR, erm-42, bla CMY-6 , mphE, msrE, armA and with the plasmid incompatibility group IncA (r=0.36). Finally, bla TEM-1 correlated with aph(3′′)-Ib, aph(6)-Id, aac(3)-IId, sul2 and the incompatibility group IncQ1 (r=0.39). We further assessed whether the ARGs that correlated were located next to each other. The ARGs associated with bla CTX-M-15 , bla NDM-1 and bla TEM-1 were in most instances found on the same contig (Fig. S3), supporting their acquisition via a common mobile genetic element rather than multiple acquisitions events. Last, we used a logistic regression to identify the variables associated with the presence of bla CTX-M-15 , bla NDM-1 and bla TEM-1 . In multivariate analysis, we observed strong associations between these genes and other ARGs (including those mentioned above), but also with plasmid types and phylogroups, supporting the role of the genetic background in the presence of ARGs in E. coli (Tables S2-S4). Of note, we did not observe any negative correlation between ARGs. However, genes tetB, aadA5, qacEdelta1, bla TEM-1 , bla CTX-M-15 , bla OXA-1 and mphA were negatively correlated to ST11.
G+C content and type of antibiotic resistance
We determined the G+C content of clusters of ARGs measured from the EnteroBase genomes and the distribution of their divergence using the mean G+C content of the E. coli core genome (51.8 %) [39]. We observed a large panel of G+C content deviation (Fig. 6) between the mean G+C content of E. coli and the G+C content of acquired ARGs, supporting the supposition that the G+C content was not a constraint for the acquisition of ARGs from other species or within the E. coli species. Interestingly, we also observed that the G+C content of the most frequent ARGs found in E. coli (with at least 1000 occurrences) did not significantly differ from that of the ARGs with low frequency (Mann-Whitney test, P=0.7, mean for the most frequent ARG 50.0, and 48.6 for low frequency).
The functional class of the horizontally transferred ARGs has been shown to play a role in the fitness of the recipient strain in that the gene encoding drug modification (e.g. β-lactamase and AME) would have a minimal impact on the fitness of the recipient strain even if originating from a G+C-divergent background. Conversely, genes encoding proteins interacting with the cellular content (e.g. efflux and target modification) would impact the fitness of the recipient, especially when originating from a G+C-divergent background [44]. However, in our dataset, we did not observe a distinct pattern according to the type of resistance (Mann-Whitney test, P=0.7, mean for drug-modifying enzyme equal to 3.969, and 2.582 for ARGs with cellular interactions; Fig. 6).
Non-Proteobacteria ARGs can be exceptionally found in E. coli
Lastly, we assessed whether some ARGs found in non-Proteobacteria bacteria were present in the E. coli genomes of EnteroBase using the specific databases ResFinderFG and Mustard. We observed a high number of hits (n=833 814) in the Mustard and ResFinderFG databases, mostly corresponding to genes predicted to originate from Proteobacteria. The first part (n=271 458) included genes matching to known genes with 100 % of identity and coverage. This corresponded to 37 different genes identified in the 6095 genes of the Mustard database (Table S5) and 44 different genes identified in the 2282 genes of the ResFinderFG database (Table S6). The second part (n=562 356) was made of variants sharing a coverage × identity value greater than 0.64 for nucleotide sequences (corresponding to 80 % identity and 80 % coverage) with known genes. This made up 385 673 variants for 23 additional genes in Mustard (Table S5) and 176 683 variants for 61 additional genes in ResFinderFG (Table S6).
Interestingly, we identified a putative β-lactamase encoding gene from ResFinderFG (beta_lactamase, KU546399.1, faeces, AMX) and Mustard (MC3.MG60.AS1.GP1.C4251.G1) databases also found in the strain Bacteroides uniformis NBRC 113350 (NCBI GenBank accession no. NZ_AP019724.1). The E. coli bearing this gene was from phylogroup A, ST744 (Table 1), and had been isolated in Germany [45] from a patient screened for multidrug-resistant bacteria at the Fig. 6. Scatter plot of the G+C deviation of ARGs according to their occurrence in EnteroBase and the type of resistance encoded. Each dot corresponds to one cluster of ARGs (90 % nucleotide identity) created using AMRFinder, Mustard and ResfinderFG databases. The G+C content deviation corresponds to the difference between the mean G+C content of E. coli [39] and the mean of the G+C content of each gene in the cluster.
University Hospital of Münster. The gene was embedded in a 7600 bp contig (Fig. S4), itself sharing 100 % identity with the Bacteroides uniformis genome. The strain was kindly provided by Professor Alexander Mellmann from the University of Münster and was re-sequenced in our laboratory using Illumina MiniSeq and Oxford Nanopore (Oxford Nanopore Technologies) chemistries, which confirmed the presence of the resistance gene (data not shown). Further description of this strain is underway and will be detailed in a separate study.
We also found two erm genes (ribosomal methylases), encoding macrolide-lincosamide resistance and originating from non-Proteobacteria. The first one was found in an E. coli from phylogroup E, ST753, originating from a livestock sample from Ireland (Table 1). A part of the 12 910 bp contig containing the gene matched against a Bifidobacterium breve genome (Fig. S5). We were able to identify some transposase genes in the contig bearing the erm gene and surrounding inverted repeat sequences. The second erm was found in an E. coli from a phylogroup D, ST405 sample from The Netherlands (Table 1), and was originally identified on a plasmid from Clostridioides difficile (Fig. S6). Unfortunately, the metadata associated with those genomes was not sufficient to trace back the strains and confirm the presence of the genes in E. coli by re-sequencing the strains. However, no evidence of wet-lab or dry-lab contamination was observed.
Last, we identified a tetM gene, encoding tetracycline resistance, originating from C. difficile (Fig. S7). The gene was found in a 3291 bp contig completely matched against the C. difficile strain in an E. coli from phylogroup A, ST5943, originating from Thailand. No evidence of contamination was observed even though the reads were not available.
DISCUSSION
Using a large number of genomes, we were able to assess the diversity and distribution of ARGs in E. coli. From a global perspective, we assumed that the richness of acquired ARGs in E. coli was somewhat limited with regards to the high number of ARGs in the literature and the closeness of E. coli to human activities (the ARGs found in E. coli representing 11.1 % of the AMRFinder database). This suggested constraints in the path for ARG exchanges and sustainability between species (phylogenetic origin of the gene, interaction within the cell of the gene product [44,46]), but also within the E. coli species. We did not find any evidence of a link between the G+C content or the functional class of the transferred genes and their frequency in the database. However, some ARGs had specific associations with the genomic background (phylogroups and STs), other ARGs and plasmid incompatibility groups. Of note, we did not observe any negative correlation between ARGs, suggesting that the ARGs are not competing with each other.
Such association between the phylogenetic background and the ARGs can result mainly from two evolutionary scenarios. In the 'chance and timing scenario' , there is a limited number of acquisition events then propagated vertically (clonal inheritance). In this case, the strong gene-lineage association is only contingent upon evolutionary history. Such a rare acquisition scenario would be likely to apply to genes such as aac(3)-I, aac(6′)-Ib-cr5 or bla . Conversely, for the multiple arrivals, the maintenance and the expression of the ARGs are under selection due to epistatic interactions between the resistance determinants and the genomic background. This could concern genes such as tetB, with widespread distribution but increased prevalence in specific lineages.
Similar association between the genomic background and the presence of virulence genes has been reported within the E. coli species [47,48] and attributed to epistasis between different parts of the genome [29]. Nonetheless, on a broader scale, we observed that these preferential genetic supports of resistance led to the same functional pattern of resistance to antibiotic classes. While E. coli phylogroups had different ARG distributions, they harboured a full genetic armamentarium to resist the same antibiotic families. In brief, the ARGs were different but their functions were similar. Such a functional redundancy suggests that the E. coli phylogroups were exposed to the same antibiotic pressure but acquired different ARGs to cope with it in an adaptive convergence process. Interestingly, a similar role of the genetic background influencing the genomic basis of antibiotic resistance by channelling evolution along different mutational paths has been reported following a long-term in vitro evolution experiment with E. coli [49].
The antibiotic exposure impacts not only the bacteria causing infections, but also the bacteria residing in our microbiota. Indeed, we observed a high rate of ARGs conferring resistance to antibiotics that are used to treat infections not caused by E. coli or other Gram-negative bacteria but rather caused by Gram-positive bacteria, such as rifampicin and macrolidelincosamides. Such antibiotics are excreted in the intestine at high concentrations, so that some bacteria with minimum inhibitory concentrations too high to be in the clinical spectrum of the antibiotic would be affected in the gut. In that respect, the high frequency of ARGs conferring resistance to those antibiotics is a strong signal stressing the impact of antibiotics on our microbiota.
We observed four putative transfers of ARGs between non-Proteobacteria and E. coli. In a previous work, we observed that the vast majority of ARGs found in the intestinal microbiota were very distinct from those found in cultivable bacteria (including E. coli) and that few arguments were found to support their mobility [11]. Taking advantage of the largest E. coli database to date, we could observe that ARGs could actually be exchanged between E. coli and intestinal commensals, such as was observed for tetX [50]. Even if anecdotal in this dataset, with four observations, the very fact that they were detected suggests that they are not that uncommon, but unlike tetX which has met with success, their spread is very limited to date as they were only found in one genome each. The donor bacteria were strict anaerobic bacteria (Bacteroides uniformis, Bifidobacterium breve and C. difficile) commonly found in the gut microbiota at high abundances [51] alongside E. coli. Considering that humans have been using antibiotics for more than 70 years, very favourable conditions have been met for ARG transfers between anaerobic bacteria and E. coli. That such transfers have been observed so rarely supports the hypothesis that anaerobic bacteria may indeed provide ARGs to E. coli, but that the contribution to the worldwide AMR issue seems to be minor. Indeed, the most successful ARGs found in E. coli originate from other Proteobacteria species [14], e.g. bla CTX-M progenitors are Kluyvera spp. belonging to the Enterobacterales [52]. Besides, the observations of the ARG transfer would not have been possible if not for the use of multiple databases to cover the broadest range of ARGs. While databases such as AMRFinder may be found suitable to identify ARGs from clinically relevant bacteria, they may not be appropriate when it comes to looking for ARGs originating from other environments, such as the gut microbiota.
We acknowledge some limitations of the present study. Despite including a large number of genomes, EnteroBase suffers from inclusion biases in that strains of interest (e.g. the resistant and/or the pathogenic ones) are the most sequenced. Indeed, EnteroBase includes a large number of STEC (Shiga-toxin-producing E. coli), mainly the O157:H7 serotype, ExPEC (extra-intestinal pathogenic E. coli) with the emerging ST131 and many strains producing extended spectrum β-lactamases. This may have led to an overestimation of the associations between ARGs, phylogenetic traits and replicons. Also, many more ARGs may have been found in E. coli, perhaps including some from intestinal commensals that were not found of interest to be cultured and sequenced. Hence, we assume we did not cover the global picture of the acquired ARGs always found in E. coli, but only a part. Nonetheless, we believe our findings are sound with regards to the very high number of strains included in this study. We also know that some genomes with contamination can be found in the database, which warrants the use of specific tools upstream of genome analysis.
In all, we observed that ARGs were distributed in the E. coli phylogroups/STs with a preferential fashion. In the meantime, they provided resistance to the same antibiotic families. Furthermore, we observed that the transfer of ARGs between non-Proteobacteria and E. coli indeed occurred but seemed to be exceptional.
Funding information
This work was partially supported by the 'Fondation pour la Recherche Médicale' (equipe FRM 2016, grant 325 number DEQ20161136698) and by the Direction Générale des Armées (project FastGeneII). | 6,494.6 | 2021-08-01T00:00:00.000 | [
"Biology"
] |
Perturbative entanglement entropy in nonlocal theories
Entanglement entropy in the vacuum state of local field theories exhibits an area law. However, nonlocal theories at large N and strong coupling violate this area law. In these theories, the leading divergence in the entanglement entropy is extensive for regions smaller than the effective nonlocality scale and proportional to this effective nonlocality scale for regions larger than it. This raises the question: is a volume law a generic feature of nonlocal theories, or is it only present at strong coupling and large N ? This paper investigates entanglement entropy of large regions in weakly coupled non-local theories, to leading order in the coupling. The two theories studied are ϕ4 theory on the noncommutative plane and ϕ4 theory with a dipole type nonlocal modification using a fixed nonlocality scale. Both theories are found to follow an area law to first order in the coupling, hence no evidence is found for a volume law. This indicates that, perturbatively the nonlocal interactions considered are not generating sufficient entanglement at distances of the nonlocality scale to change the leading divergence, at least to first order in the coupling. An argument against volume laws at higher orders is also presented.
Introduction and summary
Entanglement entropy has recently attracted interest as a way to study the correlations between degrees of freedom in a quantum state. Local field theories generally exhibit what is know as an area law behaviour, where the leading divergence in the entanglement entropy of a spatial region is proportional to the area of the boundary of that region. That is, S ∼ |∂A|Λ d−2 , where S is the entanglement entropy, |∂A| the area of the boundary of the region and Λ is the momentum scale of the UV regulator of the theory, for example the inverse of a lattice spacing. 1 However, recent holographic studies of strongly coupled nonlocal theories have found a volume law behaviour instead [2][3][4][5][6]. That is, for a nonlocality scale l, S ∼ |A|Λ d−1 for regions much smaller than l and S ∼ l|∂A|Λ d−1 for regions much larger than l [5]. Note that entanglement entropy of large regions is sufficient to differentiate this type of volume law from an area law, as the entanglement entropy is proportional to These results can be understood intuitively by assuming that all the degrees of freedom within the range of the nonlocality are equally entangled with each other. Then, for regions much smaller than l, all the degrees of freedom inside the region, not only those near the boundary, are entangled with degrees of freedom outside. For regions much larger than l, all the degrees of freedom within a distance l of the boundary are entangled with those outside. In both cases, the number of degrees of freedom strongly entangled across the boundary is proportional to Λ d−1 rather than the Λ d−2 expected from an area law.
A natural question is whether this behaviour is generic to nonlocal theories or if it is confined to a strongly coupled, large N regime. One approach is to study entanglement entropy for a free scalar field on the fuzzy sphere [7][8][9][10]. This turns out to be proportional to the area 2 for small polar caps [9,10]. However, two issues arise which question whether this should be characterised as a volume law. First, the dependence of the entanglement entropy on the UV regulator does not match the volume law described above. Second, the entanglement entropy does not scale like the number of degrees of freedom contained in the polar cap, as the degrees of freedom are not uniformly distributed across the sphere. Instead it scales as the number of degrees of freedom near the boundary [7,8]. Another limitation of this theory is that the nonlocality scale is tied to the size of the sphere so it is not possible to study regions much larger than the nonlocality scale.
Another approach is to study a free field theory on a lattice with a nonlocal kinetic term, in which case a volume law was found [11].
This paper investigates the role of interactions in this question by considering two theories with nonlocal interactions: scalar λφ 4 theory on the noncommutative plane and λφ 4 theory with a dipole type nonlocal modification with fixed nonlocality scale. The leading divergence in entanglement entropy of large regions is calculated to leading order in perturbation theory and is not found to be proportional to the length scale of the nonlocality, hence no evidence of a volume law is found. Instead, the leading divergence in both theories has the same form as the standard local λφ 4 theory which follows an area law. This result indicates that, perturbatively these nonlocal interactions are not generating sufficient entanglement at distances of the nonlocality scale to change the leading divergence, at least to first order in the coupling.
The free theory with λ = 0 for both of these nonlocal theories is equivalent to the regular commutative λφ 4 theory. There is no modification of the entanglement entropy at this order. Perturbation theory can be used to study the nonlocal theories at small λ.
The entanglement entropy is calculated using the replica trick and the formula S = −∂ n [ln Z n − n ln Z 1 ] n=1 , where Z n is the partition function of the field theory defined on an JHEP09(2015)180 n-sheeted space [12][13][14]. This partition function can be reduced to computing vacuum bubble diagrams and the O(λ) contribution in perturbation theory comes from bubble diagrams with one vertex and two loops. Consistent with the results of previous investigations of perturbative noncommutative theories [15], the planar diagrams in the nonlocal theories give the standard commutative result, which is where A ⊥ is the (infinite) area of the boundary of our region, Λ our UV regulator, m our IR regulator and G n is the Green's function on the n-sheeted space used in the replica trick [14]. This contribution follows an area law, as S ∝ A ⊥ Λ 2 up to logarithmic corrections.
The nonlocality only affects the nonplanar diagram. This diagram contributes a term of the form , where now ∆x corresponds to a translation from the nonlocality.
In the dipole theory, ∆x is proportional to the fixed dipole length. Thus the nonplanar diagram has only a logarithmic IR divergence and is subleading compared to the planar diagrams. In the noncommutative theory the translation along the noncommuative plane is proportional to the momentum in the other noncommutative direction, so this contribution must be integrated over this momentum. If we don't impose an IR regulator, the momentum controlling the translation is allowed to vanish and G(0, ∆x) → G 1 (0) ∼ Λ 2 . This gives a contribution that is of the same order as the planar diagrams. However, if we impose an IR regulator, ∆x has a minimal value and this divergence can be reinterpreted as an IR divergence. This is familiar from the UV/IR connection described for example in [15].
Our results for the O(λ) contribution to the entanglement entropy, S 1 , are real scalar : complex scalar : where S planar and S nonplanar denote the contributions from planar and nonplanar diagrams respectively. The leading divergences from these diagrams in each of the theories considered are Commutative theory : Noncommutative plane : Dipole theory : S nonplanar is subleading, (1.9) where Λ is our UV regulator, m is our IR regulator, A ⊥ is the area of the boundary, Θ is the noncommutativity parameter of the plane and a is the nonlocality scale of the dipole theory. The details of the expansion in m Λ used to extract these leading divergences are discussed in section 5.2.1.
In both cases, the contribution from these nonplanar diagrams does not have the right form to be interpreted as the sign of a volume law in the entanglement entropy and we must JHEP09(2015)180 conclude that these nonlocal theories at least to first order in perturbation theory obey an area law. This can be contrasted with the strong coupling result which found clear signs of the volume law even for large regions [5]. Thus, the volume law must either only appear at higher orders in perturbation theory or it must require strong coupling. Consistent with our analysis, previous investigations of perturbative dynamics of the noncommutative theory [15] have shown that noncommutativity does not introduce any new perturbative UV divergences that cannot be reinterpreted as IR divergences. Thus, is it hard to see how the higher degree of divergence required for a volume law can arise in perturbation theory. We are lead to the conclusion that entanglement on distances of the nonlocality scale and volume laws require strong coupling and are not accessible to perturbation theory.
The remainder of the paper is organised as follows: section 2 describes the theories we study, section 3 explains how the entanglement entropy can be computed perturbatively in these theories, section 4 shows that the results for the free theory are unchanged in these nonlocal theories, section 5.1 computes the first order correction in the coupling to the entanglement entropy in a real scalar φ 4 theory for a warm-up and for later reference. Section 5.2 extends the calculation to the real scalar on the noncommutative plane. Section 5.3 reproduces the results for the previous two sections in the case of the complex scalar. Section 5.4 computes the result for the complex scalar in the dipole theory. Finally, section 6 concludes with a discussion of these results.
Theories
The theories used in this paper are scalar field theories on R 1,3 where products of fields are replaced with a possibly noncommutative product denoted . Three examples of this product will be used: the regular commutative one, the Moyal product associated with the noncommutative plane and the dipole product with a fixed nonlocality scale. See [16] for a review of noncommutative field theory. The Euclidean action is The entanglement entropy in these three theories is calculated to leading order in the coupling λ. The mass is present to serve as an IR regulator and will be taken to be small in the end. First, the standard commutative case, where (f g)(x) = f (x)g(x), is reviewed and presented in our notation in sections 4 through 5.1. The entanglement entropy for this theory was studied in [14] and the approach contained therein will be followed for each of the theories we consider.
Second, in section 5.2, the entanglement entropy of a field theory defined on the noncommutative plane, where is studied. The noncommutativity is parametrised by the antisymmetric tensor Θ. This theory has been studied perturbatively in [15]. In this case especially, the mass should be JHEP09(2015)180 thought of as an IR regulator and taken to zero at the end of the calculation in order to see full effects of the UV/IR mixing present in this theory. We specialise to the case commonly referred to as the noncommutative plane where Θ µν = Θ δ 1µ δ 2ν − δ 2µ δ 1ν for simplicity. Finally, the entanglement entropy of the a simpler nonlocal theory with a fixed nonlocality scale along a particular axis, known as a dipole theory, is studied. For this product, a vector called a dipole must be assigned to every field. The noncommutative product is where L µ (f ) is the dipole assigned to the field f . These dipoles must obey various rules set out in [17]. In particular, the dipole of the -product of two field must be the sum of their dipoles. As well, the dipole of the complex conjugate of a field must be minus the dipole of the original field. This means that a real field must have a zero dipole and that a complex scalar must be used rather than the real scalar field theory discussed so far. The action for a complex scalar is where there two φ 4 terms which are inequivalent due to our noncommutative product [17]. 3 The result from the real scalar theory will be extended to this complex scalar theory in section 5.3, then the dipole theory will be studied in section 5.4.
Setting L µ (φ) = aδ µ1 , the terms in the action can be written in a more explicit form: where only the dependence on the first coordinate, labelled x, is highlighted as the other coordinates are unaffected by this deformation. In fact, renormalisability requires that we include in the action terms of the form for all n [17]. However, the contributions from these terms can be obtained by simply substituting a → na into the results for n = 1 and summing over n. The results in section 5.4 are such that this sum is guaranteed to converge as long as the λ n don't grow too quickly. As the inclusion of these terms would not affect our conclusions, we will not consider them separately.
JHEP09(2015)180 3 Entanglement entropy
The standard technique of the replica trick is used to compute the entanglement entropy [12]. This technique was used in a perturbative context in [14], whose approach is followed here. Starting with ρ A , the reduced density matrix of the ground state of the theory in question for a region A, the idea is to evaluate by calculating Trρ n A for arbitrary n and analytically continuing. In this paper we will concentrate on the simplest case where A is the half plane The main result that will be needed can be lifted directly from [12,14]: where Z n is the partition function of the theory on an n-sheeted surface with a cut along the region A that connects the sheets. However, some details of this n-sheeted space will be needed in the argument to follow, so the rest of this section will define it more carefully.
n-sheeted surfaces
The density matrix can be written as a path integral, (at finite inverse temperature of β) where Z 1 is a normalisation factor to ensure that Trρ = 1. Then the reduced density matrix for a region A is obtained by periodically identifying the field in the Euclidean time direction alongĀ, the complement of A, while leaving the boundary condition along A untouched. To look at the ground state, β must be sent to infinity. We do this while keeping the cut along A near the origin. Then, This identification of boundary conditions can be replaced by defining the field theory on an n-sheeted surface with a cut along A that takes you from one sheet to the next. Calling this n-sheeted surface R d \ A n , the projection onto the sheet π : the indicator function telling you if you are on the k th sheet χ k :
JHEP09(2015)180
where S E for Φ has the same form as that for each φ, since the action for each sheet is additive. With our simple region A, a half-plane, polar coordinates can be defined in the x-τ plane of R d \ A. Then the glueing required to create this n-sheeted surface is simply to identify θ = 2π on one sheet to θ = 0 on the next. Thus polar coordinates can be defined on R d \ A n where θ ∈ [0, 2πn), such that each interval of length 2π corresponds to a sheet, i.e. π(r, θ, y, z) = (r, θ mod 2π, y, z) and χ k (r, θ, y, z) = χ [2π(k−1),2πk) (θ). This gives us the result from [12,14] cited above, as Z n = DΦe −S E . This path integral over Φ is the path integral over the n-sheeted surface.
Free theory
The first step is to understand the free theories where λ = 0. The action for the free noncommutative and dipole theories is the same for that of the commutative theory, since the star product of 2 fields is the same as the regular product up to a total derivative [15].
For the noncommutative theory, so that the quadratic term in the action is the same as for the commutative case up to a total derivative. As there are no boundaries, the only place this total derivative could make for a finite contribution is at the conical singularity introduced at the origin when considering the n-sheeted path integral. Around the origin this term contributes (note that the singularity is at the origin of the x-τ plane and is not localised in the y-z directions), where A ⊥ is the area of the y-z plane. As long as ∂ n φ ∂ n+1 φ is regular at the origin this term will not contribute to the action. This means that φ needs to be C ∞ at the origin, which is just the regular boundary condition imposed in the commutative case. For the dipole theory, direct calculation of the -product of two fields can be seen to reduce to the commutative result in equation (2.5).
Thus the free theory is the same for all three theories.
Green's functions
Since the free theories are the same, they have the same Green's functions. This Green's function is straightforward in the polar coordinates introduced in section 3.1. Since the action for Φ living on the n-sheeted surface is the same as the action for φ living on any particular sheet, the local equation that the Green's function must obey will be the same.
JHEP09(2015)180
The only difference is that θ must be periodic with period 2πn rather than the usual period of 2π. The Green's function for the field living on the n-sheeted surface is, from [14], . ⊥ refers to the directions orthogonal to the cut introduced by the replica trick.
The Euler-Maclaurin formula, can be applied to this Green's function to replace the sum over k, It will be useful to define G n (x, x ; p) as It is also useful to define f n (x, x ) and f n (x, x ; p) as where G 1 is the Green's function on the 1-sheeted surface, that is just the regular Green's function.
Single sheeted limit
This Green's function for the n-sheeted space must reduce to the regular Green's function in the limit where n → 1. Starting with our expression for the Green's function in equation (4.3), defining ϕ = θ − θ for convenience and setting n = 1, dγ 2π e i(z sin γ−nγ) . Using this representation and the fact that Defining our position axes on the x 0 -x 1 plane such that x = (0, r) implies that x = (−r sin ϕ, r cos ϕ). Then defining q = (q cos γ, q sin γ), (4.14) Finally, defining p = ( q, p ⊥ ), which is the usual Euclidean Green's function.
Entanglement entropy in the free theory
The entanglement entropy when λ = 0 must be identical in the three theories as it was shown above that the quadratic terms in the action are the same. This can be seen more explicitly by using the approach from [14]. Starting from S A = −∂ n [ln Z n − n ln Z 1 ] n=1 , the part of the entanglement entropy which depends on the mass can be related to the Green's function by
JHEP09(2015)180
In the commutative case, Φ 2 (x) n = G n (x, x). In the non-commutative case, That the -product turns out to just translate the argument of the Green's function is an important theme of the calculation in this paper. The only difference for a complex scalar is that the mass term in the action is proportional to Φ † Φ instead of Φ Φ, however the expectation value of this leads to the same Green's function and the same result follows.
The dipole theory is identical except that translations by Θ times the momentum in the y-direction are replaced by translations by a.
Thus, still for the non-commutative case, recovering explicitly the result from the commutative case by shifting the integration variable. However, this shift of the integration variable on the n-sheeted surface bears further investigation. It is sketched in figure 1.
This shift is well defined except for the region which gets translated into or out of the origin. However, this region has measure zero and cannot affect the result of the integral. As long as only a countable number of such shifts are done, these points can be omitted from the integral without changing the result. Finally, the integral over the whole n-sheeted surface can be written as a sum over the sheets and the Jacobian of this shift on each sheet is 1, so the Jacobian of the whole shift does not introduce any new factors into the integral. Thus shifting the variable of integration on this n-sheeted surface is allowed with no Jacobian, just as for the plane.
Commutative theory
We will start by computing the first order correction to the entanglement entropy for the commutative φ 4 theory. This was done previously in [14], but will be repeated here with JHEP09(2015)180 more explicit regulators that will allow a direct comparison to the nonlocal cases. From [14], where n denotes integration over the n-sheeted surface and ln Z n,k is the k th order term in a λ expansion of ln Z n . Generally, adding subscript will denote the order of a term in a λ expansion, e.g. X = X 0 + X 1 + X 2 + . . . The entanglement entropy can be calculated using equations (3.1) and (3.2), ln Tr (ρ n A ) 1 = ln Z n,1 − n ln Z 1,1 Recalling from equation (4.9), The j > 1 terms don't contribute [12], so they will be dropped in what follows. This is the same on each sheet, so the integral over the n-sheeted surface is n times in integral on one sheet. Finally, f 1 (x, x ) = 0, so ∂ n f 2 n (x, x )| n=1 = 0 and (5.5)
JHEP09(2015)180
Schwinger parameters are introduced to allow the denominators to be combined, using This allows us to regulate the UV divergence in S 1 by introducing a factor of e − 1 αΛ 2 , as was done in previous perturbative studies of noncommutative theories [15]. This regulator is convenient in the noncommutative case and is used here so that the results can be compared. Using equation (25) from p. 146 in volume I of [19], the effect of this regulator is Thus it regulates the UV and leaves the IR unaffected. This can be seen simply from the fact that e − 1 αΛ 2 vanishes for α Λ −2 and goes to one for α Λ −2 . A mass m regulates the IR by contributing a factor of e −αm 2 , which has the opposite behaviour.
Introducing these Schwinger parameters and regulating, All the momenta integrals except q are Gaussian, the q integral can be evaluated. This along with the fact that ∂ ν I ν (z)| ν=0 = −K 0 (z), 5 gives
JHEP09(2015)180
after substituting r 2 → t and setting a = y = 1 2β , gives (5.14) Looking at the α integral first, by substituting α → 1 α in the first line and using equation (5.7) as well as in the second. This recovers the Λ 2 divergence seen previously in this case [14].
Using equation (29) as K 0 (z) → − ln z as z → 0. This reproduced the logarithmic divergence seen previously in this case [14] and makes explicit its form in our regularisation scheme. Combining, the first order in λ correction to the entanglement entropy in the commutative theory is This is proportional to the area of the boundary of A, that is A ⊥ , and the leading divergence is of order Λ 2 , so this result fits with the area law picture discussed in the introduction.
Noncommutative theory
Next we will compute the first order correction to the entanglement entropy for the noncommutative φ 4 theory. Similarly to the commutative theory, Using the associativity of the -product, this can be written as
JHEP09(2015)180
The usual Wick's Theorem can be applied to calculate the four-point function, The key point is that while the conical singularity breaks the translational invariance in the x 0 -x 1 plane, it is preserved in the x 2 -direction. Thus the star product reduces to a translation in the x 1 -direction by an amount determined by the momentum in the x 2 -direction. Defining G n (w, z) = dpy 2π G n (w, z; p y ) as in equation (4.6), = dp y 2π G n (w + 1 2 p y Θî, z; p y ), (5.22) this can be used to evaluate the 4-point function, Then, by shifting the spatial integral, = n d 4 x dk y dp y (2π) 2 G n (x, x; k y )G n x + Θk yî ; p y (5.24) + G n (x, x; k y )G n x + 1 2 Θ(k y + p y )î, x + 1 2 Θ(k y + p y )î; p y .
In [15] it is seen that the effects of the non-commutativity manifest themselves in the diagrams where lines cross each other. This is also present here, as figure 2 shows that it is only the second term that involves lines crossing. The other two terms are two selfcoincident Green's functions -the same result as was found in the commutative case in section 5.1 and [14]. The second term, which corresponds to the nonplanar diagram, is the only one which is different than what was found in the commutative case.
The entanglement entropy can be calculated using equation (3.1), where the fact that the spatial integral can be shifted, that the momenta can be renamed, that G 1 (x, x; p y ) = G 1 (x+a, x+a; p y ), that f n (x, x , p y ) = f n (x, x ; −p y ) as long as x 2 = x 2 JHEP09(2015)180 Figure 2. Vacuum bubble diagrams at leading order in a real scalar λφ 4 theory. The only vacuum bubble where lines cross is the second one. This is the only one which is affected by the noncommutativity, as discussed in [15]. and that f 1 = 0 so that the terms with f 2 n can be ignored have all been used. The j > 1 terms in f n have also been dropped again, which allows us here to write the integral over the n-sheeted surface as n times the integral over a sheet. In the commutative case, it was clear that these j > 1 terms do not contribute [12]. In appendix A it is argued that the leading divergence must be entirely contained in the j = 1 term even in this noncommutative theory.
New contribution from the nonplanar diagram
The first term in equation (5.25) is the contribution from the two planar diagrams. These give the same result as in the commutative case, namely λA ⊥ Λ 2 2 10 3 2 π 3 ln Λ 2 4m 2 from each diagram. However, the nonplanar diagram gives a new contribution to the entanglement entropy from the non-commutativity. The contribution from this nonplanar diagram will be denoted S nonplanar , where r 2 = ( r − Θk yî ) 2 = r 2 + (Θk y ) 2 − 2Θrk y cos φ and A ⊥ is the area of the x 2 -x 3 plane that bounds the region for which the entanglement entropy is being calculated. The next step is to introduce Schwinger parameters and to regulate this integral in the same manner as the integrals for other perturbative calculations in this noncommutative theory were regulated in [15], as discussed in section 5.1, The p y , p z and k except for k y integrals are all Gaussian (recall that r is a function of k y ),
JHEP09(2015)180
Taking a large Λ limit of this expression and expanding K 1 (x) ≈ 1 x for x → 0 allows us to extract an overall quadratic divergence. However, more progress can still be made by evaluating the ρ integral.
Using in order equation (23) from p. 131 of [19] and (15.9.E19) of [18], is the appropriate branch of the associated Legendre function with noninteger degree. Defining z = α cos 2 ϕ+sin 2 ϕ+g 2 (ϕ,φ) 2g(φ,ϕ) sin ϕ and recalling that g(φ, ϕ) = √ 1 + sin 2ϕ cos φ, where G(α) is dimensionless and finite for α ∈ (0, ∞). At this point, the asymptotic behaviour of G(α) can be analysed numerically, as no analytic formula for this integral was found in the tables consulted. However, while analysing this asymptotic behaviour, we found that G(α) = 16 √ α+1 gives an exact match up to high numerical accuracy across the many orders of magnitude that were checked. 6 Using this result for G(α), Note that this result is invariant under ΘΛ 2 ↔ Θm 2 , another sign of the UV/IR connection in non-commutative theories. This integral has two regulators, Λ and m. The only other dimensionful parameter is Θ, so the only dimensionless products of these regulators are m Λ and ΘmΛ. As is familiar from the UV/IR mixing in this theory, the limits Λ → ∞ and m → 0 do not commute. This can be resolved by taking m Λ → 0 while fixing ΘmΛ. Then taking the limit m → 0 or Λ → ∞ first corresponds to the limits ΘmΛ → 0 or ΘmΛ → ∞ respectively. 7 6 The only potential divergences in the integral for S nonplanar come from the regions of small and large α. If the reader is uncomfortable with this numeric argument, this functional form for G(α) could also be thought of more conservatively as a function with the right asymptotic behaviour to reproduce the correct divergences in this integral. 7 This discussion applies even if we want to think of m as a physical mass, as the ratio m Λ will still vanish if m is fixed while Λ → ∞. This case corresponds to ΘmΛ → ∞.
This result illustrates the UV/IR connection in non-commutative theories. If the IR regulator is removed first (ΘmΛ 1), S nonplanar ∼ A ⊥ Λ 2 -a quadratic UV divergence. However if the UV regulator is removed first (ΘmΛ 1), S nonplanar ∼ A ⊥ Θ 2 m 2 , allowing the same divergence to be interpreted as an IR divergence. In addition, whether Θ 2 m 2 Λ 2 4 is taken to be large or small there is a logarithmic divergence as is found in the commutative case. However, here there is the additional option of keeping both regulators, that is keeping 1 2 ΘmΛ finite, which eliminates the logarithmic divergence seen in the commutative case. 8 In particular, there is a natural choice of IR regulator, 9 From a mathematical point of view, this UV/IR connection can be seen to originate from the translation of the arguments of the Green's function. In the commutative theory, S nonplanar ∼ n dxG n (x, x)f n (x, x) where as in the noncommutative theory, the non-planar diagram made a contribution of the form S nonplanar ∼ n dxG n (x, x + Θp)f n (x, x + Θp). If an IR regulator is imposed, this momentum cannot vanish and regulates the integral. This can be seen more clearly in the dipole theory (analysed in section 5.4) where the fixed translation regulates the UV divergence of the integral.
It is important to note that contributions from the j > 1 terms in equation (4.5) were dropped at the start of this section and are not present in equation (5.40) or elsewhere in these results. However, as is discussed in appendix A, these do not affect the leading divergence in S nonplanar or the conclusion that there is no volume law.
In contrast to strong coupling results, which saw signs of a volume law for the entanglement entropy even with large regions, this perturbative calculation is only sees an area law. The leading divergence in S nonplanar is quadratic and proportional to the area of the boundary of the region, A ⊥ , in line with the area law discussed in the introduction.
Complex scalar
The difference when considering a complex scalar is the Wick contraction in equations (5.1) and (5.21) for the commutative and the noncommutative theory respectively. For the real JHEP09(2015)180 Figure 3. Vacuum bubble diagrams at leading order in the noncommutative complex scalar λφ 4 theory. The two on the left come from the λ 0 φ † φ φ † φ term in the action whereas the two on the right from the λ 1 φ † φ φ φ † term. scalar λ φ(w)φ(x)φ(y)φ(z) = λ (G n (w, x)G n (y, z) + G n (w, y)G n (x, z) + G n (w, z)G n (x, y)) , (5.42) whereas for the complex scalar this must be replaced with In the commutative theory, the fields in the 4-point function are all inserted at the same point, that is w = x = y = z. Taking into account the difference in the normalisation of the φ 4 term in the action, the only change is to replace an overall factor of 3λ 4! by 2(λ 0 +λ 1 )
4
. This has no effect on the intermediate steps of the calculation and can just be carried through straight to the final result: For the noncommutative theory, it is a simple matter of writing out the -products explicitly and following through similar transformations of the integration variables as in the previous section. This procedure gives 2λ 0 + λ 1 times the commutative result plus λ 1 times the result for the nonplanar diagram already encountered for the real scalar. This result can be obtained directly by looking at the 4 diagrams in figure 3 and realising that only the term proportional to λ 1 gives a nonplanar diagram.
Thus the result for the noncommutative theory with a complex scalar is (5.45)
Dipole theory
For the dipole theory, the explicit form of the interaction terms was written out in equation (2.5). Thus,
JHEP09(2015)180
Applying Wick's Theorem, using the facts that G 1 (x, x) = G 1 (x + a, x + a) and f n (x + a, x) = f n (x, x + a) (when ignoring the j > 1 terms) and shifting the integral, .
Again this is as expected from the diagrammatic approach. Only the single nonplanar diagram gives a new contribution and the 3 planar diagrams give contributions identical to those in the commutative theory.
Focusing on the contribution from the nonplanar diagram, the explicit forms of G 1 and f n give where now r 2 = ( r − aî) 2 = r 2 + a 2 − 2ra cos φ.
Introducing Schwinger parameters and regulating, In this case, all the momenta integrals except q are Gaussian, (5.51) The α integral can be factored out to give, using equation (5.7), This factor came from evaluating G 1 (0, aî) which goes as ∼ 1 a 2 as expected. The fixed nonlocality scale has regulated the UV divergence in this case. In the dipole theory the distance of the translation is fixed, as opposed to the non-commutative case where the translation is proportional to the momentum in the y-direction which can vanish in the IR.
JHEP09(2015)180
Using equation ( to leading order in the small m limit. Thus S nonplanar has only an IR divergence in the dipole theory. The leading divergence in the j = 1 term is S nonplanar = − λA ⊥ 3 · 2 6 π 3 a 2 − ln(a 2 m 2 ) , (5.56) however there will be contributions to this order from the j > 1 terms which were dropped. The the conclusion of this analysis is that the nonplanar diagram does not contribute to the leading divergence of entanglement entropy at this order as it is subleading to the contribution from the planar diagram. The nonlocality introduced in the dipole theory does not affect the area law, as the total entanglement entropy at this order in perturbation theory is dominated by the planar diagrams which matched the result from the commutative theory. Even the subleading terms we have analysed do not follow any sort of volume law as they are not proportional to the lengthscale of the nonlocality. The only effect of the nonlocality is to regulate the UV divergence otherwise present. Similar behaviour was observed in [15], where one of the ways that the nonlocality manifested itself was by softening divergences in nonplanar diagrams.
Final remarks
In this paper we computed the first perturbative correction to the entanglement entropy in two nonlocal theories, a φ 4 theory defined on the noncommutative plane and a dipole theory.
The contribution to the entanglement entropy in each of these theories at first order in coupling comes from vacuum bubble diagrams. The planar diagrams give the same contribution in all three theories. However, the nonplanar diagram is affected by the modified -product. Never the less, these diagrams do not modify the area law observed JHEP09(2015)180 in the commutative theory. Thus, at this order in perturbation theory and for the region considered at least, all these theories follow an area law with no sign of a volume law, as opposed to the strongly coupled case where the signature of the volume law could be seen even for large regions.
In the commutative theory it has been shown that the modification to the entanglement entropy at first order in perturbation theory can be absorbed into the renormalisation of the mass [14]. It would be interesting to see if a similar interpretation can be made in the case of the theories considered here.
Finally, a comment about the commutative limit. Since the quantities dealt with in the paper are not UV finite, this is not a straightforward issue. The general pattern is that the nonlocality has served as an additional regulator that softens certain divergences. Thus, if the nonlocality is removed, these divergences reappear and the commutative limit applied to the final results is not smooth. | 9,011.8 | 2015-09-01T00:00:00.000 | [
"Physics"
] |
Scandium doping brings speed improvement in Sb2Te alloy for phase change random access memory application
Phase change random access memory (PCRAM) has gained much attention as a candidate for nonvolatile memory application. To develop PCRAM materials with better properties, especially to draw closer to dynamic random access memory (DRAM), the key challenge is to research new high-speed phase change materials. Here, Scandium (Sc) has been found it is helpful to get high-speed and good stability after doping in Sb2Te alloy. Sc0.1Sb2Te based PCRAM cell can achieve reversible switching by applying even 6 ns voltage pulse experimentally. And, Sc doping not only promotes amorphous stability but also improves the endurance ability comparing with pure Sb2Te alloy. Moreover, according to DFT calculations, strong Sc-Te bonds lead to the rigidity of Sc centered octahedrons, which may act as crystallization precursors in recrystallization process to boost the set speed.
dopants match well with parent Sb 2 Te 3 structure and increase the stability of Sb 2 Te 3 in the phase change process 20 . Furthermore, a recent report from Science points out that Sc-doped Sb 2 Te 3 phase change material without phase separation has a very rapid SET speed reaching up to 700 picoseconds 21 Among the equilibrium phases of Sb-Te system, Sb 2 Te alloy has >50 °C higher crystallization temperature than Sb 2 Te 3 one. Hence, in this paper, Sc element was also been chosen as a dopant into Sb 2 Te alloy. We hope that through the adjustment of substrate material, the new one could have better thermal stability, and also useful for speed improvement in Sb 2 Te alloy.
Results
The sheet resistance as a function of temperature (R-T) for Sb 2 Te and Sc 0.1 Sb 2 Te films (~100 nm) was measured to clarify the influence of doping on the thermal stability as depicted in Fig. 1(a). The sharp drop of the resistance happens around the crystallization temperature (T c ), which significantly shifts to higher temperature after doping Sc. According to the derivative of logarithmic sheet resistance with respect to temperature (dlgR/dT), the T c of Sb 2 Te and Sc 0.1 Sb 2 Te films are estimated to be 156.1 °C and 174.9 °C, respectively, indicating better amorphous stability after Sc doping. Figure 1(b) shows the 10-year data retention characteristics for Sb 2 Te and Sc 0 . 1 Sb 2 Te films. Based on the Arrhenius equation: t = τ·exp(E a /k B T), where t is the 50% criterion failure time, τ is the proportional time constant, E a is the crystallization activation energy, k B is the Boltzmann constant, T is the absolute temperature. After Sc doping, the activation energy E a increases from 2.44 eV to 3.00 eV. By extrapolating the data retention time to 10 years, the data retention temperature for Sc 0.1 Sb 2 Te film is estimated to be 92.7 °C, demonstrating a better thermal stability than that of Sb 2 Te (63.8 °C) and conventional GST (85 °C) alloy. Apart from better thermal stability, four orders of magnitude resistance difference leave enough margins for identifying the high resistance and low resistance states.
From microstructural side, thermally-induced phase transition processes were investigated by in-situ transmission electron microscope (TEM) technique for Sb 2 Te and Sc 0.1 Sb 2 Te films (~15 nm) with a heating rate of 10 °C/min. Figure 2 shows TEM bright-field (BF) images and the corresponding selected area electron diffraction (SAED) patterns for Sb 2 Te (a-c) and Sc 0.1 Sb 2 Te (d-f) films at different temperature. Pure Sb 2 Te film starts to crystallize with an explosive crystal growth at 140 °C (Fig. 2b), and the grain size is in about several hundred nanometers scale. After the temperature increases to 200 °C (Fig. 2c), the grain size of Sb 2 Te film is almost the same comparing with Fig. 2b, because its crystallization process has already finished around 140 °C. As for Sc 0.1 Sb 2 Te film, it starts to crystallize at 160 °C with numerous nanocrystals (<~10 nm) as shown in Fig. 2d. Though these nanocrystals grow a little (<~15 nm) as temperature rises up to 200 °C (Fig. 2e), the grain size of Sc 0.1 Sb 2 Te is still much smaller than that of Sb 2 Te after crystallization. In addition, both of the SAED patterns in Fig. 2c,f can be indexed as hexagonal (h-) Sb 2 Te structure (JCPDS No. 80-1722). No extra diffraction rings appear in Fig. 2f, which demonstrates that Sc 0.1 Sb 2 Te film is a single h-phase without phase separation. The XRD result of crystallized Sc 0.1 Sb 2 Te film, as shown in Fig. S1, further confirms that Sc 0.1 Sb 2 Te has the same structure as Sb 2 Te. That is, Sc doping significantly affects the crystallization behavior of Sb 2 Te film without forming any new phase or new structure. Beyond that, a crystallized film with three times more of Sc doping level, as much as 11% (Sc 0.4 Sb 2 Te), was investigated to inspect the distribution of Sc atoms by using of STEM-EDS mapping in TEM as shown in Fig. S2. Even at a higher doping level, the crystalline structures of Sc 0.4 Sb 2 Te and Sc 0.1 Sb 2 Te remain the same from SAED pattern side. This EDS results give more direction on uniform distributed Sc, Sb and Te elements without obvious phase separation appearing in nanometer scale.
In order to understand the interplay between Sc atoms and Sb 2 Te lattice, XPS experiment was applied to investigate the bonding state of crystallized Sb 2 Te and Sc 0.1 Sb 2 Te films. Figure 3 shows the binding energy of Sb 3d and Te 3d core levels for Sb 2 Te and Sc 0.1 Sb 2 Te films. The C 1 s peak at 284.8 eV is used as a reference. After Sc doping, both peaks of Sb 3d shift to lower energies (~0.2 eV for Sb 3d 5/2 , ~0.25 eV for Sb 3d 3/2 ). Similar results are observed in the binding energies for Te 3d (~0.25 eV for both Te 3d 5/2 and Te 3d 3/2 ). Usually, binding energy will decrease when an atom bonds to another one with a lower electro-negativity. Since the electro-negativity of Sc (1.36) is smaller than that of Sb (2.05) and Te (2.12), Sc atoms is very likely to bond with Sb and Te atoms in Sc 0.1 Sb 2 Te film after crystallization, resulting in the decrease of binding energy for Sb and Te elements. Considering that the electro-negativity difference (ΔS) of Sc-Te and Sc-Sb is much bigger than Sb-Te, and a large ΔS between two atoms would increase nucleation probability 22,23 . More nuclei are likely to generate after Sc doping, and the intergrowth of nuclei produces more grain boundaries, which will suppress the subsequent crystal growth significantly. This may be contributing to explain the much smaller grain size distribution after Sc doping as shown in Fig. 2f.
Anyway, good device performances are the key to application. Figure 4a shows the resistance-voltage curves of Sc 0.1 Sb 2 Te alloy based PCRAM cell with different pulse widths (the falling edge of the voltage pulse is 3 ns). Both set and reset voltages slightly shift to a higher value when the pulse width decreases. But most of all, even 6 ns electrical voltage pulse can still induce reversible phase transformation in this PCRAM device. Comparing to conventional GST (crystallization speed of ~ 50 ns) 10 and Sb 2 Te (crystallization speed of ~ 20 ns) 13 , Sc 0.1 Sb 2 Te based PCRAM cell exhibits faster operation speed. Besides, endurance up to 3.3 × 10 5 cycles without failure (Fig. 4b) also demonstrates that Sc 0.1 Sb 2 Te alloy has great potential for PCRAM application 24 .
Discussion
To further verify the location of doped Sc atoms, Ab initial method was carried out to theoretically predict the most probable site by calculating the formation energy (E f ) in each site. Rather than simulating the exact experiment composition, we introduce a single Sc atom at various lattice sites in a Sb 2 Te supercell with lattice parameter of 12.816◊12.816◊ 17.633(Å 3 ) to evaluate the effects of doped Sc atom. In the Sb 2 Te supercell, there are seven possible dopant sites for Sc atoms, Sb 1 , Sb 2 , Sb 3 , Te 1 , Te 2 and In 1 , In 2 (shown in Fig. S3a). Sb and Te with subscript mean the substitution doping in which the Sc atom replaces Sb or Te, whereas In1 and In2 mean the Sc atom enters the interstitial site. The formation energy of each structure after relaxed was calculated and shown in Fig. S3b. The E f was obtained according to the following equation: Here -E un doped and -E Sc doped denote the total energies of the relaxed structure before and after Sc doping. E Sc denotes the chemical potential of doped Sc, E Sb Te / denotes the chemical potential of Sb or Te being replaced, while it is zero for interstitial doping. As shown in Fig. S3b, the E f for Sb 1 is −2.482/2.583 eV for Sb/Te rich, which is much lower than all of the other conditions. Thus, Sb 1 is the most energetically favorable position for the doped Sc atom. To identify the bonding information of Sc 0.1 Sb 2 Te when Sc substitutes Sb 1 , the charge density difference (CDD) of relaxed structure was illustrated in Fig. 5. Their corresponding 2D charge density plot was shown in Fig. S4, which shows that Sc and Sb are bound with Te through a bond point 25 . In order to show the chemical environment of Sc, we present the nine layers that exist in the Sb 2 Te h-structure along C axis. As shown in Fig. 5b,c, there is only tenuous charge present in three of the Sb-Te bonds which is in distinct contrast with the noticeable charge accumulation at the bond center between Sc and Te. The original Sb-centered octahedron shows three strong (3.045 Å) bonds and three weak bonds (3.159 Å). However, the Sc-centered octahedron shows six identical strong bonds (2.98 Å), the strong Sc-Te bonds lead to the rigidity of Sc-centered octahedrons. Even they may not necessarily be intact in the melt-quenched amorphous phase, yet the Sc-centered octahedrons can still be the subcritical embryos owe to their lowest formation energy. So the reconstruction of Sc-centered octahedrons is more advantageous than that of Sb-centered motifs in the recrystallization process. The existence of large amounts of precursors will refine the crystalline size, thus increase the grain boundaries which will accommodate more stress produced in phase change process of PCRAM device. Moreover, smaller grain size will increase the interface-area-to-volume ratios to facilitate the hetero-crystallization at the grain boundaries, further accelerating the crystallization speed. This may explain why the SET speed of Sc 0.1 Sb 2 Te based PCRAM device (6 ns) is faster than Sb 2 Te based one (20 ns). However, after 3.3 × 10 5 cycles, the non-uniform electronic fields in the active mushroom shape area might lead to a reallocation of elements, thus appear Sb 2 Te large grains that can result in device failure.
Comparing with Sc 0.2 Sb 2 Te 3 material, Sb 2 Te alloy in this study was choosing as the parent material instead of Sb 2 Te 3 alloy, considering the Sb 2 Te's thermal stability is better. Sc 0.2 Sb 2 Te 3 material can change from amorphous state to face centered cubic (f-) phase, and then to the stable h-phase with the increase of temperature. The 700 picoseconds set speed 21 only involves phase change process between amorphous and f-phase, which is a metastable state like f-GST phase. Avoiding f-to-h phase transition is very important in PCRAM device, with an emphasis on preventing grain growth 26 . Hence, in this paper, Sc 0.1 Sb 2 Te material can reach 6 nanoseconds set speed without intermediate phase, benefiting from its strong Sc-centered cluster. It has high-speed and good thermal stability together, which may point out some direction for the future design of PCRAM devices.
Conclusion
In this paper, Sc doped Sb 2 Te film was investigated to verify its application in PCRAM. After Sc doping, the thermal stability of Sb 2 Te alloy was improved, and the 10-year data retention time was increased. The crystalline Sc 0.1 Sb 2 Te film exhibits a single phase without phase separation. Sc 0.1 Sb 2 Te based PCRAM cell can still realize stable reversible switching behaviors even at 6 ns. Two orders of resistance difference between set and reset state makes it easy to distinguish "0" and "1". Furthermore, endurance up to 3.3 × 10 5 cycles makes Sc 0.1 Sb 2 Te a promising material for PCRAM application.
Methods
Sb 2 Te and Sc doped Sb 2 Te films was deposited on SiO 2 /Si (100) substrates and carbon coated TEM grid by co-sputtering Sc and Sb 2 Te targets using RF sputtering system at room temperature. The composition of the deposited films, BF images and SAED patterns were characterized by JEOL-2100F TEM with energy dispersive spectroscopy (EDS). The bonding situation of Sb 2 Te and Sc 0.1 Sb 2 Te alloy was evaluated by X-ray photoelectron spectroscopy (XPS) with Al Kα radiation. T-shaped PCRAM cells with a tungsten bottom electrode (190 nm in diameter) were fabricated using 130 nm CMOS technology. Afterwards, Sc 0.1 Sb 2 Te film (about 55 nm), TiN film (10 nm) and Al top electrode (300 nm) were sequentially deposited. Resistance-voltage curves and programming cycles were monitored with Keithley 2400 and Tektronix AWG 5002B. Calculations in this work was investigate by using density functional theory (DFT) 27 . The Vienna Ab-initio Simulations Package (VASP) 28 was used for calculations. The projector augmented wave (PAW) 29 pseudopotentials were used to describe electron-ion interactions. For the exchange-correlation energies between electrons, the Perdew-Burke-Ernzerhof (PBE) 30 function was employed. The energy cut offs were chosen to be 450 eV and 350 eV for relaxation and static calculation. A supercell containing 3◊3◊1 unit cells of Sb 2 Te was constructed for relaxation. The 5◊5◊3 K point mesh with Gamma centered was used. The relaxation was performed until the total energy converged to within 1 meV. | 3,371.2 | 2018-05-01T00:00:00.000 | [
"Materials Science"
] |
A Novel Performance Metric for Building an Optimized Classifier
Problem statement: Typically, the accuracy metric is often applied for optimizing the heuristic or stochastic classification models. However, the use of accuracy metric might lead the searching process to the sub-optimal solutions due to its less discriminating values and it is also not robust to the changes of class distribution. Approach: To solve these detrimental effects, we propose a novel performance metric which combines the beneficial properties of accuracy metric with the extended recall and precision metrics. We call this new performance metric as Optimized Accuracy with Recall-Precision (OARP). Results: In this study, we demonstrate that the OARP metric is theoretically better than the accuracy metric using four generated examples. We also demonstrate empirically that a naïve stochastic classification algorithm, which is Monte Carlo Sampling (MCS) algorithm trained with the OARP metric, is able to obtain better predictive results than the one trained with the conventional accuracy metric. Additionally, the t-test analysis also shows a clear advantage of the MCS model trained with the OARP metric over the accuracy metric alone for all binary data sets. Conclusion: The experiments have proved that the OARP metric leads stochastic classifiers such as the MCS towards a better training model, which in turn will improve the predictive results of any heuristic or stochastic classification models.
INTRODUCTION
To date, many efforts have been carried out to design more advanced algorithms to solve classification problems. At the same time, the development of appropriate performance metrics to evaluate the classification performance are at least as importance as algorithm. In fact, it is a key point to produce a successful classification model. In other words, the performance metric plays a significant role in guiding the design of better classifier.
From the previous studies, the performance metric is normally employed in two stages (i.e., the training stage and the testing stage). The use of performance metric during the training stage is to optimize the classifier (Ferri et al., 2002;Ranawana and Palade, 2006). In other words, in this particular stage, the performance metric is used to discriminate and to select the optimal solution which can produce a more accurate prediction of future performance. Meanwhile, in the testing stage, the performance metric is usually employed for comparing and evaluating the classification models (Bradley, 1997;Caruana and Niculescu-Mizil, 2004;Kononenko and Bratko, 1991;Provost and Domingos, 2003;Seliya et al., 2009).
In this study, we are interested about the use of performance metric in evaluating and building an optimized classifier for any heuristic and stochastic classification algorithms. In general, these algorithms use the training stage learns from the data and at the same time attempt to optimize the solution by discriminating the optimal solution from the large space of solutions. In order to find the optimal solution, the selection of suitable performance metric is essential. Traditionally, most of the heuristic and stochastic classification models employ the accuracy rate or the error rate (1-accuracy) to discriminate and to select the optimal solution. However, using the accuracy metric as a benchmark measurement has a number of limitations, which have been verified by many works (Ferri et al., 2002;Ranawana and Palade, 2006;Wilson, 2001). In those studies, they have demonstrated that the simplicity of this accuracy metric could lead to the suboptimal solutions especially when dealing with imbalanced class distribution. Furthermore, the accuracy metric also exhibits poor discriminating values to discriminate better solution in order to build an optimized classifier (Huang and Ling, 2005).
Instead of the accuracy metric, there are few other metrics which have been designed purposely to build an optimized classifier. A Mean Squared Error (MSE) is one of the popular error function metric that are used by many neural network classifiers such as backpropagation network (Al-Bayati et al., 2009;Pandya and Macy, 1996) and supervised Learning Vector Quantization (LVQ) (Kohonen, 2001) for evaluating neural network performance during the training period. In general, MSE measures the difference between the predicted solutions and desired solutions. By employing this metric, the smaller MSE value is required in order to obtain a better neural network classifier.
Meanwhile, Lingras and Butz (2007) proposed the used of extended precision and recall values to identify the boundary region for the Rough Support Vector Machines (RVSM). In this study, the notion of conventional precision and recall metrics are extended by defining separate values of precision and recall for each class. However, both of these performance metrics could not be employed by other heuristic and stochastic classification algorithms due to different learning paradigm or objective function being used.
On top of that, Ranawana and Palade (2006) introduced a new hybridized performance metric called the Optimized Precision (OP) for evaluating and discriminating the solutions. This performance metric is derived from a combination of three performance metrics, which are accuracy, sensitivity and specificity. In this study, they have demonstrated that the OP metric is able to select an optimized generated solution and is able to increase the classification performance of ensemble learners and Multi-Classifier Systems for solving Human DNA Sequences data set. Area under the ROC curve (AUC) is another popular performance metric used to construct optimized learning models (Ferri et al., 2002). In general, the AUC provides a single value for discriminating which solution is better on average. This performance metric is proven theoretically and empirically better than the accuracy metric in optimizing the classifier models (Huang and Ling, 2005).
Similar to the above-mentioned performance metrics, the main purpose of this study is trying to improve the problem of accuracy metric in discriminating an optimal solution in order to build an optimized classifier for heuristic and stochastic classification algorithms. This study introduces a new hybridized performance metric that is derived from the combination of accuracy metric with the extended precision and recall metrics. The new performance metric is known as an Optimized Accuracy with Recall-Precision (OARP) metric. We believe that the benefits of the accuracy and extended precision and recall can be best exploited to construct a new performance metric that is able to optimize the classifier for heuristic and stochastic classification algorithms. In this study, we limit our study scope by comparing the new performance metric against the conventional accuracy metric. Moreover, the two-class classification problem is used for comparing both metrics.
Further, we will show that our proposed performance metric is better than the conventional accuracy metric by constructions of examples from different types of class distribution in discriminating the optimal solution. Next, with more discriminating features and finer measure, we will show that any heuristic or stochastic classification algorithm would search better and later obtain better optimal solution. A series of experiment using nine real data sets will be used to demonstrate that the Monte Carlo Sampling (MCS) algorithm optimized by the OARP metric produce better predictive result as compared to the algorithm optimized by the accuracy metric alone.
Related performance metrics:
The performance evaluation for binary classification model is based on the count of correctly and incorrectly predicted instances. These counts can be tabulated in a specific table known as a confusion matrix. In the confusion matrix, the counts of predicted instances can be categorized into four categories. Table 1 shows the four categories of results of confusion matrix.
As indicated in Table 1, tp represents the positive patterns that are correctly classified as positive class. Meanwhile, fp represents the negative patterns that are misclassified as positive class. On the other hand, tn represents the negative patterns that are correctly predicted as negative class and fn represents the On top of the above-mentioned metrics, few advanced metrics are also proposed based on the confusion matrix as a reference. Below we discussed two advanced metrics which are related to our study. Ranawana and Palade (2006) proposed a new hybridized metric called the Optimized Precision (OP). This new metric is a combination of three performance metrics which are accuracy, sensitivity and specificity. In order to construct this hybridized metric, a new measurement called Relationship Index (RI) is introduced with the objective to minimize the value of |Sp-Sn| and at the same time to maximize the value of Sp+Sn. The RI is defined as in Equation 6. A high value of RI would entail a low |Sp-Sn| value and a high value of Sp+Sn:
Optimized Precision (OP):
In order to apply Equation 6 in the performance of optimization algorithms, Ranawana and Palade (2006) combine the beneficial properties of accuracy and RI as shown in Eq. 7 to reduce the detrimental effect of data split during training of the classifier. Through this combination, the value of OP remains relatively stable even when presented with large imbalanced class distribution: Sp Sn OP Acc RI Acc Sp Sn In the case of RI = 0 when Sp = Sn, an alternative definition of OP was proposed as given in Eq. 8:
Extended version of precision and recall:
Nonetheless, binary classifier only deals with 'yes' and 'no' answers for a single class. In other words, the classifier is trying to separate the instances into two different classes, which are either class 1 or class 2. Through this concept, Lingras and Butz (2007) propose an extended version of precision and recall by defining precision and recall for each class.
Let assume for two-class problem every class has their own precision and recall value C 1 = {p 1 , r 1 }, C 2 ={p 2 , r 2 }, a set of instances that belongs to each class C 1 = {R 1 }, C 2 ={R 2 }, as well as a set of predicted instances C 1 = {A 1 }, C 2 ={A 2 }. Having these properties, the extended precision and recall for two-class problem can be defined as in Eq. 9 and 10 respectively: where, 1≤i≤c and c is the maximum number of class. Lingras and Butz, (2007), they have theoretically proved that for two-class problem the precision of one class is correlated to the recall of other class for twoclass problem. This correlation can be defined as p1 is proportional to r2 (p1∝r2) and p2 is proportional to r1 (p2∝r1). Through this correlation, they demonstrated that these extended precision and recall values can be used to identify the boundary region (lower bound for both classes) for the Rough Support Vector Machines (RVSMs) instead of using conventional hyper plane.
The optimized accuracy with recall-precision: The aim of most classification model is to maximize the total number of correct predicted instances in every class. In certain situation, it is hard to produce a classifier which can obtain the maximal value for every class. For instance, when dealing with imbalanced class instances, it is often happen where the classification model is able to perform extremely well on a large class instances but unfortunately perform poorly on the small class instances. Clearly, this indicates that the main objective of any classification model should be maximizing all class instances in order to build an optimized classifier.
As mentioned earlier, the accuracy metric is often used to build and to evaluate an optimized classifier. However, the use of accuracy value could lead the searching and discriminating processes to the suboptimal solutions due to its poor discriminating feature. Moreover, the metric is also not robust when dealing with imbalanced class instances. This observation will be experimentally demonstrated in the next sub-section.
In contrast, precision and recall are two performance metrics that are used as alternative metrics to measure the binary classifier performance from two different aspects. In any binary classification problem, it is possible that for the classifier to produce higher training accuracy with higher precision value but lower recall value or with lower precision value but higher recall value. As a result, building a classifier that maximizes both precision and recall values is the key challenge for many binary classifiers. However, it is difficult to apply both of these metrics separately. By applying these metrics separately, it will cause the selection and discrimination processes become difficult due to multiple comparisons.
We believe that the beneficial properties of accuracy, precision and recall metrics can be exploited to construct a new performance metric that is more discriminating, stable and robust to the changes of class distribution. In order to transform these metrics into a singular form of metric, we will adopt two important formulas from (Ranawana and Palade, 2006), which are the Relationship Index (RI) and OP. This is a two-step effort, whereby first we have to find a suitable way to employ the RI formula and next to identify the best approach to adopt the OP formula in order to construct the new performance metric.
From our point of view, the conventional precision and recall metrics are not suitable for the integration process. This is because both metrics only measure one class of instances (positive class). This is somewhat against the earlier objective which attempts to maximize every class instances in order to build an optimized classifier. To resolve this limitation, the extended precision and recall metrics proposed by (Lingras and Butz, 2007) were suggested for the integration. The main justification is that every class instance should be able to be measured individually using both metrics as defined in Equation 9 and Equation 10.
As proved by (Lingras and Butz, 2007), for twoclass problem, the extended precision value in a particular class is proportional to the extended recall values of the other class and vice versa. From this correlation, the RI formula can be implemented. To employ the RI formula, the precision and recall from different classes were paired together (p 1 , r 2 ), (p 2 , r 1 ) based on the correlation given in (Lingras and Butz, 2007). At this point, the aim is to minimize the value of |p 1 -r 2 | and |p 2 -r 1 | and maximize the value of p 1 +r 2 and p 2 +r 1 . Hence, we define the RI for both correlations as stated in Eq. 11 and 12: However, these individual RI values are still pointless and could not be applied directly to calculate the value of new performance metric. Thus, to resolve this problem, we compute the average of total RI (AVRI) as shown in Eq. 13 to formulate the new performance metric: where, c indicates the maximum number of class.
However, the use of accuracy value alone could lead the searching process to the sub-optimal solutions mainly due to its less discriminative power and inability to deal with imbalanced class distribution. Such drawbacks motivate us to combine the beneficial properties of AVRI with the accuracy metric. With this combination, we expect the new performance metric is able to produce better value (more discriminating) than the accuracy metric and at the same time remain relatively stable when dealing with imbalanced class distribution.
The new performance metric is called the Optimized Accuracy with Recall-Precision (OARP) metric. The computation of this OARP metric is defined in Eq. 14: However, during the computation of this new metric, we noticed that the value of OARP may deviate too far from the accuracy value especially when the value of AVRI is larger than accuracy value. Therefore, we proposed to resize the AVRI value into a small value before computing the OARP metric. To resize the AVRI value, we employed the decimal scaling method to normalize the AVRI value as shown in Eq. 15: where, x is the smallest integer such that max (|AVRI new_val |) < 1.
In this study, we set the x=1 for the entire experiments. By resizing the AVRI value, we found that the OARP value is comparatively close to the accuracy value as shown in the next sub-section.
At the end, the objective of OARP metric is to optimize the classifier performance. A high OARP value entails a low value of AVRI which indicates a better generated solution has been produced. We also noticed that via this new performance metric, the OARP value is always less than the accuracy value (OARP < Acc). The OARP value will only equal to the accuracy value (OARP = Acc) when the AVRI value is equivalent to 0 (AVRI = 0), which indicates a perfect training classification result (100%).
OARP vs. accuracy: Analysis on discriminating an optimized solution:
In this study, we also attempt to demonstrate that the new performance metric is better than the conventional accuracy metric through three criteria. The first criterion is that the metric has to be more discriminative. The second criterion is that the metric favors the minority class instances when majority class instances always dominate the selection process. The third criterion is that the metric is robust to the changes of class distribution. To prove these criteria, four different examples have been used to demonstrate the capability of this new performance metric in selecting and discriminating the optimized solution based on different types of class distributions. However, in this study, we restricted our attention to the two-class classification problem suitable with the proposed metric. We also restricted our discussion to the solutions that are indistinguishable according to accuracy value (Example 1-3). On top of that, we also included one special example that shows the drawback of accuracy in discriminating the solution that has poor results on the minority class of instances but produce higher accuracy rate with the other solution that has slightly lower accuracy value but able to predict correctly all minority class of instances (Example 4).
Example 1: Given balanced data set containing 50 positive and 50 negative instances (domain Ψ) and two performance metrics, Acc and OARP are used to discriminate two similar solutions a and b, Acc={(a,b)|a,b Ψ} and OARP={(a,b)|a,b Ψ}. Assume that a and b obtained the same total correct predicted instances (TC) as given in Table 2a.
From this example, we can intuitively say that b is better than a. This is proved by evaluating the misclassification instances for both classes, the fp and fn for b, which are comparatively balanced as compared to a. For this case, the OARP metric showed a decision value that similar to intuitive decision, while the accuracy metric unable to decide which solution is better due to poor discriminative value.
Example 2: Given an imbalanced data set containing 70 positive and 30 negative instances (domain Ψ) and two performance metrics, Acc and OARP are used to discriminate two similar solutions a and b, Acc={(a,b)|a,b Ψ} and OARP={(a,b)|a,b Ψ}. Assume that a and b obtained the same total correct predicted instances (TC) as given in Table 2b. Similar to Example 1, intuitively b is better than a in terms of the fp and fn values. In this example, the OARP metric demonstrated better value and produced decision similar to intuitive decision. Meanwhile, the accuracy metric could not tell the difference between a and b.
Example 3: Given an extremely imbalanced data set containing 95 positive and 5 negative instances (domain Ψ) and two performance metrics, Acc and OARP are used to discriminate two similar solutions a and b, Acc={(a,b)|a,b Ψ} and OARP={(a,b)|a,b Ψ}. Assume that a and b obtained the same total correct predicted instances (TC) as given in Table 2c.
Similar to the two examples earlier, intuitively b is better than a in terms of fp and fn. As indicated in the table, the OARP metric once again able to produced decision similar to intuitive decision. However, the value of accuracy metric is unvarying and could not distinguish which solution is better.
Example 4: Given two special cases of solutions a and b and added into an extremely imbalanced data set containing 95 positive and 5 negative instances(domain Ψ) and discriminated by two performance metrics, Acc and OARP, Acc={(a,b)|a,b Ψ} and OARP={(a,b)|a,b Ψ}. Assume that a and b obtained the same total correct predicted instances (TC) as given in Table 2d.
In this special case, two contradictory results are obtained. The accuracy metric distinguished that b is better than a, but the OARP metric resulted otherwise. Intuitively, we can conclude that a is better than b. This is because, a able to predict correctly all the minority class instances as compared to b. Clearly, b is poor since no single instance from minority class instances is correctly predicted by b. Hence, we can conclude that the result obtained by OARP metric is similar to intuitive decision and clearly better than the accuracy metric.
From the four examples given, three conclusions can be drawn from the results. First, the value of the OARP metric is more discriminating than the value of accuracy metric because the OARP metric is able to tell the difference between both solutions through the values obtained, while the accuracy metric could not.
Second, these examples showed that the accuracy metric is not robust to the changes of class distribution because the size of instances changes the value of accuracy metric is no longer able to perform optimally (Example 2-4). This indicates that the accuracy metric is not a good evaluator and optimizer to be used for discriminating the optimal solution. In contrast, the OARP metric is sensitive to the changes of class distribution. Although the OARP metric is sensitive, the value produced by the OARP metric is robust and able to perform optimally by producing a clear optimal solution.
Third, when dealing with the imbalanced or extremely imbalanced class distribution, the OARP metric favored to the minority class distribution instead of majority class distribution as shown in Example 4. This criterion is really important to prove that the chosen generated solution is capable to classify minority class instances correctly. In contrast, the accuracy metric is neutral to the changes due to poor informative feature about the proportion of instances in both classes. Neutral is used here to indicate that the accuracy metric only cares with the total of correct predicted instances. The dangerous of this situation is (Example 4) it could lead the selection process of any classifier to the sub-optimal solutions.
Experimental setup:
We have theoretically showed that the new performance metric, OARP was better than the accuracy metric in selecting and discriminating better solutions using four examples. Next, we are going to demonstrate the generalization capability of the OARP metric against the accuracy metric using real world application data sets.
For the purpose of comparison and evaluation on the generalization capability of OARP metric against the accuracy metric, nine binary data sets from UCI Machine Learning Repository were selected. All of these selected data sets are imbalanced class distribution. The brief descriptions about the selected data sets are summarized in Table 3.
In pre-processing data, all data sets have been normalized within the range of [0, 1] using min-max normalization. Normalized data is essential to speed up the matching process for each attribute and prevent any attribute variables from dominating the analysis (Al-Shalabi et al., 2006). All missing attribute values in several data sets were simply replaced with median value for numeric value and mode value for symbolic value of that particular attribute across all instances.
In this study, all data sets were divided into ten approximately equal subsets using 10-fold cross validation method similar to (Garcia-Pedrajas et al., 2010) where k-1 is used for training and the remaining one for testing. These training and testing folders have been run for 10 times.
Experimental evaluation: In this study, all data sets were trained using a naïve stochastic classification algorithm which is Monte Carlo Sampling algorithm (Skalak, 1994). This algorithm combines simple stochastic method (random search) and instance selection strategy. There are two main reasons this algorithm is selected. Firstly, this algorithm simply applies accuracy metric to discriminate the optimal solution during the training phase. Secondly, this algorithm is aligned with the purpose of this study which is to optimize the heuristic or stochastic classification algorithm.
To compute the similarity distance between each instance and prototype solution, the Euclidean distance measurement is employed. The MCS algorithm was reimplemented using MATLAB Script version 2009b. To ensure fair experiment, the MCS algorithm was trained simultaneously using the accuracy and OARP metrics for selecting and discriminating the optimized generated solution. For simplicity, we refer these two MCS models as MCS Acc and MCS OARP respectively. All parameters used for this experiment are similar to (Skalak, 1994) except in the number of generated solution, n. In this experiment, we employed n = 500 similar to (Bezdek and Kuncheva, 2001).
From this experiment, the expectation is to see that the MCS OARP is able to predict better than the model optimized by the MCS Acc . For evaluation purposes, the average of testing accuracy (Test Acc ) will be used for further analysis and comparison. Table 4 shows the results from the experiment. From Table 4, we can see that the average testing accuracy obtained by MCS OARP is better than the MCS Acc model. The average testing accuracy obtained by MCS OARP model is 0.8439 while 0.8119 for the MCS Acc model for all nine binary data sets. Overall, the MCS OARP model shows an outstanding performance against the MCS Acc model, whereby the MCS OARP model has improved the classification performance in all binary data sets.
RESULTS
To verify this outstanding performance, we performed a paired t-test with 95% confidence level on each binary data set by using ten trial records for each data set. The summary result of this t-test analysis is listed in Table 5. As indicated in Table 5, the MCS OARP model obtained six significant wins, while the other three data sets show no significant differences between the MCS OARP and MCS Acc . On top of that, we also performed a t-test analysis on the average testing accuracy obtained by both models over nine binary data sets (Table 4). From this analysis, the MCS OARP metric shows a significant difference with the MCS Acc model at confidence level of 95% and even 99% where pvalue is 0.0021.
DISCUSSION
The experimental results have shown that the MCS OARP model has outstandingly outperformed if compared to the MCS Acc model for all binary data sets in terms of predictive accuracy. Empirically, we have proved that the OARP metric is more discriminating than the accuracy metric in selecting and discriminating the optimized solution for stochastic classification algorithm, which in turn produced a higher accuracy of predictive results. This somewhat against a common intuition in machine learning that a classification model should be optimized by a performance metric that it will be measured on. This finding is also consistent with reports from studies in (Huang and Ling, 2005;Rosset, 2004).
Furthermore, the OARP metric demonstrated is also robust to the changes of class distribution. This is proved by empirical results where the OARP metric was able to optimize and improve their predicted results over all nine imbalanced data sets.
We believe that the OARP metric works effectively with the stochastic classification model in leading towards a better training model. In this particular paper, the MCS model optimized by the OARP metric was able to select and discriminate better solution as compared to its performance with the conventional accuracy metric alone. This indicates that the OARP metric is more likely to choose an optimal solution in order to build an optimized classifier for stochastic classification algorithm.
CONCLUSION
In this study, we proposed a new performance metric called the Optimized Accuracy with Recall-Precision (OARP) based on three existing metrics, which are the accuracy and the extended recall and precision metrics. Theoretically, we proved that our newly constructed performance metric satisfied the above criteria using four analysis examples with different types of class distribution. To support our theoretical evidence, we compared experimentally the new metric against the accuracy metric using nine real binary data sets. Interestingly, the MCS model optimized by the OARP metric has outperformed and statistically significant than the MCS model optimized by the accuracy metric. The new OARP metric is proven to be more discriminative, robust to the changes of class distribution and also favored the small class distribution.
For the future study, we are planning to extend this new performance metric, OARP for solving multi-class problems. Moreover, we are also interested to conduct an extensive comparison between the OARP metric against different performance metrics in optimizing the heuristic or stochastic classification models. | 6,570.2 | 2011-04-01T00:00:00.000 | [
"Computer Science"
] |
THEORETICAL STUDIES OF ELECTRONIC AND OPTICAL PROPERTIES FOR SOME NEW AZO DISPERSE DYES FOR DYE-SENSITIZED SOLAR CELLS BY USING TD AND TD DFT METHOD.
Isaac Onoka, Numbury Surendra Babu and John J. Makangara. Department of Chemistry, College of Natural and Mathematical Sciences,The University of Dodoma, post box: 338, Dodoma, Tanzania. ...................................................................................................................... Manuscript Info Abstract ......................... ........................................................................ Manuscript History
…………………………………………………………………………………………………….... Introduction:-
Research in renewable energy has become one of the most imperative issues in global energy strategy due to increased energy consumption and limited fossil resources. The incident solar energy on earth per hour exceeds the current consumption of the energy of the world per year. The necessity of cultivating renewable energy sources is growing day by day [1]. Therefore, efficient solar energy conversion provides a promising technology for balancing the increasing energy demand due to fast industrial development [2]. New photovoltaic (PV) energy technologies can contribute to environmentally friendly, renewable energy production, and the reduction of the carbon dioxide emission associated with fossil fuels and biomass.
One new PV technology, organic solar cell technology, is based on conjugated polymers and molecules. Organic solar cells are a type of polymer solar cells which use conductive organic polymers or small organic molecules for light absorption and charge transport to produce electricity from sunlight by the photovoltaic effect. Organic solar cells have attracted considerable attention in the past few years owing to their potential of providing environmentally safe, flexible, lightweight, inexpensive and efficient solar cells. Among all the renewable energy technologies, the nanocrystalline dye-sensitized solar cell (DSSC) system, a kind of photovoltaic device that ISSN: 2320-5407 Int. J. Adv. Res. 6(5), 83-91 84 presented by O'Regan and Gratzel in 1991, has attracted a lot of attention because of the potential application for low-cost solar electricity [3][4][5][6].
DSSCs are solar cells, belonging to third generation of solar cells; they are based on Nature's principles of photosynthesis. DSSCs are composed of a porous layer of titanium dioxide nanoparticles, covered with a molecular dye that absorbs sun light very similar to the chlorophyll in green leaves. The titanium dioxide is immersed under an electrolyte solution, above which is a platinum-based catalyst which acts as a counter electrode. This chemical way of assembling the cell architecture allows facile and cost effective processing which makes these cells front runners in the view of the basic design novelty and potential for low cost manufacturing.
The efficiency of DSSC is the bottleneck of the design and test of the new materials (e.g. dye sensitizer) for DSSCs, which have been dominated by the often costly and time-consuming synthesis procedures [7]. As in the case of new dye sensitizer materials development, it is difficult for synthetic chemists to work out high-performance dyes with the desirable properties prior to the experiments on the assembled cell, without any support on the information of the new dyes [8]. In some cases, disappointing results from the most late-stage of the dye synthesis laboratories indicate an urgent need to understand the physical origin of dyes at molecular level, prior to experiments taking place. To overcome this bottleneck in the development of new DCCSs with better efficiency, the state-of-art computational methods need to be utilized. Today, accurate first-principle quantum chemical calculations are made available on supercomputing facilities accessible to more research groups. Such calculations are a reliable tool to design, study, and screen new materials prior to synthesis. Computer-aided rational design of new dye sensitizers based on the systematic chemical modifications of the dye structure has recently drawn the attention of several groups, including ours [9][10][11][12][13][14][15][16][17].
Further developments in dye design will play a crucial part in the ongoing optimization of DSSC [36], and it depends on the quantitative knowledge of dye sensitizer. So the theoretical investigations of the physical properties of dye sensitizers are very important in order to disclose the relationship among the performance, structures and the properties, it is also helpful to design and synthesis novel dye sensitizers with higher performance.
Azo compounds are a very important class of chemical compounds receiving attention in scientific research. They are highly colored and have been used as dyes and pigments for a long time [18][19][20]. Furthermore, they have been studied widely because of their excellent thermal and optical properties in applications such as optical recording medium [21][22][23][24], toner [25], ink-jet printing [26], and oil-soluble lightfast dyes [27].A survey of the literature reveals that the intermediate for this study, 2-amino-1, 3, 4-thiadiazole-2-thiol has recently been used to synthesize two organic dyes and their metal complexes, using carbocyclic coupling components Computational Methods:-The computations of the ground state geometries, electronic structures, and electronic absorption spectrum for studied dye sensitizers were done using DFT with Gaussian 09 package [28] with Becke's three parameter gradientcorrected exchange potential and the Lee-Yang-Parr gradient-corrected correlation potential (B3LYP) [29][30][31], and all calculations were performed without any symmetry constraints by using polarized split-valence 6-31G(d) basis set. The electronic absorption spectrum requires calculation of the allowed excitations and oscillator strengths.The HOMO, LUMO and gap (HOMO-LUMO) energies are also deduced for the stable structures. Therefore, the ground state energies and oscillator strengths were investigated using TD-DFT calculations on the entirely DFT optimized geometries. The vertical excitation energy and electronic absorption spectra were simulated using TD-DFT with B3LYP/6-31G (d) level ingas phase. The power conversion efficiency of each molecule was determined by the position of the band gap depending on the difference [LUMO (molecule) -LUMO (acceptor)]. In this research paper to attempt has been made to characterize new azo dyes.
In the present research work, to make an effort to determine and evaluate the geometrical parameters, the absorption peaks (λ max ) in the UV spectra of thestudied dyes. In addition to analyze electro-optical properties, electron 85 injection, electronic copling constants, light harvesting efficiencies, open circuit voltages (V oc ) and Quantum chemical parameters.
Results And Discussion:-Ground state geometry:-
The optimized structures of all studied dyes are shown in figure 1. All the molecular geometries have been calculated by DFT with the hybrid B3LYP functional theory combined with 6-31G (d) basis sets using Gaussian 09 program. The selected bond lengths, bond angles and dihedral angles are presented in Table 1. From the results, we find that the addition of donor groups induce a slight change in the dihedral torsion angles and bond lengths. A slight decrease in the N=N double bond lengths (D3, D4 and D5) with comparison of D1 and slight increase in D6 as the following order: D6 > D2 > D1 > D5 > D3 > D4), because due to the different electron donor groups added. On the other hand, the twisting of the chain backbone is investigated by the dihedral angle's variety as indicated in table 1; we found that the dihedral angles are similar for all compounds (~ 180° or 0 0 ) due to aromatic rings are attached to the thiadiazole cycles, which give an almost flat structure for these studied compounds. Electronic properties:-Electronic properties of the HOMO and LUMO energy levels of the dyes are crucial in studying organic solar cells. The HOMO and LUMO energy levels of the donor and of the acceptor dyes for photovoltaic devices are key important factors to determine whether the effective charge transfer will happen between donor and acceptor.The HOMO and LUMO energies of the studied dyes are computed in Table 2 and Figure 2 shows the frontier molecular orbitals for all the six dyes. From the results (Table.2) the highest occupied and lowest unoccupied molecular orbital's get approximate HOMO /LUMO energies of -5.8766/-2.5919 eV for D1, -5.8053/-2.5045eV for D2, -5.9933/-2.5840 eV for D3, -6.1169/-2.9465 eV for D4, -5.8529/ -2.7835 eV for D5 and -5.1266/ -2.6248 eV for D6, corresponding to energy gaps of 3.2847 eV for D1, 3.3007 eV for D2, 3.4093 eV for D3, 3.1704 eV for D4, 3.0694 eV for D5 and 2.5078 eV for D6. The lower energy gap (E g ) of D6, D5 and D4 compared to that of D1 shows a significant effect of intra molecular charge transfer, which would make the absorption spectra to red shifted. This is due to the effect of the electron-donor unit which is strong of D6, D5, and D4 than that of other dyes and these molecules expected to have the most outstanding photophysical properties especially D6.The injection of electron from dye to semiconductor is depends on the HOMO and the LUMO energy levels of the dye (donor) and semiconductor (acceptor). For effective injection of the electron from the excited dye into the acceptor (metal oxide semiconductor e.g. TiO 2 ,) the LUMO level of the dye needs to be higher than the conduction band edge of the acceptor. The band edge of TiO 2 is approximately at -4.2 eV (relative to vacuum) and the LUMO energy level of D1, D2, D3, D4, D5, and D6 were -2.5919 eV, -2.5045eV, -2.5840 eV, -2.9465 eV respectively, indicating that the LUMO levels of all four dyes are higher than the conduction band edge of TiO 2 .
86 Table 1:-Optimized selected bond length in angstroms (A 0 ), bond angles and dihedral angles in degree ( 0 ) of the studied dyes obtainedby DFT/B3LYP with 6-31G (d) basis set in gas phase. The HOMO level of the dye needs to be sufficiently lower than the redox couple to ensure the efficient regeneration of the dye. The most widely used redox couple in the electrolyte of DSSC is the I -/I -3 pairs where the estimated energy level is at -4.8 eV (relative to vacuum). The HOMO of -5.8766 eV for D1, -5.8053 eV for D2, -5.9933 eV for D3, -6.1169 eV for D4, -5.8529 eV for D5 and -5.1266 eV for D6, which are more negative than the energy level of the redox couple.
Quantum chemical parameters:-
Quantum chemical parameters such as hardness (η), chemical potential (µ), electrophilicity index ( ) and electron negativity ( ) have been calculated from HOMO and LUMO energies [33][34][35][36] based on Koopman's theory from density functional theory calculation and the values are listed in Table 2 for studied dyes.
From the table 2, we note that the D6 has the highest value of the chemical potential (μ = −3.8 eV ) compared to other dyes (D1, D2, D3, D4, and D5), this is a tendency to view the electrons to escape from D6 easily. Therefore D6 behaves as good donor of electrons comparison of others dyes. For the electronegativity, the D6 has a low value of electronegativity than other compounds (D1, D2, D3, D4, and D5) ( Table 2), thus the D6 is the dye that is able to donate to him the electrons from others compounds (semiconductor). In another hand, the D6 dye has a low value of chemical hardness (η) in comparison with other dyes; this indicates that the D6 is very easy to liberate the electrons. The order of liberate of the electrons are D6> D5> D4> D1> D2 >D3.
Electrophilicity index (ω) is a measure of energy lowering due to maximal electron flow between donor and acceptor [37]. This index measures the propensity of chemical species to accept electrons. A good, more reactive, nucleophile is characterized by lower value of ω, and conversely a good electrophile is characterized by a high value of ω. Therefore the D6 dye is a good electron donating molecule. Generally, the molecules having a large dipole moment, possesses a strong asymmetry in the distribution of electronic charge, therefore can be more reactive and be sensitive to change its electronic structure and its electronic properties under an external electric field. Through the Table 2, we can observe that the dipole moment (ρ) of compounds D6 and D4 are greater than other dyes.
Photovoltaic properties:-
Generally for enhance the light harvesting efficiency of Dye sensitized solar cells; the choice of the appropriate donor and acceptor spacer are essential, so therefore, we calculated the frontier orbital energy gaps between HOMO and LUMO of six dyes (D1, D2, D3, D4, D5 and D6).
Theoretical background:-
The power conversion efficiency (ɳ) was calculated according to the Eq. 5: where P inc is the incident power density, J sc is the short-circuit current, V oc is the open-circuit voltage, and FF denotes the fill factor. To analyze the relationship between V OC and E LUMO of the dyes based on electron injection (in DSSCs) from LUMO to the conduction band of semiconductor TiO 2 (E CB ), the energy relationship can be expressed [38]: The obtained values Voc of the studied dyes calculated according to the Eq. 6 range from 1.0535 eV to 1.4955 eV of TiO 2 (Tab. 3) these values are sufficient for a possible efficient electron injection. The J SC in DSSCs is determined by the following equation [39] () where LHE(λ) is the light harvesting efficiency at a given wavelength, Φ inject evinces the electron injection efficiency, and η collect denotes the charge collection efficiency. In the systems which are only different in sensitizers, η collect can be reasonably assumed to be constant. LHE (λ) can be calculated from the following equation where f represents the oscillator strength of adsorbed dye molecules. Φ inject is related to the driving force ΔG inject of electrons injecting from the excited states of dye molecules to the semiconductor substrate. It can be estimated as [40] 22 * 00 From the equations 5−9, we could roughly predict the efficiency of novel dyes without intensive calculations.
Where * dye OX E is the oxidation potential of the excited dye, dye OX E is the redox potential of the ground state of the dye, 00 dye E is the vertical transition energy, and 2 TiO CB E is the conduction band edge of the TiO 2 semiconductor. So J SC can be well estimated through f and ΔG inject . Two models can be used for the evaluation of * dye OX E [41]. The first implies that the electron injection occurs from the un relaxed excited state. For this reaction path, the excited state oxidation potential can be extracted from the redox potential of the ground state, E is expressed as [42]:
Electron injection:-
The description of the electron transfer from a dye to a semiconductor, the rate of the charge transfer process can be derived from the general classical Marcus theory, [43][44][45].
In eq. (12), k inject is the rate constant (in s-1) of the electron injection from dye to TiO2, kB is the Boltzmann thermal energy, h the Planck constant, G inject is the free energy of injection, RP V is the coupling constant between the reagent and the product potential curves. Eq (12) Table 3. The short-circuit current (J sc ) depends on two main influencing factors; light harvesting ability (LHE) and the electronic injection free energy (ΔG inject ) (Eq. 7).The LHE is considered as a very important factor for the organic dyes in which we could appreciate the role of dyes in the DSSC, i.e. absorbing photons and injecting photo-excited electrons to the conduction band of the semi-conductor (TiO 2 ). In the order to know that to give an intuitional impression to how the influencing of donor spacer of the LHE, we simulated the UV/Vis absorption spectra of the six dyes.We remark that as the different dyes the oscillator strengths were changed. As shown in Table 3, the LHE of the dyes fall within the range: 0.2979-0.9303. The LHE values for the dyes are in different ranges i.e that all the dyes will give different photocurrent.
The influencing factor as mentioned above, for enhancing the value of J sc , which is the electronic injection free energy ΔG inject . From the table 3, for all studied dyes, ΔG inject is negative; this reveal that the electron injection process is spontaneous, and showing that dye's excited state would lies above the conduction band edge of TiO 2 resulting favorable condition for electron injection. The calculated ΔG inject for these six dyes is decreased in the following order of D6>D2>D1>D3>D5>D4. Among these six dyes, we observe that the dye D6 has the larger ΔG inject , this maybe is due to the influencing of the extending the conjugate bonds. As a results, and based just from LHE and ΔG inject related to J sc , we could conclude that the cell containing the dye D1, D2, D6 and D6 should have the highest J sc due to its relative large LHE and injection driving force compared to the dyes D3, D4 and D5. Absorption properties:-The absorption spectra of studied dyes have been computed at TD-DFT/BLYP with 6-31G (d) level of theory in gas phase. The excitation energies, absorption wavelengths, oscillator strengths and transition contribution are listed in Table 4 and Fig. 3. Electronic transitions up to 6 states were studied for all new designed sensitizers. According to the transition character, most of the dyes show the HOMO→LUMO transition as the first singlet excitation. The 90 major contribution of the transition characters in in gas phase showing in table 4. The absorption wavelengths as shown in Table 4 and Figure 3, the absorption wavelengths of D1, D2, D3, D4, D5 and D6 in gas phase are 353. 12
Conclusions:-
In this study, we have used the DFT/B3LYP and DFT/B3LYP method with 6-31G(d) basis set to investigate theoretical analysis of the geometries, electronic properties, optical properties of new azo dyes. The modification of chemical structures can greatly modulate and improve the electronic and optical properties of studied dyes.The LUMO energy levels of all dyes are much higher than that of TiO2 conduction band edge, suggesting that the photoexcited electron transfer from Di to TiO2 may be sufficiently efficient to be useful in photovoltaic devices. The calculated ΔG inject for these six dyes is decreased in the following order of D6>D2>D1>D3>D5>D4. Among these six dyes, we observe that the dye D6 has the larger ΔG inject , this maybe is due to the influencing of the extending the conjugate bonds. In another hand, the D6 dye has a low value of chemical hardness (η) in comparison with other dyes; this indicates that the D6 is very easy to liberate the electrons. The order of liberate of the electrons are D6> D5> D4> D1> D2 >D3. We could conclude that the cell containing the dye D1, D2, D6 and D6 should have the highest J sc due to its relative large LHE and injection driving force compared to the dyes D3, D4 and D5. | 4,298 | 2018-05-31T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Effective neutrino masses in KATRIN and future tritium beta-decay experiments
Past and current direct neutrino mass experiments set limits on the so-called effective neutrino mass, which is an incoherent sum of neutrino masses and lepton mixing matrix elements. The electron energy spectrum which neglects the relativistic and nuclear recoil effects is often assumed. Alternative definitions of effective masses exist, and an exact relativistic spectrum is calculable. We quantitatively compare the validity of those different approximations as function of energy resolution and exposure in view of tritium beta decays in the KATRIN, Project 8 and PTOLEMY experiments. Furthermore, adopting the Bayesian approach, we present the posterior distributions of the effective neutrino mass by including current experimental information from neutrino oscillations, beta decay, neutrinoless double-beta decay and cosmological observations. Both linear and logarithmic priors for the smallest neutrino mass are assumed.
Introduction
Neutrino oscillation experiments have measured with very good precision the three leptonic flavor mixing angles {θ 12 , θ 13 , θ 23 } and two independent neutrino mass-squared differences ∆m 2 21 ≡ m 2 2 − m 2 1 and |∆m 2 31 | ≡ |m 2 3 − m 2 1 |. The absolute scale of neutrino masses, however, has to be determined from non-oscillation approaches, using beta decay [1], neutrinoless double-beta decay [2], or cosmological observations [3]. Once the neutrino mass scale is established, one knows the lightest neutrino mass, which is m 1 in the case of normal neutrino mass ordering (NO) with m 1 < m 2 < m 3 , or m 3 in the case of inverted neutrino mass ordering (IO) with m 3 < m 1 < m 2 .
As first suggested by Enrico Fermi and Francis Perrin [4][5][6], the precise measurement of the electron energy spectrum in nuclear beta decays A Z N → A Z+1 N + e − + ν e , where A and Z denote the mass and atomic number of the decaying nucleus, can be utilized to probe absolute neutrino masses. Since the energy released in beta decays is distributed to massive neutrinos, the energy spectrum of electrons in the region close to its endpoint will be distorted in comparison to that in the limit of zero neutrino masses. This kinematic effect is usually described by the effective neutrino mass [7] where U ei (for i = 1, 2, 3) stand for the first-row elements of the leptonic flavor mixing matrix U , i.e., |U e1 | = cos θ 13 cos θ 12 , |U e2 | = cos θ 13 sin θ 12 , |U e3 | = sin θ 13 in the standard parametrization [8], and m i (for i = 1, 2, 3) for the absolute neutrino masses. Very recently, the KATRIN collaboration has reported its first result on the effective neutrino mass using tritium beta decay 3 H → 3 He + e − + ν e , and reached the currently most stringent upper bound [9,10] m β < 1.1 eV , at the 90% confidence level (CL). With the full exposure in the near future, KATRIN aims for an ultimate limit of m β < 0.2 eV at the same CL [11], which is an order of magnitude better than the result m β 2 eV from the Mainz [12] and Troitsk [13] experiments. Motivated by this impressive achievement of the KATRIN experiment, we revisit the validity of the effective neutrino mass m β in Eq. (1) and clarify how it depends on the energy resolution and the sensitivity of a realistic experiment for tritium beta decays. 1 More explicitly, we shall consider the KATRIN [11], Project 8 [17] and PTOLEMY [18][19][20] experiments. Their main features and projected sensitivities have been summarized in Appendix A and Table 1. We compare m β in Eq. (1) with other effective neutrino masses proposed in the literature, and also consider the exact relativistic spectrum of tritium beta decays. A measure of the validity of m β in terms of the exposure and energy resolution for a beta-decay experiment can be set. As a result, we find that the standard effective mass m β and the classical spectrum form can be used for KATRIN and Project 8 essentially without losing accuracy. 1 In this work, we focus only on tritium beta-decay experiments. Similar analyses of the effective neutrino mass can be performed for the electron-capture decays of holmium, namely, e − + 163 Ho → ν e + 163 Dy, which are and will be investigated in the ECHo [14] and HOLMES [15], NuMECS [16] experiments. Furthermore, it is of interest to estimate how likely a signal in upcoming neutrino mass experiments, including those using electron-capture, is. Towards this end, we perform a Bayesian analysis to obtain the posterior distributions of m β . The probability to find the beta-decay signal depends on the experimental likelihood input one considers, in particular the neutrino mass information from cosmology and neutrinoless double beta decays. The cosmological constraints on neutrino masses reply on the datasets one has included in generating the likelihood, whereas the constraints from neutrinoless double beta decays are subject to the assumption whether neutrinos are Dirac or Majorana particles. It is thus quantified what the consequences of adding more and more additional mass information are. Moreover, the prior on the smallest neutrino mass, which could be linear or logarithmic, is important for the final posteriors.
The remaining part of our paper is organized as follows. In Sec. 2, we make a comparison between the exact relativistic spectrum of electrons from tritium beta decays with the ordinary one with an effective neutrino mass m β . Then, a quantitative assessment of the validity of the effective neutrino mass is carried out. The posterior distributions of the effective neutrino mass are calculated in Sec. 3, where the present experimental information from neutrino oscillations, neutrinoless double beta decays and cosmology are included. Finally, we summarize our main results in Sec. 4. Technical details on the considered experiments and on the likelihoods used for our Bayesian analysis are delegated to appendices.
The relativistic electron spectrum
Before introducing the effective neutrino mass for beta decays, we present the exact relativistic energy spectrum of the outgoing electrons for tritium beta decays (or equivalently the differential decay rate), which can be calculated within standard electroweak theory [21][22][23][24], the result being Here N T is the target mass of 3 H and E e = K e + m e is the electron energy with K e being its kinetic energy. In Eq. (3), the reduced cross section is given by where G F = 1.166 × 10 −5 GeV −2 is the Fermi constant, |V ud | ≈ cos θ C is determined by the Cabibbo angle θ C ≈ 12.8 • , F (Z, E e ) is the ordinary Fermi function with Z = 1 for tritium taking account of the distortion of the electron wave function in the Coulomb potential of the decaying nucleus 2 , C V ≈ 1 and C A ≈ 1.2695 stand for the vector and axial-vector coupling constants of the charged-current weak interaction of nucleons, respectively. In addition, f F 2 ≈ 0.9987 and being the endpoint energy corresponding to the neutrino mass m i . Some comments on the kinematics are in order: • Given the nuclear masses m 3 H ≈ 2 808 920.8205 keV and m 3 He ≈ 2 808 391.2193 keV [23], as well as the electron mass m e ≈ 510.9989 keV, one can obtain the Q-value of tritium beta decay Q ≡ m 3 H − m 3 He − m e ≈ 18.6023 keV. In the limit of vanishing neutrino masses, the endpoint energy K end turns out to be which is lower than the Q-value by a small amount of Q − K end,0 ≈ 3.4 eV. This difference arises from the recoil energies of the final-state particles and is naturally included when one considers fully relativistic kinematics. Since the electron spectrum near its endpoint is sensitive to absolute neutrino masses, which are much smaller than this energy difference of 3.4 eV, it is not appropriate to treat the Q-value as the endpoint energy.
• It is straightforward to verify that K end ≈ K end,0 −m i m 3 He /m 3 H and thus y +m i m 3 He /m 3 H ≈ K end,0 − K e , where a tiny term m 2 i /(2m 3 H ) < 1.78 × 10 −10 eV for m i < 1 eV can be safely ignored. Taking this approximation on the right-hand side of Eq. (5), we can recast the kinematical function into from which it is interesting to observe that the absolute neutrino mass m i in the square root receives a correction factor m 3 He /m 3 H ≈ 0.999811. The difference between Eqs. (7) and (5) is negligibly small, so the former will be used in the following discussions.
Furthermore, given 1 − m 3 He /m 3 H ≈ 1.89 × 10 −4 and m e /m 3 H ≈ 1.82 × 10 −4 , the relativistic electron spectrum dΓ rel /dK e approximates to the classical one where σ cl (E e ) = σ(E e )/(m 3 He /m 3 H ) and σ(E e ) has been given in Eq. (4). Comparing the classical spectrum in Eq. (8) with the relativistic one in Eq. (3), one can observe that the endpoint energy in the former case deviates from the true one by an amount of (1 − m 3 He /m 3 H ) m i ≈ 10 −4 m i . As the PTOLEMY experiment could achieve a relative precision of 10 −6 for the determination of the lightest neutrino mass [20], it would be no longer appropriate to use the classical spectrum in PTOLEMY. However, it is rather safe for KATRIN and Project 8 to neglect the factor 1 − m 3 He /m 3 H , as their sensitivities to the neutrino mass are weaker than for PTOLEMY. To be more specific, the 1σ sensitivities of KATRIN and Project 8 to m 2 β are σ(m 2 β ) ≈ 0.025 eV 2 [11] and σ(m 2 β ) ≈ 0.001 eV 2 [17], respectively, corresponding to σ(m β )/m β ≈ 0.05 (0.5 eV/m β ) 2 and σ(m β )/m β ≈ 0.002 (0.5 eV/m β ) 2 . Both values are much larger than the correction of order 10 −4 from the factor m 3 He /m 3 H . To have an expression for the electron spectrum applicable to experiments beyond KATRIN and Project 8, i.e., leading to PTOLEMY, we can slightly modify the classical energy spectrum as follows: Let us now check whether the difference between the exact relativistic spectrum dΓ rel /dK e and the modified classical one dΓ cl /dK e affects the determination of absolute neutrino masses in future beta-decay experiments with a target mass of tritium ranging from 10 −4 g in KATRIN to 100 g in PTOLEMY.
In other words, we examine whether these two spectra are statistically distinguishable in realistic experiments. Consider the ratio of these two energy spectra where an expansion in terms of the electron kinetic energy K e = E e − m e has been carried out. First of all, the constant on the rightmost side of Eq. (10) can be absorbed into the uncertainty of the overall normalization factor A β in the statistical analysis, so it is irrelevant for our discussions. 3 Considering the term proportional to the electron kinetic energy K e in Eq. (10), it could potentially disturb the determination of the normalization factor of the spectrum. The distortion amplitude induced by the K e -dependent term can be characterized by the specified energy window ∆K e below the endpoint. For example, we have ∆K e = 30 eV for KATRIN while ∆K e = 5 eV for PTOLEMY, which is limited by the detector performance, see Appendix A. In this way, we can obtain the distortion amplitude of 5.1 × 10 −8 for KATRIN and 8.5 × 10 −9 for PTOLEMY, respectively. To examine the impact of this distortion, one can compare it with the statistical fluctuation of the events within the corresponding energy window. The integrated number of beta-decay events within the energy window below the endpoint can be calculated via where T is the operation time and E ≡ N T · T is the total exposure. The statistical fluctuation of the beta-decay events within the energy window is estimated as N int /N int ≈ 10 −5 for KATRIN with ∆K e = 30 eV and E = 10 −4 g ·yr, while N int /N int ≈ 10 −7 for PTOLEMY with ∆K e = 5 eV and E = 100 g·yr. Both values are much larger than the corresponding distortion amplitudes. It is thus evident that the uncertainty in A β will be dominated by the intrinsic statistical fluctuation of the observed beta-decay events in future experiments. As the data fluctuation near the endpoint is most significant among the entire spectrum, the influence of the spectral distortion as indicated in Eq. (10) is not important. Hence we conclude that the classical spectrum with the neutrino masses corrected by m i → m i · m 3 He /m 3 H in Eq. (9) works as well as the exact relativistic spectrum in Eq. (3) for future beta-decay experiments.
It is worthwhile to emphasize that because of a finite energy resolution ∆, which is normally much larger than the absolute neutrino mass m i , it is difficult to resolve the true endpoint. Hence, the experimental sensitivity to neutrino masses is in fact governed by the integrated number of beta-decay events within a specified energy window below the endpoint. Taking this energy window to be the experimental energy resolution, ∆K e = ∆, we can figure out the expected statistics around the endpoint according to Eq. (11). For a conservative experimental setup, e.g. that is achievable in KATRIN with ∆ = 1 eV and E = 10 −4 g · yr, the expected event number within the endpoint bin is 3.2 × 10 5 . In the limit of m β ∆, a finite neutrino mass will induce a relative deviation of events within the window ∆ by 3/2 · m 2 β /∆ 2 . Therefore, the sensitivity to m β is roughly scaled as (∆/E) 1/4 . The choice of energy window is limited by the smearing effect of finite energy resolution, and the sensitivity to neutrino masses will drop with a larger energy resolution. This can be compensated by increasing the exposure, such that an efficient event number can be acquired to resolve the overall shift due to finite neutrino masses.
Unless stated otherwise, we will refer from now on to dΓ cl /dK e as the exact spectrum in the remaining discussion.
Validity of the effective mass
In principle, it is the exact relativistic spectrum that should be confronted with the experimental observation in order to extract the absolute neutrino masses m i (for i = 1, 2, 3), since |U ei | 2 (for i = 1, 2, 3) can be precisely measured in neutrino oscillation experiments. However, often the effective electron spectrum with only one mass parameter is considered (see e.g., [25]) where the effective neutrino mass m β is usually defined as in Eq. (1). Note that for consistency we have kept the near-unity factor m 3 He /m 3 H as in dΓ cl /dK e of Eq. (9), which is necessary when it comes to experiments beyond KATRIN and Project 8, i.e. PTOLEMY. However, m 3 He /m 3 H appears as an overall factor to all neutrino mass parameters, so the quantitative impact on our discussion of the validity of the effective mass is actually negligible 4 , but we keep it nevertheless in our numerical calculations. Another important point is that the endpoint energy in Eq. (6) should be corrected if we take account of binding energies as well as excitations of the daughter system in an actual experiment. For instance, in KATRIN or Project 8 with the molecular tritium target, a correction of 16.29 eV to the endpoint energy should be considered owing to the binding 4 One can easily check that the relation m 2 β ≡ |U e1 | 2 m 2 1 + |U e2 | 2 m 2 2 + |U e3 | 2 m 2 3 is stable under m i → m i · m 3 He /m 3 H and m β → m β · m 3 He /m 3 H . Thus any quantitative conclusion made by considering m 3 He /m 3 H corrections can be directly applied to the case without correction of m 3 He /m 3 H by shifting all neutrino masses with a relative fraction as small as 10 −4 , and vice versa. energies of the mother tritium pair, the daughter tritium-helium molecule, and the combination of the ionized electron [1]. Compared to the atomic case, the recoil energy of the molecular state will also be reduced by a factor of two due to the doubled mass, which will boost the endpoint energy of electrons. For PTOLEMY, with a foreseen possibility of atomic tritium weakly bounded to a graphene layer [18], the ionized electron will inevitably interact with the complex graphene binding system. Furthermore, the recoiled helium (with a kinetic energy of 3.4 eV) will escape the graphene binding structure with a sub-eV binding energy. All these effects need to be systemically considered in the experiment to evaluate the final endpoint energy K end,0 . However, in our case, the spectrum in Eq. (12) mostly depends on the relative deviation from the endpoint energy K end,0 − K e , and σ cl (E e ) changes very slowly as a function of K end,0 − K e near the endpoint. Hence, our results which will be presented in terms of K end,0 − K e are still valid if a different K end,0 is considered. On the other hand, the final-state excitations of molecules will smear the energy of outgoing electrons [26][27][28][29], and this effect will be taken into account as the irreducible energy resolution of the experiment.
Let us summarize the existing expressions of the electron spectra defined in this work: (i) the exact relativistic beta spectrum dΓ rel /dK e without making approximations, see Eq. (3); (ii) the classical spectrum dΓ cl /dK e in the limit of m 3 He /m 3 H → 1 and m e /m 3 H → 0, see Eq. (8); (iii) the modified classical spectrum dΓ cl /dK e by making the replacement m i → m i · m 3 He /m 3 H in dΓ cl /dK e , see Eq. (9); (iv) the effective electron spectrum dΓ eff /dK e defined in Eq. (12). We have seen that the difference between dΓ cl /dK e in Eq. (9) and the classical spectrum dΓ cl /dK e in Eq. (8) plays only a role when PTOLEMY is considered. In addition, the difference to the relativistic spectrum dΓ rel /dK e in Eq. (3) is minuscule and the classical spectra can be considered as the exact ones. It remains to compare the so-defined exact spectrum dΓ cl /dK e in Eq. (9) to the effective one dΓ eff /dK e in Eq. (12).
Moreover, in the literature two different definitions of the effective neutrino mass have also been introduced [31][32][33], namely The effective electron spectrum can then be obtained by replacing m β in Eq. (12) with m β or m β . In this subsection, we discuss the difference among those three effective neutrino masses and clarify their validity with future beta-decay experiments in mind.
As has been observed in Ref. [33], the three effective neutrino masses have different accuracies in fitting the exact spectrum. If neutrino masses are quasi-degenerate, all three effective masses provide very good fits and their relative differences are very small. For example, it is easy to verify that (m 2 β − m 2 β )/m β 10 −3 . If the chosen energy window satisfies ∆K e < 2 m β , then m β can give a better fit than m β , whereas m β is still an excellent parameter in fitting the spectrum with an almost negligible difference (m 2 β − m 2 β )/m 2 β 10 −5 . If neutrino masses are hierarchical, m β always fits better to the spectrum than the other two variants. In case of an extremely small value of the lightest neutrino mass, both m β and m β are unable to offer a good fit to the true spectrum. In Fig. 1 we plot three effective neutrino masses in terms of the lightest neutrino mass which is m 1 for NO and m 3 for IO. One can observe that their differences are significant in NO The 3σ CL uncertainties of oscillation parameters have been considered. A similar plot for absolute neutrino masses was given in Ref. [30].
when m 1 is small, but in IO the differences are always unnoticeable. The situation of IO can be attributed to the fact that the contribution of m 3 is suppressed by |U e3 | 2 while the remaining two neutrino masses m 1 and m 2 are always nearly degenerate due to the relation ∆m 2 21 |∆m 2 31 |. To be more explicit, we look carefully at the main difference between the exact spectrum dΓ cl /dK e and the effective one dΓ eff /dK e . The difference stems from the kinematical functions, namely [33] where the spectra involving our three effective neutrino masses m β , m β and m β are collectively given. To analyze the difference we take m β in the NO case for example, but the other effective masses and the IO case can be studied in a similar way. Let us start with the endpoint of the electron spectrum and then go to lower energies. For convenience the factor of m 3 He /m 3 H is omitted in the following qualitative discussion, which of course will not affect the main feature of the result as we noted above.
1. For the exact spectrum, the endpoint energy K end is set by the smallest neutrino mass, i.e., K end,0 − K end = m 1 , while it is m β for the effective spectrum dΓ eff /dK e . Since m 2 β = m 2 1 +∆m 2 21 |U e2 | 2 +∆m 2 31 |U e3 | 2 > m 2 1 , the endpoint energy of the effective spectrum dΓ eff /dK e is smaller than that of the exact one dΓ cl /dK e . Therefore, starting from the electron kinetic without Gaussian Smearing Figure 2: Illustration of the electron spectrum from tritium beta decays, where the total exposure E = 1 g · yr and the best-fit values of neutrino mixing angles and mass-squared differences are assumed. The exact spectra dΓ cl /dK e with m 1 = 10 meV and m 1 = 10.5 meV are shown as the gray solid and red dotted curve, respectively. The effective spectra dΓ eff /dK e with m β = 13.4 meV, m β = 11.9 meV and m β = 10 meV, corresponding to m 1 = 10 meV, are represented by the dark, medium and light blue dashed curves. In the left panel, the energy smearing is ignored, while an energy resolution of ∆ = 100 meV is taken into account in the right panel. In both panels, the real spectra are depicted in the upper subgraph, whereas their deviations from the exact spectrum with m 1 = 10 meV are given in the lower subgraph.
energy of K e = K end,0 − m 1 and going to smaller values, the effective spectrum dΓ eff /dK e is always vanishing and thus should be lying below the exact one dΓ cl /dK e .
2.
As K e is decreasing further, we come to the point at which K end,0 −K e = m β is satisfied. Note that m 2 β = m 2 3 −∆m 2 31 |U e1 | 2 −∆m 2 32 |U e2 | 2 < m 2 3 holds. Therefore, for m β < K end,0 −K e < m 3 , dΓ eff /dK e becomes nonzero. As indicated in Eqs. (14) and (15), before the decay channel corresponding to m 3 is switched on, dΓ eff /dK e is about to exceed dΓ cl /dK e . At ∆m 2 31 has been taken into account, leading to dΓ eff /dK e > dΓ cl /dK e .
3. When we go far below the endpoint, e.g., K e K end,0 −m i or equivalently K end,0 −K e m i , the neutrino masses can be neglected and thus these two spectra coincide with each other. Therefore, for m β under consideration, the difference between dΓ eff /dK e and dΓ cl /dK e could change its sign in the narrow range below the endpoint but finally converges to zero.
For illustration, we show in Fig. 2 the electron spectra in the narrow energy region −200 meV ≤ K e − K end,0 ≤ 200 meV around the endpoint, where possible background events are ignored. In addition, the total exposure for the tritium beta-decay experiment is taken to be E = 1 g · yr. In the left panel of Fig. 2, the exact spectrum dΓ cl /dK e with m 1 = 10 meV is plotted as the gray solid curve, while that with m 1 = 10.5 meV is represented by the red dotted curve for comparison. The effective spectra dΓ eff /dK e for m β = 13.4 meV, m β = 11.9 meV and m β = 10 meV are given by the dark, medium and light blue dashed curves, respectively. Those values are obtained for a smallest mass of m 1 = 10 meV and the current best-fit values of the oscillation parameters [34]. Since it is hard to distinguish these spectra, as can be seen in the upper subgraph in the left panel, we depict their deviations from the exact spectrum, with m 1 = 10 meV in the lower subgraph. The behavior of these deviations can be well understood analytically, as we have already explained by using Eqs. (14) and (15). As for the exact spectrum dΓ cl /dK e with m 1 = 10.5 meV, it is always lying below that with m 1 = 10 meV. The reason is simply that the kinematical function [(K end,0 − K e ) 2 − (m i · m 3 He /m 3 H ) 2 ] 1/2 in the exact spectrum dΓ cl /dK e becomes smaller for larger values of m i .
The finite energy resolution of the detector has been ignored in the left panel of Fig. 2, but is taken into account in the calculations of the energy spectra and their deviations from dΓ cl /dK e with m 1 = 10 meV in the right panel. Assuming the energy resolution 5 of the detector to be ∆ = 100 meV and taking the Gaussian form, we can derive the energy spectrum with smearing effects as follows which has been plotted in the right panel for both dΓ cl /dK e and dΓ eff /dK e . Note that we have not yet specified any planned experimental configuration so far, because the main purpose here is to understand the behavior of deviations caused by using different effective neutrino masses. Two interesting observations can be made and deserve further discussions: • First, when energy smearing effects are included, the difference between dΓ eff /dK e and dΓ cl /dK e will be averaged over the electron kinetic energy, reducing the discrepancy between them. This effect is more significant for the electron kinetic energy closer to the endpoint. Therefore, if the energy resolution is extremely good, the error caused by using the effective spectrum becomes larger. In this case, one needs to fit the experimental data by implementing dΓ cl /dK e with the lightest neutrino mass as the fundamental parameter.
• Second, the effective spectrum dΓ eff /dK e with m β converges to the exact one in the energy region far below the endpoint. Moreover, it is interesting to notice that even though the difference ∆(dΓ/dK e ) between dΓ eff /dK e with m β and dΓ cl /dK e can be either positive or negative, the total number of beta-decay events within a very wide energy window is approximately vanishing. To be more concrete, the integration of ∆(dΓ/dK e ) over an energy window ∆K e scales as ∆(dΓ/dK e ) ∝ m β /∆K e [33], which will be vanishing when ∆K e m β . If the energy resolution ∆ = 100 meV is larger than absolute neutrino masses, we can evaluate the difference between the effective spectra in the region of K e < K end,0 − ∆ via series expansion in terms of m 2 i /∆ 2 , namely, where all higher-order terms of O(m 4 i /∆ 4 ) have been omitted. Consequently, the effective spectrum with m β can fit perfectly the experimental observation, whereas a sizable overall shift is left for m β as well as for m β . As we have mentioned before, although the energy resolution is not good enough to completely pin down the endpoint, the experimental sensitivity to absolute neutrino masses can be obtained by observing the total number of beta-decay events within the energy window around the endpoint.
An immediate question is whether the effective neutrino mass is still a useful parameter for future beta-decay experiments. Put alternatively, does dΓ cl /dK e in Eq. (9) provide a good description of the effective spectrum dΓ eff /dK e in Eq. (12)? We will now investigate the validity of the effective neutrino mass by following a simple statistical approach. The strategy of our numerical analysis is summarized in the following.
Given the target mass and the operation time T (i.e., the total exposure E), we simulate the experimental data by using the exact spectrum dΓ cl /dK e and divide the simulated data into a number of energy bins with bin width ∆, which is taken to be the energy resolution of the detector. In general, the event number in the ith energy bin [E i − ∆/2, E i + ∆/2] is given by the integration of the spectrum over the bin width where E i denotes the mean value of the electron kinetic energy in the ith bin and dΓ ∆ /dK e is the convolution of a spectrum with a Gaussian smearing function as in Eq. (16). The simulated event number N cl i in each energy bin is calculated by using dΓ cl /dK e with a specified value of m 1 (i.e., the lightest neutrino mass in the NO case). On the other hand, to clarify how good dΓ eff /dK e can describe the true data, the predicted event number N eff i in each energy bin is calculated in the same way but with the effective spectrum dΓ eff /dK e , which will be subsequently sent to fit the simulated true data N cl i . It should be noted that KATRIN operating in the ordinary mode with the MAC-E-Filter observes actually the integrated number of beta-decay events and has to reconstruct the differential spectrum by adjusting the retarding potential to scan over a certain energy window containing the endpoint. The number of events for the differential spectrum in each energy bin turns out to be is the event number of the integrated spectrum for the scanning point corresponding to E i . For this reason, the statistical fluctuation of the event number for the reconstructed differential spectrum can be estimated as N int i + N int i−1 , which should be compared with that of N i for the direct measurement. Meanwhile, a longer time of data taking is also expected. Therefore our result should be taken to be conservative when considering KATRIN-like experiments operating in integrated mode. KATRIN can also directly measure the non-integrated beta spectrum in a possible MAC-E-TOF mode, as described in Appendix A. Since tritium experiments in the future tend to adopt non-integrated modes to maximize the neutrino mass sensitivity, we shall focus on this scenario. For those tritium experiments operating in the non-integrated mode, we will use the following experimental configurations: (i) KATRIN with the target mass m KATRIN = 2.5×10 −4 g and energy resolution ∆ KATRIN = 1 eV; (ii) Project 8 loaded with molecular tritium gas with m P8 = 5×10 −4 g and ∆ P8m = 0.36 eV; (iii) Project 8 loaded with atomic tritium gas with m P8 = 5 × 10 −4 g and ∆ P8a = 0.05 eV; (iv) PTOLEMY with m PTOLEMY = 100 g and ∆ PTOLEMY = 0.15 eV. The details can be found in Appendix A. For Project 8 loaded with molecular tritium, the energy resolution is limited by the irreducible width of the final-state molecular excitations [35]. This limitation can be overcome by switching the target to atomic tritium. Note that for KATRIN in the MAC-E-TOF mode, which is still under development, the penalties of the tritium decay rate and energy resolution due to the chopping procedure (see Appendix A) are ignored, so the configuration here is somewhat idealized for KATRIN. Nevertheless, we will find the effect of using m β even in this ideal KATRIN setup is negligible. We adopt Gaussian distributions as in Eq. (16) for the uncertainties caused by finite energy resolutions in all experiments. In a more realistic analysis with all experimental details taken into account, one should consider a strict shape for the energy resolution function, e.g., a triangle-like shape for KATRIN in the developing MAC-E-TOF mode. The actual shape for Project 8 with molecular tritium should also be calculated with detailed consideration of final-state excitations. However, a different shape of the energy resolution from Gaussian should not affect our results by orders of magnitude.
In Fig. 3, we show the difference in the event numbers of dΓ eff /dK e and dΓ cl /dK e , together with the statistical fluctuation of the events. In the upper two panels two nominal experimental setups have been chosen for demonstration, and in the remaining four panels we illustrate the cases of realistic experiments. In all panels, the data are simulated with dΓ cl /dK e , for which a Figure 5: The function ∆χ 2 ≡ χ 2 β − χ 2 β | min is shown with respect to the lightest neutrino mass m 1 in the NO case. The dark red curve is generated by fitting with the exact spectrum dΓ cl /dK e , while the dark blue one is by using the effective spectrum dΓ eff /dK e with m β , for the exposure of E = 100 g · yr. The light curves are for E = 1 g · yr with all else being the same. For illustration, we use an energy resolution of ∆ = 0.1 eV.
true value of the lightest neutrino mass m true 1 = 10 meV has been input, and the data fluctuations are represented by the filled gray histograms. For comparison, the event number difference in each energy bin has been calculated for three effective spectra with m β = 13.4 meV, m β = 11.9 meV and m β = 10 meV, which is denoted as the blue dashed curves. In addition, the gray solid curve denotes the exact spectrum with m true 1 = 10 meV as in Fig. 2, while the red dotted curve is for m true 1 = 10.5 meV. From Fig. 3, two important observations can be made. First, for a smaller exposure such as in KATRIN and Project 8, the statistical fluctuation can easily overwhelm the deviations, rendering the effective description of the beta spectrum more reliable. Second, for m β , the error caused by using the effective spectrum is most significant in the energy bin containing the endpoint. The reason is obvious, namely that the data fluctuation increases and the deviation decreases, as the energy moves away from the endpoint. For the other two effective masses m β and m β , the deviations are even more significant.
To quantify the difference between the effective and exact spectra in a statistical approach, we define ∆N i ≡ N eff i − N cl i in each energy bin and take N cl i to be the corresponding statistical uncertainty. 6 In this way, if ∆N i is negligible compared to N cl i , one can claim that the error due to the use of the effective spectrum is unimportant in that energy bin. For the whole energy spectrum, the χ 2 -function can be constructed as where i runs over the number of energy bins. This χ 2 -function measures to what degree the effective spectrum dΓ eff /dK e deviates from the exact one dΓ cl /dK e . Because we have used dΓ cl /dK e to generate the true data, from the model selection perspective (i.e., fitting two different models dΓ cl /dK e and dΓ eff /dK e with the same data, respectively), χ 2 β defines the statistical significance with which one can favor dΓ cl /dK e over dΓ eff /dK e . If one insists in using the effective spectrum dΓ eff /dK e to fit the data, χ 2 β also measures the goodness of fit χ 2 β /v of dΓ eff /dK e given the degree of freedom v in fitting. Since most deviations of dΓ eff /dK e from dΓ cl /dK e distribute only in a few energy bins around the endpoint, the degree of freedom can be v = O(1) depending on the number of bins we use in the actual fit. As has been mentioned previously, we have fixed the bin size to be the energy resolution ∆. In principle, the bin width can be chosen freely. The smaller the bin width is, the more information one can acquire in the fit. However, this is limited by the energy resolution of an experiment, which will smooth out the information within a comparable bin size, such that further decreasing the bin size will not improve the result anymore. We have numerically checked that by choosing a bin width smaller than the energy resolution, e.g. ∆/8, the χ 2 -function defined in Eq. (19) will increase only by a factor of ∼ 70%. Further reducing the bin width will not alter this result.
Let us make some remarks on the other input in our numerical calculations. First, the best-fit values of neutrino oscillation parameters from Ref. [34] are adopted. Second, the energy window for the analysis has been taken to be K e − K end,0 ∈ (−4 · · · 4) eV. Third, we have assumed no background contributions. The inclusion of possible background events will reduce the value of χ 2 β , leading to a smaller statistical deviation of the effective spectrum dΓ eff /dK e from the classical one dΓ cl /dK e . Four, we take the normalization factor to be one, as it can be precisely determined by choosing a wider energy window in realistic experiments.
In the left panel of Fig. 4, for each pair of m 1 in the range of (0 · · · 0.1) eV and ∆ in the range of (0.02 · · · 0.6) eV, we present the value of χ 2 β for the total exposure of E = 1 g · yr in the NO case, where the effective spectrum with m β is adopted for illustration. Similar calculations have also been carried out in the IO case and the results are given in the m 3 -∆ plane in the (19) and (20) arising from the description of the electron spectrum by using the effective neutrino mass m β . One year of data taking has been assumed. No background is assumed, and the χ 2 -values can be further reduced taking into account possible background contributions. right panel. Roughly speaking, for those values of ∆ and m 1 in the NO case (or m 3 in the IO case) corresponding to χ 2 β 0.1, the effective spectrum dΓ eff /dK e with m β is reasonably good to describe the data, i.e., with negligible and fragile statistical significance to discriminate dΓ cl /dK e from dΓ eff /dK e and no noticeable impact on the goodness-of-fit. In the same sense, we can also conclude that m β is no longer a safe parameter for those values of ∆ and m 1 (or m 3 ) corresponding to χ 2 β 10, i.e., the statistical power to favor dΓ cl /dK e over dΓ eff /dK e is more than 3σ and a considerable impact on the goodness of fit arises (the p-value of fit is 0.001565 for χ 2 β = 10 and v = 1, and the model dΓ eff /dK e is almost ruled out by the data).
Although we have fixed the exposure at E = 1 g · yr, it is straightforward to derive the values of χ 2 β for a different exposure by noting the fact that (∆N i ) 2 /N cl i is linearly proportional to E. As a consequence, for a different exposure E, the original values of χ 2 β for E will be modified to be χ 2 β · E/E. For instance, the original value of the contour χ 2 β = 10 for E = 1 g · yr should be changed to χ 2 β = 0.01 for E = 1 mg · yr. Nevertheless, if we insist in using dΓ eff /dK e to fit the data regardless of the statistical preference for the true model dΓ cl /dK e and a poor goodness-of-fit, the parameter estimation of m β can always be performed based on using dΓ eff /dK e . In this case, a large value of χ 2 β does not necessarily mean a large value of ∆χ 2 ≡ χ 2 β − χ 2 β | min in parameter estimations, where χ 2 β | min denotes the minimum of χ 2 β by freely adjusting m β or m 1 . To explicitly show the error of fitting the neutrino mass m 1 with the effective spectrum, we calculate ∆χ 2 and present the final result with respect to m 1 in Fig. 5. In our calculations, we assume the true value of m 1 to be 10 meV, corresponding to m β = 13.4 meV. The energy resolution is fixed to 0.1 eV. The dark red curve represents the result obtained by fitting with the exact spectrum dΓ cl /dK e , while the dark blue one corresponds to the fit by using the effective spectrum dΓ eff /dK e , given the exposure of E = 100 g · yr. The light curves stand for the case with E = 1 g · yr. One can observe that if the effective spectrum with m β is used, the best-fit value of m 1 is found to be m bf 1 = 9.6 meV, which deviates notably from m bf 1 = 10 meV obtained by using the exact spectrum. Even for the exposure of E = 1 g · yr the true value is outside of ∆χ 2 4 when fitting with the effective spectrum. The situation becomes worse if we take a larger exposure.
To systematically study how far the parameter value fitted by using dΓ eff /dK e can deviate from the true one, we define the following difference of χ 2 where χ 2 β (m β = m true β ) is the χ 2 value when m β is set to m true β when fitting with dΓ eff /dK e , and χ 2 β (m β = m bf β )| min is the minimum value of the χ 2 -curve obtained by freely adjusting m β with m bf β being the best-fit value. The value of m true β can be directly obtained with Eq. (1) once the input value of m 1 in dΓ cl /dK e for simulating the data is given. The difference ∆χ 2 true measures how likely one can recover the true value of the model parameter m β by fitting with dΓ eff /dK e . We present χ 2 β and ∆χ 2 true in Fig. 6 as a function of the exposure E and the energy resolution ∆. We fix the lightest neutrino mass as 0 eV for these plots, as χ 2 β is maximized in this case according to Fig. 4.
The experimental configurations of KATRIN, Project 8 and PTOLEMY have been indicated in Fig. 6, and their corresponding χ 2 -values have been explicitly summarized in Table. 1. For Figure 6: The contours of χ 2 β and ∆χ 2 true , see Eqs. (19) and (20), arising from the description of the electron spectrum by using the effective neutrino mass m β , are displayed in the exposureresolution (E-∆) plane for the NO case (left two panels) and the IO case (right two panels). The lightest neutrino mass is fixed to 0 eV. PTOLEMY the effective beta spectrum can no longer be adopted. The use of the effective spectrum with m β would result in a huge error in fitting the neutrino mass compared to the precision that is supposed to be achieved in such an experiment, e.g., ∆χ 2 true = 141 for NO and ∆χ 2 true = 81 for IO for one year of data taking. For KATRIN and Project 8 with one year of exposure, the effective mass m β is fortunately applicable with χ 2 β , ∆χ 2 true 0.1. Note that there is a little risk for Project 8 loaded with the atomic tritium. To be more specific, in the extreme case that the data taking time is set to 10 years and an improvement on the energy resolution is made to ∆ = 0.03 eV, ∆χ 2 true for NO can be as large as 1, indicating that the true value of m β is out of the 1σ CL region by fitting with the effective spectrum dΓ eff /dK e , hence the description by using the effective spectrum would not be appropriate anymore.
Posterior Distributions
As we have already demonstrated in the previous section, the effective spectrum dΓ eff /dK e cannot be used for PTOLEMY, but is safe to use in the KATRIN and Project 8 experiments. Following the Bayesian statistical approach [36], we derive in this section the posterior distributions of the effective neutrino mass m β , based on current experimental information from neutrino oscillations, beta decay, neutrinoless double-beta decay (0νββ) and cosmology. Since the description of the beta spectrum via the effective neutrino mass is still valid for KATRIN and Project 8, posterior distributions of the effective neutrino mass should be very suggestive for future experiments. Our results in this section can also be used for the electron-capture experiments ECHo [14], HOLMES [15] and NuMECS [16], if CPT is assumed to be conserved in the neutrino sector. For the similar analysis relevant for the effective neutrino masses in β and 0νββ decays, see Refs. [37][38][39][40][41][42][43]. Here we perform an updated analysis for the direct neutrino mass experiments, in light of a good number of experimental achievements.
As usual, two important ingredients for the Bayesian analysis should be specified. First, we have to choose the prior distributions for the relevant model parameters where ∆m 2 sol = ∆m 2 21 and ∆m 2 atm = ∆m 2 31 (or ∆m 2 32 ) in the NO (or IO) case. For all oscillation parameters {sin 2 θ 12 , sin 2 θ 13 , ∆m 2 sol , ∆m 2 atm }, we assume that they are uniformly distributed in the ranges that are wide enough to cover their experimentally allowed values. For the absolute neutrino mass scale, which is represented by the lightest neutrino mass m L (i.e., m 1 in the NO case or m 3 in the IO case), we consider the following two possible priors: • A flat prior on the logarithm of m L in the range of (10 −7 · · · 10) eV, namely, Log 10 (m L /eV) ∈ [−7, 1], which will be referred to as the log prior in the following discussion. This prior is scale invariant and motivated by the approximately constant ratios [44] of charged fermion masses m u /m c ∼ m c /m t ∼ λ 2 , m d /m s ∼ m s /m b ∼ λ, and m e /m µ ∼ m µ /m τ ∼ λ 2 (where λ = sin θ C ≈ 0.22 is the Wolfenstein parameter), as well as by the in general exponential fermion mass hierarchies. Note that an ad hoc lower cutoff 10 −7 eV for m L has been imposed, which is necessary to bound the prior volume from below. Decreasing this cutoff is equivalent to putting more and more prior volume to very small and essentially vanishing values of m L .
• A flat prior on m L in the range of (0 · · · 10) eV. Note that the ratio of the heaviest to the second-heaviest neutrino mass is rather small, at most ∆m 2 31 /∆m 2 21 ≈ 5 for NO and essentially 1 for IO, motivating a moderate and non-exponential ordering of neutrino masses.
Without a complete theory for neutrino mass generation, we cannot judge which prior is favorable and therefore shall treat both of them on equal footing. The prior dependence of the final posterior distributions reflects that current experimental knowledge on the absolute scale of neutrino masses is still very poor. If one attempts to set limits on model parameters, a prior-independent approach may be found in Ref. [45].
The likelihood functions for each type of experiments can be found in Appendix B. Briefly speaking, the global-fit results of all neutrino oscillation data from Ref. [34] will be used to construct the likelihood function L osc (sin 2 θ 12 , sin 2 θ 13 , ∆m 2 sol , ∆m 2 atm ). The tritium beta-decay experiments Mainz [12], Troitsk [13] and KATRIN [9] are taken into account and the likelihood function L β (m 2 β ) involves the model parameters {m L , sin 2 θ 12 , sin 2 θ 13 , ∆m 2 sol , ∆m 2 atm }. As for 0νββ experiments, the likelihood function L 0νββ (m ββ , G 0ν , |M 0ν |) actually contains all the parameters in Eq. (21). For the two Majorana CP phases ρ and σ relevant for 0νββ experiments, we shall take flat priors in the range of [0 · · · 2π), as there is currently no experimental constraint on them. For the phase space factor G 0ν , a Gaussian prior is assumed with the central value and 1σ error available from Ref. [46]. The nuclear matrix elements |M 0ν | take a flat prior in the range spanned by the predictions from different NME models [39]. Finally, the upper bound on the sum of three neutrino masses Σ = m 1 + m 2 + m 3 from cosmological observations will be implemented, and the corresponding likelihood function L (i) cosmo depends on {m L , ∆m 2 sol , ∆m 2 atm }, where i = 1, 2, 3 refers respectively to the Planck data on the cosmic microwave background, its combination with gravitational lensing data, and their further combination with baryon acoustic oscillation data, as explained in Appendix B.
With the priors of model parameters and the likelihood functions from the relevant experiments, we can compute the posterior distribution of m β , i.e., dP/dm β , in the standard way of Bayesian analysis. The sampling is done with the help of the MultiNest routine [47][48][49]. The numerical results for the flat and log priors on m L are shown in Figs. 7 and 8, respectively. A summary of the volume fractions of m β posteriors covered by future KATRIN and Project 8 sensitivities has been presented in Table 2. Some comments on the numerical results are in order.
• In Fig. 7, a flat prior on m L is assumed. The plots in the first row are for the NO case, whereas those in the second row are for the IO case. In each row, the upper subgraph in the left column shows the posterior distributions of the effective mass in four different scenarios of adopted experimental information: (1) L osc + L β for the neutrino oscillation and beta decay data; (2) L osc + L β + L including L 0νββ do not apply. The future sensitivities of KATRIN [11] (0.2 eV) and Project 8 [17] (40 meV) are shown as the upper and lower dashed boundaries of the gray bands. This gray region represents the gradual improvement of the sensitivities. In Fig. 8, the same computations have been carried out for the log prior on m L , where all notations follow those of Fig. 7.
• By comparing among the different scenarios in both Figs. 7 and 8, we can make the following important observations.
1. Let us first focus on the impact of 0νββ. If the cosmological observations, namely, the upper bounds on neutrino masses, are not considered, then one can make a comparison between the orange curves (corresponding to L osc + L β ) in the left column and those (corresponding to L osc + L β + L 0νββ ) in the right column. It is evident that the experimental constraints from 0νββ decays lead to a significant shift of the posterior distribution to the region of smaller values of m β . Even in this case, it is very likely that the future beta-decay experiments can determine the absolute neutrino mass no matter whether NO or IO is true. For instance, in Fig. 7 where the flat prior on m L is assumed, Project 8 can cover 67% of the posteriors in the NO case.
2.
One should investigate what role is played by the cosmological observations. For this purpose, we concentrate on the plots in the right columns of Fig. 7 and Fig. 8. When the cosmological observations are considered, one can see that the probability of discovering a nonzero effective neutrino mass in beta-decay experiments drops dramatically. In the worst situation, where the likelihood set L osc + L β + L cosmo + L 0νββ is taken in the NO case, even Project 8 can only cover 4.9% of the posterior. Therefore, the detection of a positive signal in this case would imply a tension between the beta-decay experiments and cosmological observations. Regardless of the prior and likelihood choices, Project 8 can always cover all the posteriors of the IO case. In this connection, the discrimination between NO and IO seems to be very promising in future beta-decay experiments [50], e.g. an explicit study of the sensitivity has already been performed for PTOLEMY in Ref. [20].
Summary
The determination of absolute neutrino masses is experimentally challenging, but scientifically very important. As fundamental parameters in nature, absolute neutrino masses must be precisely measured in order to explore the origin of neutrino masses, which calls for new physics beyond the standard model. Motivated by the latest result from the KATRIN experiment and upcoming tritium beta decay searches, we have performed a detailed study of the exact electron spectrum dΓ cl /dK e in Eq. (9), which is a modified relativistic one, and its difference to the effective electron spectrum dΓ eff /dK e in Eq. (12) which includes the usually considered effective neutrino mass m β or its variants. Moreover, based on current experimental information from neutrino oscillation data, tritium beta decays, neutrinoless double-beta decays and cosmology, we have computed the posterior distributions of the effective neutrino mass m β in Eq. (1). Our main results are summarized as follows.
First, for tritium beta decays, the classical electron spectrum dΓ cl /dK e can be modified by replacing m i with m i · m 3 He /m 3 H to account for the exact electron spectrum including relativistic corrections. In this case, the difference between the exact relativistic spectrum and the modified classical spectrum dΓ cl /dK e can be safely ignored, as the dominant uncertainties in the measurements at KATRIN, Project 8 and PTOLEMY arise from the statistical data fluctuations. Furthermore, it is interesting to compare the exact spectrum with the effective one containing the usually considered observable m β . However, as we have demonstrated in a quantitative way, the validity of the effective mass m β actually depends on the energy resolution and the total exposure of a realistic beta-decay experiment. We show that the use of the standard effective neutrino mass for KATRIN and Project 8 is justified. For the future PTOLEMY experiment with an exposure of 100 g · yr, it will be problematic to introduce an effective neutrino mass, and the lightest neutrino mass should be used together with the exact spectrum. While this is known, we have performed here a general analysis with keeping the exposure and energy resolution as free parameters.
Second, as we have mentioned above, it is justified to describe the exact electron spectrum dΓ cl /dK e by the effective one with the effective neutrino mass m β in the KATRIN and Project 8 experiments. Therefore, it does make sense to derive the posterior distributions of the effective neutrino mass, given the latest experimental data on neutrino oscillations, beta decays, neutrinoless double-beta decays and cosmological observations. Although the cosmological upper bound on the sum of three neutrino masses pushes the posterior distribution of m β down to the region almost outside of the sensitivity of Project 8 in the NO case, it does not affect much the situation in the IO case due to the lower bound on m β 50 meV even in the limit of m 3 → 0. This also implies that future tritium beta-decay experiments are able to discriminate between neutrino mass orderings.
As KATRIN continues to accumulate more beta-decay events and the development of the techniques to be deployed in Project 8 and PTOLEMY is well in progress, it is timely and necessary to revisit the effective neutrino mass and its validity in future beta-decay experiments. The analysis presented in the present work should be helpful in understanding the approximations made in expressions of the beta spectrum and is suggestive for the improvement on the usage of the effective masses. In light of the precision measurement of the beta spectrum already in the first run of KATRIN, one may go further to extend the analysis to consider the presence of sterile neutrinos and other new physics, and/or to consider the electron-capture decay of 163 Ho.
Some experimental details about KATRIN, Project 8 and PTOLEMY experiments are as follows: • The KATRIN experiment [11] implements the so-called MAC-E-Filter (Magnetic Adiabatic Collimation combined with an Electrostatic Filter) to select electrons from tritium beta decays that can pass through the electrostatic barrier with the potential energy of E V . The observable in the MAC-E-Filter is the integrated number of the electrons that have passed through the energy barrier. The sharpness of the filter is characterized by the ratio between the minimum B min = 3 × 10 −4 T and the maximum B max = 6 T of magnetic fields, i.e., ∆ = E e B min /B max ≈ 1 eV, where E e ≈ Q = 18.6 keV is the electron energy in the range close to the endpoint and Q ≡ m 3 H − m 3 He − m e is the Q-value for tritium beta decay. Since the filter is insensitive to the transverse kinetic energy of electrons, the sharpness denotes roughly the maximum of transverse kinetic energies and thus can be regarded as the energy resolution. Adopting the energy window E V ∈ [Q − 30 eV, Q + 5 eV] and including all the statistic and systematic uncertainties, the KATRIN experiment [11] with a target tritium mass of O(10 −4 ) g can measure m 2 β with a 1σ uncertainty of 0.025 eV 2 , corresponding to the sensitivity of m β < 0.2 eV at the 90% CL in the assumption of m β = 0 as the true value. KATRIN can also directly measure the non-integrated beta spectrum by extracting the time of flight information from the source to the detector, operating in the so-called MAC-E-TOF mode. Since the emitting time of the electron at source is not directly measurable, a technique has been devised to infer the emitting time by chopping the source with some high voltage potential frequently. A lower counting rate and a worse energy resolution will be caused by the additional chopping procedure. The total target mass of 3 H planned to be loaded in the full KATRIN setup can be inferred from the formula of the tritium molecule number N (T 2 ) = A S · T · ρd ≈ 2.518 × 10 19 with the source cross section A S = 53 cm 2 , the tritium purity T = 0.95 and the column density ρd = 5 × 10 17 cm −2 , see Eq. (25) and Table 7 of Ref. [11] for details. Given the mass per tritium nucleus ∼ 5 × 10 −24 g, we obtain the total target mass of the full KATRIN as m KATRIN = 2.5 × 10 −4 g. The energy resolution of KATRIN in this work is fixed to ∆ KATRIN = 1 eV.
• Unlike the KATRIN experiment, the Project 8 collaboration will utilize the technique of cyclotron radiation emission spectroscopy to measure the electron energies [17]. If the magnetic field is uniform in the spectrometer, the cyclotron radiation of accelerating electrons can be observed for a few microseconds and its frequency can be precisely determined, leading to an excellent energy resolution. As has already been shown in Fig. 5 of Ref. [17], with the deployment of O(10 −4 ) g atomic 3 H and one year of running time, Project 8 is able to push the upper limit on the effective neutrino mass down to m β < 40 meV at the 90% CL, assuming the true value of m β = 0. In this work, we will adopt two extreme setups for Project 8: (i) an intermediate phase with the molecular 3 H, a target mass of m P8 = 5 × 10 −4 g corresponding to 5 × 10 19 tritium molecules, and an energy resolution ∆ P8m = 0.36 eV limited by the irreducible width of the final state molecular excitations [35]; (ii) an ultimate phase with the atomic 3 H, a target mass of m P8 = 5 × 10 −4 g, and an energy resolution of ∆ P8a = 0.05 eV which is limited by the inhomogeneity of the magnetic field ∆B/B ∼ 10 −7 [17]. 7 The target mass m P8 = 5 × 10 −4 g can be achieved with a gas volume of 100 m 2 as required by the phase IV of Project 8 and a gaseous tritium number density of 10 12 cm −3 .
• The PTOLEMY experiment has been designed to detect the cosmic neutrino background (CνB) [18][19][20] via the electron-neutrino capture on tritium ν e + 3 H → e − + 3 He, as suggested by Steven Weinberg in 1962 [51]. Thanks to the large target mass of 100 g tritium and the low background rate required for the CνB detection, PTOLEMY would have an overwhelmingly better sensitivity to the absolute neutrino mass than KATRIN does, namely, the relative uncertainty reaches σ(m 1 )/m 1 10 −2 for m 1 = 10 meV with an energy resolution of ∆ = 100 meV. In the PTOLEMY experiment, the energy of electrons from tritium beta decays will be measured in three steps. First, the MAC-E-Filter is used to select the electrons close to the endpoint, preventing the calorimeter from being swamped by the huge number of events in the energy range below the endpoint. Second, after passing through the MAC-E-Filter, the electrons are then sent to a long uniform solenoid, undergoing the cyclotron motion in the magnetic field of 2 T. Hence the radio signal can be implemented to track each single electron. Finally, the electrons are decelerated by the electrostatic voltage until their kinetic energies are 100 eV or so to match the dynamic range of a cryogenic calorimeter. The energy resolution of these electrons can be as low as 50 meV [20].
The limits of the former two are given as . For KATRIN, we use the likelihood presented in Fig. 4 of Ref. [9]. We find the likelihood can be well approximated by a skewed normal distribution: where erfc(x) is the complementary error function, with σ = 1.506 eV 2 , µ = 0.0162 eV 2 and m 2 β in units of eV 2 , as well as α = −2.005. Since the KATRIN experiment has the highest sensitivity to m β , we may have L β ≈ L KATRIN .
• L 0νββ -The constraints on the half-life of 0νββ are given by the existing 0νββ searches. The limits on the effective neutrino mass |m ββ | can be derived by using where G 0ν denotes the phase-space factor, M 0ν is the nuclear matrix element (NME), and m e = 0.511 MeV is the electron mass. In our numerical analysis we use the likelihood functions from Refs. [39,52], which include the experimental information of GERDA [53], KamLAND-Zen [54], EXO [55] and CUORE [52].
The likelihood functions have been obtained by analyzing the Markov chain files available from the Planck Legacy Archive.
With the likelihood functions listed above, the total likelihood relevant for our analysis can be calculated as L tot = L osc × L β × L 0νββ × L | 15,680 | 2019-10-18T00:00:00.000 | [
"Physics"
] |
Depth estimation from a single SEM image using pixel-wise fine-tuning with multimodal data
To support the ongoing size reduction in integrated circuits, the need for accurate depth measurements of on-chip structures becomes increasingly important. Unfortunately, present metrology tools do not offer a practical solution. In the semiconductor industry, critical dimension scanning electron microscopes (CD-SEMs) are predominantly used for 2D imaging at a local scale. The main objective of this work is to investigate whether sufficient 3D information is present in a single SEM image for accurate surface reconstruction of the device topology. In this work, we present a method that is able to produce depth maps from synthetic and experimental SEM images. We demonstrate that the proposed neural network architecture, together with a tailored training procedure, leads to accurate depth predictions. The training procedure includes a weakly supervised domain adaptation step, which is further referred to as pixel-wise fine-tuning. This step employs scatterometry data to address the ground-truth scarcity problem. We have tested this method first on a synthetic contact hole dataset, where a mean relative error smaller than 6.2% is achieved at realistic noise levels. Additionally, it is shown that this method is well suited for other important semiconductor metrics, such as top critical dimension (CD), bottom CD and sidewall angle. To the extent of our knowledge, we are the first to achieve accurate depth estimation results on real experimental data, by combining data from SEM and scatterometry measurements. An experiment on a dense line space dataset yields a mean relative error smaller than 1%.
Introduction
In the semiconductor industry, critical dimension scanning electron microscopes (CD-SEMs) are used to measure the spatial lateral dimensions of structures on a microchip. These B 1 Eindhoven University of Technology, Eindhoven, The Netherlands 2 ASML Netherlands B.V., Veldhoven, The Netherlands measurements are important for controlling the fabrication process, which enables yield optimization of a produced wafer. Currently, SEM is the fastest way of measurement that provides local geometry information. However, the obtained SEM images are a two-dimensional (2D) representation of the electron interactions with the surface. In practice, detailed metrology that provides the true 3D geometry of this structure is desired for various reasons. It is expected that 3D metrology will become crucial in the semiconductor industry's quest to keep up with the requirements of Moore's Law [1].
Depth estimation from 2D images has been studied thoroughly in the field of computer vision [2] and is nowadays applied to robotics [3], autonomous driving [4], medical imaging [5] and many other scene understanding tasks. Traditionally, these techniques relied on stereo pairs of input images [2], but more recently the subfield of monocular depth estimation has emerged [6]. Here, the depth estimation task is constrained with a single image available per scene during the inference phase. This paper concentrates on performing depth estimation on SEM data to analyze and predict the semiconductor's surface. Monocular depth estimation is challenging, as it is an ill-posed problem. This challenge results from the fact that multiple 3D scenes can be projected onto the same 2D scene. Currently, many state-of-the-art modeling techniques heavily rely on deep neural networks [7]. These models can perform inference on various types of data, by setting up a highdimensional non-linear regression or classification problem. Deep neural networks have been applied to many computer vision tasks such as image classification, object detection and semantic segmentation, achieving remarkable results. One reason for these results is the networks' ability to understand a geometric configuration by not only taking local cues into consideration, but also by employing global context such as shape or layout of the scene, which is extremely helpful for solving non-trivial computer vision problems.
Neural networks require large-scale datasets with (manually) annotated ground-truth labels, which can be a difficult operation. In the case of monocular depth estimation from SEM images, the ground-truth data can only be obtained from other sources, such as atomic force microscopy (AFM) [8], transmission electron microscopy (TEM) [9] and scatterometry, also referred to as optical critical dimension (OCD) metrology [10]. The first two sources provide highly accurate and local depth information. However, they commonly provide data in one dimension and are notoriously slow and labour-intensive. Alternatively, OCD metrology is extremely fast, much faster than SEM, but provides measurements averaged over a larger area on the wafer, typically 25 µm 3 or more.
One possibility to circumvent the labeling problem is to generate a synthetic dataset, containing representative geometries, with an electron scattering simulator. Open source implementations based on Monte-Carlo methods are currently available [11] and provide highly accurate simulations of propagating electrons through a material. However, the results of these simulations are not fully accurate. The electron beam and the detector are simplistically approximated, which negatively impacts the image quality. Thereby, physical phenomena, like electron-beam-induced charging and damage, are excluded, while models for the generation of so-called secondary electrons are hard to validate. Therefore, this approach forms only one part of the solution. A second training step is required where the model will be adapted to experimental (real) data.
Machine learning can be a helpful tool for deriving the above models. Domain adaptation is a sub-field of machine learning, where the goal is to maximize prediction performance on a target domain without (complete) labels, with the help of a related and well-labeled source domain, while the prediction task in both domains is identical [12]. In this Fig. 1 Qualitative results of the proposed method. Input SEM images are depicted at the top row and corresponding depth maps predictions at the bottom row. From left to right: synthetic contact holes, real experimental dense lines, real experimental isolated trenches. Predictions of the contact holes are inverted in order to improve visualization case, we have the sole availability of coarse-grained labels in the target domain (average depth from OCD), so we can classify this as a weakly supervised domain adaptation problem. More specifically, the goal is to fine-tune a pre-trained network with a limited set of experimental SEM data, paired with OCD metrology measurements. For doing so, an accurately aligned dataset of these modalities is required.
The objective of this work is to extract useful 3D information from SEM images, using advanced modeling techniques based on deep neural networks. First, a depth estimation method on synthetic data is explained. Next, this method is extended to work on measured experimental data, without any local ground-truth depth information. Example results of the proposed method are displayed in Fig. 1. This research work presents two contributions. First, we present a method that is capable of predicting a detailed height map and corresponding semiconductor metrics of synthetic SEM images under realistic noise conditions. Second, we demonstrate a weakly supervised domain adaptation technique, in order to incorporate the OCD data into the training procedure. We refer to this technique as pixel-wise fine-tuning.
The paper is organized as follows. After a survey of related work in Sect. 2, Sect. 3 discusses the proposed method in detail. Then, Sect. 4 provides the results and discusses the results in Sect. 5. Finally, the paper concludes in Sect. 6. Additional implementation details are provided in "Appendix." 2 Related work
Depth Estimation from SEM Images
Several techniques have already been developed to extract depth information from SEM images. A well-known method obtains depth information from observing disparities at descriptive points from a stereo image pair [13,14]. The stereo pair is acquired by tilting the specimen. Unfortunately, this method is not suitable for a SEM, since tilting is not possible due to geometric constraints imposed by the objective lens above the specimen (300-mm wafer). One way to overcome this issue is to tilt the beam (not the specimen) with deflectors [15]. But this tilt angle is limited to less than a degree in typical high-resolution SEMs. Another technique uses a four-channel secondary electron (SE) detector [16]. By combining these four SE intensity maps, it is possible to create a depth profile of the surface. However, this method is not compatible with the magnetic objective lenses that are typically used in a SEM. Moreover, all aforementioned techniques require a different hardware platform, which puts high demands on the system costs.
Also methods based on a single SEM image with conventional hardware have been proposed. In [17] SEM images are compared against a model library with physical models. This method predicts shape approximations interpolated from multiple models in the library. It has only been validated with line space patterns and so far seems to be hard to generalize to various geometries, materials and SEM settings. Alternatively, landing energy is exploited to extract depth information in top-down SEM images [18]. In certain conditions, the SE yield is sensitive to depth, while unresponsive to other shape parameters. The results obtained with synthetic SEM images were verified by experiments on an inverted pyramid shape with unit step depth transitions, but can be extended to more complex structures according to the authors. The main limitation of this approach is the requirement to change landing energies, which is typically undesirable for continuous measurement systems. Another recent work uses a neural network to predict 1D SEM-profile depths from synthetic 1D back-scattered electron (BSE) profiles [19]. A custom-weighted loss function was designed to train the network, which improved the results significantly.
Monocular depth estimation from natural images
Monocular depth estimation has been an active field of research over the years. Initially, supervised techniques were proposed [6], where ground-truth depth is available during training. Later, self-supervised techniques have become popular as well [20]. Here, depth is inferred by cleverly exploiting information from stereo data [21] or video data [22] during training. This paper will be focused on supervised methods because of the ready availability of ground-truth data for the simulated SEM images and the hardware limitations of stereo imaging.
Starting with [6], supervised depth estimation techniques evolved over the years [23][24][25], but along with the major improvements on established benchmarks [26,27], the networks became also quite complex [28]. Recent work [29] rephrased the depth estimation problem as an image-toimage translation [30], based on conditional generative adversarial networks (cGANs) [31]. These frameworks add a second network to the training process, which enforces an adversarial loss term, resulting in global consistency of the output. These networks show impressive results, even with a relatively straightforward prediction network [32].
SEM and deep learning
Deep learning is successfully applied to other tasks in SEM imaging. For instance, deep neural networks are used for line roughness estimation and Poisson denoising [33]. They also seem beneficial for removing artifacts without the need of paired training data [34]. Both works promise great potential for these kind of models in the field of SEM. Similarly, these applications are also established research fields with other use cases, for example, image denoising [35][36][37] on natural images and contouring [38] on medical images.
Methodology
Our approach consists of the following steps. First, a synthetic dataset is generated and pre-processed. Then, a neural network is pre-trained with the generated data. Next, the network weights are adapted using experimental data. After the training process, a diverse test set is used for validation, by comparing key semiconductor performance metrics. Information about the implementation is found in "Appendix B."
Synthetic data generation
For the development of the methods in this work, we developed datasets with two types of structures: contact holes (CHs) and line-based spaces (LSs). These datasets are explained in detail in the next sections. The resulting constructed geometries are used as input for a Monte-Carlo particle simulator. For this, we have adopted Nebula [39], which is an open source, GPU-accelerated, solution for simulating the electron-scattering processes in materials. This simulator is currently one of the most accurate solutions available and produces partially realistic SEM images corresponding to the input geometries.
Contact holes dataset
CHs are cylindrical holes inside a layer of material. A hole should span the entire layer along the axial (depth) dimension to ensure contact to the next layer. We have chosen CHs for several reasons. First, the geometry contains nontrivial shape information in two lateral dimensions (circular), in contrast with line-based spaces, where only one lateral The middle CH has no edge-width because the wall is perfectly straight. The right CH has overhang because SWA > 90 • , and it is not opened because the depth value is insufficient to reach the bottom layer (shaded area) dimension exhibits significant depth variation. Second, CHs are heavily used in the semiconductor industry, since they enable connecting subsequent layers in a device. Third, from an industry perspective, it is attractive to obtain a proper estimation of the depth value of every CH, in order to determine whether the CH is open or not. Unopened CHs result in failures of the device.
For the creation of randomized CH geometries, the parametric model displayed in Fig. 2 is used. All CHs are generated in a two-dimensional (x, y) grid of unit cells. The total size of the grid is 1024×1024 (nm) and contains 16 × 16 unit cells, which results in an average pitch of 64 nm. For individual CH generation, we distinguish two type of process deviations. Normal distributions are used to mimic the intra-field (local) process deviations as realistically as possible. Furthermore, inter-field (between image) deviations are applied to the parameters that influence the height prediction the most (depth and sidewall angle). Here, a uniform distribution is used to ensure the network is robust for all possible combinations. More specifically, the center point of a CH within a unit cell deviates from the center with x , y ∼ N (0, 1) nm. The top critical dimension (TCD) and bottom critical dimension (BCD), both in nm, are defined by: where rand ∼ max(N (0, 2), −10) and shift ∼ U(−5, 2). The same shift value is applied to all CHs within the grid. The skew of this distribution was chosen because the patterning process gives rise to a preference of tapered CHs (SWA < 90 • ). Also line-edge roughness (LER) is applied in the x-and y-direction to perturb the perfectly circle-shaped edge of the CH. More details are available in "Appendix A." Furthermore, the numerical values are derived from relevant experimental data. The depth of the CHs is varied between 20 and 100 nm with steps of 1 nm. One depth value is applied to all CHs in the grid, in order to mimic a lithographic process as close as possible. CHs chosen at random with probability p = 0.005 are unopened (filled with extra material), as shown in the rightmost CH in Fig. 2. One example of a resulting geometry is visualized in the leftmost image of Fig. 3. For the simulations, we have used SiO 2 (Silicon dioxide) as top material and Si (Silicon) as bottom material. For the settings of the electron beam, we employed a Gaussian distributed spotprofile, defined by its Full Width Half Maximum (FWHM) of 2.0 nm, a dose of 100 electrons per pixel and a landing energy of 800 eV. These settings are chosen to mimic common CD-SEM operation, except that currently a FWHM around 3.5 nm is more common. In total, we have simulated four geometry realizations per depth value, resulting in 320 images of 1024 × 1024 pixels, with a pixel size of 1 nm 2 .
Line space datasets
LSs are vertical or horizontal strokes of material in a regular fashion, separated by trenches (Fig. 4). Because of the presence of stochastic effects of the fabrication process, the LSs have non-smooth edges. In extreme cases, LSs can have interruptions or get (partly) connected to adjacent structures, often called micro-bridges. LSs are heavily used in devices, since they are the building blocks of transistors, as well as wiring between components.
The geometries should be roughly matched with the experimental data (examples in Fig. 1), which consists of dense lines (16 nm) with 32-nm pitch and isolated trenches (16 nm) with 112-nm pitch. The TCD, MCD and BCD are independently varied from 13 to 20 nm and the depth is varied from 15 to 30 nm and kept equal within one image. The 1D LER is applied to the line contours by an improved variant of the Thorsos method [40]. More details are found in "Appendix A." All parameter ranges were chosen a bit larger than the ranges of the measured data. This makes the simulated data a superset of the actual data, which ensures that all possible cases are covered by the simulated data. Defects such as micro-bridges are not modeled in the synthetic dataset. In total, 550 dense line geometries were constructed together with 1650 isolated trench geometries. Figure 3 shows one example for both.
For the simulator, the same settings were used as for the previous experiment, except for the landing energy (500 eV) and pixel size (0.64 nm 2 ), to obtain a better match with the experimental data. In total, we have simulated one SEM image per geometry, with a field of view (FOV) of 1024 × 1024 pixels.
Pre-processing
The fact that some parts of the CD-SEM system are not modeled in the simulator creates a distribution shift between the synthetic and experimental domains. In this Section, we elaborate on the steps taken for decreasing this domain shift. Furthermore, data augmentation techniques are discussed.
Noise
A simplified noise model of a CD-SEM system is displayed in Fig. 5. The first noise contribution is shot noise from the electron gun. The number of primary electrons (PEs) originating from the gun is Poisson distributed. When a PE hits the specimen, it may become a secondary electron (SE), which experiences a stochastic electron cascade (scattering) through the material. This results in a compound Poisson noise distribution. Both effects are accounted for in the simulator. The third noise contribution is from the detector, where dark current is assumed to be dominant. Dark current intrinsically behaves as shot noise (Poisson), but for large numbers, the Poisson distribution will approach a Normal distribution. Therefore, this detector noise is modeled as additive Gaussian
Histogram correction
CD-SEM systems work with a detector current which will be translated into a gray value. This value depends on various CD-SEM aspects, such as the electronics, signal gain, landing energy, etc. Scaling all gray values of an image to use the full dynamic range prevents saturation effects, while changing settings of the CD-SEM. We have implemented this by snapping the lowest 0.2% pixels to the lowest value possible, the highest 0.2% pixels to the highest value possible and scaling everything in between accordingly. Images are stored in 8-bit unsigned integer format. Eight bits typically provide sufficient dynamic range, while maintaining the memory load of millions of images acceptable.
Data augmentation
Additional data augmentation is performed on the fly when training the network. A smaller patch of 256 × 256 pixels is cropped from the generated image at a random location. Detector noise and histogram correction are applied next. Further augmentation may be horizontal flipping, vertical flipping and rotating, with a probability of 0.5 per event.
With experimental data, this probability is set to zero, since important aberrations, like charging, are not symmetrical and dependent on the fast-scan direction of the SEM. Examples of pre-processed synthetic images are displayed in Fig. 5.
During inference, an entire image is processed at once by the network, so the only augmentation steps that stay relevant are adding noise and histogram correction. With inference of experimental data, no augmentation step is required.
Depth estimation from synthetic data
This section involves model selection, network architecture, loss functions and explaining the training process in more detail.
Model selection
There are many ways to represent a 3D structure, e.g., a polygon mesh, a voxel grid or a depth map. To determine what data type is most suitable for this application, an initial experiment was performed for examining the SEM signal, using a simple geometry with a varying SWA, see Fig. 6. We observe no distinctive signal for overhanging structures and conclude that distinguishing them is not possible, with the chosen landing energy only. This implies that only one depth value per pixel location of the SEM image is sufficient to capture all depth information present in the image signal. True 3D data types, like voxel grids, would therefore be redundant. Instead, we have adopted to use depth maps, which directed the research into depth estimation models.
Recent literature on depth estimation uses standardized benchmarks to compare the performance of different approaches [4]. Supervised methods still have the best overall performance. Most supervised methods use a pixel-wise loss function. However, recent work [29] proposes adding an adversarial (non-local) loss term to the depth prediction network. This approach outperforms pixel-wise losses with a relatively simple prediction network and triggers the interest for conducting an extensive loss function evaluation study. This will be elaborated in a separate section. Fig. 7 Architecture of the prediction network, consisting of a convolutional front-end, 9 residual blocks and a transposed convolutional back-end. The number of channels and kernel size are displayed above the convolutional blocks. The width and height of the inputs during training are displayed at the left bottom. The stride of the convolutional layers is unity, except for the layer before (2) and after (1/2) the series of Resblocks. Reflection padding is applied prior to each convolutional block to reduce border artifacts
Network architecture
The network used is based on recent work [41] for image-toimage translation. We denote A s and A d as the SEM image and depth map domains, respectively, while a s and a d refer to training examples in both domains. The actual prediction network learns a mapping function G : A s → A d which takes a SEM image as input and outputs a depth map. Furthermore, depending on the loss function, we use a discriminator network with a mapping function D. This network takes a SEM image and a corresponding predicted depth map as input and outputs an error-parameter score that quantifies the quality of the realism.
A detailed overview of the prediction network is found in Fig. 7. It consists of 9 stacked residual blocks [42], together with a convolutional front-and back-end. All residual blocks have two convolutional layers and an identity connection to the next block. This connection is attractive because the convolutional layers only have to learn the difference between the input and the output, which is in many cases less demanding for the network. These skip connections also enable the construction of deeper nets, since they do not suffer from the vanishing gradient problem during the backpropagation phase. The number of filters in the first layer is set to 64. Instance normalization is used after each convolutional layer, followed by a rectified linear unit (Relu).
Loss functions
A loss function with multiple terms is used for more detailed optimization. We employ three terms, each operating at a different scale. At a local scale we use an 1 or 2 loss, as defined by: where n ∈ 1, 2 is the rank of the distance measure and p data denotes the probability distribution of the data samples. This loss term operates on pixel level. A perceptual loss, which operates on patch level, is used for regional features and is defined by: Here, F (i) denotes the i-th layer with M i total network elements. It minimizes the 1 -distance of the network's intermediate feature representations between the predicted and ground-truth samples. The applied network is VGG16 [43], which is pre-trained with Imagenet [44] data.
For the global features, we have trained the prediction network together with a discriminator network. The network then becomes a generative adversarial network (GAN) [45], which can also be used for image-to-image translation [30] when adding conditional inputs. In this case, a least squares GAN (LSGAN) loss [46] is used, which consists of a generator loss and discriminator loss, resulting in the following specification: Unlike cross-entropy functions, the squares in Eq. (4) stronger penalize samples far from the decision boundaries, even when classified correctly, which helps to stabilize the training process [47]. For the discriminator, we have used a multi-scale Patch-GAN [30], operating at a receptive field of 70 and 140 pixels (which is the default operation setting), each with three convolutional layers. Also here, all layers are followed by a normalization and activation layer, while the first layer starts with 64 filters. Finally, we can construct the resulting loss function as a linear combination of the aforementioned terms, where a part is minimized over G, and the last part over D, such that:
Training process for pre-training
The data are divided in a training, validation and test set, consisting of 70%, 5% and 25% of the data, respectively. The test set is carefully constructed so that all possible depths are represented. Training is done in randomized batches of 16 images. As already mentioned, data augmentation is performed on the fly. The amount of noise added to the images is uniformly distributed between zero and the specified maximum σ required to mimic the detector noise. After empirical experiments, this turned out to be the best choice. A possible reason for this choice is that the network is not able to establish proper kernel filters when only receiving very noisy images. The Adam optimizer [48] is used for minimizing the total loss function, for 300 epochs, with a learning rate of 0.0002 and momentum parameters β 1 = 0. where y pred and y gt are the predicted and ground truth depth map. N is the total number of pixels.
Depth estimation from experimental data
The shift in distributions between the experimental domain and the synthetic domain requires an extra step. In this case, we have paired experimental SEM data with available OCD data. Due to the lack of local information in the OCD data, the ground truth is only partially present, which makes that this method can be classified as a weakly-supervised learning approach.
Experimental datasets
We have employed a CD-SEM system to measure a focus exposure matrix (FEM) wafer just after a lithography step. On a FEM wafer, the focus and dose of the scanner is gradually changed during exposure, which results in considerable geometry variations over different locations on the wafer. The wafer contained 16-nm dense lines (32-nm pitch) and 16-nm isolated trenches (112-nm pitch). The available data consist of two measurements for 1341 unique locations on the wafer. One SEM measurement with an FOV of approximately 1 µm 3 is available, as well as one OCD measurement with an FOV of 25 µm 3 . The OCD measurement contains several parameters (scalars) that are directly We have constructed two datasets, one with dense lines and one with isolated trenches. The dense-line dataset contains 331 images, where the depth varies between 17 and 24 nm. The isolated-trenches dataset contains 682 images, where the depth is within 26-27 nm. Although the depth range of the isolated trenches is insufficient for testing the depth predictions, we use these data to perform other useful experiments. The total number of measurements is lower than the total number of measurements on the wafer, since cases where the OCD trapezoid model has not converged properly are omitted.
Pixel-wise fine-tuning
The domain adaptation step is implemented by a novel method, further referred to as pixel-wise fine-tuning. In general, fine-tuning with a single value as ground truth entails that the optimization problem of the model is underconstrained. In order to prevent the network drifting from the manifold of realistic structures, some training regularization is required. The inference on experimental data without fine-tuning the network turned out to be qualitatively correct in terms of lateral shape information, but quantitatively incorrect in terms of depth information in the axial direction. Therefore, we have decided to generate a new ground-truth by combining information from the resulting depth maps with corresponding OCD depth values. This re-enables pixel-wise training, thereby solving the under-constrained problem. This domain adaptation method is valid for this use case because the properties of a lithographic multilayer etch process imply that the structure height within the field-of-view of and OCD measurement is very constant. Alternatively, we have tried to regularize the network by fine-tuning only a subset of the layers or adding a discriminator to the loss function that was specifically trained on realistic depth maps. The results of both methods were not satisfactory because artifacts were introduced, so that it will not be treated further.
The pixel-wise ground truth is produced by scaling the depth maps (D pt ) obtained from inference of the experimental images on the pre-trained network. The scaling is defined by where d OCD denotes the depth parameter from the OCD model and d pt is the depth derived from the depth map D pt . Matrix D gt is the resulting depth map. The value of d pt is determined by the distance between the two peaks in the histogram of the depth map, displayed in Fig. 8. More specifically, the histogram bins have a width of 0.01, and the largest bin of the lower half and the largest bin of the upper half of the histogram are selected. These peaks represent the values of the averaged bottom-layer surface depth and the averaged depth of the LSs. This method is robust for the presence of noise in D pt and produces consistent results.
Artifact removal
The predicted depth maps of isolated trenches suffer from artifacts at the surface between the trenches, most likely due to charging effects present in the experimental data. These artifacts are present as small pits from the surface of the depth map and do not interfere with the border of the trench or the trench itself. We have solved this issue by adding one processing step, just prior to the pixel-wise scaling operation. The processing step entails element-wise multiplication with a dilated binary map (b ct ) originating from a SEM contouring algorithm, which exploits an adaptive-threshold method. This step is also depicted in Fig. 8 at step (b). It removes the artifacts while preserving the rest of the information in the depth map. With this ground-truth, the network learns to ignore charging artifacts, which results in a correct output. Since SEM contouring algorithms are available for many structures, this method can be extended to other use cases.
Training process for fine-tuning
The entire training process is depicted in Fig. 8. Pre-training is performed as described in the previous sections concerning synthetic data. The experimental data are separated in sets, 70% train, 5% validation and 25% test. Fine-tuning is done for 100 epochs using Adam solver, with a learning rate of 0.001. Data augmentation and detector noise are not applied. Several models are trained with different loss configurations. The same performance metrics are used as in the validation during the pre-training process.
Post-processing
Several key performance indicators that are relevant for the semiconductor industry can be inferred from the obtained depth maps. We introduce the following notations. The area at depth z is A z = N z · a p , where N z denotes the number of pixels below (or above with dense lines) depth z within a slice at depth z of the structure, selected with a threshold operation. Parameter a p is the area of one pixel. In this work a p = 1 nm 2 for CHs and a p = 0.64 nm 2 for LSs. For selecting individual structures, each unit cell is selected first, with a mask. Then the following operations are performed.
Semiconductor metrics for CHs
The parameters present in the model of Fig. 2 have to be retrieved for each individual contact hole. The following metrics will be used. The critical dimension of a CH is calculated with the formula of the area of a circle. Therefore, this metric can be seen as average critical dimension.
Semiconductor metrics for LSs
The parameters occurring in the model of Fig. 4 representing local information will be gathered as follows.
-TCD: A z top /L where z top = z ceil + 2nm and L is the length of the selected structure and z ceil is the location of the leftmost peak of the histogram function. Additionally, global information should be derived from the depth map to enable validation with OCD data.
-Average CD at depth z: This is P · N z /N where P denotes the pitch of the pattern and N the total number of pixels in the image. -Average depth value: This value is calculated with the histogram method as described earlier.
Results
In this section, we present qualitative and quantitative results.
The following section elaborates on synthetic data, predominantly on the experiment with the CH dataset. The second section focuses on the experimental LS dataset.
Synthetic results
The depth estimation network is trained as explained in the previous sections. The network did not suffer from overfitting, since the performance on the validation set did not degrade at the end of the training procedure.
Contact holes dataset
Qualitative results of CHs are found in Figs. 1 and 12. The mean absolute errors are displayed in Fig. 9. All provided metrics are calculated with the post-processing method discussed in the previous section. It can be observed that a network with only a local 1 loss works best for all metrics. The obtained relative error of the depth is between 4.2 and 6.1% for realistic noise levels. TCD, BCD and SWA correlations of individual CHs for two different SEM images are displayed in Fig. 10. We have found that TCD and BCD always show a good correlation. Furthermore, SWA correlation is reasonable, but tends to become less accurate in images with many overhanging CHs.
In this work, we primarily focus on depth. The results of the best performing network (yellow bars in Fig. 9 with 1 loss) are displayed in Fig. 11. The depth inference by the network (indicated by get depth) closely follows the depth programmed in the geometry (indicated by set depth), which is used to generate the simulated SEM image. It can be observed that deeper holes result in less accurate predictions, since the average error grows with the depth. This is explained by the fact that when the CH becomes deeper, the change in SEM signal becomes smaller, i.e., the SEM signal scales non-linearly with the depth of the CHs. A possible physical explanation is that the total number of detected electrons is lower for deeper structures, while some noise contributions are not dependent on depth, which results in a lower SNR for deeper structures. Partially filled holes perform well (which proves applicability of this technique for defect detection) but are sometimes less correlated with The network can also handle large field-of-view SEM images. A qualitative result of simulated data is shown in Fig. 12 and the corresponding quantitative pixel-wise absolute difference with ground truth is displayed in Fig. 13.
Line-spaces with roughness dataset
Global model performance on the synthetic LS dataset is summarized in Fig. 14. We observe similar behavior between the models, also here 1 performs best on all metrics except for TCD and SWA, where the model was trained with 2 , LSGAN and VGG loss. It is possible to combine the metrics of different models in the post-processing to get even better predictions for SWA, as shown by the purple bars.
Experimental results
After extensive training with synthetic data, the network was not able to give satisfactory results on experimental data. Therefore extra training steps were required to implement, which we explained in the methodology section. The results of these steps are presented in the following sections.
Dense lines dataset
Some examples of depth maps obtained from SEM images of dense LS patterns are displayed in Figs. 1, 8 and 17. Figure 15 shows the performance of the model trained with 2 loss on depth estimation for individual lines. The depth inferred by the network (indicated by get SEM depth) closely follows the depth measured by the OCD tool (indicated by get OCD Depth). The average error is low, smaller than 1 nm, which means this network is able to predict depth very accurately. This is an important result because it shows Fig. 16. Here, we used 1 loss for training. SEM is most sensitive for TCD, since it shows a clear correlation with the OCD data. MCD and BCD perform reasonably well. There is some offset present in the slope of the data points. This could be explained by the fact that the SEM signal is less sensitive for lower structures, than the OCD tool. Besides, the definition of MCD is not strict in the parameter model of the OCD tool. Also with experimental data, the network is able to handle large field-of-view results. A qualitative depth map is shown in Fig. 17, and the corresponding quantitative pixelwise absolute difference with ground truth is displayed in Fig. 18.
Isolated trenches dataset
Qualitative results of the depth maps before and after finetuning are displayed in Fig. 19. The final result shows that the charging artifacts are completely removed through better learning and modeling.
Since this dataset does not have sufficient variation in depth values, only the CD value is interesting to evaluate. The corresponding OCD model has only one CD value defined. We obtain a mean absolute error of 0.46 nm with minimal slope off-set, which indicates that the lateral information in the depth map is in accordance with both modalities. Furthermore, these depth maps can be used to measure the depth of micro-bridges inside the trenches, since the network should be able to cope well with intermediate depths values. Fig. 19 Qualitative results of isolated trenches. Left: depth map prediction prior to fine-tuning. Right: depth map prediction after fine-tuning with artifact removal
Discussion and limitations
Although an extensive ablation study on the performance of different loss functions was performed, as well as hyperparameter tuning of the network and training process, it cannot be guaranteed that it is the optimal configuration for this use case. The most important goal of this research is to prove that the technique presented is feasible with the type of data available. Even though the results are promising, it is important to note that there are some caveats to the presented approach.
With the presented method, the measurements from the OCD tool were used as a reference, by using them to create a new ground truth. Evidently, the precision of this measurement tool is also limited. Especially because the OCD value is averaged over a much larger area of the wafer, the local accuracy cannot be guaranteed. Ideally, this method should be validated with a third metrology tool. For example, this could be implemented by comparing TEM cross sections or AFM traces with the predicted depth maps at certain points on the wafer. It would also be possible to calibrate the network with these measurements, but in the ideal case we only want to exploit it for validation, since the cost (slow, expensive, destructive, etc.) of these measurements is much higher than that of OCD metrology.
Currently, a histogram-based approach is used to match the predicted profile to the OCD measurement. This method was found empirically and showed acceptable results. However, it would be more accurate to use a Maxwell solver [49,50] for this purpose. By feeding the predicted depth map into the solver, a virtual OCD measurement can be made. This enables more accurate comparison between the modalities.
The artifact removal method for isolated trenches works well in the performed experiments. Nevertheless, it is expected that this method will degrade for certain circumstances. With specific combinations of materials and geometries, charging effects may occur more intensely, also in the deeper structures of the depth map. A straightforward solution is to incorporate the charging effects in the simulation models. However, this is not a trivial task due to the complexity of the physics involved. Alternatively, datadriven solutions, such as unsupervised domain adaptation, are interesting future research directions for this purpose.
Conclusions
We have shown that deep learning models are suitable as a conceptual solution for extracting 2D and 3D metrics from synthetic SEM images. The final prediction network, which is based on a image-to-image translation task, was trained with several loss functions on different scales. For depth estimation on these images, a single 1 loss turned out to be the best choice for CHs, with a mean relative error of 4.2-6.1% on depth. The 1 loss also works best for depth prediction on synthetic LSs, but for TCD and SWA a combined loss ( 2 loss, perceptual loss and adversarial loss) results in the lowest error metrics. It is also possible to combine both networks ( 1 -based and combined-based) to obtain a slightly better performance on SWA. We also showed that the network was able to detect defected contact holes in most cases, which promises great potential for defect detection.
Furthermore, we have demonstrated that it is possible to calibrate the model in order to cope with real experimental data. We showed that it is possible to achieve an average prediction error below 1 nm after calibration with OCD data. The network can also well summarize to defects, such as microbridges, even if they are not modeled in synthetic data. This generalization power provides great potential for estimating the height of these defects. However, ideally this hypotheses should be validated first with a third metrology tool.
The result of this work makes it possible to use the threedimensional information hidden in a SEM image. While other technologies used for this purpose have significant shortcomings in applicability or practicability, the current method may be applicable to industrial measuring equipment with limited calibration data and executed on conventional computing platforms. intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm ons.org/licenses/by/4.0/.
A.1 Contact holes
For CHs, roughness is applied by connecting N equidistant points on a virtual circle, where N is set to N = 73. First, the distance of each point to the center of the circle is changed by a normally distributed variable ( f [m]), where the standard deviation is uniformly distributed (U), which is specified by The numbers f [m] with 0 ≤ m ≤ N − 1 are convolved with a sampled Gaussian function, specified as: with a specific correlation length σ CL that is normally distributed and empirically determined as σ CL ∼ N (1, 9). The corresponding convolution ensures smoothness around the perturbed cylindrical shape between various neighboring points, resulting in more realistic edges. This convolution is specified by After adding roughness, the top and bottom structures are connected in the z-dimension.
A.2 Line spaces
For line edge roughness (LER), the Thorsos method [51] is applied as described in [40]. This is a power spectral densitybased method where the autocorrelation is approximated as For the correlation length (l c ), roughness factor (α) and standard deviation (σ ), we have used 16.8 nm, 0.5 and 0.77 nm, respectively. These values are closely matching with the available experimental data.
Appendix B: Implementation details
The geometries were created with Python and Numpy and stored as text format in *.tri files. Nebula [39] was used for SEM simulations. The simulations were performed at a GPU cluster with GPU K80 video cards (memory of 24 GB).
The dataset was constructed with Python and Pandas. The electron yield numbers of SEM images were stored in 8bit unsigned integer format. The depth values in the maps were stored in 32-bit float format. The depth estimation network was implemented in Pytorch 1.3.0 with Python 3.6. Tensorboard 2.0.0 was used for visualization of the validation metrics. The neural network was trained at a GPU cluster with one V100 video card (memory of 32 GB). Post-processing was done with Python, NumPy, SciPy and OpenCV. For visualization of the data MATLAB, Matplotlib, Visio and Excel were used. | 10,241 | 2022-07-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
The Prognostic Value of lncRNA MCM3AP-AS1 on Clinical Outcomes in Various Cancers: A Meta- and Bioinformatics Analysis
Background MCM3AP antisense RNA 1 (MCM3AP-AS1) is a newly identified potential tumor biomarker. Nevertheless, the prognostic value of MCM3AP-AS1 in cancer has been inconsistent in the available studies. We performed this meta-analysis to identify the prognostic role of MCM3AP-AS1 in various cancers. Methods We searched PubMed, Web of Science, EMBASE, and the Cochrane Library databases to screen relevant studies. Hazard ratios (HR) or odds ratios (OR) and corresponding 95% confidence intervals (CI) were used to evaluate the relationship between aberrant MCM3AP-AS1 expression and survival and clinicopathological features (CFS) of cancer patients. A meta-analysis was performed using STATA 12.0 software. Additionally, results were validated by an online database based on The Cancer Genome Atlas (TCGA). Subsequently, we analyzed the MCM3AP-AS1-related genes and molecular mechanisms based on the MEM database. Results Our results showed that overexpression of MCM3AP-AS1 was related to poor overall survival (OS) (HR = 2.00, 95% CI, 1.52–2.64, P < 0.001) and relapse-free survival (RFS) (HR = 3.28, 95% CI 1.56–6.88, P = 0.002). In addition, MCM3AP-AS1 overexpression was associated with TNM stage, differentiation grade, and lymph node metastasis, but not significantly with age, gender, and tumor size. In addition, MCM3AP-AS1 overexpression was verified by the GEPIA online database to be associated with poorer survival. The further functional investigation suggested that MCM3AP-AS1 may be involved in several cancer-related pathways. Conclusions The overexpression of MCM3AP-AS1 was related to poor survival and CFS. MCM3AP-AS1 may be considered a novel prognostic marker and therapeutic target in various cancers.
Introduction
Cancer threatens human health, is a leading cause of death, and is a major obstacle to increasing life expectancy in countries worldwide [1,2]. While significant advances in cancer research have been made, the treatments developed and patient prognosis have not met expectations, necessitating a change in how cancer is researched and treated [3]. Numerous cancers can be prevented or effectively treated if diagnosed early [4]. The presence of tumor markers helps in the early detection of cancer [5]. Thus, looking into novel tumor markers, finding tumors earlier, and treating patients immediately and effectively can help to improve their prognosis.
Long noncoding RNA (lncRNA) is a noncoding transcript with a length larger than 200 nucleotides, which cannot encode proteins owing to open reading frame deficiency [6,7]. Through continuous research, lncRNA has been identified to be engaged in transcriptional and posttranscriptional regulation by interacting with DNA, RNA, or proteins and regulates various physiological and pathological processes [8][9][10]. Aberrant expression of lncRNA acts as suppressor genes or oncogenes and is involved in tumorigenesis, progression, and metastasis [11]. Therefore, lncRNAs with distinctive expression and functional variety can be regarded a diagnostic and prognostic biomarker and may provide new therapeutic targets for the clinic [12,13].
The fast development of bioinformatics provides a broad prospect for the research of disease diagnosis and therapeutic targets [26]. For example, Lee et al. found that DLK2 acts as a potential prognostic biomarker for RCC based on bioinformatics analysis [27]. Zhou et al. suggested that patients with CYB561 overexpression have reduced OS and increased risk of death, and CYB561 may serve as a valid clinical prognostic biomarker for breast cancer [28]. Therefore, to further 2 Disease Markers understand the prognostic potential of lncRNA MCM3AP-AS1 expression, we performed bioinformatics analysis to investigate the potential prognostic value of MCM3AP-AS1 in cancers. In addition, we explored the genes and pathways associated with MCM3AP-AS1. To better guide the clinical work, we intend to explore the potential of MCM3AP-AS1 as a novel tumor marker and therapeutic target.
Inclusion and Exclusion
Criteria. The inclusion criteria were as follows: (i) the expression level of MCM3AP-AS1 in tumor tissues was detected and divided into two groups of high and low expression; (ii) provides information on the association of MCM3AP-AS1 with survival or CFS; (iii) reported hazard ratio (HR) for OS and RFS or provided survival curves to allow calculation of HR; and (iv) all data were obtained from clinical samples. The exclusion criteria were as follows: (i) reviews, case reports, confer-ence abstracts, letters, or animal studies; (ii) studies without survival or clinicopathological data; and (iii) data from the database.
Data
Extraction and Quality Assessment. Two authors independently screened for inclusion in the study and extracted the required information and data [29]. When there was disagreement, a third author intervened to reach a consensus. Based on the inclusion and exclusion criteria, the following information was extracted: (i) name of first author and year of publication, (ii) country of publication, (iii) tumor type, (iv) sample size, (v) lncRNA MCM3AP-AS1 detection method, (vi) cut-off value, (vii) follow-up time, (viii) outcomes, and (ix) OS and RFS data. We evaluated the quality of the included studies according to the Newcastle-Ottawa Scale [30] (NOS), which used nine entries to assess studies, with one point for each entry satisfied and a total score between 0 and 9. Based on the scores obtained, they were classified as high quality (7-9), moderate quality (4)(5)(6), and low quality (0-3). All scoring was done independently by two authors.
Validation by Reviewing Public Data. Gene Expression
Profiling Interactive Analysis (GEPIA) is based on The Cancer Genome Atlas (TCGA) and can be used to validate gene differential expression analysis in tumor/normal tissues [31]. Our meta-analysis used GEPIA to validate the association of MCM3AP-AS1 expression with OS and detect the distinction of MCM3AP-AS1 expression levels between normal and tumor tissues. Survival analysis was performed using the K-M method and log-rank test, and the figure of K-M curves displayed the HR and P value. 3 Disease Markers 2.6. Predicting Target Genes and Building Signal Pathway Network. We acquired MCM3AP-AS1-related genes from the MEM database [32] (https://biit.cs.ut.ee/mem/index .cgi). Later, we performed gene ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis on the obtained genes by online databases (http://www.bioinformatics.com.cn). Furthermore, we constructed and visualized the MCM3AP-AS1-related signaling pathway network using Cytoscape software [33].
Statistical Analysis.
We predicted the correlation between MCM3AP-AS1 expression and tumor patients' survival based on HR and 95% confidence interval (CI). Some of the included studies had precise survival data that could be utilized directly. For the remaining studies that only provided KM curves, we used Engauge Digitizer V.4.1 soft-ware to extract survival data and calculate HR and 95% CI [34]. Survival outcomes were expressed by log HR and standard error (SE), and clinicopathological parameters were expressed by odds ratio (OR) and 95% CI. Between-study heterogeneity was assessed using chi-squared tests and the I 2 statistic. We used a fixed-effects model to analyze the results when I 2 < 50% and the P value of Q test ðPQÞ ≥ 0:05. Otherwise, a random-effects model was used. If there was significant heterogeneity between studies, subgroup analysis was used to find the source of heterogeneity. Meta-analysis outcomes were shown using forest plots. Begg's funnel plot and Egger's regression test were used to evaluate publication bias. To assess the stability of the effect of independent studies on the results, we performed a sensitivity analysis on this. The study results were analyzed using STATA 12.0, and P < 0:05 was deemed statistically significant.
Characteristics of Studies.
We retrieved a total of 123 articles from the four databases (PubMed, Web of Science, EMBASE, and the Cochrane Library), and 16 studies were finally included through screening. Figure 1 ( Figure 1) shows the process and results of screening the literature according to PRISMA criteria. All the included studies were published in 2019-2021 and were from China. Ultimately, the included studies included 12 types of cancer, such as cervical carcinoma (CC) [35], CRC [16,36,37], EC [17], HCC [38], LC [19,39], lymphoma [40], NPC [20], OSCC [21], PC [22], papillary thyroid cancer (PTC) [41], PCa [23,42], and RCC [24]. There was sufficient data for OS and RFS to be considered as survival outcomes, and Table 1 demonstrated the basic characteristics of these studies. Figure 2(a) shows the relationship between MCM3AP-AS1 expression and OS. Twelve studies with 816 patients were included, and all the data were obtained from clinical samples. We used a fixed-effects model since these studies had no heterogeneity (I 2 = 0:0%, PQ = 0:826). Metaanalysis results showed that tumor patients with high MCM3AP-AS1 expression had poor OS (HR = 2:00, 95% CI 1.52-2.64, P < 0:001) (Figure 2(a)). Therefore, MCM3AP-AS1 was an independent factor in the low survival of tumor patients.
Association of MCM3AP-AS1 Expression Levels with OS and RFS.
In addition, two studies were included for RFS analysis. The fixed-effect model was applied (I 2 = 0%, PQ = 0:411). The
Subgroup Analysis of the Association between MCM3AP-AS1 Expression Level and OS.
To further assess the relationship between MCM3AP-AS1 expression levels and OS, we performed a subgroup analysis based on the following factors: the system of cancers (digestive system, urogenital system, respiratory system, or other) (Figure 3(a)), sample size (≥80 < 80 tissues) (Figure 3(b)), follow-up time (>60 or ≤60 months) (Figure 3(c)), and article quality (NOS scores ≥ 7 or <7) (Figure 3(d)). The outcomes of the subgroup analysis did not change the predictive value of MCM3AP-AS1 for OS in cancer patients.
Sensitivity Analysis and Publication Bias.
To assess the effect of each independent study on the OS results, we performed a sensitivity analysis. After excluding each eligible study, the outcomes did not change significantly, thus substantiating the robustness of the meta-analysis results and the reliability of MCM3AP-AS1 expression on OS prediction ( Figure 5). Begg's funnel plot and Egger's regression test were used to investigate possible publication bias. Our results revealed no obvious publication bias for OS (P > jtj = 0:382; Figure 6(a)), tumor size (P > jtj = 0:939; Figure 6(b)), TNM stage (P > jtj = 0:729; Figure 6(c)), lymph node metastasis (P > jtj = 0:750; Figure 6(d)), differentiation grade (P > jtj = 0:883; Figure 6(e)), age (P > jtj = 0:972; Figure 6(f)), and gender (P > jtj = 0:599; Figure 6(g)). (Figure 7). Furthermore, by combining MCM3AP-AS1 expression data from all TCGA databases and OS data of human tumors, the GEPIA survival plots were used to divide 9471 patients into the MCM3AP-AS1 high-expression group and the MCM3AP-AS1 low-expression group. The results showed that upregulation of MCM3AP-AS1 expression predicted poorer OS, confirming the results of our metaanalysis (Figure 8(a)). Moreover, the violin plot showed that the expression level of MCM3AP-AS1 was significantly related to the clinical stages of human tumors (Figure 8(b)).
Analysis of MCM3AP-AS1-Related Genes.
We filtered the top 100 MCM3AP-AS1-related genes from the MEM database and found that ZNF397, MRPS25, and RBM12B were the top three predicted target genes, closely associated with MCM3AP-AS1 gene expression Figure 9). Furthermore, we used GO and KEGG enrichment analysis to understand the potential molecular mechanisms of MCM3AP-AS1 in cancer ( Figure 10; Table 3). Also, we used Cytoscape software to make a signaling pathway network of these MCM3AP-AS1-related genes that coexpressed with MCM3AP-AS1 ( Figure 11).
Discussion
Cancer remains a major public health problem worldwide and is one of the leading causes of death in every country [43]. In the past two years, cancer incidence and mortality rates have increased further due to delays in the diagnosis and treatment of cancer due to the novel coronavirus [44]. However, early cancer detection and advances in treatment can improve patient survival rates [45]. It has been shown that many lncRNAs are abnormally expressed in diverse cancers. lncRNA can influence cancer development and progression by accelerating tumor cell proliferation, metastasis, and invasion [46]. Furthermore, because of their tissue specificity and stability, lncRNAs have the potential to be therapeutic targets as well as diagnostic or prognostic bio-markers [12]. Therefore, lncRNA is an important biomarker for cancer diagnosis and treatment, and it could be used as a possible therapeutic target to improve the prognosis of people with cancer. As a study reported, several lncRNAs play an essential part in the tumor occurrence and development of different cancers [47]. For example, Fang et al. [48] found that lncRNA SLCO4A1-AS1 was highly upregulated in GC and accelerates growth and metastasis of GC. Furthermore, they conclude that SLCO4A1-AS1 is an important oncogenic lncRNA in GC, and SLCO4A1-AS1 is a potential novel therapeutic target for GC. A study by Bhan et al. suggest that lncRNA PVT1 accelerates breast cancer proliferation and metastasis as an oncogene and may be a potential therapeutic target for breast cancer. Therefore, it is crucial to identify new tumor markers 10 Disease Markers associated with the prognosis of malignant tumors. lncRNA can be considered as molecular marker for tumors, and its expression can be used to predict tumor prognosis and patient prognosis, providing a new basis for cancer diagnosis and treatment [49].
To improve the prognosis of patients with cancer, an increasing number of studies have identified biomarkers that can predict cancer prognosis through bioinformatics analysis, such as Zhao et al. who found that aberrant expression of STEAP1 in pancancer predicted survival and CFS and could be a potential therapeutic target [50]. Chen et al. suggested that ALKBH7 may serve as a potential prognostic pancancer biomarker and is involved in the immune response [51]. Thus, we investigated the expression levels of MCM3AP-AS1 in cancers through the GEPIA database. The outcomes showed that MCM3AP-AS1 was overexpressed in various cancers, and patients in the high-expression group had poor OS. Then, we selected MCM3AP-AS1-related genes from the MEM databases, performed GO and KEGG enrichment analysis, and constructed a signaling network to better define the functions of MCM3AP-AS1 in cancers. The outcomes of GO analysis revealed that MCM3AP-AS1 has a lot to do with the nucleoplasm, nucleus, transcription, and poly(A) RNA binding. Furthermore, the results of KEGG analysis revealed that MCM3AP-AS1 was significantly correlated with RNA transport, ribosome biogenesis in eukaryotes, spliceosome, and regulating pluripotency of stem cell-related signaling pathways. Moreover, we have further investigated the 13 Disease Markers mechanism of MCM3AP-AS1 in cancers. In CRC, the high expression of lncRNA MCM3AP-AS1 promotes cell metastasis and proliferation by regulating miR-193a-5p/SENP1 [37]. MCM3AP-AS1 is upregulated in HCC and enhances the growth of HCC by targeting the miR-194-5p/FOXA1 axis [38]. In PC, MCM3AP-AS1 accelerates migration and growth through modulating FOXK1 by sponging miR-138-5p [22]. To investigate the association between MCM3AP-AS1 and multiple cancers, we concluded MCM3AP-AS1 and its functional role and related target genes, as shown in Table 4.
Notwithstanding, there are some limitations to our study. First, the literature we included was all from China, so there may be selection bias in our outcomes. Second, there is no uniform cut-off value for the MCM3AP-AS1 expression level, and the survival data HR and 95% CI for some studies were extracted by Engauge Digitizer software and may contain statistical errors. Third, only one of the included studies demonstrated that downregulation of MCM3AP-AS1 was linked to the survival of CC.
Conclusions
In conclusion, our meta-analysis demonstrated that overexpression of MCM3AP-AS1 in cancers was significantly associated with poor survival and CFS. Furthermore, MCM3AP-AS1 can be considered a novel prognostic biomarker and therapeutic target for various cancers. Nonetheless, our study has some limitations, and these conclusions need to be confirmed by additional high-quality, large sample size, and multicenter studies.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. | 3,546.2 | 2022-06-24T00:00:00.000 | [
"Medicine",
"Biology"
] |
Critical intermittency in rational maps
Intermittent dynamics is characterized by long periods of different types of dynamical characteristics, for instance almost periodic dynamics alternated by chaotic dynamics. Critical intermittency is intermittent dynamics that can occur in iterated function systems, and involves a superattracting periodic orbit. This paper will provide and study examples of iterated function systems by two rational maps on the Riemann sphere that give rise to critical intermittency. The main ingredient for this is a superattracting fixed point for one map that is mapped onto a common repelling fixed point by the other map. We include a study of topological properties such as topological transitivity.
Introduction
This paper will provide and study examples of iterated function systems by two rational maps on the Riemann sphere that give rise to intermittent time series.The central example of the paper is intermittency of a type that we call critical intermittency, where the main ingredient is a superattracting fixed point for one map that is mapped by the other map onto a common repelling fixed point.We consider a topological description of the dynamics for which we study density of orbits of the semi group generated by the iterated function system.And we consider a metrical description by looking at statistical properties of the intermittent time series.
In dynamical systems theory, intermittency stands for time series that alternate between different characteristics.It in particular indicates times series that appear stationary over long periods of time and are interrupted by bursts of nonstationary dynamics.These are called the laminar phase and relaminarization.Explanations for the occurrence of intermittent time series were given by Pomeau and Manneville [13], see also [3].They offered explanations using bifurcation theory, and distinguished different types of intermittency caused by different local bifurcations.Later research added to the list of mechanisms giving intermittency, including crisis induced intermittency, homoclinic intermittency, on-off intermittency and in-out intermittency.
1.1.Critical intermittency in iterated function systems of logistic maps.The type of dynamics we consider in this paper is related to the following example from interval dynamics.
Denote g a (x) = ax(1 − x) for the logistic map on the interval [0, 1], with 0 < a ≤ 4. Consider the iterated function system generated by the two maps f 0 = g 2 and f 1 = g 4 .This defines a semi-group f 0 , f 1 of compositions of f 0 and f 1 .For each iterate we pick i ∈ {0, 1} at random, i.i.d., with probabilities p 0 and p 1 = 1 − p 0 , and then iterate using f i .Note that this defines a Markov process and recall that a stationary measure for the Markov process is a measure m satisfying m = p 0 f 0 m + p 1 f 1 m.
Here f i m, i = 0, 1, stands for the push forward f i m(A) = m(f −1 i (A)).Carlsson [4] observed that the only stationary measure for this iterated function system is the delta measure at 0 if p 0 > 1/2.This was further investigated in [1] where σ-finite stationary measures were constructed for p 0 > 1/2, and in [9] that studied stationary measures for all values of p 0 .
We summarize results on this iterated function system in the theorem below.Write Σ = {0, 1} N and endow Σ with the product topology and the Borel σ-algebra.Further write ω = (ω i ) i∈N for elements of Σ.We denote . On Σ we consider the Bernoulli measure ν p 0 ,p 1 given the probabilities p 0 , p 1 .
Assume p 1 ≥ 1/2.Then the delta measure δ 0 at 0 is the unique stationary probability measure.There is an absolutely continuous σ-finite stationary measure with support equal to [0, 1].For any ε > 0 and for Lebesgue almost any x ∈ [0, 1], (1) The theorem expresses the occurrence of intermittent time series; orbits spend most time near 0 but there are infrequent but repeated bursts away from 0. Under the conditions of the theorem, the only stationary probability measure is δ 0 , even though 0 is repelling for both maps f 0 , f 1 .The explanation lies in the existence of a superattracting fixed point 1/2 for f 0 , which will be mapped onto the repelling fixed point 0 after iterating under f 1 .Iterates of f 0 bring a point superexponentially close to 1/2, after iterating f 1 the point will get superexponentially close to 0, after which many iterates are needed to escape neighborhoods of 0.
The bound p 1 ≥ 1/2 is optimal: it is shown in [9] that for p 1 < 1/2 there does exist an absolutely continuous stationary probability measure.
Other examples of this phenomenon, which we call critical intermittency, are possible [9].In section 2 we will introduce conditions on pairs of 1-parameter families of rational functions that guarantee critical intermittency.Two key assumptions are the existence of a joint fixed point, which is attracting for the first map, and repelling for the second.In order for critical intermittency to occur, we assume the common fixed point is repelling on average.The first map will moreover have a superattracting fixed point, which is mapped back to the common fixed point by the second map.The proof of intermittency is aided by proving the density of semigroup orbits, which is shown for parameter values for which the common fixed point is not resonant.
In section 3 we introduce an explicit pair of 1-parameter rational maps of degree 2 that satisfy the conditions that imply critical intermittency for non-resonant parameter values.We moreover show that density of orbits still frequently occurs for parameter values where the common fixed point is resonant.In section 4 we will treat another explicit example, where the common fixed point is on average neutral instead of repelling.We show that intermittency can still occur, just as for real interval maps.
Iterated function systems on the Riemann sphere
In this section we will cover key assumptions that cause critical intermittency for rational iterated function systems on the Riemann sphere.We will work with a pair of 1-parameter families of rational functions, and show that critical intermittency holds for almost every parameter value.In later sections we will discuss explicit pairs of rational functions where it is possible to decide more precise when critical intermittency holds.
Key assumptions.Throughout this section let f 0 = f 0,λ and f 1 = f 1,λ be rational functions, both depending analytically on a parameter λ ∈ D. We will stipulate the behavior of f 0 and f 1 on two marked points in C. Without loss of generality we may assume that these two points are 0 and ∞.
We assume that 0 is a fixed point for each f 0 and f 1 .We assume it to be repelling for each f 0 and attracting for each f 1 .The point ∞ is assumed to be a superattracting fixed point of f 0 , and mapped to the fixed point 0 by f 1 .We will write d ≥ 2 for the local degree of f 0 at infinity, so that f 0 is locally conjugate to z → z d .Finally, we assume that each function f 0 is hyperbolic, and that all the other attracting fixed or periodic points of f 0 are mapped to the immediate basin of ∞ for f 0 by some iterate f k 1 .Note that by hyperbolicity, the attracting periodic points vary holomorphically with λ, hence the iterate k only depends on the choice of attracting periodic point, not on the value of λ.
2.1.Dense semigroup orbits.Recall that the action of the semi-group G = G λ = f 0 , f 1 is said to be topologically transitive if for every non-empty and open U, V ⊂ C there exists g ∈ G with g(U ) ∩ V = ∅.Given an invariant set S = f 0 (S) ∪ f 1 (S), the action of G is said to be minimal on S if for all z ∈ S the G-orbit of z is dense in S. We are interested in G-orbits that are dense in C. As this can not hold for all points, it can for instance not hold for z = 0, ∞, the best that we can hope for is that G-orbits of all but finitely many points are dense.
We start with a result on topological transitivity.
Lemma 2.1.For each λ ∈ D the action of the semi-group G is topologically transitive.
Proof.Consider an open set U ⊂ C. By hyperbolicity of f 0 the set U must intersect the attracting basin of some attracting fixed point or periodic cycle.Let V ⊂ U be an open connected subset contained in this basin.Then for some large n the set f n 0 (V ) is contained in a small neighborhood of an attracting periodic point x of f 0 .Let k be such that Recall that near ∞ the map f 0 is holomorphically conjugate to a map z → z d , for d ≥ 2. It follows that for large the set contains an annulus around the point ∞.Moreover, by increasing if necessary we can guarantee that the modulus of this annulus is arbitrarily large.
It follows that contains a small annulus of arbitrarily large modulus around the point 0. Since the repelling fixed point 0 is a non-isolated point of the Julia set of f 0 , and since f 0 acts in local coordinates as multiplication by the multiplier at the fixed point, it follows that a small annulus around 0 of sufficiently large modulus must contain a point on the Julia set of f 0 , and thus also an open neighborhood of such a point.It follows that where E f 0 is the exceptional set of f 0 , which contains at most 2 points.This completes the proof.
Remark 2.2.One can verify that the exceptional set of f 0 is in fact empty.By compactness of C it follows that there exist integers n, k, , and m such that Remark 2.4.For ν(λ) ∈ B(0, 1) one of the following must be satisfied: (1) S = C.
(2) S is a finite union of real lines passing through the origin.
(3) S is a discrete union of concentric circles.
(4) S is discrete.It is possible that case (2), (3), or (4) occurs persistently for all λ, in which case we say that G is persistently resonant.If G λ is not persistently resonant, the equality S = C will hold for almost every λ ∈ D. Indeed, the parameters λ for which cases (2), (3), and (4) hold are given by countably many real analytic equations in λ.Each such equation either holds throughout, in which case G λ is persistently resonant, or is satisfied in a real analytic subvariety of real dimension 1.
Proof of Theorem 2.3.We will consider semi-group orbits that remain in B r (0), and accumulate on a small annulus around the origin of arbitrarily large modulus.The argument that concludes the proof of 2.1 can then be used to determine density of the orbit in all of C.
Since we will remain in B r (0), we may assume that r > 0 is sufficiently small so that we can use linearizing coordinates for the map f 0 , i.e. f 0 (z) = µ(λ)z.Since the multiplier is preserved under conjugation we obtain f 1 (z) = ν(λ)z + O(z 2 ).Let us denote the local linearization map of this function by ϕ = ϕ λ , which is unique once we demand that Consider diverging sequences (m j ) and (n j ) for which µ(λ) It follows that the semi-group orbit of z accumulates on a small annulus around 0 of arbitrarily large modulus, which completes the proof.
Remark 2.5.Theorem 2.3 implies the density of the G-orbit G(z 0 ) of z 0 for an arbitrary initial value z 0 whenever some element in G(z 0 ) lies in a small punctured neighborhood of the origin, which is of course a necessary condition.It is also clear that this condition is not always satisfied.For example, if z equals one of the attracting periodic points of f 0 and is mapped exactly onto ∞ by f 1 , then the orbit G(z) is finite.In general there could be different subsets in the Julia set of f 0 that are invariant under both f 0 and f 1 , which we exclude in the Lemma below.We will later discuss explicit examples of the functions f 0 and f 1 for which we can deduce that there are no non-trivial invariant subsets, and as a result we obtain density for all but finitely many initial values z 0 ∈ C. In those examples we can moreover deduce that density also occurs for most resonant parameters λ, where S satisfies case (ii), (iii), or (iv).In the resonant setting more precise knowledge of the higher order terms is required to deduce density of local orbits near the origin.Theorem 2.3 and Remark 2.5 yield the following result.Lemma 2.6.Suppose that the hypotheses of Theorem 2.3 are satisfied.Moreover, assume that none of the attracting fixed or periodic points of f 0 are mapped exactly to ∞ under iteration by f 1 .In addition, assume that for any z 0 ∈ C \ {0, ∞} there is g ∈ G λ so that g(z 0 ) lies in the immediate basin of attraction of ∞ for f 0 .Then for z ∈ C \ {0, ∞} the G λ -orbit of z is dense in C.
Critical Intermittency.
As in the real setting we write Σ = {0, 1} N and endow Σ with the product topology and the Borel σ-algebra.We write ω = (ω i ) i∈N for elements of Σ.The iterated function system f 0 , f 1 defines a skew product system F : Σ× C → Σ× C given by F (ω, z) = (σω, f ω 0 (z)).
A stationary measure m for the iterated function system defines an invariant measure ν p 0 ,p 1 × m for F .
Theorem 2.7.Consider the iterated function system f 0 , f 1 given with probabilities p 0 , p 1 .Suppose that the key assumptions and the additional hypotheses of Lemma 2.6 hold.Assume Then the delta measure δ 0 is the only finite stationary measure.Moreover, for any ε > 0, and for Lebesgue almost any z ∈ C, In the proof we use Kac theorem.We recall a version we use.Consider a measurable map f : X → X with a finite invariant measure µ.Let E ⊂ X with µ(E) > 0. Define the first return time R : We use the statement that the average first return time to E is finite.
Proof of Theorem 2.7.Assume there is a finite stationary measure m that assigns zero measure to {0, ∞}.By Lemma 2.6 the support of m is all of C. Then ν p 0 ,p 1 × m is a finite invariant measure for F .(We may assume that ν p 0 ,p 1 × m is ergodic.)Given a set A ⊂ Σ × C of positive measure ν p 0 ,p 1 × m(A) > 0, Kac theorem yields finite average return time.We will derive a contradiction by providing a set A of positive measure and with infinite average return time.
For A we take the product set A = [0] × A where A is an annulus around 0 between a small circle C(0, δ) around 0 and f 0 C(0, δ) .Take A so that it includes Since ∞ is a superattracting fixed point for f 0 and f 1 (∞) = 0, it follows that for every z ∈ C with |z| ≥ R we have f 1 • f N 0 (z) ∈ B(0, δ) for large enough R and N larger than some N 0 .
A calculation in the spirit of [1,4] establishes infinite average return time to A for p 0 > 1/d.Recall that the local degree of f 0 at infinity is d.Thus for given z ∈ C with |z| ≥ R we have Each iterate maps a point in B(0, δ) at most a constant factor further away from 0. Therefore for a given ω ∈ Σ, Consequently the only stationary probability measure is the delta measure δ 0 .Item (1) follows from (2.1), which implies that for z ∈ B(0, ε), with probability one the orbit f n ω (z) leaves B(0, ε).
We continue with item (2).We follow reasoning from [2] which is also used in [1].Instead of using B(0, ε) we find it convenient to replace it with the union W of B(0, ε) and a small disc {z ∈ C : |z| > r} ∪ {∞} in C around ∞.We will establish that for almost all ω, f n ω (z) spends on average a bounded number of iterates between leaving and re-entering W . Item (2) in the formulation of the theorem will be deduced from this, and it gives in fact additional information on the duration of the relaminarization.
Given ε > 0, there is a finite partition {Q i } of C \ W so that for Q i there is a cylinder B i of uniformly bounded depth K i ≤ K, so that f K i ω (z) ∈ W for (ω, z) ∈ B i × Q i .As ν p 0 ,p 1 (B i ) is bounded from below, it follows that the expected first hitting time for Take an orbit z n = f n ω (z 0 ) with for definiteness z 0 ∈ W . Also assume that z 0 is not in the inverse orbit of 0. Define subsequent escape times from W and C \ W : T 0 = 0 and Note that such a sequence of escape times exists almost surely.Write η k = T 2k−1 − T 2k−2 and ξ k = T 2k − T 2k−1 for the duration of the orbit pieces in W and so that η counts the number of iterates from T 2k on where z n ∈ W . Likewise define for n ∈ [T 2k+1 , T 2k+2 ), , so that ξ counts the number of iterates from T 2k+1 on where z n ∈ W .
Finally calculate Construct independent stochastic variables σ k ≥ ξ k that have uniformly bounded expectation and variance.This can be done as follows: by shrinking B i one constructs cylinders B i of constant depth K and with constant measure ν p 0 ,p 1 (B i ), still satisfying and G(ω, z) = (σω, 0) for z ∈ W .Take a cylinder B in Σ of depth K and measure ν p 0 ,p 1 (B) = ν p 0 ,p 1 (B i ) and add (B, W ) to the collection {(B i , Q i )}.Consider the stochastic variable σ that gives the first time to enter a B i × Q i by iterating G K .Take independent copies σ k ≥ ξ k of σ.
An application of the strong law of large numbers (see for instance [11]) gives that Remark 2.9.We get that for any z ∈ C, where the convergence is in the weak star topology, for ν p 0 ,p 1 almost all ω.
Degree 2 example
In this section we discuss an explicit pair of 1-parameter rational functions for which critical intermittency can be shown.Given a parameter λ ∈ C, consider the two maps on the Riemann sphere C. The first map, f 0 , is conjugate to z → z 2 through a translation by 1.Its Julia set equals the circle {|z + 1| = 1}.The maps f 0 and f 1 have 0 as common fixed point.Moreover, the set of three points {0, ∞, −1} is forward invariant by both maps.The points are mapped under f 0 , f 1 in the following way: As in the general setting we write f 0 , f 1 for the iterated function system generated by f 0 , f 1 , and work with associated probabilities p 0 , p 1 .Again we assume that the Lyapunov exponent at 0 is positive: We will use the skew product notation introduced in the previous section.
3.1.Dense semi-group orbits.Since our explicit semigroup satisfies the general assumptions from the previous section, we immediately obtain topological transitivity from Lemma 2.1.Our next goal is to prove the density of orbits.Lemma 2.6 implies that density occurs for almost all values of λ, and for initial values sufficiently close to 0. We will see that in our explicit setting density also occurs frequently for parameter values for which the semigroup G λ is resonant.Of course, for λ ∈ R the invariance of R implies that density does not occur for real initial values.We will focus on proving density when λ ∈ B(0, 1) \ R. We need a lemma on linearizing coordinates.Lemma 3.1.Let λ = 0. Then the rational functions f 0 and f 1 are not simultaneously linearizable at 0.
Proof.Recall that the linearization map is unique up to a multiplicative constant.It is therefore sufficient to consider the linearization map ϕ for f 0 of the form ϕ(z) = z+a 2 z 2 +a 3 z 3 +O(z 4 ), and to show that ϕ does not also linearize f 1 .Since f 0 (z) = 2z+z 2 we have that a 2 = −1/2 and a 3 = 1/6.Observe that Therefore the second order part of ϕ • f 1 − λϕ only vanishes when λ = −3, in which case Hence regardless of the value of λ = 0 the maps f 0 and f 1 cannot be simultaneously linearizable.
Proof.For simplicity of notation we work with w = z + 1, which gives f 0 : w → w 2 and Assume that both w and f 1 (w) are contained in ∂D.Analysis of the image of ∂D under w → w−1 w 2 shows that for each fixed λ there are at most 4 points w ∈ ∂D whose image can lie in ∂D, including the fixed point w = 1.Therefore it is sufficient to only consider w ∈ ∂D whose forward f 0 -orbit contains at most 3 points, possibly with 1 added.Up to complex conjugation we therefore obtain the following eight candidates for a forward invariant sets.
Note that the rationality of f 0 and f 1 implies that the higher order terms are non-zero.Since f 0 and f 1 are not simultaneously linearizable, it follows that Therefore the semi-group H = h 0 , h 1 induced by the two distinct neutral maps h 0 := f 2m 1 • f 2n 0 and h 1 := (f m 1 • f n 0 ) 2 with the same rotation number, cannot be normal in a neighborhood of 0, see [6].In particular the action of H on any closed curve winding around 0 is unbounded.
Observe that {f m 0 • f n 1 (z 0 )} m,n∈N accumulates on closed curves winding around 0 of arbitrarily small radius.Hence there are points on those curves whose H-orbits must be unbounded.But since the generators of H are both neutral at the origin, it follows that the unbounded orbit under H starting at such a point must be arbitrarily dense on an annulus enclosed by two of such closed curves winding around 0. It follows that by considering the H-action on smaller and smaller scales, and repeatedly composing with f 0 to get back to a given scale, we obtain density on an open set which completes the proof of case (iii).
Case (iv).Let us again work in linearization coordinates for f 0 , such that we can write g 0 (z) = 2z.By the assumption in case (iv) it follow that 2 m λ n = 1 for certain minimal n and m, from which it follows that . Straightforward computation shows that the second order term of g 1 does not vanish when |λ| < 1 (Lemma 3.1), from which it follows that the second order term of h also does not vanish.Thus h has a parabolic fixed point at the origin with a single parabolic basin.In order to simplify the discussion we can apply a linear coordinate transformation to give h(z) = z + z 2 + O(z 3 ), so that all orbits in the parabolic basin of h converge to zero along the negative real axis.
Consider base points z n = g n 1 (z 0 ) for n large with arg(z n ) bounded away from 0. Then z n lies in the parabolic basin of h at 0. Write z n,j = h j (z n ) where j runs from 0 to a large k = k(n) satisfying |z n,k | << |z n |.Recall that the points z n,j converge to zero as j → ∞, along a real analytic curve tangent to the negative real axis.Moreover, the ratios between consecutive points satisfy z n,j z n,j+1 → 1 as n → ∞.Write w n,j = g 1 (z n,j ), so that the points w n,j converge to zero along the half line through −λ, which is different from the negative real axis, and since λ / ∈ R also different from the positive real axis.Choose J > 0 such that arg(w n,j ) ∼ arg(−λ) for j ≥ J.It follows that the points w n,j still lie in the parabolic basin for j ≥ J. Now define w n,j, = h (w n,j ) for j ≥ J and ≥ 0.
Recall the existence of the Fatou coordinate on the parabolic basin: a change of coordinates, again denoted by ϕ, conjugating h to z → z + 1. Recall that ϕ is of the form for some b ∈ C. It follows that each of the orbits {w n,j, } ∈N lies on a real analytic curve, and these real analytic curves are all transverse to the half line through −λ, with angles bounded away from zero.After scaling by an iterate g s 0 to bring w j,J, back to fixed scale, we obtain an arbitrarily dense set of points lying in an open set of uniform size.By increasing n and taking a convergent subsequence of g s 0 (w j,J, ) we obtain a dense set of accumulation points in an open subset, completing the proof.Remark 3.4.It is not clear to the authors whether Theorem 3.3 also holds for nonreal λ with |λ| > 1.However, it does hold for generic λ.Assume that the set Using the attraction under f 0 to the point ∞, we can consider a starting value z 0 that is unequal to but arbitrarily close to 0. We may therefore assume that, for k ∈ N large, the point 2 k z 0 is still close to zero.For j ≤ k we obtain that ) is still sufficiently close to the origin.The assumption that S is dense implies that by starting with smaller and smaller values of z 0 , the set of points f j 0 f n 1 (z 0 ) becomes more and more dense in a round annulus centered at 0 of arbitrarily large modulus.As in the proof of Theorem 2.3 it follows that the semi-group orbit is dense in C.
Remark 3.5.The Fatou set F (G) of the semi-group G = f 0 , f 1 is the set of points where G is normal.The Julia set J(G), defined as the complement C \ F (G), is a closed backward invariant set.Under the assumptions of Theorem 3.3, J(G) equals C. Indeed, the Julia set contains 0 and by backward invariance also ∞.By [7] it contains a neighborhood of ∞ and then using Theorem 3.3 it equals C. By [8], repelling fixed points of elements of G lie dense in J(G) = C.
3.2.Intermittency.Our explicit semigroup satisfies the hypotheses of Theorem 2.3.Moreover Lemma 3.2 implies that the only invariant subset is {−1, 0, ∞}.Thus the result of Theorem 2.7 still holds for our specific example.As the density occurs for parameter values for which the semigroup is resonant, we can state the intermittency with larger values of λ.Theorem 3.6.Consider the iterated function system f 0 , f 1 given with probabilities p 0 , p 1 .Let λ ∈ B(0, 1) \ R. Assume (3.2) holds and Then the only finite stationary measure is the delta measure δ 0 .For any ε > 0, for Lebesgue almost any z ∈ C, (1) f n ω (z) ∈ B(0, ε) for infinitely many n;
Vanishing Lyapunov exponents
For iterated function systems of interval maps, Gharaei and the first author [5] showed how intermittent time series occur if there is a common fixed point with a vanishing Lyapunov exponent, so that the common fixed point is neutral on average.We will present an analogous example for iterated function systems of Möbius transformations on the Riemann sphere.Consider the maps We pick the maps f 0 , f 1 with equal probability.where R(ω, z) = min{i > 0 : F i (ω, z) ∈ A} is the return time to A. Here as before A = [0] × A with A an annulus between a small circle S around 0 and its image f 0 (S).By Kac theorem this implies there is no finite stationary measure with support intersecting A, so that the only stationary probability measure is δ 0 .
To obtain (4.1), fix z inside S and let H(ω) = min{i > 0 : F i (ω, z) ∈ A} be the first time that f i ω (z) that enters A. We are done if we prove Σ H dν p 0 ,p 1 = ∞.The remaining statements follow as in the proof of Theorem 3.6. | 7,018.2 | 2019-09-12T00:00:00.000 | [
"Mathematics"
] |
Improvement in the Sustained-Release Performance of Electrospun Zein Nanofibers via Crosslinking Using Glutaraldehyde Vapors
Volatile active ingredients in biopolymer nanofibers are prone to burst and uncontrolled release. In this study, we used electrospinning and crosslinking to design a new sustained-release active packaging containing zein and eugenol (EU). Vapor-phase glutaraldehyde (GTA) was used as the crosslinker. Characterization of the crosslinked zein nanofibers was conducted via scanning electron microscopy (SEM), mechanical properties, water resistance, and Fourier transform infrared (FT-IR) spectroscopy. It was observed that crosslinked zein nanofibers did not lose their fiber shape, but the diameter of the fibers increased. By increasing the crosslink time, the mechanical properties and water resistance of the crosslinked zein nanofibers were greatly improved. The FT-IR results demonstrated the formation of chemical bonds between free amino groups in zein molecules and aldehyde groups in GTA molecules. EU was added to the zein nanofibers, and the corresponding release behavior in PBS was investigated using the dialysis membrane method. With an increase in crosslink time, the release rate of EU from crosslinked zein nanofibers decreased. This study demonstrates the potential of crosslinking by GTA vapors on the controlled release of the zein encapsulation structure containing EU. Such sustainable-release nanofibers have promising potential for the design of fortified foods or as active and smart food packaging.
Introduction
Microbial contamination is one of the main causes of food spoilage and poisoning during the food storage period [1].Currently, the application of chemical fungicides is the most commonly used preservation approach.However, these chemicals have potential hazards to human health and the natural environment [2].Therefore, it is essential to develop safe, eco-friendly, and effective natural agents for food preservation.Eugenol (EU) is the key component of clove oil and possesses remarkable pharmacological actions, such as inflammatory, antiflatulent, anticonvulsant, antipyretic, antitumor, pain-relieving, and neuroprotective effects [3][4][5].Moreover, EU has shown strong activity against pathogenic viruses, bacteria, and fungi, including Staphylococcus aureus, Escherichia coli O157:H7, Salmonella enterica, Helicobacter pylori, Campylobacter jejuni, and Listeria monocytogenes [6][7][8][9], and hence represents a promising alternative to chemical fungicides.However, the intense aroma, high volatility, and easy decomposition through light, oxidation, and heat processes of EU greatly affect its stability, making it less effective for food preservation [2,10,11].Therefore, Foods 2024, 13, 1583 2 of 17 designing appropriate encapsulation systems can overcome these limitations and protect EU for better physicochemical stability and functional activity.
Electrospinning is a cheap and well-established technology that does not involve severe conditions (extreme temperature or pressure); therefore, it has great potential for encapsulating and delivering sensitive bioactive compounds [2].In addition, the obtained nanofibers possess a high load efficiency, high porosity, and specific surface area, as well as a controllable morphology, which can stabilize and adjust the release behavior of active compounds [12].Numerous synthetic and biopolymers can be electrospun to produce nanofibers.Compared with synthetic polymers, natural biopolymers have received much interest because of their nontoxicity, renewability, and biocompatibility [8,12].Nevertheless, unlike stable nanofibers made of synthetic polymers, volatile bioactives loaded in electrospun biopolymer nanofibers are prone to burst and uncontrolled release in water, which results in poor bioaccessibility of the active additives.Therefore, the development of biopolymer nanofibers with a controlled release nature is crucial to enhance food sustainability and decrease waste.
Coaxial electrospinning [13,14], polymer blending [15,16], and crosslinking [17] have been used to enhance the release behavior of functional components in biopolymer nanofibers.In the literature, coaxial electrospinning has been successfully used to produce core-shell biopolymer nanofibers with a better-controlled release performance [13,14,18,19].However, electrospinning's shortcomings lie in the slow production rate, high process requirements, and poor compatibility of core-shell fluid [20,21].Moreover, it was found that the release profiles of functional components from biopolymer nanofibers could be tuned by blending with polymers [15,22].Most synthetic polymers are generally non-biodegradable or slow-degradable and need to be dissolved in toxic organic solvents, which restrict the applications of nanofibers in food fields [22,23].Good compatibility and the homogeneous partitioning of synthetic polymers with biopolymers are also challenging [24].In addition, crosslinking is an interesting method that can be used to reinforce the structure of biopolymer nanofibers, thus improving their release properties [25][26][27][28].In comparison, glutaraldehyde (GTA) is an effective crosslinking agent, which has a low cost, fast reaction time, and the ability to react with the amine or hydroxyl groups of protein molecules [29].However, GTA is toxic and may remain in electrospun nanofibers [26,30].To overcome this limitation, vapor-phase crosslinking and vacuum drying post-treatment can be performed, which have been shown to have little or no cytotoxic effects [31][32][33].Several studies have investigated the effect of GTA vapor crosslinking on the release performance of electrospun nanofibers [34].Baykara et al. (2022) reported that the vapor-phase crosslinking of GTA on PVA/gelatin nanofibers did not significantly affect the release of gentamicin [35].Hadyz et al. (2021) reported that chitosan/PEO nanofibers crosslinked by vapors of GTA extended the release time of nizatidine [33].However, to the best of our knowledge, current reports mainly focus on synthetic polymers.Studies on the effect of crosslinking by vapor GTA on the release profile, stability, and structural properties of biopolymer nanofibers are still limited.
Zein, a byproduct of the bioethanol industry, is the main storage protein in maize and is recognized as a GRAS polymer [36].Zein has good fiber formation properties and can fabricate nanofibers using nontoxic organic solvents, making it an ideal biopolymer for electrospinning and nanofiber formation [37,38].Zein is water-insoluble due to the rich nonpolar amino acids in its structure, which makes it suitable for effectively entrapping hydrophobic functional ingredients into nanofibers, contributing to good compatibility and potential interactions [39].The encapsulation of many bioactive compounds in zein matrices has been reported, demonstrating their increased stability and functional properties [36,[40][41][42].Furthermore, zein has shown low digestibility and controlled release capability [43,44]; thus, its nanofibers have drawn great interest in the controlled delivery of bioactive compounds.Silva et al. (2020) indicated a lower release rate of tryptophan from zein nanofibers compared with that from zein films [45].In the study by Defrates et al. (2021), a more sustained release of model drugs (rifampin, rhodamine B, and crystal violet) was observed in zein fibers compared with zein films [46].Nevertheless, limited studies have been conducted on the sustained or controlled release of volatile functional compounds in zein nanofibers.
Developing biodegradable active packaging with a sustained-release nature for volatile bioactives is of great interest in food preservation and transportation.Therefore, the purpose of the present work was to overcome the limitations of fast and uncontrolled release of volatile actives in biopolymer nanofiber materials for developing slow-release zein nanofibers via crosslinking using GTA in the vapor phase.The influence of crosslinking on the physicochemical properties, including the fiber morphology, water resistance, and mechanical properties, of zein nanofibers was investigated.To analyze the crosslinking mechanism between GTA and zein, the intermolecular interactions were studied using spectroscopic methods.EU was added to zein nanofibers and then crosslinked by GTA vapors.The effect of crosslinking times on the EU release behavior and corresponding release mechanism was assessed.The encapsulation and in vitro release tests of EU suggest great potential for sustained-release zein nanofibers in the food and pharmaceutical industries.
Fabrication and Crosslinking of Electrospun Zein Nanofibers
The spinning solution was prepared by dissolving zein (30%, w/v) in acetic acid at 25 ± 2 • C under magnetic stirring.After standing for 20 min to remove bubbles, the solution was then electrospun using an electrospinning device (HZ-11, Huizhi Electrospinning (HZE), Qingdao, China) equipped with an applied voltage of 16 kV at a constant flow rate of 1 mL/h, a target roll speed of 1000 rpm, and a needle tip to collector distance of 10 cm.The spinning solution volume of each sample remained constant at 2 mL, and the nanofibers were collected on aluminum foil.The electrospinning zein nanofibers were labeled GTA_0h.
Electrospun zein nanofibers were crosslinked with GTA vapors.Specifically, an aqueous GTA solution (25%, v/v) was placed into an airtight desiccator, with the zein nanofibers placed in the headspace.The entire desiccator was maintained at 20 ± 2 • C for 3, 6, 9, and 12 h.After crosslinking, the nanofibers were placed in a vacuum drying oven at 40 • C for 12 h to remove the remaining GTA.The crosslinked zein nanofibers for different times were referred to as GTA_3h, GTA_6h, GTA_9h, and GTA_12h.
Scanning Electron Microscopy (SEM)
The zein nanofiber samples were coated with a gold layer under vacuum, and their morphology was observed using a scanning electron microscope (Hitachi, Tokyo, Japan).The fiber diameters were measured using ImageJ (x64) software based on 200 random fibers of each sample [31].
Mechanical Properties
The mechanical properties of the zein nanofiber samples were determined on a universal testing machine (Instron, Norwood, MA, USA) according to the method of Lu et al. (2017) with some modifications [47].Each sample was cut into 1 cm × 3 mm (length × width) pieces.Before starting the measurement, the thickness of each nanofiber sample was examined using micrometer calipers (KAFUWELL YB5001A, Hangzhou, China).A crosshead speed of 5 mm/min and a 5 mm gauge length were used during the measurement.The test was repeated five times for each sample.The tensile strength (TS), elongation at break (EB), and Young's modulus (YM) were calculated using the following formulas: where F m = maximum load (N) recorded S = cross-sectional area of the nanofibers L b = length (mm) at the breaking point L 0 = initial length (mm) of the nanofibers L m = test length (mm) corresponding to the maximum load L g = gauge length (mm)
Water Contact Angle (WCA)
The WCAs of the zein nanofiber samples on the glass slide were measured using an OCA 25 contact angle meter (DataPhysics Instruments GmbH, Filderstadt, Germany).Briefly, 3 µL of distilled water was pipetted onto the samples, and the WCA was recorded at t = 5 s after the waterdrop had made contact with the nanofiber samples.Five tests were performed for each nanofiber sample.
Attenuated Total Reflectance Infrared Spectroscopy (ATR-FT-IR)
The FT-IR spectra of the zein nanofiber samples were collected using a Nicolet 6700 FT-IR spectrometer (Thermo Scientific, Waltham, MA, USA) with an ATR unit attached.All spectra in the wavenumber range of 4000-400 cm −1 were obtained by averaging 32 scans at 4 cm −1 resolutions.
Preparation and GTA Crosslinking of Zein Nanofibers Loaded with EU
Zein was dissolved in acetic acid (30%, w/v) at 25 ± 2 • C under continuous magnetic stirring.EU was added to the zein solution at a level of 20% (w/w) of zein content and allowed to mix for 1 h.After standing for 20 min, the mixture solution was electrospun using an electrospinning device (HZ-11, Huizhi Electrospinning (HZE), Qingdao, China) equipped with an applied voltage of 16 kV at a constant flow rate of 1 mL/h, a target roll speed of 1000 rpm, and a needle tip to collector distance of 10 cm.The electrospinning zein nanofibers were labeled EU/ZNF.Then, the zein nanofiber samples were crosslinked with GTA vapors for 3, 6, 9, and 12 h, which were labeled EU/ZNF_3h, EU/ZNF_6h, EU/ZNF_9h, and EU/ZNF_12h, respectively.
The EU contents in the zein nanofiber samples were determined according to the method of Aydin et al. (2022), with some modifications [48].In brief, each nanofiber sample (15 mg) was placed in a dialysis bag containing 4 mL of ethanol aqueous solution (90%, v/v).The dialysis bag was then immersed in 60 mL of ethanol aqueous solution (90%, v/v) in a hermetic beaker and shaken at 25 • C and 80 rpm for 12 h.The concentration of EU in the release medium was then determined using a spectrophotometer at a wavelength of 280 nm.The encapsulation efficiency and loading capacity were calculated as follows: EE (%) = (Mass of actual EU loaded in nanofibers)/(Mass of nanofibers) × 100% (4) LC (%) = (Mass of actual EU loaded in nanofibers)/(Mass of EU used in nanofibers) × 100% (5)
Immersion Study
To better understand the EU release behavior, the swelling characteristics of the zein nanofiber samples were determined according to the method of Martin et al. (2022), with slight modifications [49].The zein nanofiber samples were cut into 3 cm × 3 cm (length × width) pieces.Subsequently, the samples were immersed in distilled water at room temperature for 0.5, 1, 3, 6, and 24 h, before photographing to measure the volumetric changes.Non-crosslinked and crosslinked zein nanofibers loaded with EU were immersed in phosphate-buffered saline (PBS; pH = 7.2) at room temperature for 0.5, 1, 3, and 24 h, and then freeze-dried.Subsequently, their morphology was observed using a scanning electron microscope (Hitachi, Tokyo, Japan).
In Vitro Release Behavior of EU
The effect of crosslinking time on the release behavior of EU from zein nanofiber samples was investigated according to the method described by Chng et al. ( 2023) with some modifications [50].The nanofiber samples (20 mg) were placed in a dialysis bag containing 4 mL of PBS (pH = 7.2).The dialysis bag was then immersed in 60 mL of PBS in a beaker and shaken at 25 • C and 80 rpm for 30 h.At specific time intervals (0.5, 1, 2, 3, 4, 5, 6, 8, 10, 12, 20, 24, and 30 h), 5 mL of the release medium was removed and replenished with an equal amount of fresh medium (PBS, pH = 7.2).The concentration of EU in the release medium was determined using a spectrophotometer at 280 nm.
EU Release Kinetics
As shown in Table 1, mathematical models, including zero-order, first-order, Higuchi, Ritger-Peppas, Peppas-Sahlin, and Kopcha, were used to fit the EU release curves to investigate the release mechanism of EU from the zein nanofiber samples.
Table 1.Mathematical models for the analysis of the eugenol release kinetics from zein nanofibers [51].
Model Equation Parameter
Zero-order
Statistical Analysis
All determinations were performed at least in triplicate, and the results are presented as the mean ± standard deviation.Single-factor analysis of variance and Duncan's multiple range test were used for statistical analysis, with a significance level of 0.05 (p < 0.05).
Morphology and Fiber Diameter Distribution
The morphology and fiber diameter distribution of the non-crosslinked and crosslinked zein nanofibers are shown in Figure 1 and Table S1.According to Figure 1a, the average fiber diameter of GTA_0h was measured as 208.22 ± 46.93 nm; the average fiber diameters of GTA_3h, GTA_6h, GTA_9h, and GTA_12h were calculated as 225.52 ± 47.25 nm, 224.32 ± 52.93 nm, 226.96 ± 68.00 nm, and 267.69 ± 90.97 nm, respectively.The average fiber diameter and diameter distribution slightly increased with increasing exposure time to GTA vapor.The SEM images show that the fibers of GTA_0h were continuous and uniform.After crosslinking with GTA vapor, the fibers swelled but retained their fiber morphology, which is considered to be related to the absorption of water vapor during exposure to GTA vapors [35].These results are consistent with those of previous studies, in which the fiber diameter of electrospun nanofibers increased after vapor-phase GTA crosslinking [35,52].
to GTA vapor.The SEM images show that the fibers of GTA_0h were continuous and uniform.After crosslinking with GTA vapor, the fibers swelled but retained their fiber morphology, which is considered to be related to the absorption of water vapor during exposure to GTA vapors [35].These results are consistent with those of previous studies, in which the fiber diameter of electrospun nanofibers increased after vapor-phase GTA crosslinking [35,52].
Mechanical Characterization
The mechanical properties of the non-crosslinked and crosslinked zein nanofibers are presented in Figure 2. The results indicated that GTA_0h showed minimal TS (11.08 ± 0.29 MPa).After crosslinking, the TS was observed to increase, with values of 15.28 ± 0.58 MPa, 17.59 ± 0.83 MPa, 23.60 ± 2.79 MPa, and 28.65 ± 0.43 MPa for GTA_3h, GTA_6h, GTA_9h, and GTA_12h, respectively.This increase might be due to the formation of chemical bonds among zein molecules inside and among the fibers, resulting from crosslinking during exposure to GTA vapors [35,53].The longer crosslinking time would allow the generation
Mechanical Characterization
The mechanical properties of the non-crosslinked and crosslinked zein nanofibers are presented in Figure 2. The results indicated that GTA_0h showed minimal TS (11.08 ± 0.29 MPa).After crosslinking, the TS was observed to increase, with values of 15.28 ± 0.58 MPa, 17.59 ± 0.83 MPa, 23.60 ± 2.79 MPa, and 28.65 ± 0.43 MPa for GTA_3h, GTA_6h, GTA_9h, and GTA_12h, respectively.This increase might be due to the formation of chemical bonds among zein molecules inside and among the fibers, resulting from crosslinking during exposure to GTA vapors [35,53].The longer crosslinking time would allow the generation of a denser network structure, improving the mechanical strength of the zein nanofibers.On the other hand, the YM of GTA-3h was 0.63 ± 0.06 GPa; after 12 h of vapor-phase GTA crosslinking, the YM value gradually increased to 0.88 ± 0.01 GPa, indicating that the zein nanofibers showed increased resistance to deformation with an increase in the crosslinking time.Notably, compared with the crosslinked zein nanofibers, GTA-0h had the most considerable YM value of 0.97 ± 0.05 GPa.These findings demonstrate that crosslinking decreases the rigidity of electrospun zein nanofibers.In addition, the EB increased continuously from 4.85 ± 0.86% of GTA-0h to 6.07 ± 0.12% of GTA-6h and then slightly Foods 2024, 13, 1583 7 of 17 decreased to 4.63 ± 0.36% of GTA-12h.One possible reason for this is that after 6 h of GTA crosslinking, the strong bond or three-dimensional structure formed between zein and GTA molecules would restrict the motion of the molecular chains [54,55].Chen et al. (2022) fabricated electrospun feather keratin/gelatin nanofibers crosslinked by vapor-phase GTA and found that proper crosslinking could effectively enhance their mechanical strength [31].Wang et al. (2016) also reported that the TS value of electrospun starch nanofibers was increased by ~10 times via crosslinking with vapor-phase GTA [52].
crosslinking, the YM value gradually increased to 0.88 ± 0.01 GPa, indicating that the zein nanofibers showed increased resistance to deformation with an increase in the crosslinking time.Notably, compared with the crosslinked zein nanofibers, GTA-0h had the most considerable YM value of 0.97 ± 0.05 GPa.These findings demonstrate that crosslinking decreases the rigidity of electrospun zein nanofibers.In addition, the EB increased continuously from 4.85 ± 0.86% of GTA-0h to 6.07 ± 0.12% of GTA-6h and then slightly decreased to 4.63 ± 0.36% of GTA-12h.One possible reason for this is that after 6 h of GTA crosslinking, the strong bond or three-dimensional structure formed between zein and GTA molecules would restrict the motion of the molecular chains [54,55].Chen et al. (2022) fabricated electrospun feather keratin/gelatin nanofibers crosslinked by vapor-phase GTA and found that proper crosslinking could effectively enhance their mechanical strength [31].Wang et al. (2016) also reported that the TS value of electrospun starch nanofibers was increased by ~10 times via crosslinking with vapor-phase GTA [52].
Water Contact Angle (WCA)
The WCA was used to investigate the surface hydrophilicity of the zein nanofibers before and after vapor-phase GTA crosslinking.The surface of nanofibers is considered hydrophobic when θ > 90°; on the contrary, it is considered a hydrophilic surface [56].As shown in Figure 3, GTA_0h had a WCA of 120.88 ± 2.24°, indicating a hydrophobic surface.When the exposure time to GTA vapors was shorter than 9 h, the WCA values increased with exposure time.The WCA of GTA_6h was 126.38 ± 2.44°.This increased hydrophobicity can be attributed to the consumption of hydrophilic groups in zein molecules during crosslinking and the decrease in porosity of the crosslinked zein nanofibers, which restricted water penetration [31,55].However, no significant difference in the WCA
Water Contact Angle (WCA)
The WCA was used to investigate the surface hydrophilicity of the zein nanofibers before and after vapor-phase GTA crosslinking.The surface of nanofibers is considered hydrophobic when θ > 90 • ; on the contrary, it is considered a hydrophilic surface [56].As shown in Figure 3, GTA_0h had a WCA of 120.88 ± 2.24 • , indicating a hydrophobic surface.When the exposure time to GTA vapors was shorter than 9 h, the WCA values increased with exposure time.The WCA of GTA_6h was 126.38 ± 2.44 • .This increased hydrophobicity can be attributed to the consumption of hydrophilic groups in zein molecules during crosslinking and the decrease in porosity of the crosslinked zein nanofibers, which restricted water penetration [31,55].However, no significant difference in the WCA of the zein nanofibers was observed until the crosslinking time reached up to 12 h.After 12 h of vapor-phase GTA crosslinking, the WCA of the zein nanofibers decreased to 114.70 ± 2.87 • , which may be related to their decrease in surface roughness and thickness.Notably, GTA_12h still possessed a hydrophobic surface (θ > 90 • ). of the zein nanofibers was observed until the crosslinking time reached up to 12 h.After 12 h of vapor-phase GTA crosslinking, the WCA of the zein nanofibers decreased to 114.70 ± 2.87°, which may be related to their decrease in surface roughness and thickness.Notably, GTA_12h still possessed a hydrophobic surface (θ > 90°).
Stability
As a hydrophobic protein macromolecule, zein is insoluble in water.To examine the water resistance of the electrospun zein nanofiber samples, we used distilled water to immerse the as-spun and crosslinked zein nanofibers.Figures 4 and 5 separately show the macroscopic images and the volume of non-crosslinked and crosslinked zein nanofibers after 24 h of immersion.GTA_0h was observed to shrink immediately when in contact with water, with a volume of 59.37% of the initial value.In comparison, the crosslinked zein nanofibers possessed better water resistance, with volumes of 64.07%(GTA_3h), 78.36% (GTA_6h), 84.39% (GTA_9h), and 69.58% (GTA_12h) upon exposure to water.When the immersion time was increased to 24 h, a decrease in the volumes of all samples was observed.The volumes of GTA_3h, GTA_6h, GTA_9h, and GTA_12h were 26.80%, 34.73%, 33.90%, and 35.17% of their original values, respectively, which is still higher than that of GTA_0h (28.34%).Furthermore, Figure S1 shows the macroscopic images of noncrosslinked and crosslinked zein nanofibers upon immersion in acetic acid.GTA_0h was found to be fully dissolved in acetic acid.The crosslinked samples could better retain their integrity without dissolution.These results indicate that vapor-phase GTA crosslinking could improve the resistance of zein nanofibers to water and acetic acid.Yu et al. (2020) reported similar results, in that the moisture resistance of zein/PVA nanofibers was greatly enhanced after steam crosslinking with GTA [57].Moreover, Selling et al. (2008) reported that in-site GTA-crosslinked zein nanofibers could dissolve in acetic acid [58].The different results may be related to the difference in crosslinking methods and the amount of GTA used.
Stability
As a hydrophobic protein macromolecule, zein is insoluble water.To examine the water resistance of the electrospun zein nanofiber samples, we used distilled water to immerse the as-spun and crosslinked zein nanofibers.Figures 4 and 5 separately show the macroscopic images and the volume of non-crosslinked and crosslinked zein nanofibers after 24 h of immersion.GTA_0h was observed to shrink immediately when in contact with water, with a volume of 59.37% of the initial value.In comparison, the crosslinked zein nanofibers possessed better water resistance, with volumes of 64.07%(GTA_3h), 78.36% (GTA_6h), 84.39% (GTA_9h), and 69.58% (GTA_12h) upon exposure to water.When the immersion time was increased to 24 h, a decrease in the volumes of all samples was observed.The volumes of GTA_3h, GTA_6h, GTA_9h, and GTA_12h were 26.80%, 34.73%, 33.90%, and 35.17% of their original values, respectively, which is still higher than that of GTA_0h (28.34%).Furthermore, Figure S1 shows the macroscopic images of non-crosslinked and crosslinked zein nanofibers upon immersion in acetic acid.GTA_0h was found to be fully dissolved in acetic acid.The crosslinked samples could better retain their integrity without dissolution.These results indicate that vapor-phase GTA crosslinking could improve the resistance of zein nanofibers to water and acetic acid.Yu et al. ( 2020) reported similar results, in that the moisture resistance of zein/PVA nanofibers was greatly enhanced after steam crosslinking with GTA [57].Moreover, Selling et al. (2008) reported that in-site GTA-crosslinked zein nanofibers could dissolve in acetic acid [58].The different results may be related to the difference in crosslinking methods and the amount of GTA used.
Fourier Transform Infrared Spectrometry (FT-IR)
The FT-IR spectra were recorded to explore the structural changes in zein nanofibers before and after crosslinking with GTA vapors.As shown in Figure 6 and Table S2, the spectra of GTA_0h presented a broad characteristic peak at 3295 cm −1 , corresponding to the O-H and N-H stretching vibrations [59].As the crosslinking time increased, that peak exhibited a redshift, and its peak intensity gradually decreased, suggesting the success of the crosslinking reaction.The free amino groups in the fibers reacted with the aldehyde groups in GTA molecules, leading to fewer free amino groups remaining to form O-H and N-H hydrogen bonds [60].Chen et al. (2022) also reported similar results in electrospun feather keratin/gelatin nanofibers crosslinked by GTA vapors [31].In addition, with increasing crosslinking time, the characteristic peaks representing the C=O stretching vibration, C-N stretching vibration, and N-H bending vibration (amide I band) were red-shifted from 1653 cm −1 of GTA_0h to 1648 cm −1 of GTA_12h [61,62].The ATR-FT-IR spectra of all nanofiber samples showed the maximum amide II band at 1542 cm −1 .After crosslinking, the characteristic maximum of the amide I/amide II bands shifted to lower frequencies from 1653 cm −1 /1542 cm −1 of GTA_0h to 1648 cm −1 /1542 cm −1 of GTA_12h.These results reflect the change in random coil structures, indicating the greater structural stability of crosslinked zein nanofibers than GTA_0h [63].For the crosslinked zein nanofibers, a new characteristic peak appeared at ~1066 cm −1 , corresponding to the symmetric stretching vibration of the C=O groups [64].These findings confirm the crosslinking of zein nanofibers by GTA vapors.Overall, the crosslinked zein nanofibers showed secondary structures similar to those of as-spun zein nanofibers, suggesting that crosslinking post-treatment by GTA vapors on zein nanofibers had little effect on their secondary structure.
Figure 7 presents the possible molecular structure of zein after electrospinning and crosslinking.Acetic acid is a weak organic acid that is capable of breaking hydrophobic interactions, unfolding the β-sheet structure, and causing partial denaturation of the α-helix structure of proteins [65,66].Zein was demonstrated to dissolve in acetic acid, with positive charges and a highly extended structure [66].More free amino groups in the protonated zein molecules were exposed after electrospinning because of the rapid and large proportion of stretching.Moreover, fast solvent evaporation caused rapid solidification of the fibers, leading to free amino groups in zein being distributed on both sides of the molecular chains [52].During exposure to GTA vapors, free amino groups were allowed to react with Foods 2024, 13, 1583 10 of 17 aldehyde groups in GTA molecules to form chemical bonds, which resulted in increased mechanical strength and water resistance of the crosslinked zein nanofibers [17].
frequencies from 1653 cm −1 /1542 cm −1 of GTA_0h to 1648 cm −1 /1542 cm −1 of GTA_12h.These results reflect the change in random coil structures, indicating the greater structural stability of crosslinked zein nanofibers than GTA_0h [63].For the crosslinked zein nanofibers, a new characteristic peak appeared at ~1066 cm −1 , corresponding to the symmetric stretching vibration of the C=O groups [64].These findings confirm the crosslinking of zein nanofibers by GTA vapors.Overall, the crosslinked zein nanofibers showed secondary structures similar to those of as-spun zein nanofibers, suggesting that crosslinking post-treatment by GTA vapors on zein nanofibers had little effect on their secondary structure.Foods 2024, 13, x FOR PEER REVIEW 11 positive charges and a highly extended structure [66].More free amino groups in the tonated zein molecules were exposed after electrospinning because of the rapid and la proportion of stretching.Moreover, fast solvent evaporation caused rapid solidificatio the fibers, leading to free amino groups in zein being distributed on both sides of the lecular chains [52].During exposure to GTA vapors, free amino groups were allowe react with aldehyde groups in GTA molecules to form chemical bonds, which resulte increased mechanical strength and water resistance of the crosslinked zein nanofibers [
The release behavior of uncrosslinked and crosslinked zein nanofibers was then termined in PBS at 25 °C.According to Figure 8, all nanofiber samples exhibited a sim trend of EU release, comprising a burst release in the initial 2 h, followed by a sustai
The release behavior of uncrosslinked and crosslinked zein nanofibers was then determined in PBS at 25 • C. According to Figure 8, all nanofiber samples exhibited a similar trend of EU release, comprising a burst release in the initial 2 h, followed by a sustained release up to 30 h.The initial release percentage for EU/ZNF was 26.15 ± 0.41%.The crosslinked zein nanofibers showed a decreased release percentage, with values of 23.31 ± 1.60% (EU/ZNF_3h), 18.15 ± 2.76% (EU/ZNF_6h), 16.66 ± 0.89% (EU/ZNF_9h), and 15.58 ± 2.23% (EU/ZNF_12h).At 30 h of release, approximately 81.38 ± 0.71% of release occurred from EU/ZNF.For EU/ZNF_3h, EU/ZNF_6h, EU/ZNF_9h, and EU/ZNF_12h, the cumulative release percentages were 74.80 ± 1.03%, 64.79 ± 2.58%, 59.89 ± 2.55%, and 54.74 ± 1.66% at 30 h, respectively.The initial burst release is possibly due to the rapid dissolution of EU close to the fiber surface, and the subsequent gradual release mostly depends on the diffusion rate of EU inside the fibers to the surface [33].These results indicate that crosslinking post-treatment by GTA vapors could restrict molecular movement, ultimately extending the release time of EU from the zein nanofibers [67].Furthermore, crosslinking reduced the number of 3D porous structures, which could restrict the permeation of water molecules, making it more difficult to release EU from crosslinked nanofibers [35].Similarly, previous studies have reported that crosslinking treatment may prolong drug release from nanofibers [68,69].Notably, the release rate of EU significantly decreased with the crosslinking time increasing to 6 h.Subsequently, a slower decline in the EU release rate was observed until 12 h of crosslinking.This decrease could be due to sufficient crosslinking after 6 h of the crosslinking reaction, resulting in a decline in the sustained release effect with the crosslinking time up to 12 h.Similarly, previous studies have reported that crosslinking treatment may prolong drug release from nanofibers [68,69].Notably, the release rate of EU significantly decreased with the crosslinking time increasing to 6 h.Subsequently, a slower decline in the EU release rate was observed until 12 h of crosslinking.This decrease could be due to sufficient crosslinking after 6 h of the crosslinking reaction, resulting in a decline in the sustained release effect with the crosslinking time up to 12 h.It has been demonstrated that fiber morphology can influence the drug-release behavior of electrospun nanofibers [70,71].To better understand the release behavior of EU from crosslinked zein nanofibers by GTA vapors, the morphology of the nanofibers was observed after immersion in PBS (pH = 7.2) for 24 h.As shown in Figure 9, neat zein nanofibers exhibited poor stability in PBS, as EU/ZNF collapsed into a film after 0.5 h of immersion in PBS, likely due to the high hydrophobicity of the zein nanofibers.When the nanofibers were immersed in PBS, strong hydrophobic interactions between the fibers caused the formation of film-like substances [67].The crosslinked zein nanofibers maintained their fiber shape and 3D porous structure in PBS after immersion for 24 h, although a certain extent of swelling of the fibers was observed.By increasing the crosslinking time It has been demonstrated that fiber morphology can influence the drug-release behavior of electrospun nanofibers [70,71].To better understand the release behavior of EU from crosslinked zein nanofibers by GTA vapors, the morphology of the nanofibers was observed after immersion in PBS (pH = 7.2) for 24 h.As shown in Figure 9, neat zein nanofibers exhibited poor stability in PBS, as EU/ZNF collapsed into a film after 0.5 h of immersion in PBS, likely due to the high hydrophobicity of the zein nanofibers.When the nanofibers were immersed in PBS, strong hydrophobic interactions between the fibers caused the formation of film-like substances [67].The crosslinked zein nanofibers maintained their fiber shape and 3D porous structure in PBS after immersion for 24 h, although a certain extent of swelling of the fibers was observed.By increasing the crosslinking time up to 12 h, the stability of the nanofibers increased, which corresponds to the decrease in their volume shrinkage observed in Figure 5.This increased stability could be ascribed to the robust fiber matrix resulting from the formation of chemical bonds during crosslinking, resulting in their sustained-release performance.Similarly, Luo et al. (2018) fabricated gelation nanofibers crosslinked by GTA, which showed good stability in PBS [67].In the study by Chen et al. (2022), improved water stability of feather keratin/gelatin nanofibers after vapor-phase GTA crosslinking was also observed [31].
To estimate the release mechanism of EU from non-crosslinked and crosslinked zein nanofibers, the corresponding release data of 60% dissolution were fitted to the zero-order, first-order, Higuchi, Ritger-Peppas, Peppas-Sahlin, and Kopcha models, and the results are presented in Table 2 [54].The value of R 2 in the first-order model (0.97-0.98) is obviously higher than that in the zero-order model (0.64-0.77), indicating that the EU release rate is matrix diffusion-controlled and dependent on EU concentration.The Peppas-Sahlin model was the best model to describe the release profile of EU from all zein nanofibers (R 2 = 0.98-0.99).In the Peppas-Sahlin model, k 1 /k 2 > 1 indicates that EU was released through a Fickian diffusion mechanism, k 1 /k 2 < 1 indicates mainly an erosion mechanism, and k 1 /k 2 = 1 indicates a combination of diffusion and erosion mechanisms.Based on the results shown in Table 2, the estimated K 1 /K 2 ratios of EU/ZNF, EU/ZNF_3h, EU/ZNF_6h, EU/ZNF_9h, and EU/ZNF_12h were all higher than one.Moreover, the release exponent (n) estimated according to the Ritger-Peppas model for all samples was lower than 0.45, and the values of a/b estimated according to the Kopcha model for all samples were lower than one.Therefore, the release of EU from the as-spun and crosslinked zein nanofibers mainly followed the Fickian diffusion mechanism.Given that both EU and zein are insoluble in water, the diffusion of EU through the fibers is the main barrier to release.Surendranath et al. (2023) also indicated that the release of propranolol hydrochloride from thermally crosslinked zein/PVP nanofibers was mainly via Fickian diffusion [54].
Conclusions
In this study, crosslinking post-treatment with GTA vapors was used to improve the physicochemical properties and release performance of electrospun zein nanofibers.Compared with as-spun zein nanofibers, crosslinked samples retained their fiber structure, but the fiber diameter increased and the morphology became more compact.The surface hydrophobicity of the zein nanofibers did not change after crosslinking with GTA vapors for 12 h.The appropriate degree of crosslinking plays an essential role in improving the stability and mechanical properties of zein nanofibers.After 6 h of crosslinking with GTA vapors, the zein nanofibers exhibited a volume of 84.39% upon exposure to water, insolubility after 24 h of soaking in acetic acid, and a TS value of 23.60 ± 2.79 MPa.Exposure to GTA vapors resulted in the formation of new chemical bonds between the free amino groups in zein molecule chains and the aldehyde groups in GTA molecules, which increased the structural stability of the zein nanofibers.Moreover, vapor-phase GTA-crosslinked zein nanofibers showed a controlled release of EU in PBS.The model fitting results indicated that Fickian diffusion was the dominant release mechanism of EU, and the Peppas-Sahlin model is effective for describing the EU release behavior (R 2 = 0.98-0.99).Accordingly, these crosslinked zein nanofibers have the potential to be used as active food packaging with sustained release.Further studies are underway to explore in vitro functional activities (antioxidant activity, antimicrobial activity), microbial penetration assay, cell cytotoxicity, the mechanism of EU-zein interaction, and the preservation effect on fruits.
Figure 1 .
Figure 1.Scanning electron microscopy images and diameter distribution (a) of the as-spun and crosslinked zein nanofibers.
Figure 1 .
Figure 1.Scanning electron microscopy images and diameter distribution (a) of the as-spun and crosslinked zein nanofibers.
Figure 2 .
Figure 2. Mechanical properties of as-spun and crosslinked zein nanofibers.Various letters (A-D, a-c) and Roman letters on the top of the columns suggest the significant difference (p < 0.05).
Figure 2 .
Figure 2. Mechanical properties of as-spun and crosslinked zein nanofibers.Various letters (A-D, a-c) and Roman letters on the top of the columns suggest the significant difference (p < 0.05).
Figure 3 .
Figure 3. Water contact angle of the as-spun and crosslinked zein nanofibers.Various lowercase letters suggest the significant difference (p < 0.05).
Figure 3 .
Figure 3. Water contact angle of the as-spun and crosslinked zein nanofibers.Various lowercase letters suggest the significant difference (p < 0.05).
Foods 2024 , 18 Figure 4 .
Figure 4. Macroscopic images of the as-spun and crosslinked zein nanofibers after immersion in water for 24 h.
Figure 4 .
Figure 4. Macroscopic images of the as-spun and crosslinked zein nanofibers after immersion in water for 24 h.
Foods 2024, 13 , 1583 9 of 17 Figure 4 .
Figure 4. Macroscopic images of the as-spun and crosslinked zein nanofibers after immersion in water for 24 h.
Figure 5 .
Figure 5. Volume changes in the as-spun and crosslinked zein nanofibers after immersion in water for 24 h.Various lowercase letters on the top of the columns suggest the significant difference (p < 0.05).
Figure 5 .
Figure 5. Volume changes in the as-spun and crosslinked zein nanofibers after immersion in water for 24 h.Various lowercase letters on the top of the columns suggest the significant difference (p < 0.05).
Figure 6 .
Figure 6.FT-IR spectra of the as-spun and crosslinked zein nanofibers.
Figure 7
Figure7presents the possible molecular structure of zein after electrospinning and crosslinking.Acetic acid is a weak organic acid that is capable of breaking hydrophobic interactions, unfolding the β-sheet structure, and causing partial denaturation of the αhelix structure of proteins[65,66].Zein was demonstrated to dissolve in acetic acid, with
Figure 6 .
Figure 6.FT-IR spectra of the as-spun and crosslinked zein nanofibers.
Figure 7 .
Figure 7. Effect of electrospinning and GTA crosslinking on the molecular structure.
Figure 7 .
Figure 7. Effect of electrospinning and GTA crosslinking on the molecular structure.
Figure 8 .
Figure 8.In vitro release profiles of eugenol from non-crosslinked and crosslinked zein nanofibers in PBS.
Figure 8 .
Figure 8.In vitro release profiles of eugenol from non-crosslinked and crosslinked zein nanofibers in PBS.
Figure 9 .
Figure 9. Scanning electron microscopy images of eugenol-loaded zein nanofibers crosslinked with vapor GTA after immersion in PBS for 24 h.
Figure 9 .
Figure 9. Scanning electron microscopy images of eugenol-loaded zein nanofibers crosslinked with vapor GTA after immersion in PBS for 24 h.
Table 2 .
Fitting parameters of the release model for non-crosslinked and crosslinked zein nanofibers loaded with eugenol. | 8,874.6 | 2024-05-01T00:00:00.000 | [
"Materials Science"
] |
Kadsindutalignans A–C: three new dibenzocyclooctadiene lignans from Kasura induta A.C.Sm. and their nitric oxide production inhibitory activities
Abstract Phytochemical study on the methanol extract of the stems and leaves of Kadsura induta led to the isolation of six dibenzocyclooctadiene lignans, including three new compounds named kadsindutalignans A–C (1–3), and three known ones, heteroclitalignan B (4), kadsuphilin C (5) and kadsulignan E (6). Their structures were elucidated based on extensive spectroscopic analyses, including HRESIMS, 1D- (1H NMR and 13C NMR), 2D-NMR (HSQC, HMBC, 1H-1H COSY and NOESY), and experimental circular dichroism (CD) spectra. All the isolates inhibited NO production in LPS-activated RAW264.7 cells with IC50 values in the range from 5.67 ± 0.54 µM to 38.19 ± 2.03 µM, compared to that of the positive control of NG-monomethyl-L-arginine acetate (L-NMMA) with an IC50 value of 8.90 ± 0.48 µM. Interestingly, the new compound 2 showed potential inhibition of NO production with an IC50 value of 5.67 ± 0.54 µM, which was higher than that of the positive control. Graphical Abstract
Introduction
The genus Kadsura (family Schisandraceae) comprises about 16 species, which is usually a climbing plant with separate male and female flowers growing on different plants.In Vietnam, there are six Kadsura species, K. chinensis, K. coccinea, K. hetroclita, K. longipedunculata, K. oblongifolia (Chi 2018), and K. induta A. C. Sm. a new record for the flora of Vietnam in 2012 (Thanh et al. 2012).In the traditional medicine of Vietnam, the leaves and stems of K. induta are used to treat arthritis, gastritis and duodenitis (Thanh et al. 2012).Previous phytochemical study on Kadsura species led to the isolation of lignans and triterpenes (Lu and Chen 2008;Liu et al. 2014;Su et al. 2014;Wang et al. 2021;Zhang et al. 2021;Tram et al. 2022).Notably, dibenzocyclooctadiene lignans, a typical substance of the genus Kasura, have been also found in K. induta and some of them exhibited antiviral and anti-HIV activities (Wenhui et al. 2007(Wenhui et al. , 2009;;Minh et al. 2014).In our continuing efforts toward discovering structurally interesting and biologically significant dibenzocyclooctadiene lignans from Kadsura species, six dibenzocyclooctadiene lignans (1-6) including three new ones (1-3) were isolated from the stem and leaves of this plant.All the isolates were found to inhibit NO production in LPS-activated RAW264.7 cells.Herein, we report the details of the isolation, structural elucidation, and the NO production inhibitory activities of these compounds.159.2, 151.0, 148.5, 141.3, 140.6, 136.2, 135.1, 130.9, 120.5, 120.2, 111.1 and 101.8] as similar to the most lignans found from the genus Kadsura (Li et al. 2006;Lu and Chen 2008;Liu et al. 2014;Minh et al. 2014).Of these, two carbons at d C 111.1 and 101.8 had HSQC cross peaks with the olefinic protons at d H 6.72 and 6.31, respectively.The abovementioned data suggested compound 1 was a dibenzocyclooctadiene lignan having one acetoxy, four methoxy, one dioxygenated methylene, and two hydroxy groups (Lu and Chen 2008;Minh et al. 2014).Detailed analysis of the NMR data (Table S1) showed that compound 1 was quite similar to Kadsuralignan B, which had two acetoxy groups at C-6 and C-9 (Li et al. 2006).In the HMBC spectrum, H-11 (Wang et al. 2021;Zhang et al. 2021) and further confirmed by HMBC spectrum as shown in Figure S1.The configuration of the biphenyl group of 1 was determined to be S, similar to all dibenzocyclooctadiene lignans found from the genus Kadsura, based on the positive cotton effects at 226 nm and negative cotton effects at 255 nm observed on the CD spectrum (Ikeya et al. 1988;Li et al. 2006;Wang et al. 2006;Shen et al. 2007).In the NOESY spectrum, the cross peaks between H-11 (d H 6.31) and H-8 (d H 1.94) indicated twist-boat-chair conformation of cyclooctadiene ring and H-8 adopted axial position (b-configuration) (Wang et al. 2006;Shen et al. 2007).Thus, the NOESY cross peaks between H-8 (d H 1.94) and H-9 (d H 4.84)/H-17 (d H 1.28) suggested the same b-configuration of H-9 and methyl group C-17.On the other hand, the NOESY cross peaks between H-4 (d H 6.72) and H-6 (d H 5.64) demonstrated for equatorial position (a-configuration) of H-6 (Figure S2).Thus, the chemical structure of compound 1 was determined as shown in Figure 1, a new compound named kadsindutalignan A.
Results and discussion
The molecular formula of compound 2 was elucidated as C 31 H 33 NO 11 by the exhibition of HR-ESI-MS ion peak at m/z 596.2136 43.4), between H-4 (d H 6.84) and C-6, and between H-6 (d H 5.88) and C-11.The CD spectrum of 2 showed the Cotton effects at (þ) 228 and (À) 249 nm indicating S-configuration of the biphenyl groups (Li et al. 2006;Ikeya et al. 1988).In the NOESY spectrum, the cross peaks between H-8 (d H 2.26) and H-11 (d H 6.53)/H-9 (d H 5.74)/H-17 (d H 1.40), H-4 (d H 6.84) and H-6 (d H 5.88) demonstrated the same stereochemistry of dibenzo cyclooctadiene moiety between 2 and 1 (Figure S2).Therefore, the chemical structure of compound 2 was determined as shown in Figure 1, a new compound named kadsindutalignan B.
Compound Comparing the NMR data of 3 with those of 1 and 2 showed that the lower carbon chemical shifts of C-3 (d C 148.6) and C-12 (d C 148.7) together with the absence of the dioxygenated methylene signals in 3 suggested two hydroxy groups were at C-3 and C-12 (Li et al. 2006).The other hydroxy group at C-9 confirmed by the 1 H-1 H COSY correlations of H-6/H-7/H-8/H-9, H-7/H-17, H-8/H-18, and by HMBC correlations from H-9 to C-11 and from H-11 to C-9 as shown in Figure S1.Four methoxy groups were at C-1, C-2, C-13 and C-14 determined by HMBC spectrum.The CD spectrum of 3 showed the Cotton effects at (þ) 215 and (À) 250 nm indicating S-configuration of the biphenyl groups (Li et al. 2006 andIkeya et al. 1988).The NOESY cross peaks between H-17 (d H 0.93) and H-4 (d H 6.67)/H-18 (d H 1.17), H-8 (d H 1.92) and H-7 (d H 2.08)/H-9 (d H 4.62)/H-11 (d H 6.49), H-11 and H-9 were observed evidencing these protons were close in proximity, indicating H-7, H-8, and H-9 groups in the same side of the molecule (Figure S2) (Wang et al. 2006;Shen et al. 2007).Thus, the chemical structure of compound 3 was determined as shown in Figure 1, a new compound named kadsindutalignan C.
The dibenzocyclooctadiene lignans have been reported for their potential antiinflammatory activity (Li et al. 2006;Wang et al. 2021).Therefore, compounds 1-6 were evaluated for anti-inflammatory activity by their ability to inhibit NO production in LPS stimulated RAW 264.7 cells (Supporting information).At a concentration of 100 mM, compounds 1-6 did not significantly show cytotoxic activity by MTT assay (data not shown).Therefore, the levels of NO production in the cell medium were measured in the presence of 1-6 at serial diluted concentrations (0-100 mM).As shown in Table S3, all the tested compounds exhibited NO inhibitory activity with IC 50 values in the range from 5.67 ± 0.54 to 38.19 ± 2.03 mM compared to that of the positive control L-NMMA (IC 50 8.90 ± 0.48 mM).Regarding structure activity relationship, our results suggested that the dibenzocyclooctadiene lignans which had a dioxygenated methylene group at C-12 and C-13 may have significant NO inhibitory activity and the compounds having benzyl or pyridinecarboxyl groups showed higher NO inhibitory activity than that of the other tested compounds.This result is well in agreement with those reported in literature (Li et al. 2006;Wang et al. 2021).
General
Optical rotation was measured on a Jasco P2000 polarimeter.The CD spectra were measured on a Chirascan spectrometer (Applied Photophysics, Surrey, UK).The HR-ESI-MS was taken on an Agilent 6530 Accurate Mass Q-TOF LC/MS.The NMR spectra were recorded on a Bruker 500 MHz spectrometer using TMS as an internal Standard.Preparative HPLC were run on an Agilent 1100 system including quaternary pump, autosampler, DAD detector, and preparative HPLC column YMC J'sphere ODS-H80 (4 mm, 20 Â 250 mm).Isocratic mobile phase with the flow rate of 3.0 mL/min was used in pre-HPLC.The compound was monitored at wavelengths of 205, 230, 254 and 280 nm.Flash column chromatographies were performed using silica gel (60, 70-230 mesh and 230-400 mesh, Merck, Darmstadt, Germany) or reversed phase C-18 (YMC, Kyoto, Japan) as stationary phase.Thin layer chromatographies (TLC) were carried out on pre-coated silica gel 60 F 254 and RP-18 F 254S plates.The spots were detected by spraying with aqueous solution of H 2 SO 4 5% followed by heating with a heat gun.
Plant material
The stems and leaves of Kadsura induta A.C.Sm.were collected from Sapa, Lao Cai, Vietnam, in April 2022 and authenticated by Dr Nguyen Van Thanh, Institute of Ecology and Biological Resources, VAST.A voucher specimen (code NCCT-P149) is kept at the Herbarium, Institute of Ecology and Biological Resources, Hanoi, Vietnam. | 2,023 | 2022-10-18T00:00:00.000 | [
"Chemistry",
"Medicine",
"Environmental Science"
] |
Site-specific dose-response relationships for cancer induction from the combined Japanese A-bomb and Hodgkin cohorts for doses relevant to radiotherapy
Background and Purpose Most information on the dose-response of radiation-induced cancer is derived from data on the A-bomb survivors. Since, for radiation protection purposes, the dose span of main interest is between zero and one Gy, the analysis of the A-bomb survivors is usually focused on this range. However, estimates of cancer risk for doses larger than one Gy are becoming more important for radiotherapy patients. Therefore in this work, emphasis is placed on doses relevant for radiotherapy with respect to radiation induced solid cancer. Materials and methods For various organs and tissues the analysis of cancer induction was extended by an attempted combination of the linear-no-threshold model from the A-bomb survivors in the low dose range and the cancer risk data of patients receiving radiotherapy for Hodgkin's disease in the high dose range. The data were fitted using organ equivalent dose (OED) calculated for a group of different dose-response models including a linear model, a model including fractionation, a bell-shaped model and a plateau-dose-response relationship. Results The quality of the applied fits shows that the linear model fits best colon, cervix and skin. All other organs are best fitted by the model including fractionation indicating that the repopulation/repair ability of tissue is neither 0 nor 100% but somewhere in between. Bone and soft tissue sarcoma were fitted well by all the models. In the low dose range beyond 1 Gy sarcoma risk is negligible. For increasing dose, sarcoma risk increases rapidly and reaches a plateau at around 30 Gy. Conclusions In this work OED for various organs was calculated for a linear, a bell-shaped, a plateau and a mixture between a bell-shaped and plateau dose-response relationship for typical treatment plans of Hodgkin's disease patients. The model parameters (α and R) were obtained by a fit of the dose-response relationships to these OED data and to the A-bomb survivors. For any three-dimensional inhomogenous dose distribution, cancer risk can be compared by computing OED using the coefficients obtained in this work.
Introduction
The dose-response relationship for radiation carcinogenesis up to one or two Gy has been quantified in several major analyses of the atomic bomb survivors data. Recent papers have been published, for example, by Preston et al. [1,2] and Walsh et al. [3,4]. This dose range is important for radiation protection purposes where low doses are of particular interest. However, it is also important to know the shape of the doseresponse curve for radiation induced cancer for doses larger than one Gy. In patients who receive radiotherapy, parts of the patient volume can receive high doses and it is therefore of great importance to know the risk for the patient to develop a cancer which could have been caused by the radiation treatment.
There is currently much debate concerning the shape of the dose-response curve for radiation-induced cancer [5][6][7][8][9][10][11][12][13][14][15][16][17]. It is not known whether cancer risk as a function of dose continues to be linear or decreases at high dose due to cell killing or levels off due to, for example, a balance between cell killing and repopulation effects. The work presented here, aims to clarify the dose-response shape for the radiotherapy dose range. In this dose range, the linear-no-threshold model (LNT) derived from the atomic bomb survivors from Hiroshima and Nagasaki can be combined with cancer risk data available from about 30,000 patients with Hodgkin's disease who were irradiated with localized doses of up to around 40 Gy.
The usual method for obtaining empirical dose-response relationships for radiation associated cancer is to perform a case control study. For each patient with a second cancer the location of, and the point dose at the malignancy can be determined. If the dose is obtained also for a number of controls the dose-response relationship for radiation induced cancer can be obtained. The advantage of this method is a direct determination of risk as a function of point dose, the major disadvantage are the large errors involved when determining the location and dose to the origin of the tumor. In this work another method was used by assuming certain shapes of dose-response curves based on model assumptions. The free model parameters for each organ are adjusted in two steps. First, the models have to reproduce in the limit of low dose the risk coefficients of the A-bomb survivors. Second, by applying the models to typical dosevolume histograms of treated patients they have to predict the corresponding observed second cancer risk which was obtained from epidemiological studies. An advantage of this method is that no point dose estimates at the tumor origin are necessary; a disadvantage is that the obtained dose-response curve is dependent on the a priori model.
The aim of this paper is to attempt a combination of the LNT model derived from the atomic bomb survivors and cancer risk data from a Hodgkin cohort treated with radiotherapy, in order to determine possible dose-response relationships for radiation associated site specific solid cancers for radiotherapy doses. This work is an extension of recently published results on possible dose-response relationships for radiation induced solid cancers for all organs combined [11,14,18]. The main difference to previous work is the use of a more realistic dose-response relationship including fractionation effects which is more suitable for radiotherapy applications. Many problems and uncertainties are involved in combing these two data-sets. However, since very little is currently known about the shape of dose-response relationships for radiation-induced cancer in the radiotherapy dose range, this approach could be regarded as an attempt to acquire more information in this area.
Cancer risk from the Atomic bomb survivor data
The excess absolute risk in a small volume element of an organ (EAR) is factorized into a function of dose RED(D) and a modifying function that depends on the variables age at exposure (agex) and age attained (agea): where RED (risk equivalent dose) is the dose-response relationship for radiation induced cancer in units of dose and b is the initial slope, which is the slope of the dose-response curve at low dose. The modifying function μ contains the population dependent variables: In this form the fit parameters are gender-averaged and centered at an age at exposure of 30 years and an attained age of 70 years. The initial slopes b EAR and the age modifying parameters g e and g a for a Japanese population and for different sites are taken from Preston et al. [1] and are listed in Table 1.
In this work it is intended to combine the Japanese A-bomb survivor data with secondary cancer data from of Hodgkin's patients from a Western population. This raises the issue of risk transfer between Japanese and Western populations. In this work we transfer risk according to ICRP 103 [19] by establishing a weighting of ERR (excess relative risk) and EAR that provides a reasonable basis for generalizing across populations with different baseline risks. For this purpose ERR:EAR weights of 0:100% were Table 1 Initial slopes b (in brackets 95% confidence interval) of the A-bomb survivors for age at exposure of 30 and attained age of 70 years and age modifying parameters g e and g a for different sites. The values for the Japanese population (b EAR -Japan) were taken from the analysis of Preston et al. [1]. Risk was transferred to a Western population (b EAR -UK) by establishing a ERR-EAR weighting according to ICRP 103 [19]. *excess cases per 10'000 PY Gy, †initial slope from colon, ◇ initial slope from oral cavity, +age dependence from stomach, ‡ age dependence from all solid assigned for breast, 100:0% for thyroid and skin, 30:70% for lung, and 50:50% for all others [19]. The risk ratios ERR:EAR from [20] are listed in Table 2 for the Japanese and UK population normalized to the initial slopes b EAR of a Japanese population. The ratio of the ERR:EAR weighted initial slope for a UK population and a Japanese population is given in the last column of Table 2. This ratio was used to transfer b EAR of the Japanese population to a UK population listed in the second column of Table 1.
Application of cancer risk models to radiotherapy patients A word of caution is necessary here. EAR as defined by Eq. 1 is the mathematically modeled excess absolute risk in a small volume element of an organ or tissue and must be distinguished from the usually used epidemiologically obtained excess absolute risk for a whole organ EAR org . Although this notation might appear confusing we followed this approach as it was previously used by other authors [16,17]. If the dose-volume-histogram V(d) in an organ of interest is known, excess absolute risk in that organ can be obtained with Eq. 1 by a convolution of the dose-volume histogram with EAR: where V T is the total organ volume and the sum is taken over all bins of the dosevolume histogram V(D). For a completely homogenously irradiated organ with a dose D hom excess absolute risk is simply EAR org = EAR(D hom ). Table 2 Transfer of risks between the Japanese and the UK population using weighting between a generalized ERR and EAR model according to ICRP 103 [19] and UNSCEAR [20]. All values are normalized to EAR of the Japanese population. The weighting w between the model is taken from ICRP103 [19]. † transfer between populations and models taken from "all other solids" of UNSCEAR [20] Schneider et al. If risk estimates are applied to radiotherapy patients it is usually of interest to know the advantage of a treatment plan A relative to another treatment plan B with respect to cancer induction in one organ and one patient (same gender, age at exposure and age attained). It is therefore necessary to evaluate the risk ratio: where we introduced organ equivalent dose (OED) [11] which is a dose-response (RED) weighted dose variable averaged over the whole organ volume: It becomes instantly clear that risk ratios for different radiotherapy treatment plans are equivalent to OED ratios which can be simply determined on the basis of an organ specific dose-response relationship (RED) and dose volume histogram (V(D)). OED values are independent of the initial slope b and the modifying function μ and are thus keeping the necessary variables and the corresponding uncertainties at a minimum.
It should be noted here that for highly inhomogeneous dose distributions, cancer risk is proportional to average dose only for a linear dose-response relationship. For any other dose-response relationship, cancer risk is proportional to OED.
Dose-response models for carcinoma induction
Several different dose-response relationships for carcinoma induction are considered here. The first is a linear response over the whole dose range: The second is a recently developed mechanistic model which accounts for cell killing and fractionation effects and is for carcinoma induction of the form [21]: where it assumed that the tissue is irradiated with a fractionated treatment schedule of equal dose fractions d up to a dose D. The number of cells is reduced by cell killing which is proportional to a' and is defined using the linear quadratic model where D T and d T is the prescribed dose to the target volume with the corresponding fractionation dose, respectively. For analyzing the Hodgkin data from Dores et al. [22] we used for D T = 40 Gy and for d T = 2 Gy. The repopulation/repair parameter R characterizes the repopulation/repair-ability of the tissue between two dose fractions and is 0 if no and 1 if full repopulation/repair occurs. It is assumed here an a/b = 3 Gy for all tissues, since analysis of breast cancer data has shown that the dose-response model is robust with variations in a/b [23].
Since a dose-response model as described by Eq. 7 is based on various assumptions and thus related to uncertainties it was decided to include two limiting cases. The first one, commonly named bell-shaped dose-response curve, is defined by completely neglecting any repopulation/repair effect and thus fractionation and is derived by taking Eq. 7 in the limit of R = 0: Although the case R = 0 represents an acute dose exposure, repopulation/repair effects are certainly important. However, any repopulated cell is not irradiated (as long as the time scale of irradiation is small) and thus, in the context of carcinogenesis, repopulation/repair effects are in this case irrelevant.
The second limiting case is a dose-response relationship in case of full repopulation/ repair, and is derived by taking Eq. 7 in the limit of R = 1: Organ equivalent dose for the dose-response curves defined by Eqs. 6,7,9 and 10 become in the limit of small dose: Hence OED is, in the case of a homogenous distribution of small dose, average absorbed organ dose D; -. Thus in the limit of small dose all proposed dose-response relationships approach the LNT model and the initial slope b can be obtained from the most recent data for solid cancer incidence. Here the data for a follow-up period from 1958 to 1998 was used from a publication of Preston et al. [1].
Dose-response models for sarcoma induction
The excess risk of sarcomas observed from the study of the A-bomb survivors [1] is an order of magnitude smaller than for carcinomas. Data from radiotherapy patients indicate however that sarcoma induction at high dose is at a comparable magnitude than carcinoma induction. Therefore it is not appropriate to assume a pure linear doseresponse relationship for sarcoma induction. A recently developed sarcoma induction model was used which accounts for cell killing and fractionation effects and is based on the assumption that stem cells remain quiescent until external stimuli like ionizing radiation trigger re-entry into the cell cycle. The corresponding mechanistic model which accounts also for cell killing and fractionation effects is of the form [21]: where is assumed that the tissue is irradiated with a fractionated treatment schedule of equal dose fractions d up to a dose D and the parameters have the same meaning than in Eq. 7. Since a dose-response model as described by Eq. 12 is based on various assumptions and thus related to uncertainties it was decided, similar to the carcinoma case, to study three cases. The first one is defined by looking at minimal repopulation/ repair effects by using Eq. 12 with a fixed R = 0.1. The second one is defined by looking at intermediate repopulation/repair effects by using Eq. 12 with a fixed R = 0.5. The third case is a dose-response relationship in case of full repopulation/repair, and is derived by taking Eq. 12 in the limit of R = 1: Organ equivalent dose for the dose-response curves for sarcoma induction defined by Eqs. 12 and 13 become, in the limit of small dose: Sarcoma risk from a homogenous distribution of small dose is proportional to the cube of dose and thus results in a much lower cancer risk than expected from a linear model. This is consistent with the observations of the A-bomb survivors.
Modeling of the Hodgkin's patients
Cancer risk is only proportional to average organ dose as long as the dose-response curve is linear. At high dose it could be that the dose-response relationship is non-linear and as a consequence, OED replaces average dose to quantify radiation induced cancer. In order to calculate OED in radiotherapy patients, information on the threedimensional dose distribution is necessary. This information is usually not provided in epidemiological studies on second cancers after radiotherapy. However, in Hodgkin's patients the three-dimensional dose distribution can be reconstructed.
For this purpose data on secondary cancer incidence rates in various organs for Hodgkin's patients treated with radiation were included in this analysis. Data on Hodgkin's patients treated with radiation seem to be ideal for an attempted combination with the A-bomb data. These patients were treated at a relatively young age, with curative intent and hence secondary cancer incidence rates for various organs are known with a good degree of precision. Since the treatment of Hodgkin's disease with radiotherapy has been highly successful in the past, the treatment techniques have not been modified very much over the last 30 years. This can be verified, for example, by a comparison of the treatment planning techniques used from 1960 to 1970 [24] with those used from 1980 until 1990 [25]. Additionally, the therapy protocols do not differ very much between the institutions that apply this form of treatment. These factors make it possible to reconstruct a statistically averaged OED for each dose-response model RED (D), which is characteristic for a large patient collective of Hodgkin's disease patients.
The overall risk of selected second malignancies of 32,591 Hodgkin's patients after radiotherapy has been quantified by Dores et al. [22]. They found, for all solid cancers after the application of radiotherapy as the only treatment, an excess absolute risk of 33.1 per 10,000 patients per year. The site-specific excess risks are listed in Table 3.
The total number of person years in these studies was 92,039 with a mean patient age at diagnosis of 37 years. The mean follow-up time of the Hodgkin's patients was 8 years. The mean age at diagnosis (agex = 37) and the mean attained age (agea = 45) was then used with the temporal patterns of the atomic bomb data (Eq. 2) to obtain the site specific risks at agex = 30 and agea = 70 years for the Dores data (Table 3). For bladder cancer the temporal pattern could be determined only with a large error which results in a variation of the corresponding EAR org by more than one order of magnitude. Therefore it was decided to apply for bladder cancer the temporal pattern for all solid cancers.
Typical treatment techniques for Hodgkin's disease radiotherapy were reconstructed in an Alderson Rando Phantom with a 200 ml breast attachment. Treatment planning was performed on the basis of the review by Hoppe [25] and the German Hodgkin disease study protocols http://www.ghsg.org. We used for treatment planning the Eclipse External Beam Planning system version 8.6 (Varian Oncology Systems, Palo Alto, CA) using the AAA-algorithm (version 8.6.14) with corrected dose distributions for head-, phantom-and collimator-scatter. Three different treatment plans were computed which included a mantle field, an inverted-Y field and a para-aortic field. All plans were calculated with 6 MV photons and consisted of two opposed fields. The technique for shaping large fields included divergent lead blocks. Treatment was performed at a distance of 100 cm (SSD). Anterior-posterior (ap/pa) opposed field treatment techniques were applied to insure dose homogeneity.
The dose-volume histograms of the organs and tissues (exclusive of bone and soft tissue) which are listed in Table 1 were converted into OED according to the doseresponse relationships for carcinomas (Eqs. 6, 7, 9 and 10). A statistically averaged OED was then obtained by combining the OED from different plans with respect to Patients were primarily treated for Hodgkin's disease with radiotherapy. The data for agex = 30 and agea = 70 years were converted using the temporal patterns of the atomic bomb data (Eq. 2) with the coefficients listed in Table 2.
the statistical weight of the involvement of the individual lymph nodes [26]. The same was executed with bone and soft tissue using the sarcoma dose-response relationships from Eqs. 12 and 13. Here it is assumed that radiation causes in bone and soft tissue exclusively sarcomas, in all other organs which are listed in Table 1 carcinomas.
Combined fit of A-bomb survivor and Hodgkin's patients
Since the dose distribution in a Hodgkin's patient is highly inhomogenous and the dose-response relationships as described by Eqs. 7, 9, 10, 12 and 13 are non-linear, it is not appropriate to apply a straight forward fit to the data. An iterative fitting procedure needs to be used instead. For this purpose, as described in the previous section, the dose-volume histograms for the different organs of interest were converted into OED for given model parameters a and R. The initial slope b was taken from Table 1 for carcinoma induction and kept fix. For sarcoma dose-response curves the parameters b and a were varied and R was kept fix at 0.1, 0.5 and 1, respectively. The fitted EAR values were compared to the original data. The aand R-values, and aand b-values were fitted iteratively by minimizing χ 2 for carcinoma and sarcoma induction, respectively where the sum is taken over all bins of the dose volume histogram of the specific organ. The coefficient of variation (CV) was calculated to estimate the quality of the fit: A fit was accepted as significant good when CV < 0.05. The linear model from Eq. 6 was optimized by allowing a variation of the initial slope b in the 95% confidence interval of the A-bomb survivor data ( Table 1).
The procedure described above was slightly varied to fit all solid cancers, since for all solid cancers combined statistically significant A-bomb data up to approximately 5 Gy are available. Thus the avalue for all solid cancers combined could be obtained using the A-bomb data and was fixed at 0.089 [18].
Results
The results of the parameter fits are listed in Table 4 for carcinoma induction and in Table 5 for sarcoma induction. Not all dose-response models could fit the data well (CV > 0.05). This was indicated by "nc" in the tables. The Figures show the fitted dose response models for the different organs and tissues. In Figures 1 (all solid), 2 (female breast), 3 (lung), 4 (colon), 5 (mouth and pharynx), 6 (stomach), 7 (small intestine), 8 (liver), 9 (cervix), 10 (bladder), 11 (skin), 12 (brain and CNS) and 13 (salivary glands) carcinoma induction is plotted using the linear model indicated by the black line, the full model marked by the red line, the model neglecting fractionation and thus repopulation with R = 0 (sometimes called a bell-shaped dose-response) labeled by the green line and finally the model describing full repopulation between dose fractions with R = 1 (sometimes called a plateau dose-response) marked by the blue line. All dose-response models are plotted for a age at diagnosis of 30 and an attained age of 70 years, but they can easily converted to other ages by using the temporal patterns described by Eq. 2 with the parameters listed in Table 1.
From the analysis excluded were Esophagus and Thyroid, since theses organs were covered by a limited dose range of 30-55 Gy and 44-46 Gy, respectively. Figure 1 shows the dose-response models fitted to the whole body structure (the complete Alderson phantom). The initial slope b of the A-bomb survivor data is that for all solid tumors. The linear model was not converging, all other models could be fitted. The fitted variables b, a and R are listed for each organ and each model. In addition the coefficient of variation (CV) is given. A fit corresponding to a CV > 0.05 was denoted as not converging (nc). * in Gy -1 † in (10,000 PY Gy) -1 Table 5 Results of the fit to the Hodgkin data for the different dose-response models for sarcoma induction.
Discussion
Site Low repopulation R = 0.1 (Eq.10) Intermediate repopulation R = 0.5 (Eq.10) The bell-shaped, plateau and full dose-response relationships are depicted by the green, blue and red line, respectively. The magenta curve represents the results from a fits to case control studies [23]. The fits are presented for age at exposure of 30 years and attained age of 70 years.
Figure 3
Plot of excess absolute carcinoma risk for lung cancer per 10,000 persons per year as a function of point dose in the organ. The bell-shaped, plateau and full dose-response relationships are depicted by the green, blue and red line, respectively. The magenta curve represents the results from a fits to case control studies [27]. The fits are presented for age at exposure of 30 years and attained age of 70 years. There is strong variation for intermediate dose levels around 10 Gy. It should be noted here that for inhomogenous dose distributions a dose response relationship for the whole body should be used with extreme care, as two completely different distributions of dose in the organs could result in the same OED for the whole body. The doseresponse relationships for the whole body obtained in this report should be therefore used only for Hodgkin's patients treated with mantle fields. In contrast the doseresponse relationships for single organs can be used generally for analyzing any dose distribution.
The quality of the applied fits shows that the linear model fits best colon, cervix and skin. All other organs are best fitted by the full model indicating that the repopulation/ repair ability of tissue is neither 0 nor 100% but somewhere in between. It seems that for most organs at large doses the dose-response relationship is flattening or decreasing.
It should be noted that Mouth and Pharynx was covered by a limited dose range from 16-45 Gy. Thus the resulting dose-response relationship for dose levels outside that range should be used with care. For rectum none of the dose response models could predict the Hodgkin data. The linear model fitted colon, liver, cervix, skin and Brain/CNS. The model with full repopulation/repair did not fit rectum, cervix and skin.
Bone and soft tissue sarcoma were fitted by all the models well. In the low dose range beyond 1 Gy sarcoma risk is negligible. For increasing dose sarcoma risk increases rapidly and reaches a plateau at around 30 Gy. This is in agreement with observations which demonstrate small sarcoma risk at low dose from the A-bomb survivors and significant sarcoma risk in the high dose regions of radiotherapy patients.
The results of this study can be compared to EAR-modeling based on case control studies. In two recent publications excess absolute risk of breast and lung cancer was fitted to the model including fractionation [23,27]. For breast cancer the obtained model parameters where a = 0.067 Gy -1 and R = 0.62 and for lung a = 0.061 Gy -1 and R = 0.84. The corresponding dose-response curves are plotted in Figures 2 and 3 as the magenta lines for comparison with the results obtained in this study. If it is considered that the dose-response relationships were derived from two completely different data sets with two different methods the agreement is satisfying.
The epidemiological data from the Atomic-bomb survivors and the Hodgkin's patients are associated with large errors as discussed below. Nevertheless some basic conclusion can be tentatively drawn from the analysis presented here.
Increased risks of solid cancers after Hodgkin's disease have been generally attributed to radiotherapy. An important question is whether chemotherapy for Hodgkin's disease also adds to the solid cancer risk, and if so, at which sites. If chemotherapy indeed affects induction of solid tumors, one would expect that patients receiving combined modality treatment would have a greater relative risk than patients treated solely with radiotherapy. In several studies, no increased risk of solid cancers overall was observed after the application of chemotherapy alone. Dores et al. [22] calculated both the risk after radiotherapy alone and the solid cancer risk after combined modality therapy and found an excess absolute risk of 39 and 43 per 10,000 patients per year, respectively. As a consequence, the difference in risk between combined modality treatment and radiotherapy alone (4 per 10,000 patients per year) can be tentatively attributed to either chemotherapy or a genetic susceptibility of the Hodgkin patient population with regard to cancer or both. The risk difference accounts approximately for 10% of all solid cancers and can be regarded as not substantial when compared to other errors involved for risk estimation and is also not statistically significant (see Table 1).
It is well known that genetic susceptibility underlies Hodgkin's disease [28]. It is not clear whether this genetic susceptibility would also affect the development of other cancers. There is the possibility of a cancer diathesis, the prospect that, for some reasons related to genetic makeup, a person who developed one cancer has an inherently increased risk of developing another. However, such cancer susceptibility would result in a minimal excess cancer incidence compared to the incidence of radiation related tumors, since such an excess cancer incidence of solid tumors should also be seen in Hodgkin's patients after treatment with chemotherapy alone. However, there is no statistically significant increase for all solid tumors combined. Therefore, such an effect will be hidden in the 95% confidence interval of the observed cancer incidence after chemotherapy.
In this work EAR has been used to quantify radiation-induced cancer. EAR is used here, since the risk calculations of the Hodgkin's cohort are based on extremely inhomogenous dose distributions. It is assumed that the total absolute risk in an organ is the volume weighted sum of the risks of the partial volumes which are irradiated homogenously. Currently there is no available method for obtaining analogous organ risks using ERR without modeling the underlying baseline risk. Shuryak et al. [16,17] recently published a model including the description of typical background carcinogenesis in addition to radiation induced cancer. They could thus obtain a microscopic ERR model. The advantage of their model in comparison to our approach is that they could determine directly ERR the disadvantage is a larger number of adjustable parameters (three more parameters) which must be introduced to model background cancer risk.
The A-bomb survivor data used in this work were taken from a recent report from Preston et al. [1]. Preston determined the initial slope of the dose-response relationship by using an RBE of 10 for the neutrons. Recent research by Sasaki et al. however indicated that the neutron RBE might by larger and varying with dose [29]. It could be important to determine site specific cancer induction also for a dose varying RBE similar to the work which was done for all solid cancers combined [18].
Conclusions
A comparison of dose distributions in humans, for example in radiotherapy treatment planning, with regard to cancer incidence or mortality can be performed by computing OED, which can be based on any dose-response relationship. In this work OED for various organs was calculated for a linear, a bell-shaped, a plateau and a mixture between a bell-shaped and plateau dose-response relationship for typical treatment plans of Hodgkin's disease patients. The model parameters (a and R) were obtained by a fit of the dose-response relationships to these OED data and to the A-bomb survivors. For any three-dimensional inhomogenous dose distribution, cancer risk can be compared by computing OED using the coefficients obtained in this work.
For absolute risk estimates, EAR org can be determined by taking additionally the initial slope b from Table 1 and multiplying it with the population-dependent modifying function using the coefficients of Table 1. However, absolute risk estimates must be viewed with care, since the errors involved are large. | 7,196 | 2011-07-26T00:00:00.000 | [
"Medicine",
"Physics"
] |
Modelling the Response of FOXO Transcription Factors to Multiple Post-Translational Modifications Made by Ageing-Related Signalling Pathways
FOXO transcription factors are an important, conserved family of regulators of cellular processes including metabolism, cell-cycle progression, apoptosis and stress resistance. They are required for the efficacy of several of the genetic interventions that modulate lifespan. FOXO activity is regulated by multiple post-translational modifications (PTMs) that affect its subcellular localization, half-life, DNA binding and transcriptional activity. Here, we show how a mathematical modelling approach can be used to simulate the effects, singly and in combination, of these PTMs. Our model is implemented using the Systems Biology Markup Language (SBML), generated by an ancillary program and simulated in a stochastic framework. The use of the ancillary program to generate the SBML is necessary because the possibility that many regulatory PTMs may be added, each independently of the others, means that a large number of chemically distinct forms of the FOXO molecule must be taken into account, and the program is used to generate them. Although the model does not yet include detailed representations of events upstream and downstream of FOXO, we show how it can qualitatively, and in some cases quantitatively, reproduce the known effects of certain treatments that induce various single and multiple PTMs, and allows for a complex spatiotemporal interplay of effects due to the activation of multiple PTM-inducing treatments. Thus, it provides an important framework to integrate current knowledge about the behaviour of FOXO. The approach should be generally applicable to other proteins experiencing multiple regulations.
Introduction
The FOXO (Forkhead Box, type O) family of transcription factors (TFs) cause changes in gene expression to implement a cellular stress response programme, and an increase in their activity is a consequence of many of the genetic interventions that extend lifespan in model organisms. FOXO transcription factors are active when an organism is fasting, whereas feeding causes the activation of the Insulin-and Insulin-like Signalling (IIS) pathway, and in particular of Akt/PKB, which negatively regulates FOXO TFs by phosphorylations that cause translocation to the cytoplasm, thus suppressing their transcriptional activity. However, the location, synthesis/degradation and transcriptional behaviour can all be modified by signals from a variety of cellular signalling pathways, which are integrated by FOXO as a net response to the total set of modifications (phosphorylations, acetylations and ubiquitinations), that it undergoes. Integrating this knowledge within an extensible framework would provide a valuable means to organise the information and to explore the role of the modifications.
In Figure 1A, we show a multiple sequence alignment of FOXO family members from human (Homo sapiens), mouse (Mus musculus), Xenopus leavis, Dario rerio, Drosophila and C elegans. There are some regions of conservation throughout, especially in the DNA-binding domain (expanded region). Some of the modifiable sites in this domain are highlighted, along with an indication of the enzyme that modifies them. It is clear that there a great many regulators of FOXO (among them Akt, CDK2, CK1, JNK, IKK, AMPK, SIRT1 and CBP/P300), and a great many modifiable sites. The multiple sequence alignment is available as supplementary dataset S1. In several cases, the modifiable amino acid contacts DNA directly, making it obvious how a covalent change could alter DNA-binding behaviour; this is shown in Figure 1B, a representation of the structure of Human FOXO3 bound to DNA (PDB accession 2UZK [39]); more recently, the effect of acetylation on binding has been shown directly [40].
FOXO is implicated in many of the interventions that extend lifespan in worms, flies and mice, particularly genetic interventions in the IIS pathway and some protocols of dietary restriction (DR). Overexpression of FOXO itself extends lifespan moderately in flies [41,42], as do the overexpression of the regulators JNK in flies [43] and worms [44], MST in worms [45], AMPK in worms [10], and sir2.1 in worms [46]. Lifespan is extended by many loss-offunction mutations of daf-2, (the C elegans insulin-like receptor) and other molecules in the IIS pathway in a FOXO/daf-16 dependent manner. Similarly, lifespan is extended by DR in ways that, depending on the protocol, may be dependent on FOXO (e.g sDR, [10]), partially dependent on it (IF, [47]) or independent of it [48]. See Greer et al. [49] for a recent review of methods of DR in C elegans. Certain FOXO hapolotypes [50][51][52] and SNPs in the FOXO3 gene have been found to be associated with human longevity [53], although interestingly the SNPs are all found in intronic regions, and do not seem to introduce or remove sequence features related to splicing, so it is not clear how they have their effect.
Many reviews of the behaviour and role of FOXO are available, related to its PTMs [54], its role in ageing and the maintenance of homeostasis [55,56], metabolism [2], cancer and apoptosis [57], the immune system [58], cancer [59], ageing and cancer [1], its structure/function relationships [60], interaction partners [61], and subcellular localization and transcription [62]. Karpac and Jasper [63] review the interaction between IIS and stress signalling through JNK, while van der Horst et al [64] and Levine et al [65] review the interaction with stress signalling more generally, and compare and contrast the roles of FOXO and p53. What is clear is that the interpretation of the multiple complex regulations and interactions will be much assisted by a quantitative modelling approach.
To our knowledge FOXO has not been previously the subject of computational modeling, though other transcription factors have, for example the SMAD2/SMAD4 complex [66], MSN2/4 [67,68], NF-kB [69], p53 [70,71]. The forkhead homologue from fission and budding yeast, FKH2, has been treated as part of a wide survey of regulation of the cell cycle by coupled transcription and phophorylation [72], though its regulation was not modelled in detail. Several of the above references include the modelling of nuclear-cytoplasmic shuttling, which is a key regulatory mechanism of many TFs, including FOXO.
It is clear that FOXO has important roles in ageing, nutrient response and stress response pathways; and also that the large number of regulations that it undergoes and their different and often contradictory effects, make it difficult to understand the system. Hence, we believe that it is a good candidate for computational modelling, which, by integrating information from diverse sources and making possible a quantitative description, has the ability to resolve some of these difficulties [73]. We have produced a model in which the level of various modifiers of FOXO (activities of modifying enzymes) are set as inputs, and in which the behaviour of the FOXO in terms of localization, protein level and the transcriptional response are the outputs. Because of the complexity of the interaction of FOXO PTMs, this already produces interesting behaviour, and provides the core of future models in which the concentrations and activities of the modifying enzymes will themselves be dynamic variables, set by extracellular signals or by feedback regulation due, directly or indirectly, to genes regulated by FOXO, and the transcriptional output will include the response of multiple genes.
Methods
There are two problems to be faced in modelling the regulation of FOXO. Firstly, it is necessary to choose a model structure that simplifies the large (and still not fully elucidated) set of PTMs, while still retaining enough information to describe all the relevant regulations. Secondly, appropriate parameters must be set for the reactions, even though detailed kinetic measurements are often not available.
The first problem is a consequence mainly of the large number of independent PTMs, which lead to a need to include many species to describe all the combinations. This can be appreciated by considering Figure 2A, which shows a fairly complete (but not exhaustive) list of the known modifiable residues of mammalian FOXOs [54]. If the model architecture mirrored this directly, then, (since each of the N sites can have its covalent modification present or absent independent of all the others), it would be necessary to include a species in the model to represent each combination of PTMs, leading to 2 N FOXO species (chemically different molecules); and, in addition, each could be present in the three compartments of the cytoplasm, nucleus or bound to DNA (transcriptionally active), making 362 N species in total. Each of these species could be present in a copy number of 0, 1, 2, … at each time point in the simulation. For the set of N = 27 modifications in Figure 2A, this would result in approximately 10 8 species, an excessively large number, even before all the reactions to interconvert them are considered. However, simplifications can be made by grouping all PTMs that produce a similar effect on the behaviour of FOXO. This usually means all those made by a particular modifying enzyme or enzymes. So, for example, to a good approximation the three residues modified by Akt (T32, S253 and S315 in Human FOXO3) must all be phosphorylated before the effects are seen, which are to enhance export of [54]. NES = Nuclear Export Sequence; NLS = Nuclear Localization Sequence.The Forkhead domain is expanded and conserved regulatory sites are indicated. The alignment was made with T-Coffee [136] and the graphic made with Jalview [137]. B: FOXO3 bound to DNA; 2UZK [39]. The graphic was made with PyMOL (DeLano Scientific, Palo Alto, CA, USA). doi:10.1371/journal.pone.0011092.g001 FOXO from the nucleus to the cytoplasm [74][75][76], to change its affinity for DNA [77] and to increase its degradation rate [78]. Hence, eight combinations of phosphorylations of 3 residues can be modelled as a single collective phosphorylation, which will be referred to as type Pa ( Figure 2B), either present (corresponding to all 3 individual phosphorylations) or absent (corresponding to any other combination); this reduces the resulting number of FOXO species from eight to two, i.e. by a factor of four. That all three residues must be modified is shown by the results of mutagenesis [74] and by the fact that FOXO6, which lacks one of them, is not regulated by nuclear-cytoplasmic shuttling [79]. In addition, the same three residues are modified by SGK [80] as well as Akt with similar effects; although the detailed kinetics will be different, for modelling purposes we approximate them as the same, i.e. all reactions are duplicated with Akt replaced by SGK (these two enzymes are in any case difficult to distinguish [62]) and the rates the same. Analogously, the three Akt isoforms are treated as a single Akt enzyme.
Other PTMs are simplified similarly ( Figure 2B). Phosphorylation by CDK2 [81] and CK1 [82], although at different residues, both lead to export to the cytoplasm and so are treated as a single modification. Five phosphorylations by AMPK [11] are considered as a single modification leading to increased transcription. IKK modifies only a single residue, so needs no simplification. The multiple modifications by JNK [83] and MST1/STK4 [45] are again combined into one, and any differences between the two enzymes are ignored. This is reasonable because both enzymes are fully and quickly activated by fairly low concentrations of H 2 O 2 (below 100mM) (Lehtinen Figure 1A, Essers Figure 1A); they then phosphorylate FOXO within 30 mins (Lehtinen Figure 3A, Essers Figure 3A), which results in its nuclear localization (Lehtinen Figure 4EF, Essers Figure 4A) and transcriptional upregulation (Lehtinen Figure 5A, Essers Figure 4B). Four acetylations [84] are combined into one modification, as are two monubiquitinations [85]. Unlike the other PTMs, it is possible that there could be competition between the processes of acetylation and monoubiquitination, since both PTMs are attached to Lys. However, the two modifications are treated as independent in the model. Polyubiquitination, leading to degradation, is treated as a one-step process, although in fact a polyubiquitin chain must be built up stepwise; this can be modelled in detail [86] but this degree of sophistication was not included in the current FOXO model. With these simplifications, there are effectively only a maximum of 8 modifiers and 3 locations to consider, leading to a maximum of 768 chemical species of FOXO if all PTM-addition reactions are active. It is also of interest to know the total amount of FOXO having a particular PTM, since this typically corresponds to what can be measured via e.g. immunoprecipitation or a Western blot; this is the sum of the copy numbers of all the species that have that modification present, irrespective of the other PTMs that they may contain. Therefore counter variables are also introduced in the model, which are not involved in any reactions but simply sum appropriate subsets of the FOXO species.
We now turn to the second problem, that of addressing the kinetics of the reactions in which these species are involved. First, consider the reactions involved ( Figure 3). A particular FOXO species may be phosphorylated by any of the five types of kinase in Figure 2B and desphosphorylated by the corresponding phosphatase ( Figure 3A). In this model, only a single phosphatase is included that may catalyse any dephosphorylation reaction. FOXO may also be acetylated by the CBP/P300 enzyme and deacetylated by SIRT1 ( Figure 3B), monoubiquitinated and deubiquitinated by USP7 ( Figure 3C), polyubiquitinated by the SCF complex (or MDM2) and the resulting polyubiquitinated form degraded ( Figure 3D), translocated between cytoplasm, nucleus and a DNA-bound state ( Figure 3E), and synthesised ( Figure 3F). It may not be necessary to consider all these reactions as being active simultaneously (indeed in this paper we do not consider more than two or three different reactions being active at any time), but the structure of the model allows them all to be so. When the FOXO is bound to DNA, it stimulates the production of the mRNA for each gene that it regulates; we assume that there is also a ''basal'' production of this mRNA, a translocation of the mRNA to the cytoplasm, where it may be degraded or translated to protein, which may itself be degraded. Only in its own synthesis reaction does the FOXO represent a single species in the model, since it is synthesised in the cytoplasmic compartment without any PTMs; otherwise it may already have any combination of the PTMs applied by any of the other reactions, and may also be present in any compartment. Hence, up to several several thousand reactions interconverting FOXO species must be taken into consideration. However, the reactions change only one of the PTMs at a given step, i.e. each FOXO species can be converted into only nine others. These pre-existing modifications will, however, in general affect the rate constants for the forward and backward steps. Usually, the way that multiple modifications interact has not been measured in detail. In the model described in this paper, the process has been treated simply by multiplying the effects that each of the PTMs has in isolation on the relevant rates. These effects are therefore modelled as parameters, which will be referred to as Multiplicative Factors (MFs) subsequently, that multiply the ''basal'' rate, For example, for a species with both Akt and JNK phosphorylations, the effect on translocation is the basal rate6MF due to Akt6MF due to JNK. This is clearly likely to be a fairly gross approximation, but it is necessary to enable the model to be parameterized; its justification will thus be in the empirical validity and usefulness of the model. Of course, as most detailed measurements are made, it will be possible to modify the model to incorporate them.
For the transcription reactions represented in Figure 3G, the number of reactions in the model would also be multiplied by the number of FOXO-regulated genes. Currently, though, transcription is considered to be that of only two genes, SOD2 (MnSOD) and InsR, with identical kinetics. Transcription results are shown only for SOD2.
For many of the regulatory reactions included in the model, there are not explicit time courses, or only a few points. They are modelled by mass-action kinetic equations of the following form: In these equations, A and B represent FOXO species with different PTMs, and the forward and back reactions occur at rates v 1 and v 2 , catalysed by enzymes E a and E b . The experimental information often amounts to a knowledge of (A/B) eq and some idea of the relaxation time, from an experiment when one of A and B is initially at very low concentration, then the enzyme that produces it is activated. From the rate equations, first with the time derivative equal to zero, and second with one of A or B (near)-zero, it is simple to see that this determines pseudo-first order rate constants, k9 a = k a E a and k9 b = k b E b . Then, we choose appropriate values for the copy number of E a and E b in the range of 10 4 to 10 5 when fully activated (10 5 is typical of Akt when stimulated by insulin [87]); and set the second order rate constants k a and k b accordingly, as shown in Table 1. Transport processes are modelled with first-order kinetics: Where A and B represent FOXO species in different compartments, and the first order rate constants k tr1 and k tr2 can be found from the relaxation kinetics. The rate constants for transport processes are given in Table 2.
We have assumed that at the levels of ''high active kinase'' (10 4 molecules or more) the rates are high (half-life of a few minutes) compared to processes like the relocation/degradation/transcriptional output of FOXO. This is true of Akt-phosphorylation [27] and is typical of phosphorylation processes, (though perhaps less so of acetylation).
For the transcriptional outputs, the relevant reactions are Where w is null, g mn is the mRNA in the nucleus of gene g (currently either SOD2 or InsR), g mc is the cytoplasmic mRNA of gene g, g pc is the cytoplasmic protein corresponding to gene g, and A is a DNA-bound FOXO species. The first-or zero-order rate constants are given in Table 2.
A python script was written to generate the species and reactions: this script is available as supplementary Dataset S2. To comply with modelling standards and simplify exchange and reuse, SBML [88] is used to represent the model; but, as a further simplification, rather than generating SBML directly, the python script produces output in SBML-shorthand [89], a more compressed and human-readable format which nevertheless can be converted without loss of information to SBML.
Simulations were carried out using the gillespie2 stochastic simulator in stand-alone mode and also through BASIS, which provides a web interface to gillespie2 and a database to store the results [73,90]. Analysis was also carried out using R. Sensitivity analysis was performed with COPASI [91].
As we have said above, the kind of equations relating to fractional locations or degrees of up/down regulation do not permit the number of molecules involved to be known exactly. The initial number of FOXO molecules of all species was set to be 1000. This is probably an underestimate, but is the approximate number of consensus FOXO binding sites, GTAAACAA [92], within 1 kbase of the complete set of transcription start sites in the genomes of the mouse and fly. This number is large enough that the difference between a stochastic and deterministic simulation is often unimportant; however, in this case, the large number of different FOXO species means that the copy number of each species can still be low -a situation that will arise commonly if [78]). D: Acetylation protects FOXO from degradation. Data points from bTC-3 cells (figure S5 in Kitamura et al [97]). Black: SIRT high (copy number 10 3 ), CBPP300 low (10); Red: SIRT moderate (200), CBPP300 active (10 3 ); Blue: SIRT low (10), CBPP300 high (10 3 ). Synthesis is inhibited (E2F1 = 0) and Akt active (10 5 ) in all simulations. E: degradation is accelerated by IKK activation: experimental data from MCF7 cells, from Hu et al [98] figure 5F, simulation: IKK = 10 5 . F: Effect of acetylation on transcription: experiment: (from Matsuzaki et al Fig 1B [99]) FOXO1-mediated transcription in HepG2 cells of 36IRS-MLP-luc by wild-type, 3KR (acetylation-resistant) and 3KA (acetylation-mimic); simulation: nuclear SOD2 mRNA with copy number of CBPP300 = 10 3 , copy number of SIRT1 = 10 3 (50:50), CBPP300 = 10, SIRT1 = 10 3 (deAc(etylated)), and CBP/P300 = 10 3 , SIRT1 = 10 (Ac(etylated)); normalized so that transcription from 50:50 = transcription from WT. G: Upregulation of transcription by AMPK: Experiment: Greer et al figure 7Aii, transcription of GADD45 with AMPK active and WT FOXO3 or 6A mutant (AMPK-phosphorylation-resistant); simulation: AMPK = 10 5 or AMPK = 100. doi:10.1371/journal.pone.0011092.g004 Figure 5. Combination of the effects of PTMs by JNK and Akt. A: FOXO is pre-equilbrated for 1440 minutes with Akt activity low (Akt = 100) and JNK inactive (JNK = 0). The PP2A level is 10 4 . At t = 60 min FOXO is set to high activity (5610 4 ); the PP2A level is 10 4 . At t = 120min the level of active JNK is also set to 5610 4 (times of high kinase activity shown by dosing bars below the plots). The graph shows the total amount of FOXO and multiple regulations are included in a model [93]. In addition, the copy number of mRNA species for the transcribed gene is low. For both these reasons a stochastic representation is more appropriate than a deterministic.
Fitting the Model to Experimental Data
The first regulation of FOXO to be discovered was its Aktphosphorylation and translocation between nucleus and cytoplasm. Simply stated, in the absence of Akt-activation, FOXO is primarily nuclear, while activation of Akt, usually by the insulinsignalling pathway, leads to sequestration of FOXO in the cytoplasm. Experimentally, FOXO localization is usually measured as the fraction of cells having FOXO mostly nuclear, mostly cytoplasmic or both nuclear and cytoplasmic, as determined with fluorescent-labelled FOXO. In the current model, the fraction of FOXO in the nuclear or cytoplasmic compartment will be treated as equivalent to the experimentally determined fraction of cells having FOXO mostly nuclear or cytoplasmic. Those cells which are experimentally observed to have FOXO both nuclear and cytoplasmic are counted together with those where it is wholly cytoplasmic (an equal division between nuclear and cytoplasmic was also considered, but the assignment to cytoplasm gives a better fit). In the model the total nuclear FOXO is simply the sum of that in the nuclear and DNA-bound compartments. The basal rate of redistribution is ,0.1 min 21 , to give a half-life of the order of 10 minutes, with the cytoplasmic-nuclear rate slightly higher, and the nuclear-cytoplasmic lower, for unmodified FOXO, to give an equilibrium favouring the nuclear form. The basal rates of DNA binding and unbinding are chosen so that, of the total nuclear FOXO, about two-thirds is DNA-bound and one-third nuclear but not bound to DNA.
In Figure 4A we show the generally good agreement between the total nuclear FOXO fraction in experiments in CV1 cells ( Figure 1A-N [75]) and simulation, under gradually increasing levels of Akt activation, corresponding in experiment to inhibition of PI3K with wortmannin, serum-starvation, provision of growth medium, and constitutive activation of PI3K. (PI3K is a kinase upstream of Akt in the IIS pathway).
In Figure 4B a timecourse of the redistribution of FOXO from nuclear to cytoplasmic upon Akt activation is shown. In the experiment (Biggs et al. Figure 1O [75]), CV1 cells are serum starved for 2h and then stimulated for 1h with IGF-1. Using the model, the best fit to this is obtained with active Akt of 2610 3 in the pre-equilibration level, which is then increased to 2.5610 4 to correspond to the IGF stimulation. The experiment lags slightly behind the simulation result at around 5 or 10 minutes, but this is its levels in the nuclear, DNA-bound and cytoplasmic compartments, as a fraction of total FOXO at t = 0.B: same experiment as A, the level of SOD2 mRNA in nuclear and cytoplasmic compartments is shown C: same experiment as A, the level of SOD2 protein is shown. D: fraction of cytoplasmic and total nuclear (nuclear+DNA-bound) FOXO, as a function of Akt copy number, for different levels of JNK. PP2A level is 10 4 Figure 1; also [94] Akt-Phos Pa 3 [78] Figure 1 IKK-Phos Pd 6 [98] Figure 5F Acetyl Ac 0.033 [97] Figure S5 Rate constants of FOXO synthesis/degradation and PTM addition/removal processes. For a given process, the first-order rate constant is given, and, for enzyme catalysed reactions, a derived second-order constant and the putative enzyme molecule number that produced it. Only the product of these, the first order constant, is related to experimental measurements. Where appropriate, a list is then given of a series of multiplicative factors (MFs) which may modify the rate for the process, depending on the presence of a another particular PTM in the FOXO species. doi:10.1371/journal.pone.0011092.t001 reasonable given that that in the simulation the level of active Akt is set directly whereas in the experiment it must be activated through the IIS pathway, with a delay of the order of a few minutes. Now let us consider the synthesis and degradation of FOXO. This was first studied in detail in experiments on HepG2 cells by Matsuzaki et al [78]. These experimental results, together with model output, are shown in Figure 4C. For synthesis, it has been shown that, if the proteasome is inhibited with MG132, the amount of FOXO1 increases in the cell by 20% in 6h. Taking the equilibrium copy number of FOXO of all chemical species to be 1000, the rate of increase of FOXO is then approximately 5.5 molecules min 21 , shown in Table 1 as a pseudo-first order rate constant. Breaking this down in to a second-order constant 6 the copy number of the E2F1 transcription factor, and choosing this copy number = 100, gives the basal value of the rate constant 0.0055 min 21 . Then, without proteasome inhibition, the amount of FOXO in the cell was quantified with and without insulin stimulation, i.e. with and without Akt activation, over 12h. Although proteasomes are present in the nucleus, it has been convincingly shown that degradation of FOXO occurs only in the cytoplasm [78,94]. It has also been shown that phosphorylated FOXO is degraded faster. Assuming the basal synthesis level is maintained subsequently, we simultaneously fitted the pseudo firstorder polyubiquitination and degradation rates and the MF multiplying the rate at which Akt-phosphorylated FOXO is degraded compared to unphosphorylated. The best fit was obtained with first order rate constants of 10 23 min 21 for polyubiquitination, 0.1 min 21 for degradation, and an MF of 3. Taking the copy numbers of the modifiers to be 10 3 for both the total of ubiquitin ligases (SCF+MDM2) and the proteasome, this gives second order rate constants of 10 26 and 10 24 min 21 . With these parameters, Figure 4C shows that, when degradation is inhibited, the amount of FOXO in the cell increases despite the activity of Akt (data in red), whereas, without inhibition of degradation, FOXO molecule number decays very slowly when Akt activity is low (data in black), or with a half-life of about 6h when Akt activity is high (data in blue).
Other PTMs significantly modulate the degradation of FOXO. Acetylation is one of these, and is of particular interest because of the suggestion that the FOXO-and histone-deacetylase, SIRT1, may also modulate organism lifepan. The timescale of both acetylation and deacetylation is of the order of 10 minutes when highly activated. This is shown in van der Horst 2004 [95] Figure 4, in response to high concentrations of H 2 O 2 , 200 and 500 mM, that activate CBP/P300 and fully acetylate FOXO4 in [74], [75] Akt-Phos Pa 0.1 [75] CDK2-Phos Pb 0.5 [81] Figure 3A IKK-Phos Pd 0.5 [98] Figure 1 JNK-Phos Pe 10 [83] mUb 10 [85] NuclearRcytoplasm transport ---0.160.55 = 0.055 [74], [75] Akt-Phos Pa 10 [75] CDK2-Phos Pb 2 [81] IKK-phos Pd 10 [98] JNK-Phos Pe 0.1 [83] mUb 0.1 [85] NuclearRDNAbound transport ---0.25 [76] Akt-Phos Pa 0.5 [77] Figure 6B Acetyl Ac 0.5 [99] Figure 1D DNAboundRNuclear RNA export k exp ---0.22 [100] RNA translation k transl ---1.23 [100] RNA degradation k mdeg ---5.622 [100] Protein degradation k pdeg ---1 . 9 610 23 [100] Rate constants of FOXO translocation reactions and transcription, translation and degradation processes of FOXO-regulated genes/proteins. The first-order rate constant is given for these processes, except for basal transcription, which is zero-order. Where appropriate, a list is then given of a series of multiplicative factors (MFs) which may modify the rate for the process, depending on the presence of a particular PTM in the FOXO species. doi:10.1371/journal.pone.0011092.t002 30 minutes (although at lower concentrations, such as 20 mM, acetylation is still low after 1h but is more or less complete after 4h). Hence, it is reasonable to choose pseudo-first-order rate constants of ,0.1 min 21 , and we make the choice of the copy number of CBP/P300 and SIRT1 to be 10 3 when fully activated, and a second order rate constant of 10 24 for each reaction. SIRT1 and CBP/P300 are mostly nuclear [96], so in the model these reactions do not occur in the cytoplasmic compartment. Regarding its interaction with degradation, experiments by Kitamura et al [97] on FOXO1 in betaTC3 cells (pancreatic cells, in which Akt is permanently activated by endogenous insulin) show that acetylation exerts a strong protective effect against degradation. This is shown in Figure 4D, where model output is fitted to experimental data. This fit is obtained with acetylation reducing the rate of polyubiquitination by a factor of 30, and SIRT and CBP/P300 levels that give high, medium and low levels of acetylation. Another PTM that affects degradation, this time accelerating it, is phosphorylation by IKK. Again, phosphorylation is chosen to be a rapid process in comparison with degradation. The fit in Figure 4E is produced by a saturating level of IKK activation, with a MF of 6 for polyubiquitination of IKK-phosphorylated FOXO. IKK phosphorylation also leads to FOXO translocation to the cytoplasm even if Akt is low. This leads to good agreement with experimental data from Hu et al on FOXO3 [98].
Acetylation also affects transcription, as shown in Figure 4F, where experimental measurements [99] of transcription by wildtype FOXO1, an acetylation-resistant 3KR mutant and an acetylation-mimicking 3KA mutant are compared with transcriptional output of the model with SIRT and CBP/P300 levels that produce 50% of FOXO acetylated, mostly deacetylated, and mostly acetylated. The DNA-binding of the acetylated form is decreased by a factor of two, as was measured directly [99]. The agreement is reasonable, especially considering that the mutants may in any case not behave identically to the acetylated or deacetylated forms.
FOXO may also be modified by AMPK, with modulation of transcription but not subcellular localization [11]. Again, we assume that AMPK-phosphorylation is a rapid process, and when fully phosphorylated, the FOXO-stimulated transcription rate is increased by a factor of two. As shown in Figure 4G, this is in good agreement with the experimental data for transcription from several genes including GADD45, though the experimental result seems to be gene dependent, a level of complication not included in the model currently. In Figure 4G, experimental data for transcription of GADD45 by WT FOXO3 with AMPK activated, and transcription under the same conditions from a 6A mutant of FOXO that is not phosphorylated by AMPK, is compared with simulation data with AMPK fully active (copy number 10 5 ) and mostly deactivated (copy number 100).
JNK-phosphorylation broadly opposes the effects of Aktphosphorylation, and is discussed in the subsection on scenarios of multiple FOXO regulation below. Other modifications such as CDK2-phosphorylation and monoubiquitination have similar effects, causing alteration of the nuclear-cytoplasmic equilibrium and FOXO-mediated transcription. They can be modelled similarly (data not shown).
Several parameters of the model relate to transcription of mRNA, its transport to cytoplasm, translation to protein and and degradation; and also to the degradation of the protein. Lacking absolute quantification of protein or mRNA level, or measurement of mRNA or protein lifetime, we have used generic values [100]. SOD2 protein level seems to be upregulated to about the same degree as mRNA on a timescale of 1 day [19], consistent with a protein half-life of up to about 8 hours. Only the transcription rate depends explicitly, in certain cases, on the PTM state of FOXO. In the future these parameters could be made gene-specific as required. Example models (in SMBL format) producing curves in Figure 4C-E are given as supplementary datasets S3, S5 and S7. These datasets, together with all other supporting information, are also available in the single dataset S9.
Sensitivity
We have investigated the sensitivity of the concentrations of the species of the model as a function of the parameters, using COPASI [91], in deterministic simulations of the models used in the fitting described above. The sensitivity was calculated at a certain time point, particular to each simulation, where initial transients had died away (most of the enzyme-catalysed PTM adding/removing reactions had come into pseudo-equilibrium, while the concentration of SOD2 protein, which has a longer timescale, was near a stationary point). It was not appropriate to find the long-time equilibrium as in several of the scenarios simulated the model did not have a non-trivial steady state. Scaled sensitivities, i.e. (k/X) dX/dk where X is a concentration and k a parameter, were considered. Examples of the detailed results are available as supplementary Datasets S4 (corresponding closely to the model S3), S6 (corresponding to S5) and S8 (corresponding to the model S7). The most important parameters depend, of course, on the regime that the model is in (which PTMs are active, etc): sensitivities are low to those parameters which only affect reactions in which the species involved are present at low concentrations. In general, synthesis/Akt-phosphorylation and polyubiquitination (but not degradation itself) and transport processes, especially between nucleus and cytoplasm, tend to affect the FOXO concentrations and the concentrations of the output gene, with high sensitivities (of magnitude up to 1). Only a few FOXO concentrations, and not the output gene concentrations, have a high direct sensitivity to reactions involving acetylation/deacetylation. We remark that, although the results are sensitive to Aktphosphorylation and dephosphorylation, they are not sensitive to an increase or decrease of both by the same factor, because these reactions (and others of PTM addition and removal) occur on a shorter timescale than other processes in the model such as FOXO synthesis and degradation. Datasets S4, S6 and S8, together with all other supporting information, are also available in the single dataset S9.
Scenarios of Multiple FOXO Regulation
Having shown that experimentally-determined responses to single and some multiple regulations can be reproduced by the model, we now move on to study the behaviour of the model in scenarios involving multiple regulations. A simple combination of effects is shown in Figure 5A-C. A simulation has been preequilibrated for 1 day, to allow the SOD2 protein to reach equilibrium. Akt activation (starting at t = 60 min and remaining active for the rest of the simulation) causes FOXO translocation to the cytoplasm; accordingly, transcription almost stops and the cytoplasmic and nuclear concentrations of mRNA fall; the level of the protein also begins to fall, although the effect is not very large on a 1 hour timescale. After a further 60 minutes, JNK is activated, and in large measure overrides the effect of Akt and causes the FOXO to translocate back from the cytoplasm to the nucleus. Once in the nucleus, however, it will be observed that the ratio DNA-bound/nuclear is lower than before, and the FOXO level is also slightly lower as some has been degraded while it was cytoplasmic. Accordingly, although the mRNA level recovers somewhat, it is not to its previous level, and the protein continues to fall, though very slowly. In this simulation, the level of both kinases is set to 10 5 when they are active. The PP2A level is 10 4 molecules, so the PTM that the kinases confer is present in about 80% of the FOXO molecules.
Experiments in which JNK is activated by H 2 O 2 show an interplay between the opposing effects of JNK and Akt; JNK tends to override Akt (as in A-C) but if Akt activity is particularly high it may override a low level of JNK activation [83]. Detailed doseresponse data that would enable a full parameterization is not available, but Figure 5D and E show that the model can capture these effects. As Akt increases it tends to drive FOXO to the cytoplasm, but the effect can be reversed by JNK. It is interesting to note the effect of reducing the activity of protein phosphatases: in D the PP2A level is high (copy number 10 4 ), while in E it is much lower. It is noteworthy that, with PP2A low and without JNK, a lower level of Akt activation (in E compared to D) is sufficient to drive FOXO to the cytoplasm, but also a lower level of JNK activity suffices to retain it in the nucleus even at a high level of Akt activation (compare the response to JNK = 10 3 ). That PP2A, JNK and Akt could all change together is a likely scenario, because oxidative stress tends to inactivate protein phosphatases due to their reliance on a catalytic cysteine that contains an easily oxidised SH group.
Another example is shown in Figure 6, which shows the effect of a short and long stimulation by Akt under different background PTM states. This in-silico experiment was suggested by comments made in Calnan and Brunet [54]; to our knowledge it has not been experimentally investigated, so the simulation results constitute an experimentally verifiable prediction made by the model.
First, consider the effects of short and long-term activation of Akt alone, (Figure 6A-C). The system was pre-equilibrated at low Akt for 2880 minutes (2 days) before the first period of Akt activation. The last 360 minutes of this is shown in Figure 6, so that the mainly nuclear (DNA-bound) location of FOXO is apparent. At t = 360, Akt is activated for 60 minutes (set by an event from 100 molecules to 10 5 ). This initial burst of Akt activation causes FOXO translocation to the cytoplasm and transcription almost stops ( Figure 6B). However, once active Akt is reset to a low level (back to 100 molecules t = 420 for the next 120 min) the FOXO rapidly relocates to the nucleus and transcription resumes at almost the same level it had initially. A second, longer period of Akt activation then follows (10 5 active Akt molecules from t = 540 for the next 720 min (12 hours) ), again causing translocation of FOXO and almost halting transcription of FOXO target genes. This pulse is long enough for the level of FOXO to decline by more than half in the course of it, as a result of polyubiquitination and resultant degradation of the cytoplasmic FOXO. Hence, when the second pulse of high Akt activity ends (at t = 1260) and FOXO returns to the nucleus, the transcription level ( Figure 6B) is at first also less than half what it was before the Akt pulse. The simulation is continued for a further 1 K days (until t = 3420), during which the transcription level recovers, as FOXO is re-synthesised, though even at the end of this period it has only just regained its initial level. Consequently, the level of SOD2 protein (with a half-life of 8h) also declines appreciably during the long Akt pulse, and recovers only slowly thereafter. In this simulation the copy number of CBP/P300 was set to 10 molecules and SIRT1 to 1000, levels designed to ensure that acetylation is slight, and the levels of other modifying enzymes were set to zero.
The same protocol of brief and prolonged Akt activity has a different outcome if the FOXO is more strongly acetylated ( Figure 6 D-F), an effect which may be produced either by increasing the level of active CBP/P300 and/or decreasing the level of SIRT1; in this simulation the levels were CBP/ P300 = 1000, SIRT1 = 100 throughout the pre-equilibration period and the timecourse shown, leading to 90% acetylation of FOXO. E2F1 was reduced to ensure that the copy number of FOXO was about the same in A and D. Acetylated FOXO binds DNA less strongly, but it is also stabilized against polyubiquitinmediated degradation. As a consequence, the basal level of FOXO mediated transcription is lower (only 75% of its level when FOXO is unacetylated; Figure 6B vs Figure 6E) but almost none of the FOXO is degraded during the long Akt pulse, so that when the second pulse of Akt activity ends, transcription recovers almost immediately to its basal level in this PTM background. Hence, although equilibrium level of SOD2 is also lower, and it is degraded as before during the long Akt pulse when FOXO is sent to the cytoplasm, it recovers more quickly to its initial level ( Figure 6F cf Figure 6C).
This demonstrates how the model is capable of making experimentally verifiable predictions of the effects of PTMinducing treatments applied according to various time-dependent protocols.
Discussion
The model presented here provides a significant first step towards a detailed model coupling ageing-related stress and nutrient response pathways with the stress resistance. It also provides an extensible modelling framework into which future information could be integrated. Although complex, because of the need to treat multiple chemical species, we believe that there is no other approach that can integrate information from multiple regulatory pathways into a single framework. We have shown that, even without detailed up-and downstream pathways, it is capable of reproducing experimental data on many features of FOXO regulation. Moreover, in Figures 5 and particularly 6, we have shown how the multiple regulations of FOXO alone can produce complex spatio-temporal dynamics, such as the faster recovery of antioxidant levels after the end of a long period when FOXO acetylation is at a high level.
Despite the large amount of work done on FOXO transcription factors, it is not easy to parameterize the model; time courses are generally not available, and different organisms and different tissues are widely used, so we have frequently had to argue by homology between different tissues or organisms, though human FOXO1 was the paralogue most often used and cell lines such as CV1, HepG2, bTC-3 and MCF7 were used. Hence the fact that we have referred to the species in the model as FOXO rather than FOXO1 or some other specific paralogue.
The modifying processes have been represented by simple 2u mass-action rate laws. This is a consequence of two factors: first, there are few quantitative timecourses available, but more usually a series of Western blots (sometimes only showing the presence or absence of a treatment and an indication of the time over which the change occurs). With this data, the forward and backward rates of a first order process and its inverse can be estimated, but the multiple constants of more complicated kinetics cannot. Secondly, the simplification of combining multiple phosphorylations into a single collective modification leads naturally to simple kinetics where the FOXO species appears linearly in the rate expression; nevertheless it is likely that more detailed timecourses would show that the kinetics do not follow this kind of behaviour. For example, with multiple regulations, it is to be expected that the fully phosphorylated form will appear as a concave-upwards function of time, such as a cubic function when three phosphorylations occur, in response to a step increase of the enzyme producing it, rather than the linear timecourse of the 1u Figure 6. The effect of brief and prolonged Akt activity interacting with degradation and acetylation. Before the start of the time course in the figure, FOXO is pre-equilibrated for 2520 minutes to allow its target gene SOD2 to come into equilibrium, with Akt very weakly active (copy number 100). A-C shows a simulation where acetylation is low (SIRT1 = 10 3 , CBP/P300 = 10), whereas in D-E CBP/P300 is high (SIRT1 = 10, CBP/ P300 = 10 3 ) so that FOXO is 99% acetylated. As shown by the dosing bars, Akt is activated at t = 360 min for 60 min, and again at t = 540 min for 720 min. model. This may often seem to correspond to an apparent initial delay in experiment. As detailed time course data becomes available it may, therefore, be necessary to revise this assumption.
Transport into and out of the nucleus has also been represented by simple first order processes with rates dependent on the PTM state, when in fact the process is more complex, involving binding of the FOXO to two 14-3-3 proteins, masking the Nuclear Localization Signal, and its resultant export to the cytoplasm [62]. This export from the nucleus is itself a multi-step process, involving binding to the nuclear pore complex and several other cofactors. Again, this may alter the kinetics in some regimes. It is interesting to note that the action of JNK in causing nuclear localization occurs through multiple mechanisms: JNK phosphorylates both 14-3-3 and FOXO, causing them to unbind from each other in the cytoplasm, and also attenuates Akt activation by phosphorylating IRS. The action seems to be evolutionarily conserved, but the JNK-phosphorylation sites are conserved much more highly in 14-3-3 than FOXO [63], so the effect is probably a result of the 14-3-3 modifications. The model, in which 14-3-3 is not explicitly represented, can capture this simply as a nuclear relocalization response which is conserved even if the JNKphosphosites are not (cf. Figures 1A/2A).
The negative second derivative of the experimental curve of FOXO molecule number against time ( Figure 4C) suggests that there is negative feedback inhibition of FOXO synthesis, though this is not included in the current model. This may also explain why acetylation does not seem to produce a large increase in FOXO molecule number.
In general, the kinetics of the phosphatase-catalysed reactions are much less certain that those of the kinase-catalysed reactions. Most of the time they have not been studied in detail. Hence, we have assumed a single ''general phosphatase'', assumed to be PP2A, to reverse all the kinase-mediated phosphorylations in the model.
When the localization of FOXO alters experimentally in response to changes in PTM, it is experimentally observed that there is a change in the fraction of cells in the population having FOXO primarily nuclear, primarily cytoplasmic, or present in both compartments (see for example Figure 3 of Brunet 1999 [74], Figure 1 of Biggs et al. 1999 [75] or Figure 4 of Essers et al. 2004 [83]). Although the stochastic modelling carried out here means that there is continuous fluctuation in the amount of FOXO in the various compartments of the model (cytoplasm, nuclear and DNAbound), we found that the fluctuation is usually small, about 10 molecules, i.e. 1% of the total amount of FOXO, and the average number of FOXO molecules in each compartment varies smoothly as the PTM level alters. This seems somewhat different to the behaviour reported about cell cultures; however it is likely that there is in fact a continuous variation in the amount of FOXO in nuclear and cytoplasmic compartments, but the impossibility of quantifying this amount means that the simple three-state classification must be adopted, and all cells where FOXO is present in more than small amounts in both compartments will be classified as having FOXO nuclear+cytoplasmic, whether the ratio be 0.2:0.8 or 0.8:0.2. Turning to the question of the differences between cells in a population, it might at first be thought that the experimentally observed behaviour corresponds to stochastic fluctuations in the response of particular FOXO molecules. However, since in the model these fluctuations were very much smaller, and moreover since it appears from the published data that the FOXO localization in a particular cell is more or less stable in time, we believe this is unlikely to be the case. It could instead be a consequence of differences between cells in the expression level of upstream components of the insulin signalling pathway. In future, heterogeneity in a population of models could be studied, and/or alternative model architectures, such as positive feedbacks, could be investigated to try to increase the observed fluctuations to obtain behaviour closer to that seen experimentally. These are currently beyond the model boundaries.
Several details of the reactions which could potentially affect the model's behaviour have been omitted. For example, cofactors essential for the formation of PTMs, such as ATP, ubiquitin, Acetyl-CoA and NAD+, are not explicitly included in the model. We also remark that, although the model reproduces the experimental result that acetylation downregulates FOXOmediated transcription, and deacetylation upregulates it ( Figure 4F), the situation experimentally is more complex than this suggests, with transcription found to vary in a genedependent way, some genes being downregulated and some upregulated [96,99,101]; reviewed in [102]. A plausible explanation for this is given by Daitoku et al [84], whose work showed that histone acetylation and deacetylation is carried out, with an effect on transcription, by the same enzyme pair that modifies FOXO. This could be handled by extending the model to include acetylated and deacetylated states for the FOXO-regulated genes as well as for FOXO itself. Transcription downstream of FOXO is not the focus of this paper and has been limited to two genes. However, the model could be easily extended to include multiple genes, with gene-specific values of the basal and FOXOstimulated transcription rate.
In addition to its direct action as a DNA-binding transcription factor, FOXO may bind to other TFs and modulate their activity. This occurs for example with nuclear receptors such as HNF-4 [103] and ER (Estrogen receptor) [104,105], as well as PR, GR, RAR and TR [104]. FOXO seems usually to reduce the transactivation function of the other TF but may also increase it, as the case of RAR and TR. In a similar way, FOXO may also bind to another TF acting as a transcriptional repressor and relieve this repression, as seems to happen with p53 repression of SIRT1 transcription [29] or Cs1 repression of Hes1 [106]. Behaviour of this kind could be treated by extending the model to introduce reactions in which FOXO species, probably that subset which is nuclear but not DNA-bound, bind to other TFs to form species representing the FOXO-TF complexes. Depending on the other TF and the gene, the resulting complexes themselves could be assigned low or no transcriptional activity, corresponding to FOXO acting as a transcriptional repressor, or a higher transcriptional activity than the other TF alone, corresponding to FOXO acting as a transcriptional co-activator. It was found in Hirota et al. [103] that the binding between FOXO and HNF-4 itself depends on the phosphorylation state of FOXO; this could be treated very naturally in the modelling framework used here. The cost in increased complexity of the model depends on the other TF. If it can be treated simply the cost would be similar to that of adding an extra class of PTM to FOXO, but if it has many states itself (representing different localizations, different PTMs etc) then the number of species representing complexes is the product of the number of FOXO species and the number of species of the other TF, and care would need to be taken to keep this manageable. Similar considerations arise if the number of regulated genes in the model is also high.
Even the large number of regulations considered in this paper does not exhaust FOXO's repertoire. A potentially important omission in the current model is that of additional stress response phosphorylations made by p38 MAPK (MAPK14) and ERK (MAPK1), which may not be synonymous in their effect to those by JNK. It has been shown that ERK and p38 MAPK modify mouse FOXO1 [107], and the p38 MAPK orthologues pmk1-3 modify daf-16 in C elegans [108]. At least in worms, the effect of modification by p38 is similar to that by JNK, i.e. translocation of FOXO to the nucleus; however there is some evidence that ERK in mammals has a different effect, causing increased MDM2mediated degradation and possibly translocation from nucleus to cytoplasm (although inhibition of MEK, upstream of ERK, fails to reverse this [109]). At the cost of further increasing the number of species in the model, these effects could be included by adding additional PTM states within the framework of the existing model and choosing appropriate rate constants to reproduce experimental data as far as possible. FOXO can also be methylated [110], a modification that reduces its Akt-mediated phosphorylation. In addition, FOXO1 appears to be phosphorylated by ATM/ATR [111] while FOXO3 is itself important for activation of ATM/ ATR [112,113]; these modifications may be included in future versions of the model. Updating of the model would currently be done manually, though the process would be simplified by the work already done in extracting relevant data from publications and developing fitting protocols. Moreover, parameters of processes not related to the PTM to be added or modified should remain unchanged. However, a way of computationally ascertaining the correct simplifications of the PTM set would clearly be desirable, and research is proceeding currently that addresses related problems. This has been mainly done on genetic and metabolic networks. In metabolic networks important simplifications arise because the behaviour of steady-states is studied rather than kinetics, and there are stoichiometric constraints on the reactions. Attempts have already been made to generate genomescale models [114,115], and even to do so automatically using annotations [116,117]. Applying similar approaches to these to kinetic modelling may become feasible in the near future, though this will require both theoretical developments [118] and high throughput experiments with data generated in machine-readable formats [119]. One way of developing the FOXO model automatically would be, given some sets of data and a trial model, to use a Monte Carlo approach to alter that model by adding or deleting categories of PTMs and re-allocating reactions between them, and then re-fitting to optimize parameters within each model. Similar approaches have been tried to investigate the robustness of gene regulatory networks in simulations of evolution [120].
Similar approaches to generating multiple chemical species by enumerating combinations of PTMs have been implemented several times previously (Cellerator [121] StochSim [122] BioNetGen, [123]). The approach adopted here to the problem of modelling a molecule with multiple modifications, and estimating parameters in such models with many species, is similar to that of ''rule-based modelling'' (see papers by Hlavacek et al. [93,124] and Faeder et al. [125] and references therein), and a similar method of reducing multiple modifications to a single collective modification has also been used previously [126]. We remark that multistate species (i.e. addition of multiple PTMs to a base species) will be supported by the non-core SBML Level 3 package ''multi'' which will be of great benefit for this type of modelling. Developments in rule-based modelling particularly may also be relevant to the question of updating the model, and indeed new algorithms that allow reactions to be generated as required ''on-the-fly'' rather than enumerated beforehand may enable the lifting of the requirement that multiple modifications be combined [127,128].
The scripting/rule-based modelling approach to generating reactions is applicable to other proteins that undergo multiple regulations in a similar manner, for example the insulin receptor substrate IRS-1 [129][130][131] and the C elegans transcription factor SKN-1 (orthologous to NRF TFs in higher organisms) [132]. Following on from this, we remark that several other TFs are indeed, like FOXO, regulated by multiple PTMs, including the tumour suppressor p53 (UNIPROT accession P04637 for the human protein), which can undergo modifications including phosphorylation (by multiple kinases including ATM/ATR), acetylation, methylation and ubiquitylation at least 17 sites [133]. There have been many other papers [70,71,134,135] modelling aspects of the behaviour of p53, though these have mostly concentrated on the negative feedback control through its transcription of the ubiquitin ligase MDM2 and resulting degradation, which can produce oscillations in protein concentration. p53 is usually represented in these models by more than one species, which may correspond to different PTM states. For example, in the papers by Proctor et al. and by Puszynski et al. there are both an inactive and a transcriptionally active form of p53, the latter phosphorylated in response to DNA damage. These can be viewed as omitting certain modifications that are irrelevant to the particular purpose of the model and considering the others as a single collective modification, in a similar way to how the multiple Akt-modifications of FOXO were treated here. However, to understand other aspects of the tumour suppressor function of p53, for example the interplay of responses to DNA damage and hypoxia, the interaction of multiple PTMs may be important and could be tackled by modelling using methods similar to those employed here.
To summarize, this FOXO model provides a framework into which future measurements can be fitted, and provides a central component for more complex models integrating the multiple upstream pathways and the downstream events that follow from the transcription of the multiple FOXO-regulated genes. We have demonstrated that, even alone, it demonstrates complex spatio-temporal regulations: in particular, the behaviour shown in Figure 6 may have implications for stress-resistance transcription. If FOXO is not acetylated, the long-time activation of Akt by insulin-like signalling may lead to sufficient degradation of FOXO that recovery of FOXO mediated stress-resistance transcription would be slow (in the simulation in Figure 6, this follows simply from Akt deactivation; but it could also be the result of a JNK activation to override the Akt, as in Figure 5). However, if the FOXO is simultaneously protected by acetylation, FOXO is degraded much less and transcription recovers more rapidly. In the current model, the absolute level of transcription from acetylated FOXO was slightly slower, but this could be a gene-specific effect. In combination with detailed models of the upstream and downstream pathways, this work will, we anticipate, provide insight into the interplay of ageing, nutrition and stress, and form the basis for a modelling approach connecting metabolic pathways relating to ageing, energy usage and stress with life history theory. This model represents an essential first step. | 13,560.4 | 2010-06-14T00:00:00.000 | [
"Biology",
"Mathematics",
"Medicine"
] |
Abstract and Concrete Sentences, Embodiment, and Languages
One of the main challenges of embodied theories is accounting for meanings of abstract words. The most common explanation is that abstract words, like concrete ones, are grounded in perception and action systems. According to other explanations, abstract words, differently from concrete ones, would activate situations and introspection; alternatively, they would be represented through metaphoric mapping. However, evidence provided so far pertains to specific domains. To be able to account for abstract words in their variety we argue it is necessary to take into account not only the fact that language is grounded in the sensorimotor system, but also that language represents a linguistic–social experience. To study abstractness as a continuum we combined a concrete (C) verb with both a concrete and an abstract (A) noun; and an abstract verb with the same nouns previously used (grasp vs. describe a flower vs. a concept). To disambiguate between the semantic meaning and the grammatical class of the words, we focused on two syntactically different languages: German and Italian. Compatible combinations (CC, AA) were processed faster than mixed ones (CA, AC). This is in line with the idea that abstract and concrete words are processed preferentially in parallel systems – abstract in the language system and concrete more in the motor system, thus costs of processing within one system are the lowest. This parallel processing takes place most probably within different anatomically predefined routes. With mixed combinations, when the concrete word preceded the abstract one (CA), participants were faster, regardless of the grammatical class and the spoken language. This is probably due to the peculiar mode of acquisition of abstract words, as they are acquired more linguistically than perceptually. Results confirm embodied theories which assign a crucial role to both perception–action and linguistic experience for abstract words.
INTRODUCTION
The distinction between "abstract" and "concrete" concepts and words is all but uncontroversial. People disagree when trying to categorize a specific noun as "abstract," and even more when classifying as such a specific verb. Evidence suggests that the "abstract-concrete dimension" reflects a continuum rather than a dichotomy. Indeed, Nelson and Schreiber (1992) and Wiemer-Hastings et al. (2001) asked people to judge the concreteness of large sets of words; they found a bimodal distribution (according to features, such as tangibility or visibility), not a dichotomy. Things are even more complicated when words are embedded within contexts. Most of us would agree that the noun "apple" and the verb "to grasp" are concrete, but judging verb-noun pairs such as "to grasp the meaning," or "to think about an apple" (e.g., Aziz-Zadeh et al., 2006) is all but simple. In addition, the meaning of a sentence is often influenced by a specific language and culture; furthermore, it has been shown that this linguistic and cultural influence is particularly strong for abstract compared to concrete words (Boroditsky, 2003).
The study of how abstract concepts and words are represented has been the focus of many investigations in the 1960s-1990s. The two most influential views were the context availability theory (CAT, Schwanenflugel, 1991) and the dual coding theory (DCT, e.g., Paivio, 1986). CAT would ascribe the processing difference between concrete and abstract words to the fact that concrete words have stronger semantic relations with the context represented by other words. According to DCT, instead, abstract words would be represented only in a linguistic system while concrete words would be represented both in imagery and linguistic system.
As to the neural substrates of language comprehension, the integration of lesions analyses, white matter tractography, and resting state functional magnetic resonance imaging (e.g., Dronkers et al., www.frontiersin.org 2004;Turken and Dronkers, 2011) have recently brought into question traditional models: not only the left posterior temporal cortex but an extensive network in the left hemisphere seems to be critical for the processing of language (e.g., left posterior middle temporal gyrus, MTG; the anterior part of Brodmann's area 22; the posterior superior temporal sulcus). The investigation of the structural and functional connectivity of the keys regions (using diffusion tensor imaging) has shown a bilateral temporo-parieto-frontal network supported by long-distance white matter pathways. This network seems to interact with other brain regions outside the traditionally recognized language areas (Turken and Dronkers, 2011). Pertaining to the aim of the present work, in the last years we have assisted a renewed interest for the way concrete and abstract words are represented, as the growing body of brain imaging studies reveals (e.g., Desai et al., 2010;Ghio and Tettamanti, 2010). Many of these studies supported the original proposal by Paivio, showing for example that processing of abstract words is more lateralized in the left hemisphere than processing of concrete ones (for a review see Binder et al., 2005).
In the same line, on the theoretical side it has been recently proposed that language comprehension is both embodied and symbolic (e.g., Louwerse and Jeauniaux, 2008;Dove, 2010). In keeping with Paivio, Dove (2009 argues in favor of "representational pluralism," claiming that perceptual simulations play an important role in highly imageable concepts while amodal linguistic representations play a crucial role in abstract concepts. One of the reasons of the renewed interest for abstract words is that understanding the way we represent abstract words is a testbed for the increasingly popular (e.g., Chatterjee, 2010) embodied theories of language comprehension, according to which language is grounded in perception, action, and emotional system (for reviews, see Barsalou, 2008;Fischer and Zwaan, 2008;Gallese, 2008). Whereas it is now widely recognized that the evidence in support of embodied theories is compelling regarding concrete or highly imageable words, the issue is much debated regarding abstract words and sentences (Pezzulo and Castelfranchi, 2007;Louwerse and Jeauniaux, 2008;Dove, 2010). Within the embodied framework abstract words would be explained as the result of the transfer in abstract domains of image-schemas derived from sensorimotor experiences: for example, the image-schema derived from "container" would be used to understand the notion of "category" (Lakoff, 1987;Gibbs and Steen, 1999;Boot and Pecher, 2011), the action of giving a concrete object (pizza) would be used to understand the action of giving some news (Glenberg et al., 2008). Alternatively, it has been proposed that abstract words evoke different kinds of properties, i.e., that they activate situations and introspective relationships more frequently than concrete words (Barsalou, 1999;Barsalou and Wiemer-Hastings, 2005; for a review see Pecher et al., 2011).
More crucial to our work are some recent proposals which, starting from an embodied perspective and avoiding assuming the existence of amodal symbols, detached from perceptual and motor experience, share with Paivio the idea that multiple types of representation underlie knowledge (for a review see special topic on Embodied and Grounded Cognition, Borghi and Pecher, 2011). These proposals differ from Paivio's view as they hypothesize that not only concrete, but also abstract words are embodied and grounded. According to the language and situated simulation (LASS) theory , linguistic forms and situated simulations interact continuously and different mixtures of the two systems underlie a wide variety of tasks. The linguistic system (comprising the left-hemisphere language areas, and especially the left inferior frontal gyrus, Broca's area) is involved mainly during superficial linguistic processing, whereas a deeper conceptual processing necessarily requires the simulation system, made up of the bilateral posterior areas associated with mental imagery and episodic memory.
The word as social tools (WAT) proposal (Borghi and Cimatti, 2009) differs from the LASS theory because, according to WAT, the linguistic system does not simply involve a form of superficial processing: words are not conceived of as mere signals of something but also as tools that allow us to operate in the world. In addition, WAT extends LASS as it formulates more detailed predictions on the representation of abstract and concrete words. Indeed, according to WAT abstract word meanings would rely more than concrete word meanings on the everyday experience of being exposed to language in social contexts. According to WAT the difference between abstract and concrete words basically relies on the different mode of acquisition (MoA; Wauters et al., 2003), which can be perceptual, linguistic, or mixed. MoA ratings, which correlate but are not totally explained by age of acquisition, concreteness, and imageability, gradually change over grades. In the first grades acquisition is mainly perceptual, later it is mainly linguistic. It can follow that abstract words are typically acquired later, also because it is more difficult to linguistically explain a word meaning than to point at its referent while labeling. The acquisition of abstract words, due to their complexity, typically require a long-lasting social interaction, and it often implies complex linguistic explanations and repetitions. In contrast, the process by which young children learn concrete words appears effortless and often occurs within a single episode of hearing the word spoken in context (e.g., Carey, 1978; see also Pulvermüller, in press). This has the consequence that, even if for the representation of both concrete and abstract words meanings sensorimotor and linguistic experience are crucial, we rely more on language to understand the meaning of concrete words, whereas we rely more on non-linguistic sensorimotor experience to grasp the meaning of abstract words. (Borghi and Cimatti, 2009). Given that abstract words do not have a specific object or entity as referent, many of them might be acquired linguistically, i.e., listening to other people explaining their content to us, rather than perceptually. This might be due also to their different degree of complexity: learning to use a word such as "lipstick" is simpler than learning to use a word like "justice," and the linguistic label might be more crucial for keeping together experiences as diverse as those related to the notion of "justice." used novel categories to mimic the acquisition of concrete and abstract concepts; they found that linguistic explanations are more important for the acquisition of abstract than for concrete words, and showed with a property verification task that concrete words evoke more manual information, while abstract words elicit more verbal information. WAT hypothesizes also that the MoA determines the representation of the word in our brain: when the words refer to categories learned through sensorimotor experiences (e.g., "bottle"), they have a much higher level of grounding in the perception and action systems than words learned mainly through the mediation of other words (e.g., "democracy"; ; see also Prinz, 2002). Consistently, concrete words should evoke more manual information, activating precociously motor areas (Jirak et al., 2010;Pulvermüller, in press), whereas abstract words should elicit more verbal-linguistic information, activating precociously motor areas related to the mouth, as data on transcranial magnetic stimulation study and on words acquisition modality suggest .
Notice that claiming that concrete and abstract words are acquired through different modalities does not require the postulation of any difference in format between the two kinds of words, nor any transduction from sensorimotor experience into amodal symbols. It simply means that abstract word meanings should rely more on the embodied experience of being exposed to language than concrete word meanings. However, we do not intend to imply that abstract words rely on the simple embodied experience of speaking and listening -this would not suffice to call their representation embodied. In contrast with non-embodied approaches to abstract words, in our view a word like "philosophy" would activate perceptual and motor experiences, together with linguistic experience. As demonstrated by , with abstract terms the advantage of linguistic over manual information was present only when linguistic information did not contrast with perceptual one.
The major difference between Paivio's approach and multiple representation theories such as WAT's approach to concrete and abstract words is that, according to the first, abstract words rely only on the verbal system, while for WAT both concrete and abstract words are grounded in perception and action systems, even if the linguistic system plays a major role for abstract words representation.
The best way to disambiguate these hypotheses is the selection of a paradigm that allows contrasting abstract and concrete words combined in sentences. So far most evidence has been found with brain imaging rather than with behavioral studies, it concerns single words rather than words embedded in contexts, and tasks requiring deep semantic processing are typically not used [an exception is given by a recent fMRI study by Desai et al. (2010), in which a sentence evaluation task was used]. In contrast, our study focuses on how words meaning changes depending on the context in which it is embedded. For this reason we will compare not only whole abstract and concrete sentences, but also sentences which result from a mixture of abstract and concrete nouns and verbs in a well-balanced design. We believe this may represent an important step for a systematic investigation of abstraction. One of the advantages of this design resides in the possibility to study abstractness in a continuum, and to verify the effects on comprehension using different combinations and studying how the meaning of single words can change depending on the context. In addition, focusing on sentences instead than on single words offers the possibility to investigate linguistic processing in a more ecological way, and allows us detecting eventual influences of the different spoken languages.
In the present study we asked participants to judge the sensibility of sentences. We chose this task because it is established that it implies a deep semantic processing of the sentences (see also Turken and Dronkers, 2011). Coherently with previous literature, we defined as "concrete" only nouns that refer to manipulable objects and only verbs referring to manual actions (e.g., "a flower"/"to grasp"). We decided to define as "abstract" only nouns that do not refer to an object, rather to an entity that can neither be grasped nor touched, and only verbs that refer to an action 1 that cannot be performed with any part of the body, that is, an action that does not explicitly require any movement or any activation of the motor system (e.g., "a concept"/"to describe"). In addition, to investigate the specific effects of the specific language we use, we examined different combinations of nouns (abstract and concrete ones) and verbs (abstract and concrete ones), in two languages, German and Italian, which are syntactically different: in German the noun precedes the verb; in Italian it is the opposite.
There are several possible views: 1. No difference view: abstract and concrete concepts have the same core representations. According to the amodal theories their representations in the brain would be most probably in the language domain; according to the strictly modal view both concrete and abstract concepts would be represented in the perception and action system. 2. Non-embodied multiple representation view: concrete and abstract words have distinct representations: the first are represented in the sensorimotor system, abstract words in the language system. This view, proposed by Paivio (1986), is adopted by multiple representation views not adopting an embodied approach to abstract words, i.e., to views arguing that concrete and abstract words differ in format (e.g., Binder et al., 2005;Dove, 2010). 3. Embodied multiple representation view: abstract and concrete concepts are represented both in the language domains and in the perception and action systems. However, they are not represented in the same way in the two systems but there is a different distribution. Linguistic information should be more relevant for abstract words, perception, and action information for concrete ones. This is the view consistent with multiple representation theories adopting an embodied perspective, such as WAT and LASS.
In contrast with strictly amodal and strictly modal views (No difference views), both embodied and non-embodied multiple representation views predict costs in mixed combinations, when switching from one perceptual modality to another (Pecher et al., 2003). In addition, according to the WAT proposal mixed combinations should be differently modulated by the syntactical structure of the two different chosen languages. As the Age of Acquisition clearly affects performance in semantic tasks (Lewis, 1999;Brysbaert et al., 2000) and is correlated with the Modality of Acquisition, WAT predicts that in mixed conditions RTs should be slower when the abstract word precedes the concrete one, due to the fact that the former is acquired later and relies more on linguistic information than the second (Bloom, 2000;Colombo and Burani, 2002;Mestres-Missé et al., 2009).
EXPERIMENTAL METHOD PARTICIPANTS
Thirty-eight students from the University of Hamburg (group I) and 38 students from the University of Bologna (group II) took part in the study. All were native German speakers (group I) or native Italian speakers (group II), right-handed according to the Edinburgh Handedness Questionnaire (Oldfield, 1971), and all had normal or corrected-to-normal vision. They all gave their informed consent to the experimental procedure. Their ages ranged from 18 to 32 years old (German group: M = 26.26; SD = 3.64; Italian Group: M = 24.61; SD = 3.58). The study was approved by the local ethic committees.
MATERIALS
Materials consisted of word pairs (sentences) composed of a transitive verb and a concept noun. To study the dimension abstract-concrete in a continuum we contrasted two kinds of Verbs (Concrete vs. Abstract) with two kinds of Nouns (Concrete vs. Abstract). We defined Concrete Nouns as nouns referring to graspable objects, Concrete Verbs as verbs referring to hand actions, Abstract Nouns as nouns that do not refer to manipulable objects, and Abstract Verbs as verbs that do not refer to motor actions. Therefore we created 192 sentences -48 quadruples -in the German language and 192 sentences -48 quadruples -in the Italian language. Each quadruple was constructed by pairing a Concrete Verb (e.g., to grasp) both with a Concrete Noun (e.g., a flower) and an Abstract Noun (e.g., a concept); and by pairing an Abstract verb (e.g., to describe) with the previously used concrete and abstract nouns (e.g., to squeeze/find a sponge/friendship; to lift/receive a table/criticism; to caress/wait for a dog/idea; to bend/respect the menu/will; to paint/admire the frame/sunset; to write/look for the document/end; to carve out/wait for a newspaper/moment). We decided to use sentences with a very simple grammatical structure (a verb plus a noun) as it was not possible to develop more complex sentences with a similar grammatical structure that fulfilled the criteria of the quadruples. The majority of these sentences' meanings matched in both languages; a few of them slightly differed, as some pairs did not allow for a literal translation. Due to the different syntax of the German and Italian languages, the German sentences were composed of a noun followed by a verb; the Italian ones were composed of a verb followed by a noun. We chose to compare these two languages as the specific differences in the syntactical structure allowed us to speculate on the different effects caused by a verb preceded by a noun (German sample) vs. a noun preceded by a verb (Italian sample).
To select 30 critical quadruples from the 48 ones, we asked 20 German students and 20 Italian students to judge how familiar each sentence sounded and with what degree of probability they would use each sentence. They were required to provide ratings on a continuous scale (Not familiar -Very Familiar; Not probably -Very probably), by making a cross on a line. We selected the quadruples with highest scores for both familiarity and probability of use, and, from these, we finally chose the quadruples with lower scores in the SDs. Thus we obtained 120 verb-noun pairs (balanced for familiarity and probability of use).
Due to the peculiarity of our linguistic materials, to further test if the 120 selected verb-noun pairs differed as far as the frequency of use is concerned, we checked on the research engine "Google" the frequency of each pair, by using quotations marks (Page et al., 1998;Griffiths et al., 2007;Sha, 2010). The frequencies were submitted to a 2 (kind of Noun: Concrete vs. Abstract) × 2 (kind of Verb: Concrete vs. Abstract) × 2 (Language: German vs. Italian) ANOVA. Crucially we did not find any significant effect. This further control on written frequency prevented us from accounting for possible differences on processing resting on different association degrees between words pairs composing German and Italian quadruples.
In addition to the 30 critical quadruples, we created 30 filler quadruples using the same criteria. We combined a concrete verb both with a concrete noun and with an abstract noun; and we combined an abstract verb with the same concrete noun and abstract noun, leading to nonsensical sentences (e.g., "to switch off the shoe"). Each quadruple was presented only once.
PROCEDURE
German and Italian participants were randomly assigned to one of two groups. Members of both groups were tested individually in a quiet library room. They sat on a comfortable chair in front of a computer screen and were instructed to look at a fixation cross that remained on the screen for 1000 ms. Then a sentence appeared on the screen for 2600 ms. The German sentences were composed of a determinative or non-determinative article plus a noun plus a verb (example for the concrete noun -concrete verb combination: "einen Kuchen anschneiden," to cut a cake), while the Italian sentences were composed of a verb plus a determinative or nondeterminative article plus a noun (example for the concrete verb -concrete noun combination: "stringere una spugna," to squeeze a sponge).
The timer started operating when the sentence appeared on the screen. For each verb-noun pair, participants were instructed to press one key if the combination made sense, and to press another key if the combination did not make sense.
Participants in the first group (both German and Italian) were asked to respond"yes" with their left hand and"no" with their right hand; participants in the other group (both German and Italian) were required to do the opposite. All participants were informed that their response times (RT) would be recorded and were invited to respond as quickly as possible while still maintaining accuracy. Stimuli were presented in a random order. The 240 experimental trials were preceded by 8 training trials, in order to allow the participants to familiarize themselves with the procedure.
STATISTICAL ANALYSIS
In our analyses we considered only the sensible sentences. Participants were accurate in responding; no participant's responses included errors over 15%. To screen for outliers, scores 2 SDs higher or lower than the mean participant score were removed for each participant. Removed outliers accounted for 3.6% of response trials. The remaining RT and errors were submitted to a 2 (kind of Noun: Concrete vs. Abstract) × 2 (kind of Verb: Concrete vs. Abstract) × 2 (Mapping: yes-right/no-left vs. yesleft/no-right) × 2 [Language: German: noun (first), verb (second) vs. Italian: noun (second), verb (first)] mixed factor ANOVA, with Mapping and Language as between-participants variables.
Frontiers in Psychology | Cognition
We conducted the analyses with participants as a random factor. As the error analysis revealed that there was no speed-accuracy trade-off, we will discuss only the RT analysis
ASSESSMENT OF GERMAN AND ITALIAN PAIRS
Materials were controlled regarding a variety of dimensions. 30 students from the University of Hamburg and 30 students from the University of Bologna were asked to rate the ease or difficulty with which each pair evoked mental images (imageability: Low imagery rate -High Imagery rate) on a continuous scale (scores ranging from 0 to 100); how literally they would take each pair (literality: Literal -No Literal); whether and to what extent each pair elicited movement information (quantity of motion: Not much movement -Much movement). Finally 10 German students and 10 Italian students were asked to rate at which age approximately they had learned to use each pair (age of acquisition ratings). For each rating, we calculated the scores' averages and the scores' SDs for each condition.
Imageability
Both German and Italian participants judged the Concrete Verb -Concrete Noun pairs as the easiest to imagine (see . Results showed that German and Italian participants had the same pattern: the pair containing two concrete words was judged as the easiest to imagine. Moreover for both groups the noun was stronger than the verb in determining the imageability of the sentence.
Literality-metaphoricity
German participants rated the Abstract verb -Concrete noun pairs as the ones that they would take most literally (see The sentences rated as more literal are the ones which contained a Concrete Verb plus a Concrete Noun for Italian participants and containing an Abstract Verb plus a Concrete Noun for German participants. Both groups judged the combination Concrete Verb -Abstract Noun as the most metaphorical one. It is worth noting that while the concrete noun meaning remains the same through the quadruples, the concrete verb meaning, as well as its concreteness/abstractness, changes through the quadruples, depending on the context: for example, the meaning of the verb "to grasp" is not the same in "grasping an apple" and in "grasping a concept" (Parisi, personal communication).
Quantity of motion
German participants rated the Concrete Verb -Concrete Noun pairs as the ones that elicited most movement information (see www.frontiersin.org FIGURE 2 | Both groups judged the combination Concrete Verb plus Abstract Noun as the most metaphorical one. Note: while the concrete noun meaning remains the same through the quadruples, the concrete verb meaning, as well as its concreteness/abstractness, changes through the quadruples, depending on the context.
FIGURE 3 | Both groups agreed in judging the Abstract Verb plus Concrete Noun combination as the one that elicits less movement. The main difference concerns the Concrete Verb plus Abstract Noun vs. Concrete
Verb plus Concrete Noun combinations: the former suggested the biggest amount of movement for Italian participants; the latter evoked the huger quantity of motion in German participants.
Both groups agreed in judging the Abstract Verb -Concrete Noun combination as the one that elicits less movement. The main difference concerns the combinations Concrete Verb -Abstract Noun vs. Concrete Verb -Concrete Noun combination, as while the former suggested the biggest amount of movement for Italian participants, the latter evoked the larger quantity of motion in German participants.
Age of acquisition
A number of studies (Gilhooly and Gilhooly, 1980;Zevin and Seidenberg, 2002) have demonstrated the validity of age of acquisition ratings, by showing that age rated by adults is the major independent predictor of the objective age of acquisition indices. In our study German participants rated the Concrete Verb -Concrete Noun pairs as the ones they learnt first (see Results suggest that the different age of acquisition of sentences is explained by the noun: as shown in the literature regarding single word age of acquisition, the concrete noun is learned before the abstract one. Consistently, we found that sentences containing a concrete noun, even if in combination with an abstract verb, are acquired earlier than sentences containing an abstract noun.
RESULTS
Neither a main effect of the kind of Mapping nor a main effect of the Language used was found. Crucially, we found a significant interaction between the kind of Noun and the kind of Verb: German and Italian participants responded faster to both kinds of congruent pairs, that is both to pairs composed of an Abstract Verb plus an Abstract Noun (M = 1172.56 ms) and to pairs composed of a Concrete Verb plus a Concrete Noun (M = 1168.83 ms). Consecutively they were slower with the mixed pairs, that is, with pairs composed of an Abstract Verb plus a Concrete Noun (M = 1211.95 ms) and pairs composed of a Concrete Verb plus an Abstract Noun (M = 1206.81 ms), F (1, 72) = 48.83, MSe = 2328.79, p < 0.0001. Interestingly, Abstract Verbs combined with Abstract Nouns did not require a longer processing time than Concrete Verbs -Concrete Nouns pairs.
We also found a significant three-way interaction between Language, kind of Noun, and kind of Verb, F (1, 72) = 5.07, MSe = 2328.79, p < 0.03, see Figure 5. Newman-Keuls post hoc www.frontiersin.org analyses showed that German participants, noun (first), verb (second), were 13.25 ms faster with Abstract Verb plus Concrete Noun pairs than with Concrete Verb plus Abstract Noun pairs; on the contrary, Italian participants, noun (second), verb (first), were 23.51 ms faster with Concrete verb plus Abstract Noun pairs than with Abstract Verb plus Concrete Noun pairs; this difference reached significance only for Italian participants, p < 0.04. As the syntactic construction of German and Italian is different for pairs containing a transitive verb plus an object-noun, German participants, differently from Italians, were presented with the noun preceding the verb. Results with mixed pairs indicate that participants were faster when the first word was concrete rather than when it was abstract -that is when it referred to an object on which we can perform an action involving the hands (German pairs), or to an action performed with the hands (Italian pairs). This suggests that the degree of abstractness of the word plays a more important role than its grammatical class.
Moreover, the interaction between Language and kind of Verb almost reached significance as well, F (1, 72) = 3.68, MSe = 3490.70, p < 0.06. German participants, noun (first), verb (second), were 8.57 ms faster with pairs containing Abstract Verbs than with pairs containing Concrete Verbs. On the contrary, Italian participants, noun (second), verb (first), were 17.42 ms slower with pairs containing Abstract Verbs than with the pairs containing Concrete Verbs. Integrating these results with those obtained previously allows us to speculate that word's concreteness vs. abstractness strongly determines the time necessary to process the sentence (three-way interaction), but also that the verb has a stronger effect than the noun.
DISCUSSION
Our study showed three main new results. First we found that both the abstract verb -abstract noun combinations and the concrete verb -concrete noun combinations were processed faster than the mixed combinations. This in itself is new, particularly considering the fact that it is well known that the sentence evaluation task we used implies accessing to deep semantic representation. Our results on mixed pairs are not predicted by the No difference explanation (view 1); instead, they are predicted by views 2 and 3, and are consistent with the idea that concrete and abstract words activate parallel systems, one relying more on purely perception and action areas, the other more on sensorimotor linguistic areas. Indeed, switching between systems implies a cost in RTs, whereas remaining within the same system does not affect performance. This effect per se favors theories implying multiple types of representation over strictly modal and strictly amodal theories (this issue is addressed more extensively in the second section of the discussion).
The second major result we found is the three-way interaction between Language, kind of Verb, and kind of Noun. This interaction was mainly due to the fact that Germans' and Italians' results on mixed combinations were the opposite: German participants, noun (first), verb (second), were faster with abstract verb and concrete noun combinations than with concrete verb and abstract noun combinations; Italian participants, noun (second), verb (first), showed a mirror pattern. This result can be easily accounted for if we consider that the word presentation order differed across the two languages: German participants saw the noun first and then the verb, while Italians saw the same combination in a reverse order. Thus, participants were faster when the first word shown in the sentence was a concrete one, regardless of its grammatical class (verb vs. noun) and of the spoken language (German vs. Italian; for a similar result see Paivio, 1965: differently from us, in a learning and recall task he contrasted only abstract and concrete nouns, rather than sentences).
The third result is the marginally significant interaction we found between Language and kind of Verb. Integrating the last two findings, it seems that the abstractness vs. concreteness of the first word -that depends on the different sentences' structures -modulates sentence processing more strongly (interaction Language × Noun × Verb) than its grammatical class. Nevertheless it seems to be also an effect of the linguistic category, as verbs are more powerful than nouns in influencing subjects' responses. Fascinatingly, this result could be in keeping with the idea that the grammatical structure of a language shapes to some extent its speakers' perception of the world (Boroditsky, 2003;Gentner, 2003;Mirolli and Parisi, 2009).
Let us now consider results from RTs together, integrating them with the results obtained from the ratings of the materials. We will discuss how each theory could account for them and the problems each theory faces. We will also provide a possible neuroanatomical explanation of the results.
1. No difference view: abstract and concrete concepts have the same core representations. According to both (a) amodal (e.g., Fodor, 1998) and (b) strictly modal (e.g., Barsalou, 1999) theories of concepts and words, concrete, and abstract sentences are represented in the same format (amodal vs. modal). Therefore, for both amodal and modal views we should expect no difference between the four conditions, unless these differences are explained by association degree and familiarity for amodal theories, and by imageability for modal theories. (a) According to amodal theories the results should be explained resting on the association rate between words. Therefore, the advantage of congruent over mixed sentences should be due to a higher association rate of these pairs compared to that of the mixed combinations. To check for this possibility, we calculated the familiarity and the probability of use score averages in each condition for the 120 pairs selected for the behavioral experiment. Ratings showed that, for both German and Italian participants, the advantage of congruent combinations over the mixed pairs is not explained by a supposed higher familiarity or higher probability of use of the first. (b) According to a strictly modal theory, results regarding RT should be explained by imageability rating. An approach based more on metaphors (Lakoff, 1987) should account for the behavioral results considering the literality ratings (that indirectly give us information on the degree of metaphoricity). Actually the advantage for the Concrete Verb -Concrete Noun combination can be explained resting on its high imageability, low metaphoricity rate, and precocious age of acquisition. But neither the modal theory nor the approach based on metaphors was verified by our results on Abstract Verb -Abstract Noun pairs, which were neither imageable nor literal (as opposed to being metaphorical) but provoked a response that was as fast as that for Concrete Verb-Concrete Noun pairs. Finally, an approach proposing that words are grounded in perceptual and especially in motor systems (Glenberg, 1997) would predict a relationship between the behavioral data and the quantity of motion scores. This was not the case, however, as the amount of movement evoked by the sentence did not explain the pattern of results with RT. Therefore, we can conclude that neither a strictly amodal nor a strictly modal theory adequately accounts for our results. 2. Non-embodied multiple representation view and 3. Embodied multiple representation view.
Theories based on multiple types of representation -both in their non-embodied vs. embodied version -can explain the difference between congruent and mixed pairs more easily, even if resting on different reasons, that is: (I) different kinds of formats (still assuming a transduction process: Dove, 2009), or (II) a shift between different kinds of modalities, i.e., linguistic vs. a sensorimotor coding (LASS, WAT).
The interpretation that better accommodates our results assumes that abstract words are processed predominantly in the language system and concrete words are processed in the sensorimotor system to a larger extent. If processing occurs in separate systems, then the switching between concrete and abstract would imply not only conceptual costs, but also costs connected with switching between anatomical systems working in parallel. Within each system (concrete-concrete vs. abstract-abstract) the costs remain low. Some recent pieces of evidence are in line with our results. In a brain imaging study on abstract words Rüschemeyer et al. (2007) found that the processing of verbs with motor meanings (e.g., "to grasp") differed from the processing of verbs with abstract meanings (e.g., "to think"). Motor verbs produced greater signal changes than abstract verbs in several regions within the posterior premotor, primary motor (M1), and somatosensory (S1) cortices, as well as in secondary somatosensory (S2) cortex. More crucially, our interpretation is also consistent with results obtained in a brain imaging study performed using the same paradigm as the one used in the present work (Menz et al., 2011; see also Jirak et al., 2010). Using quadruples containing every possible combination for motor/non-motor verbs and for graspable/non-graspable objects, evidence showed that all motor areas were activated by language stimuli with both concrete and abstract content; but in case of concrete verb plus concrete noun processing there was a stronger engagement of areas typically involved in planning of complex and goal-directed actions (e.g., frontal operculum). In case of abstract verb plus abstract noun combinations, instead, there was a stronger engagement of the supramarginal gyrus (SMG) -typically involved in motor planning (e.g., Tunik et al., 2008) but also during phonological and articulatory words processing (e.g., Celsis et al., 1999;Pattamadilok et al., 2010) -, as well as of the MTG -that is also recruited when performing tasks critical in communication and social interaction (Mellet et al., 1998;Binder et al., 2005;Sabsevitz et al., 2005).
Embodied multiple representation view.
The advantage of non-mixed combinations (AA and CC) on the mixed ones (AC and CA) rules out the No difference views but can be accounted by both the Non-embodied (2) and the Embodied versions of multiple representations views (3). In order to disentangle them, the most critical result is the advantage we found when the first word was a concrete one. A Non-embodied multiple representation view (2) has difficulties in explaining this result: since the task used in the present study is a linguistic one, it should be easier to process first words which activate linguistic information, i.e., abstract words, rather than concrete ones.
LASS AND WAT
Both LASS and WAT can explain the advantage of the first concrete word. However, the explanation based on LASS would be a posteriori. The argument would be that, even if the task is a linguistic one, it requires deep semantic processing, and this might require more time for abstract than for concrete words. A more straightforward explanation of the longer RTs when the first word is an abstract rather than a concrete one derives from the WAT proposal. WAT assumes that both linguistic and sensorimotor processing have the same status -coherent with the advantage of the AA and CC pairs on the mixed pairs -, and it treats the issue of concepts representation as strictly related to their acquisition, stressing the different function of linguistic label for concrete vs. abstract word meanings. So the advantage of concrete words when presented first would be due to the fact that abstract words are learnt differently from concrete ones, and often with the help of a verbal explanation (see . It follows that for the acquisition of abstract terms the social experience due to the presence of others explaining to us specific word meanings is particularly crucial. In support of this interpretation it is worth noting that in the linguistic materials' ratings we basically found the same patterns for Imageability and Age of acquisition for both Germans and Italians: sentences containing a concrete noun (even if in combination with an abstract verb) were the easiest to imagine, and they were acquired earlier than sentences containing abstract nouns. Conversely German and Italian participants showed different patterns as far as Metaphoricity and Quantity of Motion ratings are concerned, thus they were differently influenced by the specific linguistic milieu.
In sum
The results of our behavioral study showed that participants were faster with congruent combinations, and that with mixed combinations they were faster when the first word was a concrete one, independently of the spoken language and of the word grammatical class. Results are in line with those embodied views, such as LASS and WAT, according to which both linguistic and perception www.frontiersin.org and action experience play a role in accounting for word representation. The WAT proposal is able to explain the advantage of the first concrete word better than the LASS view, ascribing it to the fact that abstract words require more time as a consequence of their peculiar acquisition modality.
Our results have a variety of implications as to how concrete and abstract words are represented in the brain, as they suggest that linguistic and perception and action information are differently distributed in accounting for concrete and abstract meanings. Consistently with recent brain imagining study (Rüschemeyer et al., 2007;Menz et al., 2011), we hypothesize that words with concrete motor content are processed to a greater extent in the perception and action systems than words with abstract content, which in turn are processed more in the linguistic areas. | 9,419.2 | 2011-07-06T00:00:00.000 | [
"Linguistics",
"Philosophy"
] |
TurkuNLP: Delexicalized Pre-training of Word Embeddings for Dependency Parsing
We present the TurkuNLP entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. The system is based on the UDPipe parser with our focus being in exploring various techniques to pre-train the word embeddings used by the parser in order to improve its performance especially on languages with small training sets. The system ranked 11th among the 33 participants overall, being 8th on the small treebanks, 10th on the large treebanks, 12th on the parallel test sets, and 26th on the surprise languages.
Introduction
In this paper we describe the TurkuNLP entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies . The Universal Dependencies (UD) treebank collection (Nivre et al., 2017b) has 70 treebanks for 50 languages with cross-linguistically consistent annotation. Of these, the 63 treebanks which have at least 10,000 tokens in their test section are used for training and testing the systems. Further, a parallel corpus consisting of 1,000 sentences in 14 languages was developed as an additional test set, and finally, the shared task included test sets for four "surprise" languages not known until a week prior to the test phase of the shared task (Nivre et al., 2017a). No training data was provided for these languagesonly a handful of sentences was given as an example. As an additional novelty, participation in the shared task involved developing an end-to-end parsing system, from raw text to dependency trees, for all of the languages and treebanks. The participants were provided with automatically predicted word and sentence segmentation as well as morphological tags for the test sets, which they could choose to use as an alternative to developing own segmentation and tagging. These baseline segmentations and morphological analyses were provided by UDPipe v1.1 (Straka et al., 2016).
In addition to the manually annotated treebanks, the shared task organizers also distributed a large collection of web-crawled text for all but one of the languages in the shared task, totaling over 90 billion tokens of fully dependency parsed data. Once again, these analyses were produced by the UDPipe system. This automatically processed large dataset was intended by the organizers to complement the manually annotated data and, for instance, support the induction of word embeddings.
As an overall strategy for the shared task, we chose to build on an existing parser and focus on exploring various methods of pre-training the parser and especially its embeddings, using the large, automatically analyzed corpus provided by the organizers. We expected this strategy to be particularly helpful for languages with only a little training data. On the other hand we put only a minimal effort into the surprise languages. We also chose to use the word and sentence segmentation of the test datasets, as provided by the organizers. As we will demonstrate, the results of our system correlate with the focus of our efforts. Initially, we focused on the latest ParseySaurus parser (Alberti et al., 2017), but due to the magnitude of the task and restrictions on time, we finally used the UDPipe parsing pipeline of Straka et al. (2016) as the basis of our efforts.
Word Embeddings
The most important component of our system is the novel techniques we used to pre-train the word embeddings. Our word embeddings combine three important aspects: 1) delexicalized syntactic contexts for inducing word embeddings, 2) word embeddings built from character n-grams, and 3) post-training injection and modification of embeddings for unseen words.
Delexicalized Syntactic Contexts
Word embeddings induced from large text corpora have been a key resource in many NLP task in recent years. In many common tools for learning word embeddings, such as word2vec (Mikolov et al., 2013), the context for a focus word is a sliding window of words surrounding the focus word in linear order. Levy and Goldberg (2014) extend the context with dependency trees, where the context is defined as the words nearby in the dependency tree with additionally the dependency relation attached to the context words, for example interresting/amod.
In , we show that word embeddings trained in a strongly syntactic fashion outperform standard word2vec embeddings in dependency parsing. In particular, the context is fully delexicalized -instead of using words in the word2vec output layer, only part-of-speech tags, morphological features and syntactic functions are predicted. This delexicalized syntactic context is shown to lead to higher performance as well as generalize better across languages.
In our shared task submission we build on top of the previous work and optimize the embeddings even closer to the parsing task: We extend the original delexicalized context to also predict parsing actions of a transition-based parser. From the existing parse trees in the raw data collection, we create the transition sequence used to produce the tree and for each word collect features describing the actions taken when the word is on top-2 positions of the stack, e.g. if the focus word is first on stack, what is the next action. Our word-context pairs are illustrated in Table 1. In this way, we strive to build embeddings which relate together words which appear in similar configurations of the parser.
Word embeddings from character n-grams
It is known that the initial embeddings affect the performance of neural dependency parsers and pre-training the embedding matrix prior to training has an important effect on the performance of (Chen and Manning, 2014;Andor et al., 2016).
Since the scope of languages in the task is large, the embeddings used for this dependency parsing task need to be able to be representative of languages with both large and small available resources, and also they need to be able to capture morphological information for languages with complex morphologies as well as those with less morphological variation.
To address better the needs of small languages with very little of data available as well as morphologically rich languages, we build our word embedding models with methods used in the popular fastText 1 representation of Bojanowski et al. (2016). They suggest, instead of tokens, to use character n-grams as the basic units in building embeddings for words. First, words are turned into character n-grams and embeddings are learned separately for each of these character n-grams. Secondly, word vectors are assembled from these trained character embeddings by averaging all character n-grams present in a word. To make the embeddings more informative, the n-grams include special markers for the beginning and the end of the token, allowing the model for example to learn special embeddings for word suffixes which are often used as inflectional markers. In addition to the n-grams, a vector for the full word is also trained, and when final word vectors are produced, the full word vector is treated similarly as other n-grams and is averaged as part of the final product along with the rest. This allows for the potential benefits of token level embeddings to be materialized in our model. Table 1 demonstrates the splitting of a word into character four-grams with special start and end markers. When learning embeddings for these character n-grams, context for each n-gram is replicated from the original word context, i.e. each character n-gram created from the word interesting gets the delexicalized context assigned for that word, namely ADJ, Degree=Pos, amod, stack1 shift, stack2 left-arc, stack2 left amod in the example Table 1.
This procedure offers multiple advantages. One of them is the ability to construct embeddings for previously unseen words, a common occurrence especially with languages with small training corpora. With these character n-gram embeddings we are basically able to build an embedding for any word, except very few cases where we do not find any of the character n-grams from our trained model. Another advantage of this embedding scheme is its better ability to address the morphological variation compared to plain token based embeddings.
Data and Parameters
For the training of word embeddings for each language we took training data from the treebank training section and the automatically analyzed raw corpus . Using also the treebank training section is important for very small languages where there is very little of raw data, especially for Old Church Slavonic where the raw data has only 29,000 tokens compared to 37,500 tokens in the treebank training set, while for big languages it barely makes any difference as the treebank data gets buried under the mass of raw data. For each language we build character n-gram embedding models using word2vecf software 2 by Levy and Goldberg (2014) with negative sampling, skip-gram architecture, embeddings dimensionality of 100, delexicalized syntactic contexts explained in Section 2.1 and character n-grams of length 3-6. The maximum size of raw data used is limited to 50 million unique sentences in order to keep the training times bearable, and sentences longer that 30 tokens are discarded. Languages with only very limited resources we run 10 training iterations, but for rest only one iteration is used. These character n-gram models explained in detail in Section 2.2 can then be used to build word embeddings for arbitrary words, only requiring that at least one of the extracted character ngrams is present in our embedding model.
For parsing we also used the parameters optimized for UDPipe baseline system and changed only parts related to pre-trained embeddings. As we do not perform further parameter optimization, we trained our models always using the full training set, also in cases where different development sets were not provided. For small treebanks without development data, we did not test our models in advance but trusted methods tested on other treebanks to generalize also for these. For each language we include 100-dim word embeddings trained using our methods described in Sections 2.1 and 2.2. Additionally, pre-trained feature embeddings are trained for upos+feat combinations included in the xpostag column. These embeddings are trained using the transition actions as delixicalized context, and vectors for full feature combinations are constructed from individual features using the same character n-gram method as in the word embeddings (one feature is now the same as one character n-gram).
Parsing Pipeline
Our submission builds on top of the UDPipe parsing pipeline by Straka et al. (2016). We use data segmented by the UDPipe baseline systems as our system input, and then morphological analysis and syntactic parses are produced with our own UD-Pipe models. The UDPipe morphological tagger (MorphoDiTa (Straková et al., 2014)) is run as-is with parameters optimized for the baseline system (released together with the baseline models). The only exception is that we replaced the languagespecific postag (xpostag) column with a combined universal postag (upostag) and morphological features (feats), and trained the tagger to produce this instead of the language-specific postags. We did not expect this to affect the tagging accuracy, but instead it was used to provide pre-trained feature embeddings for the parser.
We further modified the UDPipe parser to allow including new embedding matrices after the model has been trained. This gives us an easy way to add embeddings for arbitrary words without a need of training a new parsing model. As we are able to create word embeddings for almost any word using our character n-gram embeddings described in Section 2.2, we are able to collect vocabulary from the data we are parsing, create vectors for previously unseen words and inject these vectors into the parsing model. This method essentially eliminates all out of vocabulary words.
Post-training modifications of word embeddings
The UDPipe parser uses the common technique of adjusting the word embeddings during training. The magnitude of the change imposed by the parser depends on the frequency of the word in the training data and, naturally, only words seen in the training set are subject to this training-phase adjustment. Therefore, we implemented a step whereby we transfer the parser adjustments onto the words not seen in the training data. For every such unseen word, we calculate its translation in the vector space by summing the parserinduced changes of known words in its neighborhood. These are weighted by their cosine similarity with the unknown word, using a linear function mapping similarities in the [0.5, 1.0] interval into weights in the [0.0, 1.0] range. I.e. known words with similarity below 0.5 do not contribute, and thereafter the weighting is linear. The overall effect of this modification observed in our development runs was marginal.
Parallel test sets (PUD)
Parallel test sets are parsed with a model trained on the default treebank of a language (the one without any treebank-specific suffix). For many languages only one treebank exists and no choice is needed, but for some there are two or even more choices. We chose to use these treebanks without treebank suffixes as the very first treebank included for a language will receive just the language code without the treebank suffix while newer treebanks will get a distinguishable treebank suffix. It then means that the default treebanks without suffixes have been part of the UD collection longer, many of these originating from the Google's Universal Treebank collection (McDonald et al., 2013). We hypothesized these treebanks to be more harmonized to the UD guidelines and apply better to the new test sets.
Surprise languages
In this work we did not concentrate on parsing the four surprise languages, and only used a very naive approach to complete the submission of all required test sets. For each surprise language we simply picked one existing model among all models trained for regular treebanks. We parsed the small sample of example sentences (about 20 sentences for each language) with all existing models, and picked the one which maximized the LAS score (Kazakh for Buryat, Galician-TreeGal for Kurmanji, Portuguese for North Sami and Slovenian for Upper Sorbian) without doing any treebank size, language family or related language evaluation. The only change in the parsing model is that during parsing, we mask all word embeddings, this way preventing the parser to use the vector for unknown word too often. This makes our parsing model delexicalized as all word embeddings are zeroed after training and not used in parsing, with the exception that parsing model is trained using information from word embeddings.
Results
The participating systems are tested using TIRA platform (Potthast et al., 2014), where the system must be deployed on a virtual machine and the test sets are processed without direct access to the data. Overall rank of our system in the official evaluation is 11th out of 33 participating teams with a macro-averaged labeled attachment score (LAS) of 68.59%. 3 On macro-LAS score across all treebanks, we are clearly behind the winning system (Stanford, 76.30%), but our pre-trained word embeddings are able to improve over the baseline UDPipe by 0.24% points on average. When looking only at treebanks with very little of training data we gain on average 2.38% over the baseline system. The same number for the big treebanks only is +1.15%, +0.23% for parallel test sets and -16.55% for surprise languages. Based on these numbers we can clearly see that we managed to get a substantial improvement over the baseline system on very small languages where we also assumed our word embeddings to be most helpful. Instead, our very naive approach for handling surprise languages is clearly not sufficient, and a better approach should have been implemented. Detailed results of our system are shown in Table 2.
On official evaluation our system ranked sixth on universal part-of-speech tagging and second on morphological features. We see modest improvement of +0.22% (upos) and +0.27% (feats) over the baseline models. As word embeddings are not used in tagging and we use the same parameters as the baseline system, the only modification we did in tagging is that instead of using language-specific postags (xpostag), we concatenated universal postag and morphological features into xpostag column and trained the tagger to produce this concatenation.
Conclusions
We have presented our entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies, with the 11th rank of 33 participants. During the development, we have focused on exploring various ways of pre-training the embeddings used by the parser, as well as providing embeddings also for the unknown words in the data to be parsed. In particular, we have proposed a method of pretraining the embeddings using an entirely delexicalized output layer of the word2vec skip-gram model. This mode of pre-training the embeddings is shown to be superior to the usual approach of pre-training with the standard word2vec skipgram with negative sampling. We have also explored, here with only a minimal improvement to the score, the possibility of post-hoc application of the adjustments effected by the parser on the word embeddings during the training phase. All our components used in this paper are freely available at https://github.com/TurkuNLP/ conll17-system. | 3,899 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
Analysis of the Effectiveness and Countermeasures of High-Quality Economic Development in the Pearl River Delta City Cluster
Through quantitative analysis method, this paper studies the effect of highquality economic development in PRD urban agglomeration from five aspects: innovation, coordination, green, openness and sharing. The results are as follows: The overall situation of high-quality development has an obvious positive trend. The problem of coordinated development has been alleviated, the income gap between urban and rural areas has been gradually narrowed, the level of green development has been improved, individual cities and prefecture-level cities still face great pressure to control the growth rate of energy consumption, the degree of opening up still needs to be deepened, and the real consumption power of people experiencing shared development has been rising. Therefore, to promote the high-quality economic development of the PRD urban agglomeration in the future, it is necessary to rationally coordinate the synergistic development relationship between different aspects within the system, make up for the shortcomings, and synchronously promote the innovative, coordinated, green, open and shared development.
Introduction
As one of the major city clusters in China, the Pearl River Delta City Cluster (PRD city cluster) has a high degree of economic, social, cultural and technolo-gical development, and is one of the important growth poles for the high-quality development of China's regional economy. A reasonable grasp of the status of the high-quality economic development of the PRD city cluster will have a significant impact on the high-quality development of the national regional economy.
President Xi Jinping has also proposed on several occasions to "develop several new power sources to drive high-quality development across the country, especially the Beijing-Tianjin-Hebei region, the Yangtze River Delta and the Pearl River Delta, as well as some major city clusters". In view of this, this paper focuses on the analysis of the phased results of the high-quality economic development of the PRD city cluster and proposes measures to promote the response according to the corresponding situation.
Literature Review
At present, academic research on high-quality economic development is relatively abundant. Sun & Cheng (2022) have conducted in-depth research on the effectiveness of high-quality development in central China, the development characteristics and corresponding improvement measures; Zhang et al. (2020) have actively discussed the direction of the main driving force of high-quality development in China in the new era and its related formation path; Ren & Sun (2022) have analyzed the path and mechanism of cultivating new advantages of high-quality economic development in China mainly from the digital economy, etc. Many scholars have conducted research on high-quality economic development in China from different aspects and obtained relatively abundant results (Chao & Xue, 2020;Zhang et al., 2021;An et al., 2021). While the research on the economic quality development of urban agglomerations is still in its infancy, only individual scholars have made preliminary discussions. Just as Zhang & Qin (2021a) defined the concept of high-quality regional economic development as a development process in which a region forms a new structure by building innovative development dynamics, renewing development conditions and expanding opening to the outside world, thus significantly improving the efficiency and sustainability of economic growth and realizing the positive interaction between economy, society and ecology, so should the concept of high-quality development of urban agglomerations.
Integrated the above can be found, though the academia has high quality development level and characteristics of urban agglomeration economy, but the high quality special analysis of the PRD city cluster economy development stage there wasn't much research on the effectiveness of the majority of research is based on Guangdong-Hong Kong-Macao Greater Bay Area as the research object, is not entirely new development concept, build a index system is analyzed, The analysis results are difficult to accord with the real situation of the highquality economic development of the Pearl River Delta urban agglomeration. Accordingly, this paper will mainly take the new development concept as the Z. Zhang, S. Lin standard, refer to the index design of "Analysis of Economic High-Quality Development in the Guangdong-Hong Kong-Macao Greater Bay Area", "Measurement of Economic High-Quality Development of Chinese City Groups and Comparative Analysis of Differences", and "Analysis of Differences in the Level of Economic High-Quality Development in the Yellow River Basin" to discuss the status of economic high-quality development of the PRD city cluster (Zhang & Qin, 2021b;Xiao & Yu, 2021;Zhang et al., 2022).
Analysis of the Stage Effectiveness of the High-Quality Economic Development of the PRD City Cluster
President Xi Jinping has pointed out that "ideas are the forerunner of actions".
"The concept of development is strategic, programmatic and guiding. It embodies the thinking, direction and focus of development. With the right development philosophy, targets and tasks will be set, and policies and measures will follow." The new development concept put forward by President Xi Jinping provides direction for high-quality development of China's regional economy in the new era. High-quality economic development of urban agglomerations is a type of high-quality regional economic development, and belongs to an important part of China's high-quality economic development (Zhang & Qin, 2021a). As a complex system project, the high-quality development of urban agglomerations is bound to involve numerous aspects.
According to the existing research results, most studies involve five aspects of the new development concept or one of them, according to the actual situation of the development of the PRD urban agglomeration. According to the actual situation of the development of the PRD city cluster, this paper will analyze the stage effectiveness of the high-quality economic development of the PRD city cluster in five aspects, including the level of innovation, coordination, greenness, openness and sharing, based on the main indicators selected from relevant studies. It should be noted here that in the absence of special indication, the original data used for analysis in this paper are all from Guangdong Statistical Yearbook.
Level of Innovation in Economic Development
Innovative development focuses on solving the problem of development momentum. In the process of China's economic development, the innovation capacity of the PRD city cluster is relatively strong, but the development of internal cities varies greatly. Figure 1 shows that the total number of authorized patent applications in the PRD city cluster has risen from 273,074 to 782,441 in 2017-2021, and the contribution of Shenzhen, Guangzhou and Dongguan is relatively large in this process, and the sum of authorized patent applications in the three cities has risen from 84.31% to 88.60%, and the respective percentages of the three have also risen from 34.51%, 22.05% and 27.75% to 35.68%, 24.16% and 28.76% respectively. The total share of patent applications granted in other prefecture-Z. Zhang, S. Lin level cities in 2021 is only 11.4%. It can be found that the changes in the total number of authorized patent applications in the PRD city cluster and its different cities have shown that its economic development momentum is increasing year by year, among which Shenzhen, Guangzhou and Dongguan have relatively stronger development momentum, and other cities still have more room for improvement.
Level of Coordination of Economic Development
Coordinated development focuses on solving the problem of unbalanced development. In the process of China's economic development, the coordination problem of the development of the PRD city cluster has become increasingly prominent, and the urban-rural development gap is especially obvious in the relationship between regional, urban-rural, economic and social, material and spiritual civilization. Figure 2 shows that the per capita disposable income ratio of urban and rural residents in the PRD city cluster has decreased from 2.35 to 2.21 from 2014 to 2020, and the gap between urban and rural residents' per capita disposable income has decreased by 6.27%. In terms of absolute income changes, the per capita disposable income of urban residents has increased from 37063.7 yuan to 59225.1 yuan, and the per capita disposable income of rural residents has also increased from 15,754 yuan to 26856.5 yuan, and the per capita disposable income of both urban and rural residents has shown a certain upward trend. It can be seen that in the process of high-quality economic development of the PRD city cluster, the gap between urban and rural incomes has gradually narrowed, and the problem of coordinated development in this regard has been gradually alleviated.
Green Level of Economic Development
Green development focuses on solving the problem of harmony between man and nature. The PRD city cluster is developing more rapidly and facing increasingly severe problems of resource constraints, serious environmental pollution and ecosystem degradation in terms of ecological environment, and the pursuit Patents granted (items) Year Guangzhou Shenzhen Zhuhai Foshan Jiangmen Zhaoqing Zhongshan Dongguan Huizhou the Pearl River Delta City Cluster of people's taste for living environment is getting increasingly high. Figure 3 shows that the growth rate of energy consumption per unit of GDP of the cities in the PRD city cluster from 2010 to 2020 shows a fluctuating downward trend in general, and only Huizhou and other cities show a large fluctuating upward state. Specifically, from 2010 to 2020, the growth rate of energy consumption per unit GDP in Zhongshan and Shenzhen has dropped from −1.5% to −6.22% and −2.94% to −5.54% respectively, and the growth rate of energy consumption per unit GDP in both cities has dropped by 4.72 percentage points and 2.6 percentage points respectively. The growth rate of energy consumption per unit GDP in Zhuhai and Huizhou has increased from −3.67% to −0.44% and −5.82% to 2.28% respectively, and the growth rate of energy consumption per unit GDP in both cities has increased by 8.1 percentage points and 3.23 percentage points respectively. The comparison shows that in the process of high-quality economic development of the PRD city cluster, the level of green development is gradually improving, especially in terms of energy consumption per unit of GDP, but the pressure to reduce and control the growth rate of energy consumption in individual cities is still greater.
Economic Development Level of Openness
Open development focuses on solving the problem of internal and external linkage of development. The use of the two international and domestic markets and two resources by the PRD city cluster in the early stage provided the necessary conditions for its rapid economic development, and also helped to enhance the ability of China to cope with international economic and trade frictions and international economic discourse. Figure 4 shows that the foreign trade dependence and foreign capital dependence of the PRD city cluster in general show a fluctuating downward trend from 2013 to 2020. Specifically, the foreign trade dependence of the PRD city cluster decreases from 121.65% to 75.60% from 2013 to 2020, and the total import also decreases from 2735.393 billion yuan to 2633.076 billion yuan, while the total export increases from 3769.826 billion yuan to 4134.606 billion yuan. In terms of foreign capital dependence, the PRD city cluster has decreased from 2.62% to 1.73% from 2013 to 2020, while the amount of actual foreign direct investment has increased from 139.986 billion yuan to 155.142 billion yuan. The comparison reveals that the decline in total imports has led to an overall decline in foreign trade dependence. Despite the increasing amount of real foreign direct investment, it is still small compared with the growth of total GDP, and the overall share has declined. It can be seen that in the process of high-quality economic development of the PRD city cluster, it is still necessary to continuously deepen the reform and opening up and precisely improve the level of external opening.
Shared Level of Economic Level
Shared development focuses on solving the problem of social equity and justice. In recent years, the economic scale of the PRD city cluster has been increasing, the gap between the level of public services in urban and rural areas and the problem of unfair distribution have become increasingly prominent, and there is still a lack of sharing and universal access to the fruits of reform and development. To a large extent, these problems need to rely on the continuous improvement of residents' consumption capacity, which is the per capita disposable income of residents, to achieve. As can be seen from Figure 5, the growth trend of per capita disposable income of all residents in the PRD city cluster from 2014 to 2020 is remarkable, having grown from 33642.1 yuan to 54809.6 yuan, with an average annual growth rate of 8.47%. It can be seen that in the process of highquality economic development of the PRD city cluster, the actual consumption ability of the residents is increasing, and the comprehensive strength of sharing the fruits of development and reform is also increasing.
Countermeasures for the High-Quality Economic Development of the PRD City Cluster
Based on the above-mentioned stage results of the high-quality economic development of the PRD city cluster, and the obstacles faced by the PRD city cluster Z. Zhang, S. Lin and its internal cities in the process of promoting high-quality economic development, this paper focuses on the following policy measures to promote the high-quality economic development of the PRD city cluster.
Strengthening the Economic Development Momentum of the PRD City Cluster
The "innovation drive" has gradually replaced the "factor drive" as the main driving force for the high-quality economic development of the PRD city cluster.
To enhance the level of science and technology innovation in the Guangdong-Hong Kong-Macao Greater Bay Area in the future, we should clarify the status of science and technology resources in the PRD city cluster, appropriately integrate the advantages of science and technology resources, and vigorously enhance the level of innovation networks. Further improve the scientific and technological innovation capacity of innovation subjects, establish a mechanism for sharing scientific and technological innovation resources among cities, build an international scientific and technological innovation center, and promote the emergence of new business models and new modes. At the same time, it is necessary to clarify the construction plan of the PRD city cluster, continuously strengthen the policy support of each region and improve the innovation collaboration relationship among each other, establish new platforms and mechan- Year isms more suitable for the development of science and technology innovation in the PRD city cluster, provide favorable technical support for the emergence of new business models and modes in the new era, and promote the rising status of the PRD city cluster in the global industrial value chain.
Innovating the Coordinated Regional Development of the PRD City Cluster
Strengthen the role of cities such as Guangzhou and Shenzhen as growth poles of the city cluster, rationalize the relationship between different cities within the PRD city cluster and between the city cluster and the economic hinterland. Promote the shared development of the economic hinterland while moderately raising the urbanization level of the peripheral cities. Specifically, under the premise of following the law of urban collaboration, the future should be reasonably laid out for industrial development. For the pace of economic transformation in areas with larger economies and relatively faster growth rates (Foshan, Dongguan, etc.), promote the generation of new industries and new models, promote the transformation of economic development dynamics, and maintain a sustained and stable economic growth rate and level in order to enhance the coordination of economic development and promote high-quality regional development. For cities with smaller economic scale and slower growth rate (such as Zhaoqing and Jiangmen), they should focus on the development of special and advantageous industries and make reasonable use of national policy support, international resources and the opportunities of Guangdong-Hong Kong-Macao Greater Bay Area construction to achieve moderate economic scale expansion and growth rate increase. In addition, it is still necessary to pay attention to the optimization of industrial structure and industrial layout adjustment of urban and rural economies, gradually promote the shift of urban and rural economies from labor-intensive to technology-intensive, and continuously gather scattered township enterprises for integrated management in order to enhance the level of urban and rural economic development.
Constructing a New Pattern of Opening up the PRD City Cluster
Continuously increase the proportion of exports of new products with high technological content and added value, improve the efficiency of capital flow and the quality of foreign capital utilization, continuously strengthen external ties to enhance the quality of open development, and accelerate the development of highlevel open economy in the PRD city cluster. Shenzhen, Zhuhai and Dongguan have certain scale advantages in goods trade exports and foreign investment utilization. In the process of future high-quality development, they should focus on innovative trade methods, optimize the structure of foreign trade goods, improve the technical level and value-added content of goods, and improve the current situation of foreign trade development. At the same time actively play the scale and agglomeration effect of foreign investment introduction, focusing on supporting technology-intensive and knowledge-intensive industries, and promoting the development of high-end manufacturing, financial technology, cultural and creative as well as real estate industries. Huizhou, Jiangmen, Zhaoqing and other regions should give full play to their comparative advantages, make up for their development shortcomings, and actively use the existing relevant preferential policies to promote the scale of advantageous industries in order to enhance the level of foreign trade and foreign investment utilization and create favorable conditions for high-quality economic development.
Improving the Modern Industrial System of the PRD City Cluster
In the process of future development, the advantages of Guangzhou and Shenzhen in innovation R&D and operation headquarters, as well as the comparative advantages of Foshan, Dongguan, Huizhou and other regions in high-end manufacturing should be fully utilized to accelerate the speed of optimization and upgrading of traditional industries, moderately expand the scale of advanced manufacturing industries, rapidly upgrade the level of high-tech industries and improve industrial types, and actively explore the application of new production methods, such as cloud computing, Internet+ and other emerging technologies, in the field of high-end equipment manufacturing, promote the development of new manufacturing models such as scale personalization and cloud manufacturing, and create conditions for "manufacturing economy" to "smart economy". In addition, the development level of the service industry should be upgraded, focusing on high-end, specialization, optimization and upgrading in the fields of cultural and creative, tourism services, health industry, shipping and logistics, and professional services, focusing on building a modern service industry system of coordination and cooperation, complementary advantages and staggered development.
Improving the Level of Integrated Transportation in the PRD City Cluster
In the future, in the process of actively promoting the integrated construction of comprehensive transportation in the PRD city cluster and creating a new pattern of staggered development and complementary advantages, the degree of interconnection of transportation infrastructure should be continuously improved, especially the interconnection of the Mainland with Hong Kong and Macao, the improvement of urban software and hardware infrastructure, and the moderate increase of comprehensive transportation hubs. Improve the structural share of transportation modes, and moderately increase rail transportation (urban rail, subway, high-speed rail, etc.) and railroad mileage. Integrate transportation resources, develop multimodal transportation, properly solve the barriers between sea, air and railroad transportation, realize the effective connection of air-sea, sea-rail, air-public and sea-public transportation, and enhance the efficiency of "integrated sea-land-air" transportation.
Innovating Mechanism for Sharing the Outcome of Economic Development in the PRD City Cluster
Taking into account the actual situation of the PRD city cluster in terms of shared development, while vigorously adhering to the important directions of universality, equalization, rationalization, and sustainability, and gradually promoting the formation and improvement of the basic public service system, we will benchmark well-known urban agglomerations at home and abroad and draw on their experiences in the development process of improving the living standards of residents, improving public infrastructure construction, and developing education and health care. Adopt differentiated strategies to make up for the shortcomings in healthcare, education, employment, public goods and other areas in a targeted manner, and narrow the gap between the shared levels of economic development in different regions.
Complementing the Shortcomings of High-Quality Development in the PRD City Cluster
For different cities in the PRD city cluster, they should obey the overall development strategy of the PRD city cluster in the process of future high-quality development, and solve their own shortcomings in accordance with local conditions while giving full play to their comparative advantages. For example, Shenzhen should focus on improving the teacher-student ratio, improving the current situation of foreign trade and foreign investment utilization, and strengthening the construction of transportation and information infrastructure. Guangzhou should focus on rationalizing the teacher-student ratio and actively solving urban employment problems to enhance the level of shared economic development. Huizhou and Jiangmen should take advantage of the opportunity of the development of the Guangdong-Hong Kong-Macao Greater Bay Area to vigorously promote the development of advanced manufacturing industries by relying on their own industrial base. Foshan and Zhaoqing should continuously enhance development efficiency, improve development mode, actively promote green and intelligent development process, and gradually reduce energy consumption to achieve clean development.
Conclusion
This paper mainly analyzes the high-quality economic development of the PRD urban agglomeration from the perspective of new development concept. On the one hand, it focuses more on the main content of high-quality development and overcomes the shortcoming of diluting the role of main aspects by using too many indicators in existing literature. On the other hand, the evaluation ideas and data use are concise and clear, which is conducive to a clearer comparison to find the crux of the problem. At the same time, in view of the staged achievements of the high-quality economic development of the PRD urban agglomeration, the future smooth and sustainable improvement of the high-quality eco-nomic development of the PRD city cluster should reasonably coordinate the synergistic development relationship of different aspects within the system, focusing on strengthening the economic development momentum of the PRD city cluster, innovating a coordinated regional development approach, constructing a new pattern of opening up to the outside world, improving the modern industrial system, improving the level of integrated transportation and Improve the mechanism of sharing the fruits of economic development in order to continuously make up for the shortcomings of high-quality development.
Funding Information
This research was funded by the Guangzhou Philosophy and Social Science planning project (2021GZGJ17), the 2021 School (College) General project of Guangdong Institute of Public Administration (XYYB202109). | 5,163.2 | 2022-01-01T00:00:00.000 | [
"Economics"
] |
Evidence for Interlayer Electronic Coupling in Multilayer Epitaxial Graphene from Polarization Dependent Coherently Controlled Photocurrent Generation
Most experimental studies to date of multilayer epitaxial graphene on C-face SiC have indicated that the electronic states of different layers are decoupled as a consequence of rotational stacking. We have measured the third order nonlinear tensor in epitaxial graphene as a novel approach to probe interlayer electronic coupling, by studying THz emission from coherently controlled photocurrents as a function of the optical pump and THz beam polarizations. We find that the polarization dependence of the coherently controlled THz emission expected from perfectly uncoupled layers, i.e. a single graphene sheet, is not observed. We hypothesize that the observed angular dependence arises from weak coupling between the layers; a model calculation of the angular dependence treating the multilayer structure as a stack of independent bilayers with variable interlayer coupling qualitatively reproduces the polarization dependence, providing evidence for coupling.
2
Abstract: Most experimental studies to date of multilayer epitaxial graphene on C-face SiC have indicated that the electronic states of different layers are decoupled as a consequence of rotational stacking. We have measured the third order nonlinear tensor in epitaxial graphene as a novel approach to probe interlayer electronic coupling, by studying THz emission from coherently controlled photocurrents as a function of the optical pump and THz beam polarizations. We find that the polarization dependence of the coherently controlled THz emission expected from perfectly uncoupled layers, i.e. a single graphene sheet, is not observed. We hypothesize that the observed angular dependence arises from weak coupling between the layers; a model calculation of the angular dependence treating the multilayer structure as a stack of independent bilayers with variable interlayer coupling qualitatively reproduces the polarization dependence, providing evidence for coupling.
Although single-layer graphene is of great interest due to its unique electronic properties, for many applications in both transport and optoelectronics, it is highly desirable to use many layers while maintaining the unique properties of single-layer graphene [1][2][3][4][5][6] In particular, multilayer epitaxial graphene (MEG) grown by thermal decomposition on SiC substrates and patterned via standard lithographic procedures has been proposed as a platform for carbon-based nanoelectronics and molecular electronics 1-3, 7, 8 . A variety of initial studies showed rotational stacking order in multilayer epitaxial graphene (in contrast to A-B stacking in graphite), leading to decoupling of the layers and a linear band structure just as in single-layer graphene [9][10][11][12][13] . Recent experiments have indicated that A-B stacked bilayers may 3 be present in large multilayer stacks, but they constitute at most 10% of the layers 14 .
Angle-resolved photoemission experiments have provided strong evidence that the band structure even for small rotation angles between adjacent layers remains identical to that of isolated graphene 15 . In this work, we describe an optical probe that is in principle very sensitive to interlayer coupling effects, and has the advantage that it is sensitive to all the layers in the sample, and not just the top few layers 15,16 .
We have recently reported a non-contact all-optical femtosecond coherent control scheme to inject ballistic electrical currents in MEG 17 . In this scheme ( Fig. 1(a)), quantum interference between single-photon and two-photon absorption breaks the material symmetry and the photoinjected carriers are generated with an anisotropic distribution in k-space, giving rise to a net current which is detected via an emitted THz signal 18 . The current density generation rate associated with interference between single-and two-photon absorption processes of beams at 2ω and ω is of the form: where is the polarization angle between the ω and 2ω beams. This relationship can be further simplified by noting that / 1 4 η xyyx/ η xxxx =-0.19±0.03 for a fundamental beam ω at 1400 nm 18,22 . In this paper, we measure the X and Y components of the THz field generated by coherently controlled photocurrents in MEG as a function of , in order to determine whether the third-order nonlinear tensor in MEG is consistent with a model incorporating only isolated graphene layers.
The sample used is a MEG film produced on the C-terminated face of single-crystal As in our prior work demonstrating coherent control in MEG 24 , a commercial 250 kHz Ti: sapphire oscillator/amplifier operating at 800 nm is used to pump an optical parametric amplifier (OPA), followed by a difference frequency generator (DFG) to generate pulses with average power of 2-3 mW at 3.2 μm (ω beam) and 200-fs pulse width. The ω beam passes through a AgGeS 2 crystal (type I) to generate the 2ω beam at 1.6 μm with P 2ω =200 μW. The ω and 2ω pulses are separated into the two arms of a Michelson interferometer using a dichroic beamsplitter. The relative polarization between the two pulses is varied by rotating the beam polarization with a λ/2 waveplate in the arm; the relative phase between the ω and 2ω pulses is controlled using a piezoelectric optical delay stage in the ω 5 arm. In all measurements, the polarization of the 2ω pulse is fixed and the polarization of the ω pulse is rotated by a λ/2 waveplate. The two pump beams emerging from the Michelson interferometer are overlapped on the samples with a 20-μm diameter (FWHM) spot size, producing peak focused intensities for the 3.2 μm and 1.6 μm beams of respectively 2.26 GW/cm -2 and 0.32 GW/ cm -2 on the sample, including losses due to all intermediate optics.
The sample is held at room temperature.
The coherently injected photocurrent is detected via the emitted terahertz radiation in the far field by electro-optic sampling 25 The wire-grid polarizer and ZnTe crystal are rotated together to detect THz radiation polarized either parallel or perpendicular to the 2 pump beam. The effective bandwidth of the electro-optic detection system is estimated to be ~2 THz due to phase mismatch between the terahertz and probe beams.
A typical THz waveform generated from the coherently controlled photocurrent is shown in Fig 2(a). The oscillatory temporal waveform is a result of the finite bandwidth of the electro-optic detection system and water-vapor absorption rather than the dynamics in the 6 sample. The THz peak field marked by the arrow in Figure 2(a) is well controlled by the relative phases of the ω and 2ω beams through the phase parameter Δϕ=2ϕ ω -ϕ 2ω , as shown in Figure 2 The experimental quantity of interest in this paper is the dependence of the THz amplitude on the relative polarizations of the and 2 pulses when the THz polarization is either parallel or perpendicular to the 2 beam polarization. We take the peak-to-peak value as shown in Fig. 2 15 . Hence, in order to begin to address whether interlayer electronic coupling could be responsible for our observed angular dependence of the coherent control, we apply a simpler model for the interlayer coupling: we assume that the effect on interlayer coupling will be similar to that of Bernal-stacked bilayers, where we take the interlayer coupling to be a parameter. There is some physical justification for such a model, since if there are coupled layers with small twist angles, there will be large regions of the sample which effectively have A-B alignment, and other regions which have A-A alignment. Indeed such local coupling has been observed in real-space mapping of magnetically quantized states in similar samples 16 . 8 The bilayer response is characterized by the interlayer coupling energy γ 1 and a linewidth Γ that arises because two-photon absorption can be resonant with an intermediate state 26 . The predicted result for the shape of the Y component is given by sin(2θ), the same as for graphene and for any 2D isotropic medium. For 2ħω<γ 1 , the shape of the X component would also be the same as graphene. However, for 2ħω≈γ 1 or 2ħω≈2 γ 1 , as it would be for the standard value for γ 1 =0.4eV in graphite 29 , the model predicts η xyyx /η xxxx ≈ -0.5, reasonably independent of the value of Γ. This results in the angular distributions shown in Fig. 4(b). The predicted angular dependence is in qualitative agreement with experiment if we consider the limited angular accuracy. Of course, in the context of other experiments on the electronic structure of MEG, it is unlikely that MEG should be thought of as a stack of independent bilayers, but this model nonetheless shows that coupling between layers of graphene has a qualitative effect on the predicted coherently-controlled photocurrent. Figure 4(c) shows with a mixture of 70% single-layer and 30% bilayer with η XYYX /η XXXX =2.8, the theoretical predicted angular dependence shows good agreement with that measured in our experiment.
In summary, we have demonstrated that pump pulse polarization dependent coherent controlled photocurrent measurement is a sensitive tool to observe interlayer coupling in multilayer epitaxial graphene. The observed polarization angular dependence differs notably from that expected for a single isolated graphene layer. A model calculation treating the electronic states as those of a bilayer with the interlayer coupling as a parameter qualitatively reproduces the observed angular dependence, thus indicating the presence of interlayer electronic coupling. Future work will focus on incorporating more realistic models of the | 2,154.6 | 2011-09-13T00:00:00.000 | [
"Physics"
] |
Application of the Theory of Planned Behavior to couples' fertility decision-making in Inner Mongolia, China
China relaxed its family planning policy and adopted a universal two-child policy on January 1, 2016 to actively address the country’s aging trend. However, the policy has failed to have any significant effect on the fertility rate of many provinces. In light of the country having the highest sex ratio at birth in the world and the huge burden of the aging population, improving the fertility rate is an urgent priority in China. This facility-based cross-sectional survey aimed to study determinants of fertility decision-making among couples based on the Theory of Planned Behavior. The study was conducted in Inner Mongolia Autonomous Region of China. A structured self-administered questionnaire was completed by 1,399 couples, consisting of wives aged 20–49 years and their husbands. Based on the structural equation modeling method of analysis, determinants of fertility decision-making were perceived behavior control (perceived importance of having a stable income and cost of raising a child), subjective norms (perceived social pressure about “sex preference of the newborn by themselves and their partner”) and attitudes (only healthy parents can have a child). Other significant factors influencing fertility decision were ethnicity and education level, with ethnic minority couples having less perception of social norm towards fertility and those with higher education having higher perceived control toward having a (further) child. The study reveals the importance of the China’s infrastructure and public facilities to support child-rearing to increase the fertility rate among couples of child-bearing age, which in turn will reduce the burden associated with an aging society.
Introduction
Since the 1960s, global fertility rates have halved, grabbing the attention of researchers. Low fertility rates are now common in both developed and developing countries [1][2][3][4]. The situation has raised the question of whether fertility behavior adequately reflects people's preference for the number of children they would like to have; the discrepancy between ideal and actual number of children known as the fertility gap [5]. PLOS On January 1, 2016 China relaxed its family planning policy and adopted a universal twochild policy to actively address the country's aging trend. However, the new policy has failed to have any significant effect on the fertility rate of many provinces [6,7], and in fact the birth rate decreased from 13.0 births/1,000 women in 2016 to 12.3 births/1,000 women in 2017 [6]. In light of the country having the highest sex ratio at birth in the world and the huge burden of the aging population [8], improving the fertility rate is an urgent priority in China. Motivated by the above scenario, an understanding of the determinants of fertility intention will help policy-makers tackle these challenges.
Although Inner Mongolia is an Autonomous Region of China which has relaxed the onechild policy for Mongolian people, the birth rate of Inner Mongolia has been very low for many years. In 2015, the birth rate in Inner Mongolia was 7.72‰ which ranks it 26 th among the 34 provinces of China; at the same time the birth rate was 12.07‰ in the whole country [9]. This study aims to determine the psychosocial determinants on a couple's fertility intention that influences the 'fertility gap' and to understand how couples make their fertility intention within the opportunities and constrains provided by the societal structures in which they are embedded. The discrepancy between ideal number and planned number of children is largely unexplored and calls for further research on fertility decision-making among couples.
Wives' age at marriage is one of the most important determinants of child-birth in a family. Under the Chinese law regarding marriage and birth, the postponement of marriage means the postponement of childbirth. In addition, the out-of-wedlock birth is in conflict with traditional Chinese culture, so any level of fertility in a population can always be traced by the percentage of women at first age of marriage [10,11].
Minorities in China have had low fertility rates since the 1990s [12], even being supported by the relative liberal one child policy in China. Inner Mongolia has the lowest fertility rate in all Autonomous Regions in China [13], in both urban and rural areas. Compared with Han, the Mongolian ethnic group have fewer sons and a multi-child preference [4].
Most research studies combined demographic characteristics, such as education, occupation, and income, as socioeconomic factors linked with fertility [14 -16]. Previous research showed a significant negative correlation between fertility and wives' socioeconomic status [17], but the negative relationship disappears or becomes even positive after accounting for the endogeneity of schooling [18,19]. A study from Norway showed that husbands who had a higher education delayed their fatherhood, yet fewer remained childless, and the rates of second and third births increased with their educational attainment [20]. Among all published studies, the impact of husbands' socioeconomic status on fertility decision has largely been overlooked. Similar situations were reported in studies from China [21][22][23].
The Theory of Planned Behavior (TPB) is an extensively validated psychosocial theory useful for understanding couples' fertility decision [24]. It is a general psychological theory concerning the link between attitudes and behaviors and has also been applied to explain fertility decision-making [25][26][27]. Based on the TPB, three determinants influence behavior intention which is a proxy measure of the behavior (in our case, a decision to have a child): 1. attitudes toward the behavior which refer to the individual's positive and negative feeling of the behavior and outcome of performing the behavior; 2. subjective norms, which relates to the individual's perception of the social environment surrounding the behavior; and 3. perceived control over the performance of the behavior [24].
The objective of this research was to investigate determinants of fertility decision-making among couples in Inner Mongolia. We developed our initial conceptual model based on the Theory of Planned Behavior [27] (Fig 1).
Participants, recruitment setting and sampling procedure
A facility-based cross-sectional survey was conducted in Xin Cheng, an urban county in Hohhot district and Zhuo Zi, a rural county in Ulanqab district of Inner-Mongolia province of China between March and July 2018.
Considering that the estimated proportion of eligible couples applying for permission to have a second child was 0.13 [28], to estimate the proportion with a precision of 5%, and allowing for 20% nonresponse rate, and a design effect of 2 for cluster sampling, the sample size required for this study was 1,305 (174/0.8×2×3). In each county, data were collected from three different settings: marriage registration offices, antenatal care (ANC) clinics, and kindergartens.
All Chinese couples in which the wife was aged 20-49 years and both were an Inner Mongolian citizen who visited one of the above settings were eligible for the study. At marriage registration offices, couples who had just married and had no children were recruited. At the ANC clinics, pregnant women were recruited, while at kindergartens, couples who had already Determinants of fertility decision-making had one or two children were recruited. The minimum age of 20 years for women was used because this is the age at which Chinese women can legally be married in China.
Measures
Social and demographic variables. Data on demographic characteristics and psychosocial determinants on fertility decision were derived from a structured self-administered questionnaire. Demographic variables included wife's age at marriage, couples' education levels (low: both senior high school or below, middle: at least one college degree, high: at least one above college degree), ethnicity (at least one ethnic minority, or both Han), family monthly income, and occupation (both self-employed, at least one self-employed, or both employed).
Predictors of fertility decision. The dependent variable was fertility decision, measured by a question: "what is your family plan in the next 2 to 3 years?". The possible responses were: (1) I will have a (another) child in 2-3 years, (2) I already have two children, (3) no plan for any child, (4) one child is enough for me. Factors predicting fertility intention was measured using a modified TPB questionnaire [27]. Questions were developed according to the main constructs of the TPB [29] in the context of fertility decision-making using the guideline provided in the TPB manual [30]. This questionnaire was developed after performing a literature review and conducting in-depth interviews with a panel of experts who assessed the questionnaire's validity. The panel of experts included one health policy researcher, one psychologist and two epidemiologists. A pilot study was conducted before the survey to ensure feasibility of the study and to test the questionnaire, which was modified after the pilot study and finalized by the team of experts.
The questionnaire included three dimensions with 13 individual items: 1. Perceived behavior control to have a (another) child (PC). This dimension was assessed with six statements: PC1: the couple is ready to sacrifice the time and freedom for the baby; PC2: a suitable babysitter is available when the couple works outside; PC3: a lot of money will be spent to raise a (another) baby; PC4: the couple has sufficient materials for child rearing; PC5: the couple's family members can help them take care of the baby; PC6: the couple has a stable income.
2. Subjective norms regarding having a (another) child (SN). This dimension was assessed with three statements related to the couple's perceived social pressure on their fertility decision. The statements are as follows, SN1: the family should have a boy (or girl); SN2: his/her partner prefers to have a boy (or girl); SN3: the relatives and friends around the couple have already had two children.
3. Attitudes towards having a (another) child (AT). This dimension was assessed with four statements related to the couple's attitudes towards having a child. The statements are as follows, AT1: the couple enjoys having a big family; AT2: the couple enjoys the fun of raising a baby; AT3: Only healthy parents can have a child; AT4: having a child can maintain a good relationship between the couple. The responses of each statement was based on the forced choice method (no neutral choice) [31], which is rated on a four-point scale (1 = not important at all, 2 = of little importance, 3 = very important, and 4 = absolutely essential). The total score of the whole fertility decision model was constructed by adding the scores of all items together, yielding a possible range of scores between 13 and 52.
Ethical approval and consent to participate. Before collecting the data, written informed consent was obtained from all participants. The study protocol was approved by the Research Ethics Committee of the Faculty of Medicine, Prince of Songkla University (reference number: 60-429-18-1). No additional ethical approval was required from the three study sites.
Data management and statistical analysis. Data were entered and validated using Epidata, version 3.1. R version 3.4.4 was used for data analysis. Amos21.0 was used to fit the structural equation models. No question was shown to contain any missing data. Three age groups were created: "< 24 years", "24-35 years", and "36-49 years". The cut-point of 24 years was considered the age for college matriculation and 35 years the maximum safe age for parturient women.
Means and standards deviation were used to describe the manifest variables of the TPB model of fertility decision. The total score of the model was calculated from the sum of the scores of responses to all 13 manifest variables. Comparison of the total scores of the fertility decision model across socio-demographic variables were performed using the Kruskal-Wallis test. Structural equation models (SEM) with maximum likelihood estimation [32] was used to explore the relationship between variables. All 13 manifest variables and social-demographic variables with a p-value less than 0.2 from the Kruskal-Wallis test were included in the initial structural equation prototype model. Standardized regression weights (β) were obtained from the SEM, indicating the total effect of each manifest variable on the outcome variable (fertility decision). As recommended by Anderson and Gerbing [33], goodness of Chi-square statistic, root mean square error of approximation (RMSEA), adjusted goodness fit index (AGFI), and comparative fit index (CFI) were used to choose the best fitting model. Cronbach's alpha was used to test the reliability of scales, a value greater than 0.70 was regarded as satisfactory [34].
Demographic characteristics
A total of 1,513 couples were asked to join the study, of which 1,399 (92.5%) agreed to participate and completed the whole questionnaire and whose data were included in the analysis. The mean age at marriage of the wives was 26.3 years (standard deviation (SD) = 3.1). The majority of the families (74.3%) belonged to the Han ethnicity and most had a middle level education. About one-third of the couples (34.2%) were both self-employed. Over 80% of the couples had monthly income less than 10,000 Chinese yuan (equal to 1,440 US dollars) ( Table 1).
Descriptive analysis of construct variables of fertility decision model
Summary statistics of all variables of the three dimensions in the fertility decision model are shown in Table 2. The mean overall score was highest for the Perceived behavior control domain (2.52), followed by Attitudes (2.34), and Subjective norm (1.89). The means of all individual items were in the middle range (1.86-2.68), indicating that most respondents placed the value of each factor as "of little importance" to "very important". Cronbach's alphas of all domains exceeded the minimum acceptable level of 0.70, indicating high internal consistency of the items measuring the same construct. Table 3 shows the total scores of all manifest variables of the fertility decision model by socialdemographic variables. Higher scores indicate a higher perception of the importance of perceived behavior control, social norm and positive attitudes on the decision to have a (another) child. The results showed that the scores of fertility decision were not different between couples belonging to minority and Han ethnic groups. However, significantly higher score were seen among more educated couples, couples with a higher monthly family income and those who were both employed, compared to their counterparts. Wives who married between the ages of 24-35 years had higher mean scores compared with those married at an earlier or later age. Based on these univariate results, education, employment status, monthly income and wife's age at marriage were included in the initial SEM of the fertility decision model. Although ethnicity was not significantly associated with the fertility decision in this univariate analysis, it was deemed to be an important factor for the fertility decision (the fertility policy has been regulated differently between Han majority and other ethnic minority populations), it was thus kept in the initial model. Standardized regression weights. Table 4 shows standardized regression weights of all variables in the fertility decision model, based on the theory of planned behavior, obtained from the SEM. The strongest determining domain was perceived behavior control (β = 0.92, 95% CI: 0.89, 0.98), followed by attitudes (β = 0.83, 95% CI: 0.78, 0.87) and subjective norm (β = 0.64, 95% CI: 0.59, 0.69). Within each domain, perceived importance of having a stable income (PC6) (β = 0.77, 95% CI: 0.74, 0.79), subjective norm on the partner's sex preference of Ethnic group was significantly associated with the subjective norms towards fertility decision, with ethnic minority couples having less weight than Han couples (β = -0.13, 95% CI: -0.18, -0.06). Education was significantly associated with perceived behavior control and attitudes, but not with subjective norms. Couples with higher education had more perceived behavior control (β = 0.20, 95% CI: 0.14, 0.27) and attitudes (β = 0.83, 95% CI: 0.78, 0.87) than those with lower education. Employment status also had significant and positive relationship with all three domains of the fertility decision model.
Discussion
This study followed the Theory of Planned Behavior (TPB) with the aim of advancing the understanding of determinants of fertility behavior among couples in Inner Mongolia, China. The strongest determinants of the fertility decision among Chinese couples were found to be in the domain of attitudes, which were that only healthy parents can have a child (AT3) (β = 0.82, 95% CI: 0.78, 0.85) and that a child can maintain a good relationship between the couples (AT4) (β = 0.79, 95% CI = 0.75, 0.82) reflecting that having a child cannot only affect the relationship between a couple, but also link with the health status of the couple. Infertility in China is very low. If young couples do not have children, there will be pressure from family members and friends. One study [35] showed that both positive and negative partner relationships had a negative effect on the timing of first as well as second and third births. Another study [36] suggested that childlessness might result in a decrease in quality of life and increase in marital discord and sexual dysfunction. Childlessness could also impose physical, psychological, emotional, and financial burdens. Our findings thus emphasize the role of children in Chinese families and call for public support for older couples who are childless. Among the perceived behavior control variables, the perceived importance of having a stable income (β = 0.77, 95% CI: 0.73, 0.79) and having family members to help take care of the child (β = 0.75, 95% CI: 0.71, 0.78) had the highest effects on fertility decision. This indicates the high concerns on the costs for raising a child and awareness of the burden of childcare by couples. The cost of raising a child does not only depend on the financial costs, but also on the value of child in their parent's view, which changes with time. Children became increasingly valued for their emotional worth rather than for their economic contribution in modern society of China, with compulsory education and schooling replacing work as the child's primary social obligation [37]. Because of the inadequate infant-care social welfare system, it is common that the grandparents provide childcare for the third generations in China. Such situation makes the couples at childbearing age hesitate to have a (another) child [38]. Similar findings were found even in the context of a country with strong institutional support for childrearing such as Norway and Japan [26,27,39]. The perceived importance of having a suitable babysitter (PC2) shows the change in childrearing practices of a family where young parents need others to help take care of their baby. In traditional Chinese culture, caring for the elders is the responsibility and obligation of the whole family, especially the children. However, in the Chinese modern society, the family structure relationship has shifted from the older-centered intergenerational relationships to the next generation as the center [40]. Older parents provide childcare for the three generations fostering a better relationship between generations [10]. This can explain why the couples in our study raised the issues of having a babysitter and family member to take care of their baby as the important determinants for their fertility behavior.
The strong effect of perceived importance of self-sacrifice of time and freedom for their baby (PC1) shows that with the development of society and an increase in people's education level and self-consciousness, young couples no longer passively accept the birth of a child. Rearing a newborn baby needs a lot of time and energy by the parents; this not only increases the economic pressure on the entire family, but more importantly, devoted love of children needs enormous efforts from parents. Another reality is that the time allowed during statutory maternity leave has been found to be insufficient for taking care of a newborn child in China [41] and public kindergartens do not accept children less than three years of age [42]. Therefore, to have a child may have a great impact on a couple's career advancement, so a couple's career type and aspirations may have a major influence on their fertility intentions.
The sex preference of the newborn child was measured by two subjective norms (SN1: the preference of the respondent as representing the couple and SN2: the preference of his/her partner), which had significant effects on fertility decision. In contemporary China, gender preference is a sensitive issue, which is often described in the public and popular discourse as ''traditional', 'backward' and 'feudal' [10,43]. In fact, preference for a son, a desire common among couples in India, Nepal, and Bangladesh [43,44], is also common in China. China's one-child policy, implemented in 1979 and recently abolished in January 2016, contributed to the country's high sex ratio, the highest in the world. A higher proportion of males in their reproductive age was estimated at around 30 million in 2010 [45]. Preference for a son has changed in the modern Chinese society. Raising a son entails huge economic pressures, and people now realize that daughters, more than sons, can provide better emotional support for their parents. The advantages of having a daughter are increasingly being recognized by more couples, and more young couples have shifted their attention from reproducing a particular sex to the quality of child care [10]. In this study we did not specify the preference for a boy or girl in the questionnaire, but simply a sex preference, however our findings indicate that sex preference (SN1 and SN2) still matters in the decision by a couple to have a child and, in fact, had a greater impact than their relatives and friends already having two children (SN3), which, since the adoption of the universal two-child policy, has become more popular. Apart from the above-mentioned determinants, ethnic minority couples had less weight than Han couples (β = -0.13, 95% CI: -0.18, -0.06). The fertility culture of Han Chinese prefer to have at least a son and multi children [46]. Compared with the fertility culture of Han, the minority Mongolians do not have such preferences [47].
Couples who were both employed perceived more behavior control compared with couples who had at least one self-employed. People's increasing educational attainment and labor force participation has contributed to fertility decline in most developing countries [48]. The family role conflict theory asserts that there is a contradiction between work and family life, which are time conflicts, stress conflicts and behavior conflicts [49]. In the modern society of China, young couples put high values on educational attainment and career promotion, so they may worry that having a (another) child would be an obstacle and be an opportunity cost.
Although this study showed reasonable similarities with previous studies both internationally and in China, some limitations should be acknowledged. Our study aimed to describe the fertility situation in Inner Mongolia, therefore the generalizability of the findings to the whole China is limited. Although cross-sectional designs cannot reveal causal relationships, they can provide some points for further studies. Also, the discrepancy of fertility intentions between the first and second child was not specified.
Conclusions
This study supports the assumption that use of the theory of planned behavior can explain the fertility decision-making model, which includes perceived behavior control, subjective norms, and attitudes. Due to the culture, minority couples are less affected by subjective norms. With improved education and employment, these couples may be more affected by perceived behavior control, subjective norms and positive attitudes towards to having a child in the future. Result of this study can help the Chinese government improve the fertility intention of its residents and the fertility rate of the country. We have the following recommendations. First, the government should provide equal access to basic public service as the expansion of basic education and infant care. Second, the government should provide adequate provisions for maternity protection and parental leave as an essential policy. Third, more consideration should be given to older ages couples who are involuntarily childless. Finally, the government should implement a supporting policy to reduce the time pressure and opportunity cost for couples who are conflicted by working and child-rearing. Sufficient parental leave for couples should be considered. | 5,511.8 | 2019-08-23T00:00:00.000 | [
"Economics"
] |
Differences in Student Mathematics Learning Achievement Based on Parenting Patterns of Parents of Students Class VIII
The purpose of this study was to determine: To determine students' learning achievement in mathematics based on parenting styles of students at SMP Negeri 3 Wundulako and to find out the difference in students' learning achievement in mathematics based on parenting styles at SMP Negeri 3 Wundulako. The population in this study were all students of class VIII SMP Negeri 3 Wundulako, totaling 47 students divided into 2 classes, namely VIII A and VIII. This study uses descriptive quantitative research, where the researchers used descriptive methods with quantitative approaches. The results showed that: Differences in students' learning achievement in mathematics based on parenting styles of class VIII students at SMP Negeri 3 Wundulako.
conscious and planned effort to create a learning atmosphere and learning process so that students actively develop their potential to have religious spiritual strength, self-control, personality, intelligence, noble character, and skills needed by himself, society, nation and state (Achadi, 2018;Aprizal et al., 2016;Budiarti et al., 2017;Sakir, 2016;Setrianus & Padang, 2017).
In essence, education is created in a formal situation in school environment through learning activities that involve teacher and student interactions in the classroom.Mathematics is a compulsory subject at every level of education but is not easy to learn and becomes an obstacle for students because of the abstract nature of mathematics studies (Farman et al., 2019;Fuadiah, 2016).Mathematics is considered a difficult subject to understand because it contains many different symbols and formulas.Mathematical material is interrelated between one material and the next material so that if students do not understand the initial concept, students will be hampered in solving problems in the next material (Hidajat, 2018).In fact, it is not uncommon for learning activities in mathematics classes to only focus on aspects of remembering without paying attention to their relationship with the family environment and in everyday life (Farman et al., 2021).The final condition of the learning activities can affect the low learning outcomes of students (Nurhandita et al., 2021).
Based on the initial interview conducted by the researcher to one of the mathematics teachers at SMP Negeri 3 Wundulako, there are several factors that cause differences in student learning achievement in mathematics, including student interest in learning, student learning independence, student motivation, and parenting styles.Some parenting behaviors that can make student achievement low, are parents who pay less attention to their children in the form children's opinions.Therefore, parenting plays an important role in improving children's learning achievement.Slameto (2013), argues that the family is the first and foremost educational institution (Chotimah et al., 2018;Febianti & Joharudin, 2018;Pratiwi, 2017;Rahmawati et al., 2018;Wahy, 2012).A healthy family, meaning to educate in small sizes, but determines the quality of education in large sizes, namely the education of the nation and state.From this opinion, it can be concluded that the importance of pattern parenting in education.Where parenting patterns greatly affect student achievement in making children creative and talented.Improper application of parenting patterns can be caused by several factors, namely: 1) lack of parental education; 2) environment; 3) culture.Therefore, how important is the parenting style for children.On the other hand, the application of the right parenting style for children, besides being able to shape children into independent and responsible souls, can also increase the achievements of their children.Relevant achievements in this case is student learning achievement.
This research is supported by previous research, namely (Lestari, 2013)with the research title "The Relationship between Parenting Parenting Patterns with Student Achievement Concentration Patiseri SMK Negeri 1 Sewon Bantul", concluded that warm acceptance from parents, expressions of affection, determination clear standards of behavior and respect from parents, is a form of parental attention to their children.All of these have a very large role in the personality and character of children, so that they can affect children's learning achievement.The purpose of this study was to determine students' learning achievement in mathematics based on parenting at SMP Negeri 3 and to find out whether there were differences in students' mathematics learning achievement based on parenting styles at SMP Negeri 3 Wundulako.The benefits of this research are theoretically in general, the results of the research are expected to provide input on differences in learning achievement in mathematics based on parenting styles and provide information to students about the importance of the role of parents towards children, so that they are expected to respect and respect parents more.
Based on the problems contained in the background that occurred at SMP Negeri 3 Wundulako, researchers were interested in conducting research with the title "Differences in Students' Mathematics Learning Achievement Based on Parenting Patterns for Class VIII Students of SMP Negeri 3 Wundulako".
B. Methodology
This study uses descriptive quantitative research, where researchers use descriptive methods with a quantitative approach.JME/6.1;16-22;June 2021 This research was carried out in class VIII of SMP Negeri 3 Wundulako in the even semester of the 2017/2018 Academic Year.The population in this study was all class VIII students of SMP Negeri 3 21 Wundulako, totaling 47 students divided into 2 classes, namely VIII A and VIII B.
Data collection techniques are the most strategic step in research, because the main purpose of research is to obtain data according to (Sugiyono, 2017).In this study, the data collection method used was a questionnaire or questionnaire and documentation.
The instrument used to obtain data in this study, to obtain data from the variables studied, a questionnaire or questionnaire instrument was used, namely the Student Parenting Parenting Pattern Instrument in Mathematics.There are two types of data in this study, namely the primary data from this study in the form of questionnaire results on parenting patterns.While the secondary data in this study was data on the number of class VIII students of SMP Negeri 3 Wundulako.
C. Findings and Discussion
This research was conducted to find out "The Differences in Students' Mathematics Learning Achievement Based on Parenting Patterns for Class VIII Students of SMP Negeri 3 Wundulako.From the results of data analysis using descriptive statistics and inferential statistics, the research results will be described as follows.
Data were obtained by distributing questionnaires to students who became respondents.Questionnaires were given to respondents totaling 47 students.Variable parenting styles were measured through 31 questions.Categories were based on parenting experienced by students.Authoritative parenting and permissive parenting were compared for each respondent.The highest score between the three parenting patterns indicated the parenting experienced by students.In appendix 7 page 74 shows the authoritarian parenting pattern as many as 18 students, the parenting pattern authoritative parents as many as 20 students, and parenting permissive parents as many as 9 students.The data shows that most of the students of SMP Negeri 3 Wundulako experience authoritative parenting.
Data on students' mathematics learning achievement were taken from the results of the Final Semester Examination (UAS) given by the subject teacher.Authoritarian, authoritative parenting, and permissive parenting.Furthermore, it can be seen in table 1 Based on table 1.2, it shows that learning achievement in authoritarian parenting is on the high criteria as many as 8 students (44%), the medium criteria as many as 9 students (50%), and the low criteria as many as 1 student (6%).Authoritative parenting pattern with high criteria as many as 15 students (75%), moderate criteria as many as 5 students (25%), and does not have low criteria.In the pattern of permissive parenting 42 students who are on the high criteria are 2 students (22%), the medium criteria are 5 students (56%), and the low criteria are 2 students (22%).So it can be concluded that, learning achievement in authoritative parenting is in the high criteria and for the other two parenting styles, authoritarian parenting and permissive parenting, is in moderate criteria.
This test was conducted to test whether the three types of parenting were normally distributed or not.In this case, the researcher uses the normality test kolmogorov-Smirnov with the provisions of a test is said to be normal if the significance level is 0.05, whereas if the significance level is <0.05 then the distribution is said to be abnormal.3.21) and calculated sig (0.01) < sig is determined (0.05), so H0 is rejected and H1 is accepted.These results can be concluded that there are differences in the mathematics learning achievement of students who experience authoritarian parenting, authoritative parenting, and permissive parenting.So parenting affects student achievement.-4,15 Based on table 5, Scheffe's results show the difference in the average value of students' mathematics learning achievement in each type of parenting.For students who experience authoritarian parenting and authoritative parenting, the difference in the average value of learning mathematics achievement is 7.578 with a sig of 0.03 and statistically significant (0.03 < 0.05), which means that there is a significant difference.between the two types of parenting.The difference in the average value of mathematics learning achievement of students who experience authoritative parenting and permissive parenting is 12,800 with a sig of 0.02 and statistically significant (0.02 < 0.05), which means that there is a significant difference between the two the type of parenting style.The difference in the average value of mathematics learning achievement of students who have authoritarian parenting and permissive parenting is 5.222 with a sig of 0.332 and statistically significant (0.332 > 0.005), which means that there is no significant difference between the two types of parenting.The Based on the results of research on students who experienced authoritarian parenting as many as 18 students, the minimum score for students' mathematics learning achievement was 60 with a maximum score of 88.The average score of students' mathematics learning achievement was 73.22 and the standard deviation was 6.933 and the variance was 48.07.Mathematics learning achievement of students in authoritarian parenting is in high criteria as many as 8 students (44%), moderate criteria as many as 9 students (50%), and at low criteria as many as 1 student (6%).So it can be concluded that most of the mathematics learning achievements of students who experience authoritarian parenting are in moderate criteria.
The results showed that there were 20 students who experienced authoritative parenting with a minimum score of 64 students' mathematics learning achievement and a maximum score of 94.The average score of students' mathematics learning achievement was 80.8 and the standard deviation was 8.98 and the variance was 80. ,6.Mathematics learning achievement of students in authoritative parenting style is in high criteria as many as 15 students (75%), moderate criteria as many as 5 students (25%), and low criteria does not exist (0).So it can be concluded that most of the mathematics learning achievement of students who experience authoritative parenting is in high criteria.
The results showed that there were 9 students who experienced permissive parenting with a minimum score of 50 students' mathematics learning achievement and a maximum score of 82.The average score of students' mathematics learning achievement was 68.0 and the standard deviation was 10.198 and the variance was 104.Achievement students' learning mathematics in permissive parenting patterns were 2 students (22%), moderate criteria were 5 students (56%), and 2 students had low criteria (22%).So it can be concluded that the mathematics learning achievement of students who experience permissive parenting is in the moderate criteria.
From the results of hypothesis testing using one-way analysis of variance with SPSS 20.0 and excel 2007 programs manually, it shows that there is a significant difference between Fcount and Ftable.Where Fcount (8.033) >Ftable 51 (3.21) and sig count (0.001) < sig (0.005) which means hypothesis H0 is rejected and hypothesis H1 is accepted.These results indicate that there are differences in learning achievement in mathematics between students who experience authoritarian parenting, authoritative parenting, and permissive parenting.The further test or post hoct test used was the Scheffe test to show the difference in the average value of students' mathematics learning achievement based on three types of parenting experienced by students.The average value of students' mathematics learning achievement in authoritarian parenting and authoritative parenting has a difference in the average score of 7,578.The difference in the average value of students' mathematics learning achievement in authoritative parenting and permissive parenting of 12,800, and the difference in the average value of students' mathematics learning achievement in authoritarian and permissive parenting is 5,222.This is in accordance with (Novitasari, 2016;Sari et al., 2020)that authoritative parenting styles tend to show extra strict supervision of children's behavior, but they are also responsive, appreciate and respect thoughts, feelings, and include children. in decision making, while other parenting styles tend to give a negative relationship.Thus, it can be concluded that there are differences in students' mathematics learning achievement based on parenting styles of VIII grade students of SMP Negeri 3 Wundulako.
D. Conclusion
Based on the results of data processing and discussion in this study, the authors draw the following conclusions: Students' mathematics learning achievement based on authoritarian parenting has an average value of 73.22, a standard deviation of 6.933, and a variance of 48.07.Students' mathematics learning achievement based on authoritative parenting has an average score of 80.80, standard deviation of 8.98, and variance of 80.6.Mathematics learning achievement of students based on permissive parenting has an average score of 68.00, standard deviation of 10.198, and variance of 104.There are differences in student learning achievement in mathematics based on parenting.
05 and n = 18, D = 0.154 and D Table = 0.309, the authoritative parenting pattern obtained a significant level = 0.05 and n = 20, D = 0.119 and DTabel = 0.294, and permissive parenting pattern obtained significant level = 0.05 and n = 9, D = 0.137 and DTabel = 0.430.This shows that D D .1 below: Based on table 1 shows the highest score achieved is 94 and the lowest score is 50.The authoritative parenting style has a minimum score of 60 and a maximum score of 88.The average learning achievement score is 73.22 and the standard deviation is 6,933 and the variance is 48.07.The authoritative parenting style has a minimum score of 64 and a maximum score of 94.The average learning achievement score is 80, 8 and a standard deviation of 8.98 and a variance of 80.6.Permissive parenting has a minimum score of 50 and a maximum score of 82.The average learning achievement score is 68 and the standard deviation is 10.198 and the variance is 104.
Table 4 .
Table can be concluded that the data on students' mathematics learning achievement based on three types of parenting styles of VIII grade students of SMP Negeri 3 Wundulako is normally distributed.ANOVA between Students' Mathematics Learning Achievements Based on Authoritarian, Authoritative, and Permissive Parenting Parenting , it shows that SPSS output gives a calculated F value of 8.033 and a sig value of 0.001.While the price of the F table can be seen in the distribution of the value of the F table attached with a 5% error degree (α = 0.05) is 3.21.From the SPSS output table above, it shows that the value of Fcount (8.033) >Ftable (
Table 5 .
Advanced Test With Multiple Comparisonor Difference Scheffe | 3,460.8 | 2021-06-01T00:00:00.000 | [
"Mathematics",
"Education"
] |
Functionalization of an Alginate-Based Material by Oxidation and Reductive Amination
This research focused on the synthesis of a functional alginate-based material via chemical modification processes with two steps: oxidation and reductive amination. In previous alginate functionalization with a target molecule such as cysteine, the starting material was purified and characterized by UV-Vis, 1H-NMR and HSQC. Additionally, the application of FT-IR techniques during each step of alginate functionalization was very useful, since new bands and spiked signals around the pyranose ring (1200–1000 cm−1) and anomeric region (1000–750 cm−1) region were identified by a second derivative. Additionally, the presence of C1-H1 of β-D-mannuronic acid residue as well as C1-H1 of α-L-guluronic acid residue was observed in the FT-IR spectra, including a band at 858 cm−1 with characteristics of the N-H moiety from cysteine. The possibility of attaching cysteine molecules to an alginate backbone by oxidation and post-reductive amination processes was confirmed through 13C-NMR in solid state; a new peak at 99.2 ppm was observed, owing to a hemiacetal group formed in oxidation alginate. Further, the peak at 31.2 ppm demonstrates the presence of carbon -CH2-SH in functionalized alginate—clear evidence that cysteine was successfully attached to the alginate backbone, with 185 μmol of thiol groups per gram polymer estimated in alginate-based material by UV-Visible. Finally, it was observed that guluronic acid residue of alginate are preferentially more affected than mannuronic acid residue in the functionalization.
Introduction
The past decade has seen an increase in the importance of the utilization of algal-based materials and improvement of the physicochemical properties of polysaccharides extracted from the cell wall of brown algae ( Figure 1) [1]. This is due to the strong dependence on taxonomic characteristics and the effect of environmental growing conditions on natural sources, which effect its metabolites' properties [2][3][4][5]. Depending on the type of seaweed, time of harvest, temperature, and other factors, metabolites with novel features can be obtained. These have been used in a wide range of applications [1]. For instance, phlorotannins present in brown seaweeds have been applied as a bioactive agent for their antioxidant, bactericidal and inhibitive properties [6]. Also, the anticoagulant and elicitor properties of fucoidan obtained from brown seaweed have demonstrated promising applications in biomedical fields [7]. Other polysaccharides like alginate have been gaining attention for their application in biomedical and environmental fields [8,9].
Alginate is a linear anionic copolymer composed of [1][2][3][4] β-D-mannuronic' acid (M) and α-l-guluronic acid (G). This polysaccharide is mainly extracted from brown seaweeds and is characterized by its random block distribution, which offers unique properties depending on the length of the polymeric chain and the presence of different stretches of alternating or homogeneous M and G sequences, referred to as MG-blocks, MM-blocks or GG-blocks. The M/G ratio is an important parameter that gives crucial information associated with the block composition of alginates from a particular brown seaweed. It must be mentioned that the block distribution depends exclusively on the species, habitat, season of harvest and salinity of water [10,11]. Thus, it is essential to refer to the algae's background for chemical modification purposes. Alginates are employed industrially for their viscosifying properties, water binding capacities and gelling properties [12], which correlate with the amount of GG blocks in the polymer chain as well as the presence of divalent cations in the aqueous medium [13]. These properties have allowed us to widen alginate applications, especially in biomedical areas, including wound healing [14][15][16], cell microencapsulation [17][18][19] and drug delivery systems [20][21][22]. However, the main limitation regarding these applications lies especially on low biocompatibility and poor mechanical properties, preventing the utilization of unmodified alginates for sophisticated biomedical areas such as cell immobilization [23,24]. Taking into account these drawbacks, functionalization of alginate-based materials emerges as an excellent option to enhance the quality and long-term stability of alginates without restricting the action of some specific properties characteristic of them [25]. Furthermore, the physical properties of alginate-based materials can be improved while avoiding the incorporation of other materials, such as carbon nanomaterials [26,27] or nanocellulose [28]. Different studies have reported that chelating [29], permeation [30] and, especially, mucoadhesive [31,32] properties can be improved when sulphur-containing molecules are attached to the alginate backbone. In particular, the synthesis of thiol-containing biopolymers offers numerous attractive features for a variety of biomedical applications in which the biomaterial can perform as a drug delivery system and bioadhesive [33,34].
In this regard, Bernkop-Schnürch et al. synthetized thiomers from the implantation of thiol groups in different polysaccharides, such as chitosan, pectin and alginate [23,35,36]. The main pathway to obtain thiomers in alginate is to render the carboxylic acid group into an amine-reactive reagent via carbodiimide coupling [31,37], but the main drawback is that this procedure tends to generate side products during the functionalization process [37,38]. On the other hand, other chemical routes that involve oxidation and reductive amination processes [39,40] have shown promising properties, including increasing the degradability and chain flexibility of the polymer [41,42]. Novel materials obtained have demonstrated their versatility in varied fields, but most of these are not focused on structural changes upon oxidation and subsequent grafting. Based on this, the present study aims to provide an efficient method to functionalize the alginate in two steps via oxidation and post reductive amination reaction for evaluating the preference of uronic residue involved in this process.
Purification of Commercial Alginate
The alginate (Alg) was purified in order to remove polyphenol compounds that are usually present in commercial-grade products. For this purpose, a solution of 1% was treated with n-butanol in a ratio of 3 to 2. This mixture was sonicated for 1 h, and then the solution was reposed until the formation of 2 phases. Finally, the aqueous phase was separated by decantation and lyophilized for 24 h to obtain the purified alginate (AlgP).
Functionalization of Purified Alginate with Cysteine
The functionalization of AlgP with cysteine was prepared following the procedure reported by our group previously, but with several modifications [32]. First, 4 g of AlgP was dissolved thoroughly in 170 mL of distilled water prior the addition of 0.2 mol L −1 NaIO 4 (30 mL). This solution was stirred for 6 h in complete darkness to prevent undesired reactions [43]. Then, 1 mL of ethylene glycol was added and stirred for 30 min under the same conditions. The solution containing the oxidized alginate was dialyzed against deionized water by using a cellulose membrane (cut-off molecular weight of 12 kDa) until the conductivity of the aqueous medium was less than 10 µS cm −1 . Next, the oxidized alginate (AlgPO) was obtained by freeze-drying for 36 h. Afterward, 40 mL of 0.1 mol L −1 phosphate buffer solution (PBS, pH 7.4) was used as the medium to dissolve 0.4 g of AlgPO before the addition of 0.2 mol L −1 cysteine solution (10 mL). Finally, the mixture was stirred for 24 h at room temperature prior to the addition of 0.05 mol L −1 NaBH 4 (10 mL). The solution reacted for 12 h and a functionalized alginate-based material (AlgPOS) was obtained through precipitation in ethanol and a freeze-drying process ( Figure 2).
Evaluation of Polyphenols by UV-Vis
For this analysis, both phases (organic and aqueous) were collected and analyzed by UV-Vis in the range of 500-200 nm. UV-Vis analyses were performed with a UV-1800 Shimadzu scanning spectrophotometer.
Evaluation of Thiols by UV-Vis
The presence of thiol groups was quantified spectrophotometrically using Ellman's reagent (DTNB, 5,5-dithio-bis (2-nitrobenzoic acid)) [44]. Then, 40.2 mg of AlgPOS was dissolved in a stock solution of 0.1 mmol L −1 DTNB prepared in PBS buffer. The quantity of thiol groups was estimated from a standard curve of L-cysteine.
FT-IR Measurements
Fourier-Transform Infrared (FT-IR) measurements were performed during each step of alginate functionalization (AlgP, AlgPO and AlgPOS) using a Shimadzu IR Prestige-21 spectrometer with Attenuated Total Reflection (ATR). The spectra were acquired (64 scans/sample) in the range of 4000-600 cm −1 at room temperature with a resolution of 4 cm −1 . Deriva-tions, including a Savitzky-Golay algorithm with 23 smoothing points, were analyzed with OriginLab 9.0 software.
1 H NMR and HSQC Analyses of AlgP
In order to reach a degree of polymerization around 10-30, AlgP was hydrolyzed according to the procedure reported in the literature [45]. Then, 10 mg of hydrolyzed AlgP wwas dissolved in 500 µL of D 2 O. Next, TMSP-d 4 -3-(Trimethylsilyl)-propionic-2,2,3,3d 4 -acid sodium salt was added as an internal standard for the chemical shift. The 1 H NMR spectrum was recorded on a Bruker Avance III-400 spectrometer at 80 • C and processed with TopSpin 3.2 software (Bruker BioSpin, Billerica, MA, USA).
Analysis of Alginate Derivatives by Solid State 13 C NMR
The analyses by solid state 13 C NMR were performed in a Bruker Avance III-400 operating at 9.4 T magnetic field, ν ( 1 H) = 400 MHz. The 13 C resonance frequency was 100.57 MHz with a pulse sequence (cross-polarization on magic-angle spinning (CP-MAS)) and CP-MAS with total sideband suppression (CP-MAS-TOSS). A high-power decoupling field was set at 83.3 kHz [P 90 ( 1 H) = 3 µs], and adamantane (38.5 p pm for CH 2 resonance) was used as an external reference for adjustment of the 13 C chemical shift.
Analysis of the Starting Material
Purification of commercial-grade alginates ought to be carried out prior to alginate functionalization, since production of this polysaccharide is based exclusively on extraction from brown seaweeds [46]. In this regard, short aliphatic chains like tert-butanol have been employed in the three-phase partitioning method for the bioseparation of proteins [47]. In contrast, n-butanol can be used for separating low-molecular substances present in alginates, including aromatic acids, hydroxyl acids, carbohydrates and polyols [48]. Hence, in this study, n-butanol was used, aiming to remove a significant quantity of substances commonly found in commercial-grade alginates. In Figure 3A, at around 280 nm, a signal can be seen probably by π-π* transitions attributed to the ring benzene present in polyphenols [6,49]. This asseveration is evidenced in the spectrum by its first derivative. Moreover, in Figure 3B, the organic phase shows its capability of extracting low-molecularweight molecules and polyphenols owing to the presence of bands around 207 nm and 280 nm, attributed to the ring benzene present in phlorotannins [50,51]. Consequently, these bands could be assigned to polyphenols or similar molecules characterized by having a ring benzene, which may still remain in alginates. Figure 4 shows the 1 H NMR spectrum of AlgP for evaluating the M/G ratio according to the method described by S. Pawar and K. Edgar [52]. This parameter was calculated from the signals in the anomeric region (4.4-5.5 ppm) through the relationship of G and M block distribution [53,54]. In this case, the M/G value of commercial sodium alginate after the purification process with n-butanol is 1.02 (Table S2). This parameter directly influences the capacity to form gels through crosslinking reactions with divalent ions, since it is a wellknown fact that alginates with a low M/G ratio produce stronger structures due to the high affinity of G blocks towards calcium ions. On the contrary, alginates with a high M/G ratio possess promising elastic properties and are able to form acidic gels [55,56]. Considering that AlgP is composed of almost 50% of each uronic acid, oxidation and reductive amination processes could be studied to evaluate the susceptibility of hemiacetal formation in G or M blocks, as well as their effect on cysteine after the reductive amination process. The application of solid state NMR spectroscopy for analyzing the linking of nitrogencontaining molecules and the presence of thiol groups have been reported elsewhere [57]. For this reason, 13 C CP-MAS NMR spectra of alginate derivatives were studied in detail and discussed in terms of the results obtained by second derivative FT-IR analysis. The 1 H and 13 C NMR chemical shifts of AlgP are depicted in Table 1. For a better interpretation of AlgP's structure, Figure 5 displays the 13 C/ 1 H HSQC spectrum of AlgP in order to evaluate the correlation between 13 C-1 H ( Table 1). As can be observed, the 103.92/4.67 ppm correlation was assigned to C 1 /H 1 of β-D-mannuronic acid residues, whereas the signal at 102.70/5.06 ppm was attributed to C 1 /H 1 of α-L-guluronic acid residue. The 80.78/3.91 ppm and 82.63/4.13 ppm correlations were assigned to C 4 /H 4 of manuronic (AlgP a ) and guluronic (AlgP b ) acid residues in AlgP. Figure 6 depicts three different ATR-IR spectra obtained during each step of alginate functionalization. The typical signals around 1598 cm −1 and 1411 cm −1 were assigned to asymmetric and symmetric movements of carboxylate groups present in the structure despite the functionalization process. According to this, carboxylate groups are not affected considerably by oxidation and reductive amination processes. In contrast, the band associated with C-O stretching vibration of pyranose rings was clearly affected, as is demonstrated with the chemical shifts from 1024 cm −1 to 1014 cm −1 (after periodate oxidation), and then to 1020 cm −1 due to the structural modification with cysteine. The bands at 1122 cm −1 and 1083 cm −1 assigned to C-O and C-C stretching vibrations of pyranose rings [58] were also affected by structural changes produced by functionalization with cysteine. With regard to the anomeric region around 1000-750 cm −1 , the spectra exhibit a band at 949 cm −1 assigned to C-O stretching vibration of uronic acid residues, while the band at 888 cm −1 is assigned to C 1 -H deformation vibration of β -D mannuronic acid residues [59].
In order to improve the resolution of overlapping bands in the normal FT-IR spectrum [60], the second-derivative FT-IR technique was applied for analyzing alginate derivatives-as this method has been widely used for structural analysis of macromolecules [32,59,61]. Thus, in Figure 6, the C-O and C-C stretching vibrations were found at 1122 cm −1 for AlgP. This band was considerably affected by oxidation, as well as the reductive amination process, and, therefore, this band vanished in AlgPO and AlgPOS spectra. Another similar situation is observed in Figure 7 since the band at 1170 cm −1 (second derivative) is assigned to the C-O stretching vibration of the glycosidic linkage of AlgP; this band was affected in each step of the functionalization, resulting in a chemical shift towards 1161 cm −1 for AlgPO (second derivative), whereas this band was at 1150 cm −1 for AlgPOS (second derivative). The C-O and C-C stretching vibrations of pyranose rings (observed at 1083 cm −1 ) were shifted to 1074 cm −1 (second derivative) and 1070 cm −1 (second derivative) because of oxidation and the reductive amination process, respectively. These bands are possibly shifting due to the Malaprade reaction, as periodates do indeed cleave the pyranose ring between C 2 and C 3 . This procedure is the best route to oxidize diols into aldehydes under dark conditions (Figure 3). Additionally, the reductive amination process using cysteine leads to the formation of imines, which are reduced with sodium borohydride (Figure 3). It must be noted that the second-derivative technique is very useful for studying the anomeric region, where bands at 949 cm −1 (AlgP), 945 cm −1 (AlgPO) and 940 cm −1 (AlgPOS, second derivative), associated with C-O stretching vibration, shifted to lower vibration frequencies as a consequence of the new environment caused by the ring opening and post introduction of cysteine molecules into the polymer chain. According to Matsuhiro et al. [62], the alginate always exhibits two characteristic bands associated with the C 1 -H anomeric of β-D-mannuronic acid at around 888 cm −1 , and another band at 902 cm −1 characteristic of C 1 -H of α-L-guluronic acid residue. In AlgPO and AlgPOS, it could be observed how these signals were affected due to the degradation process of blockchains caused by the functionalization process (Figure 7) [63]. Furthermore, the band at around 814 cm −1 is assigned to COH, CCH and OCH stretching vibrations of α-L-guluronic acid residues, with a contribution of bending deformation vibrations of C-O-C glycosidic linkages in homopolymeric blocks [58,59]. This band is clearly observed at 818 cm −1 in the normal ATR-IR spectrum of AlgPO. However, without the utilization of the second-derivative technique, it would be challenging to detect the signal at 819 cm −1 , which was overlapped in the classical ATR-IR spectrum of AlgPOS. Finally, the band observed at 858 cm −1 may be assigned to N-H stretching vibration as a consequence of cysteine molecules [64]. The presence of thiol groups in the alginate structure was corroborated through the Ellman method ( Figure S2), reporting 185 µmol thiol groups per gram of AlgPOS, as shown in Table S3.
Analysis of Alginate and Its Derivatives by 13 C NMR in Solid State
The effect ascribed to the chemical reactions during alginate functionalization was studied by the FT-IR technique and confirmed by using 13 C NMR in solid state. Figure 8 displays the spectrum of AlgP, and its characteristic peaks are assigned in Table 2. Owing to the oxidation produced by the utilization of sodium metaperiodate as an oxidizing agent, a new peak emerges at 92.8 ppm attributed to hemiacetal groups formed by intramolecular interactions between aldehyde groups of oxidized residues and hydroxyl groups of unoxidized hydroxyl moieties that were situated in neighboring positions of the same polymer chain [65]. To achieve an effective functionalization via reductive amination, crucial factors must be controlled to form the imine bond successfully, e.g., pH, reaction time and the amount of nitrogen-containing molecules [32]. As we demonstrated in a previous work, thiosemicarbazide (NH 2 -NH-(C=S)-NH 2 ) promotes crosslinking reactions between both terminal primary amine groups present in the molecule and oxidized moieties of alginate [66]. Thus, a strict control of pH ≈ 7 and reaction time above 24 h can guarantee the formation of imine bonds between cysteine molecules and hemiacetal groups, which revert back to aldehyde form in aqueous medium during the reductive amination process [32]. In addition, a high concentration of buffer makes it feasible to avoid the formation of thiazolidine, a side product that is produced by the interaction between thiols and aldehydes [67]. In these conditions, this side reaction would not be favorable on thermodynamic grounds [32,67]. On the other hand, it has been previously reported that the -CH 2 -SH moiety appears at 28.8 ppm [57]. Thus, we have considered that the peak at 31.2 ppm is attributed to -CH 2 -SH of the cysteine attached to the alginate backbone. This new peak is present in the spectra due to the efficient reductive amination process, performed in order to achieve the functionalization of alginate. The resonances of both the M-6 and G-6 contribution at 176.2 ppm assigned to carboxylate groups have not been altered, which is clear evidence that sodium metaperiodate is a suitable oxidizing agent to convert diols into aldehydes without affecting carboxylate groups. The pyranose region showed considerable structure changes associated with guluronic residue. The signals corresponding to G-4 grew smaller along each step of the functionalization process. Several signals obtained by NMR in solid state consisted of contributions from more carbon sites within each mannuronic and guluronic acid residue, due to chemical shift distribution [53]. Thus, the signal of 68.8 ppm in AlgP contains a contribution of G-3 and G-5, which is noticeably affected by periodate oxidation in AlgPO and AlgPOS (Table 2). Additionally, the intensity of the peaks at 102.6 ppm (G-1) and 99.6 ppm (M-1), associated with the C 1 -H anomeric of α-L-guluronic acid residue and C 1 -H of β-D-mannuronic acid, respectively [53], have both shown a decrease due to rupture of glycosidic linkage, due to severely affected carbon atoms associated with α-L-guluronic acid residues as displayed in Figure 8. Conversely, the signals corresponding to M have not shown a significant change, as the peaks at 74.2 ppm could be a contribution of both signals at 72.1 ppm and 76.1 ppm, assigned to M-2 and M-4, respectively. These results suggest that G units of AlgPO are preferentially oxidized since G units were greatly affected rather than M-in spite of being present in the alginate structure in almost equal proportions (M/G ratio 1.02). Nevertheless, other factors like block distribution might be more important than the M/G ratio, especially when the M/G ratio gives an almost equal proportion of M and G blocks, as is reported in this work. Since G residue is more reactive than M to cleavage, the stability of these oxidized groups will depend on their neighbors. These might stabilize them, forming hemiacetal groups, or might be affected by a fast hydrolyzation, which would lead to a decrease in alginate-based material formation [68,69]. To conclude, the effect of the reducing agent during the reductive amination process depends on several factors [70] and, thus, the utilization of a specific borohydride derivative ought to be explored, taking into account that sodium cyanoborohydride generates the presence of HCN, NaCN and cyanoborohydride derivatives that are highly toxic for the environment and human beings [71,72]. Hence, according to the literature, sodium borohydride is a low-cost and efficient reagent that can be used during the reductive amination process under different conditions with good yields [73,74].
Conclusions
In summation, the functionalization of purified sodium alginate was achieved successfully via oxidation and the reductive amination process. In order to evaluate the chemical modification using a purified raw material (AlgP), polyphenols and phlorotannins were removed from commercial-grade alginate using n-butanol. Characterization of AlgP shows a M/G of 1.02, whereas HSQC studied correlations of 1 H-13 C. AlgPOS demonstrated cysteine covalently bound to the alginate with a coupling around 185 µmol of attached thiol groups per gram of polymer as estimated by UV-Vis. FT-IR and solid state 13 C NMR analyses confirmed the functionalization of alginate, as peaks were significantly displaced from their natural position. Around the pyranose and fingerprint regions, we observed two peaks at 99.2 ppm and 31.2 ppm, respectively. These peaks were attributed to hemiacetal formation due to the interaction of activated/deactivated aldehydes obtained by oxidation, whereas the attaching of cysteine to the alginate backbone caused an upfield resonance characteristic of CH 2 -SH moieties from cysteine. The chemical shifts and vanished signals to G2 and G4 and G1, G3 and G5, respectively, in AlgPOS by 13 C NMR demonstrated the susceptibility of guluronic groups in the process of functionalization.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 5,220.6 | 2021-01-01T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Biology"
] |
Predictors of mathematical attainment trajectories across the primary-to-secondary education transition: parental factors and the home environment
A ‘maths crisis’ has been identified in the UK, with many adults and adolescents underachieving in maths and numeracy. This poor performance is likely to develop from deficits in maths already present in childhood. Potential predictors of maths attainment trajectories throughout childhood and adolescence relate to the home environment and aspects of parenting including parent–child relationships, parental mental health, school involvement, home teaching, parental education and gendered play at home. This study examined the aforementioned factors as predictors of children's maths attainment trajectories (age 7–16) across the challenging transition to secondary education. A secondary longitudinal analysis of the Avon Longitudinal Study of Parents and Children found support for parental education qualifications, a harmonious parent–child relationship and school involvement at age 11 as substantial predictors of maths attainment trajectories across the transition to secondary education. These findings highlight the importance of parental involvement for maths attainment throughout primary and secondary education.
Introduction
• Page 3 line 36 and line 38 both start with 'parents can also' and 'parents also', change wording so not repetitive.
Method Sample • The following information is confusing-is this two samples (i.e. a core sample and additional sample)? Could this be made clearer for the reader?
• "The core sample consisted of 14,062 live births, of which 13,988 children were alive at 1 year. Additional participants were recruited resulting in a total of 15,589 foetuses, of which 14,901 were alive at 1 year." (iv) Parental education • Justify why CSE (or below) was used as a reference group.
Results
• Table 1 and 2 Cannot see top of table clearly to see what the numbers represent. May be an issue with journal format rather than your tables? • Table 2. Could you provide the numbers of the variables beside the variables names so that the reader does not have to count along the variables? • Table 2. Providing the coefficient and p value significance marked as e.g. 0.28** for .0.01 significance might be easier for the reader as there is a lot of searching to be done to find the p value for each coefficient. This would make it a lot quicker for the reader. • Table 3. Should there be a reference within table 3 that this is predictors of intercepts of maths attainment at age 11? Discussion • Page 16, line 47 needs a lower-case letter change with the word "However, When". • Could the authors provide some literature on gendered play to explain their findings? • Could the authors add more information regarding what is normally seen in literature regarding parent mental health throughout childhood and adolescence and child outcomes as a summary statement of this paragraph (page 17, line 3).
Great paper, well done.
Review form: Reviewer 2
Is the manuscript scientifically sound in its present form? Yes
Is the language acceptable? Yes
The reviewers and handling editors have recommended publication, but also suggest some minor revisions to your manuscript. Therefore, I invite you to respond to the comments and revise your manuscript.
• Ethics statement If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork.
• Data accessibility It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible of where to access other relevant research materials such as statistical tools, protocols, software etc can be accessed. If the data has been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that has been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list.
If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to dryad, please use the following link: http://datadryad.org/submit? journalID=RSOS&manu=RSOS-200422 • Competing interests Please declare any financial or non-financial competing interests, or state that you have no competing interests.
• Authors' contributions All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of Authors should meet all of the following criteria; 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published.
All contributors who do not meet all of these criteria should be included in the acknowledgements.
We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.
• Acknowledgements Please acknowledge anyone who contributed to the study but did not meet the authorship criteria.
• Funding statement Please list the source of funding for each author.
Please ensure you have prepared your revision in accordance with the guidance at https://royalsociety.org/journals/authors/author-guidelines/ --please note that we cannot publish your manuscript without the end statements. We have included a screenshot example of the end statements for reference. If you feel that a given heading is not relevant to your paper, please nevertheless include the heading and explicitly state that it is not relevant to your work.
Because the schedule for publication is very tight, it is a condition of publication that you submit the revised version of your manuscript before 29-May-2020. Please note that the revision deadline will expire at 00.00am on this date. If you do not think you will be able to meet this date please let me know immediately.
To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions". Under "Actions," click on "Create a Revision." You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript and upload a new version through your Author Centre.
When submitting your revised manuscript, you will be able to respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 -File Upload". You can use this to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the referees. We strongly recommend uploading two versions of your revised manuscript: 1) Identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) A 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them.
When uploading your revised files please make sure that you have: 1) A text file of the manuscript (tex, txt, rtf, docx or doc), references, tables (including captions) and figure captions. Do not upload a PDF as your "Main Document"; 2) A separate electronic file of each figure (EPS or print-quality PDF preferred (either format should be produced directly from original creation package), or original software format); 3) Included a 100 word media summary of your paper when requested at submission. Please ensure you have entered correct contact details (email, institution and telephone) in your user account; 4) Included the raw data to support the claims made in your paper. You can either include your data as electronic supplementary material or upload to a repository and include the relevant doi within your manuscript. Make sure it is clear in your data accessibility statement how the data can be accessed; 5) All supplementary materials accompanying an accepted article will be treated as in their final form. Note that the Royal Society will neither edit nor typeset supplementary material and it will be hosted as provided. Please ensure that the supplementary material includes the paper details where possible (authors, article title, journal name).
Supplementary files will be published alongside the paper on the journal website and posted on the online figshare repository (https://rs.figshare.com/). The heading and legend provided for each supplementary file during the submission process will be used to create the figshare page, so please ensure these are accurate and informative so that your files can be found in searches. Files on figshare will be made available approximately one week before the accompanying article so that the supplementary material can be attributed a unique DOI.
Please note that Royal Society Open Science charge article processing charges for all new submissions that are accepted for publication. Charges will also apply to papers transferred to Royal Society Open Science from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (https://royalsocietypublishing.org/rsos/chemistry).
If your manuscript is newly submitted and subsequently accepted for publication, you will be asked to pay the article processing charge, unless you request a waiver and this is approved by Royal Society Publishing. You can find out more about the charges at https://royalsocietypublishing.org/rsos/charges. Should you have any queries, please contact<EMAIL_ADDRESS>Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch. Comments to the Author: I enjoyed seeing the next step in your work on maths attainment in the ALSPAC sample, and the interesting findings in relation to the home environment. As both reviewers commented, the paper is clearly written, and has many strengths. There are some non-trivial changes which need to be made to the manuscript, but they are largely additional explanations, rather than re-analysis or major reframing. Please address each of the reviewers' constructive comments and suggestions -I would particularly like to highlight the following issues: -A clearer motivation in the introduction for including gendered play as a potential predictor of maths attaintment (R.1).
-Previous relevant findings on this topic from ALSPAC (including your own previous work!) (R.2). -Exclusion criteria and potential for alpha slippage (R. 2).
In additon, I would also like to see a (brief) discussion on the pros and cons of using a large dataset such as ALSPAC -it clearly has major advantages in terms of the size of the dataset (N and breadth) but also inherent limitations in terms of depth of measurement, attrition, and cohort effects. I think this would provide helpful context. Finally, the discussion would benefit from an explicit consideration of how the effect sizes you report for your predictors compare to those previously reported in the literature for those predictors, and how your study extends the existing knowledge base.
Reviewer comments to Author: Reviewer: 1 Comments to the Author(s) The manuscript reviews current literature well and has been well thought out. The introduction is informative and well structured. Method is also very informative with great level of detail and nice breakdown of statistical analysis. Results are structured nicely and easy to follow. In the discussion results are nicely explained and backed up with relevant literature. Minor changes to be made as follows:
Introduction
• Page 3 line 36 and line 38 both start with 'parents can also' and 'parents also', change wording so not repetitive.
Method Sample • The following information is confusing-is this two samples (i.e. a core sample and additional sample)? Could this be made clearer for the reader?
• "The core sample consisted of 14,062 live births, of which 13,988 children were alive at 1 year. Additional participants were recruited resulting in a total of 15,589 foetuses, of which 14,901 were alive at 1 year." (iv) Parental education • Justify why CSE (or below) was used as a reference group.
Results
• Table 1 and 2 Cannot see top of table clearly to see what the numbers represent. May be an issue with journal format rather than your tables? • Table 2. Could you provide the numbers of the variables beside the variables names so that the reader does not have to count along the variables? • Table 2. Providing the coefficient and p value significance marked as e.g. 0.28** for .0.01 significance might be easier for the reader as there is a lot of searching to be done to find the p value for each coefficient. This would make it a lot quicker for the reader. • Table 3. Should there be a reference within table 3 that this is predictors of intercepts of maths attainment at age 11?
Discussion
• Page 16, line 47 needs a lower-case letter change with the word "However, When". • Could the authors provide some literature on gendered play to explain their findings?
• Could the authors add more information regarding what is normally seen in literature regarding parent mental health throughout childhood and adolescence and child outcomes as a summary statement of this paragraph (page 17, line 3).
Great paper, well done.
Reviewer: 2
Comments to the Author(s) All of my comments to the author(s) are in the attached file.
13-Jun-2020
Dear Ms Evans, It is a pleasure to accept your manuscript entitled "Predictors of Mathematical Attainment Trajectories across the Primary-to-Secondary Education Transition: Parental Factors and the Home Environment" in its current form for publication in Royal Society Open Science.
Please ensure that you send to the editorial office individual files for table included in your manuscript. You can send these in a zip folder if more convenient. Failure to provide these files may delay the processing of your proof.
You can expect to receive a proof of your article in the near future. Please contact the editorial office<EMAIL_ADDRESS>and the production office<EMAIL_ADDRESS>to let us know if you are likely to be away from e-mail contact --if you are going to be away, please nominate a co-author (if available) to manage the proofing process, and ensure they are copied into your email to the journal.
Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication.
Please see the Royal Society Publishing guidance on how you may share your accepted author manuscript at https://royalsociety.org/journals/ethics-policies/media-embargo/. Thank you for the opportunity to review the article "Predictors of Mathematical Attainment Trajectories across the Primary-to-Secondary Education Transition: Parental Factors and the Home Environment," (RSOS-200422) which was submitted for consideration of publication in the Royal Society Open Science. The study is organized, well-written and describes the relation among key variables related to mathematics development throughout life (using the ALSPAC database), as the participants in the sample are now approximately 28 years old. I especially enjoyed the discussion of the early developmental trajectories in relation to later academic achievement, and the reference to Eccles work. Literature review appeared to be complete and succinct in relation to the data studied. This review details some concerns I have about the study and these concerns should be addressed before a decision about publication can be made. They are detailed below in order of importance.
(1) The authors admit that these data are based on a cohort that is now almost 30 years old. This discussion needs to occur in the Introduction, not as an afterthought in the Discussion section as a limitation. For example, there are differences in exposure to technology (no cell phones, tablets, internet), curriculum and inclusion policies (to name a few) that could affect children's learning. It would be interesting to speculate how this cohort was influenced by societal influences at the time they were learning math. Further, there could be some detailing of the kinds of research that has already been conducted on this cohort (i.e., what is already known about these participants), and how this current study relates to previous work on the cohort.
(2) According to my search engine, over 1800 articles have been published using the ALSPAC database. The data set includes records on over 14,000 participants and their families and schools. What measures have been taken to protect against alpha slippage in this current study? Analyses were conducted on data that ranged from 3429 to 7263 participants, depending on the variable selected. With the size of this sample, it would be relatively easy to obtain significance by including various variables in question, and given very basic a priori hypotheses, many relations could be deemed significant.
(3) Related to sample size, I am concerned that "2652 students were excluded because they had special needs" as determined by their teachers and schools. Almost 20% of the sample was excluded (which goes well beyond the 10% of children estimated to have learning concerns in current populations). Too many children were excluded for reasons that are unclear. Additionally, more participants were excluded because of missing data, resulting in over half of the sample not being including in most analyses. Although it is powerful to study large cohorts of the population, I question the information that is not being detected as a result of missing or ignored data.
(4) The variables used in the study are quite simplified compared to math measures used in current day research. For example, instead of using math measures as an outcome variable, school grades were used instead (which incur obvious bias with teacher ratings and school context). Home learning variables were based on gender-type play and general parent involvement. Homework help was included in a long list of other parent involvement variables, where today, this variable is studied on its own. Color knowledge was grouped with literacy proficiency and shape knowledge was considered with numeracy proficiency, for no apparent reason, and based on parent ratings. I would complete factor analyses on some of these large composite variables to try and add meaning (e.g., early home teaching, gender stereotyped play, and home interactions).
(5) "Parent education," "parent school involvement," "harmonized learning" and "female gendered play" relating to mathematics attainment at age 11. However, home interactions (at age 3 and 7), parental mental health, early home teaching, working memory and intelligence were not found to be related. All of these variables have been found to relate to children's math development in various studies. Because of the enormity of the study, and the inability to delve into some of these measures, the overall picture is difficult to interpret. I wonder why some variables were significant and others were not.
Minor concerns: (1) How many observations were completed for the parent-child relationship and the working memory and intelligence variables? How were scores computed? It is unclear from the Tables.
(2) It is difficult to read the Table headers. What is %MD? I assume Missing Data-if so, why are these numbers so high? What is KSI? Please include scales for all of your variables (e.g., working memory and intelligence).
(3) The correlation table is overwhelming. Are all of these variables necessary? What story are you trying to tell? Which correlations meet significance, and at what alpha level? What are your effect sizes? (4) Why was masculinized gender play associated with decreased math attainment, and boys outperformed girls with their math grades at age 11? Is there a teacher bias going on here? (5) Please explain your rate of change analyses and provide a rationale for the purpose of these analyses in line with the hypotheses of the study. Thank you very much for taking the time to consider our manuscript for publication at Royal Society Open Science. In the following we address your own and the reviewers' concerns and suggestions, and describe the revisions made to the manuscript in light of these.
Reviewer #1
RC: Page 3 line 36 and line 38 both start with 'parents can also' and 'parents also', change wording so not repetitive.
AR: Line 36 has been changed to: There is also the opportunity for positive experiences for growth and development in childhood that are provided by parents.
RC:
The following information is confusing -is this two samples (i.e. a core sample and additional sample)? Could this be made clearer for the reader? "The core sample consisted of 14,062 live births, of which 13,988 children were alive at 1 year. Additional participants were recruited resulting in a total of 15,589 foetuses, of which 14,901 were alive at 1 year." AR: The core sample refers to the starting sample recruited by ALSPAC i.e., before any participant withdrawal. This has been changed to: The core ALSPAC sample recruited initially consisted of 14,062 live births, of which 13,988 children were alive at 1 year. Additional participants were recruited resulting in a total of 15,589 foetuses, of which 14,901 were alive at 1 year.
RC: Justify why CSE (or below) was used as a reference group.
AR: We used CSE (or below) as the reference group as this was the lowest level of education available -this point has been made clearer in the method section quoted below.
1 Appendix B Having the highest qualification of a CSE (or below) was used as the reference group as this was the lowest level of parental education qualifications available.
RC: Table 1 and 2 Cannot see top of table clearly to see what the numbers represent. May be an issue with journal format rather than your tables?
AR: Apologies they are unclear -it is an issue with the Latex class used.
RC: Table 2. Could you provide the numbers of the variables beside the variables names so that the reader does not have to count along the variables?
AR: Numbers have been added to the variables.
RC: Table 2. Providing the coefficient and p value significance marked as e.g. 0.28** for .0.01 significance might be easier for the reader as there is a lot of searching to be done to find the p value for each coefficient. This would make it a lot quicker for the reader.
AR: A high number of the correlations are significant due to the very large sample size and we felt it was somewhat misleading to provide marked significance for different alpha levels due to the high proportion of significant p-values where many of the correlation coefficients (i.e., the effect sizes) are very small.
RC: Table 3. Should there be a reference within table 3 that this is predictors of intercepts of maths attainment at age 11?
AR: Added to the caption that this refers to age 11.
RC: Page 16, line 47 needs a lower-case letter change with the word "However, When".
AR: Thank you for highlighting -this has been corrected.
RC: Could the authors provide some literature on gendered play to explain their findings?
AR: Following this recommendation and that of Dr Hayiou-Thomas, we have included the below in the introduction to highlight why gendered play is included in the analysis, but felt the effect was too trivial to discuss further than what is already in the discussion.
Parents additionally guide their children's interests and the types of play they participate in by purchasing toys and encouraging (or discouraging) different types of activities and behaviours. One specific area of interest particularly for maths attainment is gender-stereotyped play in childhood. Typically, 'boy toys' include construction toys (such as building blocks and tools), vehicles and sports, and 'girl toys' include dolls, household toys (i.e., tea sets and toy kitchens), and 'dress-up ' [38]. This divide is particularly interesting given reported differences in maths attainment between males and females [4], which could potentially stem from the differences in toys and play through the increased spatial content in 'masculine' toys for example. Because parents heavily shape their child's preferences and activities, examining gender-stereotyped play in childhood could help understand differences in attainment for males and females stemming from parents and aspects of the home environment.
RC: Could the authors add more information regarding what is normally seen in literature regarding parent mental health throughout childhood and adolescence and child outcomes as a summary statement of this paragraph (page 17, line 3).
AR: This section has been edited to the below:
Parental mental health within the first few months following the child's birth did not significantly predict maths attainment at age 11, but did marginally (p = .05) predict the slope in attainment in the expected direction where poorer mental health was linked to a slower rate of change. The effect of parental mental health on attainment was extremely small (b = 0.019) 16 which was smaller than expected [15]. Previous research shows an increased risk of children not attaining a 'pass' grade in GCSE maths at age 16 whose mothers experience severe, persistent, or recurrent postnatal depression [63, OR = 2.65; 64, beta = 1.52]. These effects are much larger than what we found, however, in this study we focused on parental mental health combined as an indicator of the home environment rather than solely investigating maternal effects. This difference may explain the inconsistent findings as poor paternal mental health has not been found to predict maths attainment in previous studies [64]. Additionally, as this variable was measured in the first few months of life, it means that any changes over time (i.e., throughout childhood and adolescence), were not accounted for. Additional research would be beneficial in assessing any association between the trajectory of parental mental health and child maths attainment across the transition.
Reviewer #2
RC: The authors admit that these data are based on a cohort that is now almost 30 years old. This discussion needs to occur in the Introduction, not as an afterthought in the Discussion section as a limitation. For example, there are differences in exposure to technology (no cell phones, tablets, internet), curriculum and inclusion policies (to name a few) that could affect children's learning. It would be interesting to speculate how this cohort was influenced by societal influences at the time they were learning math. Further, there could be some detailing of the kinds of research that has already been conducted on this cohort (i.e., what is already known about these participants), and how this current study relates to previous work on the cohort.
AR: Societial differences between the cohort and children now have been added into the discussion (first quote), and a discussion of the pros and cons of using ALSPAC has been added also (quote 2). More information on the findings from studies using the ALSPAC cohort has been added to the introduction (quote 3).
Additionally, children's social environment has changed in a number of ways since the participants in this sample transitioned to secondary education between the years of 2001 and 2004. These changes include the increase in adolescents owning mobile phones and using social media apps and sites [114,115], and increases in mental health issues [116] meaning that the effects of the transition may be somewhat different for students now. For example, the social media sites Facebook and Twitter were launched in 2004 and 2006 respectively, meaning few children in this study would have had access to these sites before transitioning, and it is unlikely that a large percentage of them would have used them during secondary education. Other popular apps such as Instagram and Snapchat were launched in 2010 and 2011 which would have been after this sample had finished secondary education entirely. Whereas, adolescents transitioning now are already likely to use many of these sites/apps before transitioning [117], or begin using them in early adolescence post-transition. Moreover, a report by the Children's Commissioner for England has found that children using social media prior to secondary education focus on games and creative activities, whereas the focus post transition is on "likes" and "comments", affecting their emotional wellbeing [117]. Increased social media and phone use in adolescence has been linked to heightened depression and suicidal ideation in adolescents by other researchers also [118]. These findings imply that a greater number of children transitioning now may encounter emotional difficulties around this period compared to children in this study, and given that emotional wellbeing predicts maths attainment trajectories [62], it is possible that transition experiences and attainment differs between these groups which affects the generalisability of the findings. Additional research utilising more recent data would help to further understand the impact of the transition in light of these changes in children's environments, and how they potentially alter the impact of the transition on psychological and academic outcomes.
There are many advantages of using a large birth cohort such as ALSPAC, for example the large sample size and breadth of topics assessed, however, there are also limitations including the high level of missing data, and the lack of depth for some of the measures. For example, in this study, numeracy home-teaching was measured using parents' self-report of whether they had taught their child numbers and shapes, this does not account for the wide range of other maths and numeracy teaching activities (such as cooking together, handling money in shops, playing boardgames etc.,) that help develop children's maths skills. Most of the measures also rely heavily on parents' abilities to identify their own behaviours and report them accurately and honestly. There are additional generalisability issues where children in ALSPAC achieve slightly higher grades in national curriculum exams at age 16 and are more likely to be white with higher socio-economic status compared to children not enrolled in the study [67] suggesting that it would be beneficial to conduct additional research with a more diverse sample.
ALSPAC is a large UK birth cohort following children and their parents from pregnancy up to the present day, and covers a broad range of measures. Previous work investigating home and parental factors using the ALSPAC cohort have focused particularly on the impact of mothers and have shown that that maths attainment is associated with maternal perinatal and postnatal mental health [63,64], and maternal prenatal locus of control [65]. Parental education qualifications at birth are also linked to maths attainment in the ALSPAC sample [62], though, mothers' participation in adult learning was not found to improve maths grades [66]. The current study aims to add to this existing literature by focusing on the influence of the home environment and parental factors on the trajectory of children's maths attainment (measured from age 7 up to age 16 using national curriculum assessments).
RC: According to my search engine, over 1800 articles have been published using the ALSPAC database. The data set includes records on over 14,000 participants and their families and schools. What measures have been taken to protect against alpha slippage in this current study? Analyses were conducted on data that ranged from 3429 to 7263 participants, depending on the variable selected. With the size of this sample, it would be relatively easy to obtain significance by including various variables in question, and given very basic a priori hypotheses, many relations could be deemed significant.
AR: Rightfully said, many of the effects could be statistically significant resulting from the large sample size, to avoid being misleading we tried to focus on discussing the actual size of the effect in the context of the scales used, the average change in attainment per year, and whether the effect is meaningful/has practical applications rather than to focus on the specific p-values. An example of this is given below where the effect of gendered play was significant but the size of the effect in practical terms was essentially zero.
Gendered play at age 3.5 years, but not age 8, predicted the intercept and slope in maths attainment, with more "masculine" play predicting lower maths attainment at age 11 and a slower rate of change over time. However, When placing these effects within the context of the scale of the PSAI (which ranges from 0-100), both effects are extremely small -a 10-unit increase in PSAI score would equate to a decrease of -0.02 in attainment at age 11, whereas, for the slope, even with a change of 100 units (i.e., the entire scale), the rate of change in maths attainment per year would be -0.04. It is important to note that the average rate of change per year is half a national curriculum level (i.e., 0.48), which illustrates the extremely minimal effects of gendered play found here.
RC: Related to sample size, I am concerned that "2652 students were excluded because they had special needs" as determined by their teachers and schools. Almost 20% of the sample was excluded (which goes well beyond the 10% of children estimated to have learning concerns in current populations). Too many children were excluded for reasons that are unclear. Additionally, more participants were excluded because of missing data, resulting in over half of the sample not being including in most analyses. Although it is powerful to study large cohorts of the population, I question the information that is not being detected as a result of missing or ignored data.
AR: Children with SEN were excluded due to the anticipated effects this would have on their attainment. This group includes children identified as having (or have had) the following: • Learning difficulties • Specific learning difficulties (e.g. Dyslexia) • Emotional and behavioural difficulties • Speech and language difficulties • Sensory impairment (Hearing) The breadth of special educational needs in the above criterion explains the high number of exclusions compared to what is seen in the population for general learning difficulties, and we felt the exclusions were justified because the model would not be useful (i.e., would not have good fit) in predicting maths attainment for individuals with SEN (or for typically developing children) due to the high heterogeneity within this group. We did not have access to the above SEN data for individual children within the sample that we obtained from ALSPAC, so could not look at this group separately taking their specific SEN into consideration. To increase clarity, we have added that this study focuses on typically developing children into the introduction and added additional information to the method section as per the below: Therefore, by utilising secondary analysis of the Avon Longitudinal Study of Parents and Children (ALSPAC), this study aims to investigate parental influences in childhood and early adolescence as predictors of maths attainment trajectories for typically-developing children across the transition from primary to secondary education.
2,652 children identified by teachers (at ages 7-8 and 10-11) as having or have had special educational needs (such as learning difficulties, emotional and behavioural difficulties, physical disabilities and speech and language difficulties) were excluded (N = 11,832) due to the high heterogeneity within this group.
In terms of the large amount of missing data, a significant proportion (37%) of the final sample were excluded as they had over 50% of their responses missing. We deemed it necessary to exclude participants where the level of missing data was over 50% because we conducted multiple imputation in the growth model and we felt that it would be inaccurate to impute data based on a sample where so many responses are missing.
RC:
The variables used in the study are quite simplified compared to math measures used in current day research. For example, instead of using math measures as an outcome variable, school grades were used instead (which incur obvious bias with teacher ratings and school context). Home learning variables were based on gender-type play and general parent involvement. Homework help was included in a long list of other parent involvement variables, where today, this variable is studied on its own. Color knowledge was grouped with literacy proficiency and shape knowledge was considered with numeracy proficiency, for no apparent reason, and based on parent ratings. I would complete factor analyses on some of these large composite variables to try and add meaning (e.g., early home teaching, gender stereotyped play, and home interactions).
AR: Factor analysis was conducted to create the composites which is detailed in the code on the OSF repository, but this information was left out of the manuscript which has now been added into the method section.
RC: "Parent education," "parent school involvement," "harmonized learning" and "female gendered play" relating to mathematics attainment at age 11. However, home interactions (at age 3 and 7), parental mental health, early home teaching, working memory and intelligence were not found to be related. All of these variables have been found to relate to children's math development in various studies. Because of the enormity of the study, and the inability to delve into some of these measures, the overall picture is difficult to interpret. I wonder why some variables were significant and others were not.
AR: We felt this is covered in the dicussion.
RC: How many observations were completed for the parent-child relationship and the working memory and intelligence variables? How were scores computed? It is unclear from the Tables.
AR:
The above measures were assessed at a single timepoint. We felt this information was clear from the method section, but have added more information on the working memory measure in the method.
RC: It is difficult to read the Table headers. What is %MD? I assume Missing Data-if so, why are these numbers so high? What is KSI? Please include scales for all of your variables (e.g., working memory and intelligence).
AR: This is an issue with the Latex class used, the definition of MD was given in the caption but we have edited this to make it clearer. KS1 refers to key stages (which are the assessments used in schooling in the UK) -this abbreviation is now defined in the caption.
RC: The correlation table is overwhelming. Are all of these variables necessary? What story are you trying to tell? Which correlations meet significance, and at what alpha level? What are your effect sizes?
AR: The matrix includes the raw correlations so readers are able to see the relationship between variables in the growth model, and theoretically they could reconstruct the analysis from the correlation matrix. So, all the variables are necessary.
Which correlations meet significance? The table includes the p-values for all correlations so it is clear which ones are significant.
What are your effect sizes? The correlation coefficient is an effect size.
In summary, the previous version of the table contained all of the information the reviewer needed to answer questions about effect size and significance. Furthermore, in a sample size this large we feel it is important not to draw undue attention to significance by using ***, it is better that readers process the correlation coefficients themselves because many small effects are, in fact, significant in this sample.
RC: Why was masculinized gender play associated with decreased math attainment, and boys outperformed girls with their math grades at age 11? Is there a teacher bias going on here?
AR: Due to the extremely small, and trivial effect of gendered play (betas = -0.002 & -0.0004), we felt a further discussion of this effect was not appropriate as the significant p-value is likely due to the large sample size. We tried to highlight this within the discussion by focusing on the size of the effect rather than the significance of the p-value as stated below: Gendered play at age 3.5 years, but not age 8, predicted the intercept and slope in maths attainment, with more "masculine" play predicting lower maths attainment at age 11 and a slower rate of change over time. However, When placing these effects within the context of the scale of the PSAI (which ranges from 0-100), both effects are extremely small -a 10-unit increase in PSAI score would equate to a decrease of -0.02 in attainment at age 11, whereas, for the slope, even with a change of 100 units (i.e., the entire scale), the rate of change in maths attainment per year would be -0.04. It is important to note that the average rate of change per year is half a national curriculum level (i.e., 0.48), which illustrates the extremely minimal effects of gendered play found here.
RC: Please explain your rate of change analyses and provide a rationale for the purpose of these analyses in line with the hypotheses of the study.
AR: We used a latent growth model predicting maths attainment to measure attainment at age 11 and the growth in attainment over time. Anyone familiar with growth models would know that the slope represents the rate of change. However, in the description of the statistical analysis we say this explicitly: The predictors were included as exogenous observed variables that predict the intercept and slope (i.e., the rate of change) of growth in maths attainment.
We have also included a figure of the model for clarity.
We are extremely appreciative of your reviews especially under the current circumstances and hope that these amendments are responsive to your recommendations.
Yours sincerely
Danielle Evans | 9,594.2 | 2020-07-01T00:00:00.000 | [
"Education",
"Mathematics",
"Psychology"
] |